MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019
13_Machine_Learning_for_Mammography.txt
ADAM YALA: OK, great. Well, thank you for the great setup. So for this section, I'm gonna talk about some of our work in interpreting mammograms for cancer. Specifically, it's going to go into cancer detection and triaging mammograms. Next, we'll talk about our technical approach in breast cancer risk. And then finally close up with the many, many different ways to mess up and the ways things can go wrong, and how that [INAUDIBLE] clinical implementation. So let's look more closely at the numbers of the actual breast cancer screening workflow. So as Connie already said, you might see something like 1,000 patients. All of them get mammograms. Of that 1,000, on average maybe 100 get called back for additional imaging. Of that 100, something like 20 will get biopsied. And you end up with maybe five or six diagnoses of breast cancer. So one very clear thing you see when you look at this funnel is that way over 99% of people that you see in a given day are cancer-free. So your actual incidence is very low. And so there's kind of a natural question that can come up. What can you do in terms of modeling, if you have an even OK cancer detection model, to raise the incidence of this population by automatically reading a portion of the population as healthy? Does everybody follow that broad idea? OK. That's enough head nods. So the broad idea here is you're going to train the cancer detection model to try to find cancer as well as we can. Given that, we're going to say, what's a threshold on a development set such that we can say below the threshold no one has cancer? And if we use that at test time, simulating clinical implementation, what would that look like? And can we actually do better by doing this kind of process? And the kind of broad plan of how I'm gonna talk about this-- I'm gonna do this for the next project as well. Of course, we're going to talk about the dataset collection and how we think about, like, you know, what is good data. Next, the actual methodology, and go into the general challenges when you're modeling mammograms for any computer vision task, specifically in cancer, and also, obviously, risk. And lastly, how we thought about the analysis and some of the objectives there. So to dive into it, we took consecutive mammograms. I'll get back into this later. This is actually quite important. We took consecutive mammograms from 2009 to 2016. This started off with about 280,000 exams. And once we filtered for at least one year of follow-up, we ended up with this final setting where we had 220,000 mammograms for training and about 26,000 for development and testing. And the way we labeled it all comes down to saying, is this a positive mammogram or not? We didn't look at what cancers were caught by the radiologists. We'd say, you know, was there a cancer that was found by any means within a year? And where we looked was through the radiology records, the EHR, and the Partners five-hospital registry. And there, if in any way we could tell a cancer occurred, we'd mark it as such, regardless of whether it was caught on MRI or at some later stage. And so the thing we're trying to do here is just mimic the real world of how we're trying to catch cancer. And finally, an important detail: we always split by patient, so that your results aren't just from memorizing that this specific patient didn't have cancer. If you had that kind of overlap, that's some bad bias to have. OK.
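A minimal sketch of that patient-level split (the column name and split fractions are illustrative, not the actual MGH pipeline), assuming a table with one row per exam and a patient_id column:

import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

def split_by_patient(exams: pd.DataFrame, seed: int = 0):
    # Group by patient so that no patient's exams land in more than one split.
    gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=seed)
    train_idx, holdout_idx = next(gss.split(exams, groups=exams["patient_id"]))
    train, holdout = exams.iloc[train_idx], exams.iloc[holdout_idx]
    # Split the held-out patients in half for development and test.
    gss2 = GroupShuffleSplit(n_splits=1, test_size=0.5, random_state=seed)
    dev_idx, test_idx = next(gss2.split(holdout, groups=holdout["patient_id"]))
    return train, holdout.iloc[dev_idx], holdout.iloc[test_idx]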
That's pretty simple. Now let's go into the modeling. This is going to follow two chunks. One chunk is going to be on the kind of general challenges that are shared between the variety of projects. And next is going to be the more specific analysis for this project. So a general question you might be asking: I have some image. I have some outcome. Obviously, this is just image classification. How is it different from ImageNet? Well, it's quite similar. Most lessons are shared. But there are some key differences. So I gave you two examples. One of them is a scene in my kitchen. Can anyone tell me what the object is? This is not a particularly hard question. AUDIENCE: [Intermingled voices] Dog. Bear. ADAM YALA: Right. AUDIENCE: Dog. ADAM YALA: It is almost all of those things. So that is my dog, the best dog. OK. So can anyone tell me, now that you've had some training with Connie, if this mammogram indicates cancer? Well, it does. And this is unfair for a couple of reasons. Let's go into why this is hard. It's unfair in part because you don't have the training. But it's actually a much harder signal to learn. So first let's delve into it. In this kind of task, the image is really huge. So you have something like a 3,200 by 2,600 pixel image. This is a single view of a breast. And in that, the actual cancer you're looking for might be 50 by 50 pixels. So intuitively your signal to noise ratio is very different. Whereas in the other image, my dog is, like, the entire image. She's huge in real life and in that photo. And the image itself is much smaller. So not only are those images much smaller, but the relative size of the object in them is much larger. To further compound the difficulty, the pattern you're looking for inside the mammogram is really context-dependent. So if you saw that pattern somewhere else in the breast, it doesn't indicate the same thing. And so you really care about where in this kind of global context this comes out. And if you take the mammogram at different times with different compressions, you have this kind of non-rigid morphing of the image that's much more difficult to model. Whereas that's a more or less context-independent dog. You see that anywhere in the frame, you know it's a dog. And so it's a much easier thing to learn in a traditional computer vision setting. And so the core challenge here is that the image is both too big and too small. So if you're looking at just the number of cancers we have, the cancer might be less than 1% of the mammogram, and about 0.7% of your images have cancers. Even in this data set, which is from 2009 to 2016 at MGH, a massive imaging center, in total across all of that, you will still have less than 2,000 cancers. And this is super tiny compared to regular object classification data sets. And this is looking at over a million images if you look at all four views of the exams. And at the same time, it's also too big. So even if I downsample these images, I can only really fit three of them on a single GPU. And so this limits the batch size I can work with. Whereas, comparably, if I took just regular ImageNet-sized images, I could fit batches of 128 easily, happy days, and do all this parallelization stuff, and it's just much easier to play with. And finally, the actual data set itself is quite large.
And so you have to do some-- there are nuisances to deal with in terms of just setting up your server infrastructure to handle these massive data sets and also be able to train efficiently. So you know, the core challenge here across all of these tasks is, how do we make this model actually learn? The core problem is that our signal to noise ratio is quite low. So training ends up being quite unstable. And there are a couple of simple levers you can play with. The first lever is the initialization. Next, we're gonna talk about the optimization and architecture choice and how this compares to what people often do in the community, including in a recent paper from yesterday. And then finally, we're gonna talk about something more explicit for the triage idea and how we actually use this model once it's trained. OK. So before I go into how we made these choices, I'm just going to say what we chose, to give you context before I dive in. So we used ImageNet initialization. We use a relatively large batch size-ish of 24. And the way we do that is by taking 4 GPUs and just stepping a couple of times before doing an optimizer step. So you do a couple of rounds of backprop first to accumulate those gradients before doing the optimization step. And we sample balanced batches at training time. And for the backbone architecture we use ResNet-18. It's just kind of, like, fairly standard. OK.
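A minimal PyTorch-style sketch of that recipe, pulling together the ImageNet-initialized ResNet-18 backbone, a max-pooled head (discussed further below), balanced sampling, and gradient accumulation to reach an effective batch size of about 24. The dataset, image size, and hyperparameters here are illustrative stand-ins, not the actual MGH pipeline:

import torch
import torch.nn as nn
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet-initialized ResNet-18; drop the avgpool/fc head and use a
# channel-wise max pool so a single high local activation is not averaged away.
backbone = torchvision.models.resnet18(pretrained=True)
features = nn.Sequential(*list(backbone.children())[:-2])
model = nn.Sequential(features, nn.AdaptiveMaxPool2d(1), nn.Flatten(), nn.Linear(512, 1)).to(device)

# Synthetic stand-in data; the real pipeline loads full-resolution mammograms.
images = torch.randn(64, 3, 512, 512)
labels = torch.randint(0, 2, (64,))
dataset = torch.utils.data.TensorDataset(images, labels)

# Inverse-frequency weights give roughly 50/50 positive/negative batches.
pos_frac = labels.float().mean()
weights = torch.where(labels == 1, 1 - pos_frac, pos_frac)
sampler = torch.utils.data.WeightedRandomSampler(weights.tolist(), num_samples=len(weights))
loader = torch.utils.data.DataLoader(dataset, batch_size=3, sampler=sampler)  # ~3 images fit per GPU

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()
ACCUM_STEPS = 8  # 3 images x 8 accumulation steps ~ effective batch size of 24

optimizer.zero_grad()
for step, (x, y) in enumerate(loader):
    logits = model(x.to(device)).squeeze(1)
    loss = criterion(logits, y.float().to(device)) / ACCUM_STEPS
    loss.backward()                      # gradients accumulate across backward calls
    if (step + 1) % ACCUM_STEPS == 0:
        optimizer.step()                 # one weight update per effective batch
        optimizer.zero_grad()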
But as I said before, one of the first key decisions is how do you think about your initialization? So this is a figure of ImageNet initialization versus random initialization. It's not any particular experiment. I've done this across many, many times. It's always like this. If you use ImageNet initialization, your loss drops immediately, both in train loss and development loss, and you actually learn something. Whereas when you do random initialization, you kind of don't learn anything. And your loss kind of bounces around at the top for a very long time before it finds some region where it quickly starts learning. And then it will plateau again for a long time before quickly starting to learn again. And to give some context, about 50 epochs takes on the order of, like, 15, 16 hours. And so to wait long enough to even see if random initialization could perform as well is beyond my level of patience. It just takes too long, and I have other experiments to be running. So this is more of an empirical observation that the ImageNet initialization learns immediately. And there's some question of why this is. Our theoretical understanding of this is not that strong. We have some intuitions of why this might be happening. We don't think it's anything about this particular filter of this dog being really great for breast cancer. That's quite implausible. But if you look into a lot of the earlier research in terms of the right kind of random initialization for things like ReLU networks, a lot of the focus was on whether the activation pattern blows up as you go further down the network. One of the benefits of starting with a pre-trained network is that a lot of those dynamics are already figured out for a specific task. And so shifting from that to other tasks has seemed to be not that challenging. Another possible area of explanation is actually in the BatchNorm statistics. So if you remember, we can only fit three images per GPU. And the way BatchNorm is implemented across every deep learning library that I know of, it computes statistics independently per GPU, to minimize the inter-GPU communication. And so it's also less able to estimate those statistics from scratch. But if you're starting with the BatchNorm statistics from ImageNet and just slowly shifting them over, that might also result in some stability benefits. But in general, a true deeper theoretical understanding, as I said, still eludes us. And it isn't something I can give too many conclusions about, unfortunately. OK. So that's initialization. And if you don't get this right, nothing works for a very long time. So if you're gonna start a project in this space, try this. Next, another important decision that, if you don't do it, kind of breaks things, is your optimization/architecture choice. So as I said before, a core problem in stability here is this idea that our signal to noise ratio is really low. And so a very common approach throughout a lot of the prior work, and something I actually have tried myself before, is to say, OK, let's just break down this problem. We can train at a patch level first. We're going to take just subsets of a mammogram in this little bounding box, have it annotated for radiology findings like benign masses or calcifications and things of that sort. We're going to pre-train on that task to have this kind of pixel-level prediction. And then once we're done with that, we're going to fine tune that initialized model across the entire image. So you kind of have this two-stage training procedure. And actually, another paper that came out just yesterday does the exact same approach with some slightly different details. But one of the things we wanted to investigate is if you just-- oh, and the base architecture that's always used for this: there are quite a few valid options of things that just get reasonable performance on ImageNet, things like VGG, Wide ResNets, and ResNets. In my experience, they all performed fairly similarly. So it's kind of a speed/benefit trade-off. And there's an advantage to using fully convolutional architectures, because if you have fully connected layers that assume a specific dimensionality, you can convert them to convolutional layers, but it's just more convenient to start with a fully convolutional architecture. It's going to be resolution-invariant. Yes. AUDIENCE: In the last slide when you do patches-- ADAM YALA: Yes. AUDIENCE: How do you label every single patch? Are they just labeled with a global label? Or do you have to actually look and catch, and figure out what's happened? ADAM YALA: So normally what you do is you have positive patches labeled. And then you randomly sample other patches. So from your annotations-- so, for example, a lot of people do this on public data sets like the DDSM dataset from Florida that has some entries of, like, here are benign masses, benign calcs, malignant calcs, et cetera. What people do then is take those annotations. They will randomly select other patches and say, if it's not there, it's negative. And I'm going to call it healthy. And then they'll say, if this bounding box overlaps with a patch by some margin, they call it the same label. So they do this heuristically. And other data sets that are proprietary also kind of play with a similar trick. In general, they don't actually label every single pixel accordingly. But there are relatively minor differences in how people do this. And the results are fairly similar, regardless. Yes.
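A rough sketch of that patch-labeling heuristic (the patch size, overlap threshold, and function names are illustrative); this is the patch pre-training setup from prior work, which, as discussed below, turned out to be unnecessary here:

import random

PATCH_SIZE = 256      # illustrative
MIN_OVERLAP = 0.5     # illustrative fraction of a finding that must fall inside the patch

def overlap_fraction(patch, finding):
    # Boxes are (x0, y0, x1, y1); returns the fraction of the finding covered by the patch.
    ix0, iy0 = max(patch[0], finding[0]), max(patch[1], finding[1])
    ix1, iy1 = min(patch[2], finding[2]), min(patch[3], finding[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = (finding[2] - finding[0]) * (finding[3] - finding[1])
    return inter / area if area > 0 else 0.0

def sample_labeled_patch(image_w, image_h, findings):
    # findings: list of (box, label) pairs, e.g. (box, "malignant_calc").
    x0 = random.randint(0, image_w - PATCH_SIZE)
    y0 = random.randint(0, image_h - PATCH_SIZE)
    patch = (x0, y0, x0 + PATCH_SIZE, y0 + PATCH_SIZE)
    for box, label in findings:
        if overlap_fraction(patch, box) >= MIN_OVERLAP:
            return patch, label           # inherit the finding's label
    return patch, "healthy"               # no overlapping finding -> call it healthy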
AUDIENCE: When you go from the patch level to the full image, if I understand correctly, the architecture hasn't quite changed, because it's just convolution over a larger-- ADAM YALA: Exactly. So the end thing right before we do the prediction is normally-- ResNet, for example, does a global average pool, channel-wise, across the entire feature map. And so for the patch level they take in an image that's 250 by 250 and do the global average pool across that to make the prediction. And when they go up to the full resolution image, now you're taking a global average pool over a 3,000 by 2,000. AUDIENCE: And presumably there might be some scaling issue that you might need to adjust. Do you do any of that? Or are you just-- ADAM YALA: So you feed it in at the full resolution the entire time. So you just-- do you see what I mean? So you're taking a crop. So the resolution isn't changing. So the same filter map should be able to scale accordingly. But if you do things like average pooling, then anything that has a very high activation will get averaged down lower. And so, for example, in our work, we use max pooling to get around that. Any other questions? But if this looks complicated, have no worries, because we actually think it's totally unnecessary. And this is the next slide. So good for you. So as I said before, what's the problem? Signal to noise. So one obvious thing to think about is, like, OK, maybe doing SGD with a batch size of three when the lesion is less than 1% of the image is a bad idea. If I just take less noisy gradients by increasing my batch size-- which means use more GPUs, take more steps before doing the weight update-- we actually find that the need to do this patch pre-training goes away completely. So these are experiments I did on a publicly available data set a while back while we were figuring this out. If you take this kind of [INAUDIBLE] architecture and fine tune with a batch size of 2, 4, 10, 16, and compare that to just a one-stage training where you just do the [INAUDIBLE] from the beginning, initialized with ImageNet, as you use bigger batch sizes, you quickly start to close the gap on the development AUC. And so for all the experiments that we do, broadly, we find that we actually get reasonably stable training by just using a batch size of 20 and above. And this comes down to, if you use a batch size of one, it's just particularly unstable. Another detail is that we always sample balanced batches. Because otherwise you'd be sampling, like, 20 batches before you see a single positive sample. You just don't learn anything. Cool. So if you do that, you don't do anything complicated. You don't do any fancy cropping or anything of that sort, or, like, dealing with the VGG annotations. We found that actually using the VGG annotations for this task is not helpful. OK. No questions? Yes. AUDIENCE: So with the larger batch sizes you don't use the magnified patches? ADAM YALA: We don't. We just take the whole image from the beginning. You just see the annotation as: whole image, cancer or not within a year. It's a much simpler setup. AUDIENCE: I don't get it. That's the same thing I thought you said you couldn't do for memory reasons. ADAM YALA: Oh. So you just-- instead of-- so normally when you train the network, the most common approach is you do backprop and then step.
Because when you do backprop several times, you're accumulating the gradients, at least in PyTorch. And then you can do the step afterwards. So instead of doing the whole batch at one time, you just do it serially. So there you're just trading time for space. The minimum, though, is you have to fit at least a single image per GPU. And in our case we can fit three. But to make this actually scale, we use four GPUs at a time. Yes. AUDIENCE: How much is the trade-off with time? ADAM YALA: So if I'm going to take a batch size any bigger, I would only do it in increments of, let's say, 12, because that's how much I can fit within my set of GPUs at the same time. But to control the size of the experiment, you want to have the same number of gradient updates per experiment. So if I want to use a batch size of 48, all my experiments, instead of taking about half a day, take about a day. And so there's this natural trade-off as you go along. So one of the things I mentioned at the very end is we're considering an adversarial approach for something. And one of the annoying things about that is that if I have five discriminator steps, oh my god, my experiments take three days per experiment. And the [INAUDIBLE] update of someone that's trying to design a better model becomes really slow when the experiments start taking this long. Yes. AUDIENCE: So you said the annotations did not help with the training. Is that because the actual cancer itself is not really different from the dense tissue, and the location of that matters, and not the actual granularity of the-- what is the reason? ADAM YALA: So in general, when something doesn't help, there's always a possibility of two things. One is that the whole-image signal kind of subsumes that smaller-scale signal. Or there is a better way to do it that I haven't found that would help. And telling those apart is very hard. As of now, the task we're [INAUDIBLE] on is whole-image classification. And so on that task it's possible that-- when you do a patch with an annotation, you kind of lose the context in which it appears. So it's possible that just by looking at the whole context every time, it's as good-- you don't get any benefit from the zoomed-in boxes. However, we're not evaluating on an object detection type of evaluation metric, where you say how well we are catching the box. And if we were, we'd probably have much better luck with using the VGG annotations. Because you might be able to tell some of those discriminations by, like, this looks like a breast that's likely to develop cancer at all. And the ability of the model to do that is part of why we can do risk modeling, which is going to be the last bit of the talk. Yes. AUDIENCE: So do you do the object detection after you identify whether there's cancer or not? ADAM YALA: So as of now we don't do object detection, in part because we're framing the problem as triage. There are quite a few toolkits out there to draw more boxes on the mammogram. But the insight is that if there are 1,000 things to look at, and you draw more boxes per image, now there are 2,000 things to look at. And that isn't necessarily the problem we're trying to solve. There's quite a bit of effort there. And it's something we might look into later in the future. But it's not the focus of this work. Yes. AUDIENCE: So Connie was saying that the same pattern appearing in different parts of the breast can mean different things.
But when you're looking at the entire image at once, I would worry intuitively about whether the convolutional architecture is going to be able to pick that up-- because you're looking for a very small cancer on a very large image. And then you're looking for the significance of that very small cancer in different parts of the image, or in different contexts of the image. And I'm just-- I mean, it's a pleasant surprise that this works. ADAM YALA: So there are kind of two pieces that can help explain that. The first is that if you look at the receptive fields of any given position in the last feature map at the very end of the network, each of those summarizes, through these convolutions, a fairly sizable part of the image. So each pixel at the very end ends up summarizing something like a 50 by 50 patch of the image, through the five total downsampling stages. And so each part does summarize its local context decently well. And when you do the max at the very end, you get some not perfect but OK global summary of what the context of this image is. So, for example, some of the lower dimensions can summarize, like, is this a dense breast, or some of the other pattern information that might tell you what kind of breast this is. Whereas any one of them can tell you this looks like a cancer given its local context. So you do have some level of summarization, both because of the channel-wise max at the end, and because each point, through the many, many convolutions of different strides, gives you some of that summary effect. OK, great. I'm going to jump forward. So we've talked about how to make this learn. It's actually not that tricky if we just do it carefully and tune. Now I'll talk about how to use this model to actually deliver on this triage idea. So some of my choices again: ImageNet initialization is going to make your life a happier time. Use bigger batch sizes. And architecture choice doesn't really matter as long as it's convolutional. And the overall setup that we use through this work, and across many other projects, is we train independently per image. Now this is a harder task, because you're not taking any of the other views, you're not taking prior mammograms. But this is for kind of harder reasons than that. We're going to get the prediction for the whole exam by taking the maximum across the different images. So if I say this breast has cancer, the exam has cancer. So you should get it checked out. And at each development epoch we're going to evaluate the ability of the model to do the triage task, which I'll step into in a second. And we're going to take the best model that can do triage. So your true end metric is always what you're measuring during training. And you're going to do model selection and hyperparameter searching based on that. And the way we're going to do triage-- our goal here is to mark as many people as healthy as possible without missing a single cancer that we always would have caught. So intuitively: take all the cancers that the radiologists would have caught, look at the probability of cancer across those images, and just take the minimum of those and call that the threshold. That's exactly what we do.
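A minimal sketch of that threshold rule (the array names are illustrative): on the development set, take the minimum model probability over the cancers the radiologists actually caught, and at test time triage everything that scores below it.

import numpy as np

def pick_triage_threshold(dev_probs: np.ndarray, dev_caught_by_radiologist: np.ndarray) -> float:
    # Lowest model score among radiologist-caught cancers on the development set,
    # so that (on dev) no cancer we would have caught falls below the threshold.
    return float(dev_probs[dev_caught_by_radiologist].min())

def simulate_triage(test_probs: np.ndarray, threshold: float) -> float:
    # Fraction of test exams the model would mark as healthy without a read.
    return float((test_probs < threshold).mean())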
And another detail that's often quite relevant: if you want these models to output a reasonable probability-- like, this is the probability of cancer-- and you train on 50/50 balanced batches, by default your model thinks that the average incidence is 50%. So it's crazily confident all the time. So to calibrate that, one really simple trick is something called Platt's method, where you basically just fit a two-parameter sigmoid-- just a scale and a shift-- on the development set to make it actually fit the distribution. That way the average probability you expect actually fits the incidence, and you don't get these crazy, off-kilter probabilities. OK. So, analysis. The objectives of what we're trying to do here are kind of similar across all the projects. One, does this thing work? Two, does this thing work across all the people it's supposed to work for? So we did a subgroup analysis. First we looked at the AUC of this model-- so the ability to discriminate cancer or not. We did it across the races we have at MGH, across age groups, and across density categories. And finally, how does this relate to the radiologists' assessments? And if we actually used this at test time on the test set, what would have happened? Kind of a simulation before a full clinical implementation. So the overall AUC here was 82, with a confidence interval from 80 to 85. And we did our analysis by age. We found that the performance was pretty similar across every age group. What's not shown here are the confidence intervals. But the key takeaway here is that there was no noticeable gap by age group. We repeated this analysis by race, and we saw the same trend again. The performance generally ranged around 82. And in places where the gap was bigger, the confidence interval was bigger accordingly, due to smaller sample sizes, because MGH is 80% white. We saw the exact same trend by density. The outlier here is very dense breasts. But there are only about 100 of those in the test set. So that confidence interval actually goes from, like, 60 to 90. So as far as we know, for the other three categories it has a tight confidence interval and is very similar, once again, around 82. OK. So we have a decent idea that this model, at least within the population of MGH, actually serves the relevant populations that exist, as far as we know so far. The next question is, how does the model assessment relate to the radiologists' assessment? So to look at that, we looked, on the test set, at the radiologists' true positives, false positives, true negatives, and false negatives. Where do they fall within the model's distribution of percentile risk? And if it's below the threshold, we color it in this kind of cyan color. And if it's above the threshold, we color it in this purple color. So this is triage, not triage. The first thing to notice-- this is the true positives-- is that there is a pretty steep drop-off. And so there is only one true positive that fell below the threshold in a test set of 26,000 exams. So none of this difference was statistically significant. And the vast majority of them are in this top 10%. So you see there's a clear trend here that they get piled up towards the higher percentiles. Whereas if you look at the false positive assessments, this trend is much weaker. So you still see that there is some correlation-- there are going to be more false positives at the higher percentiles-- but it's much less stark. And this actually means that a lot of the radiologists' false positives we actually place below the threshold.
And so because these assessments aren't completely concordant, and we're not just modeling what the radiologist would have said, we get an anticipated benefit of actually reducing the false positives significantly, because of the ways we disagree. And finally, aiding that further, if you look at the true negative assessments, there is not that much of a trend in where they fall within this distribution. So it shows that the model and the radiologists are picking up on different things, and where they disagree gives you both areas to improve and ancillary benefits, because now we can reduce false positives. This leads directly into simulating the impact. So one of the things we did, we just said, OK, retrospectively on the test set, as a simulation before we truly plug it in: if people didn't read below the triage threshold-- so we can't catch any more cancer this way, but we can reduce false positives-- what would have happened? So at the top we have the original performance. So this is looking at 100% of mammograms; sensitivity was 98.6 with a specificity of 93. And in the simulation, the sensitivity dropped, not significantly, to 90.1, while the specificity significantly improved to 93.7, while reading 81% of the mammograms. So this is promising preliminary data. But to evaluate this and go forward, our next step-- let's see if-- oh, I'm going to get to that in a second. Our next step is we need to do clinical implementation to really figure it out-- because there's a core assumption here that people read the remaining mammograms the same way. But if you have this higher incidence, what does that mean? Can you focus more on the people that are more suspicious? And is the right way to do this just a single threshold of what not to read? Or a double-ended one, where the most suspicious go to the senior readers, because they're much more likely to have cancer? And so there is quite a bit of exploration here to say: given we have these tools that give us some probability of cancer-- it's not perfect, but it gives us something-- how well can we use that to improve care today? So as a quiz, can you tell which of these would be triaged? There is no cherry-picking here. I randomly picked four mammograms that were below and above the threshold. Can anyone guess which side-- left or right-- was triaged? This is not graded, Chris, so you know. AUDIENCE: Raise your hand for-- ADAM YALA: Oh yeah. Raise your hand for the left. OK. Raise your hand for the right. Here we go. Well done. Well done. OK. And then the next step, as I said before, is we need to push to the clinical implementation, because that's where the rubber hits the road. We identify whether there are any biases we didn't detect. And we need to say, can we deliver this value? So the next project is on assessing breast cancer risk. So this is the same mammogram I showed you earlier. It was diagnosed with breast cancer in 2014. It's actually my advisor Regina's. And you can see that in 2013 it's there. In 2012 it looks much less prominent. And five years earlier, you're really looking at breast cancer risk. So if you can tell from an image that is going to be read as healthy for a long time, you're really trying to model what's the likelihood of this breast developing cancer in the future. Now, modeling breast cancer risk, as Connie said earlier, is not a new problem. It's been quite a researched one in the community.
And the more classical approach is to look at other kinds of global health factors-- the person's age, their family history, whether or not they've had menopause-- any of these facts we can sort of say are markers of their health, to try to predict whether this person is at risk of developing breast cancer. People have thought that the image contains something before. The way they've thought about this is through this kind of subjective breast density marker. And the improvements seen from that are kind of marginal, from 61 to 63. And as before, the sketch we're going to go through is dataset collection, modeling, and analysis. For dataset collection we followed a very similar template. We started from consecutive mammograms from 2009 to 2012. We took outcomes from the EHR, once again, and the Partners registry. We didn't do exclusions based on race or anything of that sort, or implants. But we did exclude negatives without follow-up. So if someone didn't have cancer in three years but disappeared from the system, we didn't count them as negatives, so that we have some certainty in both the modeling and the analysis. And as always, we split patients into train, dev, and test. The modeling is very similar. It's the same kind of templated lessons as from triage, except we experimented with a model that's only the image and, for the sake of analysis, a model that's the image model I just talked about before, concatenated with those traditional risk factors at the last layer and trained jointly. Does that make sense for everyone? So I'm going to call those Image-Only and Image+RF, or hybrid. OK. Cool? Our goals for the analysis: as before, we want to see, does this model actually serve the whole population? Is it going to be discriminative across race, menopause status, family history? And how does it relate to classical notions of risk? And are we actually doing any better? And so, diving directly into that, assuming there are no questions-- good. Just to remind you, this is the setting. One thing I forgot to mention-- that's why I had the slide here to remind me-- is that we excluded cancers from the first year from the test set, so that it's truly a negative screening population. That way we disentangle cancer detection from cancer risk. OK. Cool. So Tyrer-Cuzick is the prior state-of-the-art model. It's a model based out of the UK. Its developer is someone named Cuzick, who was knighted for this work. It's very commonly used. So that one had an AUC of 62. Our image-only model had an AUC of about 68. And the hybrid one had an AUC of 70. So what does this kind of AUC gain give you when you're using a risk model? What it gives you is the ability to build better high-risk and low-risk cohorts. So in terms of looking at high-risk cohorts, our best model placed about 30% of all the cancers in the population in the top 10%, and 3% of all the cancers in the bottom 10%, compared to 18% and 5% for the prior state of the art. And so what this enables you to do, if you're going to say that this top 10% should actually qualify for MRI, is start fighting this problem that the majority of people that get cancer don't get MRI, and the majority of people that get MRI don't need it. It's all about whether your risk model actually places the right people into the right buckets. Now, we saw that this trend of outperforming the prior state of the art held across races.
And one of the things that was kind of astonishing was that though Tyrer-Cuzick performed reasonably on white women-- which makes sense, because it was developed using only white women in the UK-- it was worse than random [INAUDIBLE] for African-American women. And so this kind of emphasizes the importance of this kind of analysis, to make sure that the data that you have is reflective of the population that you're trying to serve, and of actually doing the analysis accordingly. So we saw that our model held across races, and as well-- we see this trend across pre- and post-menopausal women and with and without family history. One thing we did in terms of a more granular comparison of performance: if we just look at the risk thirds for our model and the Tyrer-Cuzick model, what's the trend that you see in the cases where which one is right is kind of ambiguous? And what I show in these boxes is the cancer incidence in that part of the population. So the darker the box, the higher the incidence. And on the right-hand side are just random images from cases that fit within those boxes. Does that make sense for everyone? Great. So a clear trend that you see is that, for example, if TCv8 calls you high risk but we call you low, that is a lower incidence than if we call you medium and they call you low. So you see this straight column-wise pattern showing that the discrimination truly does follow the deep learning model and not the classical approach. And looking at the random images that were selected in cases where we disagree supports the notion that it's not just that our high-risk column is the most crazy, dense-looking breasts-- there's something more subtle it's picking up that's actually indicative of breast cancer risk. In a very similar analysis, we looked at traditional breast density as labeled by the original radiologist on the development set-- or on the test set-- and we end up seeing the same trend, where if someone is non-dense and we call them high risk, they're much higher risk than someone that is dense that we call low risk. And as before, the real next step here, to make this truly valuable and truly useful, is actually implementing it clinically, seamlessly, prospectively, with more centers and more populations, to see does this work and does it deliver the kind of benefits that we care about. And figuring out what is the lever of change once you know that someone is high risk-- perhaps MRI, perhaps more frequent screening. And so this is the gap between having a useful technology on the paper side and an actually useful technology in real life. So I am moving on schedule. So now I'm gonna talk about how to mess up. And it's actually quite interesting. There are, like, so many ways. And I've fallen into them a few times myself, and it happens. And following the same sketch, you can mess up in dataset collection. That's probably the most common by far. You can mess up in modeling, which I'm doing right now. And it's very sad. And you can mess up in analysis, which is really preventable. So in dataset collection, enriched data sets are the most common thing you see in this space. If you find a public data set, it's most likely going to be, like, 50-50 cancer versus not cancer. And oftentimes the way these datasets were collected can have some sort of bias within it. It might be that you have negative cases from fewer centers than you have positive cases.
Or they're collected from different years. And actually, this is something we ran into earlier in our own work. Once upon a time, Connie and I were in Shanghai for the opening of a cancer center there. And at that time we had all the cancers from the MGH dataset, about 2,000. But the mammograms were still being collected, year by year, from 2009. So at that time, we only had, like, half of the negatives by year, but all of the cancers. And all of a sudden I had-- you know, I came up with a slightly more complicated model, as one often does. I looked at several images at the same time. And my AUC went up to, like, 95. And I was, like, bouncing off the walls. And then, you know, I had some suspicion of, like, wait a second. This is too high. This is too good. And we quickly realized that all these numbers were kind of a myth. But if you do these kinds of case-control setups, unless you're very careful about the way the dataset was constructed, you can easily run into these issues. And your test set won't protect you from that. And so having a clean dataset that truly follows the kind of spectrum we expect to use it in-- i.e., a natural distribution, collected through routine clinical care-- is important to say whether it will behave as we actually want it to when used. In general, some of this you can think through from first principles. But it stresses the importance of actually testing this prospectively, in external validation, to try to see, does this work when I take away some of the biases in my dataset-- and being really careful about that. The common approach of just controlling by age or by density is not enough when the model can catch really fine-grained signals. How to mess up in modeling. So there have been adventures in this space as well. One of the things I've recently discovered is that the actual mammography device that the image was captured on-- so you saw a bunch of mammograms, probably from different machines-- has an unexpected impact on the model. So the distribution of cancer probabilities from the model is not independent of the device. That's something we're going through now-- we actually ran into this while working on clinical implementation-- and we're looking at this kind of conditional adversarial training setup to try to rectify the issue. It's important. So this is much harder to catch from first principles. But it's important to think through as you really start demoing out your computation. These issues pop up easily, and they're harder to avoid. And lastly, and I think probably the most important one, is messing up in analysis. So it's quite common in prior work in this field-- yes. AUDIENCE: With the adversarial setup there, just to understand what you do: do you have a discriminator that predicts the machine? And then you train against that? ADAM YALA: So my answer is going to be two parts. One, it doesn't work as well as I want it to yet. So really, who knows? But my best hunch, in terms of what's been done before for other kinds of work, specifically on radio signals, is they use a conditional adversarial. So you feed the discriminator both the label and the image representation. You have it try to predict out the device, to try to take away the information that's not just contained within the label distribution.
And that's been shown to be very helpful for people trying to do [INAUDIBLE] detection based off of-- not Wi-Fi-- but, like, radio waves. And the [INAUDIBLE]-- but also, it seems to be the most common approach I've seen in the literature. So it's something that I'm going to try soon. I haven't implemented it yet. It's just GPU time and waiting to queue up the experiment. And the last part, in terms of how to mess up, is the analysis. One thing that's common is people assume that synthetic experiments are the same thing as clinical implementation. Like, people do reader studies very often. And it's quite common to see that reader studies don't actually translate-- like, you might find that computer-aided detection makes a huge difference in reader studies, and Connie actually showed it was harmful in real life. And it's important to do these real-world experiments so that we can say what is happening and whether they deliver the real benefit that was expected. And a hopefully less common mistake nowadays is that oftentimes people exclude all the inconvenient cases. So there was a paper that just came out yesterday on cancer detection that used a kind of patch-based architecture-- and when you read more closely into their details, they excluded all women with breasts that they considered too small, by some threshold, for modeling convenience. But that might disproportionately affect specifically Asian women in that population. And they didn't do a subgroup analysis for all the different races, so it's hard to know what is happening there. If your population is mostly white-- which it is at MGH, and is at a lot of the centers where these algorithms have been developed-- reporting the average that you see isn't enough to really validate that. And so you can have things like the Tyrer-Cuzick model that are worse than random and especially harmful for African-American women. And so, guarding against that-- you can do a lot of that from first principles. But some of these things you can only really find out by actively monitoring, to say, is there any subpopulation that I didn't think about a priori that could be harmed? And finally-- so I talked about clinical deployments. We've actually done this a couple of times. And I'm going to switch over to Connie real soon. In general, what you want to do is make it as easy as possible for the in-house IT team to use your tool. We've gone through this-- it depends on how you count-- it's, like, once for density and then, like, three times at the same time. But I've spent, like, many hours sitting there. And the broad way that we've set it up so far is we just have a kind of Docker container that manages a web app that holds the model. This web app has a kind of processing toolkit built in. So the steps that all of our deployments follow, under a unified framework: the IT application gets some images out of the PACS system. It sends them over to the application. We convert to the PNG format that we expect, because we encapsulate this functionality; run the models; send the results back; and then they write it back to the EHR.
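A rough sketch of what such a model-serving web app can look like (Flask is just one option; the route, field names, input size, and model file are illustrative, not the actual deployment): the hospital IT application posts an image pulled from PACS, the service preprocesses it and returns the model's score, and IT writes that score back to the EHR.

import io

import torch
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import transforms

app = Flask(__name__)
model = torch.jit.load("mammo_model.pt").eval()    # hypothetical exported model
preprocess = transforms.Compose([
    transforms.Resize((1664, 1280)),               # illustrative input size
    transforms.ToTensor(),
])

@app.route("/score", methods=["POST"])
def score():
    # The IT application posts the image as a multipart form file; format
    # conversion to the expected representation is encapsulated on this side.
    img = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    with torch.no_grad():
        prob = torch.sigmoid(model(preprocess(img).unsqueeze(0))).item()
    return jsonify({"cancer_probability": prob})   # written back to the EHR by IT

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)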
One of the things I ran into was that they didn't actually know how to use things like HTTP, because it's not actually normal within their infrastructure. And so being cognizant that some of these more tech-standard things, like just HTTP requests and responses and stuff, are less standard inside their infrastructure, and looking up how to actually do these things in, like, C Sharp, or whatever language they have, has been really what's enabled us to unblock these things and actually plug it in. And that is it for my part. So I'm gonna hand it back-- oh, yes. AUDIENCE: So you're writing stuff in the IT application in C Sharp to do API requests? ADAM YALA: So they're writing it. I just meet with them to tell them how to write it. But yes. So, like, in general, there are libraries. So, like, the entire environment is in Windows. And Windows has very poor support for lots of things you would expect it to have good support for. So, like, if you wanted to send HTTP requests with a multipart form and just put the images in that form, apparently that has bugs in it in, like, Windows, whatever version they use today. And so that vanilla version didn't work. Windows for Docker also has bugs. And I had to set up this kind of logging function for them to, like, automatically tail logs inside the container. And it just doesn't work in Windows for Docker. AUDIENCE: [INAUDIBLE] questions, because he is short on time. ADAM YALA: Yeah. So we can get to this at the end. I want to hand off to Connie. If you have any questions, grab me after.
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019
16_Reinforcement_Learning_Part_1.txt
PROFESSOR: Hi, everyone. We're getting started now. So this week's lecture is really picking up where last week's left off. You may remember we spent last week talking about causal inference. And I told you how, for last week, we were going to focus on a one-time-step setting. Well, as we know, lots of medicine has to do with multiple sequential decisions across time. And that'll be the focus of this whole week's worth of discussions. And as I thought about what I should really teach in this lecture, I realized that the person who knew the most about this topic, in the general area of the medical field, was in fact a postdoctoral researcher in my lab. FREDRIK D. JOHANSSON: Thanks. I'll take it. AUDIENCE: Global [INAUDIBLE]. FREDRIK D. JOHANSSON: It's very fair. PROFESSOR: And so I invited him to come today and give this as an invited lecture. And this is Fredrik Johansson. He'll be a professor at Chalmers, in Sweden, starting in September. FREDRIK D. JOHANSSON: Thank you so much, David. That's very generous. Yeah, so as David mentioned, last time we looked a lot at causal effects. And that's where we will start in this discussion, too. So I'll just start with this reminder here-- we essentially introduced four quantities last time, or over the last two lectures, as far as I know. We had two potential outcomes, which represented the outcomes that we would see of some treatment choice under the various choices-- so, the two different choices, 1 and 0. We had a set of covariates, x, and a treatment, t. And we were interested in, essentially, what is the effect of this treatment, t, on the outcome, y, given the covariates, x. And the effect that we focused on that time was the conditional average treatment effect, which is exactly the difference between these potential outcomes, conditioned on the features. So the whole of last week was about trying to identify this quantity using various methods. And the question that didn't come up so much-- or one question that didn't come up too much-- is how do we use this quantity? We might be interested in it just in terms of its absolute magnitude. How large is the effect? But we might also be interested in designing a policy for how to treat our patients based on this quantity. So today, we will focus on policies. And what I mean by that, specifically, is something that takes into account what we know about a patient and produces a choice or an action as an output. Typically, we'll think of policies as depending on medical history-- perhaps which treatments they have received previously, what state the patient is currently in. But we can also base it purely on the number that we produced last time-- the conditional average treatment effect. And one very natural policy is to say, pi of x is equal to the indicator function representing whether this CATE is positive. So if the effect is positive, we treat the patient. If the effect is negative, we don't. And of course, positive will be relative to the usefulness of the outcome being high. But yeah, this is a very natural policy to consider.
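Written out in last lecture's potential-outcomes notation, the quantity and the policy just described are:

\mathrm{CATE}(x) = \mathbb{E}\left[\, Y_1 - Y_0 \mid X = x \,\right],
\qquad
\pi(x) = \mathbf{1}\left[\, \mathrm{CATE}(x) > 0 \,\right]

that is, treat exactly those patients whose estimated conditional average treatment effect is positive.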
However, we can also think about much more complicated policies that are not just based on this number-- the quality of the outcome. We can think about policies that take into account legislation or cost of medication or side effects. We're not going to do that today, but that's something that you can keep in mind as we discuss these things. So, David mentioned we should now move from the one-step setting, where we have a single treatment acting at a single time and we only have to take into account the state of a patient once, basically. And we will move from that to the sequential setting. And my first example of such a setting is sepsis management. So, sepsis is a complication of an infection, which can have very disastrous consequences. It can lead to organ failure and ultimately death. And it's actually one of the leading causes of death in the ICU. So it's of course important that we can manage and treat this condition. When you start treating sepsis, the primary target-- the first thing you should think about fixing-- is the infection itself. If we don't treat the infection, things are going to keep being bad. But even if we figure out the right antibiotic to treat the infection that is the source of the septic shock or the septic inflammation, there are a lot of different conditions that we need to manage. Because the infection itself can lead to fever, breathing difficulties, low blood pressure, high heart rate-- all these kinds of things that are symptoms, but not the cause in themselves. But we still have to manage them somehow so that the patient survives and is comfortable. So when I say sepsis management, I'm talking about managing such quantities over time-- over a patient's stay in the hospital. So, last time-- again, just to really hammer this in-- we talked about potential outcomes and the choice of a single treatment. So we can think about this in the septic setting as a patient coming in-- or a patient already being in the hospital, presumably-- and presenting with breathing difficulties. So that means that their blood oxygen will be low because they can't breathe on their own. And we might want to put them on mechanical ventilation so that we can ensure that they get sufficient oxygen. We can view this as a single choice. Should we put the patient on mechanical ventilation or not? But what we need to take into account here is what will happen after we make that choice. What will be the side effects of this choice going forward? Because we want to make sure that the patient is comfortable and in good health throughout their stay. So today, we will move towards sequential decision making. And in particular, what I alluded to just now is that decisions made in sequence may have the property that choices early on rule out certain choices later. And we'll see an example of that very soon. And in particular, we'll be interested in coming up with a policy for making decisions repeatedly that optimizes a given outcome-- something that we care about. It could be minimizing the risk of death. It could be a reward that says that the vitals of a patient are in the right range. We might want to optimize that. But essentially, think about it now as having this choice of administering a medication or an intervention at any time, t-- and having the best policy for doing so. OK, I'm going to skip that one. OK, so I mentioned already one potential choice that we might want to make in the management of a septic patient, which is to put them on mechanical ventilation because they can't breathe on their own. A side effect of doing so is that they might suffer discomfort from being intubated. The procedure is not painless; it's not without discomfort. So something that you might have to do-- putting them on mechanical ventilation-- is to sedate the patient.
So this is an action that is informed by the previous action, because if we didn't put the patient on mechanical ventilation, maybe we wouldn't consider them for sedation. When we sedate a patient, we run the risk of lowering their blood pressure. So we might need to manage that, too. So if their blood pressure gets too low, maybe we need to administer vasopressors, which artificially raise the blood pressure, or fluids, or anything else that takes care of this issue. So just think of this as an example of choices cascading, in terms of their consequences, as we roll forward in time. Ultimately, we will face the end of the patient's stay. And hopefully, we managed the patient in a successful way, so that their response or their outcome is a good one. What I'm illustrating here is that, for any one patient in our hospitals or in the health care system, we will only observe one trajectory through these options. So I will show this type of illustration many times, but I hope that you can appreciate the scope of the decision space here. Essentially, at any point, we can choose a different action. And usually, the number of decisions that we make in an ICU setting, for example, is much larger than we could ever test in a randomized trial. Think of all of these different trajectories as being different arms in a randomized controlled trial that you want to compare the effects or the outcomes of. It's infeasible to run such a trial, typically. So one of the big reasons that we are talking about reinforcement learning today, and talking about learning policies rather than causal effects in the setup that we did last week, is because the space of possible action trajectories is so large. Having said that, we now turn to trying to find, essentially, the policy that picks this orange path here-- that leads to a good outcome. And to reason about such a thing, we need to also reason about what is a good outcome. What is a good reward for our agent as it proceeds through time and makes choices? Some policies that we produce as machine learners might not be appropriate for a health care setting. We have to somehow restrict ourselves to something that's realistic. I won't focus very much on this today. It's something that will come up in the discussion tomorrow, hopefully. And also the notion of evaluating something for use in the health care system will be talked about tomorrow. AUDIENCE: Thursday. FREDRIK D. JOHANSSON: Sorry, Thursday. Next time. OK, so I'll start by just briefly mentioning some success stories. And these are not from the health care setting, as you can guess from the pictures. How many have seen some of these pictures? OK, great-- almost everyone. Yeah, so these are from various video games-- almost all of them. Well, games anyhow. And these are good examples of when reinforcement learning works, essentially. That's why I use these in this slide here. Because, essentially, it's very hard to argue that the computer or the program that eventually beat Lee Sedol-- I think he's in this picture-- and also, later, other Go champions, in the AlphaGo picture in the top left, is not doing a good job, because it clearly beat humans here. But one of the things I want you to keep in mind throughout this talk is, what is different between these kinds of scenarios and the health care setting, essentially? And we'll come back to that later. So I simply added another example here.
So there was recently one that's a little bit closer to my heart, which is AlphaStar. I play StarCraft. I like StarCraft, so it should be on the slide. Anyway, let's move on. Broadly speaking, these can be summarized in the following picture. What goes into those systems? There's a lot more nuance when it comes to something like Go. But for the purpose of this class, we will summarize them with a slide. So essentially, one of the three quantities that matters for a reinforcement learning is the state of the environment, the state of the game, the state of the patient-- the state of the thing that we want to optimize, essentially. So in this case, I've chosen Tic-tac-toe here. We have a state which represents the current positions of the circles and crosses. And given that state of the game, my job as a player is to choose one of the possible actions-- one of the free squares to put my cross in. So I'm the blue player here and I can consider these five choices for where to put my next cross. And each of those will lead me to a new state of the game. If I put my cross over here, that means that I'm now in this box. And I have a new set of actions available to me for the next round, depending on what the red player does. So we have the state, we have the actions, and we have the next state, essentially-- we have a trajectory or a transition of states. And the last quantity that we need is the notion of a reward. That's very important for reinforcement learning, because that's what's driving the learning itself. We strive to optimize the reward or the outcome of something. So if we look at the action to the farthest right here, essentially I left myself open to an attack by the red player here, because I didn't put my cross there. Which means that, probably, if the red player is decent, he will put his circle here and I will incur a loss, essentially. So my reward will be negative, if we take positive to be good. And this is something that I can learn from going forward. Essentially, what I want to avoid is ending up in this state that's shown in the bottom right here. This is the basic idea of reinforcement learning for video games and for anything else. So if we take this board analogy or this example and move to the health care setting, we can think of the state of a patient as the game board or the state of the game. We will always call this St in this talk. The treatments that we prescribe or interventions will be At. And these are like the actions in the game, obviously. The outcomes of a patient-- could be mortality, could be managing vitals-- will be as the rewards in the game, having lost or won. And then up at the end here, what could possibly go wrong. Well, as I alluded to before, health is not a game in the same sense that a video game is a game. But they share a lot of mathematical structure. So that's why I make the analogy here. These quantities here-- S, A, and R-- will form something called a decision process. And that's what we'll talk about next. This is the outline for today and Thursday. I won't get to this today, but this is the talks we're considering. So a decision process is essentially the world that describes the data that we access or the world that we're managing our agent in. Very often, if you've ever seen reinforcement learning taught, you have seen this picture in some form, usually. Sometimes there's a mouse and some cheese and there's other things going on, but you know what I'm talking about. But there are the same basic components. 
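As a rough sketch of what these three quantities look like in code, one common representation is a transition tuple per time step; the names below are illustrative, not anything prescribed in the lecture.

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Transition:
    """One step of a decision process: (S_t, A_t, R_t, S_{t+1})."""
    state: Any        # S_t, e.g. current vitals or the game board
    action: Any       # A_t, e.g. intubate / extubate, or where to place a cross
    reward: float     # R_t, e.g. +1 when vitals are in the desired range
    next_state: Any   # S_{t+1}, the state the environment moves to

# A trajectory is simply the time-ordered sequence of transitions
# observed for one patient (or one game).
Trajectory = List[Transition]
```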
So there's the concept of an agent-- let's think doctor for now-- that takes actions repeatedly over time. So this t here indicates an index of time and we see that essentially increasing as we spin around this wheel here. We move forward in time. So an agent takes an action and, at any time point, receives a reward for that action. And that would be Rt, as I said before. The environment is responsible for giving that reward. So for example, if I'm the doctor, I'm the agent, I make an action or an intervention to my patient, the patient will be the environment. And essentially, responses do not respond to my intervention. The state here is the state of the patient, as I mentioned before, for example. But it might also be a state more broadly than the patient, like the settings of the machine that they're attached to or the availability of certain drugs in the hospital or something like that. So we can think a little bit more broadly around the patient, too. I said partially observed here, in that I might not actually know everything about the patient that's relevant to me. And we will come back a little bit later to that. So there are two different formalizations that are very close to each other, which is when you'd know everything about s and when you don't. We will, for the longest part of this talk, focus on the way I know everything that is relevant about the environment. OK, to make this all a bit more concrete, I'll return to the picture that I showed you before, but now put it in context of the paper that you read. Was that the compulsory one? The mechanical ventilation? OK, great. So in this case, they had an interesting reward structure, essentially. The thing that they were trying to optimize was the reward related to the vitals of the patient. But also whether they were kept on mechanical ventilation or not. And the idea of this paper is that you don't want to keep a patient unnecessarily on mechanical ventilation, because it has the side effects that we talked about before. So at any point in time, essentially, we can think about taking a patient on or off-- and also dealing with the sedatives that are prescribed to them. So in this example, the state that they considered in this application included the demographic information of the patient, which doesn't really change over time. Their physiological measurements, ventilator settings, consciousness level, the dosages of the sedatives they use, which could be an action, I suppose-- and a number of other things. And these are the values that we have to keep track of, moving forward in time. The actions concretely included whether to intubate or extubate the patient, as well as the administer and dosing the sedatives. So this is, again, an example of a so-called decision process. And essentially, the process is the distribution of these quantities that I've been talking about over time. So we have the states, the actions, and the rewards. They all traverse or they all evolve over time. And the loss of how that happens is the decision process. I mentioned before that we will be talking about policies today. And typically, there's a distinction between what is called a behavior policy and a target policy-- or there are different words for this. Essentially, the thing that we observe is usually called a behavior policy. By that, I mean if we go to a hospital and watch what's happening there at the moment, that will be the behavior policy. And I will denote that mu. So that is what we have to learn from, essentially. 
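The agent-environment loop described here can be sketched as below. The `env` and `policy` objects are assumed interfaces (reset, step, and a callable policy), not a specific library; think of the environment as the patient plus the care setting responding to each action with a reward and a new state.

```python
# Hypothetical interaction loop between an agent (a policy) and an environment.

def run_episode(env, policy, max_steps=100):
    """Roll out one trajectory; returns a list of (s, a, r, s_next) tuples."""
    trajectory = []
    state = env.reset()                              # initial state S_0
    for t in range(max_steps):
        action = policy(state)                       # A_t chosen from S_t
        next_state, reward, done = env.step(action)  # environment returns R_t and S_{t+1}
        trajectory.append((state, action, reward, next_state))
        state = next_state
        if done:                                     # e.g. discharge or a terminal outcome
            break
    return trajectory
```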
So decision processes so far are incredibly general. I haven't said anything about what this distribution is like, but the absolutely dominant restriction that people make when they study system processes is to look at Markov decision processes. And these have a specific conditional independent structure that I will illustrate in the next slide-- well, I'll just define it mathematically here. It says, essentially, that all of the quantities that we care about-- the states. I guess that should say state. Rewards and the actions only depend on the most recent state in action. If we observe an action taken by a doctor in the hospital, for example-- to make a mark of assumption, we'd say that this doctor did not look at anything that happened earlier in time or any other information than what is in the state variable that we observe at that time. That is the assumption that we make. Yeah? AUDIENCE: Is that an assumption you can make for a health care? Because in the end, you don't have access to the real state, but only about what's measured about the state in health care. FREDRIK D. JOHANSSON: It's a very good question. So the nice thing in terms of inferring causal quantities is that we only need the things that were used to make the decision in the first place. So the doctor can only act on such information, too. Unless we don't record everything that the doctor knows-- which is also the case. So that is something that we have to worry about for sure. Another way to lose information, as I mentioned, that is relevant for this is if we look to-- What's the opposite of far? AUDIENCE: Near. FREDRIK D. JOHANSSON: Too near back in time, essentially. So we don't look at the entire history of the patient. And when I say St here, it doesn't have to be the instantaneous snapshot of a patient. We can also Include history there. Again, we'll come back to that a little later. OK, so the Markov assumption essentially looks like this. Or this is how I will illustrate, anyway. We have a sequence of states here that evolve over time. I'm allowing myself to put some dots here, because I don't want to draw forever. But essentially, you could think of this pattern repeating-- where the previous state goes into the next state, the action goes into the next state, and the action and state goes in through the reward. This is the world that we will live in for this lecture. Something that's not allowed under the mark of assumption is an edge like this, which says that an action at an early time influences an action at a later time. And specifically, it can't do so without passing through a state, for example. It very well can have an influence on At by this trajectory here, but not directly. That that's the Markov assumption in this case. So you can see that if I were to draw the graph of all the different measurements that we see during a state, essentially there are a lot of errors that I could have had in this picture that I don't have. So it may seem that the Markov assumption is a very strong one, but one way to ensure that the Markov assumption is more likely is to include more things in your state, including summaries of the history, et cetera, that I mentioned before. An even stronger restriction of decision processes is to assume that the states over time are themselves independent. So this goes by different names-- sometimes under the name contextual bandits. But the bandits part of that itself is not so relevant here. So let's not go into that name too much. 
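Written out, the Markov assumption on the decision process, and the stronger restriction mentioned at the end of this passage, look like this (one common convention, with the reward attached to the current state and action):

```latex
% Markov assumption: next state and reward depend only on the current state and action,
% and the policy acts on the current state alone.
p\big(S_{t+1}, R_t \mid S_t, A_t, S_{t-1}, A_{t-1}, \dots, S_0, A_0\big)
  = p\big(S_{t+1}, R_t \mid S_t, A_t\big),
\qquad
A_t \sim \pi\big(\cdot \mid S_t\big).

% The stronger "contextual bandit" restriction: later states are not influenced
% by earlier states or actions at all.
p\big(S_{t+1} \mid S_t, A_t\big) = p\big(S_{t+1}\big).
```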
But essentially, what we can say here is that the state at a later time point is not influenced directly by the state at a previous time point, nor by the action at the previous time point. So if you remember what you did last week, this looks like basically T repetitions of the very simple graph that we had for estimating potential outcomes. And that is indeed mathematically equivalent, if we assume that this S here represents the state of a patient and all patients are drawn from the same process, essentially. So that S0, S1, et cetera, up to St are all i.i.d. draws of the same distribution. Then we have, essentially, a model for t different patients with a single time step or single action, instead of them being dependent in some way. So we can see that by going backwards through my slides, this is essentially what we had last week. And we just have to add more arrows to get to what we have this week, which indicates that last week was a special case of this-- just as David said before. It also hints at the reinforcement learning problem being more complicated than the potential outcomes problem. And we'll see more examples of that later. But, like with causal effect estimation that we did last week, we're interested in the influences of just a few variables, essentially. So last time we studied the effect of a single treatment choice. And in this case, we will study the influence of these various actions that we take along the way. That will be the goal. And it could be either through an immediate effect on the immediate reward, or it can be through the impact that an action has on the state trajectory itself. I told you about the world now that we live in. We have these Ss and As and Rs. And I haven't told you so much about the goal that we're trying to solve or the problem that we're trying to solve. Most RL-- or reinforcement learning-- is aimed at optimizing the value of a policy, or finding a policy that has a good return, a good sum of rewards. There are many names for this, but essentially a policy that does well. The notion of well that we will be using in this lecture is that of a return. So the return at a time step t, following the policy, pi, that I had before, is the sum of the future rewards that we see if we were to act according to that policy. So essentially, I stop now. I ask, OK, if I keep on doing the same as I've done through my whole life-- maybe that was a good policy. I don't know. And keep going until the end of time, how well will I do? What is the sum of those rewards that I get, essentially? That's the return. The value is the expectation of such things. So if I'm not the only person, but there is the whole population of us, the expectation over that population is the value of the policy. So if we take patients as a better analogy than my life, maybe, the expectation is over patients. If we act on every patient in our population the same way-- according to the same policy, that is-- what is the expected return over those patients? So as an example, I drew a few trajectories again, because I like drawing. And we can think about three different patients here. They start in different states. And they will have different action trajectories as a result. So we're treating them with the same policy. Let's call it pi. But because they're in different states, they will have different actions at the same times. So here we take a 0 action, we go down. Here, we take a 0 action, we go down. That's what that means here. The specifics of this are not so important.
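A minimal sketch of the two quantities just described, the return of a trajectory and the value of the policy estimated by averaging over trajectories (patients), is below. The per-step reward lists are purely illustrative, and gamma = 1 recovers the plain, undiscounted sum of rewards used in the lecture.

```python
import numpy as np

def returns(rewards, gamma=1.0):
    """G_t = sum_{k >= 0} gamma^k * R_{t+k}, computed for every t of one trajectory."""
    G = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        G[t] = running
    return G

def value_estimate(reward_sequences, gamma=1.0):
    """Monte Carlo estimate of the value of the policy that generated the data:
    average the return from time 0 across trajectories (e.g. across patients)."""
    return np.mean([returns(r, gamma)[0] for r in reward_sequences])

# Three illustrative patient trajectories with their per-step rewards.
trajs = [[0.0, 0.0, 1.0], [0.0, -1.0], [0.0, 0.0, 0.0, 1.0]]
print(value_estimate(trajs))
```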
But what I want you to pay attention to is that, after each action, we get a reward. And at the end, we can sum those up and that's our return. So each patient has one value for their own trajectory. And the value of the policy is then the average value of such trajectories. So that is what we're trying to optimize. We have now a notion of good and we want to find a pi such that V pi up there is good. That's the goal. So I think it's time for a bit of an example here. I want you to play along in a second. You're going to solve this problem. It's not a hard one. So I think you'll manage. I think you'll be fine. But this is now yet another example of a world to be in. This is the robot in a room. And I've stolen this slide from David, who stole it from Peter Bodik. Yeah, so credits to him. The rules of this world says the following-- if you tell the robot, who is traversing this set of tiles here-- if you tell the robot to go up, there's a chance he doesn't go up, but goes somewhere else. So we have the stochastic transitions, essentially. If I say up, he goes up with point a probability and somewhere else with uniform probability, say. So 0.8 up and then 0.2-- this is the only possible direction to go in if you start here. So 0.2 in that way. There's a chance you move in the wrong direction is what I'm trying to illustrate here. There's no chance that they're going in the opposite direction. So if I say right here, it can't go that way. The rewards in this game is plus 1 in the green box up there, minus 1 in the box here. And these are also terminal states. So I haven't told you what that is, but it's essentially a state in which the game ends. So once you get to either plus 1 or minus 1, the game is over. For each step that the robot takes, it incurs 0.04 negative reward. So that says, essentially, that if you keep going for a long time, your reward would be bad. The value of the policy will be bad. So you want to be efficient. So basically, you can figure out-- you want to get to the green thing, that's one part of it. But you also want to do it quickly. So what I want you to do now is to essentially figure out what is the best policy, in terms of in which way should the arrows point in each of these different boxes? Fill in the question mark with an arrow pointing in some direction. We know the different transitions will be stochastic, so you might need to take that into account. But essentially, figure out how do I have a policy that gives me the biggest expected reward? And I'll ask you in a few minutes if one of you is brave enough to put it on the board or something like that. AUDIENCE: We start the discount over time? FREDRIK D. JOHANSSON: There's no discount. AUDIENCE: Can we talk to our neighbor? FREDRIK D. JOHANSSON: Yes. It's encouraged. [INTERPOSING VOICES] FREDRIK D. JOHANSSON: So I had a question. What is the action space? And essentially, the action space is always up, down, left, or right, depending on if there's a wall or not. So you can't go right here, for example. AUDIENCE: You can't go left either. FREDRIK D. JOHANSSON: You can't go left, exactly. Good point. So each box at the end, when you're done, should contain an arrow pointing in some direction. All right, I think we'll see if anybody has solved this problem now. Who thinks they have solved it? Great. Would you like to share your solution? AUDIENCE: Yeah, so I think it's going to go up first. FREDRIK D. JOHANSSON: I'm going to try and replicate this. Ooh, sorry about that. OK, you're saying up here? 
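For reference, here is one way to set this exercise up and solve it with value iteration; you can compare the printed policy with the discussion that follows. The layout is an assumption (the classic 4x3 grid with the +1 terminal in the top-right corner, the -1 terminal just below it, one blocked cell, and 0.8/0.1/0.1 slips to the perpendicular directions), since the exact slide is not reproduced in the transcript, and the solver is plain value iteration rather than the Q-learning introduced later.

```python
import numpy as np

ROWS, COLS = 3, 4                       # row 0 is the top of the grid
WALL = (1, 1)                           # blocked cell
TERMINALS = {(0, 3): +1.0, (1, 3): -1.0}
STEP_REWARD = -0.04                     # cost of every non-terminal step
GAMMA = 1.0                             # no discounting, as stated in the exercise
ACTIONS = {'up': (-1, 0), 'down': (1, 0), 'left': (0, -1), 'right': (0, 1)}
PERP = {'up': ['left', 'right'], 'down': ['left', 'right'],
        'left': ['up', 'down'], 'right': ['up', 'down']}

def move(s, a):
    r, c = s[0] + ACTIONS[a][0], s[1] + ACTIONS[a][1]
    if not (0 <= r < ROWS and 0 <= c < COLS) or (r, c) == WALL:
        return s                        # bumping into a wall or the edge: stay put
    return (r, c)

def transitions(s, a):
    """(probability, next_state) pairs: 0.8 intended direction, 0.1 to each side."""
    return [(0.8, move(s, a))] + [(0.1, move(s, p)) for p in PERP[a]]

states = [(r, c) for r in range(ROWS) for c in range(COLS) if (r, c) != WALL]

# Value iteration: repeatedly apply the Bellman optimality backup.
V = {s: 0.0 for s in states}
for _ in range(100):
    for s in states:
        if s in TERMINALS:
            V[s] = TERMINALS[s]
            continue
        V[s] = max(sum(p * (STEP_REWARD + GAMMA * V[s2]) for p, s2 in transitions(s, a))
                   for a in ACTIONS)

# Greedy policy with respect to the converged values.
policy = {s: max(ACTIONS, key=lambda a: sum(p * (STEP_REWARD + GAMMA * V[s2])
                                            for p, s2 in transitions(s, a)))
          for s in states if s not in TERMINALS}
print(policy)
```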
AUDIENCE: Yeah. The basic idea is you want to reduce the chance that you're ever adjacent to the red box. So just do everything you can to stay far from it. Yeah, so attempt to go up and then once you eventually get there, you just have to go right. FREDRIK D. JOHANSSON: OK. And then? AUDIENCE: [INAUDIBLE]. FREDRIK D. JOHANSSON: OK. So what about these ones? This is also part of the policy, by the way. AUDIENCE: I hadn't thought about this. FREDRIK D. JOHANSSON: OK. AUDIENCE: But those, you [INAUDIBLE],, right? FREDRIK D. JOHANSSON: No. AUDIENCE: Minus 0.04. FREDRIK D. JOHANSSON: So discount usually means something else. We'll get to that later. But that is a reward for just taking any step. If you move into a space that is not terminal, you incur that negative reward. AUDIENCE: So if you keep bouncing around for a really long time, you incur a long negative reward. FREDRIK D. JOHANSSON: If we had this, there's some chance I'd never get out of all this. And very little chance of that working out. But it's a very bad policy, because you keep moving back and forth. All right, we had an arm somewhere. What should I do here? AUDIENCE: You could take a vote. FREDRIK D. JOHANSSON: OK. Who thinks right? Really? Who thinks left? OK, interesting. I don't actually remember. Let's see. Go ahead. AUDIENCE: I was just saying, that's an easy one. FREDRIK D. JOHANSSON: Yeah, so this is the part that we already determined. If we had deterministic transitions, this would be great, because we don't have to think about the other ones. This is what Peter put on the slide. So I'm going to have to disagree with the vote there, actually. It depends, actually, heavily on the minus 0.04. So if you increase that by a little bit, you might want to go that way instead. Or if you decrease-- I don't remember. Decrease, exactly. And if you increase it, you might get something else. It might actually be good to terminate. So those details matter a little bit. But I think you've got the general idea. And especially I like that you commented that you want to stay away from the red one, because if you look at these different paths. You go up there and there-- they have the same number of states, but there's less chance you end up in the red box if you take the upper route. Great. So we have an example of a policy and we have an example of a decision process. And things are working out so far. But how do we do this? As far as the class goes, this was a blackbox experiment. I don't know anything about how you figured that out. So reinforcement learning is about that-- reinforcement learning is try and come up with a policy in a rigorous way, hopefully-- ideally. So that would be the next topic here. Up until this point, are there any questions that you've been dying to ask, but haven't? AUDIENCE: I'm curious how much behavioral biases could play into the first Markov assumption? So for example, if you're a clinician who's been working for 30 years and you're just really used to giving a certain treatment. An action that you gave in the past-- that habit might influence an action in the future. And if that is a worry, how one might think about addressing it. FREDRIK D. JOHANSSON: Interesting. I guess it depends a little bit on how it manifests, in that if it also influenced your most recent action, maybe you have an observation of that already in some sense. It's a very broad question. What effect will that have? Did you have something specific in mind? 
AUDIENCE: I guess I was just wondering if it violated that assumption, that an action of the past influenced an action-- FREDRIK D. JOHANSSON: Interesting. So I guess my response there is that the action didn't really depend on the choice of action before, because the policy remained the same. You could have a bias towards an action without that being dependent on what you gave as action before, if you know what I mean. Say my probability of giving action one is 1, then it doesn't matter that I give it in the past. My policy is still the same. So, not necessarily. It could have other consequences. We might have reason to come back to that question later. Yup. AUDIENCE: Just practically, I would think that a doctor would want to be consistent. And so you wouldn't, for example, want to put somebody on a ventilator and then immediately take them off and then immediately put them back on again. So that would be an example where the past action influences what you're going to do. FREDRIK D. JOHANSSON: Completely, yeah. I think that's a great example. And what you would hope is that the state variable in that case includes some notion of treatment history. That's what your job is then. So that's why state can be somewhat misleading as a term-- at least for me, I'm not American or English-speaking. But yeah, I think of it as too instantaneous sometimes. So we'll move into reinforcement learning now. And what I had you do on the last slide-- well, I don't know which method you use, but most likely the middle one. There are three very common paradigms for reinforcement learning. And they are essentially divided by what they focus on modeling. Unsurprisingly, model-based RL focused on-- well, it has some sort of model in it, at least. What you mean by model in this case is a model of the transitions. So what state will I end up in, given the action in the state I'm in at the moment? So model-based RL tries to essentially create a model for the environment or of the environment. There are several examples of model-based RL. One of them is G-computation, which comes out of the statistic literature, if you like. And MDPs are essentially-- that's a Markov decision process, which is essentially trying to estimate the whole distribution that we talked about before. There are various ups and downsides of this. We won't have time to go into all of these paradigms today. We will actually focus only on value-based RL today. Yeah, you can ask me offline if you are interested in model-based RL. The rightmost one here is policy-based RL, where you essentially focus only on modeling the policy that was used in the data that you observed. And the policy that you want to essentially arrive at. So you're optimizing a policy and you are estimating a policy that was used in the past. And the middle one focuses on neither of those and focuses on only estimating the return-- that was the G. Or the reward function as a function of your actions and states. And it's interesting to me that you can pick any of the variables-- A, S, and R-- and model those. And you can arrive at something reasonable in reinforcement learning. This one is particularly interesting, because it doesn't try to understand how do you arrive at a certain return based on the actions in the states? It's just optimize the policy directly. And it has some obvious-- well, not obvious, but it has some downsides, not doing that. OK, anyway, we're going to focus on value-based RL. And the very dominant instantiation of value-based RL is Q-learning. 
I'm sure you've heard of it. It is what drove the success stories that I showed before, the goal in the StarCraft and everything. G-estimation is another example of this, which, again, has come from the statistic literature. But we'll focus on Q-learning today. So Q-learning is an example of dynamic programming, in some sense. That's how it's usually explained. And I just wanted to check-- how many have heard the phrase dynamic programming before? OK, great. So I won't go into details of dynamic programming in general. But the general idea is one of recursion. In this case, you know something about what is a good terminal state. And then you want to figure out how to get there and how to get to the state before that and the state before that and so on. That is the recursion that we're talking about. The end state that is the best here is fairly obvious-- that is the plus 1 here. The only way to get there is by stopping here first, because you can't move from here since it's a terminal state. Your only bet is that one. And then we can ask what is the best way to get to 3, 1? How do we get to the state before the best state? Well, we can say that one way is go from here. And one way from here. And as we got from the audience before, this is a slightly worse way to get there then from there, because here we have a possibility of ending up in minus 1. So then we recurse further and essentially, we end up with something like this that says-- or what I tried to illustrate here is that the green boxes-- I'm sorry for any colorblind members of the audience, because this was a poor choice of mine. Anyway, this bottom side here is mostly red and this is mostly green. And you can follow the green color here, essentially, to get to the best end state. And what I used here to color this in is this idea of knowing how good a state is, depending on how good the state after that state is. So I knew that plus 1 is a good end state over there. And that led me to recurse backwards, essentially. So the question, then, is how do we know that that state over there is a good one? When we have it visualized in front of us, it's very easy to see. And it's very easy because we know that plus 1 is a terminal state here. It ends there, so those are the only states we need to consider in this case. But more in general, how do we learn what is the value of a state? That will be the purpose of Q-learning. If we have an idea of what is a good state, we can always do that recursion that I explained very briefly. You find a state that has the high value and you figure out how to get there. So we're going to have to define now what I mean by value. I've used that word a few different times. I say recall here, but I don't know if I actually had it on a slide before. So let's just say this is the definition of value that we will be working with. I think I had it on a slide before, actually. This is the expected return. Remember, this G here was the sum of rewards going into the future, starting at time, t. And the value, then, of this state is the expectation of such returns. Before, I said that the value of an policy was the expectation of returns, period. And the value of a state and the policy is the value of that return starting in a certain state. We can stratify this further if we like and say that the value of a state action pair is the expected return, starting in a certain state and taking an action, a. And after that, following the policy, pi. This would be the so-called Q value of a state-action pair-- s, a. 
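Written out, the return, the state value, and the state-action (Q) value are as follows; gamma is a discount factor in [0, 1], and setting gamma = 1 gives the plain sum of future rewards used so far in the lecture.

```latex
G_t = \sum_{k \ge 0} \gamma^{k} R_{t+k},
\qquad
V^{\pi}(s) = \mathbb{E}_{\pi}\!\left[\, G_t \mid S_t = s \,\right],
\qquad
Q^{\pi}(s, a) = \mathbb{E}_{\pi}\!\left[\, G_t \mid S_t = s,\; A_t = a \,\right].
```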
And this is where Q-learning gets its name. So Q-learning attempts to estimate the Q function-- the expected return starting in a state, s, and taking action, a-- from data. Q-learning is also associated with a deterministic policy. So the policy and the Q function go together in this specific way. If we have a Q function, Q, that tries to estimate the value of a policy, pi, the pi itself is the arg max according to that Q. It sounds a little recursive, but hopefully it will be OK. Maybe it's more obvious if we look here. So Q, I said before, was the value of starting in s, taking action, a, and then following policy, pi. This is defined by the decision process itself. The best pi, the best policy, is the one that has the highest Q. And this is what we call Q-star. Well, that is not what we call Q-star, that is what we call little q-star. Q-star, the best estimate of this, is obviously the thing itself. So if you can find a good function that assigns a value to a state-action pair, the best such function you can get is the one that is equal to little q-star. I hope that wasn't too confusing. I'll show on the next slide why that might be reasonable. So Q-learning is based on a general idea from dynamic programming, which is the so-called Bellman equation. There we go. This is an instantiation of Bellman optimality, which says that the best state-action value function has the property that it is equal to the immediate reward of taking action, a, in state, s, plus this, which is the maximum Q value for the next state. So we're going to stare at this for a bit, because there's a bit here to digest. Remember, q-star assigns a value to any state-action pair. So we have q-star here, we have q-star here. This thing here is supposed to represent the value going forward in time after I've made this choice, action, a, in state, s. If I have a good idea of how good it is to take action, a, in state, s, it should both incorporate the immediate reward that I get-- that's RT-- and how good that choice was going forward. So think about mechanical ventilation, as I said before. If we put a patient on mechanical ventilation, we have to do a bunch of other things after that. If none of those other things lead to a good outcome, this part will be low, even if the immediate reward is good. So for the optimal q-star, this equality holds. We know that-- we can prove that. So the question is how do we find this thing? How do we find q-star? Because q-star is not only the thing that gives you the optimal policy-- it also satisfies this equality. This is not true for every Q function, but it's true for the optimal one. Questions? If you haven't seen this before, it might be a little tough to digest. Is the notation clear? Essentially, here you have the state that you arrive at at the next time step. A prime is the argument of this maximum here. You're taking the best possible q-star value in the state that you arrive at after. Yeah, go ahead. AUDIENCE: Can you instantiate an example you have on the board? FREDRIK D. JOHANSSON: Yes. Actually, I might do a full example of Q-learning in a second. Yes, I will. I'll get to that example then. Yeah, I was debating whether to do that. It might take some time, but it could be useful. So where are we? So what I showed you before-- the Bellman equality. We know that this holds for the optimal thing.
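In one standard form, the Bellman optimality equation being referred to, together with the greedy policy it induces, is:

```latex
Q^{*}(s, a) \;=\; \mathbb{E}\!\left[\, R_t + \gamma \max_{a'} Q^{*}(S_{t+1}, a') \;\middle|\; S_t = s,\; A_t = a \,\right],
\qquad
\pi^{*}(s) \;=\; \arg\max_{a} Q^{*}(s, a).
```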
And if there is a quality that is true at an optimum, one general idea in optimization is this so-called fixed point iteration that you can do to arrive there. And that's essentially what we will do to get to a good Q. So a nice thing about Q-learning is that if your states and action spaces are small and discrete, you can represent the Q function as a table. So all you have to keep track of is, how good is the certain action in a certain state? Or all actions in all states, rather? So that's what we did here. This is a table. I've described to you the policy here, but what we'll do next is to describe the value of each action. So you can think of a value of taking the right one, bottom, top, and left, essentially. Those will be the values that we need to consider. And so what Q-learning can do with discrete states is to essentially start from somewhere, start from some idea of what Q is-- could be random, could be 0. And then repeat the following fixed-point iteration, where you update your former idea of what Q should be, with its current value plus essentially a mixture of the immediate reward for taking action, At, in that state, and the future reward, as judged by your current estimate of the Q function. So we'll do that now in practice. Yeah. AUDIENCE: Throughout this, where are we getting the transition probabilities or the behavior of the game? FREDRIK D. JOHANSSON: So they're not used here, actually. A value-based RL-- I didn't say that explicitly, but they don't rely on knowing the transition probabilities. What you might ask is where do we get the S and the As and the Rs from? And we'll get to that. How do we estimate these? We'll get to that later. Good question, though. I'm going to throw a very messy slide at you. Here you go. A lot of numbers. So what I've done now here is a more exhaustive version of what I put on the board. For each little triangle here represents the Q value for the state-action pair. So this triangle is, again, for the action right if you're in this state. So what I've put on the first slide here is the immediate reward of each action. So we know that any step will cost us minus 0.04. So that's why there's a lot of those here. These white boxes here are not possible actions. Up here, you have a 0.96, because it's 1, which is the immediate reward of going right here, minus 0.04. These two are minus 1.04 for the same reason-- because you arrive in minus 1. OK, so that's the first step and the second step done. We initialize Qs to be 0. And then we picked these two parameters of the problem, alpha and gamma, to be 1. And then we did the first iteration of Q-learning, where we set the Q to be the old version of Q, which was 0, plus alpha times this thing here. So Q was 0, that means that this is also 0. So the only thing we need to look at is this thing here. This also is 0, because the Qs for all states were 0, so the only thing we end up with is R. And that's what populated this table here. Next timestep-- I'm doing Q-learning now in a way where I update all the state-action pairs at once. How can I do that? Well, it depends on the question I got there, essentially. What data do I observe? Or how do I get to know the rewards of the S&A pairs? We'll come back to that. So in the next step, I have to update everything again. So it's the previous Q value, which was minus 0.04 for a lot of things, then plus the immediate reward, which was this RT. And I have to keep going. 
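The update being applied in this walkthrough, written as code over a stream of observed transitions, is sketched below. In the slide's walkthrough both alpha and gamma are set to 1 and every state-action pair is swept at once; the defaults here are just illustrative, and where the transitions come from is left open for now.

```python
from collections import defaultdict

def q_learning(transitions, actions, alpha=0.1, gamma=1.0, n_sweeps=50):
    """Tabular Q-learning over a list of (s, a, r, s_next, done) transitions."""
    Q = defaultdict(float)                # Q[(s, a)], initialized to 0
    for _ in range(n_sweeps):
        for s, a, r, s_next, done in transitions:
            # Bootstrapped target: immediate reward plus the best future value
            target = r if done else r + gamma * max(Q[(s_next, b)] for b in actions)
            # Move the current estimate part of the way toward the target
            Q[(s, a)] += alpha * (target - Q[(s, a)])
    return Q
```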
So the dominant thing for the table this time was that the best Q value for almost all of these boxes was minus 0.04. So essentially I will add the immediate reward plus that almost everywhere. What is interesting, though, is that here, the best Q value was 0.96. And it will remain so. That means that the best Q value for the adjacent states-- we look at this max here and get 0.96 out. And then add the immediate reward. Getting to here gives me 0.96 minus 0.04 for the immediate reward. And now we can figure out what will happen next. These values will spread out as you go further away from the plus 1. I don't think we should go through all of this, but you get a sense, essentially, how information is moved from the plus 1 and away. And I'm sure that's how you solved it yourself, in your head. But this makes it clear why you can do that, even if you don't know where the terminal states are or where the value of the state-action pairs are. AUDIENCE: Doesn't this calculation assume that if you want to move in a certain direction, you will move in that direction? FREDRIK D. JOHANSSON: Yes. Sorry. Thanks for reminding me. That should have been in the slide, yes. Thank you. I'm going to skip the rest of this. I hope you forgive me. We can talk more about it later. Thanks for reminding me, Pete, there, that one of the things I exploited here was that I assume just deterministic transitions. Another thing that I relied very heavily on here is that I can represent this Q function as a table. I drew all these boxes and I filled the numbers in. That's easy enough. But what if I have thousands of states and thousands of actions? That's a large table. And not only is it a large table for me to keep in memory-- it's also very bad for me statistically. If I want to observe anything about a state-action pair, I have to do that action in that state. And if you think about treating patients in a hospital, you're not going to try everything in every state, usually. You're also not going to have infinite numbers of patients. So how do you figure out what is the immediate reward of taking a certain action in a certain state? And this is where a function approximation comes in. Essentially, if you can't represent your data set table, either for statistical reasons or for memory reasons, let's say, you might want to approximate the Q function with a parametric or with a non-parametric function. And this is exactly what we can do. So we can draw now an analogy to what we did last week. I'm going to come back to this, but essentially instead of doing this fixed-point iteration that we did before, we will try and look for a function Q theta that is equal to R plus gamma max Q. Remember before, we had the Bellman inequality? We said that q-star S, A is equal to R S, A, let's say, plus gamma max A prime q star S prime A prime, where S prime is the state we get to after taking action A in state S. So the only thing I've done here is to take this equality and make it instead a loss function on the violation of this equality. So by minimizing this quantity, I will find something that has approximately the Bellman equality that we talked about before. This is the idea of fitted Q-learning, where you substitute the tabular representation with the function approximations, essentially. So just to make this a bit more concrete, we can think about the case where we have only a single step. There's only a single action to make, which means that there is no future part of this equation here. 
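Turning the violation of the Bellman equality into a loss, as just described, gives an objective of roughly this form, where Q-hat is a copy of the current estimate held fixed while theta is fitted (the alternation is discussed next):

```latex
\min_{\theta}\;\; \mathbb{E}\!\left[ \Big( Q_{\theta}(S_t, A_t) - \big( R_t + \gamma \max_{a'} \hat{Q}(S_{t+1}, a') \big) \Big)^{2} \right].
```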
This part goes away, because there's only one stage in our trajectory. So we have only the immediate reward. We have only the Q function. Now, this is exactly a regression equation in the way that you've seen it when estimating potential outcomes. RT here represents the outcome of doing action A and state S. And Q here will be our estimate of this RT. Again, I've said this before-- if we have a single time point in our process, the problem reduces to estimating potential outcomes, just the way we saw it last time. We have curves that correspond outcomes under different actions. And we can do regression adjustment, trying to find an F such that this quantity is small so that we can model each different potential outcomes. And that's exactly what happens with the fitted Q iteration if you have a single timestep, too. So to make it even more concrete, we can say that there's some target value, G hat, which represents the immediate reward and the future rewards that is the target of our regression. And we're fitting some function to that value. So the question we got before was how do I know the transition matrix? How do I get any information about this thing? I say here on the slide that, OK, we have some target that's R plus future Q values. We have some prediction and we have an expectation of our transitions here. But how do I evaluate this thing? The transitions I have to get from somewhere, right? And another way to say that is what are the inputs and the outputs of our regression? Because when we estimate potential outcomes, we have a very clear idea of this. We know that y, the outcome itself, is a target. And the input is the covariates, x. But here, we have a moving target, because this Q hat, it has to come from somewhere, too. This is something that we estimate as well. So usually what happens is that we alternate between updating this target, Q, and Q theta. So essentially, we copy Q theta to become our new Q hat and we iterate this somehow. But I still haven't told you how to evaluate this expectation. So usually in RL, there are a few different ways to do this. And either depending on where you coming from, essentially, these are varyingly viable. So if we look back at this thing here, it relies on having tuples of transitions-- the state, the action, the next state, and the reward that I got. So I have to somehow observe those. And I can obtain them in various ways. A very common one when it comes to learning to play video games, for example, is that you do something called on-policy exploration. That means that you observe data from the policy that you're currently optimizing. You just play the game according to the policies that you have at the moment. And the analogy in health care would be that you have some idea of how to treat patients and you just do that and see what happens. That could be problematic, especially if you've got that policy-- if you randomly initialized it or if you got it for some somewhere very suboptimal. A different thing that we're more, perhaps, comfortable with in health care, in a restricted setting, is the idea of a randomized trial, where, instead of trying out some policy that you're currently learning, you decide on a population where it's OK to flip a coin, essentially, between different actions that you have. The difference between the sequential setting and the one-step setting is that now we have to randomize a sequence of actions, which is a little bit unlike the clinical trials that you have seen before, I think. 
The last one, which is the most studied one when it comes to practice, I would say, is the one that we talk about this week-- is off-policy evaluation or learning, in which case you observe health care records, for example. You observe registries. You observe some data from the health care system where patients have already been treated and you try to extract a good policy based on that information. So that means that you see these transitions between state and action and the next state and the reward. You see that based on what happened in the past and you have to figure out a pattern there that helps you come up with a good action or a good policy. So we'll focus on that one for now. The last part of this talk will be about, essentially, what we have to be careful with when we learn with off-policy data. Any questions up until this point? Yeah. AUDIENCE: So if [INAUDIBLE] getting there for the [INAUDIBLE],, are there any requirements that has to be met by [INAUDIBLE],, like how we had [INAUDIBLE] and cause inference? FREDRIK D. JOHANSSON: Yeah, I'll get to that on the next set of slides. Thank you. Any other questions about the Q-learning part? A colleague of mine, Rahul, he said-- or maybe he just paraphrased it from someone else. But essentially, you have to see RL 10 times before you get it, or something to that effect. I had the same experience. So hopefully you have questions for me after. AUDIENCE: Human reinforcement learning. FREDRIK D. JOHANSSON: Exactly. But I think what you should take from the last two sections, if not how to do Q-learning in detail, because I glossed over a lot of things. You should take with you the idea of dynamic programming and figuring out, how can I learn about what's good early on in my process from what's good late? And the idea of moving towards a good state and not just arriving there immediately. And there are many ways to think about that. OK, we'll move on to off-policy learning. And again, the set-up here is that we receive trajectories of patient states, actions, and rewards from some source. We don't know what these sources necessarily-- well, we probably know what the source is. But we don't know how these actions were performed, i.e., we don't know what the policy was that generated these trajectories. And this is the same set-up as when you estimated causal effects last week, to a large extent. We say that the actions are drawn, again, according to some behavior policy unknown to us. But we want to figure out what is the value of a new policy, pi. So when I showed you very early on-- I wish I had that slide again. But essentially, a bunch of patient trajectories and some return. Patient trajectories, some return. The average of those, that's called a value. If we have trajectories according to a certain policy, that is the value of that policy-- the average of these things. But when we have trajectories according to one policy and want to figure out the value of another one, that's the same problem as the covariate adjustment problem that you had last week, essentially. Or the confounding problem, essentially. The trajectories that we draw are biased according to the policy of the clinician that created them. And we want to figure out the value of a different policy. So it's the same as the confounding problem from the last time. And because it is the same as the confounding from last time, we know that this is at least as hard as doing that. We have confounding-- I already alluded to variance issues. 
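Before turning to the assumptions, here is a rough sketch of what the off-policy setup looks like mechanically: the logged trajectories are flattened into transition tuples and fed to a fitted Q iteration, with a generic regressor standing in for Q. The trajectory format, the featurization `phi` (for example, state features concatenated with a one-hot action), and the choice of a random forest are all illustrative assumptions, not anything specified in the lecture; terminal handling is kept deliberately simple.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def to_transitions(trajectories):
    """Flatten logged trajectories [(s, a, r), ...] into (s, a, r, s_next, terminal) tuples."""
    out = []
    for traj in trajectories:
        for t, (s, a, r) in enumerate(traj):
            terminal = (t == len(traj) - 1)
            s_next = traj[t + 1][0] if not terminal else s   # placeholder; masked out below
            out.append((s, a, r, s_next, terminal))
    return out

def fitted_q_iteration(transitions, actions, phi, gamma=0.99, n_iters=20):
    """Fitted Q iteration: repeatedly regress Q onto bootstrapped targets."""
    S, A, R, S_next, done = map(np.array, zip(*transitions))
    q_model = None
    for _ in range(n_iters):
        if q_model is None:
            targets = R.astype(float)        # first pass: immediate rewards only
        else:
            # Max over actions of the frozen Q estimate at the next state
            next_q = np.column_stack([
                q_model.predict(phi(S_next, np.full(len(S_next), a))) for a in actions])
            targets = R + gamma * (1.0 - done) * next_q.max(axis=1)
        q_model = RandomForestRegressor(n_estimators=100).fit(phi(S, A), targets)
    return q_model
```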
And you mentioned overlap or positivity as well. And in fact, we need to make the same assumptions, but even stronger ones, for this to be possible. These are sufficient conditions. So, under certain circumstances, you don't need them. I should say, these are fairly general assumptions that are still strict-- that's how I should put it. So last time, we looked at something called strong ignorability. I realize the text is pretty small in here. Can you see in the back? Is that OK? OK, great. So strong ignorability said that the potential outcomes-- Y0 and Y1-- are conditionally independent of the treatment, t, given the set of variables, x, or the variable, x. And that's saying that it doesn't matter if we know what treatment was given. We can figure out just based on x what would happen under either treatment arm-- whether we should treat this patient with t equals 0 or t equals 1. We had an idea of-- or an assumption of-- overlap, which says that any treatment could be observed in any state or any context, x. That's what that means. And that is only to ensure that we can estimate at least a conditional average treatment effect at x. And if we want to estimate the average treatment effect in a population, we would need to have that for every x in that population. So what happens in the sequential case is that we need even stronger assumptions. There's some notation I haven't introduced here and I apologize for that. But there's a bar here over these Ss and As-- I don't know if you can see it. That usually indicates in this literature that you're looking at the sequence up to the index here. So all the states up until t have been observed, and all the actions up until t minus 1. So in order for the best policy to be identifiable-- or the value of a policy to be identifiable-- we need this strong condition. So the return of a policy is independent of the current action, given everything that happened in the past. This is weaker than the Markov assumption, to be clear, because there, we said that anything that happens in the future is conditionally independent, given the current state. So this is weaker, because we now just need to observe something in the history. We need to observe all confounders in the history, in this instance. We don't need to summarize them in S. And we'll get back to this in the next slide. Positivity is the really difficult one, though, because what we're saying is that at any point in the trajectory, any action should be possible in order for us to estimate the value of any possible policy. And we know that that's not going to be true in practice. We're not going to consider every possible action at every possible point in the health care setting. There's just no way. So what that tells us is that we can't estimate the value of every possible policy. We can only estimate the value of policies that are consistent with the support that we do have. If we never see action 4 at time 3, there's no way we can learn about a policy that does that-- that takes action 4 at time 3. That's what I'm trying to say. So in some sense, this is stronger, just because of how sequential settings work. It's more about the application domain than anything, I would say. In the next set of slides, we'll focus on sequential randomization or sequential ignorability, as it's sometimes called. And tomorrow, we'll talk a little bit about the statistics involved in or resulting from the positivity assumption, and things like importance weighting, et cetera. Did I say tomorrow? I meant Thursday.
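One common way to write the two conditions described here, using the bar notation for observed histories, is the following; the exact form varies a bit across the literature.

```latex
% Sequential ignorability (sequential randomization), with \bar{S}_t = (S_0, \dots, S_t)
% and \bar{A}_{t-1} = (A_0, \dots, A_{t-1}):
G_t(\pi) \;\perp\; A_t \;\big|\; \bar{S}_t,\, \bar{A}_{t-1} \qquad \text{for all } t.

% Sequential positivity (overlap): every action the target policy might take has
% nonzero probability under the behavior policy mu, at every reachable history:
p_{\mu}\big(A_t = a \,\big|\, \bar{S}_t,\, \bar{A}_{t-1}\big) > 0
\quad \text{whenever } \pi\big(a \,\big|\, \bar{S}_t,\, \bar{A}_{t-1}\big) > 0.
```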
So last recap on the potential outcome story. This is a slide-- I'm not sure if he showed this one, but it's one that we used in a lot of talks. And it, again, just serves to illustrate the idea of a one-timestep decision. So we have here, Anna. A patient comes in. She has high blood sugar and some other properties. And we're debating whether to give her medication A or B. And to do that, we want to figure out what would be her blood sugar under these different choices a few months down the line? So I'm just using this here to introduce you to the patient, Anna. And we're going to talk about Anna a little bit more. So treating Anna once, we can represent as this causal graph that you've seen a lot of times now. We had some treatment, A, we had some state, S, and some outcome, R. We want to figure out the effect of this A on the outcome, R. Ignorability in this case just says that the potential outcomes under each action, A, is conditionally independent of A, given S. And so we know that ignorability and overlap is sufficient conditions for identification of this effect. But what happens now if we add another time point? OK, so in this case, if I have no extra arrows here-- I just have completely independent time points-- ignorability clearly still holds. There's no links going from A to R, there's no from S to R, et cetera. So ignorability is still fine. If Anna's health status in the future depends on the actions that I take now, here, then the situation is a little bit different. So this is now not in the completely independent actions that I make, but the actions here influence the state in the future. So we've seen this. This is a Markov decision process, as you've seen before. This is very likely in practice. Also, if Anna, for example, is diabetic, as we saw in the example that I mentioned, it's likely that she will remain so. This previous state will influence the future state. These things seem very reasonable, right? But now I'm trying to argue about the sequential ignorability assumption. How can we break that? How can we break ignorability when it comes to the sequential, say? If you have an action here-- so the outcome at a later point depends on an earlier choice. That might certainly be the case, because we could have a delayed effect of something. So if we measure, say, a lab value which could be in the right range or not, it could very well depend on medication we gave a long time ago. And it's also likely that the reward could depend on a state which is much earlier, depending on what we include in that state variable. We already have an example, I think, from the audience on that. So actually, ignorability should have a big red cross over it, because it doesn't hold there. And it's luckily on the next slide. Because there are even more errors that we can have, conceivably, in the medical setting. The example that we got from Pete before was, essentially, that if we've tried an action previously, we might not want to try it again. Or if we knew that something worked previously, we might want to do it again. So if we had a good reward here, we might want to do the same thing twice. And this arrow here says that if we know that a patient had a symptom earlier on, we might want to base our actions on it later. We've known that the patient had an allergic reaction at some point, for example. We might not want to use that medication at a later time. AUDIENCE: But you can always put everything in a state. FREDRIK D. JOHANSSON: Exactly. 
So this depends on what you put in the state. This is an example where I should introduce these arrows to show that, if I haven't got that information here, then I introduce this dependence. So if I don't have the information about allergic reaction or some symptom before in here, then I have to do something else. So exactly that is the point. If I can summarize history in some good way-- if I can compress all of these four variables into some variable age standard for the history, then I have ignorability, with respect to that history, H. This is your solution and it introduces a new problem, because history is usually a really large thing. We know that history grows with time, obviously. But usually we don't observe patients for the same number of time points. So how do we represent that for a program? How do we represent that to a learning algorithm? That's something we have to deal with. You can pad history with 0s, et cetera, but if you keep every timestep and repeat every variable in every timestep, you get a very large object. That might introduce statistical problems, because now you have much more variance if you have new variables, et cetera. So one thing that people do is that they look some amount of time backwards-- so instead of just looking at one timestep back, you now look at a length k window. And your state essentially grows by a factor, k. And another alternative is to try and learn a summary function. Learn some function that is relevant for predicting the outcome that takes all of the history into account, but has a smaller representation than just t times the variables that you have. But this is something that needs to happen, usually. Most health care data, in practice-- you have to make choices about this. I just want to stress that that's something you really can't avoid. The last point I want to make is that unobserved confounding is also a problem that is not avoidable just due to summarizing history. We can introduce new confounding. That is a problem, if we don't summarize history well. But we can also have unobserved confounders, just like we can in the one-step setting. One example is if we have an unobserved confounded in the same way as we did before. It impacts both the action at time 1 and the reward at time 1. But of course, now we're in the sequential setting. The confounding structure could be much more complicated. We could have a confounder that influences an early action and a late reward. So it might be a little harder for us to characterize what is the set of potential confounders? So I just wanted to point that out and to reinforce that this is only harder than the one-step setting. So we're wrapping up now. I just want to end on a point about the games that we looked at before. One of the big reasons that these algorithms were so successful in playing games was that we have full observability in these settings. We know everything from the game board itself-- when it comes to Go, at least. We can debate that when it comes to the video games. But in Go, we have complete observability of the board. Everything we need to know for an optimal decision is there at any time point. Not only can we observe it through the history, but in the case of Go, you don't even need to look at history. We certainly have Markov dynamics with respect to the board itself. You don't ever have to remember what was a move earlier on, unless you want to read into your opponent, I suppose. But that's a game theoretic notion we're not going to get into here. 
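Going back to the history-summarization point for a moment, the length-k window idea can be sketched as below. The per-patient observation matrix and the zero-padding convention are assumptions about how the data is stored; a learned summary function would replace the simple concatenation here.

```python
import numpy as np

def history_window(observations, t, k):
    """Represent the 'state' at time t as the last k observation vectors, zero-padded
    at the start of the stay. `observations` is assumed to be a (T, d) array for one patient."""
    d = observations.shape[1]
    window = observations[max(0, t - k + 1): t + 1]        # up to k most recent rows
    pad = np.zeros((k - len(window), d))                   # zero-pad the early time points
    return np.concatenate([pad, window], axis=0).reshape(-1)   # flatten to length k * d
```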
But more importantly, we can explore the dynamics of these systems almost limitlessly, just by simulation and self-play. And that's true regardless if you have full observability or not-- like in StarCraft, you might not have full observability. But you can try your things out endlessly. And contrast that with having, I don't know, 700 patients with rheumatoid arthritis or something like that. Those are the samples you have. You're not going to get new ones. So that is an amazing obstacle for us to overcome if we want to do this in a good way. The current algorithms are really inefficient with the data that they use. And that's why this limitless exploration or simulation has been so important for these games. And that's also why the games are the success stories of this. A last point is that typically for these settings that I put here, we have no noise, essentially. We get perfect observations of actions and states and outcomes and everything like that. And that's really true in any real-world application. All right. I'm going to wrap up. Tomorrow-- nope, Thursday, David is going to talk about more explicitly if we want to do this properly in health care, what's going to happen? We're going to have a great discussion, I'm sure, as well. So don't mind the slide. It's Thursday. All right. Thanks a lot. [APPLAUSE]
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019
1_What_Makes_Healthcare_Unique.txt
[CLICK] DAVID SONTAG: So welcome to spring 2019 Machine Learning for Healthcare. My name is David Sontag. I'm a professor in computer science. Also I'm in the Institute for Medical Engineering and Science. My co-instructor today will be Pete Szolovits, who I'll introduce more towards the end of today's lecture, along with the rest of the course staff. So the problem. The problem is that healthcare in the United States costs too much. Currently, we're spending $3 trillion a year, and we're not even necessarily doing a very good job. Patients who have chronic disease often find that these chronic diseases are diagnosed late. They're often not managed well. And that happens even in a country with some of the world's best clinicians. Moreover, medical errors are happening all of the time, errors that if caught, would have prevented needless deaths, needless worsening of disease, and more. And healthcare impacts all of us. So I imagine that almost everyone here in this room have had a family member, a loved one, a dear friend, or even themselves suffer from a health condition which impacts your quality of life, which has affected your work, your studies, and possibly has led to a needless death. And so the question that we're asking in this course today is how can we use machine learning, artificial intelligence, as one piece of a bigger puzzle to try to transform healthcare. So all of us have some personal stories. I myself have personal stories that have led me to be interested in this area. My grandfather, who had Alzheimer's disease, was diagnosed quite late in his Alzheimer's disease. There aren't good treatments today for Alzheimer's, and so it's not that I would have expected the outcome to be different. But had he been diagnosed earlier, our family would have recognized that many of the erratic things that he was doing towards the later years of his life were due to this disease and not due to some other reason. My mother, who had multiple myeloma, a blood cancer, who was diagnosed five years ago now, never started treatment for her cancer before she died one year ago. Now, why did she die? Well, it was believed that her cancer was still in its very early stages. Her blood markers that were used to track the progress of the cancer put her in a low risk category. She didn't yet have visible complications of the disease that would, according to today's standard guidelines, require treatment to be initiated. And as a result, the belief was the best strategy was to wait and see. But unbeknownst to her and to my family, her blood cancer, which was caused by light chains which were accumulating, ended up leading to organ damage. In this case, the light chains were accumulating in her heart, and she died of heart failure. Had we recognized that her disease was further along, she might have initiated treatment. And there are now over 20 treatments for multiple myeloma which are believed to have life-lengthening effect. And I can give you four or five other stories from my own personal family and my friends, where similar things have happened. And I have no doubt that all of you have as well. So what can we do about it is the question that we want to try to understand in today's course. And don't get me wrong. Machine learning, artificial intelligence, will only be one piece of the puzzle. There's so many other systematic changes that we're going to have to make into our healthcare system. But let's try to understand what those AI elements might be. 
So let's start in today's lecture by giving a bit of a background on artificial intelligence and machine learning in healthcare. And I'll tell you why I think the time is right now, in 2019, to really start to make a big dent at this problem. And then I'll tell you about-- I'll give you a few examples of how machine learning is likely to transform healthcare over the next decade. And of course we're just guessing, but this is really guided by the latest and greatest in research, a lot of it happening here at MIT. And then we'll close today's lecture with an overview of what's different, what's unique about machine learning healthcare. All of you have taken some machine learning course in the past, and so you know the basics of supervised prediction. Many of you have studied things like clustering. And you're certainly paying attention to the news, where you see news every single day about Google, Facebook, Microsoft's latest advances in speech recognition, computer vision, and so on. So what's really different about trying to apply these techniques in the healthcare domain? The answer is that there's a huge amount of difference, and there are a lot of subtleties to doing machine learning right here. And we'll talk about that throughout the whole entire semester. So to begin, this isn't a new field. Artificial intelligence in medicine goes back to the 1970s or sometime even in the '60s. One of the earliest examples of trying to use artificial intelligence for diagnosis was this MYCIN system developed at Stanford, where the goal was try to identify bacteria that might cause infection and then to try to guide what would be the appropriate therapy for that bacteria. Now, it was found that this algorithm, this machine learning, this simple AI algorithm was able to propose a good therapy in 69% of cases, which at the time was better than the best or very good infectious disease experts. Now, it also had some other elements of it which were quite interesting. So those of you who've been following the latest in chat bots might find this a little prescient. So there's a dialog interface where the computer says, I am ready. The computer clinician responds, this is a 26-year-old male patient. The computer tests its understanding of what's going on. It says, the age of the patient is 26. The sex of the patient is male. The clinician writes, five days ago, he had respiratory tract infections. The computer asks what is his name. Joe. The computer responds, my understanding is the name of the patient is Joe. Respiratory tract is one of the symptoms the patient had. Then the clinician writes, a couple of days before the admission, he had malaise, which is general tiredness. And the computer responds, please give me a date of admission. The clinician responds, March 12, 1979, and the computer again confirms that it's understood appropriately. And this is the preface to the later diagnostic stages. So the ideas of how AI can really impact medicine have been around a long time. Yet these algorithms which have been shown to be very effective, even going back to the 1970s, didn't translate into clinical care. A second example, oh so equally impressive in its nature, was work from the 1980s in Pittsburgh, developing what is known as the INTERNIST-1 or Quick Medical Reference system. This was now used not for infectious diseases, but for primary care. 
Here one might ask, how can we try to do diagnosis at a much larger scale, where patients might come in with one of hundreds of different diseases and could report thousands of different symptoms, each one giving you some noisy view into what may be going on with a patient's health. And at a high level, they modeled this as something like a Bayesian network. It wasn't strictly a Bayesian network. It was a bit more heuristic at the time. It was later developed to be so. But at a high level, there were a number of latent variables or hidden variables corresponding to different diseases the patient might have, like flu or pneumonia or diabetes. And then there were a number of variables on the very bottom, which were symptoms. These are all binary, so the diseases are either on or off, and the symptoms are either present or not. And these symptoms can include things like fatigue or cough. They could also be things that result from laboratory test results, like a high value of hemoglobin A1C. And this algorithm would then take this model, take the symptoms that were reported for the patient, and try to do reasoning over what might actually be going on with that patient, to figure out what the differential diagnosis is. There are over 40,000 edges connecting diseases to symptoms that those diseases were believed to have caused. And this knowledge base, which was probabilistic in nature, because it captured the idea that some symptoms would only occur with some probability for a disease, took over 15 person-years to elicit from a large medical team. And so it was a lot of effort. And even going forward to today's time, there have been few similar efforts at a scale as impressive as this one. But again, what happened? These algorithms are not being used anywhere today in our clinical workflows. And the challenges that have prevented them from being used today are numerous. But I used a word in my explanation which should really hint at it. I used the word clinical workflow. And this, I think, is one of the biggest challenges. Which is that the algorithms were designed to solve narrow problems. They weren't necessarily even the most important problems, because clinicians generally do a very good job at diagnosis. And there was a big gap between the input that they expected and the current clinical workflow. So imagine that you have now a mainframe computer. I mean, this was the '80s. And you have a clinician who has to talk to the patient and get some information. Go back to the computer. Type in, as structured data, the symptoms that the patient's reporting. Get information back from the computer and iterate. As you can imagine, that takes a lot of time, and time is money. And unfortunately, it prevents it from being used. Moreover, despite the fact that it took a lot of effort to use, since it sat outside of existing clinical workflows, these systems were also really difficult to maintain. So I talked about how this was elicited from 15 person-years of work. There was no machine learning here. It was called artificial intelligence because one tries to reason in an artificial way, like humans might. But there was no learning from data in this. And so what that means is if you then go to a new place, let's say this was developed in Pittsburgh, and now you go to Los Angeles or to Beijing or to London, and you want to apply the same algorithms, you suddenly have to re-derive parts of this model from scratch.
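To make the disease-symptom network just described a bit more concrete, here is a minimal noisy-or style sketch. The diseases, priors, and edge probabilities below are made-up toy numbers; the real INTERNIST-1/QMR knowledge base had hundreds of diseases and tens of thousands of hand-elicited edges, and exact inference in it is far harder than the brute-force enumeration used here.

```python
import numpy as np

# Hand-specified toy knowledge base (purely illustrative numbers).
diseases = ["flu", "pneumonia", "diabetes"]
priors = np.array([0.05, 0.01, 0.08])        # P(disease present)
symptoms = ["fatigue", "cough", "high_a1c"]
# edge_prob[i, j] = P(disease i causes symptom j | disease i present)
edge_prob = np.array([[0.6, 0.7, 0.0],       # flu
                      [0.5, 0.9, 0.0],       # pneumonia
                      [0.4, 0.0, 0.8]])      # diabetes
leak = 0.01                                  # symptom appears with no modeled cause

def symptom_prob(d):
    """Noisy-or: P(each symptom present | binary disease vector d)."""
    d = np.asarray(d, dtype=float)
    no_cause = np.prod(1.0 - edge_prob * d[:, None], axis=0)
    return 1.0 - (1.0 - leak) * no_cause

def disease_posterior(observed_symptoms):
    """Brute-force P(disease | symptoms), feasible only for this tiny model."""
    obs = np.asarray(observed_symptoms, dtype=float)
    post, total = np.zeros(len(diseases)), 0.0
    for z in range(2 ** len(diseases)):
        d = np.array([(z >> i) & 1 for i in range(len(diseases))], dtype=float)
        p_d = np.prod(np.where(d == 1, priors, 1 - priors))
        p_s = symptom_prob(d)
        weight = p_d * np.prod(np.where(obs == 1, p_s, 1 - p_s))
        post, total = post + weight * d, total + weight
    return post / total

# Patient reports fatigue and cough, but no elevated A1c.
print(dict(zip(diseases, disease_posterior([1, 1, 0]).round(3))))
```

Every number in this toy model has to be specified by hand, which is exactly the maintenance burden described next: even the disease priors change when you move the system to a new hospital or country.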
For example, the prior probability of the diseases are going to be very different, depending on where you are in the world. Now, you might want to go to a different domain outside of primary care. And again, one has to spend a huge amount of effort to derive such models. As new medicine discoveries are made, one has to, again, update these models. And this has been a huge blocker to deployment. I'll move forward to one more example now, also from the 1980s. And this is now for a different type of question. Not one of how do you do diagnosis, but how do you actually do discovery. So this is an example from Stanford. And it was a really interesting case where one took a data-driven approach to try to make medical discoveries. There was a database of what's called a disease registry from patients with rheumatoid arthritis, which is a chronic disease. It's an autoimmune condition, where for each patient, over a series of different visits, one would record, for example, here it shows this is visit number one. The date was January 17, 1979. The knee pain, patient's knee pain, was reported as severe. Their fatigue was moderate. Temperatures was 38.5 Celsius. The diagnosis for this patient was actually a different autoimmune condition called systemic lupus. We have some laboratory test values for their creatinine and blood nitrogen, and we know something about their medication. In this case, they were on prednisone, a steroid. And one has this data at every point in time. This almost certainly was recorded on paper and then later, these were collected into a computer format. But then it provides the possibility to ask questions and make new discoveries. So for example, in this work, there was a discovery module which would make causal hypotheses about what aspects might cause other aspects. It would then do some basic statistics to check about the statistical validity of those causal hypotheses. It would then present those to a domain expert to try to check off does this make sense or not. For those that are accepted, it then uses that knowledge that was just learned to iterate, to try to make new discoveries. And one of the main findings from this paper was that prednisone elevates cholesterol. That was published in the Annals of Internal Medicine in 1986. So these are all very early examples of data-driven approaches to improve both medicine and healthcare. Now flip forward to the 1990s. Neural networks started to become popular. Not quite the neural networks that we're familiar with in today's day and age, but nonetheless, they shared very much of the same elements. So just in 1990, there were 88 published studies using neural networks for various different medical problems. One of the things that really differentiated those approaches to what we see in today's landscape is that the number of features were very small. So usually features which were similar to what I showed you in the previous slide. So structured data that was manually curated for the purpose of using in machine learning. And there was nothing automatic about this. So one would have to have assistants gather the data. And because of that, typically, there were very small number of samples for each study that were used in machine learning. Now, these models, although very effective, and I'll show you some examples in the next slide, also suffered from the same challenges I mentioned earlier. They didn't fit well into clinical workflows. It was hard to get enough training data because of the manual efforts involved. 
And what the community found, even in the early 1990s, is that these algorithms did not generalize well. If you went through this huge effort of collecting training data, learning your model, and validating your model at one institution, and you then take it to a different one, it just works much worse. OK? And that really prevented translation of these technologies into clinical practice. So what were these different domains that were studied? Well, here are a few examples. It's a bit small, so I'll read it out to you. It was studied in breast cancer, myocardial infarction, which is heart attack, lower back pain, used to predict psychiatric length of stay for inpatient, skin tumors, head injuries, prediction of dementia, understanding progression of diabetes, and a variety of other problems, which again are of the nature that we see about, we read about in the news today in modern attempts to apply machine learning in healthcare. The number of training examples, as mentioned, were very few, ranging from 39 to, in some cases, 3,000. Those are individuals, humans. And the networks, the neural networks, they weren't completely shallow, but they weren't very deep either. So these were the architectures they might be 60 neurons, then 7, and then 6, for example, in terms of each of the layers of the neural network. By the way, that sort of makes, sense given the type of data that was fed into it. So none of this is new, in terms of the goals. So what's changed? Why do I think that despite the fact that we've had what could arguably be called a failure for the last 30 or 40 years, that we might actually have some chance of succeeding now. And the big differentiator, what I'll call now the opportunity, is data. So whereas in the past, much of the work in artificial intelligence in medicine was not data driven. It was based on trying to elicit as much domain knowledge as one can from clinical domain experts. In some cases, gathering a little bit of data. Today, we have an amazing opportunity because of the prevalence of electronic medical records, both in the United States and elsewhere. Now, here the United States, for example, the story wasn't that way, even back in 2008, when the adoption of electronic medical records was under 10% across the US. But then there wasn't an economic disaster in the US. And as part of the economic stimulus package, which President Obama initiated, there was something like $30 billion allocated to hospitals purchasing electronic medical records. And this is already a first example that we see of policy being really influential to create the-- to open the stage to the types of work that we're going to be able to do in this course today. So money was then made available as incentives for hospitals to purchase electronic medical records. And as a result, the adoption increased dramatically. This is a really old number from 2015 of 84% of hospitals, and now today, it's actually much larger. So data is being collected in an electronic form, and that presents an opportunity to try to do research on it. It presents an opportunity to do machine learning on it, and it presents an opportunity to start to deploy machine learning algorithms, where rather than having to manually input data for a patient, we can just draw it automatically from data that's already available in electronic form. And so there are a number of data sets that have been made available for research and development in this space. 
Here at MIT, there has been a major effort pioneered by Professor Roger Mark, in the ECS and Institute for Medical Engineering department, to create what's known as the PhysioNet or Mimic databases. Mimic contains data from over 40,000 patients and intensive care units. And it's very rich data. It contains basically everything that's being collected in the intensive care unit. Everything from notes that are written by both nurses and by attendings, to vital signs that are being collected by monitors that are attached to patients, collecting their blood pressure, oxygen saturation, heart rate, and so on, to imaging data, to blood test results as they're made available, and outcomes. And of course also medications that are being prescribed as it goes. And so this is a wealth of data that now one could use to try to study, at least study in a very narrow setting of an intensive care unit, how machine learning could be used in that location. And I don't want to under-emphasize the importance of this database, both through this course and through the broader field. This is really the only publicly available electronic medical record data set of any reasonable size in the whole world, and it was created here at MIT. And we'll be using it extensively in our homework assignments as a result. There are other data sets that aren't publicly available, but which have been gathered by industry. And one prime example is the Truven Market Scan database, which was created by a company called Truven, which was later acquired by IBM, as I'll tell you about more in a few minutes. Now, this data-- and there are many competing companies that have similar data sets-- is created not from electronic medical records, but rather from-- typically, it's created from insurance claims. So every time you go to see a doctor, there's usually some record of that that is associated to the billing of that visit. So your provider will send a bill to your health insurance saying basically what happened, so what procedures were performed, providing diagnoses that are used to justify the cost of those procedures and tests. And from that data, you now get a holistic view, a longitudinal view, of what's happened to that patient's health. And then there is a lot of money that passes behind the scenes between insurers and hospitals to corporate companies, such as Truven, which collect that data and then resell it for research purposes. And one of the biggest purchasers of data like this is the pharmaceutical industry. So this data, unfortunately, is not usually publicly available, and that's actually a big problem, both in the US and elsewhere. It's a big obstacle to research in this field, that only people who have millions of dollars to pay for it really get access to it, and it's something that I'm going to return to throughout the semester. It's something where I think policy can make a big difference. But luckily, here at MIT, the story's going to be a bit different. So thanks to the MIT IBM Watson AI Lab, MIT has a close relationship with IBM. And fingers crossed, it looks like we'll get access to this database for our homework and projects for this semester. Now, there are a lot of other initiatives that are creating large data sets. A really important example here in the US is President Obama's Precision Medicine Initiative, which has since been renamed to the All of Us Initiative. 
And this initiative is creating a data set of one million patients, drawn in a representative manner, from across the United States, to capture patients both poor and rich, patients who are healthy and have chronic disease, with the goal of trying to create a research database where all of us and other people, both inside and outside the US, could do research to make medical discoveries. And this will include data such as data from a baseline health exam, where the typical vitals are taken, blood is drawn. It'll combine data of the previous two types I've mentioned, including both data from electronic medical records and health insurance claims. And a lot of this work is also happening here in Boston. So right across the street at the Broad Institute, there is a team which is creating all of the software infrastructure to accommodate this data. And there are a large number of recruitment sites here in the broader Boston area where patients or any one of you, really, could go and volunteer to be part of this study. I just got a letter in the mail last week inviting me to go, and I was really excited to see that. So all sorts of different data is being created as a result of these trends that I've been mentioning. And it ranges from unstructured data, like clinical notes, to imaging, lab tests, vital signs. Nowadays, what we used to think about just as clinical data now has started to really come to have a very tight tie to what we think about as biological data. So data from genomics and proteomics is starting to play a major role in both clinical research and clinical practice. Of course, not everything that we traditionally think about healthcare data-- there are also some non-traditional views on health. So for example, social media is an interesting way of thinking through both psychiatric disorders, where many of us will post things on Facebook and other places about our mental health, which give a lens on our mental health. Your phone, which is tracking your activity, will give us a view on how active we are. It might help us diagnose early the variety of conditions as well that I'll mention later. So we have-- to this whole theme right now is about what's changed since the previous approaches at AI medicine. I've just talked about data, but data alone is not nearly enough. The other major change is that there has been decades' worth of work on standardizing health data. So for example, when I mentioned to you that when you go to a doctor's office, and they send a bill, that bill is associated with a diagnosis. And that diagnosis is coded in a system called ICD-9 or ICD-10, which is a standardized system where, for many, not all, but many diseases, there is a corresponding code associated with it. ICD-10, which was recently rolled out nationwide about a year ago is much more detailed than the previous coding system, includes some interesting categories. For example, bitten by a turtle has a code for it. Bitten by sea lion, struck by [INAUDIBLE].. So it's starting to get really detailed here, which has its benefits and its disadvantages when it comes to research using that data. But certainly, we can do more with detailed data than we could with less detailed data. Laboratory test results are standardized using a system called LOINC, here in the United States. Every lab test order has an associated code for it. I just want to point out briefly that the values associated with those lab tests are less standardized. Pharmacy, national drug codes should be very familiar to you. 
If you take any medication that you've been prescribed, and you look carefully, you'll see a number on it, and you see 0015347911, that number is unique to that medication. In fact, it's even unique to the brand of that medication. And there's an associated taxonomy with it. And so one can really understand in a very structured way what medications a patient is on and how those medications relate to one another. A lot of medical data is found not in the structured form, but in free text, in notes written by doctors. And these notes have, often, lots of mentions of symptoms and conditions in them. And one can try to standardize those by mapping them to what's called the Unified Medical Language System, which is an ontology with millions of different medical concepts in it. So I'm not going to go too much more into these. They'll be the subject of much discussion in this semester, but particularly in the next two lectures by Pete. But I want to talk very briefly about what you can do once you have a standardized vocabulary. So one thing you can do is you could build APIs, or Application Programming Interfaces, for sending that data from place to place. And FHIR, F-H-I-R, is a new standard, which has widespread adoption now here in the United States for hospitals to provide data both for downstream clinical purposes but also directly to patients. And this standard uses many of the vocabularies I mentioned to you in the previous slides to encode diagnoses, medications, allergies, problems, and even financial aspects that are relevant to the care of this patient. And for those of you who have an Apple phone, for example, if you open up Apple Health Records, it makes use of this standard to receive data from over 50 different hospitals. And you should expect to see many competitors to them in the future, because of the fact that it's now an open standard. Now other types of data, like the health insurance claims I mentioned earlier, are often encoded in a slightly different data model. One which my lab works quite a bit with is called OMOP, and it's being maintained by a nonprofit organization called Observational Health Data Sciences and Informatics, or OHDSI, pronounced "Odyssey." And this common data model gives a standard way of taking data from an institution, which might have its own intricacies, and really mapping it to this common language, so that if you write a machine learning algorithm once, and that machine learning algorithm reads in data in this format, you can then apply it somewhere else very easily. And the importance of these standards for translating what we're doing in this class into clinical practice really can't be overstated. And so we'll be returning to these things throughout the semester. So we've talked about data. We've talked about standards. And the third piece is breakthroughs in machine learning. And this should be no surprise to anyone in this room. All right, we've been seeing time and time again, over the last five years, benchmark after benchmark being improved upon and human performance beaten by state-of-the-art machine learning algorithms. Here I'm just showing you a figure that I imagine many of you have seen, on the error rates on the ImageNet competition for object recognition. The error rates in 2011 were 25%. And even just a few years ago, they had already dropped below human level, to under 5%. Now, the changes that have led to those advances in object recognition are going to have some parallels in healthcare, but only up to some point.
For example, there was big data, large training sets that were critical for this. There were algorithmic advances, in particular convolutional neural networks, that played a huge role. And there was open source software that was created, things like TensorFlow and PyTorch, which allow a researcher or industry worker in one place to very, very quickly build upon successes from other researchers in other places and then release the code, so that one can really accelerate the rate of progress in this field. Now, in terms of those algorithmic advances that have made a big difference, the ones that I would really like to point out because of their relevance to this course are learning with high dimensional features. So this was really the advances in the early 2000s, for example. And support vector machines and learning with L1 regularization to induce sparsity. And then more recently, in the last six years, stochastic gradient descent-like methods for very rapidly solving these convex optimization problems, that will play a huge role in what we'll be doing in this course. In the last few years, there has been a huge amount of progress in unsupervised and semi-supervised learning algorithms. And as I'll tell you about much later, one of the major challenges in healthcare is that despite the fact that we have a large amount of data, we have very little labeled data. And so these semi-supervised learning algorithms are going to play a major role in being able to really take advantage of the data that we do have. And then of course the modern deep learning algorithms. Convolutional neural networks, recurrent neural networks, and ways of trying to train them. So those played a major role in the advances in the tech industry. And to some extent, they'll play a major role in healthcare as well. And I'll point out a few examples of that in the rest of today's lecture. So all of this coming together, the data availability, the advances in other fields of machine learning, and the huge amount of potential financial gain in healthcare and the potential social impact it could have has not gone unnoticed. And there's a huge amount of industry interest in this field. These are just some examples from names I think many of you are familiar with, like DeepMind Health and IBM Watson, to startup companies like Bay Labs and PathAI, which is here in Boston, all of which are really trying to build the next generation of tools for healthcare, now based on machine learning algorithms. There's been billions of dollars of funding in the recent quarters towards digital health efforts, with hundreds of different startups that are focused specifically on using artificial intelligence in healthcare. And the recognition that data is so essential to this process has led to an all-out purchasing effort to try to get as much of that data as you can. So for example, IBM purchased a company called Merge, which made medical imaging software and thus had accumulated a large amount of medical imaging data, for $1 billion in 2015. They purchased Truven for $2.6 billion in 2016. Flatiron Health, which is a company in New York City focused on oncology, was purchased for almost $2 billion by Roche, a pharmaceutical company, just last year. And there's several more of these industry moves. Again, I'm just trying to get you thinking about what it really takes in this field, and getting access to data is actually a really important one, obviously.
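As a small, hedged illustration of the standards and data-access points above, here is a sketch of pulling one patient's coded diagnoses and medication orders from a FHIR server over its REST API. The base URL and patient ID are hypothetical placeholders, and real servers differ in which elements they populate and how they require you to authenticate (for example, via SMART on FHIR).

```python
import requests

BASE = "https://fhir.example.org/r4"        # hypothetical FHIR R4 endpoint
PATIENT_ID = "12345"                        # hypothetical patient identifier
HEADERS = {"Accept": "application/fhir+json"}

def search(resource_type, **params):
    """Run a FHIR search and return the resulting JSON Bundle."""
    r = requests.get(f"{BASE}/{resource_type}", params=params,
                     headers=HEADERS, timeout=30)
    r.raise_for_status()
    return r.json()

# Diagnoses (Condition resources), coded with standard vocabularies.
for entry in search("Condition", patient=PATIENT_ID).get("entry", []):
    coding = entry["resource"].get("code", {}).get("coding", [{}])[0]
    print("condition:", coding.get("code"), coding.get("display"))

# Medication orders (MedicationRequest resources), typically coded with RxNorm/NDC.
for entry in search("MedicationRequest", patient=PATIENT_ID).get("entry", []):
    med = entry["resource"].get("medicationCodeableConcept", {})
    print("medication:", med.get("text"))
```

Because the resource and field names are standardized, code like this can, in principle, be pointed at a different hospital's server without rewriting it, which is the portability argument made above for standards like FHIR and OMOP.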
So let's now move on to some examples of how machine learning will transform healthcare. To begin with, I want to really lay out the landscape here and define some language. There are a number of different players when it comes to the healthcare space. They're us, patients, consumers. They are the doctors that we go to, which you could think about as providers. But of course they're not just doctors, they're also nurses and community health workers and so on. There are payers, which provide the-- where there is-- these edges are really showing relationships between the different players, so our consumers, we often, either from our job or directly from us, we will pay premiums for a health insurance company, to a health insurance company, and then that health insurance company is responsible for payments to the providers to provide services to us patients. Now, here in the US, the payers are both commercial and governmental. So many of you will know companies like Cigna or Aetna or Blue Cross, which are commercial providers of healthcare, of health insurance, but there are also governmental ones. For example, the Veterans Health Administration runs one of the biggest health organizations in the United States, servicing our veterans from the department, people who have retired from the Department of Defense, which has the one of the second biggest health systems, the Defense Health Agency. And that is an organization where-- both of those organizations, where both the payer and the provider are really one. The Center for Medicare and Medicaid Services here in the US provides health insurance for all retirees in the United States. And also Medicaid, which is run at a state level, provides health insurance to a variety of individuals who would otherwise have difficulty purchasing or obtaining their own health insurance. And those are examples of state-run or federally run health insurance agencies. And then internationally, sometimes the lines are even more blurred. So of course in places like the United Kingdom, where you have a government-run health system, the National Health Service, you have the same system both paying for and providing the services. Now, the reason why this is really important for us to think about already in lecture one is because what's so essential about this field is figuring out where the knob is that you can turn to try to improve healthcare. Where can we deploy machine learning algorithms within healthcare? So some algorithms are going to be better run by providers, others are going to be better run by payers, others are going to be directly provided to patients, and some all of the above. We also have to think about industrial questions, in terms of what is it going to take to develop a new product. Who will pay for this product? Which is again an important question when it comes to deploying algorithms here. So I'll run through a couple of very high-level examples driven from my own work, focused on the provider space, and then I'll bump up to talk a bit more broadly. So for the last seven or eight years, I've been doing a lot of work in collaboration with Beth Israel Deaconess Medical Center, across the river, with their emergency department. And the emergency department is a really interesting clinical setting, because you have a very short period of time from when a patient comes into the hospital to diagnose what's going on with them, to initiate therapy, and then to decide what to do next. Do you keep them in the hospital? Do you send them home? 
If you-- for each one of those things, what should the most immediate actions be? And at least here in the US, we're always understaffed. So we've got limited resources and very critical decisions to make. So this is one example of a setting where algorithms that are running behind the scenes could potentially really help with some of the challenges I mentioned earlier. So for example, one could imagine an algorithm which builds on techniques like what I mentioned to you for an internist one or quick medical reference, try to reason about what's going on with the patient based on the data that's available for the patient, the symptoms. But the modern view of this shouldn't, of course, use binary indicators of each symptom, which have to be entered in manually, but rather all of these things should be automatically extracted from the electronic medical record or listed as necessary. And then if one could reason about what's going on with a patient, we wouldn't necessarily want to use it for a diagnosis, although in some cases, you might use it for an earlier diagnosis. But it could also be used for a number of other more subtle interventions, for example, better triage to figure out which patients need to be seen first. Early detection of adverse events or recognition that there might be some unusual actions which might actually be medical errors that you want to surface now and draw attention to. Now, you could also use this understanding of what's going on with a patient to change the way that clinicians interact with patient data. So for example, one can try to propagate best practices by surfacing clinical decision support, automatically triggering this clinical decision support for patients that you think it might be relevant for. And here's one example, where it says, the ED Dashboard, the Emergency Department Dashboard decision support algorithms have determined this patient may be eligible for the atria cellulitis pathway. Cellulitis is often caused by infections. Please choose from one of the options. Enroll in the pathway, decline-- and if you decline, you must include a comment for the reviewers. Now, if you clicked on enroll in the pathway, at that moment, machine learning disappears. Rather, there is a standardized process. It's an algorithm, but it's a deterministic algorithm, for how patients with cellulitis should be properly managed, diagnosed, and treated. That algorithm comes from best practices, comes from clinicians coming together, analyzing past data, understanding what would be good ways to treat patients of this type, and then formalizing that in a document. The challenge is that there might be hundreds or even thousands of these best practices. And in an academic medical center, where you have patients coming-- where you have medical students or residents who are very quickly rotating through the system and thus may not be familiar with which are the most appropriate clinical guidelines to use for any one patient in this institution. Or if you go to a rural site, where this academic nature of thinking through what the right clinical guidelines are is a little bit less of the mainstream, everyday activity, the question of which one to use when is very challenging. And so that's where the machine learning algorithms can come in. By reasoning about what's going on with a patient, you might have a good guess of what might be appropriate for this patient, and you use that to automatically surface the right clinical decisions for a trigger. 
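One simple way to operationalize the idea of automatically surfacing the right clinical decision support is to threshold a predicted probability for each pathway, with the threshold chosen on a held-out set to keep false triggers, and therefore alert fatigue, tolerable. The sketch below is purely illustrative: the model, the random stand-in features, and the threshold value are assumptions, not the actual BIDMC system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in training data: features extracted from the EHR (vitals, labs,
# note keywords, ...) and a label for whether the patient was ultimately
# managed on the cellulitis pathway. Random numbers are placeholders here.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 20))
y_train = rng.integers(0, 2, size=500)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Threshold tuned on a development set; 0.3 is just a placeholder value.
TRIGGER_THRESHOLD = 0.3

def maybe_trigger_pathway(x_patient):
    """Return a decision-support prompt if the predicted probability is high enough."""
    p = model.predict_proba(x_patient.reshape(1, -1))[0, 1]
    if p >= TRIGGER_THRESHOLD:
        return f"Patient may be eligible for the cellulitis pathway (p = {p:.2f})."
    return None

print(maybe_trigger_pathway(rng.normal(size=20)))
```

The clinician still makes the call, enrolling in the pathway or declining with a comment; the model only decides when the prompt is worth showing.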
Another example is by just trying to anticipate clinician needs. So for example, if you think that this patient might be coming in for a psychiatric condition, or maybe you recognize that the patient came in that triage and was complaining of chest pain, then there might be a psych order set, which includes laboratory test results that are relevant for psychiatric patients, or a chest pain order set, which includes both laboratory tests and interventions, like aspirin, that might be suggested. Now, these are also examples where these order sets are not created by machine learning algorithms. Although that's something we could discuss later in the semester. Rather, they're standardized. But the goal of the machine learning algorithm is just to figure out which ones to show when directly to the clinicians. I'm showing you these examples to try to point out that diagnosis isn't the whole story. Thinking through what are the more subtle interventions we can do with machine learning and AI and healthcare is going to be really important to having the impact that it could have. So other examples, now a bit more on the diagnosis style, are reducing the need for specialist consults. So you might have a patient come in, and it might be really quick to get the patient in front of an X-ray to do a chest X-ray, but then finding the radiologist to review that X-ray could take a lot of time. And in some places, radiologist consults could take days, depending on the urgency of the condition. So this is an area where data is quite standardized. In fact, MIT just released last week a data set of 300,000 chest x-rays with associated labels on them. And one could try to ask the question of could we build machine learning algorithms using the convolutional neural network type techniques that we've seen play a big role in object recognition to try to understand what's going on with this patient. For example, in this case, the prediction is the patient has pneumonia, from this chest X-ray. And using those systems, it could help both reduce the load of radiology consults, and it could allow us to really translate these algorithms to settings which might be much more resource poor, for example, in developing nations. Now, the same sorts of techniques can be used for other data modalities. So this is an example of data that could be obtained from an EKG. And from looking at this EKG, one can try to predict, does the patient have a heart condition, such as an arrhythmia. Now, these types of data used to just be obtained when you go to a doctor's office. But today, they're available to all of us. For example, in Apple's most recent watch that was released, it has a single-lead EKG built into it, which can attempt to predict if a patient has an arrhythmia or not. And there are a lot of subtleties, of course, around what it took to get regulatory approval for that, which we'll be discussing later in the semester, and how one safely deploys such algorithms directly to consumers. And there, there are a variety of techniques that could be used. And in a few lectures, I'll talk to you about techniques from the '80s and '90s which were based on trying to signal processing, trying to detect where are the peaks of the signal, look at a distance between peaks. And more recently, because of the large wealth of data that is available, we've been using convolutional neural network-based approaches to try to understand this data and predict from it. 
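Here is a minimal sketch of the kind of convolutional model one might apply to a single-lead ECG segment for rhythm classification. The architecture, input length, and number of classes are arbitrary illustrative choices, not the model used in any of the products or datasets mentioned above.

```python
import torch
import torch.nn as nn

class TinyECGNet(nn.Module):
    """Small 1D CNN: raw single-lead ECG segment -> rhythm class logits."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),            # global average pool over time
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                       # x: (batch, 1, n_samples)
        z = self.features(x).squeeze(-1)        # (batch, 64)
        return self.classifier(z)

# Toy forward/backward pass: a batch of 8 ten-second segments at 300 Hz.
model = TinyECGNet(n_classes=4)
x = torch.randn(8, 1, 3000)
logits = model(x)                               # (8, 4)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 4, (8,)))
loss.backward()
print(logits.shape)
```

The chest X-ray case mentioned above is analogous, just with 2D convolutions; in practice one usually starts from a pretrained image network with its final layer replaced.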
Yet another example from the ER really has to do with not how do we care for the patient today, but how do we get better data, which will then result in taking better care of the patient tomorrow. And so one example of that, which my group deployed at Beth Israel Deaconess, and it's still running there in the emergency department, has to do with getting higher quality chief complaints. The chief complaint is usually a very short, two or three word phrase, like left knee pain, rectal pain, right upper quadrant, RUQ, abdominal pain. And it's just a very quick summary of why the patient came into the ER today. And despite the fact that it's so few words, it plays a huge role in the care of a patient. If you look at the big screens in the ER, which summarize who the patients are and on what beds, they have the chief complaint next to it. Chief complaints are used as criteria for enrolling patients in clinical trials. They're used as criteria for doing retrospective quality research to see how we care for patients of a particular type. So it plays a very big role. But unfortunately, the data that we've been getting has been crap. And that's because it was free text, and it was sufficiently high dimensional that just attempting to standardize it with a big dropdown list, like you see over here, would have killed the clinical workflow. It would've taken way too much time for clinicians to try to find the relevant one. And so it just wouldn't have been used. And that's where some very simple machine learning algorithms turned out to be really valuable. So for example, we changed the workflow altogether. Rather than the chief complaint being the first thing that the triage nurse assigns when the patient comes in, it's the last thing. First, the nurse takes the vital signs, the patient's temperature, heart rate, blood pressure, respiratory rate, and oxygen saturation. They talk to the patient. They write up a 10-word or 30-word note about what's going on with the patient. Here it says, "69-year-old male patient with severe intermittent right upper quadrant pain. Began soon after eating. Also is a heavy drinker." So quite a bit of information in that. We take that. We use a machine learning algorithm, a supervised machine learning algorithm in this case, to predict a set of chief complaints now drawn from a standardized ontology. We show the five most likely ones, and the clinician, in this case a nurse, could just click one of them, and it would enter it in there. We also allow the nurse to type in part of a chief complaint. But rather than just doing text matching to find words that match what's being typed in, we do a contextual autocomplete. So we use our predictions to prioritize what's the most likely chief complaint that contains that sequence of characters. And that way it's way faster to enter in the relevant information. And what we found is that over time, we got much higher quality data out. And again, this is something we'll be talking about in one of our lectures in this course. So I just gave you a few examples of how machine learning and artificial intelligence will transform the provider space, but now I want to jump up a level and think through not how do we treat a patient today, but how do we think about the progression of a patient's chronic disease over a period of years. It could be 10 years, 20 years. And this question of how do we manage chronic disease is something which affects all aspects of the healthcare ecosystem.
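Before moving on to chronic disease, here is a minimal sketch of the kind of supervised model that could map a free-text triage note to a ranked list of chief complaints from a fixed ontology. The bag-of-words model and toy examples below are assumptions for illustration only; they are not the system that was actually deployed.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training pairs: (triage note, chief complaint from a standardized ontology).
notes = [
    "severe intermittent right upper quadrant pain after eating, heavy drinker",
    "left knee pain after fall, unable to bear weight",
    "crushing substernal chest pain radiating to left arm, diaphoretic",
    "three days of productive cough and fever",
]
labels = ["Abdominal pain", "Knee pain", "Chest pain", "Cough"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
).fit(notes, labels)

def top_k_complaints(note, k=5):
    """Return the k most likely chief complaints for a triage note."""
    probs = model.predict_proba([note])[0]
    order = np.argsort(probs)[::-1][:k]
    return [(model.classes_[i], round(float(probs[i]), 3)) for i in order]

print(top_k_complaints("69-year-old male with severe RUQ pain, began after eating"))
```

The contextual autocomplete described above would simply filter this ranked list down to the complaints containing whatever characters the nurse has typed so far.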
It'll be used by providers, payers, and also by patients themselves. So consider a patient with chronic kidney disease. Chronic kidney disease, it typically only gets worse. So you might start with the patient being healthy and then have some increased risk. Eventually, they have some kidney damage. Over time, they reach kidney failure. And once they reach kidney failure, typically, they need dialysis or a kidney transplant. But understanding when each of these things is going to happen for patients is actually really, really challenging. Right now, we have one way of trying to stage patients. The standard approach is known as the EGFR. It's derived predominantly from the patient's creatinine, which is a blood test result, and their age. And it gives you a number out. And from that number, you can get some sense of where the patient is in this trajectory. But it's really coarse grained, and it's not at all predictive about when the patient is going to progress to the next stage of the disease. Now, other conditions, for example, some cancers, like I'll tell you about next, don't follow that linear trajectory. Rather, patients' conditions and the disease burden, which is what I'm showing you in the y-axis here, might get worse, better, worse again, better again, worse again, and so on, and of course is a function of the treatment for the patient and other things that are going on with them. And understanding what influences, how a patient's disease is going to progress, and when is that progression going to happen, could be enormously valuable for many of those different parts of the healthcare ecosystem. So one concrete example of how that type of prediction could be used would be in a type of precision medicine. So returning back to the example that I mentioned in the very beginning of today's lecture of multiple myeloma, which I said my mother died of, there are a large number of existing treatments for multiple myeloma. And we don't really know which treatments work best for whom. But imagine a day where we have algorithms that could take what you know about a patient at one point in time. That might include, for example, blood test results. It might include RNA seq, which gives you some sense of the gene expression for the patient, that in this case would be derived from a sample taken from the patient's bone marrow. You could take that data and try to predict what would happen to a patient under two different scenarios. The blue scenario that I'm showing you here, if you give them treatment A, or this red scenario here, where you give them treatment B. And of course, treatment A and treatment B aren't just one-time treatments, but they're strategies. So they're repeated treatments across time, with some intervals. And if your algorithm says that under treatment B, this is what's going to happen, then you might-- the clinician might think, OK. Treatment B is probably the way to go here. It's going to long-term control the patient's disease burden the best. And this is an example of a causal question. Because we want to know how do we cause a change in the patient's disease trajectory. And we can try to answer this now using data. So for example, one of the data sets that's available for you to use in your course projects is from the Multiple Myeloma Research Foundation. It's an example of a disease registry, just like the disease registry I talked to you about earlier for rheumatoid arthritis. And it follows about 1,000 patients across time, patients who have multiple myeloma. 
What treatments they're getting, what their symptoms are, and at a couple of different stages, very detailed biological data about their cancer, in this case, RNA seq. And one could attempt to use that data to learn models to make predictions like this. But such predictions are fraught with errors. And one of the things that Pete and I will be teaching in this course is that there's a very big difference between prediction and prediction for the purpose of making causal statements. And the way that you interpret the data that you have, when your goal is to do treatment suggestion or optimization, is going to be very different from what you were taught in your introductory machine learning algorithms class. So other ways that we could try to treat and manage patients with chronic disease include early diagnosis. For example, patients with Alzheimer's disease, there's been some really interesting results just in the last few years, here. Or new modalities altogether. For example, liquid biopsies that are able to do early diagnosis of cancer, even without having to do a biopsy of the cancer tumor itself. We can also think about how do we better track and measure chronic disease. So one example shown on the left here is from Dina Katabi's lab here at MIT and CSAIL, where they've developed a system called Emerald, which is using wireless signals, the same wireless signals that we have in this room today, to try to track patients. And they can actually see behind walls, which is quite impressive. So using this for the signal, you could install what looks like just a regular wireless router in an elderly person's home, and you could detect if that elderly patient falls. And of course if the patient has fallen, and they're elderly, it might be very hard for them to get back up. They might have broken a hip, for example. And one could then alert the caregivers, maybe if necessary, bring in emergency support. And that could have a long-term outcome for this patient which would really help them. So this is an example of what I mean by better tracking patients with chronic disease. Another example comes from patients who have type 1 diabetes. Type 1 diabetes, as opposed to type 2 diabetes, generally develops in patients at a very early age. Usually as children it's diagnosed. And one is typically managed by having an insulin pump, which is attached to a patient and can give injections of insulin on the fly, as necessary. But there's a really challenging control problem there. If you give a patient too much insulin, you could kill them. If you give them too little insulin, you could really hurt them. And how much insulin you give them is going to be a function of their activity. It's going to be a function of what food they're eating and various other factors. So this is a question which the control theory community has been thinking through for a number of years, and there are a number of sophisticated algorithms that are present in today's products, and I wouldn't be surprised if one or two people in the room today have one of these. But it also presents a really interesting opportunity for machine learning. Because right now, we're not doing a very good job at predicting future glucose levels, which is essential to figure out how to regulate insulin. 
And if we had algorithms that could, for example, take a patient's phone, take a picture of the food that a patient is eating, have that automatically feed into an algorithm that predicts its caloric content and how quickly that'll be processed by the body. And then as a result of that, think about when, based on this patient's metabolic system, when should you start increasing insulin levels and by how much. That could have a huge impact in quality of life of these types of patients. So finally, we've talked a lot about how do we manage healthcare, but equally important is about discovery. So the same data that we could use to try to change the way that algorithms are implemented could be used to think through what would be new treatments and make new discoveries about disease subtypes. So at one point later in the semester, we'll be talking about disease progression modeling, and we'll talk about how to use data-driven approaches to discover different subtypes of disease. And on the left, here, I'm showing you an example of a really nice study from back in 2008 that used a k-means clustering algorithm to discover subtypes of asthma. One could also use machine learning to try to make discoveries about what proteins, for example, are important in regulating disease. How can we differentiate at a biological level which patients will progress quickly, which patients will respond to treatment. And that of course will then suggest new ways of-- new drug targets for new pharmaceutical efforts. Another direction also studied here at MIT, by quite a few labs, actually, has to do with drug creation or discovery. So one could use machine learning algorithms to try to predict what would a good antibody be for trying to bind with a particular target. So that's all for my overview. And in the remaining 20 minutes, I'm going to tell you a little bit about what's unique about machine learning in healthcare, and then an overview of the class syllabus. And I do see that it says, replace lamp in six minutes, or power will turn off and go into standby mode. AUDIENCE: We have that one [INAUDIBLE].. DAVID SONTAG: Ah, OK. Good. You're hired. If you didn't get into the class, talk to me afterwards. All right. AUDIENCE: [INAUDIBLE]. DAVID SONTAG: [LAUGHS] We hope. So what's unique about machine learning healthcare? I gave you already some hints at this. So first, healthcare is ultimately, unfortunately, about life or death decisions. So we need robust algorithms that don't screw up. A prime example of this, which I'll tell you a little bit more about towards the end of the semester is from a major software error that occurred something like 20, 30 years ago in a-- in an X-ray type of device, where an overwhelming amount of radiation was exposed to a patient just because of a software overflow problem, a bug. And of course that resulted in a number of patients dying. So that was a software error from decades ago, where there was no machine learning in the loop. And as a result of that and similar types of disasters, including in the space industry and airplanes and so on, led to a whole area of research in computer science in formal methods and how do we design computer algorithms that can check that a piece of software would do what it's supposed to do and would not make-- and that there are no bugs in it. 
But now that we're going to start to bring data and machine learning algorithms into the picture, we are really suffering for lack of good tools for doing similar formal checking of our algorithms and their behavior. And so this is going to be really important in the future decade, as machine learning gets deployed not just in settings like healthcare, but also in other settings of life and death, such as in autonomous driving. And it's something that we'll touch on throughout the semester. So for example, when one deploys machine learning algorithms, we need to be thinking about are they safe, but also how do we check for safety long-term? What are checks and balances that we should put into the deployment of the algorithm to make sure that it's still working as it was intended? We also need fair and accountable algorithms. Because increasingly, machine learning results are being used to drive resources in a healthcare setting. An example that I'll discuss in about a week and a half, when we talk about risk stratification, is that algorithms are being used by payers to risk stratify patients. For example, to figure out which patients are likely to be readmitted to the hospital in the next 30 days, or are likely to have undiagnosed diabetes, or are likely to progress quickly in their diabetes. And based on those predictions, they're doing a number of interventions. For example, they might send nurses to the patient's home. They might offer their members access to a weight loss program. And each of these interventions has money associated to them. They have a cost. And so you can't do them for everyone. And so one uses machine learning algorithms to prioritize who do you give those interventions to. But because health is so intimately tied to socioeconomic status, one can think about what happens if these algorithms are not fair. It could have really long-term implications for our society, and it's something that we're going to talk about later in the semester as well. Now, I mentioned earlier that many of the questions that we need to study in the field don't have good label data. In cases where we know we want to predict, there's a supervised prediction problem, often we just don't have labels for that thing we want to predict. But also, in many situations, we're not interested in just predicting something. We're interested in discovery. So for example, when I talk about disease subtyping or disease progression, it's much harder to quantify what you're looking for. And so unsupervised learning algorithms are going to be really important for what we do. And finally, I already mentioned how many of the questions we want to answer are causal in nature, particularly when you want to think about treatment strategies. And so we'll have two lectures on causal inference, and we'll have two lectures on reinforcement learning, which is increasingly being used to learn treatment policies in healthcare. So all of these different problems that we've talked about result in our having to rethink how do we do machine learning in this setting. For example, because driving labels for supervised prediction is very hard, one has to think through how could we automatically build algorithms to do what's called electronic phenotyping to discover, to figure out automatically, what is the relevant labels for a set of patients that one could then attempt to predict in the future. 
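As a small illustration of what an electronic phenotype looks like when it is written by hand, before trying to learn it automatically, here is a toy rule for labeling "likely type 2 diabetes" from structured EHR fields. The specific codes, threshold, and field names are illustrative assumptions, not a validated phenotype definition.

```python
def likely_type2_diabetes(record):
    """Toy rule-based phenotype over one patient's structured EHR record.

    record is assumed to be a dict with:
      'icd10_codes' : set of diagnosis codes,
      'max_hba1c'   : highest recorded hemoglobin A1c (%), or None,
      'medications' : set of lowercase drug names.
    """
    has_dx_code = any(code.startswith("E11") for code in record["icd10_codes"])
    has_high_a1c = record["max_hba1c"] is not None and record["max_hba1c"] >= 6.5
    on_t2d_drug = "metformin" in record["medications"]
    # Require at least two pieces of evidence to reduce false positives.
    return sum([has_dx_code, has_high_a1c, on_t2d_drug]) >= 2

patient = {
    "icd10_codes": {"E11.9", "I10"},
    "max_hba1c": 7.1,
    "medications": {"metformin", "lisinopril"},
}
print(likely_type2_diabetes(patient))  # True
```

Learned phenotyping approaches try to replace hand-built rules like this with models trained from a modest number of chart-reviewed labels, which connects directly to the semi-supervised methods mentioned earlier.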
Because we often have very little data, for example, some rare diseases, there might only be a few hundred or a few thousand people in the nation that have that disease. Some common diseases present in very diverse ways and [INAUDIBLE] are very rare. Because of that, you have just a small number of patient samples that you could get, even if you had all of the data in the right place. And so we need to think through how can we bring through-- how can we bring together domain knowledge. How can we bring together data from other areas-- will everyone look over here now-- from other areas, other diseases, in order to learn something that then we could refine for the foreground question of interest. Finally, there is a ton of missing data in healthcare. So raise your hand if you've only been seeing your current primary care physician for less than four years. OK. Now, this was an easy guess, because all of you are students, and you probably don't live in Boston. But here in the US, even after you graduate, you go out into the world, you have a job, and that job pays your health insurance. And you know what? Most of you are going to go into the tech industry, and most of you are going to switch jobs every four years. And so your health insurance is going to change every four years. And unfortunately, data doesn't tend to follow people when you change providers or payers. And so what that means is for any one thing we might want to study, we tend to not have very good longitudinal data on those individuals, at least not here in the United States. That story is a little bit different in other places, like the UK or Israel, for example. Moreover, we also have a very bad lens on that healthcare data. So even if you've been going to the same doctor for a while, we tend to only have data on you when something's been recorded. So if you went to a doctor, you had a lab test performed, we know the results of it. If you've never gotten your glucose tested, it's very hard, though not impossible, to figure out if you might be diabetic. So thinking about how do we deal with the fact that there's a large amount of missing data, where that missing data has very different patterns across patients, and where there might be a big difference between train and test distributions is going to be a major part of what we discuss in this course. And finally, the last example is censoring. I think I've said finally a few times. So censoring, which we'll talk about in two weeks, is what happens when you have data only for small windows of time. So for example, you have a data set where your goal is to predict survival. You want to know how long until a person dies. But a person-- you only have data on them up to January 2009, and they haven't yet died by January 2009. Then that individual is censored. You don't know what would have happened, you don't know when they died. So that doesn't mean you should throw away that data point. In fact, we'll talk about learning algorithms that can learn from censored data very effectively. So there are a number of also logistical challenges to doing machine learning in healthcare. I talked about how having access to data is so important, but one of the reasons-- there are others-- for why getting large amounts of data in the public domain is challenging is because it's so sensitive. And removing identifiers, like name and social, from data which includes free text notes can be very challenging. 
And as a result, when we do research here at MIT, typically, it takes us anywhere from a few months-- which has never happened-- to two years, which is the usual situation, to negotiate a data sharing agreement to get the health data to MIT to do research on. And of course then my students write code, which we're very happy to open source under MIT license, but that code is completely useless, because no one can reproduce their results on the same data because they don't have access to it. So that's a major challenge to this field. Another challenge is about the difficulty in deploying machine learning algorithms due to the challenge of integration. So you build a good algorithm. You want to deploy it at your favorite hospital, but guess what? That hospital has Epic or Cerner or Athena or some other commercial electronic medical records system, and that electronic medical records system is not built for your algorithm to plug into. So there is a big gap, a large amount of difficulty to getting your algorithms into production systems, which we'll talk about as well during the semester. So the goals that Pete and I have for you are as follows. We want you to get intuition for working with healthcare data. And so the next two lectures after today are going to focus on what healthcare is really like, and what is the healthcare data that's created by the practice of healthcare like. We want you to get intuition for how to formalize machine learning challenges as healthcare problems. And that formalization step is often the most tricky and something you'll spend a lot of time thinking through as part of your problem sets. Not all machine learning algorithms are equally useful. And so one theme that I'll return to throughout the semester is that despite the fact that deep learning is good for many speech recognition and computer vision problems, it actually isn't the best match to many problems in healthcare. And you'll explore that also as part of your problem sets, or at least one of them. And we want you to understand also the subtleties in robustly and safely deploying machine learning algorithms. Now, more broadly, this is a young field. So for example, just recently, just about three years ago, was created the first conference on Machine Learning in Healthcare, by that name. And new publication venues are being created every single day by Nature, Lancet, and also machine learning journals, for publishing research on machine learning healthcare. Because it's one of those issues we talked about, like access to data, not very good benchmarks, reproducibility has been a major challenge. And this is again something that the field is only now starting to really grapple with. And so as part of this course, oh so many of you are currently PhD students or will soon be PhD students, we're going to think through what are some of the challenges for the research field. What are some of the open problems that you might want to work on, either during your PhD or during your future career.
PROFESSOR: So today we'll be continuing along the theme of risk stratification. I'll spend the first half to 2/3 of today's lecture continuing where we left off last week before the discussion. I'll talk about how one derives the labels that one uses within a supervised machine learning approach. I'll continue talking about how one evaluates risk stratification models. And then I'll talk about some of the subtleties that arise when you want to use machine learning for health care, specifically for risk stratification. And I think that's going to be one of the most interesting parts of today's lecture. In the last third of today's lecture, I'll be talking about how one can rethink the supervised machine learning problem, not to be a classification problem, but be something closer to a regression problem. And one now thinks about not will someone, for example, develop diabetes within one to three years from now, but when precisely will they develop diabetes-- so the time to event. Then one has to start to really think very carefully about the censoring issues that I alluded to last week. And so I'll formalize those notions in the language of survival modeling. And I'll talk about how one can do maximum likelihood estimation in that setting, and how one should do evaluation in that setting. So in our lecture last week, I gave you this example of risk stratification for type 2 diabetes. The goal, just to remind you, was as follows. Roughly 25% of people in the United States who have type 2 diabetes are undiagnosed. If we could take health insurance claims data that's available for everyone who has health insurance, and use that to predict who, in the near term-- the next one to three years-- is likely to be newly diagnosed with type 2 diabetes, then we could use it to risk-stratify the patient population. We could use that, then, to figure out who is most at risk, do interventions for those patients, to try to get them diagnosed and get them started on treatment if relevant. But what I didn't talk much about was where did those labels come from. How do we know that someone had a diabetes onset in that window that I show up there on the top? So what are the answers? I mean, all of you should have read the paper by Razavian. And then also you should hopefully have some ideas. Thoughts? A hint-- it was in the supplementary material. How did we define a positive case in that paper? Yep. AUDIENCE: Drugs they were on. PROFESSOR: Drugs they were on. OK, yeah, so for example, metformin, glucose-- sorry, insulin. AUDIENCE: I think they did include metformin actually. PROFESSOR: Metformin is a tricky case. Because metformin is often used for alternative indications. But there are many medications, such as insulin, which are used pretty exclusively for treating diabetes. And so you can look to see, does a patient have a record of taking one of these diabetic medications in that window that we're using to define the outcome? If you see a record of a medication, you might conjecture, this patient probably has diabetes. But what if they don't have any medication listed in that time window? What could you conclude then? Any ideas? Yeah. AUDIENCE: If you look at the HBA1C value, and you know the normal range, and if you see the [INAUDIBLE] above like 7.5 or 7. PROFESSOR: So you're giving me an alternative approach, not looking at medications, but looking at laboratory test results.
Look at their HBA1C results, which measures approximately an average of three-month glucose values. And if that's out of range, then they're diabetic. And that's, in fact, usually used as a definition of diabetes. But that didn't answer my original question. Why is just looking at diabetic medications not enough? AUDIENCE: Some of the diabetic medications can be used to treat other conditions. PROFESSOR: Sometimes there's ambiguity in diabetic medications. But we've sort of dealt with that already by trying to choose an unambiguous set. What are other reasons? AUDIENCE: You're starting with the medicine at the onset of diabetes [INAUDIBLE]. PROFESSOR: Oh, that's a really interesting point-- not the one I was thinking about, but I like it-- which is that a patient might have been diagnosed with type 2 diabetes, but they, for whatever reason, in that communication between provider and patient, they decided we're not going to start treatment yet. So they might not yet be on treatment for diabetes, yet the whole health care system might be very well aware that the patient is diabetic, in which case doing these interventions for that patient might be irrelevant. Yep, another reason? AUDIENCE: So a lot of people are just not diagnosed for diabetes. So they have it. So one label means that they have diabetes, and the other label is a combination of people who have and don't have diabetes. PROFESSOR: So the point was, often you just might not be diagnosed for diabetes. That, unfortunately, is not something that we're going to able to solve here. It is an issue, but we have no solution for it. No, rather there's a different point that I want to get at, which is that this data has biases in it. So even if a patient is on a diabetes medication, for whatever reason-- maybe they are paying cash for those medications. If they're paying cash for those medications, then there's not going to be any record for the patient taking those medications in the health insurance claims. Because the health insurer didn't have to pay for it. But the reason that you gave is also a very interesting reason. And both of them are valid. So for all of these reasons, just looking at the medications alone is going to be insufficient. And as was just suggested a moment ago, looking at other indicators, like, for example, does the patient have an abnormal blood glucose value or HBA1C value would also provide information. So it's non-trivial, right? And part of what you're going to be doing in your next problem set, problem set 2, is going to be thinking through how does one actually do this cohort construction, not just what is your inclusion/exclusion criteria, but also how do you really derive those labels from that data set. Now the traditional answer to this has two steps to it. Step 1 is to actually manually label some patients. So you take a few hundred patients, and you go through their data. You actually look at their data, and decide, is this patient diabetic or are they not diabetic? And the reason why you have to do that is because often what you might think of is obvious-- like, oh, if they're on diabetes medication, they're diabetic-- has flaws to it. And until you really dig down and look at the data, you might not recognize that that criteria has a flaw in it. So that chart review is really an essential part of this process. Then the second step is, how do you generalize to get that label now for everyone in your population. And again, there, there are usually two different types of approaches. 
The first approach is to come up with some simple rule to try to then extrapolate to everyone. For example, if they have, A, diabetes medication, or an abnormal lab test result, that would be an example of a rule. And then you could then apply that to everyone. But even those rules can be really tricky to derive. And I'll show you some examples of that in just a moment. And as we know, machine learning is sometimes good as an alternative for coming up with a rule. So there's often now a second approach to this being more and more commonly used in the literature, which is to actually use machine learning itself to derive the labels. And this is a bit subtle, because it's machine learning for machine learning. So I want to break that down for one second. When you're trying to derive the labels, what you want to know is not, at time T, what's going to happen at time T plus W and onwards-- that's the original machine learning task that we set out to solve-- but rather, given everything you know about the patient, including the future data, is this patient newly diagnosed with diabetes in that window that I show in black there, between T plus W and onward. OK? So for example, this machine learning problem, this new machine learning problem, could take, as input, lab test results, and medications, and a whole bunch of other data. And you then use the few examples you labeled in step 1 to try to predict, is this patient currently diabetic or not. You then use that model to extrapolate to the whole population. And now you have your outcome label. It might be a little bit imperfect, but hopefully it's much better than what you could have gotten with a rule. And then, now using those outcome labels, you solve your original machine learning problem. Is that clear? Any questions? AUDIENCE: I have one. PROFESSOR: Yep. AUDIENCE: How do you evaluate yourself then, if you have these labels that were produced with machine learning, which are probabilistic? PROFESSOR: So that's where this first step is really important. You've got to get ground truth somehow. And of course once you get that ground truth, you create a train-and-validate set of that ground truth. You run your machine learning algorithm with the trained one. You'd look at its performance metrics on that validate set for the label prediction problem. And that's how you get confidence in it. But let's try to break this down a little bit. So first of all, what does this chart review step look like? Well, if it's an electronic health record system, what you often do is you will pull up Epic, or Cerner, or whatever the commercial EHR system is. And you will actually start looking at the patient data. You'll read notes written by previous doctors about this patient. And you'll look at their blood test results across time, medications that they're on. And from that you can usually tell pretty coherent story what's going on with your patient. Of course even better-- or the best way to get data is to do a prospective study. So you actually have a research assistant standing in the room when a patient walks into a provider. And they talk to the patient, and they take down really very clear notes what this patient has, what they don't have. But that's usually too expensive to do prospectively. So usually what we do is do this retrospectively. Now, if you're working with health insurance claims data, you usually don't have the luxury of looking at notes. And so what, in my group, we type typically do is we build, actually, a visualization tool. 
And by the way, I'm a machine learning person. I don't know anything about visualization. Neither do I claim to be good at it. But you can't do the machine learning work unless you really understand your data. So we had to build this tool in order to look at the data, in order to try to do that first step of understanding, did we even characterize diabetes correctly. So I'm not going go deep into it. By the way, you can download this. It's an open source tool. But ballpark what I'm showing you here is one patient's data. I'm showing on this x-axis, time, going from April to December. And on the y-axis, I'm showing events as they occurred. So in orange are diagnosis codes that were recorded for the patient. In green are procedure codes. In blue are laboratory tests. And if you see, on a given line, multiple dots along that same line, it means that same lab test was performed multiple times. And you could click on it to see what the results were. And in this way, you could start to tell a coherent story what's going on with your patient. All right, so tools like this is what you're going to need to able to do that first step from something like health insurance claims data. Now, traditionally, that first step, which then leads you to label some data, and then, from there, you go and come up with these rules, or do a machine learning algorithm to get the label, usually that's a paper in itself. Of course, not of interest to the computer science community, but of extreme interest to the health care community. So usually there's a first paper, academic paper, which evaluates this process for deriving the label, and then there are much later papers which talk about what you could do with that label, such as the machine learning problem we originally set out to solve. So let's look at an example of one of those rules. Here is a rule, to derive from health insurance claims data whether a patient has type 2 diabetes. Now, this isn't quite the same one that we used in that paper, but it gets the idea across. First you look to see, did the patient have a diagnosis code for type 1 diabetes. If the answer is no, you continue. If the answer is yes, you've sort of ruled out. Because you say, OK, this patient's abnormal blood test results are because they have type 1 diabetes, not type 2 diabetes. Type 1 diabetes usually is what you can think of as juvenile diabetes, is diagnosed much earlier. And there's a different mechanism behind it. Then you look at other things-- OK, is there a diagnosis code for type 2 diabetes somewhere in the patient's data? If so, you go to the right, and you look to see, is there a medication, an Rx, for type 1 diabetes in the data. If the answer is no, you continue down this way. If the answer is yes, you go this way. A yes of a type 1 diabetes medication doesn't alone rule out the patient. Because maybe the same medications are used for type 1 as for type 2. So there's some other things you need to do there. Right, you can see that this starts to really quickly become complicated. And these manual-based approaches end up having pretty bad positive-- so they're designed usually to have pretty high positive predictive value. But they end up having pretty bad recall, in that they don't end up finding all of the patients. And that's really why the machine-learning-based approaches end up being very important for this type of problem. Now, this is just one example of what I call a phenotype. I call this a phenotype. That's just what the literature calls it. 
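To make the shape of such a rule concrete, here is a minimal sketch in Python. The code sets, medication lists, field names, and threshold are illustrative placeholders, not the published PheKB logic, and the real phenotype has more branches than this.

```python
# Placeholder code/medication sets -- a real phenotype would use the curated
# lists from the published definition (e.g., from PheKB), not these stand-ins.
T1DM_DX_CODES = {"dx_type1_a", "dx_type1_b"}
T2DM_DX_CODES = {"dx_type2_a", "dx_type2_b"}
T1DM_MEDS = {"insulin"}                        # also used by some type 2 patients
T2DM_MEDS = {"metformin", "glipizide"}         # caution: metformin has other indications

def type2_diabetes_phenotype(patient):
    """Return True ('case') if the record matches this toy type 2 diabetes rule.

    `patient` is assumed to be a dict with keys 'dx_codes' (set of strings),
    'meds' (set of strings), and 'hba1c' (list of numeric lab results).
    """
    if patient["dx_codes"] & T1DM_DX_CODES and not (patient["dx_codes"] & T2DM_DX_CODES):
        return False                           # ruled out: looks like type 1
    abnormal_lab = any(v >= 6.5 for v in patient["hba1c"])
    if patient["dx_codes"] & T2DM_DX_CODES:
        if not (patient["meds"] & T1DM_MEDS):
            return True                        # T2 diagnosis code, no type-1-style meds
        return bool(patient["meds"] & T2DM_MEDS)  # ambiguous meds: need a T2-specific one
    # no T2 diagnosis code: require a T2 medication or an abnormal HbA1c
    return bool(patient["meds"] & T2DM_MEDS) or abnormal_lab
```

On a toy record with a type 2 diagnosis code and an HbA1c of 7.1, this returns True; on a record with only a type 1 diagnosis code, it returns False.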
It's a phenotype for type 2 diabetes. And the word, phenotype, in this context is exactly the same thing as the label. Yep. AUDIENCE: What is abnormal mean? PROFESSOR: For example, if the HA1C result is 6.5 or higher, you might say the patient has diabetes. AUDIENCE: OK, so this is a lab result, not a medical-- PROFESSOR: Correct, yeah, thanks. Other questions. AUDIENCE: What's the phenotype, which part exactly is the phenotype, like, the whole thing? PROFESSOR: The whole thing, yeah. So the construction, where you say-- you follow this decision tree, and you get to a conclusion, which is case, which means, yes they're type 2 diabetic. And if ever you don't reach this point, then the answer is no, they're not type 2 diabetic. That's what I mean by-- so that labeling is what we're calling the phenotype of type 2 diabetes. Now later in the semester, people will use the word, phenotype, to mean something else. It's an overloaded term. But this is what it's called in this context as well. Now here's an example of a website-- it's from the PheKB project-- where you will find tens to close to 100 of these phenotypes that have been arduously created for a whole range of different conditions. OK, so if you go to this website, and you click on any one of these conditions, like appendicitis, autism, cataracts, you'll see a different diagram of this sort I just showed you. So this is a real thing. This is something that the medical community really needs to do in order to try to derive the label that we can then use in our machine learning task. AUDIENCE: I'm just curious, is the lab value ground truth? Like if somebody has diabetes, then they must have [INAUDIBLE]. It means they have been diagnosed, and they must have-- PROFESSOR: Well, so, for example, you might have an abnormal glucose value for a variety of reasons. One reason is because you might have what's called gestational diabetes, which is diabetes that's induced due to pregnancy. But those patients typically-- well, although it's a predictive factor, they don't always have long-term type 2 diabetes. So even the laboratory test alone doesn't tell the whole story. AUDIENCE: You could be diagnosed without having abnormal diabetic? PROFESSOR: That's much less common here. The story will change in the future, because there will be a whole range of new diagnosis techniques that might use new modalities, like gene expression, for example. But typically, today, the answer is yes to that. Yep. AUDIENCE: So if these are made by doctors, does that mean, for every single disease, there's one definitive phenotype? PROFESSOR: These are usually made by health outcomes researchers, which usually have clinicians on their team. But the type of people who often work on these often come from the field of epidemiology, for example. And so what was your question again? AUDIENCE: Is there just one phenotype for every single disease? PROFESSOR: Is there one phenotype for every different disease? In the ideal world, you'd have at least one phenotype for every single disease that could possibly exist. Now, of course, you might be interested in different aspects. Like you might be interested in not knowing just does the patient have autism, but where they are on their autism spectrum. You might not be interested in knowing just, do they have it now, but you also might want to know when did they get it. So there's a lot of subtleties that could go into this. But building these up is really slow. 
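The machine-learning-for-the-labels alternative described a few paragraphs back can be sketched in the same spirit. This is a minimal sketch assuming scikit-learn; the feature matrices, index variables, and threshold are placeholders, and the label model here sees the full record, including data after the prediction time, because it only has to decide whether the patient is currently diabetic.

```python
# A minimal two-stage sketch: (1) fit a label model on the chart-reviewed
# subset, extrapolate to everyone; (2) train the original risk model against
# those derived labels using only pre-prediction-time features.
from sklearn.linear_model import LogisticRegression

def derive_labels(X_full, chart_idx, chart_labels, threshold=0.5):
    """Fit a label model on the chart-reviewed patients, then extrapolate."""
    label_model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
    label_model.fit(X_full[chart_idx], chart_labels)   # a few hundred reviewed patients
    p_case = label_model.predict_proba(X_full)[:, 1]
    return (p_case >= threshold).astype(int)           # derived label for everyone

# Stage 2 (sketch): the original risk stratification problem.
# y_derived = derive_labels(X_full, chart_idx, chart_labels)
# risk_model = LogisticRegression(penalty="l1", solver="liblinear")
# risk_model.fit(X_before_prediction_time, y_derived)
```

The label model is judged on a held-out piece of the chart-reviewed set before its extrapolated labels are trusted for the original task; the chart review and hand-built rules remain the usual starting point.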
And validating them to make sure that they're going to work across multiple data sets is really challenging, and usually is a negative result. And so it's been a very slow process to do this manually, which has led me and many others to start thinking about the machine learning approaches for how to do it automatically. AUDIENCE: Just as a follow-up, is there any case where there's, like, five autism phenotypes, for example, or multiple competing ones? PROFESSOR: Yes. So there are often many different such rule-based systems that give you conflicting results. Yes, that happens all the time. AUDIENCE: Can these rule-based systems provide an estimate of when their condition was onset? PROFESSOR: Right, so that's getting at one of the subtleties I just mentioned-- can these tell you when the onset happened? They're not typically designed to do that, but one can come up with one to do it. And so one way to try to do that is you change those rules to have a time period associate to it. And then you can imagine applying those rules in a sliding window to the patient data to see, when is the first time that it triggers. And that would be one way to try to get a sense of when onset was. But there's a lot of subtleties to that, too. So I'm going to move on now. I just want to give it some sense of what that deriving the labels ends up looking like. Let's now turn to evaluation. So a very commonly used approach in this field is to compute what's known as the Receiver-Operator Curve, or ROC curve. And what this looks at is the following. First of all, this is well-defined for a binary classification problem. For a binary classification problem when you're using a model that outputs, let's say, a probability or some continuous value, then you could use that continuous valid prediction. If you wanted to make a prediction, you usually threshold it, right? So you say, if it's greater than 0.5, it's a prediction of 1. If it's less than 0.5, prediction of zero. But here we might be interested in not just what minimizes, let's say, 0-1 loss, but you might also be interested in trading off, let's say, false positives for false negatives. And so you might choose different thresholds. And you might want to quantify how do those trade-offs look for different choices of those thresholds of this continuous value prediction. And that's what the ROC curve will show you. So as you move along the threshold, you can compute, for every single threshold, what is the true positive rate, and what is the false positive rate. And that gives you a number. And you try all possible thresholds, that gives you a curve. And then you can compare curves from different machine learning algorithms. For example, here, I'm showing you, in the green line, the predictive model obtained by using what we're calling the traditional risk factors, so something like eight or 10 different risk factors for type 2 diabetes that are very commonly used in the literature. Versus in blue, it's showing you what you'd get if you just used a naive L1-regularized logistic regression model with no domain knowledge, just sort of throw in the bag of features. And you want to be up there. You want to be in that top left corner. That's the goal here. So you would like that blue curve to be up there, and then all the way to the right. Now, one way to try to quantify in a single number how useful any one ROC curve is is by looking at what's called the area under the ROC curve. And mathematically, this is exactly what you'd expect. 
This area is the area under the ROC curve. So you could just integrate the curve, and you get that number out. Now, remember, I told you you want to be in the upper left quadrant. And so the goal was to get an area under the ROC curve of a 1. Now, what would a random prediction give you? Any idea? So if you're to just flip a coin and guess-- what do you think? AUDIENCE: 0.5. PROFESSOR: 0.5? AUDIENCE: [INAUDIBLE] PROFESSOR: Well, so I was a little bit misleading when I said you just flip a coin. You got to flip a coin with sort of different noise rates. And each one of those will get you sort of a different place along this curve. And if you look at the curve that you get from these random guesses, it's going to be the straight line from 0 to 1. And as you said, that will then have an AUC of 0.5. So 0.5 is going to be random guessing. 1 is perfect. And your algorithm is going to be somewhere in between. Now, of relevance to the rest of today's lecture is going to be an alternative definition-- alternative way of computing the area under the ROC curve. So one way to compute it is literally as I said. You create that curve, and you integrate to get the area under it. But one can show mathematically-- I'm not going to give you the derivation here, but you can look it up on Wikipedia. One can show mathematically that an equivalent way of computing the area under the ROC curve is to compute the probability that an algorithm will rank a positive-labeled patient over a negative-labeled patient. So mathematically what I'm talking about is the following thing. You're going to sum over pairs of patients where-- I'm going to call x1 is a patient with label y1 equals 1. And x2 is a patient with label y-- actually, I'll call it-- yeah, with label x2 equals 1. So these are two different patients. I think I'm going to rewrite it like this-- xi and xj, just for generality's sake. So we're going to sum this up over all choices of i and j such that yi and yj have different labels. So that should say yj equals 0. And then you're going to look at-- what you want to happen, like suppose that you're using a linear model here. So your prediction is given to you by, let's say, w.xj. What you want is that this should be smaller than w.xi. So the j data point, remember, was the one that got the 0-th and the i-th data point is the one that got the 1 label. So we want the score of the data point that should've been 1 to be higher than the score of the data point which should've gotten the label 0. And you just count up-- this is an indicator function. You just count up how many of those were correctly ordered. And then you're just going to normalize by the total number of comparisons that you do. And it turns out that that is exactly equal to the area under the ROC curve. And it really makes clear that this is a notion that really cares about ranking. Are you getting the ranking of patients correct? Are you ranking the ones who should have been given 1 higher than the ones that should have gotten the label 0. And importantly, this whole measure is actually invariant to the label imbalance. So you might have a very imbalanced data set. But if you were to re-sample with now making it a balanced data set, your AUC of your predictive model wouldn't change. And that's a nice property to have when it comes to evaluating settings where you might have artificially created a balanced data set for computational concerns. 
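Here is a small numerical check of that ranking definition on made-up scores, assuming scikit-learn; the labels and scores are synthetic and the variable names are placeholders.

```python
# Check that the pairwise-ranking definition of AUC matches the usual
# integrate-the-curve computation from scikit-learn, on synthetic scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)            # labels y_i in {0, 1}
scores = y * 0.8 + rng.normal(size=200)     # w.x_i: noisy scores, higher for positives

pos = scores[y == 1]
neg = scores[y == 0]
pairs = pos[:, None] - neg[None, :]         # one entry per (positive, negative) pair
ranking_auc = np.mean(pairs > 0) + 0.5 * np.mean(pairs == 0)  # ties get half credit

print(ranking_auc, roc_auc_score(y, scores))  # the two numbers agree
```

And because only the relative ordering of positives against negatives enters this count, re-sampling one class to balance the data changes the number of pairs but not the fraction that are ordered correctly.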
Even though the true setting is imbalanced, there at least you know that the numbers are going to be the same in both settings. On the other hand, it also has lots of disadvantages. Because often you don't care about the performance of the whole entire curve. Often you care about particular parts along the curve. So for example, in last week's lecture, I argued that really what we often care about is just the positive predictive value for a particular threshold. And we want that to be as high as possible for as few people as possible. Like, find the 100 most risky people, and look at what fraction of them actually developed type 2 diabetes. And that setting, what you're really looking at is this part of the curve. And so it turns out there are generalizations of area under the curve that focus on parts of the curve. And that goes by the name of partial AUC. For example, if you just integrated from 0 to, let's say, 0.1 of the curve, then you could still get a number to compare two different curves, but it's focusing on the area of that curve that's actually relevant for your predictive purposes, for your task at hand. So that's all I want to say about receiver-operator characteristic curves. Any questions? Yep. AUDIENCE: Could you talk more about what the drawbacks were of using this. Does the class imbalance-- is the class imbalance, then, always a positive effect? PROFESSOR: So the thing is, when you want to use this approach, depending on how you're using the [INAUDIBLE],, you might not be able to tolerate a 0.8 false positive rate. So in some sense, what's going on in this part of the curve might be completely irrelevant for your task. And so one of the algorithms-- so one of these curves-- might look like it's doing really, really well over here, and pretty poorly over here. But if you're looking at the full area under the ROC curve, you won't notice that. And so that's one of the big problems. Yeah. AUDIENCE: And when would you use this versus precision recall or-- PROFESSOR: Yeah, so a lot of the community is interested in precision recall curves. And precision recall curves, as opposed to receiver-operator curves, have the property that they are not invariant to class imbalance, which in many settings is of interest, because it will allow you to capture these types of quantities. I'm not going to go into depth about your reasons for one or the other. But that's something that you could read up about, and I encourage you to post to Piazza about, and we have discussion on Piazza. So the last evaluation quantity that I want to talk about is known as calibration. And calibration, as I've defined it here, has to do with binary classification problems. Now, before you dig into this figure, which I'll explain in a moment, let me just give you the gist of what I mean by calibration. Suppose that your model outputs a probability. So you do logistic regression. You get a probability out. And your model says, for these 10 patients, that their likelihood of dying in the next 48 hours is 0.7. Suppose that's what your model output. If you were on the receiving end of that result, so you heard that, 0.7, what should you expect about those 10 people? What fraction of them should actually die in the next 48 hours? Everyone could scream out loud. [INTERPOSING VOICES] PROFESSOR: So seven of them. Seven of the 10 you would expect to die in the next 48 hours if the probability for all of them that was output was 0.7. All right, that's what I mean by calibration. 
So if, on the other hand, what you found was that only one of them died, then it would be a very weird number that you're outputting. And so the reason why this notion of calibration, which I'll define formally in a second, is so important, is when you're out putting a probability, and when you don't really know how that probability is going to be used. If you knew-- if you had some task loss in mind. And you knew that all that mattered was the actual prediction, 1 or 0, then that would be fine. But often predictions in machine learning are used in a much more subtle way. Like for example, often your doctor might have more information than your computer has. And so often they might want to take the result that your computer predicts, and weigh that against other evidence. Or in some settings, it's not just weighting about other evidence. Maybe it's also about making a decision. And that decision might take exertion-- a utility, for example, a patient preference for suffering versus getting a treatment that could have big, adverse consequences. And that's something that Pete is going to talk about much more later in the semester, I think, how to formalize that notion. But at this point, I just want to sort of get out the point that the probabilities themselves could be important. And having the probabilities be meaningful is something that one can now quantify. So how do we quantify it? Well, one way to try to quantify it is to create the following prompt. Actually, we'll call it a histogram. So on the x-axis is the predicted probability. So that's what I meant by p-hat. On the y-axis is the true probability. It's what I mean when I say the fraction of individuals with that predicted probability that actually got the positive outcome. That's going to be the y-axis. So I'll call that the true probability. And what we would like to see is that this is a line, a straight line, meaning these two should always be equal. And in the example I gave, remember I said that there were a bunch of people with 0.7 probability predicted, but for whom only one out of them actually got the positive end. So that would have been something like over here. Whereas you would have expected it to be over there. So you might ask, how do I create such a plot from finite data? Well, a common way to do so is to bin your data. So you'll create intervals. So this bin is the bin from 0 to 0.1. This bin is the bin from 0.1 to 0.2, and so on. And then you'll look to see, OK, how many people for whom the predicted probability was between 0 and 0.1 actually died? And you'll get a number out. And now here's where I can go to this plot. That's exactly what I'm showing you here. So for now, ignore the bar charts and the bottom, and just look at the line. So let's focus on just the green line. Here I'm showing you several different models. For now, just focus on the green line. So the green line, by the way, notice it looks pretty good. It's almost a straight line. So how did I compute it? Well, first of all, notice the number of ticks are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. OK, so there are 10 points along this line. And each of those corresponds to one of these bins. So the first point is the 0 to 0.1 bin. The second point is the 0.1 to 0.2 bin, and so on. So that's how it computed this. The next thing you notice is that I have confidence intervals. And the reason I compute these confidence intervals is because sometimes you just might not have that much data in one of these bins. 
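A minimal sketch of that binned calibration (reliability) computation, assuming numpy arrays of predicted probabilities and 0/1 outcomes; the bin count and the rough binomial interval are choices, not part of any standard.

```python
# Binned calibration curve: for each probability bin, compare the average
# predicted probability against the observed fraction of positive outcomes.
import numpy as np

def calibration_curve_binned(p_hat, y, n_bins=10):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    which_bin = np.clip(np.digitize(p_hat, edges) - 1, 0, n_bins - 1)
    rows = []
    for b in range(n_bins):
        mask = which_bin == b
        n = int(mask.sum())
        if n == 0:
            continue                           # nothing predicted in this range
        mean_pred = float(p_hat[mask].mean())  # x-axis: average predicted probability
        frac_pos = float(y[mask].mean())       # y-axis: observed fraction of positives
        # rough 95% binomial interval; gets wide when a bin holds few patients
        half_width = 1.96 * np.sqrt(frac_pos * (1 - frac_pos) / n)
        rows.append((mean_pred, frac_pos, n, half_width))
    return rows
```

scikit-learn's calibration_curve does a similar binning without the counts or intervals, and the counts matter: a bin the model almost never predicts into gives you a very wide interval.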
So for example, suppose your algorithm almost never said that someone has a predictive probability of 0.99. Then until you get a ton of data, you're not going to know what fraction of those individuals actually went on to develop the event. And you should be looking at sort of the confidence interval of this line, which should take that into consideration. And a different way to try to understand that notion, now looking at the numbers, is what I'm showing you in the bar charts in the bottom. On the bar charts, I'm showing you the number of individuals or the fraction of individuals who actually got that predicted probability. So now let's start comparing the lines. So the blue line shown here is a machine learning algorithm which is predicting infection in the emergency rooms. It's a slightly different problem than the diabetes one we looked at earlier. And it's using a bag of words model from clinical text. The red line is using just chief complaint. So it's using one piece of structured data that you get at one point of time in the ER. So it's using very little information. And you can see that both models are somewhat well calibrated. But the intervals-- the confidence intervals of both the red and the purple lines gets really big towards the end. And if you look at these bar charts, it explains why, because the models that use less information end up being much more risk-averse. So they will never predict a very high probability. They will always sort of stay in this lower regime. And that's why we have very big confidence intervals there. OK, so that's all I want to say about evaluation. And I won't take any questions on this right now, because I really want to get on to the rest of the lecture. But again, if you have any questions, post to Piazza, and I'm happy to discuss them with you offline. So, in summary, we've talked about how to reduce risk stratification to binary classification. I've told you how to derive the labels. I've given you one example of machine learning algorithm you can use, and I talked to you about how to evaluate it. What could possibly go wrong? So let's look at some examples. And these are a small number of examples of what could possibly go wrong. There are many more. So here's some data. I'm showing you-- for the same problem we looked at before, diabetes onset, I'm showing you the prevalence of type 2 diabetes as recorded by, let's say, diagnosis codes across time. All right, so over here is 1980. Over here is 2012. Look at that. It is not a flat line. Now, what does that mean? Does that mean that the population is eating much more unhealthy from 1980 to 2012, and so more people are becoming diabetic? That would be one plausible answer. Another plausible explanation is that something has changed. So in fact I'm showing you with these blue lines, well, in fact, there was a change in the diagnostic criteria for diabetes. And so now the patient population actually didn't change much between, let's say, this time point at that time point. But what really led it to this big uptick, according to one theory, is because the diagnostic criteria changed. So who we're calling diabetic has changed. Because diseases are, at the end of the day, a human-made concept, you know, what do we call some disease. And so the data is changing, as you see here. Let me show you another example. 
Oh, by the way, so the consequence of that is that automatically-derived labels-- for example, if you use one of those phenotyping algorithms I showed you earlier, the rules-- what the label is derived for over here might be very different from the label that's derived from over here, particularly if it's using data such as diagnosis codes that have changed in meaning over the years. So that's one consequence. There'll be other consequences I'll tell you about later. Here's another example. And by the way, this notion is called non-stationarity, that the data is changing across time. It's not stationary. Here's another example. On the x-axis again I'm showing you time. Here each column is a month, from 2005 to 2014. And on the y-axis, for every sort of row of this table, I'm showing you a laboratory test. And here we're not looking at the results of the lab test, we're only looking at what fraction of-- at how many lab tests of that type were performed at this point in time. And now you might expect that, broadly speaking, the number of glucose tests, the number of white blood cell count tests, the number of neutrophil tests and so on might be pretty constant across time, on average, because you're averaging over lots of people. But indeed what you see here is that, in fact, there is a huge amount of non-stationarity. Which tests are ordered dramatically changes across time. So for example you see this one line over here, where it's all blue, meaning no one is ordering the test, until this point in time, when people start using it. What could that be? Any ideas? Yeah. AUDIENCE: [INAUDIBLE] PROFESSOR: So the test was used less, or really, in this case, not used at all. And then suddenly it was used. Why might that happen? In the back. AUDIENCE: A new test. PROFESSOR: A new test, right, because technology changes. Suddenly we come up with a new diagnostic test, a new lab test. And we can start using it, where it didn't exist before. So obviously there was no data on it before. What's another reason why it might have suddenly showed up? Yep. AUDIENCE: It could be like annual check-ups become mandatory, or that it's part of the test admission at hospital. Like, it's an additional test. PROFESSOR: I'll stick with your first example. Maybe that test becomes mandatory. OK, so maybe there's a clinical guideline that is created at this point in time, right there. And health insurers decide we're going to reimburse for this test at this point in time. And the test might've been really expensive. So no one would have done it beforehand. And now that the health insurance companies are going to pay for it, now people start doing it. So it might have existed beforehand. But if no one would pay for it, no one would use it. What's another reason why you might see something like this, or maybe even a gap like this? Notice, here in the middle, there's this huge gap in the middle. What might have explained that? AUDIENCE: [INAUDIBLE] PROFESSOR: Hold on. Yep, over here. AUDIENCE: Maybe your patient population is mostly of a certain age, and coverage for something changes once your age crosses a threshold. PROFESSOR: Yeah, so one explanation-- I think it's not plausible in this data set, but it is plausible for some data sets-- is that maybe your patients at time 0 were all of exactly the same age. So maybe there's some amount of alignment. And suddenly, at this point in time, let's say, women only get, let's say, their annual mammography once they turn a certain age. 
And so that might be one reason why you would see nothing until one point in time. And maybe that would change across time as well. Maybe they'll stop getting it at some point after menopause. That's not true, but let's say. So that's one explanation. In this case, it doesn't make sense, because the patient population is very mixed. So you could think about it as being roughly at steady state. So they're not-- you'll have patients of all ages here. What's another reason? Someone raised their hand over here. Yep. AUDIENCE: Yeah, I was just going to say, maybe the EMR shut down for awhile, and so they were only doing stuff on paper, and they only were able to record 4 things. PROFESSOR: Ding ding ding ding ding. Yes, that's right. So maybe the EMR shut down. Or in this case, we had data issues. So this data was acquired somehow. For example, maybe it was required through a contract with something like Webquest or LabCorp. And maybe, during that four-month interval, there was contract negotiation. And so suddenly we couldn't get the Data for that time period. Or maybe our databases crashed, and we suddenly lost all the data for that time period. This happens, and this happens all the time, and not just the health care industry, but other industries as well. And as a result of those systemic-type changes, your data is also going to be non-stationary across time. So now we've seen three or four different explanations for why this happens. And the reality is really a mixture of all of these. And just as in the previous-- so in the previous example, notice how what really changed here is that the derived labels might change meaning across time. Now the significance of the features used in the machine learning models would really change across time. And that's one of the consequences of this, particular if you're driving features from lab test values. Here's one last example. Again, on the x-axis here, I have time. On the y-axis here, I'm showing the number of times that you observed some diagnosis code of some kind. This cyan line is ICD-9 codes. And this red line are ICD-10 codes. You might remember that Pete mentioned in an earlier lecture that there was a big shift from ICD-9 coding to ICD-10 coding at some point. When was that time? It was precisely this time. And so if you think about the feature vector that you would derive for your machine learning problem, you would have one feature for all ICD-9 codes, and one-- a whole set of features for all ICD-10 codes. And those ICD-9-based features are going to be-- they're going to be used quite a bit in this time period. And then suddenly they're going to be completely sparse in this time period. And ICD-10 features start to become used. And you could imagine that if you did machine learning using just ICD-9 data, and then you tried to apply your model at this point in time, it's going to do horribly, because it's expecting features that it no longer has access to. And this happens all the time. And in fact, what I'm describing here is actually a major problem for the whole health care industry. For the next five years, everyone is going to grapple with this problem, because they want to use their historical data for machine learning, but their historical data is very different from their recent data. So now, in the face of all of this non-stationarity that I just described, did we do anything wrong in the diabetes risk stratification problem that I told you about earlier? Thoughts. That was my paper, by the way. Did I make an error? Thoughts. 
Don't be afraid. I'm often wrong. I'm just asking specifically about the way I evaluated the models. Yep. AUDIENCE: This wasn't an error, but one thing, like if I was a doctor I would like to see is the sensitivity to-- like, the inclusion criteria if I remove the HBA1C for instance. Like most people, they have compared to having either Rx or [INAUDIBLE] then kind of evaluating the-- PROFESSOR: So understanding the robustness to changing the data a bit is something that would be of a lot of interest. I agree. But that's not immediately suggested by the non-stationarity results. Not something that's suggested by non-stationarity results. Our TA in the front row has an idea. Yeah, let's hear it. AUDIENCE: The train and test distributions were drawn from the same-- or the train and tests were drawn from the same distribution. PROFESSOR: So in the way that we did our evaluation there, we said, OK, we're going to set it up such that on January 1, 2009, we're predicting what's going to happen in the following three years. And we segmented our patient population into train, validate, and test, but at all times, using that same setup, January 1 2009, as the prediction time. Now, we learned this model, and it's now 2018. We want to apply this model today. And I computed an area under the ROC curve. I computed positive predictive values using that retrospective data. And I handed those off to my partners. And they might hope that those numbers are reflective of what their models would do today. But because of these issues I just told you about-- for example, that the number of people who have type 2 diabetes, and even the definition of it has changed. Because of the fact that the laboratory-- ignore this part over here. That's just a fluke. But the fact, because of the laboratory tests that were available during training might be different from the ones that are available now, and because of the fact that we have only ICD-10 data now, and not ICD-9, for all of those reasons, our predictive performance is going to be really horrible now, Particularly because of this last issue of not having ICD-9s. Our predictive model is going to work horribly now if it was trained on data from 2008 or 2009. And so we would have never ever even recognized that if we used the validation set up that we had done there. So I wrote that paper when I was young and naive. [AUDIENCE CHUCKLING] I'm a little bit more gray-haired now. And so in our more recent work-- for example, this is a paper which we're working on right now, done by a master's student of mine, Helen Zhou, and is looking at predicting antibiotic resistance, now we're a little bit smarter about over evaluation setup. And we decided to set it up a little bit differently. So what I'm showing you now is the way that we chose, trained, validated and test for our population. So we segmented our data. So the x-axis here is time, and the y-axis here are people. So you can think of each person as being a different row. And you can imagine that we randomly sorted the rows. What we did is we segmented our data into these four quadrants. The first two quadrants, we used for train and validate. Notice, by the way, that we have different people in the training set as we do in the validate set. That's important for another quantity which I'll talk about in a minute. So we used this data for train and validate. And that's, again, very similar to the way we did it in the diabetes paper. But now, for testing, we use this future data. So we used data from 2014 to 2016. 
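A minimal sketch of that quadrant construction, assuming a pandas DataFrame with one row per (patient, prediction time) and columns named 'patient_id' and 'date'; the column names, cutoff date, and holdout fraction are assumptions.

```python
# Split rows into four quadrants: patients on one axis, time on the other.
import pandas as pd

def quadrant_split(df, cutoff="2014-01-01", holdout_frac=0.5, seed=0):
    patients = df["patient_id"].drop_duplicates()
    holdout = set(patients.sample(frac=holdout_frac, random_state=seed))  # rows of the figure
    is_holdout = df["patient_id"].isin(holdout)
    is_future = pd.to_datetime(df["date"]) >= pd.Timestamp(cutoff)        # columns of the figure

    train            = df[~is_holdout & ~is_future]  # earlier period, training patients
    validate         = df[ is_holdout & ~is_future]  # earlier period, held-out patients
    test_same_people = df[~is_holdout &  is_future]  # future period, patients seen in training
    test_new_people  = df[ is_holdout &  is_future]  # future period, patients never seen
    return train, validate, test_same_people, test_new_people
```

Comparing performance on the validate quadrant against the future quadrants isolates how much is lost purely to the passage of time, since the patients are disjoint from the training set in both cases.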
And one can imagine two different quadrants. You might be interested in knowing, for the same patients for whom you made predictions on during training, how would your predictions do for those same people at test time in the future data. And that's assuming that what we're predicting is something that's much more myopic in nature. In this case it was predicting, are they going to be resistant to some antibiotic? But you can also look at it for a completely different set of patients, for patients who are not used during training at all. And suppose that this 2 bucket isn't used at all, for those patients, how do we do, again, using the future data for that. And the advantage of this setup is that it can really help you assess non-stationarity. So if your model really took advantage of features that were available in 2007, 2008, 2009, but weren't available in 2014, you would see a big drop in your performance. Looking at the drop in performance from your validate set in this time period, to your test set from that time period, that drop in performance will be uniquely attributed to the non-stationarity. So it's a good way to diagnose it. Yep. AUDIENCE: Just some clarification on non-stationarity-- is it the fact that certain data is just lost altogether, or is it the fact that it's just encoded differently, and so then it's difficult to get that mapping correct? PROFESSOR: Both. Both of these happen. So I have a big research program now which is asking not just how-- so this is how you can evaluate and recognize there's a problem. But of course there's a really interesting research question, which is, how can you make use of the non-stationarity. Right, so for example, you had ICD-9/ICD-10 data. You don't want to just throw away the ICD-9 data. Is there a way to use it? So the naive answer, which is what the community is largely using today, is come up with a mapping. Come up with a manual mapping from ICD-9 to ICD-10 so that you can sort of manually transform your data into this new format such that the models you learn from this older time is useful in the future time. That's the boring and simple answer. But I think we could do much better. For example, we can learn new representations of the data. We can learn that mapping directly in order to optimize for your sort of most recent performance. And there's a whole bunch more that we can talk about later. Yep. AUDIENCE: [INAUDIBLE] non-stationary change, this will [INAUDIBLE] does not ensure robustness to the future. PROFESSOR: Correct. So this allows you to detect that a non-stationarity has happened. And it allows you to say that your model is going to generalize to 2014-2016. But of course, that doesn't mean that your model's going to generalize to 2016-2018. And so how do you do that? How do you have confidence in that? Well, that's a really interesting research question. We don't have good answers to that today. From a practical perspective, the best I can offer you today is, build in these checks and balances all the time. So continuously sort of evaluate how you're doing on the most recent data. And if you see big changes, throw a red flag. Build more checks and balances into your deployment process. If you see a bunch of patients who are getting predicted probabilities of 1, and in the past, you'd never predicted probability 1, that might tell you something. Then much later in the semester, we'll talk about robust machine learning approaches, for example, approaches that have been designed to be robust against adversaries. 
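A minimal sketch of that kind of check and balance: compare a recent window of inputs and predictions against a reference window and raise a flag when they drift. The thresholds, array shapes, and flag messages here are assumptions, not a standard recipe.

```python
# Monitoring sketch: flag features whose prevalence has shifted and predicted
# probabilities that look unlike anything the model produced at deployment time.
import numpy as np

def drift_flags(X_ref, X_new, p_ref, p_new, prevalence_tol=0.5, score_tol=0.1):
    flags = []
    # 1) feature prevalence: fraction of patients with each binary feature present
    prev_ref = X_ref.mean(axis=0)
    prev_new = X_new.mean(axis=0)
    for j, (a, b) in enumerate(zip(prev_ref, prev_new)):
        if a > 0 and abs(b - a) / a > prevalence_tol:   # e.g., ICD-9 codes vanishing
            flags.append(f"feature {j}: prevalence moved from {a:.3f} to {b:.3f}")
        if a == 0 and b > 0:
            flags.append(f"feature {j}: newly appearing")
    # 2) output scores: has the distribution of predicted probabilities shifted?
    if abs(np.mean(p_new) - np.mean(p_ref)) > score_tol:
        flags.append("mean predicted risk shifted")
    if np.max(p_new) > 0.99 and np.max(p_ref) <= 0.99:
        flags.append("model now makes near-certain predictions it never used to")
    return flags
```

Checks like these only tell you when to stop trusting the model; they complement the robust, adversarially motivated training methods just mentioned.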
And those type of approaches as well will allow you to be much more robust to particular types of data set shift, of which non-stationarity is one example. But it's a big, open research field. Yep. AUDIENCE: So just to make sure I have the understanding correct, theoretically, if you could map everything from the old data set to the new data set, like the encodings, would it still be OK, like the results you get on the future data set? PROFESSOR: If you could do a perfect mapping, and it's one to one, and the distributions of those things also didn't change, then yeah. Really what you need to assess is, is there data set shift? Is your training distribution, after mapping, the same as your testing distribution? If the answer is yes, you're all good. If you're not, you're in trouble. Yep. AUDIENCE: What seems to be the test set of traits set here? Or what [INAUDIBLE]? PROFESSOR: So 1 is using data only from 2007-2013, 3 is using data only from 2014-2016. AUDIENCE: But in the case, like, the output we care about happened in, like, 2007-2013, then that observation would be not-- it wouldn't be useful. PROFESSOR: Yeah, so for the diabetes problem, there's also just inclusion/exclusion criteria that you have to deal with. For what I'm showing you here, I'm talking about a setting where you might be making multiple predictions for patients across time. So it's a much more myopic prediction task. But one could come up with an analogy to this for the diabetes setting. Like, for example, just hold out half of the patients at random. And then for your training set, use data up to 2009, and evaluate on data only up to 2013. And for your test set, pretend as if it was January 1, 2013, and look at performance up to 2017. And so that would be-- you're changing your prediction time to use more recent data. So the next subtlety is-- it's a name that I put on to it. This isn't a standard name. This is what I'm calling intervention-tainted outcomes. And so the example here came from your reading for today. The reading was this paper on intelligible models for health care predicting pneumonia risk in hospital 30-day admissions from KDD 2015. So in that paper, they give an example-- it's a very old example-- of trying to use a predictive model to understand a patient's risk of mortality when they come into the hospital. And what they learned-- and they used a rule-based learning algorithm-- and what they discovered was a rule that said if the patient has asthma, then they have low risk of dying. So these are all patients who have pneumonia. So a patient who comes in with pneumonia and asthma has a lower risk of dying than a patient who comes in with pneumonia and does not have a history of asthma. OK, that's what this rule says. And this paper argued that there's something wrong with that learned model. Any of you remember what that was? Someone who hasn't talked today, please. Yeah, in the back. AUDIENCE: It was that those with asthma had more aggressive treatment. So that means that they had a higher chance of survival. PROFESSOR: Patients with asthma had more aggressive treatment. In particular, they might have been admitted to the intensive care unit for more careful vigilance. And as a result, they had better outcomes. Yes, that's exactly right. So the real story behind this is that risk stratification, as we talked about the last couple weeks, it's used to drive interventions. And those interventions, if they happened in the past data, would change the outcomes. 
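A synthetic illustration of that asthma story, with made-up numbers: asthma raises untreated risk, but asthma patients are treated far more aggressively, so a model fit only on (asthma, death) while marginalizing over treatment learns the opposite sign. This assumes numpy and scikit-learn and is not the actual model or data from the paper.

```python
# Simulate the intervention-tainted outcome: treatment helps, asthma patients
# get treated more often, and the naive model concludes asthma is protective.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200_000
asthma = rng.binomial(1, 0.2, size=n)
treated = rng.binomial(1, np.where(asthma == 1, 0.95, 0.10))        # asthma -> aggressive care
p_death = np.clip(0.10 + 0.05 * asthma - 0.12 * treated, 0.01, 0.99)  # treatment helps a lot
death = rng.binomial(1, p_death)

model = LogisticRegression().fit(asthma.reshape(-1, 1), death)
print(model.coef_)   # negative coefficient: "asthma lowers risk" -- the tainted conclusion
```

Conditional on the same treatment, asthma makes things worse in this simulation, yet the marginal death rate is lower for asthma patients, which is exactly the rule the paper found.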
So in this case, you might imagine using the learned predictive model to say, a new patient comes in, this new patient has asthma, and so we're going to say they're low risk. And if we took a naive action based on that prediction, we might say, OK, let's send them home. They're at low risk of dying. But if we did that, we could be killing people. Because the reason why they were low risk is because they had those interventions in the past. So here's what's going on in that picture. You have your data, X. And you're trying to make a prediction at some point in time, let's say, emergency department triage. You want to predict some outcome Y, let's say, whether the patient dies at some defined point in the future. Now, the challenge is that, as stated in the machine learning tasks that you saw there, all you had access to was X and Y, the covariates-- the features-- and the outcome. And so you're predicting Y from X, but you're marginalizing over everything that happens in between, in this case, the treatment. So the good outcomes, people surviving, might have been due to what's going on in between. But what's going on in between is not even observed in the data necessarily. So how do we address this problem? Well, the first thing I want you to think about is, can we even recognize that this is a problem? And that's where that article really suggests that using an intelligible model, a model that you can introspect and try to understand a little bit, is actually really important for even recognizing that weird things are happening. And this is a topic which we will talk about in a lecture towards the end of the semester in much more depth-- Jack will talk about algorithms for interpreting machine learning models. So that's important. You've got to recognize what's going on. But what do you do about it? So here are some hacks. Hack number 1-- modify the model. This is the solution that is proposed in the paper you read. They said, OK, if it's a simple rule-based prediction that the learning algorithm outputs to you, you could see the rule that doesn't make sense, you could use your clinical insight to recognize it doesn't make sense. You might even be able to explain why it happened. And then you just remove that rule. So you manually modify the model to push it towards something that's more sensible. All right, so that's what was suggested. And I think it's nonsense. I don't think that's ever going to work in today's world. In today's world of high-dimensional models, there's always going to be surrogates which are somehow picked up by a learning algorithm that you will not even recognize. And it will be really hard to modify it in the way that you want. Maybe it's impossible using the simple approach, by the way. Another interesting research question-- how do you actually make this work in a high-dimensional setting? But for now, let's say we don't know how to do it in a high-dimensional setting. So what are your other choices? Hack number 2 is to redefine the outcome altogether, to change what you're predicting. So for example, if you go back to this picture, and instead of trying to predict Y, death, if you could try to find some surrogate for the thing you care about, which is pre-treatment, and you predict that thing instead, then you'll be back in business. And so, for example, in one of the optional readings for-- or actually I think in the second required reading for today's class, it was a paper about risk stratification for sepsis, which is often caused by infection.
And what they show in that article is that there are laboratory test results, such as lactate, and there are others, which can give you a hint that this patient might be on a path to clinical deterioration. And that test might precede the interventions to try to take care of that condition. And so if you instead change your outcome to be predicting that surrogate, then you're getting around this problem that I just pointed out. Now, a third hack is from one of the optional readings from today's lecture, this paper by Suchi Saria and her students, from Science Translational Medicine 2015. It's a really well-written paper. I highly recommend reading it. In that paper, they suggest formalizing the problem as one of censoring, which is what we'll be talking about for the very last third of today's lecture. In particular, what they say is suppose you see that a patient is treated for the condition. Let's say they're treated for sepsis. Then if the patient is treated for that condition, then we don't know what would have happened to them had they not been treated. So we don't observe the outcome, death given no treatment. And so we're going to treat it as an unknown outcome. And for patients who were not treated, but ended up dying due to sepsis, then they're not censored. And what I'll show you in the later part of the class is how to learn from censored data. So this is another formalization which tries to address this problem that we pointed out. Now, I call these hacks because, really, I think what we should be doing is formalizing it using the language of causality. Once you do this introspection and you realize that there is treatment, in fact, you should be rethinking about the problem as one of now having three quantities of interest. There's the patient, everything you know about them at triage. That's the X-variable I showed you before. There's the outcome, let's say, Y. And then there's that everything that happened in between, in particular the interventions that happened in between. We'll call that T, for treatment. And the question that one would like to ask in order to figure out how to optimally care for the patient is one of, will admission to the ICU, which is the intervention that we're considering here, will that lower the likelihood of death for the patient? And now when I say lower, I don't mean correlation, I mean causation. Will it actually lower the patient's risk of dying? I think we need to hit these questions on the head with actually thinking about causality to try to formalize this properly. And if you do that, this will be a solution which will generalize to the high-dimensional settings that we care about in machine learning. And this will be a topic that we'll talk really in-depth after spring break. But I wanted to give you this as one motivation for why it's so important-- there are many other reasons-- to really think about it from a causal perspective. OK, so subtlety number 3-- there's been a ton of hype in the media about deep learning and health care. A lot of it is very well warranted. For example, the advances we're seeing in areas ranging from radiology and pathology to interpretation of EKGs are all really being transformed by deep learning algorithms. But the problems I've been telling you about for the last couple of weeks, of doing risk stratification on electronic health record data, such as taxed notes, such as lab test results and vital signs, diagnosis codes, that's a different story. 
And in fact, if you look closely at all of the papers, all the papers that have been published in the last few years that have been trying to apply the gauntlet of deep learning algorithms to those problems, in fact, the gains are very small. And so what I'm showing you here is just one example of such a paper. This is a paper that received a lot of media attention. It's a Google paper called "Scalable and Accurate Deep Learning with Electronic Health Records." And if you go across the United States, if you go internationally, you talk to chief medical information officers, they're all going to be telling you about this paper. They've all read it, they've all heard about it, and they all want to use it. But what is this actually doing? What's going on behind the scenes? Well, this paper uses the same sorts of data we've been talking about. It takes vitals, notes, orders, medications, thinks about it as a timeline, summarizes it, then uses a recurrent neural network. It also uses attentional architectures. And there's some pretty smart people on this paper-- you know, Greg Corrado, Jeff Dean, are all co-authors of this paper. They know what they're doing. All right, so they use these algorithms to predict a number of downstream problems-- readmission risk, for example, 30-day readmission, like you read about in your readings for this week. And they see they get pretty good predictions. But if you go to the supplementary material, which is a bit hard to find, but here's the link for all of you, and I'll post it to my slides. And if you look at the very last figure in that supplementary material, you'll see something interesting. So here are those three different tasks that they studied-- inpatient mortality prediction, 30-day readmission, length-of-stay prediction. The first line in each of these buckets is what your deep learning algorithm does. Over here, they have two different hospitals. I think it might have been University of Chicago and Stanford. And they're showing the area under the ROC curve, which we've talked about, performance for each of these tasks for their best models. And in the parentheses, they give confidence intervals-- let's say something like 95% confidence intervals-- for area under the ROC curve. Now, the second line that you see is called full-feature enhanced baseline. It's using the same data, but it's using something very close to the feature representation that you saw in the paper by Narges Razavian, so that paper on diabetes prediction that I told you about and we've been criticizing. So it's using that L1-regularized logistic regression with a smart set of features. And what you see across all three settings is that the results are not statistically significantly different. So let's look at the first one, hospital A, deep learning, 0.95 AUC. This L1-regularized logistic regression, 0.93. 30-day readmission, 0.77, 0.75, 0.86, 0.85. And the confidence intervals are all overlapping. So what's going on? So I think what you're seeing here, first of all, is a recognition by the machine learning community that-- in this case, a late recognition that simpler approaches tend to work well with this type of data. I don't think this was the first thing that they tried. They tried probably the deep learning algorithms first. Second, we're all grasping at this, and we all want to come up with these better algorithms, but so far we're not doing that well. And I'll tell you more about that in just a second.
But before I finish with the slide, I want to give you a punch line I think is really important. You might come home from this and say, you know what, it's not that much better, but it's a little bit better-- 0.95 to 0.93. Suppose those were tight confidence intervals, there might be a few patients whose lives you could save with that. But because of all the issues I've told you about up until now-- non-stationarity, for example-- those gains disappear. In many cases, they even reverse when you actually go to deploy these models because of that data set shift from non-stationarity. It so happens that the simpler models tend to generalize better when your data changes on you. And this is nicely explored in this paper from Kenneth Jung and Nigam Shah in Journal of Biomedical Informatics, 2015. So this is something that I want you to think about. Now let's try to answer why. Well, the areas where we've been seeing recurrent neural networks doing really well-- in, for example, speech recognition, natural language processing, are areas where, often-- for example, you're predicting what is the next word in a sequence of words, the previous few words are pretty predictive. Like, what is the next [PAUSES] that I'm going to say? What is it? AUDIENCE: Word. PROFESSOR: Word, right, and you knew that, right, because it was pretty obvious to predict that. And so the models that are good at predicting for that type of data, it doesn't mean that they should be good for predicting for a different type of sequential data. Sequential data which, by the way, lives in many different time scales. Patients who are hospitalized, you get tons of data for them at a time, and then you might go months without any data on them. Data with lots of missing data. Data with multivariate observations at each point in time, not just a single word at that point in time. All right, so it's a different setting. And we shouldn't expect that the same architectures that have been developed for other problems will generalize immediately to these problems. Now, I do conjecture that there are lots of nonlinear interactions that deep neural networks could be very powerful at exploiting for prediction. But I think they're subtle. And I don't think that we have enough data currently to deal with the fact that the data is messy and that the non-linear interactions are subtle. We just can't find them right now. But this shouldn't mean that we're not going to find them a few years from now. I think this deservedly is a very interesting research direction to work on. And a final reason to point out is that the features that are going into these types of models are actually really cleverly-chosen features. A laboratory test result, like looking at your A1C-- what is A1C? So it's something that had been developed over decades and decades of research, where you recognize that looking at a particular protein is actually informative about a patient's health. So the features that we're using that go into these models were designed-- first, they were designed for humans to look at. And second, they were designed to really help you with decision-making, and they are largely independent from other information that you have about a patient. And all of those are reasons, really, I think why we're observing these subtleties. OK, so for the last 10 minutes of class-- I'm going to have to hold questions, because I want to get through all the material. But please post them to Piazza.
For the last 10 minutes of class, I want to change gears a little bit, and talk about survival modeling. So often we want to talk about predicting time to some event. So this red dot here-- sorry, this black line here is what I mean by an event. That event might be, for example, a patient dying. It might mean a married couple getting divorced. It might mean the day that you graduate from MIT. And the red dot here denotes censored events. So for whatever reason, we don't have data on this patient, patient S3, after time step 4. They were censored. So we do know that the event didn't occur prior to time step 4. But we don't know if and when it's going to occur after time step 4, because we have missing data there. OK, so this is what I mean by right-censored data. So you might ask, why not just use classification-- like binary classification-- in this setting? And that's exactly what we did earlier. We thought about formalizing the diabetes risk stratification problem as looking to see what happens years 1 to 3 after the time of prediction. That was with a gap of one year. And there are a couple of reasons why that's perhaps not what you really wanted to do. First, you have less data to use during training. Because you've suddenly excluded patients-- or said differently, if you have patients who were censored during that time window, you're throwing them out. So you have fewer data points there. That was part of our inclusion/exclusion criteria. Also, when you go to deploy these models, your model might say, yes, this patient is going to develop type 2 diabetes between one and three years from now. But in fact what happens is they develop type 2 diabetes 3.1 years from now. So this example would be counted as a negative, and the prediction would be a false positive. But in reality, your model wasn't actually that bad. We did pretty well. We didn't quite get the right range, but they did get diagnosed with diabetes right outside that time window. And so your measures of performance are going to be pessimistic. You might be doing better than you thought. Now, you can try to address these two challenges in many ways. You can imagine a multi-task learning framework where you try to predict what's going to happen one to two years from now, what's going to happen two to three years from now, three to four, and so on. Each of those are different binary classification models. You might try to tie together the parameters of those models via a multi-task learning formulation. And that will get you closer to what you care about. But what I'll tell you about in the last five minutes is a much more elegant approach to trying to deal with that. And it's akin to regression. So that leads to my second point-- why not just treat this as a regression problem? Predict time to event. You have some continuous valued outcome, the time until diagnosis of diabetes. Just try to minimize mean squared-- minimize your squared error trying to predict that continuous value. Well, the first challenge to think about is, remember where that mean squared error loss function came from. It came from thinking about your data as coming from a Gaussian distribution. And if you do maximum likelihood estimation of this Gaussian distribution, it turns out to look like minimizing a squared loss. So it's making a lot of assumptions about the outcome. For one, it's making the assumption that the outcome could be negative or positive. A Gaussian distribution doesn't have to be positive.
But here we know that T is always non-negative. In addition, there might be long tails. We might not know exactly when the patient's going to develop diabetes, but we know it's not going to be now. It's going to be at some point in the far future. And that may also look very non-Gaussian. So typical regression approaches aren't quite what you want. But there's another really important problem, which is that if you naively remove those censored points-- like, what do you do for the individuals where you never observe the time-- where they never get diabetes, because they were censored? Well, if you just remove those from your learning algorithm, then you're biasing your results. So for example, if you think about the average age of diabetes onset, if you only look at people who actually were observed to get diabetes, it's going to be much closer to now. Because obviously the people who were censored are people who would get it, if at all, only later than the censoring time. So that's another serious problem. So the way we're trying to formalize this mathematically is as follows. Now we should think about having data which has, again, features x, outcome-- what we usually call Y for the outcome in regression, but here I'll call it capital T, because it's the time to the event. And now we have an additional variable-- so it's no longer a pair, now it's a triple-- b. And b is going to be a binary variable, which is saying, was this individual censored-- was the time, t, denoting a censoring event, or was it denoting the actual event happening? So it's distinguishing between the red and the black. So black is b equals 0. Red is b equals 1. OK, so now we can talk about learning a density, P of t, which I'll also call f of t, which is the probability of death at time t. And associated with any density, of course, is the cumulative distribution function, which is the integral from 0 to any point of the density. Here we'll actually look at 1 minus the CDF, what's called the survival function. So it's looking at probability of T, actual time of the event, being larger than some quantity, little t. And that's, of course, just the integral of the density from little t to infinity. All right, so this is the survival function. It's of a lot of interest. You want to know, is the patient going to be diagnosed with diabetes two or more years from now? So pictorially, what you're interested in is something like this. You want to estimate these conditional distributions. So I call it conditional because you want to condition on the covariates of the individual, x. So what I'm showing you, this black line, is your density, little f of t. And this white area here, the integral from little t to infinity, meaning all this white area, is capital S of t. It's the probability of surviving longer than time little t. OK, so the first thing you might do is say, we get these data, these tuples, and we want to try to estimate that function, little f, the probability of death at some time. Or, equivalently, you might want to estimate the survival function, capital S of t, which is the complement of the CDF. And these two are related to one another just by some calculus. So a method called the Kaplan-Meier estimator is a non-parametric method for estimating that survival probability, capital S of t. So this is the probability that an individual lives more than some time period. So first I'll explain to you this plot, then I'll tell you how to compute it. So the x-axis of this plot is time. The y-axis is this survival probability, capital S of t.
It's the probability that an individual lives more than this amount of time. I think this x-axis is in days, so 500, 1,000, 1,500, 2,000. This figure, by the way, was created by one of my students who's studying a multiple myeloma data set. So you could then ask, well, under what covariates do you want to compute this survival? So here, this method I'll tell you about, is very good for when you don't have any features. So all you want to do is estimate that density by itself. And of course you could apply a method for multiple populations. So what I'm showing you here is applying it for two different populations. Suppose there's just a single binary feature. And we're going to apply it to the x equals 0 and to x equals 1. That gets you two different curves out. But here the estimator is going to work independently for each of the two populations. So what you see here on this red line is for the x equals 0 population. We see that, at time 0, everyone is alive, as you would expect. And at time 1,000, roughly 60% of individuals are still alive. And that sort of stays constant. Now you see that, for the other subgroup, the x equals 1 subgroup, again, time step 0, as you would expect, everyone is alive. But they survive much longer. At time step 1,000, over 75% of them are still alive. And of course of interest here are also confidence bands. I'm not going to tell you how you can compute those, but it's in some of the optional readings. And by the way, there are more optional readings given on the bottom of these slides. And so you see that there is a statistically significant difference between x equals 1 and x equals 0. These people seem to be surviving longer than these people. And you get that immediately from this curve. So how do we compute that? Well, we take those observed times, those capital Ts, and here I'm going to call them just y. I'm going to sort them. So these are sorted times. And I don't care whether they were censored or not censored. So y is just all of the times for all of the patients, whether they are censored or not. d_K I want you to think about as 1. It's the number of events that occurred at that time. So if everyone had a unique time of censoring or death, then d_K is always 1. K is indexing one of these things. n of K is the number of individuals alive and uncensored by the K-th time point. Then what this estimator says is that S of t-- so the estimator at any point in time-- is given to you by the product over K such that y of K is less than or equal to t. So it's going over the observed times up to little t, of 1 minus the ratio of 1 over-- so I'm thinking about d_K as 1-- 1 over the number of people who are alive and uncensored by that time. And that has a very intuitive definition. And one can prove that this estimator gives you a consistent estimate of the number of people who are alive-- sorry, of the survival probability at any one point in time for censored data. And that's critical. This works for censored data. So I'm past time today. So I'll finish the last few slides on Tuesday's lecture. So that's all for today. Thanks.
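To make the Kaplan-Meier computation described above concrete, here is a minimal sketch in plain Python/NumPy. The function name, the tie-handling details, and the toy numbers are illustrative choices, not taken from the lecture or its slides; it follows the lecture's convention that b = 1 means censored and b = 0 means the event was observed.

```python
import numpy as np

def kaplan_meier(times, censored):
    """Kaplan-Meier estimate of S(t) = P(T > t) from right-censored data.

    times    : observed times y_i (event time if uncensored, censoring time otherwise)
    censored : b_i, with b = 0 meaning the event was observed and b = 1 meaning censored

    Uses the product-limit formula S(t) = prod over k with y_k <= t of (1 - d_k / n_k),
    where d_k is the number of events at time y_k and n_k is the number of individuals
    still alive and uncensored just before y_k.
    """
    times = np.asarray(times, dtype=float)
    censored = np.asarray(censored, dtype=int)
    order = np.argsort(times)
    times, censored = times[order], censored[order]

    n = len(times)
    event_times, survival = [], []
    S = 1.0
    i = 0
    while i < n:
        t = times[i]
        n_k = n - i                     # number at risk just before time t
        d_k = 0                         # events (not censorings) exactly at time t
        j = i
        while j < n and times[j] == t:
            d_k += 1 - censored[j]
            j += 1
        if d_k > 0:                     # censorings alone do not change S(t)
            S *= 1.0 - d_k / n_k
            event_times.append(t)
            survival.append(S)
        i = j
    return np.array(event_times), np.array(survival)

# Toy example (made-up numbers): six patients, two of them censored.
y = [2, 3, 4, 4, 6, 8]
b = [0, 0, 1, 0, 0, 1]                  # 1 = censored, 0 = event observed
print(kaplan_meier(y, b))               # S drops only at the observed event times
```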
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019
24_Robustness_to_Dataset_Shift.txt
[SQUEAKING] [RUSTLING] [CLICKING] DAVID SONTAG: OK, so then today's lecture is going to be about data set shifts, specifically how one can be robust to data set shift. Now, this is the topic that we've been alluding to throughout the semester. And the setting that I want you to be thinking about is as follows. You're a data scientist working at, let's say, Mass General Hospital, and you've been very careful in setting up your machine learning task to make sure that the data is well specified, the labels that you're trying to predict are well specified. You train on a valid-- you train on your training data, you test it on a held-out set, you see that the model generalizes well, you do chart review to make sure what you're predicting is actually what you think you're predicting, and you even do prospective deployment where you then let your machine learning algorithm drive some clinical decision support, and you see things are working great. Now what? What happens after this stage when you go to deployment? What happens when your same model is going to be used not just tomorrow but also next week, the following week, the next year? What happens if your model, which is working well at this one hospital, then wants to-- then there's another institution, say, maybe Brigham and Women's Hospital, or maybe UCSF, or some rural hospital in the United States wants to use the same model, will it keep working in this "short term to the future" time period or in a new institution? That's the question which we're going to be talking about in today's lecture. And we'll be talking about how one can deal with data set shift of two different varieties. The first variety is adversarial perturbations to data, and the second variety is data that changes for natural reasons. Now, the reason why it's not at all obvious that your machine learning algorithm should still work in this setting is because the number one assumption we make when we do machine learning is that your training distribution, your training data, is drawn from the same distribution as your test data. So if you now go to a setting where your data distribution has changed, even if you've computed your accuracy using your held-out data and it looks good, there's no reason that should continue to look good in this new setting, where the data distribution has changed. A simple example of what it means for a data distribution to change might be as follows. Suppose that we have x as input data, and we're trying to predict some label y, which might mean something like whether a patient has-- or will be newly diagnosed with-- type 2 diabetes. And this is an example which we talked about when we introduced risk stratification: you learn a model to predict y from x. And now suppose you go to a new institution where their definition of what type 2 diabetes means has changed. For example, maybe they don't actually have type 2 diabetes coded in their data, maybe they only have diabetes coded in their data, which is lumping together both type 1 and type 2 diabetes, type 1 being what's usually juvenile diabetes and is actually a very distinct disease from type 2 diabetes. So now the notion of what diabetes is is different. Maybe the use case is also slightly different. And there's no reason, obviously, that your model, which was used to predict type 2 diabetes, would work for that new label.
Now, this is an example of a type of data set shift for which it's perhaps obvious that nothing should work, because here the distribution of P of y given x changes. Meaning, even for the same individual, your distribution P_0(y | x) at, let's say, one institution and your distribution P_1(y | x) at another are now two different distributions if the meaning of the label has changed. So for the same person, there might be a different distribution over what y is. So this is one type of data shift. And a very different type of data set shift is where we assume that these two are equal. And so that would, for example, rule out this type of data set shift. But rather what changes is P of x from location 1 to location-- to location 2. And this is the type of data set shift which will be focused on in today's lecture. It goes by the name of covariate shift. And let's look at two different examples of that. The first example would be of an adversarial perturbation. And so we've-- you've all seen the use of convolutional neural networks for image classification problems. This is just one illustration of such an architecture. And with such an architecture, one could then attempt to do all sorts of different object classification or image classification tasks. You could take as input this picture of a dog, which is clearly a dog. And you could modify it just a little bit. Just add in a very small amount of noise. What I'm going to do is now I'm going to create a new image which is that original image. Now with every single pixel, I'm going to add a very small epsilon in the direction of that noise. And what you get out is this new image, which you could stare at however long you want, you're not going to be able to tell the difference. Basically to the human eye, these two look exactly identical. Except when you take your machine learning classifier, which is trained on original unperturbed data, and now apply it to this new image, it's classified as an ostrich. And this observation was published in a paper in 2014 called "Intriguing properties of neural networks." And it really kickstarted a huge surge of interest in the machine learning community on adversarial perturbations to machine learning. So asking questions, if you were to perturb inputs just a little bit, how does that change your classifier's output? And could that be used to attack machine learning algorithms? And how can one defend against it? By the way, as an aside, this is actually a very old area of research. And even back in the land of linear classifiers, these questions had been studied. Although I won't get into it in this course. So this is a type of data set shift in the sense that what we want is that this should still be classified as a dog. So the actual label hasn't changed. We would like the distribution over the labels, given the perturbed input, to be unchanged; it's only the distribution of inputs that is a little bit different, because we're allowing for some noise to be added to each of the inputs. And in this case, the noise actually isn't random, it's adversarial. And towards the end of today's lecture, I'll give you an example of how one can actually generate the adversarial image, which can change the classifier.
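As a rough preview of what such a construction can look like, here is a minimal sketch of the fast gradient sign method of Goodfellow et al.; this is one standard attack, not necessarily the exact construction the lecture walks through later, and the tiny linear "classifier" and random images below are stand-ins purely for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """Return x' = x + epsilon * sign(grad_x loss), the fast gradient sign attack.

    model   : any differentiable classifier mapping images to logits
    x       : input image batch of shape (N, C, H, W), pixel values in [0, 1]
    y       : true labels of shape (N,)
    epsilon : per-pixel perturbation size (small, e.g. 2/255)
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Nudge every pixel by +/- epsilon in the direction that increases the loss,
    # then clamp back into the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Illustrative usage with a made-up, untrained model (not a real image classifier):
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)            # four fake 32x32 RGB "images"
y = torch.randint(0, 10, (4,))
x_adv = fgsm_perturb(model, x, y, epsilon=2.0 / 255)
print((x_adv - x).abs().max())          # the change per pixel is at most epsilon
```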
Now, the reason why we should care about these types of things in this course are because I expect that this type of data set shift, which is not at all natural, it's adversarial, is also going to start showing up in both computer vision and non-computer vision problems in the medical domain. There was a nice paper by Sam Finlayson, Andy Beam, and Isaac Kohane recently, which presented several different case studies of where these problems could really arise in health care. So, for example, here what we're looking at is an image classification problem arising from dermatology. You're given as input an image. For example, you would like that this image be classified as an individual having a particular type of skin disorder, a nevus, and this other image, melanoma. And what one can see is that with a small perturbation of the input, one can completely swap the label that would be assigned to it from one to the other. And in this paper, which we're going to post as optional readings for today's course, they talk about how one could maliciously use these algorithms for benefit. So, for example, imagine that a health insurance company now decides in order to reimburse for an expensive biopsy of a patient's skin, a clinician or a nurse must first take a picture of the disorder and submit that picture together with the bill for the procedure. And imagine now that the insurance company were to have a machine learning algorithm be an automatic check, was this procedure actually reasonable for this condition? And if it isn't, it might be flagged. Now, a malicious user could perturb the input such that it would, despite the patient having perhaps even completely normal-looking skin, could nonetheless be classified by a machine learning algorithm as being abnormal in some way, and thus perhaps could get reimbursed by that procedure. Now, obviously this is an example of a nefarious setting where we would then hope that such an individual would be caught by the police, sent to jail. But nonetheless, what we would like to be able to do is build checks and balances into the system such that that couldn't even happen because to a human it's obvious that you shouldn't be able to trick-- trick anyone with such a very minor perturbation. So how do you build algorithms that could also be not tricked as easily as humans wouldn't be tracked? AUDIENCE: Can I ask a question DAVID SONTAG: Yeah. AUDIENCE: For any of these samples, did the attacker need access to the network? Is there a way to [? attack it? ?] DAVID SONTAG: So the question is whether the attacker needs to know something about the function that's being used for classifying. There are examples of both what are called white box and black box attacks, where in one setting you have access to the function and other settings you don't. And so both have been studied in the literature, and there are results showing that one can attack in either setting. Sometimes you might need to know a little bit more. Like, for example, sometimes you need to have the ability to query the function a certain number of times. So even if you don't know exactly what the function is, like you don't know the weights of the neural network, as long as you can query it sufficiently many times, you'll be able to construct adversarial examples. That would be one approach. Another approach would be, oh, maybe we don't know the function, but we know something about the training data. So there are ways to go about doing this even if you don't perfectly know the function. 
Does that answer your question? So what about a natural perturbation? So this figure just pulled from lecture 5 when we talked about non-stationarity in the context of risk stratification, that's just to remind you here the x-axis is time, that y-axis is different types of laboratory test results that might be ordered, and the color denotes how many of those laboratory tests were ordered in a certain population at a point in time. So what we would expect to see if the data was stationary is that every row would be a homogeneous color. But instead what we see is that there are points in time, for example, a few month integrals over here, when suddenly it looks like, for some of the laboratory tests, they were never performed. That's most likely due to a data problem, or perhaps the feed of data from that laboratory test provider got lost, there were some systems problem. But they're also going to be settings where, for example, a laboratory test is never used until it's suddenly used. And that might be because it's a new test that was just invented or approved for reimbursement at that point in time. So this is an example of non-stationarity. And, of course, this could also result in changes in your data distribution, such as what I described over there, over time. And the third example is when you then go across institutions, wherein, of course, both the language that might be used-- you might think of a hospital in the United States versus a hospital in China, the clinical notes will be written in completely different languages, that'll would be an extreme case. And a less extreme case might be two different hospitals in Boston where the acronyms or the shorthand they use for some clinical terms might actually be different because of local practices. So, what do we do? This is all a setup. And for the rest of the lecture, what I'll talk about is first, very briefly, how one can build in population-level checks for has something changed. And then the bulk of today's lecture, we'll be talking about how to develop transfer learning algorithms and how one could think about defenses to adversarial attacks. So before I show you that first slide for bullet one, I want to have a bit of discussion. You've suddenly done that thing of learning machine learning algorithm in your institution, and you want to know, will this algorithm work at some other institution? You pick up the phone, you call up your collaborating data scientists at another institution, what are the questions that you should ask them when we're trying to understand, will your algorithm work there as well? Yeah. AUDIENCE: What kind of lab test information they collect [INAUDIBLE]. DAVID SONTAG: So what type of data do they have on their patients, and do they have similar data types or features available for their patient population? Other ideas, someone who hasn't spoken in the last two lectures, maybe someone in the far back there, people who have their computer out. Maybe you with your hand in your mouth right there, yeah, you with your glasses on. Ideas. [STUDENT LAUGHS] AUDIENCE: Sorry, can you repeat the question? DAVID SONTAG: You want me to repeat the question? The question was as follows. You learn your machine learning algorithm at some institution, and you want to apply it now in a new institution. What questions should you ask of that new institution to try to assess whether your algorithm will generalize in that new institution? 
AUDIENCE: I guess it depends on your problem you're looking at, like whether you're trying to learn possible differences in your population, if you're requiring data with particular [INAUDIBLE] use. So I'd envision it that you'd want to, like are your machines calibrated [INAUDIBLE]?? Do they use techniques to acquire the data? DAVID SONTAG: All right. So let's break down each of the answers that you gave. The first answer that you gave was, are there differences in the population? What would be an exa-- someone else now, what are we an example of a difference in a population? Yep. AUDIENCE: Age distribution You might have younger people in maybe Boston versus like a Massachusetts [INAUDIBLE].. DAVID SONTAG: So you might have younger people in Boston versus older people who are in Central Massachusetts. How might a change in age distribution affect your ability of your algorithms to generalize? Yep. AUDIENCE: [? Possibly ?] health patterns, where young people are very different from [INAUDIBLE] who have some diseases that are clearly more prevalent in populations that are older [? than you. ?] DAVID SONTAG: Thank you. So sometimes we might expect a different just set of diseases to occur for a younger population versus an older population. So I type 2 diabetes, hypertension, these are diseases that are often diagnosed when patients-- when individuals are 40s, 50s, and older. If you have people who are in their 20s, you don't typically see those diseases in a younger population. And so what that means is if your model, for example, was trained on a population of very young individuals, then it might not be able to-- and suppose you're doing something like predicting future cost, so something which is not directly tied to the disease itself, the features that are predictive of future cost in a very young population might be very different from features-- for predictors of cost in a much older population because of the differences in conditions that those individuals have. Now the second answer that was given had to do with calibration of instruments. Can you elaborate a bit about that? AUDIENCE: Yeah. So I was thinking [? clearly ?] in the colonoscopy space. But if you're collecting-- so in that space, you're collecting videos of colons. And so you can have machines that are calibrated very differently, let's say different light exposure, different camera settings. But you also have that the GIs and physicians have different techniques as to how they explore the colon. So the video data itself is going to be very different. DAVID SONTAG: So the example that was given was of colonoscopies and data that might be collected as part of that. And the data that could be-- the data that could be collected could be different for two different reasons. One, because the-- because the actual instruments that are collecting the data, for example, imaging data, might be calibrated a little bit differently. And a second reason might be because the procedures that are used to perform that diagnostic test might be different in each institution. Each one will result in slightly different biases to the data, and it's not clear that an algorithm trained on one type of procedure or one type of instrument would generalize to another. So these are all great examples. And so when one reads a paper from the clinical community on developing a new risk stratification tool, what you will always see in this paper is what's known as "Table 1." Table 1 looks a little bit like this. 
Here I pulled one of my own papers that was published in JAMA Cardiology for 2016 where we looked at how to try to find patients with heart failure who are hospitalized. And I'm just going to walk through what this table is. So this table is describing the population that was used in the study. At the very top, it says these are characteristics of 47,000 hospitalized patients. Then what we've done is, using our domain knowledge, we know that this is a heart failure population, and we know that there are a number of different axes that differentiate patients who are hospitalized that have heart failure. And so we enumerate over many of the features that we think are critical to characterizing the population, and we give descriptive statistics on each one of those features. You always start with things like age, gender, and race. And so here, for example, the average age was 61 years old, this was, by the way, NYU Medical School, 50.8% female, 11.2% Black, African-American, 17.6% of individuals were on Medicaid, which was a state-provided health insurance for either disabled or lower-income individuals. And then we looked at quantities like what types of medications were patients on. 41% of-- 42% of inpatient patients were on something called beta blockers. 31.6% of outpatients were on beta blockers. We then looked at things like laboratory test results. So one can look at the average creatinine values, the average sodium values of this patient population. And in this way, it described what is the population that's being studied. Then when you go to the new institution, that new institution receives not just the algorithm, but they also receive this Table 1 that describes a population in which the algorithm was learned on. And they could use that together with some domain knowledge to think through questions like what we were eliciting-- what I elicited from you in our discussion so that we could think, is it actually-- does it make sense that this model will generalize to this new institution? Are the reasons why it might not? And you could do that even before doing any prospective evaluation on the new population. So almost all of you should have something like Table 1 in your project write-ups because that's an important part of any study in this field is describing, what is the population that you're doing your study on? You agree with me, Pete? PETER SZOLOVITS: Yeah. I would just at that Table 1, if you're doing a case control study, you will have two columns that show the distributions in the two populations, and then a p-value of how likely those differences are to be significant. And if you leave that out, you can't get your paper published. DAVID SONTAG: I'll just repeat Pete's answer for the recording. If you are-- this table is for a predictive problem. But if you're thinking about a causal inference type problem, where there's a notion of different intervention groups, then you'd be expected to report the same sorts of things, but for both the case population, the people who received, let's say, treatment one, and the control population of people who receive treatment zero. And then you would be looking at differences between those populations as well at the individual feature level as part of the descriptive statistics for that study. Now, this-- yeah. AUDIENCE: Is this to identify [? individually ?] [? between ?] those peoples? [INAUDIBLE] institutions to do like t-tests on those tables-- DAVID SONTAG: To see if they're different? No, so they're always going to be different. 
You go to a new institution, it's always going to look different. And so just looking to see how something changed is not-- the answer's always going to be yes. But it enables a conversation to think through, OK, this, and then you might look-- you might use some of the techniques that Pete's going to talk about next week on interpretability to understand, well, what is the model actually using. Then you might ask, oh, OK, well, the model is using this thing, which makes sense in this population but might not make sense in another population. And it's these two things together that make the conversation. Now, this question has really come to the forefront in recent years in close connection to the topic that Pete discussed last week on fairness in machine learning. Because you might ask if a classifier is built in some population, is it going to generalize to another population if that population that has learned on was very biased, for example, it might have been all white people. You might ask, is that classifier going to work well in another population that might perhaps include people of different ethnicities? And so that has led to a concept which was recently published. This working draft that I'm showing the abstract from was just a few weeks ago called "Datasheets for data sets." And the goal here is to standardize the process of describing-- of eliciting the information about what is it about the data set that really played into your model? And so I'm going to walk you through very briefly just through a couple of elements of what an example data set for a datasheet might look like. This is too small for you to read, but I'll blow up one section in just a second. So this is a datasheet for a data set called Studying Face Recognition in an Unconstrained Environment. So it's for computer vision problem. There are going to be a number of questionnaires, which this paper that I point you to outlines. And you as the model developer go through that questionnaire and fill out the answers to it, so including things about motivation for the data set creation composition and so on. So in this particular instance, this data set called Labeled Faces in the Wild was created to provide images that study face recognition in an unconstrained [INAUDIBLE] settings, where image characteristics such as pose, elimination, resolution, focus cannot be controlled. So it's intended to be real-world settings. Now, one of the most interesting sections of this report that one should release with the data set has to do with how was the data preprocessed or cleaned? So, for example, for this data set, it walks through the following process. First, raw images were obtained from the data set, and it consisted of images and captions that were found together with that image in news articles or around the web. Then there was a face detector that was run on the data set. Here were the parameters of the face detector that were used. And then remember, the goal here is to study face detection. And so-- so one has to know, how were the-- how were the labels determined? And how would one, for example, eliminate if there was no face in this image? And so there they described how a face was detected and how a region was determined to not be a face in the case that it wasn't. And finally, it describes how duplicates were removed. And if you think back to the examples we had earlier in the semester from medical imaging, for example in pathology and radiology, similar data set constructions had to be done there. 
For example, one would go to the PACS, where radiology images are stored, one would-- one would decide which images are going to be pulled out, one would go to radiology reports to figure out how to extract the relevant findings from that image, which would give the labels for that predictive-- for that learning task. And each step there will incur some bias, which one then needs to describe carefully in order to understand what the bias of the learned classifier might be. So I won't go into more detail on this now, but this will also be one of the suggested readings for today's course. And it's a fast read. I encourage you to go through it to get some intuition for what questions we might want to be asking about data sets that we create. And for the rest of this semester-- for the rest of the lecture today, I'm now going to move on to some more technical issues. So we have to do it. We're doing machine learning now. The populations might be different. What do we do about it? Can we change the learning algorithm in order to hope that your algorithm might transfer better to a new institution? Or if we get a little bit of data from that new institution, could we use that small amount of data from the new institution or from a future time point to retrain our model to do well on that slightly different distribution? So that's the whole field of transfer learning. So you have data drawn from one distribution, p of x and y, and maybe we have a little bit of data drawn from a different distribution q of x,y. And under the covariate shift assumption, I'm assuming that q(x,y) is equal to q of x times p of y given x, namely that the conditional distribution of y given x hasn't changed. The only thing that might have changed is your distribution over x. So that's what the covariate shift assumption would assume. So suppose that we have some small amount of data drawn from the new distribution q. How could we then use that in order to perhaps retrain our classifier to do well for that new institution? So I'll walk through four different approaches to do so. I'll start with linear models, which are the simplest to understand, and then I'll move on to deep models. The first approach is something that you've seen already several times in this course. We're going to think about transfer as a multi-task learning problem, where one of the tasks has much less data than the other task. So if you remember when we talked about disease progression modeling, I introduced this notion of regularizing the weight vectors so that they could be close to one another. At that time, we were talking about weight vectors predicting disease progression at different time points in the future. We could use exactly the same idea here, where you take your classifier, your linear classifier that was trained on a really large corpus, I'm going to call that-- I'm going to call the weights of that classifier w old, and then I'm going to solve a new optimization problem, which is minimizing, over the weights w, some loss. So this is where your training-- your new training data come in. So I'm going to assume that the new training set D is drawn from the q distribution. And I'm going to add on a regularization that asks that w should stay close to w old.
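A minimal sketch of that objective for a logistic-regression-style model is below; the particular penalty weight, learning rate, and plain gradient-descent loop are illustrative assumptions rather than the setup used in any specific paper.

```python
import numpy as np

def finetune_linear_model(X_new, y_new, w_old, lam=1.0, lr=0.1, n_steps=1000):
    """Refit a linear classifier on a small target-site dataset while penalizing
    movement away from the source-site weights:

        min_w  (1/n) * sum_i log(1 + exp(-y_i * w . x_i))  +  lam * ||w - w_old||^2

    X_new : (n, d) feature matrix from the new institution
    y_new : (n,) labels in {-1, +1}
    w_old : (d,) weights learned on the large source-site dataset
    lam   : how strongly w is tied to w_old (larger lam => smaller change)
    """
    w = w_old.copy()
    for _ in range(n_steps):
        margins = y_new * (X_new @ w)
        # gradient of the average logistic loss ...
        grad = -(X_new * (y_new / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
        # ... plus the gradient of the proximity penalty
        grad += 2.0 * lam * (w - w_old)
        w -= lr * grad
    return w

# Illustrative usage on random stand-in data (not real EHR features):
rng = np.random.default_rng(0)
w_old = rng.normal(size=20)                   # weights from the source site
X_new = rng.normal(size=(50, 20))             # small sample from the target site
y_new = np.sign(X_new @ w_old + rng.normal(scale=2.0, size=50))
w_new = finetune_linear_model(X_new, y_new, w_old)
print(np.linalg.norm(w_new - w_old))          # stays close to w_old when lam is large
```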
Now, if the amount of data you have-- if D, the data from that new institution, was very large, then you wouldn't need this at all because you would be able to just-- you would be able to ignore the classifier that you learned previously and just refit everything to that new institution's data. Where something like this is particularly valuable is if there was a small amount of data set shift, and you only have a very small amount of labeled data from that new institution, then this would allow you to change your weight vector just a little bit. So if this coefficient was very large, it would say that the new w can't be too far from the old w. So it'll allow you to shift things a little bit in order to do well on the small amount of data that you have. So, for example, if there is a feature which was previously predictive, but that feature is no longer present in the new data set, so, for example, it's all identically zero, then, of course, the new weight vect-- the new weight for that feature is going to be set to 0, and that weight you can think about as being redistributed to some of the other features. Does this make sense? Any questions? So this is the simplest approach to transfer learning. And before you ever try anything more complicated, always try this. Uh, yep. So the second approach is also with a linear model, but here we're no longer going to assume that the features are still useful. So there might-- when you go from-- when you go from a-- your first institution, let's say, MGH on the left, you learn your model, and you can apply it to some new institution, let's say, UCSF on the right, it could be that there is some really big change in the feature set such that-- such that the original features are not at all useful for the new feature set. And a really extreme example of that might be the setting that I gave earlier when I said, your model's trained on English, and you're testing it out in Chinese. That would be an example-- if you use a bag of words model, that would be an example where your model, obviously, wouldn't generalize at all because your features are completely different. So what would you do in that setting? What's the simplest thing that you might do? So you're taking a text classifier learned in English, and you want to apply it in a setting where that language is Chinese. What would you do? AUDIENCE: Translate them. DAVID SONTAG: Translate, you said. And there was another answer. AUDIENCE: Or try to train an RNN. DAVID SONTAG: Train an RNN to do what? AUDIENCE: To translate. DAVID SONTAG: Train an RNN-- oh, OK. So assume that you have some ability to do machine translation, you translate from English to-- from Chinese to English. It has to be that direction because the original classifier was trained in English. And then your new function is the composition of the translation and the original function, right? And then you can imagine doing some fine tuning if you had a small amount of data. Now, the simplest translation function might be to just use a dictionary. So you look up a word, and if that word has an analog in another language, you say, OK, this is the translation. But there are always going to be some words in your language which don't have a very good translation. And so you might imagine that the simplest approach would be to translate, but then to just drop out words that don't have a good analog and force your classifier to work with, let's say, just the shared vocabulary.
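Here is a minimal sketch of that shared-vocabulary idea for bag-of-words (or bag-of-events) features; the dictionary, the token names, and the counts below are made up purely for illustration.

```python
def restrict_to_shared_vocabulary(counts_source, translation):
    """Map a bag-of-words feature dictionary from the source language/EHR into the
    target vocabulary, dropping any feature that has no good translation.

    counts_source : dict mapping source-side tokens to counts
    translation   : dict mapping source tokens to target tokens; tokens missing
                    from this dict are simply dropped
    """
    counts_target = {}
    for token, count in counts_source.items():
        if token in translation:                      # keep only the shared part
            target_token = translation[token]
            counts_target[target_token] = counts_target.get(target_token, 0) + count
    return counts_target

# Made-up example: two "dialects" of event descriptions at two hospitals.
translation = {"cvp alarm": "central venous pressure alarm", "heparin": "heparin"}
source_patient = {"cvp alarm": 3, "heparin": 1, "site-specific code 5814": 2}
print(restrict_to_shared_vocabulary(source_patient, translation))
# {'central venous pressure alarm': 3, 'heparin': 1} -- the untranslatable feature is dropped
```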
Everything we're talking about here is an example of a manually chosen decision. So we're going to manually choose a new representation for the data such that we have some amount of shared features between the source and target data sets. So let's talk about electronic health record 1 and electronic health record 2. By the way, the slides that I'll be presenting here are from a paper published in KDD by Jan, Tristan, your instructor, Pete, and John Guttag. So you have to go two electronic health records, electronic health record 1, electronic health record 2. How can things change? Well, it could be that the same concept in electronic health record 1 might be mapped to a different encoding, so that's like an English-to-Spanish type translation, in electronic health record 2. Another example of a change might be to say that some concepts are removed, like maybe you have laboratory test results in electronic health record 1 but not in electronic health record 2. So that's why you see an edge to nowhere. Another change might be there might be new concepts. So the new institution might have new types of data that the old institution didn't have. So what do you do in that setting? Well, one approach we would say, OK, we have some small amount of data from electronic health record 2. We could just train using that and throw away your original data from electronic health record 1. Now, of course, if you only had a small amount of data from the target to distribution, then that's going to be a very poor approach because you might not have enough data to actually learn a reasonable enough model. A second obvious approach would be, OK, we're going to just train on electronic health record 1 and apply it. And for those concepts that aren't present anymore, so be it. Maybe things won't work very well. A third approach, which we were alluding to before when we talked about translation, would be to learn a model just in the intersection of the two features. And what this work does, as they say, we're going to manually redefine the feature set in order to try to find as much common ground as possible. And this is something which really involves a lot of domain knowledge. And I'm going to be using this as a point of contrast from what I'll be talking about in 10 or 15 minutes, where I talk about how one could do this without that domain knowledge that we're going to use here. So the setting that they looked at is one of predicting outcomes, such as in-hospital mortality or length of stay. The model which is going to be used as a bag-of-events model. So we will take a patient's longitudinal history up until the time of prediction. We'll look at different events that occurred. And this study was done using PhysioNet. And MIMIC, for example, events are encoded with some number, like 5814 might correspond to a CVP alarm, 1046 might correspond to pain being present, 25 might correspond to the drug heparin being given and so on. So we're going to create one feature for every event which has some number-- which is encoded with some number. And we'll just say 1 if that event has occurred, 0 otherwise. So that's the representation for a patient. Now, because when one goes though this new institution, EHR2, the way that events are encoded might be completely different. One won't be able to just use the original feature representation. And that's the English-to-Spanish example that I gave. 
But instead, what one could try to do is come up with a new feature set where that feature set could be derived from each of the different data sets. So, for example, since each one of the events in MIMIC has some text description that goes with it-- event 1 corresponds to ischemic stroke, event 2, hemorrhagic stroke, and so on-- one could use that English description of the feature to come up with a way to map it into a common language. In this case, the common language is the UMLS, the Unified Medical Language System that Pete talked about a few lectures ago. So we're going to now say, OK, we have a much larger feature set where we've now encoded ischemic stroke as this concept, which is actually the same ischemic stroke, but also as this concept and that concept, which are more general versions of that original one. So this is just general stroke, and it could be multiple different types of strokes. And the hope is that even if some of these more specific ones don't show up in the new institution's data, perhaps some of the more general concepts do show up there. And then what you're going to do is you're going to learn your model now on this expanded, translated vocabulary, and then translate it. And at the new institution, you'll also be using that same common data model. And that way one hopes to have much more overlap in your feature set. And so to evaluate this, the authors looked at two different time points within MIMIC. One time point was when the Beth Israel Deaconess Medical Center was using an electronic health record called CareVue. And the second time point was when that hospital was using a different electronic health record called MetaVision. So this is actually an example of non-stationarity. Now, because of them using two different electronic health records, the encodings were different. And that's why this problem arose. And so we're going to use this approach, and we're going to then learn a linear model on top of this new encoding that I just described. And we're going to compare the results by looking at how much performance was lost due to using this new encoding, and how well we generalize from the source task to the target task. And so here's the first question, which is, how much do we lose by using this new encoding? So as a comparison point for looking at predicting in-hospital mortality, we'll look at, what is the predictive performance if you were to just use an existing, very simple risk score called the SAPS score? And that's this red line, where that y-axis here is the area under the ROC curve, and the x-axis is how much time in advance you're predicting, so the prediction gap. So using this very simple score, SAPS gets somewhere between 0.75 and 0.80 area under the ROC curve. But if you were to use all of the events data, which is much, much richer than what went into that simple SAPS score, you would get the purple curve, which is SAPS plus the event data, or the blue curve, which is just the events data. And you can see you can get substantially better predictive performance by using that much richer feature set. The SAPS score has the advantage that it's easier to generalize: because it's so simple, those feature elements could be trivially translated to any new EHR, either manually or automatically, and thus it'll always be a viable route.
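A minimal sketch of that expansion step, assuming you already have some lookup from each event's text description to one or more UMLS-style concepts (the mapping and the concept identifiers below are placeholders, not real CUIs):

```python
def expand_to_concepts(event_ids, event_to_concepts):
    """Map each raw event to its specific concept plus more general ancestors,
    and represent the patient by the union of those concepts."""
    concepts = set()
    for e in event_ids:
        concepts |= set(event_to_concepts.get(e, []))
    return concepts

# Placeholder lookup: both stroke events also map to a shared, more general
# "stroke" concept, which is what gives the two EHRs common ground.
event_to_concepts = {
    "evt:ischemic_stroke":    ["CUI_ISCHEMIC_STROKE", "CUI_STROKE"],
    "evt:hemorrhagic_stroke": ["CUI_HEMORRHAGIC_STROKE", "CUI_STROKE"],
}
patient_concepts = expand_to_concepts(["evt:ischemic_stroke"], event_to_concepts)
```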
Whereas this blue curve, although it gets better predictive performance, you have to really worry about these generalization questions. And the same story happens in both the source task and the target task. Now, the second question to ask is, well, how much do you lose when you use the new representation of the data? And so here, looking again at both EHRs, what we see first in red is the same as the blue curve on the previous slide. It's using SAPS plus the item IDs, so using all of the data. And then the blue curve here, which is a bit hard to see, but it's right there, is substantially lower. So that's what happens if you now use this new representation. And you see that you do lose something by trying to find a common vocabulary. The performance does get hit a bit. But what's particularly interesting is when you attempt to generalize, you start to see a swap. So now the colors are going to be quite similar. Red here was at the very top before. So red is using the original representation of the data. Before it was at the very top. Shown here is the training performance on this institution, CareVue. You see, there's so much rich information in the original feature set that it's able to get very good predictive performance. But once you attempt to translate it-- so you train on CareVue, but you test on MetaVision-- then the test performance, shown here by this solid red line, is actually the worst of all of the systems. So there's a substantial drop in performance because not all of these features are present in the new EHR. On the other hand, the translated version, despite the fact that it's a little bit worse when evaluated on the source, generalizes much better. And so you see significantly better performance, shown by this blue curve here, when you use this translated vocabulary. There's a question. AUDIENCE: So would you train with full features? So how do you apply [? with ?] them if the other [? full ?] features are-- you just [INAUDIBLE]. DAVID SONTAG: So, you assume that you have come up with a mapping from the features in both of the EHRs to this common feature vocabulary of CUIs. And the way that this mapping is going to be done in this paper is based on the text of the events. So you take the text-based description of the event, and you come up with a deterministic mapping to this new UMLS-based representation. And then that's what's being used. There's no fine tuning being done in this particular example. So I consider this to be a very naive application of transfer. The results are exactly what you would expect the results to be. And, obviously, a lot of work had to go into doing this. And there's a bit of creativity in thinking that you should use the English-based description of the features to come up with the automatic mapping, but the story ends there. And so a question which all of you might have is, how could you try to do such an approach automatically? How could we automatically find new representations of the data that are likely to generalize from, let's say, a source distribution to a target distribution? And so to talk about that, we're going to now start thinking through representation learning-based approaches, of which deep models are particularly capable.
So the simplest approach to try to do transfer learning in the context of, let's say, deep neural networks would be to just chop off part of the network and reuse some internal representation of the data in this new location. So the picture looks a little bit like this. The data might feed in at the bottom. There might be a number of convolutional layers, then some fully connected layers. And what you decide to do is you're going to take this model that's trained at one institution, and you chop it at some layer-- it might be, for example, prior to the last fully connected layer. Then you take the new representation of your data-- now the representation of the data is what you would get out after doing some convolutions followed by a single fully connected layer-- and you take your target distribution's data, which you might only have a small amount of, and you learn a simple model on top of that new representation. So, for example, you might learn a shallow classifier using a support vector machine on top of that new representation. Or you might add in a couple more layers of a deep neural network, and then fine tune the whole thing end to end. So all of these have been tried. And in some cases, one works better than another. And we saw already one example of this notion in this course. And that was when Adam Yala spoke in lecture 13 about breast cancer and mammography, where in his approach he said that he had tried taking a randomly initialized classifier and comparing that to what would happen if you initialized with a well-known ImageNet-based deep neural network for the problem. And he had a really interesting story that he gave. In his case, he had enough data that he actually didn't need to initialize using this pre-trained model from ImageNet. If he had just done a random initialization-- and this x-axis, I can't remember, it might be hours of training or epochs, I don't remember, it's time-- eventually the random initialization gets to a very similar performance. But for his particular case, if you were to do an initialization with ImageNet and then fine tune, you get there much, much quicker. And so it was for the computational reason that he found it to be useful. But in many other applications in medical imaging, the same tricks become essential because you just don't have enough data in the new test case. And so one makes use of, for example, the filters which one learns from the ImageNet task, which is dramatically different from the medical imaging problem, and then uses those same filters together with a new set of top layers in order to fine tune it for the problem that you care about. So this would be the simplest way to hope for a common representation for transfer in a deep architecture. But you might ask, how would you do the same sort of thing with temporal data, not image data-- maybe data that's from language, or data from time series of health insurance claims? And for that you really want to be thinking about recurrent neural networks. So just to remind you, a recurrent neural network is a recurrent architecture where you take as input some vector. For example, if you're doing language modeling, that vector might be just a one-hot encoding of what the word at that location is. So, for example, this vector might be all zeros, except for the fourth dimension, which is a 1, denoting that this word is the word, quote, "class."
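As a rough PyTorch sketch of that chop-and-reuse recipe with an ImageNet-pretrained backbone (note that torchvision's argument name has changed across versions-- newer releases use weights= instead of pretrained=True):

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone.
model = models.resnet18(pretrained=True)

# Option 1: freeze everything and train only a new head on the target data.
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # new 2-class head for the target task

# Option 2: replace the head the same way, but leave requires_grad=True
# everywhere and fine tune end to end with a small learning rate.
```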
And then it's fed into a recurrent unit, which takes the previous hidden state, combines it with the current input, and gets you a new hidden state. And in this way, you encode the full input. And then you might make a classification based on the hidden state of the last time step. That would be a common approach. And here would be a very simple example of a recurrent unit. Here I'm using s to denote the state. Often you will see h used to denote the hidden state. This is a particularly simple example, where there's just a single non-linearity. So you take your previous hidden state, you hit it with some matrix W_ss, and you add that to the input hit by a different matrix, W_sx. You now have a combination of the input plus the previous hidden state. You apply a non-linearity to that, and you get your new hidden state out. So that would be an example of a typical recurrent unit, a very simple recurrent unit. Now, the reason why I'm going through these details is to point out that the dimension of that W_sx matrix is the dimension of the hidden state, so the dimension of s, by the vocabulary size if you're using a one-hot encoding of the input. So if you have a huge vocabulary, that matrix W_sx is also going to be equally large. And the challenge that that presents is that it would lead to overfitting on rare words very quickly. And so that's a problem that could be addressed by instead using a low-rank representation of that W_sx matrix. In particular, you could think about introducing a lower dimensional bottleneck, which in this picture I'm denoting as x_t prime, which is your original x_t input-- the one-hot encoding-- multiplied by a new matrix W_e. And then your recurrent unit only takes inputs of x_t prime's dimension, which is k, which might be dramatically smaller than V. And you can even think about each column of that intermediate matrix, W_e, as a word embedding. This is something that Pete talked quite a bit about when we were talking about natural language processing. And many of you would have heard about it in the context of things like Word2Vec. So if one wanted to take, for example, one institution's data where you had a huge amount of data, learn a recurrent neural network on that institution's data, and then generalize it to a new institution, one way of trying to do that-- if you think about what is the thing that you chop-- one answer might be that all you do is keep the word embedding. So you might say, OK, I'm going to keep the W_e and carry it over to my new institution. But I'm going to let the recurrent parameters-- for example, that W_ss-- be relearned for each new institution. And so that might be one approach for how to use the same idea that we had from feed forward networks within a recurrent setting. Now, all of this is very general. And what I want to do next is to instantiate it a bit in the context of health care. So since the time that Pete presented the extensions of Word2Vec such as BERT and ELMo-- and I'm not going to go into them now, but you can go back to Pete's lecture from a few weeks ago to remind yourselves what those were-- there are actually three new papers that tried to apply this in the health care context, one of which was from MIT.
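A minimal PyTorch sketch of that architecture, and of transferring only the embedding matrix while the recurrent weights are relearned (sizes and names are illustrative):

```python
import torch.nn as nn

class SimpleRNNClassifier(nn.Module):
    """One-hot words -> k-dimensional embedding (the W_e bottleneck) ->
    a simple tanh recurrent unit -> classification from the last hidden state."""
    def __init__(self, vocab_size, k=128, hidden=256, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, k)        # columns of W_e are word embeddings
        self.rnn = nn.RNN(k, hidden, batch_first=True)  # tanh(W_ss s_{t-1} + W_sx x'_t)
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):                       # token_ids: (batch, seq_len) word indices
        h, _ = self.rnn(self.embed(token_ids))
        return self.out(h[:, -1, :])                    # use the last time step's hidden state

# Transfer sketch: keep W_e from the source institution, relearn everything else.
src = SimpleRNNClassifier(vocab_size=30000)
tgt = SimpleRNNClassifier(vocab_size=30000)
tgt.embed.load_state_dict(src.embed.state_dict())
```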
And so these papers all have the same sort of idea. They're going to take some data set-- and these papers all use MIMIC. They're going to take that text data and learn some word embeddings, or some low-dimensional representations of all words in the vocabulary. In this case, they're not learning a static representation for each word. Instead these BERT and ELMo approaches are going to be learning what you can think of as dynamic representations. They're going to be a function of the word and its context on the left and right-hand sides. And then what they'll do is they'll take those representations and attempt to use them for a completely new task. Those new tasks might be on MIMIC data. So, for example, these two tasks are classification problems on MIMIC. But they might also be on non-MIMIC data. So these two tasks are classification problems on clinical text that didn't even come from MIMIC at all. So it's really an example of translating what you learned from one institution to another institution. These two data sets were super small. Actually, all of these data sets were really, really small compared to the original size of MIMIC. So there might be some hope that one could learn something that really improves generalization. And indeed, that's what plays out. So all these tasks are looking at a concept detection task. Given a clinical note, identify the segments of text within the note that refer to, for example, a disorder, or a treatment, or something else, which you then in a second stage might normalize to the UMLS. So what's really striking about these results is what happens when you go from the left to the right column, which I'll explain in a second, and what happens when you go top to bottom across each one of these different tasks. So the left column are the results-- an F score-- if you were to use embeddings trained on a non-clinical data set, or said differently, not on MIMIC but on some other, more general data set. The second column is what would happen if you trained those embeddings on a clinical data set, in this case, MIMIC. And you see pretty big improvements from the general embeddings to the MIMIC-based embeddings. What's even more striking is the improvements that happen as you get better and better embeddings. So the first row are the results if you were to use just Word2Vec embeddings. And so, for example, for the i2b2 challenge in 2010, you get an 82.65 F score using Word2Vec embeddings. And if you use a very large BERT embedding, you get a 90.25 F measure, which is substantially higher. And the same findings were found time and time again across different tasks. Now, what I find really striking about these results is that I had tried many of these things a couple of years ago, not using BERT or ELMo, but using Word2Vec, and GloVe, and fastText. And what I found is that using word embedding approaches for these problems-- even if you threw them in as additional features on top of other state-of-the-art approaches to this concept extraction problem-- did not improve predictive performance above the existing state of the art. However, in this paper they use the simplest possible algorithm: a recurrent neural network fed into a conditional random field for the purpose of classifying each word into each of these categories. And the features that they used are just these embedding features.
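For concreteness, here is one hedged way to pull contextual token embeddings from a clinically pretrained BERT using the transformers library; the checkpoint name is an assumption about what has been publicly released, not something stated in the lecture:

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "emilyalsentzer/Bio_ClinicalBERT"   # assumed public clinical BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

note = "Patient admitted with ischemic stroke, started on heparin."
inputs = tokenizer(note, return_tensors="pt", truncation=True)
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state   # one contextual vector per token
# These per-token vectors could then feed a downstream tagger, e.g. an RNN + CRF.
```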
So with just the Word2Vec embedding features, the performance is crap. You don't get anywhere close to the state of the art. But with the better embeddings, they actually improved on the state of the art for every single one of these tasks. And that is without any of the manual feature engineering which we have been using in the field for the last decade. So I find this to be extremely promising. Now you might ask, well, that is for one problem, which is classification-- or identification-- of concepts. What about for a predictive problem? So a different paper also published-- what month is it now, May-- so last month, in April, looked at the prediction problem of 30-day readmission using discharge summaries. This was also evaluated on MIMIC. And their evaluation looked at the area under the ROC curve of two different approaches. The first approach is using a bag-of-words model, like what you did in your homework assignment, and the second approach, which is the top row there, is using BERT embeddings, which they call ClinicalBERT. And this, again, is something which I had tackled for quite a long time. So I worked on these types of readmission problems. And a bag-of-words model is really hard to beat. In fact, did any of you beat it in your homework assignment? If you remember, there was an extra question, which is, oh, well, maybe if we used a deep learning-based approach for this problem, maybe you could get better performance. Did anyone get better performance? No. How many of you actually tried it? Raise your hand. OK, so one-- a couple of people who are afraid to say, but yeah. So a couple of people tried, but not many. But I think the reason why it's very challenging to do better with, let's say, a recurrent neural network versus a bag-of-words model is because a lot of the subtlety in understanding the text is in understanding the context of the text. And that's something that these newer embeddings are actually really good at, because they can use the context of words to better represent what each word actually means. And they see substantial improvement in performance using this approach. What about for non-text data? So you might ask, when we have health insurance claims, we have longitudinal data across time. There's no language in this. It's a time series data set. You have ICD-9 codes at each point in time, you have maybe lab test results, medication records. And this is very similar to the MarketScan data that you used in your homework assignment. Could one learn embeddings for this type of data which are also useful for transfer? So one goal might be to say, OK, let's take every ICD-9 or ICD-10 code, every medication, every laboratory test result, and embed those event types into some lower dimensional space. And so here's an example of an embedding. And you see how-- this is just a sketch, by the way-- you might hope that diagnosis codes for autoimmune conditions would all be near each other in some lower dimensional space, medications that treat those conditions should be near each other, and so on. So you might hope that such structure might be discovered by an unsupervised learning algorithm that could then be used within a transfer learning approach. And indeed, that's what we found. So I wrote a paper on this in 2015/16. And here's one of the results from that paper.
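One simple way to learn such event-code embeddings, sketched with gensim's skip-gram word2vec (assuming gensim 4.x, where the dimensionality argument is vector_size); the patient sequences and code names below are made up:

```python
from gensim.models import Word2Vec

# Treat each patient's time-ordered sequence of event codes (diagnoses, labs,
# drugs) as a "sentence" and learn one embedding per code.
patient_sequences = [
    ["710.0", "lab:ANA", "drug:prednisone", "443.0"],
    ["710.0", "lab:ESR", "drug:hydroxychloroquine"],
]
model = Word2Vec(patient_sequences, vector_size=100, window=5, min_count=1, sg=1)
lupus_vector = model.wv["710.0"]                     # embedding for the lupus code
neighbors = model.wv.most_similar("710.0", topn=5)   # nearest codes of any type
```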
So this is just a look at nearest neighbors to give you some sense of whether the embedding is actually capturing the structure of the data. So we looked at nearest neighbors of the ICD-9 diagnosis code 710.0, which is lupus. And what you find is that another diagnosis code, also for lupus, is the first closest result, followed by connective tissue disorder, Sicca syndrome-- which is Sjogren's syndrome-- Raynaud's syndrome, and other autoimmune conditions. So that makes a lot of sense. You can also go across data types, like ask, what is the nearest neighbor from this diagnosis code to laboratory tests? And since we've embedded lab tests and diagnosis codes all in the same space, you can actually get an answer to that. And what you see is that these lab tests-- which, by the way, are exactly the lab tests that are commonly used to understand progression in this autoimmune condition-- are the closest neighbors. Similarly, you can ask the same question about drugs and so on. And by the way, we have made all of these embeddings publicly available on my lab's GitHub. And since the time that I wrote this paper, there have been a number of other papers, that I give citations to at the bottom here, tackling a very similar problem. This last one also made their embeddings publicly available, and it is much larger than the one that we had. So these things, I think, would also be very useful as one starts to think about how one can transfer knowledge learned at one institution to another institution where you might have much less data. So finally I want to return to the question that I raised in bullet two here, where we looked at a linear model with a manually chosen representation, and ask: instead of just naively chopping your deep neural network at some layer and then fine tuning, could one have learned a representation of your data specifically for the purpose of encouraging good generalization to a new institution? And there has been some really exciting work in this field that goes by the name of Unsupervised Domain Adaptation. So the setting that's considered here is where you first have data from some institution, which comes as (x, y) pairs. But then you want to do prediction at a new institution where all you have access to at training time is x. So as opposed to the transfer settings that I talked about earlier, for this new institution you might have a ton of unlabeled data. Whereas before I was talking about having just a small amount of labeled data, I never talked about the possibility of having a large amount of unlabeled data. And so you might ask, how could you use that large amount of unlabeled data from that second institution in order to learn a representation that actually encourages similarity from one institution to the other? And that's exactly what these domain adversarial training approaches do. What they do is they add a second term to the loss function. The intuition is that you're going to try to learn parameters that minimize your loss function evaluated on data set 1, but you're also going to ask that there be a small distance, which I'll just denote as d here, between D1 and D2. And so I'm being a little bit loose with notation here, but when I calculate distance here, I'm referring to distance in representation space.
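One crude, hedged way to write that second term is to penalize the distance between the mean representations of a batch from each institution; this is a simplification of the adversarial distance used in the actual domain-adversarial work:

```python
import torch

def representation_distance(phi_d1, phi_d2):
    """Squared distance between the mean representations of a batch from each
    institution-- a crude, MMD-like stand-in for d(D1, D2)."""
    return ((phi_d1.mean(dim=0) - phi_d2.mean(dim=0)) ** 2).sum()

def combined_loss(pred_d1, y_d1, phi_d1, phi_d2, criterion, alpha=1.0):
    # Supervised loss on institution 1 plus a penalty for representations
    # under which the two institutions look different.
    return criterion(pred_d1, y_d1) + alpha * representation_distance(phi_d1, phi_d2)
```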
So you might imagine taking the middle layer of your deep neural network-- taking, let's say, this layer, which we're going to call the feature layer, or the representation layer-- and you're going to say, I want my data under the first institution to look very similar to the data under the second institution. So the first few layers of your deep neural network are going to attempt to equalize the two data sets so that they look similar to one another, at least in that feature space. And we're going to attempt to find parameters of your model that get good predictive performance on the data set for which you actually have the labels, and for which the induced representations-- let's say, the middle layer-- look very similar across the two data sets. And one way to do that is, for each data point, to try to predict which data set it came from, data set 1 or data set 2. And what you want is that your model should not be able to distinguish which data set it came from. That's what the gradient reversal layer does: you want to ensure that you perform badly on the loss function for predicting which data set the data came from. It's like taking the minus of that loss. And so we're not going to go into the details of that, but I just wanted to give you a reference to that approach at the bottom. And what I want to do is just spend one minute at the very end talking now about defenses to adversarial attacks. And conceptually this is very simple. And that's why I can actually do it in one minute. So we talked about how one could easily modify an image in order to turn the prediction from, let's say, pig to airliner. But how could we change your learning algorithm to make sure that, despite the fact that you do this perturbation, you still get the right prediction out, pig? Well, to think through that, we have to think through, how do we do machine learning? Well, a typical approach to machine learning is to learn some parameters theta that minimize your empirical loss. Often we use deep neural networks, which look a little like this. And we do gradient descent, where we attempt to find some parameters theta that have as low a loss as possible on some loss surface. Now, when you think about an adversarial example and where they come from, typically one finds an adversarial example in the following way. You take your same loss function, now for a specific input x, and you try to find some perturbation delta to x-- an additive perturbation, for example-- such that you increase the loss as much as possible with respect to the correct label y. And if you've increased the loss with respect to the correct label y, then intuitively, when you try to see what you should predict for this new perturbed input, there's going to be a lower loss for some alternative label, which is why the class that's predicted actually changes. So one can try to find these adversarial examples using the same type of gradient-based learning algorithms that one uses for learning in the first place. But instead of gradient descent, you now use gradient ascent. So you take this optimization problem for a given input x, and you try to maximize that loss for that input x with respect to this vector delta, and you're now doing gradient ascent. And so what types of delta should you consider?
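A minimal PyTorch sketch of a gradient reversal layer (the helper names and the lambda weighting are illustrative):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lam on the
    backward pass, so the feature layers are trained to make the institution
    classifier perform badly."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def institution_logits(features, institution_classifier, lam=1.0):
    # The institution classifier sees the features through the reversal layer.
    return institution_classifier(GradReverse.apply(features, lam))
```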
You can imagine small perturbations-- for example, delta that have very small maximum values. That would be an example of an L-infinity norm. Or you could say that the sum of the perturbations across, let's say, all of the dimensions has to be small. That would correspond to an L1 or an L2 norm bound on what delta should be. So now we've got everything we need to think about defenses to this type of adversarial perturbation. So instead of minimizing your typical empirical loss, what we're going to do is attempt to minimize an adversarially robust loss function. What we'll do is we'll say, OK, we want to be sure that no matter what perturbation one adds to the input, the true label y still has low loss. So you want to find parameters theta which minimize this new quantity. So I'm saying that we should still do well even for the worst-case adversarial perturbation. And so now this would be the following new learning objective, where we're going to minimize over theta the maximum over delta. And you have to restrict the family that these perturbations could live in. If that set of deltas you're maximizing over contains only the zero perturbation, you get back the original learning problem. If you let it be, let's say, all L-infinity bounded perturbations of maximum size 0.01, then you're saying we're going to allow for a very small amount of perturbation. And the learning algorithm is going to find parameters theta such that for every input, even with a small, adversarially chosen perturbation to it, you still get good predictive performance. And this is now a new optimization problem that one can solve. And we've now reduced the problem of finding an adversarially robust model to a new optimization problem. And what the field has been doing in the last couple of years is coming up with new optimization approaches to try to solve those problems fast. So, for example, this paper published at ICML in 2018 by Zico Kolter and his student-- Zico just visited MIT a few weeks ago-- said, we're going to use a convex relaxation of the rectified linear unit, which is used in many deep neural network architectures. And then it's going to think about how a small perturbation to the input would be propagated, in terms of how much that could actually change the output. If one can bound, layer by layer, how much a small perturbation affects the output of that layer, then one can propagate from the very bottom all the way to the loss function at the top to try to bound how much the loss function itself changes. And a picture of what you would expect out is as follows. On the left-hand side here, you have data points, red and blue, and the decision boundary that's learned if you didn't do this robust learning algorithm. On the right, you'll notice a small square around each data point. That corresponds to a maximum perturbation of some limited amount. And now you notice how the decision boundary doesn't cross any one of those squares. And that's what would be found by this learning algorithm. Interestingly, one can look at the filters that are learned by the convolutional neural network using this new learning algorithm. And you find that they're much more sparse. And so this is a very fast moving field.
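As a hedged sketch of that min-max training loop, using a PGD-style inner maximization (a common heuristic, not the provable convex relaxation in the paper just mentioned); clamping inputs to a valid pixel range is omitted for brevity:

```python
import torch

def pgd_attack(model, loss_fn, x, y, eps=0.01, step=0.003, n_steps=10):
    """Inner maximization: projected gradient ascent on delta over an
    L-infinity ball of radius eps around x."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_steps):
        loss = loss_fn(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()   # ascend the loss
            delta.clamp_(-eps, eps)             # project back into the allowed set
        delta.grad.zero_()
    return delta.detach()

def robust_training_step(model, loss_fn, optimizer, x, y, eps=0.01):
    """Outer minimization: update theta on the worst-case perturbed inputs."""
    delta = pgd_attack(model, loss_fn, x, y, eps=eps)
    optimizer.zero_grad()
    loss = loss_fn(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```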
Every time a new adversarial defense mechanism comes up, someone comes up with a different type of attack which breaks it. And usually that's for one of two reasons. One, because the defense mechanism isn't provable-- so one could try to come up with a theorem which says, OK, as long as you don't perturb more than some amount, these are the results you should expect. The flip side of the coin is, even if you come up with some provable guarantee, there might be other types of attacks. So, for example, you might imagine a rotation of the input instead of an L-infinity bounded perturbation that you add to it. And so for every new type of attack model, you have to think through new defense mechanisms. And so you should expect to see some iteration in this space. And there's a website called robust-ml.org, where many of these attacks and defenses are being published to allow the academic community to make progress here. And with that, I'll finish today's lecture.
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019
25_Interpretability.txt
PROFESSOR: OK, so the last topic for the class is interpretability. As you know, the modern machine learning models are justifiably reputed to be very difficult to understand. So if I give you something like the GPT2 model, which we talked about in natural language processing, and I tell you that it has 1.5 billion parameters and then you say, why is it working? Clearly the answer is not because these particular parameters have these particular values. There is no way to understand that. And so the topic today is something that we raised a little bit in the lecture on fairness, where one of the issues there was also that if you can't understand the model you can't tell if the model has baked-in prejudices by examining it. And so today we're going to look at different methods that people have developed to try to overcome this problem of inscrutable models. So there is a very interesting bit of history. How many of you know of George Miller's 7 plus or minus 2 result? Only a few. So Miller was a psychologist at Harvard, I think, in the 1950s. And he wrote this paper in 1956 called "The Magical Number 7 Plus or Minus 2-- Some Limits On Our Capacity for Processing Information." It's quite an interesting paper. So he started off with something that I had forgotten. I read this paper many, many years ago. And I'd forgotten that he starts off with the question of how many different things can you sense? How many different levels of things can you sense? So if I put headphones on you and I ask you to tell me on a scale of 1 to n how loud is the sound that I'm playing in your headphone, it turns out people get confused when you get beyond about five, six, seven different levels of intensity. And similarly, if I give you a bunch of colors and I ask you to tell me where the boundaries are between different colors, people seem to come up with 7 plus or minus 2 as the number of colors that they can distinguish. And so there is a long psychological literature of this. And then Miller went on to do experiments where he asked people to memorize lists of things. And what he discovered is, again, that you could memorize a list of about 7 plus or minus 2 things. And beyond that, you couldn't remember the list anymore. So this tells us something about the cognitive capacity of the human mind. And it suggests that if I give you an explanation that has 20 things in it, you're unlikely to be able to fathom it because you can't keep all the moving parts in your mind at one time. Now, it's a tricky result, because he does point out even in 1956 that if you chunk things into bigger chunks, you can remember seven of those, even if they're much bigger. And so people who are very good at memorizing things, for example, make up patterns. And they remember those patterns, which then allow them to actually remember more primitive objects. So you know-- and we still don't really understand how memory works. But this is just an interesting observation, and I think plays into the question of how do you explain things in a complicated model? Because it suggests that you can't explain too many different things because people won't understand what you're talking about. OK. So what leads to complex models? Well, as I say, overfitting certainly leads to complex models. I remember in the 1970s when we started working on expert systems in healthcare, I made a very bad faux pas. I went to the first joint conference between statisticians and artificial intelligence researchers. 
And the statisticians were all about understanding the variance and understanding statistical significance and so on. And I was all about trying to model details of what was going on in an individual patient. And in some discussion after my talk, somebody challenged me. And I said, well, what we AI people are really doing is fitting what you guys think is the noise, because we're trying to make a lot more detailed refinements in our theories and our models than what the typical statistical model does. And of course, I was roundly booed out of the hall. And people shunned me for the rest of the conference because I had done something really stupid to admit that I was fitting noise. And of course, I didn't really believe that I was fitting noise. I believed that what I was fitting was what the average statistician just chalks up to noise. And we're interested in more details of the mechanisms. So overfitting we have a pretty good handle on by regularization. So you can-- you know, you've seen lots of examples of regularization throughout the course. And people keep coming up with interesting ideas for how to apply regularization in order to simplify models or make them fit some preconception of what the model ought to look like before you start learning it from data. But the problem is that there really is true complexity to these models, whether or not you're fitting noise. There's-- the world is a complicated place. Human beings were not designed. They evolved. And so there's all kinds of bizarre stuff left over from our evolutionary heritage. And so it is just complex. It's hard to understand in a simple way how to make predictions that are useful when the world really is complex. So what do we do in order to try to deal with this? Well, one approach is to make up what I call just-so stories that give a simplified explanation of how a complicated thing actually works. So how many of you have read these stories when you were a kid? Nobody? My God. OK. Must be a generational thing. So Rudyard Kipling was a famous author. And he wrote the series of just-so stories, things like How the Lion Got His Mane and How the Camel Got His Hump and so on. And of course, they're all total bull, right? I mean, it's not a Darwinian evolutionary explanation of why male lions have manes. It's just some made up story. But they're really cute stories. And I enjoyed them as a kid. And maybe you would have, too, if your parents had read them to you. So I mean, I use this as a kind of pejorative because what the people who follow this line of investigation do is they take some very complicated model. They make a local approximation to it that says, this is not an approximation to the entire model, but it's an approximation to the model in the vicinity of a particular case. And then they explain that simplified model. And I'll show you some examples of that through the lecture today. And the other approach which I'll also show you some examples of is that you simply trade off somewhat lower performance for a simple-- a model that's simple enough to be able to explain. So things like decision trees and logistic regression and so on typically don't perform quite as well as the best, most sophisticated models, although you've seen plenty of examples in this class where, in fact, they do perform quite well and where they're not outperformed by the fancy models. But in general, you can do a little better by tweaking a fancy model. But then it becomes incomprehensible. 
And so people are willing to say, OK, I'm going to give up 1% or 2% in performance in order to have a model that I can really understand. And the reason it makes sense is because these models are not self-executing. They're typically used as advice for some human being who makes the ultimate decisions. Your surgeon is not going to look at one of these models that says, take out the guy's left kidney, and say, OK, I guess. They're going to go, well, does that make sense? And in order to answer the question of does that make sense, it really helps to know what the model's recommendation is based on. What is its internal logic? And so even an approximation to that is useful. So there's the need for trust for clinical adoption of ML models. There are two approaches in this paper that I'm going to talk about, where they say, OK, what you'd like to do is to look at case-specific predictions. So there is a particular patient in a particular state, and you want to understand what the model is saying about that patient. And then you also want to have confidence in the model overall. And so you'd like to be able to have an explanatory capability that says, here are some interesting representative cases, and here's how the model views them. Look through them and decide whether you agree with the approach that this model is taking. Now, remember my critique of randomized controlled trials-- that when people do these trials, they choose the simplest cases, the smallest number of patients that they need in order to reach statistical significance, the shortest amount of follow-up time, et cetera. And then the results of those trials are applied to very different populations. So David talked about cohort shift as a generalization of that idea. But the same thing happens in these machine learning models that you train on some set of data. The typical publication will then test on some held-out subset of the same data. But that's not a very accurate representation of the real world. If you then try to apply that model to data from a totally different source, the chances are you will have specialized it in some way that you don't appreciate. And the results that you get are not as good as what you got on the held-out test data because the new data is more heterogeneous. I think I mentioned that Jeff Drazen, the editor-in-chief of the New England Journal, had a meeting about a year ago in which he was arguing that the journal shouldn't ever publish a research study unless it's been validated on two independent data sets, because he's tired of publishing studies that wind up getting retracted-- not because of any overt badness on the part of the investigators. They've done exactly the kinds of things that you've learned how to do in this class. But when they go to apply that model to a different population, it just doesn't work nearly as well as it did in the published version. And of course, there are all the publication bias issues: if 50 of us do the same experiment, by random chance some of us are going to get better results than others. And those are the ones that are going to get published, because the people who got poor results don't have anything interesting to report. And so there's that whole issue of publication bias, which is another serious one. OK. So I wanted to just spend a minute to say, you know, explanation is not a new idea.
So in the expert systems era that we talked about a little bit in one of our earlier classes, we talked about the idea that we would take human medical experts and debrief them of what they knew, and then try to encode that in patterns or rules or in various ways in a computer program in order to reproduce their behavior. So Mycin was one of those programs-- [INAUDIBLE] PhD thesis-- in 1975. And they published this nice paper that was about the explanation and rule acquisition capabilities of the Mycin system. And as an illustration, they gave some examples of what you could do with the system. So rules, they argued, were quite understandable because they say, if a bunch of conditions, then you can draw the following conclusion. So given that, the program comes back and says, in light of the site from which the culture was obtained and the method of collection, do you feel that a significant number of organism 1 were obtained? In other words, if you took a sample from somebody's body and you're looking for an infection, do you think you got enough organisms in that sample? And the user says, well, why are you asking me this question? And the answer in terms of the rules that the system works by is pretty good. It says it's important to find out whether there's therapeutically significant disease associated with this occurrence of organism 1. We've already established that the culture is not one of those that are normally sterile and the method of collection is sterile. Therefore, if the organism has been observed in significant numbers, then there's strongly suggestive evidence that there's therapeutically significant disease associated with this occurrence of the organism. So if you find bugs in a carefully collected sample, that suggests that you probably ought to treat this patient, if there were enough bugs there. And there's also strongly suggestive evidence that the organism is not a contaminant, because the collection method was sterile. And you can go on with this and you can say, well, why that? So why that question? And it traces back through its invocation of these rules and it says, well, in order to find out the locus of infection, it's already been established that the site of the culture is known. The number of days since the specimen was obtained is less than 7. Therefore, there is therapeutically significant disease associated with this occurrence of the organism. So there's some rule that says if you've got bugs and it happened within the last seven days, the patient probably really does have an infection. And I mean, I've got a lot of examples of this. But you can keep going with why. You know, this is the two-year-old: but why, daddy? But why? But why? Well, why is it important to find out a locus of infection? And, well, there's a reason, which is that there is a rule that will conclude, for example, that the abdomen is a locus of infection, or the pelvis is a locus of infection of the patient, if you satisfy these criteria. And so this is a kind of rudimentary explanation that comes directly out of the fact that these are rule-based systems, and so you can just play back the rules. One of the things I like is you can also ask freeform questions. In 1975, natural language processing was not so good, and so this worked about one time in five. But you could walk up to it and type some question-- for example, do you ever prescribe carbenicillin for pseudomonas infections?
And it says, well, there are three rules in my database of rules that would conclude something relevant to that question. So which one do you want to see? And if you say, I want to see rule 64, it says, well, that rule says if it's known with certainty that the organism is a pseudomonas and the drug under consideration is gentamicin, then a more appropriate therapy would be a combination of gentamicin and carbenicillin. Again, this is medical knowledge as of 1975. But my guess is the real underlying reason is that there probably were pseudomonas that were resistant to gentamicin by that point, and so they used a combination therapy. Now, notice, by the way, that this explanation capability does not tell you that, right? Because it doesn't actually understand the rationale behind these individual rules. And at the time there was also research, for example by one of my students, on how to do a better job of that by encoding not only the rules or the patterns, but also the rationale behind them, so that the explanations could be more sensible. OK. Well, the granddaddy of the standard just-so story approach to explanation of complex models today comes from this paper and a system called LIME-- Local Interpretable Model-agnostic Explanations. And just to give you an illustration, you have some complicated model and it's trying to explain why the doctor or the human being made a certain decision, or why the model made a certain decision. And so it says, well, here are the data we have about the patient. We know that the patient is sneezing. And we know their weight and their headache and their age and the fact that they have no fatigue. And so the explainer says, well, why did the model decide this patient has the flu? Well, positives are sneeze and headache. And a negative is no fatigue. So it goes into this complicated model and it says, well, I can't explain all the numerology that happens in that neural network or Bayesian network or whatever network it's using. But I can specify that it looks like these are the most important positive and negative contributors. Yeah? AUDIENCE: Is this for notes only, or is it for all types of data? PROFESSOR: I'll show you some other kinds of data in a minute. I think they originally worked it out for notes, but it was also used for images and other kinds of data, as well. OK. And the argument they make is that this approach also helps to detect data leakage. For example, in one of their experiments, the headers of the data had information in them that correlated highly with the result. I think there-- I can't remember if it was these guys, but somebody was assigning study IDs to each case. And they did it a stupid way, so that all the small numbers corresponded to people who had the disease and the big numbers corresponded to the people who didn't. And of course, the most parsimonious predictive model just used the ID number and said, OK, I got it. So this would help you identify that, because if you see that the best predictor is the ID number, then you would say, hmm, there's something a little fishy going on here. Well-- so here's an example where this kind of capability is very useful. So this was from a newsgroup. And they were trying to decide whether a post was about Christianity or atheism. Now, look at these two models. So there's algorithm 1 and algorithm 2, or model 1 and model 2.
And when you explain a particular case using model 1, it says, well, the words that I consider important are God, mean, anyone, this, Koresh, and through-- does anybody remember who David Koresh was? He was some cult leader who-- I can't remember if he killed a bunch of people or bad things happened. Oh, I think he was the guy in Waco, Texas, where the FBI and the ATF went in and set their place on fire and a whole bunch of people died. So the prediction in this case is atheism. And you notice that God and Koresh and mean are negatives, and anyone, this, and through are positives. And you go, I don't know, is that good? But then you look at algorithm 2 and you say, this also made the correct prediction, which is that this particular article is about atheism. But the positives were the words by and in, not terribly specific. And the negatives were things like NNTP. You know what that is? That's the Network News Transfer Protocol. It's some technical thing-- and posting and host. So this is probably metadata that got into the header of the articles or something. So it happened that in this case, algorithm 2 turned out to be more accurate than algorithm 1 on their held-out test data, but not for any good reason. And so the explanation capability allows you to clue in on the fact that even though this thing is getting the right answers, it's not for sensible reasons. OK. So what would you like from an explanation? Well, they say you'd like it to be interpretable. So it should provide qualitative understanding of the relationship between the input variables and the response. But they also say that that's going to depend on the audience. It requires sparsity, for the George Miller argument that I was making before: you can't keep too many things in mind. And the features themselves that you're explaining must make sense. So for example, if I say, well, the reason this decided that is because the eigenvector for the first principal component was the following, that's not going to mean much to most people. And then they also say, well, it ought to have local fidelity. So it must correspond to how the model behaves in the vicinity of the particular instance that you're trying to explain. And their third criterion, which I think is a little iffier, is that it must be model-agnostic. In other words, you can't take advantage of anything you know that is specific about the structure of the model, the way you trained it, anything like that. It has to be a general purpose explainer that works on any kind of complicated model. Yeah? AUDIENCE: What is the reasoning for that? PROFESSOR: I think their reasoning for why they insist on this is because they don't want to have to write a separate explainer for each possible model. So it's much more efficient if you can get this done. But I actually question whether this is always a good idea or not. But nevertheless, this is one of their assumptions. OK. So here's the setup that they use. They say, all right, x is a vector in some D-dimensional space that defines your original data. And what we're going to do in order to make the data, not the model, explainable is we're going to define a new set of variables, x prime, that are all binary and that are in some space of dimension D prime that is probably lower than D. So we're simplifying the data that we're going to explain about this model. Then they say, OK, we're going to build an explanation model, g, where g comes from a class of interpretable models.
So what's an interpretable model? Well, they don't tell you, but they say, well, examples might be linear models, additive scores, decision trees, falling rule lists, which we'll see later in the lecture. And the domain of this is this input, the simplified input data, the binary variables in D prime dimensions, and the model complexity is going to be some measure of the depth of the decision tree, the number of non-zero weights, and the logistic regression-- the number of clauses in a falling rule list, et cetera. So it's some complexity measure. And you want to minimize complexity. So then they say, all right, the real model, the hairy, complicated full-bore model is f. And that maps the original data space into some probability. And for example, for classification, f is the probability that x belongs to a certain class. And then they also need a proximity measure. So they need to say, we have to have a way of comparing two cases and saying how close are they to each other? And the reason for that is because, remember, they're going to give you an explanation of a particular case and the most relevant things that will help with that explanation are the ones that are near it in this high dimensional input space. So they then define their loss function based on the actual decision algorithm, based on the simplified one, and based on the proximity measure. And they say, well, the best explanation is that g which minimizes this loss function plus the complexity of g. Pretty straightforward. So that's our best model. Now, the clever idea here is to say, instead of using all of the data that we started with, what we're going to do is to sample the data so that we take more sample points near the point we're interested in explaining. We're going to sample in the simplified space that is explainable and then we'll build that g model, the explanatory model, from that sample of data where we weight by that proximity function so the things that are closer will have a larger influence on the model that we learn. And then we recapture the-- sort of the closest point to this simplified representation. We can calculate what its answer should be. And that becomes the label for that point. And so now we train a simple model to predict the label that the complicated model would have predicted for the point that we've sampled. Yeah? AUDIENCE: So the proximity measure is [INAUDIBLE]?? PROFESSOR: It's a distance function of some sort. And I'll say more about it in a minute, because that's one of the critiques of this particular method has to do with how do you choose that distance function? But it's basically a similarity. So here's a nice, graphical explanation of what's going on. Suppose that the actual model-- the decision boundary is between the blue and the pink regions. OK. So it's this god awful, hairy, complicated decision model. And we're trying to explain why this big, red plus wound up in the pink rather than in the blue. So the approach that they take is to say, well, let's sample a bunch of points weighted by shortest distance. So we do sample a few points out here. But mostly we're sampling points near the point that we're interested in. We then learn a linear boundary between the positive and the negative cases. And that boundary is an approximation to the actual boundary in the more complicated decision model. So now we can give an explanation just like you saw before which says, well, this is some D prime dimensional space. 
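Putting those pieces together, here is a hedged, from-scratch sketch of the recipe for a binary, bag-of-words-style representation. The real method uses a Lasso-style feature selection, so Ridge plus a top-k cutoff is a simplification, and all names and kernel settings are illustrative:

```python
import numpy as np
from sklearn.linear_model import Ridge

def explain_instance(f, x_bin, n_samples=5000, sigma=0.25, k=6, seed=0):
    """f maps a binary feature vector to the complex model's probability.
    Sample perturbations of x_bin, weight them by a proximity kernel, fit a
    weighted linear model g, and report the k most influential features."""
    rng = np.random.default_rng(seed)
    d = len(x_bin)
    Z = rng.integers(0, 2, size=(n_samples, d)) * x_bin     # switch subsets of features off
    labels = np.array([f(z) for z in Z])                    # what the complex model says
    cos = (Z @ x_bin) / (np.linalg.norm(Z, axis=1) * np.linalg.norm(x_bin) + 1e-8)
    weights = np.exp(-((1.0 - cos) ** 2) / sigma ** 2)      # closer samples count more
    g = Ridge(alpha=1.0).fit(Z, labels, sample_weight=weights)
    top = np.argsort(-np.abs(g.coef_))[:k]
    return [(int(j), float(g.coef_[j])) for j in top]       # (feature index, local weight)
```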
And so which variables in that D prime dimensional space are the ones that influence where you are on one side or another of this newly computed decision boundary, and to what extent? And that becomes the explanation. OK? Nice idea. So if you apply this to text classification-- yes? AUDIENCE: I was just going to ask if the-- there's a worry that if explanation is just fictitious, like, we can understand it? But is there reason to believe that we should believe it if that's really the true nature of things that the linear does-- you know, it would be like, OK, we know what's going on here. But is that even close to reality? PROFESSOR: Well, that's why I called it a just-so story, right? Should you believe it? Well, the engineering disciplines have a very long history of approximating extremely complicated phenomena with linear models. Right? I mean, I'm in a department of electrical engineering and computer science. And if I talk to my electrical engineering colleagues, they know that the world is insanely complicated. Nevertheless, most models in electrical engineering are linear models. And they work well enough that people are able to build really complicated things and have them work. So that's not a proof. That's an argument by history or something. But it's true. Linear models are very powerful, especially when you limit them to giving explanations that are local. Notice that this model is a very poor approximation to this decision boundary or this one, right? And so it only works to explain in the neighborhood of the particular example that I've chosen. Right? But it does work OK there. Yeah. AUDIENCE: [INAUDIBLE] very well there? [INAUDIBLE] middle of the red space then the-- PROFESSOR: Well, they did. So they sample all over the place. But remember that that proximity function says that this one is less relevant to predicting that decision boundary because it's far away from the point that I'm interested in. So that's the magic. AUDIENCE: But here they're trying to explain to the deep red cross, right? PROFESSOR: Yes. AUDIENCE: And they picked some point in the middle of the red space maybe. Then all the nearby ones would be red and [INAUDIBLE].. PROFESSOR: Well, but they would-- I mean, suppose they picked this point, instead. Then they would sample around this point and presumably they would find this decision boundary or this one or something like that and still be able to come up with a coherent explanation. OK, so in the case of text, you've seen this example already. It's pretty simple. For their proximity function, they use cosine distance. So it's a bag of words model and they just calculate cosine distance between different examples by how much overlap there is between the words that they use and the frequency of words that they use. And then they choose k-- the number of words to show just as a preference. So it's sort of a hyperparameter. They say, you know, I'm interested in looking at the top five words or the top 10 words that are either positively or negatively an influence on the decision, but not the top 10,000 words because I don't know what to do with 10,000 words. Now, what's interesting is you can also then apply the same idea to image interpretation. So here is a dog playing a guitar. And they say, how do we interpret this? And so this is one of these labeling tasks where you'd like to label this picture as a Labrador or maybe as an acoustic guitar. But some reason-- some labels also decide that it's an electric guitar. 
And so they say, well, what counts in favor of or against each of these? And the approach they take is a relatively straightforward one. They say let's define a super pixel as a region of pixels within an image that have roughly the same intensity. So if you've ever used Photoshop, the magic selection tool can be adjusted to say, find a region around this point where all the intensities are within some delta of the point that I've picked. And so it'll outline some region of the picture. And what they do is they break up the entire image into these regions. And then they treat those as if they were the words in the word-style explanation. So they say, well, this looks like an electric guitar to the algorithm. And this looks like an acoustic guitar. And this looks like a Labrador. So some of that makes sense. I mean, you know, that dog's face does kind of look like a Lab. This does look kind of like part of the body and part of the fret work of a guitar. I have no idea what this stuff is or why this contributes to it being a dog. But such is-- such is the nature of these models. But at least it is telling you why it believes these various things. So then the last thing they do is to say, well, OK, that helps you understand the particular model-- I mean, a particular example that the model is applied to. But how do you convince yourself that the model itself is reasonable? And so they say, well, the best technique we know is to show you a bunch of examples. But we want those examples to kind of cover the gamut of places that you might be interested in. And so they say, let's create this matrix-- an explanation matrix where these are the cases and these are the various features, you know, the top words or the top pixel elements or something, and then we'll fill in the element of the matrix that tells me how strongly this feature is correlated or anti-correlated with the classification for that model. And then it becomes a kind of set covering issue of finding a set of examples that gives me the best coverage of explanations across that set of features. And then with that, I can convince myself that the model is reasonable. So they have this thing called the submodular pick algorithm. And you know, probably if you're interested, you should read the paper. But what they're doing is essentially doing a kind of greedy search that says, what examples should I add in order to get the best coverage of that space of features by documents? And then they did a bunch of experiments where they said, OK, let's compare the results of these explanations of these simplified models on two sentiment analysis tasks of 2,000 instances each. Bag of words as features-- they compared decision trees, logistic regression, nearest neighbors, SVMs with a radial basis function kernel, and random forests that use word2vec embeddings-- highly non-explainable-- with 1,000 trees. And K equal to 10. So they chose 10 features to explain for each of these models. They then did a side calculation that said, what are the 10 most suggestive features for each case? And then they said, does that covering algorithm identify those features correctly? And so what they show here is that their method, LIME, does better in every case than a random sampling-- that's not very surprising-- or a greedy sampling or a Parzen sampling, which I don't know the details of.
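That greedy, set-covering step can be sketched directly. The particular importance score (a square root of summed absolute weights) and the binary notion of "covering" a feature follow the submodular pick idea, but treat this as a simplified sketch, with W assumed to be the cases-by-features matrix of explanation weights just described.

import numpy as np

def submodular_pick(W, budget):
    # W[i, j] = weight of feature j in the explanation of case i.
    importance = np.sqrt(np.abs(W).sum(axis=0))      # global importance of each feature
    chosen = []
    covered = np.zeros(W.shape[1], dtype=bool)
    for _ in range(budget):
        best_gain, best_case = -1.0, None
        for i in range(W.shape[0]):
            if i in chosen:
                continue
            # Marginal gain: importance of features this case would newly cover.
            newly = (~covered) & (np.abs(W[i]) > 0)
            gain = float(importance[newly].sum())
            if gain > best_gain:
                best_gain, best_case = gain, i
        if best_case is None:
            break
        chosen.append(best_case)
        covered |= np.abs(W[best_case]) > 0
    return chosen    # indices of the cases to show the user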
But in any case, there's what this graph is showing is that of the features that they decided were important in each of these cases, they're recovering. So their recall is up around 90, 90-plus percent. So in fact, the algorithm is identifying the right cases to give you a broad coverage across all the important features that matter in classifying these cases. They then also did a bunch of human experiments where they said, OK, we're going to ask users to choose which of two classifiers they think is going to generalize better. So this is like the picture I showed you of the Christianity versus atheism algorithm, where presumably if you were a Mechanical Turker and somebody showed you an algorithm that has very high accuracy but that depends on things like finding the word NNTP in a classifier for atheism versus Christianity, you would say, well, maybe that algorithm isn't good to generalize very well, because it's depending on something random that may be correlated with this particular data set. But if I try it on a different data set, it's unlikely to work. So that was one of the tasks. And then they asked them to identify features like that that looked bad. They then ran this Christianity versus atheism test and had a separate test set of about 800 additional web pages from this website. The underlying model was a support vector machine with RBF kernels trained on the 20 newsgroup data-- I don't know if you know that data set, but it's a well-known, publicly available data set. They got 100 Mechanical Turkers and they said, OK, we're going to present each of them six documents and six features per document in order to ask them to make this. And then they did an auxiliary experiment in which they said, if you see words that are no good in this experiment, just strike them out. And that will tell us which of the features were bad in this method. And what they found was that the human subjects choosing between two classifiers were pretty good at figuring out which was the better classifier. Now, this is better by their judgment. And so they said, OK, this submodular pick algorithm-- which is the one that I didn't describe in detail, but it's this set covering algorithm-- gives you better results than a random pick algorithm that just says pick random features. Again, not totally surprising. And the other thing that's interesting is if you do the feature engineering experiment, it shows that as the Turkers interacted with the system, the system became better. So they started off with real world accuracy of just under 60%. And using the better of their algorithms, they reached about 75% after three rounds of interaction. So the users could say, I don't like this feature. And then the system would give them better features. Now, they tried a similar thing with images. And so this one is a little funny. So they trained a deliberately lousy classifier to classify between wolves and huskies. This is a famous example. Also it turns out that huskies live in Alaska and so-- and wolves-- I guess some wolves do, but most wolves don't. And so the data set on which that-- which was used in that original problem formulation, there was an extremely accurate classifier that was trained. And when they went to look to see what it had learned, basically it had learned to look for snow. And if it saw snow in the picture, it said it's a husky. And if it didn't see snow in the picture, it said it's a wolf. So that turns out to be pretty accurate for the sample that they had. 
But of course, it's not a very sophisticated classification algorithm because it's possible to put a wolf in a snowy picture and it's possible to have your husky indoors with no snow. And then you're just missing the boat on this classification. So these guys built a particularly bad classifier by making sure all the wolves in the training set had snow in the picture and none of the huskies did. And then they presented cases to graduate students like you guys with machine learning backgrounds. 10 balanced test predictions. But they put one ringer in each category. So they put in one husky in snow and one wolf who was not in snow. And the comparison was between pre and post experiment trust and understanding. And so before the experiment, they said that 10 of the 27 students said they trusted this bad model that they trained. And afterwards, only 3 out of 27 trusted it. So this is a kind of sociological experiment that says, yes, we can actually change people's minds about whether a model is a good or a bad one based on an experiment. Before, only 12 out of 27 students mentioned snow as a potential feature in this classifier, whereas afterwards almost everybody did. So again, this tells you that the method is providing some useful information. Now this paper set off a lot of work, including a lot of critiques of the work. And so this is one particular one from just a few months ago, the end of December. And what these guys say is that that distance function, which includes a sigma, which is sort of the scale of distance that we're willing to go, is pretty arbitrary. In the experiments that the original authors did, they set that distance to 75% of the square root of the dimensionality of the data set. And you go, OK. I mean, that's a number. But it's not obvious that that's the best number or the right number. And so these guys argue that it's important to tune the size of the neighborhood according to how far z, the point that you're trying to explain, is from the boundary. So if it's close to the boundary, then you ought to take a smaller region for your proximity measure. And if it's far from the boundary, you can take a bigger one-- this addresses the question you guys were asking about what happens if you pick a point in the middle. And so they show some nice examples of places where, for instance, if you look at explaining this green point, you get a nice green line that follows the local boundary. But explaining the blue point, which is close to a corner of the actual decision boundary, you get a line that's not very different from the green one. And similarly for the red point. And so they say, well, we really need to work on that distance function. And so they come up with a method that they call LEAFAGE, which basically says, remember, what LIME did is it sampled nonexistent cases-- simplified, nonexistent cases. But here they're going to sample existing cases. So they're going to learn from the original training set. But they're going to sample it by proximity to the example that they're trying to explain. And they argue that this is a good idea because, for example, in law, the notion of precedent is that you get to argue that this case is very similar to some previously decided case, and therefore it should be decided the same way. I mean, Supreme Court arguments are always all about that. Lower court arguments are sometimes more driven by what the law actually says. But case law has been well established in British law, and then by inheritance in American law, for many, many centuries.
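The core contrast with LIME-- reusing real training cases near the query rather than synthetic perturbations-- can be sketched roughly as follows. The plain Euclidean distance here is a deliberate simplification (the authors' boundary-aware distance is described next), and splitting the neighbors into cases that agree or disagree with the prediction anticipates what they will call allies and enemies.

import numpy as np
from sklearn.linear_model import LogisticRegression

def explain_with_precedents(z, X_train, y_pred_train, predicted_label, k=50):
    # X_train, y_pred_train: real training inputs and the black-box model's
    # predictions on them (NumPy arrays). predicted_label: the model's label for z.
    d = np.linalg.norm(X_train - z, axis=1)          # simplified: plain Euclidean proximity
    nearest = np.argsort(d)[:k]
    allies = nearest[y_pred_train[nearest] == predicted_label]   # same answer as z
    enemies = nearest[y_pred_train[nearest] != predicted_label]  # different answer
    # A local linear surrogate fit on the real neighbors, weighted by closeness.
    # (Assumes both classes appear among the k neighbors.)
    w = np.exp(-d[nearest] / (d[nearest].mean() + 1e-12))
    g = LogisticRegression().fit(X_train[nearest], y_pred_train[nearest], sample_weight=w)
    return g.coef_.ravel(), allies, enemies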
So they say, well, case-based reasoning normally involves retrieving a similar case, adapting it, and then learning that as a new precedent. And they also argue for contrastive justification, which is not only why did you choose x, but why did you choose x rather than y, as giving a more satisfying and a more insightful explanation of how some model is working. So they say, OK, similar setup. f solves the classification problem where x is the data and y is some binary class label, you know, 0 or 1, if you like. The training set is a bunch of x's. y sub true is the actual answer. y predicted is what f predicts on that x. And to explain f of z equals some particular outcome, you can define the allies of a case as ones that come up with the same answer. And you can define the enemies as ones that come up with a different answer. So now you're going to sample both the allies and the enemies according to a new distance function. And the intuition they had is that the reason that the distance function in the original LIME work wasn't working very well is because it was a spherical distance function in n dimensional space. And so they're going to bias it by saying that the distance, this b, is going to be some combination of the difference in the linear predictions plus the difference in the two points. And so the contour lines of the distance between the two points are these circular contour lines-- this is what LIME was doing-- and the contour lines of the difference in the linear predictions are these linear gradients. And they add them to get sort of oval-shaped things. And this is what gives you that desired feature of being more sensitive to how close this point is to the decision boundary. Again, there are a lot of relatively hairy details, which I'm going to elide in the class today. But they're definitely in the paper. So they also did a user study on some very simple prediction models. So this was how much is your house worth, based on things like how big is it and what year was it built in and what's some subjective quality judgment of it? And so what they show is that you can find examples that are the allies and the enemies of this house in order to do the prediction. So then they apply their algorithm. And it works. It gives you better answers. I'll have to go find that slide somewhere. All right. So that's all I'm going to say about this idea of using simplified models in the local neighborhood of individual cases in order to explain something. I wanted to talk about two other topics. So this was a paper by some of my students recently in which they're looking at medical images and trying to generate radiology reports from those medical images. I mean, you know, machine learning can solve all problems. I give you a collection of images and a collection of radiology reports, and it should be straightforward to build a model that now takes new radiological images and produces new radiology reports that are understandable, accurate, et cetera. I'm joking, of course. But the approach they took was kind of interesting. So they've taken a standard image encoder. And then before the pooling layer, they take essentially an image embedding from the next to last layer of this image encoding algorithm. And then they feed that into a word decoder and word generator. And the idea is to get things that appear in the image that correspond to words that appear in the report to wind up in the same place in the embedding space. And so again, there's a lot of hair. It's an LSTM-based encoder. And it's modeled as a sentence decoder.
And within that, there is a word decoder, and then there's a generator that generates these reports. And it uses reinforcement learning. And you know, tons of hair. But here's what I wanted to show you, which is interesting. So the encoder takes a bunch of spatial image features. The sentence decoder uses these image features in addition to the linguistic features, the word embeddings that are fed into it. And then for ground truth annotation, they also use an automated annotation method, which is this CheXpert program, which is a rule-based program out of Stanford that reads radiology reports and identifies features in the report that it thinks are important and correct. So it's not always correct, of course. But that's used in order to guide the generator. So here's an example. So this is an image of a chest and the ground truth-- so this is the actual radiology report-- says cardiomegaly is moderate. Bibasilar atelectasis is mild. There is no pneumothorax. Lower cervical spinal fusion is partially visualized. Healed right rib fractures are incidentally noted. By the way, I've stared at hundreds of radiological images like this. I could never figure out that this image says that. But that's why radiologists train for many, many years to become good at this stuff. So there was a previous program done by others called TieNet which generates the following report. It says AP portable upright view of the chest. There is no focal consolidation, effusion, or pneumothorax. The cardiomediastinal silhouette is normal. Imaged osseous structures are intact. So if you compare this to that, you say, well, if the cardiomediastinal silhouette is normal, then what about the lower cervical spinal fusion that's partially visualized? Because that's along the middle. And so these are not quite consistent. So the system that these students built says there's mild enlargement of the cardiac silhouette. There is no pleural effusion or pneumothorax. And there's no acute osseous abnormalities. So it also missed the healed right rib fractures that were incidentally noted. But anyway, it's-- you know, the remarkable thing about a singing dog is not how well it sings but the fact that it sings at all. And the reason I included this work is not to convince you that this is going to replace radiologists anytime soon, but that it had an interesting explanation facility. And the explanation facility uses attention, which is part of its model, to say, hey, when we reach some conclusion, we can point back into the image and say what part of the image corresponds to that part of the conclusion. And so this is pretty interesting. You say, in upright and lateral views of the chest-- in red-- well, that's kind of the chest in red. There's moderate cardiomegaly, so here the green certainly shows you where your heart is. OK. About there and a little bit to the left. And there's no pleural effusion or pneumothorax. This one is kind of funny. That's the blue region. So how do you show me that there isn't something? And we were surprised, actually-- the way it showed us that there isn't something is to highlight everything outside of anything that you might be interested in, which is not exactly convincing that there's no pleural effusion. And here's another example. There is no relevant change, tracheostomy tube in place-- so that highlight is a little too wide, but it's showing roughly where a tracheostomy tube might be. Bilateral pleural effusion and compressive atelectasis.
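Mechanically, pointing back into the image is just a matter of taking the attention weights over the spatial grid of image features and upsampling them to image resolution. A rough NumPy sketch-- the 7x7 grid and 224x224 image size are assumptions for illustration, not the actual architecture-- looks like this.

import numpy as np

def attention_heatmap(attn_weights, grid=(7, 7), image_size=(224, 224)):
    # attn_weights: one attention distribution over image regions (sums to 1),
    # e.g. the weights used while generating a particular sentence of the report.
    h, w = grid
    a = np.asarray(attn_weights, dtype=float).reshape(h, w)
    # Nearest-neighbor upsample each grid cell to image resolution.
    heat = np.kron(a, np.ones((image_size[0] // h, image_size[1] // w)))
    # Rescale to [0, 1] so the map can be alpha-blended over the radiograph,
    # like the red, green, and blue overlays on these slides.
    return (heat - heat.min()) / (heat.max() - heat.min() + 1e-12)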
Atelectasis is when your lung tissue collapses and the air sacs stick together. And so that does often happen in the lower part of the lung. And again, the negative shows you everything that's not part of the action. Yeah? AUDIENCE: [INAUDIBLE]. PROFESSOR: Yes. AUDIENCE: [INAUDIBLE] PROFESSOR: No. It's trying to predict the whole model-- the whole node. AUDIENCE: And it's not easier to have, like, one node for, like, each [INAUDIBLE]? PROFESSOR: Yeah. But these guys were ambitious. You know, they-- what was it? Geoff Hinton said a few years ago that he wouldn't want his children to become radiologists because that field is going to be replaced by computers. I think that was a stupid thing to say, especially when you look at the state of the art of how well these things work. But if that were true, then you would, in fact, want something that is able to produce an entire radiology report. So the motivation is there. Now, after this work was done, we ran into this interesting paper from Northeastern, which says-- but listen guys-- attention is not explanation. OK. So attention is clearly a mechanism that's very useful in all kinds of machine learning methods. But you shouldn't confuse it with an explanation. So they say, well, the assumption is that the input units that are accorded high attention weights are responsible for the model outputs. And that may not be true. And so what they did is they did a bunch of experiments where they studied the correlation between the attention weights and the gradients of the model outputs with respect to the inputs, to see whether, in fact, the words that had high attention were the ones that were most decisive in making a decision in the model. And they found that the correlation between intuitive feature importance measures-- including gradient and feature erasure approaches, so this is ablation studies-- and learned attention weights is weak. And so they did a bunch of experiments. There are a lot of controversies about this particular study. But what you find is that if you calculate the concordance, you know, on different data sets using different models, you see that, for example, the concordance is not very high. It's less than a half for this data set. And you know, some of it is below 0-- so the opposite-- for this data set. Interestingly, things like diabetes, which come from the MIMIC data, have narrower bounds than some of the others. So they seem to have a more definitive conclusion, at least for the study. OK. Let me finish off by talking about the opposite idea. So rather than building a complicated model and then trying to explain it in simple ways, what if we just built a simple model? And Cynthia Rudin, who's now at Duke, used to be at the Sloan School at MIT, has been championing this idea for many years. And so she has come up with a bunch of different ideas for how to build simple models that trade off maybe a little bit of accuracy in order to be explainable. And one of her favorites is this thing called a falling rule list. So this is an example for a mammographic mass data set. So it says, if some lump has an irregular shape and the patient is over 60 years old, then there's an 85% malignancy risk, and there are 230 cases in which that happened. If this is not the case, then if the lump has a spiculated margin-- so it has little spikes coming out of it-- and the patient is over 45, then there's a 78% chance of malignancy.
And otherwise, if the margin is kind of fuzzy, the edge of it is kind of fuzzy, and the patient is over 60, then there's a 69% chance. And if it has an irregular shape, then there's a 63% chance. And if it's lobular and the density is high, then there's a 39% chance. And if it's round and the patient is over 60, then there's a 26% chance. Otherwise, there's a 10% chance. And the argument is that that description of the model, of the decision-making model, is simple enough that even doctors can understand it. You're supposed to laugh. Now, there are still some problems. So one of them is-- notice some of these are age greater than 60, age greater than 45, age greater than 60. It's not quite obvious what categories that's defining. And in principle, it could be different ages in different ones. But here's how they build it. So this is a very simple model that's built by a very complicated process. So the simple model is the one I've just showed you. There's a Bayesian approach, a Bayesian generative approach, where they have a bunch of hyperparameters, falling rule list parameters, theta-- and they calculate a likelihood, which is, given a particular theta, how likely are you to get the answers that are actually in your data given the model that you generate? And they start with a possible set of if clauses. So they do frequent clause mining to say what conditions, what binary conditions, occur frequently together in the database. And those are the only ones they're going to consider because, of course, the number of possible clauses is vast and they don't want to have to iterate through those. And then for each clause, they calculate a risk score which is generated by a probability distribution under the constraint that the risk score for the next clause is lower than or equal to the risk score for the previous clause. There are lots of details. So there is this frequent itemset mining algorithm. It turns out that choosing r sub l to be the logs of products of real numbers is an important step in order to guarantee that monotonicity constraint in a simple way. l, the number of clauses, is drawn from a Poisson distribution. And you give it a kind of scale that says roughly how many clauses would you be willing to tolerate in your falling rule list? And then there's a lot of computational hair where they do maximum a posteriori probability estimation by using a simulated annealing algorithm. So they basically generate some clauses and then they use swap, replace, add, and delete operators in order to try different variations. And they're doing hill climbing in that space. There's also some Gibbs sampling, because once you have one of these models, simply calculating how accurate it is is not straightforward. There's not a closed form way of doing it. And so they're doing sampling in order to try to generate that. So it's a bunch of hair. And again, the paper describes it all. But what's interesting is that on a 30 day hospital readmission data set with about 8,000 patients, they used about 34 features, like impaired mental status, difficult behavior, chronic pain, feels unsafe, et cetera. They mined rules or clauses with support in more than 5% of the database and no more than two conditions. They set the expected length of the decision list to be eight clauses. And then they compared the decision model they got to SVMs, random forests, logistic regression, CART, and an inductive logic programming approach.
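Before looking at the results, it may help to see that at prediction time a falling rule list is nothing more than an ordered set of (condition, risk) pairs-- all of the machinery above goes into learning it. A toy sketch echoing a few of the mammography clauses, with the predicates paraphrased and written out by hand rather than learned, might look like this.

# Each rule: (readable clause, predicate over a patient dict, estimated risk).
# The risks are non-increasing down the list -- the "falling" constraint that
# the monotone prior enforces during learning.
falling_rule_list = [
    ("irregular shape AND age > 60",
     lambda p: p["shape"] == "irregular" and p["age"] > 60, 0.85),
    ("spiculated margin AND age > 45",
     lambda p: p["margin"] == "spiculated" and p["age"] > 45, 0.78),
    ("ill-defined margin AND age > 60",
     lambda p: p["margin"] == "ill-defined" and p["age"] > 60, 0.69),
    ("irregular shape",
     lambda p: p["shape"] == "irregular", 0.63),
]
DEFAULT_RISK = 0.10   # the final "otherwise" clause

def predict_risk(patient):
    # The first matching clause wins, so the risk and its explanation come together.
    for clause, condition, risk in falling_rule_list:
        if condition(patient):
            return risk, clause
    return DEFAULT_RISK, "otherwise"

print(predict_risk({"shape": "irregular", "margin": "circumscribed", "age": 72}))
# -> (0.85, 'irregular shape AND age > 60')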
And shockingly to me, their method-- the following rule list method-- got an AUC of about 0.8, whereas all the others did like 0.79, 0.75 logistic regression, as usual outperformed the one they got slightly. Right? But this is interesting, because their argument is that this representation of the model is much more easy to understand than even a logistic regression model for most human users. And also, if you look at-- these are just various runs and the different models. And their model has a pretty decent AUC up here. I think the green one is the logistic regression one. And it's slightly better because it outperforms their best model in the region of low false positive rates, which may be where you want to operate. So that may actually be a better model. So here's their readmission rule list. And it says if the patient has bed sores and has a history of not showing up for appointments, then there's a 33% probability that they'll be readmitted within 30 days. If-- I think some note says poor prognosis and maximum care, et cetera. So this is the result that they came up with. Now, by the way, we've talked a little bit about 30 day readmission predictions. And getting over about 70% is not bad in that domain because it's just not that easily predictable who's going to wind up back in the hospital within 30 days. So these models are actually doing quite well, and certainly understandable in these terms. They also tried on a variety of University of California-Irvine machine learning data sets. These are just random public data sets. And they tried building these falling rule list models to make predictions. And what you see is that the AUCs are pretty good. So on the spam detection data set, their system gets about 91. Logistic regression, again, gets 97. So you know, part of the unfortunate lesson that we teach in almost every example in this class is that simple models like logistic regression often do quite well. But remember, here they're optimizing for explainability rather than for getting the right answer. So they're willing to sacrifice some accuracy in their model in order to develop a result that is easy to explain to people. So again, there are many variations on this type of work where people have different notions of what counts as a simple, explainable model. But that's a very different approach than the LIME approach, which says build the hairy model and then produce local explanations for why it makes certain decisions on particular cases. All right. I think that's all I'm going to say about explainability. This is a very hot topic at the moment, and so there are lots of papers. I think there's-- I just saw a call for a conference on explainable machine learning models. So there's more and more work in this area. So with that, we come to the end of our course. And I just wanted-- I just went through the front page of the course website and listed all the topics. So we've covered quite a lot of stuff, right? You know, what makes health care different? And we talked about what clinical care is all about and what clinical data is like and risk stratification, survival modeling, physiological time series, how to interpret clinical text in a couple of lectures, translating technology into the clinic. The italicized ones were guest lectures, so machine learning for cardiology and machine learning for differential diagnosis, machine learning for pathology, for mammography. 
David gave a couple of lectures on causal inference and reinforcement learning where David and a guest-- which I didn't note here-- disease progression and sub typing. We talked about precision medicine and the role of genetics, automated clinical workflows, the lecture on regulation, and then recently fairness, robustness to data set shift, and interpretability. So that's quite a lot. I think we're-- we the staff are pretty happy with how the class has gone. It was our first time as this crew teaching it. And we hope to do it again. I can't stop without giving an immense vote of gratitude to Irene and Willy, without whom we would have been totally sunk. [APPLAUSE] And I also want to acknowledge David's vision in putting this course together. He taught a sort of half-size version of a class like this a couple of years ago and thought that it would be a good idea to expand it into a full semester regular course and got me on board to work with him. And I want to thank you all for your hard work. And I'm looking forward to--
9_Translating_Technology_Into_the_Clinic.txt
PETER SZOLOVITS: Fortunately, I have a guest today, Dr. Adam Wright, who will be doing an interview-style session and will answer questions for you. This is Adam's bread and butter, exactly how to translate this kind of technology into the clinic. He's currently in the Partners system at the Brigham, I guess. But he's about to become a traitor and leave us in Boston and occupy a position at Vanderbilt University, for which we wish him luck. But I'm glad that we caught him before he leaves this summer. OK, so quite frankly, I wish that I could tell you a much happier story than the one that you're going to hear from me during the prepared part of my talk. And maybe Adam will cheer us up and make us more optimistic, based on his experience. So you may have noticed that AI is hot. So HIMSS, for example, is the Healthcare Information and Management Systems Society. It's a big-- they hold annual meetings and consist of a lot of vendors and a lot of academics. And it's one of these huge trade show kinds of things, with balloons hanging over booths and big open spaces. So for example, they're now talking about AI-powered health care. On the other hand, it's important to remember this graph. So this is the sort of technology adoption graph. And it's called the hype cycle. And what you see here is that R&D-- that's us-- produces some wonderful, interesting idea. And then all of a sudden, people get excited about it. So who are the people that get most excited about it? It's the people who think they're going to make a fortune from it. And these are the so-called vulture capitalists-- venture capitalists. And so the venture capitalists come in and they encourage people like us to go out and found companies-- or if not us, then our students to go found companies-- and figure out how to turn this nascent idea into some important moneymaking enterprise. Now the secret of venture capital is that they know that about 90% of the companies that they fund are going to tank. They're going to do very badly. And so as a result, what they hope for and what they expect-- and what the good ones actually get-- is that one in 10 that becomes successful makes so much money that it makes up for all of the investment that they poured into the nine out of 10 that do badly. So I actually remember in the 1990s, I was helping a group pitch a company to Kleiner Perkins, which is one of the big venture capital funds in Silicon Valley. And we walked into their boardroom and they had a copy of the San Jose Mercury News, which is the local newspaper for Silicon Valley, on their table. And they were just beaming, because there was an article that said that in the past year, the two best and the two worst investments in Silicon Valley had been by their company. But that's pretty good, right? If you get two winners and two really bad losers, you're making tons and tons of money. So they were in a good mood and they funded us. We didn't make them any money. So what you see on this curve is that there is a kind of set of rising expectations that comes from the development of these technologies. And you have some early adopters. And then you have the newspapers writing about how this is the revolution and everything will be different from here on out. Then you have some additional activity beyond the early adopters. And then people start looking at this and going, well, it really isn't as good as it's cracked up to be. Then you have the steep decline where there's some consolidation and some failures.
And people have to go back to venture capital to try to get more money in order to keep their companies going. And then there's a kind of trough, where people go, oh well, this was another of these failed technological innovations. Then gradually, you start reaching what this author calls the slope of enlightenment, where people realize that, OK, it's not really as bad as we thought it was when it didn't meet our lofty expectations. And then gradually, if it's successful, then you get multiple generations of the product and it does achieve adoption. The adoption almost never reaches the peak that it was expected to reach at the time of the top of the hype cycle. But it becomes useful. It becomes profitable. It becomes productive. Now I've been around long enough to see a number of these cycles go by. So in the 1980s, for example, at a time that is now jokingly referred to as AI summer-- where people were building expert systems and these expert systems were going to just revolutionize everything-- I remember going to a conference where the Campbell Soup Company had built an expert system that was based on the expertise of some old timers who were retiring. And what this expert system did is it told you how to clean the vats of soup-- you know, these giant million-gallon things where they make soup-- when you're switching from making one kind of soup to another. So you know, if you're making beef consomme and you switch to making beef barley soup, you don't need to clean the vat at all. Whereas if you're switching from something like clam chowder to a consomme, then you need to clean it really well. So this was exactly the kind of thing that they were doing. And there were literally thousands of these applications being built. At the top of the hype cycle, all kinds of companies, like Campbell's Soup and the airlines and everybody, were investing huge amounts of money into this. And then there was a kind of failure of expectations. These didn't turn out to be as good as people thought they were going to be, or as valuable as people thought they were going to be. And then all of a sudden came AI winter. So AI winter followed AI summer. There was no AI fall, except in a different sense of the word fall. And all of a sudden, funding dried up and the whole thing was declared a failure. But in fact today, if you go out there and you look at-- Microsoft Excel has an expert system-based help system bundled inside it. And there are tons of such applications. It's just that now they're no longer considered cutting-edge applications of artificial intelligence. They're simply considered routine practice. So they've become incorporated, without the hype, into all kinds of existing products. And they're serving a very useful role. But they didn't make those venture capital firms the tons of money that they had hoped to make. There was a similar boom and bust cycle in the 2000s around the creation of the worldwide web and e-commerce. OK, so e-commerce. Again, there was this unbelievably inflated set of expectations. Then around the year 2000, there was a big crash, where all of a sudden people realized that the value in these applications was not as high as what they expected it to be. Nevertheless, you know, Amazon is doing just fine. And there are plenty of online e-commerce sites that are in perfectly good operating order today. But it's no longer the same hype about this technology. It's just become an accepted part of the way that you do business in almost everything. Yeah.
AUDIENCE: When you speak of expert systems, does that mean rule-based systems? PETER SZOLOVITS: They were either rule-based or pattern matching systems. There were two basic kinds. I think a week from today, I'm going to talk about some of that and how it relates to modern machine learning. So we'll see some examples. OK, well, a cautionary tale is IBM's Watson Health. So I assume most of you remember when Watson hit the big time by beating the Jeopardy champions. This was back in the early 2010s or something. I don't remember exactly which year. And they had, in fact, built a really impressive set of technologies that went out and read all kinds of online sources and distilled them into a kind of representation that they could very quickly look up things when they were challenged with a Jeopardy question. And then it had a sophisticated set of algorithms that would try to find the best answer for some question. And they even had all kinds of bizarre special-purpose things. I remember there was a probabilistic model that figured out where the Daily Double squares were most likely to be on the Jeopardy board. And then they did a utility theoretic calculation to figure out if they did hit the Daily Double, what was the optimum amount of money to bet, based on the machine's performance, in order to optimize. They decided that humans typically don't bet enough when they have a chance on the Daily Double. So there was a lot of very special-purpose stuff done for this. So this was a huge publicity bonanza. And IBM decided that next they were going to tackle medicine. So they were going to take this technology and apply it to medicine. They were going to read all of the medical journals and all of the electronic medical records that they could get their hands on. And somehow this technology would again distill the right information, so that they could answer questions like a Jeopardy question, except not stated in its funny backward way. Where you might say, OK, for this patient, what is the optimum therapy? And it would go out and use the same technology to figure that out. Now that was a perfectly reasonable thing to try. The problem they ran into was this hype cycle, that the people who made this publicly-known were their marketing people and not their technical people. And the marketing people overpromised like crazy. They said surely this is just going to solve all these problems. And we won't need anymore research in this area, because man, we got it. I'm overstating it, even from the marketing point of view. And so Watson for Oncology used this cloud-based supercomputer to digest massive amounts of data. That data included all kinds of different things. So I'm going to go into a little bit of detail about what some of their problems were. This is from an article in this journal, Statnews, which did an investigative piece on what happened with Watson. So you know, they say what I just said. Breathlessly promoting its signature brand, IBM sought to capture the world's imagination and quickly zeroed in on a high-profile target, which was cancer. So this was going to solve the problem of some patient shows up, is diagnosed with cancer, and you want to know how to treat this person. So this would use all of the literature and all of everything that it had gathered from previous treatments of previous patients. And it would give you the optimal solution. Now it has not been a success. There are a few dozen hospitals that have adopted the system. 
Very few of them in the United States, more of them abroad. And the foreigners complain that its advice is biased toward American patients and American approaches. To me, the biggest problem is that they haven't actually published anything that validates, in a scientific sense, that this is a good idea. That it's getting the right answers. My guess is the reason for this is because it's not getting the right answers, a lot of the time. But that doesn't prevent marketing from selling it. The other problem is that they made a deal with Memorial Sloan Kettering-- which is one of the leading cancer hospitals in the country-- to say, we're going to work with you guys and your oncologists in order to figure out what really is the right answer. So I think they tried to do what their marketing says that they're doing, which is to really derive the right answer from reading all of the literature and looking at past cases. But I don't think that worked well enough. And so what they wound up doing is turning to real oncologists, saying, what would you do under these circumstances? And so what they wound up building is something like a rule-based system that says, if you see the following symptoms and you have the following genetic defects, then this is the right treatment. So the promise that this was going to be a machine learning system that revolutionized cancer care by finding the optimal treatment really is not what they provided. And as the article says, the system doesn't really create new knowledge. So it's AI only in the sense of providing a search engine that, when it makes a recommendation, can point you to articles that are a reasonable reflection of what it's recommending. Well, I'm going to stop going through this litany. But you'll see it in the slides, which we'll post. They had a big contract with M.D. Anderson, which is another leading cancer center in the United States. M.D. Anderson spent about $60 million on this contract, implementing it. And they pulled the plug on it, because they decided that it just wasn't doing the job. Now by contrast, there was a much more successful attempt years ago, which was less driven by marketing and more driven by medical need. And the idea here was CPOE, stands for Computerized Physician Order Entry. The idea behind CPOE was that if you want to affect the behavior of clinicians in ordering tests or drugs or procedures, what you want to do is to make sure that they are interacting with the computer. So that when they order, for example, some insanely expensive drug, the system can come back and say, hey, do you realize that there's a drug that costs 1/100 as much, which according to the clinical trials that we have on record is just as effective as the one that you've ordered? And so for example, here at the Beth Israel many years ago, they implemented a system like that. And in the first year, they showed that they saved something like $16 million in the pharmacy, just by ordering cheaper variants of drugs that could have been very expensive. And they also found that the doctors who were doing the ordering were perfectly satisfied with that, because they just didn't know how expensive these drugs were. That's not one of the things that they pay attention to. So there are many applications like that that are driven by this. And again, here are some statistics. You can reduce error rates by half. You can reduce severe medication errors by 88%. You can have a 70% reduction in antibiotic-related adverse drug events. 
You can reduce length of stay, which is another big goal that people go after. And at least if you're an optimist, you can believe these extrapolations that say, well, we could prevent 3 million adverse drug events at big city hospitals in the United States if everybody used systems like this. So the benefits are that it prompts with warnings against possible drug interactions, allergies, or overdoses. It can be kept up to date by some sort of mechanism where people read the literature and keep updating the databases this is driven from. And it can do mechanical things like eliminate confusion about drug names that sound similar. Stuff like that. So the Leapfrog Group, which does a lot of meta analyses and studies of what's effective, really is behind this and pushing it very strongly. Potential future benefits, of course, are that if the kinds of machine learning techniques that we talk about become widely used, then these systems can be updated automatically rather than by manual review. And you can gain the advantages of immediate feedback as new information becomes available. Now the adoption of CPOE was recommended by the National Academy of Medicine. They wanted every hospital to use this by 1999. And of course, it hasn't happened. So I couldn't find current data, but 2014 data shows that CPOE, for example, for medication orders, is only being used in about 25% of the hospitals. And at that time, people were extrapolating and saying, well, it's not going to reach 80% penetration until the year 2029. So it's a very slow adoption cycle. Maybe it's gotten better. The other problem-- and one of the reasons for resistance-- is that it puts additional stresses on people. So for example, this is a study of how pharmacists spend their time. So clinical time is useful. That's when they're consulting with doctors, helping them figure out appropriate dosage for patients. Or they're talking to patients, explaining to them how to take their medications, what side effects to watch out for, et cetera. These distributive tasks-- it's a funny term-- mean the non-clinical part of what they're doing. And what you see is that hospitals that have adopted CPOE, they wind up spending a little bit more time on the distributive tasks and a little bit less time on the clinical tasks. Which is probably not in the right direction, in terms of what pharmacists were hoping for out of systems like this. Now people have studied the diffusion of new medical technologies. And I think I'll just show you the graph. So this is in England, but this is the adoption for statins. So from the time they were introduced-- statins is the drug that keeps your cholesterol low. From the time they were introduced until they were being used, essentially, at 100% of places was about five and a half, six years. So reasonably fast. If you look at the adoption of magnetic resonance imaging technology, it took five years for it to have any adoption whatsoever. And that's because it was insanely expensive. So there were all kinds of limitations. You know, even in Massachusetts, you have to get permission from some state committee to buy a new MRI machine. And if another hospital in your town already had one, then they would say, well, you shouldn't buy one because you should be able to use this other hospital's MRI machine. Same thing happened with CT. But as soon as those limitations were lifted, boom. It went up and then continues to go up. Whereas stents, I actually don't know why they were delayed by that long. 
But this is for people with blockages in coronary arteries or other arteries. You can put in a little mesh tube that just keeps that artery open. And that adoption was incredibly quick. So different things get adopted at different rates. Now the last topic I want to talk about before-- yeah. AUDIENCE: So what happens in those years where you just have spikes? What's doing it? PETER SZOLOVITS: So according to those authors, in the case of stents, there were some champions of the idea of stenting who went around and convinced their colleagues that this was the right technology to use. So there was just an explosive growth in it. In the other technologies, in the MRI case, money mattered a lot because they're so expensive. Stents are relatively cheap. And in the case of statins, those are also relatively cheap. Or they've become cheap since they went off patent. Originally, they were much more expensive. But there are still adoption problems. So for example, there was a recommendation-- I think about 15, maybe even 20 years ago-- that said that anybody who has had a heart attack or coronary artery disease should be taking beta blockers. And I don't remember what the adoption rate is today, but it's only on the order of a half. And so why? This is a dirt cheap drug. For reasons not quite understood, it reduces the probability of having a second heart attack by about 35%. So it's a really cheap protective way of keeping people healthier. And yet it just hasn't suffused practice as much as people think it should have. All right. So how do we assure the quality of these technologies before we foist them on the world? This is tricky. So John Ioannidis, a Stanford professor, has made an extremely successful career out of pointing out that most biomedical research is crap. It can't be reproduced. And there are some famous publications that show that people have taken some area of biomedicine, and they've looked at a bunch of well-respected published studies. And they've gone to the lab and they've tried to replicate those studies. Half the time or three-quarters of the time, they fail to do so. You go, oh my god, this is horrible. It is horrible. Yeah. AUDIENCE: You mean like they failed to do so, so they won't reproduce the exact same results? Or what exactly-- PETER SZOLOVITS: Worse than that. So it's not that there are slight differences. It's that, for example, a result that was shown to be statistically significant in one study, when they repeat the study, is no longer statistically significant. That's bad, if you base policy on that kind of decision. So Ioannidis has a suggestion, which would probably help a lot. And that is, basically, make known to everybody all the studies that have failed. So the problem is that if you give me a big data set and I start mining this data set, I'm going to find tons and tons of interesting correlations in this data. And as soon as I get one that has a good p value, my students and I go, fantastic. Time to publish. Now consider the fact that I'm not the only person in this role. So you know, David's group is doing the same thing. And John Guttag's and Regina Barzilay's and all of our colleagues at every other major university and hospital in the United States. So there may be hundreds of people who are mining this data. And each of us has slightly different ways of doing it. We select our cases differently. We preprocess the data differently. We apply different learning algorithms to them. 
But just by random chance, some of us are going to find interesting results, interesting patterns. And of course, those are the ones that get published. Because if you don't find an interesting result, you're not going to submit it to a journal and say, you know I looked for the following fact phenomenon and I was unable to find it. Because the journal says, well, that's not interesting to anybody. So Ioannidis is recommending that, basically, every study that anybody undertakes should be registered. And if you don't get a significant result, that should be known. And this would allow us to make at least some reasonable estimate of whether the significant results that were gotten are just the statistical outliers that happened to reach p equal 0.05 or whatever your threshold is, or whether it's a real effect because not that many people have been trying this. Yeah. AUDIENCE: [INAUDIBLE] why do you think this is? Is it because of the size of some core patients? Or bias in the assay? Or just purely randomness in the study? PETER SZOLOVITS: It could be any of those. It could be that your hospital has some biased data collection. And so you find an effect. My hospital doesn't, and so I don't find it. It could be that we just randomly sub-sampled a different sample of the population. So it's very interesting. Last year I was invited to a meeting by Jeff Drazen, who's the executive editor of the New England Journal. And he's thinking about-- has not decided-- but he's thinking about a policy for the New England Journal, which is like the top medical journal, that says that he will not publish any result unless it's been replicated on two independent data sets. So that's interesting. And that's an attempt to fight back against this problem. It's a different solution than what Ioannidis is recommending. So this was a study by Enrico Carrara. And he's talking about what it means to replicate. And again, I'm not going to go through all this. But there's the notion that replication might mean exact replication, i.e. You do exactly the same thing on exactly the same kind of data, but in a different data set. And then partial replication, conceptual replication, which says, you follow the same procedures but in a different environment. And then quasi replication-- either partial or conceptual. And these have various characteristics that you can look at. It's an interesting framework. So this is not a new idea. The first edition of this book, Evaluation Methods in Biomedical Informatics, was called Evaluation Methods in Medical Informatics by the same authors and was published a long time ago. I can't remember. This one is relatively recent. And so they do a multi-hundred page, very detailed evaluation of exactly how one should evaluate clinical systems like this. And it's very careful and very cautious, but it's also very conservative. So for example, one of the things that they recommend is that the people doing the evaluation should not be the people who developed the technique, because there's innately bias. You know, I want my technique to succeed. And so they say, hand it off to somebody else who doesn't have that same vested interest. And then you're going to get a more careful evaluation. So Steve Pauker and I wrote a response to one of their early papers recommending this that said, well, that's so conservative that it sort of throws the baby out with the bathwater. Because if you make it so difficult to do an evaluation, you'll never get anything past it. 
So we proposed instead a kind of staged evaluation that says, first of all, you should do regression testing so that every time you use these agile development methods, you should have the set of cases that your program has worked on before. You should automatically rerun them and see which ones you've made better and which ones you've made worse. And that will give you some insight into whether what you're doing is reasonable. Then you might also build tools that look at automating ways of looking for inconsistencies in the models that you're building. Then you have retrospective review, judged by clinicians. So you run a program that you like over a whole bunch of existing data, like what you're doing with Mimic or with Market Scan. And then you do it prospectively, but without actually affecting patients. So you do it in real time as the data is coming in, but you don't tell anybody what the program results in. You just ask them to evaluate in retrospect to see whether it was right. And you might say, well, what's the difference between collecting the data in real time and collecting the data retrospectively? Historically, the answer is there is a difference. So circumstances differ. The mechanisms that you have for collecting the data differ. So this turns out to be an important issue. And then you can run a prospective controlled trial where you're interested in evaluating both the answer that you get from the program, and ultimately the effect on health outcomes. So if I have a decision support system, the ultimate proof of the pudding is if I run that decision support system. I give advice to clinicians, the clinicians change their behavior sometimes, and the patients get a better outcome. Then I'm convinced that this is really useful. But you have to get there slowly, because you don't want to give them worse outcomes. That's unethical and probably illegal. And you want to compare this to the performance of unaided doctors. So the Food and Drug Administration has been dealing with this issue for many, many years. I remember talking to them in about 1976, when they were reading about the very first expert system programs for diagnosis and therapy selection. And they said, well, how should we regulate these? And my response at the time was, God help us. Keep your hands off. Because if you regulate it, then you're going to slow down progress. And in any case, none of these programs are being used. These programs are being developed as experimental programs in experimental settings. They're not coming anywhere close to being used on real patients. And so there is not a regulatory issue. And about every five years, FDA has revisited that question. And they have continued to make essentially the same decision, based on the rationale that, for example, they don't regulate books. If I write a textbook that explains something about medicine, the FDA is not going to see whether it's correct or not. And the reason is because the expectation is that the textbook is making recommendations, so to speak, to clinical practitioners who are responsible experts themselves. So the ultimate responsibility for how they behave rests with them and not with the textbook. And they said, we're going to treat these computer programs as if they were dynamic textbooks, rather than colleagues who are acting independently and giving advice. Now as soon as you try to give that advice, not to a professional, but to a patient, then you are immediately under the regulatory auspices of FDA. 
Because now there is no professional intermediate that can evaluate the quality of that advice. So what FDA has done, just in the past year, is they've said that we're going to treat these AI-based quote-unquote devices as medical devices. And we're going to apply the same regulatory requirements that we have for these devices, except we don't really know how to do this. So there's a kind of experiment going on right now where they're saying, OK, submit applications for review of these devices to us. We will review them. And we will use these criteria-- product quality, patient safety, clinical responsibility, cybersecurity responsibility, and a so-called proactive culture in the organization that's developing them-- in order to make a judgment of whether or not to let you proceed with marketing one of these things. So if you look, there are in fact about 10 devices, quote-unquote-- these are all software-- that have been approved so far by FDA. And almost all of them are imaging devices. They're things that do convolutional networks over one thing or another. And so here are just a few examples. Imagen has OsteoDetect, which analyzes two-dimensional X-ray images for signs of distal radius fracture. So if you break your wrist, then this system will look at the X-ray and decide whether or not you've done that. Here's one from IDx, which looks at the photographs of your retina and decides whether you have diabetic retinopathy. And actually, they've published a lot of papers that show that they can also identify heart disease and stroke risk and various other things from those same photographs. So FDA has granted them approval to market this thing. Another one is Viz, which automatically analyzes CT scans for ER patients and is looking for blockages and major brain blood vessels. So this can obviously lead to a stroke. And this is an automated technique that does that. Here's another one. Arterys measures and tracks tumors or potential cancers in radiology images. So these are the ones that have been approved. And then I just wanted to remind you that there's actually plenty of literature about this kind of stuff. So the book on the left actually comes out next week. I got to read a pre-print of it, by Eric Topol, who's one of these doctors who writes a lot about the future of medicine. And he actually goes through tons and tons of examples of not only the systems that have been approved by FDA, but also things that are in the works that he's very optimistic that these will again revolutionize the practice of medicine. Bob Wachter, who wrote the book on the left a couple of years ago, is a little bit more cautious because he's chief of medicine at UC San Francisco. And he wrote this book in response to them almost killing a kid by giving him a 39x overdose of a medication. They didn't quite succeed in killing the kid. So it turned out OK. But he was really concerned about how this wonderful technology led to such a disastrous outcome. And so he spent a year studying how these systems were being used, and writes a more cautionary tale. So let me turn to Adam, who as I said, is a professor at the Brigham and Harvard Medical School. Please come and join me, and we can have a conversation. ADAM WRIGHT: So my name is Adam Wright. I'm an associate professor of medicine at Harvard Medical School. In that role, I lead a research program and I teach the introduction to biomedical informatics courses at the medical school. 
So if you're interested in the topics that Pete was talking about today, you should definitely consider cross-registering in BMI 701 or 702. The medical school certainly always could use a few more enthusiastic and technically-minded machine learning experts in our course. And then I have an operational job at Partners. Partners is the health system that includes Mass General Hospital and the Brigham and some community hospitals. And I work on Partners eCare, which is our kind of cool brand name for Epic. So Epic is the EHR that we use at Partners. And I help oversee the clinical decision support there. So we have a decision support team. I'm the clinical lead for monitoring and evaluation. And so I help make sure that our decision support systems of the type that Pete's talking about work correctly. So that's my job at the Brigham and at Partners. PETER SZOLOVITS: Cool. And I appreciate it very much. ADAM WRIGHT: Thanks. I appreciate the invitation. It's fun to be here. PETER SZOLOVITS: So Adam, the first obvious question is what kind of decision support systems have you guys actually put in place? ADAM WRIGHT: Absolutely. So we've had a long history at the Brigham and Partners of using decision support. Historically, we developed our own electronic health record, which was a little bit unusual. About three years ago, we switched from our self-developed system to Epic, which is a very widely-used commercial electronic health record. And to the point that you gave, we really started with a lot of medication-related decision support. So that's things like drug interaction alerting. So you prescribe two drugs that might interact with each other. And we use a table-- no machine learning or anything too complicated-- that says, we think this drug might interact with this. We raise an alert to the doctor, to the pharmacist. And they make a decision, using their expertise as the learned intermediary, that they're going to continue with that prescription. We also have dosing support, allergy checking, and things like that. So our first set of decision support really was around medications. And then we turned to a broader set of things like preventative care reminders, so identifying patients that are overdue for a mammogram or a pap smear or that might benefit from a statin or something like that. Or a beta blocker, in the case of acute myocardial infarction. And we make suggestions to the doctor or to other members of the care team to do those things. Again, those historically have largely been rule-based. So some experts sat down and wrote Boolean if-then rules, using variables that are in a patient's chart. We have increasingly, though, started trying to use some predictive models for things like readmission or whether a patient is at risk of falling down in the hospital. A big problem that patients often encounter is they're in the hospital, they're kind of delirious. The hospital is a weird place. It's dark. They get up to go to the bathroom. They trip on their IV tubing, and then they fall and are injured. So we would like to prevent that from happening. Because that's obviously kind of a bad thing to happen to you once you're in the hospital. So we have some machine learning-based tools for predicting patients that are at risk for falls. And then there is a set of interventions like putting the bed rails up or putting an alarm that buzzes if they get out of bed.
Or in more extreme cases, having a sitter, like a person who actually sits in the room with them and tries to keep them from getting up or assists them to the bathroom. Or calls someone who can assist them to the bathroom. So we have increasingly started using those machine learning tools. Some of which we get from third parties, like from our electronic health record vendor, and some of which we sort of train ourselves on our own data. That's a newer pursuit for us, is this machine learning. PETER SZOLOVITS: So when you have something like a risk model, how do you decide where to set the threshold? You know, if I'm at 53% risk of falling, should you get a sitter to sit by my bedside? ADAM WRIGHT: It's complicated, right? I mean, I would like to say that what we do is a full kind of utility analysis, where we say, we pay a sitter this much per hour. And the risk of falling is this much. And the cost of a fall-- most patients who fall aren't hurt. But some are. And so you would calculate the cost-benefit of each of those things and figure out where on the ROC curve you want to place yourself. In practice, I think we often just play it by ear, in part because a lot of our things are intended to be suggestions. So our threshold for saying to the doctor, hey, this patient is at elevated risk for fall, consider doing something, is pretty low. If the system were, say, automatically ordering a sitter, we might set it higher. I would say that's an area of research. I would also say that one challenge we have is we often set and forget these kinds of systems. And so there is kind of feature drift and patients change over time. We probably should do a better job of then looking back to see how well they're actually working and making tweaks to the thresholds. Really good question. PETER SZOLOVITS: But these are, of course, very complicated decisions. I remember 50 years ago talking to some people in the Air Force about how much should they invest in safety measures. And they had a utility theoretic model that said, OK, how much does it cost to replace a pilot if you kill them? ADAM WRIGHT: Yikes. Yeah. PETER SZOLOVITS: And this was not publicized a lot. ADAM WRIGHT: I mean, we do calculate things like quality-adjusted life-years and disability-adjusted life-years. So there is-- in all of medicine as people deploy resources, this calculus. And I think we tend to assign a really high weight to patient harm, because patient harm is-- if you think about the oath the doctors swear, first do no harm. The worst thing we can do is harm you in the hospital. So I think we have a pretty strong aversion to do that. But it's very hard to weigh these things. I think one of the challenges we often run into is that different doctors would make different decisions. So if you put the same patient in front of 10 doctors and said, does this patient need a sitter? Maybe half would say yes and half would say no. So it's especially hard to know what to do with a decision support system if the humans can't agree on what you should do in that situation. PETER SZOLOVITS: So the other thing we talked about on the phone yesterday is I was concerned-- a few years ago, I was visiting one of these august Boston-area hospitals and asked to see an example of somebody interacting with this Computerized Physician Order Entry system. And the senior resident who was taking me around went up to the computer and said, well, I think I remember how to use this. And I said, wait a minute. This is something you're expected to use daily. 
But in reality, what happens is that it's not the senior doctors or even the medium senior doctors. It's the interns and the junior residents who actually use the systems. ADAM WRIGHT: This is true. PETER SZOLOVITS: And the concern I had was that it takes a junior resident with a lot of guts to go up to the chief of your service and say, doctor x, even though you asked me to order this drug for this patient, the computer is arguing back that you should use this other one instead. ADAM WRIGHT: Yeah, it does. And in fact, I actually thought of this a little more after we chatted about it. We've heard from residents that people have said to them, if you dare page me with an Epic suggestion in the middle of the night, I'll never talk to you again. So just override all of those alerts. I think that one of the challenges is-- and some culpability on our part-- is that a lot of these alerts we give have a PPV of like, 10 or 20%. They are usually wrong. We think it's really important, so we really raise these alerts a lot. But people experience this kind of alert fatigue, or what people call alarm fatigue. You see this in cockpits, too. But people get too many alerts, and they start ignoring the alerts. They assume that they're wrong. They tell the resident not to page them in the middle of the night, no matter what the computer says. So I do think that we have some responsibility to improve the accuracy of these alerts. I do think machine learning could help us. We're actually just having a meeting about a pneumococcal vaccination alert. This is something that helps people remember to prescribe this vaccination to help you not get pneumonia. And it takes four or five variables into account. We started looking at the cases where people would override the alert. And they were mostly appropriate. So the patient is in a really extreme state right now. Or conversely, the patient is close to the end of life. And they're not going to benefit from this vaccination. If the patient has a phobia of needles, if the patient has an insurance problem. And we think there's probably more like 30 or 40 variables that you would need to take into account to make that really accurate. So the question is, when you have that many variables, can a human develop and maintain that logic? Or would we be better off trying to use a machine learning system to do that? And would that really work or not? PETER SZOLOVITS: So how far are we from being able to use a machine learning system to do that? ADAM WRIGHT: I think that the biggest challenge, honestly, relates to the availability and accuracy of the data in our systems. So Epic, which is the EHR that we're using-- and Cerner and Allscripts and most of the major systems-- have various ways to run even sophisticated machine learning models, either inside of the system or bolted onto the system and then feeding model inferences back into the system. When I was giving that example of the pneumococcal vaccination, one of the major problems is that there's not always a really good structured way in the system that we indicate that a patient is at the end of life and receiving comfort measures only. Or that the patient is in a really extreme state, that we're in the middle of a code blue and that we need to pause for a second and stop giving these kind of friendly preventive care suggestions. So I would actually say that the biggest barrier to really good machine-learning-based decision support is just the lack of good, reliably documented, coded usable features. 
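To make the hand-maintained rule idea above concrete, here is a minimal sketch of a Boolean vaccination reminder. The variables, cutoffs, and suppression conditions are illustrative, not the real logic; the point is only that every legitimate override reason becomes another branch someone has to maintain by hand, which is exactly where a learned model starts to look attractive.

```python
# Minimal sketch of a hand-written Boolean rule for a pneumococcal vaccination
# reminder. All fields and thresholds are invented for illustration.

def pneumococcal_alert(patient):
    if patient.get("already_vaccinated"):
        return False
    if patient.get("comfort_measures_only"):    # end-of-life care: suppress the reminder
        return False
    if patient.get("active_code_blue"):         # acute crisis: suppress the reminder
        return False
    # eligibility: a few of the dozens of variables a complete rule would need
    high_risk = patient["age"] >= 65 or patient.get("immunocompromised") or patient.get("copd")
    return bool(high_risk)

print(pneumococcal_alert({"age": 70, "already_vaccinated": False}))  # True -> fire the reminder
print(pneumococcal_alert({"age": 70, "comfort_measures_only": True}))  # False -> appropriately suppressed
```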
I think that the second challenge, obviously, is workflow. You said-- it's sometimes hard to know in the hospital who a patient's doctor is. The patient is admitted. And on the care team is an intern, a junior resident, and a fellow, an attending, several specialists, a couple of nurses. Who should get that message or who should get that page? I think workflow is second. This is where I think you may have said, I have some optimism. I actually think that the technical ability of our EHR software to run these models is better than it was three or five years ago. And it's, actually, usually not the barrier in the studies that we've done. PETER SZOLOVITS: So there were attempts-- again, 20 years ago-- to create formal rules about who gets notified under what circumstances. I remember one of the doctors I worked with at Tufts Medical Center was going crazy, because when they implemented a new lab information system, it would alert on every abnormal lab. And this was crazy. But there were other hospitals that said, well, let's be a little more sophisticated about when it's necessary to alert. And then if somebody doesn't respond to an alert within a very short period of time, then we escalate it to somebody higher up or somebody else on the care team. And that seemed like a reasonable idea to me. But are there things like that in place now? ADAM WRIGHT: There are. It works very differently in the inpatient and the outpatient setting. At the inpatient setting, we're writing very acute care to a patient. And so we have processes where people sign in and out of the care team. In fact, these prevalence of these automated messages is an incentive to do that well. If I go home, I better sign myself out of that patient, otherwise I'm going to get all these pages all night about them. And the system will always make sure that somebody is the responding provider. It becomes a little thornier in the outpatient setting, because a lot of the academic doctors at the Brigham only have clinic half a day a week. And so the question is, if an abnormal result comes back, should I send it to that doctor? Should I send it to the person that's on call in that clinic? Should I send it to the head of the clinic? There are also these edge cases that mess us up a lot. So a classic one is a patient is in the hospital. I've ordered some lab tests. They're looking well, so I discharge the patient. The test is still pending at the time the patient is discharged. And now, who does that go to? Should it go to the patient's primary care doctor? Do they have a primary care doctor? Should it go to the person that ordered the test? That person may be on vacation now, if it's a test that takes a few weeks to come back. So we still struggle with-- we call those TPADs-- tests pending at discharge. We still struggle with some of those edge cases. But I think in the core, we're pretty good at it. PETER SZOLOVITS: So one of the things we talked about is an experience I've had and you've probably had that-- for example, a few years ago I was working with the people who run the clinical labs at Mass General. And they run some ancient laboratory information systems that, as you said, can add and subtract but not multiply or divide. ADAM WRIGHT: They can add and multiply, but not subtract or divide. Yes. And it doesn't support negative numbers. Only unsigned integers. 
PETER SZOLOVITS: So there are these wonderful legacy systems around that really create horrendous problems, because if you try to build anything-- I mean, even a risk prediction calculator-- it really helps to be able to divide as well as multiply. So we've struggled in that project. And I'm sure you've had similar experiences with how do we incorporate a decision support system into some of this squeaky old technology that just doesn't support it? So what's the right approach to that? ADAM WRIGHT: There are a lot of architectures and they all have pros and cons. I'm not sure if any one of them is the right approach. I think we often do favor using this creaky old technology over the new technology. So Epic has a built-in rule engine. That laboratory you talked about has a basic calculation engine with some significant limitations to it. So where we can, we often will try to build rules internally using these systems. Those tend to have real-time availability of data, the best ability to sort of push alerts to the person right in their workflow and make those alerts actionable. In cases where we can't do that-- like for example, a model that's too complex to execute in the system-- one thing that we've often done is run that model against our data warehouse. So we have a data warehouse that extracts the data from the electronic health record every night at midnight. So if we don't need real-time data, it's possible to run-- extract the data, run a model, and then actually write a risk score or a flag back into the patient's record that can then be shown to the clinician, or used to drive an alert or something like that. That works really well, except that a lot of things that happen-- particularly in an inpatient setting, like predicting sepsis-- depend on real-time data. Data that we need right away. And so we run into the challenge where that particular approach only works on a 24-hour kind of retrospective basis. We have also developed systems that depend on messages. So there's this-- HL7 is a standard format for exchanging data with an electronic health record. There's various versions and profiles of HL7. But you can set up an infrastructure that sits outside of the EHR and gets messages in real time from the EHR. It makes inferences and sends messages back into the EHR. Increasingly, EHRs also do support kind of web service approaches. So that you can register a hook and say, call my hook whenever this thing happens. Or you can poll the EHR to get data out and use another web service to write data back in. That's worked really well for us. You can also ask the EHR to embed an app that you develop. So people here may have heard-- or should hear at some point-- about SMART on FHIR, which is an open kind of API that allows you to develop an application and embed that application into an electronic health record. We've increasingly been building some of those applications. The downside right now of the SMART apps is that they're really good for reading data out of the record and sort of visualizing or displaying it. But they don't always have a lot of capability to write data back into the record or take actions. Most of the EHR vendors also have a proprietary approach, like an app store. So Epic calls theirs the App Orchard. And most of the EHRs have something similar, where you can join a developer program and build an application. And those are often more full-featured. They tend to be proprietary.
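Going back to the overnight data-warehouse pattern described above, here is a minimal sketch of that architecture: score each row of the nightly extract with a model too complex for the built-in rule engine, and write a risk flag back so it can surface in the chart the next day. Everything here is a stand-in: an in-memory SQLite table plays the warehouse, and a hand-written logistic function plays the externally trained model.

```python
# Minimal sketch of nightly batch scoring against a warehouse extract,
# writing a risk score and flag back for clinicians to see the next day.
import math
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the real data warehouse
conn.execute("CREATE TABLE nightly_extract (patient_id TEXT, age REAL, prior_admissions REAL, "
             "active_meds REAL, readmission_risk REAL, high_risk_flag INTEGER)")
conn.executemany("INSERT INTO nightly_extract VALUES (?, ?, ?, ?, NULL, NULL)",
                 [("p1", 81, 3, 14), ("p2", 42, 0, 2)])

def risk_model(age, prior_admissions, active_meds):
    """Stand-in for a model trained offline and shipped to the hospital."""
    z = -5.0 + 0.03 * age + 0.5 * prior_admissions + 0.08 * active_meds
    return 1.0 / (1.0 + math.exp(-z))

rows = conn.execute("SELECT patient_id, age, prior_admissions, active_meds "
                    "FROM nightly_extract").fetchall()
for patient_id, age, prior, meds in rows:
    risk = risk_model(age, prior, meds)
    conn.execute("UPDATE nightly_extract SET readmission_risk = ?, high_risk_flag = ? "
                 "WHERE patient_id = ?",
                 (risk, int(risk >= 0.3), patient_id))  # the flag is what gets pushed into the chart
conn.commit()

print(conn.execute("SELECT patient_id, round(readmission_risk, 2), high_risk_flag "
                   "FROM nightly_extract").fetchall())
```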
So if you build one Epic app, you have to then build a Cerner app and an Allscripts app and an eClinicalWorks app separately. There are often heavy fees for joining those programs, although the EHR vendors-- Epic in particular-- have lowered their prices a lot. The federal government, the Office of the National Coordinator of Health IT, just about a week and a half ago released some new regulations which really limit the rate at which vendors can charge application developers for API access basically to almost nothing, except for incremental computation costs or special support. So I think that may change everything now that that regulation's been promulgated. So we'll see. PETER SZOLOVITS: So contrary to my pessimistic beginning, this actually is the thing that makes me most optimistic. That even five years ago, if you looked at many of these systems, they essentially locked you out. I remember in the early 2000s, I was at the University of Pittsburgh, where they had one of the first centers that was doing heart-lung transplants. So their people had built a special application for supporting heart-lung transplant patients, in their own homemade electronic medical records system. And then UPMC went to Cerner at the time. And I remember I was at some meeting where the doctors who ran this heart-lung transplant unit were talking to the Cerner people and saying, how could we get something to support our special needs for our patients? And Cerner's answer was, well, commercially it doesn't make sense for us to do this. Because at the time there were like four hospitals in the country that did this. And so it's not a big money maker. So their offer was, well, you pay us an extra $3 million and within three years we will develop the appropriate software for you. So that's just crazy, right? I mean, that's a totally untenable way of going about things. And now that there are systematic ways for you either to embed your own code into one of these systems, or at least to have a well-documented, reasonable way of feeding data out and then feeding results back into the system, that makes it possible to do special-purpose applications like this. Or experimental applications or all kinds of novel things. So that's great. ADAM WRIGHT: That's what we're optimistic about. And I think it's worth adding that there's two barriers you have to get through right. One is Epic has to sort of let you into their App Orchard, which is the barrier that is increasingly lower. And then you need to find a hospital or a health care provider that wants to use your app, right. So you have to clear both of those, but I think it's increasingly possible. You've got smart people here at MIT, or at the hospitals that we have in Boston always wanting to build these apps. And I would say five years ago we would've told people, sorry, it's not possible. And today we're able, usually, to tell people that if there's clinical interest, the technical part will fall into place. So that's exciting for us. PETER SZOLOVITS: Yeah ADAM WRIGHT: Yeah AUDIENCE: Question about that. ADAM WRIGHT: Absolutely AUDIENCE: Some of the applications that you guys develop in house, do you also put those on the Epic Orchard, or do you just sort of implement it one time within your own system? ADAM WRIGHT: Yeah, there's a lot of different ways that we share these applications, right. So a lot of us are researchers. So we will release an open source version of the application or write a paper and say, this is available. And we'll share it with you. 
The App Orchard is particularly focused on applications that you want to sell. So our hospital hasn't decided that we wanted to sell any applications. We've given a lot of applications away. Epic also has something called the Community Library, which is like the App Orchard, but it's free instead of costing money. And so we released a ton of stuff through the Community Library. To the point that I was making before, one of the challenges is that if we build a SMART on FHIR app, we're able to sort of share that publicly. And we can post that on the web or put it on GitHub. And anybody can use it. Epic has a position that their APIs are proprietary. And they represent Epic's valuable intellectual property or trade secrets. And so we're only allowed to share those apps through the Epic ecosystem. And so, we often now, when we get a grant-- most of my work is through grants-- we'll have an Epic site. And we'll share that through the Community Library. And we'll have a Cerner site. And we'll share it through Cerner's equivalent. But I think until the capability of the open APIs, like SMART on FHIR, reaches the same level as the proprietary APIs, we're still somewhat locked into having to build different versions and distribute them through each EHR's separate channels. Really, really good question. PETER SZOLOVITS: And so what's lacking in things like SMART on FHIR-- ADAM WRIGHT: Yeah. PETER SZOLOVITS: --that you get from the native interfaces? ADAM WRIGHT: So it's very situational, right. So, for example, in some EHR implementations, the SMART on FHIR will give you a list of the patient's current medications but may not give you historical medications. Or it will tell you that the medicine is ordered, but it won't tell you whether it's been administered. So one half of the battle is less complete data. The other one is that most EHRs are not implementing, at this point, the sort of write back capabilities, or the actionable capabilities, that SMART on FHIR is sort of working on. And it's really a standards issue for us. So if we want to build an application that shows how a patient fits on a growth curve, that's fine. If we want to build an application that suggests ordering medicines, that can be really challenging. Whereas the internal APIs that the vendors provide typically have both read and write capabilities. So that's the other challenge. PETER SZOLOVITS: And do the vendors worry about, I guess two related things, one is sort of cognitive overload. Because if you build 1,000 SMART on FHIR apps, and they all start firing for these inpatients, you're going to be back in the same situation of over-alerting. And the other question is, are they worried about liability? Since if you were using their system to display recommendations, and those recommendations turn out to be wrong and harm some patient, then somebody will reach out to them legally because they have a lot of money. ADAM WRIGHT: Absolutely. They're worried about both of those. Related particularly to the second one, they're also worried about just sort of corruption or integrity of the data, right. So if I can somehow write a medication order directly to the database, it may bypass certain checks that would be done normally. And I could potentially enter a wrong or dangerous order. The other thing that we're increasingly hearing is concerns about protection of data, sort of Cambridge Analytica style worries, right.
So if I, as an Epic patient, authorize the Words With Friends app to see my medical record, and then they post that on the web, or monetize it in some sort of a tricky way, what liability, if any, does my health care provider organization, or my-- the EHR vendor, have for that? And the new regulations are extremely strict, right. They say that if a patient asks you to, and authorizes an app to access their record, you may not block that access, even if you consider that app to be a bad actor. So that's I think an area of liability that is just beginning to be sorted out. And it is, I think, some cause for concern. But at the same time, you could imagine a universe where, I think, there are conservative health organizations that would choose to never authorize any application to avoid risk. So how you balance that is not yet solved. PETER SZOLOVITS: Well-- and to avoid leakage. ADAM WRIGHT: Absolutely. PETER SZOLOVITS: So I remember years ago there was a lot of reluctance, even among Boston area hospitals, to share data, because they were worried that another hospital could cherry pick their most lucrative patients by figuring out something about them. So I'm sure that that hasn't gone away as a concern. ADAM WRIGHT: Absolutely, yeah. PETER SZOLOVITS: OK, we're going to try to remember to repeat the questions you're asking-- ADAM WRIGHT: Oh great, OK. PETER SZOLOVITS: --because of the recording. ADAM WRIGHT: Happy to. PETER SZOLOVITS: Yeah. AUDIENCE: So how does a third party vendor deploy a machine learning model on your system? So is that done through Epic? Obviously, there's the App Orchard kind of thing, but are there ways to go around that and go directly into Partners and whatnot? And how does that work? ADAM WRIGHT: Yeah. So the question is how does a third party vendor deploy an application or a machine learning model or something like that? And so with Epic, there's always a relationship between the vendor of the application and the health care provider organization. And so we could work together directly. So if you had an app that the Brigham wanted to use, you could share that app with us in a number of ways. So Epic supports this thing called Predictive Model Markup Language, or PMML. So if you train a model, you can export a PMML model. And I can import it into Epic and run it natively. Or you can produce a web service that I call out to and that gives me an answer. We could work together directly. However, there are some limitations in what I'm allowed to tell you or share with you about Epic's data model and what Epic perceives to be their intellectual property. And it is facilitated by you joining this program. Because if you join this program, you get access to documentation that you would otherwise not have access to. You may get access to a test harness or a test system that lets you sort of validate your work. However, people who join the program often think that means that I can then just run my app at every customer, right. But with Epic, in particular, you have to then make a deal with me to use it at the Brigham and make a deal with my colleague to use it at Stanford. Other EHR vendors have developed a more sort of centralized model where you can actually release it and sell it, and I can pay for it directly through the app store and integrate it. I think that last mile piece hasn't really been standardized yet. AUDIENCE: I guess one of my questions there is, what happens in the case that I don't want to talk to Epic at all?
And just I looked at your data and just like Brigham and Women's stuff. And I build a really good model. You saw how it works, and we just want to deploy it. ADAM WRIGHT: Epic would not stop us from doing that. The only real restriction is that Epic would limit my ability to tell you stuff about Epic's guts. And so you would need a relatively sophisticated health care provider organization who could map between some kind of platonic data, clinical data, model and Epic's internal data model. But if you had that, you could. And at the Brigham, we have this iHub Innovation Program. And we're probably working with 50 to 100 startups doing work like that, some of whom are members of the Epic App Orchard and some who choose not to be members of the Epic App Orchard. It's worth saying that joining the App Orchard or these programs entails revenue sharing with Epic and some complexity. That may go way down with these new regulations. But right now, some organizations have chosen not to partner with the vendors and work directly with the health care provider organizations. PETER SZOLOVITS: So on the quality side of that question, if you do develop an application and field it at the Brigham, will Stanford be interested in taking it? Or are they going to be concerned about the fact that somehow you've fit it to the patient population in Boston, and it won't be appropriate to their data? ADAM WRIGHT: Yeah, I think that's a fundamental question, right, is to what extent do these models generalize, right? Can you train a model at one place and transfer it to another place? We've generally seen that many of them transfer pretty well, right. So if they really have more to do with kind of core human physiology, that can be pretty similar between organizations. If they're really bound up in a particular workflow, right, they assume that you're doing this task, this task, this task in this order, they tend to transfer really, really poorly. So I would say that our general approach has been to take a model that somebody has, run it retrospectively on our data warehouse, and see if it's accurate. And if it is, we might go forward with it. If it's not, we would try to retrain it on our data, and then see how much improvement we get by retraining it. PETER SZOLOVITS: And so have you in fact imported such models from other places? ADAM WRIGHT: We have, yeah. Epic provides five or six models. And we've just started using some of them at the Brigham or just kind of signed the license to begin using them. And I think Epic's guidance and our experience is that they work pretty well out of the box. PETER SZOLOVITS: Great. AUDIENCE: So could you say a little bit more about these rescores that are being deployed, maybe they work. Maybe they don't. How can you really tell whether they're working, even just beyond patient shift over time, just like how people react to the scores. Like I know a lot of the bias in fairness works is like people, if a score agrees with their intuition, they'll trust it. And if it doesn't, they ignore the score. So like how-- what does the process look like before you deploy the score thing and then see whether it's working or not? ADAM WRIGHT: Yeah, absolutely. So the question is, we get a risk score, or we deploy a new risk score that says, patient has a risk of falling, or patient has a risk of having sepsis or something like that. We tend to do several levels of evaluation, right. So the first level is, when we show the score, what do people do, right? 
If we-- typically we don't just show a score, we make a recommendation. We say, based on the score we think you should order a lactate to see if the patient is at risk of having sepsis. First we look to see if people do what we say, right. So we think it's a good sign if people follow the suggestions. But ultimately, we view ourselves as sort of clinical trialists, right. So we deploy this model with an intent to move something, to reduce the rate of sepsis, or to reduce the rate of mortality in sepsis. And so we would try to sort of measure, if nothing else, do a before and after study, right, measure the rates before, implement this intervention, and measure the rates after. In cases where we're less sure, or where we really care about the results, we'll even do a randomized trial, right. So half of the units will get the alert, half the units won't get the alert. And we'll compare the effect on a clinical outcome and see what the difference is. In our opinion, unless we can show an effect on these clinical measures, we shouldn't be bothering people, right. Pete made this point that what's the purpose of having-- if we have 1,000 alerts, everyone will be overwhelmed. So we should only keep alerts on if we can show that they're making a real clinical difference. AUDIENCE: And are those sort of like just internal checks, are there papers of some of these deployments? ADAM WRIGHT: It's our-- it's our intent to publish everything, right. I mean, I think we're behind. But I'd say, we publish everything. We have some things that we've finished that we haven't published yet. They're sort of the next thing to sort of come out. Yeah. AUDIENCE: I guess so earlier we were talking about how the models are just used to give recommendations to doctors. Do you have any metric, in terms of how often the model recommendation matches with the doctor's decision? ADAM WRIGHT: Yeah, absolutely. AUDIENCE: Can you repeat the question? ADAM WRIGHT: Oh yeah. Thanks, David. So the question is, do we ever check to see how often the model recommendation matches what the doctor does? And so there's sort of two ways we do that. We'll often retrospectively back-test the model. I think Pete shared a paper from Cerner where they looked at these sort of suggestions that they made to order lactates or to do other sort of sepsis work. And they looked to see whether the recommendations that they made matched what the doctors had actually done. And they showed that they, in many cases, did. So that'll be the first thing that we do is, before we even turn the model on, we'll run it in silent mode and see if the doctor does what we suggest. Now the doctor is not a perfect source of supervision, right, because the doctor may neglect to do something that would be good to do. So then when we turn it on, we actually look to see whether the doctor takes the action that we suggested. And if we're doing it in this randomized mode, we would then look to see whether the doctor takes the action we suggested more often in the case where we show the alert, than where we generate the alert but just log it and don't-- don't show it. Yeah. Yes, sir? AUDIENCE: So you'd mentioned how there's kind of related to fatigue-- ADAM WRIGHT: Yeah. AUDIENCE: --if it's a code blue, these alarms will-- ADAM WRIGHT: Right. AUDIENCE: And you said that cockpits have-- pilots now-- ADAM WRIGHT: Yeah. AUDIENCE: --that have similar problems.
My very limited understanding of aviation is that if you're flying, say, below 10,000 feet, then almost all of the-- ADAM WRIGHT: Yeah. AUDIENCE: --alarms get turned off, and-- ADAM WRIGHT: Yeah. AUDIENCE: --I don't know if there seems to be an analog for that, for-- ADAM WRIGHT: Yeah. AUDIENCE: --hospitals yet. And is that just because the technology workflow is not mature enough yet, only 10 years old? ADAM WRIGHT: Yeah. AUDIENCE: Or is that kind of the team's question about the incentives between if you build the tool and it doesn't flag this thing-- ADAM WRIGHT: Yeah. AUDIENCE: --the patient dies, then they could get sued. And so they're just very-- ADAM WRIGHT: Yeah, no, we try, right? So since we often don't know about the situations in a structured way in the EHR. And so most of our alerts are suppressed in the operating room, right? So during an-- when a patient is on anesthesia, their physiology is being sort of manually controlled by a doctor. And so we often suppress the alerts in those situations. I guess I didn't repeat the question, but the question was, do we try to take situations into account or how much can we? We didn't used to know that a code blue was going on, because we used to do most of our code blue documentation on paper. We now use this code narrator, right? So we can tell when a code blue starts and when a code blue ends. A code blue is a cardiac arrest and resuscitation of a patient. And so we actually do increasingly turn a lot of alerting off during a code blue. I get an email or a page whenever a doctor overrides an alert and writes a cranky message. And they'll often say something like, this patient is dying of a myocardial infarction right now, and you're bothering me about this influenza vaccination. And then what I'll do is I'll go back-- no, seriously, I had that yesterday. And so what I'll do is I'll go back and look in the record and say, what signs did I have that this patient was sort of in extremis? And in that particular case, it was a patient who came into the ED and very little documentation had been started, and so there actually were very few signs that the patient was in the acute state. I think this, someday, could be sorted by integrating monitor data and device data to figure that out. But at that point, we didn't have good, structured data at that moment, in the chart, that said this patient is so ill that it's offensive to suggest an influenza vaccination right now. PETER SZOLOVITS: Now, there are hospitals that have started experimenting with things like acquiring data from the ambulance as the patient is coming in so that the ED is already primed with preliminary data. ADAM WRIGHT: Yeah. PETER SZOLOVITS: And in that circumstance, you could tell. ADAM WRIGHT: So this is the interoperability challenge, right? So we actually get the run sheet, all of the ambulance data, to us. It comes in as a PDF that's transmitted from the ambulance emergency management system to our EHR. And so it's not coming in in a way that we can read it well. But to your point, exactly, if we were better at interoperability-- I've also talked to hospitals who use things like video cameras and people's badges, and if there's 50 people hovering around a patient, that's a sign that something bad is happening. And so we might be able to use something like that. But yeah, we'd like to be better at that. PETER SZOLOVITS: So why did HL7 version 3 not solve all of these problems? ADAM WRIGHT: This is a good philosophical question.
Come to BMI 701 and 702 and we'll talk about the standards. HL7 version-- to his question-- version 2 was a very practical standard. Version 3 was a very deeply philosophical standard-- PETER SZOLOVITS: Aspirational. ADAM WRIGHT: --aspirational, that never quite caught on. And it did in pieces. I mean, FHIR is a simplification of that. PETER SZOLOVITS: Yeah. ADAM WRIGHT: Yes, sir? AUDIENCE: So I think usually, the machine learning models evaluate the difficult [INAUDIBLE]. ADAM WRIGHT: Yes, sir. AUDIENCE: When it comes to a particular patient, is there a way to know how reliable the model is? ADAM WRIGHT: Yeah, I mean, there's calibration, right? So we can say this model works particularly well in these patients, or not as well in these patients. There are some very simple equations or models that we use, for example, where we use a different model in African-American patients versus non-African-American patients, because there's some data that says this model is better calibrated in this subgroup of patients versus another. I do think, though, to your point, that there's a suggestion, an inference from a model-- this patient is at risk of a fall. And then there's this whole set of value judgments and beliefs and knowledge and understanding of a patient's circumstances that are very human. And I think that that's largely why we deliver these suggestions to a doctor or to a nurse. And then that human uses that information plus their expertise and their relationship and their experience to make a suggestion, rather than just having the computer adjust the knob on the ventilator itself. A question that people always ask me, and that you should ask me, is, will we eventually not need that human? And I think I'm more optimistic than some people that there are cases where the computer is good enough, or the human is poor enough, that it would be safe to have a close-to-closed loop. However, I think those cases are not the norm. I think that there'll be more cases where human doctors are still very much needed. PETER SZOLOVITS: So just to add that there are tasks where patients are fungible, in the words that I used a few lectures ago. So for example, a lot of hospitals are developing models that predict whether a patient will show up for their optional surgery, because then they can do a better job of over-scheduling the operating room in the same way that the airlines oversell seats. Because, statistically, you could win doing that. Those are very safe predictions, because the worst thing that happens is you get delayed. But it's not going to have a harmful outcome on an individual patient. ADAM WRIGHT: Yeah, and conversely, there are people that are working on machine learning systems for dosing insulin or adjusting people's ventilator settings, and those are high-- PETER SZOLOVITS: Those are the high risk. ADAM WRIGHT: --risk jobs. PETER SZOLOVITS: Yep. All right, last question because we have to wrap up. AUDIENCE: You had alluded to some of the [INAUDIBLE] problems-- ADAM WRIGHT: Yes. AUDIENCE: --of some of these models. I'm, one, curious how long [INAUDIBLE]. ADAM WRIGHT: Yeah. AUDIENCE: And I guess, two, once it's been determined that actually a significant issue has occurred, what are some of the decisions that you made regarding tradeoffs of using the out-of-date model that looks at [INAUDIBLE] signal versus the cost of retraining? ADAM WRIGHT: Retraining? Yeah. Yeah, absolutely. So the question is the set-and-forget, right? We build the model. The model may become stale.
Should we update the model? And how do we decide to do that? I mean, we're using-- it depends on what you define as a model. We're using tables and rules that we've developed since the 1970s. I think we have a pretty high desire to empirically revisit those. There's a problem in the practice called knowledge management or knowledge engineering, right? How do we remember which of our knowledge bases need to be checked again or updated? And we'll often, just as a standard, retrain a model or re-evaluate a knowledge base every six months or every year because it's both harmful to patients if this stuff is out-of-date, and it also makes us look stupid, right? So if there's a new paper that comes out and says, beta blockers are terrible poison, and we keep suggesting them, then people no longer believe the suggestions that we make, that said, we still make mistakes, right? I mean, things happen all of the time. A lot of my work has focused on malfunctions in these systems. And so, as an example, empirically, the pharmacy might change the code or ID number for a medicine, or a new medicine might come on the market, and we have to make sure to continually update the knowledge base so that we're not suggesting an old medicine or overlooking the fact that the patient has already been prescribed a new medicine. And so we tried to do that prospectively or proactively. But then we also tried to listen to feedback from users and fix things as we go. Cool. PETER SZOLOVITS: And just one more comment on that. So some things are done in real time. There was a system, many years ago, at the Intermountain Health in Salt Lake City, where they were looking at what bugs were growing out of microbiology samples in the laboratory. And of course, that can change on an hour-by-hour or day-to-day basis. And so they were updating those systems that warned you about the possibility of that kind of infection in real time by taking feeds directly from the laboratory. ADAM WRIGHT: That's true. PETER SZOLOVITS: All right, thank you very much. ADAM WRIGHT: No, thank you, guys. [APPLAUSE]
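One concrete form of the proactive knowledge-management check described above is scanning the rules' medication codes against the current formulary and flagging any rule that references a code that has been retired or renamed. A minimal sketch, with invented codes and rules:

```python
# Minimal sketch: flag decision-support rules whose medication codes no longer
# exist in the pharmacy system. Codes and rules are made up for illustration.

current_formulary = {"MED001", "MED002", "MED009"}   # codes active in the pharmacy system today

rules = [
    {"rule_id": "beta-blocker-post-MI", "med_codes": {"MED001", "MED002"}},
    {"rule_id": "old-anticoagulant-alert", "med_codes": {"MED007"}},   # MED007 was retired
]

for rule in rules:
    missing = rule["med_codes"] - current_formulary
    if missing:
        print(f"rule {rule['rule_id']} references retired/unknown codes "
              f"{sorted(missing)} -> needs review")
```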
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019
22_Regulation_of_Machine_Learning_Artificial_Intelligence_in_the_US.txt
PROFESSOR: All right. Let's get started. Welcome, ladies and gentlemen. Today it's my pleasure to introduce two guest speakers who will talk about the regulation of AI and machine learning and about both the federal FDA level regulation and about IRB issues of regulation within institutions. So the first speaker is Andy Coravos. Andy is the CEO and founder of Elektra Labs, which is a small company that's doing digital biomarkers for health care. And Mark is a data and software engineer for the Institute for Next Generation Healthcare at Mount Sinai in New York and was kind enough to come up to speak to us today. So with that, I'm going to introduce them and sit back and enjoy. ANDY CORAVOS: Thank you. Thank you for having us. Yeah, so I am working on digital biomarkers, and I'm also a research collaborator at the Harvard MIT Center for Regulatory Sciences. So you all have a center that is looking just at how regulators should think about some of these problems. And then I'm also an advisor at the Biohacking Village at DEFCON, which we can talk a little bit more about. My background-- I'm a software engineer, had worked with the FDA formerly as an entrepreneur resident in the digital health unit, and then spent some time in corporate land. MARK SHERVEY: I'm Mark Shervey. I work at the Institute for Next Generation Healthcare at Mount Sinai. I've been there about three years now. My background is in software and data engineering, coming mostly from banking and media. So this is a new spot. And most of my responsibilities focus around data security and IRB and ethical responsibilities. ANDY CORAVOS: We also know how much people generally love regulatory conversations, so we will try to make this very fun and exciting for you. If you do have questions, as regulations are weird and they're constantly changing, you can also shoot us a note on Twitter. We'll respond back if you have things that come up. Also, the regulatory community on Twitter, amazing. When somebody comes out with, like, what does real world data actually mean, everybody is talking to one another. So once you start tapping into-- I'm sure you have your own Twitter communities, but if you tap into the regulatory Twitter community, it is a very good one. The digital health unit tweets a lot at the FDA. OK. Disclaimers-- these are our opinions and the information that you'll see here does not necessarily reflect the United States government or the institutions that we are affiliated with. And policies and regulations are constantly changing. So by the time we have presented this to you, most likely parts of it are wrong. So you should definitely interact early and often with relevant regulatory institutions. Your lawyers might say not to do that. There are definitely different ways, and we can talk through how you'd want to do that. But especially as a software engineer and developing anything on the data side, if you spend too much time developing a product that is never going to get through, it is really a wasted period of time. So working with the regulators, and given how open they are right now to getting feedback, as you saw with the paper that you read, is going to be important. And then the last thing, which Mark and I talk a lot about, is many of these definitions and frameworks have not actually happened yet. And so when somebody says a biomarker, they might actually not mean a biomarker, they might mean a measurement. I'm sure you know this. 
When someone's like, I work in AI, and you're like, what does that actually mean? So you should ask us questions. And if you think about it, the type of knowledge that you have is a very specific, rare set of knowledge compared to almost everybody else in the country. And so as the FDA and other regulators start thinking about how to regulate and oversee these technologies, you can have a really big amount of influence. And so what we're going to do is a little bit of the dry stuff around regulatory, and then I am going to somewhat plead with you and also teach you how to submit public comments so that you can be part of this regulatory process. And then-- MARK SHERVEY: I will speak about the Institutional Review Board. How many people in here have worked with an IRB or are aware of them? OK, good. That's a good mix. So it'll be a quick thing, just kind of reviewing when to involve the IRB, how to involve the IRB, things you need the IRB for and some things that you don't, as an alternative to taking the FDA approach. ANDY CORAVOS: All right, good. And then I'll go first, and then we'll go through IRBs, and then we'll leave the last part for your impressions of the paper. OK. So before I start, I'll ground us in some ideas around algorithmically-driven health care products. So as you know, these can have wide ranges of what they can do. A general framework that I like to use to think about them is products that measure, that diagnose, or treat. So measurement products might include things like digital biomarkers or clinical decision support. Diagnostics might take that measurement and then say whether or not somebody has some sort of condition given those metrics. And then treatment are ideas around digital therapeutics. How many people here think that software can treat a person? A few, maybe. OK. And I think one thing that people don't always think about when they have these sorts of tools is-- and you all probably think about this a lot more-- that even something as simple as a step count is an algorithm. So it takes your gyroscope, accelerometer, height, weight, and age, and then it predicts whether or not you've made a step. And if you think about the types of different steps that people make, older people drag their feet a little bit more than younger people. So a step count algorithm for older people looks very different from a step count algorithm for younger people. And so all of these tools have some level of error, and they're all algorithms, effectively. One of my favorite frameworks as you start thinking about-- a lot of people are very interested in the measurement side around what's called digital biomarkers. And it turns out that the FDA realized that many people, even within their own agency, didn't know what a biomarker was, and everyone was using the term slightly differently, probably like how people approach you with slightly different ideas of what machine learning actually is. And so there is a really good framework around the seven different types of biomarkers that I'd highly recommend you read if you go into this area. A digital biomarker, in my definition and how other people have started to use this, differs only in the way that that measurement is collected. And so you might have a monitoring biomarker, a diagnostic biomarker, but it is collected in an ambulatory, remote way that is collecting digital data. And this type of data is very tricky.
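As a minimal sketch of the point above that even a step count is an algorithm, the toy example below thresholds a synthetic acceleration trace and counts peaks. The signal and the thresholds are made up, but they show how a cutoff tuned for one gait can miss another population entirely.

```python
# Minimal sketch: step counting as threshold-crossing on an acceleration trace.
import numpy as np

def count_steps(accel_magnitude, threshold):
    """Count upward crossings of the threshold in an acceleration-magnitude trace."""
    above = accel_magnitude > threshold
    crossings = np.logical_and(above[1:], ~above[:-1])   # False -> True transitions
    return int(crossings.sum())

t = np.linspace(0, 10, 500)                                              # 10 seconds of data
vigorous_gait = 1.0 + 0.8 * np.maximum(np.sin(2 * np.pi * 2 * t), 0)     # strong 2 Hz peaks
shuffling_gait = 1.0 + 0.3 * np.maximum(np.sin(2 * np.pi * 2 * t), 0)    # same cadence, smaller peaks

print(count_steps(vigorous_gait, threshold=1.5))    # ~20 steps counted
print(count_steps(shuffling_gait, threshold=1.5))   # 0 -- the algorithm, not the walker, failed
print(count_steps(shuffling_gait, threshold=1.15))  # a population-specific threshold recovers them
```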
To give you an example of why this is particularly difficult to regulate, so think about a couple of products that just look at something that would be simple, like AFib. So AFib is an abnormal heart condition. You might have seen in the news that a number of different companies are coming out with AFib products. Put simply, there is obviously a large stack of different types of data, and one person's raw data is another person's processed data. So what you see on this chart is a list of five different companies that are all developing AFib products, from whether or not they develop the product internally, which is the green part, versus whether or not they might use a third party product, so developing an app on top of somebody else's product. And so in a broad way, thinking about going from the operating system to the sensor data. So somebody might be using something like a PPG sensor and collecting this sort of data from their watch and then doing some sort of signal processing, then making another algorithm that makes some sort of diagnostic. And then you have some sort of user interface on top of that. So if you are the FDA, where would you draw the line? Which part of this product, when somebody says my product is validated, should it be actually validated? And then thinking about what does it actually mean if something is verified versus validated. So verified being like, if I walk 100 steps, does this thing measure 100 steps? And then validation being, does 100 steps mean something for my patient population or for my clinical use case? And so one of the things that the FDA has started to think through is how might you decouple the hardware components from the software components, where you think about some of the hardware components as the ways that you would-- effectively, the supply chain for collecting that data, and then you would be using something on top. And so maybe you have certain types of companies that might do some sort of verification or validation lower down the stack, and then you can innovate higher up. And so these measurements have pretty meaningful impacts. So in the past, a lot of these tools, you really had to go into the clinic. It was very expensive to get these sorts of measurements. And more and more, a number of different companies are getting their products cleared to use in some care settings with a doctor or possibly to inform decisions at home. All right. And so the last of the examples is around digital therapeutics. So I had worked with a company that was using a technology based out of UCSF that is developing, effectively, a video game for pediatric ADHD. And so when kids play the game, they reduce their ADHD symptoms. And one of the things that's pretty exciting about this game is that it is a 30-day protocol. And unlike something like Ritalin or Adderall, where you have to take that drug every day for the rest of your life to reduce the symptoms, this seems to have an effect that, after 30 days, is more long-term and that when you test somebody months down the line, they still retain the effects of the treatment. So this technology was taken out of UCSF and licensed to a company called Akili, who decided, hey, we should just structure ourselves like a drug company. So they raised venture capital like a drug company, they ran clinical trials like a drug company, and they're now submitting to the FDA and might be the first prescription video game. So anybody who was told that video games might rot your brain, you now have your revenge, maybe.
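Coming back to the verification-versus-validation distinction above, here is a minimal sketch with synthetic numbers: verification checks the device against known step counts, while validation checks whether the measurement tracks something clinically meaningful in the intended population. Both the error metric and the mobility score are illustrative choices, not a prescribed method.

```python
# Minimal sketch: verification vs. validation of a step-count measure, on synthetic data.
import numpy as np

# Verification: supervised walks with a known number of steps
truth  = np.array([100, 100, 250, 250, 500])
device = np.array([ 97, 104, 238, 259, 481])
mape = np.mean(np.abs(device - truth) / truth)
print(f"verification: mean absolute percentage error = {mape:.1%}")

# Validation: do weekly step counts track a clinical mobility score in the target population?
weekly_steps   = np.array([12000, 9500, 20000, 4000, 15000, 7000])
mobility_score = np.array([   62,   55,    80,   30,    70,   45])
r = np.corrcoef(weekly_steps, mobility_score)[0, 1]
print(f"validation: correlation with mobility score = {r:.2f}")
```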
So the FDA has been looking at more and more of these tools. I don't have to tell you, you're probably thinking a lot about it. And the FDA has been clearing a number of these different types of algorithms. And one of the questions that has come up is, what part of the agency should you think about? What are you claiming when you use these sorts of algorithms? And what ones should be cleared and what's not? And how should we really think about the regulatory oversight for them? And a lot of these technologies enable things that are really quite exciting, too. So it's not just about the measurement, but what you can do with them. So one thing that has a lot of people really excited about is an idea around decentralized clinical trials. No block chains here. You might be able to build it with a blockchain, but not necessary. So on the y-axis, you can think about where are the data collected. So is it collected at a clinical site, or is it collected remotely? And then the method is how it's collected. So do you need a human to do the interaction, or is it fully virtual? So at the top you can think about somebody doing telemedicine, where they call into somebody at home and then they might ask some questions and fill out a survey. On the bottom, you can imagine in a research facility where I'm using a number of different instruments, and perhaps I'm in a Parkinson's study and you're measuring my tremor with some sort of accelerometer. And so the challenge that's happening is a lot of people use all of these terms for different things when they mean decentralized trials. Is it telemedicine? Is it somebody who's instrumented with a lot of wearables? How do you know that the data are accurate? But this is, I think, in many instances really exciting because the number one reason why people don't want to enroll in a clinical trial is to get a placebo. I think nobody really wants to participate in research if you're not getting the actual drug. And then the other reason is location. People don't want to have to drive in, find parking, participate. And this allows people to participate from home. And the FDA has been doing a lot of work around how to rethink the clinical trial design process and incorporate some of this real world data into decision-making. Before I jump into some of the regulatory things, I want to just set a framework of how to think about what these tools can do. So these are three different scenarios of how you might use software in a piece of clinical research. So imagine that somebody has Parkinson's and you want to measure how their Parkinson's is changing over time using a smartphone-based test. You have a standard Parkinson's drug that they would use, and then you would collect the endpoint data, which is how you see if that drug has performed using a piece of software. Another idea would be, say you have an insulin pump and then you have a CGM that is measuring your blood sugar levels, and you want to dose the insulin based on those readings. You might have software both on the interventional side and on the endpoint side. Or, like the company we talked about, which has a digital product, they said the only thing we want to change in the study is that the intervention is digital, but we want you to compare us like any other intervention for pediatric ADHD. So we want to use standard endpoints and not make that an innovation. The challenge here is the first one, most likely, would go to the drug side of the FDA. 
The second one would go to both the drug and the device side of the FDA as a combination product. And the final one would just go to devices, which has been generally handling software. We've never really had products like this at the FDA, in my opinion-- we don't have drugs that can measure, diagnose, and treat in all these different ways. And so you're now having software hitting multiple different parts of a system, or it might even be the same product, but in one instance it's used as an intervention, in another instance it's used as a diagnostic, in another it's used to inform or expand labeling. And so the lines are not as clean anymore about how you would manage this product. So how do you manage these? There are a couple of agencies that are responsible for thinking through and overseeing health care software. The big one that we'll spend most of our time on is the FDA. But it's also worth thinking about how they interact with some of the other ones, including ONC, FCC, and FTC. So the FDA is responsible for safety and effectiveness and for facilitating medical product innovation and ensuring that patients have access to high quality products. The ONC is responsible for health information technology. And you can imagine where the lines between storing data and whether or not you're making a diagnosis on that data start to get really vague, and it really might be the exact same product but just a change of what claim you're making on that product. And most of these products have some level of connectivity to them, so they also are working with the FCC and have to abide by the ways that these tools are regulated by that agency. And then finally, and probably most interesting, is the FTC, which is really focused on informing consumer choice. And if you think about the FDA and the FTC, they're actually really similar. So both of these agencies are responsible for consumer protection, and the FDA really takes that with a public health perspective. So in many instances, if you've seen some of the penalties around somebody having deceptive practices, it actually wasn't the FDA who stepped in, it was the FTC. And I think some of the agencies are thinking about where their lines end and where others begin. And in many instances, as we've really seen with a lot of probably bad behavior that happens in tech, there are real gaps across multiple places where nobody's stepping in. And then there are also some non-regulatory agencies to think about. An important one is around standards and technology. You probably think about this all the time with interoperability and whether or not you can actually import the data. There are people who spend a lot of time thinking about standards. It is a very painful and very important job that promotes innovation. OK. So the FDA has multiple centers. I'm going to use a lot of acronyms, so you might want to write this down or take a picture. And I'll try to minimize my acronyms. But there are three centers that will be the most interesting for you. So CDER is the one for drugs, and this is the one where you would have a regular drug and possibly use a software product to see how that drug is performing. CDRH is for devices. And CBER is for biological products. I will probably use drugs and biologics in a very similar sort of way. And the distinctions that we'll spend most of our time on are around drugs versus devices. There's a bunch of policy coming out that is both exciting and making things change. So one of the big ones is the 21st Century Cures Act.
This has accelerated a lot of innovation in health care. It's also changed the definition of what device is, which has a pretty meaningful impact on software. And the FDA has been thinking a lot about how you would actually incorporate these products in. I think there is a lot of people who are really excited about them. There's a lot of innovation, and so how do we create standards both to expand labeling, be able to actually ingest digital data, and have these sorts of digital products that are actually under FDA oversight and not just weird snake oil on the app store? But what is a medical device? Pretty much, a device is like anything that's not the other centers, which has a big catch-all for all the other components. And so one of the big challenges for people is thinking about what a device is. If you think about generally what the FDA does, it doesn't always make sure that your products are safe and effective. They check whether or not you claim that they are safe and effective. So it's really all about claims management and what you're claiming that this product can do and evaluating that for marketing. Obviously if your product causes very significant harm, that is an issue. But the challenge really happens to be when somebody makes-- the product can do something that it doesn't necessarily claim to do, but then you are able to imply that it does other things. Most people don't really have a really good understanding of what the difference is between informing a product versus diagnosing a product, and so I think in many instances for the public, it gets a bit confusing. So as we talked about before, the FDA has been thinking about how do you decouple the hardware from the software, and they've come up with a concept around software as a medical device, so the software that is effectively defined as having no hardware components where you can evaluate just this product, and this is pronounced SaMD. And SaMDs are pretty interesting. This is very hard to read, but I pulled it straight from legal documents, so you know I'm not changing it. So something that's interesting about SaMD-- so if you go all the way to the end-- so if you have electronic health care data that's just storing health data, that is not a SaMD and often can go straight to market and is not regulated by the FDA. If you have a piece of software that is embedded into a system, so something like a pacemaker or a blood infusion pump, then that is software in a medical device, and that's not a SaMD. So there's a line between these about what the functionality is that the product is doing, and then how serious is it, and that informs how you would be evaluated for that product. And if you haven't noticed, I try not to almost ever use the term device. So when I talked about these connected wearables and other sorts of tools, I will use the word tool and not device because this has a very specific meaning for the FDA. And so if you're curious whether or not your product is a device or your algorithm is a device, first, you should talk to the regulators and talk to your lawyer. And we'll play a little game. So there are two products here. One is an Apple product and one is a Fitbit product. Which one is a device? I'm going to call on somebody randomly, or someone can raise their hand and offer as tribute. OK, which one? AUDIENCE: I think Apple received 510(k) clearance, so I'd say the Apple Watch device. I'm not sure about the Fitbit, but if it's one or the other, then it's probably not. ANDY CORAVOS: That's very sharp. 
So we'll talk about this. Apple did submit to the FDA for clearance, and they submitted for de novo, which is very similar to a 510(k). And they submitted for two products, two SaMDs. One was based on the signal from their PPG, and the second was on the app. So it has two devices, neither of which are hardware. And the Fitbit has, today, no devices. How about now? Is it a device, or is it not a device? Trick question, obviously, because there are two devices there, and then a number of things that are not devices. So it really just depends on what you are claiming the product does. And back to that set of modularity, what is actually the product? So is the product a signal processing algorithm? Is the product an app? Is the product the whole entire system? And so people are thinking about, strategically, frankly, which parts are devices because you might want somebody else to be building on your system. So maybe you want to make your hardware a device, and then other people can build off of it. And so there are strategic ways of thinking about it. So the crazy thing here, if you can imagine this, is that the exact same product can be a device or not a device through just a change of words and no change in hardware or code. So if you think about whether or not my product is a device, it's actually generally not the most useful question. The more useful question is, what is the intended use of the product? And so, are you making a medical device claim with what your product is doing? Obviously this is a little bit overwhelming, I think, in trying to figure out how to navigate all of this. And the FDA recognizes that, and their goal is to increase innovation. And so particularly for products like software, they're having constant updates. It seems a little bit difficult if you're constantly figuring out all the different words and how you're going to get these products to market. So something that I think is really innovative by the FDA is piloting-- this is one example of a program that they're thinking through, which is working with nine different companies. And the idea is, can you pre-certify an entire company that is developing software as an excellent company across a series of objectives, and then allow them to ship additional updates? So today, if you had an update and you wanted to make a change, you have to go through an entire 510(k) or de novo process or other type of process, which is pretty wild. If you imagine that we only would let Facebook ship one update a year, that would be crazy. And we don't expect Facebook to maintain or sustain a human life. And so being able to have updates in a more regular fashion is very important. But how do you know that that change is going to have a big impact or not? And I'll pause on this, but you all read the document. I'm actually very glad that you read this document without talking to us because you were the exact audience of somebody who would not necessarily have the background, and so it needs to be put in a way that is readable for people who are developing these types of products to know how to go into them. We'll save some time at the end of the discussion because I'm curious how you perceived the piece. But you should definitely trust your first reading as a honest, good reading. You also probably read it way more intensely than any other person who is reading it, and so the notes that you took are valid. And I'm curious what you saw. OK. Another thing to help you be cool at cocktail hour. 
FDA cleared, not the same thing as FDA approved. OK. So for devices, there are three pathways to think about. One is the 510(k), the next is de novo, the next is a premarket approval, also known as a PMA. They're generally stratified by whether or not something is risky. And the type of data that you have to submit to be able to get one of these clearances varies. So the more risky you are, the more type of data that you have to have. So de novos are granted, but people often will say cleared. 510(k)s are cleared. Very few products that you've seen go through a PMA process. AUDIENCE: I have a question. ANDY CORAVOS: Tell me. AUDIENCE: Do you know why Apple chose to do a de novo instead of a 510(k)? ANDY CORAVOS: I am not Apple, but if I had to guess, once you create a de novo, you can then become a predicate for other things. And so if they wanted to create a new class of predicates that they could then build on over time, and they didn't want to get stuck in an old type of predicate system, I think, strategically, the fact that they picked a PPG and their app-- I don't know what they'll eventually do over time, but I think it's part of their long-term strategy. Great question. OK. So the tools are safe and effective, perhaps, depending on how much data is submitted. But what about the information that's collected from the tools? So today, our health care system has pretty strong protections for biospecimens, your blood, your stool, your genomic data. But we really don't have any protections around digital specimens. You can imagine how many data breaches we constantly have and what ads get served to us on Facebook. A lot of this is considered wellness data, not actually health data. But in many instances, you are finding quite a lot of health information from somebody in that. And I have a lot more. We can nerd about this forever. But generally, there's a couple of things that are good to know, is that with most of this data, you can't really de-identify it anymore. Who here thinks I could de-identify my genome? You can't, right? My genome's unique to me. Maybe you can strip out personally identifiable information, but you're not really going to de-identify it. I am uniquely identifiable with 30 seconds of walk data. So all of this biometric signatures is pretty specific. And so there are some agencies today who are thinking about how you might handle these sorts of tools. But in the end, there is, I think, a pretty substantial gap. So in general, the FDA is really focused on safety and efficacy, and safety is considered much more of a body safety and not as a we are very programmable as humans in the type of information that we see or change type of safety. So the data that we collect-- FTC could have a lot of power here, but they're a much smaller agency that isn't as well-resourced. And there's a couple of different organizations that are trying to think through how to do rulemaking for Internet of Things and how that data is being used. But generally, in my opinion, we probably need some sort of congressional action around non-discrimination of digital specimen data, which would require a Congress that could think through, I think, a really difficult problem of how you would handle data rights and management. OK. So I'll go through a couple examples of how government agencies are interacting with members of the public, which I think you might find interesting. 
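As a rough mnemonic for the three device pathways above, here is a hedged sketch mapping an assumed risk class and the existence of a predicate to the likely submission route. The real choice depends on the product code, the intended-use claim, and conversations with the agency, so treat this as a study aid rather than regulatory advice.

```python
def likely_pathway(risk_class, has_predicate):
    """Toy mapping from (assumed) risk class and predicate status to a pathway.

    A simplification only: actual pathway selection is driven by the product
    code, the claims being made, and FDA feedback, not just these two inputs.
    """
    if risk_class == 3:
        return "PMA (premarket approval) - highest-risk devices; 'approved'"
    if has_predicate:
        return "510(k) - substantial equivalence to a predicate; 'cleared'"
    return "De novo - novel low-to-moderate-risk device, no predicate; 'granted'"

print(likely_pathway(risk_class=2, has_predicate=False))  # the de novo route discussed above
```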
So, many of the government agencies are really thinking through-- realizing that they are not necessarily the experts in their field-- how they get the data that they need. So a couple of pieces will be interesting for you, I think. One is there is a joint group with the FDA and Duke, where they're thinking through what's called novel endpoints. So if you are working on a study today where you realize that you're measuring something better than the quote gold standard, and the gold standard is actually quite a terrible gold standard, how do you create and develop a novel metric that might not have a reference standard or a legacy standard? And this is a way of thinking through that. The second is around selecting a mobile technology. This used to be called mobile devices, and they changed it for the same reason around not calling things a device unless it is a device. And so these are thinking through what type of connected tech you would want to use to generate the patient data that you might use in your study. All right. Who here knows what DEFCON is? Three of you. OK. So DEFCON is a hacker conference. It is probably one of the biggest hacker conferences. It is a conference that, if you do have the joy of going to, you should not bring your phone and you should not bring your computer, and you should definitely not connect to the internet, because there is a group called the Wall of Sheep, and they will just straight stream all your Gmail passwords in plain text and your account logins and anything that you are putting on the internet. This group is amazing. You may have also heard about them because they bought a number of voting machines last year, hacked them, found the voting records, and sent them back to Congress and said, hey, you should probably fix this. DEFCON has a number of villages that sit under the main DEFCON. One of them is called the Biohacking Village. And there is some biohacking, so, like, doing the RFID chipping, citizen science. But there's also a set of people at the Biohacking Village who do what's called white hat hacking. So for people who know about this, there's black hat hacking, where you might encrypt somebody's website and then hold them for ransom and do things that are disruptive. White hat hackers are considered ethical hackers, where they are doing security research on a product. So the hackers in the Biohacking Village started to do a lot of work on pacemakers, which are connected technologies. A lot of pacemaker companies-- an easy way to think about how they're thinking about this is that the pacemaker companies are generally trying to optimize for battery life. They don't want to do anything that's computationally expensive. Turns out, encrypting things is computationally expensive. The researchers did a relatively trivial exploit where they were able to reverse engineer the protocol. Pacemakers stay in a low power mode as long as they can. If you ping one, it will turn into high power mode, so you can drain a multi-year battery of the pacemaker in a couple of days or weeks. They were also able to reverse engineer the shock that a pacemaker can deliver upon a cardiac event. And so this has pretty significant implications for what this exploit can do. With any normal tech company, when you have an exploit of this type, you can go to Facebook, you can go to Amazon, there is something called a coordinated disclosure, you might have a bug bounty, and then you share the update, you can submit the update, and then you're done.
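Before getting to how the device companies responded, a quick back-of-the-envelope sketch shows why that battery-drain exploit is so serious. All of the numbers below (battery capacity, sleep and active current draws) are made-up assumptions for illustration; they are not taken from any real pacemaker.

```python
def battery_life_days(capacity_mah=2000.0, sleep_current_ma=0.01,
                      active_current_ma=5.0, active_fraction=0.0):
    """Rough battery life estimate given the fraction of time in high-power mode.

    All constants are assumed, illustrative values, not real device specs.
    """
    avg_current_ma = ((1.0 - active_fraction) * sleep_current_ma
                      + active_fraction * active_current_ma)
    return capacity_mah / avg_current_ma / 24.0

print(round(battery_life_days(active_fraction=0.0)))  # mostly asleep: ~8333 days (decades)
print(round(battery_life_days(active_fraction=1.0)))  # pinged into active mode constantly: ~17 days
```

Under these toy numbers, a device that sleeps almost all the time lasts decades, while one an attacker keeps awake is dead in a couple of weeks, which matches the multi-year-to-days collapse described above.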
With the device companies, what was generally happening is the researchers were going to the device companies, hey, we found this exploit, and the device companies were saying, thank you, we are going to sue you now. And the security researchers were like, why are you suing us? And they said, you're tampering with our product, we are regulated by agencies, we can't just ship updates whenever we want, and so we have to sue you. Turns out that is not true. And the FDA found out about this and they're like, you can't just do security researchers. If you have a security issue, you have to fix that. And so the FDA did something that was pretty bold, which was three years ago, they went to DEFCON. And if anyone has actually gone to DEFCON, you would know that you do not go to DEFCON if you are part of the government because there is a game called Find the Fed, and you do not want to be found. And of course, NSA, CIA, a lot of members of the government will go to DEFCON, but it is generally not a particularly friendly environment. The Biohacking Village said, hey, we will protect you, we will give you a speaker slot, we really want to work together with you. And so over the last three years, the agency has been working closely with security researchers to really think through the best ways of doing cybersecurity, particularly for connected devices. And so if you look at the past couple of guidances, there's a premarket and post-market guidance where they've been collaborating, and they're very good and strong guidances. So the FDA did something really interesting, which was in January, they announced a new initiative, which I think is quite amazing, called #WeHeartHackers. And if you go to WeHeartHackers.org, the FDA has been encouraging device manufacturers, like Medtronic and BD, and Philips, and Thermo Fisher and others, to bring their devices and work together with security researchers. Another group that is probably worth knowing is that if you think about what a lot of these connected products do, they, in many instances, might augment or change the way that a clinician does their work. And so today, if you are a clinician and you graduate from med school, you would take something like a Hippocratic oath to do no harm. Should the software engineers and the manufacturers of these products also take some sort of oath to do no harm? And would that oath look similar or different? And that line of thinking helped people realize that there are entire professional communities and societies for people who do this sort of thing for doctors in their specialties, so a society for neuro oncology, society for radiology. But there's really no society for people who practice digital medicine. So there is a group that is starting now, which you all might like to join because I think you would all be part of this type of community, which is the society for-- it's called the DIME Society. And so if you're thinking through, how do I do informed consent with these sorts of digital products, what are the new ways that I need to think through regulation, how am I going to work with my IRB, this society could be a resource for you. All right. So how do you participate in the rulemaking process? One is, I would highly encourage, if you get a chance to, to serve some time in government. There are more opportunities to do that through organizations like the Presidential Innovation Fellow, to be an entrepreneur resident somewhere, to be part of the US Digital Service. 
The payment system of CMS is millions of line of COBOL, and so that obviously needs some fixing. And so if you want to do a service, I think this is a really important way. Another way that you can do it is submitting to a public docket. And so this is something I will be asking you to do, and we'll talk about it after, is how can you take what you learned in that white paper and ways that you can share back with the agency of how you would think about developing rules and laws around AI and machine learning. There's a much longer resource that you can look at, my friend Mina wrote, which is that-- these are a couple of things to know. So anyone can comment, you will be heard. If you write a very long comment, someone at the agency, probably multiple, will have to read every single thing that you write, so please be judicious in how you do that. But you will be heard. And most of the time comments come from big organizations and people who have come together and not from the people who are experiencing and using a lot of the products. So in my opinion, I think someone like you is a really important comment and voice for the agency to have, and to have a technical perspective. Another way that you can do this, which I'm going to put Irene on the spot, is we need new regulatory paradigms. And so when you are out at beers or ice cream, or whatever you do for fun, you can think through new models. And so we were kicking around an idea of, could you use a clinical trial framework to think about AI in general? So algorithms perform differently on different patient populations and different groups. You need inclusion/exclusion criteria. Should this be something maybe we even expand beyond health care algorithms to how you decide whether or not someone gets bail or teacher benefits? And then the fun thing about putting your ideas online, if you do that, is then people start coming to you. And we realized there was a group in Italy who had proposed a version of FDA for algorithms, and you start to collect people who are thinking about things that you're thinking about. And now we will dig into the thing that you most likely will spend more time with than the government, which is your IRB. MARK SHERVEY: Thank you. OK. I could probably not give the rest of this talk if you just follow the thing on the bottom. If you don't know if you're doing human subject research, ask the IRB, ask your professor, ask somebody. I think most of what I'm going to say is going to be a lot softer, squishier than what Andy went around, and it's really just to try to get the thought process going through your head of if we're doing actual human research, if the IRB has to be involved, what actually constitutes human research? And just to be sure that you're aware of what's going on there all the time. We've done this. So research is systematic investigation to develop or contribute generalizable knowledge. So you can do that on a rock. What's important about human subjects research is that people's lives are on the line. Generally, the easiest thing to know is if there's any sort of identifiable information with the data that you're working with, that is going to fall under human subjects research. Things that won't are publicly available, anonymous data. There's all sorts of imaging training data sets that you can use that are anonymized to what is an acceptable level. But to Andy's point, there's really no way to truly de-identify a data set. 
And with the amount of data that we're working with all right now in the world, it's becoming impossible to de-identify any data set if you have any other reference data set. So anytime you're working with any people, you are almost certainly going to have to involve the IRB, again. So why the IRB is there, it's not specifically to slap you on the wrists. It's not that anything's expected to purposely do anything wrong. Although that has happened, that's such a small amount that it's just unhelpful to think that everybody is malicious. So you're not going to do anything particularly wrong, but there are things that you just may not know. And this is not the IRB's 1,000th rodeo, so if you bring something up to them, they'll know almost immediately. Participants are giving up their time and information, so the IRB, more than keeping the institution from harm, is really protecting the patients first and the institution at the same time. But the main role is to protect the participants. Specifically, here's something that might not go through everybody's head, research that may be questionable or overly manipulative. That gets into compensation for studies. You can imagine certain places in an impoverished nation that you say, we'll pay $50,000 per person to come participate in this study, you can imagine people want to be in that study and it can become a problem. So the IRB is also a huge part of making sure that the studies aren't actually affecting anybody negatively in that kind of sense. Now, before I do, this next slide gets dark for a second, so we'll try to move through it. But it talks about how the IRB came about. So we start with the Nuremberg Code, human research conducted on prisoners and others, not participants but subjects of research. Tuskegee experiment, another thing that people were not properly consented into the studies. They didn't know what they were actually being tested for, so they couldn't possibly have consented. The study went for 40 years instead of six months. And even after a standard of care had been established, the study continued on. That essentially is what began the National Commission for Protection of Human Subjects, which led to the IRB being the requirement for research. And then five years later, the Belmont Report came out, essentially enumerating these three basic principles, respect for participants, beneficence as far as do no harm, don't take extra blood if it just makes it more convenient, don't add extra drug if you just want to see what happens, and then just making sure that participants are safe outside of any other harm that you can do. So we follow the Belmont Report. That's essentially the state of the art that we have now with modernization moving forward. This is not something to really worry about, but HHS has a great site that has a flow chart for just about any circumstance that you can think of to decide if you're actually doing human subjects research or not. This is pretty much the most basic one. You can go through it on your own. Just to highlight the main thing that I think you guys will all probably be worried about, is you will be collecting identifiable data, which just immediately puts you in IRB land. So anytime you can identify that that's a thing that's happening, you're just there, so you don't really have to go through any of this. What is health data? So you have names obviously. Most of these are either identifications or some sort of identifying thing. 
The two, I guess, that a lot of people maybe gloss over that aren't so obvious is zip codes. You have to limit them to the first three numbers of a zip code, which gives a generalizable area without actually dialing in on a person's place. Dates are an extremely sensitive topic. So anytime you're working with actual dates, which I assume in wearable technologies you're going to be dealing with time series data and that kind of stuff. There are different ways of making that less sensitive. But anytime you're dealing with research, anytime we're dealing with the electronic health records, we deal in years, not in actual dates, which can-- it creates problems if you are trying to do time series analysis for somebody's entire health record, in which case you can get further clearance to work with more identifiable data. But that is progressive as it can be. There's no reason to start with that kind of data if you don't. So it's always on a need to know. Finally, if you're working with patients older than 90, 90 or older, they are just generalized as a category of greater than 90. The rest of these, I think, are fairly guessable, so we don't have to go through them. But those are the tricky ones that some people don't catch. Again, just limit the collection of PHI as strictly as possible. If you don't need it, don't get it. If you're sharing the data, instead of sharing an entire data set if you do have strong PHI, limit what you're giving or sharing to another researcher. That's just a hygiene issue, and it's really limiting the amount of errors that can happen. So why is this so important? The IRB, again, is particularly interested in protecting patients and making sure that there's as little harm, if any, done as possible to patients. Just general human decency and respect. There's institutional risk if something is done without an IRB, and you can't publish if you have done human subjects research without an IRB. Those two are kind of the stick, but the carrot really should be the top two, as far as just human decency and making sure that you've protected any patients or any participants that you have involved in your research. These are a couple of violations. We don't have to get too far into it, but they were both allegedly conducted without any IRB approval. There's possible fraud involved, and it ruined both of their careers. But it put people at huge exposures to unhealthy conditions. This is probably a much bigger common issue that you're going to have. PHI data breaches, they happen a lot. They're generally not breaches from the outside. They're accidents. Somebody will set up a web server on the machine serving PHI because they found it easier to work at home one day. It could just be they don't know how software is set up. So anytime you're working with PHI, you've really got to overdo it on knowing exactly how you're working with it. Other breaches are losing unencrypted computers, putting data on a thumb drive and losing it. The gross amount of data breaches happen just from negligence and not being as careful as you want to be. So that's always good to keep in mind. I guess a new thing with the IRB and digital research is things have been changing now from face to face recruitment and research into being able to consent online to be able to reach millions of people across the world and allowing them to consent on their own. 
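The zip-code, date, and age rules described above are concrete enough to sketch in code. The snippet below is a minimal, Safe Harbor-style generalization with assumed field names; real de-identification has more to it (for example, a handful of sparsely populated three-digit zip prefixes must be suppressed entirely), so follow your IRB and privacy office rather than this toy.

```python
from datetime import date

def generalize_record(record):
    """Toy HIPAA Safe Harbor-style generalization of one record (assumed field names).

    Keeps only the first 3 digits of the zip code, the year of dates,
    and top-codes ages of 90 and above as '90+'.
    """
    return {
        "zip3": str(record["zip"])[:3],
        "birth_year": record["birth_date"].year,
        "age": "90+" if record["age"] >= 90 else record["age"],
    }

print(generalize_record({"zip": "02139", "birth_date": date(1928, 5, 17), "age": 96}))
# {'zip3': '021', 'birth_year': 1928, 'age': '90+'}
```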
So this has become, obviously, a new thing since the Belmont Report, and it's something that we are working closely with our IRB to make sure that we're being as respectful as we can to the patients, but also making sure that we can develop software solutions that are not hurting anybody and develop into swim lanes. So what we've come up with a framework for is that there's a project which is-- we're studying all cancers. So you can post reports about different research that's going on, things that seem important. A study is an actual person that's consented to a protocol, which is human research and subject to IRB. Then we'll have a platform that the users will use, and that will be like a website or an iPhone app that they can get literature information about what the project is going on. And then we'll have a participant who is actually part of a study, who's, again, covered under IRB through consent. So why this kind of development has been important, the old way of software development was the waterfall approach, where you work for three weeks, implement something, work for three weeks, implement something, where we have moved to a Agile approach in software. And so while Agile makes our lives a lot easier as far as development, we can't be sure what we're doing isn't going to affect patients in certain contexts. So within a study, working Agile makes no sense. We have we want to work with the IRB to approve things, but IRB approval takes between two and four weeks for expedited things. When we talk about projects and stuff, that's where we want to work safely in an Agile environment and try to figure out places where the IRB doesn't necessarily have to be involved or doesn't want to be involved and that there isn't any added patient risk whatsoever in working in that kind of environment. So it's working with software products versus studies, and so working with the IRB to be sure that we can separate those things and make sure that things move on as well as possible without any added harm. So that's these categories again. So project activity would be social media outreach, sharing content that is relevant to the project and kind of just informing about a general idea. A study activity is what you would generally be used to with consent, data sharing, actually participating in a study, whether it's through a wearable, answering questions, and then withdrawing in the process. And the study activities are 100% IRB, where the project activities that aren't directly dealing with the study can hopefully be separated in most cases. So the three takeaways really are just if you don't know, ask, limit the collection of PHI as strictly as possible, and working in Agile developments are great but it is unsafe in a lot of human research, so we have to focus on where that can be used and where it can't. And that's it. Thank you. [APPLAUSE] Oh. AUDIENCE: I have a question about how it's actually done. So as the IRB, how do you make sure that your researcher is complying? Is that, like, writing a report, doing a PDF, or is there a third party service? MARK SHERVEY: Yeah, yeah. So we certify all of our researchers with human research and HIPAA compliance, just blanket. And if you provide that and your certifications are up to date, it's an understanding that the researcher knows what they should be looking out for and that the IRB understands. AUDIENCE: So is that a third party? MARK SHERVEY: Oh, yeah, yeah. We use a third party. You can have-- I don't-- we use a third party. 
PROFESSOR: Can I just add-- MARK SHERVEY: Oh, yeah. PROFESSOR: So at MIT, there's something called COUHES, the Committee on Use of Humans as Experimental Subjects, and they are our official IRB. It used to be all paper. Now there's an electronic way where you can apply for a COUHES protocol. And it's a reasonably long document in which you describe the purpose of the experiment, what you're going to do, what kind of people you're going to recruit, what recruiting material you're going to use, how you will handle the data, what security provisions you have. Of course, if you're doing something like injecting people with toxins, then that's a much more serious kind of thing, and you have to describe the preliminary data on why you think this is safe and so on. And that gets reviewed at, essentially, one of three levels. There is exempt review, which is-- you can't exempt yourself, but they can exempt you. And what they would say is, this is a minimal risk kind of problem. So let's say you're doing a data only study using mimic data, and you've done the city training, you've signed the data use agreement. You're supposed to get IRB permission for it. There is an exception for students in a classroom, in which case I'm responsible rather than making you responsible. But if you screw it up, I'm responsible. The second level is an expedited approval, which is a low risk kind of approval, typically data only studies. But it may involve things like using limited data sets, where, for example, if you're trying to study the geographical distribution of disease, then you clearly need better geographical identifiers than a three-digit zip code, or if you're trying to study a time series, as Mark was talking about, you need actual dates. And so you can get approval to use that kind of data. And then there's the full on review, which takes much longer, where they do actually bring in people to evaluate the safety of what you're proposing to do. So far, my experience is that mostly with the kinds of studies that we do that are representative of the material we're studying in class, we don't have to get into that third category because we're not actually doing anything that is likely to harm individual patients, except in a kind of reputational or data-oriented sense, and that doesn't require the full blown review. So that's the local situation. MARK SHERVEY: Yeah, thank you. Yeah, I think I misunderstood the full range of the question. Yeah, and that's roughly our same thing. So we have-- Eddie Golden is our research project manager, who is my favorite person in the office for this kind of stuff. She keeps on top of everything and makes sure that the right people are listed on research and that people are taken off, that kind of stuff. But it's a good relationship with the IRB on that kind of stuff. Yeah? AUDIENCE: So I'm somewhat unfamiliar with Agile software development practices. On a high level, it's just more parallelized and we update more frequently? MARK SHERVEY: Yeah, yeah. I don't know if I took that slide out, but there's something where Amazon will deploy 50 million updates per year or something like that. So it's constantly on an update frequency instead of just building everything up and then dropping it. And that's just been a new development in software. AUDIENCE: Can we ask questions to both you guys? ANDY CORAVOS: Yeah. AUDIENCE: Can you tell us more about Elektra Labs? I couldn't fully understand. Are you guys more consultantancy for all these, we'll call them, tool companies? 
Or is it more like a lobbying kind of thing? The reason I ask this is also because I wonder what your opinion is on a third party source for determining whether these things are a good or bad kind of thing because it seems like the FDA would have trouble understanding. So if you had some organic certified kind of thing, would that be a useful solution? Or where does that go wrong? ANDY CORAVOS: Mm-hm, yeah. So what we're building with Elektra is effectively a pharmacy for connecting technologies. So the way that today you have pharmacies that have a formulary of all the different drugs that are available, this is effectively like a digital pharmacy, like a Kelley Blue Book of all the different tools. And then we're building out a label for each of them based on as much objective data as we can, so that we're not scoring whether or not something's good or bad. Because in most instances, things aren't good or bad and absolute, they're good or bad for a purpose. And so you can imagine something-- maybe you need really high levels of accuracy, so you need to know whether or not that tool has been verified and validated in certain contexts in certain patient populations. Even if the tool's accurate, if you're to recharge it all the time or you can't wear it in the shower, you won't have the usability, or if the APIs are really hard to work with. And then security profile, whether or not they have coordinated disclosure, how they handle things like the tool companies, like a software bill of materials and what kind of software is used. And then even if the tool is accurate, even if it's relatively usable, even if it's secure, that doesn't solve the Cambridge Analytica problem, so how tools are doing a third party transfer. And so one of the philosophies is we don't score, but we are building out the data set so when you are evaluating a certain tool, it's like a nutrition label. Sometimes you need more sugar, sometimes you need more protein. Maybe you need more security, maybe you really need to think about the data rates. Maybe you can take a leave on some of the accuracy levels. And so we're all building out this ability to evaluate the tools, and then also to deploy them like the way that a pharmacy would deploy them out. One thing I would like to do with the group, if you all are down for it, out of civic duty-- and I'm serious, though. Voting is very important and submitting your comments to the public register is very important. And I read all your comments because Irene sent them to me, and they were very good. And I know probably people who came here want to polish everything and make them perfect. You can submit them exactly how they are. And I am very much hoping that we get about 95% of you to submit, and the 5% of you that didn't, like, your internet broke or something. You can submit tonight. I will email Irene because you already have done the work, and you can submit it. But I would like to just hear some of your thoughts. So what I'm going to do is I'm going to use that same framework around, what would you keep? What would you change? And then change can also include, like, what was so confusing in there, that it didn't even really make sense? Part of the confusion might be that it was-- some regulations are confusing. But some of the confusion is that part of that document was not written by people who-- some people have technical backgrounds and some do not. 
And so sometimes some of the language might not actually be used in the way that industry is using it today, so refining the language. And then what did you see that was missing? So here's what we're going to do. Keep, change slash confusing, and then start or add. And before I ask you, I want you to look at the person next to you, seriously, and if there's three of you, that's fine, and I want you to tell them when you will be submitting the comment. Is it tonight, tomorrow, or you are choosing not to? Just look at them and talk. [INDISTINCT CHATTER] There will be a link. I will send you all links. I will make this very easy. OK. All right. Who wants to start? We got three things on the board. Yes? AUDIENCE: So one thing that-- I don't know if this is confusing or just intentionally vague, but for things like quality systems and machine learning practices, who sets those standards and how can they be adapted or changed? ANDY CORAVOS: Mm-hm. I don't also know that answer, and so I would like you to submit-- one of the things that is nice is then that you have to respond, yeah. And I think it's also a little bit confusing, even the language. So people are using different things. People call it GXP, good manufacturing practice, good clinical practice. These are maintained, I think, in some instances by different orgs. I wonder if good algorithm practice gap or good machine learning practice-- yeah, that's a good thing. So who owns GXP? OK. Yes? You didn't have a question? No. AUDIENCE: Just wanted to share something. ANDY CORAVOS: Yes. AUDIENCE: One of the things I found really to keep were the examples in the appendix. I don't know [INAUDIBLE]. --general guidelines, and so it's more that the language itself is more generalized and so the examples are really hopeful for what is a specific situation that's analogous to make. ANDY CORAVOS: Yep. Like that? Yeah, examples are helpful. Yep? AUDIENCE: Speaking of specifics, I thought around transparency they could have been much more specific and that we should generally adhere to guidelines as opposed to the exact set of data that is-- this algorithm is exactly what's coming out of it, the exact quality metrics, things like that that hold people accountable as opposed to there are many instances to not be transparent. And so if those aren't as specific, I worry that not that much really would happen there. The analog like I thought of was when Facebook asks for your data, they say here are the things that we need or that we're using, and it's very explicit. And then you can have a choice of whether or not you actually want that. ANDY CORAVOS: OK. AUDIENCE: So seeing something like that [INAUDIBLE].. ANDY CORAVOS: So part of it is transparency, but also user choice in data selection or-- AUDIENCE: Yeah, I think that was, for me, more of an analog because choice in the medical setting is a bit more complex. Someone who doesn't have the ability in that case or the knowledge to actually make that choice. ANDY CORAVOS: Yeah. AUDIENCE: I think at the very least saying this algorithm is using this, and maybe some sort of choice. So you can work with someone, and maybe there's some parameters around what you would or would not have that choice. ANDY CORAVOS: Yep. Yes? AUDIENCE: What if you added something about algorithm bias? Because I know that that's been relevant for a lot of other industries in terms of confidence within the legal system, and then also in terms of facial recognition not working fully across races. 
So I think that breaking things down by population and ensuring equitable performance across different populations is important. ANDY CORAVOS: Yep. I don't know if I slept enough, so if I just gave this example-- but a friend of mine called me last week and asked about PPGs, the sensor on the back of the watch. She was asking me if it works on all skin colors and whether or not it responds differently. And if it responds differently, whether or not somebody has a tattoo. And so for some of the big registries that are doing bring-your-own-device data, you can have unintended biases in the data sets just because of how the signal is processed. So yeah. What do you think? What are ways-- I think Irene's worked with some of this. How do you think about whether or not something is-- what would be a good system for the agency to consider around bias? AUDIENCE: I think maybe coming into consideration with [INAUDIBLE] system might be part of the GMLP. But I think it would be the responsibility of the designer to assess [INAUDIBLE]. ANDY CORAVOS: OK. AUDIENCE: As a note, bias is our next lecture, so anyone who might be confused or want to talk about it more, we will have plenty of material next time. ANDY CORAVOS: You want to pick someone? MARK SHERVEY: I'm sorry. Go ahead. AUDIENCE: Me? MARK SHERVEY: Yeah. ANDY CORAVOS: Cold call. [LAUGHTER] Yeah? AUDIENCE: Just to add on at another point, it looked like there was a period for providing periodic reporting to the FDA on updates and all that. There could also be something like a scorecard of bias on subpopulations, or something to that effect. ANDY CORAVOS: Mm-hm. That's cool. Have you seen any places that do something like that? AUDIENCE: I remember when I read Weapons of Math Destruction by Cathy O'Neil, she mentioned some sort of famous audit. But I don't really remember the details. ANDY CORAVOS: Yeah. When you do submit your comment, if you have ideas or links, or it can be posts or blogs or whatever, just link them in, because one thing that you'll find is that we read a lot of things, probably the same things on Twitter, but other groups don't necessarily see all of that. So I think Cathy O'Neil's work is really interesting, but yeah, just tag stuff. It doesn't have to be formatted amazingly. PROFESSOR: So in some of the communities that I follow, not on Twitter but email and on the web, there's been a lot of discussion about really terrible design of information systems in hospitals and how these lead to errors. Now, I know from your slide, Andy, that the FDA has defined those to be out of its purview. But it seems to me that there's probably, at the moment, more harm being done by information systems that encourage really bad practice or that allow bad practice than there is by retinopathy AI or machine learning techniques that make mistakes. So just this morning, for example, somebody posted a message about a patient who had a heart rate of 12,000, which seems extremely unlikely. [LAUGHTER] ANDY CORAVOS: Yep. PROFESSOR: And the problem is that when you start automating processes that are based on the information that is collected in these systems, things can go really screwy when you get garbage data. ANDY CORAVOS: Yeah. Have you thought about that with your system? MARK SHERVEY: We cannot get good data. I mean, you're not going to get good data out of those systems. What you're seeing is across the board, and there's not much you can do about it other than validate good ranges and go from there. PROFESSOR: Well, I can think of things to do.
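Two of the ideas in this exchange are concrete enough to sketch. First, the "scorecard of bias on subpopulations": compute the same performance metric separately per subgroup and report the spread. The sketch below uses sensitivity (true positive rate) per group; the group labels and the tiny example are placeholders, and a fuller scorecard would add false positive rate, calibration, and per-group sample sizes.

```python
from collections import defaultdict

def subgroup_scorecard(y_true, y_pred, groups):
    """Per-subgroup true positive rate, as a minimal 'bias scorecard'."""
    tp = defaultdict(int)
    pos = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1:
            pos[g] += 1
            if p == 1:
                tp[g] += 1
    return {g: (tp[g] / pos[g] if pos[g] else None) for g in pos}

# Hypothetical example: sensitivity differs across (placeholder) skin-tone groups.
print(subgroup_scorecard([1, 1, 1, 1, 0, 1],
                         [1, 0, 1, 1, 0, 0],
                         ["light", "light", "dark", "dark", "dark", "dark"]))
# roughly {'light': 0.5, 'dark': 0.67}
```

Second, the "validate good ranges" defense against values like a heart rate of 12,000: flag anything outside a plausible physiologic window before it feeds an automated process. The bounds below are common-sense placeholders, not a clinical standard.

```python
PLAUSIBLE_RANGES = {
    # Assumed, common-sense adult bounds; tune per population and per unit.
    "heart_rate_bpm": (20, 300),
    "temperature_c": (30.0, 45.0),
    "systolic_bp_mmhg": (50, 300),
}

def validate_vital(name, value):
    """Return (value, None) if plausible, else (None, reason-to-flag)."""
    low, high = PLAUSIBLE_RANGES[name]
    if low <= value <= high:
        return value, None
    return None, f"{name}={value} outside plausible range [{low}, {high}]"

print(validate_vital("heart_rate_bpm", 12000))
# (None, 'heart_rate_bpm=12000 outside plausible range [20, 300]')
```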
For example, if FDA were interested in regulating such devices-- oh, sorry, such tools-- ANDY CORAVOS: Yeah. Well, they would regulate devices. So one of the funny things with FDA is-- and I should have mentioned this-- is the FDA does not regulate the practice of medicine. So doctors can do whatever they want. They regulate-- well, you should look up exactly-- the way I interpret it is they regulate the marketing that a manufacturer would do. So I actually wonder if the EHRs would be considered practice of medicine or if it would be a marketing from the EHR company, and maybe that's how it could be under their purview. PROFESSOR: Yeah. ANDY CORAVOS: Yeah. Yes? AUDIENCE: I guess something that I was surprised not to see as much about were privacy issues in this. I know there's ways where you can train machine learning models and extract information that the data was trained on. At least I'm pretty sure that exists. It's not my expertise. But I was wondering if anything like that [INAUDIBLE] have someone try to extract the data that you can't. But you talked about that a lot in your section of the talk, but I don't remember it as much [INAUDIBLE].. ANDY CORAVOS: OK. Yep. Realistically, how many of you do think you'll actually submit a comment? A couple. So if you're thinking you wouldn't submit a comment, just out of curiosity, I won't argue with you, I'm just curious, what would hold you back from submitting a comment? If you didn't raise your hand now, I get to cold call you. Yes? AUDIENCE: I raised my hand before. We were just talking. Most of us have our computers open now. If you really want us to submit it as is, if you put it up, we could all submit. ANDY CORAVOS: OK. OK, OK. Wow. AUDIENCE: We are 95% [INAUDIBLE].. ANDY CORAVOS: All right. PROFESSOR: So while Andy is looking that up, I should say when the HIPAA regulations, the privacy regulations were first proposed, the initial version got 70,000 public comments about it. And it is really true that the regulatory agency, in that case, it was Health and Human Services, had to respond to every one of those by law. And so they published reams of paper about responding to all those requests. So they will take your comments seriously because they have to. AUDIENCE: I was going to say, is there any way of anonymously commenting? Or does it have to be tied to us, out of curiosity? ANDY CORAVOS: I don't know. I think it's generally-- I don't know, I'd have to look at it again. I think most of them are public comments. I mean, I guess if you wanted to, maybe you could coordinate your comments and you could-- yeah, OK. Irene is willing to group comment. So you can also send if you'd like to do it that way, and it can be a set of class comments, if you would prefer. The Bitly is capital MIT all lowercase loves FDA, will send you over to the docket. I'm amazed that that Bitly has not been taken already. PROFESSOR: [INAUDIBLE] has been asleep on the job. ANDY CORAVOS: What other questions do you all have? Yes? AUDIENCE: So what is the line between an EHR and a SaMD? Because it said it earlier that EHR is exempted, but then it also says, oh, for example, with SaMD, it could be collecting physiological signals, and then they might send an audible alarm to indicate [INAUDIBLE]. And my understanding is some of EHRs do that. ANDY CORAVOS: Mm-hm. AUDIENCE: And so would they need to be retroactively approved and partially SaMD-ified? Or how's that work? ANDY CORAVOS: So I'm not a regulator, so you should ask your regulator. 
A couple resources that could help you decide this is, again, it's about what you're claiming the product does, perhaps not what it actually does. The next thing, which I don't think-- I mean, I think if it really does that, you should also claim that it does what it does, especially if it's confusing for people. There's a couple of regulations that might be helpful. One is called Clinical Decision Support. And if you read any FDA things, they love their algorithms-- I mean, they love their algorithms, but they also love their acronyms. So Clinical Decision Support is CDS, and then also Patient Decision Support. There's a guidance that just came out around the two types of decision support tools, and I would guess maybe that is supporting a decision, that EHR. So it might actually be considered something that would be regulated. There's also a lot of weird-- we didn't go into it, but there are many instances where something might actually be a device and the FDA says it's a device, but it will do something called enforcement discretion, which says it's a device but we will not regulate it as such. Which is actually a little bit risky for a manufacturer because you are device, but you can now go straight to market. In some instances, you still have to register and list the product, but you don't have to necessarily get reviewed. And it also could eventually be reviewed. So the line of, is it a device, is it a device and you have to register, is it a device and you have to get cleared or approved, is why you should early and often-- yes? AUDIENCE: I enjoyed your game with regards to Fitbit and Apple. And I have a question about the app. I know that you're not Apple either, but why do you think that Apple went for FDA approval versus Fitbit who didn't? What were the motivations for the companies to do that? ANDY CORAVOS: I would say, in public documents, Fitbit has expressed an interest in working with the FDA. I don't know at what point they have decided what they submitted or had their package. They're also working with the pre-cert program. So I don't know what's happening behind the scenes. Yeah? AUDIENCE: Does it give them a business edge, perhaps, to get FDA approval? ANDY CORAVOS: I cannot comment on that. AUDIENCE: OK, no worries. ANDY CORAVOS: Yeah. I would say, generally, people want to use tools that are trustworthy, and developing more tools that have somebody of evidence is a really important thing. I think the FDA is one way of having evidence. I think there are other ways that tools and devices can continue to build evidence. My hope is over time, that a lot of these things that we consider to be wellness tools also have evidence around them. Maybe in some instances we don't always regulate vitamins, but you want to still trust that your vitamin doesn't have sawdust in it, right, and that it's a real product. And so the more that we, I think, push companies to have evidence and that we use products that do that, I hope over time this helps us. PROFESSOR: Does it give them any legal protection to have it be classified as an FDA device? ANDY CORAVOS: I'm not sure about that. Historically, it has helped with reimbursement. So a class two product has been easier to reimburse. That also is generally changing, but that helps with the business model around that. PROFESSOR: Yeah. Well, I want to thank you both very much. That was really interesting. And I do encourage all of you to participate in this regulatory process by submitting your comments. And I enjoyed the presentations. 
Thank you. ANDY CORAVOS: Yeah. MARK SHERVEY: Thank you. ANDY CORAVOS: Thank you. [APPLAUSE]
18_Disease_Progression_Modeling_and_Subtyping_Part_1.txt
DAVID SONTAG: So we're done with our segment on causal inference and reinforcement learning. And for the next week, today and Tuesday's lecture, we'll be talking about disease progression modeling and disease subtyping. This is, from my perspective, a really exciting field. It's one which has a real richness of literature, going back to somewhat simple approaches from a couple of decades ago up to some really state-of-the-art methods, including one which is in one of your readings for today's lecture. And I could spend a few weeks just talking about this topic. But instead, since we have a lot to cover in this course, what I'll do today is give you a high-level overview of one approach to try to think through these questions. The methods in today's lecture will be somewhat simple. They're meant to illustrate how simple methods can go a long way. And they're meant to illustrate, also, how one could learn something really significant about clinical outcomes and about predicting disease progression from these simple methods. And then in Tuesday's lecture, I'll ramp it up quite a bit. And I'll talk about several more elaborate approaches towards this problem, which tackle some more substantial problems that we'll really elucidate at the end of today's lecture. So there are three types of questions that we hope to answer when studying disease progression modeling. At a high level, I want you to think about this type of picture and have this in the back of your head throughout today and Tuesday's lecture. What you're seeing here is a single patient's disease trajectory across time. On the x-axis is time. On the y-axis is some measure of disease burden. So for example, you could think about that y-axis as summarizing the amount of symptoms that a patient is reporting or the amount of pain medication that they're taking, or some measure of what's going on with them. And initially, that disease burden might be somewhat low, and maybe even the patient's in an undiagnosed disease state at that time. As the symptoms get worse and worse, at some point the patient might be diagnosed. And that's what I'm illustrating by this gray curve. This is the point in time at which the patient is diagnosed with their disease. At the time of diagnosis, a variety of things might happen. The patient might begin treatment. And that treatment might, for example, start to influence the disease burden. And you might see a drop in disease burden initially. If this is a cancer, unfortunately, we'll often see recurrences of the cancer. And that might manifest as an uphill peak again, where the disease burden grows. And once you start second-line treatment, that might succeed in lowering it again and so on. And this might be a cycle that repeats over and over again. For other diseases which have no cure, for example, but which are managed on a day-to-day basis-- and we'll talk about some of those-- you might see, even on a day-by-day basis, fluctuations. Or you might see nothing happening for a while. And then, for example, in autoimmune diseases, you'll see these flare-ups where the disease burden grows a lot, then comes down again. It's really inexplicable why that happens. So the types of questions that we'd like to really understand here are, first, where is the patient in their disease trajectory? So a patient comes in today. And they might be diagnosed today because of symptoms somehow crossing some threshold and them coming into the doctor's office.
But they could be sort of anywhere in this disease trajectory at the time of diagnosis. And a key question is, can we stage patients to understand, for example, things like, how long are they likely to live based on what's currently going on with them? A second question is, when will the disease progress? So if you have a patient with kidney disease, you might want to know something about, when will this patient's kidney disease need a transplant? Another question is, how will treatment affect that disease progression? That I'm sort of hinting at here, when I'm showing these valleys that we conjecture to be affected by treatment. But one often wants to ask counterfactual questions like, what would happen to this patient's disease progression if you did one treatment therapy versus another treatment therapy? So the example that I'm mentioning here in this slide is a rare blood cancer named multiple myeloma. It's rare. And so you often won't find data sets with that many patients in them. So for example, this data set which I'm listing at the very bottom here, from the Multiple Myeloma Research Foundation CoMMpass study, has roughly 1,000 patients. And it's a publicly available data set. Any of you can download it today. And you could study questions like this about disease progression. Because you can look at laboratory tests across time. You could look at when symptoms start to rise. You have information about what treatments a patient is on. And you have outcomes, like death. So for multiple myeloma, today's standard for how one would attempt to stage a patient looks a little bit like this. Here I'm showing you two different staging systems. On the left is the Durie-Salmon Staging System, which is a bit older. On the right is what's called the Revised International Staging System. A patient walks into their oncologist's office newly diagnosed with multiple myeloma. And after doing a series of blood tests, looking at quantities such as their hemoglobin levels, amount of calcium in the blood, also doing, let's say, a biopsy of the patient's bone marrow to measure amounts of different kinds of immunoglobulins, doing gene expression assays to understand various different genetic abnormalities, that data will then feed into a staging system like this. So in the Durie-Salmon Staging System, a patient who is in stage one is found to have a very low M-component production rate. So that's what I'm showing over here. And that really corresponds to the amount of disease activity as measured by their immunoglobulins. And since this is a blood cancer, that's a very good marker of what's going on with the patient. Sort of this middle stage, which is neither stage one nor stage three, is characterized by, in this case-- well, I'm not going to talk about that. If you go to stage three over here, you see that the M-component levels are much higher. If you look at X-ray studies of the patient's bones, you'll see that there are lytic bone lesions, which are caused by the disease and really represent an advanced status of the disease. And if you were to measure, from the patient's urine, the amount of light-chain production, you see that it has much larger values as well. Now, this is an older staging system. In the middle, now I'm showing you a newer staging system, which is both dramatically simpler and involves some newer components. So for example, in stage one, it looks at just four quantities. First it looks at the patient's albumin and beta-2 microglobulin levels.
Those are biomarkers that can be easily measured from the blood. And it says no high-risk cytogenetics. So now we're starting to bring in genetic quantities in terms of quantifying risk levels. Stage three is characterized by significantly higher beta-2 microglobulin levels, translocations corresponding to particular high-risk types of genetics. This will not be the focus of the next two lectures, but Pete is going to go into much more detail on the genetic aspects of precision medicine in a week and a half from now. And in this way, each one of these stages represents something about the belief of how far along the patient is and is really strongly used to guide therapy. So for example, if a patient is in stage one, an oncologist might decide we're not going to treat this patient today. So a different type of question-- whereas you could think about this one as characterizing things at a patient-specific level: one patient walks in, we want to stage that specific patient, and we're going to look at some long-term outcomes and look at the correlation between stage and long-term outcomes. A very different question is a descriptive-type question. Can we say what the typical trajectory of this disease will look like? So for example, we'll talk about Parkinson's disease for the next couple of minutes. Parkinson's disease is a progressive nervous system disorder. It's a very common one, as opposed to multiple myeloma. Parkinson's affects over 1 in 100 people, age 60 and above. And like multiple myeloma, there are also disease registries that are publicly available and that you could use to study Parkinson's. Now, various researchers have used those data sets in the past. And they've created something that looks a little bit like this to try to characterize, now at a population level, what it means for a patient to progress through their disease. So on the x-axis, again, I have time now. The y-axis, again, denotes some level of disease disability. But what we're showing here now are symptoms that might arise at different stages of the disease. So very early in Parkinson's, you might have some sleep behavior disorders, some depression, maybe constipation, anxiety. As the disease gets further and further along, you'll see symptoms such as mild cognitive impairment, increased pain. As the disease goes further on, you'll see things like dementia and an increasing amount of psychotic symptoms. And information like this can be extremely valuable for a patient who is newly diagnosed with a disease. They might want to make life decisions like, should they buy this home? Should they stick with their current job? Can they have a baby? And all of these questions might really be impact-- the answer to those questions might be really impacted by what this patient could expect their life to be like over the next couple of years, over the next 10 years or the next 20 years. And so if one could characterize really well what the disease trajectory might look like, it would be incredibly valuable for guiding those life decisions. But the challenge is that-- this is for Parkinson's. And Parkinson's is reasonably well understood. There are a large number of diseases that are much more rare, where any one clinician might see a very small number of patients in their clinic. And figuring out, really, how do we combine the symptoms that are seen in a very noisy fashion for a small number of patients, how to bring that together into a coherent picture like this, is actually very, very challenging.
And that's where some of the techniques we'll be talking about in Tuesday's lecture, which talks about how do we infer disease stages, how do we automatically align patients across time, and how do we use very noisy data to do that, will be particularly valuable. But I want to emphasize one last point regarding this descriptive question. This is not about prediction. This is about understanding, whereas the previous slide was about prognosis, which is very much a prediction-like question. Now, a different type of understanding question is that of disease subtyping. Here, again, you might be interested in identifying, for a single patient, are they likely to progress quickly through their disease? Are they likely to progress slowly through their disease? Are they likely to respond to treatment? Are they not likely to respond to treatment? But we'd like to be able to characterize that heterogeneity across the whole population and summarize it into a small number of subtypes. And you might think about this as redefining disease altogether. So today, we might say patients who have a particular blood abnormality, we will say are multiple myeloma patients. But as we learn more and more about cancer, we increasingly understand that, in fact, every patient's cancer is very unique. And so over time, we're going to be subdividing diseases, and in other cases combining things that we thought were different diseases, into new disease categories. And in doing so it will allow us to better take care of patients by, first of all, coming up with guidelines that are specific to each of these disease subtypes. And it will allow us to make better predictions based on these guidelines. So we can say a patient like this, in subtype A, is likely to have the following disease progression. A patient like this, in subtype B, is likely to have a different disease progression or be a responder or a non-responder. So here's an example of such a characterization. This is still sticking with the Parkinson's example. This is a paper from a neuropsychiatry journal. And it uses a clustering-like algorithm, and we'll see many more examples of that in today's lecture, to characterize patients into, to group patients into, four different clusters. So let me walk you through this figure so you see how to interpret it. Parkinson's patients can be measured in terms of a few different axes. You could look at their motor progression. So that is shown here in the innermost circle. And you see that patients in Cluster 2 seem to have intermediate-level motor progression. Patients in Cluster 1 have very fast motor progression, means that their motor symptoms get increasingly worse very quickly over time. One could also look at the response of patients to one of the drugs, such as levodopa that's used to treat patients. Patients in Cluster 1 are characterized by having a very poor response to that drug. Patients in Cluster 3 are characterized as having intermediate, patients in Cluster 2 as having good response to that drug. Similarly one could look at baseline motor symptoms. So at the time the patient is diagnosed or comes into the clinic for the first time to manage their disease, you can look at what types of motor-like symptoms do they have. And again, you see different heterogeneous aspects to these different clusters. So this is one means-- this is a very concrete way, of what I mean by trying to subtype patients. 
So we'll begin our journey through disease progression modeling by starting out with that first question of prognosis. And prognosis, from my perspective, is really a supervised machine-learning problem. So we can think about prognosis from the following perspective. A patient walks in at time zero. And you want to know something about what that patient's disease status will be like over time. So for example, you could ask, at six months, what is their disease status? And for this patient, it might be, let's say, 6 out of 10. And where these numbers are coming from will become clear in a few minutes. 12 months down the line, their disease status might be 7 out of 10. 18 months, it might be 9 out of 10. And the goal that we're going to try to tackle for the first half of today's lecture is this question of, how do we take the data, what I'll call the x vector, available for the patient at baseline and predict what these values will be at different time points? So you could think about that as actually drawing out this curve that I showed you earlier. So what we want to do is take the initial information we have about the patient and say, oh, the patient's disease status, or their disease burden, over time is going to look a little bit like this. And for a different patient, based on their initial covariates, you might say that their disease burden might look like that. So we want to be able to predict these curves in this-- for this presentation, there are going to actually be sort of discrete time points. We want to be able to predict that curve from the baseline information we have available. And that will give us some idea of how this patient's going to progress through their disease. So in this case study, we're going to look at Alzheimer's disease. Here I'm showing you two brains, a healthy brain and a diseased brain, to really emphasize how the brain suffers under Alzheimer's disease. We're going to characterize the patient's disease status by a score. And one example of such a score is shown here. It's called the Mini Mental State Examination, summarized by the acronym MMSE. And it's going to look as follows. For each of a number of different cognitive questions, a test is going to be performed, which-- for example, in the middle, what it says is registration. The examiner might name three objects like apple, table, penny, and then ask the patient to repeat those three objects. All of us should be able to remember a sequence of three things so that when we finish the sequence, you should be able to remember what the first thing in the sequence was. We shouldn't have a problem with that. But as patients get increasingly worse in their Alzheimer's disease, that task becomes very challenging. And so you might give one point for each correct answer. And so if the patient gets all three, if they repeat all three of them, then they get three points. If they can't remember any of them, zero points. Then you might continue. You might ask something else, like subtract 7 from 100 and then repeat, so some sort of mathematical question. Then you might return to those original three objects you asked about. Now it's been, let's say, a minute later. And you say, what were those three objects I mentioned earlier? And this is trying to get at a little bit longer-term memory and so on. And one will then add up the number of points associated with each of these responses and get a total score. Here it's out of 30 points. If you divide by 3, you get the story I give you here.
So these are the scores that I'm talking about for Alzheimer's disease. They're often characterized by scores to questionnaires. But of course, if you had done something like brain imaging, the disease status might, for example, be inferred automatically from brain imaging. If you had a smartphone device, which patients are carrying around with them, and which is looking at mobile activity, you might be able to automatically infer their current disease status from that smartphone. You might be able to infer it from their typing patterns. You might be able to infer it from their email or Facebook habits. And so I'm just trying to point out, there are a lot of different ways to try to get this number of how the patient might be doing at any one point in time. Each of those an interesting question. For now, we're just going to assume it's known. So retrospectively, you've gathered this data for patients, which is now longitudinal in nature. You have some baseline information. And you know how the patient is doing over different six-month intervals. And we'd then like to be able to predict to those things. Now, if this were-- we can now go back in time to lecture three and ask, well, how could we predict these different things? So what are some approaches that you might try? Why don't you talk to your neighbor for a second, and then I'll call on a random person. [SIDE CONVERSATION] OK. That's enough. My question was sufficiently under-defined that if you talk longer, who knows what you'll be talking about. Over here, the two of you-- the person with the computer. Yeah. How would you tackle this problem? AUDIENCE: Me? OK. DAVID SONTAG: No, no, no. Over here, yeah. Yeah, you. AUDIENCE: I would just take, I guess, previous data, and then-- yeah, I guess, any previous data with records of disease progression over that time span, and then treated [INAUDIBLE]. DAVID SONTAG: But just to understand, would you learn five different models? So our goal is to get these-- here I'm showing you three, but it might be five different numbers at different time points. Would you learn one model to predict what it would be at six months, another to predict what would be a 12 months? Would you learn a single model? Other ideas? Somewhere over in this part of the room. Yeah. You. AUDIENCE: [INAUDIBLE] DAVID SONTAG: Yeah. Sure. AUDIENCE: [INAUDIBLE] DAVID SONTAG: So use a multi-task learning approach, where you try to learn all five at that time and use what? What was the other thing? AUDIENCE: So you can learn to use these datas in six months and also use that as your baseline [INAUDIBLE].. DAVID SONTAG: Oh, that's a really interesting idea. OK. So the suggestion was-- so there are two different suggestions, actually. The first suggestion was do a multi-task learning approach, where you attempt to learn-- instead of five different and sort of independent models, try to learn them jointly together. And in a second, we'll talk about why it might make sense to do that. The different thought was, well, is this really the question you want to solve? For example, you might imagine settings where you have the patient not at time zero but actually at six months. And you might want to know what's going to happen to them in the future. And so you shouldn't just use the baseline information. You should recondition on the data you have available for time. 
And a different way of thinking through that is you could imagine learning a Markov model, where you learn something about the joint distribution of the disease stage over time. And then you could, for example, even if you only had baseline information available, you could attempt to marginalize over the intermediate values that are unobserved to infer what the later values might be. Now, that Markov model approach, although we will talk about it extensively in the next week or so, is actually not a very good approach for this problem. And the reason why is because it increases the complexity. So when you're learning-- in essence, if you wanted to predict what's going on at 18 months, and if, as an intermediate step to predict what goes on at 18 months, you have to predict what's going to go on at 12 months, and then the likelihood of transitioning from 12 months to 18 months, then you might incur error in trying to predict what's going on at 12 months. And that error is then going to propagate as you attempt to think about the transition from 12 months to 18 months. And that propagation of error, particularly when you don't have much data, is going to really hurt the [INAUDIBLE] of your machine learning algorithm. So the method I'll be talking about today is, in fact, going to be what I view as the simplest possible approach to this problem. And it's going to be a direct prediction approach. So we're directly going to predict each of the different time points independently. But we will tie together the parameters of the model, as was suggested, using a multi-task learning approach. And the reason why we're going to want to use a multi-task learning approach is because of data sparsity. So imagine the following situation. Imagine that we had just binary indicators here. So let's say the patient is OK, or they're not OK. So the data might look like this-- 0, 0, 1. Then the data set you might have might look a little bit like this. So now I'm going to show you the data. And one row is one patient. Different columns are different time points. So the first patient, as I showed you before, is 0, 0, 1. Second patient might be 0, 0, 1, 0. Third patient might be 1, 1, 1, 1. Next patient might be 0, 1, 1, 1. So if you look at the first time point here, you'll notice that you have a really imbalanced data set. There's only a single 1 in that first time point. If you look at the second time point, there are two. It's more of a balanced data set. And then in the third time point, again, you're sort of back into that imbalanced setting. What that means is that if you were to try to learn from just one of these time points by itself, particularly in the setting where you don't have that many data points alone, that data sparsity in the outcome label is going to really hurt you. It's going to be very hard to learn any interesting signal just from that time point alone. The second problem is that the label is also very noisy. So not only might you have lots of imbalance, but there might be noise in the actual characterizations. Like for this patient, maybe with some probability, you would observe 1, 1, 1, 1. With some other probability, you would observe 0, 1, 1, 1. And it might correspond to some threshold in that score I showed you earlier. And just by chance, a patient, on some day, passes the threshold. On the next day, they might not pass that threshold. So there might be a lot of noise in the particular labels at any one time point.
And you wouldn't want that noise to really dramatically affect your learning algorithm, based on some, let's say, prior belief that we might have that there might be some amount of smoothness in this process across time. And the final problem is that there might be censoring. So the actual data might look like this. For much later time points, we might have many fewer observations. And so if you were to just use those later time points to learn your predictive model, you just might not have enough data. So those are all different challenges that we're going to attempt to solve using a multi-task learning approach. Now, to put some numbers to these things, we have these different time points. We're going to have 648 patients at the six-month time interval. And at the four-year time interval, there will only be 87 patients due to patients dropping out of the study. So the key idea here will be, rather than learning these five independent models, we're going to try to jointly learn the parameters corresponding to those models. And the intuitions that we're going to try to incorporate in doing so are that there might be some features that are useful across these five different prediction tasks. And so I'm using the example of biomarkers here as a feature. Think of that like a laboratory test result, for example, or an answer to a question that's available at baseline. And so one approach to learning is to say, OK, let's regularize the learning of these different models to encourage them to choose a common set of predictive features or biomarkers. But we also want to allow some amount of flexibility. For example, we might want to say that, well, at any one time point, there might be a couple of new biomarkers that are relevant for predicting that time point. And there might be some small amounts of changes across time. So what I'll do right now is I'll introduce to you the simplest way to think through multi-task learning, which-- I will focus specifically on a linear model setting. And then I'll show you how we can slightly modify this simple approach to capture those criteria that I have over there. So let's talk about a linear model. And let's talk about regression. Because here, in the example I showed you earlier, we were trying to predict the score, which is a continuous-valued number. We want to try to predict it. And we might care about minimizing some loss function. So if you were to try to minimize a squared loss, imagine a scenario where you had two different prediction problems. So this might be time point 6, and this might be time point 12, for six months and 12 months. You can start by summing over the patients, looking at your mean squared error at predicting what I'll say is the six-month outcome label by some linear function, which, I'm going to have it as subscript 6 to denote that this is a linear model for predicting the six-month time point value, dot-producted with your baseline features. And similarly, your loss function for predicting this one is going to be the same. But now you'll be predicting the y12 label. And we're going to have a different weight vector for predicting that. Notice that x is the same. Because I'm assuming in everything I'm telling you here that we're going to be predicting from baseline data alone. Now, a typical approach to regularizing in this setting might be, let's say, to do L2 regularization. So you might say, I'm going to add onto this some lambda times the norm of the weight vector W6 squared. Maybe-- same thing over here.
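To make the board math easier to follow, here is roughly what those two independent regularized objectives look like written out. This is my notation for what was just described verbally, not a formula from the slides:

```latex
\min_{W_6}\ \sum_{i=1}^{n}\big(y_i^{(6)} - W_6 \cdot x_i\big)^2 + \lambda \lVert W_6\rVert_2^2
\qquad\text{and}\qquad
\min_{W_{12}}\ \sum_{i=1}^{n}\big(y_i^{(12)} - W_{12} \cdot x_i\big)^2 + \lambda \lVert W_{12}\rVert_2^2 .
```

Here $x_i$ is the baseline feature vector for patient $i$, and $y_i^{(6)}$ and $y_i^{(12)}$ are that patient's six- and twelve-month outcomes.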
So the way that I set this up for you so far, right now, is two different independent prediction problems. The next step is to talk about how we could try to tie these together. So any idea, for those of you who have not specifically studied multi-task learning in class? So for those of you who did, don't answer. For everyone else, what are some ways that you might try to tie these two prediction problems together? Yeah. AUDIENCE: Maybe you could share certain weight parameters, so if you've got a common set of biomarkers. DAVID SONTAG: So maybe you could share some weight parameters. Well, I mean, the simplest way to tie them together is just to say, we're going to-- so you might say, let's first of all add these two objective functions together. And now we're going to minimize-- instead of minimizing just-- now we're going to minimize over the two weight vectors jointly. So now we have a single optimization problem. All I've done is I've now-- we're optimizing. We're minimizing this joint objective where I'm summing this objective with this objective. We're minimizing it with respect to now two different weight vectors. And the simplest thing to do what you just described might be to say, let's let W6 equal to W12. So you might just add in this equality constraint saying that these two weight vectors should be identical. What would be wrong with that? Someone else, what would be wrong with-- and I know that wasn't precisely your suggestion. So don't worry. AUDIENCE: I have a question. DAVID SONTAG: Yeah. What's your question? AUDIENCE: Is x-- are those also different? DAVID SONTAG: Sorry. Yeah. I'm missing some subscripts, right. So I'll put this in superscript. And I'll put subscript i, subscript i. And it doesn't matter for the purpose of this presentation whether these are the same individuals or different individuals across these two problems. You can imagine they're the same individual. So you might imagine that there are n individuals in the data set. And we're summing over the same n people for both of these sums, just looking at different outcomes for each of them. This is the six-month outcome. This is the 12-month outcome. Is that clear? All right. So the simplest thing to do would be just to not-- now that we have a joint optimization problem, we could constrain the two weight vectors to be identical. But of course, this is a bit of an overkill. This is like saying that you're going to just learn a single prediction problem, where you sort of ignore the difference between six months and 12 months and just try to predict-- you put those under there and just predict them both together. So you had another suggestion, it sounded like. AUDIENCE: Oh, no. You had just asked why that was not it. DAVID SONTAG: Oh, OK. And I answered that. Sorry. What could we do differently? Yeah, you. AUDIENCE: You could maybe try to minimize the difference between the two. So I'm not saying that they need to be the same. But the chances that they're going to be super, super different isn't really high. DAVID SONTAG: That's a very interesting idea. So we don't want them to be the same. But I might want them to be approximately the same, right? AUDIENCE: Yeah. DAVID SONTAG: And what's one way to try to measure how different these two are? AUDIENCE: Subtract them. DAVID SONTAG: Subtract them, and then do what? So these are vectors. So you-- AUDIENCE: Absolute value. DAVID SONTAG: So it's not absolute value of a vector. What can you do to turn a vector into a single number? 
AUDIENCE: Take the norm [INAUDIBLE].. DAVID SONTAG: Take a norm of it. Yeah, I think that's what you meant. So we might take the norm of it. What norm should we take? AUDIENCE: L2? DAVID SONTAG: Maybe the L2 norm. OK. And we might say we want that. So if we said that this was equal to 0, then, of course, that's saying that they have to be the same. But we could say that this is, let's say, bounded by some epsilon. And epsilon now is a parameter we get to choose. And that would then say, oh, OK, we've now tied together these two optimization problems. And we want to encourage that the two weight vectors are not that far from each other. Yep? AUDIENCE: You represent each weight vector as-- have it just be duplicated and force the first place to be the same and the second ones to be different. DAVID SONTAG: You're suggesting a slightly different way to parameterize this by saying that W12 is equal to W6 plus some delta function, some delta difference. Is that what you're suggesting? AUDIENCE: No, that you have your-- say it's n-dimensional, like each vector is n-dimensional. But now it's going to be 2n-dimensional. And you force the first n dimensions to be the same on the weight vector. And then the others, you-- DAVID SONTAG: Now, that's a really interesting idea. I'll return to that point in just a second. Thanks. Before I return to that point, I just want to point out this isn't the most immediate thing to optimize. Because this is now a constrained optimization problem. What's our favorite algorithm for convex optimization in machine learning, and non-convex optimization? Everyone say it out loud. AUDIENCE: Stochastic gradient descent. DAVID SONTAG: TAs are not supposed to answer. AUDIENCE: Just muttering. DAVID SONTAG: Neither are faculty. But I think I heard enough of you say stochastic gradient descent. Yes. Good. That's what I was expecting. And well, you could do projected gradient descent. But it's much easier to just get rid of this. And so what we're going to do is we're just going to put this into the objective function. And one way to do that-- so one motivation would be to say we're going to take the Lagrangian of this inequality. And then that'll bring this into the objective. But you know what? Screw that motivation. Let's just erase this. And I'll just say plus something else. So I'll call that lambda 2, some other hyper-parameter, times now W12 minus W6 squared. Now let's look to see what happens. If we were to push this lambda 2 to infinity, remember we're minimizing this objective function. So if lambda 2 is pushed to infinity, what is the solution of W12 with respect to W6? Everyone say it out loud. AUDIENCE: 0. DAVID SONTAG: I said "with respect to." So the one minus the other is 0. Yes. Good. All right. So it would be forcing them to be the same. And of course, if lambda 2 is smaller, then it's saying we're going to allow some flexibility. They don't have to be the same. But we're going to penalize their difference by the squared norm of that difference. So this is good. And so you raised a really interesting question, which I'll talk about now, which is, well, maybe you don't want to enforce all of the dimensions to be the same. Maybe that's too much. So one thing one could imagine doing is saying, we're going to only enforce this constraint for-- [INAUDIBLE] we're only going to put this penalty in for, let's say, dimensions-- trying to think of the right notation for this. I think I'll use this notation. Let's see if you guys like this.
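Collecting what is on the board at this point into one place, the coupled problem is, in my notation, a sketch of the construction just described rather than a slide from the lecture:

```latex
\min_{W_6,\,W_{12}}\ \sum_{i=1}^{n}\Big[\big(y_i^{(6)} - W_6\cdot x_i\big)^2 + \big(y_i^{(12)} - W_{12}\cdot x_i\big)^2\Big]
+ \lambda_1\big(\lVert W_6\rVert_2^2 + \lVert W_{12}\rVert_2^2\big)
+ \lambda_2\,\lVert W_{12} - W_6\rVert_2^2 .
```

Setting $\lambda_2 = 0$ recovers the two independent regressions, and pushing $\lambda_2 \to \infty$ forces $W_6 = W_{12}$, i.e. a single shared model.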
Let's see if this notation makes sense for you. What I'm saying is I'm going to take the-- d is the dimension. I'm going to take from halfway through the dimensions to the end. I'm going to take that vector and I'll penalize that. So it's ignoring the first half of the dimensions. And so what that's saying is, well, we're going to share parameters for some of this weight vector. But we're not going to worry about-- we're going to let them be completely independent of each other for the rest. That's an example of what you're suggesting. So this is all great and dandy for the case of just two time points. But what do we do, then, if we have five time points? Yeah? AUDIENCE: There's some percentage of shared entries in that vector. So instead of saying these have to be in common, you say, treat all of them [INAUDIBLE].. DAVID SONTAG: I think you have the right intuition. But I don't really know how to formalize that just from your verbal description. What would be the simplest thing you might think of? I gave you an example of how to do, in some sense, pairwise similarity. Could you just easily extend that if you have more than two things? Do you have an idea? Nope? AUDIENCE: [INAUDIBLE] DAVID SONTAG: Yeah. AUDIENCE: And then I'd get y1's similar to y2, and y2 [INAUDIBLE] y3. And so I might just-- DAVID SONTAG: So you might say w1 is similar to w2. w2 is similar to w3. w3 is similar to w4 and so on. Yeah. I like that idea. I'm going to generalize that just a little bit. So I'm going to start thinking now about graphs. And we're going to now define a very simple abstraction to talk about multi-task learning. I'm going to have a graph where I have one node for every task and an edge between tasks, between nodes, if, for those two tasks, we want to encourage their weights to be similar to one another. So what are our tasks here? W6, W12. So in what you're suggesting, you would have the following graph. W6 goes to W12 goes to W24 goes to W36 goes to W48. Now, the way that we're going to transform a graph into an optimization problem is going to be as follows. I'm going to now suppose that I'm going to let-- I'm going to define a graph on V comma E. V, in this case, is going to be the set 6, 12, 24, and so on. And I'll denote edges by s comma t. And an edge in E is going to refer to a particular pair of tasks. So for example, the task of six, predicting at six months, and the task of predicting at 12 months. Then what we'll do is we'll say that the new optimization problem is going to be a sum over all of the tasks of the loss function for that task. So I'm going to ignore what it is. I'm just going to simply write-- over there, I have two different loss functions for two different tasks. I'm just going to add those together. I'm just going to leave that in this abstract form. And then I'm going to now sum over the edges s comma t in E in this graph that I've just defined of Ws minus Wt squared. So in the example that I gave over there at the very top, there were only two tasks, W6 and W12. And we had an edge between them. And we penalized it exactly in that way. But in the general case, one could imagine many different solutions. For example, you could imagine a solution where you have a complete graph. So you may have four time points. And you might penalize every pair of them to be similar to one another. Or, as was just suggested, you might think that there might be some ordering of the tasks.
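As a concrete illustration of that graph-regularized objective before continuing, here is a minimal sketch in numpy, assuming a squared loss, an L2 coupling penalty, and a chain of edges over five time points. The data, the masking of censored outcomes, and all hyperparameter values are invented for illustration; this is not the code from the paper discussed below.

```python
import numpy as np

def multitask_ridge(X, Y, M, edges, lam1=1.0, lam2=10.0, lr=1e-3, iters=5000):
    """Graph-regularized multi-task linear regression.

    X:     (n, d) baseline features.
    Y:     (n, T) outcome at each of T time points.
    M:     (n, T) mask, 1 if the outcome is observed, 0 if censored/missing.
    edges: list of (s, t) task-index pairs whose weight vectors are tied.
    Minimizes  ||M * (X W - Y)||_F^2
             + lam1 * sum_t ||W_t||^2
             + lam2 * sum_{(s,t) in edges} ||W_s - W_t||^2
    by plain gradient descent.
    """
    n, d = X.shape
    T = Y.shape[1]
    W = np.zeros((d, T))
    for _ in range(iters):
        resid = M * (X @ W - Y)            # only observed entries contribute
        grad = 2 * X.T @ resid + 2 * lam1 * W
        for s, t in edges:                 # gradient of the coupling penalty
            diff = W[:, s] - W[:, t]
            grad[:, s] += 2 * lam2 * diff
            grad[:, t] -= 2 * lam2 * diff
        W -= lr * grad
    return W

# Toy usage: 5 time points tied in a chain 0-1-2-3-4.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
Y = X @ rng.normal(size=(30, 5)) + 0.1 * rng.normal(size=(200, 5))
M = (rng.uniform(size=Y.shape) > 0.3).astype(float)   # simulate censoring
chain = [(0, 1), (1, 2), (2, 3), (3, 4)]
W = multitask_ridge(X, Y, M, chain)
```

Swapping the edge list is all it takes to move between the chain, the complete graph, or the dummy-node star described next.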
And you might say that you want that-- instead of a complete graph, you're going to just have a chain graph, where, with respect to that ordering, you want every pair of them along the ordering to be close to each other. And in fact, I think that's probably the most reasonable thing to do in a setting of disease progression modeling. Because, in fact, we have some smoothness type prior in our head about these values. The values should be similar to one another when they're very close time points. I just want to mention one other thing, which is that from an optimization perspective, if this is what you had wanted to do, there is a much cleaner way of doing it. And that's to introduce a dummy node. I wish I had more colors. So one could instead introduce a new weight vector. I'll call it W. I'll just call it W with no subscript. And I'm going to say that every other task is going to be connected to it in that star. So here we've introduced a dummy task. And we're connecting every other task to it. And then, now you'd have a linear number of these regularization terms in the number of tasks. But yet you are not making any assumption that there exists some ordering between them in the task. Yep? AUDIENCE: Do you-- DAVID SONTAG: And W is never used for prediction ever. It's used during optimization. AUDIENCE: Why do you need a W0 instead of just doing it based on like W1? DAVID SONTAG: Well, if you do it based on W1, then it's basically saying that W1 is special in some way. And so everything sort of pulled towards it, whereas it's not clear that that's actually the right thing to do. So you'll get different answers. And I'd leave that as an exercise for you to try to derive. So this is the general idea for how one could do multi-task learning using linear models. And I'll also leave it as an exercise for you to think through how you could take the same idea and now apply it to, for example, deep neural networks. And you can believe me that these ideas do generalize in the ways that you would expect them to do. And it's a very powerful concept. And so whenever you are tasked with-- when you tackle problems like this, and you're in settings where a linear model might do well, before you believe that someone's results using a very complicated approach is interesting, you should ask, well, what about the simplest possible multi-task learning approach? So we already talked about one way to try to make the regularization a bit more interesting. For example, we could attempt to regularize only some of the features' values to be similar to another. In this paper, which was tackling this disease progression modeling problem for Alzheimer's, they developed a slightly more complicated approach, but not too much more complicated, which they call the convex fused sparse group lasso. And it does the same idea that I gave here, where you're going to now learn a matrix W. And that matrix W is precisely the same notion. You have a different weight vector per task. You just stack them all up into a matrix. L of W, that's just what I mean by the sum of the loss functions. That's the same thing. The first term in the optimization problem, lambda 1 times the L1 norm of W, is simply saying-- it's exactly like the sparsity penalty that we typically see when we're doing regression. So it's simply saying that we're going to encourage the weights across all of the tasks to be as small as possible. And because it's an L1 penalty, it adds the effect of actually trying to encourage sparsity. 
So it's going to push things to zero wherever possible. The second term in this optimization problem, this lambda 2 RW-transpose term, is also a sparsity penalty. But it's now pre-multiplying the W by this R matrix. This R matrix, in this example, is shown by this. And this is just one way to implement precisely this idea that I had on the board here. So what this R matrix is going to do is-- it's going to have as many rows as you have edges. And in each row, for the column corresponding to task s, you have a 1, and for the column corresponding to task t, you have a minus 1. And then if you multiply this R matrix by W transpose, what you get out is precisely these types of pair-wise comparisons, the only difference being that here, instead of using an L2 norm, they penalized using an L1 norm. So that's what that second term is, lambda 2 RW transposed. It's simply an implementation of precisely this idea. And that final term is just a group lasso penalty. There's nothing really interesting happening there. I just want to comment-- I had forgotten to mention this. The loss term is going to be precisely a squared loss. This F refers to a Frobenius norm, because we've just stacked together all of the different tasks into one. And the only interesting thing that's happening here is this S, with which we're doing an element-wise multiplication. What that S is is simply a masking function. It's saying, if we don't observe a value at some time point, like, for example, if either this is unknown or censored, then we're just going to zero it out. So there will not be any loss for that particular element. So that S is just the mask which allows you to account for the fact that you might have some missing data. So this is the approach used in that KDD paper from 2012. And returning now to the Alzheimer's example, they used a pretty simple feature set with 370 features. The first set of features were derived from MRI scans of the patient's brain. In this case, they just derived some pre-established features that characterize the amount of white matter and so on. That includes some genetic information, a bunch of cognitive scores. So MMSE was one example of an input to this model-- at baseline, that's critical. So there are a number of different types of cognitive scores that were collected at baseline, and each one of those makes up some feature, and then a number of laboratory tests, which I'm just noting as random numbers here. But they have some significance. Now, one of the most interesting things about the results is what you see if you compare the predictive performance of the multi-task approach to the independent regressor approach. So here we're showing two different measures of performance. The first one is some normalized mean squared error. And we want that to be as low as possible. And the second one is R, as in R squared. And you want that to be as high as possible. So one would be perfect prediction. On this first column here, it's showing the results of just using independent regressors-- so if instead of tying them together with that R matrix, you had R equal to 0, for example. And then in each of the subsequent columns, it shows now learning with this objective function, where we are pumping up increasingly high this lambda 2 coefficient. So it's going to be asking for more and more similarity across the tasks. So you see that even with a moderate value of lambda 2, you start to get improvements between this multi-task learning approach and the independent regressors.
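To keep the pieces straight, collecting the terms just described into a single objective gives roughly the following. This is my reconstruction from the verbal description, with $\lambda_3$ as my own symbol for the group-lasso weight; the exact formulation is in the KDD 2012 paper:

```latex
\min_{W}\ \big\lVert S \odot (XW - Y)\big\rVert_F^2
+ \lambda_1 \lVert W\rVert_1
+ \lambda_2 \lVert R\,W^{\top}\rVert_1
+ \lambda_3 \lVert W\rVert_{2,1},
```

where each row of $R$ contains a $+1$ and a $-1$ picking out one pair of adjacent time points, so $\lVert R W^{\top}\rVert_1$ sums the absolute differences between the tied weight vectors; $\lVert W\rVert_{2,1}$ is the group-lasso norm summing the $\ell_2$ norms of the per-feature rows of $W$; and $S$ is the binary mask that zeroes out the loss on missing or censored entries.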
So the average R squared, for example, goes from 0.69 up to 0.77. And you notice how we have 95% confidence intervals here as well. And it seems to be significant. As you pump that lambda value larger, although I won't comment about the statistical significance between these columns, we do see a trend, which is that performance gets increasingly better as you encourage them to be closer and closer together. So I don't think I want to mention anything else about this result. Is there a question? AUDIENCE: Is this like a holdout set? DAVID SONTAG: Ah, thank you. Yes. So this is on a holdout set. Thank you. And that also reminded me of one other thing I wanted to mention, which is critical to this story, which is that you see these results because there's not much data. If you had a really large training set, you would see no difference between these columns. Or, in fact, if you had a really large data set, these results would be worse. As you pump lambda higher, the results will get worse. Because allowing flexibility among the different tasks is actually a better thing if you have enough data for each task. So this is particularly valuable in the data-poor regime. When one goes to analyze the results in terms of the feature importances as a function of time, one row here corresponds to the weight vector for that time point's predictor. And so here we're just looking at four of the time points, four of the five time points. And the columns correspond to different features that were used in the predictions. And the colors correspond to how important that feature is to the prediction. You could imagine that being something like the norm of the corresponding weight in the linear model, or a normalized version of that. What you see are some interesting things. First, there are some features, such as these, where they're important at all different time points. That might be expected. But then there also might be some features that are really important for predicting what's going to happen right away but are really not important to predicting longer-term outcomes. And you start to see things like that over here, where you see that, for example, these features are not at all important for predicting at the 36-month time point but were useful for the earlier time points. So from here, now we're going to start changing gears a little bit. What I just gave you is an example of a supervised approach. Is there a question? AUDIENCE: Yes. If a faculty member may ask this question. DAVID SONTAG: Yes. I'll permit it today. AUDIENCE: Thank you. So it's really two questions. But I like the linear model, the one that Fred suggested, better than the fully coupled model. Because it seems more intuitively plausible to-- DAVID SONTAG: And indeed, it's the linear model which is used in this paper. AUDIENCE: Ah, OK. DAVID SONTAG: Yes. Because you noticed how that R was sort of diagonal in-- AUDIENCE: So it's-- OK. The other observation is that, in particular in Alzheimer's, given our current state of inability to treat it, it never gets better. And yet that's not constrained in the model. And I wonder if it would help to know that.
And if we were able to do perfect prediction, meaning if it were the case that your predicted y's are equal to your true y's, then you should also have that W6 dot xi is less than W12 dot xi, which should be less than W24 dot xi. And so one could imagine now introducing these as new constraints in your learning problem. In some sense, what it's saying is, well, we may not care that much if we get some errors in the predictions, but we want to make sure that at least we're able to sort the patients correctly, a given patient correctly. So we want to ensure at least some monotonicity in these values. And one could easily try to translate these types of constraints into a modification to your learning algorithm. For example, if you took any pair of these-- let's say, I'll take these two together. One could introduce something like a hinge loss, where you say you want that-- you're going to add a new objective function, which says something like, you're going to penalize the max of 0 and 1 minus-- and I'm going to screw up this order. But it will be something like W-- so I'll derive it correctly. So this would be W12 minus W24, dot product with xi, which we want to be less than 0. And so you could look at how far from 0 it is. So you could look at W12-- do, do, do. You might imagine a loss function which says, OK, if it's greater than 0, then you have a problem. And we might penalize it with, let's say, a linear penalty in however far above 0 it is. And if it's less than 0, you don't penalize it at all. So you say something like this: the max of 0 and W12 minus W24, dot product with xi. And you might add something like this to your learning objective. That would try to encourage-- that would penalize violations of this constraint using a hinge loss-type loss function. So that would be one approach to try to put such constraints into your learning objective. A very different approach would be to think about it as a structured prediction problem, where instead of trying to say that you're going to be predicting a given time point by itself, you want to predict the vector of time points. And there's a whole field of what's called structured prediction, which would allow one to formalize objective functions that might encourage, for example, smoothness in predictions across time that one could take advantage of. But I'm not going to go more into that for reasons of time. Hold any more questions to the end of the lecture. Because I want to make sure I get through this last piece. So what we've talked about so far is a supervised learning approach to trying to predict what's going to happen to a patient given what you know at baseline. But I'm now going to talk about a very different style of thought, which is using an unsupervised learning approach to this. And there are going to be two goals of doing unsupervised learning for tackling this problem. The first goal is that of discovery, which I mentioned at the very beginning of today's lecture. We might not just be interested in prediction. We might also be interested in understanding something, getting some new insights about the disease, like discovering that there might be some subtypes of the disease. And those subtypes might be useful, for example, to help design new clinical trials. Like maybe you want to say, OK, we conjecture that patients in this subtype are likely to respond best to treatment. So we're only going to run the clinical trial for patients in this subtype, not in the other one. It might be useful, also, to try to better understand the disease mechanism.
So if you find that there are some people who seem to progress very quickly through their disease and other people who seem to progress very slowly, you might then go back and do new biological assays on them to try to understand what differentiates those two clusters. So the two clusters are differentiated in terms of their phenotype, but you want to go back and ask, well, what is different about their genotype that differentiates them? And it might also be useful to have a very concise description of what differentiates patients in order to actually have policies that you can implement. So rather than having what might be a very complicated linear model, or even non-linear model, for predicting future disease progression, it would be much easier if you could just say, OK, for patients who have this biomarker abnormal, they're likely to have very fast disease progression. Patients who have this other biomarker abnormal, they're likely to have a slow disease progression. And so we'd like to be able to do that. That's what I mean by discovering disease subtypes. But there's actually a second goal as well, which-- remember, think back to that original motivation I mentioned earlier of having very little data. If you have very little data, which is unfortunately the setting that we're almost always in when doing machine learning in health care, then you can overfit really easily to your data when just using it strictly within a discriminative learning framework. And so if one were to now change your optimization problem altogether to start to bring in an unsupervised loss function, then one can hope to get much more out of the limited data you have and save the labels, which you might overfit on very easily, for the very last step of your learning algorithm. And that's exactly what we'll do in this segment of the lecture. So for today, we're going to think about the simplest possible unsupervised learning algorithm. And because the official prerequisite for this course was 6036, and because clustering was not discussed in 6036, I'll spend just two minutes talking about clustering using the simplest algorithm called K-means, which I hope almost all of you know. But this will just be a simple reminder. How many clusters are there in this figure that I'm showing over here? Let's raise some hands. One cluster? Two clusters? Three clusters? Four clusters? Five clusters? OK. And are these red points more or less showing where those five clusters are? No. No, they're not. So rather there's a cluster here. There's a cluster here, there, there, there. All right. So you are able to do this really well, as humans, looking at two-dimensional data. The goal of algorithms like K-means is to show how one could do that automatically for high-dimensional data. And the K-means algorithm is very simple. It works as follows. You hypothesize a number of clusters. So here we have hypothesized five clusters. You're going to randomly initialize those cluster centers, which I'm denoting by those red points shown here. Then in the first stage of the K-means algorithm, you're going to assign every data point to the closest cluster center. And that's going to induce a Voronoi diagram where every point within this Voronoi cell is closer to this red point than to any other red point. And so every data point in this Voronoi cell will then be assigned to this cluster center. Every data point in this Voronoi cell will be assigned to that cluster center, and so on.
So we're going to now assign all data points to the closest cluster center. And then we're just going to average all the data points assigned to some cluster center to get the new cluster center. And you repeat. And you're going to stop this procedure when no point's assignment changes. So let's look at a simple example. Here we're using K equals 2. We just decided there are only two clusters. We've initialized the two clusters shown here, the two cluster centers, as this red cluster center and this blue cluster center. Notice that they're nowhere near the data. We've just randomly chosen. They're nowhere near the data. It's actually pretty bad initialization. The first step is going to assign data points to their closest cluster center. So I want everyone to say out loud either red or blue-- which cluster center each point is going to be assigned to in this step. [INTERPOSING VOICES] AUDIENCE: Red. Blue. Blue. DAVID SONTAG: All right. Good. We get it. So that's the first assignment. Now we're going to average the data points that are assigned to that red cluster center. So we're going to average all the red points. And the new red cluster center will be over here, right? AUDIENCE: No. DAVID SONTAG: Oh, over there? Over here? AUDIENCE: Yes. DAVID SONTAG: OK. Good. And the blue cluster center will be somewhere over here, right? AUDIENCE: Yes. DAVID SONTAG: OK. Good. So that's the next step. And then you repeat. So now, again, you assign every data point to its closest cluster center. By the way, the reason why you're seeing what looks like a linear hyperplane here is because there are exactly two cluster centers. And then you repeat. Blah, blah, blah. And you're done. So in fact, I think I've just shown you the convergence point. So that's the K-means algorithm. It's an extremely simple algorithm. And what I'm going to show you for the next 10 minutes of lecture is how one could use this very simple clustering algorithm to better understand asthma. So asthma is something that really affects a large number of individuals. It's characterized by having difficulties breathing. It's often managed by inhalers, although, as asthma gets more and more severe, you need more and more complex management schemes. And it's been found that 5% to 10% of people who have severe asthma remain poorly controlled despite using the largest tolerable dose of inhaled therapy. And so a really big question that the pharmaceutical community is extremely interested in is, how do we come up with better therapies for asthma? There's a lot of money in that problem. I first learned about this problem when a pharmaceutical company came to me when I was a professor at NYU and asked me, could they work with me on this problem? I said no at the time. But I still find it interesting. [CHUCKLING] And at that time, the company pointed me to this paper, which I'll tell you about in a second. But before I get there, I want to point out what are some of the big picture questions that everyone's interested in when it comes to asthma. The first one is to really understand what it is about either genetic or environmental factors that underlies different subtypes of asthma. Second, it's observed that people respond differently to therapy. It is observed that some people aren't even controlled with therapy. Why is that? Third, what are biomarkers, what are ways to predict who's going to respond or not respond to any one therapy? And can we get better mechanistic understanding of these different subtypes?
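Before turning to the asthma data, it is worth noting that the K-means loop just walked through really is only a few lines of code. Here is a minimal numpy sketch; the toy data, the value of K, and the initialization scheme are placeholders and have nothing to do with the asthma paper that follows.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain K-means: assign each point to its nearest center, then
    recompute each center as the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]   # random init
    for _ in range(iters):
        # Assignment step: index of the closest center for every point.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(axis=1)
        # Update step: average the points assigned to each center.
        new_centers = np.array([
            X[assign == j].mean(axis=0) if np.any(assign == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):   # stop when nothing moves
            break
        centers = new_centers
    return centers, assign

# Toy 2-D example with two well-separated blobs, as in the K=2 walkthrough above.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
centers, assign = kmeans(X, k=2)
```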
And so this was a long-standing question. And in this paper from the American Journal of Respiratory and Critical Care Medicine, which, by the way, has a huge number of citations now-- it's sort of a prototypical example of subtyping. That's why I'm going through it. They started to answer that question using a data-driven approach for asthma. And what I'm showing you here is the punch line. This is the main result, the main figure of the paper. They've characterized asthma in terms of five different subtypes, really three types. One type, which I'll show over here, was sort of inflammation predominant; one type over there, which is called early symptom predominant; and another here, which is sort of concordant disease. And what I'll do over the next few minutes is walk you through how they came up with these different clusters. So they used three different data sets. These data sets consisted of patients who had asthma and already had at least one recent therapy for asthma. They're all nonsmokers. But they were managed in-- they're three disjoint sets of patients coming from three different populations. The first group of patients were recruited from primary care practices in the United Kingdom. All right. So if you're a patient with asthma, and your asthma is being managed by your primary care doctor, then it's probably not too bad. But if your asthma, on the other hand, were being managed at a refractory asthma clinic, which is designed specifically for helping patients manage asthma, then your asthma is probably a bit more severe. And that second group of patients, 187 patients, were from that second cohort of patients managed out of an asthma clinic. The third data set is much smaller, only 68 patients. But it's very unique because it is coming from a 12-month study, where it was a clinical trial, and there were two different types of treatments given to these patients. And it was a randomized controlled trial. So the patients were randomized into each of the two arms of the study. I'll describe to you what the features are on just the next slide. But first I want to tell you about how they pre-processed the features to use within the K-means algorithm. Continuous-valued features were z-scored in order to normalize their ranges. And categorical variables were represented just by a one-hot encoding. Some of the continuous variables were furthermore transformed prior to clustering by taking the logarithm of the features. And that's something that can be very useful when doing something like K-means. Because it can, in essence, allow for that Euclidean distance function, which is used in K-means, to be more meaningful by capturing more of a dynamic range of the feature. So these were the features that went into the clustering algorithm. And there are very, very few, so roughly 20, 30 features. They range from the patient's gender and age to their body mass index, to measures of their function, to biomarkers such as eosinophil count that could be measured from the patient's sputum, and more. And there are a couple of other features that I'll show you later as well. And you could look to see how did these quantities, how did these populations, differ. So on this column, you see the primary care population. You look at all of these features in that population. You see that in the primary care population, the individuals are-- on average, 54% of them are female. In the secondary care population, 65% of them are female. 
You notice that things like-- if you look at some measures of lung function, it's significantly worse in that secondary care population, as one would expect. Because these are patients with more severe asthma. So next, after doing K-means clustering, these are the three clusters that result. And now I'm showing you the full set of features. So let me first tell you how to read this. These are the clusters found in the primary care population. This column here is just the average values of those features across the full population. And then for each one of these three clusters, I'm showing you the average value of the corresponding feature in just that cluster. And in essence, that's exactly the same as those red points I was showing you when I described K-means clustering. It's the cluster center. And one could also look at the standard deviation of how much variance there is along that feature in that cluster. And that's what the numbers in parentheses are telling you. So the first thing to note is that in Cluster 1, which the authors of the study named Early Onset Atopic Asthma, these are very young patients, average of 14, 15 years old, as opposed to Cluster 2, where the average age was 35 years old-- so a dramatic difference there. Moreover, we see that these are patients who have actually been to the hospital recently. So most of these patients have been to the hospital. On average, these patients have been to the hospital at least once recently. And furthermore, they've had severe asthma exacerbations in the past 12 months, at least, on average, twice per patient. And those are very large numbers relative to what you see in these other clusters. So that's really describing something that's very unusual about these very young patients with pretty severe asthma. Yep? AUDIENCE: What is the p-value [INAUDIBLE]? DAVID SONTAG: Yeah. I think the p-value-- I don't know if this is a pair-wise comparison. I don't remember off the top of my head. But it's really looking at the difference between, let's say-- I don't know which of these cl-- I don't know if it's comparing two of them or not. But let's say, for example, it might be looking at the difference between this and that. But I'm just hypothesizing. I don't remember. Cluster 2, on the other hand, was predominantly female. So 81% of the patients were female there. And they were largely overweight. So their average body mass index was 36, as opposed to the other two clusters, where the average body mass index was 26. And Cluster 3 consisted of patients who really have not had that severe asthma. So the average number of previous hospital admissions and asthma exacerbations was dramatically smaller than in the other two clusters. So this is the result of the finding. And then you might ask, well, how does that generalize to the other two populations? So they then went to the secondary care population. And they reran the clustering algorithm from scratch. And this is a completely disjoint set of patients. And what they found, what they got out, is that the first two clusters exactly resembled Clusters 1 and 2 from the previous study on the primary care population. But because this is a different population with much more severe patients, that third cluster of benign asthma from earlier doesn't show up in this new population. And there are two new clusters that show up in this new population. 
So the fact that those first two clusters were consistent across two very different populations gave the authors confidence that there might be something real here. And then they went and they explored that third population, where they had longitudinal data. And that third population they were then using to ask-- so up until now, we've only used baseline information. But now we're going to ask the following question. If we took the baseline data from those 68 patients and we were to separate them into three different clusters based on the characterizations found in the other two data sets, and then if we were to look at long-term outcomes for each cluster, would they be different across the clusters? And in particular, here we actually looked at not just predicting progression-- we're looking at differences in treatment response. Because this was a randomized controlled trial. And so there are going to be two arms here, what's called the clinical arm, which is the standard clinical care, and what's called the sputum arm, which consists of doing regular monitoring of the airway inflammation, and then titrating steroid therapy in order to maintain normal eosinophil counts. And so this is comparing two different treatment strategies. And the question is, do these two treatment strategies result in differential outcomes? So when the clinical trial was originally performed and they computed the average treatment effect, which, by the way, because the RCT was particularly simple-- you just averaged outcomes across the two arms-- they found that there was no difference across the two arms. So there was no difference in outcomes across the two different therapies. Now what these authors are going to do is they're going to rerun the study. And they're going to now, instead of just looking at the average treatment effect for the whole population, they're going to look at the average treatment effect in each of the clusters by itself. And the hope there is that one might be able to see now a difference, maybe that there was a heterogeneous treatment response and the therapy worked for some people and not for others. And these were the results. So indeed, across these three clusters, we see actually a very big difference. So if you look here, for example, the number commenced on oral corticosteroids, which is a measure of an outcome-- so you might want this to-- I can't remember, small or large. But there was a big difference between these two clusters. In this cluster, the number commenced under the first arm is two; in this other cluster, for patients who got the second arm, nine; and exactly the opposite for this third cluster. The first cluster, by the way, had only three patients in it. So I'm not going to make any comment about it. Now, since these go in completely opposite directions, it's not surprising that the average treatment effect across the whole population was zero. But what we're seeing now is that, in fact, there is a difference. And so it's possible that the therapy is actually effective but just for a smaller number of people. Now, this study would've never been possible had we not done this clustering beforehand. Because it has so few patients, only 68 patients. If you attempted to both search for the clustering at the same time as, let's say, find clusters to differentiate outcomes, you would overfit the data very quickly. 
So it's precisely because we did this unsupervised sub-typing first, and then use the labels not for searching for the subtypes but only for evaluating the subtypes, that we're actually able to do something interesting here. So in summary, in today's lecture, I talked about two different approaches, a supervised approach for predicting future disease status and an unsupervised approach. And there were a few major limitations that I want to emphasize that we'll return to in the next lecture and try to address. The first major limitation is that none of these approaches differentiated between disease stage and subtype. In both of the two approaches, we assumed that there were some amount of alignment of patients at baseline. For example, here we assume that the patients at time zero were somewhat similar to another. For example, they might have been newly diagnosed with Alzheimer's at that point in time. But often we have a data set where we have no natural alignment of patients in terms of disease stage. And if we attempted to do some type of clustering like I did in this last example, what you would get out, naively, would be one cluster for disease stage. So patients who are very early in their disease stage might look very different from patients who are late in their disease stage. And it will completely conflate disease stage from disease subtype, which is what you might actually want to discover. The second limitation of these approaches is that they only used one time point per patient, whereas in reality, such as you saw here, we might have multiple time points. And we might want to, for example, do clustering using multiple time points. Or we might want to use multiple time points to understand something about disease progression. The third limitation is that they assume that there is a single factor, let's say disease subtype, that explained all variation in the patients. In fact, there might be other factors, patient-specific factors, that one would like to use in your noise model. When you use an algorithm like K-means for clustering, it presents no opportunity for doing that, because it has such a naive distance function. And so in next week's lecture, we're going to move in to start talking a probabilistic modeling approaches to these problems, which will give us a very natural way of characterizing variation along other axes. And finally, a natural question you should ask is, does it have to be unsupervised or supervised? Or is there a way to combine those two approaches. All right. We'll get back to that on Tuesday. That's all.
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019
8_Natural_Language_Processing_NLP_Part_2.txt
PETER SZOLOVITS: All right. Let's get started. Good afternoon. So last time, I started talking about the use of natural language processing to process clinical data. And things went a little bit slowly. And so we didn't get through a lot of the material. I'm going to try to rush a bit more today. And as a result, I have a lot of stuff to cover. So if you remember, last time, I started by saying that a lot of the NLP work involves coming up with phrases that one might be interested in to help identify the kinds of data that you want, and then just looking for those in text. So that's a very simple method. But it's one that works reasonably well. And then Kat Liao was here to talk about some of the applications of that kind of work in what she's been doing in cohort selection. So what I want to talk about today is more sophisticated versions of that, and then move on to more contemporary approaches to natural language processing. So this is a paper that was given to you as one of the optional readings last time. And it's work from David Sontag's lab, where they said, well, how do we make this more sophisticated? So they start the same way. They say, OK, Dr. Liao, let's say, give me terms that are very good indicators that I have the right kind of patient, if I find them in the patient's notes. So these are things with high predictive value. So you don't want to use a term like sick, because that's going to find way too many people. But you want to find something that is very specific but that has a high predictive value that you are going to find the right person. And then what they did is they built a model that tries to predict the presence of that word in the text from everything else in the medical record. So now, this is an example of a silver-standard way of training a model that says, well, I don't have the energy or the time to get doctors to look through thousands and thousands of records. But if I select these anchors well enough, then I'm going to get a high yield of correct responses from those. And then I train a machine learning model that learns to identify those same terms, or those same records that have those terms in them. And by the way, from that, we're going to learn a whole bunch of other terms that are proxies for the ones that we started with. So this is a way of enlarging that set of terms automatically. And so there are a bunch of technical details that you can find out about by reading the paper. They used a relatively simple representation, which is essentially a bag-of-words representation. They then sort of masked the three words around the word that actually is the one they're trying to predict just to get rid of short-term syntactic correlations. And then they built an L2-regularized logistic regression model that said, what are the features that predict the occurrence of this word? And then they expanded the search vocabulary to include those features as well. And again, there are tons of details about how to discretize continuous values and things like that that you can find out about. So you build a phenotype estimator from the anchors and the chosen predictors. They calculated a calibration score for each of these other predictors that told you how well it predicted. And then you can build a joint estimator that uses all of these. And the bottom line is that they did very well. So in order to evaluate this, they looked at eight different phenotypes for which they had human judgment data. 
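Before looking at how well this worked, here is a rough sketch of the anchor-and-learn idea just described: treat notes that contain a high-precision anchor term as silver-standard positives, hide the anchor and a few surrounding words, and train an L2-regularized logistic regression on the remaining bag-of-words to predict whether the anchor was there. The anchor term, the toy notes, and all names are hypothetical, and this is not the exact pipeline from the paper (which also handles continuous values, calibration, and far larger vocabularies).

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

ANCHOR = "metformin"   # hypothetical high-predictive-value anchor term
notes = [
    "patient with diabetes started on metformin for glucose control",
    "elevated hba1c noted metformin dose increased",
    "no history of diabetes presents with knee pain",
    "hypertension follow up continue lisinopril",
]

def mask_anchor(text, anchor=ANCHOR, window=3):
    # drop the anchor token and up to `window` tokens on each side,
    # to remove short-range syntactic correlates of the anchor itself
    tokens = text.split()
    hits = {i for i, t in enumerate(tokens) if anchor in t.lower()}
    drop = {j for i in hits for j in range(i - window, i + window + 1)}
    return " ".join(t for i, t in enumerate(tokens) if i not in drop)

has_anchor = [int(ANCHOR in t.lower()) for t in notes]   # silver-standard label
masked = [mask_anchor(t) for t in notes]

X = CountVectorizer(binary=True).fit_transform(masked)
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, has_anchor)
# The highly weighted features in clf.coef_ are candidate proxy terms
# that can be added to the phenotype definition.
```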
And so this tells you that they're getting AUCs of between 0.83 and 0.95 for these different phenotypes. So that's quite good. They, in fact, were estimating not only these eight phenotypes but 40-something. I don't remember the exact number, much larger number. But they didn't have validated data against which to test the others. But the expectation is that if it does well on these, it probably does well on the others as well. So this was a very nice idea. And just to illustrate, if you start with something like diabetes as a phenotype and you say, well, I'm going to look for anchors that are a code of 250 diabetes mellitus, or I'm going to look at medication history for diabetic therapy-- so those are the silver-standard goals that I'm looking at. And those, in fact, have a high predictive value for somebody being in the cohort. And then they identify all these other features that predict those, and therefore, in turn, predict appropriate selectors for the phenotype that they're interested in. And if you look at the paper again, what you see is that this outperforms, over time, the standard supervised baseline that they're comparing against, where you're getting much higher accuracy early in a patient's visit to be able to identify them as belonging to this cohort. I'm going to come back later to look at another similar attempt to generalize from a core using a different set of techniques. So you should see that in about 45 minutes, I hope. Well, context is important. So if you look at a sentence like Mr. Huntington was treated for Huntington's disease at Huntington Hospital, located on Huntington Avenue, each of those mentions of the word Huntington is different. And for example, if you're interested in eliminating personally identifiable health information from a record like this, then certainly you want to get rid of the Mr. Huntington part. You don't want to get rid of Huntington's disease, because that's a medically relevant fact. And you probably do want to get rid of Huntington Hospital and its location on Huntington Avenue, although those are not necessarily something that you're prohibited from retaining. So for example, if you're trying to do quality studies among different hospitals, then it would make sense to retain the name of the hospital, which is not considered identifying of the individual. So we, in fact, did a study back in the mid 2000s, where we were trying to build an improved de-identifier. And here's the way we went about it. This is a kind of kitchen sink approach that says, OK, take the text, tokenize it. Look at every single token. And derive things from it. So the words that make up the token, the part of speech, how it's capitalized, whether there's punctuation around it, which document section is it in-- many databases have sort of conventional document structure. If you've looked at the mimic discharge summaries, for example, there's a kind of prototypical way in which that flows from beginning to end. And you can use that structural information. We then identified a bunch of patterns and thesaurus terms. So we looked up, in the UMLS, words and phrases to see if they matched some clinically meaningful term. We had patterns that identified things like phone numbers and social security numbers and addresses and so on. And then we did parsing of the text. So in those days, we used something called the Link Grammar Parser, which, doesn't make a whole lot of difference what parser. 
But you get either a constituent or constituency or dependency parse, which gives you relationships among the words. And so it allows you to include, as features, the way in which a word that you're looking at relates to other words around it. And so what we did is we said, OK, the lexical context includes all of the above kind of information for all of the words that are either literally adjacent or within n words of the original word that you're focusing on, or that are linked by within k links through the parse to that word. So this gives you a very large set of features. And of course, parsing is not a solved problem. And so this is an example from that story that I showed you last time. And if you see, it comes up with 24 ambiguous parses of this sentence. So there are technical problems about how to deal with that. Today, you could use a different parser. The Stanford Parser, for example, probably does a better job than the one we were using 14 years ago and gives you at least more definitive answers. And so you could use that instead. And so if you look at what we did, we said, well, here is the text "Mr." And here are all the ways that you can look it up in the UMLS. And it turns out to be very ambiguous. So M-R stands not only for mister, but it also stands for Magnetic Resonance. And it stands for a whole bunch of other things. And so you get huge amounts of ambiguity. "Blind" turns out also to give you various ambiguities. So it maps here to four different concept-unique identifiers. "Is" is OK. "79-year-old" is OK. And then "male," again, maps to five different concept-unique identifiers. So there are all these problems of over-generation from this database. And here's some more, but I'm going to skip over that. And then the learning model, in our case, was a support vector machine for this project, in which we just said, well, throw in all the-- you know, it's the kill them all, and God will sort them out kind of approach. So we just threw in all these features and said, oh, support vector machines are really good at picking out exactly what are the best features. And so we just relied on that. And sure enough, so you wind up with literally millions of features. But sure enough, it worked pretty well. And so Stat De-ID was our program. And you see that on real discharge summaries, we're getting precision and recall on PHI up around 98 and 1/2%, 95 and 1/4%, which was much better than the previous state of the art, which had been based on rules and dictionaries as a way of de-identifying things. So this was a successful example of that approach. And of course, this is usable not only for de-identification. But it's also usable for entity recognition. Because instead of selecting entities that are personally identifiable health information, you could train it to select entities that are diseases or that are medications or that are various other things. And so this was, in the 2000s, a pretty typical way for people to approach these kinds of problems. And it's still used today. There are tools around that let you do this. And they work reasonably effectively. They're not state of the art at the moment, but they're simpler than many of today's state of the art methods. So here's another approach. This was something we published a few years ago, where we started working with some psychiatrists and said, could we predict 30-day readmission for a psychiatric patient with any degree of reliability? That's a hard prediction. 
Willie is currently running an experiment where we're asking psychiatrists to predict that. And it turns out, they're barely better than chance at that prediction. So it's not an easy task. And what we did is we said, well, let's use topic modeling. And so we had this cohort of patients, close to 5,000 patients. About 10% of them were readmitted with a psych diagnosis. And almost 3,000 of them were readmitted with other diagnoses. So one thing this tells you right away is that if you're dealing with psychiatric patients, they come and go to the hospital frequently. And this is not good for the hospital's bottom line because of reimbursement policies of insurance companies and so on. So of the 4,700, only 1,240 were not readmitted within 30 days. So there's very frequent bounce-back. So we said, well, let's try building a baseline model using a support vector machine from baseline clinical features like age, gender, public health insurance as a proxy for socioeconomic status. So if you're on Medicaid, you're probably poor. And if you have private insurance, then you're probably an MIT employee and/or better off. So that's a frequently used proxy. And also a comorbidity index that tells you sort of how sick you are from things other than your psychiatric problems. And then we said, well, what if we add to that model common words from notes? So we said, let's do a TF-IDF calculation. So this is term frequency times the log of the inverse document frequency. So it's sort of, how specific is a term to identify a particular kind of condition? And we take the 1,000 most informative words, and so there are a lot of these. So if you use the 1,000 most informative words from these nearly 5,000 patients, you wind up with something like 66,000 words, unique words, that are informative for some patient. But if you limit yourself to the top 10, then it only uses 18,000 words. And if you limit yourself to the top one, then it uses about 3,000 words. And then we said, well, instead of doing individual words, let's do latent Dirichlet allocation. So topic modeling on all of the words, as a bag of words-- so no sequence information, just the collection of words. And so we calculated 75 topics using LDA on all these notes. So just to remind you, the LDA process is a model that says every document consists of a certain mixture of topics, and each of those topics probabilistically generates certain words. And so you can build a model like this, and then solve it using complicated techniques. And you'd wind up with topics, in this study, as follows. I don't know. Can you read these? They may be too small. So these are unsupervised topics. And if you look at the first one, it says patient, alcohol, withdrawal, depression, drinking, Ativan, ETOH, drinks, medications, clinic, inpatient, diagnosis, days, hospital, substance, use, treatment, program, name-- that's a de-identified token-- use, abuse, problem, number. And we had our experts look at these topics. And they said, oh, well, that topic is related to alcohol abuse, which seems reasonable. And then you see, on the bottom, psychosis, thought features, paranoid psychosis, paranoia symptoms, psychiatric, et cetera. And they said, OK, that's a psychosis topic. So in retrospect, you can assign meaning to these topics. But in fact, they're generated without any a priori notion of what they ought to be. They're just a statistical summarization of the common co-occurrences of words in these documents. 
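A short sketch of the two note representations just described may be useful: TF-IDF-weighted words, and LDA topics over a bag-of-words. This uses scikit-learn; the three toy notes stand in for the roughly 5,000 discharge notes, the tiny topic count stands in for the 75 topics used in the study, and none of this is the study's actual code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

notes = [
    "patient reports heavy alcohol use and withdrawal symptoms",
    "paranoid ideation and psychosis symptoms noted on admission",
    "follow up for depression medications adjusted",
]

# TF-IDF: upweights words that are frequent in a note but rare across notes
tfidf = TfidfVectorizer(max_features=1000)
X_tfidf = tfidf.fit_transform(notes)

# LDA: each note is a mixture of topics; each topic is a distribution over words
counts = CountVectorizer(max_features=5000)
X_counts = counts.fit_transform(notes)
lda = LatentDirichletAllocation(n_components=5, random_state=0)  # the study used 75
doc_topics = lda.fit_transform(X_counts)          # per-note topic proportions
vocab = counts.get_feature_names_out()
top_words = [vocab[i] for i in lda.components_[0].argsort()[::-1][:10]]
# doc_topics can be concatenated with the baseline clinical features and fed
# to the SVM; top_words is what gets shown to the clinicians for labeling.
```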
But what you find is that if you use the baseline model, which used just the demographic and clinical variables, and you say, what's the difference in survival, in this case, in time to readmission between one set and another in this cohort, the answer is they're pretty similar. Whereas, if you use a model that predicts based on the baseline and 75 topics, the 75 topics that we identified, you get a much bigger separation. And of course, this is statistically significant. And it tells you that this technique is useful for being able to improve the prediction of a cohort that's more likely to be readmitted from a cohort that's less likely to be readmitted. It's not a terrific prediction. So the AUC for this model was only on the order of 0.7. So you know, it's not like 0.99. But nevertheless, it provides useful information. The same group of psychiatrists that we worked with also did a study with a much larger cohort but much less rich data. So they got all of the discharges from two medical centers over a period of 12 years. So they had 845,000 discharges from 458,000 unique individuals. And they were looking for suicide or other causes of death in these patients to see if they could predict whether somebody is likely to try to harm themselves, or whether they're likely to die accidentally, which sometimes can't be distinguished from suicide. So the censoring problems that David talked about are very much present in this. Because you lose track of people. It's a highly imbalanced data set. Because out of the 845,000 patients, only 235 committed suicide, which is, of course, probably a good thing from a societal point of view but makes the data analysis hard. On the other hand, all-cause mortality was about 18% during nine years of follow-up. So that's not so imbalanced. And then what they did is they curated a list of 3,000 terms that correspond to what, in the psychiatric literature, is called positive valence. So this is concepts like joy and happiness and good stuff, as opposed to negative valence, like depression and sorrow and all that stuff. And they said, well, we can use these types of terms in order to help distinguish among these patients. And what they found is that, if you plot the Kaplan-Meier curve for different quartiles of risk for these patients, you see that there's a pretty big difference between the different quartiles. And you can certainly identify the people who are more likely to commit suicide from the people who are less likely to do so. This curve is for suicide or accidental death. So this is a much larger data set, and therefore the error bars are smaller. But you see the same kind of separation here. So these are all useful techniques. Now I'll turn to another approach. This was work by one of my students, Yuan Luo, who was working with some lymphoma pathologists at Mass General. And so the approach they took was to say, well, if you read a pathology report about somebody with lymphoma, can we tell what type of lymphoma they had from the pathology report if we blank out the part of the pathology report that says, "I, the pathologist, think this person has non-Hodgkin's lymphoma," or something? So from the rest of the context, can we make that prediction? Now, Yuan took a kind of interesting, slightly odd approach to it, which is to treat this as an unsupervised learning problem rather than as a supervised learning problem. 
So he literally masked the real answer and said, if we just treat everything except what gives away the answer as just data, can we essentially cluster that data in some interesting way so that we re-identify the different types of lymphoma? Now, the reason this turns out to be important is because lymphoma pathologists keep arguing about how to classify lymphomas. And every few years, they revise the classification rules. And so part of his objective was to say, let's try to provide an unbiased, data-driven method that may help identify appropriate characteristics by which to classify these different lymphomas. So his approach was a tensor factorization approach. You often see data sets like this that's, say, patient by a characteristic. So in this case, laboratory measurements-- so systolic/diastolic blood pressure, sodium, potassium, et cetera. That's a very vanilla matrix encoding of data. And then if you add a third dimension to it, like this is at the time of admission, 30 minutes later, 60 minutes later, 90 minutes later, now you have a three-dimensional tensor. And so just like you can do matrix factorization, as in the picture above, where we say, my matrix of data, I'm going to assume is generated by a product of two matrices, which are smaller in dimension. And you can train this by saying, I want entries in these two matrices that minimize the reconstruction error. So if I multiply these matrices together, then I get back my original matrix plus error. And I want to minimize that error, usually root mean square, or mean square error, or something like that. Well, you can play the same game for a tensor by having a so-called core tensor, which identifies the subset of characteristics that subdivide that dimension of your data. And then what you do is the same game. You have matrices corresponding to each of the dimensions. And if you multiply this core tensor by each of these matrices, you reconstruct the original tensor. And you can train it again to minimize the reconstruction loss. So there are, again, a few more tricks. Because this is dealing with language. And so this is a typical report from one of these lymphoma pathologists that says immunohistochemical stains show that the follicles-- blah, blah, blah, blah, blah-- so lots and lots of details. And so he needed a representation that could be put into this matrix tensor, this tensor factorization form. And what he did is to say, well, let's see. If we look at a statement like this, immuno stains show that large atypical cells are strongly positive for CD30, negative for these other surface expressions. So the sentence tells us relationships among procedures, types of cells, and immunologic factors. And for feature choice, we can use words. Or we can use UMLS concepts. Or we can find various kinds of mappings. But he decided that in order to retain the syntactic relationships here, what he would do is to use a graphical representation that came out of, again, parsing all of these sentences. And so what you get is that this creates one graph that talks about the strongly positive for CD30, large atypical cells, et cetera. And then you can factor this into subgraphs. And then you also have to identify frequently occurring subgraphs. So for example, large atypical cells appears here, and also appears there, and of course will appear in many other places. Yeah? AUDIENCE: Is this parsing domain in language diagnostics? 
For example, did they incorporate some sort of medical information here or some sort of linguistic-- PETER SZOLOVITS: So in this particular study, he was using the Stanford Parser with some tricks. So the Stanford Parser doesn't know a lot of the medical words. And so he basically marked these things as noun phrases. And then the Stanford Parser also doesn't do well with long lists like the set of immune features. And so he would recognize those as a pattern, substitute a single made-up word for them, and that made the parser work much better on it. So there were a whole bunch of little tricks like that in order to adapt it. But it was not a model trained specifically on this. I think it's trained on Wall Street Journal corpus or something like that. So it's general English. AUDIENCE: Those are things that he did manually as opposed to, say, [INAUDIBLE]? PETER SZOLOVITS: No. He did it algorithmically, but he didn't learn which algorithms to use. He made them up by hand. But then, of course, it's a big corpus. And he ran these programs over it that did those transformations. So he calls it two-phase parsing. There's a reference to his paper on the first slide in this section if you're interested in the details. It's described there. So what he wound up with is a tensor that has patients on one axis, the words appearing in the text on another axis. So he's still using a bag-of-words representation. But the third axis is these language concept subgraphs that we were talking about. And then he does tensor factorization on this. And what's interesting is that it works much better than I expected. So if you look at his technique, which he called SANTF, the precision and recall are about 0.72 and 0.854 macro-average and 0.754 micro-average, which is much better than the non-negative matrix factorization results, which only use patient by word or patient by subgraph, or, in fact, one where you simply do patient and concatenate the subgraphs and the words in one dimension. So that means that this is actually taking advantage of the three-way relationship. If you read papers from about 15, 20 years ago, people got very excited about the idea of bi-clustering, which is, in modern terms, the equivalent of matrix factorization. So it says given two dimensions of data, and I want to cluster things, but I want to cluster them in such a way that the clustering of one dimension helps the clustering of the other dimension. So this is a formal way of doing that relatively efficiently. And tensor factorization is essentially tri-clustering. So now I'm going to turn to the last of today's big topics, which is language modeling. And this is really where the action is nowadays in natural language processing in general. I would say that the natural language processing on clinical data is somewhat behind the state of the art in natural language processing overall. There are fewer corpora that are available. There are fewer people working on it. And so we're catching up. But I'm going to lead into this somewhat gently. So what does it mean to model a language? I mean, you could imagine saying it's coming up with a set of parsing rules that define the syntactic structure of the language. Or you could imagine saying, as we suggested last time, coming up with a corresponding set of semantic rules that say a concept or terms in the language correspond to certain concepts and that they are a combinatorially, functionally combined as the syntax directs, in order to give us a semantic representation. 
So we don't know how to do either of those very well. And so the current, the contemporary idea about language modeling is to say, given a sequence of tokens, predict the next token. If you could do that perfectly, presumably you would have a good language model. So obviously, you can't do it perfectly. Because we don't always say the same word after some sequence of previous words when we speak. But probabilistically, you can get close to that. And there's usually some kind of Markov assumption that says that the probability of emitting a token given the stuff that came before it is ordinarily dependent only on n previous words rather than on all of history, on everything you've ever said before in your life. And there's a measure called perplexity, which is the exponentiated entropy of the probability distribution over the predicted words. And roughly speaking, it's the number of likely ways that you could continue the text if all of the possibilities were equally likely. So perplexity is often used, for example, in speech processing. We did a study where we were trying to build a speech system that understood a conversation between a doctor and a patient. And we ran into real problems, because we were using software that had been developed to interpret dictation by doctors. And that was very well trained. But it turned out-- we didn't know this when we started-- that the language that doctors use in dictating medical notes is pretty straightforward, pretty simple. And so its perplexity is about nine, whereas conversations are much more free flowing and cover many more topics. And so its perplexity is about 73. And so the model that works well for perplexity nine doesn't work as well for perplexity 73. And so what this tells you about the difficulty of accurately transcribing speech is that it's hard. It's much harder. And that's still not a solved problem. Now, you probably all know about Zipf's law. So if you empirically just take all the words in all the literature of, let's say, English, what you discover is that the n-th most common word is about one over n times as probable as the most common word. So there is a long-tailed distribution. One thing you should realize, of course, is if you sum one over n from one to infinity, it diverges-- it's infinite. And that may not be an inaccurate representation of language, because language is productive and changes. And people make up new words all the time and so on. So it may actually be infinite. But roughly speaking, there is a kind of decline like this. And interestingly, in the Brown corpus, the top 10 words make up almost a quarter of the size of the corpus. So you write a lot of the's, of's, and's, a's, to's, in's, et cetera, and much less hematemesis, obviously. So what about n-gram models? Well, remember, if we make this Markov assumption, then all we have to do is pay attention to the last n tokens before the one that we're interested in predicting. And so people have generated these large corpora of n-grams. So for example, somebody, a couple of decades ago, took all of Shakespeare's writings-- I think they were trying to decide whether he had written all his works or whether the earl of somebody or other was actually the guy who wrote Shakespeare. You know about this controversy? Yeah. So that's why they were doing it. But anyway, they created this corpus. And they said-- so Shakespeare had a vocabulary of about 30,000 words and about 300,000 distinct bigrams, out of 844 million possible bigrams. So 99.96% of bigrams were never seen. 
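To ground the perplexity number and the bigram counts in something concrete, here is a minimal bigram language model over a toy corpus: count bigrams, turn them into smoothed conditional probabilities, score a sentence's perplexity, and sample new text (the generation procedure discussed next). The corpus, the smoothing scheme, and all names are purely illustrative.

```python
import math, random
from collections import Counter

corpus = [
    "i want to get chinese food",
    "i want to eat lunch",
    "i like chinese food",
]

BOS, EOS = "<s>", "</s>"
bigrams, unigrams = Counter(), Counter()
for sent in corpus:
    tokens = [BOS] + sent.split() + [EOS]
    unigrams.update(tokens[:-1])
    bigrams.update(zip(tokens[:-1], tokens[1:]))

def prob(w_prev, w, alpha=0.1):
    # add-alpha smoothing so unseen bigrams don't get probability zero
    V = len(unigrams) + 1
    return (bigrams[(w_prev, w)] + alpha) / (unigrams[w_prev] + alpha * V)

def perplexity(sentence):
    tokens = [BOS] + sentence.split() + [EOS]
    logp = sum(math.log2(prob(a, b)) for a, b in zip(tokens[:-1], tokens[1:]))
    return 2 ** (-logp / (len(tokens) - 1))   # 2 to the average negative log2 prob

def sample(max_len=20):
    out, prev = [], BOS
    for _ in range(max_len):
        # expand candidates by their counts so we sample proportionally
        nxt = [w for (p, w), c in bigrams.items() if p == prev for _ in range(c)]
        word = random.choice(nxt)
        if word == EOS:
            break
        out.append(word)
        prev = word
    return " ".join(out)

print(perplexity("i want chinese food"))
print(sample())
```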
So there's a certain regularity to his production of language. Now, Google, of course, did Shakespeare one better. And they said, hmm, we can take a terabyte corpus-- this was in 2006. I wouldn't be surprised if it's a petabyte corpus today. And they published this. They just made it available. So there were 13.6 million unique words that occurred at least 200 times in this tera-word corpus. And there were 1.2 billion five-word sequences that occurred at least 40 times. So these are the statistics. And if you're interested, there's a URL. And here's a very tiny part of their database. So ceramics, collectibles, collectibles-- I don't know-- occurred 55 times in a terabyte of text. Ceramics collectibles fine, ceramics collectibles by, pottery, cooking, comma, period, end of sentence, and, at, is, et cetera-- different numbers of times. Ceramics comes from occurred 660 times, which is a reasonably large number compared to some of its competitors here. If you look at four-grams, you see things like serve as the incoming, blah, blah, blah, 92 times; serve as the index, 223 times; serve as the initial, 5,300 times. So you've got all these statistics. And now, given those statistics, we can then build a generator. So we can say, all right. Suppose I start with the token that marks the beginning of a sentence, or the separator between sentences. And I say sample a random bigram starting with the beginning of a sentence and a word, according to its probability, and then sample the next bigram from that word and all the other words, according to its probability, and keep doing that until you hit the end-of-sentence marker. So for example, here I'm generating a sentence. It starts with I, then followed by want, followed by to, followed by get, followed by Chinese, followed by food, followed by end of sentence. So I've just generated, "I want to get Chinese food," which sounds like a perfectly good sentence. So here's what's interesting. If you look back again at the Shakespeare corpus and say, if we generated Shakespeare from unigrams, you get stuff like at the top, "To him swallowed confess here both. Which. Of save on trail for are ay device and rote life have." It doesn't sound terribly good. It's not very grammatical. It doesn't have that sort of Shakespearean English flavor. Although, you do have words like nave and ay and so on that are vaguely reminiscent. Now, if you go to bigrams, it starts to sound a little better. "What means, sir. I confess she? Then all sorts, he is trim, captain." That doesn't make any sense. But it starts to sound a little better. And with trigrams, we get, "Sweet prince, Falstaff shall die. Harry of Monmouth," et cetera. So this is beginning to sound a little Shakespearean. And if you go to quadrigrams, you get, "King Henry. What? I will go seek the traitor Gloucester. Exeunt some of the watch. A great banquet serv'd in," et cetera. I mean, when I first saw this, like 20 years ago or something, I was stunned. This is actually generating stuff that sounds vaguely Shakespearean and vaguely English-like. Here's an example of generating the Wall Street Journal. So from unigrams, "Months the my and issue of year foreign new exchanges September were recession." It's word salad. But if you go to trigrams, "They also point to ninety nine point six billion from two hundred four oh six three percent of the rates of interest stores as Mexico and Brazil." So you could imagine that this is some Wall Street Journal writer on acid writing this text. 
Because it has a little bit of the right kind of flavor. So more recently, people said, well, we ought to be able to make use of this in some systematic way to help us with our language analysis tasks. So to me, the first effort in this direction was Word2Vec, which was Mikolov's approach to doing this. And he developed two models. He said, let's build a continuous bag-of-words model that says what we're going to use is co-occurrence data on a series of tokens in the text that we're trying to model. And we're going to use a neural network model to predict the word from the words around it. And in that process, we're going to use the parameters of that neural network model as a vector. And that vector will be the representation of that word. And so what we're going to find is that words that tend to appear in the same context will have similar representations in this high-dimensional vector. And by the way, high-dimensional, people typically use like 300 or 500 dimensional vectors. So there's a lot of-- it's a big space. And the words are scattered throughout this. But you get this kind of cohesion, where words that are used in the same context appear close to each other. And the extrapolation of that is that if words are used in the same context, maybe they share something about meaning. So the other model is a skip-gram model, where you're doing the prediction in the other direction. From a word, you're predicting the words that are around it. And again, you are using a neural network model to do that. And you use the parameters of that model in order to represent the word that you're focused on. So what came as a surprise to me is this claim that's in his original paper, which is that not only do you get this effect of locality as corresponding meaning but that you get relationships that are geometrically represented in the space of these embeddings. And so what you see is that if you take the encoding of the word man and the word woman and look at the vector difference between them, and then apply that same vector difference to king, you get close to queen. And if you apply it uncle, you get close to aunt. And so they showed a number of examples. And then people have studied this. It doesn't hold it perfectly well. I mean, it's not like we've solved the semantics problem. But it is a genuine relationship. The place where it doesn't work well is when some of these things are much more frequent than others. And so one of the examples that's often cited is if you go, London is to England as Paris is to France, and that one works. But then you say as Kuala Lumpur is to Malaysia, and that one doesn't work so well. And then you go, as Juba or something is to whatever country it's the capital of. And since we don't write about Africa in our newspapers, there's very little data on that. And so that doesn't work so well. So there was this other paper later from van der Maaten and Geoff Hinton, where they came up with a visualization method to take these high-dimensional vectors and visualize them in two dimensions. And what you see is that if you take a bunch of concepts that are count concepts-- so 1/2, 30, 15, 5, 4, 2, 3, several, some, many, et cetera-- there is a geometric relationship between them. So they, in fact, do map to the same part of the space. Similarly, minister, leader, president, chairman, director, spokesman, chief, head, et cetera form a kind of cluster in the space. So there's definitely something to this. 
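A sketch of training a skip-gram Word2Vec model and probing "king - man + woman ≈ queen" style analogies is shown below, using the gensim library (parameter names assume gensim version 4 or later). The tiny tokenized corpus is a stand-in; with a corpus this small the analogy will not actually come out right, so the point here is just the shape of the API and of the query.

```python
from gensim.models import Word2Vec

sentences = [
    ["the", "king", "ruled", "the", "country"],
    ["the", "queen", "ruled", "the", "country"],
    ["a", "man", "walked", "into", "town"],
    ["a", "woman", "walked", "into", "town"],
]

model = Word2Vec(
    sentences,
    vector_size=300,   # dimensionality of the embeddings (300-500 is typical)
    window=5,          # context window on each side of the focus word
    sg=1,              # 1 = skip-gram, 0 = continuous bag of words
    min_count=1,
    epochs=50,
)

vec = model.wv["king"]   # the learned 300-dimensional vector for "king"
print(model.wv.most_similar(positive=["king", "woman"],
                            negative=["man"], topn=3))
```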
I promised you that I would get back to a different attempt to try to take a core of concepts that you want to use for term-spotting and develop an automated way of enlarging that set of concepts in order to give you a richer vocabulary by which to try to identify cases that you're interested in. So this was by some of my colleagues, including Kat, who you saw on Tuesday. And they said, well, what we'd like is a fully automated and robust, unsupervised feature selection method that leverages only publicly available medical knowledge sources instead of EHR data. So the method that David's group had developed, which we talked about earlier, uses data from electronic health records, which means that if you move to different hospitals there may be different conventions. And you might imagine that you have to retrain that sort of method, whereas here the idea is to derive these surrogate features from knowledge sources. So unlike that earlier model, here they built a Word2Vec skip-gram model from about 5 million Springer articles-- so these are published medical articles-- to yield 500-dimensional vectors for each word. And then what they did is they took the concept names that they were interested in and their definitions from the UMLS, and then they summed the word vectors for each of these words, weighted by inverse document frequency. So it's sort of a TF-IDF-like approach to weight different words. And then they went out and they said, OK, for every disease that's mentioned in Wikipedia, Medscape eMedicine, the Merck Manuals Professional Edition, the Mayo Clinic Diseases and Conditions, MedlinePlus Medical Encyclopedia, they used named entity recognition techniques to find all the concepts that are related to this phenotype. So then they said, well, there's a lot of randomness in these sources, and maybe in our extraction techniques. But if we insist that some concept appear in at least three of these five sources, then we can be pretty confident that it's a relevant concept. And so they said, OK, we'll do that. Then they chose the top k concepts whose embedding vectors are closest by cosine distance to the embedding of this phenotype that they've calculated. And they say, OK, the phenotype is going to be a linear combination of all these related concepts. So again, this is a bit similar to what we saw before. But here, instead of extracting the data from electronic medical records, they're extracting it from published literature and these web sources. And again, what you see is that the expert-curated features for these five phenotypes, which are coronary artery disease, rheumatoid arthritis, Crohn's disease, ulcerative colitis, and pediatric pulmonary arterial hypertension, they started with 20 to 50 curated features. So these were the ones that the doctors said, OK, these are the anchors in David's terminology. And then they expanded these to a larger set using the technique that I just described, and then selected down to the top n that were effective in finding relevant phenotypes. And this is a terrible graph that summarizes the results. But what you're seeing is that the orange lines are based on the expert-curated features. This is based on an earlier version of trying to do this. And SEDFE is the technique that I've just described. And what you see is that the automatic techniques for many of these phenotypes are just about as good as the manually curated ones. And of course, they require much less manual curation. Because they're using this automatic learning approach. 
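A minimal sketch of the knowledge-source feature selection idea follows: embed a phenotype (or any candidate concept) as the IDF-weighted sum of the word vectors in its name and definition, and rank candidate concepts by cosine similarity to that phenotype embedding. The `word_vectors` and `idf` dictionaries are assumed to come from something like the Springer-trained skip-gram model; everything here is illustrative and not the published pipeline.

```python
import numpy as np

word_vectors = {}   # word -> 500-d numpy vector (e.g., from a trained gensim model)
idf = {}            # word -> inverse document frequency weight

def embed(phrase):
    # IDF-weighted sum of the word vectors, normalized to unit length
    vecs = [idf.get(w, 1.0) * word_vectors[w]
            for w in phrase.lower().split() if w in word_vectors]
    if not vecs:
        return None
    v = np.sum(vecs, axis=0)
    return v / np.linalg.norm(v)

def cosine(u, v):
    return float(np.dot(u, v))   # both vectors are already unit-normalized

def top_k_concepts(phenotype, candidate_concepts, k=50):
    p = embed(phenotype)
    if p is None:
        return []
    scored = [(c, cosine(p, e)) for c in candidate_concepts
              if (e := embed(c)) is not None]
    return sorted(scored, key=lambda x: -x[1])[:k]

# candidate_concepts would be the NER hits that appear in at least three of the
# five knowledge sources; top_k_concepts keeps the ones closest to the phenotype.
```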
Another interesting example to return to the theme of de-identification is that a couple of my students, a few years ago, built a new de-identifier that has this rather complicated architecture. So it starts with a bi-directional recurrent neural network model that is implemented over the character sequences of words in the medical text. So why character sequences? Why might those be important? Well, consider a misspelled word, for example. Most of the character sequence is correct. There will be a bug in it at the misspelling. Or consider that a lot of medical terms are these compound terms, where they're made up of lots of pieces that correspond to Greek or Latin roots. So learning those can actually be very helpful. So you start with that model. You then could concatenate the results from both the left-running and the right-running recurrent neural network. And concatenate that with the Word2Vec embedding of the whole word. And you feed that into another bi-directional RNN layer. And then for each word, you take the output of those RNNs, run them through a feed-forward neural network in order to estimate the prob-- it's like a softmax. And you estimate the probability of this word belonging to a particular category of personally identifiable health information. So is it a name? Is it an address? Is it a phone number? Or whatever. And then the top layer is a kind of conditional random field-like layer that imposes a sequential probability distribution that says, OK, if you've seen a name, then what's the next most likely thing that you're going to see? And so you combine that with the probability distributions for each word in order to identify the category of PHI or non-PHI for that word. And this did insanely well. So optimized by F1 score, we're up at a precision of 99.2%, recall of 99.3%. Optimized by recall, we're up at about 98%, 99% for each of them. So this is doing quite well. Now, there is a non-machine learning comment to make, which is that if you read the HIPAA law, the HIPAA regulations, they don't say that you must get rid of 99% of the personally identifying information in order to be able to share this data for research. It says you have to get rid of all of it. So no technique we know is 100% perfect. And so there's a kind of practical understanding among people who work on this stuff that nothing's going to be perfect. And therefore, that you can get away with a little bit. But legally, you're on thin ice. So I remember many years ago, my wife was in law school. And I asked her at one point, so what can people sue you for? And she said, absolutely anything. They may not win. But they can be a real pain if you have to go defend yourself in court. And so this hasn't played out yet. We don't know if a de-identifier that is 99% sensitive and 99% specific will pass muster with people who agree to release data sets. Because they're worried, too, about winding up in the newspaper or winding up getting sued. Last topic for today-- so if you read this interesting blog, which, by the way, has a very good tutorial on BERT, he says, "The year 2018 has been an inflection point for machine learning models handling text, or more accurately, NLP. Our conceptual understanding of how best to represent words and sentences in a way that best captures underlying meanings and relationships is rapidly evolving." 
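Before picking up that thread, a rough PyTorch sketch of the de-identification tagger described a few paragraphs above may help: a character-level bi-LSTM per word, concatenated with a pre-trained word embedding, fed through a word-level bi-LSTM and a per-word classifier over PHI categories. The CRF layer on top is omitted here, and all sizes and names are illustrative rather than the exact published model.

```python
import torch
import torch.nn as nn

class DeidTagger(nn.Module):
    def __init__(self, n_chars, n_words, n_tags,
                 char_dim=25, char_hidden=25, word_dim=100, word_hidden=100):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_hidden,
                                 bidirectional=True, batch_first=True)
        self.word_emb = nn.Embedding(n_words, word_dim)  # init from Word2Vec in practice
        self.word_lstm = nn.LSTM(word_dim + 2 * char_hidden, word_hidden,
                                 bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * word_hidden, n_tags)    # scores over PHI tag categories

    def forward(self, char_ids, word_ids):
        # char_ids: (n_words_in_sentence, max_word_len); word_ids: (n_words_in_sentence,)
        _, (h, _) = self.char_lstm(self.char_emb(char_ids))
        char_feat = torch.cat([h[0], h[1]], dim=-1)        # final fwd + bwd char states
        feats = torch.cat([self.word_emb(word_ids), char_feat], dim=-1)
        word_out, _ = self.word_lstm(feats.unsqueeze(0))   # add a batch dimension
        return self.out(word_out.squeeze(0))               # (n_words, n_tags) tag scores

# Usage sketch: scores = DeidTagger(n_chars=80, n_words=20000, n_tags=9)(char_ids, word_ids)
# would then feed a linear-chain CRF and be trained with negative log-likelihood.
```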
And so there are a whole bunch of new ideas that have come about in about the last year or two years, including ELMo, which learns context-specific embeddings, the Transformer architecture, this BERT approach. And then I'll end with just showing you this gigantic GPT model that was developed by the OpenAI people, which does remarkably better than the stuff I showed you before in generating language. All right. If you look inside Google Translate, at least as of not long ago, what you find is a model like this. So it's essentially an LSTM model that takes input words and munges them together into some representation, a high-dimensional vector representation, that summarizes everything that the model knows about that sentence that you've just fed it. Obviously, it has to be a pretty high-dimensional representation, because your sentence could be about almost anything. And so it's important to be able to capture all that in this representation. But basically, at this point, you start generating the output. So if you're translating English to French, these are English words coming in, and these are French words going out, in sort of the way I showed you, where we're generating Shakespeare or we're generating Wall Street Journal text. But the critical feature here is that in the initial version of this, everything that you learned about this English sentence had to be encoded in this one vector that got passed from the encoder into the decoder, or from the source language into the target language generator. So then someone came along and said, hmm-- someone, namely these guys, came along and said, wouldn't it be nice if we could provide some auxiliary information to the generator that said, hey, which part of the input sentence should you pay attention to? And of course, there's no fixed answer to that. I mean, if I'm translating an arbitrary English sentence into an arbitrary French sentence, I can't say, in general, look at the third word in the English sentence when you're generating the third word in the French sentence. Because that may or may not be true, depending on the particular sentence. But on the other hand, the intuition is that there is such a positional dependence and a dependence on what the particular English word was that is an important component of generating the French word. And so they created this idea that in addition to passing along the this vector that encodes the meaning of the entire input and the previous word that you had generated in the output, in addition, we pass along this other information that says, which of the input words should we pay attention to? And how much attention should we pay to them? And of course, in the style of these embeddings, these are all represented by high-dimensional vectors, high-dimensional real number vectors that get combined with the other vectors in order to produce the output. Now, a classical linguist would look at this and retch. Because this looks nothing like classical linguistics. It's just numerology that gets trained by stochastic gradient descent methods in order to optimize the output. But from an engineering point of view, it works quite well. So then for a while, that was the state of the art. And then last year, these guys, Vaswani et al. came along and said, you know, we now have this complicated architecture, where we are doing the old-style translation where we summarize everything into one vector, and then use that to generate a sequence of outputs. 
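Numerically, the attention signal described above is just a set of weights over the encoder states and the weighted sum of those states that the decoder gets as extra context. A tiny numpy illustration follows; dot-product scoring is used here for simplicity, whereas the original attention papers learn a small network to compute the score, and all names are made up.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(decoder_state, encoder_states):
    # encoder_states: (source_len, d); decoder_state: (d,)
    scores = encoder_states @ decoder_state   # one score per source word
    weights = softmax(scores)                 # "how much attention" to pay to each word
    context = weights @ encoder_states        # weighted sum of encoder states, shape (d,)
    return context, weights

# toy example: 5 source words, 8-dimensional hidden states
enc = np.random.randn(5, 8)
dec = np.random.randn(8)
context, weights = attend(dec, enc)
print(weights)   # sums to 1; the largest entries mark the words being "attended to"
```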
And we have this attention mechanism that tells us how much of various inputs to use in generating each element of the output. Is the first of those actually necessary? And so they published this lovely paper saying attention is all you need, that says, hey, you know that thing that you guys have added to this translation model. Not only is it a useful addition, but in fact, it can take the place of the original model. And so the Transformer is an architecture that is the hottest thing since sliced bread at the moment, that says, OK, here's what we do. We take the inputs. We calculate some embedding for them. We then want to retain the position, because of course, the sequence in which the words appear matters. And the positional encoding is this weird thing where it encodes using sine waves so that-- it's an orthogonal basis. And so it has nice characteristics. And then we run it into an attention model that is essentially computing self-attention. So it's saying what-- it's like Word2Vec, except in a more sophisticated way. So it's looking at all the words in the sentence and saying, which words is this word most related to? And then, in order to complicate it some more, they say, well, we don't want just a single notion of attention. We want multiple notions of attention. So what does that sound like? Well, to me, it sounds a bit like what you see in convolutional neural networks, where often when you're processing an image with a CNN, you're not only applying one filter to the image but you're applying a whole bunch of different filters. And because you initialize them randomly, you hope that they will converge to things that actually detect different interesting properties of the image. So the same idea here-- that what they're doing is they're starting with a bunch of these attention matrices and saying, we initialize them randomly. They will evolve into something that is most useful for helping us deal with the overall problem. So then they run this through a series of, I think, in Vaswani's paper, something like six layers that are just replicated. And there are additional things like feeding forward the input signal in order to add it to the output signal of the stage, and then normalizing, and then rerunning it, and then running it through a feed-forward network that also has a bypass that combines the input with the output of the feed-forward network. And then you do this six times, or n times. And that then feeds into the generator. And the generator then uses a very similar architecture to calculate output probabilities, and then it samples from those in order to generate the text. So this is sort of the contemporary way that one can do translation, using this approach. Obviously, I don't have time to go into all the details of how all this is done. And I'd probably do it wrong anyway. But you can look at the paper, which gives a good explanation. And that blog that I pointed to also has a pointer to another blog post by the same guy that does a pretty good job of explaining the Transformer architecture. It's complicated. So what you get out of the multi-head attention mechanism is that-- here is one attention head. And for example, the colors here indicate the degree to which the encoding of the word "it" depends on the other words in the sentence. And you see that it's focused on the animal, which makes sense. Because "it," in fact, is referring to the animal in this sentence. Here they show another attention head. 
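Here is a minimal PyTorch sketch of the self-attention computation at the core of the Transformer, as just described: every token produces a query, a key, and a value; each position's output is a weighted sum of all the values, with weights given by scaled dot products of its query against every key; and multi-head attention simply runs several independently parameterized copies of this in parallel and concatenates them. Dimensions are illustrative, and the residual connections, layer normalization, and positional encodings described above are left out.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionHead(nn.Module):
    def __init__(self, model_dim, head_dim):
        super().__init__()
        self.q = nn.Linear(model_dim, head_dim)
        self.k = nn.Linear(model_dim, head_dim)
        self.v = nn.Linear(model_dim, head_dim)

    def forward(self, x):                       # x: (batch, seq_len, model_dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(k.size(-1))
        weights = F.softmax(scores, dim=-1)     # (batch, seq_len, seq_len): who attends to whom
        return weights @ v

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, model_dim=512, n_heads=8):
        super().__init__()
        head_dim = model_dim // n_heads
        self.heads = nn.ModuleList(
            SelfAttentionHead(model_dim, head_dim) for _ in range(n_heads))
        self.out = nn.Linear(model_dim, model_dim)

    def forward(self, x):
        # each head learns its own notion of which words relate to which
        return self.out(torch.cat([h(x) for h in self.heads], dim=-1))

The weights matrix in each head is what those attention visualizations are showing: for a given word, how strongly it attends to every other word in the sentence.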
And this second head focuses on "was too tired," which is also good. Because "it," again, refers to the thing that was too tired. And of course, by multi-headed, they mean that it's doing this many times. And so you're identifying all kinds of different relationships in the input sentence. Well, along the same lines is this encoding called ELMo. People seem to like Sesame Street characters. So ELMo is based on a bi-directional LSTM. So it's an older technology. But what it does is, unlike Word2Vec, which built an embedding for each type-- so every time the word "junk" appears, it gets the same embedding. Here what they're saying is, hey, take context seriously. And we're going to calculate a different embedding for each occurrence in context of a token. And this turns out to be very good. Because it goes part of the way to solving the word-sense disambiguation problem. So this is just an example. If you look at the word "play" in GloVe, which is a slightly more sophisticated variant of the Word2Vec approach, you get playing, game, games, played, players, plays, player, play, football, multiplayer. This all seems to be about games. Because probably, from the literature that they got this from, that's the most common usage of the word "play." Whereas, using this bi-directional language model, they can separate out something like, "Kieffer, the only junior in the group, was commended for his ability to hit in the clutch, as well as his all-around excellent play." So this is presumably the baseball player. And here is, "They were actors who had been handed fat roles in a successful play." So this is a different meaning of the word play. And so this embedding also has made really important contributions to improving the quality of natural language processing by being able to deal with the fact that single words have multiple meanings not only in English but in other languages. So after ELMo comes BERT, which is this Bidirectional Encoder Representations from Transformers. So rather than using the LSTM kind of model that ELMo used, these guys say, well, let's hop on the bandwagon, use the Transformer-based architecture. And then they introduced some interesting tricks. So one of the problems with Transformers is if you stack them on top of each other there are many paths from any of the inputs to any of the intermediate nodes and the outputs. And so if you're doing self-attention, trying to figure out where the output should pay attention to the input, the answer, of course, is that if the input word is present in your model, what you will learn is that the corresponding word is the right word for your output. So they have to prevent that from happening. And so the way they do it is by masking off, at each level, some fraction of the words or of the inputs at that level. So what this is doing is it's a little bit like the skip-gram model in Word2Vec, where it's trying to predict the likelihood of some word, except it doesn't know what a significant fraction of the words are. And so it can't overfit in the way that I was just suggesting. So this turned out to be a good idea. It's more complicated. Again, for the details, you have to read the paper. I gave both the Transformer paper and the BERT paper as optional readings for today. I meant to give them as required readings, but I didn't do it in time. So they're optional. But there are a whole bunch of other tricks. So instead of using words, they actually used word pieces. 
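The masked prediction that BERT trains on can be sketched very simply. In the published recipe, roughly 15% of the tokens are selected; most of those are replaced by a special mask symbol, a small fraction by random words from the vocabulary, and a small fraction left unchanged, and the model is trained to recover the original tokens at the selected positions. The 80/10/10 split below is the proportion reported in the BERT paper rather than anything stated here, and the tokenizer and vocabulary are assumed to exist already.

import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", select_prob=0.15):
    # Returns (corrupted tokens, labels): the label is the original token at
    # selected positions and None elsewhere, so the loss is computed only on
    # the roughly 15% of positions chosen for prediction.
    corrupted, labels = [], []
    for tok in tokens:
        if random.random() < select_prob:
            labels.append(tok)
            r = random.random()
            if r < 0.8:
                corrupted.append(mask_token)            # usually hide the token entirely
            elif r < 0.9:
                corrupted.append(random.choice(vocab))  # sometimes swap in a random word
            else:
                corrupted.append(tok)                   # sometimes leave it as is
        else:
            labels.append(None)
            corrupted.append(tok)
    return corrupted, labels

Because the model never knows which positions were corrupted and which were left alone, it cannot simply copy the input through the many paths in the stacked Transformer layers, which is exactly the overfitting problem described above.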
So for the word pieces, think about syllables: don't becomes do and apostrophe t, and so on. And then they discovered that masking about 15% of the tokens seems to work better than other percentages. So those are the hidden tokens that prevent overfitting. And then they do some other weird stuff. Like, instead of masking a token, they will inject random other words from the vocabulary into its place, again, to prevent overfitting. And then they look at different tasks like, can I predict the next sentence in a corpus? So I read a sentence. And the translation is not into another language. But it's predicting what the next sentence is going to be. So they trained it on 800 million words from something called the Books corpus and a roughly 2 and 1/2 billion-word Wikipedia corpus. And what they found was that there is an enormous improvement on a lot of classical tasks. So this is a listing of some of the standard tasks for natural language processing, mostly not in the medical world but in the general NLP domain. And you see that you get things like an improvement from 80%. Or even the GPT model that I'll talk about in a minute is at 82%. They're up to about 86%. So a 4% improvement in this domain is really huge. I mean, very often people publish papers showing a 1% improvement. And if their corpus is big enough, then it's statistically significant, and therefore publishable. But it's not significant in the ordinary meaning of the term significant, if you're doing 1% better. But doing 4% better is pretty good. Here we're going from like 66% to 72% from the earlier state of the art-- 82 to 91; 93 to 94; 35 to 60 in the CoLA task corpus of linguistic acceptability. So this is asking, I think, Mechanical Turk people, for generated sentences, is this sentence a valid sentence of English? And so it's an interesting benchmark. So it's producing really significant improvements all over the place. They trained two models of it. The base model is the smaller one. The large model is the same approach with many more layers and parameters. Enormous amount of computation in doing this training-- so I've forgotten, it took them like a month on some gigantic cluster of GPU machines. And so it's daunting, because you can't just crank this up on your laptop and expect it to finish in your lifetime. The last thing I want to tell you about is this GPT-2. So this is from the OpenAI Institute, which is one of these philanthropically funded-- I think, this one, by Elon Musk-- research institutes to advance AI. And what they said is, well, this is all cool, but-- so they were not using BERT. They were using the Transformer architecture but without the same training style as BERT. And they said, the secret is going to be that we're going to apply this not only to one problem but to a whole bunch of problems. So it's a multi-task learning approach that says, we're going to build a better model by trying to solve a bunch of different tasks simultaneously. And so they built enormous models. By the way, the task itself is given as a sequence of tokens. So for example, they might have a task that says translate to French, English text, French text. Or answer the question, document, question, answer. And so the system not only learns how to do whatever it's supposed to do. But it even learns something about the tasks that it's being asked to work on by encoding these and using them as part of its model. So they built four different models. Take a look at the bottom one. 1.5 billion parameters-- this is a large model. This is a very large model. 
And so it's a byte-level model. So they just said forget words, because we're trying to do this multilingually. And so for Chinese, you want characters. And for English, you might as well take characters also. And the system will, in its 1.5 billion parameters, learn all about the sequences of characters that make up words. And it'll be cool. And so then they look at a whole bunch of different challenges. And what you see is that the state of the art before they did this on, for example, the Lambada data set was that the perplexity of its predictions was a hundred. And with this large model, the perplexity of its predictions is about nine. So that means that it's reduced the uncertainty of what to predict next ridiculously much-- I mean, by more than an order of magnitude. And you get similar gains, accuracy going from 59% to 63% accuracy on a-- this is the children's something-or-other challenge-- from 85% to 93%-- so dramatic improvements almost across the board, except for this particular data set, where they did not do well. And what really blew me away is here's an application of this 1.5 billion-parameter model that they built. So they said, OK, I give you a prompt, like the opening paragraph of a Wall Street Journal article or a Wikipedia article. And you complete the article by using that generator idea that I showed you before, that just uses the language model and picks the most likely word to come next and emits that as the next word. So here is a prompt that says, "A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown." By the way, this is made up. I mean, this is not a real news article. And the system comes back with a completion that says, "The incident occurred on the downtown train line, which runs from Covington and Ashland stations. In an email to Ohio news outlets, the US Department of Energy said it's working with the Federal Railroad Administration to find the thief," et cetera. This looks astoundingly good. Now, the paper from which this comes-- this is actually from a blog, but they've also published a paper about it-- claims that these examples are not even cherry-picked. If you go to that page and pick sample 1, 2, 3, 4, 5, 6, et cetera, you get different examples that they claim are not cherry-picked. And every one of them is really good. I mean, you could imagine this being an actual article about this actual event. So somehow or other, in this enormous model, and with this Transformer technology, and with the multi-task training that they've done, they have managed to capture so much of the regularity of the English language that they can generate these fake news articles based on a prompt and make them look unbelievably realistic. Now, interestingly, they have chosen not to release that trained model. Because they're worried that people will, in fact, do this, and that they will generate fake news articles all the time. They've released a much smaller model that is not nearly as good in terms of its realism. So that's the state of the art in language modeling at the moment. And as I say, the general domain is ahead of the medical domain. 
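The completion procedure itself is just the generator idea described above, run token by token. Here is a minimal sketch for a generic autoregressive language model; the model interface (a callable returning next-token logits for a sequence of token ids) is a placeholder rather than the actual GPT-2 API, and greedy argmax decoding is shown, though in practice these systems usually sample with a temperature or top-k rule to get more varied text.

import torch

@torch.no_grad()
def complete(model, prompt_ids, n_new_tokens=50):
    # prompt_ids: token ids for the prompt; model(ids_tensor) is assumed to
    # return logits of shape (seq_len, vocab_size), one row per position.
    ids = list(prompt_ids)
    for _ in range(n_new_tokens):
        logits = model(torch.tensor(ids))
        next_id = int(logits[-1].argmax())   # most likely next token given everything so far
        ids.append(next_id)                  # emit it and continue from the longer prefix
    return ids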
But you can bet that there are tons of people who are sitting around looking at exactly these results and saying, well, we ought to be able to take advantage of this to build much better language models for the medical domain and to exploit them in order to do phenotyping, in order to do entity recognition, in order to do inference, in order to do question answering, in order to do any of these kinds of topics. And I was talking to Patrick Winston, who is one of the good old-fashioned AI people, as he characterizes himself. And the thing that's a little troublesome about this is that this technology has virtually nothing to do with anything that we understand about language or about inference or about question answering or about anything. And so one is left with this queasy feeling that, here is a wonderful engineering solution to a whole set of problems, but it's unclear how it relates to the original goal of artificial intelligence, which is to understand something about human intelligence by simulating it in a computer. Maybe our BCS friends will discover that there are, in fact, transformer mechanisms deeply buried in our brain. But I would be surprised if that turned out to be exactly the case. But perhaps there is something like that going on. And so this leaves an interesting scientific conundrum of, exactly what have we learned from this type of very, very successful model building? OK. Thank you. [APPLAUSE]
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019
10_Application_of_Machine_Learning_to_Cardiac_Imaging.txt
PROFESSOR: So welcome, everyone. Today is the first of what will be a series of four guest lectures throughout the semester. There will be two guest lectures, starting the week from today, and then there'll be another one towards the end of the semester. And what Pete and I decided to do is to bring in people who know a lot more than us about some area of expertise. In today's instance, it's going to be about cardiovascular medicine, in particular about how to use imaging and machine learning on images in that context. And for today's lecture, we're very excited to have professor Rahul Deo to speak. Rahul's name kept on showing up, as I did research over the last couple of years. First, my group was starting to get interested in echocardiography, and we said, oh, here's an interesting paper to read on it. We read it, and then we read another paper on doing subtyping of ejection fraction which is a type of heart failure, and we read it. I wasn't really paying attention to the names on the papers, and then suddenly, someone told me, there's this guy moving to Boston next month who's doing a lot of interesting work and interesting machine learning. You should go meet him. And of course, I meet him, and then I tell him about these papers I read, and he said, oh, I wrote all of those papers. He was a senior author on them. So Rahul's been around for a while. He is already a senior in his field. He started out doing his medical school training at Cornell, in Cornell Medical School, in New York City, at the same time as doing his PhD at Rockefeller University. And then he spent the first large chunk-- after his post-doctoral training, up here in Boston, at Harvard Medical School-- he spent a large chunk of his career as faculty at UCSF, in California. And just moved back this past year to take a position as the chief data scientist-- is that right-- for the One Brave Idea project which is a very large initiative joint between MIT and Brigham and Women's Hospital to study cardiovascular medicine. He'll tell you more maybe. And Rahul's research has really gone the full spectrum, but the type of things you'll hear about today is actually not what he's been doing most of his career, amazingly so. Most of his career, he's been thinking more about genotype and how to really bridge that genotype-phenotype branch, but I asked him specifically to talk about imaging. So that's what he'll be focusing on today in his lecture. And without further ado, thank you, Rahul, for coming here. [APPLAUSE] RAHUL DEO: So I'm used to lecturing the clinical audiences, so you guys are by far the most technical audience. So please spare me a little bit, but I actually want to encourage interruptions, questions. This is a very opinionated lecture, so that if anybody has sort of any questions, reservations, please bring them up during lecture. Don't wait till the end. And in part, it's opinionated because I feel passionately that the stuff we're doing needs to make its way into practice. It's not by itself purely academically interesting. We need to study the things we're doing. We're already picking up what everybody else here is already doing. So it's OK from that standpoint, but it really has to make its way. And that means that we have to have some mature understanding of what makes its way into practice, where the resistance will be. So the lecture will be peppered throughout with some opinions and comments in that, and hopefully, that will be useful. 
So just a quick outline, just going to introduce cardiac structure and function which is probably not part of the regular undergraduate and graduate training here at MIT. Talk a little bit about what the major cardiac diagnostics are and how we use them. And all this is really to help guide the thought and the decision making about how we would ever automate and bring this into-- how to bring machine learning, artificial intelligence, into actual clinical practice. Because I need to give enough background so you realize what the challenges are, and then the question probably everyone has is, where's the data? How would one get access to some of this stuff to be able to potentially do work in this area? And then, I'm going to venture a little bit into computer vision and just talk about some of the topics that at least I've been thinking about that are relevant to what we're doing. And then talk about some of this work around an automated pipeline for echocardiogram, not as by any means a gold standard but really just as sort of an initial foray into trying to make a dent into this. And then thinking a little bit about what lessons-- David mentioned that you talked about electrocardiogram last week or last class, and so a little bit of some of the ideas from there, and how they would lend themselves to insights about future types of approaches with automated interpretation. And then my background is actually more in biology. So I'm going to come back and say, OK, enough with all this imaging stuff, what about the biology? How can we make some insights there? OK. So every time people try to get funding for coronary heart disease, they try to talk up just how important it is. So this is still-- we have some battles with the oncology people-- but this is still the leading cause of death in the world. And then people say, you're just emphasizing the developed world. There's lots of communicable diseases that matter much more. So even if you look at those, and you look at the bottom here-- this is all causes of death, age-adjusted-- cardiovascular disease is still number one amongst them. So certainly it remains important and increasingly so in some of the developing world also. So it's important to think a little bit about what the heart does, because this is going to guide at least the way that diseases have been classified. So the main thing the heart does is it's a pump, and it delivers oxygenated blood throughout the circulatory system to all the tissues that need it-- the brain, the kidneys, the muscles, and oxygen, of course, is required for ATP production. So it's a pretty impressive organ. It pumps about five liters of blood a minute, and with exercise, that can go up five to seven-fold or so, with conditioned athletes, not me, but other people can ramp that up substantially. And we have this need to keep a very, very regular beat, so if you pause for about three seconds, you are likely to get lightheaded or pass out. So you have to maintain this rhythmic beating of your heart, and you can compute what that would be, and somewhere around two billion beats in a typical lifetime. So I'm going to show a lot of pictures and videos throughout this. So it's probably worthwhile just to take a pause a little bit and talk about what the anatomy of the heart is. So the heart sits like this, so the pointy part is kind of sitting out to the side, like that. And so I'm going to just sort of describe the flow of blood. 
So the blood comes in something called the inferior vena cava or the superior vena cava, that's draining from the brain. This is draining from the lower body, and then enters into a chamber called the right atrium. It moves through something called the tricuspid valve into what's called the right ventricle. The right ventricle has got some muscle to it. It pumps into the lungs. There, the blood picks up oxygen, so that's why it's shown as being red here. The oxygenated blood then comes through the left atrium and then into the left ventricle through something called the mitral valve. We'll show you some pictures of the mitral valve later on. And then the left ventricle, which is the big workhorse of the heart, pumps blood through the rest of the body, through a structure called the aorta. So in through the right heart, through the lungs, through the left heart, to the rest of the body. And then shown here in yellow is the conduction system. So you guys got a little bit of a conversation last class on the electrical system. So the sinoatrial node is up here in the right atrium, and then conduction goes through. So the P wave on an EKG represents the conduction through there. You get through the AV node, where there's a delay which is a PR interval, and then you get spreading through the ventricles which is the QRS complex, and then repolarization is the T wave. So that's the electrical system, and of course, these things have to work intimately together. Every single basic cardiac physiology course will show this diagram called the Wiggers diagram which really just shows the interconnectedness of the electrical system. So there's the EKG up there. These are the heart sounds that a provider would listen to with the stethoscope, and this is capturing the flow of sort of the changes in pressure in the heart and in the aorta. So the heart fills during a period of time called diastole. The mitral valve closes. The ventricle contracts. The pressure increases. This is a period of time called systole. Eventually, something called the aortic valve pops open, and blood goes through the rest of the body. The heart finally starts to relax. The aortic valve closes. Then, you fill again. So this happens again and again and again in a cyclical way, and you have this combination of electrical and mechanical properties. OK. So I have some pictures here. These are all MRIs. I'm going to talk about echocardiography which is these very ugly, grainy things that I unfortunately have to work with. MRIs are beautiful but very expensive. So there's a reason for that. So this is something called the long axis view of the heart. So this is the thick walled left ventricle there. This is the left atrium there, and you can see this beautiful turbulent flow of blood in there, and it's flowing from the atrium to the ventricle. This is another patient's. It's called the short axis view. There is the left ventricle and the right ventricle there. So we're kind of looking at it somewhat obliquely, and then this is another view-- it's a little bit dull there. I'm sorry. We can brighten it a little bit. This is what's called the four chamber view. So you can see the left ventricle and right ventricle here. So the reason for these different views is, ultimately, that people have measures of function and measures of disease that go along with these specific views. So you're going to see them coming back again and again. OK. 
So the way that physicians like to organize disease definitions is really around some of these same kinds of functions. So failure of the heart to pump properly causes a disease called heart failure, and this shows up in terms of being out of breath, having fluid buildup in the belly and in the legs, and this is treated with medications. Sometimes, you can have some artificial devices to help the heart pump, and ultimately, you could even have a transplant, depending on how severe it is. So that's the pump. Blood supply to the heart ultimately can also be blocked, and that causes a disease called coronary artery disease. If blood flow is completely blocked, you can get something called a heart attack or myocardial infarction. That's chest pain, sometimes shortness of breath, and we open up those blocked vessels by angioplasty, stick a stent in there, or bypass them altogether. And then the flow of blood has to be one way. So abnormalities of flow of the blood through valves is valvular disease, and so you can have either too-tight valves, so that's called stenosis. Or you can have leaky valves. That's called regurgitation. That shows up as light-headedness, shortness of breath, fainting, and then you've got to fix those valves. And finally, there's abnormalities of rhythm. So something like atrial fibrillation, which is a quivering of the atrium, or too-slow heartbeats can present as palpitations, fainting, or even sudden death. And you can stick a pacemaker in there, defibrillator in there, or try to burn off the arrhythmia. OK. So this is like the very physiology-centric view, but the truth is that the heart has a whole lot of cells. So there's a lot more biology there than simply just thinking about the pumping and the electrical function. Only 30% of the cells or so are these cardiomyocytes. So these are the cells that are involved in contraction. These are cells that are excitable, but that's only 30% of the cells. There are endothelial cells in there. There are fibroblasts. There's a bunch of blood cells in there too, certainly a lot of red blood cells in there too. So you have lots of other things. So we're going to come back to this a little bit when talking about how should we be thinking about disease? The historic way is to think about pumping and electrical activation, but really, there's maybe a little bit more complexity here that needs to be addressed. OK. So there's a lot of different-- so cardiology is very imaging-centric, and as a result, it's very expensive. Because imaging costs a lot of money to do, and so I have dollar signs here reflecting the sorts of different tests we do. So you saw the cheapest one last week, electrocardiogram, so one dollar sign, and that has lots of utility. For example, one could diagnose an acute heart attack with that. Echocardiography, which involves sound waves, is ultimately more used for quantifying structure and function, can pick up heart failure, valvular disease, high blood pressure in the lungs. So that's another modality. MRI, which is just not used all that much in this country, is very expensive. It does largely the same things, and you can imagine, even though it's beautiful, people have not had an easy time being able to justify why it's any better than this slightly cheaper modality. And then you have angiography which can either be by CAT scan or by X-ray. And that visualizes the flow of blood through the heart and looks for blockages which are going to be stented, ballooned up and stented. 
And then you had these kind of non-invasive technologies, like PET and SPECT that use radionucleotides, like technetium, rubidium, and they look for abnormalities in blood flow to detect whether or non-invasively there's some patch of the heart that isn't getting enough blood. If you get one of these, and it's abnormal, often, you go over there, and you take a trip to the movies-- as my old teachers used to say-- and then you may find yourself with an angioplasty or stent or bypass. So one of the sad things about cardiology is we don't define our diseases by biology. We define our diseases often related to whether the anatomy of the physiology is abnormal or normal, usually based on some of these images or some of these numbers. OK. So we have to make decisions, and we often use these very same things too to be able to make some decisions. So we have to decide whether we want to put a defibrillator, and to do so, you often need to get an echocardiogram to look at the pumping function of the heart. If you want to decide on whether somebody needs angioplasty, you have to get an angiogram. If you want to decided to get a valve replacement, you need an echo. But some of these other ones actually don't involve any imaging, and this is sort of one of the challenges that I'm going to talk about is that all of the future-- you can imagine building brand new risk models, new classification models. You're stuck with the data that's out there, and the data that's out there is ultimately being collected because somebody feels like it's worth paying for it already. So if you want to build a brand new risk model for who's going to have a myocardial infarction, you're probably not going to have any echocardiograms to be able to use for that, because nobody is going to have paid for that to be collected in the first place. So this is a problem. To be able to innovate, I've got to keep on coming back to that, because I think you're going to be shocked by the small sample sizes that we face in some of these things. And part of it is because if you just want to piggyback on what insurers are going to be willing to pay for to get your data, you're going to be stuck with only being able to work off the stuff we already know something about. So much of my work has been really trying to think about how we can change that. OK, so just a little bit more, and then we can get into a little bit more meat. So sort of the universal standard for how imaging data is stored is something called DICOMs, or Digital Imaging and Communications standard, and really, the end of the day, there is some compressed data for the images. There's a DICOM header, which I'll show you in a moment. It's lots of nice Python libraries that are available to be able to work with this data, and there's a free viewer you could use too. OK. So where do I get access to this? So this has actually been an incredible pain. So hospitals are set up to be clinical operations. They're not set up to make it easy for you to get gobs of data for being able to do machine learning. It's just not really there. And so sometimes, you have some of these data archives that store this data, but there's lots of reasons for why people make that difficult. And one of them is because often images have these burned in pixels with identifiable information. So you'll have a patient's name emblazoned in the image. You'll have date of birth. You'll have kind of other attributes. 
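To give a sense of how one of those Python libraries works, here is a minimal sketch using pydicom to pull header fields and pixel data out of a single file; the file name and the particular fields are illustrative. Note that this only touches the header metadata -- names and dates burned directly into the image pixels themselves, as just described, have to be dealt with separately, for example by blanking known screen regions.

import pydicom

ds = pydicom.dcmread("example_echo_frame.dcm")   # hypothetical local file
print(ds.PatientName, ds.PatientBirthDate)       # header PHI fields you would need to scrub
print(ds.get("CineRate", "n/a"))                 # frame rate, when the vendor stores it
pixels = ds.pixel_array                          # numpy array of the image or video frames
print(pixels.shape)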
So you're stuck with that, and not only is it a problem that they're there, the vendors don't make it easy to be able to get rid of that information. So you actually have a problem that they don't really make it easy to download in bulk or de-identify this. And part of the reason is because then it would make it easy for you to switch vendors and have somebody else take over. So they make it a little bit hard for you. Once it's in there, it's hard for you to get it out, and people are selling their data. That's certainly happening too. So there's a little bit of attempts to try to control things that way, and many of the labels you want are stored separately. So you want to know what the diseases of these people. So you have the raw imaging data, but all the clinical stuff is somewhere else. So you have to sometimes link that, and so you need to get access there. And so just to give you a little bit of an idea of scale, so we're about to get all the ECGs from Brigham and Women's which is about 30 million historically, and this is all related to cost. So positron emission tomography, you can get about 8,000 or so, and we're one of the busiest centers for that. Echocardiograms are in the 300,000 to 500,000 range in archives. So that gets a little bit more interesting. OK. This is what a DICOM header looks like. You have some sort of identifiers, and then you have some information there, attributes of the images, patient name, date of birth, frame rate. These kind of things are there, and there's some variability. So it's never quite easy. OK. So these different modalities have some different benefits to them which is why they're used for one disease or the other. And so one of the real headaches is that the heart moves. So the chest wall moves, because we breathe, and the heart moves too. So you have to image something that has enough temporal frequency that you're not overwhelmed by the basic movement of the heart itself, and so some of these things aren't great. So SPECT or PET acquire their images, which are radioactive counts, over minutes. So that's certainly a problem when it comes to something that's moving like that, and if you want to have high resolution. So typically, you have very poor spatial resolution for something that ultimately doesn't deal well with the moving aspect. So coronary angiography has very, very fast frame rates. So that's X-ray, and that's sort of very fast. Echocardiography can be quite fast. MRI and CT are not quite as good, and so there's some degradation of the image. As a result, people do something called gating, where they'll take the electrocardiogram, the ECG, and try to line up different portions of different heartbeats. And say, well, we'll take this image from here, line it up with this one from there, this one-- I'm going to talk a little bit about that, about registration, but ultimately, that's a problem that people have to deal with. So it's a computer vision problem of interest. OK. Preamble is almost done. OK. So why do we even imagine any of this stuff is going to be useful? So it turns out that the practice of interpreting involves a lot of manual measurements. So people like me, and people who have trained for way too long, find themselves getting little rulers and measuring various things. So for example, this is a narrowing of an artery. So you could take a little bit of calipers and measure across that and compare it to here and say, ah, this is 80% narrowed. 
You could measure the area of this chamber, the left ventricle, and you can measure its area is, and you can see, ah, its peak area is this. It's minimum area is this. Therefore, it's contracting a certain amount. So we do those things. We measure those things by hand. And the other thing we do is we actually diagnose things just by looking at them. So this is a disease called cardiac amyloid characterized by some thickening. I'll show you a little bit more about that and some sparkling here. So people do look and say, ah, this is what this is. So there's kind of a classification problem that comes either at the image or video level. So we'll talk about whether this is even worth doing. AUDIENCE: I have a question. RAHUL DEO: Yes. AUDIENCE: Is this with software, or do you literally take a ruler and measure? RAHUL DEO: So the software involves clicking at one point, stretching something, and clicking another point. So it's a little better than pulling the ruler out of your back pocket, but not that much better. OK. So we're going to talk about or three little areas, and again, this is not-- I got involved in this really in the last two years or so. It's nice of David to ask me to speak here, but I think there are probably people in this room who have a lot more experience in this space. But the areas that have been relevant to what we've been doing has been image classification and then semantic segmentation. So image classification being assigning a label to an image, very great. Semantic segmentation, assigning each pixel to a class label, and we haven't done anything around the image registration, but there are some interesting problems I've been thinking about there. And that's really mapping different sets of images onto one coordinate system. OK. So seems obvious that image classification would be something that you would imagine a physician does, and so maybe we can mimic that. Seems like a reasonable thing that happens. So lots of things that radiologists, people who interpret images, do involve terms of recognition, and they're really fast. So it takes them a couple of minutes to often do things like detect if there's cancer, detect if somebody has pneumonia, detect if there's breast cancer in a mammogram, tells there's fluid in the heart, and then even less than that, one minute often, 30 seconds, they can very, very fast. So you can imagine the wave of excitement around image classification was really post-image net, so maybe about three years, four years, or so ago. We're always a little slow in medicine, so a little bit behind other fields. And the places that they went were the places where there are huge data sets already, and where there's simple recognition tests. So chest X-rays and mammograms are both places that had a lot of attention, and other places have been slowed down by just how hard it is to get data. So if you can't get a big enough data set, then you're not going to be able to do much. OK. So David mentioned, you guys already covered very nicely, and this is probably kind of old hat. But I would say that prior to convolutional neural networks, nothing was happening in the image classification space in medicine. It was just not. People weren't even thinking that it was even worth doing. Now, there's a lot of interest, and so I have many different companies coming and asking for help with some of these things. And so it is now a very attractive thing in terms of thinking, and I think people haven't thought out all that well how we're going to use that. 
So for example, if it takes a radiologist a minute to two minutes to read something, how much benefit are you going to get to automate it? And the real problem is you can't take that radiologist away. They're still there, because they're the ones who are on the hook. And they're going to get sued, and it's among the most sued profession in medicine. So there's lots of people who can read an X-ray. You don't need to have all that training. But if you're the one who's going to be sued, it ends up being that there really isn't any task shifting in medicine. There isn't that kind of, oh, I'm going to let such and such take on 99%, and just tell me when there is a problem. It just doesn't happen, because they ultimately don't feel comfortable passing that on. So that's something to think about. So you have a task that's relatively easy for a very, very expensive and skilled person to do, and they refuse to give it up. OK. So that's a problem, but you can imagine there is some scenarios-- and we'll talk more about this-- as to where that could be. So let's say it's overnight. The radiologist is sleeping comfortably at home, and you have a bunch of studies being done in the emergency room. And you want to figure out, OK, which one should we call them about? So you can imagine there could be triage, because the status quo would be, we'll take them one by one. Maybe you could imagine sifting through them quickly and then re-prioritizing them. They'll still be looked at. Every single one will still be looked at. It's just the order may change. So that's an example, and you could imagine there could be separate-- someone else could read at the same time. And we'll come back to this in terms of whether or not you could have two streams and whether or not that is a scenario that would make some sense. And maybe, in resource-poor settings, where we're not teaming with the radiologist, maybe that makes sense too. So we'll come back to that too. OK. So here's another problem. So almost everything in medicine requires some element of confirmation of a visual finding, and some of the reasons are very simple. So let's say you want to talk about there being a tumor. So if you're going to ask a surgeon to biopsy it, you better tell them where it is. It's not enough to just say, this image has a tumor somewhere on it. So there is some element of that that you're going to need to be a little bit more detailed than simply making a classification with a level one image, but I would say beyond that. Let's say, I'm going to try to get one of my patients to go for valve surgery. I'll sit with them, bring up their echo, sit side by side with them, and point them to where it is. Bring up a normal one and compare, because I want them to be involved in the decision. I want them to feel like they're not just trust-- and they have to trust me. At the end of the day, they don't even know that I'm showing-- I'll show them their name, but ultimately, there is some element of trust. They're not able to do this, but at the same time, there is this sense of shared decision making. You're trying to communicate to somebody, whose life is really at risk here, that this is why we're doing this decision. So the more you could imagine that there is obscuring, the more difficult it is to make that case. So medicine is this-- I found this review by Bin Yu from Berkeley, just came out, and it talks about this tension between predictive accuracy and descriptive accuracy. 
So this is the typical thing we think about that matters, and there's lots of people who've written about this thing. Medicine is tough in that it's very demanding in this space here, and it's almost inflexible in this space here. So it's a tough nut to crack in terms of being able to make some progress, and so we'll talk more about when that's likely to happen. OK. So this again may be something that's very familiar to you. So we had this problem in terms of some of the disease detection models, and I didn't find this all that satisfying in terms of being able to successfully localize. So just digging through the literature, it looks like this idea of being able to explain what part of the image is driving a certain classification. That field is modestly old. Maybe it goes back before that. But ultimately, there's two broad ways. You can imagine finding an exemplary image that maximally activates the classifier, or you can take a given image and say, what aspect of it is driving the classification? And so this paper here did both of those things. They either went through and optimized-- starting from an average of all the training data-- they optimized the intensities until they maximized the score for a given class. So that's what's shown here. And then another way to do it is in some sense you could take a derivative of the score function relative to the intensities of all the pixels and come up with something like this. But you could imagine, if you showed this to a patient, they wouldn't be very satisfied. So it's very difficult to make a case that this is super useful, but it seems like this field has progressed somewhat, and I haven't tried this out. This is a paper by Max Welling and company, out a couple of years ago, and maybe you guys are familiar with this. But this ultimately is a little bit of a different approach in the sense that they take patches, the sort of purple-like patch here, and they compare the final score, or class label, relative to what it-- so taking the intensity here and replacing it by a conditional result sampling from the periphery. And just comparing those two things and seeing whether or not you either get activation, which is the red here. This is the way that they did the conditional sampling, and then blue would be the negative contributors. And there, you can imagine, there's a little bit more distinction here, and then something a little bit more on the medical side is this is a brain MRI. And so depending on this patch size, you get a different degree of resolution to localizing some areas of the image that are relevant. So this is something that we're going to expect a lot of demands from the medical field in terms of being able to show this. And at least our initial forays weren't very satisfying doing this with what we were doing, but maybe these algorithms have gotten better. OK. So next thing that matters. OK. So this is what people do. So I did my cardiology fellowship in MGH, and I just traced circles. That's what I did. I just trace circles, and I stretched a ruler across, and then fed that in. At least the program computed the volumes for me, the areas and volumes, but otherwise, you have to do this yourself. And so this is like a task that's done, and sometimes you may have to-- here's an example of volumes being computed by tracing these sorts of things and much radiology reports just involve doing that. So this seems like a very obvious task we should be able to improve on. 
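As a concrete version of the gradient-based idea above -- taking the derivative of the class score with respect to the pixel intensities and using its magnitude as a map of which pixels drove the classification -- here is a minimal PyTorch sketch. The model and class index are placeholders, not the models from the cited papers, and more elaborate methods like the conditional-sampling approach just mentioned would replace the gradient step with repeated patch perturbations.

import torch

def saliency_map(model, image, class_idx):
    # image: (1, channels, H, W)
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, class_idx]        # scalar score for the class of interest
    score.backward()                          # d(score) / d(pixel intensities)
    return image.grad.abs().max(dim=1)[0]     # (1, H, W) per-pixel importance map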
So medicine tends to be not the most creative in terms of trying a bunch of different architectures. So if you look at the papers, they all jump on the U-net as being the favorite architecture for semantic segmentation. It's maybe familiar to people here; really, it just captures this encoding or contracting path, where you're downsampling, and then there's a symmetric upsampling that takes place. And then ultimately, there's these skip connections, where you take an image, and then you concatenate it with this upsampled layer, and this helps get a little bit more localization. So we used this for our paper, and we'll talk about this a little bit, and it's very popular within the medical literature. One of the things that was quite annoying is that what you would find for some of the images, you'd find, let's say, a ventricle. You'd find this nicely segmented area, and then you'd find this little satellite ventricle that the model would just pick up. The problem is that this pixel-level classification tends to be a problem, and a human would never make that mistake. But that tends to be something that sounds like it is common-- this is a common tension, that this sort of focusing on relatively limited scales ends up being problematic, when it comes to picking up the global architecture. And so there's lots of different solutions it looks like in the literature. I just highlighted some of these from a paper that was published from Google a little while ago. One of the things that's captured is this idea of dilated convolutions, so that you have convolutions built on convolutions. And so ultimately, you have a much bigger receptive field for this layer, though you haven't really increased the number of parameters that you have to learn. So there are some. It seems like there are lots. This is not just a problem for us but a problem for many people in this field. So we need to be a little bit more adventurous in terms of trying some of these other methods. We did try a little bit of that and didn't find gains, but I think, ultimately, there still needs to be a little bit more work there. OK. So the last thing I'm going to talk about before getting into my work is really this idea of image registration. So I talked about how there are sometimes some techniques that have limitations, either in terms of spatial resolution or temporal resolution. So this is a PET scan here, this sort of reddish glow here, and in the background, we have a CAT scan of the heart. And so clearly, this is a poorly registered image, where you have the PET scan kind of floating out here, when it really should be lined up here. And so you have something that's registered better there. I also mentioned this problem with gating. So ultimately, if you have an image taken from different cardiac cycles, you're going to have to align them in some way. It seems like a very mature problem in the computer vision world. We haven't done anything in this space, but ultimately, it has been around for decades. If nothing else, I wanted to at least touch upon it. So this is sort of the old school way, and then now people are starting to use conditional variational autoencoders to be able to learn geometric transformations. This is the Siemens group out in Princeton that has this paper. Again, nothing I'm going to focus on, just wanted to bring it up as being an area that remains of interest. OK. So I think we're doing OK, but you said 4:00. PROFESSOR: 3:55 RAHUL DEO: 3:55. OK. All right, and interrupt. Please, interrupt. OK? 
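For reference, here is a minimal two-level sketch of the U-Net idea just described: a contracting path that downsamples, an expanding path that upsamples, and a skip connection that concatenates the encoder features with the upsampled features before the per-pixel classification. Channel counts and depth are illustrative -- the published U-Net is much deeper -- and a dilated-convolution variant would simply use convolutions with dilation greater than 1 to enlarge the receptive field without adding parameters.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2, base=16):
        super().__init__()
        self.enc = conv_block(in_ch, base)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = conv_block(base, 2 * base)
        self.up = nn.ConvTranspose2d(2 * base, base, kernel_size=2, stride=2)
        self.dec = conv_block(2 * base, base)        # 2*base channels arrive after the skip concat
        self.head = nn.Conv2d(base, n_classes, kernel_size=1)

    def forward(self, x):                            # x: (batch, in_ch, H, W), H and W even
        skip = self.enc(x)                           # full-resolution encoder features
        mid = self.bottleneck(self.down(skip))       # half-resolution features
        up = self.up(mid)                            # back to full resolution
        merged = torch.cat([skip, up], dim=1)        # the skip connection
        return self.head(self.dec(merged))           # per-pixel class logits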
I'm hoping that I'm not talking too fast. OK. As David said, this was not my field, but increasingly, there is some interest in terms of getting involved in it, in part because of my frustrations with clinical medicine. So this is one of my frustrations with clinical medicine. So cardiology has not really changed, and one of the things it fails at miserably is picking up early-onset disease. So here's the typical profile, a little facetious. So people like me in our early 40s, start to already have some problems with some of these numbers. So I like to joke that, since I came back to the Harvard system from California, my blood pressure has gone up 10 points which is true, unfortunately. So these changes already start to happen, and nobody does anything about it. So you can go to your doctor, and you're also saying, no, I don't want to be on any medicine. They're like, no, no, you shouldn't be on any medicine. So you kind hem and haw, and a decade goes by, 15 years go by. And then finally, you're like, OK, well, it looks like at least my coworkers are on some medicines, or maybe I'll be willing to do that. And so they've got lots of stuff you can be treated, but it is often very difficult, and you see this at the doctor level too. Yes. AUDIENCE: For the optical values, how much personal deviation is there for the values? RAHUL DEO: So the optimal value is fixed and is just like a reference value. And you can be off-- so blood pressure, let's say. So people consider optimal to be less than 120 over less than 80. People are in the 200s. So you'd be treated in the 200s, but there'll be lots of people in the 140s and the 150s, and there'll be a degree of kind of nihilism about that for some time. And my patients would be like, oh, I got into the fight with the parking attendant. I just had a really bad phone-- there's like countless excuses for why it is that one shouldn't start a medication, and this can go on for a long time. Yes. AUDIENCE: [INAUDIBLE]. How can you assess the risk [INAUDIBLE] for blood pressure? Is that, like, noise [INAUDIBLE]?? RAHUL DEO: Yeah. So OK. So that's a great point. So yeah. So the question is that many of the things that we're seeing as risk factors have inherent variability to them. Blood sugar is another great example of those things. If you could have a single-point estimate that arises in the setting of a single clinic visit, how much do you trust that? So it's a couple of things related to that. So one of them is that people could be sent home with monitors, and they can have 24-hour monitors. In Europe, that's much more often done than here. And then, the thing is that often they'll say that, and then you go look at like six consecutive visits, and they all have something elevate, but it's true. This is a noisy point estimate, and people have shown that averages tend to do better. But at the same time, if that's all you have-- and the bias is interesting. Because the bias comes from some degree of stress, but we have lots of stress in our life. I hopefully am not the most stressful part of my patient's life, and so I think that ultimately there are-- and the problem with that is it's a good reason for someone to talk you out of them starting them on anything. And that's what ends up happening, and so this can be a really long period of time. OK. So this is the grim part. OK? So it turns out that once symptoms develop for something like heart failure, decline is fast. 
So 50% mortality in five years, after somebody gets hospitalized for their first heart failure admission, and often the symptoms are just around that time. So unfortunately, these things tend to be irreversible changes that happen in the background, and largely, you don't really have any symptoms until late in the game. So we have this problem, where we have this huge stretch. We know that there are risk factors, but we have this huge stretch, where nobody is doing anything about them. And then we have things sort of going downhill relatively quickly after that. And unfortunately, I would make a case that responsiveness is probably best in this phase over there. Expense is really all over there. So we really want to find-- and this is what I consider to be missing in medicine. I'm going to come back to this again a little bit later on-- but really, we want to have these-- if you're going to do something in this asymptomatic phase, it better be cheap. You're not going to be getting MRIs every day or every year for people who have no symptoms. The system would go bankrupt if you had that. So we need these low cost metrics that can tell us, at an individual level, not just if we had 1,000 people like you, somebody would benefit. And this is what my patients would say is that they would be so excited about their EKG or their echo being done every year, because they want to know, how does it look compared to last year? They want some comparison at their level, not just some public health report about this being a benefit to 100 people like you. And so it should be low cost, should be reflective of something at an individual level, should be relatively specific to the disease process, expressive in some way, and should get better with therapy. I think that's one of the things that's pretty important is if somebody does the things you ask them to do, hopefully, that will look better. And then that would be motivating, and I think that's how people get motivated is that they get responses. So I would make a case that even simple things like an ultrasound-- and I have one shown here-- really does capture some of these things, and not all those things, but they have some of those things. So you have, for example, that in the setting of high blood pressure, the left ventricular mass starts to thicken, and this is a quantitative, continuous measure. It just thickens over time, and the heart starts to change. The pumping function can get worse over time. The left atrium, which is this structure over here, this thin-walled structure is amazing in the sense that it's almost this barometer for the pressure in the heart. Oh, that's a horrible reference. OK, but it tends to get kind of bigger and bigger in a very subtle way before any symptoms happen. So you have this, and this is just one view. Right? So this is a simple view acquired from an ultrasound that captures some of these things at an individual level. So this gets to some of my thoughts around where we could imagine automated interpretation benefiting. So if you want to think about where you're less likely. So with these very, very difficult, end-stage, or complex decisions, where you have a super skilled person even collecting the data in the first place. They've gone through training. They're super experienced. You have a very expensive piece of hardware used to collect the data. You have an expert interpreting it. This is done late in the disease course. You have to make really hard decisions, and you don't want to mess it up. 
So probably not good places to try to stick in an automated system in there, but what would be attractive would be to try to enable studies that are not even being done at all. So move to the primary care setting. Use low cost handhelds. So there's even now companies that are starting to try to automate acquisition of the data by helping people collect it and guide them to collecting the right views. Early in the disease course, no real symptoms here. Decision support just around whether you should start some meds or intensify them, low liability, low cost. So this is a place where we wanted to focus in terms of being able to introduce some kind of innovations in this space. OK. So this comes back to this slide of I talked about where you could imagine some of these things being low hanging fruit, but maybe those aren't the ones that we should be focusing on we should instead be focusing on enabling more data at low cost, getting more out of the data that we're collecting, and helping people even acquire it in the first place. So that's one category of things, and that's the one I just highlighted in the previous slide. You can imagine something running in the background at a hospital system level and just checking to see whether there's anybody who was missed in some ways. And then triage I'm going to talk about in the next slide. I'll come back to that, and then really-- and this is, again, one of the reasons I got into this-- we want to do something that elevates practice beyond just simply repeating what we already do. And so this idea of quantitative tracking of intermediate states, subclasses of disease, which is actually the real reason I got into this space is because I wanted to increase scale of data to be able to do this, and this is where you potentially would like to go. So the ECG example is an interesting one, because automated systems for ECG interpretation have been around for 40 or 50 years, and they really got going around the early 2000s, when people realized-- there's a pattern called an ST elevation. I'm not sure if you guys talked about that. This is a marker of complete stoppage of blood flow to the heart. So muscle starts to die. And then the early 2000s, there was a quality movement that said, as soon as anybody sees that, you should get to somebody doing something about it within an hour and a half or so. And so the problem was that in the old days and the old way to do this-- and even this was around the time I was a resident-- you would have to first call the cardiologist. Wake him up. They would come. You'd send them the image. They would look at it. Then, they would decide whether or not this was the pattern they were seeing, and then they would activate the lab, the cath lab. They would come in, and you were losing about an hour, hour and a half in this process. And so instead they decided that automated systems could be used to be able to enable ambulance personnel or emergency room docs, so non-cardiologists, to be able to say, hey, look, this is what we think is going on. Let's bring the team in, and so people would get mobilized. People would come to the hospital. Nobody would do anything in terms of starting the case, until somebody confirmed it, but already, the whole wheels were turning. And so you have this triage system, where you're making a decision. You're not finalizing the decision, but you're speeding things up. And so this is an example where you could imagine it's important to try to offload this to something. 
So this is an example, and there's going to be false positives. And people will laugh and mock the emergency room doctors and mock the ambulance drivers and say, ah, they don't know what they're doing. They don't have any experience. But ultimately, people were dying, because they were waiting for the cardiologist to be available to read the ECG. So you've got to think about those in terms of places where there may be cost for delay. OK. So coming back to echoes. OK. So what does an echo study look like? Because this is probably not something that is familiar. It's a compilation of videos, and there are about 70 different videos typically in the studies that we do at the centers that we're at. And they're taken over multiple cycles and multiple different views, and often it takes somebody pretty skilled to acquire those views. And they take about 45 minutes to an hour to gather that data, multiple different views, and the sonographer is changing the depth to zoom in on given structures. And so you can understand that there's somebody who is already very experienced in this process even collecting the data, which is a problem. Because you need to take them out of the picture, because they're expensive to be able to do those things. So we were doing, at UCSF, 12,000 to 50,000. Brigham was probably a little busier at 30,000 to 35,000. Medicare, back in 2011, had seven million of these performed, and there's probably hundreds of millions of these in archives, so lots of data. So we published a paper last year trying to automate really all of the main processes around this, and part of the reason to do it all is it doesn't help you to have one little bit automated. Because at the end of the day, if you have to have a cardiologist doing everything else and a sonographer doing everything else, what have you really saved by having one little step? So the goal here was to start from raw study, coming straight off the machine, and try to do everything. And so that involves sorting through all these different views, coming up with an empirical quality score for it, segmenting all the five primary views that we use. Directly detecting some diseases, and then computing all the standard mass and volume types of measurements that come from this. So we wanted to do it all, and this was, I think, it wasn't strikingly original in the algorithms that were used. But at the same time, it was very bold for anybody in the community to try to take this on, and of course, in general, all the backlash you could imagine when you try to do something like this. I still hear it, but there's excitement. And certainly on the industry side, there's really excitement in that this is feasible. So I was running a biology lab, back in 2016 or so, and then decided-- so my cousin's husband is the Dean of Engineering at Penn, and I emailed him and said, do you know anyone at Berkeley? I live near there. I have a very long commute, and I was much closer to there. Is anybody you know there? So he's like, yeah. I know Ruzena Bajcsy there. She used to be at Penn, and I know Alyosha Efros. And so he just emailed them and said, can you meet? [INAUDIBLE] And so I met some of them, and then I tried to find some people who were willing to work. So I just spent a day a week there for about two years, just hanging out, writing code, and trying to get this project off the ground. So we have a few different institutions. Jeff Zhang was a senior undergraduate at the time. He's at Illinois right now as a graduate student.
It's interesting, because it's hard to get grad student level people excited over stuff that's applications of existing algorithms, but they're happy to advise. So I ended up having to write a lot of the code myself. And undergraduates are, of course, excited to do these kinds of things, because it's better than homework, and I can pay. But I think, ultimately, it's interesting to try to find that sweet spot and also find things that ultimately could be interesting from an algorithmic standpoint too. So I'm trying to do more of that these days. OK. So we aren't the first to even do something around classifying views. So somebody had already published something, but we wanted to be a little bit more nuanced than that. In that we wanted to be able to distinguish, for example, whether this structure, the left ventricle, is cut off. Because we don't want to measure it if it's cut off, and we don't want to measure the atrium if it's completely cut off here. So we wanted to be able to have a classifier able to distinguish between some of those things. It's not an easy task, and a lot of these labels were me riding the train in my very long commute from East Bay, in California, to UCSF. And so I did a lot of labeling, and I did a lot of segmentation too. So I got to where I could fly through a lot of them. And that's the other thing that's kind of interesting is that you often need-- even to do the grunt work-- you may need somebody fairly specialized to do it, which is OK, but yeah, so that ended up being me for a lot of this. So I traced a lot of these images, and then I got some other people to help out. But you're not going to get a computer science undergraduate to trace heart structures for you, nor are you going to get them excited about doing this. So we didn't end up having that much data, and I think we could probably get better than that. But we had the five main views, and we implemented a modified version of the U-Net algorithm. We imposed a bit of a penalty to deal with this problem of, for example, a little stray ventricle being out there. We imposed a penalty to say, well, if that's too far away from the center, then we're going to have the loss function take that into account. That helped somewhat, but so that was our approach to-- this is a pretty substantial deal to be able to do all these things that normally would be very tedious. And as a result, when we start to analyze things, we can segment every single frame of every single video. The typical echo reader will take two frames and trace them. That's it. That's all you get. So we can do everything over every single cardiac cycle, because there's amazing variability from beat to beat. And so it's silly to think that that should be the gold standard, but that is the gold standard. So we had thousands of echoes. So that's the other thing. So it turns out that it's almost impossible to get access to echoes, so I wrote a keystroke emulator that sat at the front end and just mimicked me entering in studies and downloading them. So that was the only way I could get them. So I had about 30,000 studies built up over a year, but there's no way to do bulk download. And so again, you've got to do some grunt work to be willing to play in this space. So we had a fair number of studies we could use in terms of where we had measurements and decent values in terms of that. I think it's interesting in terms of thinking about how good one can-- how close one can get.
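The talk only gestures at the penalty that discourages stray segmented regions, so the sketch below is a guess at one way to implement the idea, not the paper's actual formulation: a standard cross-entropy term plus a term that penalizes predicted foreground probability far from the centroid of the labeled structure. All names and the weighting constant are made up for illustration.

# Hedged sketch of a "stray blob" penalty added to a segmentation loss:
# foreground probability placed far from the true structure's centroid is
# penalized, discouraging small disconnected regions off to one side.
import torch
import torch.nn.functional as F

def center_penalty_loss(logits, target, alpha=0.1):
    """logits: (B, 2, H, W) two-class scores; target: (B, H, W) int64 in {0, 1}."""
    ce = F.cross_entropy(logits, target)

    prob_fg = torch.softmax(logits, dim=1)[:, 1]                 # (B, H, W)
    B, H, W = prob_fg.shape
    ys = torch.arange(H, dtype=torch.float32, device=logits.device).view(1, H, 1)
    xs = torch.arange(W, dtype=torch.float32, device=logits.device).view(1, 1, W)

    # Centroid of the ground-truth mask for each image in the batch.
    t = target.float()
    mass = t.sum(dim=(1, 2)).clamp(min=1.0)
    cy = (t * ys).sum(dim=(1, 2)) / mass
    cx = (t * xs).sum(dim=(1, 2)) / mass

    # Squared distance of every pixel from that centroid, normalized by image size.
    d2 = ((ys - cy.view(B, 1, 1)) ** 2 + (xs - cx.view(B, 1, 1)) ** 2) / float(H * H + W * W)

    # Mean distance weighted by predicted foreground probability: large when the
    # network puts foreground mass far from where the structure actually is.
    stray = (prob_fg * d2).sum(dim=(1, 2)) / prob_fg.sum(dim=(1, 2)).clamp(min=1.0)
    return ce + alpha * stray.mean()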
And one of the things we found is that, when there were big deviations-- these are Bland-Altman plots-- almost always the manual ones were wrong. AUDIENCE: Why is that? RAHUL DEO: Oh, OK. OK. So Bland-Altman plots, so people don't like using correlations in the medical-- so Bland and Altman published a paper in the Lancet about 30 years ago complaining that correlations and correlation coefficients are ultimately not good metrics. Because you could have some substantial bias, and really you want to know, if this is the gold standard, you need to get that value. So it really is just looking at differences between, let's say, the reference value and the, let's say, automated value, and then plotting that against the mean of the two. So that's it. I did it as percentages here, but ultimately, it's just that. It's that you're just taking the mean of, let's say, the left ventricular volume. You have a mean of the automated versus the manually measured one, and then you compare what the difference is of one minus the other, and so you'll be on one side or the other. So ideally, you would just be sitting perfectly on this line, and then you're going to look and see whether or not you're clustered on one side or the other. So that's just the typical thing. People try to avoid correlation coefficients, because they kind of consider them to be not really telling you whether or not-- there really is a gold standard, and there truly is a value here, and you want to be near that value. And so that's the standard for looking at comparison of diagnostics. So we had about 8,000 things. The reviewers gave us a hard time for the space up here, and there are not that many studies up here, but ultimately, there are some. And when we manually looked at a bunch of them, always the manual ones were just wrong. Either there was a typo or something like that, so that was reassuring, but we were sometimes very wrong. And you'd find that the places we'd be wrong would be these ridiculously complex congenital heart studies, where we had never been given examples like that before. So that's a lesson to be learned is that, sometimes, you're going to be really off in these sorts of approaches, and you have to think a little bit. And what we ended up doing is having an iterative cycle, where we would identify those and feed them back and keep on doing that, but that still needs to be improved upon. OK. So function, again, there's a couple of measures of function. There's a company that has something out there in this space, got FDA approved for having an automated ejection fraction. So I think we're better than their numbers, overall, but yeah. I think that that's just one of those things you're expected to be able to do. And then here's a problem that we run into. So we're comparing to the status quo which, like I said, is one person tracing two images and comparing them. That's it. So we're processing potentially 200, 300 different frames per study and computing medians, smoothing across. We're doing a whole lot more than that. So what do we do about that in terms of the gold standard? And if you just take interobserver variability into account, you're going to have up to 8% to 9% in absolute compared to 60% of the reference. So that's horrible. So what are you supposed to do? And I think one thing people do is they take multiple readers and ask them to do that. But this is like, are you going to get a bunch of cardiologists to do like 1,000 studies for you? It's very hard to imagine somebody doing that.
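Since the Bland-Altman construction is described only verbally here, a minimal sketch of the computation may help: plot the difference between the two methods (as a percentage of their mean, as in the talk) against their mean, along with the bias and approximate 95% limits of agreement. Variable names are illustrative.

# Minimal Bland-Altman sketch for comparing automated vs. manual measurements
# (e.g., left ventricular volume).
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(manual, automated, label="LV volume"):
    manual = np.asarray(manual, dtype=float)
    automated = np.asarray(automated, dtype=float)
    mean = (manual + automated) / 2.0
    diff_pct = 100.0 * (automated - manual) / mean      # difference, % of mean
    bias = diff_pct.mean()
    loa = 1.96 * diff_pct.std(ddof=1)                   # ~95% limits of agreement

    plt.scatter(mean, diff_pct, s=8, alpha=0.5)
    plt.axhline(bias, color="k")
    plt.axhline(bias + loa, color="k", linestyle="--")
    plt.axhline(bias - loa, color="k", linestyle="--")
    plt.xlabel(f"Mean of manual and automated {label}")
    plt.ylabel("Difference, % of mean")
    plt.show()
    return bias, loa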
You could compare it to another modality. So we haven't done this yet, but you could, for example, compare it to MRI and say whether or not you're more consistent with another modality. And then this is indirect, but you can go to like outcomes in a trial and see whether or not you do a better job. So there are things you can do. One of the things we decided to do is look for correlations of structures within a study itself and say, well, the mass-- so we know that, for example, thickened hearts lead to larger increases of pressure and left atrial enlargement. So we can look for correlations between those things and see whether we do a better job. I'd say, for, the most part we're about on par with everything that's there. So I don't think we're any better. Sometimes we're better. Sometimes we're worse. And I think, for the most part, this was another way to try to get at this, because we were stuck with this. How do you work with a gold standard that ultimately I don't think anybody really trusts as a gold standard? And this is a problem that just has to keep on coming up. This is just an example of where you could facilitate this idea of low cost serial imaging and point of care. So these are patients who are getting chemotherapy, and so so Herceptin-- not herception, Herceptin, it's like inception-- is an EGFR inhibitor that causes cardiac toxicity, and so people are getting screening echoes. So you could imagine, if you make it easier to acquire and interpret that, all you want to care about is the function and the size. So you can imagine automating that. So we just did this as proof of concept that you could imagine doing something like this. And for the last thing I want to talk about-- or sorry, the last thing in this space-- is that you could also imagine directly detecting disease. And so you have to say, well, why is that even worthwhile? Yes. AUDIENCE: I was curious. I guess it's going back to the idea of if you look at blended models between human groud truth and maybe a biological ground truth, [INAUDIBLE] versus sort of what you could get from an MRI or something-- or maybe not necessarily an MRI, but what you were saying based on the underlying biology, or if those two things are generally kept separate? RAHUL DEO: Yeah. These are early days for a lot of this, and I think, anytime you make anything more complicated, then the readers will give you a hard time, but you can imagine that. And especially, you may want to tune things to be able to be closer to something like that. So yeah, I think, unfortunately, people are pretty conservative in terms of how they interpret, but it does make some sense that there's probably something that-- Ideally, you want to be able to have something that is useful, and useful may not be exactly the same thing as mimicking what humans are doing. So no, I think it's a good idea. And I think that this is going to be-- this next wave-- is going to be thinking a little bit more about that in terms of like how do we improve on what's going on over there, rather than simply dragging it back to that? OK. So there are multiple rare diseases. I use to have a clinic that would focus on these, and they tend to get missed at centers that don't see them that often. So one place you could imagine is you can focus on trying to pick those up, and you could imagine, this could be just surveillance running in the background. It doesn't have to be kind of real time identification. 
So there's a few diseases where it's very reasonable to do these things, where it's very obvious. So this is a disease called hypertrophic cardiomyopathy. I used to see it in my clinic. So abnormally thickened hearts, leading cause of sudden death in young athletes. So Reggie Lewis, there's a bunch of people who've died suddenly from this condition. Unstable heart rhythm, sudden death, heart failure, it runs in families, and there are things you can do, if you identified it. And so it's actually a fairly easy task, in the sense that it tends to be quite obvious. So we built a classification model around this, and we tried to understand what it was doing in part. And so we tried to do some of these kinds of attention or saliency type things, and they were very unsatisfying, in part because I think there's so many different features across the whole image. So you're just getting this blob, but I think maybe we just weren't implementing it correctly. I'm not really sure, but you have a left atrium that gets bigger. The heart gets thicker. There's so many changes across the image. It was unsatisfying in terms of that. So we did something simple and just took the output of the probabilities and compared it to some simple things that we actually know about these things and found that there was some degree of correlation. But I would like to make that a little bit better. Cardiac amyloid, a very popular disease for which there are now therapies. And so pharma is very interested in identifying these people, and they really get missed at a pretty high rate. So we built another model for this. Usually, we had about 250 or 300 cases for each of these things and maybe a few thousand controls. And then this one's a little interesting. This is mitral valve prolapse. So this is what a prolapsing valve looks like. If you imagine the plane of the valve here, it buckles back. So it does this, and that's abnormal, and this is a normal valve. So you notice, it doesn't buckle back in. So it's a little interesting in that there's really only one part of the cardiac cycle that would really highlight this abnormality, at least that's the way that-- so the way that it's read clinically is people wait for this one part of the cardiac cycle where it's buckled back. They draw an imaginary line across, and they measure what the displacement is there, and so we built a reasonable model focusing on that. So we phased these images and picked the part of the cardiac cycle that's relevant, all in an automated way, and built a model around that, and it's pretty good in terms of being able to detect that. Yes. AUDIENCE: And so is this model on images at a certain time? Like can you just go back? Because obviously, you weren't doing videos. Right? RAHUL DEO: Well, so we would take the whole video. We were segmenting it. We were phasing it, figuring out what the part of the-- when was the end systole in that, and then using those as the-- so using a stack of those to be able to classify. AUDIENCE: So how do you know the time point? RAHUL DEO: Well, that's what I'm saying. So we were using the variation in the volumes. AUDIENCE: The segmentation would allow you to know the time point. RAHUL DEO: Exactly, because a typical echo will have an ECG to use to gate, but the handhelds don't. So we want to move away from the things that involve the fanciness and all the bells and whistles. We're trying to use the image alone to be able to tell the cardiac cycle. So that's how we did it. Yes.
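The image-only phasing described here, using the variation in segmented volumes rather than an ECG gate, can be sketched roughly as follows. The specifics (areas vs. volumes, smoothing window, beat spacing) are assumptions for illustration, not the pipeline's actual parameters, and the input name is hypothetical.

# Rough sketch: per-frame left-ventricle areas from the segmentation form a
# curve over the video; after light smoothing, local minima of that curve are
# taken as candidate end-systolic frames -- no ECG gating required.
import numpy as np
from scipy.signal import find_peaks

def end_systole_frames(lv_area_per_frame, frame_rate_hz=30, max_hr_bpm=180):
    area = np.asarray(lv_area_per_frame, dtype=float)
    # Moving-average smoothing to suppress frame-to-frame segmentation noise.
    smooth = np.convolve(area, np.ones(5) / 5.0, mode="same")
    # End systole = local minimum of LV area; require candidate minima to be
    # separated by at least one plausible beat (bounded here by max_hr_bpm).
    min_spacing = max(int(frame_rate_hz * 60.0 / max_hr_bpm), 1)
    minima, _ = find_peaks(-smooth, distance=min_spacing)
    return minima

# usage (areas_from_segmentation is one number per video frame):
# es_frames = end_systole_frames(areas_from_segmentation)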
AUDIENCE: So you mentioned handhelds. With the ultrasounds [INAUDIBLE],, are they different from these? RAHUL DEO: They look pretty similar. We got some now, and they look pretty similar in terms of the quality of the images, and you can acquire the very same view. So I think we haven't shown that we can do it off those, in part because there just isn't enough training data. But they look pretty nice, and I know at UCSF and at Brigham, all the fellows are using it. It looks pretty much the same in terms of the-- the transducers are similar, and image quality is very good. Resolution is very good. Frame rate probably doesn't get up as high necessarily, but for the most part, I don't think it's that different. So that is the next phase. Yes. AUDIENCE: Could you comment on-- so you mentioned how each of these three examples could be used within a surveillance algorithm. RAHUL DEO: Yeah. AUDIENCE: Could you comment on where along this true positive, false positive trade-off you would actually be realistic to use this? RAHUL DEO: Yeah. That's a good point. I think it would vary for every single one of those, and you really want to have some costs on what the-- so I would typically err on the side of higher sensitivity and dump it on the cardiologists to be able to-- so I would work, but I think you have to pick some-- let's say, you're a product manager. AUDIENCE: Just choose one of these three, and maybe-- RAHUL DEO: OK. Yeah. So this is a pretty rare disease. So your priors are pretty low in terms of these individuals. And so I think you probably would probably want to err somewhere along this area here, and so just working on what the-- so you probably will still be a relatively high rate of false positives even that space. But I would argue that it would take the treating cardiologist potentially just a few minutes to look at that study again, and if you picked up one of those patients, that would be a big win. So I think that the cost probably wouldn't be that high, and you just have to make the case. So therapy for amyloid, for example, this is a nice sharp up stroke there. There's new drugs out there that are sort of begging for patients, and they're having a real hard time identifying them. So you could imagine again, it's sort of a calculus based on what the benefits would be for that identification and what burden you're placing on the individuals to have to over read something. And you could probably tune that depending on what the disease is and who you're pitching it to. But you're right, you're going to crush people if like 1 in 100 ends up taking a true positive then you're not going to get many fans. Yes. AUDIENCE: Could you comment on whether, for example, [INAUDIBLE] basis, the ones that you're able to predict very well at that point you just chose what distinguishes the ones that are defined well? RAHUL DEO: So that's a good point, and I don't really know in the sense that I haven't looked that closely. But I'm going to guess, they're very thick and very obvious in that sort of sense. So we have a ECG model that may pick this up early. What you want is something to fix it up when it's treatable, not having something that's ridiculously exaggerated. So you may need multiple modalities some of which are more sensitive than others that can catch earlier stage disease to be able to do that. So there are interesting things about this disease in particular. 
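To put rough numbers on this operating-point discussion (a low prior and a tolerable over-read burden), here is a small back-of-the-envelope calculation. The prevalence, sensitivity, and specificity plugged in are illustrative, not figures from the talk.

# With a rare disease, even a good classifier yields mostly false positives,
# so the practical question is how many studies a cardiologist must over-read
# per true case found.
def surveillance_burden(prevalence, sensitivity, specificity, n_screened=100_000):
    tp = n_screened * prevalence * sensitivity
    fp = n_screened * (1 - prevalence) * (1 - specificity)
    return {"flagged": tp + fp, "true_positives": tp,
            "ppv": tp / (tp + fp), "over_reads_per_case": (tp + fp) / tp}

# e.g., a hypertrophic-cardiomyopathy-like prevalence of roughly 1 in 500:
print(surveillance_burden(prevalence=0.002, sensitivity=0.9, specificity=0.95))
# -> about 5,170 flagged studies, ~180 true cases, PPV ~3.5%,
#    i.e. roughly 29 over-reads per case found.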
So cataracts sometimes happen before-- so ideally, the way you do this is-- and I'm actually consulting around something like this-- you ideally want a mixture of electronic health record, something from other findings-- mirror findings, eye findings, plus maybe something cardiac-- and have something that ideally catches the disease in the ideal, most treatable state. And maybe echo's not the best one, and I think that we'll come back to that at the end. We have a little bit of time. OK. So UCSF is filing-- I don't know. I don't think this is actually patentable, but they are filing for a patent. I'm just filling the paperwork out today in terms of-- I don't know. But my code is all freely available anyway, for academic, non-profit use, and they're just trying to make it better. I think, ultimately, my view as an academic here is to try to show what's possible. And then, if you want to get a commercial product, then you need people to weigh in on the industry side and make something pretty and make it usable and all that. But I think, ultimately, I'm trying to just show, hey, if we could do this in a scalable way and find out something new, then you guys can catch up and do something that ultimately can be deployed. And what's interesting is I have a collaborator in New Zealand. There, they're resource poor. So they have a huge backlog of patients. They don't have enough sonographers, and they don't have enough cardiologists. So they're trying to implement this super ultra quick five-minute study and then have automation. And so they want our accuracy to be a little bit better, but I think they're ready to roll out, if we're able to get something that has probably more training data. Yes. Are you from New Zealand? AUDIENCE: No. I think you started talking about the trade-off between accuracy and-- so in academia, I get the sense that they're always chasing perfect accuracy. RAHUL DEO: Yeah. AUDIENCE: But as you said, you're not going to get rid of cardiologists in the diagnosis. So I have a philosophical question of are you chasing the wrong thing? Should we chase perfect accuracy? RAHUL DEO: Yeah. So the question is around what should our goals be? So should we be just chasing after a level of accuracy that may be very, very difficult to attain? And especially, if there's never a scenario where there'll be no clinician involved, should we instead be thinking about something that gets good enough for that next step? And I think that's a really good point. And what's interesting is-- and also it's interesting from the industry side-- is the field starts with the mimicking mode, because it's much harder to change practice. It's much easier to just pop something in and say, hey, I know you have to make these measurements. Let me make them for you, and you could look at them and see if you agree. So that's what ECGs do. Right? So nobody these days is measuring the QRS width. Nobody does that. That's just not done. If you've got a number that's absurd, you'll change it. But for the most part, you're like, it's close enough, but you almost have to start with that. To do something that's transformative is very hard to do. So I think something that involves-- and I talked to David about this. It's sort of like the man-machine interface is fascinating to think about how do we together come up with something better?
But it's just much harder to get that adopted, because it requires buy-in in a way that's different than just you do my work for me, but more that we come together to do something better. And I think that's going to be interesting as to how to chip away at that problem. OK. So a couple of musings, then I'm going to talk a little bit about One Brave Idea, if we have time, or I can stop and take questions instead, because it's a little bit of a biology venture. OK. So I do think that we should really look at this. People give me a hard time around echo, and I'm like, well, ECG's been around for a long time, and there's automation there. So let's think about how it's used there, and then see whether or not-- it's not as outlandish as people think. So I think a lot of these routine measurements are just going to be done in an automated way. Already in our software, you can put out a little picture and overlay the segmentation on the original image and say how good it looks. So that's easy. So you can do that. And then this kind of idea of point of care automated diagnoses can make some sense around some emergency-type situations. So maybe you need a quick check of function. Maybe you want to know if they have a lot of fluid around the heart, and you don't necessarily want to wait. So those will be the places where there may be some kind of innovations around just getting something done quickly. And then you always have somebody checking in the background, layered on, a little like the heart attack thing I showed you, and I think this problem in echo is there. And so if you need skilled people to be able to acquire the data in the first place, you're stuck, because they can read an echo. A really good sonographer can read the whole study for you. So if you already have that person involved in the pipeline, then it's really hard to introduce a big advance. So you need to figure out how to take a primary care doc off the street, put a machine in their hand, and let them get the image and then automate all the interpretation for them. And so until you can task shift into that space, you're stuck with having still too high a level of skill. So there are these companies that are in the space now, and there's a few that are trying. It's easy to imagine, if you can train a neural network to classify a view, you could get it to-- this gets to this idea of registration a little bit-- you can recognize if you're off by 10 degrees, or if you need a translation. You could just train a model to be able to do that. So I think that's already happening right now. So it's a question as to whether that will get adopted or not, but I think that, ultimately, if you want to get shifting towards sort of less skilled personnel, you need to do something in that space. OK. So this is where it gets a little bit harder is to think about how to make stuff and elevate medicine beyond what we're doing. And this gets back to this problem I mentioned: at the end of the day, you can't find new uses for echo, unless the data is already there for you to be able to show that there's more value than there currently is, sort of this chicken and egg thing. So in some sense, what I hope to introduce is some way that we can get much bigger data sets, and they don't have to be 100 video data sets. They can be three video data sets, but we want to be able to figure out how to enable more and more of these studies. So then you can sort of imagine learning many more complicated things. You want to track people over time.
You want to look at treatment responses. So you've got to look at where the money is already and see who could do this. So pharma companies are interested, because they have these phase II trials. They may only have three months or six months to show some benefit for a drug, and they're really interested in seeing whether there's differences after a month, two months, three months, four months. So that may be a place where you get-- and they're being frugal, but they have money. So you could imagine, if you could introduce this pipeline in there and just have handheld, simple, quick to acquire, far more frequent studies, and you show a treatment response, and that's kind of transformative then. Because then, you could imagine, that can get rolled out in practice after that. So you need somebody to bankroll this to start with, and then you could imagine, once you have a use case, then you could imagine it getting much more. And this idea of surveillance, you could imagine that would be very doable, that you could just have something taking-- The problem is, you can't even get the data in the archives anyway, but let's say you can get that. You could just have this system looking for amyloid, looking for whatever, and that would be a win too, to be able to imagine doing something like that. It's not putting any pressure on the clinical workflow. It's not making anybody look bad. I think, ultimately, it's trying to just figure out if-- well, maybe somebody may be looking bad if they miss something, but yeah. I think it is just trying to identify individuals. And so this is an area I think that's hard, and so this kind of idea, this is where I started a little bit, around this kind of idea of this disease subclassification and risk models. And so that's like more sophisticated than anything we're doing. I think we're pretty crude at this kind of stuff, but one of the challenges is people just aren't interested in new categories or new risk models, if they don't have some way that they can change practice. And that becomes more difficult, because then you need to not only introduce the model, you need to show how incorporating that model in some way is able to identify people who respond. It always comes down to therapies at the end of the day. So can you tell me some subclass of people who will do better on this drug, which means that you have to have trial data that has all those people with all that data. And unfortunately, because echoes are so expensive and places like the Brigham charge like $3,000 per echo, then you only have like 100 people who have an echo in a trial or 300 people who have an echo. You have a 5,000 person trial, and 5% of them have an echo. So you need to change the way that gets done, because you're massively underpowered to be able to detect anything that's sort of a subgroup within that kind of work. So yeah, unfortunately, the research pace of things outpaces the change in practice in terms of the space, until we're able to enable more data collection. So I can stop there. I was going to talk about blood cells in slides. PROFESSOR: We can take some questions. RAHUL DEO: Yeah. Yeah. Yeah. OK. Why don't we do that. Yes. AUDIENCE: When CT reconstruction started, I remember seeing some papers where people said, well, we know roughly what the anatomy should look like, and so we can fill in missing details. In those days, the slices were pretty far apart, and so they would hallucinate what the structure looked like. RAHUL DEO: Yeah.
AUDIENCE: And of course, that has the benefit of giving you a better model, but it also does risk that it's hallucinated data. Have you guys tried doing that with some of the-- RAHUL DEO: Yeah. That's a great point. So OK. So the question was so cardiac imaging has a very long history, and so there was a period of time where there's these kind of active modelers around morphologies of the heart. And so people had these models around what the heart should look like from many, many, many studies. And they were using that, back at the time, when you had these relatively coarse multi-slice scanners for a CT, they would reconstruct the 3D image of the heart based on some pre-existing geometric model for what the heart should look like. And there's, of course, a benefit to that, but some risk in the sense that somebody may be very different in the space that's missing. And so the question is whether those kind of priors can be introduced in some way, and it hasn't been straightforward as to how to do that. Whenever you look at these ridiculously poor segmentations, you're like, this is idiotic. We should be able to introduce some of that, and I've seen people, for example, put an autoencoder. That's not exactly getting at it, but it's actually getting it somewhat with these coarser features. But no, I think in terms of using some degree of geometric priors, I think I may have seen some literature in that space. We haven't tried anything there. We don't have any data to do that, unfortunately, and I suspect, yeah, I just don't know how difficult that is. AUDIENCE: You mentioned that you don't want to see a small additional atrium off at a distance. So that's, in a way, building in knowledge. RAHUL DEO: Yeah. No. I remember when I was starting this space. I was like this is idiotic. Why can't we do this? Why don't we have some way of doing that? We couldn't find at that time any architectures that were straightforward to be able to do that, but I'm sure there is something in that space. And we didn't also have the data for those priors ourselves. There's a long history of these de novo heart modelers that exist out there from Oxford and the New Zealand group for that matter who've been doing some of this kind of multi-scale modeling. It will be interesting to see whether or not there is anybody who pushes forward in that space, or is it just more data? I think that's always that tension. AUDIENCE: Can I ask about ultrasounds? RAHUL DEO: Yeah. AUDIENCE: You didn't show us ultrasounds. Right? RAHUL DEO: Yeah, I did. AUDIENCE: Oh, you did? RAHUL DEO: Yeah. The echoes are ultrasounds. AUDIENCE: Oh, OK, but that's really expensive ultrasound. Right? Like there are cheaper ultrasounds that you could imagine that you constantly do. Right? RAHUL DEO: Yeah. So there is a company that just came out with the $2,000 handheld ultrasound, the subscription model. Yeah. So I think that Philips has a handheld device around the $8,000 marker, so $2,000 is getting quite cheap. So that's I think the space for handheld devices. AUDIENCE: We're talking about resource-poor countries. RAHUL DEO: Yeah. AUDIENCE: In a developing country, where maybe they have very few doctors per population kind of thing. What kind of imaging might be useful that we could then apply computer vision algorithms to? RAHUL DEO: I think ultrasound is that sweet spot. It has versatility, and its cost is about where-- and I'm sure those companies rented it out for much lower cost in those kinds of places too. 
We're putting together-- or I put together-- actually, it may not have been funded. I'm not sure. But looking at sub-Saharan Africa and collaborating with one of the Brigham doctors who travels out to sub-Saharan Africa and looking to try to build some of these automated detection type of things in that space. So no, I think there is definite interest in that, and then there may be a much bigger win there then the stuff I'm proposing. But yeah, no, I think that's a very good point, and that would be-- it's also, it's portable. You could have a phone-based thing. So it's actually very attractive from that standpoint. PROFESSOR: [INAUDIBLE] RAHUL DEO: All right. I feel like I'm changing the topic substantially but not totally. OK. So this is that slide I showed, and I pitched it in a way to try to motivate you to think of ultrasound. But I'm not sure ultrasound really achieves all these things, in the sense I wouldn't call it the greatest biological tool to get at underlying disease pathways. Some of these things may be late, like David said, or maybe not so reversible. So we've been given this One Brave Idea thing $85 million now to make some dent in a specific disease, so coronary artery disease or coronary heart disease. It's that arrogant tech thing, where you just dump a lot of money somewhere and think you're going to solve all problems. And happy to take it, but I think that there are some problems. So this is what I wanted to do, so I've wanted to do this for probably the last five, six years, before I even started here, and this has motivated me in part for quite a while. And so here's our problems. OK. So we're studying heart disease, so coronary artery disease or coronary heart disease is the arteries in the heart. You can't get at those. So you can't do any biology. You can't do the stuff the cancer people-- do you can biopsy that. You can't do anything there. So you're stuck with the thing that you want to get at is inaccessible. I talked about how a lot of the imaging is expensive, but all those other omic stuff is really expensive too. So that's going to be not so possible, and you're not going to be able to do serial $1,000 proteomics on people either. That's not happening anytime soon. And then everything I talked about, we were woefully inadequate in terms of sample size, especially if we want to characterize underlying complex biological processes. So we expect we're going to need high dimensional data, and we're going to need huge sample sizes. There's Vladimir Vapnik over there. And then here's another problem. OK? So this stuff takes time. These diseases take time. So if I introduce a new assay right now, how am I going to show that any of this is going to be beneficial? Because this disease develops or 10 to 20 years. So I'm not going to talk about the solution to that, well, a little bit. OK. So one of the issues with a lot of the data that's out there is it's not particularly expressive. It's a lot of that just the same clinical stuff, the same imaging stuff. So all these big studies, these billion dollar big studies, ultimately just have echoes and MRIs and maybe a little bit of genetics, but they really don't have stuff that is this low cost expressive biological stuff that we ideally want to be able to do. So this is really expensive and makes $85 million look like a joke, and it's not all that rich in terms of complexity. So we wanted to do something different, and so this is the crazy thing. We're focusing on circulating cells, and so this is a compromise. 
And there's a reasonably good case to be made for their involvement. So there's lots of data to suggest that these are causal mediators of coronary artery disease or coronary heart disease. So you can find them in the plaques. So patients who have autoimmune diseases certainly have accelerated forms of atherosclerosis. There are drugs. There's a drug called canakinumab that inhibits IL-1 beta secretion from macrophages, and this has mortality benefit in coronary artery disease. There are mutations in the white blood cell population themselves that are associated with early heart attack. So there's a lot there, and this has been going-- and there's plenty of mouse models that show that if you make mutations only in the white blood cell compartment, that you will completely change the disease course itself. So there's a good amount of data out there to suggest that there is an informative kind of cell type there. It's accessible. There's lots of predictive models already there that could be done with some of this, and they express many of the genes that are involved. And there's a window on many of these biological processes. So we're focusing on computer vision approaches to this data. So we decided, if we can't do the omic stuff, because it costs too much, we're going to take slides and have tens of thousands of cells per individual. And then we can introduce fluorescent dyes that can focus on lots of different organelles. And then we can potentially expand the phenotypic space by adding all kinds of perturbations that can unmask attributes of people that may not even really be there at baseline. And I think I've been empowered by the computer vision experience with the echo stuff, and I'm like, hey, I can do this. I can train these models. So we're in a position now where we can-- this stuff costs a few dollars per person. It's cheap, and you can just keep on expanding phenotypic space. You can bring in drugs. You can bring in whatever you want here, and you're still in that dollars type range. So we just piggy-back, and we just hover around-- just a couple of research assistants were hovering around clinics. And we can do thousands of patients a month, so tens of thousands of patients a year. So we can get into a deep learning sample size here, and so we want these primary assays to be low cost, reproducible, expressive, ideally responsive to therapy. So that's this space here, and there's lots of stuff that we have. We have all the medical record data on all these people, and we can selectively do somatic sequencing. We can do genome associations. We have all ECG data. We have selective positron emission data. So there's lots of additional data, and we want to be able to walk our cheap assay towards those things that are more expensive but for which there's much more historical data. So that's what I do with my life these days, and the time problem has been solved. Because we found a collaborator at MGH who has 3 1/2 million of these records in terms of cell counting and cytometer data going back for about three years. So we should be able to get some decent events in that time. I need to build a document classification model for 3 1/2 million records and decide whether they have coronary heart disease, but sounds like that's doable. We're fearless in this space. And then they also have 13 million images, so hundreds of thousands of people worth of slides.
So we can, at the very least, get decent weights for transfer learning from some of this data, and we're doing this for acute heart attack patients. So yeah, so this is what I'm doing, ultimately, and so it's this bridge between existing imaging, existing conventional medical data, and this low cost, expressive, serial type of stuff that I'm ultimately hoping will expand phenotypic space and keep the cost down. I think all my lessons from working with expensive imaging data have motivated me to build something around this space. So this is my baby right now. And so lots of things for people to be involved in, if they want to, and these are some of the funding sources. All right. Thank you. [APPLAUSE]
PETER SZOLOVITS: So last time we talked about what medicine does, and today I want to take a deep dive into medical data. And I'm going to use as examples a lot of stuff from the MIMIC database, which is one of the databases that we're going to be using in this class. Some of you are probably familiar with it, and some of you are not. And there are, I hope, some takeaway lessons from this discussion. So for example, a few years ago, when MIMIC-III was about to be released, I was playing with the data, and I looked at the distribution of heart rates in the CareVue part of the database. So MIMIC, for those of you who don't know, has intensive care data from about 60-something thousand admissions to intensive care units at the Beth Israel Deaconess Medical Center over a period of about 12 years. And one of the technical difficulties that we encountered is that in the middle of that time period. The hospitals shifted from one information system that they used in their intensive care unit to another. CareVue is the old one. MetaVision is the new one. And of course, they're not exactly compatible. So we'll see some examples of that. So this is the old data. So this is from CareVue. And you look at that and say, well, heart rates range from 40 to 200 roughly, which is OK. But then there's this funny thing. There are two peaks. So where, if ever, do you see two peaks in physiological data? Not typical. And so my initial reaction was-- [LAUGHTER] So then I looked a little closer, and I said, hmm, what do the heart rates look like from these two systems? And if you look in CareVue, you see the picture that I just showed you. And if you look in MetaVision, you see this other picture, which looks more like what you would normally expect. And so I'm sitting there scratching my head going, OK, there must be some difference between these. It's not that simultaneous with the switchover of the hospital from one information system to another. Physiology of people changed, and all of a sudden some subset of people started having faster heart rates. Right? But if you think about that what subset of people have faster heart rates? AUDIENCE: Athletes. PETER SZOLOVITS: Hmm? AUDIENCE: Babies? AUDIENCE: If you're in a stress test. PETER SZOLOVITS: Unh-hmm. AUDIENCE: Is it children? PETER SZOLOVITS: Yeah, kids. So I said, hmm, interesting. So anyway, if you look at the statistics, you see that the mean heart rate in CareVue is 108, and the mean heart rate in MetaVision is 87. But of course, means are not that meaningful when you look at these bimodal distributions. So then I said, well, what if we just look at adults? So we look at people from age greater than 1 up to age 90. And I'll say a word about that in a minute. And I look at those two distributions. They look pretty close. They look pretty similar. So that means that the number of patients of different ages in the adult group is similar in the two data sets. But if I don't exclude the very young or the very old, then I see this funny distribution where I have suppressed ages greater than 90 but not the young. And what you see is that in CareVue there's this giant spike at age 0. So what happened at the hospital is that under the old system it was also being used in the NICU, the Neonatal Intensive Care Unit. And the new system was not being used in the NICU. And therefore, they didn't capture data about babies. And in fact, if you look at age versus heart rate of the entire population, you see two very peculiar things. 
So here are the adults that we've been talking about, and here are the babies. And sure enough, they have higher heart rates. And then here are these 300-year-old people. [LAUGHTER] You go, wow, I don't think I'm going to have a heart rate when I'm 300 years old. So who are those people? Anybody have a clue? Yeah? AUDIENCE: Entry errors. PETER SZOLOVITS: Sorry? AUDIENCE: Entry errors? PETER SZOLOVITS: There are too many of them. Yeah, entry errors is always a possibility, but there's quite a few data points there. Yeah? AUDIENCE: [INAUDIBLE] PETER SZOLOVITS: Close. It's not quite missing data. So HIPAA, the Health Insurance Portability and Accountability Act, defines a set of criteria about protecting personal health information. And one of the things you are not allowed to do is to specify the age of somebody who is 90 years old or older. And the reason is because the number of 97-year-olds is pretty small. And so if I tell you that Willy is 97 years old, then you're going to be able to pick him out of a population relatively easily, and so it's prohibited to say that. So as a result, everybody who's 90 or older gets labeled as being 300 years old in the database. It's an artifact. It's like back in my youth, I worked as a computer programmer at a health sciences computing facility at UCLA. And we used to have a convention that missing data was represented by 99999. And of course, if you average that into a real data set, you get garbage, which people did regularly. So there are problems with this, and we're running into one of those. If you look at just the adults, the two systems actually look very similar. So the blue and red dots, or the two systems, and I've drawn the trend lines between them, and you can see that they're very similar. So it looks like as you get older, your heart rate declines very slightly. But it does so equally in the two data sets. Yeah? AUDIENCE: On the previous slide, beyond 300, it looks like they're older than 300? PETER SZOLOVITS: Well that's because the ages there are computed at the time that the heart rate is measured. And so if you are 300 years old when you're admitted to the hospital, if you stay in the hospital for six months, then you're 300 and 1/2 years old by the time of that measurement. [LAUGHS] So that's why there are data points to the right of 300. Yeah, good catch. OK, and then this is what the babies look like. And of course, they do have higher heart rates. And here here are the oldsters. So actually, there are people out to 310 years old because maybe they were discharged from the hospital. And then at age 100, they came back. You know, maybe they were 90 years old at the time they were initially admitted 10 years later. They came back, and we recorded more data about them, and so this is all relative to that 300. OK, so that's just one example. And the lesson there is be careful when you look at data because it can really easily fool you 'cause there are all kinds of funny things about the way it's collected, about these artifactual things like 300-year-old patients and so on. So here's a catalog of the types of data that are available to us. So we have the typical kind of electronic health record data from hospitals-- demographics, age, sex, socioeconomic status, insurance type, language, religion, living situation, family structure, location, work, et cetera. We have vital signs-- your weight, your height, your pulse, respiration rate, body temperature, et cetera. 
So these are typically the things that if you ever go to a doctor's office, or you go into a hospital, the nurse will take you aside and weigh you and measure your height and check your blood pressure and take your temperature and stuff like that. These are standard vital signs, and so we have lots of those recorded. Medications-- prescription medications, over-the-counter drugs, illegal drugs if you're willing not to lie to your health care provider, alcohol. Again, in one of my earliest days, I was hanging out with a cardiologist at Tufts Medical Center, and we see this elderly lady who looks kind of terrible. And we're talking to her-- well, the doctor is talking to her. I'm trying to stay out of the way. And he says, so do you drink alcohol? And she says, oh, no, never touch the stuff. And then we talk some more, and we go out of the patient's room. And the doctor turns to me out of earshot of the patient and says, oh, she's a chronic drunk. I said, well, how do you know? And he says, well, from lab tests, from the appearance of her skin, from her general demeanor, from various sort of ineffable factors. And so patients lie. They really do because they don't want to tell you things. Medications, by the way, is a big deal. So there is this whole field called med rec, medication reconciliation, which is the hospital's or the doctor's office's attempt to figure out what medications you're actually taking. So I'm a member of the MIT health plan, and if I sign into my health plan account, it tells me that I'm taking some pills that I got 12 years ago as part of a laboratory test, where I took two pills which were supposed to have some physiological effect, and then they measured that. And I've never gotten another pill and never taken one since then, nor would it be particularly good for me. But it's still on my record, and there's no notice of it ever having been discontinued. And that's a real problem because if you're taking care of a patient, you'd like to understand what drugs they're actually taking, and it's hard to know. Then lab tests-- so these are the things that you imagine that we do a lot of, and these are components of the blood and the urine mainly, but also of the stool, saliva, spinal fluid, fluid taken off the belly, joint fluid, bone marrow, stuff coming out of your lungs. Anywhere you can produce some specimen, they can send it to a lab and measure things in it, and they measure lots and lots of different kinds of things. And these are often useful. Pathology, qualitative and quantitative examination of any body tissue, for example, biopsy samples or surgical scraps. You know, if they do an operation, they cut something out of you, that typically winds up on a pathologist's bench, who then tries to figure out what its characteristics are and that's, again, useful information. Microbiology-- ever since Pasteur, we know that organisms cause disease. And so we're quite interested in knowing what organisms are growing inside your body. And typically, testing is not only to identify the organism but also to figure out which antibiotics it's sensitive to and insensitive to. And so you'll see things like reports of sensitivity testing at various dilutions. In other words, they try a strong dose of an antibiotic, then a weaker dose, then a weaker dose still, to see which is the minimum level of dosing that's enough to kill the bacteria.
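As a small illustration of what "sensitivity testing at various dilutions" reports: the minimum inhibitory concentration (MIC) is simply the lowest concentration in the dilution series that still suppresses growth. The numbers below are made up for illustration.

# Tiny example: two-fold dilution series, growth recorded at each concentration,
# MIC = lowest concentration with no growth (None -> resistant at all tested doses).
def mic(dilution_results):
    """dilution_results: dict of concentration (ug/mL) -> grew? (bool)."""
    inhibitory = [c for c, grew in dilution_results.items() if not grew]
    return min(inhibitory) if inhibitory else None

results = {16.0: False, 8.0: False, 4.0: False, 2.0: True, 1.0: True}
print(mic(results))  # -> 4.0 ug/mL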
There's a comma missing there, but input, output of fluids is another important thing because people, especially in the hospital, often get either dehydrated or over hydrated. And neither of those is good for you, and so trying to keep track of what's going into you and what's coming out of you is important. Then there are tons of notes. So an important one that we're going to look at in this class is discharge summaries. So these are the typically long notes that are written at the end of a hospitalization. So this is a summary of why you came in, what they did to you, the main things they discovered about you, and then plans for what to do after your discharge. Where are you going to go? What drugs are you going to be taking? When are you supposed to come back for follow up, et cetera. I'll show you an excruciatingly long one of those later in the lecture today. But we also have notes from attendings and/or residents, nurses, various specialties, consultants. The referring physician-- if somebody sends you to the hospital, that doctor will usually write a note saying this is what I'm interested in. Here's why I'm sending in the patient. There are letters back to the referring physician saying, OK, this is what we found out. Here's the answer to the question you were asking. There are emergency department notes. So that's often the first contact between the patient and the health care system. So these are all important. And then there's tons and tons of billing data. So remember the EHR systems were initially designed by accountants. And they were designed for the purpose of billing. And so we capture a lot of data about formalized ways of describing the condition of the patient and what was done to the patient in order to submit the right bills. You obviously want to bill through it as much as possible. But you have to be able to justify the bills that you submit because insurance companies and Medicare and Medicaid don't have a good sense of humor. And if you submit bills for things that you can't justify, then you get penalized. And then there are administrative data like, which service are you on? So this this is occasionally a confusing thing. You can go into the hospital and have heart problems, but it turns out that the heart intensive care unit, the cardiac intensive care unit, is full up with patients. But there's an extra bed in the pulmonary intensive care unit, and so they stick you in that unit, but you're still on the cardiology service. And so there are these sort of mixture kinds of cases that you still have to take care of. Transfers are when you get transferred from one place to another in the hospital. Imaging data-- so I'm not going to talk about that much today, but there are X-rays, ultrasound, CT, MRI, PET scans, retinal scans, endoscopy, photographs of your skin and stuff like that. So this is all imaging data, and there's been a tremendous amount of progress recently in applying machine learning techniques to try to interpret the contents of these data. So these are also very important. And then there's the whole quantified self movement. I mean, how many of you where an activity tracker? Only about 1/3? I'm surprised at a place like MIT. [LAUGHTER] So you know, we measure steps and elevation change and workouts. And you can record vital signs and diet and your blood sugar, especially if you're diabetic; allergies, allergic incidents. There's all this mindfulness, mood, sleep, pain, sexual activity. And then people have developed this idea of N of 1 experiments. 
For example, I had a student some years ago who suffered from psoriasis. It's a grody condition of the skin. And the problem is there are no good cures for it. And so people who suffer from psoriasis try all kinds of things. You know, they stop eating certain foods for a while, or they douse themselves with vinegar. Or they do whatever crazy thing comes to mind. And we don't have a good theory for how to treat this disease. But on the other hand, some things work for some people. And so there's a whole methodology that has been developed that says, when you try these things, act like a scientist. Have hypotheses. Take good notes. Collect good data. Be cognizant of things like onset periods, where you know you may have to drip vinegar on yourself for a week before you see any effect. So if that doesn't do a thing after one day, don't stop. And furthermore, if you stop then don't start something new immediately because you will then be confused about whether this is the effect of the thing you were on before or the new thing that you're trying. So there's all sorts of ideas like that. So this is a slide from our paper on MIMIC-III. And it gives you a kind of overview of what's going on with the patient. So if you look at this-- I'm going to point with my hands-- at the top is something very important. This patient starts off at full code. That means that if something bad happens to him, he wants everything to be done to try to save him. And he winds up in comfort measures only, which means that if something bad happens to him, he wants to die-- or his family does if he's unconscious. So what else do we know about this guy? Well, GCS is the Glasgow Coma Score. And it's a way of quantifying people's level of consciousness. And you see that at the beginning this patient is oriented, and then gets confused. And finally, is only making incomprehensible words or sounds. Motor, he's able to obey commands. Eventually, he's only able to flex when you stimulate his muscles. So he's no longer conscious. Eye movements-- he's able to follow you spontaneously. He's able to orient to speech. And eventually he doesn't orient at all. So this is clearly somebody who's going downhill quickly and, in fact, dies at the end of this episode. Now, we then look at labs so we can see what is their level of platelets at about the time that they're measured, their creatinine level, their white blood cell count, the neutrophils percentage, et cetera. And there's not every possible data point on the slide. This is just illustrative. The next section is medications. So the person is on morphine. They're on Vancomycin, which is an antibiotic. Piperacillin-- I don't know what that is. Does somebody know? AUDIENCE: Antibiotic. PETER SZOLOVITS: It's what? AUDIENCE: It's an antibiotic. PETER SZOLOVITS: OK. Sodium chloride 0.9%, so that's just keeping him hydrated. Amiodarone and dextrose. So dextrose is giving him some energy. And then these are the various measurements. So you see the heart rate, for example, is up pretty high and is going up near the end. The oxygen saturation starts off pretty good. But here we're down to 60% or 50% O2 sat, which is supposed to be above about 92 in order to be considered reasonable. So again, this is a very consistent picture of things going very badly wrong for this particular patient. So this is all the data in the database. Now, if you want to try to analyze some of this stuff, you can say, well, let's look at the ages at the time of the last lab measurement in the database.
So we have the times of all the lab measurements. So we can see that many of the ICU population are fairly old. There's a relatively small number of young people and then a growing number of older people in both females and males. If we look at age at admission by gender-- so this is age at admission not age at the time the last lab measurement was done-- it's a pretty similar curve. So we see that females were 64.21 at time of last lab measurement; 63.5 at the time of admission. So we can look at demographics, and demographics typically includes these kinds of factors, which I've mentioned before. And again, if we're interested in the relationship between this and, for example, the age distribution, we see that if you look at the different admission types-- so you can be either admitted for an emergency for some urgent care or electively. And it doesn't seem to make a whole lot of difference, at least in the means of the population age distribution. On the other hand, if you look at insurance type and, say, who's paying the bills, there is a big difference in the age distributions. Now, why do you think that private insurance drops way off at about 65? AUDIENCE: Isn't insurance always covered for everyone by the state health? PETER SZOLOVITS: It's because of Medicare. So Medicare covers people who are 65 years old. There's a terrible story I have to tell you. I was talking to somebody at an insurance company who's a bit cynical, and he said suppose that you see a 63-year-old patient who's developing type 2 diabetes, what should you do for him? Well, there are standard things you should do for somebody developing type 2 diabetes, like get him to eat better, get him to lose weight, get him to exercise more, et cetera, et cetera. But his cynical answer was absolutely nothing. Why? Well it's very cheap to do nothing. Most people who develop type 2 diabetes don't get real sick in the next two years. And by the time this patient is 65, he'll be the government's responsibility, not the insurance company's. Nice. AUDIENCE: Yeah. PETER SZOLOVITS: So of course a lot of the elderly are insured by Medicare or Medicaid, not that surprising. Self-pay is a pretty small number because it's insanely expensive to pay for your own health care. What about where you came from? Were you referred from a clinic, or were you an emergency room admit? Or were you referred from an HMO or et cetera? And other than a transfer from a skilled nursing facility or transfer within the facility, within the hospital, it doesn't make much difference. The averages there and the distributions look moderately similar. If you're coming from a skilled nursing facility, if you are in a skilled nursing facility, you're probably old because younger people don't typically need skilled nursing care. And I'm not sure why transfers within the facility are significantly younger ages, but that's true from the MIMIC data. What about age at admission by language? So some people speak English. Some people speak not available. Some people speak Spanish, et cetera. So it turns out the Russians are the oldest. And that may have to do with immigration patterns, or I don't know exactly why. But that's what the data show. If you do it by ethnicity, it turns out that African-Americans, on the whole, are somewhat younger than whites. And Hispanics are somewhat younger yet. So that means that those subpopulations apparently need intensive care earlier in life than whites. 
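To make the kind of analysis on these slides concrete, here is a minimal sketch in Python of how one might compute age at admission and break it down by payer or language. It is not the code used for the slides; it assumes you have MIMIC-III's ADMISSIONS.csv and PATIENTS.csv with their usual column names (SUBJECT_ID, ADMITTIME, DOB, GENDER, INSURANCE, LANGUAGE), and those names should be treated as assumptions to check against your own copy of the data.

import pandas as pd

adm = pd.read_csv("ADMISSIONS.csv", parse_dates=["ADMITTIME"])
pat = pd.read_csv("PATIENTS.csv", parse_dates=["DOB"])
df = adm.merge(pat[["SUBJECT_ID", "GENDER", "DOB"]], on="SUBJECT_ID")

# Age at admission in years. De-identification shifts the DOB of patients over 89,
# which makes their computed ages implausibly large, so cap the age at 90.
# Subtracting Python dates (rather than nanosecond timestamps) avoids overflow
# for those shifted birth dates.
days = (df["ADMITTIME"].dt.date - df["DOB"].dt.date).apply(lambda d: d.days)
df["AGE"] = (days / 365.25).clip(upper=90)

print(df.groupby("INSURANCE")["AGE"].describe())              # age distribution by payer
print(df.groupby("LANGUAGE")["AGE"].mean().sort_values())     # age by recorded language

The same groupby pattern gives the admission-type, admission-location, ethnicity, and marital-status breakdowns discussed here.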
So this is a topic that's very hot right now, discussions about how bias might play into health care. Yeah? AUDIENCE: What does unable to obtain mean? PETER SZOLOVITS: It just means that somebody refused to say what their ethnicity was. AUDIENCE: When they were asked this? PETER SZOLOVITS: Yeah. I think. I'm not positive. AUDIENCE: So just to confirm. This also represents Boston's population dynamics too, right? PETER SZOLOVITS: It's the catchment basin of the Beth Israel Deaconess Hospital, which is Boston clearly. But there are-- it turns out that a lot of North Shore people go to Mass General, and so different hospitals have different catchment basins. AUDIENCE: Does it have anything to do with like, is this just the ICU? Or is this everybody who goes to the hospital or the ER? PETER SZOLOVITS: These are all people who at some point were in the ICU. So these are the sicker patients. Yeah? AUDIENCE: So just want to double-check there's a higher proportion of black, African American people in the population here as well because the red is higher than the others? PETER SZOLOVITS: No, actually-- I don't remember if I have that graph-- I think this is cumulative. AUDIENCE: Oh, OK. PETER SZOLOVITS: So most people are white for whatever definition of white we're using. And I think it's only the increment that you see on top. All right, how about marital status? Well, according to this, it's bad to be single. So I could sort of see that for hospitalization. I'm not sure why it's true for the ICU because if you don't have anybody at home to take care of you when you get sick, it seems reasonable that you'd be more likely to wind up in the hospital. But I don't know why you'd wind up in intensive care. Yeah? AUDIENCE: Isn't it possible that single people are probably younger than married people, and those are probably younger than-- PETER SZOLOVITS: Yes, yeah. AUDIENCE: [INAUDIBLE] people. PETER SZOLOVITS: Yeah, that's probably also right. So here's an interesting question, a little bit related to something you'll see on the next problem set. So could we predict in-hospital mortality from just these demographic features? So I'm using a tool in a language called R. This is a generalized linear model, and I've set it up to do basically logistic regression. And it says I'm predicting whether you die in the hospital based on these demographic factors. And it turns out that the only one that's highly significant is age. So that's not surprising, that older people are more likely to die than younger people. It's generally true. And if I'm unable to obtain your ethnicity, or I don't know your ethnicity, then you're more likely to die. I have no clue why that might be the case. And other things are not as significant. So if you speak Spanish or English, you're slightly less likely to die. You see a negative contribution here. And if you speak Russian, you're slightly less likely to die. But it's significant not at the p equal 0.05 level, but it is at the p equal 0.06 level. And marriage doesn't seem to make much difference in predicting whether you're going to die or not. Now, remember, this is ICU patients. And we're looking at in-hospital mortality. AUDIENCE: For ethnicity, can they learn that at any point in this study, or just right at the beginning? Or do you know? Because I don't know. PROFESSOR: I don't know. AUDIENCE: Because it could be that unable to obtain means that they died before we can ask them. PROFESSOR: No, because there wouldn't be that many of those people, I think. There are not that many people who don't live past the intake interview. And they do ask them. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, that would be an example. But I don't think you'd see enough such people to show up statistically. OK.
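For comparison with the R output being described, here is a hedged sketch of the same exercise in Python: a logistic regression of in-hospital death on demographics only. It assumes the merged data frame df from the earlier sketch, and it uses MIMIC-III's HOSPITAL_EXPIRE_FLAG and the categorical columns from ADMISSIONS; treat those names, and the choice of statsmodels, as assumptions rather than what was actually shown in lecture.

import statsmodels.formula.api as smf

# Fill in missing categorical values so they behave like the "unknown" levels
# on the slide rather than being silently dropped.
for col in ["LANGUAGE", "MARITAL_STATUS", "ETHNICITY", "INSURANCE"]:
    df[col] = df[col].fillna("UNKNOWN")

model = smf.logit(
    "HOSPITAL_EXPIRE_FLAG ~ AGE + C(GENDER) + C(ETHNICITY)"
    " + C(LANGUAGE) + C(MARITAL_STATUS) + C(INSURANCE)",
    data=df,
).fit()
print(model.summary())   # as in the R output, expect age to dominate

With high-cardinality categories like LANGUAGE and ETHNICITY, some levels will be rare, so the fit can be fragile; collapsing rare levels before fitting is a reasonable precaution.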
Well, so I've already mentioned that there is this problem of having moved from CareVue to MetaVision just in the MIMIC database. But of course, this is a much bigger problem around the country and around the world, because every hospital has its own way of keeping records. And wouldn't it be nice if we had standards? And of course, there's this funny phrase, the wonderful thing about standards is that there are so many to choose from. So for example, if you look at prescriptions in the MIMIC database, here are two particular prescriptions for subject number 57139 admitted on admission ID 155470. And so they have the same start date but different end dates. One is a prescription for Tylenol, acetaminophen, and the other is for clobetasol propionate 0.05% cream. That's a skin lotion thing for-- I think it's a steroid skin cream. So if you look in the BI's database, they have their own private formulary code where this thing is acet325 and this thing is clob.05C30, right? And if you look, there's also something called a GSN, which is some commercial coding system for drugs. Maybe having to do with who their drug supplier is at the hospital. And these have different codes. There's the National Drug Code, which is an FDA assigned 10 digit code that specifies who made the drug, what form it's in, and what's its strength. And so you get these. Then there's a human readable description that says Tylenol comes in 325 milligram tablets. And the clobetasol comes in 30 gram tubes. And the dose is supposed to be 325 to 650, i.e. one to two tablets measured in milligrams. The dose here is one application, whatever that is. I don't know what the 0.01 means. And this is a tablet and that's a tube. And this is taken orally. That's administered on the skin, right? So this is a local database. AUDIENCE: For a doctor, they just [INAUDIBLE] PROFESSOR: At most hospitals, that's true now. It wasn't true when the MIMIC database started being collected. And the BI was relatively late in moving to that compared to some of the other hospitals in the Boston area. Each hospital has its own desiderata for what it thinks is most important. And I think the BI just didn't prioritize it as much as some of the other hospitals. OK, so then I said, well, if you look at prescriptions, how often are they given? So remember, we have about 60,000 ICU stays. And so iso-osmotic dextrose was given 87,000 times to various people. Sodium chloride 0.9 percent flush. Do you know what that is? Have you ever had an IV? So periodically, the nurse comes by and squirts a little bit of stuff in the IV to make sure that it hasn't clogged up. That's what that is. Insulin, SW. I don't know. Salt water? I don't know what SW is. Magnesium sulfate, dextrose five in water. Furosemide is a diuretic. Potassium chloride replenishes potassium that people are often low on. And then you go, so why is there this D5W and that D5W? And that's probably some duplication in the system, OK? One of them has an NDC code associated with it and the other one doesn't but probably should. Yeah. AUDIENCE: I was actually going to ask, does yours mean that they're standard across hospitals or just that we don't have the data? PROFESSOR: The NDC code should be standard across the country, because those are FDA assigned codes. But not every hospital uses them, OK? And for the ones that say zero, I'm not sure why they're not associated with a code in this hospital's database.
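As a concrete companion to the prescription examples above, here is a minimal sketch of how one might tabulate the most frequently ordered drugs and their codes. It assumes MIMIC-III's PRESCRIPTIONS.csv with its usual columns (DRUG, NDC, GSN, FORMULARY_DRUG_CD, HADM_ID); reading NDC and GSN as strings matters, since they are codes rather than numbers.

import pandas as pd

rx = pd.read_csv("PRESCRIPTIONS.csv", dtype={"NDC": str, "GSN": str})

# Most common drug / NDC combinations: grouping on both columns can surface
# the duplicate D5W-style rows, where the same drug name appears once with an
# NDC and once without.
top = rx.groupby(["DRUG", "NDC"], dropna=False).size().sort_values(ascending=False)
print(top.head(20))

# Pharmacy orders per admission, the long-tailed distribution on the slide.
print(rx.groupby("HADM_ID").size().describe())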
OK, next most common, you see normal saline, 0.9 percent sodium chloride. So that was the same stuff as the flush solution but this time not being used for flush. Metoprolol is a beta blocker. Here's another insulin this time with an NDC code, et cetera. I love bag and vial, OK? So these are not exactly medications. A bag is literally like a baggy that they put something into, and a vial is literally something that they put pills in. And why is that in the database? Because they get to charge for it, OK? And I don't know what the charge is, but it wouldn't surprise me if you're paying $5 for a plastic bag to put something in. OK, so if we say, well, how many pharmacy orders are there per admission at this hospital, and the answer is a lot. So if you look at-- it's a very long tailed distribution, goes out to about 2,500. But you see, if I blow up just the numbers up to about 200, there's a very large number of people with two prescriptions filled, and then a fairly declining number with more. And then it's a very long tail. So can you imagine 2,500 things prescribed for you during a hospital stay? Well, a little more about standards, so NDC is probably the best of the coding systems. And it's developed by the FDA. The picture up on the top right shows that the first four digits are the so-called labeler. That's usually the person who produced the drugs, or at least the person who distributes them. The second four digit number is the form of the drug, so whether it's capsules, or tablets, or liquid, or whatever and the dose. And then the last two digits are a package code which translates into the total number of doses that are in a package, right? So this is a godsend. And all of the robotic pharmacies and so on rely on using this kind of information nowadays. Unfortunately, they ran out of four digit numbers, and so there's now a-- they added an extra digit, but they didn't do it systematically, and so sometimes they added an extra digit to the labeler and sometimes to the product code. And so there is a nightmare of translations between the old codes and the new codes. And you have to have a code dictionary in order to do it properly and so on. OK, well, if that weren't good enough, the International Council for the Harmonization of Technical Requirements for Pharmaceuticals for Human Use developed another coding system called MedDRA, which is also used in various places. And this is an international standard, which is, of course, incompatible with the NDC. CPT is the Current Procedural Terminology, which we'll talk about in a little bit. And they have a subrange of their codes which also correspond to medication administration. And so this is yet another way of coding giving medicines. And then the HCPCS is yet another set of codes for specifying what medicines you've given to somebody. And then I had mentioned this GSN number, which apparently the Beth Israel uses. This is a commercial coding system from a company called First Databank that is in the business of trying to produce standards. But in this case, they're producing ones that are pretty redundant with other existing standards. But nevertheless, for historical reasons, or for whatever reasons, people are using these.
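Since the 10-digit-to-11-digit NDC mess described above bites almost everyone who joins pharmacy data across sources, here is an illustrative helper for the padding rule. The three hyphenated groupings (4-4-2, 5-3-2, 5-4-1) and the padded 5-4-2 target reflect the usual convention; the example NDC strings below are made up for illustration, and a validated crosswalk table is still the safer option in practice.

def ndc10_to_ndc11(ndc: str) -> str:
    # Pad a hyphenated 10-digit NDC (labeler-product-package) to the 11-digit 5-4-2 form.
    labeler, product, package = ndc.split("-")
    return labeler.zfill(5) + product.zfill(4) + package.zfill(2)

print(ndc10_to_ndc11("1234-5678-90"))    # 4-4-2  -> 01234567890
print(ndc10_to_ndc11("12345-678-90"))    # 5-3-2  -> 12345067890
print(ndc10_to_ndc11("12345-6789-0"))    # 5-4-1  -> 12345678900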
OK, enough of drugs. So what procedures were done to a patient? If you look in MIMIC, there are three tables. There's procedures ICD, which has ICD-9 codes for about a quarter million procedures. There's CPT events, which has about half a million, 600,000 events that are coded in the CPT terminology. And then MetaVision, the newer of the two systems, has about a quarter million procedure events that are coded in that system. So some examples, here are the most common ICD-9 procedure codes. So ICD-9 code 3893 of which there are 14,000 instances is venous catheterization, not elsewhere classified. So what's venous catheterization? It's when somebody sticks an IV in your vein, OK? Very common. You show up at a hospital. Before they ask you your name, they stick an IV in your arm. That's a billable event, too. Then insertion of an endotracheal tube, you know, if you're having any problems like that, they stick something down your throat. Enteral infusion of concentrated nutritional substances, so if you're not able to eat, then they feed you through a stomach tube, OK? So that's what that is. Continuous invasive mechanical ventilation for less than 96 consecutive hours, so this is being put on a ventilator that's breathing for you, et cetera. So you see that there is a very long tail of these. So those are the ICD-9 codes. Now, CPT has its own procedure codes that go into a tremendous amount of detail. So for example, this is the medicine subsection, and it shows you the kinds of drugs that you're being administered that are involved in dialysis, or psychiatry, or vaccines, or whatever. And then here are the surgical and the radiological codes. And there's tons and tons of detail on these. Yeah. AUDIENCE: So how can they put these codes as 1,000 to 1,022? This is really annoying for anyone-- PROFESSOR: No, these are categories. So if you drill down, there's a fanout of that tree and you get down to individual codes. Just as a nasty surprise, CPT is owned by the American Medical Association, and they could sue me if I showed you the actual codes because they're copyrighted. And you have to pay them if you use those codes. It's crazy. OK, so if you look at the number of all of these codes per admission, you see a distribution like this. Or if I separate them out, you see that there are more ICD-9 codes and fewer of the CPT and the codes that are in MetaVision. But they look somewhat similar in their distributions. OK, lab measurements. So you send off a sputum sample, blood, urine, piece of your brain, something. They stick it in some goo and measure something about it. So what is it that they're measuring? Well, it turns out that hematocrit is the most common measurement. So this is what fraction of your blood is red blood cells, and it's very important for sick people. And the second most common is potassium, then sodium, creatinine, chloride, urea nitrogen, bicarbonate, et cetera. So this is a long, long list of different things that can be measured, and all the stuff is in the database.
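Here is a small sketch of how that "most common lab measurements" table might be reproduced, assuming MIMIC-III's LABEVENTS.csv and its dictionary table D_LABITEMS.csv (ITEMID, LABEL, LOINC_CODE). As before, the file and column names are assumptions to verify against your copy of the data.

import pandas as pd

lab = pd.read_csv("LABEVENTS.csv",
                  usecols=["HADM_ID", "ITEMID", "CHARTTIME", "VALUE", "VALUENUM", "FLAG"])
items = pd.read_csv("D_LABITEMS.csv")[["ITEMID", "LABEL", "FLUID", "LOINC_CODE"]]

counts = (lab.groupby("ITEMID").size().rename("N").reset_index()
             .merge(items, on="ITEMID")
             .sort_values("N", ascending=False))
print(counts.head(10))   # hematocrit, potassium, sodium, creatinine, ...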
So for example, here's patient number two in the database. And on July 17 of 2138, this is part of the deidentification process to make it difficult to figure out who the patient actually is. This person got a test for their blood and they reported atypical lymphocytes. So there are a couple of interesting things to note here. One is that some things have a value and others don't. So this is a qualitative measure, so there's no value associated with it. Just the fact of the label tells you what the result of the test was. The other thing that's interesting is this last column, which is LOINC, and I'll say a word about that in a minute-- actually right now. So LOINC is the Logical Observation Identifiers Names and Codes. It was developed by our colleagues at the Regenstrief Institute in Indiana about 15 years ago, maybe 20 years ago at this point. And the attempt was to say every different type of laboratory test ought to have a unique name, and they ought to be hierarchical so that if you have, for example, three different ways of measuring serum potassium, that they're related to each other but that they're distinct from each other, because there may be circumstances under which the errors that you get from one measurement versus another are different. And so this is the standard way. If you send off your blood sample to a lab, they send back a string like this to the hospital or to your doctor's office that says, it's coded in this OBX coding system, and here is the LOINC code, and this is the SNOMED interpretation. And so this string is the way that your hospital's EHR or your doctor's office system figures out what the result of the test was. HL7 is this 30-something year old organization that has been working on standardizing stuff like this. And LOINC is part of their standardization. So if you look at these, you say, well, again, how many tests per admission? Again, a huge, long tail up to about 15,000 for a very small number of patients. If you look at lab tests per admission, you can do a log transform and get something that looks like a more reasonable distribution. By the way, that's a very generic lesson when we're going to do analyses of these data, is that, often, doing a transform of some sort, like in this case, a log, takes some funny looking distribution and turns it into something that looks plausibly normal, which is better for a lot of the techniques we use. Yeah. AUDIENCE: [INAUDIBLE] means the same thing? Like, for instance-- PROFESSOR: Yes. AUDIENCE: --hematocrit [INAUDIBLE] PROFESSOR: Yes. AUDIENCE: --same? PROFESSOR: Yes. AUDIENCE: Always same? PROFESSOR: Yes, that's the whole idea of creating the standard. And that has been pretty successful, pretty successfully adopted. OK, chart events. So these are the things that nurses typically enter at the bedside. And so there are 5.1, 5.2 million heart rates measured in the MIMIC database. And calprevslig is an artifact. It exists in every record. And it's some calibration something or other that doesn't mean anything. I've never been able to figure out exactly what it is. SpO2 is the oxygen saturation of your blood. If you use a pulse oximeter, that's what that's measuring. Respiratory rate, heart rhythm, ectopy type, dot, dot, dot. Now, you might be troubled by the fact that here is heart rate again, right? But I've already shown you this, that heart rate in CareVue and heart rate in MetaVision were coded under different codes in the joint system that we created out of those two databases. And so you have to take care of figuring out what's what if you're trying to analyze this data. Not only do we have that problem of different age distributions across the two different data sets, but we also just have the mechanical problem that there will be things with the same label that may or may not represent the same measurement at different times in the system. OK, this is the number of chart entries per admission, again, on a log scale.
So you see that there are about 10 to the 3.5 chart entries per admission, so thousands of admissions, of chart events per admission. We also track outputs. So Foley catheter allows your bladder to drain without your having consciously to go to the bathroom, so they collect that information. There are 1.9 million recordings of how much fluid came out of your bladder. Chest tubes will drain stuff out of your chest if you have congestion. Urine is if you pee regularly, stool out, et cetera. And again, I'm not sure I understand what the difference is between urine out Foley versus Foley. They may be the same thing but one from CareVue and one from MetaVision, so again, typical kinds of problems. If you look at the number of output events per admission, you're seeing on the order of 100, roughly. Well, if you're tracking outputs, you should also track inputs, and so they do. And so D5W is this dextrose in water, 0.9 percent normal saline. Propofol is an anesthetic. Insulin, heparin, blood thinner, et cetera. Fentanyl is, I think, an opioid, if I remember right. So these are various things that are given to people. And they affect the volume of the person. So this is an attempt to keep the person in balance and keep track of that. MetaVision inputs are classified somewhat differently but they have similar kinds of data. And if you combine them, you get, again, a distribution on a log scale that shows that there are on the order of 10 to the fifth input events, so quite a few input events, because this is recorded periodically. Now, the paper that I-- yeah. AUDIENCE: What's the input again? Is that when you come to the hospital and get admitted or-- PROFESSOR: No, no, no. It's an input into you. So it's like you drink a glass of water, the nurse is supposed to record it. Although, she doesn't always because she may not notice it. But if they hang an IV bag and pour a liter of liquid into you, they do record that, OK? All right, so I had you read this interesting paper and a discussion prior to that paper, because one of the authors is a former student of mine. And I know one of the other guys pretty well. And the former student, Zak Kohane, came back some years ago from a conference in California and was explaining to me that he ran into a venture capitalist who discovered that there is an interesting physiological variation in the abnormality of lab tests that are done at night. And he suspected that there was a diurnal variation that lab tests actually become more abnormal at night than they do during the day. And Zak, who is not only a computer science PhD but also a practicing doctor, turns to him and says, you're an idiot, right? Who has their blood drawn at 3 o'clock in the morning. It's typically not healthy people, right? So this is another of these nice confounding stories where, if you have a test done in the middle of the night, it probably indicates that you're sicker. So he and Griffin recruited their third author and went off and did a very large scale study of this question, which is what the paper that I asked you to read reports on. And so I said, well, I wonder if I could reproduce that study in the MIMIC database. And the answer, just in case you get your hopes up, was no, in large part because we just don't have the right kind of data. So there are not that many white blood counts that were measured in the MIMIC database, for example. But if you look at the-- this is MIMIC data. 
And if you say, what's the fraction of abnormal white blood count values by hour-- so this is midnight to midnight. And each hour, there's some fraction of these test results that are abnormal. And sure enough, what you see is that, at 5 o'clock in the morning, a much higher fraction of them is abnormal than at 3 o'clock in the afternoon, OK, which is consistent with Zak's peremptory comment about the guy being an idiot. So once again, I said, well, can we build a really simple model that predicts who's going to die in the hospital in this case? That's the easiest one to predict because I have that data. We could get three-year survival data, which is what they were looking at. But it's harder and it runs into censoring problems of what happens if the person was hospitalized less than three years before the end of our data collection period and so on. And so I avoided that. But what this is showing you is, for each of the hours, zero to 24, what is the number of measurements? And for each of those hours, what is the fraction of those measurements that's abnormal, OK? So I said, well, let's just throw it into a logistic regression model. And what comes out is something really weird, which is that a few particular hours are significant, but most of them are not. And that looks like noise to me, right? Because you wouldn't expect that, at 8 o'clock in the morning, the fact that you had something measured matters. But at 9 o'clock in the morning, it doesn't. That doesn't seem sensible. So I don't think there's enough signal here. And in fact, when I looked at the number of white blood count measurements at night and related to mortality-- so false means people lived and true means they died. But you see that there's not a whole lot of difference between the distributions. But you also see that the number of white blood counts is relatively small in this database. And so I think we just don't have enough data to do it. On the other hand, if you look at a panel of different lab tests, you look at mean values of blood urea nitrogen or calcium, chloride, CO2, et cetera, you see that there is variation across time. So there is some sort of variance that's either caused by the diurnal physiology of the human body or by the routine practice of medicine, about when people choose to take lab measurements. And in fact, if you look at the fraction of high and low lab values, they do vary by hour. And in particular, if you look at white blood counts, you see that the fraction of high values goes up at night and the fraction of low values goes down at night, right, which is consistent with what they saw as well. There is another way to measure it, which is, instead of using normal ranges, the lab actually gives you a call that says, is this value normal, low, or high? And we can use that. That's a little bit more subtle because it depends on calibration of the equipment and is updated as the calibration changes. So that's probably a little bit more accurate. But you see essentially the same phenomenon here. But if you look at the distributions of when measurements are done that turn out to be normal versus when they turn out to be abnormal, there is a lot of similarity between the normal and the abnormal curves of when those measurements are taken. So we're not seeing that.
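The hour-of-day analysis just described can be sketched directly from the lab table, using the FLAG column (which, in MIMIC-III, is 'abnormal' when the lab flagged the result and empty otherwise). This continues from the earlier lab sketch and again treats the column names and flag values as assumptions.

lab["CHARTTIME"] = pd.to_datetime(lab["CHARTTIME"])
lab["HOUR"] = lab["CHARTTIME"].dt.hour
lab["ABNORMAL"] = lab["FLAG"].str.lower().eq("abnormal")

# Fraction of results flagged abnormal, by hour of the day (0-23).
print(lab.groupby("HOUR")["ABNORMAL"].mean())

# Restricting to a single test, e.g. white blood count, follows the same pattern
# once you look up its ITEMID(s) in D_LABITEMS.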
OK, let me race through to the end. This is my heartbeat from my watch. You can actually download the stuff and put it in your favorite analysis engine and take a look. So here I was running across the Harvard bridge. And if you look at my heart rate variability over the 30 seconds or so, you see that the interbeat interval ranges from about 550 to about 620 milliseconds. And so you could calculate my heart rate variability, which is thought to be an indicator of heart health and so on. You can calculate that I was running at a pace of about 100-- my heart was beating at a pace of about 100 beats per minute. So you know there's all sorts of information like that available. Now, as I said, I'm not going to get into this today, but this was a very successful recently published paper where they're able to take a look at images of the lung. So this is a transverse scan of the lung. And they have a deep learning machine that is able to identify these two yellow marked things as pulmonary emboli as opposed to these other things that are just random flecks in the tissue. And I can't do that by eyeball. Maybe a good radiologist might be able to, but this is claimed in the paper to outperform decent radiologists already. This was one of the articles that led Geoff Hinton to make this rather stupid pronouncement that said, tell your children not to become radiologists because the profession will be over by the time they get fully trained, which I don't believe. They may do different things, but they won't go away. This was a slide from Ron Kikinis at the Brigham, and they're using automated techniques of analyzing white matter in order to identify lupus lesions. So lupus is a bad disease that shows up in these magnetic resonance images in certain ways. The last thing I want to talk about today is notes. So my students did a little exercise last semester where we tried to see how good is the average ape, namely a member of my research group, at predicting mortality? And so we took a bunch of cases from the MIMIC data set, blinded to the question of whether the person lived or died. We gave the data to people in a kind of visualization tool, sort of like the one that I showed you earlier, that summarizes the case, and then also gave people access to the notes, the deidentified notes about those cases, to see whether people could predict, better than a coin flip, whether somebody was going to live or die. And the answer is yes, slightly better, OK? Not immensely better but slightly better. And furthermore, it looks like, by giving them feedback, so as they're looking at these cases and trying to make the prediction, they make a prediction, you tell them if they were right or wrong, we learn. And so we get slightly better than slightly better than random, right? It's kind of interesting. OK, so one of the things I discovered is that, at least when I was playing the monkey in this exercise, I found the notes to be immensely useful, much more useful than the trend lines of laboratory data. Partly, it's because I'm used to reading English. I'm not so used to reading graphs of laboratory data. But part of it is that there is a level of human understanding that is transmitted in the nursing notes and in the discharge summaries and so on that you don't get from just looking at raw data. And so there is very much the sense, which we're going to talk about in a couple of weeks, of how can we take advantage of that information, extract it, and use it in the kinds of modeling that we want to do? So in MIMIC, if you look, we have nursing notes, and radiology reports, and more nursing notes, and electrocardiogram reports, and doctor's notes, and discharge summaries, and echocardiograms, respiratory, et cetera.
And if you look at the distribution of the lengths of these, these are, unfortunately, not on the same scale. But the discharge summary is the thing that's written at the time you leave the hospital. So this is sort of the summary of everything that happened to you during your hospitalization. And it's long. So, you know, it goes up to like 30,000 characters. You know, it's a short story, not so short short story. Nursing notes tend to be shorter. They run up to about 3,000 characters. This other set of nursing notes, which I think comes from the other system, is a little bit longer. It goes up to about 5,000. Doctor's notes are a little bit longer yet. They go up to about 10,000, 15,000 characters, typically. And there are various other kinds of notes. So I just wanted to show you a few of these. Here's a brief nursing note. So this is a patient who is hypotensive but not in shock. Patient remains on this drug drip at 0.75 micrograms per kilogram per minute, no titration needed at this time. Their blood pressure is stable at more than 100. Their mean arterial pressure is 65, greater than 65. Wean them from this drug presumably if it's tolerated. A wound infection, so anterior groin area open and oozing moderate amounts of thin, pink-tinged serous fluid. Patient's stooling with small amounts of stool on something and dangerously close to the open wound, et cetera. So this is sort of the nurse's snapshot. She just went in, saw the patient-- by the way, I say she, but probably a vast majority of nurses in Boston area hospitals really are women, but there are some male nurses-- and will record sort of a snapshot of what's going on with the patient. What are the concerns? In principle, this is going to be useful not only as a part of the medical record, but also when this nurse goes off shift and the next nurse comes on shift. Then this is a recording of what the state of the patient was the last time they were seen by the nurse. In reality, the nurses tend to tell each other verbally rather than relying on the written version. I remember one time talking to a nurse in an intensive care unit in another part of the country, and I said, so whoever reads your notes, and she says, quality assurance officers, so the hospital has people responsible for trying to assess the quality of care they're giving, and lawyers when there's a lawsuit. And she was very happy because she had saved the hospital 10 million dollars by having carefully recorded that some procedure had been done to a patient who then had a bad outcome and was suing the hospital for their neglect in not having done this. But because it was in the note, that was proof that it actually had been done, and therefore the hospital wasn't liable. But there is a lot of information in here. Now, I'm going to show you many pages of a typical discharge summary. So this is somebody on the surgery service who came in complaining of leg pain, redness, and swelling secondary to infection of the left femoral popliteal bypass. So she had surgery-- I think she. Yeah, female. She had surgery which didn't heal well, so major surgical or invasive procedure, incision and drainage and pulse irrigation of the left groin, and left above-knee popliteal site incisions with exploration of bypass graft, and excision of the entire left common femoral artery to above-knee blah, blah, blah, blah blah, blah. So this is what they did. History of the present illness-- she's a 45-year-old woman who underwent the left femoral, a.k.a. 
doctor something or other with PTFE, whatever that is, over a month ago on a certain date. By the way, these bracketed asterisked things are where we've taken out identifying information from the record. She had been doing well post-operatively and was seen in the clinic six days prior to presentation. At this time, she acutely developed nausea, vomiting, fevers, and progressive redness, swelling, pain of her left thigh, et cetera, OK? So that's just page one of many pages. Yeah. AUDIENCE: Just a question. Is this completely [INAUDIBLE] information [INAUDIBLE] patient's name or date? PROFESSOR: Not in this system. There are people-- Henry Chueh at Mass General spent 10 years building a system that had autocomplete and so on. And some doctors liked it and some doctors hated it. And the MGH threw out all of their old systems in order to buy Epic, and so it's gone. It was like 10 years of work down the drain. But it was not a spectacular success. Because whenever you have autocomplete, you have to anticipate every possible answer. And people are very creative, and they always want to type something that you didn't anticipate. So it's hard to support it. AUDIENCE: What is Epic? That's like the new-- PROFESSOR: Epic is a big company that has been winning all the recent contests for installing electronic medical record systems. Remember in my last lecture, I showed that we're reaching about 100% saturation? So they've been winning a lot of the installation deals. And they're getting a lot of the subsidy. The estimate I heard was that Partners Healthcare, which is MGH and the Brigham and a couple of other hospitals, spent somewhere on the order of two billion dollars installing the system. So that included all the customizations and all the training and all the administrative stuff that went with it. But that's a huge amount of money. AUDIENCE: I agree. PROFESSOR: OK, so we have past medical history-- pack a day smoker, abused cocaine but says she stopped six months ago, has asthma, type 2 diabetes. Social history, family history. These are the physical exam results. So it's giving you a lot of information about the person. Description of the wound down at the bottom. Pertinent lab results. So these are copied out of the laboratory tables. Yeah. AUDIENCE: Just to double check with the drug results-- PROFESSOR: Sorry? AUDIENCE: Just to double check with the drug results two slides back-- PROFESSOR: Yeah AUDIENCE: It said-- so it has the fake dates of 2190 up there. PROFESSOR: Yep. AUDIENCE: So the fact that there was a positive test in 2187 would mean a year ago. PROFESSOR: Yeah. AUDIENCE: So that's the medication. PROFESSOR: Yeah, the deidentification technology here maintains the relative dates but not the absolute dates. So these are results, again, copied out of the laboratory database into the discharge summary. Brief hospital course, and then a review of systems, so what's going on neurologically, cardiovascular, pulmonary, GI, GU, et cetera. Infectious disease, endocrine, hematology, prophylaxis. And at the time she was discharged, the patient was doing well, no fever and stable vital signs, tolerating a regular diet, ambulating, voiding without assistance, and pain was well controlled. Medications on admission, so this was the medication reconciliation. Discharge medication, so this is what she's being sent home on. Discharge disposition is to the home with some follow up service, and she's going home.
And the discharge diagnosis is infected left femoral popliteal bypass graft and the condition. And these are the instructions to the patient that say, you know, here's what you can do, here is when you should come back and tell us if something is going wrong, et cetera. And here's what you should report if it happens. You know, if you have sudden severe bleeding or swelling, do this. Follow up with doctor somebody or other. Call his clinic at this number to schedule an appointment and then follow up with doctor somebody else in two weeks. I think this is the same one. So just a couple of final words about standards. So you saw in David's introductory lecture a reference to OHDSI, whose common data model is a standard method of encoding the kind of data that we're talking about today. There is a likelihood that the next release of the MIMIC database will adopt the OHDSI formats rather than the-- yeah. David's shaking his head, wondering why. Me, too. AUDIENCE: OHDSI hasn't handled clinical notes very well yet. PROFESSOR: Well, so, you know, what always happens, as you say, I'm going to adopt the standard asterisk with the following extensions. And that's probably what's going to happen. But it means that the central tables, you know, the ICD-9 code tables and the drug tables, some things like that, are likely to wind up adopting the formats of the OHDSI data model. You should also know about this thing called FHIR, F-H-I-R, the Fast Healthcare Interoperability Resources. So HL7 is the standards organization that had a tremendous success in the early 1990s in solving the problem of how to allow laboratories to report lab data back to the hospitals or the clinics that ordered the labs. And that character string with the up arrows and the vertical bars and so on that I showed you before that had LOINC encoded in it is that standard. That's called HL7 Version 2. It's still in use very widely. They then got ambitious and suffered second system syndrome, which is they decided to build HL7 Version 3, which I used to teach in a class here 10 years ago. But one of my friends who works for a company that helps hospitals implement that sent me a 38 megabyte PDF file that describes what you need to know in order to implement that system. And as a result, nobody was doing it. So FHIR is a gross simplification of that that starts off and says, if a doctor refers a new patient to you, what is the minimum set of data that you need to know in order to take care of that person? And FHIR tries to provide just that subset of all of the data. It has become a standard mainly because, after Congress spent $42 billion or so bribing people into buying these information systems, they got mad that the information systems they bought couldn't talk to each other. And so they called in, on the carpet, the heads of these IT companies, health IT companies, and they yelled at them and they made them promise that there would be interoperability. They promised. And out of that came FHIR. It was probably simultaneously developed but they adopted it. And so now, in principle, it's possible to exchange data between different hospitals, at least to the level of that degree of harmonization of the data. In reality, the companies don't want you to do that because they like there to be friction in moving all your data to a different hospital, since that makes it more likely that you'll stay at the one that you're at. So there are complicated socioeconomic kinds of issues in all this.
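To make the FHIR discussion a bit more concrete, here is a hedged sketch of what a much simplified FHIR Patient resource looks like once it has been parsed from XML or fetched as JSON, and how you might pull a couple of fields out of it in Python. The field names follow the FHIR Patient resource (resourceType, name, gender, birthDate), but the values are invented for illustration.

import json

patient_json = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"use": "official", "family": "Doe", "given": ["Jane"]}],
  "gender": "female",
  "birthDate": "1974-12-25"
}
"""

patient = json.loads(patient_json)
official = patient["name"][0]
print(" ".join(official["given"]), official["family"], patient["birthDate"])

Real resources carry much more (identifiers, addresses, extensions), and in practice you would fetch them from a FHIR server's REST API or use a client library rather than hand-parsing strings.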
But at least the standard exists and is becoming more and more widely deployed as long as Congress pays attention. It's ugly. So here is what a patient looks like, right? It's the usual unreadable XML garbage. But fortunately, there are parsers that can turn it into JSON and simpler representations. And so that's pretty common. So the terminologies that exist are LOINC, NDC, ICD-9 and 10. SNOMED I didn't talk about today. DSM-5 is the Diagnostic and Statistical Manual of Mental Disorders. That's used as a common coding method for describing psychiatric disease. And there are many more of these. There's something called the Unified Medical Language System Metathesaurus from the National Library of Medicine that integrates about 180 of these different terminologies. And so there is a nice one-stop shop where you can get all these things from them. So takeaway lessons, know your data. Remember that first example of the heart rates, that comes up over and over again. And doing machine learning and analysis on data that you don't understand is likely to lead you to false conclusions. Harmonization is difficult and time consuming. And there are lots of things for which we just don't have standards, and so everybody develops their own representations. I had a PhD student about a decade ago who, in his thesis, wrote that he spent about half his time cleaning data. And I gave that thesis to another student who started a few years later who read it, and he comes to me just awestruck and he says, what? He only spent half his time cleaning? Unfortunately, that's roughly where we are in this field. So sorry to be a downer, but that's the current state of the art. And next time, David will start by looking at actually building some models with these kinds of data and showing you what we can accomplish. Thank you.
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019
17_Reinforcement_Learning_Part_2.txt
DAVID SONTAG: A three-part lecture today, and I'm still continuing on the theme of reinforcement learning. Part one, I'm going to be speaking, and I'll be following up on last week's discussion about causal inference and Tuesday's discussion on reinforcement learning. And I'll be going into sort of one more subtlety that arises there and where we can develop some nice mathematical methods to help with. And then I'm going to turn over the show to Barbra, who I'll formally introduce when the time comes. And she's going to both talk about some of her work on developing and evaluating dynamic treatment regimes, and then she will lead a discussion on the sepsis paper, which was required reading from today's class. So those are the three parts of today's lecture. So I want you to return back, put yourself back in the mindset of Tuesday's lecture where we talked about reinforcement learning. Now, remember that the goal of reinforcement learning was to optimize some reward. Specifically, our goal is to find some policy, which I can note as pi star, which is the arg max over all possible policies pi of v of pi, where just to remind you, v of pi is the value of the policy pi. Formally, it's defined as the expectation of the sum of the rewards across time. So the reason why I'm calling this an expectation with like the pi is because there's stochasticity both in the environment, and possibly pi is going to be a stochastic policy. And this is summing over the time steps, because this is not just a single time step problem. But we're going to be considering interventions across time of the reward at each point in time. And that reward function could either be at each point in time or you might imagine that this is 0 for all time steps, except for the last time step. So the first question I want us to think about is, well, what are the implications of this as a learning paradigm? If we look what's going on over here, hidden in my story is also an expectation over x, the patient, for example, or the initial state. And so this intuitively is saying, let's try to find a policy that has high expected reward, average [INAUDIBLE] over all patients. And I just want you to think about whether that is indeed the right goal. Can anyone think about a setting where that might not be desirable? Yeah. AUDIENCE: What if the reward is the patient living or dying? You don't want it to have high ratings like saving two patients and [INAUDIBLE] and expect the same [INAUDIBLE]. DAVID SONTAG: So what happens if this reward is something mission critical like a patient dying? You really want to try to avoid that from happening as much as possible. Of course, there are other criteria that we might be interested in as well. And both in Frederick's lecture on Tuesday and in the readings, we talked about how there might be other aspects about making sure that a patient is not just alive but also healthy, which might play into your reward functions. And there might be rewards associated with those. And if you were to just, for example, put a positive or negative infinity for a patient dying, that's a nonstarter, right, because if you did that, unfortunately in this world, we're not always going to be able to keep patients alive. And so you're going to get into an infeasible optimization problem. So minus infinity is not an option. We're going to have to put some number to it in this type of approach. But then you're going to start trading off between patients. 
In some cases, you might have a very high reward for-- there are two different solutions that you might imagine, one solution where the reward is somewhat balanced across patients and another situation where you have really small values of reward for some patients and a few patients with very large values and rewards. And both of them could be the same average, obviously. But both are not necessarily equally useful. We might want to say that we prefer to avoid that worst-case situation. So one could imagine other ways of formulating this optimization problem, like maybe you want to control the worst-case reward instead of the average-case reward. Or maybe you want to say something about different quartiles. I just wanted to point that out, because really that's the starting place for a lot of the work that we're doing here. So now I want us to think through, OK, returning back to this goal, we've done our policy iteration or we've done our Q learning, that is, and we get a policy out. And we might now want to know what is the value of that policy? So what is our estimate of that quantity? Well, to get that, one could just try to read it off from the results of Q learning by just computing that the pi-- what I'm calling v pi hat-- the estimate is just equal to now a maximum over actions a of your Q function evaluated at whatever your initial state is and the optimal choice of action a. So all I'm saying here is that the last step of the algorithm might be to ask, well, what is the expected reward of this policy? And if you remember, the Q learning algorithm is, in essence, a dynamic programming algorithm working its way from the sort of large values of time up to the present. And it is indeed actually computing this expected value that you're interested in. So you could just read it off from the Q values at the very end. But I want to point out that here there's an implicit policy built in. So I'm going to compare this in just a second to what happens under the causal inference scenario. So just a single time step in potential outcomes framework that we're used to. Notice that the value of this policy, the reason why it's a function of pi is because the value is a function of every subsequent action that you're taking as well. And so now let's just compare that for a second to what happens in the potential outcomes framework. So there, our starting place-- so now I'm going to turn our attention for just one moment from reinforcement learning now back to just causal inference. In reinforcement learning, we talked about policies. How do we find policies to do well in terms of some expected reward of this policy? But yet when we were talking about causal inference, we only used words like average treatment effect or conditional average treatment effect, where for example, to estimate the conditional average treatment effect, what we said is we're going to first learn, if we use a covariate adjustment approach, we learn some function f of x comma t, which is intended to be an approximation of the expected value of your outcome y given x comma-- I'll say y of t. There. So that notation. So the goal of covariate adjustment was to estimate this quantity. And we could use that then to try to construct a policy. For example, you could think about the policy pi of x, which simply looks to see is-- we'll say it's 1 if CATE or your estimate of CATE for x is positive and 0 otherwise. 
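Written out in LaTeX, the quantities just described are, as a reconstruction of the board notation rather than the exact formulas shown in lecture (with s_0 denoting the initial state and f the fitted outcome model):

\[
\hat{V}^{\pi} \;=\; \max_{a}\, \hat{Q}(s_0, a),
\qquad
f(x, t) \;\approx\; \mathbb{E}\left[\, Y(t) \mid x \,\right],
\qquad
\pi(x) \;=\; \mathbf{1}\!\left[\, \widehat{\mathrm{CATE}}(x) > 0 \,\right],
\quad \text{where } \widehat{\mathrm{CATE}}(x) = f(x,1) - f(x,0).
\]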
Just to remind you, the way that we got the estimate of CATE for an individual x was just by looking at f of x comma 1 minus f of x comma 0. So if we have a policy-- so now we're going to start thinking about policies in the context of causal inference, just like we were doing in reinforcement learning. And I want us to think through what would the analogous value of the policy be? How good is that policy? It could be another policy, but right now I'm assuming I'm just going to focus on this policy that I show up here. Well, one approach to try to evaluate how good that policy is, is exactly analogous to what we did in reinforcement learning. In essence, what we're going to say is we evaluate the quality of the policy by summing over your empirical data of pi of xi. So this is going to be 1 if the policy says to give treatment 1 to individual xi. In that case, we say that the value is f of x comma 1. Or if you gave the second-- if the policy would give treatment 0, the value of the policy on that individual is 1 minus pi of x times f of x comma 0. So I'm going to call this sort of an empirical estimate of what you should think about as the reward for a policy pi. And it's exactly analogous to the estimate of v of pi that you would get from a reinforcement learning context. But now we're talking about policies explicitly. So let's try to dig down a little bit deeper and think about what this is actually saying. Imagine the story where you just have a single covariate x. We'll think about x as being, let's say, the patient's age. And unfortunately there's just one color here. But I'll do my best with that. And imagine that the potential outcome y0 as a function of the patient's age x looks like this. Now imagine that the other potential outcome y1 looked like that. So I'll call this the y1 potential outcome. Suppose now that the policy that we're defining is this. So we're going to give treatment one if the conditional average treatment effect is positive and 0 otherwise. I want everyone to draw what the value of that policy is on a piece of paper. It's going to be-- I'm sorry-- I want everyone to write on a piece of paper what the value of the policy would be for each individual. So it's going to be a function of x. And now I want it to be-- I'm looking for y of pi of x. So I'm looking for you to draw that plot. And feel free to talk to your neighbor. In fact, I encourage you to talk to your neighbor. [SIDE CONVERSATION] Just to try to connect this a little bit better to what I have up here, I'm going to assume that f-- this is f of x1, and this is f of x0. All right. Any guesses? What does this plot look like? Someone who hasn't spoken in the last week and a half, if possible. Yeah? AUDIENCE: Does it take like the max of the functions at all points, like, it would be y0 up until they intersect and then y1 afterward? DAVID SONTAG: So it would be something like this until the intersection point. AUDIENCE: Yeah. DAVID SONTAG: And then like that afterwards. Yeah. That's exactly what I'm going for. And let's try to think through why is that the value of the policy? Well, here the CATE, which is looking at the difference between these two lines, is negative-- so for every x up to this crossing point, the policy that we've defined over there is going to perform action-- wait. Am I drawing this correctly? Maybe it's actually the opposite, right? This should be doing action one. Here. OK. So here the CATE is negative. And so by my definition, the action performed is action 0.
And so the value of the policy is actually this one. [INTERPOSING VOICES] DAVID SONTAG: Oh. Wait. Oh, good. [INAUDIBLE] Because this is the graph I have in my notes. Oh, good. OK. I was getting worried. OK. So it's this action, all the way up until you get over here. And then over here, now the CATE suddenly becomes positive. And so the action chosen is 1. And so the value of that policy is y1. So one could write this a little bit differently for-- in the case of just two policies, and now I'm going to write this in a way that it's really clear. In the case of just two actions, one could write this equivalently as an average over the data points of the maximum of f of x comma 0 and f of x comma 1. And this simplification turning this formula into this formula is making the assumption that the pi that's being evaluated is precisely this pi. So this simplification is only for that pi. For another policy, which is not looking at CATE or, for example, which might threshold CATE at a gamma, it wouldn't quite be this. It would be something else. But I've gone a step further here. So what I've shown you right here is not the average value but sort of individual values. I have shown you the max function. But what this is actually looking at is the expected reward, which is now averaging across all x. So to truly draw a connection between this plot we're drawing and the average reward of that policy, what we should be looking at is the average of these two functions, which is we'll say something like that. And that value is the expected reward. Now, this all goes to show that the expected reward of this policy is not a quantity that we've considered in the previous lectures, at least not in the previous lectures in causal inference. This is not the same as the average treatment effect, for example. So I've just given you one way to think through, number one, what is the policy that you might want to derive when you're doing causal inference? And number two, what is one way to estimate the value of that policy, which goes through the process of estimating potential outcomes via covariate adjustment? But we might wonder, just like when we talked about in causal inference where I said there are two approaches or more than two, but we focused on two, using covariate adjustment and doing inverse propensity score weighting, you might wonder is there another approach to this problem altogether? Is there an approach which wouldn't have had to go through estimating the potential outcomes? And that's what I'll spend the rest of this third of the lecture focused on talking about. And so to help you page this back in, remember that we derived in last Thursday's lecture an estimator for the average treatment effect, which was 1 over n times the sum over data points that got treatment 1 of yi, the observed outcome for that data point, divided by the propensity score, which I'm just going to write as ei-- so ei is equal to the probability of observing t equals 1 given the data point xi-- minus a sum over data points i such that ti equals 0 of yi divided by 1 minus ei. And by the way, there was a lot of confusion in class about why do I have a 1 over n here and a 1 over n here-- but right now I've just taken it out altogether-- and not 1 over the number of positive points and 1 over the number of 0 data points. And I expanded the derivation that I gave in class, and I posted new slides online after class. So if you're curious about that, go to those slides and look at the derivation.
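For reference, here are the spoken formulas from this stretch of the lecture written out in one place, using the same symbols as on the board; the last line is the inverse propensity weighted ATE estimator just recalled:

```latex
% Estimated CATE and the covariate-adjustment estimate of the policy value:
\widehat{\mathrm{CATE}}(x) = f(x,1) - f(x,0), \qquad
\hat{R}(\pi) = \frac{1}{n}\sum_{i=1}^{n}\Big[\pi(x_i)\, f(x_i,1) + \big(1-\pi(x_i)\big)\, f(x_i,0)\Big],

% which, for the particular policy \pi(x) = \mathbf{1}[\widehat{\mathrm{CATE}}(x) > 0], simplifies to
\hat{R}(\pi) = \frac{1}{n}\sum_{i=1}^{n}\max\big\{f(x_i,0),\, f(x_i,1)\big\}.

% The IPW estimator of the average treatment effect recalled from last Thursday:
\widehat{\mathrm{ATE}}^{\mathrm{IPW}} = \frac{1}{n}\left[\;\sum_{i:\, t_i=1}\frac{y_i}{e_i} \;-\; \sum_{i:\, t_i=0}\frac{y_i}{1-e_i}\right],
\qquad e_i = p(t_i = 1 \mid x_i).
```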
So in a very analogous way now, I'm going to give you a new estimator for this same quantity that I had over here, the expected reward of a policy. Notice that this estimator here, it made sense for any policy. It didn't have to be the policy which looked at, is CATE just greater than 0 or not? This held for any policy. The simplification I gave was only in this particular setting. I'm going to give you now another estimator for the average value of a policy, which doesn't go through estimating potential outcomes at all. Analogous to this is just going to make use of the propensity scores. And I'll call it R hat. Now I'm going to put a superscript IPW for inverse propensity weighted. And it's a function of pi, and it's given to you by the following formula-- 1 over n sum over the data points of an indicator function for if the treatment, which was actually given to the i-th patient, is equal to what the policy would have done for the i-th patient. And by the way, here I'm assuming that pi is a deterministic function. So the policy says for this patient, you should do this treatment. So we're going to look at just the data points for which the observed treatment is consistent with what the policy would have done for that patient. And this indicator function is 0 otherwise. And we're going to divide it by the probability of ti given xi. So the way I'm writing this, by the way, is very general. So this formula will hold for nonbinary treatments as well. And that's one of the really nice things about thinking about policies, which is whereas when talking about average treatment effect, average treatment effect sort of makes sense in the comparative sense, comparing one to another. But when we talk about how good is a policy, it's not a comparative statement at all. The policy does something for everyone. You could ask, well, what is the average value of the outcomes that you get for those actions that we're taking for those individuals? So that's why I'm writing it in a slightly more general fashion already here. Times yi obviously. So this is now a new estimator. I'm not going to derive it for you in class, but the derivation is very similar to what we did last week when we tried to derive the average treatment effect. And the critical point is we're dividing by that propensity score, just like we did over there. So this, if all of the assumptions made sense and you had infinite data, should give you exactly the same estimate as this. But here, you're not estimating potential outcomes at all. So you never have to try to impute the counterfactuals. Here, all it relies on is knowing the propensity scores for each of the data points in your training set or in a data set. So for example, this opens the door to tons of new exciting directions. Imagine that you had a very large observational data set. And you learned a policy from it. For example, you might have done covariate adjustment and then said, OK, based on covariate adjustment, this is my new policy. So you might have gotten it via that approach. Now you want to know how good is that. Well, suppose that you then run a randomized control trial. And when you run a randomized control trial, you have 100 people, maybe 200 people, and so not that many. So not nearly enough people to have actually estimated your policy alone. You might have needed thousands or millions of individuals to estimate your policy. Now you're only going to have a couple hundred individuals that you could actually afford to do a randomized control trial on.
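Before continuing with that randomized-trial use case, here is a minimal sketch of the estimator just written down, under the assumption that the propensity of each patient's observed treatment is available; all array names here are hypothetical.

```python
# Inverse propensity weighted estimate of the value of a policy pi.
# propensity[i] is assumed to be p(t_i | x_i): the probability that patient i
# received the treatment they actually got under the data-generating policy.
import numpy as np

def ipw_policy_value(y, t, policy_actions, propensity):
    """R_hat^IPW(pi) = (1/n) * sum_i 1[t_i == pi(x_i)] * y_i / p(t_i | x_i)."""
    y = np.asarray(y, dtype=float)
    match = (np.asarray(t) == np.asarray(policy_actions)).astype(float)
    return float(np.mean(match * y / np.asarray(propensity, dtype=float)))

# Toy example with a binary treatment from a randomized trial, where the
# propensity of the observed treatment is 1/2 for everyone.
y = np.array([1.0, 0.0, 1.0, 1.0])          # observed outcomes
t = np.array([1, 0, 0, 1])                   # observed treatments
pi_of_x = np.array([1, 1, 0, 1])             # what the learned policy would do
e = np.full(4, 0.5)                          # propensity of the observed treatment
print(ipw_policy_value(y, t, pi_of_x, e))
```

Patients whose observed treatment disagrees with the policy contribute zero, and the rest are up-weighted by one over their propensity, so very small propensities inflate the variance, which is exactly the issue, and the motivation for clipping, discussed next.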
For those people, because you're flipping a coin for which treatment they're going to get, suppose that we're in a binary setting where there are only two treatments, then this value is always 1/2 for everyone. And what I'm giving you here is going to be an unbiased estimate of how good that policy is, which one can now estimate using that randomized control trial. Now, this also might lead you to think through the question of, well, rather than estimating the policy through-- rather than obtaining a policy through the lens of optimizing CATE, of figuring out how to estimate CATE, maybe we could have skipped that altogether. For example, suppose that we had that randomized control trial data. Now imagine that rather than 100 individuals, you had a really large randomized control trial with 10,000 individuals in it. This now opens the door to thinking about directly maximizing or minimizing, depending on whether you want this to be large or small, pi with respect to this quantity, which completely bypasses the goal of estimating the conditional average treatment effect. And you'll notice how this looks exactly like a classification problem. This quantity here looks exactly like a 0-1 loss. And the only difference is that you're weighting each of the data points by this inverse propensity. So one can reduce the problem of actually finding an optimal policy here to that of a weighted classification problem, in the case of a discrete set of treatments. There are two big caveats to that line of thinking. The first major caveat is that you have to know these propensity scores. And so if you have data coming from a randomized control trial, you will know the propensity scores, or if you have, for example, some control over the data generation process. For example, if you are an ad company and you get to choose which ad to show to your customers, and then you look to see who clicks on what, you might know what that policy was that was showing things. In that case, you might exactly know the propensity scores. In health care, other than in randomized control trials, we typically don't know this value. So we either have to have a large enough randomized control trial that we won't over-fit by trying to directly minimize this, or we have to work within an observational data setting, where we have to estimate the propensity scores directly. So you would then have a two-step procedure, where first you estimate these propensity scores, for example, by doing logistic regression. And then you attempt to maximize or minimize this quantity in order to find the optimal policy. And that has a lot of challenges, because this quantity shown in the very bottom here could be really small or really large in an observational data set due to these issues of having very small overlap between your treatments. And this being very small implies then that the variance of this estimator is very, very large. And so when one wants to use an approach like this, similar to when one wants to use an average treatment effect estimator, and when you're estimating these propensities, often you might need to do things like clipping of the propensity scores in order to prevent the variance from being too large. That then, however, typically leads to a biased estimate. I wanted to give you a couple of references here. So one is Swaminathan and Joachims, J-O-A-C-H-I-M-S, ICML 2015. In that paper, they tackle this question. They focus on the setting where the propensity scores are known, such as when you have data from a randomized controlled trial.
And they recognize that you might decide that you prefer something like a biased estimator because of the fact that these propensity scores could be really small. And so they use some generalization results from the machine learning theory community in order to try to control the variance of the estimator as a function of these propensity scores. And they then directly learn the policy by minimizing what they call the counterfactual risk, in order to allow one to generalize as best as possible from the small amount of data you might have available. A second reference that I want to give just to point you into this literature, if you're interested, is by Nathan Kallus and his student, I believe Angela Zhou, from NeurIPS 2018. And that was a paper which was one of the optional readings for last Thursday's class. Now, in that paper they also start from something like this, from this perspective. And they say that, oh, now that we're working in this framework, one could think about what happens if you actually have unobserved confounding. So there, you might not actually know the true propensity scores, because there are unobserved confounders that you don't observe. And you can think about trying to bound how wrong your estimator can be as a function of how much you don't know this quantity. And they show that if you think about having some backup strategy, like if your goal is to find a new policy which performs as best as possible with respect to an old policy, then it gives you a really elegant framework for trying to think about a robust optimization of this, even taking into consideration the fact that there might be unobserved confounding. And that works also in this framework. So I'm nearly done now. I just want to now finish with a thought, can we do the same thing for policies learned by reinforcement learning? So now that we've sort of built up this language, let's return to the RL setting. And there one can show that you can get a similar estimate for the value of a policy by summing over your observed sequences, summing over the time steps of that sequence of the reward observed at that time step times a product of ratios of probabilities, going from the first time step up to time little t, of the probability that you would actually take the observed action at time t prime, given that you are in the observed state at time t prime, divided by the probability-- this is the analog of the propensity score, the probability under the data-generating process-- of seeing that action given that you are in that state at time t prime. So if, as we discussed there, you had a deterministic policy, then this pi, it would just be a delta function. And so this would just be looking at-- this estimator would only be looking at sequences where the precise sequence of actions taken are identical to the precise sequence of actions that the policy would have taken. And the difference here is that now instead of having a single propensity score, one has a product of these propensity scores corresponding to the propensity of observing that action given the corresponding state at each point along the sequence. And so this is nice, because this gives you one way to do what's called off-policy evaluation. And this is an estimator, which is completely analogous to the estimator that we got from Q learning. So if all assumptions were correct, and you had a lot of data, then those two should give you precisely the same answer.
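Here is a minimal sketch of that sequential importance-sampling estimator for a deterministic target policy. The trajectory format is an assumption: each trajectory is a list of (state, action, reward, behavior_prob) tuples, where behavior_prob plays the role of the per-step propensity, the probability of the observed action given the observed state under the data-generating (behavior) policy.

```python
# Per-decision importance-sampling estimate of the value of a deterministic
# policy pi, from logged trajectories gathered under some other behavior policy.
import numpy as np

def off_policy_value(trajectories, pi):
    values = []
    for traj in trajectories:
        weight, value = 1.0, 0.0
        for state, action, reward, behavior_prob in traj:
            # pi(a | s) is 1 if the deterministic policy takes this action, else 0.
            weight *= (1.0 if pi(state) == action else 0.0) / behavior_prob
            value += weight * reward   # reward at time t gets the product of ratios up to time t
            if weight == 0.0:
                break                  # trajectory departs from pi; all remaining terms are 0
        values.append(value)
    return float(np.mean(values))
```

For a deterministic pi, the weight drops to zero as soon as an observed action departs from what pi would do, so only trajectories consistent with the policy contribute, and the variance grows quickly with the length of the horizon.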
But here, like in the causal inference setting, we are not making the assumption that we can do covariate adjustment well. Or said differently, we're not assuming that we can fit the Q function well. And this is now, just like there, based on the assumption that we have the ability to really accurately know what the propensity scores are. So it now gives you an alternative approach to do evaluation. And you could think about looking at the robustness of your estimates from these two different estimators. And this is the most naive of the estimators. There are many ways to try to make this better, such as by using doubly robust estimators. And if you want to learn more, I recommend reading this paper by Philip Thomas and Emma Brunskill in ICML 2016. And with that, I want Barbra to come up and get set up. And we're going to transition to the next part of the lecture. Yes. AUDIENCE: Why do we sum over t and take the product across all t? DAVID SONTAG: One easy way to think about this is suppose that you only had a reward at the last time step. If you only had a reward at the last time step, then you wouldn't have this sum over t, because the rewards in the earlier steps would be 0. You would just have that product going from 0 up to capital T, the last time step. The reason why you have the product up to each time step is because one wants to be able to appropriately weight the likelihood of seeing that reward at that point in time. One could rewrite this in other ways. I want to hold other questions, because this part of the lecture is going to be much more interesting than my part of the lecture. And with that, I want to introduce Barbra. Barbra, I first met her when she invited me to give a talk in her class last year. She's an instructor at Harvard Medical School-- or School of Public Health. She recently finished her PhD in 2018. And her PhD looked at many questions related to the themes of the last couple of weeks. Since that time, in addition to continuing her research, she's been really leading the way in creating data science curriculum over at Harvard. So please take it away. BARBRA DICKERMAN: Thank you so much for the introduction, David. I'm very happy to be here to share some of my work on evaluating dynamic treatment strategies, which you've been talking about over the past few lectures. So my goals for today, I'm just going to breeze over defining dynamic treatment strategies, as you're already familiar with them. But I would like to touch on when we need a special class of methods called g-methods. And then we'll talk about two different applications, different analyses, that have focused on evaluating dynamic treatment strategies. So the first will be an application of the parametric g-formula, which is a powerful g-method, to cancer research. And so the goal here is to give you my causal inference perspective on how we think about this task of sequential decision making, and then with whatever time remains, we'll be discussing a recent publication on the AI clinician to talk through the reinforcement learning perspective. So I think it'll be a really interesting discussion, where we can share these perspectives, talk about the relative strengths and limitations as well. And please stop me if you have any questions. So you already know this. When it comes to treatment strategies, there's three main types. There's point interventions happening at a single point in time. There's sustained interventions happening over time. When it comes to clinical care, this is often what we're most interested in.
Within that, there are static strategies, which are constant over time. And then there's dynamic strategies, which we're going to focus on. And these differ in that the intervention over time depends on evolving characteristics. So for example, initiate treatment at baseline and continue it over follow up until a contraindication occurs, at which point you may stop treatment and decide with your doctor whether you're going to switch to an alternate treatment. You would still be adhering to that strategy, even though you quit. The comparison here being do not initiate treatment over follow up, likewise unless an indication occurs, at which point you may start treatment and still be adhering to the strategy. So we're focusing on these because they're the most clinically relevant. And so clinicians encounter these every day in practice. So when they're making a recommendation to their patient about a prevention intervention, they're going to be taking into consideration the patient's evolving comorbidities. Or when they're deciding the next screening interval, they'll consider the previous result from the last screening test when deciding that. Likewise for treatment, deciding whether to keep the patient on treatment or not. Is the patient having any changes in symptoms or lab values that may reflect toxicity? So one thing to note is that while many of the strategies that you may see in clinical guidelines and in clinical practice are dynamic strategies, these may not be the optimal strategies. So maybe what we're recommending and doing is not optimal for patients. However, the optimal strategies will be dynamic in some way, in that they will be adapting to individuals' unique and evolving characteristics. So that's why we care about them. So what's the problem? So one problem deals with something called treatment confounder feedback, which you may have spoken about in this class. So conventional statistical methods cannot appropriately compare dynamic treatment strategies in the presence of treatment confounder feedback. So this is when time varying confounders are affected by previous treatment. So if we kind of ground this in a concrete example with this causal diagram, let's say we're interested in estimating the effect of some intervention A, vasopressors or it could be IV fluids, on some outcome Y, which we'll call survival here. We know that vasopressors affect blood pressure, and blood pressure will affect subsequent decisions to treat with vasopressors. We also know that hypotension-- so again, blood pressure, L1, affects survival, based on our clinical knowledge. And then in this DAG, we also have the node U, which represents disease severity. So these could be potentially unmeasured markers of disease severity that are affecting your blood pressure and also affecting your probability of survival. So if we're interested in estimating the effect of a sustained treatment strategy, then we want to know something about the total effect of treatment at all time points. We can see that L1 here is a confounder for the effect of A1 on Y so we have to do something to adjust for that. And if we were to apply a conventional statistical method, we would essentially be conditioning on a collider and inducing a selection bias. So an open path from A0 to L1 to U to Y. What's the consequence of this? If we look in our data set, we may see an association between A and Y. But that association is not because there's necessarily an effect of A on Y. It might not be causal. 
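To make that structure concrete, here is a purely illustrative simulation in the spirit of the DAG just described. All coefficients are invented, and the direct arrow from blood pressure to survival is left out here just to keep the collider argument clean.

```python
# Hypothetical data-generating process with treatment-confounder feedback:
# A0 -> L1, U -> L1, L1 -> A1, and U -> Y, with U unmeasured.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

U  = rng.normal(size=n)                              # unmeasured disease severity
A0 = rng.binomial(1, 0.5, size=n)                    # vasopressor at time 0
L1 = 0.8 * A0 - 1.0 * U + rng.normal(size=n)         # blood pressure: affected by A0 and U
A1 = rng.binomial(1, 1.0 / (1.0 + np.exp(L1)))       # time-1 treatment responds to low blood pressure
Y  = 0.5 * A0 + 0.5 * A1 + 1.5 * U + rng.normal(size=n)   # survival-like outcome

# "Conventional" adjustment: regress Y on A0, A1, and L1. Conditioning on L1 is
# needed to control confounding of A1, but it also conditions on a collider on
# the path A0 -> L1 <- U -> Y, so the A0 coefficient drifts away from the 0.5
# used to generate the data -- an association that is not causal.
X = np.column_stack([np.ones(n), A0, A1, L1])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(dict(zip(["intercept", "A0", "A1", "L1"], np.round(beta, 2))))
```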
That association may be due to this selection bias that we created. So this is the problem. And so in these cases, we need a special type of method that can handle these settings. And so a class of methods that was designed specifically to handle this is g-methods. And so these are sometimes referred to as causal methods. They've been developed by Jamie Robins and colleagues and collaborators since 1986. And they include the parametric g-formula, g-estimation of structural nested models, and inverse probability weighting of marginal structural models. So in my research, what I do is I combine g-methods with large longitudinal databases to try to evaluate dynamic treatment strategies. So I'm particularly interested in bringing these methods to cancer research, because they haven't been applied much there. So a lot of my research questions are focused on answering questions like, how and when can we intervene to best prevent, detect, and treat cancer? And so I'd like to share one example with you, which focused on evaluating the effect of adhering to guideline-based physical activity interventions on survival among men with prostate cancer. So the motivation for this study, there's a large clinical organization, ASCO, the American Society of Clinical Oncology, that had actually called for randomized trials to generate these estimates for several cancers. The thing with prostate cancer is it's a very slowly progressing disease. So the feasibility of doing a trial to evaluate this is very limited. The trial would have to be 10 years long probably. So given that, given the absence of this randomized evidence, we did the next best thing that we could do to generate this estimate, which was combine high-quality observational data with advanced epidemiologic methods, in this case the parametric g-formula. And so we leveraged data from the Health Professionals Follow-up Study, which is a well-characterized prospective cohort study. So in these cases, there's a three-step process that we take to extract the most meaningful and actionable insights from observational data. So the first thing that we do is we specify the protocol of the target trial that we would have liked to conduct had it been feasible. The second thing we do is we make sure that we measure enough covariates to approximately adjust for confounding and achieve conditional exchangeability. And then the third thing we do is we apply an appropriate method to compare the specified treatment strategies under this assumption of conditional exchangeability. And so in this case, eligible men for this study had been diagnosed with non-metastatic prostate cancer. And at baseline, they were free of cardiovascular and neurologic conditions that may limit physical ability. For the treatment strategies, men were to initiate one of six physical activity strategies at diagnosis and continue it over follow-up until the development of a condition limiting physical activity. So this is what made the strategies dynamic. The intervention over time depended on these evolving conditions. And so just to note, we pre-specified these strategies that we were evaluating as well as the conditions. Men were followed from diagnosis until death, 10 years of follow-up after diagnosis, or administrative end of follow-up, whichever happened first. Our outcome of interest was all-cause mortality within 10 years. And we were interested in estimating the per-protocol effect of not just initiating these strategies but adhering to them over follow-up. And again, we applied the parametric g-formula.
So I think you've already heard about the g-formula in a previous lecture, possibly in a slightly different way. So I won't spend too much time on this. So the g-formula, essentially the way I think about it is a generalization of standardization to time-varying exposures and confounders. So it's basically a weighted average of risks, where you can think of the weights being the probability density functions of the time-varying confounders, which we estimate using parametric regression models. And we approximate the weighted average using Monte Carlo simulation. So practically how do we do this? So the first thing we do is we fit parametric regression models for all of the variables that we're going to be studying. So for treatment, confounders, and death at each follow-up time. The next thing we do is Monte Carlo simulation, where essentially what we want to do is simulate the outcome distribution under each treatment strategy that we're interested in. And then we bootstrap the confidence intervals. So I'd like to show you kind of in a schematic what this looks like, because it might be a little bit easier to see. So again, the idea is we're going to make copies of our data set, where in each copy everyone is adhering to the strategy that we're focusing on in that copy. So how do we construct each of these copies of the data set? We have to build them each from the ground up, starting with time 0. So the values of all of the time-varying covariates at time 0 are sampled from their empirical distribution. So these are actually observed values of the covariates. How do we get the values at the next time point? We use the parametric regression models that I mentioned that we fit in step 1. Then what we do is we force the level of the intervention variable to be whatever was specified by that intervention strategy. And then we estimate the risk of the outcome at each time period given these variables, again using the parametric regression model for the outcome now. And so we repeat this over all time periods to estimate a cumulative risk under that strategy, which is taken as the average of the subject-specific risks. So this is what I'm doing. This is kind of under the hood what's going on with this method. DAVID SONTAG: So maybe we should try to put that in the language of what we saw in the class. And let me know if I'm getting this wrong. So you first estimate the Markov decision process, which allows you to simulate from the underlying data distribution. So you know the probability of the next observation, given the previous observations and actions, and then with that, you could then intervene and simulate forward. Because, if you remember, Frederick gave you three different buckets of approaches. Then he focused on the middle one. This is the left-most bucket. Right? AUDIENCE: Yes. DAVID SONTAG: So we didn't talk about it. AUDIENCE: No, [INAUDIBLE] model based on relevance. BARBRA DICKERMAN: Yeah. Yes. DAVID SONTAG: But it's very sensible. AUDIENCE: Yeah. But it seems very hard. BARBRA DICKERMAN: What's that? AUDIENCE: Sorry. Oh, it seems very hard to model this [INAUDIBLE]. BARBRA DICKERMAN: Yeah. So that is a challenge. That is the hardest part about this. And it's relying on a lot of assumptions, yeah. So the primary results that kind of come out after we do all of this. So this is the estimated risk of all-cause mortality under several physical activity interventions. So I'm not going to focus too much on the results.
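Before the takeaways, here is a minimal sketch of the Monte Carlo step just described, in a deliberately simplified setting: one time-varying confounder L, a binary treatment A, discrete time, and models that have already been fit in step 1. Every function name and interface here is hypothetical, everything is assumed to be vectorized over subjects, and the bootstrap for confidence intervals (step 3) is omitted.

```python
# Parametric g-formula, Monte Carlo step: simulate each subject forward under
# a dynamic strategy and average the subject-specific cumulative risks.
import numpy as np

def g_formula_risk(l_model, y_model, strategy, l0_samples, n_periods, rng):
    """Estimate cumulative risk under a dynamic strategy by forward simulation.

    l_model(prev_l, prev_a, rng) -> next L                (fitted covariate model)
    y_model(l, a)                -> P(death | L, A, alive) (discrete-time hazard model)
    strategy(l)                  -> treatment assigned given current L
    l0_samples                   -> observed baseline L values (empirical draw at time 0)
    """
    l = rng.choice(np.asarray(l0_samples), size=len(l0_samples), replace=True)
    surv = np.ones(len(l))                  # per-subject probability of still being alive
    for _ in range(n_periods):
        a = strategy(l)                     # force treatment to follow the strategy
        surv *= 1.0 - y_model(l, a)         # update survival with the model-based hazard
        l = l_model(l, a, rng)              # simulate the next confounder value
    return float(np.mean(1.0 - surv))       # average of the subject-specific cumulative risks

# Calling this once per pre-specified strategy (e.g. a hypothetical
# guideline_activity versus no_intervention), on the same fitted models and
# baseline draws, gives side-by-side cumulative risks like those on the slide.
```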
I want to focus on two main takeaways from this slide. One thing to emphasize is we pre-specified the weekly duration of physical activity. Or you can think of this like the dose of the intervention. We pre-specified that. And this was based on current guidelines. So the third row of each band, we did look at some dose or level beyond the guidelines to see if there might be additional survival benefits. But these were all pre-specified. We also pre-specified all of the time varying covariates that made these strategies dynamic. So I mentioned that men were excused from following the recommended physical activity levels if they developed one of these listed conditions, metastasis, MI, stroke, et cetera. We pre-specified all of those. It's possible that maybe a different dependence on a different time varying covariate may have led to a more optimal strategy. There was a lot that remained unexplored. So we did a lot of sensitivity analyses as part of this project. I'd like to focus, though, on the sensitivity analyses that we did for potential unmeasured confounding by chronic disease that may be severe enough to affect both physical activity and survival. And so the g-formula is actually providing a natural way to at least partly address this by estimating the risk of these physical activity interventions that are at each time point t only applied to men who are healthy enough to maintain a physical activity level at that time. And so again in the main analysis, we excused men from following the recommended levels if they developed one of these serious conditions. So in sensitivity analyses, we then expanded this list of serious conditions to also include the conditions that are shown in blue text. And so this attenuated our estimates but didn't change our conclusions. One thing to point out is that the validity of this approach rests on the assumption that at each time t we had available data needed to identify which men were healthy at that time enough to do the physical activity. Yeah. AUDIENCE: Sorry, just to double-check, does excuse mean that you remove them? BARBRA DICKERMAN: Great question. So because the strategy was pre-specified to say that if you develop one of these conditions, you may essentially do whatever level of physical activity you're able to do. So importantly-- I'm glad you brought this up-- we did not censor men at that time. They were still followed, because they were still adhering to the strategy as defined. Thanks for asking. And so given that we don't know whether the data contain at each time t the information necessary to know, are these men healthy enough at that time, we therefore conducted a few alternate analyses in which we lagged physical activity and covariate data by two years. And we also used a negative outcome control to explore potential unmeasured confounding by clinical disease or disease severity. So what's the rationale behind this? So in the DAGs below for the original analysis, we have physical activity A. We have survival Y. And this may be confounded by disease severity U. So when we see an association between A and Y in our data, we want to make sure that it's causal, that it's because of the blue arrow, and not because of this confounding bias, the red arrow. So how can we potentially provide evidence for whether that red pathway is there? 
We selected questionnaire nonresponse as an alternate outcome, instead of survival, that we assumed was not directly affected by physical activity, but that we thought would be similarly confounded by disease severity. And so when we repeated the analysis with a negative outcome control, we found that physical activity had a nearly null effect on questionnaire nonresponse, as we would expect, which provides some support that in our original analysis, the effect of physical activity on death was not confounded through the pathways explored through the negative control. So one thing to highlight here is the sensitivity analyses were driven by our subject matter knowledge. And there's nothing in the data that kind of drove this. And so just to recap this portion. So g-methods are a useful tool, because they let us validly estimate the effect of pre-specified dynamic strategies and estimate adjusted absolute risks, which are clinically meaningful to us, and appropriately adjusted survival curves, even in the presence of treatment confounder feedback, which occurs often in clinical questions. And of course, this is under our typical identifiability assumptions. So this makes it a powerful approach to estimate the effects of currently recommended or proposed strategies that therefore we can specify and write out precisely as we did here. However, these pre-specified strategies may not be the optimal strategies. So again, when I was doing this analysis, I was thinking there are so many different weekly durations of physical activity that we're not looking at. There are so many different time-varying covariates where we could have different dependencies on those for these strategies over time. And maybe those would have led to better survival outcomes among these men, but all of that was unexplored.
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019
12_Machine_Learning_for_Pathology.txt
PROFESSOR: All right, everyone, so we are very happy to have Andy Beck as our invited speaker today. Andy has a very unique background. He's trained both as a computer scientist and as a clinician. His specialty is in pathology. When he was a student at Stanford, his thesis was on how one could use machine learning algorithms to really understand a pathology data set, at the time using more traditional regression-style approaches, in what the field now calls computational pathology. But his work was really at the forefront of his field. Since then, he's come to Boston, where he was an attending and faculty at Beth Israel Deaconess Medical Center. In the recent couple of years, he's been running a company called PathAI, which is, in my opinion, one of the most exciting companies in AI in medicine. And he is my favorite invited speaker-- ANDY BECK: He says that to everyone. PROFESSOR: --every time I get an opportunity to invite someone to speak. And I think you'll be really interested in what he has to say. ANDY BECK: Great. Well, thank you so much. Thanks for having me. Yeah, I'm really excited to talk in this course. It is a super exciting time for machine learning in pathology. And if you have any questions throughout, please feel free to ask. And so for some background on what pathology is-- so, if you're a patient, you go to the doctor, and AI could apply in any aspect of this whole trajectory, and I'll talk specifically about pathology. So you go to the doctor. They take a bunch of data from you. You talk to them. They get signs and symptoms. Typically, if they're at all concerned, and it could be something that's a structural alteration that's not accessible just through taking blood work, say, like a cancer, which is one of the biggest things, they'll send you to radiology where they want to-- radiology is the best way of acquiring data to look for big structural changes. So you can't see single cells in radiology. But you can see inside the body and see some large things that are changing to make evaluations for, like, you have a cough, like are you looking at lung cancer, or are you looking at pneumonia? And radiology only takes you so far. And people are super excited about applying AI to radiology, but I think one thing they often forget is these images are not very data-rich compared to the core data types. I mean, this is my bias from pathology, but radiology gets you some part of the way, where you can sort of triage normal stuff. And the radiologist will have some impression of what they're looking at. And often, that's the bottom line in the radiology report is impression-- concerning for cancer, or impression-- likely benign but not sure, or impression-- totally benign. And that will also guide subsequent decisions. But if there's some concern that something serious is going on, the patient undergoes a pretty serious procedure, which is a tissue biopsy. So pathology requires tissue to do what I'm going to talk about, which is surgical pathology, which requires a tissue specimen. There's also blood-based things. But then this is the diagnosis where you're trying to say is this cancer? Is this not cancer? And that report by itself can really guide subsequent decisions, which could be no further treatment or a big surgery or a big decision about chemotherapy and radiotherapy.
So this is one area where you really want to incorporate data in the most effective way to reduce errors, to increase standardization, and to really inform the best treatment decision for each patient based on the characteristics of their disease. And the one thing about pathology that's pretty interesting is it's super visual. And this is just a kind of random sampling of some of the types of different imagery that pathologists are looking at every day. I think this is one thing that draws people to this specialty is a saying in radiology, you're sort of looking at an impression of what might be happening based on sending different types of images and acquiring the data and sort of trying to estimate what's going on. Whereas here, you're actually staining pieces of tissue and looking by eye at actual individual cells. You can look within cells. You can look at how populations of cells are being organized. And for many diseases, this still represents sort of the core data type that defines what's going on, and is this something with a serious prognosis that requires, say, surgery? Or is this something that's totally benign? All of these are different aspects of benign processes. And so just the normal human body creates all these different patterns. And then there's a lot of patterns of disease. And these are all different subtypes of disease that are all different morphologies. So there's sort of an incredible wealth of different visual imagery that the pathologist has to incorporate into their diagnosis. And then there's, on top of that, things like special stains that can stain for specific organisms, for infectious disease, or specific patterns of protein expression, for subtyping disease based on expression of drug targets. And this even more sort of increases the complexity of the work. So for many years, there's really nothing new about trying to apply AI or machine learning or computation to this field. It's actually a very natural field, because it's sort of laboratory-based. It's all about data processing. You take this input, things like images, and produces output, what a diagnosis is. So people have really been trying this for 40 years or so now. This is one of the very first studies that sort of just tried to see, could we train a computer to identify the size of cancer cells through a process they called morphometry, here on the bottom? And then could we just use sort of measurements about the size of cancer cells in a very simple model to predict outcome? And in this study, they have a learning set that they're learning from and then a test set. And they show that their system, as every paper that ever gets published shows, does better than the two competing approaches. Although even in this best case scenario, there's significant degradation from learning to test. So one, it's super simple. It's using very simple methods, and the data sets are tiny, 38 learning cases, 40 test cases. And this is published in The Lancet, which is the leading biomedical journal even today. And then people got excited about AI sort of building off of simple approaches. And back in 1990, it was thought artificial neural nets would be super useful for quantitative pathology for sort of obvious reasons. But at that time, there was really no way of digitizing stuff at any sort of scale, and that problem's only recently been solved. But sort of in 2000, people were first thinking about once the slides are digital, then you could apply computational methods effectively. 
But kind of nothing really changed, and still, to a large degree, hasn't changed for the predominance of pathology, which I'll talk about. But as was mentioned earlier, I was part of one of the first studies to really take a more machine learning approach to this. And what we mean by machine learning versus prior approaches is the idea of using data-driven analysis to figure out the best features. And now you can do that in an even more explicit way with machine learning, but there's sort of a progression from measuring one or two things in a very tedious way on very small data sets to, I'd say, this way, where we're using some traditional regression-based machine learning to measure larger numbers of features. And then using things like those associations, those features with patient outcome to focus your analyses on the most important ones. And the challenging machine learning task here and really one of the core tasks in pathology is image processing. So how do we train computers to sort of have the knowledge of what is being looked at that any pathologist would want to have? And there's a few basic things you'd want to train the computer to do, which is, for example, identify where's the cancer? Where's the stroma? Where are the cancer cells? Where are the fibroblasts, et cetera? And then once you train a machine learning based system to identify those things, you can then extract lots of quantitative phenotypes out of the images. And this is all using human-engineered features to measure all the different characteristics of what's going on in an image. And machine learning is being used here to create those features. And then we use other regression-based methods to associate these features with things like clinical outcome. And in this work, we show that by taking a data-driven approach, sort of, you begin to focus on things like what's happening in the tumor microenvironment, not just in the tumor itself? And it sort of turned out, over the past decade, that understanding the way the tumor interacts with the tumor microenvironment is sort of one of the most important things to do in cancer with things like fields like immunooncology being one of the biggest advances in the therapy of cancer, where you're essentially just regulating how tumor cells interact with the cells around them. And that sort of data is entirely inaccessible using traditional pathology approaches and really required a machine learning approach to extract a bunch of features and sort of let the data speak for itself in terms of which of those features is most important for survival. And in this study, we showed that these things are associated with survival. I don't know if you guys do a lot of Kaplan-Meier plots in here. PROFESSOR: They saw it once, but taking us through it slowly is never a bad idea. ANDY BECK: Yeah, so these are-- I feel there's one type of plot to know for most of biomedical research, and it's probably this one. And it's extremely simple. So it's really just an empirical distribution of how patients are doing over time. So the x-axis is time. And here, the goal is to build a prognostic model. I wish I had a predictive one in here, but we can talk about what that would look like. But a prognostic model, any sort of prognostic test in any disease in medicine is to try to create subgroups that show different survival outcomes. And then by implication, they may benefit from different therapies. They may not. 
That doesn't answer that question, but it just tells you if you want to make an estimate for how a patient's going to be doing in five years, and you can sub-classify them into two groups, this is a way to visualize it. You don't need two groups. You could do this with even one group, but it's frequently used to show differences between two groups. So you'll see here, there's a black line and a red line. And these are groups of patients where a model trained not on these cases was trained to separate high-risk patients from low-risk patients. And the way we did that was we did logistic regression on a different data set, sort of trying to classify patients alive at five years following diagnosis versus patients deceased within five years of diagnosis. We build a model. We fix the model. Then we apply it to this data set of about 250 cases. And then we just ask, did we actually effectively create two different groups of patients whose survival distribution is significantly different? So what this p-value is telling you is the probability that these two curves come from the same underlying distribution or that there's no difference between these two curves across all of the time points. And what we see here is there seems to be a difference between the black line versus the red line, where at, say, 10 years, the probability of survival is about 80% in the low-risk group and more like 60% in the high-risk group. And overall, the p-value's very small for there being a difference between those two curves. So that's sort of what a successful type of Kaplan-Meier plot would look like if you're trying to create a model that separates patients into groups with different survival distributions. And then it's always important for these types of things to try them on multiple data sets. And here we show the same model applied to a different data set, and it showed pretty similar overall effectiveness at stratifying patients into two groups. So why do you think doing this might be useful? I guess, yeah, anyone? Because there's actually, I think this type of curve is often confused with one that actually is extremely useful, which I would say-- yeah? PROFESSOR: Why don't you wait? ANDY BECK: Sure. PROFESSOR: Don't be shy. You can call them. ANDY BECK: All right. AUDIENCE: Probably you can use this to start off when the patient's high-risk, and probably at five years, if the patient has high risk, probably do a follow-up. ANDY BECK: Right, exactly. Yeah, yeah. So that would be a great use. PROFESSOR: Can you repeat the question for the recording? ANDY BECK: So the comment was saying, if you know someone's at a high risk of having an event prior to five years-- an event is when the curve goes down-- so definitely, the red group is at 40%, almost double or something the risk of the black group. So if you have certain interventions you can do to help prevent these things, such as giving an additional treatment or giving more frequent monitoring for recurrence. Like if you can do a follow-up scan in a month versus six months, you could make that decision in a data-driven way by knowing whether the patient's on the red curve or the black curve. So yeah, exactly right. It helps you to make therapeutic decisions when there's a bunch of things you can do, either give more aggressive treatment or do more aggressive monitoring of disease, depending on whether it's aggressive disease or non-aggressive disease.
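For readers who want to reproduce this kind of plot, here is a minimal from-scratch Kaplan-Meier sketch; the data arrays are hypothetical, and in practice a library such as lifelines would also provide the log-rank p-value being described.

```python
# Product-limit (Kaplan-Meier) estimate of a survival curve from
# right-censored follow-up data.
import numpy as np

def kaplan_meier(time, event):
    """Return (times, survival probabilities).

    time  : follow-up time for each patient
    event : 1 if the patient died at `time`, 0 if censored at that time
    """
    time, event = np.asarray(time, float), np.asarray(event, int)
    surv, curve_t, curve_s = 1.0, [0.0], [1.0]
    for t in np.unique(time[event == 1]):          # distinct event times, ascending
        at_risk = np.sum(time >= t)                # patients still being followed at t
        deaths = np.sum((time == t) & (event == 1))
        surv *= 1.0 - deaths / at_risk             # product-limit update
        curve_t.append(t)
        curve_s.append(surv)
    return np.array(curve_t), np.array(curve_s)

# Plotting kaplan_meier(t_high, e_high) and kaplan_meier(t_low, e_low) on the
# same axes reproduces the black-versus-red comparison; a log-rank test then
# gives the p-value for the two curves coming from the same distribution.
```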
The other type of curve that I think often gets confused with these, and that's quite useful, is one that directly tests the intervention. So essentially, you could do a trial of the usefulness, the clinical utility of this algorithm, where on the one hand, you make the prediction on everyone and don't do anything differently. And then the other one is you make a prediction on the patients, and you actually use it to make a decision, like more frequent treatment or more frequent intervention. And then you could do a curve, saying among the high-risk patients, where we actually acted on it, that's black. And if we didn't act on it, it's red. And then, if you do the experiment in the right way, you can make the inference that you're actually preventing death by 50% if the intervention is causing black versus red. Here, we're not doing anything with causality. We're just sort of observing how patients do differently over time. But frequently, you see these as the figure, the key figure for a randomized control trial, where the only thing different between the groups of patients is the intervention. And that really lets you make a powerful inference that changes what care should be. This one, you're just like, OK, maybe we should do something differently, but not really sure, but it makes intuitive sense. But if you actually have something from a randomized clinical trial or something else that allows you to infer causality, this is the most important figure. And you can actually infer how many lives are being saved or things by doing something. But this one's not about intervention. It's just about sort of observing how patients do over time. So that was some of the work from eight years ago, and none of this has really changed in practice. Everyone is still using glass slides and microscopes in the clinic. Research is a totally different story. But still, 99% of clinical practice is using these old-fashioned technologies-- microscopes from technology breakthroughs in the mid-1800s, staining breakthroughs in the late 1800s. The H and E stain is the key stain. So aspects of pathology haven't moved forward at all, and this has pretty significant consequences. And here's just a couple of types of figures that really allow you to see the primary data for what a problem interobserver variability really is in clinical practice. And this is just another, I think, really nice, empirical way of viewing raw data, where there is a ground truth consensus of experts, who sort of decided what all these 70 or so cases were-- not that experts always know the right answer. And all of these 70, they called the category of atypia, which here is indicated in yellow. And then they took all of these 70 cases that the experts called atypia and sent them to hundreds of pathologists across the country and for each one, just plotted the distribution of different diagnoses they were receiving. And quite strikingly-- and this was published in JAMA, a great journal, about four years ago now-- they show this incredible distribution of different diagnoses among each case. So this is really why you might want a computational approach: these should all be the same color. This should just be one big color or maybe a few outliers, but for almost any case, there's a significant proportion of people calling it normal, which is yellow-- or sorry, tan, then atypical, which is yellow, and then actually cancer, which is orange or red. PROFESSOR: What does atypical mean?
ANDY BECK: Yeah, so atypical is this border area between totally normal and cancer, where the pathologist is saying it's-- which is actually the most important diagnosis because totally normal you do nothing. Cancer-- there's well-described protocols for what to do. Atypia, they often overtreat. And that's sort of the bias in medicine: always assume the worst when you get a certain diagnosis back. So atypia has nuclear features of cancer but doesn't fully meet the criteria. You know, maybe you get 7 of the 10 criteria or three of the five criteria. And it has to do with sort of nuclei looking a little bigger and a little weirder than expected but not enough where the pathologist feels comfortable calling it cancer. And that's part of the reason that that shows almost a coin flip. Of the ones the experts called atypia, only 48% were agreed with in the community. The other interesting thing the study showed was intraobserver variability is just as big of an issue as interobserver. So a person disagrees with themselves after an eight-month washout period pretty much as often as they disagree with others. So another reason why computational approaches would be valuable and why this really is a problem. And this is in breast biopsies. The same research group showed quite similar results. This was in the British Medical Journal, in skin biopsies, which is another super important area, where, again, they have the same type of visualization of data. They have five different classes of severity of skin lesions, ranging from a totally normal benign nevus, like I'm sure many of us have on our skin, to a melanoma, which is a serious, malignant cancer that needs to be treated as soon as possible. And here, the white color is totally benign. The darker blue color is melanoma. And again, they show lots of discordance, pretty much as bad as in the breast biopsies. And here again, the intraobserver variability with an eight-month washout period was about 33%. So people disagree with themselves one out of three times. And then these aren't totally outlier cases or one research group. The College of American Pathologists did a big summary of 116 studies and showed overall, an 18.3% median discrepancy rate across all the studies and a 6% major discrepancy rate, which would be where a major clinical decision is the wrong one, like surgery versus no surgery, et cetera. And those are sort of in the ballpark of the previously published findings. So a lot of reasons to be pessimistic, but one reason to be very optimistic is the one area where AI is not-- not the one area, but maybe one of two or three areas where AI is not total hype is vision. Vision really started working well-- I don't know if you've covered it in this class-- with deep convolutional neural nets in 2012. And then all the groups sort of just kept getting incrementally better year over year. And now this is an old graph from 2015, but there's been a huge development of methods even since 2015, where now I think we really understand the strengths and the weaknesses of these approaches. And pathology sort of has a lot of the strengths, which is super well-defined, very focused questions. And I think there's lots of failures whenever you try to do anything more general. But for the types of tasks where you know exactly what you're looking for and you can generate the training data, these systems can work really well. So that's a lot of what we're focused on at PathAI: how do we extract the most information out of pathology images, really doing two things.
One is understanding what's inside the images and the second is using deep learning to sort of directly try to infer patient-level phenotypes and outcomes directly from the images. And we do use traditional machine learning models for certain things, particularly making inference at the patient level, where n is often very small. But anything that's directly operating on the image is almost always some variant of deep convolutional neural nets, which really are the state of the art for image processing. And a lot of what we think about at PathAI, and I think what's really important in this area of ML for medicine, is generating the right data set and then using things like deep learning to optimize all of the features in a data-driven way, and then really thinking about how to use the outputs of these models intelligently and really validate them in a robust way, because there's many ways to be fooled by artefacts and other things. So just some of the-- not to belabor the points, but why these approaches are really valuable in this application is it allows you to exhaustively analyze slides. So a pathologist, the reason they're making so many errors is they're just kind of overwhelmed. I mean, there's two reasons. One is humans aren't good at interpreting visual patterns. Actually, I think that's not the real reason, because humans are pretty darn good at that. And there are difficult things where we can disagree, but when people focus on small images, frequently they agree. But these images are enormous, and humans just don't have enough time to study carefully every cell on every slide. Whereas, the computer, in a real way, can be forced to exhaustively analyze every cell on every slide, and that's just a huge difference. It's quantitative. I mean, this is one thing the computer is definitely better at. It can compute huge numerators, huge denominators, and exactly compute proportions. Whereas, when a person is looking at a slide, they're really just eyeballing some percentage based on a very small amount of data. It's super efficient. So you can analyze-- this whole process is massively parallelizable, so you can almost do a slide as fast as you want based on how much you're willing to spend on it. And it allows you not only to do all of these, sort of, automation tasks exhaustively, quantitatively, and efficiently but also discover a lot of new insights from the data, which I think we did in a very early way, back eight years ago, when we sort of had human-extracted features and correlated those with outcome. But now you can really supervise the whole process with machine learning of how you go from the components of an image to patient outcomes and learn new biology that you didn't know going in. And everyone's always like, well, are you just going to replace pathologists? And I really don't think this is, in any way, the future. In almost every field where automation is becoming very common, the demand for people who are experts in that area is increasing. And airplane pilots are one example I was just learning about today. They just do a completely different thing than they did 20 years ago, and now it's all about mission control of this big system and understanding all the flight management systems and understanding all the data they're getting. And I think the job has not gotten necessarily simpler, but they're much more effective, and they're doing much different types of work.
And I do think the pathologist is going to move from sort of staring into a microscope with a literally very myopic focus on very small things to being more of a consultant with physicians, integrating lots of different types of data, things that AI is really bad at, a lot of reasoning about specific instances, and then providing that guidance to physicians. So I think the job will look a lot different, but we never really needed more diagnosticians in the future than in the past. So one example-- I think we sent out a reading about this-- is this concept of breast cancer metastasis as a good use case of machine learning. And this is just a patient example. So a primary mass is discovered. So one of the big determinants of the prognosis from a primary tumor is has it spread to the lymph nodes? Because that's one of the first areas that tumors metastasize to. And the way to diagnose whether tumors have metastasized to lymph nodes is to take a biopsy and then evaluate those for the presence of cancer where it shouldn't be. And this is a task that's very quantitative and very tedious. So the International Symposium on Biomedical Imaging organized this challenge called the CAMELYON16 Challenge, where they put together almost 300 training slides and about 130 test slides. And they asked a bunch of teams to build machine learning based systems to automate the evaluation of the test slides, both to diagnose whether the slide contained cancer or not, as well as to actually identify where in the slides the cancer was located. And kind of the big machine learning challenge here, why you can't just throw it into an off-the-shelf or on-the-web image classification tool, is the images are so large that it's just not feasible to throw the whole image into any kind of neural net. Because they can be between 20,000 and 200,000 pixels on a side. So they have millions of pixels. And for that, we do this process where we start with a labeled data set, where there are these very large regions labeled either as normal or tumor. And then we build procedures, which is actually a key component of getting machine learning to work well, of sampling patches of images and putting those patches into the model. And this sampling procedure is actually incredibly important for controlling the behavior of the system, because you could sample in all different ways. You're never going to sample exhaustively just because there's far too many possible patches. So thinking about the right examples to show the system has an enormous effect on both the performance and the generalizability of the systems you're building. And some of the, sort of, insights we learned were how best to do the, sort of, sampling. But once you have these samples, it's all data driven-- sure. AUDIENCE: Can you talk more about the sampling strategy schemes? ANDY BECK: Yeah, so from a high level, you want to go from random sampling, which is a reasonable thing to do, to more intelligent sampling, based on knowing what the computer needs to learn more about. And one thing we've done and-- so it's sort of like figuring-- so the first step is sort of simple. You can randomly sample. But then the second part is a little harder, to figure out what examples do you want to enrich your training set for to make the system perform even better? And there's different things you can optimize for, for that. So it's sort of like this whole sampling actually being part of the machine learning procedure is quite useful. And you're not just going to be sampling once.
You could iterate on this and keep providing different types of samples. So for example, if you learn that it's making certain types of errors, or it hasn't seen enough of certain-- there's many ways of getting at it. But if you know it hasn't seen enough types of examples in your training set, you can over-sample for that. Or if you have a confusion matrix and you see it's failing on certain types, you can try to figure out why is it failing on those and alter the sampling procedure to enrich for that. You could even provide outputs to humans, who can point you to the areas where it's making mistakes. Because often you don't have exhaustively labeled slides. In this case, we actually did have exhaustively labeled slides. So it was somewhat easier. But you can see there's even a lot of heterogeneity within the different classes. So you might do some clever tricks to figure out what are the types of the red class that it's getting wrong, and how am I going to fix that by providing it more examples? So I think, sort of, that's one of the easier things to control. Rather than trying to tune other parameters within these super complicated networks, in our experience, just playing with the training, the sampling piece of the training, it should almost just be thought of as another parameter to optimize for when you're dealing with a problem where you have humongous slides and you can't use all the training data. AUDIENCE: So decades ago, I met some pathologists who were looking at cervical cancer screening. And they thought that you could detect a gradient in the degree of atypia. And so not at training time but at testing time, what they were trying to do was to follow that gradient in order to find the most atypical part of the image. Is that still believed to be true? ANDY BECK: Yeah. That it's a continuum? Yeah, definitely. PROFESSOR: You mean within a sample and in the slides. ANDY BECK: Yeah, I mean, you mean just like a continuum of aggressiveness. Yeah, I think it is a continuum. I mean, this is more of a binary task, but there's going to be continuums of grade within the cancer. I mean, that's another level of adding on. If we wanted to correlate this with outcome, it would definitely be valuable to do that. To not just say quantitate the bulk of tumor but to estimate the malignancy of every individual nucleus, which we can do also. So you can actually classify, not just tumor region but you can classify individual cells. And you can classify them based on malignancy. And then you can get the, sort of, gradient within a population. In this study, it was just region-based, not cell-based, but you can definitely do that, and definitely, it's a spectrum. I mean, it's kind of like the atypia idea. Everything in biology is pretty much on a spectrum, like from normal to atypical to low-grade cancer, medium-grade cancer, high-grade cancer, and these sorts of methods do allow you to really more precisely estimate where you are on that continuum. And that's the basic approach. We get the big whole-slide images. We figure out how to sample patches from the different regions to optimize performance of the model during training time. And then during testing time, we just take a whole big whole-slide image. We break it into millions of little patches. Send each patch individually. We don't actually-- you could potentially use spatial information about how close they are to each other, which would make the process less efficient. We don't do that.
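To make that training-time sampling and test-time tiling concrete, here is a minimal Python sketch. The patch size, the class weights, and the predict_patch function are assumptions for illustration only; this is not the actual PathAI or CAMELYON16 pipeline, just the shape of the idea.

import numpy as np

PATCH_SIZE = 256  # patch edge length in pixels (assumed)

def sample_training_patches(label_mask, n_patches, class_weights):
    # label_mask: 2D array of region labels at patch resolution (e.g., 0 = normal, 1 = tumor).
    # class_weights: dict mapping label -> sampling weight; raise a weight to oversample a
    # class the model is currently getting wrong.
    coords = np.argwhere(label_mask >= 0)                     # every candidate patch location
    labels = label_mask[coords[:, 0], coords[:, 1]]
    weights = np.array([class_weights[int(l)] for l in labels], dtype=float)
    weights /= weights.sum()
    idx = np.random.choice(len(coords), size=n_patches, p=weights)
    return coords[idx], labels[idx]

def heatmap_for_slide(slide_patches, predict_patch):
    # slide_patches: array of tiles from one slide, shape (rows, cols, PATCH_SIZE, PATCH_SIZE, 3).
    # predict_patch: a trained CNN's forward pass returning P(tumor) for one patch (assumed).
    rows, cols = slide_patches.shape[:2]
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            heat[i, j] = predict_patch(slide_patches[i, j])   # each patch scored independently
    return heat

In practice, the class weights would be updated between training rounds based on where the confusion matrix shows the model failing, which is the sense in which the sampling procedure itself becomes another parameter to optimize.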
We just send them in individually and then visualize the output as a heat map. And this, I think, isn't in the reference I sent-- but the one I sent showed how you were able to combine the estimates of the deep learning system with the human pathologist's estimate to make the human pathologist's error rate go down by 85% and get to less than 1%. And the interesting thing about how these systems keep getting better over time and potentially they over-fit to the competition data set-- because I think we submitted, maybe, three times, which isn't that many. But over the course of six months after the first closing of the competition, people kept competing and making systems better. And actually, the fully automated system on this data set achieved an error rate of less than 1% by the final submission date, which was significantly better than both the pathologists in the competition, which is the error rate, I believe, cited in the initial arXiv paper. And also, they took the same set of slides and sent them out to pathologists operating in clinical practice, where they had really significantly higher error rates, mainly due to the fact that they were more constrained by time limitations in clinical practice than in the competition. And most of the errors they are making are false negatives. Simply, they don't have the time to focus on small regions of metastasis amid these humongous gigapixel-size slides. AUDIENCE: In the paper, you say you combined the machine learning outputs with the pathologists, but you don't really say how. Is it that they look at the heat maps, or is it just sort of combined? ANDY BECK: Yeah, no, it's a great question. So today, we do it that way. And that's the way in clinical practice we're building it, that the pathologists will look at both and then make a diagnosis based on incorporating both. For the competition, it was very simple, and the organizers actually did it. They interpreted them independently. So the pathologists just looked at all the slides. Our system made a prediction. It was literally the average of the probability that that slide contained cancer. That became the final score, and then the AUC went to 99% from whatever it was, 92%, by combining these two scores. AUDIENCE: I guess they make uncorrelated errors. ANDY BECK: Exactly. They're pretty much uncorrelated, particularly because the pathologists tend to have almost all false negatives, and the deep learning system tends to be fooled by a few things, like artefact. And they do make uncorrelated errors, and that's why there's a huge bump in performance. So I kind of made a reference to this, but any of these competition data sets are relatively easy to get really good at. People have shown that you can actually build models that just predict a data set using deep learning. Like, deep learning is almost too good at finding certain patterns and can find artefact. So it's just a caveat to keep in mind. We're doing experiments on lots of real-world testing of methods like this across many labs with many different staining procedures and tissue preparation procedures, et cetera, to evaluate the robustness. But that's why competition results, even ImageNet, always need to be taken with a grain of salt. But we sort of think the value add of this is going to be huge. I mean, it's hard to tell because it's such a big image, but this is what a pathologist today is looking at under a microscope, and it's very hard to see anything.
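The score combination the organizers used is simple enough to sketch: average the model's slide-level probability with the pathologist's independent call and compare AUCs. The data below is synthetic and only illustrates the computation; it is not meant to reproduce the competition numbers.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                                        # 1 = slide contains metastasis (synthetic)
model_prob = np.clip(0.7 * y + rng.normal(0.25, 0.2, size=200), 0, 1)   # model: strong, with a few artefact-driven false positives (assumed)
pathologist = (y & (rng.random(200) > 0.25)).astype(float)              # pathologist: mostly false negatives under time pressure (assumed)

print("model AUC:      ", roc_auc_score(y, model_prob))
print("pathologist AUC:", roc_auc_score(y, pathologist))
print("combined AUC:   ", roc_auc_score(y, (model_prob + pathologist) / 2))

Because the two error profiles are largely uncorrelated, the averaged score tends to beat either one alone, which is the bump in performance described above.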
And with a very simple visualization-- just the output of the AI system shown as red where cancer looks like it is-- it's clearly a sort of great map of the areas they need to be sure to focus on. And this is real data from this example, where this bright red area, in fact, contains this tiny little rim of metastatic breast cancer cells that would be very easy to miss without that assistant sort of just pointing you in the right place to look at, because it's a tiny set of 20 cells amid a big sea of all these normal lymphocytes. And here's another one that, again, now you can see from low power. It's like a satellite image or something, where you can focus immediately on this little red area, that, again, is a tiny pocket of 10 cancer cells amid hundreds of thousands of normal cells that are now visible from low power. So this is one application we're working on, where the clinical use case will be-- today, people are just sort of looking at images without the assistance of any machine learning. And they just have to kind of pick a number of patches to focus on with no guidance. So sometimes they focus on the right patches, sometimes they don't, but clearly they don't have time to look at all of this at high magnification, because that would take an entire day if you were trying to look at 40X magnification at the whole image. So they sort of use their intuition to focus. And for that reason, they end up, as we've seen, making a significant number of mistakes. It's not reproducible, because people focus on different aspects of the image, and it's pretty slow. And they're faced with this empty report. So they have to actually summarize everything they've looked at in a report. Like, what's the diagnosis? What's the size? So let's say there's cancer here and cancer here, they have to manually add the distances of the cancer in those two regions. And then they have to put this into a staging system that incorporates how many areas of metastasis there are and how big are they? And all of these things are pretty much automatable. And this is the kind of thing we're building, where the system will highlight where it sees cancer, tell the pathologist to focus there. And then based on the input of the AI system and the input of the pathologist, it can summarize all of that data, quantitative as well as diagnostic as well as summary staging. Sort of, if the pathologist then takes this as their first version of the report, they can edit it, confirm it, sign it out. That data goes back into the system, which can be used for more training data in the future, and the case is signed out. So it's much faster, much more accurate, and standardized once this thing is fully developed, which it isn't yet. So this is a great application for AI, because you really do need-- you actually do have a ton of data, so you need to do an exhaustive analysis that has a lot of value. It's a task where the local image data in a patch, which is really what this current generation of deep CNNs are really good at, is enough. So we're looking at things at the cellular level. Radiology actually could be harder, because you often want to summarize over larger areas. Here, you really often have the salient information in patches that really are scalable in current ML systems. And then we can interpret the output of the model.
So it really isn't-- even though the model itself is a black box, we can visualize the output on top of the image, which gives us incredible advantage in terms of interpretability of what the models are doing well, what they're doing poorly on. And it's a specialty, pathology, where sort of 80% is not good enough. We want to get as close to 100% as possible. And that's one sort of diagnostic application. The last, or one of the last examples I'm going to give has to do with precision immunotherapy, where we're not only trying to identify what the diagnosis is but to actually subtype patients to predict the right treatment. And as I mentioned earlier, immunotherapy is a really important and exciting, relatively new area of cancer therapy, which was another one of the big advances in 2012. Around the same time that deep learning came out, the first studies came out showing the effect of targeting a protein mostly on tumor cells but also on immune cells, the PD-1 or the PD-L1 protein, whose job when it's on is to inhibit immune response. But in the setting of cancer, the inhibition of immune response is actually bad for the patient, because the immune system's job is to really try to fight off the cancer. So they realized a very simple therapeutic strategy-- just having an antibody that binds to this inhibitory signal-- can sort of unleash the patient's own immune system to really end up curing really serious advanced cancers. And that image on the top right sort of speaks to that, where this patient had a very large melanoma. And then they just got this antibody to target, to sort of invigorate their immune system, and then the tumor really shrunk. And one of the big biomarkers for assessing which patients will benefit from these therapies is the tumor cell or the immune cell expressing this drug target PD-1 or PD-L1. And the one they test for is PD-L1, which is the ligand for the PD-1 receptor. So this is often the key piece of data used to decide who gets these therapies. And it turns out, pathologists are pretty bad at scoring this, not surprisingly, because it's very difficult, and there's millions of cells potentially per case. And they show an interobserver agreement of only 0.86 for scoring on tumor cells, which isn't bad, but 0.2 for scoring it on immune cells, which is super important. So this is a drug target. We're trying to measure to see which patients might get this life-saving therapy, but the diagnostic we have is super hard to interpret. And some studies, for this reason, have shown sort of mixed results about how valuable it is. In some cases, it appears valuable. In other cases, it appears it's not. So we want to see would this be a good example of where we can use machine learning? And for this type of application, this is really hard, and we want to be able to apply it across not just one cancer but 20 different cancers. So we built a system at PathAI for generating lots of training data at scale. And that's something that a competition just won't get you. Like that competition example had 300 slides. Once a year, they do it. But we want to be able to build these models every week or something. So now, we have something like 500 pathologists signed into our system that we can use to label lots of pathology data for us and to really build these models quickly and at really high quality. So now we have something like over 2 and 1/2 million annotations in the system. And that allows us to build tissue region models.
And this is immunohistochemistry in a cancer, where we've trained a model to identify all of the cancer epithelium in red, the cancer stroma in green. So now we know where the protein is being expressed, in the epithelium or in the stroma. And then we've also trained cellular classification. So now, for every single cell, we classify it as a cell type. Is it a cancer cell or a fibroblast or a macrophage or a lymphocyte? And is it expressing the protein, based on how brown it is? So while pathologists will try to make some estimate across the whole slide, we can actually compute for every cell and then compute exact statistics about which cells are expressing this protein and which patients might be the best candidates for therapy. And then the question is, can we identify additional things beyond just PD-L1 protein expression that are predictive of response to immunotherapy? And we've developed some machine learning approaches for doing that. And part of it's doing things like quantitating different cells and regions on H&E images, which currently aren't used at all in patient subtyping. But we can do analyses to extract new features here and to ask, even though nothing's known about these images and immunotherapy response, can we discover new features here? And this would be an example of the types of features we can now routinely quantify, using deep learning to extract these features on any case. And this is sort of like every sort of pathologic characteristic you can sort of imagine. And then we correlate these with drug response and can use this as a discovery tool for identifying new aspects of pathology predictive of which patients will respond best. And then we can combine these features into models. This is sort of a ridiculous example because they're so different. But this would be one example of the output of the model-- and this is totally fake data, but I think it just gets to the point. Here, the color indicates the treatment, where green would be the immunotherapy, red would be the traditional therapy, and the goal is to build a model to predict which patients actually benefit from the therapy. So this may be an easy question, but what do you think, if the model's working, what would the title of the graph on the right be versus the graph on the left, if these are the ways of classifying patients with our model, and the classifications are going to be responder class or non-responder class? And the color indicates the drug. AUDIENCE: The drug works or it doesn't work. ANDY BECK: That's right, but what's the output of the model? But you're right. The interpretation of these graphs is drug works, drug doesn't work. It's kind of a tricky question, right? But what is our model trying to predict? AUDIENCE: Whether the person is going to die or not? It looks like likelihood of death is just not as high on the right. ANDY BECK: I think the overall likelihood is the same on the two graphs, right versus left. You don't know how many patients are in each arm. But I think the one piece on it-- so green is experimental treatment. Red is conventional treatment. Maybe I already said that. So here, and it's sort of like a read-my-mind type question, but here the output of the model would be that responder to the drug would be the right class of patients. And the left class of patients would be non-responder to the drug.
So you're not actually saying anything about prognosis, but you're saying that I'm predicting that if you're in the right population of patients, you will benefit from the blue drug. And then you actually see that on this right population of patients, the blue drug does really well. And then the red drug are patients who we thought-- we predicted would benefit from the drug, but because it's an experiment, we didn't give them the right drug. And in fact, they did a whole lot worse. Whereas, the one on the left, we're saying you don't benefit from the drug, and they truly don't benefit from the drug. So this is the way of using an output of a model to predict drug response and then visualizing whether it actually works. And it's kind of like the example I talked about before, but here's a real version of it. And you can learn this directly using machine learning to try to say, I want to find patients who actually benefit the most from a drug. And then in terms of how do we validate our models are correct? I mean, we have two different ways. One is do stuff like that. So we build a model that says, respond to drug, don't respond to a drug. And then we plot the Kaplan-Meier curves. If it's image analysis stuff, we ask pathologists to hand label many cells, and we take the consensus of pathologists as our ground truth and go from there. AUDIENCE: The way you're presenting it, it makes it sound like all the data comes from the pathology images. But in reality, people look at single nucleotide polymorphisms or gene sequences or all kinds of clinical data as well. So how do you get those? ANDY BECK: Yeah, I mean, the beauty of the pathology data is it's always available. So that's why a lot of the stuff we do is focused on that, because every clinical trial patient has treatment data, outcome data, and pathology images. So it's like, we can really do this at scale pretty fast. A lot of the other stuff is things like gene expression, many people are collecting them. And it's important to compare these to baselines or to integrate them. I mean, two things-- one is compare to it as a baseline. What can we predict in terms of responder, non-responder using just the pathology images versus using just gene expression data versus combining them? And that would just be increasing the input feature space. Part of the input feature space comes from the images. Part of it comes from gene expression data. Then you use machine learning to focus on the most important characteristics and predict outcome. And the other is if you want to sort of prioritize. Use pathology as a baseline because it's available on everyone. But then an adjuvant test that costs another $1,000 and might take another two weeks, how much does that add to the prediction? And that would be another way. So I think it is important, but a lot of our technology, in developing our platform, is focused around how do we most effectively use pathology, and we can certainly add in gene expression data. I'm actually going to talk about that next-- one way of doing it. Because it's a very natural synergy, because they tell you very different things. So here's one example of integrating, just kind of relative to that question, gene expression data with image data, using The Cancer Genome Atlas-- and this is all public. So they have pathology images, RNA data, clinical outcomes. They don't have the greatest treatment data, but it's a great place for method development for sort of ML in cancer, including pathology-type analyses.
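The validation just described-- split trial patients by the model's predicted class, then compare the two treatment arms within each class with Kaplan-Meier curves-- can be sketched with the lifelines library. The file, column names, and class labels below are assumptions for illustration, not a real trial dataset.

import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical patient-level table with columns: time, event, arm, predicted_class.
df = pd.read_csv("trial_with_predictions.csv")

kmf = KaplanMeierFitter()
for group in ["predicted_responder", "predicted_non_responder"]:
    sub = df[df["predicted_class"] == group]
    drug = sub[sub["arm"] == "experimental"]
    control = sub[sub["arm"] == "conventional"]

    fig, ax = plt.subplots()
    kmf.fit(drug["time"], drug["event"], label="experimental").plot(ax=ax)
    kmf.fit(control["time"], control["event"], label="conventional").plot(ax=ax)
    ax.set_title(group)

    res = logrank_test(drug["time"], control["time"], drug["event"], control["event"])
    print(group, "log-rank p =", res.p_value)

If the model is doing its job, the two arms separate sharply in the predicted-responder panel and overlap in the predicted-non-responder panel, which is exactly the pair of graphs discussed above.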
So this is a case of melanoma. We've trained a model to identify cancer and stroma and all the different cells. And then we extract, as you saw, sort of hundreds of features. And then we can rank the features here by their correlation with survival. So now we're mapping from pathology images to outcome data, and we find, just in a totally data-driven way, that there's some small set of 15 features or so highly associated with survival. The rest aren't. And the top ranking one is an immune cell feature, increased area of stromal plasma cells, that is associated with increased survival. And this was an analysis that was really just linking the images with outcome. And then we can ask, well, what are the genes underlying this pathology? So pathology is telling you about cells and tissues. RNAs are telling you about the actual transcriptional landscape of what's going on underneath. And then we can rank all the genes in the genome just by their correlation with this quantitative phenotype we're measuring on the pathology images. And here are all the genes, ranked from 0 to 20,000. And again, we see a small set, which we're thresholding at a correlation of 0.4, strongly associated with the pathologic phenotype we're measuring. And then we sort of discover these sets of genes that are known to be highly enriched in immune cell genes. Sort of which is some form of validation that we're measuring what we think we're measuring, but also these sets of genes are potentially new drug targets, new diagnostics, et cetera, that were uncovered by going from clinical outcomes to pathology data to the underlying RNA signature. And then kind of the beauty of the approach we're working on is it's super scalable, and in theory, you could apply it to all of TCGA or other data sets and apply it across cancer types and do things like find-- automatically find artefacts in all of the slides and kind of do this in a broad way. And then sort of the most interesting part, potentially, is analyzing the outputs of the models and how they correlate with things like drug response or underlying molecular profiles. And this is really the process we're working on, is how do we go from images to new ways of measuring disease pathology? And kind of in summary, a lot of the technology development that I think is most important today for getting ML to work really well in the real world for applications in medicine is a lot about being super thoughtful about building the right training data set. And how do you do that in a scalable way and even in a way that incorporates machine learning? Which is kind of what I was talking about before-- intelligently picking patches. But that sort of concept applies everywhere. So I think there's almost more room for innovation on the side of defining the training data set than on the predictive modeling side, and then putting the two together is incredibly important. And for the kind of work we're doing, there's already such great advances in image processing. A lot of it's about engineering and scalability, as well as rigorous validation. And then how do we connect it with underlying molecular data as well as clinical outcome data? Versus trying to solve a lot of the core vision tasks, which there's already just been incredible progress on over the past couple of years. And in terms of our world, things we think a lot about are not just the technology and putting together our data sets but also, how do we work with regulators?
How do we make strong business cases for partners we're working with to actually change what they're doing to incorporate some of these new approaches that will really bring benefits to patients around quality and accuracy in their diagnosis? So in summary-- I know you have to go in four minutes-- this has been a longstanding problem. There's nothing new about trying to apply AI to diagnostics or to vision tasks, but there are some really big differences in the past five years-- even in my short career, I've seen a sea change in this field. One is availability of digital data-- it's now much cheaper to generate lots of images at scale. But even more important, I think, are the last two. Access to large-scale computing resources is a game-changer for anyone with access to cloud computing or large computing resources. Just, we all have access to a sort of arbitrary compute today, and 10 years ago, that was a huge limitation in this field. As well as these really major algorithmic advances, particularly deep CNNs for vision. And, in general, AI works extremely well when problems can be defined so you can get the right type of training data, access large-scale computing, and implement things like deep CNNs that work really well. And it sort of fails everywhere else, which is probably 98% of things. But if you can create a problem where the algorithms actually work and you have lots of data to train on, they can succeed really well. And this sort of vision-based AI-powered pathology is broadly applicable across, really, all image-based tasks in pathology. It does enable integration with things like omics data-- genomics, transcriptomics, SNP data, et cetera. And in the near future, we think this will be incorporated into clinical practice. And even today, it's really central to a lot of research efforts. And I just want to end on a quote from 1987, where in the future, AI can be expected to become staples of pathology practice. And I think we're much, much closer than 30 years ago. And I want to thank everyone at PathAI, as well as Hunter, who really helped put together a lot of these slides. And we do have lots of opportunities for machine learning engineers, software engineers, et cetera, at PathAI. So certainly reach out if you're interested in learning more. And I'm happy to take any questions, if we have time. So thank you. [APPLAUSE] AUDIENCE: Yes, I think generally very aggressive events. I was wondering how close is this to clinical practice? Is there FDA or-- ANDY BECK: Yeah, so I mean, actual clinical practice, probably 2020, like early, mid-2020. But I mean, today, it's very active in clinical research, so like clinical trials, et cetera, that do involve patients, but it's in a much more well-defined setting. But the first clinical use cases, at least of the types of stuff we're building, will be, I think, about a year from now. And I think it will start small and then get progressively bigger. So I don't think it's going to be everything all at once transforms in the clinic, but I do think we'll start seeing the first applications out. And they will go-- some of them will go through the FDA, and there'll be some laboratory-developed tests. Ours will go through the FDA, but labs themselves can actually validate tools themselves. And that's another path. AUDIENCE: Thanks. ANDY BECK: Sure. PROFESSOR: So have you been using observational data sets?
You gave one example where you tried to use data from a randomized controlled trial, or both trials, you used different randomized controlled trials for different efficacies of each event. The next major segment of this course, starting in about two weeks, will be about causal inference from observational data. I'm wondering if that is something PathAI has gotten into yet? And if so, what has your finding been so far? ANDY BECK: So we have focused a lot on randomized controlled trial data and have developed methods around that, which sort of simplifies the problem and allows us to do, I think, pretty clever things around how to generate those types of graphs I was showing, where you truly can infer the treatment is having an effect. And we've done far less. I'm super interested in that. I'd say the advantages of RCTs are people are already investing hugely in building these very well-curated data sets that include images, molecular data, when available, treatment, and outcome. And it's just that's there, because they've invested in the clinical trial. They've invested in generating that data set. To me, the big challenge in observational stuff, there's a few but I'd be interested in what you guys are doing and learn about it, is getting the data is not easy, right? The outcome data is not-- linking the pathology images with the outcome data even is, actually, in my opinion, harder in an observational setting than in an RCT. Because they're actually doing it and paying for it and collecting it in RCTs. No one's really done a very good job of-- TCGA would be a good place to play around with because that is observational data. And we want to also, we generally want to focus on actionable decisions. And an RCT is sort of perfectly set up for that. Do I give drug X or not? So I think if you put together the right data set and somehow make the results actionable, it could be really, really useful, because there is a lot of data. But I think just collecting the outcomes and linking them with images is actually quite hard. And ironically, I think it's harder for observational than for randomized controlled trials, where they're already collecting it. I guess one example would be the Nurses' Health Study or these big epidemiology cohorts, potentially. They are collecting that data and organizing it. But what were you thinking about? Do you have anything with pathology in mind for causal inference from observational data? PROFESSOR: Well, I think, the example you gave, like Nurses' Health Study or the Framingham study, where you're tracking patients across time. They're getting different interventions across time. And because of the way the study was designed, in fact, there are even good outcomes for patients across time. So that problem in the profession doesn't happen there. But then suppose you were to take it from a biobank and do pathology? You're now getting the samples. Then, you can ask about, well, what is the effect of different interventions or treatment plans on outcomes? The challenge, of course, of drawing inferences there is that there was bias in terms of who got what treatments. That's where the techniques that we talk about in class would become very important. I'll just say, I appreciate the challenges that you mentioned. ANDY BECK: I think it's incredibly powerful. I think the other issue I just think about is that treatments change so quickly over time. So you don't want to be like overfitting to the past.
But I think there's certain cases where the therapeutic decisions today are similar to what they were in the past. There are other areas, like immunooncology, where there's just no history to learn from. So I think it depends on the-- PROFESSOR: All right, then with that, let's thank Andy Beck. [APPLAUSE] ANDY BECK: Thank you.
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019
4_Risk_Stratification_Part_1.txt
DAVID SONTAG: Today we'll be talking about risk stratification. After giving you a broad overview of what I mean by risk stratification, we'll give you a case study which you read about in your readings for today's lecture coming from early detection of type 2 diabetes. And I won't be, of course, repeating the same material you read about it in your readings. Rather I'll be giving some interesting color around what are some of the questions that we need to be thinking about as machine learning people when we try to apply machine learning to problems like this. Then I'll talk about some of the subtleties. What can go wrong with machine learning based approaches to risk stratification? And finally, the last half of today's lecture is going to be a discussion. So about 3:00 PM, you'll see a man walk through the door. His name is Leonard D'Avolio. He is a professor at Brigham and Women's Hospital. He also has a startup company called Cyft, which is working on applying risk stratification now, and they have lots of clients. So they've been really deep in the details of how to make this stuff work. And so we'll have an interview between myself and him, and we'll have an opportunity for all of you to ask questions as well. And that's what I hope will be the most exciting part of today's lecture. Then going on beyond today's lecture, we're now in the beginning of a sequence of three lectures on very similar topics. So next Thursday, we'll be talking about survival modeling. And you can think about it as an extension of today's lecture, talking about what you should do if your data has censoring, which I'll define for you shortly. Although today's lecture is going to be a little bit more high level, next Thursday's lecture is where we're going to really start to get into mathematical details about how one should tackle machine learning problems with censored data. And then the following lecture after that is going to be on physiological data, and that lecture will also be much more technical in nature compared to the first couple of weeks of the course. So what is risk stratification? At a high level, you think about risk stratification as a way of taking in the patient population and separating out all of your patients into one of two or more categories. Patients with high risk, patients with low risk, and maybe patients somewhere in the middle. Now the reason why we might want to do risk stratification is because we usually want to try to act on those predictions. So the goals are often one of coupling those predictions with known interventions. So for example, patients in the high risk pool-- we will attempt to do something for those patients to prevent whatever that outcome of interest is from occurring. Now risk stratification is quite different from diagnosis. Diagnosis often has very, very stringent criteria on performance. If you do a misdiagnosis of something, that can have very severe consequences in terms of patients being treated for conditions that they didn't need to be treated for, and patients dying because they were not diagnosed in time. Risk stratification you think of as a little bit more fuzzy in nature. We want to do our best job of trying to push patients into each of these categories-- high risk, low risk, and so on. And as I'll show you throughout today's lecture, the performance characteristics that we'll often care about are going to be a bit different. We're going to look a bit more at quantities such as positive predictive value.
Of the patients we say are high risk, what fraction of them are actually high risk? And in that way, it differs a bit from diagnosis. Also as a result of the goals being different, the data that's used is often very different. In risk stratification, often we use data which is very diverse. So you might bring in multiple views of a patient. You might use auxiliary data such as patients' demographics, maybe even socioeconomic information about a patient, all of which very much affect their risk profiles but may not be used for an unbiased diagnosis of the patient. And finally in today's economic environment, risk stratification is very much targeted towards reducing cost in the US health care setting. And so I'll give you a few examples of risk stratification, some of which have cost as a major goal, others of which don't. The first example is that of predicting an infant's risk of severe morbidity. So this is a premature baby. My niece, for example, was born three months premature. It was really scary for my sister and my whole family. And the outcomes of patients who are born premature have really changed dramatically over the last century. And now patients who are born three months premature, like my niece, actually can survive and do really well in terms of long term outcomes. But of the many different inventions that led to these improved outcomes, one of them was having a very good understanding of how risky a particular infant might be. So a very common score that's used to try to characterize risk for infant birth, generally speaking, is known as the Apgar score. For example when my son was born, I was really excited when a few seconds after my son was delivered, the nurse took out a piece of paper and computed the Apgar score. I studied that-- really interesting, right? And then I got back to some other things that I had to do. But that score isn't actually as accurate as it could be. And there is this paper, which we'll talk about in a week and a half, by Suchi Saria who's a professor at Johns Hopkins, which looked at how one could use a machine learning based approach to really improve our ability to predict morbidity in infants. Another example, which I'm pulling from the readings for today's lecture, has to do with-- for patients who come into the emergency department with a heart related condition, try to understand do they need to be admitted to the coronary care unit? Or is it safe enough to let that patient go home and be managed by their primary care physician or their cardiologist outside of the hospital setting? Now that paper, you might have all noticed, was from 1984. So this isn't a new concept. Moreover, if you look at the amount of data that they used in that study, it was over 2,000 patients. They had a nontrivial number of variables, 50 something variables. And they used a non-trivial machine learning algorithm. They used logistic regression with feature selection built in to prevent themselves from overfitting to the data. And the final example I'll give right now is that of predicting likelihood of hospital readmission.
So this is something which is getting a lot of attention in the United States health care space over the last few years because of penalties which the US government has imposed on hospitals who have a large number of patients who have been released from the hospital, and then within the next 30 days readmitted to the hospital. And that's part of the transition to value based care, which Pete mentioned in earlier lectures. And so the premise is that there are many patients who are hospitalized but are not managed appropriately on discharge or after discharge. For example, maybe this patient who has a heart condition wasn't really clear on what they should have done when they go home. For example, what medications should they be taking? When should they follow up with their cardiologist? What things they should be looking out for, in terms of warning signs that they should go back to the hospital or call their doctor for. And as a result of that poor communication, it's conjectured that these poor outcomes might occur. So if we could figure out which of the patients are likely to have those readmissions, and if we could predict that while the patients are still in the hospital, then we could change the way that discharge is done. For example, we could send a nurse or a social worker to talk to the patient. Go really slowly through the discharge instructions. Maybe after the patient is discharged, one could have a nurse follow up at the patient's home over the next few weeks. And in this way, hopefully reduce the likelihood of that readmission. So at a high level, there's the old versus the new. And this is going to be really a discussion throughout the rest of today's lecture. What's changed since that 1984 article which you read for today's readings? Well, the traditional approaches to risk stratification are based on scoring systems. So I mentioned to you a few minutes ago, the Apgar scoring system is shown here. You're going to say for each of these different criteria-- activity, pulse, grimace, appearance, respiration-- you look at the baby, and you say well, activity is absent. Or maybe there's active movement. Appearance might be pale or blue, which would get 0 points, or completely pink which gets 2 points. And for each one of these answers, you add up the corresponding points. You get a total number of points. And you look over here and you say, OK, well if you have 0 to 3 points, the baby is at severe risk. If they have 7 to 10 points, then the baby is low risk. And there are hundreds of such scoring rules which have been very carefully derived through studies not dissimilar to the one that you read for today's readings, and which are actually widely used in the health care system today. But the times have been changing quite rapidly in the last 5 to 10 years. And now, what most of the industry is moving towards are machine learning based methods that can work with a much higher dimensional set of features and solve a number of key challenges of these early approaches. First-- and this is perhaps the most important aspect-- they can fit more easily into clinical workflows. So the scores I showed you earlier are often done manually. So one has to think to do the score. One has to figure out what the corresponding inputs are. And as a result of that, often they're not used as frequently as they should be. Second, the new machine learning approaches can get higher accuracy potentially, due to their ability to use many more features than the traditional approaches.
And finally, they can be much quicker to derive. So all of the traditional scoring systems had a very long research and development process that led to their adoption. First, you gather the data. Then you build the models. Then you check the models. Then you do an evaluation in one hospital. Then you do a prospective evaluation in many hospitals. And each one of those steps takes a lot of time. Now with these machine learning based approaches, it raises the possibility of a research assistant sitting in a hospital, or in a computer science department, saying oh, I think it would be really useful to derive a score for this problem. You take data that's available. You apply your machine learning algorithm. And even if it's a condition or an outcome which occurs very infrequently, if you have access to a large enough data set you'll be able to get enough samples in order to actually predict that somewhat narrow outcome. And so as a result, it really opens the door to rethinking the way that risk stratification can be used. But as a result, there are also new dangers that are introduced. And we'll talk about some of those in today's lecture, and we'll continue to talk about those in next Thursday's lecture. So these models are being widely commercialized. Here is just an example from one of many companies that are building risk stratification tools. This is from Optum. And what I'm showing you here is the output from one of their models which is predicting COPD related hospitalizations. And so you'll see that this is a population level view. So for all of the patients who are of interest to that hospital, they will score the patient-- using either one of the scores I showed you earlier, the manual ones, or maybe a machine learning based model-- and they'll be put into one of these different categories depending on the risk level. And then one can dig in deeper. So for example, you could click on one of those buckets and try to see well, who are the patients that are highest at risk. And what are some potentially impactable aspects of those patients' health? Here, I'm showing you a slightly different problem, that of predicting high risk diabetes patients. And you see that for each patient, we're listing the number of A1C tests, the value of the last A1C test, the day that it was performed. And in this way, you could notice oh, this patient is at high risk of having diabetes. But look, they haven't been tracking their A1C. Maybe they have uncontrolled diabetes. Maybe we need to get them into the clinic, get their blood tested, see whether maybe they need a change in medication, and so on. So in this way, we can stratify the patient population and think about interventions that can be done for that subset of them. So I'll move now into a case study of early detection of type 2 diabetes. The reason why this problem is of importance is because it's estimated that 25% of patients with type 2 diabetes in the United States are undiagnosed. And that number is equally large as you go to many other countries internationally. So if we can find patients who currently have diabetes or are likely to develop diabetes in the future, then we could attempt to impact them. So for example, we could develop new interventions that can prevent those patients from worsening in their diabetes progression. For example, weight loss programs or getting patients on first line diabetic treatments like Metformin.
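As a small illustration of the population-level view and the positive predictive value mentioned earlier, here is a sketch that buckets patients by predicted risk and reports the PPV of the top bucket. The scores, the labels, and the 1% cutoff are all synthetic assumptions, not output from any real model.

import numpy as np

rng = np.random.default_rng(1)
risk = rng.random(100_000)                                   # model's predicted risk for each patient (synthetic)
outcome = rng.random(100_000) < 0.02 + 0.2 * risk ** 3       # synthetic "developed the outcome" labels

high = risk >= np.quantile(risk, 0.99)                       # flag the top 1% as the high-risk bucket (assumed cutoff)
print(f"high-risk bucket: {high.sum()} patients, PPV = {outcome[high].mean():.1%}")
print(f"baseline incidence: {outcome.mean():.1%}")

Comparing the bucket's PPV against the baseline incidence is the kind of number that decides whether an intervention aimed only at the high-risk bucket is worth doing.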
But the key problem which I'll be talking about today is really, how do you find that at risk population? So the traditional approach to doing that is very similar to that Apgar score. This is a scoring system used in Finland which asks a series of questions and has points associated with each answer. So what's the age of the patient? What's their body mass index? Do they eat vegetables, fruit? Have they ever taken anti hypertension medication? And so on, and you get a final score out, right? Lower than 7 would be 1 in 100 risk of developing type 2 diabetes. Higher than 20 is very high risk. 1 in 2 people will develop type 2 diabetes in the next 10 years. But as I mentioned, these scores haven't had the impact that we had hoped that they might have. And the reason really is because they haven't been actually used nearly as much as they should be. So what we will be thinking through is, can we change the way in which risk stratification is done? Rather than it having to be something which is manually done, when you think to do it, we can make it now population wide. We could, for example, take data that's already available from a health insurance company, use machine learning. Maybe we don't have access to all of those features I showed you earlier. Maybe we don't know the patient's weight, but we will use machine learning on the data that we do have to try to find other surrogates of those things we don't have, which might predict diabetes risk. And then we can apply it automatically behind the scenes for millions of different patients and find the high risk population and perform interventions for those patients. And by the way, the work that I'm telling you about today is work that really came out of my lab's research in the last few years. So this is an example going back to the set of stakeholders, which we talked about in the first lecture. This is an example of a risk stratification being done at the payer level. So the data which is going to be used for this problem is administrative data, data that you typically find in health insurance companies. So I'm showing you here a single patient's timeline and the type of data that you would expect to be available for that patient across time. In red, it's showing their eligibility records. When had they been enrolled in that health insurance? And that's really important, because if they're not enrolled in the health insurance on some month, then the lack of data for that patient isn't because nothing happened. It's because we just don't have visibility into it. It's missing. In green, I'm showing medical claims which are associated with diagnosis codes that Pete talked about last week, procedure codes, CPT codes. We know what the specialist was that the patient went to see, like cardiologists, primary care physician, and so on. We know where the service was performed, and we know when it was performed. And then from pharmacy, we have access to medication records shown in the top right there. We know what medication was prescribed, and we have it coded to the NDC code-- National Drug Code, which Pete talked about again last Tuesday. We know the number of days' supply of the medication, the number of refills that are available still, and so on. And finally, we have access to laboratory tests. Now traditionally, health insurance companies only know what tests were performed because they have to pay for that test to be performed. 
But more and more, health insurance companies are forming partnerships with companies like Quest and LabCorp to actually get access also to the results of those lab tests. And in the data set that I'll tell you about today, we actually do have those lab test results as well. So what are these elements for this population? This population comes from Philadelphia. So if we look at the top diagnosis codes, for example, we'll see that of 135,000 patients who had laboratory data, there were over 400,000 different diagnosis codes for hypertension. You'll notice that's greater than the number of people. That's because they occurred multiple times across time. Other common diagnosis codes included hyperlipidemia, hypertension, type 2 diabetes. And you'll notice that there's actually quite a bit of interesting detail here. Even in diagnosis codes, you'll find things that sound more like symptoms-- like fatigue, which is over here. Or you also have records of procedures, in many cases. Like they got a vaccination for influenza. Here's another example. This is now just telling you something about the broad statistics of laboratory tests in this population. Creatinine, potassium, glucose, liver enzymes are all the most popular lab tests. And that's not surprising, because often there is a panel called the CBC panel which is what you would get in your annual physical. And that has many of these top laboratory test results. But then as you look down into the tail, there are many other laboratory test results that are more specialized in nature. For example, hemoglobin A1C is used to track a roughly 3-month average of blood glucose and is used to understand a patient's diabetes status. So that's just to give you a sense of what is the data behind the scenes. Now let's think, how do we really derive-- how do we tackle-- how do we formulate this risk stratification problem as a machine learning problem? Well today, I'll give you one example of how to formulate it as a machine learning problem. But in Tuesday's lecture, I'll tell you several other ways. Here, we're going to think about a reduction to binary classification. We're going to go back in time. We're going to pretend it's January 1, 2009. We're going to say suppose that we had run this risk stratification algorithm on every single patient on January 1, 2009. We're going to construct features from the data in the past, so the past few years. We're going to predict something about the future. And there are many things you could attempt to predict about the future. I'm showing you here 3 different prediction tasks corresponding to different gaps-- a 0 year gap, a 1 year gap, and a 2 year gap. And for each one of these, it asks will the patient newly develop type 2 diabetes in that prediction window? So for example, for this prediction task we're going to exclude patients who have developed type 2 diabetes between 2009 and 2011. And we're only going to count as positives patients who get newly diagnosed with type 2 diabetes between 2011 and 2013. And one of the reasons why you might want to include a gap in the model is because often, there's label leakage. So if you look at the very top setup, often what happens is a clinician might have a really good idea that the patient might be diabetic, but it's not yet coded in a way which our algorithms can pick up. And so on January 1, 2009 the primary care physician for the patient might be well aware that this patient is diabetic, might already be doing interventions based on it.
But our algorithm doesn't know that, and so that patient, because of the signals that are present in the data, is going to be at the very top of our prediction list. We're going to say this patient is someone you should be going after. But that's really not an interesting patient to be going after, because the clinicians are probably already doing interventions that are relevant for that patient. Rather, we want to find the patients where the diabetes might be more unexpected. And so this is one of the subtleties that really arises when you try to use retrospective clinical data to derive your labels to use within machine learning for risk stratification. So in the result I'll tell you about, I'm going to use a 1 year gap. Another problem is that the data is highly censored. So what I mean by censoring is that we often don't have full visibility into the data for a patient. For example, patients might have only come into the health insurance in 2013, and so on January 1, 2009 we have no data on them. They didn't even exist in the system at all. So there are two types of censoring. One type of censoring is called left censoring. It means when we don't have data to the left, for example in the feature construction window. Another type of censoring is called right censoring. It means when we don't have data about the patient to the right of that time line. And for each one of these in our work here, we tackle it in a different way. For left censoring, we're going to deal with it. We're going to say OK, we might have limited data on patients. But we will use whatever data is available from the past 2 years in order to make our predictions. And for patients who have less data available, that's fine. We have sort of a more sparse feature vector. For right censoring, it's a little bit more challenging to deal with in this binary reduction, because if you don't know what the label is, it's really hard to use within, for example, a supervised machine learning approach. In Tuesday's lecture, I'll talk about a way to deal with right censoring. In today's lecture, we're going to just ignore it. And the way that we'll ignore it is by changing the inclusion and exclusion criteria. We will exclude patients for whom we don't know the label. And to be clear, that could be really problematic. So for example, imagine if you go back to this picture here. Imagine that we're in this scenario. And imagine that if we only have data on a patient up to 2011, we remove them from the data set, OK? Because we don't have full visibility into the 2010 to 2012 time window. Well, suppose that exactly the day before the patient was going to be removed from the data set-- right before the data disappears for the patient because, for example, they might change health insurers-- they were diagnosed with type 2 diabetes. And maybe the reason why they changed health insurers had to do with them being diagnosed with type 2 diabetes. Then we've excluded that patient from the population, and we might be really biasing the results of the model, by now taking away a whole set of the population where this model would've been really important to apply. So thinking about how you really do this inclusion and exclusion and how that changes the generalizability of the model you get is something that should be at the top of your mind. So the machine learning algorithm used in that paper which you've read is L1 regularized logistic regression.
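Before turning to the model itself, here is a rough pandas sketch of the gap-based label construction and the exclusions just described. Every file name, column name, code list, and date in it is an assumption for illustration; it is not the actual study pipeline.

import pandas as pd

INDEX_DATE = pd.Timestamp("2009-01-01")
GAP_END = pd.Timestamp("2010-01-01")        # one-year gap after the index date
WINDOW_END = pd.Timestamp("2012-01-01")     # end of the prediction window

claims = pd.read_csv("claims.csv", parse_dates=["date"])      # hypothetical: patient_id, date, icd9
enroll = pd.read_csv("enrollment.csv", parse_dates=["end"])   # hypothetical: patient_id, end of coverage

# Crude proxy for a diabetes diagnosis code; a real study would use a validated code list.
t2dm = claims[claims["icd9"].astype(str).str.startswith("250")]
first_dx = t2dm.groupby("patient_id")["date"].min()

cohort = enroll.set_index("patient_id")
cohort["first_dx"] = first_dx

# Exclude anyone already diagnosed before the prediction window opens: prevalent cases
# and gap-period diagnoses, which are the likely label-leakage cases.
cohort = cohort[~(cohort["first_dx"] < GAP_END)]

# Exclude anyone we cannot follow through the full prediction window (right censoring).
cohort = cohort[cohort["end"] >= WINDOW_END]

# Positive label: the first diagnosis falls inside the prediction window.
cohort["label"] = cohort["first_dx"].between(GAP_END, WINDOW_END)

Note that the last exclusion is exactly the step that can bias the cohort in the way described above, which is why the censoring-aware methods of the next lecture matter.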
One of the reasons for using L1 regularized logistic regression is because it provides a way to use a high dimensional feature set. But at the same time, it allows one to do feature selection. So I'll go more into detail on that in just a moment. All of you should be familiar with the idea of formulating machine learning as an optimization problem where you have some loss function, and you have some regularization term-- w, in this case, is the weights of your linear model, which we're trying to learn. For those of you who've seen support vector machines before, support vector machines will use what's called L2 regularization where we'll be putting a penalty on the L2 norm of the weight vector. Instead, what we did in this paper is used L1 regularization. So this penalty is defined over here. It's summing over the features and looking at the absolute value for each of the weights and summing those up. So one of the reasons why L1 regularization has what's known as a sparsity benefit can be explained by this picture. So this is just a demonstration by sketch. Suppose that we're trying to solve this optimization problem here. So this is the level set of your loss function. It's a quadratic function. And suppose that instead of adding on your regularization as a second term to your optimization problem, you were to instead put in a constraint. So you might say we're going to minimize the loss subject to the L1 norm of your weight vector being less than 3. Well, then what I'm showing you here is weight space. I'm showing you 2 dimensions. This x-axis is weight 1. This y-axis is weight 2. And if you put an L1 constraint-- for example, you said that the sum of the absolute values of weight 1 and weight 2 has to be equal to 1-- then the solution space has to be along this diamond. On the other hand, if you put an L2 constraint on your weight vector, then it would correspond to this feasibility space. For example, this would say something like the L2 norm of the weight vector has to be equal to 1. So it would be a ball, saying that the radius has to always be equal to 1. So suppose now you're trying to minimize that objective function, subject to the solution having to be either on the ball, which is what you would do if you were optimizing the L2 norm, versus living on this diamond, which is what would happen if you're optimizing the L1 norm. Well, the optimal solution is going to be in essence the point on the constraint set-- the ball or the diamond-- which gets as close as possible to the middle of that level set. So over here, the closest point is that one. And you'll see that this point has a non-zero w1 and w2. Over here, the closest point is over here. Notice that it has a zero value of w1 and a non-zero value of w2, thus it's found a sparser solution than this one. So this is just to give you some intuition about why using L1 regularization results in sparse solutions to your optimization problem. And that could be beneficial for two purposes. First, it can help prevent overfitting in settings where there exists a very good risk model that uses a small number of features. And to point out, that's not a crazy idea that there might exist a risk model that uses a small number of features, right? Remember, think back to that Apgar score or the FINDRISC, which was used to predict diabetes in Finland. Each of those had only 5 to 20 questions. And based on the answers to those 5 to 20 questions, one could get a pretty good idea of what the risk is of that patient, right?
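To see the sparsity effect concretely, here is a small, self-contained sketch on synthetic data (not the paper's data) comparing L1 and L2 penalties in scikit-learn; the exact counts will vary with the regularization strength C.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n, d = 2000, 200
    X = rng.normal(size=(n, d))
    # Only the first 5 features actually drive the outcome.
    logits = X[:, :5] @ np.array([2.0, -1.5, 1.0, 0.8, -0.5])
    y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

    l1 = LogisticRegression(penalty="l1", C=0.1, solver="liblinear").fit(X, y)
    l2 = LogisticRegression(penalty="l2", C=0.1, solver="liblinear").fit(X, y)

    print("non-zero weights with L1:", int(np.sum(l1.coef_ != 0)))  # a handful, near the 5 informative features
    print("non-zero weights with L2:", int(np.sum(l2.coef_ != 0)))  # essentially all 200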
So the fact that there might be a small number of features that are together sufficient is actually a very reasonable prior. And it's one reason why L1 regularization is actually very well suited to these types of risk stratification problems on this type of data. The second reason is one of interpretability. If one wants to then ask, well, what are the features that actually were used by this model to make predictions? When you find only 20 or a few hundred features, you can enumerate all of them and look to see what they are. And in that way, understand what is going into the predictions that are made. And that also has a very big impact when it comes to translation. So suppose you built a model using data from this health insurance company. And this health insurance company just happened to have access to a huge number of features. But now you want to go somewhere else and apply the same model. If what you've learned is a model with only a few hundred features, you're able to dwindle it down. Then it provides an opportunity to deploy your model much more easily. The next place you go to, you only need to get access to those features in order to make your predictions. So I'll finish up in the next 5 minutes in order to get to our discussion with Leonard. But I just want to recap what are the features that go into this model, and what are some of the evaluations that we use. So the features that we used here were ones that were designed to take into consideration that there is a lot of missing data for patients. So rather than think through do we impute this feature, do we not impute this feature, we simply look to see were these features ever observed? So we choose our feature space in order to already account for the fact that there's a lot missing. For example, we look to see what types of specialists has this patient seen in the past, been to in the past? For every possible specialist, we put a 1 in the corresponding dimension if the patient has seen that type of specialist and 0 otherwise. For the top 1,000 most common medications, we look to see has the patient ever taken this medication, yes or no? And again, 0 or 1 in the corresponding dimension. For laboratory tests, that's where we do something which is a little bit different. We look to see, first of all, was a laboratory test ever administered? And then we say OK, if it was administered, was the result ever low, out of bounds on the lower side? Was the result ever high? Was the result ever normal? Is the value increasing? Is the value decreasing? Is the value fluctuating? And notice that each one of these quantities is well-defined, even for patients who don't ever have any laboratory test results available, right? The answer would be 0, it was never administered. And 0, it was never low. 0, it was never high, and so on. OK? AUDIENCE: Is the value increasing? Is it every time, or how do you define it? DAVID SONTAG: So increasing here-- first of all, if there is only a single value observed then it's 0. If there were at least 2 values observed, then you look to see was there ever any adjacent pair of observations where the second one was higher than the first one? That's the way it was defined here. AUDIENCE: Then it has increased and then decreased. You put 1 and 1 on the [INAUDIBLE]. DAVID SONTAG: Correct. That's what we did here. And it's extremely simple, right? So there are lots of better ways that you could do this.
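As a sketch of just how simple these lab-test indicators are, here is one way to compute them for a single lab test; the field names and data layout are assumptions, not the paper's actual pipeline.

    def lab_features(results):
        """results: time-ordered list of (value, flag) pairs for one lab test,
        where flag is 'low', 'high', or 'normal'. Every indicator is well-defined
        even when the list is empty, i.e. the test was never administered."""
        values = [v for v, _ in results]
        flags = [f for _, f in results]
        went_up = any(b > a for a, b in zip(values, values[1:]))
        went_down = any(b < a for a, b in zip(values, values[1:]))
        return {
            "ever_administered": int(len(results) > 0),
            "ever_low": int("low" in flags),
            "ever_high": int("high" in flags),
            "ever_normal": int("normal" in flags),
            "increasing": int(went_up),          # some adjacent pair went up
            "decreasing": int(went_down),        # some adjacent pair went down
            "fluctuating": int(went_up and went_down),
        }

In the full feature set, indicators like these are then recomputed per time bucket and concatenated, as described next.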
And in fact, this is an example which we'll come back to perhaps a little bit in the next lecture and then more in subsequent lectures when we talk about using recurrent neural networks to try to summarize time series data. Because one could imagine that using such an approach could actually automatically learn such features. AUDIENCE: Just to double check, is fluctuating one of the other two [INAUDIBLE]? DAVID SONTAG: Fluctuating is exactly the scenario that was just described. It can go up, and then it goes down. Has to do both, yeah. Yep? AUDIENCE: It said in the first question, [INAUDIBLE] together. Was the test ever administered [INAUDIBLE]? And the value you have there is 1. DAVID SONTAG: Correct. So indeed, there is a huge amount of correlation between these features. If any of these were 1, then this is also going to be 1. AUDIENCE: Especially the results. DAVID SONTAG: Yeah, but you would still want to include this 1 in here. So imagine that all of these were 0. You don't know if they're 0 because these things didn't happen or because the test was never performed. AUDIENCE: Are the low, high, normal-- DAVID SONTAG: They're just binary indicators here, right? AUDIENCE: Doesn't it have to fit into one category? DAVID SONTAG: Well, no. Oh, I see what you're saying. So you're saying if the result was ever present, then it would be at least 1 of these 3. Maybe. It gets into some of the technical details which I don't remember right now. It was a good question. And this is the next really important detail. The way I just described this, there was no notion of time in that. But of course when these things happened can be really important. So the next thing we do is we re-compute all of these features for different time buckets. So we compute them for the last 6 months of history, for the last 24 months of history, and then for all of the past history. And we concatenate together all of those feature vectors, and what you get out in this case was something like a 42,000-dimensional feature vector. By the way, it's 42,000-dimensional and not higher because the features that we used for diagnosis codes for this paper were not temporal in nature. And one could easily make them temporal in nature, in which case it'd be more like 60,000 features. I'm going to skip over the deriving of labels and get back to that next time. I just want to briefly talk about how does one evaluate these types of models. And I'll give you one view on evaluations, and shortly we'll hear a very different type of view. So here, what I'm showing you are the variables that have been selected by the model and have non-zero weight. So for example, at the very top you see impaired fasting glucose, which is used by the model. It's not surprising because we're trying to predict is the patient likely to develop type 2 diabetes. Now you might ask, if a patient has a diagnosis code for impaired fasting glucose aren't they already diabetic? Shouldn't they have been excluded? And the answer is no, because there are also patients who are pre-diabetic in this data set, who have been intentionally included because we don't know which of them are going to go on to develop type 2 diabetes. And so this is an indicator that the patient has been previously flagged as being pre-diabetic. And it obviously makes sense that it would be at the very top of the predictive variables. But there are also many things that are a little bit less obvious.
For example, here we see obstructive sleep apnea and esophageal reflux as being chosen by the model to be predictive of the patient developing type 2 diabetes. What we would conjecture is that those variables, in fact, act as surrogates for the patient being obese. Obesity is very seldom coded in commercial health insurance claims. And so even when obesity itself is not recorded as a variable, patients who are obese often have what's called sleep apnea. So they might stop breathing for short periods of time during their sleep. And so that then would be a sign of obesity. So I talked about how the criteria which we use to evaluate risk stratification models are a little bit different from the criteria used to evaluate diagnosis models. Here I'll tell you one of the measures that we often use, and it's called positive predictive value. So what we'll do is, after you've learned your model, look at the top 100 predictions, top 1,000 predictions, top 10,000 predictions, and look to see what fraction of those patients went on to actually develop type 2 diabetes. Now of course, this is done using held-out data. Now the reason why you might be interested in different levels is because you might want to target different interventions depending on the risk and cost. For example, a very low cost intervention-- one of the ones that we did-- was sending a text message to patients who are suspected to have high risk of developing type 2 diabetes. If they've not been to see their eye doctor in the last year, we send them a text message saying maybe you want to go see your eye doctor. Remember, you get a free eye checkup. And this is a very cheap intervention, and it's a very subtle intervention. The reason why it can be effective is because patients who develop type 2 diabetes, once that diabetes progresses it leads to something called diabetic retinopathy, which is often caught in an eye exam. And so that could be one mechanism for patients to be diagnosed. And so since it's so cheap, you could do it for 10,000 people. So you take the 10,000 most risky people. You apply the intervention for them, and you look to see which of those people actually had developed diabetes in the future. In the model that I showed you, 10% of that population went on to develop type 2 diabetes 1 to 3 years from then. The comparison point I'm showing you here, this blue bar, is if you used a model which is derived using a very small number of features, so not a machine learning based approach. And there, only 6% of the people went on to develop type 2 diabetes from the top 10,000. On the other hand, other interventions you might want to do are much more expensive. So for example, you might only be able to do that intervention for 100 people because it costs so much money, and you have a limited budget as a health insurer. And so for those people, you could ask well, what is the positive predictive value of those top 100 predictions? And here, that was 15% using the machine learning based model and less than half of that using the more traditional approach. So I'm going to stop here. There's a lot more that I can and will say. But I'll have to get to it in next Thursday's lecture, because I'd like our guest to come down, and we will have a bit of a discussion. To be clear, this is the first time that we've ever had this type of class interaction which is why, by the way, I ran a little bit late. I hadn't ever done something like this before. So it's an experiment.
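(As a quick reference for the metric used in those results: positive predictive value at the top k predictions amounts to a few lines of code. The variable names below are placeholders, not code from the study.)

    import numpy as np

    def ppv_at_k(y_true, scores, k):
        """Fraction of the k highest-scoring patients who truly developed the outcome."""
        top_k = np.argsort(scores)[::-1][:k]
        return float(np.asarray(y_true)[top_k].mean())

    # e.g. ppv_at_k(y_heldout, model.predict_proba(X_heldout)[:, 1], k=10000)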
Let's see what happens. So, do you say Leonard? LEONARD D'AVOLIO: Len's fine. DAVID SONTAG: Len, OK. So Len, could you please introduce yourself? LEONARD D'AVOLIO: Sure. My name is Len D'Avolio. I'm an assistant professor at Harvard Medical School. I am also the CEO and founder of a company called Cyft. Do you want a little bit of background or no? DAVID SONTAG: Yeah, a little bit of background. LEONARD D'AVOLIO: Yeah, so I've spent probably the last 15 years or so trying to help health care learn from its data in new ways. And of all the fields that need your help, I would say health care, both for societal reasons but also just from a where-we're-at-with-our-ability-to-use-data standpoint, is a great place for you guys to invest your time. I've been doing this for government, in academia as a researcher, publishing papers. I've been doing this for non-profits in this country and a few others. But every single project that I've been a part of has been an effort to bring in data that has always been there, but we haven't been able to learn from until now. And whether that's at the VA, building out their genomic science infrastructure, recruiting and enrolling a million veterans to donate their blood and their EMR, or at Ariadne Labs, out of the Harvard School of Public Health and the Brigham, improving childbirth in India-- it's all about how can we get a little bit better over and over again to make health care a better place for folks. DAVID SONTAG: So tell me, what is risk stratification from your perspective? Defining that I found to be one of the most difficult parts of today's lecture. LEONARD D'AVOLIO: Well, thank you for challenging me with it. [LAUGHTER] So it's a rather generic term, and I think it depends entirely on the problem you're trying to solve. And every time I go at this, you really have to ground yourself in the problem that you're trying to solve. Risk could be running out of a medical supply in an operating room. Risk could be an Apgar score. Risk could be going from pre-diabetic to diabetic. Risk could be an older person falling down in their home. So really, what is it to me? I'm very much caught up in the tools analogy. These are wonderful tools with which a skilled craftsman surrounded by others that have skills could go ahead and solve very specific problems. This is a hammer. It's one that we spend a lot of time refining and applying to solve problems in health care. DAVID SONTAG: So why don't you tell us about some of the areas where your company has been applying risk stratification today at a very high level. And then we'll choose one of them to dive a bit deeper into. LEONARD D'AVOLIO: Sure. So the way we describe what we do is it's performance improvement. And I'm just giving you a little background, because it'll tell you which problems I'm focused on. So it's performance improvement, and to be candid, the types of things we like to improve the performance of are how do we keep people out of the hospital. I'm not going to soapbox on this too much, but I think it matters. Like the example that you gave, the one you were employed to help solve, was by an insurer, and insurance companies-- there's probably 30 industries in health care. It's not one industry. And every one of them has different and oftentimes competing incentives. And so the most logical application for these technologies is to help do preventative things. But only about, depending on your math, between 8% and 12% of health care is financially incentivized to do preventative things.
The rest are the hospitals and the clinics. And when you think of health care, you probably think of those types of organizations. They don't typically pay to keep you out of those facilities. DAVID SONTAG: So as a company, you know, you've got to make a profit. So you need to focus on the ones where there's a financial incentive. LEONARD D'AVOLIO: You focus on where there's a financial incentive. And in my case, I wanted to build a company where the financial incentive aligned with keeping people healthy. DAVID SONTAG: So what are some of these examples? LEONARD D'AVOLIO: Sure. So we do a lot with older populations. With older populations, it becomes very important to understand who care managers should approach, because their risk levels are rising. A lot of risk stratification, the old way that you described, identifies people that are already at their most acute. So it's sort of skating to where the puck has been. You're getting attention because you are at the absolute peak of your acuity. We're trying to help care management organizations find people that are rising risk. And even when we do that, we try to get-- I mean, the power of these technologies is to move away from one size fits all. So when we think about rising risk, we think about, in a behavioral health environment, the rising risk of an inpatient psychiatric admission. That is a very specific application. There are things we can do about it. As opposed to generic risk-- if you think about what's being done in other industries, Amazon does not consider us all just consumers. There are individuals that are very likely to react to certain offers at certain times. And so we're trying to bring this sort of more granular approach into health care, where we sit with teams and they're used to just having generic risk scores. We're trying to help them think through which older people are likely to fall down. We do work in diabetes also, so which children with type 1 diabetes shouldn't just be scheduled for an appointment every 3 months, but you should go to them right now? So those are some examples, but the themes are very consistent. It's helping organizations move away from the rather generic, one-size-fits-all approach toward what is more actionable. So even graduation from care management, because now you should be having serious illness conversations because you're nearing end of life, or palliative care referrals, or hospice referrals. DAVID SONTAG: OK, so I want to choose a single one to dive into. And I want to choose one that you've worked on the longest and where you're already doing at least the initial parts of an evaluation of it. And so I think when we talked on the phone, psych ER was one of those examples. Tell us a bit about that one. LEONARD D'AVOLIO: Yeah. Well, I'll just walk you through the problem to be solved. DAVID SONTAG: Please, yeah. LEONARD D'AVOLIO: Sure. So we work with a large behavioral health care organization. They are contracted by health plans, in effect, to treat people that have mental health challenges. And the traditional way of identifying anyone for care management is, again, you get a risk score. When you sort the highest ranking in terms of odds ratio variables, it's because you were already admitted, because you're older, because you have more medications. So they were using a similar approach, finding the most acute people. So the very first thing we do in all of our engagements is an understanding. Where is the greatest opportunity?
And this has very little to do with machine learning. It's just, what's happening today? Where are these things happening? Who is caring for these folks? Everyone wants to reduce hospital admissions. But there's a difference between hospital admissions because you're not taking your meds, and hospital admissions because you're addicted to opioids, and hospital admissions because you have chronic complex bipolar schizophrenia. So we wanted to first understand, well, where is the greatest cost? What types of things are happening most frequently? And then you want to have the clinical team tell you, well, these are the types of resources we have. We have people that can address these issues, or we have interventions designed to solve these problems. And so you bring together where is the greatest possible return on your investment from both a data standpoint and a financial standpoint, but also where we can do something about it. After you do that, it's only then-- after you have full agreement from executive teams that this is the very narrow thing that we think we can address-- that we begin to apply machine learning to try to solve the problem. DAVID SONTAG: So what did that funnel lead to? What did you decide was the thing to address? LEONARD D'AVOLIO: Yeah, it was trying to reduce inpatient psychiatric admissions. And even then, the traditional way of reducing admissions-- just because it came out of this tradition of 30-day readmissions-- has always been thought of in terms of 30 days out. But when we interviewed the teams, they said actually for this particular condition it takes us more like 90 days to be able to have an impact. And so that clinical understanding, mixed with what we have the resources to address, that's what then steers the application of machine learning to solve a specific problem. DAVID SONTAG: OK, so psychiatric inpatient admission-- so these are patients who come to the ER for some psychiatric-related problem, and then when they're in the ER they're admitted to the hospital. They're in the hospital for anywhere from a day to a few days. And you want to find when are those going to happen in the future? LEONARD D'AVOLIO: Yeah. DAVID SONTAG: What type of data is useful for that? LEONARD D'AVOLIO: Sure. It doesn't have to be just through the ED, though. That's the most common, but it's any unplanned acute admission. DAVID SONTAG: Got it. So what kind of data is most useful for predicting that? LEONARD D'AVOLIO: Yeah. So I think a philosophy that you all should take is whatever data you have, it should be your competitive advantage in solving the problem. And that's different from the way this has been done, where folks have made an algorithm somewhere else, and then they're coming and telling you, hey, as long as you have claims data, then plug in my variables and I can help you. Our approach-- and this is sort of derived from my interest from the start in solving the problem and trying to make the tools work faster-- is whatever data you have, we will bring it in and consider it. What ultimately then wins is dependent on the problem. But you would not be surprised to learn that there is some value in claims data. You put labs up there. There's a lot of value in labs. When it comes to behavioral health, and this is where you really have to understand health care, it's incredibly underdiagnosed. There is a stigma attached to carrying diagnosis codes that would describe you as having mental health challenges. And so claims alone are not sufficient for that reason.
We find a lot of lift from care management. So when you have a care manager, that care manager is assessing you, and you are filling out forms, and they are surveying you and giving you different types of, sort of, functional assessments or activities-of-daily-living assessments. That data turns out to be very powerful. And then, a dark horse that most people aren't used to using: we get a lot of lift out of the clinicians' notes, whether it's the psychiatrist's or the care manager's notes. So there is value in the written descriptions of a nurse's or a care manager's impressions of what's wrong, what has been done, what hasn't been done, and so on. DAVID SONTAG: So tell me a bit about the development process. So you figure out what you want to predict. You at least have that in words. You have your data in one place. Then what? LEONARD D'AVOLIO: Yeah. Well, you wouldn't be surprised. The very first thing we do is just try to throw a logistic regression at it. We want the story to make sense to begin with, and we're always looking for the simplest solution to the problem. Then the team sort of iterates back and forth based on how this data looks and the characteristics of it-- the density, the sparsity-- based on what we understand about this data, these guys are in and out of the plan. So we may have issues with data not existing in the time windows that you had described. Then they're working their way through algorithms and feature selection approaches that seem to fit for the data that we have. DAVID SONTAG: But what error metrics do you optimize for? LEONARD D'AVOLIO: You're going to have to ask them. It's been too long. DAVID SONTAG: OK. [LAUGHTER] LEONARD D'AVOLIO: I'm 10 years out of being allowed to write code. But yeah, then it's an iterative process where we have to be-- this is a big deal. We have to be able to translate. We do positive predictive value, obviously. And I like the way you describe that, because a lot of folks that have been trained in statistics for medicine, whether it's epidemiology or the like, are always looking for an R-squared or an area under the ROC curve. And we have to help them understand that you can only care for so many people. So you don't really care what the area under the ROC curve is for a population of, for this client, 300,000 in the one plan that we were serving. You really care about the top 100 or 200, and really that number should be derived based on your capacity. DAVID SONTAG: Yeah. LEONARD D'AVOLIO: So if I can give you 7 out of 10 for 100, you might go knock on their door. But for, let's say, between 1,000 and 2,000 that number goes down to 4 out of 10. Maybe you should go with a less expensive intervention. Huge education component, helping people understand what they're seeing and how to interpret it, and helping them connect it back to what they're going to do with it. And then I think probably, in courses to follow, you'll go into all of the challenges with interpretability and the like. But they all exist. DAVID SONTAG: So tell me a bit about how it's deployed. So once you build a model, how do you get your client to start using it? LEONARD D'AVOLIO: Yeah. So you don't start getting them ready when the model's ready. I've learned the hard way that's far too late to involve them in the process. And in fact, the one bullet you had up here that I didn't completely agree with was this idea that these approaches are easier to plug into a workflow. Putting a number into an electronic health record may be easier.
But when I think workflow, it's not just that the number appears at the right time. It's the culture of getting-- put it this way. These care managers have spent the last 20, 30 years learning who needs their help, and everything about their training and their experience is to care for the people that are most acute. All of the red flags are going off. And here comes a bunch of nerds and computer science people that are suggesting that no, rather than your intuition and experience of 30 years you should trust what a computer says to do. DAVID SONTAG: So there are two parts I want to understand better. LEONARD D'AVOLIO: Sure. DAVID SONTAG: First, how you deal with that problem, and second, I actually am curious about the technical details. Do you give them predictions on a piece of paper? Do you use APIs? LEONARD D'AVOLIO: Yeah. Well, let me answer the technical one first because it's a faster answer. You remember at the beginning of this, I said health care is pretty immature from a technical standpoint? So it's never a piece of paper, but it can be an Excel spreadsheet delivered via secure FTP once a month, because that's all they're able to take right now based on their state of affairs. It can be a real-time call to an API. What we learned to do in forming a company serving health care is: do not create a new interface. Do not create a new login. Accommodate whatever workflow and systems they already have in place. So build for flexibility as opposed to giving them something else to log into. You have very little time. And the other thing is clinicians hate their information technology. They love their phones, but they hate what their organization forces them to use. Now that may be a gross generalization, but I don't think it's too far off. Data is sort of a four-letter word. DAVID SONTAG: So over the last week, the students have been learning about things like FHIR and so on. Are these any of the APIs that you use? LEONARD D'AVOLIO: No. So those are technologies with enormous potential. You put up a paper that described a risk stratification algorithm from 1984. That paper, I'm sure, was supported with evidence that it could make a big difference. I'm getting awfully close to standing on a soapbox again, but you have to understand that health care is paid for based on delivering care. And the more complex the care is, the more you get paid. And I'm not telling you this; I'm kind of sharing it with them. You know that. So the idea that a technology like FHIR would open up EHRs to allow people to just kind of drop things in or out, thereby taking away the monopoly that the electronic health records have-- these are tough investments for the electronic health record vendor to make. They're being forced by the federal government. And they saw the writing on the wall, so they're moving ahead. And there are great examples coming out of Children's, Ken Mandl and the like, where some progress has been made. But I live in the right now; I have to get this done inside of the health care of today. And very few of the organizations that we not just work with but would even talk to are in a position to be, like, FHIR-ready. In 5 years, I think I'll be telling you-- DAVID SONTAG: Hopefully something different, yeah. All right, so can you briefly answer that first question about what do you have to give around a prediction in order for it to be acted upon effectively? LEONARD D'AVOLIO: Yes. So the very first thing you have to do is-- so we invite the clinical team to be part of the project from the very beginning.
It's just really important. If you show up with a prediction, you've lost. They're part of the team. Remember, I said we're triangulating what they can and can't do, and what might matter and what might not. They are literally part of the team. And as we're moving through, how would one evaluate whether or not this works? We show them, these are some of the people we found. Oh yeah, that makes sense. I know Mr. Smith. And so it's a real show-and-tell process from the start. DAVID SONTAG: So once you get closer to that, after the development phase has been done, then what? LEONARD D'AVOLIO: After the development phase, if you've done a great job you get away from the "show me what variable mattered on a per-patient basis." Showing folks the odds ratios on a model is easy enough to produce. You can show people these are the features that matter at the model level. Where this gets tougher is all of health care is used to Apgar scores, which are based on 5 things. We all know what they are. And the machine learning results, the models that we have been talking about in behavioral health-- I think the model that we're using now is over 3,700 variables with at least a little bit of a contribution. So how do you square up the culture of 5 to 7 variables-- and in fact, of I gave you the variables and you ran the hypothesis-testing algorithm-- versus more of an inductive approach, where thousands of variables are actually contributing incrementally? And it's a double-edged sword, because you could never show somebody 3,700 variables. But if you show them 3 or 4, then the answer is, well that's obvious. I knew that. DAVID SONTAG: Right, like the impaired fasting glucose one. LEONARD D'AVOLIO: Yes, exactly. So really, I just paid you to tell me that somebody who has been admitted is likely to readmit. You know, that's the challenge. So striking that balance between-- really, it's education more than anything, because I don't think that an algorithm created that uses 3,700 variables can then be turned into decision support where it can present you 2 or 3 that you could rely upon and then make informed decisions. And part of the education process is we also say forget about the number. If I were to give you this person, what would you do next? And the answer is always, well I would look at their chart. The analogy we use that we find is helpful is this is GPS, right? GPS isn't going to give you like a magic, underground highway that we didn't know about. It's going to suggest the roads that you're familiar with. The advantage it has is that unlike you in the car as you're driving, it's just aware of more than you are and it can do the math a little bit faster than you can. And so it's going to give you a suggestion, and it's going to tell you, more often than not, in your situation, I'm going to save you a few minutes. DAVID SONTAG: Yeah. LEONARD D'AVOLIO: Now you're still the driver. You could still decide to take 93 South and so be it. It could be that the GPS is not aware of the fact that you really like the view on Memorial Drive versus Storrow, and so you're going to do that. And so we try to help people understand that it just has access to a little bit more than you do, and it's going to get you there a little bit faster. DAVID SONTAG: All right, I'm going to stop you here because I want to leave some time for some questions from the audience. So I'll make the following request. Try to keep it to quick responses so we can get to as many questions as we can.
AUDIENCE: How much is there a worry that certain demographic groups are underdiagnosed and have less access to care? And then, would have a lower risk stratification, and then potentially be de-prioritized? How do you think about adjusting for that? LEONARD D'AVOLIO: Yeah, so that was a great question. I'll try to answer it very fast. DAVID SONTAG: And could you repeat the question as quickly as possible as well? [LAUGHTER] LEONARD D'AVOLIO: Yeah. I mean, models can be biased by experience. And do you worry about smaller-size populations being overlooked? Safe to say, is that fair? DAVID SONTAG: And the question was also about the training data that you used. LEONARD D'AVOLIO: Well, that's what I implied. DAVID SONTAG: Yeah, OK. LEONARD D'AVOLIO: OK. So all right, this work we're doing in behavioral health-- and we've done this in a few other environments-- if there is a different demographic for which you would do something different and they may be lost in the shuffle, we do bring that to their attention. DAVID SONTAG: Next question! Is there someone in the back there? LEONARD D'AVOLIO: You went too fast. DAVID SONTAG: OK, over here. AUDIENCE: How do you evaluate [INAUDIBLE]? Would you be willing to sacrifice the data of [INAUDIBLE] to re-approve the [INAUDIBLE]? DAVID SONTAG: I'm going to repeat the question. You talked about how it's like reading tea leaves to just show a couple of the top features anyway from a linear model. So why not just get rid of all that interpretability altogether? Does that open the door to that possibility for you? LEONARD D'AVOLIO: You're saying get rid of all the interpretability. I think the question was, are you willing to trade performance for interpretability. DAVID SONTAG: Yes. LEONARD D'AVOLIO: And that could be an answer to it. Just throw it out. So if I can get our partners to the point where they truly understand what we're doing here and they have been part of evaluating the model, success is when they don't need to-- on a per-patient, who-needs-my-help basis-- see the 3,000 variables. But that does mean that as you're building the model, you will show them the patients. You will show them the variables. So that's what I try to walk them to. DAVID SONTAG: So it's about building up trust as you go. LEONARD D'AVOLIO: Absolutely. That being said, in some situations, depending on whether it's clinically appropriate-- I mean, if I'm in the hundredth percentile here, but interpretability can get me pretty far, I'm willing to make that trade. And that's the difference. Don't fall in love with the hammer, right? Fall in love with building the home, and then it's easy enough to just swap it out. DAVID SONTAG: Next question! Over there. AUDIENCE: Yeah, how much time do you spend engaging with [INAUDIBLE] and physicians before starting to sort of build your model? LEONARD D'AVOLIO: So actually, first we spend time with the CEO and the CFO and the CMO-- chief executive, chief financial, chief medical. Because if there isn't at least a 5 to 1 financial return for solving this problem, you will never make it all the way down the chain to doing something that matters. And so what I have learned is the math is fantastic. We can model all sorts of fun things. But if I can't figure out how it makes them or saves them-- we have like a $5 million mark, right? For the size of our company, if I can't help you make 5 million, I know you won't pay me. So we start there.
As soon as we have figured out that there is money to be made or saved in getting these folks the right care at the right time, then yes, the clinicians are on the team. We have what's called a working group-- project manager, clinical lead, someone who's a liaison to the data. We have a team and a communication structure that embeds the clinician. And we have clinicians on the team. DAVID SONTAG: I think you'll find in many different settings that's what it really takes to get machine learning implemented. You have to have working groups of administration, clinicians, users, and engineers, and others. Over here there's a question. AUDIENCE: Actually, it's a question for both of you, about data collection. So I know as people, we try to connect all kinds of data to train the machine learning model. But when you have some preliminary model, can you have some insights to guide you to target certain data, so that you can know that this new information can be very informative for prediction tasks, or even design data experiments? DAVID SONTAG: So I'll repeat the question. Sometimes we don't already have the data we want. Could we use data-driven approaches to find what data we should get? LEONARD D'AVOLIO: So we're doing this right now. There's a popular thing in the medical industry. Everyone's really fired up about social determinants of health, and so that has been branded and marketed and sold. And so now customers are saying to us, well hey, do you have social determinants of health data? And that's interesting to me, because they've never looked at anything but claims. And now they're suggesting go buy a third-party data set which may not add more value than simply having the zip code. And we say of course, we can bring in new data. We bring in weather patterns. We bring in all kinds of funny data when the problem calls for it. That's the easy part. The real challenge is, will it add value? Should we invest our time and energy in doing this? So if you've got all kinds of fantastic data, run with it and then see where you fall short. The data just doesn't tell you, now go out and get a different type of data. If the performance is low and, clinically and based on intuition, it makes sense that another data source may boost it, then we'll try it. If it's free, we'll try it quicker. If it costs money, we'll talk to the client about it. DAVID SONTAG: For both of those, I'll give you my answer to that question. If you have a high dimensional enough starting place, often that can give you a hint of where to go next. So in the example I showed you there, even though obesity is very seldom coded in claims data, we saw that it still showed up as a useful feature, right? So that then hints to us, well maybe if we got higher quality obesity data it would be an even better model. And so sometimes you can use that type of trick. There is a question over here. AUDIENCE: We use codes to [INAUDIBLE] by calculating how much the hospital will gain by limiting [INAUDIBLE]? DAVID SONTAG: OK, so this is going to be the last question that we're going to end on. And it really has to do with evaluation and thinking about the impact of an intervention based on the predictions. How much does that causal effect show up in both the way that you formalize problems and then evaluate the effect of your predictions? LEONARD D'AVOLIO: Yeah. So the most important thing to know is no customer will ever pay you for a positive predictive value. They don't care, right?
They care about, will you help them save or make money solving a problem. So cost effectiveness starts at the beginning. But the nice thing about a positive predictive value approach is that there's so much literature that can tell you what the average cost is of certain things having happened. So the very first part of any engagement for us is, well, you guys are here. This is the cost of being there. If you improved by 10%-- if we can get agreement on that-- then we start to model. And we say, well look, of the top 100 people, 70 of them are the right people. Multiply that by the potential cost. If you think you can prevent 10 of those terrible things from occurring, that's worth this much. So cost effectiveness data is at the start. It's in the modeling stage. And then at the end, we never show them how good we did at predicting. We show them the baseline. We say: baseline, activities, outcomes-- where were you, what are you doing, and then did it make a difference. And the last part is always in dollars and cents, too. DAVID SONTAG: Although Len didn't mention it here, he also does quite a bit of work on trying to think through this causal effect. And we talked about how you use propensity matching, for example, in your work. We won't be able to get into that in today's discussion, but we'll come back to those questions when we talk about causal inference in a few weeks. That's all for today, thanks. [APPLAUSE]
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019
14_Causal_Inference_Part_1.txt
DAVID SONTAG: So today's lecture is going to be about causality. Who's heard about causality before? Raise your hand. What's the number one thing that you hear about when thinking about causality? Yeah? AUDIENCE: Correlation does not imply causation. DAVID SONTAG: Correlation does not imply causation. Anything else come to mind? That's what came to my mind. Anything else come to mind? So up until now in the semester, we've been talking about purely predictive questions. And for purely predictive questions, one could argue that correlation is good enough. If we have some signs in our data that are predictive of some outcome of interest, we want to be able to take advantage of that. Whether it's upstream, downstream, the causal directionality is irrelevant for that purpose. Although even that isn't quite true, right, because Pete and I have been hinting throughout the semester that there are times when the data changes on you, for example, when you go from one institution to another or when you have non-stationarity. And in those situations, having a deeper understanding about the data might allow one to build in additional robustness to that type of data set shift. But there are other reasons as well why understanding something about your underlying data generating processes can be really important. It's because often, the questions that we want to answer when it comes to health care are not predictive questions, they're causal questions. And so what I'll do now is I'll walk through a few examples of what I mean by this. Let's start out with what we saw in Lecture 4 and in Problem Set 2, where we looked at the question of how we can do early detection of type 2 diabetes. You used Truven MarketScan's data set to build a risk stratification algorithm for detecting who is going to be newly diagnosed with diabetes one to three years from now. And if you think about how one might then try to deploy that algorithm, you might, for example, try to get patients into the clinic to get them diagnosed. But the next set of questions are usually about the so-what question. What are you going to do based on that prediction? Once diagnosed, how will you intervene? And at the end of the day, the interesting goal is not one of how do you find them early, but how do you prevent them from developing diabetes? Or how do you prevent the patient from developing complications of diabetes? And those are questions about causality. Now, when we built a predictive model and we introspected the weights, we might have noticed some interesting things. For example, if you looked at the highest negative weights, which I'm not sure if we did as part of the assignment but is something that I did as part of my research study, you see that gastric bypass surgery has the biggest negative weight. Does that mean that if you give an obese person gastric bypass surgery, that will prevent them from developing type 2 diabetes? That's an example of a causal question which is raised by this predictive model. But just by looking at the weights alone, as I'll show you this week, you won't be able to correctly infer that there is a causal relationship. And so part of what we will be doing is coming up with a mathematical language for thinking about how does one answer, is there a causal relationship here? Here's a second example. Right before spring break we had a series of lectures about diagnosis, particularly diagnosis from imaging data of a variety of kinds, whether it be radiology or pathology. And often, questions are of this sort.
Here is a woman's breast. She has breast cancer. Maybe you have an associated pathology slide as well. And you want to know what is the risk of this person dying in the next five years. So one can take a deep learning model, learn to predict what one observes. So for each patient in your data set, you have the input and you have, let's say, survival time. And you might use that to predict something about how long it takes from diagnosis to death. And based on those predictions, you might take actions. For example, if you predict that a patient is not risky, then you might conclude that they don't need to get treatment. But that could be really, really dangerous, and I'll just give you one example of why that could be dangerous. These predictive models, if you're learning them in this way, the outcome, in this case let's say time to death, is going to be affected by what's happened in between. So, for example, this patient might have been receiving treatment, and because of them receiving treatment in between the time from diagnosis to death, it might have prolonged their life. And so for this patient in your data set, you might have observed that they lived a very long time. But if you ignore what happens in between and you simply learn to predict y from X, X being the input, then a new patient comes along and you predict that new patient is going to survive a long time, and it would be completely the wrong conclusion to say that you don't need to treat that patient. Because, in fact, the only reason the patients like them in the training data lived a long time is because they were treated. And so when it comes to this field of machine learning and health care, we need to think really carefully about these types of questions because an error in the way that we formalize our problem could kill people because of mistakes like this. Now, other questions are ones about not how do we predict outcomes but how do we guide treatment decisions. So, for example, as data from pathology gets richer and richer and richer, we might think that we can now use computers to try to better predict who is likely to benefit from a treatment than humans could do alone. But the challenge with using algorithms to do that is that people respond differently to treatment, and the data which is being used to guide treatment is biased based on existing treatment guidelines. So, similarly to the previous question, we could ask, what would happen if we trained to predict past treatment decisions? This would be the most naive way to try to use data to guide treatment decisions. So maybe you see David gets treatment A, John gets treatment B, Juana gets treatment A. And you might ask then, OK, a new patient comes in, what should this new patient be treated with? And if you've just learned a model to predict, from what you know about David, the treatment that he is likely to get, then the best that you could hope to do is to do as well as existing clinical practice. So if we want to go beyond current clinical practice, for example, to recognize that there is heterogeneity in treatment response, then we have to somehow change the question that we're asking. I'll give you one last example, which is perhaps a more traditional question of, does X cause Y? For example, does smoking cause lung cancer is a major question of societal importance. Now, you might be familiar with the traditional way of trying to answer questions of this nature, which would be to do a randomized controlled trial.
Except this isn't exactly the type of setting where you could do randomized controlled trials. How would you feel if you were a smoker and someone came up to you and said, you have to stop smoking because I need to see what happens? Or how would you feel if you were a non-smoker and someone came up to you and said, you have to start smoking? That would be both not feasible and completely unethical. And so if we want to try to answer questions like this from data, we need to start thinking about how can we design, using observational data, ways of answering questions like this. And the challenge is that there's going to be bias in the data because of who decides to smoke and who decides not to smoke. So, for example, the most naive way you might try to answer this question would be to look at the conditional likelihood of getting lung cancer among smokers and getting lung cancer among non-smokers. But those numbers, as you'll see in the next few slides, can be very misleading because there might be confounding factors, factors that would, for example, both cause people to be a smoker and cause them to develop lung cancer, which would differentiate between these two numbers. And we'll have a very concrete example of this in just a few minutes. So to properly answer all of these questions, one needs to be thinking in terms of causal graphs. So rather than the traditional setup in machine learning where you just have inputs and outputs, now we need to have triplets. Rather than having inputs and outputs, we need to be thinking of inputs, interventions, and outcomes or outputs. So we now need to have three quantities in mind. And we have to start thinking about, well, what is the causal relationship between these three? So for those of you who have taken more graduate level machine learning classes, you might be familiar with ideas such as Bayesian networks. And when I went to undergrad and grad school and I studied machine learning, for the longest time I thought causal inference had to do with learning causal graphs. So this is what I thought causal inference was about. You have data of the following nature-- 1, 0, 0, 1, dot, dot, dot. So here, there are four random variables. I'm showing the realizations of those four binary variables one per row, and you have a data set like this. And I thought causal inference had to do with taking data like this and trying to figure out, is the underlying Bayesian network that created that data, is it X1 goes to X2 goes to X3 goes to X4? Or I'll say, this is X1, that's X2, X3, and X4. Or maybe the causal graph is X1, to X2, to X3, to X4. And trying to distinguish between these different causal graphs from observational data is one type of question that one can ask. And the one thing you learn in traditional machine learning treatments of this is that sometimes you can't distinguish between these causal graphs from the data you have. For example, suppose you just had two random variables. Any distribution could be represented by probability of X1 times probability of X2 given X1, according to just the rules of conditional probability. And similarly, any distribution can be represented as the opposite, probability of X2 times probability of X1 given X2, which would look like this. So the statement that one would make is that if you just had data involving X1 and X2, you couldn't distinguish between these two causal graphs, X1 causes X2 or X2 causes X1.
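Written out, that two-variable indistinguishability argument is just the chain rule of probability; in LaTeX notation:

    For any joint distribution $p(x_1, x_2)$,
    \[
      p(x_1, x_2) \;=\; p(x_1)\, p(x_2 \mid x_1) \;=\; p(x_2)\, p(x_1 \mid x_2),
    \]
    so the graph $X_1 \to X_2$ (parameterized by $p(x_1)$ and $p(x_2 \mid x_1)$)
    and the graph $X_2 \to X_1$ (parameterized by $p(x_2)$ and $p(x_1 \mid x_2)$)
    can each represent exactly the same family of joint distributions, and
    observational data on $(X_1, X_2)$ alone cannot tell the two directions apart.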
And usually another treatment would say, OK, but if you have a third variable and you have a V structure, or something like X1 goes to X2, X1 goes to X3, this you could distinguish from, let's say, a chain structure. And then the final answer to what is causal inference from this philosophy would be something like, OK, if you're in a setting like this and you can't distinguish between X1 causes X2 or X2 causes X1, then you do some interventions, like you intervene on X1 and you look to see what happens to X2, and that'll help you disentangle these directions of causality. None of this is what we're going to be talking about today. Today, we're going to be talking about the simplest, simplest possible setting you could imagine, that graph shown up there. You have three sets of random variables, X, which is perhaps a vector, so it's high dimensional, a single random variable T, and a single random variable Y. And we know the causal graph here. We're going to suppose that we know the directionality, that we know that X might cause T and X and T might cause Y. And the only thing we don't know is the strength of the edges. All right. And so now let's try to think through this in the context of the previous examples. Yeah, question? AUDIENCE: Just to make sure-- so T does not affect X in any way? DAVID SONTAG: Correct, that's the assumption we're going to make here. So let's try to instantiate this. So we'll start with this example. X might be what you know about the patient at diagnosis. T, I'm going to assume for the purposes of today's class, is a decision between two different treatment plans. And I'm going to simplify the state of the world. I'm going to say those treatment plans only depend on what you know about the patient at diagnosis. So at diagnosis, you decide, I'm going to be giving them this sequence of treatments at this three-month interval or this other sequence of treatments at, maybe, that four-month interval. And you make that decision just based on diagnosis and you don't change it based on anything you observe later. Then the causal graph of relevance there is, based on what you know about the patient at diagnosis, which I'm going to say is a vector X because maybe it's based on images, your whole electronic health record. There's a ton of data you have on the patient at diagnosis. Based on that, you make some decision about a treatment plan. I'm going to call that T. T could be binary, a choice between two treatments, it could be continuous, maybe you're deciding the dosage of the treatment, or it could be maybe even a vector. For today's lecture, I'm going to suppose that T is just binary, just involves two choices. But most of what I'll tell you about will generalize to the setting where T is non-binary as well. But critically, I'm going to make the assumption for today's lecture that you're not observing new things in between. So, for example, in this whole week's lectures, the following scenario will not happen. Based on diagnosis, you make a decision about a treatment plan. The treatment plan starts, you get new observations. Based on those new observations, you realize that treatment plan isn't working and change to another treatment plan, and so on. So that scenario goes by a different name, which is called dynamic treatment regimes or off-policy reinforcement learning, and we'll learn about that next week.
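To preview why getting this graph right matters, here is a purely synthetic simulation (not from the lecture) of the setup above, where X causes T and where X and T together cause Y. The confounder here is a made-up severity variable: severe patients are treated more often and also do worse, so the naive comparison of treated versus untreated outcomes gets even the sign of the effect wrong, while stratifying on the confounder recovers it.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    # Confounder X: disease severity. Severe patients are treated far more often
    # and also have worse outcomes regardless of treatment.
    severe = rng.binomial(1, 0.5, size=n)
    treated = rng.binomial(1, np.where(severe == 1, 0.9, 0.1))

    # Ground truth built into the simulation: treatment raises the probability
    # of a good outcome by 0.1 for everyone.
    p_good = 0.3 + 0.5 * (1 - severe) + 0.1 * treated
    good = rng.binomial(1, p_good)

    naive = good[treated == 1].mean() - good[treated == 0].mean()
    # Adjust for the confounder by averaging within-severity differences
    # (the two strata are equally common in this toy example).
    adjusted = np.mean([good[(treated == 1) & (severe == s)].mean()
                        - good[(treated == 0) & (severe == s)].mean()
                        for s in (0, 1)])
    print(f"naive difference:    {naive:+.2f}")     # about -0.30: looks harmful
    print(f"adjusted difference: {adjusted:+.2f}")  # about +0.10: actually helpful

The adjustment works here only because the confounder is observed and the ground truth was simulated; when and why such adjustments are justified on real data is exactly what this week's material is about.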
So for today's and Thursday's lecture, we're going to suppose that, based on what you know about the patient at this time, you make a decision, you execute the decision, and you look at some outcome. So X causes T, not the other way around. And that's pretty clear because of our prior knowledge about this problem. It's not that the treatment affects what their diagnosis was. And then there's the outcome Y, and there, again, we suppose the outcome, what happens to the patient, maybe survival time, for example, is a function of what treatment they're getting and aspects about that patient. So this is the causal graph. We know it. But we don't know, does that treatment do anything to this patient? For whom does this treatment help the most? And those are the types of questions we're going to try to answer today. Is the setting clear? OK. Now, these questions are not new questions. They've been studied for decades in fields such as political science, economics, statistics, biostatistics. And the reason why they're studied in those other fields is because often you don't have the ability to intervene, and one has to try to answer these questions from observational data. For example, you might ask, what will happen to the US economy if the Federal Reserve raises US interest rates by 1%? When's the last time you heard of the Federal Reserve doing a randomized controlled trial? And even if they had done a randomized controlled trial, for example, flipped a coin to decide which way the interest rates would go, it wouldn't be comparable had they done that experiment today to if they had done that experiment two years from now because the state of the world has changed in those years. Let's talk about political science. I have close colleagues of mine at NYU who look at Twitter, and they want to ask questions like, how can we influence elections, or how are elections influenced? So you might look at some unnamed actors, possibly people supported by the Russian government, who are posting to Twitter or other social media. And you might ask the question of, well, did that actually influence the outcome of the previous presidential election? Again, in that scenario, it's one of, well, we have this data, something happened in the world, and we'd like to understand what was the effect of that action, but we can't exactly go back and replay to do something else. So these are fundamental questions that appear all across the sciences, and of course they're extremely relevant in health care, but yet, we don't teach them in our introduction to machine learning classes. We don't teach them in our undergraduate computer science education. And I view this as a major hole in our education, which is why we're spending two weeks on it in this course, which is still not enough. But what has changed between these fields, and what is relevant in health care? Well, the traditional way in which these questions were asked in statistics was one where you took a huge amount of domain knowledge to, first of all, make sure you're setting up the problem correctly, and that's always going to be important. But then to think through what are all of the factors that could influence the treatment decisions, called the confounding factors. And the traditional approach is one would write down 10, 20 different things, and make sure that you do some analysis, including the analysis I'll show you in today's and Thursday's lectures, using those 10 or 20 variables. But where this field is going is one of now having high dimensional data.
So I talked about how you might have imaging data for X, you might have the patient's whole electronic health record data. And the traditional approaches that the statistics community used to work on no longer work in this high dimensional setting. And so, in fact, it's actually a really interesting area for research, one that my lab is starting to work on and many other labs, where we could ask, how can we bring machine learning algorithms that are designed to work with high dimensional data to answer these types of causal inference questions? And in today's lecture, you'll see one example of reduction from causal inference to machine learning, where we'll be able to use machine learning to answer one of those causal inference questions. So the first thing we need is some language in order to formalize these notions. So I will work within what's known as the Rubin-Neyman Causal Model, where we talk about what are called potential outcomes. What would have happened under this world or that world? We'll call it Y0, and often it will be denoted as Y underscore 0, sometimes it'll be denoted as Y parentheses 0, and sometimes it'll be denoted as Y given X comma do T equals 0. And all three of these notations are equivalent. So Y0 corresponds to what would have happened to this individual if you gave them treatment zero. And Y1 is the potential outcome of what would have happened to this individual had you given them treatment one. So you could think about Y1 as being giving the blue pill and Y0 as being giving the red pill. Now, once you can talk about these states of the world, then one could start to ask questions of what's better, the red pill or the blue pill? And one can formalize that notion mathematically in terms of what's called the conditional average treatment effect, and this also goes by the name of individual treatment effect. So it's going to take as input Xi, which I'm going to denote as the data that you had at baseline for the individual. It's the covariates, the features for the individual. And one wants to know, well, for this individual with what we know about them, what's the difference between giving them treatment one or giving them treatment zero? So mathematically, that corresponds to a difference in expectations. It's a difference in expectation of Y1 from Y0. Now, the reason why I'm calling this an expectation is because I'm not going to assume that Y1 and Y0 are deterministic because maybe there's some bad luck component. Like, maybe a medication usually works for this type of person, but with a flip of a coin, sometimes it doesn't work. And so that's the randomness that I'm referring to when I talk about probability over Y1 given Xi. And so the CATE looks at the difference in those two expectations. And then one can now talk about what the average treatment effect is, which is the difference between those two. So the average treatment effect is now the expectation of-- I'll say the expectation of the CATE over the distribution of people, P of X. Now, we're going to go through this in four different ways in the next 10 minutes, and then you're going to go over it five more ways doing your homework assignment, and you'll go over it two more ways on Friday in recitation. So if you don't get it just yet, stay with me, you'll get it by the end of this week. Now, in the data that you observe for an individual, all you see is what happened under one of the interventions.
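Written out, using Y(1) and Y(0) for the two potential outcomes, the quantities just defined are:

```latex
\mathrm{CATE}(x) \;=\; \mathbb{E}\left[\,Y(1) \mid X = x\,\right] \;-\; \mathbb{E}\left[\,Y(0) \mid X = x\,\right]

\mathrm{ATE} \;=\; \mathbb{E}_{x \sim p(x)}\!\left[\,\mathrm{CATE}(x)\,\right] \;=\; \mathbb{E}\left[\,Y(1) - Y(0)\,\right]

% and the outcome we actually get to see for individual i is the factual one:
Y_i \;=\; T_i\, Y_i(1) \;+\; (1 - T_i)\, Y_i(0)
```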
So, for example, if the i'th individual in your data set received treatment Ti equals 1, then what you observe, Yi, is the potential outcome Y1. On the other hand, if the individual in your data set received treatment Ti equals 0, then what you observed for that individual is the potential outcome Y0. So that's the observed factual outcome. But one could also talk about the counterfactual of what would have happened to this person had the opposite treatment been done for them. Notice that I just swapped each Ti for 1 minus Ti, and so on. Now, the key challenge in the field is that in your data set, you only observe the factual outcomes. And when you want to reason about the counterfactual, that's where you have to impute this unobserved counterfactual outcome. And that is known as the fundamental problem of causal inference, that we only observe one of the two outcomes for any individual in the data set. So let's look at a very simple example. Here, individuals are characterized by just one feature, their age. And these two curves that I'm showing you are the potential outcomes of what would happen to this individual's blood pressure if you gave them treatment zero, which is the blue curve, versus treatment one, which is the red curve. All right. So let's dig in a little bit deeper. For the blue curve, we see people who received the control, what I'm calling treatment zero, their blood pressure was pretty low for the individuals whose age is low and for individuals whose age is high. But for middle aged individuals, their blood pressure on receiving treatment zero is in the higher range. On the other hand, for individuals who receive treatment one, it's the red curve. So young people have much higher, let's say, blood pressure under treatment one, and, similarly, much older people. So then one could ask, well, what about the difference between these two potential outcomes? That is to say the CATE, the Conditional Average Treatment Effect, is simply looking at the distance between the blue curve and the red curve for that individual. So for someone with a specific age, let's say a young person or a very old person, there's a very big difference between giving treatment zero or giving treatment one. Whereas for a middle aged person, there's very little difference. So, for example, if treatment one was significantly cheaper than treatment zero, then you might say, we'll give treatment one. Even though it's not quite as good as treatment zero, but it's so much cheaper and the difference between them is so small, we'll give the other one. But in order to make that type of policy decision, one, of course, has to understand that conditional average treatment effect for that individual, and that's something that we're going to want to predict using data. Now, we don't always get the luxury of having personalized treatment recommendations. Sometimes we have to give a policy. Like, for example-- I took this example out of my slides, but I'll give it to you anyway. The federal government might come out with a guideline saying that all men over the age of 50-- I'm making up that number-- need to get annual prostate cancer screening. That's an example of a very broad policy decision. You might ask, well, what is the effect of that policy now applied over the full population on, let's say, decreasing deaths due to prostate cancer? And that would be an example of asking about the average treatment effect.
So if you were to average the red line, if you were to average the blue line, you get those two dotted lines I show there. And if you look at the difference between them, that is the average treatment effect between giving the red intervention or giving the blue intervention. And if the average treatment effect is very positive, you might say that, on average, this intervention is a good intervention. If it's very negative, you might say the opposite. Now, the challenge about doing causal inference from observational data is that, of course, we don't observe those red and those blue curves, rather what we observe are data points that might be distributed all over the place. Like, for example, in this example, the blue treatment happens to be given in the data more to young people, and the red treatment happens to be given in the data more to older people. And that can happen for a variety of reasons. It can happen due to access to medication. It can happen for socioeconomic reasons. It could happen because existing treatment guidelines say that old people should receive treatment one and young people should receive treatment zero. These are all reasons why in your data who receives what treatment could be biased in some way. And that's exactly what this edge from X to T is modeling. But for each of those people, you might want to know, well, what would have happened if they had gotten the other treatment? And that's asking about the counterfactual. So these dotted circles are the counterfactuals for each of those observations. And by the way, you'll notice that those dots are not on the curves, and the reason they're not on the curve is because I'm trying to point out that there could be some stochasticity in the outcome. So the dotted lines are the expected potential outcomes and the circles are the realizations of them. All right. Everyone take out a calculator or your computer or your phone, and I'll take out mine. This is not an opportunity to go on Facebook, just to be clear. All you want is a calculator. My phone doesn't-- oh, OK, it has a calculator. Good. All right. So we're going to do a little exercise. Here's a data set on the left-hand side. Each row is an individual. We're observing the individual's age, gender, whether they exercise regularly, which I'll say is a one or a zero, and what treatment they got, which is A or B. On the far right-hand side are their observed glucose, or sugar, levels, let's say, at the end of the year. Now, what we'd like to have looks like this. So we'd like to know what would have happened to this person's sugar levels had they received medication A or had they received medication B. But if you look at the previous slide, we observed for each individual that they got either A or B. And so we're only going to know one of these columns for each individual. So the first row, for example, this individual received treatment A, and so you'll see that I've taken the observed sugar level for that individual, and since they received treatment A, that observed level represents the potential outcome Ya, or Y0. And that's why I have a 6, which is bolded under Y0. And we don't know what would have happened to that individual had they received treatment B. So in this case, some magical creature came to me and told me their sugar levels would have been 5.5, but we don't actually know that. It wasn't in the data. Let's look at the next line just to make sure we get what I'm saying. So the second individual actually received treatment B.
Their observed sugar level is 6.5. OK. Let's do a little survey. That 6.5 number, should it be in this column? Raise your hand. Or should it be in this column? Raise your hand. All right. About half of you got that right. Indeed, it goes to the second column. And again, what we would like to know is the counterfactual. What would have been their sugar levels had they received medication A? Which we don't actually observe in our data, but I'm going to hypothesize is-- suppose that someone told me it was 7, then you would see that value filled in there. That's the unobserved counterfactual. All right. First of all, is the setup clear? All right. Now here's when you use your calculators. So we're going to now demonstrate the difference between a naive estimator of your average treatment effect and the true average treatment effect. So what I want you to do right now is to compute, first, what is the average sugar level of the individuals who got medication B. So for that, we're only going to be using the red ones. So this is conditioning on receiving medication B. And so this is equivalent to going back to this one and saying, we're only going to take the rows where individuals receive medication B, and we're going to average their observed sugar levels. And everyone should do that. What's the first number? 6.5 plus-- I'm getting 7.875. This is for the average sugar, given that they received medication B. Is that what other people are getting? AUDIENCE: Yeah. DAVID SONTAG: OK. What about for the second number? Average sugar, given A? I want you to compute it. And I'm going to ask everyone to say it out loud in literally one minute. And if you get it wrong, of course you're going to be embarrassed. I'm going to try myself. OK. On the count of three, I want everyone to read out what that third number is. One, two, three. ALL: 7.125. DAVID SONTAG: All right. Good. We can all do arithmetic. All right. Good. So, again, we're just looking at the red numbers here, just the red numbers. So we just computed that difference, which is point what? AUDIENCE: 0.75. DAVID SONTAG: 0.75? Yeah, that looks about right. Good. All right. So that's a positive number. Now let's do something different. Now let's compute the actual average treatment effect, which is we're now going to average every number in this column, and we're going to average every number in this column. So this is the average sugar level under the potential outcome of had the individual received treatment B, and this is the average sugar level under the potential outcome that the individual received treatment A. All right. Who's doing it? AUDIENCE: 0.75. DAVID SONTAG: 0.75 is what? AUDIENCE: The difference. DAVID SONTAG: How do you know? AUDIENCE: [INAUDIBLE] DAVID SONTAG: Wow, you're fast. OK. Let's see if you're right. I actually don't know. OK. The first one is 0.75. Good, we got that right. I intentionally didn't post the slides to today's lecture. And the second one is minus 0.75. All right. So now let's put us in the shoes of a policymaker. The policymaker has to decide, is it a good idea to-- or let's say it's a health insurance company. A health insurance company is trying to decide, should I reimburse for treatment B or not? Or should I simply say, no, I'm never going to reimburse for that treatment because it doesn't work well? So if they had done the naive estimator, that would have been the first example, then it would look like medication B is-- we want lower numbers here, so it would look like medication B is worse than medication A.
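Here is the same exercise in a few lines of code. Only a couple of the numbers below come from the lecture (the first two individuals and the headline results); the rest of the table is made up, but it is constructed so that the naive, conditioned difference comes out to +0.75 while the true average treatment effect is -0.75, the same reversal as on the slide.

```python
import numpy as np

# Hypothetical potential-outcomes table, one row per individual.
# T is the medication actually received (0 = A, 1 = B); y0 and y1 are the
# sugar levels under A and under B. In real data we would only ever see
# y0 where T == 0 and y1 where T == 1 -- the rest are counterfactuals.
T  = np.array([0,   1,   0,   1,   1,   0,   1,   0  ])
y0 = np.array([6.0, 7.0, 8.5, 9.0, 8.0, 7.5, 9.5, 6.5])
y1 = np.array([5.5, 6.5, 6.5, 8.5, 7.5, 6.5, 9.0, 6.0])

y_obs = np.where(T == 1, y1, y0)                 # the factual, observed column

naive = y_obs[T == 1].mean() - y_obs[T == 0].mean()
ate   = (y1 - y0).mean()                         # needs the unobserved counterfactuals

print(f"E[Y | T = B] = {y_obs[T == 1].mean():.3f}, E[Y | T = A] = {y_obs[T == 0].mean():.3f}")
print(f"naive difference: {naive:+.2f}")
print(f"true ATE:         {ate:+.2f}")
```

Lower sugar is better here, so conditioning makes medication B look worse, while the counterfactual comparison shows B is actually better on average.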
And if you properly estimated what the actual average treatment effect is, you get the absolute opposite conclusion. You conclude that medication B is much better than medication A. It's just a simple example to really illustrate the difference between conditioning and actually computing that counterfactual. OK. So hopefully now you're starting to get it. And again, you're going to have many more opportunities to work through these things in your homework assignment and so on. So by now you should be starting to wonder, how the hell could I do anything in this state of the world? Because you don't actually observe those black numbers. These are all unobserved. And clearly there is bias in what the values should be because of what I've been saying all along. So what can we do? Well, the first thing we have to realize is that typically, this is an impossible problem to solve. So your instincts aren't wrong, and we're going to have to make a ton of assumptions in order to do anything here. So the first assumption is called SUTVA. I'm not even going to talk about it. You can read about that in your readings. I'll tell you about the two assumptions that are a little bit easier to describe. The first critical assumption is that there are no unobserved confounding factors. Mathematically what that's saying is that your potential outcomes, Y0 and Y1, are conditionally independent of the treatment decision given what you observe on the individual, X. Now, this could be a bit hard to-- and that's called ignorability. And this can be a bit hard to understand, so let me draw a picture. So X is your covariates, T is your treatment decision. And now I've drawn for you a slightly different graph. Over here I said X goes to T, X and T go to Y. But now I don't have Y. Instead, I have Y0 and Y1, and I don't have any edge from T to them. And that's because now I'm actually using the potential outcomes notation. Y0 is a potential outcome of what would have happened to this individual had they received treatment zero, and Y1 is what would have happened to this individual if they received treatment one. And because you already know what treatment the individual has received, it doesn't make sense to talk about an edge from T to those values. That's why there's no edge there. So then you might wonder, how could you possibly have a violation of this conditional independence assumption? Well, before I give you that answer, let me put some names to these things. So we might think about X as being the age, gender, weight, diet, and so on of the individual. T might be a medication, like an anti-hypertensive medication to try to lower a patient's blood pressure. And these would be the potential outcomes after those two medications. So an example of a violation of ignorability is if there is something else, some hidden variable h, which is not observed and which affects both the decision of what treatment the individual in your data set receives and the potential outcomes. Now it should be really clear that this would be a violation of that conditional independence assumption. In this graph, Y0 and Y1 are not conditionally independent of T given X. All right. So what are these hidden confounders? Well, they might be things, for example, which really affect treatment decisions. So maybe there's a treatment guideline saying that for diabetic patients, they should receive treatment zero, that that's the right thing to do.
And so a violation of this would be if the fact that the patient's diabetic were not recorded in the electronic health record. So you don't know-- that's not up there. You don't know that, in fact, the reason the patient received treatment T was because of this h factor. And there's critically another assumption, which is that h actually affects the outcome, which is why you have these edges from h to the Y's. If h were something which might have affected the treatment decision but not the actual potential outcomes-- and that can happen, of course. Things like gender can often affect treatment decisions, but maybe, for some diseases, it might not affect outcomes. In that situation it wouldn't be a confounding factor because it doesn't violate this assumption. And, in fact, one would be able to come up with consistent estimators of the average treatment effect under that assumption. Where things go to hell is when you have both of those edges. All right. So there can't be any of these h's. You have to observe all things that affect both treatment and outcomes. The second big assumption-- oh, yeah. Question? AUDIENCE: In practice, how good of a model is this? DAVID SONTAG: Of what I'm showing you here? AUDIENCE: Yeah. DAVID SONTAG: For hypertension? AUDIENCE: Sure. DAVID SONTAG: I have no idea. But I think what you're really trying to get at here in asking your question, how good of a model is this, is, well, oh, my god, how do I know if I've observed everything? Right? All right. And that's where you need to start talking to domain experts. So this is my starting place where I said, no, I'm not going to attempt to fit the causal graph. I'm going to assume I know the causal graph and just try to estimate the effects. That's where this starts to become really relevant. Because if you notice, this is another causal graph, not the one I drew on the board. And so that's something where, really, talking with domain experts would be relevant. So if you say, OK, I'm going to be studying hypertension and this is the data I've observed on patients, well, you can then go to a clinician, maybe a primary care doctor who often treats patients with hypertension, and you say, OK, what usually affects your treatment decisions? And you get a set of variables out, and then you check to make sure, am I observing all of those variables, at least the variables that would also affect outcomes? So, often, there's going to be a back and forth in that conversation to make sure that you've set up your problem correctly. And again, this is one area where you see a critical difference between the way that we do causal inference and the way that we do machine learning. Machine learning, if there's some unobserved variables, so what? I mean, maybe your predictive accuracy isn't quite as good as it could have been, but whatever. Here, your conclusions could be completely wrong if you don't get those confounding factors right. Now, in some of the optional readings for Thursday's lecture-- and we'll touch on it very briefly on Thursday, but there's not much time in this course-- I'll talk about ways and you'll read about ways to try to assess robustness to violations of these assumptions. And those go by the name of sensitivity analysis. So, for example, the type of question you might ask is, how would my conclusions have changed if there were a confounding factor which was blah strong? And that's something that one could try to answer from data, but it's really starting to get beyond the scope of this course.
So I'll give you some readings on it, but I won't be able to talk about it in the lecture. Now, the second major assumption that one needs is what's known as common support. And by the way, pay close attention here because at the end of today's lecture-- and if I forget, someone must remind me-- I'm going to ask you where did these two assumptions come up in the proof that I'm about to give you. The first one I'm going to give you will be a dead giveaway. So I'm going to answer to you where ignorability comes up, but it's up to you to figure out where does common support show up. So what is common support? Well, what common support says is that there always must be some stochasticity in the treatment decisions. For example, if in your data patients only receive treatment A and no patient receives treatment B, then you would never be able to figure out the counterfactual, what would have happened if patients received treatment B. But what happens if it's not quite that universal but maybe there are classes of people? Some individuals X-- let's say, people with blue hair. People with blue hair always receive treatment zero and they never see treatment one. Well, for those people, if for some reason something about them having blue hair was also going to affect how they would respond to the treatment, then you wouldn't be able to answer anything about the counterfactual for those individuals. This goes by the name of what's called a propensity score. It's the probability of receiving some treatment for each individual. And we're going to assume that this propensity score is always bounded away from 0 and 1. So it's between epsilon and 1 minus epsilon for some small epsilon. And violations of that assumption are going to completely invalidate all conclusions that we could draw from the data. All right. Now, in actual clinical practice, you might wonder, can this ever hold? Because there are clinical guidelines. Well, a couple of places where you'll see this are as follows. First, often, there are settings where we haven't the faintest idea how to treat patients, like second line diabetes treatments. You know that the first thing we start with is metformin. But if metformin doesn't help control the patient's glucose values, there are several second line diabetic treatments. And right now, we don't really know which one to try. So a clinician might start with treatments from one class. And if that's not working, you try a different class, and so on. And it's a bit random which class you start with for any one patient. In other settings, there might be good clinical guidelines, but there is randomness in other ways. For example, clinicians who are trained on the west coast might be trained that this is the right way to do things, and clinicians who are trained on the east coast might be trained that this is the right way to do things. And so even if any one clinician's treatment decisions are deterministic in some way, you'll see some stochasticity now across clinicians. It's a bit subtle how to use that in your analysis, but trust me, it can be done. So if you want to do causal inference from observational data, you're going to have to first start to formalize things mathematically in terms of what is your X, what is your T, what is your Y. You have to think through, do these choices satisfy these assumptions of ignorability and overlap? Some of these things you can check in your data. Ignorability you can't explicitly check in your data. But overlap, this thing, you can test in your data.
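Stated compactly, the two assumptions just described are:

```latex
% Ignorability (no unobserved confounding):
\big(Y(0),\, Y(1)\big) \;\perp\!\!\!\perp\; T \;\big|\; X

% Overlap / common support, in terms of the propensity score e(x) = p(T = 1 \mid X = x):
\epsilon \;\le\; e(x) \;\le\; 1 - \epsilon
\qquad \text{for some } \epsilon > 0 \text{ and every } x \text{ with } p(x) > 0
```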
By the way, how? Any idea? Someone else who hasn't spoken today. So just think back to the previous example. You have this table of these X's and treatment A or B and then sugar values. How would you test this? AUDIENCE: You could use a frequentist approach and just count how many things show up. And if there is zero, then you could say that it's violated. DAVID SONTAG: Good. So you have this table. I'll just go back to that table. We have this table, and these are your X's. Actually, we'll go back to the previous slide where it's a bit easier to see. Here, we're going to ignore the outcome, the sugar levels, because, remember, this only has to do with probability of treatment given your covariates. The Y doesn't show up here at all. So this thing on the right-hand side, the observed sugar levels, is irrelevant for this question. All we care about is what goes on over here. So we look at this. These are your X's, and this is your treatment. And you can look to see, OK, here you have one 75-year-old male who does exercise frequently and received treatment A. Is there anyone else in the data set who is 75 years old and male, does exercise regularly but received treatment B? Yes or no? No. Good. OK. So overlap is not satisfied here, at least not empirically. Now, you might argue that I'm being a bit too coarse here. Well, what happens if the individual is 74 and received treatment B? Maybe that's close enough. So there start to become subtleties in assessing these things when you have finite data. But it is something at the fundamental level that you could start to assess using data. As opposed to ignorability, which you cannot test using data. All right. So you have to think about, are these assumptions satisfied? And only once you start to think through those questions can you start to do your analysis. And so that now brings me to the next part of this lecture, which is how do we actually-- let's just now believe David, believe that these assumptions hold. How do we do that causal inference? Yeah? AUDIENCE: I just had a question on [INAUDIBLE]. If you know that some patients, for instance, healthy patients, are not going to get any treatment, should we just remove them, basically? DAVID SONTAG: So the question is, what happens if you have a violation of overlap? For example, you know that healthy individuals never receive any treatment. Should you remove them from your data set? Well, first of all, that has to do with how do you formalize the question because not receiving a treatment is a treatment. So that might be your control arm, just to be clear. Now, if you're asking about the difference between two treatments-- two different classes of treatment for a condition, then often one defines the relevant inclusion criteria in order to have these conditions hold. For example, we could try to redefine the set of individuals that we're asking about so that overlap does hold. But then in that situation, you have to just make sure that your policy is also modified. You say, OK, I conclude that the average treatment effect is blah for this type of people. OK? OK. So how could we possibly compute the average treatment effect from data? Remember, the average treatment effect, mathematically, is the expectation of the difference between the potential outcomes Y1 and Y0. The key tool which we'll use in order to estimate that is what's known as the adjustment formula. This goes by many names in the statistics community, such as the G-formula as well. Here, I'll give you a derivation of it.
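For reference, here is the identity that the next few steps arrive at, written compactly; it holds under the ignorability and overlap assumptions stated above:

```latex
\mathbb{E}\left[\,Y(t)\,\right]
  \;=\; \mathbb{E}_{x \sim p(x)}\Big[\; \mathbb{E}\left[\,Y \mid X = x,\, T = t\,\right] \;\Big]
  \qquad \text{for } t \in \{0, 1\},

\text{so that} \qquad
\mathrm{ATE}
  \;=\; \mathbb{E}_{x \sim p(x)}\Big[\;
        \mathbb{E}\left[\,Y \mid X = x,\, T = 1\,\right]
      \;-\; \mathbb{E}\left[\,Y \mid X = x,\, T = 0\,\right]
      \;\Big].
```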
We're first going to recognize that this expectation is actually two expectations in one. It's the expectation over individuals X and it's the expectation over potential outcomes Y given X. So I'm first just going to write it out in terms of those two expectations, and I'll write the expectations related to X on the outside. That goes by the name of the law of total expectation. This is trivial at this stage. And by the way, I'm just writing out the expectation of Y1. In a few minutes, I'll show you the expectation of Y0, but it's going to be exactly analogous. Now, the next step is where we use ignorability. I told you I was going to give that one away. So remember, we said that we're assuming that Y1 is conditionally independent of the treatment T given X. What that means is probability of Y1 given X is equal to probability of Y1 given X comma T equals whatever-- in this case I'll just say T equals 1. This is implied by Y1 being conditionally independent of T given X. So I can just stick in 'comma T equals 1' here, and that's explicitly because of ignorability holding. But now we're in a really good place because notice that-- and here I've just done some short notation. I'm just going to hide this expectation. And by the way, you could do the same for Y0-- Y1, Y0. And now notice that we can replace this average treatment effect with now this expectation with respect to all individuals X of the expectation of Y1 given X comma T equals 1, and so on. And these are mostly quantities that we can now observe from our data. So, for example, we can look at the individuals who received treatment one, and for those individuals we have realizations of Y1. We can look at individuals who received treatment zero, and for those individuals we have realizations of Y0. And we could just average those realizations to get estimates of the corresponding expectations. So these we can easily estimate from our data. And so we've made progress. We can now estimate some part of this from our data. But notice, there are some things that we can't yet directly estimate from our data. In particular, we can't estimate the expectation of Y0 given X comma T equals 1 because we have no idea what would have happened to this individual who actually got treatment one if they had gotten treatment zero. So these we don't know. So these we don't know. Now, what is the trick I'm playing on you? How does it help that we can do this? Well, the key point is that these quantities that we can estimate from data show up in that term. In particular, if you look at the individuals X that you've sampled from the full set of individuals P of X, for that individual X for which, in fact, we observed T equals 1, then we can estimate the expectation of Y1 given X comma T equals 1, and similarly for Y0. But what we need to be able to do is to extrapolate. Because empirically, we only have samples from P of X given T equals 1, P of X given T equals 0 for those two potential outcomes correspondingly. But we are going to also get samples of X such that for those individuals in your data set, you might have only observed T equals 0. And to compute this formula, you have to answer, for that X, what would it have been if they had gotten treatment one? So there are going to be a set of individuals that we have to extrapolate for in order to use this adjustment formula for estimation. Yep? AUDIENCE: I thought because common support is true, we have some patients that received each treatment for a given type of X. DAVID SONTAG: Yes. But now-- so, yes, that's true.
But that's a statement about infinite data. And in reality, one only has finite data. And so although common support has to hold to some extent, you can't just build on that to say that you always observe the counterfactual for every individual, such as in the pictures I showed you earlier. So I'm going to leave this slide up for just one more second to let it sink in and see what it's saying. We started out from the goal of computing the average treatment effect, the expected value of Y1 minus Y0. Using the adjustment formula, we've gotten to now an equivalent representation, which is now an expectation with respect to all individuals sampling from P of X of the expected value of Y1 given X comma T equals 1, expected value of Y0 given X comma T equals 0. For some of the individuals, you can observe this, and for some of them, you have to extrapolate. So from here, there are many ways that one can go. Hold your question for a little while. So types of causal inference methods that you will have heard of include things like covariate adjustment, propensity score re-weighting, doubly robust estimators, matching, and so on. And those are the tools of the causal inference trade. And in this course, we're only going to talk about the first two. And in today's lecture, we're only going to talk about the first one, covariate adjustment. And on Thursday, we'll talk about the second one. So covariate adjustment is a very natural way to try to do that extrapolation. It also goes by the name, by the way, of response surface modeling. What we're going to do is we're going to learn a function f, which takes as an input X and T, and its goal is to predict Y. So intuitively, you should think about f as this conditional probability distribution. It's predicting Y given X and T. So T is going to be an input to the machine learning algorithm, which is going to predict what would be the potential outcome Y for this individual described by features X1 through Xd under intervention T. So this is just from the previous slide. And what we're going to do now-- this is now where we get the reduction to machine learning-- is we're going to use empirical risk minimization, or maybe some regularized empirical risk minimization, to fit a function f which approximates the expected value of YT given capital T equals little t and X-- got my X. And then once you have that function, we're going to be able to use that to estimate the average treatment effect by just implementing now this formula here. So we're going to first take an expectation with respect to the individuals in the data set. So we're going to approximate that with an empirical expectation where we sum over the little n individuals in your data set. Then what we're going to do is we're going to estimate the first term, which is f of Xi comma 1, because that is approximating the expected value of Y1 given T equals 1 comma X. And we're going to approximate the second term, which is just plugging in now 0 for T instead of 1. And we're going to take the difference between them, and that will be our estimator of the average treatment effect. Here's a natural place to ask a question. One thing you might wonder is, in your data set, you actually did observe something for that individual, right. Notice how your raw data doesn't show up in this at all. Because I've done machine learning, and then I've thrown away the observed Y's, and I used this estimator.
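Here is a minimal sketch of that plug-in, covariate-adjustment estimator, reusing the toy data-generating process from earlier. The choice of a gradient-boosted regressor for f is arbitrary -- any regression model that takes (X, T) as input fits the recipe -- and none of this is the specific code used in the lecture.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 10_000

# Same toy, confounded data-generating process as before.
age = rng.normal(0, 1, size=n)
X = np.column_stack([age, rng.normal(0, 1, size=(n, 4))])
T = rng.binomial(1, 1 / (1 + np.exp(-1.5 * age)))
Y0 = 120 + 5 * age + rng.normal(0, 2, size=n)
Y1 = Y0 - (3 + 2 * age)
Y = np.where(T == 1, Y1, Y0)

# Step 1: fit f(x, t) on the factual data only.
f = GradientBoostingRegressor().fit(np.column_stack([X, T]), Y)

# Step 2: plug in t = 1 and t = 0 for *every* individual and average.
f1 = f.predict(np.column_stack([X, np.ones(n)]))
f0 = f.predict(np.column_stack([X, np.zeros(n)]))

cate_hat = f1 - f0            # per-individual estimated treatment effects
ate_hat = cate_hat.mean()     # the plug-in ATE estimate from the slide

print(f"estimated ATE: {ate_hat:.2f}  (true ATE in this simulation: {(Y1 - Y0).mean():.2f})")
```

The per-individual values in cate_hat are also estimates of the conditional average treatment effect, which is what you would look at if you cared about heterogeneity rather than just the average.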
So what you could have done-- an alternative formula, which, by the way, is also a consistent estimator, would have been to use the observed Y for whatever the factual is and the imputed Y for the counterfactual using f. That would have also been a consistent estimator for the average treatment effect. You could've done either. OK. Now, sometimes you're not interested in just the average treatment effect, but you're actually interested in understanding the heterogeneity in the population. Well, this also now gives you an opportunity to try to explore that heterogeneity. So for each individual Xi, you can look at just the difference between what f predicts given treatment one and what f predicts given treatment zero. And the difference between those is your estimate of your conditional average treatment effect. So, for example, if you want to figure out for this individual, what is the optimal policy, you might look to see is the CATE positive or negative, or is it greater than some threshold, for example? So let's look at some pictures. Now what we're using is we're using that function f in order to impute those counterfactuals. And now we have those observed, and we can actually compute the CATE. And averaging over those, you can estimate now the average treatment effect. Yep? AUDIENCE: How is f non-biased? DAVID SONTAG: Good. So where can this go wrong? So what do you mean by biased, first? I'll ask that. AUDIENCE: For instance, as we've seen in the paper like pneumonia and people who have asthma, [INAUDIBLE] DAVID SONTAG: Oh, thank you so much for bringing that back up. So you're referring to one of the readings for the course from several weeks ago, where we talked about using just a pure machine learning algorithm to try to predict outcomes in a hospital setting. In particular, what happens for patients who have pneumonia in the emergency department? And if you all remember, there was this asthma example, where patients with asthma were predicted to have better outcomes than patients without asthma. And you're calling that bias. But you remember, when I taught about this, I called it biased due to a particular thing. What's the language I used? I said bias due to intervention, maybe, is what I-- I can't remember exactly what I said. [LAUGHTER] I don't know. Make it up. Now a textbook will be written with bias by intervention. OK. So the problem there is that they didn't formalize the prediction problem correctly. The question that they should have asked is, for asthma patients-- what you really want to ask is a question of X and then T and Y, where T are the interventions that are done for asthmatics. So the failure of that paper is that it ignored the causal inference question which was hidden in the data, and it just went to predict Y given X, marginalizing over T altogether. So T was never in the predictive model. And said differently, they never asked counterfactual questions of what would have happened had you done a different T. And then they still used it to try to guide some treatment decisions. Like, for example, should you send this person home, or should you keep them for careful monitoring or so on? So this is exactly the same example as I gave in the beginning of the lecture, where I said if you just use a risk stratification model to make some decisions, you run the risk that you're making the wrong decisions because those predictions were biased by decisions in your data.
So that doesn't happen here because we're explicitly accounting for T in all of our analysis. Yep? AUDIENCE: In the data sets that we've used, like MIMIC, how much treatment information exists? DAVID SONTAG: So how much treatment information is in MIMIC? A ton. In fact, one of the readings for next week is going to be about trying to understand how one could manage sepsis, which is a condition caused by infection, which is managed by, for example, giving broad spectrum antibiotics, giving fluids, giving pressors and ventilators. And all of those are interventions, and all those interventions are recorded in the data so that one could then ask counterfactual questions from the data, like what would have happened to this patient had they received a different set of interventions? Would we have prolonged their life, for example? And so in an intensive care unit setting, most of the questions that we want to ask about, not all, but many of them are about dynamic treatments because it's not just a single treatment but really about a sequence of treatments responding to the current patient condition. And so that's where we'll really start to get into that material next week, not in today's lecture. Yep? AUDIENCE: How do you make sure that your f function really learned the relationship between T and the outcome? DAVID SONTAG: That's a phenomenal question. Where were you this whole course? Thank you for asking it. So I'll repeat it. How do you know that your function f actually learned something about the relationship between the input X and the treatment T and the outcome? And that really gets to the question of, is my reduction actually valid? So I've taken this problem and I've reduced it to this machine learning problem, where I take my data, and literally I just learn a function f to try to predict well the observations in the data. And how do we know that that function f actually does a good job at estimating something like the average treatment effect? In fact, it might not. And this is where things start to get really tricky, particularly with high dimensional data. Because it could happen, for example, that your treatment decision is only one of a huge number of factors that affect the outcome Y. And it could be that a much more important factor is hidden in X. And because you don't have much data, and because you have to regularize your learning algorithm, let's say, with L1 or L2 regularization or maybe early stopping if you're using a deep neural network, your algorithm might never learn the actual dependence on T. It might learn just to throw away T and just use X to predict Y. And if that's the case, you will never be able to infer these average treatment effects accurately. You'll have huge errors. And that gets back to one of the slides that I skipped, where I started out from this picture. This is the machine learning picture saying, OK, a reduction to machine learning is-- now you add an additional feature, which is your treatment decision, and you learn that black box function f. But this is where machine learning and causal inference start to differ because we don't actually care about the quality of predicting Y. We can measure your root mean squared error in predicting Y given your X's and T's, and that error might be low. But you can run into these failure modes where it just completely ignores T, for example. So T is special here. So really, the picture we want to have in mind is that T is some parameter of interest.
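A small, made-up illustration of the failure mode just described: when T is strongly correlated with a more predictive covariate and the model is regularized purely for predictive accuracy, the coefficient on T can be shrunk all the way to zero, so the implied treatment effect is badly wrong even though the prediction error looks fine.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n = 2_000

# One strong covariate that nearly determines treatment (heavy confounding),
# plus a pile of irrelevant noise features.
x1 = rng.normal(0, 1, size=n)
noise = rng.normal(0, 1, size=(n, 20))
T = (x1 + 0.3 * rng.normal(size=n) > 0).astype(float)

# True outcome model: a small treatment effect (+1) and a large x1 effect (+10).
Y = 10 * x1 + 1.0 * T + rng.normal(0, 1, size=n)

features = np.column_stack([T, x1, noise])
model = Lasso(alpha=0.5).fit(features, Y)

print("coefficient on T :", round(model.coef_[0], 3), " (true effect is 1.0)")
print("coefficient on x1:", round(model.coef_[1], 3), " (true coefficient is 10)")
```

With this amount of regularization the fit typically keeps x1, drops T entirely, and would therefore report a treatment effect of roughly zero -- a perfectly good predictor of Y, and a useless causal model.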
We want to learn a model f such that if we twiddle T, we can see how there is a differential effect on Y based on twiddling T. That's what we truly care about when we're using machine learning for causal inference. And so that's really the gap, that's the gap in our understanding today. And it's really an active area of research to figure out how do you change the whole machine learning paradigm to recognize that when you're using machine learning for causal inference, you're actually interested in something a little bit different. And by the way, that's a major area of my lab's research, and we just published a series of papers trying to answer that question. Beyond the scope of this course, but I'm happy to send you those papers if anyone's interested. So that type of question is extremely important. It doesn't show up quite as much when your X's aren't very high dimensional and where things like regularization don't become important. But once your X becomes high dimensional and once you want to start to consider more and more complex f's during your fitting, like you want to use deep neural networks, for example, these differences in goals become extremely important. So there are other ways in which things can fail. So I want to give you here an example where-- shoot, I'm answering my question. OK. No one saw that slide. Question-- where did the overlap assumption show up in our approach for estimating the average treatment effect using covariate adjustment? Let me go back to the formula. Someone who hasn't spoken today, hopefully. You can be wrong, it's fine. Yeah, in the back? AUDIENCE: Is it the version with the same age receiving treatment A and treatment B? DAVID SONTAG: So maybe you have an individual with some age-- we're going to want to be able to look at the difference between what f predicts for that individual if they got treatment A versus treatment B, or one versus zero. And let me try to lead this a little bit. And it might happen in your data set that for individuals like them, you only ever observe treatment one and there's no one even remotely like them for whom you observe treatment zero. So what's this function going to output then when you input zero for that second argument? Everyone say out loud. Garbage? Right? If in your data set you never observed anyone even remotely similar to Xi who received treatment zero, then this function is basically undefined for that individual. I mean, yeah, your function will output something because you fit it, but it's not going to be the right answer. And so that's where this assumption starts to show up. When one talks about the sample complexity of learning these functions f to do covariate adjustment, and when one talks about the consistency of these arguments-- for example, you'd like to be able to make claims that as the amount of data grows to, let's say, infinity, that this is the right answer-- gives you the right estimate. So that's the type of proof which is often given in the causal inference literature. Well, if you have overlap, then as the amount of data goes to infinity, you will observe someone, like the person who received treatment one, you'll observe someone who also received treatment zero. It might have taken you a huge amount of data to get there because treatment zero might have been much less likely than treatment one. But because the probability of treatment zero is not zero, eventually you'll see someone like that.
And so eventually you'll get enough data in order to learn a function which can extrapolate correctly for that individual. And so that's where overlap comes in in giving that type of consistency argument. Of course, in reality, you never have infinite data. And so these questions about trade-offs between the amount of data you have and the fact that you never truly have empirical overlap with a small amount of data, and answering when can you extrapolate correctly despite that, is the critical question that one needs to answer, but is, by the way, not studied very well in the literature because people don't usually think in terms of sample complexity in that field. That's where computer scientists can start really to contribute to this literature and bring things that we often think about in machine learning to this new topic. So I've got a couple of minutes left. Are there any other questions, or should I introduce some new material in one minute? Yeah? AUDIENCE: So you said that the average treatment effect estimator here is consistent. But does it matter if we choose the wrong-- do we have to choose some functional form from the features to the effect? DAVID SONTAG: Great question. AUDIENCE: Is it consistent even if we choose a completely wrong function or formula? DAVID SONTAG: No. AUDIENCE: That's a different thing? DAVID SONTAG: No, no. You're asking all the right questions. Good job today, everyone. So, no. If you walk through that argument I made, I assumed two things. First, that you observe enough data such that you can have any chance of extrapolating correctly. But then implicit in that statement is that you're choosing a function family which is powerful enough that it can extrapolate correctly. So if your true function is-- if you think back to this figure I showed you here, if the true potential outcome functions are these quadratic functions and you're fitting them with a linear function, then no matter how much data you have you're always going to get wrong estimates because this argument really requires that you're considering more and more complex non-linearities as your amount of data grows. So now here's a visual depiction of what can go wrong if you don't have overlap. So now I've taken out-- previously, I had one or two red points over here and one or two blue points over here, but I've taken those out. So in your data all you have are these blue points and those red points. So all you have are the points, and now one can learn as good functions as you can imagine to try to, let's say, minimize the mean squared error of predicting these blue points and minimize the mean squared error of predicting those red points. And what you might get out is something-- maybe you'll decide on a linear function. That's as good as you could do if all you have are those red points. And so even if you were willing to consider more and more complex hypothesis classes, here, if you tried to consider a more complex hypothesis class than this line, you'd probably just be overfitting to the data you have. And so you decide on that line, which, because you had no data over here, you don't even know that it's not a good fit to the data. And then you notice that you're getting completely wrong estimates. For example, if you asked about the CATE for a young person, it would have the wrong sign over here because the two lines flipped. So that's an example of how one can start to get errors.
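Here is a sketch of the picture just described, with invented quadratic potential outcomes, nearly deterministic age-based treatment assignment (so there is essentially no overlap at the extremes of age), and a separate linear fit to each arm. With these made-up functions, the extrapolated CATE at the young and old ends of the range comes out with the wrong sign, while the estimate in the middle, where the two arms actually overlap, is roughly right.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4_000

age = rng.uniform(20, 80, size=n)

# Invented quadratic potential outcomes (cf. the blood-pressure figure):
def y0_fn(a):   # control: blood pressure is worst for middle-aged people
    return 150 - 0.03 * (a - 50) ** 2

def y1_fn(a):   # treated: worst for the very young and the very old
    return 125 + 0.02 * (a - 50) ** 2

# Nearly deterministic assignment: young people almost always get treatment 0,
# older people almost always get treatment 1 -> very poor overlap.
T = rng.binomial(1, 1 / (1 + np.exp(-(age - 50) / 2.0)))
Y = np.where(T == 1, y1_fn(age), y0_fn(age)) + rng.normal(0, 2, size=n)

# Fit a separate *linear* model to each arm (misspecified, and forced to extrapolate).
b1 = np.polyfit(age[T == 1], Y[T == 1], deg=1)
b0 = np.polyfit(age[T == 0], Y[T == 0], deg=1)

for a in (25, 50, 75):
    cate_hat = np.polyval(b1, a) - np.polyval(b0, a)
    cate_true = y1_fn(a) - y0_fn(a)
    print(f"age {a}: estimated CATE {cate_hat:+7.1f}   true CATE {cate_true:+7.1f}")
```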
And when we begin on Thursday's lecture, we're going to pick up right where we left off today, and I'll talk about this issue a little bit more in detail. I'll talk about how, if one were to learn a linear function, one could actually interpret the coefficients of that linear function in a causal way, under the very strong assumption that the two potential outcomes really are linear. So that's what we'll return to on Thursday.
PETER SZOLOVITS: So today's topic is workflow, and this is something that-- a topic that I didn't realize existed when I started working in this area, but I've had my nose ground into it for many decades. And so finally, it has become obvious to me that it's something to pay attention to. So here's an interesting question. Suppose that your goal in the kind of work that we're doing in this class is to improve medical care-- not an unreasonable goal. So how do you do it? Well, we had an idea back in the 1970s when I was getting started on this, which was that we wanted to understand what the world's best experts did and to create decision support systems by encapsulating their knowledge about how to do diagnosis, how to do prognosis and treatment selection, in order to improve the performance of every other doctor who was not a world class expert by allowing the world class expertise captured in a computer system to help people figure out how to do better-- so to make them more accurate diagnosticians, more efficient therapists, et cetera. And the goal here was really to bring up the average performance of everybody in the health care system. So we used to say things like, bring everybody practicing medicine closer to the level of practice of the world class experts. Now, that turned out not to be what was important. And so there was another idea that came along a little bit later that said, well, it's not really so much the average performance of doctors that's bad. It's the subaverage performance that's really terrible. And so if your subaverage performance leads to your patients dying, but your above average performance only makes a moderate difference in their outcomes, then it's clearly more important to focus on the people who are the worst doctors and to get them to act in a better way. And thus was born the idea of a protocol that says, let's treat similar patients in similar ways. And the value of that is to reduce the variance-- so improve average versus reduce variance. So which of these is better? Well, it depends on your loss function. So as I was suggesting, if your loss function is asymmetric, so that doing badly, or doing below average, is much worse than doing above average is better, then this protocol idea of reducing variance is really important. And this is pretty much what the medical system has adopted. So I wanted to try to help you visualize this. Suppose that on some arbitrary scale of 0 to 8, we have a usual normal distribution of-- on the left-- the base behaviors. So this is how people, on average, normally behave-- we assume that there's something like a normal distribution. So here is a world class expert whose performance is up at 6 or 7 and here's the dud of a doctor whose performance is down between 0 and 1. And the average doctor is just shy of 4. So here are two scenarios. Scenario one is that we improve these guys' performance by just a little bit. So we improve it by 0.1 performance points, I think is what I've done in this model-- versus another approach, which is, suppose we could cut down the variance dramatically so that this same normal distribution becomes narrower. Its average is still in exactly the same place, but now there are no distant outliers. So there aren't doctors who perform a lot worse, and there aren't doctors who perform a lot better either. Well, what happens in that case? Well, you have to look at the cost function.
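A minimal sketch of the kind of calculation being set up here. It assumes a normal distribution of performance on the 0-to-8 scale and an exponentially decaying cost of the shape described in the next paragraph; the particular parameters are invented, so the absolute numbers will not match the ones quoted from the slide, but the qualitative comparison between a small mean shift and a variance reduction is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def average_cost(mean, sd):
    """Average cost when cost decays exponentially with performance (1 at 0, near 0 at 8)."""
    performance = np.clip(rng.normal(mean, sd, size=n), 0, 8)
    return np.exp(-performance).mean()

base     = average_cost(mean=4.0, sd=1.3)   # the base distribution of doctors
shifted  = average_cost(mean=4.1, sd=1.3)   # scenario 1: everyone improves by 0.1
narrowed = average_cost(mean=4.0, sd=0.5)   # scenario 2: same mean, much smaller variance

print(f"base cost:            {base:.4f}")
print(f"mean shifted by 0.1:  {shifted:.4f}")
print(f"variance reduced:     {narrowed:.4f}")
```

With these made-up parameters the variance-reduced scenario comes out cheapest, which is the same qualitative conclusion as on the slide.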
So if you have a cost function like this that says that somebody performing at the 0 level has a cost of 1, whereas somebody performing at the 8 level has a cost of almost 0, and it's exponentially declining like this, so that the average performance has a much lower cost than the average between the worst performance and the best performance. So this suggests that, if you could bunch people into this region of performance, your overall costs would go down. And, in fact-- this is a purely hypothetical model that I've built-- but if you do the calculations, you discover that for the base distribution, here is the distribution of costs. For the slightly improved distribution, you get a cost, which is 1,694 versus 781, again, in arbitrary units. But if you manage to narrow the distribution, you can get the total cost down to less than what you do by improving the average. Now, this is not a proof, but this is the right idea. The proof is probably in the fact that medical systems have adopted this, and have decided that getting all doctors to behave more like the average doctor is the best practical way of improving medical care. Well, how do we narrow the performance distribution? So one way is by having guidelines and protocols where you have some learned body who prescribes appropriate methods to diagnose and treat patients. So what happens is, for example, the article here from November of 2018, a report of the American College of Cardiology and the American Heart Association Task Force on Clinical Practice Guidelines, and this has been adopted by this cornucopia of three and four letter abbreviated organizations. And it's a guideline on the management of blood cholesterol. So as you know, having high cholesterol is dangerous. It can lead to heart attacks and strokes, and so there is a consensus that it would be good to lower that in people. So these guys went about this by gathering together a bunch of world experts and saying, well, how do we do this? What do we promulgate as the appropriate way to care for patients with this condition? And the first thing they did is they came up with a color coded notion of how strong a certain recommendation should be, and another color coded, or shaded, notion of the level of certainty in that recommendation. So, for example, if you say something is in class 1, so it's a strong recommendation, then you use words like is recommended, or is indicated, useful, effective, beneficial, should be performed, et cetera. If it's in class 2, where the benefit is much greater than the risk, then you say things like it's reasonable, it can be useful, et cetera. If the benefit is maybe equal to or a little bit better than the risk, you say waffle words, like might be reasonable, may be considered. If there is no benefit, in other words, if it roughly equals the risk, then you say, it's not recommended. And if the risk is greater than the benefit, then you say things like it's potentially harmful, causes harm, et cetera. So if you were giving a recommendation on whether to spray disinfectant down your lungs, you might put that in red and say, this is not recommended. And then here, this shading coding is basically how good the evidence is for this recommendation. So the best evidence, level A, is high-quality evidence from multiple randomized controlled clinical trials, or meta-analyses of high-quality RCTs, or RCTs corroborated by high-quality registry studies.
Below level A, the evidence levels go down to level C, which is consensus of expert opinion based on clinical experience, but without any sort of formal analysis. So if you look at this particular document on cholesterol, it says, well, here are the recommendations on the measurement of LDL and non-HDL cholesterol. And they say here, the recommendation is class 1, and it's based on a B-NR level of evidence. And it says, in adults who are 20 years or older and not on lipid-lowering therapy, measurement of either a fasting or a non-fasting blood-- dot, dot, dot. So you can read this in the notes later. But notice that there are high-force recommendations and there are lower-force recommendations, and each recommendation is also shade-coded to tell you what the strength of evidence is behind it. Here's just another example. This is secondary atherosclerotic cardiovascular disease prevention. So this is for somebody who's already ill, and it's a bunch of recommendations. If you're 75 years of age or younger with clinical atherosclerotic cardiovascular disease, then high-intensity statin therapy should be initiated or continued with the aim of achieving a 50% or greater reduction in LDL-C, et cetera. So again, a whole bunch of different recommendations. Once again, the strength of the recommendation-- by the way, this is just the first page of a couple of pages-- and the quality of evidence for it. So this is very much the way that learned societies are now trying to influence the practice of medicine in order to reduce the variance and get everybody to behave in a normal way. You've probably seen articles about Atul Gawande, who's a surgeon here in Boston, and he's gotten publicly famous for advocating checklists. He says, for example, if you're a surgeon, you should act like an airline pilot: before you take off in the airplane, you go through a sanity checklist to make sure that all the systems are working properly and all the switches are set correctly, which in a surgical setting would be things like having all the necessary equipment available and knowing what to do in various potential emergencies, et cetera. So here are their take-home messages, which make sense. I've abstracted these from the paper that has all of the details. Number one, you go, well, duh-- in all individuals, emphasize a heart-healthy lifestyle across the life course. That seems not terribly controversial. And in people who are already diseased, reduce low-density lipoprotein cholesterol with high-intensity statin therapy. And in very high risk ASCVD, use a threshold of 70 milligrams per deciliter, et cetera. So these are the summary recommendations. And the hope is that doctors reading these sorts of articles come away from them convinced and will remember that they're supposed to act this way when they're interacting with their patients. This is a flow chart, again abstracted from that paper, which says, for everybody, you should emphasize a healthy lifestyle. And then depending on your age and depending on what your estimate of lifetime risk is, you wind up in different categories. And these different categories have different recommendations for what you ought to do with your patients. This is for secondary prevention. So it's a similar flow chart for people who are already diseased and not just at risk.
And then for people at very high risk for future events, which is defined by these histories and these high-risk conditions, these are the people who fall into that second flow chart and should be treated that way. Now, by the way, I didn't make a poll, so I'll give you the answer. But it's interesting to ask. So when papers like this get published, how well do doctors actually adhere to these? And the answer turns out to be not very well, and it takes many, many years before these kinds of recommendations are taken up by the majority of the community, so even very, very uncontroversial recommendations. For example, I think 20 years ago there was a recommendation that said that anybody who's had a heart attack should be treated, even if they're now asymptomatic, with beta blockers. Because in various trials, they showed that there was a 35% reduction in repeat heart attacks as a result of this treatment. It took, I think, over a dozen years before most doctors were aware of this and started making that kind of recommendation to their patients. There's something called the AHRQ, the Agency for Health Research and Quality. And until the current administration, they ran a national guideline clearinghouse that contained myriad of these guidelines, published by different authorities, and was available for people to download and use. There's been an attempt by Guideline Central to take over some of these roles since the government shutdown the government run one, and they have about 2,000 guidelines that are posted on their site. And these are some of the examples. So risk reduction of prostate cancer with drugs or nutritional supplements, stem cell transplantation in multiple myeloma, stem cell transplantation in myelodysplastic syndromes and acute myeloid leukemia, et cetera. And then they also publish a bunch of risk calculators that say-- I don't know what the 4T score is for heparin-induced thrombocytopenia-- but there are tons of these as well. So there's a clearinghouse of these things. And you, as a practicing doctor, can go to these. Or your hospital can decide that they're going to provide these guidelines to their doctors, and either encourage, or in some cases, coerce them to use the guidelines in order to determine what their activity is. Now, notice that this is a very top-down kind of activity. So it's typically done by these learned societies that bring together experts to cogitate on what the right thing to do is, and then they tell the rest of the world how to do it. But there's also a kind of bottom-up activity. So there is something called a "care plan." Now, a care plan is really a nursing term. So if you hang out at a hospital, the thing you discover is that the doctors are evanescent. They appear and disappear. They're like elementary particles, and they're not around all the time. The people who are actually taking care of you are the nurses. And so the nurses have developed a set of methodologies for how to ensure that they take good care of you, and one of them is the development of these care plans. And then what clinical pathways are is an attempt to take the care plans that nurses use in taking care of individuals and to generalize from those and say, well, what are the typical ways in which we take care of patients in a particular cohort? So I'm going to talk a little bit about that, and one of the papers I gave you as an optional reading for today is about cow paths, which are these attempts to build generalizations of care plans. 
So this is a care plan from the Michigan Center for Nursing, which is an educational organization that tries to help nurses figure out how to be good nurses. I was very amused when I was looking for this. I ran across a video, which is some experienced nurse talking about how you build these care plans. And she sort of says, well, when you're in nursing school, you learn how to build these very elaborate carefully constructed care plans. When you're actually practicing as a nurse, you'll never have time to do this. And so you're going to do a rough approximation to this. And don't worry about it. But for now, satisfy your professors by doing these exercises correctly. So take a look at this. So there are a bunch of columns. The leftmost one says assessment. So this is objective, subjective, and medical diagnostic data. So the objective data is this patient has gangrene-infected left foot-- not a good thing, an open wound, et cetera, et cetera. Subjective data, the patient said the pain is worse when walking and turning. She dreads physical therapy, and she wishes she did not have to be in this situation-- surprise. But that's definitely subjective. You can't see external evidence of that. The nursing diagnosis is that this patient has impaired tissue integrity in reference to the wound and the presence of an infection. Now, that diagnosis actually comes with a kind of guideline about how to make that diagnosis. In other words, in order to be able to put that down on the care plan, she has to make sure that characteristics of the patient satisfy certain criteria which are the definition of that diagnosis. The patient outcomes-- so this is the goals that the nurse is trying to achieve. And notice, there are five goals here. One is that the patient will report any altered sensation of pain at the tissue impairment between January 23 and 24. So this is a very specific goal. It says, the patient will tell me that they feel better, that there's a change in their feeling in their infected left foot. They will understand the plan to heal tissue and prevent injury. So there's a patient education component. They will describe measures to protect and heal the tissue, including wound care by 124. So notice, this is the patient describing to you what you are planning to do for them, in other words, demonstrating an understanding of what the plan is and what's likely to happen with them. Experience a wound decrease that decreases in size and has increased granulation tissue, and achieve functional pain goal of 0 by 124 per the patient's verbalization. So when they come in and they ask you on that pain scale, are you at a 0, or a 10, or somewhere in between, the goal is that the patient will say, I'm at a 0, in other words, no pain. Now, what are the interventions? Well, these are the things that the nurse plans to do in order to try to achieve those goals. And then the rationale is an explanation of why it's reasonable to expect those interventions to achieve those goals. And the evaluation of outcomes says, what criteria or what are the actual outcomes for what we're trying to achieve? So that gets filled in later, obviously, then when the plan is made. So if you look at a website like this, there are templated care plans for many, many conditions. You can see that I'm only up to C in an A to Z listing from this one website, and there are plenty of others. But there is an admission care plan, adult failure to thrive, alcohol withdrawal, runny nose, altered cardiac output, amputation. 
I don't know what an anasarca is-- anemia, angina, anticoagulant care, et cetera. So there are tons of different conditions that different patients fall into, and this is a way of trying to list the template care plans. Now, this paper is kind of interesting, by Yiye Zhang and colleagues. And what they did is they said, well, let's take all these care plans and let's try to build a machine learning system that learns what are the typical patterns that are embedded in those care plans. But they didn't start with the plans. This is retrospective analysis. So what they started with is the actual records of what was done to each patient. And so the idea is that you get treatment data from the electronic health record. Then you identify patient subgroups from that data, and then you mine for common treatment patterns. And you have medical experts evaluate these, and these then become clinical pathways, which are this generalization of the care plans to particular subpopulations of patients. So the idea is that they define a bunch of abstractions. So they say, look, an event is a visit. So, for example, for an outpatient, anything that happens to you during one visit to a doctor or to a hospital. So it's a set of procedures, a set of medications, a set of diagnoses. And by the way, they were focusing on people with kidney disease as the target population that they were looking at. So then they say, OK, individual events are going to be abstracted into these supernodes, which capture a unique combination of associations of events associated with some visit. So you might worry that this is going to be combinatorial, because there are many possible combinations of things. And that is, in fact, a bit of a problem, I think, in their analysis. So now, you have these supernodes, and then each patient has a visit sequence, which is a time-ordered list of the supernodes. So every time you go see your doctor, you have one new supernode. And so you have a time series of these. And then they do the following thing. They say, gee, when we talk to our doctors and nurses, they tell us that they care mostly about what happened at the last visit that the patient had. But they also care a little bit less, but they still care about what happened at the visit previous to that, but not so much about history going further back. And so they say, well, in a Markov chain, we only have things depend on the last node in the Markov chain. So let's change the model here so that we will combine pairs of visits into nodes so that each node in the Markov chain will represent the last two visits that the patient had. So this could, again, cause some combinatorial problems. But here's the image that they come up with. So there are individual items. Is it a hospital visit, an office visit, a visit for the purpose of education? Are you in chronic kidney disease stage four? Was an ultrasound done? Were you given ACE inhibitors? Were you given diuretics, et cetera? So these are all the data that we mentioned. They treat that as a bag. And then they say, OK, we're going to identify all the bags that have the same exact content. An asterisk, they didn't look, for example, at the dose of medication that you were given, only which medication it was. So there are some collapsing that way. Then the supernodes are these combinations where we say, OK, you had a particular purpose, a particular diagnosis, a particular set of interventions, a particular set of procedures. 
And again, we list all possible combinations of those, and then that sequence represents your sequence. These are aggregated into supernodes. That represents your visit sequence, and then these super pairs are this hack to let you look two steps back in the Markov chain. And so they wind up with about 3,500 different of these super pair nodes. So it is combinatorial, but it's not terribly combinatorial in their data. They then compute the maximum of the length of common subsequences between each pair of visit sequences. So they're going to cluster these sequences. They define a distance function that says that the more they share a common sequence, the less distant they are from each other. And the particular distance function they used is the length of each sequence minus twice the length of the common subsequence, the longest common subsequence, which seems pretty reasonable. And then hierarchical clustering into distinct subgroups, they came up with 31 groups for this group of patients, and here they are. And what you see is that some of them don't differ a whole lot from each other. So, for example, these two differ only in that the patient got some medication and diuretics in one case and just that medication in the other case. So these are-- it is a hierarchical cluster, and the things lower down in the clustering are probably fairly close to each other. Nevertheless, what they're able to do, then, is to estimate a transition matrix among these supernode pair states, and they can look at different trajectories depending on the degree of support for the data. So you can set different thresholds on how many cases have to be in a particular state in order for you to take transitions to or from that state seriously. One of the critiques I would make of the study is that they had way too little data, and so many of the groups that they came up with had relatively small numbers of patients in them, which is unfortunate. Now, once you have these transition matrices, then you can say, OK, for cluster 29, which was this cluster, so there were a grand total of 14 patients in this cluster. They were all at chronic kidney disease stage 4, so quite severe. They were all hypertensive. They were all on ACE inhibitors and statins, and everybody in that group had that categorization. So if you look there then you can say, OK, for all the things we know about that patient, what are the probabilistic relationships between them? And what we find is that-- man, I can't read these. So these nodes imply other nodes, and the strength of the arrows is proportional to their width. And so this is a representation of everything that we've learned about that cluster, but remember, only from those 14 patients. So I'm not sure I would take this to the bank and rely on it too intensely. But they then, by hand, abstract it and say, well, let's look at an interpretation of this. And so if they look in typical patterns that they see in that cluster, they say, hmm, we see an office visit in which the patient is on these medications and has these procedures. Then they're hospitalized. Then there's another-- let's see. No, I'm sorry. Yeah, yellow node is an office visit. So they're hospitalized. They then get an education visit, so that's typically with the nurse or nurse practitioner to explain to them what they ought to be doing. They have another hospital-- they have another office visit. They have a hospital visit. They have another hospital visit, and then they die. 
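As a rough sketch of that pipeline-- with made-up visit data, not the actual implementation from the Zhang paper-- you can build the supernode sequences, compute the longest-common-subsequence distance, and hand the result to an off-the-shelf hierarchical clustering routine:

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

# Each visit is a bag of codes (purpose, diagnoses, meds, procedures);
# identical bags collapse to the same "supernode" (dose and so on ignored).
patients = {
    "p1": [{"office", "CKD4", "ACEi"}, {"hospital", "CKD4", "diuretic"}, {"office", "CKD4", "ACEi"}],
    "p2": [{"office", "CKD4", "ACEi"}, {"office", "CKD4", "ACEi"}, {"hospital", "AKI"}],
    "p3": [{"education"}, {"office", "CKD4", "statin"}, {"hospital", "CKD4", "diuretic"}],
}
sequences = {pid: [frozenset(v) for v in visits] for pid, visits in patients.items()}

def lcs_len(a, b):
    """Length of the longest common subsequence of two supernode sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def distance(a, b):
    # The paper's idea: the more sequence two patients share, the closer they are.
    return len(a) + len(b) - 2 * lcs_len(a, b)

ids = list(sequences)
D = np.array([[distance(sequences[i], sequences[j]) for j in ids] for i in ids], dtype=float)
clusters = fcluster(linkage(squareform(D), method="average"), t=2, criterion="maxclust")
print(dict(zip(ids, clusters)))
```

Within each resulting cluster, tabulating transitions between consecutive states (or between consecutive pairs of visits, to get the two-visits-back dependence) is what yields the transition matrices described above.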
That trajectory ending in death is, unfortunately, not an atypical pattern that you see in patients who are at a pretty severe stage of chronic kidney disease. And we don't know from this diagram how long this process takes. So I have some questions. There are a lot of subgroups, some of them fairly similar to others, and they have between 10 and 158 patients in each subgroup. I would feel much better if they had more like 1,000, or 15,000, or 150,000 patients in each group. I would find the representations that they came up with much more believable. And the other problem is that even within an individual subgroup, you can find very different patterns. So, for example, here is a pattern where, again, a person has a couple of office visits. They go to the hospital. Or they go to the hospital twice with slightly different-- yes. So this person at this point is in acute kidney injury. So you can get there either directly from the office visit or from an earlier hospitalization, and then they die. And so this is part of that pattern. But here's another pattern mined from exactly the same subgroup. Now, this subgroup has 122 patients in it, so there's a little bit more heterogeneity. But what you see here is that a patient is going back and forth between education visits and doctor's visits, back and forth between doctor's visits and hospitalizations, then a hospitalization, then another hospitalization, but they're surviving. So it's a little bit tricky. I think this is a good idea, but there are probably improvements possible on the technique being used here. And, of course, much more data would be very helpful in order to really delineate what's going on in these patients. Here's a similar idea that I was involved with. Jeff Klann did his PhD work at Regenstrief, which is a very well-known, very early adopter of computerized information systems in Indiana. And so he started off by saying, hmm, you know the Amazon recommendation system that says you just bought this camera lens, and other people who bought this camera lens also bought a cleaning kit and a battery that goes with that camera, and so on? So he said, why don't we apply that same idea to medical orders? And so he took the record of all the orders at Regenstrief, and he basically built an approximation to the Amazon recommendation system that said, hey, other doctors who have ordered the following set of tests have also ordered this additional test that you didn't order. Maybe you should consider doing it. Or conversely, other doctors who have ordered this set of tests have never ordered this other one in addition. And so are you sure you really need it? So that was the idea. And what he did was he focused on four different clinical issues. One of them was emergency department visits for back pain; another was pregnancy, so labor and delivery; another was hypertension in the urgent visit clinic-- the urgent visit clinic is one of these non-emergency-department, cheaper, lower level of care, but still urgent care kinds of clinics that many hospitals have established in order to try to keep people who are not that sick out of the emergency department and in this lower-intensity clinic-- and then altered mental state in the intensive care unit. So people in the ICU are often medicated, and they become wacko, and so this is trying to take care of such patients. They used three years of encounter data from Regenstrief.
And for each domain, they limited themselves to the 40 most frequent orders, and, again, at low granularity-- so, for example, the drug but not the dose of the drug for medications-- and the 10 most frequent comorbidities or co-occurring diagnoses. So this is an example of a wisdom-of-the-crowd kind of approach that says, well, what your colleagues do is probably a good representation of what you ought to be doing. Now, what's an obvious pitfall of this approach? I'm just checking to see if you're awake. Yeah? AUDIENCE: Just reinforce whatever's [INAUDIBLE]. PETER SZOLOVITS: Yeah, if they're all bozos, they're going to train you to be a bozo too. And there's a lot of stuff in medicine that is not very well-supported by evidence, where, in fact, people have developed traditions of doing things a certain way that may not be the right way to do it. And this just reinforces that. On the other hand, it probably does reduce variance in the sense that we talked about at the beginning. And so, as a result, it may be a reasonable approach, if you're willing to tolerate some exceptions. My favorite story is that Semmelweis figured out that having a baby in a hospital in Vienna was extremely dangerous for the mother, because mothers would die of what was called "childbed fever," which was basically an infection. And Semmelweis figured out-- this was before Pasteur-- that maybe there was something being transmitted from one woman to the next that was causing this childbed fever, and, of course, he was right. And he did an experiment, where on his maternity ward, he had all of the younger doctors wash their hands with some sort of alcohol or something to kill whatever they were transmitting. And their death rate from this childbed fever dropped to almost 0. And he went to his colleagues and he said, hey, guys, we could really make the world a better place and stop killing women. And they looked at him, and they said, you know, these hands heal, they don't kill. Many of them were upper class or noblemen who had gone into this profession. The idea that somehow they were responsible for transmitting what turns out to be bacteria was just a non-starter for them. And Semmelweis wound up ending his days in a mental institution, because he went nuts. He was unable to change practice even though he had done an experiment to demonstrate that it worked. So this is a case where the wisdom of the crowd was not so good and led to bad outcomes. So like Amazon's recommendation system, this automates the learning of decision support rules. And what's attractive about it is that because it's induced from real data, it tends to deal with more complex cases than the sort of simple, stereotypical cases for which people can develop guidelines, for example, where they can anticipate what's going to happen in various circumstances. So he used a Bayesian network model whose nodes were diagnoses, possible orders, and evidence, which is the results from orders that were already completed. There's a system out of the University of Pittsburgh, called Tetrad, that implements a nice version of something called Greedy Equivalence Search, which is a faster way of searching through the space of Bayesian networks for an appropriate network that represents your data. So it's a highly combinatorial problem, and the cleverness in this is that it figures out classes of Bayesian networks that, by definition, would fit the data equally well.
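Setting the Bayesian network machinery aside for a moment, the underlying "clinicians who placed these orders usually also placed that one" idea can be illustrated with a deliberately crude conditional-frequency recommender. This is a hypothetical sketch of the flavor of the approach, not Klann's actual model, and the order names are made up:

```python
from collections import Counter
from itertools import combinations

# Hypothetical historical encounters: the set of orders placed in each visit.
history = [
    {"CBC", "BMP", "urine_hcg", "fetal_monitor"},
    {"CBC", "BMP", "fetal_monitor", "oxytocin"},
    {"CBC", "urine_hcg", "fetal_monitor"},
    {"CBC", "BMP", "oxytocin", "fetal_monitor"},
]

pair_counts = Counter()
item_counts = Counter()
for orders in history:
    item_counts.update(orders)
    pair_counts.update(combinations(sorted(orders), 2))

def suggest(current_orders, top_k=3):
    """Score not-yet-placed orders by how often they co-occur with what is already ordered."""
    scores = Counter()
    for candidate in item_counts:
        if candidate in current_orders:
            continue
        for placed in current_orders:
            pair = tuple(sorted((candidate, placed)))
            # rough estimate of P(candidate | placed) from co-occurrence counts
            scores[candidate] += pair_counts[pair] / item_counts[placed]
    return [item for item, _ in scores.most_common(top_k)]

print(suggest({"CBC", "BMP"}))
```

In the real system, the scores come instead from probabilistic inference in the learned network, conditioned on the diagnoses and results already in the chart.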
The search in Tetrad works over these equivalence classes rather than over individual networks, and so it gets a nice combinatorial reduction. And what Jeff found is, for example, in the pregnancy network, these are the nodes that correspond to various interventions and various conditions. And this is the Bayesian network that best fits that data. It's reasonably complicated. Here are some others. This is for the emergency department case. So you see that you have things like chest pain and abdominal pain as presenting diagnoses, and then you have various procedures, like an abdomen CT, or a pelvic CT, or a chest CT, or a head CT, or a basic metabolic panel, et cetera, and this gives you the probabilistic relationships between them. And so what they were able to do is to take this Bayesian network representation, and then if you lay a particular patient's data on that representation, that corresponds to fixing the value of certain nodes. And then you do Bayesian inference to figure out the probabilities of the unobserved nodes, and you recommend the highest probability interventions that have not yet been done. So it's a little bit like, if you remember, we talked about sequential diagnosis. This is a little bit in that spirit, but it's a much more complicated Bayesian network model rather than a naive Bayes model. And so the interface looks like this. It's called the Iterative Treatment Suggestions algorithm, and it shows the doctor the problems of the patient, the current orders, and the probability that you might ask to have any one of these additional orders done. And what they're able to show is that this does reasonably well. Obviously, it wouldn't have been published if they hadn't been able to show that. And so what you see is that, for example, the next order that's actually done in an inpatient pregnancy has a position of about fourth on the list produced by this Bayesian network formalism. So their criterion for judging this algorithm is, does it place the things that people actually do high on the recommended list of actions that you might consider doing? And you see that it's fourth, on average, in inpatient pregnancy, about sixth in the ICU, about sixth in the emergency department, and about fifth in the urgent care clinic. So that's pretty good, because that means that even if you're looking at an iPhone, there's enough screen real estate that it'll be on the so-called first page of Google hits, which is the only thing people ever pay attention to. And, in fact, they can show that the average list position tracks the order's rank by frequency, but that their model does a reasonably good job of keeping you within the first 10 or so for much of this range. I'm going to shift gears again. So Adam Wright, you've met. He was a discussant in one of our earlier classes. And Adam's been very active in trying to deploy decision support systems. And he had an interesting episode back in-- when was this-- 2016. So it must have been a little before 2016. He went to demonstrate this great decision support system that they had implemented at the Brigham, and he put in a fake case where an alert should have gone off for a patient who has been on a particular drug for more than a year and needs to have their thyroid stimulating hormone measured in order to check for a potential side effect of long-term use of amiodarone, as well as to have their ALT, a liver enzyme test, checked. So they needed both of those tests. He was demonstrating this wonderful system.
He put in a fake patient who had these conditions, and the alert didn't go off. So he goes, hmm, what's going on? And they went back, and they discovered that in 2009 the system's internal code for amiodarone had been changed from 40 to 70-99. Who knows why? But the rule logic in the system was never updated to reflect this change. And so, in fact, if you look at the history of the use of amiodarone-- by the way, it's an interesting graph. The blue dots are weekdays, and the black dots are weekends. So not a lot goes on in the hospital during the weekend. But what you see is that-- I don't know what happened before about the end of 2009. They probably weren't running that rule or something. But what you see is sort of a gradual increase in the use of this rule, and then you see a long decrease from 2010 up through 2013 when they discovered this problem. Now, why a decrease? I mean, it's not a sudden jump to 0. And the reason was that this came about-- first of all, it came about gradually, because the people who had had this drug before that change in the software had gotten the old code, which was still triggering the rule. It's just that as time went on, more and more people who needed the test had gotten the drug with its new code. And with that new code, it was no longer triggering the rule. And then this is the point at which they discovered the bug, and then they fixed it. Of course, it came right back up again. Oh. Well, I'll talk about some of the others as well. So this was the amiodarone case. So it fell suddenly, as some patients were taken off the drug and others were started with this new internal code. And as I said, the alert logic was fixed back in 2013. Yeah? AUDIENCE: So I don't know how hospital IT systems work, and it might vary from place to place. But is there ever a notion of like this computer needs to be updated for the software, but that one already got updated? Or are they all synced up so that they all get updated at the same time? PETER SZOLOVITS: They tend to all get updated at the same time. There are disasters that have happened in that updating process. Famously, the Beth Israel was down for about three days. Their computer system just crashed. And what they discovered is that they had this very complicated network in which there were cyclic dependencies in order to boot up different systems. So some system had to be up in order to let some other system be up, which had to be up in order to let the first system be up. And, of course, in normal operation, they never take down the whole system, and so nobody had discovered this until there was-- Cisco screwed them. There was some fix in the routers that caused everything to crash, and then they couldn't bring it back up again. And so that was a big panic. John Halamka, who's the CIO there, is a former student of mine. And after this all played out, I asked John, so what's the first thing you did when this happened? And he said, I sent a couple of panel trucks down to the Staples warehouse to buy pads of paper, which is pretty smart. So here's another example. This is lead screening. And so this was a case where there is a lead screening rule for two-year-olds. There is also one for one-, three-, and four-year-olds. And there was no change in screening for one-, three-, and four-year-olds, but the screening for two-year-olds went from 300 or 400 a day down to 0 for several years before they noticed it, and then went back up to the previous level. 
And they never did quite figure out what happened here, but something added two incomplete clauses to the rule having to do with gender and smoking status. But the clauses were incomplete, and so they were actually looking for the case of neither the gender nor the smoking status having been specified. So smoking status for a two-year-old, you could imagine, is not often specified, but gender typically is. And so the rule never fired because of that, and they have no idea how these changes were made. There's a complicated logging system that logs all the changes, and it crashed and lost its logging data. And it's a just so story. Chlamydia screen-- this was human error. And so they wound up-- they found this very quickly, because they had a two-month-old boy who had numerous duplicate reminders, including suggestions for mammograms, pap smears, pneumococcal vaccination, and cholesterol screening, and a suggestion to start the patient on various meds. So this was just a human error in revising the rule, and that one they found pretty quickly. So that's amusing. But what's interesting is these guys went on to say, well, how could we monitor for this in some ongoing fashion? And so they said, well, there's this notion of change point detection, which is an interesting machine learning problem, again. And so they said, well, suppose we built a dynamic linear model that includes seasonality, because we have to deal with the fact that a lot of stuff happens Monday through Friday and nothing happens on weekends? And so they created a model that says that your output is some function, f, of your inputs, plus some noise. The noise is Gaussian with some variance, capital V, and that x evolves according to some evolution that says it depends on the previous value of x, plus some other noise, which is also Gaussian. So that's the general sort of time series modeling approach that people often take. And then they said, well, we have to deal with seasonality. So what we're going to do is define a period, namely a week, and then we're going to separate out the states on different days of the week in order to give us the ability to model that seasonality. I worked on a different project having to do with outbreak detection for infectious diseases, and there the periodicity was a year, because things like the flu come in yearly cycles rather than in weekly cycles. And so that idea is pretty common. And then they built this multiprocess dynamic linear model that says, basically, imagine that our data is being generated by one of a set of these dynamic linear models. And so we have an additional state variable at each time that says which of the models is in control to generate the data at this point. And so if you have the set of observations up to some time, t, then you can compute the probability that model i is driving the generator at this point. And so you can have three basic models. You can have a model that says it's a stable model, in other words, what you expect is the steady state. So that would be the normal weekly variation in volume for any of these alerts. You can have a model which is an additive outlier. So that's something that says, all of a sudden, something happened, like that chlamydia screen or one of the other things that had a very quick blip. 
Or you can have a level shift change, like the change that happened when the alert rule for amiodarone stopped firing, because it went from one level to a very different level over a relatively short period of time. And then what you can do is calculate the probability of any of these models being in control at the next time, and that's called the change point score. And you can calculate this from the data that you're given. And of course, they have tons of data. It's a big hospital and lots of these alerts go on. And if you plot this, there's the data for a time series. So you see the weekly variation. But what you see is that the probability of the steady behavior is quite high except at certain points where it all of a sudden dips. And so those are places where you suspect that something interesting is going on. And similarly, the probability of a temporary offset goes up at these various points, and the probability of a level shift goes up at this point. And you can see that, indeed, there is a level shift from essentially 0 up to this periodic behavior in the original data sequence. And so they actually implemented this in the hospital, and so now you get not just alerts, but you get meta-alerts that say, this kid ought to be screened for their lead levels, but also the lead level screening rule hasn't fired as often as we expected it to fire. Yeah, so there are a lot of details in the paper that you can look up, if you're interested. And what they find is that, if you look at the area under the delay-versus-false-positive-rate curve-- so you're trading off how long it takes to be certain that one of these conditions has occurred against how often you cry wolf-- their algorithm does much better than a bunch of other things that they tried it against, which are earlier attempts to do this. And these are all highly statistically significant, so they got a nice paper out of it. In the remaining time, I wanted to talk about a number of other issues that really have to do with workflow. So we've talked about alerting, but there are an interesting set of studies about how these alerting systems actually work. So there was a cool idea from the Beth Israel Deaconess Hospital here in Boston where they said, well, what we really need to do is to escalate alerts. So, for example, it's quite typical in a hospital that, if you're a doctor and you have a patient whose blood you have just sent to the lab, and let's say their serum potassium comes back as 7 or 8, that patient is at high risk of going into a cardiac arrhythmia and dying. And so your pager, in those days, goes off, and you read this text message that says, Mr. Jones has a serum potassium of 8. You'd better look in on him. So what they did was very clever. They said, well, the problem is busy doctors might ignore this. And so we'll then start a countdown timer. And we'll say, did Dr. Smith actually come and look at Mr. Jones within 20 minutes? And if the answer is no, then they send the page to the doctor's boss that says, hey, we sent this guy a page, and within 20 minutes he didn't look in on the patient. And then they start another timer. And they say, if that boss doesn't respond within an hour, then they send a page to the head of the hospital saying, your infectious disease people-- or in this case, your endocrine people, or whatever-- are doing a lousy job, because they're not responding to these alerts.
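Returning to the change point monitor for a moment, here is a toy version of the idea: synthetic alert counts with weekly seasonality and an injected level shift, plus a far simpler detector than the multiprocess dynamic linear model in the paper. Everything in this sketch is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two years of daily alert counts: busy weekdays, quiet weekends,
# with the rule silently "breaking" (rate drops) on day 500.
days = np.arange(730)
weekday_rate = np.where(days % 7 < 5, 40.0, 5.0)
rate = weekday_rate * np.where(days < 500, 1.0, 0.2)
counts = rng.poisson(rate)

def level_shift_score(y, day_of_week, window=28):
    """Crude change score: compare the recent window to history, matching
    like weekdays with like weekdays (a stand-in for P(level-shift model))."""
    scores = np.zeros(len(y))
    for t in range(2 * window, len(y)):
        recent, hist = y[t - window:t], y[:t - window]
        dow, hist_dow = day_of_week[t - window:t], day_of_week[:t - window]
        diffs = [recent[dow == d].mean() - hist[hist_dow == d].mean() for d in range(7)]
        scale = hist.std() / np.sqrt(window / 7) + 1e-9
        scores[t] = abs(np.mean(diffs)) / scale
    return scores

scores = level_shift_score(counts, days % 7)
print("most suspicious day:", int(np.argmax(scores)))  # peaks once the window sits past the day-500 drop
```

The paper's version instead carries explicit generative models for the steady state, a temporary blip, and a level shift, and reports the posterior probability that each one is currently generating the data; that probability is what gets surfaced as a meta-alert.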
Now, how do you think the doctors liked this escalation scheme? Not much. And there is a real problem with overalerting. And there is no general rule that says how often you can bug the head of the hospital with an alert like this before he or she just says, well, turn off the damn thing, I don't want to see these. And clearly, if you set the thresholds at different places, you get different results. So, for example, I remember Tufts implemented a system like this back in the 1980s, but they would send a page on every order where any of the lab results were abnormal, and that was way too much. Because a lot of these tests generate 20 results, and normal is defined as the 95% confidence interval. So what are the chances that out of 20 tests-- which aren't really independent, but even so-- at least one of them is out of range? It's pretty nearly guaranteed for most patients. And so basically every test generated an alert to the doctor. And the doctors did threaten to kill the people who had implemented the system, and it got turned off. A system like this, if you set the threshold to be not abnormal, but life-threateningly abnormal, and if you set the rate and the time durations such that it's reasonable for people to respond to it, then maybe it can be acceptable. When we did this project on looking at how an emergency department could anticipate a flood of patients because it looked like flu season was starting, for example, the question we asked is, how many false alarms a month can you guys tolerate? And they thought about it. And the ED docs got together and said, three times a month you can cry wolf, because we really want to know when it actually happens. We'd rather be prepared, and we can tolerate a 10% error rate on this prediction. But I don't know what the tolerable rate is in this alerting domain. Another interesting study: patient messaging has become quite popular. I got a bunch of emails from my doctor today, because I had ordered a refill on some prescription, and he wanted to know how it's going, and blah, blah, blah. So the BI asked the question, what fraction of those messages are never read by the patients that they're sent to? Which is an important question, because if you're relying on that mode of communication as part of your workflow, you'd like it to be 0. It turned out only to be 3%, which is remarkably good. That means that most people are actually paying attention to those kinds of messages. Then I wanted to say a few words about the importance of communication and then finish up by mentioning some so far failed attempts at really good integration of all the different data sources. So as I said, the BI started in 1994 with a system that said, if you're taking a renally-excreted or a nephrotoxic drug, then we're going to warn people if there is a rising creatinine level, which is an indication that your kidneys are not functioning so well. Because, of course, if the drug is renally excreted, that means that if your kidneys are not excreting things at the rate they're supposed to, you're going to wind up building up the amount of drug in your body, and that can become toxic. So they saw a 21-hour, so almost a full day, reduction in response time from the medical staff given these alerts versus what happened before. That's remarkable. I mean, saving a day in responding to a condition like this is really quite an impressive result. And they also saw, in terms of clinical outcome, that the risk of renal impairment was reduced to about half of the preintervention level.
So that earlier response actually was saving people's kidney function by getting people to intervene earlier. I found it interesting that they said 44% of doctors found these alerts helpful, 28% found them annoying, but in a survey, 65% of them wanted them to continue to be used. Enrico Coiera is one of my heroes. He used to be in the UK. He's now in Australia. And he had this very deep insight back in the 1980s. He said, you know, all you computer guys who are treading on this medical field think that all of the action is about decision-making, but it's not. All of the action is really about communication, that health care is basically a team sport. And unless we spend much more time studying what goes on in communication, we're going to miss the boat. And then mostly, we didn't pay any attention to him, but he's kept at it. So he said, well, how big is the communication space? He cited a 1985 study that said that about 50% of requests for information are ones that people ask a colleague for, versus 26% that they look up in their own notes. So if a doctor is on rounds, walks into a patient's room and says, I want to know has this guy's temperature been going up or down, a quarter of the time he'll look at the notes, and half the time, he'll turn to the nurse and say, is this patient's temperature going up or down? So he says that's interesting. Paul Tang did a study in the '90s that said that in a clinic, about 60% of the time is spent talking among the staff, not doing anything else. Coiera and one of his colleagues said that almost 100% of non-patient-record information, in other words, the information that's not in the written health record, is exchanged by talking. That's almost tautological, because where else would you get it? And then Charlie Safran at the BI did a time and motion study, looking at, I think, nursing behavior, and found that about half their time was face-to-face communication, about 10% with electronic medical records, and also a lot of email, and voicemail, and paper reminders as ways of communicating among people. So here is that 1998 study by Coiera and Tombs. And they're looking at a consultant, the house officer, another consultant-- these are British titles, because this was done in Australia-- a nurse, et cetera. And they say, OK, among hospital staff-- I think this was in one shift, I believe, I should have had that on the slide-- this is the number of pages that they sent and received. So they range from 0 up to about 4. The number of telephone calls made and received-- this ranges from 0 up to 13. Oh, here's the length of observation. So this was over a period of about three hours for each of these subjects. And this is the total number of events. So think about it. In 3 and 1/2 hours, the senior house officer had 24 distinct communication events happen to that person. So that means, what, that's like 7-- yeah, like 7 an hour. So that's like 1 every 10 minutes, roughly. So it's an interrupt-driven kind of environment. Here's one particular subject that they looked at, three and a quarter hours of observation. This person spent 86% of their time talking. 31% was taken up with 28 interruptions. So even the interruptions were being interrupted. 25% was multitasking with two or more conversations. 87% was face-to-face or on a phone or a pager. So most of that is talk time. And 13% was dealing with computers and patient notes. So the communication function is really important.
And I don't have anything profound to say about it other than I'll put up a pointer to some of these papers. But the kinds of things they're considering are, well, we could introduce new channels, or new types of messages, or new communication policies that say, you know you may not interrupt the person who's taking care of patients while they're doing it, or something like that. And then moving from synchronous to asynchronous methods, like voicemail, or email, or Slack, or some modern communication mechanism. Let me skip by these. Next to the last topic, quickly, how do you keep from dropping the ball? So there are a lot of analyses that say that the biggest mistakes in health care are made not because somebody makes the wrong decision, but it's because somebody fails to make a decision. They just forget about something. They don't follow-up on something that they ought to. The patient is going along, and you think everything's OK, and you don't deal with it. So inspired partly by that escalation of pagers that I read about at the Beth Israel, I said, well, this sounds like what we really need is a workflow engine that's approximately a discrete event simulator. So has anybody built a discrete events simulator in this class? It's a fairly standard sort of programming problem, and it's useful in simulating all kinds of things that involve discrete events. And the idea is that you have a timeline, and you run down the timeline, and you execute the next activity that comes up. And that activity does something. It sends an email, or it shoots a rocket, or whatever field you're doing the simulation in. But most importantly, what it does is-- the last thing it does is it schedules something else to happen later in the timeline. So, for example, for something that happens once a day, when it happens, the task that runs schedules it to happen again the next day. And that means that it's going to be continually operating all the time. So the idea I had was that what you'd like to do is to say, if at some time, t, I have a task that says do x or asks z to do y, or both, then the last thing should be at some time in the future schedule another task that says, is y done? And if not, then go notify somebody or go remind somebody. And as far as I know, no hospital and no electronic record system has any capability like this, but I still think it's a terrific idea. And then I wanted to finish with a pointer to a problem that is still very much with us. So in 1994, some colleagues and I wrote this thing we called "The Guardian Angel Manifesto." And the idea was that we should engage patients more in their own care, because they can keep track of a lot of the things that systems didn't do a very good job of keeping track of. And the idea was that you would have a computational process that would start off at the time your parents conceived you and run until your autopsy after you died. And during this time, it would be responsible for collecting all the relevant health care data about you. So it would be your electronic medical record, but it would also be active. So it would help you communicate with your providers. It would help educate you about any conditions you have. It would remind you about things. It would schedule stuff for you, et cetera. So this was a nice science fiction vision. And in the mid-2000s, Adam Bosworth, who was a VP of Google, came to me. And he said, you know, I read your thing. It's a good idea. I'm going to do it. 
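Coming back to that workflow-engine idea for a second: the core of it fits in a toy discrete-event loop in which every delegated task schedules its own follow-up check. The tasks, names, and timings below are all hypothetical:

```python
import heapq, itertools

events = []                      # priority queue of (time, seq, action)
seq = itertools.count()          # tie-breaker so heapq never compares functions
done = set()                     # tasks that actually got completed

def schedule(time, action):
    heapq.heappush(events, (time, next(seq), action))

def order_lab(t, task="recheck Mr. Jones's potassium"):
    print(f"t={t:3d}: asked the team to {task}")
    # the crucial step: schedule a later check that the delegated task happened
    schedule(t + 20, lambda now: follow_up(now, task))

def follow_up(t, task):
    if task in done:
        print(f"t={t:3d}: '{task}' was done -- no action needed")
    else:
        print(f"t={t:3d}: '{task}' NOT done -- remind or escalate, then check again")
        schedule(t + 60, lambda now: follow_up(now, task))

schedule(0, order_lab)
# done.add("recheck Mr. Jones's potassium")   # uncomment to simulate timely completion

while events:
    t, _, action = heapq.heappop(events)
    if t > 200:                  # stop the toy simulation eventually
        break
    action(t)
```

A real EHR task engine would need persistence, routing to the right person, and sensible escalation policies, but the essential loop is just this: nothing delegated is ever left without a scheduled check.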
So Bosworth's team at Google started up this thing called Google Health, which was focused on being at least a personal health record. They did a pilot with 1,600 people at Cleveland Clinic, and then they went public as a beta. And three years later, they killed it. And they had a bunch of partners. So they had Allscripts, and Beth Israel, and Blue Cross of Massachusetts, and the Cleveland Clinic, and CVS, and so on. So they did their job of trying to connect to a bunch of important players. But, of course, they didn't have everybody. And so, for example, I, of course, immediately signed up for an account, and the only company that I had ever dealt with out of that set was Walgreens, where I had bought a skin cream one time for a skin rash. And so my total medical record consisted of a skin rash and a cream that I had bought to take care of it-- not very helpful. And nobody, other than these partners, could enter data automatically, which meant that you had to be even more anal-compulsive than I am in order to sit there and type your entire medical history into the system-- especially because, even if you did, nobody would ever look at it. Because if I go to my doctor and say, hey, Doc, here's the Google URL for my medical record, and here's the password by which you can access it, what do you think are the odds that they're actually going to look? AUDIENCE: 0. PETER SZOLOVITS: 0. So the thing was an absolute abject failure. And people keep trying it. And so far, nobody has figured out how to do it, but it's still a good idea. With that, we'll stop on workflow.
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019
7_Natural_Language_Processing_NLP_Part_1.txt
PETER SZOLOVITS: OK. So today and next Tuesday, we're talking about the role of natural language processing in machine learning in health care. And this is going to be a heterogeneous kind of presentation. Mainly today, I'm going to talk about stuff that happened or that takes advantage of methods that are not based on neural network representations. And on Tuesday, I'm going to speak mostly about stuff that does depend on neural network representations, but I'm not sure where the boundary is going to fall. I've also invited Dr. Katherine Liao over there, who will join me in a question and answer session and interview like we did a couple of weeks ago with David. Kat is a rheumatologist in the Partners HealthCare system. And you'll actually be hearing about some of the work that we've done together in the past before we go to the interview. So roughly, the outline of these two lectures is that I want to talk a little bit about why we care about clinical text. And then I'm going to talk about some conceptually very appealing, but practically not very feasible methods that involve analyzing these narrative texts as linguistic entities, as linguistic objects in the way that a linguist might approach them. And then we're going to talk about what is very often done, which is a kind of term spotting approach that says, well, we may not be able to understand exactly everything that goes on in the narratives, but we can identify certain words and certain phrases that are very highly indicative that the patient has a certain disease, a certain symptom, that some particular thing was done to them. And so this is a lot of the bread and butter of how clinical research is done nowadays. And then I'll go on to some other techniques. So here's an example. This is a discharge summary from MIMIC. When you played with MIMIC, you notice that it's de-identified. And so names and things are replaced with square brackets, star, star, star kinds of things. And here I have replaced-- we replaced those with synthetic names. So Mr. Blind isn't really Mr. Blind, and November 15 probably really isn't November 15, et cetera. But I wanted something that read like real text. So if you look at something like this, you see that Mr. Blind is a 79-year-old white white male-- so somebody repeated a word-- with a history of diabetes mellitus and inferior MI, who underwent open repair of his increased diverticulum on November 13 at some-- again, that's not the name of the actual place-- medical center. And then he developed hematemesis, so he was spitting up blood, and was intubated for respiratory distress. So he wasn't breathing well. So these are all really important things about what happened to Mr. Blind. And so we'd like to be able to take advantage of this. And in fact, to give you a slightly more quantitative version of this, Kat and I worked on a project back around 2010 where we were looking at trying to understand what are the genetic correlates of rheumatoid arthritis. And so we went to the research patient data repository of Mass General and the Brigham Partners HealthCare, and we said, OK, who are the patients who have been billed for a rheumatoid arthritis visit? And there are many thousands of those people, OK? And then we selected a random set of I think 400 of those patients. We gave them to rheumatologists, and we said, which of these people actually have rheumatoid arthritis? So these were based on billing codes. 
So what would you guess is the positive predictive value of having a billing code for rheumatoid arthritis in this data set? I mean, how many people think it's more than 50%? OK, that would be nice, but it's not. How many people think it's more than 25%? God, you guys are getting really pessimistic. Well, it also isn't. It turned out to be something like 19% in this cohort. Now, before you start calling, you know, the fraud investigators, you have to ask yourself why is it that this data is so lousy, right? And there's a systematic reason, because those billing codes were not created in order to specify what's wrong with the patient. They were created in order to tell an insurance company or Medicare or somebody how much of a payment is deserved by the doctors taking care of them. And so what this means is that, for example, if I clutch my chest and go, uh, and an ambulance rushes me over to Mass General and they do a whole bunch of tests and they decide that I'm not having a heart attack, the correct billing code for that visit is myocardial infarction. Because of course the work that they have to do in order to figure out that I'm not having a heart attack is the same as the work they would have had to do to figure out that I was having a heart attack. And so the billing codes-- we've talked about this a little bit before-- but they are a very imperfect representation of reality. So we said, well, OK. What if we insisted that you have three billing codes for rheumatoid arthritis rather than just one. And that turned out to raise the positive predictive value all the way up to 27%. So we go, really? How could you get billed three times? Right? Well, the answer is that you get billed for, you know, every aspirin you take at the hospital. And so for example, it's very easy to accumulate three billing codes for the same thing because you go see a doctor, the doctor bills you for a rheumatoid arthritis visit, he or she sends you to a radiologist to take an X-ray of your fingers and your joints. That bill is another billing code for RA. The doctor also sends you to the lab to have a blood draw so that they can check your anti-CCP titer. That's another billing code for rheumatoid arthritis. And it may be that all of this is negative and you don't actually have the disease. So this is something that's really important to think about and to remember when you're analyzing these data. And so we started off in this project saying, well, we need to get a positive predictive value more on the order of 95%, because we wanted a very pure sample of people who really did have the disease because we were going to take blood samples from those patients, pay a bunch of money to the Broad to analyze them, and then hopefully come up with a better understanding of the relationship between their genetics and their disease. And of course, if you talk to a biostatistician, as we did, they told us that if we have more than about 5% corruption of that database, then we're going to get meaningless results from it. So that's the goal here. So what we did is to say, well, if you train a data set that tries to tell you whether somebody really has rheumatoid arthritis or not based on just codified data. So codified data are things like lab values and prescriptions and demographics and stuff that is in tabular form. Then we were getting a positive predictive value of about 88%. 
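As a concrete illustration of how these positive predictive values get computed from a chart-reviewed sample-- with entirely made-up data, not the actual Partners counts-- the bookkeeping looks something like this:

```python
import pandas as pd

# One row per RA billing code for the chart-reviewed patients (all hypothetical).
codes = pd.DataFrame({
    "patient": ["a", "a", "a", "a", "b", "c", "c", "d"],
    "date": pd.to_datetime(["2009-01-05", "2009-01-06", "2009-01-20", "2009-03-02",
                            "2009-02-10", "2009-04-01", "2009-06-15", "2009-05-20"]),
})
# Gold-standard labels from the rheumatologists' chart review (also hypothetical).
gold = pd.Series({"a": True, "b": False, "c": True, "d": False})

def ppv(selected):
    """Fraction of the selected patients who truly have RA."""
    return gold[list(selected)].mean()

# Rule 1: at least one RA billing code.
one_code = set(codes["patient"])

# Rule 2: at least three codes, ignoring codes within a week of each other,
# so that one visit's cascade of bills doesn't count three times.
def distinct_weeks(dates):
    dates = sorted(dates)
    kept = [dates[0]]
    for d in dates[1:]:
        if (d - kept[-1]).days > 7:
            kept.append(d)
    return len(kept)

three_codes = {p for p, grp in codes.groupby("patient")["date"]
               if distinct_weeks(list(grp)) >= 3}

print("PPV with >=1 code :", ppv(one_code))      # patients a-d, 2 of 4 truly RA -> 0.5
print("PPV with >=3 codes:", ppv(three_codes))   # only patient "a" qualifies -> 1.0
```

Run against the roughly 500 reviewed charts, this is the same arithmetic that produced the 19% and 27% figures above.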
We said, well, how well could we do by, instead of looking at that codified data, looking at the narrative text in nursing notes, doctor's notes, discharge summaries, various other sources. Could we do as well or better? And the answer turned out that we were getting about 89% using only the natural language processing on these notes. And not surprisingly, when you put them together, the joint model gave us about 94%. So that was definitely an improvement. So this was published in 2010, and so this is not the latest hot off the bench results. But to me, it's a very compelling story that says there is real value in these clinical narratives. OK, so how did we do this? Well, we took about four million patients in the EMR. We selected about 29,000 of them by requiring that they have at least one ICD-9 code for rheumatoid arthritis, or that they've had an anti-CCP titer done in the lab. And then we-- oh, it was 500, not 400. So we looked at 500 cases, which we got gold standard readings on. And then we trained an algorithm that predicted whether this patient really had RA or not. And that predicted about 35-- well, 3,585 cases. We then sampled a validation set of 400 of those. We threatened our rheumatologists with bodily harm if they didn't read all those cases and give us a gold standard judgment. No, I'm kidding. They were actually really cooperative. And there are some details here that you can look at in the slide, and I had a pointer to the original paper if you're interested in the details. But we were looking at ICD-9 codes for rheumatoid arthritis and related diseases. We excluded some ICD-9 codes that fall under the general category of rheumatoid diseases because they're not correct for the sample that we were interested in. We dealt with this multiple coding by ignoring codes that happened within a week of each other so that we didn't get this problem of multiple bills from the same visit. And then we looked for electronic prescriptions of various sorts. We looked for lab tests, mainly RF, rheumatoid factor, and anti-cyclic citrullinated peptide, if I pronounced that correctly. And another thing we found, not only in this study but in a number of others, is it's very helpful just to count up how many facts are on the database about a particular patient. That's not a bad proxy for how sick they are, right? If you're not very sick, you tend to have a little bit of data. And if you're sicker, you tend to have more data. So these were the cohort selection. And then for the narrative text, we used a system that was built by Qing Zeng and her colleagues at the time-- it was called HITex. It's definitely not state of the art today. But this was a system that extracted entities from narrative text and did a capable job for its era. And we did this from health care provider notes, radiology and pathology reports, discharge summaries, operative reports. And we also extracted disease diagnosis notes, mentions from the same data, medications, lab data, radiology findings, et cetera. And then we had augmented the list that came with that tool with the sort of hand-curated list of alternative ways of saying the same thing in order to expand our coverage. And we played with negation detection because, of course, if a note says the patient does not have x, then you don't want to say the patient had x because x was mentioned. And I'll say a few more words about that in a minute. 
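To make the cohort selection and feature construction just described a little more concrete, here is a minimal sketch in Python. Everything specific in it is hypothetical-- the table layout, the column names, and the particular ICD-9 codes are placeholders, not the actual study code-- but the two ideas it illustrates come straight from the description above: ignore repeat codes that fall within a week of each other, and use the total number of recorded facts about a patient as a rough proxy for how sick (or how heavily documented) they are.

import pandas as pd

RA_ICD9 = {"714.0", "714.1", "714.2", "714.81"}  # hypothetical list of RA-related ICD-9 codes

def build_codified_features(icd9_events, all_facts):
    """icd9_events: DataFrame with columns [patient_id, code, date].
    all_facts: DataFrame with one row per recorded fact, columns [patient_id, date]."""
    ra = icd9_events[icd9_events["code"].isin(RA_ICD9)].copy()
    ra["date"] = pd.to_datetime(ra["date"])
    ra = ra.sort_values(["patient_id", "date"])

    # Ignore an RA code that occurs within a week of the previous one for the same
    # patient, so a single visit (doctor + radiology + lab) is not counted three times.
    gap = ra.groupby("patient_id")["date"].diff()
    ra = ra[gap.isna() | (gap > pd.Timedelta(days=7))]

    n_ra_codes = ra.groupby("patient_id").size().rename("n_ra_codes")

    # "Count up how many facts are in the database about this patient" -- a crude
    # proxy for overall healthcare utilization and severity.
    n_facts = all_facts.groupby("patient_id").size().rename("n_facts")

    return pd.concat([n_ra_codes, n_facts], axis=1).fillna(0).astype(int)

These codified counts would then sit alongside the NLP-derived mention counts as inputs to the classifier discussed next.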
So if you look at the model we built using logistic regression, which is a very common method, what you find is that there are positive and negative predictors, and the predictors actually are an interesting mix of ones based on natural language processing and ones that are codified. So for example, you have rheumatoid arthritis. If a note says the patient has rheumatoid arthritis, that's pretty good evidence that they do. If somebody is characterized as being seropositive, that's again good evidence. And then erosions and so on. But there are also codified things, like if you see that the rheumatoid factor in a lab test was negative, then-- actually, I don't know why that's-- oh, no, that counts against-- OK. And then various exclusions. So these were the things selected by our regularized logistic regression algorithm. And I showed you the results before. So we were able to get a positive predictive value of about 0.94. Yeah? AUDIENCE: On the previous slide, you said standardized regression coefficients. So why did you standardize? Maybe I got the words wrong. Just on the previous slide, the-- PETER SZOLOVITS: I think-- so the regression coefficients in a logistic regression are typically just odds ratios, right? So they tell you whether something makes a diagnosis more or less likely. And where does it say standardized? AUDIENCE: [INAUDIBLE]. PETER SZOLOVITS: Oh, regression standardized. I don't know why it says standardized. Do you know why it says standardized? KATHERINE LIAO: Couple of things. One is, when you run an algorithm on your own data set, you can't port it using the same coefficients because it's going to be different for each one. So we didn't want people to feel like they can just add it on. The other thing, when you standardize it, is you can see the relative weight of each coefficient. So it's kind of a measure-- not exactly-- of how important each coefficient was. That's our way of-- if you can see, we ranked it by the standardized regression coefficient. So NLP RA is up top at 1.11. So that has the highest weight. Whereas the other DMARDs add only a little bit more. PETER SZOLOVITS: OK. Yes? AUDIENCE: The variables like NLP RA, where it says rheumatoid arthritis in the text-- were these presence-of, or were they counts? PETER SZOLOVITS: Yeah. Assuming it's present. So the negation algorithm hopefully would have picked up if it said it's absent and you wouldn't get that feature. All right? So here's an interesting thing. This group-- I was not involved in this particular project-- said, well, could we replicate the study at Vanderbilt and at Northwestern University? So we have colleagues in those places. They also have electronic medical record systems. They also are interested in identifying people with rheumatoid arthritis. And so Partners had about 4 million patients, Northwestern had 2.2, Vanderbilt had 1.7. And we couldn't run exactly the same stuff because, of course, these are different systems. And so the medications, for example, were extracted from their local EMR in very different ways. And the natural language queries were also extracted in different ways because Vanderbilt, for example, already had a tool in place where they would try to translate any text in their notes into UMLS concepts, which we'll talk about again in a little while. So my expectation, when I heard about this study, is that this would be a disaster.
That it would simply not work because there are local effects, local factors, local ways that people have of describing patients that I thought would be very different between Nashville, Chicago, and Boston. And much to my surprise, what they found was that, in fact, it kind of worked. So the model performance, even taking into account that the way the data was extracted out of the notes and clinical systems was different, was fairly similar. Now, one thing that is worrisome is that the PPV of our algorithm on our data, the way we calculated PPV, they calculated PPV in this study, came in lower than the way we had done it when we found it. And so there is a technical reason for it, but it's still disturbing that we're getting a different result. The technical reason is described here. Here, the PPV is estimated from a five-fold cross validation of the data, whereas in our study, we had a held out data set from which we were calculating the positive predictive value. So it's a different analysis. It's not that we made some arithmetic mistake. But this is interesting. And what you see is that if you plot the areas under-- or if you plot the ROC curves, what you see is that training on Northwestern data and testing on either Partners or Vanderbilt data was not so good. But training on either Partners or Vanderbilt data and testing on any of the others turned out to be quite decent. Right? So there is some generality to the algorithm. All right, I'm going to switch gears for a minute. So this was from an old paper by Barrows from 19 years ago. And he was reading nursing notes in an electronic medical records system. And he came up with a note which has exactly that text on the left hand side in the nursing note. Except it wasn't nicely separated into separate lines. It was all run together. So what does that mean? Anybody have a clue? I didn't when I was looking at it. So here's the interpretation. So that's a date. IPN stands for intern progress note. SOB, that's not what you think it means. It's shortness of breath. And DOE is dyspnea on exertion. So this is difficulty breathing when you're exerting yourself, but that has decreased, presumably from some previous assessment. And the patient's vital signs are stable, so VSS. And the patient is afebrile, AF. OK? Et cetera. So this is harder than reading the Wall Street Journal because the Wall Street Journal is meant to be readable by anybody who speaks English. And this is probably not meant to be readable by anybody except the person who wrote it or maybe their immediate friends and colleagues. So this is a real issue and one that we don't have a very good solution for yet. Now, what do you use NLP for? Well, I had mentioned that one of the things we want to do is to codify things that appear in a note. So if it says rheumatoid arthritis, we want to say, well, that's equivalent to a particular ICD-9 code. We might want to use natural language processing for de-identification of data. I mentioned that before. You don't, MIMIC, the only way that Roger Mark's group got permission to release that data and make it available for people like you to use is by persuading the IRB that we had done a good enough job of getting rid of all the identifying information in all of those records so that it's probably not technically impossible, but it's very difficult to figure out who the patients actually were in that cohort, in that database. 
And the reason we ask you to sign a data use agreement is to deal with that residual, you know, difficult but not necessarily impossible because of correlations with other data. And then you have little problems like Mr. Huntington suffers from Huntington's disease, in which the first Huntington is protected health information because it's a patient's name. The second Huntington is actually an important medical fact. And so you wouldn't want to get rid of that one. You want to determine aspects of each entity. Its time, its location, its degree of certainty. You want to look for relationships between different entities that are identified in the text. For example, does one precede another, does it cause it, does it treat it, prevent it, indicate it, et cetera? So there are a whole bunch of relationships like that that we're interested in. And then also, for certain kinds of applications, what you'd really like to do is to identify what part of a textual record addresses a certain question. So even if you can't tell what the answer is, you should able to point to a piece of the record and say, oh, this tells me about, in this case, the patient's exercise regimen. And then summarization is a very real challenge as well, especially because of the cut and paste that has come about as a result of these electronic medical record systems where, when a nurse is writing a new note, it's tempting and supported by the system for him or her to just take the old note, copy it over to a new note, and then maybe make a few changes. But that means that it's very repetitive. The same stuff is recorded over and over again. And sometimes that's not even appropriate because they may not have changed everything that needed to be changed. The other thing to keep in mind is that there are two very different tasks. So for example, if I'm doing de-identification, essentially I have to look at every word in a narrative in order to see whether it's protected health information. But there are often aggregate judgments that I need to make, where many of the words don't make any difference. And so for example, one of the first challenges that we ran back in 2006 was where we gave people medical records, narrative text records from a bunch of patients and said, is this person a smoker? Well, you can imagine that there are certain words that are very helpful like smoker or tobacco user or something like that. But even those are sometimes misleading. So for example, we saw somebody who happened to be a researcher working on tobacco mosaic virus who was not a smoker. And then you have interesting cases like the patient quit smoking two days ago. Really? Are they a smoker or not? And also, aggregate judgment is things like cohort selection, where it's not every single thing that you need to know about this patient. You just need to know if they fit a certain pattern. So let me give you a little historical note. So this happened to be work that was done by my PhD thesis advisor, the gentleman whose picture is on the slide there. And he published this paper in 1966 called English for the Computer in the Proceedings of the Fall Joint Computer Conference. This was the big computer conference of the 1960s. And his idea was that the way to do English, the way to process English is to assume that there is a grammar, and any English text that you run across, you parse according to this grammar. And that each parsing rule corresponds to some semantic function. And so the picture that emerges is one like this. 
Where if you have two phrases and they have some syntactic relationship between them, then you can map each phrase to its meaning. And the semantic relationship between those two meanings is determined by the syntactic relationship in the language. So this seems like a fairly obvious idea, but apparently nobody had tried this on a computer before. And so Fred built, over the next 20 years, computer systems, some of which I worked on that tried to follow this method. And he was, in fact, able to build systems that were used by researchers in areas like anthropology, where you don't have nice coded data and where a lot of stuff is in narrative text. And yet he was able to help one anthropologist that I worked with at Caltech to analyze a database of about 80,000 interviews that he had done with members of the Gwembe Tonga tribe, who lived in the valley that is now flooded by the Zambezi River Reservoir on the border of Zambia and Zimbabwe. That was fascinating. Again, he became very well known for some of that research. In the 1980s I was amused to see that SRI-- which doesn't stand for anything, but used to stand for Stanford Research Institute-- built a system called Diamond Diagram, which was intended to help people interact with the computer system when they didn't know a command language for the computer. So they could express what they wanted to do in English and the English would be translated into some semantic representation. And from that, the right thing was triggered in the computer. So these guys, Walker and Hobbs, said, well, why don't we apply this idea to natural language access to medical text? And so they built a system that didn't work very well, but it tried to do this by essentially translating the English that it was reading into some formal predicate calculus representation of what they saw, and then a process for that system. The original Diamond Diagram system that was built for people who were naive computer users and didn't know command languages actually had a very rigid syntax. And so what they discovered is that people are more adaptable than computers and that they could adapt to this rigid syntax. How many of you have Google Home or Amazon Echo or Apple something or other that you deal with? Well, so it's training you, right? Because it's not very good at letting you train it, but you're more adaptable. And so you quickly learn that if you phrase things one way, it understands you, and if you phrase things a different way, it doesn't understand you. And you learn how to phrase it. So that's what these guys are relying on, is that they can get people to adopt the conventions that the computer is able to understand. The most radical version of this was a guy named de Heaulme, who I met in 1983 in Paris. He was a doctor Le Pitie Salpetriere, which is one of these medieval hospitals in Paris. And it's wonderful place, although when they built it, it was just a place to die because they really couldn't do much for you. So de Heaulme convinced the chief of cardiology at that hospital that he would develop an artificial language for taking notes about cardiac patients. He would teach this to all of the fellows and junior doctors in the cardiology department at the hospital. And they would be required by the chief, which is very powerful in France, to use this artificial language to write notes instead of using French to write notes. And they actually did this for a month. And when I met de Heaulme, he was in the middle of analyzing the data that he had collected. 
And what he found was that the language was not expressive enough. There were things that people wanted to say that they couldn't say in this artificial language he had created. And so he went back to create version two, and then he went back to the cardiologists and said, well, let's do this again. And then they threatened to kill him. So the experiment was not repeated. OK, so back to term spotting. Traditionally, if you were trying to do this, what you would do is you would sit down with a bunch of medical experts and you would say, all right, tell me all the words that you think might appear in a note that are indicative of some condition that I'm interested in. And they would give you a long list. And then you'd do grep-- you'd search through the notes for those terms. OK? And if you wanted to be really sophisticated, you would use an algorithm like NegEx, which is a negation expression detector that helps get rid of things that are not true. And then, as people did this, they said, well, there must be more sophisticated ways of doing this. And so a whole industry developed of people saying that not only should we use the terms that we got originally from the doctors who were interested in doing these queries, but we can define a machine learning problem, which is how do we learn the set of terms that we should actually use that will give us better results than just the terms we started with? And so I'm going to talk about a little bit of that approach. First of all, for negation, Wendy Chapman, now at Utah, but at the time at Pittsburgh, published this paper in 2001 called A Simple Algorithm for Identifying Negated Findings and Diseases in Discharge Summaries. And it is indeed a very simple algorithm. And here's how it works. You find all the UMLS terms in each sentence of a discharge summary. So I'll talk a little bit about that. But basically, it's a dictionary lookup. You look up in this very large database of medical terms and translate them into some kind of expression that represents what that term means. And then you find two kinds of patterns. One pattern is a negation phrase followed within five words by one of these UMLS terms. And the other is a UMLS term followed within five words by a negation phrase, from a different set of negation phrases. So if you see no sign of something, that means it's not present. Or if you see ruled out, unlikely something, then it's not present. Absence of, not demonstrated, denies, et cetera. And post-modifiers: if you say something declined or something unlikely, that also indicates that it's not present. And then they hacked up a bunch of exceptions where, for example, if you say gram negative, that doesn't mean that it's negative for whatever follows it or whatever precedes it, right? Et cetera. So there are a bunch of exceptions. And what they found is that this actually, considering how incredibly simple it is, does reasonably well. So if you look at sentences that do not contain a negation phrase and looked at 500 of them, you find that you get a sensitivity and specificity of 88% and 52% for those that don't contain one of these phrases. Of course, the sensitivity is 0 and the specificity is 100% on the baseline. And if you use NegEx, what you find is that you can significantly improve the specificity over the baseline. All right? And you wind up with a better result, although not in all schemes. So what this means is that very simplistic techniques can actually work reasonably well at times. So how do we do this generalization?
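Before moving on to generalization, here is a minimal sketch of the NegEx-style window rule just described, assuming the UMLS terms in a sentence have already been found by dictionary lookup. The phrase lists are a tiny illustrative subset, not the full published lists, and the real NegEx handles many more details, so treat this as a sketch of the idea only.

PRE_NEG = ("no sign of", "no evidence of", "ruled out", "absence of", "denies", "not demonstrated")
POST_NEG = ("was ruled out", "unlikely", "declined")
EXCEPTIONS = ("gram negative",)  # looks like a negation phrase but is not one

def negated_concepts(sentence, concepts):
    """sentence: raw sentence text.
    concepts: list of (concept_name, start_word_index) pairs from dictionary lookup.
    Returns the subset of concepts judged negated by the five-word window rule."""
    words = sentence.lower().split()
    negated = set()
    for concept, idx in concepts:
        before = " ".join(words[max(0, idx - 5):idx])   # up to five words before the term
        after = " ".join(words[idx + 1:idx + 6])        # up to five words after the term
        if any(e in before or e in after for e in EXCEPTIONS):
            continue  # hacked-up exceptions, e.g. "gram negative"
        if any(p in before for p in PRE_NEG) or any(p in after for p in POST_NEG):
            negated.add(concept)
    return negated

print(negated_concepts("the patient denies chest pain or dyspnea",
                       [("chest pain", 3), ("dyspnea", 6)]))
# -> both concepts come back as negated under this toy rule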
One way is to take advantage of related terms like hypo- or hypernyms, things that are subcategories or super categories of a word. You might look for those other associated terms. For example, if you're looking to see whether a patient has a certain disease, then you can do a little bit of diagnostic reasoning and say, if I see a lot of symptoms of that disease mentioned, then maybe the disease is present as well. So the recursive machine learning problem is how best to identify the things associated with the term. And this is generally known as phenotyping. Now, how many of you have used the UMLS? Just a few. So in 1985 or '84, the newly appointed director of the National Library of Medicine, which is one of the NIH institutes, decided to make a big investment in creating this unified medical language system, which was an attempt to take all of the terminologies that various medical professional societies had developed and unify them into a single, what they called a meta-thesaurus. So it's not really a thesaurus because it's not completely well integrated, but it does include all of this terminology. And then they spent a lot of both human and machine resources in order to identify cases in which two different expressions from different terminologies really meant the same thing. So for example, myocardial infarction and heart attack really mean exactly the same thing. And in some terminologies, it's called acute myocardial infarction or acute infarct or acute, you know, whatever. And they paid people and they paid machines to scour those entire databases and come up with the mapping that said, OK, we're going to have some concept, you know, see 398752-- I just made that up-- which corresponds to that particular concept. And then they mapped all those together. So that's an enormous help in two ways. It helps you normalize databases that come from different places and that are described differently. It also tells you, for natural language processing, how it is-- it gives you a treasure trove of ways of expressing the same conceptual idea. And then you can use those in order to expand the kinds of phrases that you're looking for. So there are, as of the current moment, there are about 3.7 million distinct concepts in this concept base. There are also hierarchies and relationships that are imported from all these different sources of terminology, but those are a pretty jumbled mess. And then over the whole thing, they created a semantic network that says there are 54 relations and 127 types, and every concept unique identifier is assigned at least one semantic type. So this is very useful for looking through this stuff. Here are the UMLS semantic concepts of various-- or the semantic types. So you see that the most common semantic type is this T061, which stands for therapeutic or preventive procedure. And there are 260,000 of those concepts in the meta-thesaurus. There are 233,000 findings, 172,000 drugs, organic chemicals, pharmacological substances, amino acid peptide or protein, invertebrate. So the data does not come only from human medicine but also from veterinary medicine and bioinformatics research and all over the place. But you see that these are a useful listing of appropriate semantic types that you can then look for in such a database. And the types are hierarchically organized. So for example, the relations are organized so there's an effects relation which has sub-relations, manages, treats, disrupts, complicates, interacts with, or prevents. 
Something like biological function can be a physiologic function or a pathologic function. And again, each of these has subcategories. So the idea is that each concept, each unique concept is labeled with at least one of these semantic types, and that helps to identify things when you're looking through the data. There are also some tools that deal with the typical linguistic problems, that if I want to say bleeds or bleed or bleeding, those are really all the same concept. And so there are these lexical variant generator that helps us normalize that. And then there is the normalization function that takes some statement like Mr. Huntington was admitted, blah, blah, blah, and normalizes it into lowercase alphabetized versions of the text, where things are translated into other potential meanings, linguistic meanings of that text. So for example, notice this one says was, but one of its translations is be because was is just a form of be. This can also get you in trouble. I ran into a problem where I was finding beryllium in everybody's medical records because it also knows that b-e is an abbreviation for beryllium. And so you have to be a little careful about how you use this stuff. There is an online tool where you can type in something and it says weakness of the upper extremities. And it says, oh, you mean the concept proximal weakness, upper extremities. And then it has a relationship to various contexts and it has siblings and it has all kinds of other things that one can look up. I built a tool a few years ago where if you populated with one of the short summaries, it tries to color code the types of things that it found in that summary. And so this is using a tool called MetaMap, which again comes from the National Library of Medicine, and a locally built UMLS look up tool that in this particular case finds exactly the same mappings from the text. And so you can look through the text and say, ah, OK, so no indicates negation and urine output is a kind of one of these concepts. If you moused over it, it would show you. OK, I think what I'm going to do is stop there today so that I can invite Kat to join us and talk about A, what's happened since 2010, and B, how is this stuff actually used by clinicians and clinician researchers. Kat? OK, well, welcome, Kat. KATHERINE LIAO: Thank you. PETER SZOLOVITS: Nice to see you again. So are the techniques that were represented in that paper from nine years ago still being used today in research settings? KATHERINE LIAO: Yeah. So I'd say yes, the bare bones of platform-- that pipeline is being used. But now I'd say we're in version five. Actually, you were on that revision list. But we've done a lot of improvements to actually automate things a little more. So the rate limiting factor in phenotyping is always the clinician. Always getting that label, doing the chart review, coming up with that term list. So I don't know if you want me to go into some of the details on what we've been doing. PETER SZOLOVITS: Yeah, if you would. KATHERINE LIAO: Kind of plugs it in. So if you recall that diagram, there were several steps, where you started with the EMR. There was that filter with the ICD codes. Then you get this data mart, and then you start training. You had to select a random 500, which is a lot. It's a lot of chart review to do. It is a lot. So our goal was to reduce that amount of chart review. And part of the way to reduce that is reducing the feature space. 
So one of the things that we didn't know when we first started out was how many gold standard labels did we need and how many features did we need and which of those features would be important. So by features, I mean ICD codes, a diagnosis code, medications, and all that list of NLP terms that might be related to the condition. And so now we have ways to try to whittle down that list before we even use those gold standard labels. And so let me think about-- this is NLP. The focus here is on NLP. So there are a couple of ways we're doing this. So one rate limiting step was getting the clinicians to come up with a list of terms that are important for a certain condition. You can imagine if you get five doctors in a room to try to agree on a list, it takes forever. And so we tried to get that out of the way. So one thing we started doing was we took just common things that are freely available on the web. Wikipedia, Medline, the Merck Manual that have medical information. And we actually now process those articles, look for medical terms, pull those out, map them to concepts, and that becomes that term list. Now, that goes into-- so now instead of, if you think about in the old days, we came up with the list, we had ICD lists and term lists, which got mapped to a concept. Now we go straight to the article. We kind of do majority voting with the articles. We take five articles, if three out of five mention it more than x amount of time, we say that could potentially be important. So that's the term list. Get the clinicians out of that step. Well, actually, we don't train yet. So now instead of training right away in the gold standard labels, we train on a silver standard label. Most of the time, we use the main ICD code, but sometimes we use the main NLP [INAUDIBLE] Because sometimes there is no code for the phenotype we're interested in. So that's kind of some of the steps that we've done to automate things a little bit more and formalize that pipeline. So in fact, the pipeline is now part of the Partners Biobank, which is a Partner's Healthcare. As Pete mentioned, it's Mass General and Brigham Women's Hospital. They are recruiting patients to come in and get the blood sample, link it with their notes so people can do research on linked EHR data and blood sample. So this is the pipeline they used for phenotyping. Now I'm over at the Boston VA along with Tianxi. And this is the pipeline we're laying down for also the Million Veterans program, which is even bigger. It's a million vets and they have EHR data going back decades. So it's pretty exciting. PETER SZOLOVITS: So what are the kinds of-- I mean, this study that we were talking about today was for rheumatoid arthritis. What other diseases are being targeted by this phenotyping approach? KATHERINE LIAO: So all kinds of diseases. There's a lot of things we learn, though. The phenotyping approach is best suited, the pipeline that we-- the base pipeline is best suited for conditions that have a prevalence of 1% or higher. So rheumatoid arthritis is kind of at that lower bound. Rheumatoid arthritis is a chronic inflammatory joint disease. It affects 1% of the population. But it is the most common autoimmune joint disease. Once you go to rare diseases that are episodic that don't happen-- you know, not only is it below 1%, but only happens once in a while-- this type of approach is not as robust. But most diseases are above 1%. So at the VA, we've kind of laid down this pipeline for a phonemic score. 
And they're running through acute stroke, myocardial infarction, all kinds of these-- diabetes-- just really a lot of all the common diseases that we want to study. PETER SZOLOVITS: Now, you were mentioning that when you identify such a patient, you then try to get a blood sample so that you can do genotyping on them. Is that also common across all these diseases or are there different approaches? KATHERINE LIAO: Yeah, so it's interesting. 10 years ago, it was very different. It was very expensive to genotype a patient. It was anywhere between $500 to $700 per patient. PETER SZOLOVITS: And that was just for single nucleotide polymorphisms. KATHERINE LIAO: Yes, just for a SNP. So we had to be very careful about who we selected. So 10 years ago, what we did is we said, OK, we have 4 million patients at Partners. Who already has RA, with good certainty? Then we select those patients and we genotype them. Because it costs so much, you didn't want to genotype someone who didn't have RA. Not only would it alter the-- it would reduce the power of our association study, it would just be like wasted dollars. The interesting thing is that the change has happened. And we can completely think of a different way of approaching things. Now you have these biobanks. You have something like the VA MVP or UK Biobank. They are being systematically recruited, blood samples are taken, they're genotyped with no study in mind. Linked with the EHR. So now I walk into the VA, it's a completely different story. 10 years later, I'm at the VA and I'm interested in identifying rheumatoid arthritis. Interestingly enough, this algorithm ports well over there, too. But now we tested our new method on there. But now, instead of saying, I need to identify these patients and get the genotype, all the genotypes are already there. So it's a completely different approach to research now. PETER SZOLOVITS: Interesting. So the other question that I wanted to ask you before we turn it over to questions from the audience is, so this is all focused on research uses of the data. Are there clinical uses that people have adopted that use this kind of approach to trying to read the note? We had fantasized decades ago that, you know, when you get a report from a pathologist, that somehow or other, a machine learning algorithm using natural language processing would grovel over it, identify the important things that came out, and then either incorporate that in decision support or in some kind of warning systems that drew people's attention to the important results as opposed to the unimportant ones. Has any of that happened? KATHERINE LIAO: I think we're not there yet, but I feel like we're so much closer than we were before. That's probably how you felt a few decades ago. One of the challenges is, as you know, EHRs weren't really widely adopted until the HITECH Act in 2009. So a lot of systems are actually now just getting their EHR. And the reason that we've had the luxury of playing around with the data is because Partners was ahead of the curve and had developed an EHR. The VA happened to have an EHR. But I think first-- because research and clinical medicine are very different. Research, if you mess up and you misclassify someone with a disease, it's OK, right? You just lose power in your study. But in the clinical setting, if you mess up, it's a really big deal. So I think the bar is much higher. And so one of our goals with all this phenotyping is to get it to that point where we feel pretty confident.
We're not going to say someone has or hasn't a disease, but we are-- you know, Tianxi and I have been planning this grant where what's outputted from this algorithm is a probability of disease. And some of our phenotype algorithms are pretty good. And so what we want to test is, what threshold is that probability at which you would want to tell a clinician that, hey, if you're not thinking about rheumatoid arthritis in this patient-- this is particularly helpful in places where they're in remote locations where there aren't rheumatologists available-- you should be thinking about it and maybe, you know, considering referring them or speaking to a rheumatologist through telehealth, which is also something. There's a lot of things that are changing that are making something like this fit much more into the workflow. PETER SZOLOVITS: Yeah. So you're as optimistic as I was in the 1990s. KATHERINE LIAO: Yes. I think we're getting-- we'll see. PETER SZOLOVITS: Well, you know, it will surely happen at some point. Did any of you go to the festivities around the opening of the Schwarzman College of Computing? So they've had a lot of discussions. And health care does keep coming up over and over again as one of the great opportunities. I profoundly believe that. But on the other hand, I've learned over many decades not to be quite as optimistic as my natural proclivities are. And I think some of the speakers here have not yet learned that same lesson. So things may take a little bit longer. So let me open up the floor to questions. KATHERINE LIAO: Yes? AUDIENCE: So the mapping that you did to concepts, is that within the Partners system or is that something that's publicly available? And can you just transfer that to the VA? Or like, when you do work like this, how much is proprietary and how much gets opened up? KATHERINE LIAO: Yeah. So you're speaking about when we were trying to create that term list and we mapped the terms to the concepts? AUDIENCE: And you were using Wikipedia and three other sources. KATHERINE LIAO: Yeah. Yeah. So that's all out there. So as an academic group, we try to publish everything we do. We put our code up on GitHub or CRAN for other people to play with and test and break. So yeah, the terms are really similar in UMLS. I don't know if you had a chance to look through it. They have a lot of keywords. So there is a general way to map keywords to terms to concepts. So that's the basis of what we do. There may be a little bit more there, but there's nothing fancy behind it. And as you can imagine, because we're trying to go across many phenotypes, when we think about mapping, it always has to be automated. Our first round was very manual, incredibly manual. But now we try to use systems that are available such as UMLS and other mapping methods. PETER SZOLOVITS: So what map-- presumably, you don't use HITex today. KATHERINE LIAO: No. PETER SZOLOVITS: So which tools do you use? KATHERINE LIAO: Just thinking-- I had a two-hour conversation with Oak Ridge about this. We're using a system that Sheng developed called NILE. And it had to do with the fact that cTAKES, which is a really robust system, was just too computationally intensive. And for the purposes of phenotyping, we didn't need that level of detail. What we really needed was: was it mentioned, what's the concept, and the negation. And so NILE is something that we've been using and have kind of validated over time with the different methods we've been testing.
PETER SZOLOVITS: So Tuesday, I'll talk a little bit about that system and some of its successors. So you'll get a sense of how that works. I should mention also that one of the papers that was on your reading list is a paper out of David Sontag's group, which uses this notion of anchors. And that's very much along the same lines. It's a way of trying to automate, just as Kat was saying-- you know, if the doctors mention some term and you discover that that term is very often used with certain other terms, by looking at Wikipedia or at the Mayo Clinic data or wherever your sources are, then that's a good clue that that other term might also be useful. So this is a formalization of that idea as a machine learning problem. So basically, that paper talks about how to take some very certain terms that are highly indicative of a disease and then use those as anchors in order to train a machine learning model that identifies more terms that are also likely to be useful. So this notion of-- and David talked about a similar idea in a previous lecture, where you get a silver standard instead of a gold standard. And the silver standard can be derived from a smaller gold standard using some machine learning algorithm. And then you can use that in your further computations. AUDIENCE: So what was the process like for partnering with academics in machine learning? So did you seek them out, did they seek you out? Did you run into each other at the bus stop? How does that work? KATHERINE LIAO: Well, I was really lucky. There was a big study called the Informatics for Integrating Biology and the Bedside project, or i2b2, led by Zak Kohane. And so that was already in place. And Pete had already been pulled in, and Tianxi. So what they basically did was lock all of us in a room for three hours every Friday. And it was like, what's the problem, what's the question, and how do we get there. And so I think that infrastructure was so helpful in bringing everyone to the table, because it's not easy, because you're not rotating in the same space. And the way you think is very different. So that's how we did it. Now it's more mainstream. I think when we first started, everyone was-- my colleagues joked with me. They're like, what are you doing? R2D2? What's going on? Are you going off the deep end over there? Because you know, the type of research we do was more along the lines of clinical trials and clin-epi projects. But now, you know, we have-- I run a core at Brigham. So it's run out of the rheumatology division. And so we kind of try to connect people together. I did post to our core the consulting session here. But you know, if there is interest, there's probably more groups that are doing this, where we can kind of more formally have joint talks or connect people together. Yeah. But it's not easy. I have to say, it takes a lot of time. Because when Pete put up that thing in what looked like a different language, I mean, it didn't even occur to me that it was hard to read, right? So it's like, you know, you're in these two different worlds. And so you have to work to meet in the middle, and it takes time. PETER SZOLOVITS: It also takes the right people. So I have to say that Zak was probably very clever in bringing the right people to the table and locking us into that room for three hours at a time because, for example, our biostatistician, Tianxi Cai, just, you know, she speaks AI-- or she has learned to speak AI.
And there are still plenty of statisticians who just have allergic reactions to the kinds just things that we do, and it would be very difficult to work with them. So having the right combination of people is also really I think critical. KATHERINE LIAO: As one of my mentors said, you have to kiss a lot of frogs. AUDIENCE: I wondering if you could say a bit more about how you approached the alarm fatigue with how you balance [INAUDIBLE] question around how certain you are versus clinical questions of how important this is versus even psychological questions of, I said is too often to a certain amount of people. They're going to start [INAUDIBLE]?? KATHERINE LIAO: Yeah, you've definitely hit the nail on the head of one of the major barriers, or several things. The alarm fatigue is one of them. So EMRs became more prominent in 2010. But now, along with EMRs came a lot of regulations on physicians. And then came getting rid of our old systems for these new systems that are now government compliant. So Epic is this big monster system that's being rolled out across the country, where you literally have-- it's so complicated in places like Mayo. They hire scribes. The physicians sits in the office and there's another person who actually listens in and types and then clicks all the buttons that you need to get the information there. So alarm fatigue is definitely one of the barriers. But the other barrier is the fact that the EMRs are so user-unfriendly now. They're not built for clinical care. They're built for billing. We have to be careful about how we roll this out. And that's one reason why I think things have been held up, actually. Not necessarily the science. It's the implementation part is going to be very hard. PETER SZOLOVITS: So that isn't new, by the way. I remember a class I taught in biomedical computing about 15 years ago. David Bates, who's the chief of general internal medicine or something at the Brigham, came in and gave a guest lecture. And he was describing their experience with a drug-drug interaction system that they had implemented. And they purchased a data set from a vendor called First Databank that had scoured the literature and found all the instances where people had reported cases where a patient taking both this medication and that medication had an apparent adverse event. So there was some interaction between them. And they bought this thing, they implemented it, and they discovered that, on the majority of drug orders that they were making through their pharmacy system, a big red alert would pop up saying, you know, are you aware of the fact that there is a potential interaction between this drug and some other drug that this patient is taking. And the problem is that the incentives for the company that curated this database were to make sure they didn't miss anything, because they didn't want to be responsible for failing to alarm. But of course, there's no pushback saying that if you warn on every second order, then no one's going to pay any attention to any of them. And so David's solution was to get a bunch of the senior doctors together and they did some study of what actual adverse events had they experienced at the hospital. And they cut this list of thousands of drug interactions down to 20. And they said, OK, those are the only ones we're going to alarm on. KATHERINE LIAO: And then they threw that out when Epic came in. So now I put in an order, I get like a list of 10 and I just click them all. So that's the problem. 
And the threshold is going to be-- so there's going to be an entire-- I think there's going to be entire methods development that's going to have to happen between figuring out where that threshold is and the fatigue from the alarms. AUDIENCE: I have two questions. One is about [INAUDIBLE]. Like how did you approach that because we talk about this in other contexts in class? And the other one is, like, how can you inform other countries [INAUDIBLE] done here? Because, I mean, at the end of the day, it's a global health issue. And also drug systems are different even between the US and the UK. So all the mapping we're doing here, how could that inform EHR or elsewhere? KATHERINE LIAO: Yeah. So let me answer the first one. The second one is a work in progress. So ICD-10 came to the US on October 1, 2015. I remember. It hurt us all. So we actually don't have that much information on ICD-10 yet. But it's definitely impacted our work. So if you think about when Pete was pointing to the number of ICD counts for ICD-9, for those of you who don't know, ICD-9 was developed decades ago. ICD-10 maybe two decades ago. But what ICD-10 did was it added more granularity. So for rheumatoid arthritis, I mentioned it's a systemic chronic inflammatory joint disease. We used to have a code that said rheumatoid arthritis. In ICD-10, it now says rheumatoid arthritis, rheumatoid factor positive, rheumatoid arthritis, rheumatoid factor negative. And under each category is RA of the right wrist, RA of the left wrist, RA of the right knee, left knee. Can you imagine? So we're clicking off all of these. And so, as it turns out, surprisingly-- we're about to publish a small study now, is RA any more accurate now they have all these granular-- it turns out, I think we got annoyed because it's actually less accurate now than the ICD-9. So that's one thing. But that's, you know, only two or three years of data. I think it's going to become pretty equivalent. The other thing is, you'll see an explosion in the number of ICD codes. So you have to think about how do you deal with back October 1, 2015 when you had one RA code, but after 2015, it depends on when the patient comes in. They may have RA of the right wrist on one day, then on the left knee the other day. That looks like a different code. So right now, we have to think of systematic systems to roll up. I think the biggest challenge right now is the mapping. So ICD-9, you know, doesn't map directly to ICD-10 or back because there were diseases that we didn't know when they developed ICD-9 that exist in ICD-10. In ICD-10, they talk about diseases in ways that weren't described in ICD-9. So when you're trying to harmonize the data, and this is actively something we're dealing with right now at the VA, how do you now count the ICD codes? How do you consider that someone has an ICD code for RA? So those are all things that are being developed now. CMS, Center for Medicaid and Medicare, again, this is for billing purposes, has come up with a mapping system that many of us are using now, given what we have. PETER SZOLOVITS: And by the way, the committee that is designing ICD-11 has been very active for years. And so there is another one coming down the pike. Although, from what I understand-- KATHERINE LIAO: Are you involved with that? PETER SZOLOVITS: No. But Chris Chute is or was. KATHERINE LIAO: Yes, I saw. I said, don't do it. 
PETER SZOLOVITS: Well, but actually, I'm a little bit optimistic because unlike the traditional ICD system, this one is based on SNOMED, which has a much more logical structure. So you know, my favorite ICD-10 code is closed fracture of the left femur due to spacecraft accident. KATHERINE LIAO: I didn't even know that existed. PETER SZOLOVITS: As far as I know, that code has never been applied to anybody. But it's there just in case. Yeah. AUDIENCE: So wait, for ICD-11, you don't think it'll take that long to exist because it's a more logical system? PETER SZOLOVITS: So ICD-11-- well, I don't know what it's going to be because they haven't defined it yet. But the idea behind SNOMED is that it's more a combinatorial system. So it's more like a grammar of descriptions that you can assemble according to certain rules of what assemblies make sense. And so that means that you don't have to explicitly mention something like the spacecraft accident one. But if that ever arises, then there is a way to construct something that would describe that situation. KATHERINE LIAO: I ran into Chris at a meeting and he said something along the lines that he thinks it's going to be more NLP-based, even. I don't know. Is it going to be more like a language? PETER SZOLOVITS: Well, you need to ask him. KATHERINE LIAO: Yeah, I don't know. He hints at it [INAUDIBLE]. I was like, OK, this will be interesting. PETER SZOLOVITS: I think it's definitely more like a language, but it'll be more like the old Fred Thompson or the Diamond Diagram kind of language. It's a designed language that you're going to have to learn in order to figure out how to describe things appropriately. Or at least your billing clerk will have to learn it. Yeah? AUDIENCE: I know we're towards the end. But I had a question about when a clinician is trying to label data, for example, training data. Are there any ambiguities ever, where sometimes this is definitely-- this person has RA. This person, I'm not really sure. How do you take that into account when you're actually training a [INAUDIBLE]? KATHERINE LIAO: Yeah. So we actually have three categories-- definite, possible, and no. So there is always ambiguity. And then you always want to have more than one reviewer. So in clinical trials, when you have outcomes, you have what we call adjudication. So you have some kind of system where you first sit down and you have to define the phenotype. Because not everybody is going to agree, even for a really clear disease, on how you define the disease. What are the components that have to happen? For that, there are usually, from the professional societies, classification criteria for research. So there actually is one for RA and, you know, for coronary artery disease. And then it is having those different categories in a very structured system for adjudicating. You know, blindly having two reviewers review 20-- you know, let's say 20 of the same notes-- and look at the inter-rater reliability. Yeah. That's a big issue. PETER SZOLOVITS: All right. I think we have expired. So Kat, thank you very much. KATHERINE LIAO: Yes, thank you, everybody.
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019
15_Causal_Inference_Part_2.txt
[SQUEAKING] [RUSTLING] [CLICKING] DAVID SONTAG: So today's lecture is going to continue on the lecture that you saw on Tuesday, which was introducing you to causal inference. So the causal inference setting, which we're studying in this course, is a really simplistic one from a causal graphs perspective. There are three sets of variables of interest-- everything you know about an individual or patient, which we're calling x over here; an intervention or action-- which for today's lecture, we're going to suppose that it's either 0 or 1, so a binary intervention. You either take it or don't-- and an outcome y. And what makes this problem of understanding the impact of the intervention on the outcome challenging is that we have to make that inference from observational data, where we don't have the ability-- at least not in medicine, we typically don't have the ability to make active interventions. And the goal of what we will be discussing in this course is about how to take data that was collected from the practice of medicine where actions or interventions were taken, and then use that to infer something about the causal effect. And obviously, there are also randomized controlled trials where one intentionally does randomize, but the focus of today's lecture is going to be using observational data, or already collected data, to try to make these conclusions. So we introduced the language of potential outcomes on Tuesday. Potential outcomes is the mathematical framework for trying to answer these questions. Then with that definition of potential outcomes, we can define the conditional average treatment effect, which is the difference between Y1 and Y0 for the individual Xi. So you'll notice here that I have expectations, so I'm treating the potential outcome as a random variable in case there might be some stochasticity. So sometimes, maybe if you were to give someone a treatment, it works, and sometimes it doesn't. So that's what the expectation is accounting for. Any questions before I move on? So with respect to this definition of conditional average treatment effect, then you could ask, well, what would happen in aggregate for the population? And you can compute that by taking the average of the conditional average treatment effect over all of the individuals. So that's just this expectation with respect to, now, p of x. Now, critically, this distribution, p of x, you should think about as the distribution of everyone that exists in your data. So some of those individuals might have received treatment 1 in the past. Some of them might have received treatment 0. But when we ask this question about average treatment effect, we're asking, for both of those populations, what would have been the effect-- what would have been the difference had they received treatment 1 minus had they received treatment 0? Now, I wanted to take this opportunity to start thinking a little bit bigger picture about how causal inference can be important in a variety of societal questions, and so I'd like to now spend just a couple of minutes thinking with you about what some causal questions might be that we urgently need to answer about the COVID-19 pandemic. And as you try to think through these questions, I want you to have this causal graph in mind. So there is the general population. There is some action that you want to perform, and the whole notion of causal inference is assessing the effect of the action on some outcome of interest.
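For reference, the two quantities just defined, written out in the potential-outcomes notation from Tuesday's lecture:

\mathrm{CATE}(x) = \mathbb{E}[\,Y_1 \mid X = x\,] - \mathbb{E}[\,Y_0 \mid X = x\,]
\mathrm{ATE} = \mathbb{E}_{x \sim p(x)}[\,\mathrm{CATE}(x)\,]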
So in trying to give the answer to my-- various answers to my questions of what are some causal inference questions of relevance to the current pandemic, I want you to try to frame your answers in terms of these Xs, Ts, and Ys. It's also, obviously, very hard to answer using the types of techniques that we will be discussing in this course, and partly because the techniques that I'm focusing on are very much data driven techniques. That said, the general framework that I've introduced on Tuesday for covariate adjustment of, come up with a model and use that model to make a prediction, and the assumptions that underlie that in terms of, well, where's that model coming from, if you're fitting the parameters from data, having to have common support in order to be able to have any trust in the downstream conclusions. Those underlying assumptions and the general premises will still hold, but here, obviously, when it comes to something like social distancing, they're complicated network effects. And so whereas up until now, we've been making the assumption of what was called SUTVA-- it was a assumption that I probably didn't even talk about in Tuesday's lecture. But intuitively, what the SUTVA assumption says is that each of your training examples are independent of each other. And that might make sense when you think about, give a patient a medication or not, but it certainly doesn't make sense when you think about social distancing type measures, where if some people social distance, but other people don't, it has obviously a very different impact on society. So one needs a different class of models to try to think about that, which have to relax that SUTVA assumption. So those were all really good answers to my question, and in some sense, now-- so there's the epidemiological type questions that we last spoke about. But the first few set of questions about, really, how does one treat patients who have COVID are the types of questions that only now we can really start to answer now, unfortunately, because we're starting to get a lot of data in the United States and internationally. And so for example, my own personal research group, we're starting to really scale up our research on these types of questions. Now, one very simplified example that I wanted to give of how a causal inference lens can be useful here is by trying to understand case fatality rates. So for example, in Italy, it was reported that 4.3% of individuals who had this condition passed away, whereas in China, it was reported that 2.3% of individuals who had this condition passed away. Now, you might ask, based on just those two numbers, is something different about China? For example, might it be that the way that COVID is being managed in China is better than in Italy? You might also wonder if the strain of the disease might be different between China and Italy? So perhaps there were some mutations since it left Wuhan. But if you dig a little bit deeper, you see that, if you plot case fatality rates by age group, you get this plot that I'm showing over here. And you see that if you compare Italy, which is the orange, to China, which is blue, now stratified by age range, you see that for every single age range, the percentage of deaths is lower in Italy than in China, which would seem to be a contradiction with what we saw-- with the aggregate numbers, where we see that the case fatality rate in Italy is higher than in China. And so the reason why this can happen has to do with the fact that the populations are very different. 
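The mechanism just described can be reproduced in a few lines of Python. The counts below are purely hypothetical and chosen only for illustration (they are not the real COVID-19 numbers): within every age stratum, population B has the lower case fatality rate, yet its aggregate rate is higher, because its cases are concentrated in the oldest group.

# Hypothetical counts, for illustration only: (cases_A, deaths_A, cases_B, deaths_B) per age group.
strata = {
    "0-59":  (8000, 40, 2000,   8),
    "60-79": (1500, 75, 4000, 160),
    "80+":   ( 500, 75, 4000, 520),
}

for age, (ca, da, cb, db) in strata.items():
    print(f"{age:>6}: CFR A = {da / ca:.1%}   CFR B = {db / cb:.1%}")

cases_a, deaths_a, cases_b, deaths_b = (sum(col) for col in zip(*strata.values()))
print(f" total: CFR A = {deaths_a / cases_a:.1%}   CFR B = {deaths_b / cases_b:.1%}")

Running this prints a lower rate for B in every age group but a higher rate for B in the total row, which is exactly the pattern seen in the aggregate versus age-stratified comparison above.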
And by the way, this paradox goes by the name of Simpson's paradox. So if you dig a bit deeper, you see then that, if you were to look at, well, what is the distribution of individuals in China and Italy that have been reported to have COVID, you see that, in Italy, it's much more highly weighted towards these older ages. And if you then combine that with the total number of cases, you get to these discrepancies, so it now fully explains these two numbers and the plot that you see. Now if we're to try to think about this a bit more formally, we would try to formalize it in terms of the following causal graph. And so here, we have the same notions of X, T, and Y, where X is the age of an individual who has been diagnosed with COVID. T is now country, so we're going to think about the intervention here as transporting ourselves from China to Italy, so thinking about changing the environment altogether. And Y is the outcome on an individual level basis. And so the formal question that one might want to ask is about the causal impact of changing the country on the outcome Y. Now, for this particular causal question, this causal graph that I'm drawing here is the wrong one, and in fact, the right causal graph probably has an edge that goes from T to X. In particular, the distribution of individuals in the country is obviously a function of the country, not the other way around. But despite the fact that there is that difference in directionality, all of the techniques that we've been teaching you in this course are still applicable for trying to ask a causal question about the impact of intervening on a country, and that's really because, in some sense, these two distributions, at an observational level, are equivalent. And if you want to dig a little bit deeper into this example-- and I want to stress this is just for educational purposes. Don't read anything into these numbers-- I would go to this Colab notebook after the course. So all of this was just a little bit of setup to help frame where causal inference shows up and some things that we've been thinking and really very worried and stressed about ourselves personally recently. And I want to now shift gears to starting to get back to the course material, and in particular, I want to start today's more theoretical parts of the lecture by returning to covariate adjustment, which we ended on on Tuesday. In covariate adjustment, we'll use a machine learning approach to learn some model, which I'll call F. So you could imagine a black box machine learning algorithm, which takes as input both X and T. So X are your covariates of the individual that is going to receive the treatment, and T is that treatment decision, which for today's lecture, you can just assume is binary, 0 or 1, and uses those together now to predict the outcome Y. Now, what we showed on Tuesday was that, under ignorability, where ignorability, remember, was the assumption of no hidden confounding, the conditional average treatment effect could be computed as just a difference: the expectation of Y1, now conditioned on T equals 1, so this is the piece that I've added in here, minus the expectation of Y0, now conditioned on T equals 0. And it's that conditioning which is really important, because that's what enables you to estimate Y1 from data where treatment 1 was observed, whereas you never get to observe Y1 in data where treatment 0 was performed.
So we have this formula, and after fitting that model F, one could then use it to try to estimate CATE by just taking that learned function, plugging in the number 1 for the treatment variable in order to get your estimate of this expectation, and then plugging in the number 0 for the treatment variable when you want to get your estimate of this expectation. Taking the difference between those then gives you your estimate of the conditional average treatment effect. So that's the approach, and what we didn't talk about so much was the modeling choices of what should your function class be. So this is going to turn out to be really important, and really, the punchline of the next several slides is going to be a major difference in philosophy between machine learning and statistics, and between prediction and causal inference. So let's now consider the following simple model, where I'm going to assume that the ground truth in the real world has that the potential outcome YT of X, where T, again is the treatment, is equal to some simple linear model involving the covariates X and the treatments T, the treatment T. So in this very simple setting, I'm going to assume that we just have a single feature or covariate for the individual, which is there age. I'm going to assume that this model doesn't have any terms with an interaction between X and T, so it's fully linear in X and T. So this is an assumption about the true potential outcomes, and what we'll do over the next couple of slides is think about what would happen if you now modeled Y of T, so modeling it with some function F, where F was, let's say, a linear function versus a nonlinear function, if F took this form or a different form. And by the way, I'm going to assume that the noise here, epsilon t, can be arbitrary, but that it has 0 mean. So let's get started by trying to estimate what the true CATE is, or Conditional Average Treatment Effect, for this potential outcome model. Well, just by definition, the CATE is the expectation of Y1 minus Y0. We're going to take this formula, and we're going to plug it in for the first term using T equals 1, and that's why you get this term over here with gamma. And the gamma is because, again, T is equal to 1. We're also going to take this, and we're going to plug it in for, now, this term over here, where T is equal to 0. And when T is equal to 0, then the gamma term just disappears, and so you just get beta X plus epsilon 0. So all I've done so far is plug in the Y1 and Y0 according to the assumed form, but notice now that there's some terms that cancel out-- in particular, the beta X term over here cancels out with a beta X term over here. And because epsilon 1 has a 0 mean, and epsilon 0 also has a 0 mean. The only thing left is that gamma term, and expectation of a constant's obviously that constant. And so what we conclude from this is that the CATE value is gamma. Now, the average treatment effect, which is the average of CATE over all individuals X, will then also be gamma, obviously. So we've done something pretty interesting here. We've started from the assumption that the true potential outcome model is linear, and what we concluded is that the average treatment effect is precisely the coefficient of the treatment variable in this linear model. 
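As a sanity check of that punchline, here is a minimal synthetic sketch in Python (all variable names and numbers are made up): when the true outcome model really is linear, the fitted coefficient on the treatment recovers the ATE, and the same fit with an omitted nonlinear term, which comes up again shortly, does not.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 200_000
    x = rng.normal(0, 1, n)                                   # single covariate, e.g. standardized age
    t = rng.binomial(1, 1 / (1 + np.exp(-3 * (x - 1))))       # confounded treatment assignment
    beta, gamma, delta = 1.0, 2.0, 3.0

    # Case 1: the true potential outcomes really are linear in x and t.
    y_lin = beta * x + gamma * t + rng.normal(0, 1, n)
    f = LinearRegression().fit(np.column_stack([x, t]), y_lin)
    print("gamma_hat (correct linear model):", round(f.coef_[1], 3))    # ~2.0, the ATE

    # The plug-in estimate f(x, 1) - f(x, 0), averaged over x, gives the same number.
    ate_plugin = (f.predict(np.column_stack([x, np.ones(n)])) -
                  f.predict(np.column_stack([x, np.zeros(n)]))).mean()
    print("plug-in ATE estimate:", round(ate_plugin, 3))

    # Case 2 (previewing a later point): the truth has an extra delta * x**2 term,
    # but we still fit the purely linear model; the omitted term leaks into gamma_hat.
    y_nonlin = beta * x + gamma * t + delta * x**2 + rng.normal(0, 1, n)
    bad = LinearRegression().fit(np.column_stack([x, t]), y_nonlin)
    print("gamma_hat (misspecified):", round(bad.coef_[1], 3))          # far from 2.0

    # Including the x**2 feature (or using a flexible nonlinear learner) fixes it.
    good = LinearRegression().fit(np.column_stack([x, x**2, t]), y_nonlin)
    print("gamma_hat (with x**2 feature):", round(good.coef_[2], 3))    # ~2.0 again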
So what that means is that, if what you're interested in is causal inference, and suppose that we were lucky enough to know that the true model were linear, and so we attempted to fit some function F which had precisely the same form, we get some beta hats and some gamma hats out from the learning algorithm, and all we need to do is look at that gamma hat in order to conclude something about the average treatment effect. No need to do this complicated thing of plugging in to estimate CATEs. And again, the reason it's such a trivial conclusion is because of our assumption of linearity. Now, what that also means is that, if you have errors in learning-- in particular, suppose, for example, that you are estimating your gamma hat wrongly-- then that means you're also going to be getting wrong your estimates of your conditional and average treatment effects. There's a question here, which I was lucky enough to see, that says, what does gamma represent in terms of the medication? Thank you for that question. So gamma-- literally speaking, gamma tells you the conditional average treatment effect, meaning if you were to give the treatment versus not giving the treatment, how that affects the outcome. Think about the outcome of interest being the patient's blood pressure, there being a potential confounding factor of the patient's age, and T being one of two different blood pressure medications. If gamma is positive, then it means that treatment 1 increases the patient's blood pressure relative to treatment 0. And if gamma is negative, it means that treatment 1 decreases the patient's blood pressure relative to treatment 0. So in machine learning-- oh, sorry, there's another chat. Thank you, good. So in machine learning, I typically tell my students, don't attempt to interpret your coefficients. At least, don't interpret them too much. Don't put too much weight on them, and that's because, when you're learning very high dimensional models, there can be a lot of redundancy between your features. But when you talk to statisticians, often they pay really close attention to their coefficients, and they try to interpret those coefficients, often with a causal lens. And when I first got started in this field, I couldn't understand, why are they paying attention to those coefficients so much? Why are they coming up with these causal hypotheses based on which coefficients are positive and which are negative? And this is the answer. It really comes down to an interpretation of the prediction problem in terms of the feature of relevance being a treatment, that treatment being linear with respect to the potential outcome, and then looking at the coefficient of the treatment as telling you something about the average treatment effect of that intervention or treatment. Moreover, that also tells us why it's often very important to look at confidence intervals. So one might want to know, we have some small data set, we get some estimate of gamma hat, but what if you had a different data set? What happens if you had a new sample of 100 data points? How would your estimated gamma hat vary? And so you might be interested, for example, in confidence intervals, like a 95% confidence interval that says that gamma hat is between, let's say, 0.5 and 1 with probability 0.95. That'll be an example of a confidence interval around gamma hat.
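One simple, hedged way to get such an interval in practice is the bootstrap: refit the model on resampled data sets and look at the spread of the gamma hats. A minimal sketch on synthetic data (all names and sizes are illustrative):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def bootstrap_gamma_ci(x, t, y, n_boot=1000, alpha=0.05, seed=0):
        """Percentile-bootstrap confidence interval for the treatment coefficient."""
        rng = np.random.default_rng(seed)
        n = len(y)
        gammas = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=n)     # resample individuals with replacement
            f = LinearRegression().fit(np.column_stack([x[idx], t[idx]]), y[idx])
            gammas.append(f.coef_[1])
        return np.quantile(gammas, [alpha / 2, 1 - alpha / 2])

    rng = np.random.default_rng(1)
    x = rng.normal(0, 1, 100)                    # deliberately small sample: n = 100
    t = rng.binomial(1, 0.5, 100)
    y = 1.0 * x + 2.0 * t + rng.normal(0, 1, 100)
    print(bootstrap_gamma_ci(x, t, y))           # a 95% interval that typically contains the true value of 2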
And such a confidence interval around the coefficient then gives you a confidence interval around the average treatment effect, via this analysis. So the second observation is, what happens if the true model isn't linear, but we hadn't realized that as a modeler, and we had just assumed that, well, the linear model's probably good enough? And maybe even, the linear model gets pretty good prediction performance? Well, let's look at the extreme example of this. Let's now assume that the true data generating process, instead of being just beta X plus gamma T, is going to add in now a new term, delta times X squared. Now, this is the most naive extension of the original linear model that you could imagine, because I'm not even adding any interaction terms, like a term proportional to X times T. So no interaction terms involving treatment and covariate. The potential outcome is still linear in treatment. We're just adding a single nonlinear term involving one of the features. Now, if you compute the average treatment effect via the same analysis we did before, you'll again find that the average treatment effect is gamma. Let's suppose now that we hadn't known that there was that delta X squared term in there, and we hypothesized that the potential outcome was given to you by this linear model involving X and T. And I'm going to use Y hat to denote that that's going to be the function family that we're going to be fitting. So we now fit that beta hat and gamma hat, and if you had infinite data drawn from this true generating process, which is, again, unknown, what one can show is that the gamma hat that you would estimate using any reasonable estimator, like a least squares estimator, is actually equal to gamma, the true ATE value, plus delta times this term. And notice that this term does not depend on beta or gamma. What this means is that, depending on delta, your gamma hat could be made arbitrarily large or arbitrarily small. So for example, if delta is very large, gamma hat might become positive when gamma might have been negative. And so your conclusions about the average treatment effect could be completely wrong, and this should scare you. This is the thing which makes using covariate adjustment so dangerous, which is that if you're making the wrong assumptions about the true potential outcomes, you could get very, very wrong conclusions. So because of that, one typically wants to live in a world where you don't have to make many assumptions about the functional form, so that you can try to fit the data as well as possible. So here, you see that there is this nonlinear term. Well, obviously, if you had used some nonlinear modeling algorithm, like a neural network or maybe a random forest, then it would have the potential to fit that nonlinear function, and then maybe we wouldn't get caught in this same trap. And there are a variety of machine learning algorithms that have been applied to causal inference, everything from random forests and Bayesian additive regression trees to algorithms like Gaussian processes and deep neural networks. I'll just briefly highlight the last two. So Gaussian processes are very often used to model continuous valued potential outcomes, and there are a couple of ways in which this can be done. So for example, one class of models might treat Y1 and Y0 as two separate Gaussian processes and fit those to the data.
A different approach, shown on the right here, would be to treat T as an additional covariate, so now you have X and T as your features and fit a Gaussian process for that joint model. When it comes to neural networks, neural networks had been used in causal inference going back about 20, 30 years, but really started catching on a few years ago with a paper that I wrote in my group as being one of the earliest papers from this recent generation of using neural networks for causal inference. And one of the things that we found to work very effectively is to use a joint model for predicting the causal effect, so we're going to be learning a model that takes-- an F that takes, as input, X and T and has to predict Y. And the advantage of that is that it's going to allow us to share parameters across your T equals 1 and T equals 0 samples. But rather than feeding in X and T in your first layer of your neural network, we're only going to feed in X in the initial layer of the neural network, and we're going to learn a shared representation, which is going to be used for both predicting T equals 0 and T equals 1. And then for predicting when T is equal to 0, we use a different head from predicting T equals 1. So F0 is a function that concatenates these shared layers with several new layers used to predict for when T is equal to 0 and same analogously for 1. And we found that architecture worked substantially better than the naive architectures when doing causal inference on several different benchmark data sets. Now, the last thing I want to talk about for covariate adjustment, before I move on to a new set of techniques, is a method called matching, that is intuitively very pleasing. It's a very-- would seem to be a really natural approach to do causal inference, and at first glance, may look like it has nothing to do with covariate adjustment technique. What I'll do now is I'm going to first introduce you to the matching technique, and then I will show you that it actually is precisely identical to covariate adjustment with a particular assumption of what the functional family for F is. So not Gaussian processes, not deep neural networks, but it'll be something else. So before I get into that, what is matching as a technique for causal inference? Well, the key idea of matching is to use each individual's twin to try to get some intuition about what their potential outcome might have been? So I created these slides a few years ago when President Obama was in office, and you might imagine this is the actual President Obama who did go to law school. And you might imagine who might have been that other president? What President Obama have been like had he not gone to law school, but let's say, gone to business school? So if you can now imagine trying to find, in your data set, someone else who looks just like Barack Obama, but who, instead of going to law school, went to business school, and then you would then ask the following question. For example, would this individual have gone on to become president had he gone to law school versus had he gone to business school? If you find someone else who's just like Barack Obama who went to business school, look to see did that person become president eventually, that would in essence give you that counterfactual. Obviously, this is a contrived example because you would never get the sample size to see that. So that's the general idea, and now, I'll show it to you in a picture. 
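Before getting to that picture, here is a minimal PyTorch sketch of the shared-representation, two-headed architecture just described. It is loosely in the spirit of models like TARNet rather than the exact model from the paper; the layer sizes, names, and training loop are arbitrary choices for illustration.

    import torch
    import torch.nn as nn

    class TwoHeadedNet(nn.Module):
        """Shared representation of x, with separate output heads for t=0 and t=1."""
        def __init__(self, d_in, d_hidden=64):
            super().__init__()
            self.shared = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU(),
                                        nn.Linear(d_hidden, d_hidden), nn.ReLU())
            self.head0 = nn.Sequential(nn.Linear(d_hidden, d_hidden), nn.ReLU(),
                                       nn.Linear(d_hidden, 1))
            self.head1 = nn.Sequential(nn.Linear(d_hidden, d_hidden), nn.ReLU(),
                                       nn.Linear(d_hidden, 1))

        def forward(self, x, t):
            phi = self.shared(x)                       # representation shared by both heads
            y0, y1 = self.head0(phi), self.head1(phi)
            # Return the prediction that matches each example's observed treatment.
            return torch.where(t.bool().unsqueeze(-1), y1, y0).squeeze(-1)

    # Training uses only the factual (observed) outcomes.
    model = TwoHeadedNet(d_in=10)
    x = torch.randn(32, 10)
    t = torch.randint(0, 2, (32,)).float()
    y = torch.randn(32)
    loss = nn.functional.mse_loss(model(x, t), y)      # one optimizer step would follow
    loss.backward()

    # At evaluation time, CATE(x) is estimated as head1(shared(x)) - head0(shared(x)).
    with torch.no_grad():
        cate = (model.head1(model.shared(x)) - model.head0(model.shared(x))).squeeze(-1)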
So here now, we have two covariates or features-- a patient's age and their Charlson comorbidity index. This is some measure of what types of conditions or comorbidities the patient might have. Do they have diabetes, do they have hypertension, and so on? And notably, what I'm not showing you here is the outcome Y. All I'm showing you are the original data points and what treatment they received. So blue are the individuals who received the control treatment, or T equals 0, and red are the individuals who received treatment 1. So you can imagine trying to find nearest neighbors. For example, the nearest neighbor to this data point over here is this blue point over here, and so we observed some Y1 for this red individual and some Y0 for this blue individual. And if you wanted to know, well, what would have happened to this individual if they had received treatment 0 instead of treatment 1, well, you could just look at what happened to this blue point and say, that's what would have happened to this red point, because they're very close to each other. Any questions about what matching would do before I define it formally? Here, I'll-- yeah, good, one question. What happens if the nearest neighbor is extremely far away? That's a great question. So you can imagine that you have one red data point over here and no blue data points nearby. The matching approach wouldn't work very well. So this data point's nearest neighbor is this blue point over here, which, intuitively, is very far from this red point. And so if we were to estimate this red point's counterfactual using that blue point, we're likely to get a very bad estimate, and in fact, that is going to be one of the challenges of matching based approaches. It's going to work very well in a large sample setting, where you can hope that you're likely to observe a counterfactual for every individual. And it won't work well when you have very limited data, and of course, all of this is going to be subject to the assumption of common support. So one question's about, how does that translate into high dimensions? The short answer-- not very well. We'll get back to that in a moment. Can a single data point appear in multiple matchings? Yes, and I will define, in just a moment, how and why. It won't be a strict matching. Are we trying to find a counterfactual for each treated observation, or one for each control observation? I'll answer that in just a second. And finally, is it common for medical data sets to find such matching pairs? I'm going to reinterpret that question as saying, is this technique used often in medicine? And the answer is, yes, it's used all the time in clinical research, despite the fact that biostatisticians, for quite a few years now, have been trying to argue that folks should not use this technique, for reasons that you'll see shortly. So it's widely used. It's very intuitive, which is why I'm teaching it. And it's going to fit into a very general framework, as you'll see in just a moment, which will give you a natural solution to the problems that I'm going to raise. So moving on, and then I'll return to any remaining questions. So here, I'll define one way of doing counterfactual inference using matching, and it's going to start, of course, by assuming that we have some distance metric d between individuals.
Then we're going to say, for each individual i, let's let j of i be the other individual j, obviously different from i, who is closest to i, but critically, closest but has a different treatment. So where Ti is different from Tj, and again, I'm assuming binary, so Tj is either 0 or 1. With that definition then, we're going to define our estimate of the conditional average treatment effect for an individual is whatever their actual observed outcome was. This, I'm going to give for an individual that actually received treatment 1, so it's Y1, and the reason-- it's Yi minus the imputed counterfactual corresponding to T is equal to 0. And the way we get that computed counterfactual is by trying to find that nearest neighbor who received treatment 0 instead of treatment 1 and looking at their Y. Analogously, if T is equal to 0, then we're going to use the observed Yi, now over here instead of over there because it corresponds to Y0. And where we need to impute Y1-- capital Y1, potential outcome Y1-- we're going to use the observed outcome from the nearest neighbor of individual i who received treatment 1 instead of 0. So this, mathematically, is what I mean by our matching based estimator, and this also should answer one of the questions which was raised, which is, do you really need to have it matching, or could a data point be matched to multiple other data points? And indeed, here, you see the answer to that last question is yes, because you could have a setting where, for example, there are two red points here. And I can't draw blue, but I'll just use a square for what I would have drawn as blue. And then everything else very far away, and for both of these red points, this blue point is the closest neighbor. So both of the counterfactual estimates for these two points would be using the same blue point, so that's the answer to that question. Now, I'm just going to rewrite this in a little bit more convenient form. So I'll take this formula, shown over here, and you can rewrite that as Yi minus Yji, but you have to flip the sign depending on whether Ti is equal to 1 or 0, and so that's what this term is going to do. If Ti is equal to 1, then this evaluates to 1. If Ti is equal to 0, this evaluates to minus 1. Flips the sign. So now that we have the definition of CATE, we can now easily estimate the average treatment effect by just averaging these CATEs over all of the individuals in your data set. So this is now the definition of how to do one nearest neighbor matching. Any questions? So one question is, do we ever use the metric d to weight how much we would, quote, unquote, "trust" the matching? That's a good question. So what Hannah's asking is, what happens if you have, for example, very many nearest neighbors, or analogously, what happens if you have some nearest neighbors that are really close, some that are really far? You might imagine trying to weight your nearest neighbors by the distance from the data point, and you could imagine even doing that-- you can even imagine coming up with an estimator, which might discount certain data points if they don't have nearest neighbors near them at all by the corresponding weighting factor. Yes, that's a good idea. Yes, you can come up with a consistent estimator of the average treatment effect through such an idea. There are probably a few hundred papers written about it, and that's all I have to say about it. 
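Here is a minimal sketch of this one-nearest-neighbor matching estimator, using a plain Euclidean distance; the distance metric and the synthetic data are only for illustration.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def matching_ate(x, t, y):
        """One-nearest-neighbor matching estimate of the ATE (Euclidean distance)."""
        x, t, y = np.asarray(x, float), np.asarray(t), np.asarray(y, float)
        treated, control = x[t == 1], x[t == 0]
        # For each treated unit, find the nearest control unit, and vice versa.
        nn_c = NearestNeighbors(n_neighbors=1).fit(control)
        nn_t = NearestNeighbors(n_neighbors=1).fit(treated)
        j_for_treated = nn_c.kneighbors(treated, return_distance=False).ravel()
        j_for_control = nn_t.kneighbors(control, return_distance=False).ravel()

        cate = np.empty(len(x))
        cate[t == 1] = y[t == 1] - y[t == 0][j_for_treated]   # observed Y1 minus imputed Y0
        cate[t == 0] = y[t == 1][j_for_control] - y[t == 0]   # imputed Y1 minus observed Y0
        return cate.mean()

    # Tiny synthetic check: y = x + 2*t, so the true ATE is 2.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(2000, 2))
    t = rng.binomial(1, 0.5, size=2000)
    y = x[:, 0] + 2 * t + 0.1 * rng.normal(size=2000)
    print(matching_ate(x, t, y))   # should be close to 2

Note that nothing stops a single control point from being the nearest neighbor of several treated points, which is the "not a strict matching" point raised in the questions above.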
So there's lots of variants of this, and they all end up having the same theoretical justification that I'm about to give in the next slide. So one of the advantages of matching is that you get some interpretability. So if I was to ask you, well, what's the reason why you tell me that this treatment is going to work for John? Well, someone can respond-- well, I used this technique, and I found that the nearest neighbor to John was Anna. And Anna took this other treatment from John, and this is what happened for Anna. And that's why I conjecture that, for John, the difference between Y1 and Y0 is as follows. And so then, that can be criticized. So for example, a clinician who has some domain expert, can look at Anna, look at John, and say, oh, wait a second, these two individuals are really different from one another. Let's say the treatment, for example, had to do with something which was gender specific. Comparing two individuals which are of different genders are obviously not going to be comparable to one other, and so then the domain expert would be able to reject that conclusion and say, nuh-uh, I don't trust any of these statistics. Go back to the drawing board. And so type of interpretability is very attractive. The second aspect of this, which is very attractive is that it's a non-parametric method, non-parametric in the same way that neural networks or random forest are non-parametric. So this does not rely on any strong assumption about the parametric form of the potential outcomes. On the other hand, this approach is very reliant on the underlying metric. If your distance function is a poor distance function, then it's going to give poor results. And moreover, it could be very much misled by features that don't affect the outcome, which is not necessarily a property that we want. Now, here's that final slide that makes the connection. Matching is equivalent to covariate adjustment. It's exactly the same. It's an instantiation of covariate adjustment with a particular functional family for F. So rather than assuming that your function F, that black box, is a linear function or a neural network or a random forester or a Bayesian regression tree, we're going to assume that function takes the form of a nearest neighbor classifier. In particular, we'll say that Y hat of 1, the function for predicting the potential outcome Y hat 1, is given to you by finding the nearest neighbor of the data point X according to the data set of individuals that received treatment 1, and same thing for Y hat 0. And so that then allows us to actually prove some properties of matching. So for example, if you remember from-- I think I mentioned in Tuesday's lecture that this covariate adjustment approach, under the assumptions of overlap and under the assumptions of no hidden confounding, and that your function family for potential outcome is sufficiently rich that you can actually fit the underlying model, then you're going to get correct estimates of your conditional average treatment effect. Now, one can show that a nearest neighbor algorithm is not, generally, a consistent algorithm. And what that means is that, if you have a small number of samples, you're going to be getting biased estimate. Your function F might, in general, be a biased estimate. Now, we can conclude from that, that if we were to use one nearest neighbor matching for inferring average treatment effect, that in general, it could give us a biased estimate of the average treatment effect. 
However, in the limit of infinite data, one nearest neighbor algorithms are guaranteed to be able to fit the underlying function family. That is to say, that bias goes to 0 in the limit of a large amount of data, and thus, we can immediately draw from that literature and causal inference-- sorry, from that literature and machine learning to obtain theoretical results for matching for causal inference. And so that's all I want to say about matching and its connection to covariate adjustment. And really, the punchline is, think about matching just as another type of covariate adjustment, one which uses a nearest neighbor function family, and thus should be compared to other approaches to covariate adjustments, such as, for example, using machine learning algorithms that are designed to be interpretable. So the last part of this lecture is going to be introducing a second approach for inferring average treatment effect that is known as the propensity score method, and this is going to be a real shift. It's going to be a different estimator from the covariate adjustment. So as I mentioned, it's going to be used for estimating average treatment effect. In problem set 4, you're going to see how you can use the same sorts of techniques I'll tell you about now for also estimating conditional average treatment effect, but that won't be obvious just from today's lecture. So the key intuition for propensity score method is to think back to what would have happened if you had a randomized control trial. In a randomized control trial, again, you get choice over what treatment to give each individual, so you might imagine flipping a coin. If it's heads, giving them treatment 1. If it's tails, giving them treatment 0. So given data from a randomized control trial, then there's a really simple estimator shown here for the average treatment effect. You just sum up the values of Y for the individuals that receive treatment 1, divided by n1, which is the number of individuals that received treatment 1. So this is the average outcome for all people who got treatment 1, and you just subtract from that the average outcome for all individuals who received treatment 0. And that can be easily shown to be an unbiased estimator of the average treatment effect had your data come from a randomized controlled trial. So the key idea of a propensity score method is to turn an observational study into something that looks like a randomized control trial via re-weighting of the data points. So here's the picture I want you to have in mind. Again, here, I am not showing you outcomes. I'm just showing you the features X-- that's what the data points are-- and the treatments that were given to them, the Ts. And the Ts, in this case, are being denoted by the color of the dots, so red is T equals 1. Blue is T equals 0. And my apologies in advance for anyone who's color blind. So the key challenge when working with observational study is that there might be a bias in terms of who receives treatment 0 versus who receives treatment 1. If this was a randomized control trial, then you would expect to see the reds and the blues all intermixed equally with one another, but as you can see here, in this data set, there are very many more people who received-- very more young people who received treatment 0 than received treatment 1. Said differently, if you look at the distribution over X conditioned on T equals 0 in the data, it's different from the distribution over X conditioned on the people who receive treatment 1. 
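To see why some kind of re-weighting is needed, here is a small synthetic illustration (made-up numbers) in which older patients are both more likely to be treated and more likely to have high outcomes; the naive RCT-style difference in means then badly overstates the true effect.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    age = rng.normal(60, 10, n)
    # Confounding: older patients are more likely to receive treatment 1.
    t = rng.binomial(1, 1 / (1 + np.exp(-(age - 60) / 5)))
    # The true treatment effect is +1, but the outcome also rises with age.
    y = 0.1 * age + 1.0 * t + rng.normal(0, 1, n)

    naive = y[t == 1].mean() - y[t == 0].mean()
    print("naive difference in means:", round(naive, 2))   # noticeably larger than the true ATE of 1.0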
So what the propensity score method is going to do is it's going to recognize that there is a difference between these two distributions, and it's going to re-weight data points so that, in aggregate, it looks like, in any one region-- so for example, imagine looking at this region-- that there's roughly the same number of red and blue data points. Where if you think about blowing up this red data point-- here, I've made it very big-- you can think about it being many, many red data points of the corresponding weight. You look over here, see again roughly the same amount of red and blue mass as well. So if we can find some way to increase or decrease the weight associated with each data point such that, now, it looks like the two distributions, those who received treatment 1 and those who received treatment 0, look like they came from-- look like now they have the same weighted distribution, then we're going to be in business. So we're going to search for those weights, w, that have that property. So to do that, we need to introduce one new concept, which is known as the propensity score. The propensity score is given to you by the probability that T equals 1 given X. Here, again, we're going to use machine learning. Whereas in covariate adjustment, we used machine learning to predict Y conditioned on X comma T-- that's what covariate adjustment did-- here, we're going to be ignoring Y altogether. We're just going to take X's input, and we're going to be predicting T. So you can imagine using logistic regression, given your covariates, to predict which treatment any given data point came from. Here, you're using the full data set, of course, to make that prediction, so we're looking at both data points where T equals 1 and T equals 0. T is your label for this. Then what we're going to do is given, that learned propensity score-- so we take your data set. You, first, learn the propensity score. Then we're going to re-weight the data points according to the inverse of the propensity score. And you might ask, this looks familiar. This whole notion of re-weighting data points, this whole notion of trying to figure out which, quote, unquote, "data set" a data point came from, the data set of individuals who receive treatment 1 or the data set of individuals who receive treatment 0-- that sounds really familiar. And it's because it's exactly what you saw in lecture 10, when we talked about data set shift. In fact, this whole entire method, as you'll develop in problem set 4, is a special case of learning under data set shift. So here, now, is the propensity score algorithm. We take our data set, which have samples of X, T, and Y where Y, of course, tells you the potential outcome corresponding to the treatment T. We're going to use any machine learning method in order to estimate this model that can give you a probability of treatment given X. Now, critically, we need a probability for this. We're not trying to do classification. We need an actual probability, and so if you remember back to previous lectures where we spoke about calibration, about the ability to accurately predict probabilities, that is going to be really important here. And so for example, if you were to use a deep neural network in order to estimate the propensity scores, deep networks are well known to not be well calibrated. And so one would have to use one of a number of new methods that have been recently developed to make the outputs of deep learning calibrated in order to use this type of technique. 
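A sketch of step 1, with calibration in mind: logistic regression is often reasonably calibrated out of the box, while a more flexible classifier can be wrapped in scikit-learn's CalibratedClassifierCV. The model choices here are illustrative, not a recommendation of any particular method.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.ensemble import GradientBoostingClassifier

    def estimate_propensity(x, t):
        # Logistic regression tends to give usable probabilities directly.
        model = LogisticRegression(max_iter=1000).fit(x, t)
        return model.predict_proba(x)[:, 1]            # e_hat(x) = P(T = 1 | X = x)

    def estimate_propensity_calibrated(x, t):
        # For a more flexible base model, add a calibration layer so the outputs
        # can be treated as probabilities rather than just rankings.
        base = GradientBoostingClassifier()
        model = CalibratedClassifierCV(base, method="isotonic", cv=5).fit(x, t)
        return model.predict_proba(x)[:, 1]

    # Tiny synthetic usage.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(5000, 3))
    t = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))
    print(estimate_propensity(x, t)[:5])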
So after finishing step 1, now that you have a model that allows you to estimate the propensity score for every data point X, we can take those and estimate your average treatment effect with the following formula. It's 1 over n times the sum, over the data points corresponding to treatment 1, of Yi-- that part is identical to before. But what you see now is that we're going to divide it by the propensity score, and so this denominator, that's the new piece here. That inverse of the propensity score is precisely the weighting that we were referring to earlier, and the same thing happens over here for Ti equals 0. Now, let's try to get some intuition about this formula, and I like trying to get intuition by looking at a special case. So the simplest special case that we might be familiar with is that of a randomized control trial, where, because you're flipping a coin and each data point either gets treatment 0 or treatment 1, the propensity score is precisely, deterministically equal to 0.5. So let's take this now. No machine learning done here. Let's just plug it in to see if we get back the formula that I showed you earlier for the estimate of the average treatment effect in a randomized control trial. So we plug that in over there. This now becomes 0.5, and plug that in over here. This also becomes 0.5. And then what we're going to do is we're just going to take that 0.5, we're going to bring that out, and this is going to become a 2 over here, and same, a 2 over here. And you get to the following formula, which, if you were to compare to the formula from a few slides ago, is almost identical, except that a few slides ago, over here, I had 1 over n1, and over here, I had 1 over n0. Now, these are two different estimators for the same thing, and the reason why you can say they're the same thing is that, in a randomized control trial, the number of individuals that receive treatment 1 is, on average, n over 2. Similarly, the number of individuals receiving treatment 0 is, on average, n over 2. So that n over 2 cancels out with this 2 over n, which is what gets you a correct estimator. So this is a slightly different estimator, but nearly identical to the one that I showed you earlier, and by this argument, is a consistent estimator of the average treatment effect in a randomized control trial. So any questions before I try to derive this formula for you? So one student asks, so the propensity score is the, quote, unquote, "bias" of how likely people are assigned to T equals 1 or T equals 0? Yes, that's exactly right. So if you were to imagine taking an individual who received treatment 1 but where this probability for that individual is, let's say, very close to 0, it means that there are very few other people like them in the data set who receive treatment 1. They're a red data point in a sea of blue data points. And by dividing by that, we're going to be trying to remove that bias, and that's exactly right. Thank you for that question. Are there other questions? I really appreciate the questions via the chat window, so thank you. So let's now try to derive this formula. Recall the definition of average treatment effect, and for those who are paying very close attention, you might notice that I removed the expectation over Y1. And for this derivation that I'm going to give you, I'm going to assume that the potential outcomes are all deterministic, because it makes the math easier, but it is without loss of generality.
So the average treatment effect is the expectation, with respect to all individuals, of the potential outcome Y1 minus the expectation with respect to all individuals of the potential outcome Y0. So this term over here is going to be our estimate of that, and this term over here is going to be our estimate of this expectation. So naively, if you were to just take the observed data, it would allow you to compute-- if you, for example, just averaged the values of Y for the individual who received treatment 1, that would give you this expectation that I'm showing on the bottom here. I want you to compare that to the one that's actually needed in the average treatment effect. Whereas over here, it's an expectation with respect to individuals that received treatment 1, up here, this was an expectation with respect to all individuals. But the thing inside the expectation is exactly identical, and that's the key point that we're going to work with, which is that we want an expectation with respect to a different distribution than the one that we actually have. And again, this should ring bells, because this sounds very, very familiar to the data set shift story that we talked about a few lectures ago. So I'm going to show you how to derive an estimator for just this first term, and the second term is obviously going to be identical. So let's start out with the following. We know that p of X given T times p of T is equal to p of X times p of T given X. So what I've just done here is use two different formulas for a joint distribution, and then I've divided by p of T given X in order to get the formula that I showed you a second ago. I'm not going to attempt to erase that. I'll leave it up there. So the next thing we're going to do is we're going to say, if we were to compute an expectation with respect to p of X given T equals 1, and if we were to now take the value that we observe, Y1, which we can get observations for all the individuals who received treatment 1, and if we were to re-weight this observation by this ratio, where remember, this ratio showed up in the previous bullet point, then what I'm going to show you in just a moment is that this is equal to the quantity that we actually wanted. Well, why is that? Well, if you expand this expectation, this expectation is an integral with respect to p of X conditioned on T equals 1 times the thing inside the brackets, and because we know that p of-- because we know from up here that p of X conditioned on T equals 1 times p of T equals 1 divided by p of T equals 1 conditioned on X is equal to p of X, this whole thing is just going to be equal to an integral of p of X times Y1, which is precisely the definition of expectation that we want. So this was a very simple derivation to show you that the re-weighting gets you what you need. Now, we can estimate this expectation empirically as follows, the estimate that we're going to now sum over all data points that received treatment 1. We're going to take an average, so we're dividing by the number of data points that received treatment 1. For p of T equals 1, we're just going to use the empirical estimate of how many individuals received treatment 1 in the data set divided by the total number of individuals in the data set. That's n1 divided by n. And for the denominator, p of T equals 1 conditioned on X, we just plug in, now, the propensity score, which we had previously estimated. And we're done. 
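Putting the two steps together, here is a minimal sketch of the inverse-propensity-weighted estimator on the same kind of confounded toy data as before; the optional clipping argument is a stabilization heuristic that comes up next.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def ipw_ate(x, t, y, clip=None):
        """Inverse-propensity-weighted estimate of the average treatment effect."""
        e_hat = LogisticRegression(max_iter=1000).fit(x, t).predict_proba(x)[:, 1]
        if clip is not None:                        # optional heuristic: bound e_hat away from 0 and 1
            e_hat = np.clip(e_hat, clip, 1 - clip)
        n = len(y)
        term1 = np.sum(y[t == 1] / e_hat[t == 1]) / n         # estimates E[Y1]
        term0 = np.sum(y[t == 0] / (1 - e_hat[t == 0])) / n   # estimates E[Y0]
        return term1 - term0

    # Same kind of confounded toy example as earlier: the true ATE is 1.0,
    # and the naive difference in means was well above that.
    rng = np.random.default_rng(0)
    n = 100_000
    age = rng.normal(60, 10, n)
    t = rng.binomial(1, 1 / (1 + np.exp(-(age - 60) / 5)))
    y = 0.1 * age + 1.0 * t + rng.normal(0, 1, n)
    print(ipw_ate(age.reshape(-1, 1), t, y))        # should land close to the true ATE of 1.0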
And so that, now, is our estimate for the first term in the average treatment effect, and you can do the same thing, analogously, for Ti equals 0. And I've shown you the full proof of why this is an unbiased estimator for the average treatment effect. So I'm going to be concluding now, in the next few minutes. First, I just wanted to comment on what we just saw. So we saw a different way to estimate the average treatment effect, which only required estimating the propensity score. In particular, we never had to use a model to predict Y in this approach for estimating the average treatment effect, and that's a good thing and a bad thing. It's a good thing because, if you had errors in estimating your model of Y, as I showed you at the very beginning of today's lecture, that could have a very big impact on your estimate of the average treatment effect. And so that doesn't show up here. On the other hand, this has its own disadvantages. So for example, the propensity score is going to be really, really affected by lack of overlap, because when you have lack of overlap, it means there are some data points where the propensity score is very close to 0 or very close to 1. And that leads to very large variance in your estimators. And a very common trick which is used to try to address that concern is known as clipping, where you simply clip the propensity scores so that they're always bounded away from 0 and 1. But that's really just a heuristic, and it can, of course, then lead to biased estimates of the average treatment effect. So there's a whole family of causal inference algorithms that attempt to use ideas from both covariate adjustment and inverse propensity weighting. For example, there's a method called doubly robust estimators, and we'll try to provide a citation for those estimators in the Scribe notes. And these doubly robust estimators are a different family of estimators that actually bring these two techniques together, and they have a really nice property, which is that if either one of the two models fails, as long as the other one is right, you still get valid estimates of the average treatment effect. I'm going to skip this and just jump to the summary now, which is that we've presented two different approaches for causal inference from observational data-- covariate adjustment and propensity score based methods. And both of these, I need to stress, are only going to give you valid results under the assumptions we outlined in the previous lecture-- for example, that your causal graph is correct; critically, that there's no unobserved confounding; and second, that you have overlap between your two treatment classes. And if you're using a non-parametric regression approach, overlap is extremely important, because without overlap, your model's undefined in regions of space. And thus, as a result, you have no way of verifying whether your extrapolations are correct, and so one has to simply trust the model, which is not something we really like. And in propensity score methods, overlap is very important because, if you don't have it, you get inverse propensity weights that are essentially infinite and lead to extremely high variance estimators. At the end of these slides, which are already posted online, I include some references that I strongly encourage folks to follow up on.
First references to two recent workshops that have been held in the machine learning community so that you can get a sense of what the latest and greatest in terms of research in causal inference are, two different books on causal inference that you can download for free from MIT, and finally, some papers that I think are really interesting, particularly of interest, potentially, to course projects. So we are at time now. I will hang around for a few minutes after lecture, as I would normally. But I'm going to stop the recording of the lecture.
PETER SZOLOVITS: So today I'm going to talk about precision medicine. And we don't really have a very precise idea of what precision medicine is. And so I'm going to start by talking about that a little bit. David talked about disease subtyping. And if you think about how do you figure out what are the subtypes of a disease, you do it by some kind of clustering on a bunch of different sorts of data. And so we have data like demographics, comorbidities, vital signs, medications, procedures, disease trajectories, whatever those mean, image similarities. And today, mostly I'm going to focus on genetics. Because this was the great hope of the Human Genome Project, that as we understood more about the genetic influences on disease, it would help us create precise ways of dealing with various diseases and figuring out the right therapies for them and so on. So I want to start by reviewing a little bit a study that was done by the National Research Council, so the National Academies, and it's called "Toward Precision Medicine." This was fairly recent, 2017. And they have some interesting observations. So they start off and they say, well, why is this relevant now, when it may not have been relevant before? And of course, the biggie is new capabilities to compile molecular data on patients on a scale that was unimaginable 20 years ago. So people estimated that getting the first human genome cost about $3 billion. Today, getting a human genome costs less than $1,000. I have some figures later in the talk showing some of the ads that people are running. Increasing success in utilizing molecular information to improve diagnosis and treatment, we'll talk about some of those. Advances in IT so that we have bigger capabilities of dealing with so-called big data-- a perfect storm among stakeholders that has made them much more receptive to this kind of information. So the fact that costs in the health-care system in the US keep rising and quality doesn't keep rising proportionately makes everybody desperate to come up with new ways of dealing with this problem. And so this looks like the next great hope for how to do it. And shifting public attitudes toward molecular data-- so how many of you have seen the movie Gattaca? A few. So that's a dystopian view of what happens when people are genotyped and can therefore be tracked by their genetics. And it is true that there are horror stories that can happen. But nevertheless, people seem to be more relaxed today about allowing that kind of data to be collected and used. Because they see the potential benefits outweighing the costs. Not everybody-- but that continues to be a serious issue. So this report goes on and says, you know, let's think about how to integrate all kinds of different data about individuals. And they start off and they say, you know, one good example of this has been Google Maps. So Google Maps has a coordinate system, which is basically longitude and latitude, for every point on the Earth. And they can use that coordinate system in order to layer on top of each other information about postal codes, built structures, census tracts, land use, transportation, everything. And they said, wow, this is really cool, if only we could do this in health care. And so their vision is to try to do that in health care by saying, well, what corresponds to latitude and longitude is individual patients. 
And these individual patients have various kinds of data about them, including their microbiome, their epigenome, their genome, clinical signs and symptoms, the exposome, what are they exposed to. And so there's been a real attempt to go out and create large collections of data that bring together all of this kind of information. One of those that is notable is the Department of Health-- well, NIH basically started a project about a year and a half ago called All of Us, sounds sort of menacing. But it's really a million of us. And they have asked institutions around the United States to get volunteers to volunteer to provide their genetic information, their clinical data, where they live, where they commute, things like that, so that they can get environmental data about them. And then it's meant to be an ongoing collection of data about a million people who are supposed to be a representative sample of the United States. So you'll see in some of the projects I talk about later today that many of the studies have been done in populations of European ancestry. And so they may not apply to people of other ethnicities. This is attempting to sample accurately so that the fraction of African Americans and Asians and Hispanics and so on corresponds to the sample in the United States population. There's a long history. How many of you have heard of the Framingham Heart Study? So a lot of people. So Framingham, in the 1940s, agreed to become the subject of a long-term experiment. I think it's now run by Boston University, where every year or two they go out and they survey-- I can't remember the number of people. It started as something like 50,000 people-- about their habits and whether they smoke, and what their weight and height is, and any clinical problems they've had, surgical procedures, et cetera. And they've been collecting that database now over several generations of people that descend from those. And they've started collecting genetic data as well. So All of Us is really doing this on a very large scale. Now, the vision of these of this study was to say that we're going to build this information commons, which collects all this kind of information, and then we're going to develop knowledge from that information or from that data. And that knowledge will become the substrate on which biomedical research can rest. So if we find significant associations, then that suggests that one should do studies, which will not necessarily be answered by the data that they've collected. You may have to grow knock-out mice or something in order to test whether an idea really works. But this is a way of integrating all of that type of information. And of course, it can affect diagnosis, treatment, and health outcomes, which are the holy grail for what you'd like to do in medicine. Now, here's an interesting problem. So the focus, notice, was on taxonomies. So Sam Johnson was a very famous 17th century British writer. And he built encyclopedias and dictionaries, and was a poet and a reviewer and a commentator, and did all kinds of fancy things. And one of his quotes is, "My diseases are an asthma and a dropsy and, what is less curable, 75," years old. So he was funny, too. Now, if you look up dropsy in a dictionary-- how many of you have heard of dropsy? A couple. So how did you hear of it? AUDIENCE: From Jane Austen novels. [LAUGHS] PETER SZOLOVITS: Sorry? From a novel? AUDIENCE: Novels. PETER SZOLOVITS: Yeah. AUDIENCE: I've heard of dropsy [INAUDIBLE].. PETER SZOLOVITS: Yeah. 
I mean, I learned about it by watching Masterpiece Theatre with 19th century people, where the grandmother would take to her bed with the dropsy. And it didn't turn out well, typically. But it took a long time for those people to die. So dropsy is water sickness, swelling, edema, et cetera. It's actually not a disease. It's a symptom of a whole bunch of diseases. So it could be pulmonary disease, heart failure, kidney disease, et cetera. And it's interesting. I look back on this. I couldn't find it for putting together this lecture. But at one point, I did discover that the last time dropsy was listed as the cause of death of a patient in the United States was in 1949. So since then, it's disappeared as a disease from the taxonomy. And if you talk to pulmonary people, they suspect that asthma, which is still a disease in our current lexicon, may be very much like dropsy. It's not a disease. It's a symptom of a whole bunch of underlying causes. And the idea is that we need to get good enough and precise enough at being able to figure out what these are. So I talked to my friend Zack Kohane at Harvard a few weeks ago when I started preparing this lecture. And he has the following idea. And the example I'm going to show you is from him. So he says, well, look, we should have this precision medicine modality space, which is this high-dimensional space that contains all of that information that is in the NRC report. And then what we do is, in this high-dimensional space, if we're lucky, we're going to find clusters of data. So this always happens. If you ever take a very high-dimensional data set and put it into its very high-dimensional representation space, it's almost never the case that the data is scattered uniformly through the space. If that were true, it wouldn't help us very much. But generally, it's not true. And what you find is that the data tends to be on lower-dimensional manifolds. So it's in subsets of the space. And so a lot of the trick in trying to analyze this kind of data is figuring out what those lower-dimensional manifolds look like. And often you will find among a very large data set a cluster of patients like this. And then Zack's approach is to say, well, if you're patient-- it's hard to represent three dimensions in two. But if you're patient that falls somewhere in the middle of such a cluster, then that probably means that they're kind of normal for that cluster, whereas if they fall somewhere at the edge of such a cluster, that probably means that there's something odd going on that is worth investigating, because they're unusual. So then he gave me an example of a patient of his. And let me give you a minute to read this. Yeah? AUDIENCE: What's an armamentarium? PETER SZOLOVITS: Where does it say armamentarium? AUDIENCE: [INAUDIBLE] PETER SZOLOVITS: Oh, yeah. So an armamentarium, historically, is the set of arms that are available to an army. So this is the set of treatments that are available to a doctor. AUDIENCE: Is that the only word you don't know? [LAUGHTER] It's the only word-- AUDIENCE: If I start asking-- AUDIENCE: Based on [INAUDIBLE]. AUDIENCE: Oh, OK. AUDIENCE: In the world. Some of it, I thought I could understand. PETER SZOLOVITS: Well, you probably know what antibiotics are. And immunosuppressants, you've probably heard of. Anyway, it's a bunch of different therapies. So this is what's called a sick puppy. It's a kid who is not doing well. 
He started, at age three, with ulcerative colitis, which was well-controlled by the kinds of medications that they normally give people with that disease. And then all of a sudden, 10 years later, he breaks out with this horrible abdominal pain and diarrhea and blood in his stool. And they try a bunch of stuff that they think ought to work, and it doesn't work. So the kid was facing some fairly drastic options, like cutting out the part of his colon that was inflamed. So your colon is an important part of your digestive tract. And so losing it is not fun and would have bad consequences for the rest of his life. But what they did is they said, well, why is he not responding to any of these therapies? And the difficulty, you can imagine, in that cloud-of-points picture, is, how do you figure out whether the person is an outlier or is in the middle of one of these clusters, when it depends on a lot of things? In this kid's case, what it depended on most significantly was the last six months of his experience, where, before, he was doing OK with the standard treatment. So that cloud might have represented people with ulcerative colitis who were well-controlled by the standard treatment. And now, all of a sudden, he becomes an outlier. So what happened in this case is they said, well, maybe there are different groups of ulcerative colitis patients. So maybe there are ones who have a lifelong remission after treatment with a commonly used monoclonal antibody. So that's the center of the cluster. Maybe there are people who have a multi-year remission but become refractory to these drugs, and after other treatments, they have to undergo a colectomy. So that's the removal of the colon. And then there are people who have, initially, a remission, but then no standard therapy works. So that's the cluster this kid is in. So how do you treat this as a machine learning problem, from the point of view of having lots of data about lots of different patients? And the challenges, of course, include things like, what's your distance function in doing the kind of clustering that people typically do? How do you define what an outlier is? Because there's always a continuum where it just gets more and more diffuse. What's the best representation for time-varying data, which is critical in this case? What's the optimal weighting or normalization of dimensions? So does every dimension in this very high-dimensional space count the same? Or are differences along certain dimensions more important than those along others? And does that, in fact, vary from problem to problem? The answer is probably yes. So how do we find the neighborhood for the patient? Well, I'm going to give you some clues by starting with a shallow dive into genetics. So if you've taken a molecular cell biology class, this should not be news to you. And I'm going to run through it pretty quickly. If you haven't, then I hope at least you'll pick up some of the vocabulary. So a wise biologist said, "Biology is the science of exceptions." There are almost no rules. About 25 years ago, the biology department here taught a special class for engineering faculty to try to explain to us what they were teaching in their introductory molecular biology classes. And I remember, I was sitting next to Jerry Sussman, one of my colleagues.
And after we heard some lecture about the 47 ways that some theory doesn't apply in many, many cases, Jerry turns to me and he says, you know, the problem with this field is there are just too many damned exceptions. There are no theories. It's all exceptions. And so even biologists recognize this. Now, people have observed, ever since human beings walked the earth, that children tend to be similar to their parents in many ways. And until Gregor Mendel, this was a great mystery. Why is it that you are like your parents? I mean, you must have gotten something from them that sort of carries through and makes you similar to them. So Mendel had this notion of having discrete factors of inheritance, which he called genes. He had no idea what these were. But conceptually, he knew that they must exist. And then he did a bunch of experiments on pea plants, showing that peas that are wrinkled tend to have offspring peas that are also wrinkled. And he worked out the genetics of what we now call Mendelian inheritance, namely dominant versus recessive inheritance patterns. Then Johann Miescher came along some years later, and he discovered a weird thing in cells called nuclein, which is now known as DNA. But it wasn't until 1952 that Hershey and Chase said, hey, it's DNA that is carrying this genetic information from generation to generation. And then, of course, Watson, Crick, and Franklin, the following year, deciphered the structure of DNA, that it's this double helix, and then figured out what the mechanism must be that allows DNA to transmit this information. So you have a double helix. You match the four letters A, C, T, G opposite each other, and you can replicate this DNA by splitting it apart and growing another strand that is the complement of the first one. Now you have two. And you can have children, pass on this information to them. So that was a big deal. So a gene is defined by the National Center for Biotechnology Information as a fundamental physical and functional unit of heredity that's a DNA sequence located on a specific site on a chromosome which encodes a specific functional product, namely RNA or a protein. I'll come back to that in a minute. The remaining mystery is it's still very hard to figure out what parts of the DNA code genes. So you would think we might have solved this, but we haven't quite. And what does the rest, which is the vast majority of the DNA, do if it's not encoding genes? And then, how does the folding and the geometry, the topology of these structures, influence their function? So I went back and I read some of Francis Crick's work from the 1950s. And it's very interesting. This hypothesis was considered controversial and tentative at the time. So he said, "The specificity of a piece of nucleic acid is expressed solely by the sequence of its bases, and this sequence is a simple code for the amino acid sequence of a particular protein." And there were people arguing that he was just flat wrong, that this was not true. Of course, it turned out he was right. And then the central dogma is the transfer of information from nucleic acid to nucleic acid or from nucleic acid to protein may be possible. But transfer from protein to protein or from protein to nucleic acid is impossible. And that's not quite true. But it's a good first approximation. So this is where things stood back about 60 years ago. And then a few Nobel prizes later, we began to understand some of the mechanism of how this works. 
And of course, how it works is that you have DNA, which is these four bases, double stranded. RNA gets produced in the process of transcription. So this thing unfolds. An RNA strand is built along the DNA and separates from the DNA, creating a single-stranded RNA. And then it goes and hooks up with a ribosome. And the ribosome takes that RNA and takes the codes in triplets, and each triplet stands for a particular amino acid, which it then assembles in sequence and creates proteins, which are sequences of amino acids. Now, it's very complicated. Because there's three-dimensionality and there's time involved. And the rate constants-- this is chemistry, after all. So again, a few more Nobel prizes later, we have that transcription, that process of turning DNA into RNA, is regulated by promoter, repressor, and enhancer regions on the genome. And the proteins mediate this process by binding to the DNA and causing the beginning of transcription, or causing it to run faster or causing it to run slower, or they interfere with it, et cetera. There are also these enhancers, some of which are very far away from the coding region, that make huge differences in how much of the RNA, and therefore how much of the protein, is made. And the current understanding of that is that, if here is the gene, it may be that the strand of DNA loops around. And the enhancer, even though it's distant in genetic units, is actually in close physical proximity, and therefore can encourage more of this transcription to take place. By the way, if you're interested in this stuff, of course MIT teaches a lot of courses in how to do this. Dave Gifford and Manolis Kellis both teach computational courses in how to apply computational methods to try to decipher this kind of activity. So repressors prevent activator from binding or alters the activator in order to change the rate constants. And so this is another mechanism. Now, one of the problems is that if you look at the total amount of DNA in your genes, in your cells, only about 1 and 1/2% are exons, which are the parts that code for mRNA, and eventually protein. So the question is what does the other 98 and 1/2% do? There was this unfortunate tendency in the biology community to call that junk DNA, which of course is a terrible notion. Because evolution would certainly have gotten rid of it if it was truly junk. Because our cells spend a lot of energy building this stuff. And every time a cell divides, it rebuilds all that so-called junk DNA. So it can't possibly be junk. But the question is, what does it do? And we don't really know for a lot of it. So there are introns-- I'll show you a picture. There are segments of the coding region that don't wind up as part of the RNA. They're spliced out. And we don't quite know why. There are these regulatory sequences, which is only about 5%, that are those promoters and repressors and enhancers that I talked about. And then there's a whole bunch of repetitive DNA that includes transposable elements, related sequences. And mostly, we don't understand what it all does. Hypotheses are things like, well, maybe it's a storehouse of potentially useful DNA so that if environmental conditions change a lot, then the cell doesn't have to reinvent the stuff from scratch. It saved it from previous times in evolution when that may have been useful. But that's pretty much pure speculation at this point. So just recently, the Killian Lecture was given by Gerald Fink, who's a geneticist here. 
And his claim is that a gene is not any segment of DNA that produces RNA or protein. But it's any segment of DNA that is transcribed into RNA that has some function, whatever it is, not necessarily building proteins, but just anything. And I think that view is becoming accepted. So I promised you a little bit of more complexity. So when you look at your DNA in eukaryotes, like us, here's the promoter. And then here is the sequence of the genome. When this gets transcribed, it gets transcribed into something called pre-mRNA, messenger RNA. And then there's this process of alternative splicing that splices out the introns and leaves only the exons. But sometimes it doesn't leave all the exons. It only leaves some of them. And so the same gene can, under various circumstances, produce different mRNA, which then produces different proteins. And again, there's a lot of mysteries about exactly how all this works. Nevertheless, that's the basic mechanism. And then here, I've just listed a few of the complexity problems. So there are things like, RNA can turn into DNA. This is a trick that viruses use a lot. They incorporate themselves into your cell, create a DNA complement to the RNA, and then use that to generate more viruses. So this is very typical of a viral infection. Prions, we also don't understand very well. This is like mad cow disease, where these proteins are able to cause changes in other proteins without going through the RNA/DNA-mediated mechanisms. There are DNA-modifying proteins, the most important of which is the stuff involved in CRISPR-CAS9, which is this relatively new discovery about how bacteria are able to use a mechanism that they stole from viruses to edit the genetic complement of themselves, and more importantly, of other viruses that attack them. So it's an antiviral defense mechanism. And we're now figuring out how to use it to do gene editing. You may have read about this Chinese guy who actually went out and edited the genome of a couple of girls who were born in China, incorporating some, I think, resistance against HIV infections in their genome. And of course, this is probably way too early to do experiments on human beings, because they haven't demonstrated that this is safe. But maybe that'll become accepted. George Church at Harvard has been going around-- he likes to rattle people's chains. And he's been going around saying, well, the guy, he was unethical and was a slob, but what he's doing is a really great idea. So we'll see where that goes. And then there are these retrotransposons, where pieces of DNA in eukarya just pop out of wherever they are and insert themselves in some other place in the genome. And in plants, this happens a lot. So for example, wheat seems to have a huge number of copies of DNA segments that maybe it had only one of, but it's replicated through this mechanism. Last bit of complexity-- so we have various kinds of RNA. There's long non-coding RNA, which seems to participate in gene regulation. There is RNA interference, that there are these small RNA pieces that will actually latch onto the RNA produced by the standard genetic mechanism and prevent it from being translated into protein. This was another Nobel Prize a few years ago. Almost everything in this field, if you're first, you get a Nobel Prize for it. Once the proteins are made, they're degraded differentially. So there are different mechanisms in the cell that destroy certain kinds of proteins much faster than others. 
And so the production rate doesn't tell you how much is going to be there at any particular time. And then there's this secondary and tertiary structure, where there's actually-- what is it? It's about two meters of DNA in each of your cells. So it wouldn't fit. And so it gets wrapped up on these acetylated histones to produce something called chromatin. And again, we don't quite understand how this all works. Because you'd think that if you wrap stuff up like this, it would become inaccessible to transcription. And therefore, it's not clear how it gets expressed. But somehow or other, the cell is able to do that. So there's a lot yet to learn in this area. Now, the reason we're interested in all this is because, if you plot Moore's law for how quickly computers are becoming cheaper per performance, and you plot the cost of gene sequencing, it keeps going down. And it goes down much faster even than Moore's law. So this is pretty remarkable. And it means that, as I said, that $3 billion first genome now costs just a few hundred dollars. In fact, if you're just interested in the whole exome, so only the 2%, roughly, of the DNA that produces genetic coding, you can now go to this company, which I have nothing to do with. I just pulled this off the web. But for $299, they will give you 50-times coverage on about six gigabases. And if you pay them an extra $100, they'll do it at 100x coverage. So these techniques are very noisy. And so it's important to get lots of replicates in order to reassemble what you think is going on. A slightly more recent phenomenon is people say, well, not only can we sequence your DNA but we can sequence the RNA that got transcribed from the DNA. And in fact, you can buy a kit for $360 that will take the RNA from individual cells-- so these are like picoliter amounts of stuff. And it will give you the RNA sequence for $360 for up to 100 cells, so $3, $3.50 per cell. So people are very excited. And there are now also companies that will sell you advanced analysis. So they will correlate the data that you are getting with different databases and figure out whether this represents a dominant or a recessive or an X-linked model, if you have familial data and functional annotation of candidate genes, et cetera. And so, for example, starting about three years ago, if you walk into the Dana-Farber with a newly diagnosed cancer, a solid-tumor cancer, they will take a sample of that cancer, send it off to companies like this, or their own labs, and do sequencing and do analysis and try to figure out exactly which damaged genes you have may be causing the cancer, and maybe more importantly, since it's still a pretty empirical field, which unusual variants of your genes suggest that certain drugs are likely to be more effective in treating your cancer than other drugs. So this has become completely routine in cancer care and in a few other domains. So now I'm going to switch to a more technical set of material. So if you want to characterize disease subtypes using gene expression arrays, microarrays, here's one way to do it. And this is a famous paper by Alizadeh. It was essentially the first of this class of papers back in 2001, I think. Yeah, 2001. And since then, there have been probably tens or hundreds of thousands of other papers published doing similar kinds of analyses on other data sets. So what they did is they said, OK, we're going to extract the coding RNA. We're going to create complementary DNA from it.
We're going to use a technique to amplify that, because we're starting with teeny-tiny quantities. And then we're going to take a microarray, which is either a glass slide with tens or hundreds of thousands of spotted bits of DNA on it or it's a silicon chip with wells that, again, have tens or hundreds of thousands of bits of DNA in it. Now, where does that DNA come from? Initially, it was just a random collection of pieces of genes from the genome. Since then, they've gotten somewhat more sophisticated. But the idea is that I'm going to take the amplified cDNA, I'm going to mark it with one of these jellyfish proteins that glows under light, and then I'm going to flow it over this slide or over this set of wells. And the complementary parts of the complementary DNA will stick to the samples of DNA that are in this well. OK-- stands to reason. An alternative is that you take normal tissue as well as, say, the cancerous tissue, you mark the normal tissue with green fluorescent jellyfish stuff and you mark the cancer with red, and then you flow both of them in equal amounts over the array. That lets you measure a ratio. And you don't have as much of a calibration problem about trying to figure out the exact value. And then you cluster these samples by nearness in the expression space. And you cluster the genes by expression similarity across samples. So it used to be called bi-clustering. And I'll talk in a few minutes about a particular technique for doing this. So this is a typical microarray experiment. The RNA is turned into its complementary DNA, flowed over the microarray chip. And you get out a bunch of spots that are, to various degrees, green and red. And then you calculate their ratio. And then you do this bi-clustering. And what you get is a hierarchical clustering of genes and a hierarchical clustering, in their case, of breast cancer biopsy specimens that express these genes in different ways. So this was pretty revolutionary, because the answers made sense. So when they did this on 19 cell lines and 65 breast tumor samples and a whole bunch of genes, they came up with a clustering that said, hmm, it looks like there are some samples that have this endothelial cell cluster. So it's a particular kind of problem. And you could correlate it with pathology from the tumor slides and different subclasses. And then this is a very typical kind of heat map that you see in this type of study. Another study from 65 breast carcinoma samples, using the gene list that they curated before, looks like it clusters the expression levels into these five clusters. It's a little hard to look at. I mean, when I stare at these, it's not obvious to me why the mathematics came up with exactly those clusters rather than some others. But you can see that there is some sense to it. So here you see a lot of greens at this end of it and not very much at this end, and vice versa. So there is some difference between these clusters. Yeah? AUDIENCE: How did they come up with the gene list? And does anyone ever do this kind of cluster analysis without coming up with a gene list first? PETER SZOLOVITS: Yes. So I'm going to talk in a minute about modern genome-wide association studies, where basically you say, I'm going to look at every gene known to man. So they still have a list, but the list is 20,000 or 25,000. It's whatever we know about. And that's another way of doing it.
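For concreteness, here is a minimal sketch of that kind of two-way hierarchical clustering on a synthetic log-ratio expression matrix, using SciPy. The data, the correlation distance, and the average-linkage choice are assumptions for illustration; the published studies used their own specific protocols.

# Sketch of bi-clustering: cluster the samples by their expression profiles and,
# independently, cluster the genes by their pattern across samples.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n_samples, n_genes = 65, 500
X = rng.normal(size=(n_samples, n_genes))   # stand-in for log2(red/green) ratios
X[:30, :100] += 2.0                         # plant one co-expressed block

sample_link = linkage(pdist(X, metric="correlation"), method="average")
sample_clusters = fcluster(sample_link, t=5, criterion="maxclust")

gene_link = linkage(pdist(X.T, metric="correlation"), method="average")
gene_clusters = fcluster(gene_link, t=5, criterion="maxclust")

print(np.bincount(sample_clusters)[1:], np.bincount(gene_clusters)[1:])
# Reordering rows and columns by these cluster labels is what produces the
# familiar heat map with red/green blocks, plus the dendrograms along its edges.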
So what was compelling about this work, this group's work, is that a later analysis showed that these five subtypes actually had different survival rates, at a p equals 0.01 level of statistical significance. You've seen these survival curves, of course, before from David's lecture. But this is pretty impressive that doing something that had nothing to do with the clinical condition of the patient-- this is purely based on their gene expression levels-- you were able to find clusters that actually behave differently, clinically. So some of them do better than others. So this paper and this approach to work set off a huge set of additional work. This was, again, back in the Alizadeh paper. They did a similar analysis of 96 samples of normal and malignant lymphocytes. And they get a similar result, where the clusters that they identify here correspond to sort of well-understood existing types of lymphoma. So this, again, gives you some confidence that what you're extracting from these genetic correlations is meaningful in the terms that people who deal with lymphomas think about the topic. But of course, it can give you much more detail. Because people's intuitions may not be as effective as these large-scale data analyses. So to get to your question about generalizing this, I mean, here's one way that I look at this. If I list all the genes and I list all the phenotypes-- now, we're a little more sure of what the genes are than of what the phenotypes are. So that's an interesting problem. Then I can do a bunch of analyses. So what is a phenotype? Well, it can be a diagnosed disease, like breast cancer. Or it can be the type of lymphoma from the two examples I've just shown you. It can also be a qualitative or a quantitative trait. It could be your weight. It could be your eye color. It could be almost anything that is clinically known about you. And it could even be behavior. It could be things like, what is your daily output of Twitter posts? That's a perfectly reasonable trait. I don't know if it's genetically predictable. But you'll see some surprising things that are. So then, how do you analyze this? Well, if you start by looking at a particular phenotype and say, what genes are associated with this, then you're doing what's called a GWAS, or a Genome-Wide Association Study. So you look for genetic differences that correspond to specific phenotypic differences. And usually, you're looking at things like single nucleotide polymorphisms. So this is places where your genome differs from the reference genome, the most common genome in the human population, at one particular locus. So you have a C instead of a G or something at one place in your genes. Copy number variations, there are stretches of DNA that have repeats in them. And the number of repeats is variable. So one of the most famous ones of these is the one associated with Huntington's disease. It turns out that if you have up to 20-something repeats of a certain section of DNA, you're perfectly healthy. But if you're above 30-something, then you're going to die of Huntington's disease. And again, we don't quite understand these mechanisms. But these are empirically known. So copy number variations are important, gene expression levels, which I talked about a minute ago. But the trick here in a GWAS is to look at a very wide set of genes rather than just a limited set of samples that you know you're interested in.
Now, the other approach is the opposite, which is to say, let's look at a particular gene and figure out what it's correlated with. And so that's called a PheWAS, a Phenome-Wide Association Study. And now what you do is you list all the different phenotypes. And you say, well, we can do the same kind of analysis to say which of them are disproportionately present in people who have that genetic variant. So here's what a typical GWAS looks like. This is called a Manhattan plot, which I think is pretty funny. But it does kind of look like the skyline of Manhattan. So this is all of your genes laid out in sequence along your chromosomes. And you take a particular phenotype and you say, what is the difference in the ratio of expression levels between people who have this disease and people who don't have this disease? And something like this gene, whatever it is, clearly there is an enormous difference in its expression level. And so you would be surprised if this gene didn't have something to do with the disease. And similarly, you can calculate different significance levels. You have to do something like a Bonferroni correction, because you are testing so many hypotheses simultaneously. And so typically, the top of these lines is the Bonferroni-corrected threshold. And then you say, OK, this guy, this guy, this guy, this guy, and this guy come above that threshold. So these are good candidates for genes that may be associated with this disease. Now, can you go out and start treating people based on that? Well, it's probably not a good idea. Because there are many reasons why this analysis might have failed. All the lessons that you've heard about confounders come in very strongly here. And so typically, what biologists do is they do this kind of analysis. They then create a strain of knock-out mice that have some analog of whatever disease it is that you're studying. And they see whether, in fact, knocking out a certain gene, like this guy, cures or creates the disease that you're interested in, in this mouse model. And then you have a more mechanistic explanation for what the relationship might be. So basically, you're looking at the ratio of the odds of having the disease if you have a SNP, or if you have a genetic variant, to having the disease if you don't have the genetic variant. Yeah? AUDIENCE: I'm just curious on the class size. It seems like the Bonferroni correction is being very limiting here, potentially conservative. And I'm curious if there are specific computational techniques adapted to this scenario that allow you to sort of mine a bit more effectively than those. PETER SZOLOVITS: Yeah. So if you talk to the statisticians, who are more expert at this than the computer scientists typically, they will tell you that Bonferroni is a very conservative kind of correction. And if you can impose some sort of structure on the set of genes that you're testing, then you can cheat. And you can say, well, you know, these 75 genes actually are all part of the same mechanism. And we're really testing the mechanism and not the individual gene. And therefore, instead of making a Bonferroni correction for 75 of these guys, we only have to do it for one. And so you can reduce the Bonferroni correction that way. But people get nervous when you do that. Because your incentive as a researcher is to show statistically significant results. But that whole question of p-values keeps coming under discussion.
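As a minimal sketch of what sits behind a Manhattan plot, the following Python snippet computes, for each variant in a synthetic cohort, the odds ratio of disease in carriers versus non-carriers, a Fisher's exact p-value, and a Bonferroni threshold. The data and the simple carrier versus non-carrier two-by-two test are assumptions for illustration; real GWAS pipelines also adjust for ancestry and other covariates.

# Sketch of a per-variant association scan: a 2x2 table of disease status versus
# carrying the variant, Fisher's exact test, the odds ratio, and a Bonferroni cutoff.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
n_people, n_snps = 5000, 2000
carrier = rng.random((n_people, n_snps)) < 0.2      # True if the person carries that variant
risk = 0.05 + 0.10 * carrier[:, 7]                  # variant 7 secretly raises disease risk
disease = rng.random(n_people) < risk

results = []
for j in range(n_snps):
    a = np.sum(disease & carrier[:, j])             # sick carriers
    b = np.sum(disease & ~carrier[:, j])            # sick non-carriers
    c = np.sum(~disease & carrier[:, j])            # well carriers
    d = np.sum(~disease & ~carrier[:, j])           # well non-carriers
    odds_ratio, p = fisher_exact([[a, b], [c, d]])
    results.append((j, odds_ratio, p))

bonferroni = 0.05 / n_snps                          # the conservative threshold discussed above
hits = [(j, o, p) for j, o, p in results if p < bonferroni]
print("Bonferroni threshold:", bonferroni)
print("variants above it:", hits)                   # plotting -log10(p) per variant gives the Manhattan plot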
So the head of the American Statistical Association, about 15 years ago-- he's a Stanford professor. And he published what became a very notorious article saying, you know, we got it all wrong. Statistical significance is not significance in the standard English sense of the word. And so he called for various other ways and was more sympathetic to Bayesian kinds of reasoning and things like that. So there may be some gradual movement to that. But this is a huge can of worms to which we don't have a very good mechanistic answer. All right. So if you do these GWASs-- and this is the real problem with them is that most of what you see is down here. So you have things with common variants. But they have very small effect sizes when you look at what their effect is on a particular disease. And so that same Zak Kohane that I mentioned earlier has always been challenging people doing this kind of work, saying, look-- for example, we did a GWAS with Kat Liao, who was a guest interviewee here when I was lecturing earlier in the semester. She's a rheumatologist. And we did a genome-wide association study. We found a bunch of genes that had odds ratios of like 1.1 to 1, 1.2 to 1. And they're statistically significant. Because if you collect enough data, everything is statistically significant. But are they significant in the other sense of significance? Well, so Zak's argument was that if you look at something like the odds ratio of lung cancer for people who do and don't smoke, the odds ratio is eight. So when you compare 1.1 to eight, you should be ashamed. You're not doing very well in terms of elucidating what the effects really are. And so Zak actually has argued very strongly that rather than focusing all our attention on these genetic factors that have very weak relationships, we should instead focus more on clinical things that often have stronger predictive relationships. And some combination, of course, is best. Now, it is true that we know a whole bunch of highly penetrant Mendelian mutations. So these are ones where, one change in your genome, and all of a sudden you have some terrible disease. And I think when the Genome Project started in the 1990s, there was an expectation that we would find a whole bunch more things like that from knowing the genome. And that expectation was dashed. Because what we discovered is that our predecessors were actually pretty good at recognizing those kinds of diseases, from Mendel on, with the wrinkled peas. If you see a family in which there's a segregation pattern where you can see who has the disease and who doesn't and what their relationships are, you can get a pretty good idea of what genes or what genetic variants are associated with that disease. And it turns out we had found almost all of them. And so there weren't a whole lot more that are highly penetrant Mendelian mutations. And so what we had is mostly these common variants with small effects. What's really interesting and worth working on is these rare variants with small effects. So the mystery kid, like the kid whose case I showed you, probably has some interesting genetics that is quite uncommon, and obviously, for a long time, had a small effect. But then all of a sudden, something happened. And there is this whole field called unknown disease diagnosis that says, what do you do when some weirdo walks in off the street and you have no idea what's going on?
And there are now companies-- so I was a judge in a challenge about four or five years ago, where we took eight kids like this and we genotyped them, and we genotyped their parents and their grandparents and their siblings. And we took all their clinical data. This was with the consent of their parents, of course. And we made it available as a contest. And we had 20-something participants from around the world who tried to figure out something useful to say about these kids. And you go through a pipeline. And we did this in two rounds. The first round, the pipelines all looked very different. And the second round, a couple of years later, the pipelines had pretty much converged. And I see now that there is a company that did well in one of these challenges that now sells this as a service, like I showed you before, different company. And so you send them the genetic makeup of some kid with a weird condition and the genetic makeup of their family, and it tries to guess which genes might be involved in causing the problem that this child has. That's not the answer, of course. Because that's just a sort of suspicion of a problem. And then you have to go out and do real biological work to try to reproduce that scenario and see what the effects really are. But at least in a couple of cases out of those eight, those hints have, in fact, led to a much better understanding of what caused the problems in these children. That was fun, by the way. I got my name as an author on one of these things that looks like a high-energy physics experiment. The first two pages of the paper is just the list of authors. So it's kind of interesting. Now, here's a more recent study, which is a genome-wide association study of type 2 diabetes. It's not quite genome-wide, because they didn't study every locus. But they studied a hundred loci that have been associated with type 2 diabetes in previous studies. So of course, if you're not the first person doing this kind of work, you can rely on the literature, where other people have already come up with some interesting ideas. So they wound up selecting 94 type 2 diabetes-associated variants, and then a set of traits: the glycemic traits-- fasting insulin, fasting glucose, et cetera; things about your body-- your body mass index, height, weight, circumference, et cetera; lipid levels of various sorts; associations with different diseases-- coronary artery disease, renal function, et cetera. And let me come back to this. So what they did is they said, OK, here's the way we're going to model this. We have an association matrix that has 47 traits by 94 genetic factors. So we make a matrix out of that. And then they did something funny. So they doubled the traits. The technology for matrix factorization is called non-negative matrix factorization. And since many of those associations were negative, what they did is, for each trait that had both positive and negative values, they duplicated the column. They created one column that had positive associations and one column that had the negation of the negative associations with zeros everywhere else. So that's how they dealt with that problem. And then they said, OK, we're going to apply matrix factorization to factor X into two matrices, W and H. And I drew those here on the board. So you have one matrix that-- well, this is your original 47 by 94 matrix. And the question is, can you find two smaller matrices that are 47 by K and K by 94, such that when you multiply these together, you get back some close approximation to that matrix.
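Here is a minimal sketch of that factorization step, including the trick of splitting each signed trait into a non-negative positive part and a negated negative part, plus a crude comparison of reconstruction error across values of K. It uses scikit-learn's generic NMF on random numbers purely for illustration; the actual study used its own Bayesian NMF formulation, and whether the traits sit on the rows or the columns is just a convention.

# Sketch: make a signed trait-by-variant association matrix non-negative by
# splitting each trait into a positive part and a negated negative part, then
# factor X ~ W @ H with NMF and compare reconstruction error across values of K.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_traits, n_variants = 47, 94
Z = rng.normal(size=(n_traits, n_variants))          # signed association scores, e.g. z-scores

X = np.vstack([np.clip(Z, 0, None),                  # positive associations
               np.clip(-Z, 0, None)])                # negated negative associations

for k in (2, 3, 5, 8):
    model = NMF(n_components=k, init="nndsvda", max_iter=2000, random_state=0)
    W = model.fit_transform(X)                       # (2 * n_traits) x k
    H = model.components_                            # k x n_variants
    err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
    print(f"K={k}: relative reconstruction error {err:.3f}")
# Each of the K components is a candidate subtype: a weighted bundle of variants
# (a row of H) tied to a weighted bundle of traits (the corresponding column of W).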
Now, if you've been looking at the literature, there are all kinds of ideas like auto-encoders. And these are all basically the same underlying idea. It's an unsupervised method that says, can we find interesting patterns in the data by doing some kind of dimension reduction? And this is one of those methods for doing dimension reduction. So what's nice about this one is that when they get their W and H, they predict X from that. And then they know, of course, what the error is. And they say, well, minimizing that error is our objective. So that also lets them get at the question of, what's the right K? And that's an important problem. Because with normal clustering methods, like hierarchical clustering, you have to specify the number of clusters that you're looking for. And that's hard to do a priori, whereas this technique can suggest at least which one fits the data best. And so the loss function is some regularized L2 distance between the reconstruction, W times H, and X, plus some penalty terms based on the size of W and H, coupled with these relevance weights that-- you can look at the paper, which I think I referred to in here and I asked you to read. And then they use sampling and a whole bunch of computational tricks to speed up the process. So they got about 17,000 people from four different studies. They're all of European ancestry. So there's the usual generalization problem of, how do you apply this to people from other parts of the world? And they did individual-level analysis of all the individuals with type 2 diabetes from these. And the results were that they found five subtypes-- again, five-- which were present in 82.3% of iterations. By the way, total random aside, there's a wonderful video at Caltech of the woman who just made the picture of the black hole shadow. And she makes arguments very much like this. We tried a whole bunch of different ways of coming up with this picture. And what we decided was true is whatever showed up in almost all of the different methods of reconstructing it. So this is kind of a similar argument. And their interpretations, medically, are that one of them is involved with variations in the beta cells. So these are the cells in your pancreas that make insulin. One of them is in variations in proinsulin, which is a predecessor of insulin that is under different controls. And then three others have to do with obesity, bad things about your lipid metabolism, and then your liver function. And if you look at their results, the top spider diagrams, so the way to interpret these is that the middle circle, octagon, the one in the very middle, is the one with negative data. The one in between that and the outside is with zero correlation. And the outside one is with positive correlation. And what you see is that different factors have different influences in these different clusters. So these are the factors that are most informative in figuring out which cluster somebody belongs to. And they indeed look considerably different. I'm not going to have you read this. But it'll be in the slides. Now, one thing that's interesting-- and again, this won't be on the final exam. But look at these numbers. They're all tiny. Some of them are hugely statistically significant. So DI, whatever that is, contributes 0.05 units to having the beta-cell type of this disease at a p-value of 6.6 times 10 to the minus 37th. So it's definitely there. It's definitely an effect. But it's not a very big effect.
And what strikes me every time I look at studies like this is just how small those effects are, whether you're predicting some output like the level of insulin in the patient, or whether you're predicting something like a category membership, as in this table. So as I said, PheWAS is a reverse GWAS. And the first paper that introduced the terminology was by Josh Denny and colleagues at Vanderbilt in 2010. And so they did not quite do a phenome-wide association. But they said, we're going to take 25,000 samples from the Vanderbilt biobank, and we're going to take the first 6,000 European Americans with samples, no other criteria for selection. Why European Americans? Because all the GWAS data is about European Americans. So they wanted to be able to compare to that. And then they said, let's pick not one SNP but five different SNPs that we're interested in. So they picked these, which are known to be associated with coronary artery disease and carotid artery stenosis, atrial fibrillation, multiple sclerosis and lupus, rheumatoid arthritis and Crohn's disease. So it's a nice grab-bag of interesting disease associations. And then the hard work they did was they went through the tens of thousands of different billing codes that were available. And they, by hand, clustered them into 744 case groups and said, OK, these are the phenotypes that we're interested in. And that data set, by the way, is still available. And it's been used by a lot of other people, because nobody wants to repeat that analysis. So now what you see is something very similar to what you saw in GWAS, except here, what we have is the ICD-9 code group. I guess by the time this got published, it was up to 1,000. And these are the same kinds of odds ratios for the genetic expression of those markers. And what you find, again, is that this is the p equals 0.05 line. That's the Bonferroni-corrected version. And only multiple sclerosis comes up for this particular SNP, which was one of the ones that they expected to come up. But they were interested to see what else lights up when you do this sort of analysis. And what they discovered is that malignant neoplasm of the rectum, benign digestive tract neoplasms-- so there's something going on about cancer that is somehow related to this single-nucleotide polymorphism, not at a statistically high enough level, but it's still kind of intriguing that there may be some relationship there. Yeah? AUDIENCE: So is this data at all public? Or is this at one particular hospital? Or who has this data? Would it be combined? PETER SZOLOVITS: Yeah. I don't believe that you can get their data unless-- I think, if-- I mean, they're pretty good about collaborating with people. So if you're willing to become a volunteer employee at Vanderbilt, they could probably take you. But I just made that up. But every hospital has very strong controls. Now, what is available is the NCBI has GEO, the Gene Expression Omnibus, which has enormous amounts-- like, I think, hundreds of billions of sample data. But you don't often know exactly what the sample is from. So it comes with an accession number and an English description of what kind of data it is. And there are actually lots of papers where people have done natural language processing on those English descriptions in order to try to figure out what kind of data this is. And then they can make use of it. So you can be clever. And there's a ton of data out there, but it's not well-curated data. Now, what's interesting is you don't always get what you expect.
So for example, that SNP was selected because it's thought to be associated with multiple sclerosis and lupus. But in reality, the association with lupus is not significant. Its p-value of 0.5, which is not very impressive. The association with multiple sclerosis is significant. And so they found, in this particular study, a couple of things that had been expected but didn't work out. So for example, this SNP, which was associated with coronary artery disease and thought to be associated with this carotid plaque deposition in your carotid artery, just isn't. p-value of 0.82 is not impressive at all. OK, onward. So that was done for SNPs. Now, a very popular idea today is to look at expression levels, partly because of those prices I showed you where you can very cheaply get expression levels from lots of samples. And so there's this whole notion of Expression Quantitative Trait Loci, or EQTL, that says, hey, instead of working as hard as the Vanderbilt guys did to figure out these hundreds of categories of disease, let's just take your gene expression levels and use those as defining the trait that we're interested in. So now we're looking at the relationship between your genome and the expression levels. And so you might say, well, that ought to be easy. Because if the gene is there, it's going to get expressed. But of course, that's not telling you whether the gene is being activated or repressed or enhanced, or whether any of these other complications that I talked about earlier are present. And so this is an interesting empirical question. And so people say, well, maybe a small genetic variation will cause different expression levels of some RNA. And we can measure these, and then use those to do this kind of analysis. So differential expression in different populations-- there is evidence that, for example, if you take 16 people of African descent, then 17% of the genes in a small sample of 16 people differ in their expression level among those individuals; and similarly, 26% in this Asian population and 17% to 29% in a HapMap sample. Of course, some of these differences may be because of confounders like environment, different tissues, limited correlation of these expression levels to disease phenotypes. Nevertheless, this type of analysis has uncovered relationships between these EQTLs and asthma and Crohn's disease. So I'll let you read the conclusion of one of these studies. So this is saying what I said before, that we probably know all the Mendelian diseases. So the diseases that we're interested in understanding better today are the ones that are not Mendelian, but they're some complicated combination of effects from different genes. And that makes it, of course, a much harder problem. There is an interesting recent paper-- well, not that recent-- 2005-- that uses Bayesian network technology to try to get at this. And so they say, well, if you have some quantitative trait locus and you treat the RNA expression level as this expression quantitative trait locus, and then you take C as some complex trait, which might be a disease or it might be a proclivity for something, or it might be one of Josh Denny's categories or whatever, then there are a number of different Bayesian network-style models that you can build. So you can say, ah, the genetic variant causes a difference in gene expression, which in turn causes the disease. Or you could say, hmm, the genetic trait causes the disease, which in turn causes the observable difference in gene expression. 
Or you can say that the genetic variant causes both the expression level and the disease, but they're not necessarily coupled. So they may be conditionally independent given the genetic variant. Or you can have more complex issues, like you could have the gene causing changes in expression level of a whole bunch of different RNA, which combined cause some disease. Or you can have different genetic changes all impacting the expression of some RNA, which causes the disease. Or-- just wait for it. Oops. You can have models like this that say, we have some environmental contributions and a bunch of different genes which affect the expression of a bunch of different EQTLs, which cause a bunch of clinical traits, which cause changes in a bunch of reactive RNA, which cause comorbidities. So the approach that they take is to say, well, we can generate a large set of hypotheses like this, and then just calculate the likelihood of the data given each of these hypotheses. And whichever one assigns the greatest likelihood to the data is most likely to be the one that's close to correct. So let me just blast through the rest of this quickly. Scaling up genome-phenome association studies-- the UK Biobank is sort of like this All of Us project. But they do make their data available. All of Us will, also, but it hasn't been collected yet. UK Biobank has about half a million de-identified individuals with full exome sequencing, although they only have about 10% of what they want now. And many of them will have worn 24-hour activity monitors so that we have behavioral data. Some of them have had repeat measurements. They do online questionnaires. About a fifth of them will have imaging. And it's linked to their electronic health record. So we know if they died or if they had cancer or various hospital episodes, et cetera. And there's a website here which publishes the latest analyses. And so you see, on April 18, genetic variants that protect against obesity and type 2 diabetes discovered, moderate meat-eaters are at risk of bowel cancer, and research identifies genetic causes of poor sleep. So this is all over the place. But these are all the studies that are being done by this. I'll skip this. But there's a group here at MGH and the Broad that is using this data to do many, many large-scale genome-wide association studies. And one of the things that I promised you, which is interesting, is that, from these studies, they say, well, the heritability of height is pretty good. It's about 0.46 with a p-value of 10 to the minus 109th. So your height is definitely determined, in large part, by your parents' height. But what's interesting is that whether you get a college degree or not is determined by whether your parents got a college degree or not. This is probably not genetic. Or it's only partly genetic. But it clearly has confounders from money and social status and various things like that. And then what I found amusing is that even TV-watching is partly heritable from your genetics. Fortunately, my parents watch a lot of TV. The last thing I wanted to mention, but I'm not going to have time to get into it, is this notion of gene set enrichment analysis. It's what I was saying before, that genes typically don't act by themselves. And so if you think back on high school biology, you probably learned about the Krebs cycle that powers cellular mechanisms. So if you break any part of that cycle, your cells don't get enough energy.
And so it stands to reason that if you want to understand that sort of metabolism, you shouldn't be looking at an individual gene. But you should be looking at all of the genes that are involved in that process. And so there have been many attempts to try to do this. The Broad Institute here has a set of, originally, 1,300 biologically-defined gene sets. So these were ones that interacted with each other in controlling some important mechanism in the body. They're now up to 18,000. For example, genes involved in oxidative phosphorylation in muscle tissue show reduced expression in diabetics, although the average decrease per gene is only 20%. So they have these sets. And from those, there is a very nice technique that is able to pull-- it's essentially a way of strengthening the genome-wide associations by allowing you to associate them with these sets of genes. And the approach that they take is quite clever. They say, if we take all the genes in a gene set and we order them by their correlation with whatever trait we're interested in, then the genes that are closer to the beginning of that are more likely to be involved. Because they're the ones that are most strongly associated. And so they have this random walk process that finds sort of the maximum place where you can say anything before that is likely to be associated with the disease that you're interested in. And they've had a number of successes in showing enrichment in various diseases and various biological factors. The last thing I want to say is a little bit disappointing. I was just really looking for the killer paper to talk about that uses some really sophisticated deep learning, machine learning. And as far as I can tell, it doesn't exist yet. So most of these methods are based on clustering techniques and on clever ideas, like the one for gene set enrichment analysis. But they're not neural network types of techniques. They're not immensely sophisticated. So what you see coming up is things like Bayesian networks and clustering and matrix factorization and so on, which sort of sound like 10-, 15-, 20-year-old technologies. And I haven't seen examples yet of the hot off the presses, we built an 83-layer neural network that outperforms these other methods. I suspect that that's coming. It just hasn't hit yet, as far as I know. If you know of such papers, by all means, let me know. All right. Thank you.
PETER SZOLOVITS: OK. Today's topic is differential diagnosis. And so I'm just quoting Wikipedia here. Diagnosis is the identification of the nature and cause of a certain phenomenon. And differential diagnosis is the distinguishing of a particular disease or condition from others that present similar clinical features. So doctors typically talk about differential diagnosis when they're faced with a patient and they make list of what are the things that might be wrong with this patient. And then they go through the process of trying to figure out which one it actually is. So that's what we're going to focus on today. Now, just to scare you, here's a lovely model of human circulatory physiology. So this is from Guyton's textbook of cardiology. And I'm not going to hold you responsible for all of the details of this model, but it's interesting, because this is, at least as of maybe 20 years ago, the state of the art of how people understood what happens in the circulatory system. And it has various control inputs that determine things like how your hormone levels change various aspects of the cardiovascular system and how the interactions between different components of the cardiovascular system affect each other. And so in principle, if I could tune this model to me, then I could make all kinds of pretty good predictions that say if I increase my systemic vascular resistance, then here's what's going to happen as the rest of the system adjusts. And if I get a blockage in a coronary artery, then here's what's going to happen to my cardiac output and various other things. So this would be terrific. And if we had this kind of model for not just the cardiovascular system, but the entire body, then we'd say, OK, we've solved medicine. Well, we don't have this kind of model for most systems. And also, there's this minor problem that if I give you this model and say, "How does this relate to a particular patient?", how would you figure that out? This has hundreds of differential equations that are being represented by this diagram. And they have many hundreds of parameters. And so we were joking when we started working with this model that you'd really have to kill the patient in order to do enough measurements to be able to tune this model to their particular physiology. And of course, that's probably not a good practical approach. We're getting a little better by developing more non-invasive ways of measuring these things. But that's moving along very slowly. And I don't expect that I or maybe even any of you will live long enough that sort of this approach to doing medical reasoning and medical diagnosis is actually going to happen. So what we're going to look at today is what simpler models are there for diagnostic reasoning. And I'm going to take the liberty of inflicting a bit of history on you, because I think it's interesting where a lot of these ideas came from. So the first idea was to build flowcharts. Oh, and by the way, the signs and symptoms, I've forgotten if we've talked about that in the class. So a sign is something that a doctor sees, and a symptom is something that the patient experiences. So a sign is objective. It's something that can be told outside your body. A symptom is something that you feel. So if you're feeling dizzy, then that's a symptom, because it's not obvious to somebody outside you that you're dizzy, or that you have a pain, or such things. 
Normally, we talk about manifestations or findings, which is sort of a super category of all the things that are determinable about a patient. So we'll talk about flowcharts, models based on associations between diseases and these manifestations. Then there are some issues about whether you're trying to diagnose a single disease or a multiplicity of diseases, which makes the models much more complicated whether you're trying to do probabilistic diagnosis or definitive or categorical. And then we'll talk about some utility theoretic methods. And I'll just mention some rule-based and pattern-matching kinds of approaches. So this is kind of cute. This is from 1973. And if you were a woman and walked into the MIT Health Center and complained of potentially a urinary tract infection, they would take out this sheet of paper, which was nicely color-coded, and they would check a bunch of boxes. And if you hit a red box, that represented a conclusion. And otherwise, it gave you suggestions about what further tests to do. And this was essentially a triage instrument. It said, does this woman have a problem that requires immediate attention? And so we should either call an ambulance and take them to a hospital, or is it something where we can just tell them to come back the next day and see a doctor, or is it in fact some self-limited thing where we say, take two aspirin, and it'll go away. So that was the attempt here. Now, interestingly, if you look at the history of this project between the Beth Israel Hospital and Lincoln Laboratories, it started off as a computer aid. So they were building a computer system that was supposed to do this. And then in-- but you can imagine, in the late 1960s, early 1970s, computers were pretty clunky. PCs hadn't been invented yet. So this was like mainframe kinds of operations. It was very hard to use. And so they said, well, this is a small enough program that we can reduce it to about 20 flow sheets-- 20 sheets like this, which they proceeded to print up. And I was amused, because in the-- around 1980, I was working in my office one night. And I got this splitting headache. And I went over to MIT medical. And sure enough, the nurse pulled out one of these sheets for headaches and went through it with me and decided that a couple of Tylenols should fix me. But it was interesting. So this was really in use for a while. Now, the difficulty with approaches like this, of which there have been many, many, many in the medical world, is that they're very fragile. They're very specific. They don't take account of unusual cases. And there's a lot of effort in coming to consensus to build these things. And then they're not necessarily useful for a long time. So MIT actually stopped using them shortly after my headache experience. But if you go over to a hospital and you look on the bookshelf of a junior doctor, you will still find manuals that look kind of like this that say, how do we deal with tropical diseases? So you ask a bunch of questions, and then depending on the branching logic of the flowchart, it'll tell you whether this is serious or not. And the reason is because if you do your medical training in Boston, you're not going to see very many tropical diseases. And so you don't have a base of experience on the basis of which you can learn and become an expert at doing it. And so they use this as a kind of cheat sheet. I mentioned that the association between diseases and symptoms is another important way of doing diagnosis. 
And I swear to you, there was a paper in the 1960s, I think, that actually proposed this. So if any of you have hung around ancient libraries, libraries used to have card catalogs that were physical pieces of paper, cardboard. And one of the things they did with these was each card would be a book. And then around the edges were a bunch of holes, and depending on categorizations of the book along various dimensions, like its Dewey decimal number, or the top digits of its Library of Congress number or something, they would punch out holes in the borders. And this allowed you to do a kind of easy sorting of these books. So say you've got a bunch of cards together, when people were returning their books, and you wanted to find all the math books. So what you would do is you'd stick a needle through the hole that represented math books, and then you shake the pile, and all the math books would fall out because their holes had been punched out. So somebody seriously proposed this as a diagnostic algorithm. And in fact, implemented it. And was trying to even make money on it. I think this was an attempt at a commercial venture, where they were going to provide doctors with these library cards that represented diseases. And the holes now represented not mathematics versus literature, but they represented shortness of breath versus pain in the left ankle versus whatever. And again, as people came in and complained about some condition, you'd stick a needle through that condition and you'd shake, and up would come the cards that had that condition in common. So one of the obvious problems with this approach is that if you had two things wrong with you, then you would wind up with no cards very quickly, because nothing would fall out of the pile. So this didn't go anywhere. But interestingly, even in the late 1980s, I remember being asked by the board of directors of the New England Journal of Medicine to come to a meeting where they had gotten a pitch from somebody who was proposing essentially exactly this diagnostic model, except implemented in a computer now and not in these library cards. And they wanted to know whether this was something that they ought to get behind and invest in. And I and a bunch of my colleagues assured them that this was probably not a great idea and they should stay away from it, which they did. Well, a more sophisticated model is something like a Naive Bayes model that says if you have a disease-- where is my cursor? If you have a disease, and you have a bunch of manifestations that can be caused by the disease, we can make some simplifying assumptions that say that you will only ever have one disease at a time, which means that the values of that node D form an exhaustive and mutually exclusive set of values. And we can assume that the manifestations are conditionally independent observables that depend only on the disease that you have, but not on each other or not on any other factors. And if you make that assumption, then you can apply good old Thomas Bayes's rule. This, by the way, is the Reverend Bayes. Do you guys know his history? So he was a nonconformist minister in England. And he was not a professional mathematician-- I mean, he was an amateur mathematician. But he decided that he wanted to prove to people that God existed. And so he developed Bayesian reasoning in order to make this proof. And so his argument was, well, suppose you're completely in doubt. So you have 50/50 odds that God exists. And then you say, let's look at miracles.
And let's ask, what's the likelihood of this miracle having occurred if God exists versus if God doesn't exist? And so by racking up a bunch of miracles, you can convince people more and more that God must exist, because otherwise all these miracles couldn't have happened. So he never published this in his lifetime, but after his death one of his colleagues actually presented this as a paper at the Royal Society in the UK. And so Bayes became famous as the originator of this notion of how to do probabilistic reasoning about at least fairly simple situations, like in his case, the existence or nonexistence of God. Or in our case, the cause of some disease, the nature of some disease. And so you can draw these trees. And Bayes's rule is very simple. I'm sure you've all seen it. One thing that, again, makes contact with medicine is that a lot of times, you're not just interested in the impact of one observable on your probability distribution, but you're interested in the impact of a sequence of observations. And so one thing you can do is you can say, well, here is my general population. So let's say disease 2 has 37% prevalence and disease 1 has 12%, et cetera. And now I make some observation. I apply Bayes's rule. And I revise my probability distribution. So this is the equivalent of finding a smaller population of patients who have all had whatever answer I got for symptom 1. And then I just keep doing that. And so this is the sequential application of Bayes's rule. And of course, it does depend on the conditional independence of all these symptoms. But in medicine, people don't much like to do math, even arithmetic. And they prefer doing addition rather than multiplication, because it's easier. And so what they've done is they said, well, instead of representing all this data in a probabilistic framework, let's represent it as odds. And if you represent it as odds, then the odds of some disease given a bunch of symptoms, given the independence assumption, is just the prior odds of the disease times the conditional odds, the likelihood ratio of each of the symptoms that you've observed. So you've just got to multiply these together. And then because they like adding more than multiplying, they said, let's take the log of both sides. And then you can just add them. And so if you remember when I was talking about medical data, there are things like the Glasgow Coma score, or the APACHE score, or various measures of how badly or well a patient is doing that often involve adding up numbers corresponding to different conditions that they have. And what they're doing is exactly this. They're sequentially applying Bayes's rule with these independence assumptions in the form of log odds, adding rather than multiplying, and that's how they're doing it. Very quickly. Somebody in a previous lecture was wondering about receiver operating characteristic curves. And I just wanted to give you a little bit of insight on those. So if you do a test on two populations of patients-- the red ones are sick patients. The blue ones are not sick patients. You do some test. What you expect is that the result of that test will be some continuous number, and it'll be distributed something like the blue distribution for the well patients and something like the red distribution for the ill patients. And typically, we choose some threshold.
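As a concrete illustration of both of those points-- the additive log-odds bookkeeping and the choice of a decision threshold-- here is a minimal Python sketch. The disease, the two findings, and every number in it are made up purely for illustration; they are not from the lecture or from any real scoring system.

    import math

    # Sequential Bayes in log-odds form: posterior log-odds = prior log-odds
    # plus one log likelihood ratio per observed finding (assuming the findings
    # are conditionally independent given the disease).
    prior = 0.05                                   # hypothetical prior probability of the disease
    observed = {
        # finding: (P(finding | disease), P(finding | no disease)) -- made-up numbers
        "fever": (0.80, 0.10),
        "rash":  (0.60, 0.05),
    }

    log_odds = math.log(prior / (1 - prior))
    for name, (p_d, p_not_d) in observed.items():
        log_odds += math.log(p_d / p_not_d)        # just add up the log likelihood ratios

    posterior = 1 / (1 + math.exp(-log_odds))      # convert back from log-odds to a probability
    threshold = 0.5                                # an illustrative operating point
    print(round(posterior, 3), "call it disease" if posterior > threshold else "call it well")

On this view, additive clinical scores of the kind just mentioned are doing the top half of this calculation, with the log likelihood ratios rounded into small integer point values.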
And we say, well, if you choose this to be the threshold between a prediction of sick or well, then what you're going to get is that the part of the blue distribution that lies to the right is the false positives and the part of the red distribution that lies to the left is the false negatives. And often people will choose the point at which these two curves intersect as the threshold, but that, of course, isn't necessarily the best choice. Now, if I give you a better test, one like this, that's terrific, because there is essentially no overlap. Very small false negative and false positive rates. And as I said, you can choose to put the threshold in different places, depending on how you want to trade off sensitivity and specificity. And we measure this by this receiver operating characteristic curve, which has the general form that if you get a curve like this, that means that there's an exact trade-off between sensitivity and specificity, which is the case if you're flipping coins. So it's random. And of course, if you manage to hit the top corner up there, that means that there would be no overlap whatsoever between the two distributions, and you would get a perfect result. And so typically you get something in between. And so normally, if you do a study and your AUC, the area under this receiver operating characteristic curve, is barely over a half, your test is pretty close to worthless, whereas if it's close to 1, then you have a really good method for distinguishing these categories of patients. Next topic. What does it mean to be rational? I should have a philosophy course here. AUDIENCE: Are you talking about pi? PETER SZOLOVITS: Sorry. AUDIENCE: Are you talking about pi? Pi is-- PETER SZOLOVITS: Pi is irrational, but that's not what I'm talking about. Well, so there is this principle of rationality that says that what you want to do is to act in such a way as to maximize your expected utility. So for example, if you're a gambler and you have a choice of various ways of betting in some poker game or something, and if you were a perfect calculator of the odds of getting a queen on your next draw, then you could make some rational decision about whether to bet more or less, but you'd also have to take into account things like, "How could I convince my opponent that I am not bluffing if I am bluffing?" and "How could I convince them that I'm bluffing if I'm not bluffing?" and so on. So there is a complicated model there. But nevertheless, the idea is that you should behave in a way that will give you the best expected outcome. And so people joke that this is Homo economicus, because economists make the assumption that this is how people behave. And we now know that that's not really how people behave. But it's a pretty common model of their behavior, because it's easy to compute, and it has some appropriate characteristics. So as I mentioned, every action has a cost. And utility measures the value or the goodness of some outcome, which is the amount of money you've won, or whether you live or die, or quality-adjusted life years, or various other measures of utility-- how much it costs for your hospitalization. So let me give you an example. This actually comes from a decision analysis service at New England Medical Center Tufts Hospital in the late 1970s. So this was an elderly Chinese gentleman whose foot had gangrene. So gangrene is an infection that people who have bad circulation can often get.
And what he was facing was a choice of whether to amputate his foot or to try to treat him medically. To treat him medically means injecting antibiotics into his system and hoping that his circulation is good enough to get them to the infected areas. And so the choice becomes a little more complicated, because if the medical treatment fails, then, of course, the patient may die, a bad outcome. Or you may have to now amputate the whole leg, because the gangrene has spread from his foot up the leg, and now you're cutting off his leg. So what should you do? And how should you reason about this? So Pauker's staff came up with this decision tree. By the way, decision tree in this literature means something different from decision tree in like C4.5. So your choices here are to amputate the foot or start with medical care. And if you amputate the foot, let's say there is a 99% chance that the patient will live. There's a 1% chance that maybe the anesthesia will kill him. And if we treat him medically, they estimated that there is a 70% chance of full recovery, a 25% chance that he'd get worse, a 5% chance that he would die. If he got worse, you're now faced with another decision, which is, do we amputate the whole leg or continue pushing medicine? And again, there are various outcomes with various estimated probabilities. Now, the critical thing here that this group was pushing was the idea that these decisions shouldn't be based on what the doctor thinks is good for you. They should be based on what you think is good for you. And so they worked very hard to try to elicit individualized utilities from patients. So for example, this guy said that having your foot amputated was worth 850 points on a scale of 1,000, where being healthy was 1,000 and being dead was 0. Now, you could imagine that that number would be very different for different individuals. If you asked LeBron James how bad it would be to have your foot amputated, he might think that it's much worse than I would, because it would be a pain to have my foot amputated, but I could still do most of the things that I do professionally, whereas he probably couldn't as a star basketball player. So how do you solve a problem like this? Well, you say, OK, at every chance node I can calculate the expected value of what happens here. So here it's 0.6 times 995 plus 0.4 times 0. That gets me a value for this chance node. Do the same thing here. I compare the values here and choose the best one. That gives me a value for this decision. And so I fold back this decision tree. And my next slide should have-- yeah, so these are the numbers that you get. And what you discover is that the utility of trying medical treatment is somewhat higher than the utility of immediately amputating the foot, if you believe these probabilities and those utilities. Now, the difficulty is that these numbers are fickle. And so you'd like to do some sort of sensitivity analysis. And you say, for example, what if this gentleman valued living with an amputated foot at 900 rather than 850? And now you discover that amputating the foot looks like a slightly better decision than the other. So this is actually applied in clinical medicine. And there are now thousands of doctors who have been trained in these techniques and really try to work through this with individual patients. Of course, it's used much more on an epidemiological basis when people look at large populations. AUDIENCE: I have a question. PETER SZOLOVITS: Yeah.
AUDIENCE: How are the probabilities assessed? PETER SZOLOVITS: So the service that did this study would read the literature, and they would look in databases. And they would try to estimate those probabilities. We can do a lot better today than they could at that time, because we have a lot more data that you can look at. But you could say, OK, for people-- men of this age who have gangrenous feet, what fraction of them have the following experience? And that's how these are estimated. And some of it was just gut feeling-- it feels like about 5%. OK. So I just said this. And then the question of where do you get these utilities is a tricky one. So one way is to do the standard gamble, which says, OK, Mr. Szolovits, we're going to play this game. We're going to roll a fair die or something that will come up with some continuous number between 0 and 1, and then I'm going to play the game where either I chop off your foot, or I roll this die, and if it exceeds some threshold, then I kill you. Nice game. So now if you find the point at which I'm indifferent, if I say, well, 0.8, that's a 20% chance of dying. It seems like a lot. But maybe I'll go for 0.9, right? Now you've said, OK, well, that means that you value living without a foot at 0.9 of the value of being healthy. So this is a way of doing it. And this is typically done. Unfortunately, of course, it's difficult to ascertain these utilities. And it's also not stable. So people have done experiments where they get somebody to give them this kind of number as a hypothetical, and then when that person winds up actually faced with such a decision, they no longer will abide by that number. So they've changed their mind when the situation is real. AUDIENCE: But it's nice, because there are two feet, right? So you could run this experiment and see. PETER SZOLOVITS: They didn't actually do it. It was hypothetical. OK. Next program I want to tell you about, again, the technique for this was developed as a PhD thesis here at MIT in 1967. So this is hot off the presses. But it's still used, this type of idea. And so this was a program that was published in the American Journal of Medicine, which is a high impact medical journal. I think this was actually the first sort of computational program that journal had ever published as a medical journal. And it addressed the problem of the diagnosis of acute oliguric renal failure. Oliguric means you're not peeing enough. Renal is your kidney. So something's gone wrong with your kidney, and you're not producing enough urine. Now, this is a good problem to address with these techniques, because if something happens to you suddenly, it's very likely that there is one cause for it. If you are 85 years old and you have a little heart disease and a little kidney disease and a little liver disease and a little lung disease, there's no guarantee that there was one thing that went wrong with you that caused all these. But if you were OK yesterday and then you stopped peeing, it's pretty likely that there's one thing that's gone wrong. So it's a good application of this model. So what they said is there are 14 potential causes. And these are exhaustive and mutually exclusive. There are 27 tests or questions or observations that are relevant to the differential. These are cheap tests, so they didn't involve doing anything either expensive or dangerous to the patient. It was measuring something in the lab or asking questions of the patient. But they didn't want to have to ask all of them, because that's pretty tedious.
And so they were trying to minimize the amount of information that they needed to gather in order to come up with an appropriate decision. Now, in the real problem, there were three invasive tests that are dangerous and expensive, and then eight different treatments that could be applied. And I'm only going to tell you about the first part of this problem. This 1973 article shows you what the program looked like. It was a computer terminal where it gave you choices, and you would type in an answer. And so that was the state of the art at the time. But what I'm going to do is, god willing, I'm going to demonstrate a reconstruction that I made of this program. So these guys are the potential causes of stopping to pee-- acute tubular necrosis, functional acute renal failure, urinary tract obstruction, acute glomerulonephritis, et cetera. And these are the prior probabilities. Now, I have to warn you, these numbers were, in fact, estimated by people sticking their finger in the air and figuring out which way the wind was blowing, because in 1973, there were not great databases that you could turn to. And then these are the questions that were available to be asked. And what you see in the first column, at least if you're sitting close to the screen, is the expected entropy of the probability distribution if you answered this question. So this is basically saying, if I ask this question, how likely is each of the possible answers, given my disease distribution probabilities? And then for each of those answers, I do a Bayesian revision, then I weight the entropy of that resulting distribution by the probability of getting that answer. And that gets me the expected entropy for asking that question. And the idea is that the lower the expected entropy, the more valuable the question. Makes sense. So if we look, for example, the most valuable question is, what was the blood pressure at the onset of oliguria? And I can click on this and say it was, let's say, moderately elevated. And what this little colorful graph is showing you is that if you look at the initial probability distribution, acute tubular necrosis was about 25%, and has gone down to a very small amount, whereas some of these others have grown in importance considerably. So we can answer more questions, we can say-- let's see. What is the degree-- is there proteinuria? Is there protein in the urine? And we say, no, there isn't. I think we say, no, there isn't. 0. And that revises the probability distribution. And then it says the next most important thing is kidney size. And we say-- let's say the kidney size is normal. So now all of a sudden functional acute renal failure jumps up-- which, by the way, is one of these funny catch-all medical categories that says the kidney doesn't work well but doesn't explain why it doesn't work well. It's sort of a generic thing. And sure enough, we can keep answering questions about, are you producing less than 50 ccs of urine, which is a tiny amount, or somewhere between 50 and 400? Remember, this is for people who are not producing enough. So normally you'd be over 400. So these are the only choices. So let's say it's moderate. And so you see the probability distribution keeps changing. And what happened in the original program is they had an arbitrary threshold that said when the probability of one of these causes reaches 95%, then we switch to a different mode, where now we're actually willing to contemplate doing the expensive tests and doing the expensive treatments.
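The question-ranking heuristic just described-- for each candidate question, average the entropy of the Bayes-revised disease distribution over the possible answers, weighted by how likely each answer is-- can be sketched in a few lines. Everything below is a made-up toy: two of the fourteen causes, two of the twenty-seven questions, and invented probabilities, just to show the shape of the computation.

    import math

    def entropy(dist):
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    def revise(prior, likelihood, answer):
        # Bayesian revision of the disease distribution given one answer.
        unnorm = {d: prior[d] * likelihood[d][answer] for d in prior}
        z = sum(unnorm.values())
        return {d: v / z for d, v in unnorm.items()}

    def expected_entropy(prior, likelihood):
        # Weight the entropy of each revised distribution by P(answer), where
        # P(answer) = sum over diseases of P(disease) * P(answer | disease).
        answers = next(iter(likelihood.values())).keys()
        total = 0.0
        for a in answers:
            p_a = sum(prior[d] * likelihood[d][a] for d in prior)
            if p_a > 0:
                total += p_a * entropy(revise(prior, likelihood, a))
        return total

    # Toy priors over two causes and toy answer likelihoods for two questions.
    prior = {"acute tubular necrosis": 0.25, "functional acute renal failure": 0.75}
    questions = {
        "blood pressure elevated?": {"acute tubular necrosis": {"yes": 0.2, "no": 0.8},
                                     "functional acute renal failure": {"yes": 0.7, "no": 0.3}},
        "proteinuria?":             {"acute tubular necrosis": {"yes": 0.5, "no": 0.5},
                                     "functional acute renal failure": {"yes": 0.5, "no": 0.5}},
    }
    for q, lik in questions.items():
        print(q, round(expected_entropy(prior, lik), 3))

The lower the expected entropy, the more valuable the question, so in this toy the blood pressure question wins, while the uninformative proteinuria question, whose likelihoods don't distinguish the two causes, just reproduces the prior entropy.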
And we build the decision tree, as we saw in the case of the gangrenous foot, that figures out which of those is the optimal approach. So the idea here was that because building a decision tree with 27 potential questions becomes enormously bushy, we use a heuristic that says information maximization or entropy reduction is a reasonable way of focusing in on what's wrong with this patient. And then once we've focused in pretty well, we can begin to do more detailed analysis on the remaining more consequential and more costly tests that are available. Now, this program didn't work terribly well, because the numbers were badly estimated, and also because the utility model that they had for the decision-analytic part was particularly terrible. It didn't really reflect anything in the real world. They had an incremental utility model that said the patient either got better, or stayed the same, or got worse. Those were obviously in that order of utilities, but they didn't correspond to how much better or how much worse he got. And so it wasn't terribly useful. So nevertheless, in the 1990s, I was teaching a tutorial at a Medical Informatics conference, and there were a bunch of doctors in the audience. And I showed them this program. And one of the doctors came up afterwards and said, wow, it thinks just the way I do. And I said, really? I don't think so. But clearly, it was doing something that corresponded to the way that he thought about these cases. So I thought that was a good thing. All right. Well, what happens if we can't assume that there's just a single disease underlying the person's problems? If there are multiple diseases, we can build this kind of bipartite model that says we have a list of diseases and we have a list of manifestations. And some subset of the diseases can cause some subset of the symptoms, of the manifestations. And so the manifestations depend only on the diseases that are present, not on each other. And therefore, we have conditional independence. And this is a type of Bayesian network, which can't be solved exactly because of the computational complexity. So a program I'll show you in a minute had 400 or 500 diseases and thousands of manifestations. And the computational complexity of exact solution techniques for these networks tends to go exponentially with the number of undirected cycles in the network. And of course, there are plenty of undirected cycles in a network like that. So there was a program developed originally in the early 1970s called Dialog. And then they got sued, because somebody owned that name. And then they called it Internist, and they got sued because somebody owned that name. And then they called it QMR, which stands for Quick Medical Reference, and nobody owned that name. So around 1982, this program had about 500 diseases, which they estimated represented about 70% to 75% of major diagnoses in internal medicine, and about 3,500 manifestations. And it took about 15 man-years of manual effort to sit there and read medical textbooks and journal articles and look at records of patients in their hospital. The effort was led by a computer scientist at the University of Pittsburgh and the chief of medicine at UPMC, the University of Pittsburgh Medical Center, who was just a fanatic. And he got all the medical school trainees to spend hours and hours coming up with these databases. By 1997, they had commercialized it through a company that had bought the rights to it.
And they had-- that company had expanded it to about 750 diagnoses and about 5,500 manifestations. So they made it considerably larger. Details are-- I've tried to put references on all the slides. So here's what data in QMR looks like. For each diagnosis, there is a list of associated manifestations with evoking strengths and frequencies. So I'll explain that in a minute. On average, there are about 75 manifestations per disease. And for each disease-- for each manifestation in addition to the data you see here, there is also an import measure that says how critical it is to explain this particular symptom or sign or lab value in the final diagnosis. So for example, if you have a headache, that could be incidental and it's not that important to explain it. If you're bleeding from your gastrointestinal system, that's really important to explain. And you wouldn't expect a diagnosis of that patient that doesn't explain to you why they have that symptom. And then here is an example of alcoholic hepatitis. And the two numbers here are a so-called evoking strength and a frequency. These are both on scales-- well, evoking strength is on a scale of 0 to 5, and frequency is on a scale of 1 to 5. And I'll show you what those are supposed to mean. And so, for example, what this says is that if you're anorexic, that should not make you think about alcoholic hepatitis as a disease. But you should expect that if somebody has alcoholic hepatitis, they're very likely to have anorexia. So that's the frequency number. This is the evoking strength number. And you see that there is a variety of those. So much of those many, many years of effort went into coming up with these lists and coming up with those numbers. Here are the scales. So the evoking strength-- 0 means nonspecific. 5 means it's pathognomonic. In other words, just seeing the symptom is enough to convince you that the patient must have this disease. Similarly, frequency 1 means it occurs rarely, and 5 means that it occurs in essentially all cases, with scaled values in between. And these are kind of like odds ratios. And they add them kind of as if they were log likelihood ratios. And so there's been a big literature on trying to figure out exactly what these numbers mean, because there's no formal definition in terms of counting the number of this and dividing by the number of that to get the right answer. These were sort of impressionistic kinds of numbers. So the logic in the system was that you would come to it and give it a list of the manifestations of a case. And to their credit, they went after very complicated cases. So they took clinical pathologic conference cases from The New England Journal of Medicine. These are cases selected to be difficult enough that doctors are willing to read these. And they're typically presented at Grand Rounds at MGH by somebody who is often stumped by the case. So it's an opportunity to watch people reason interactively about these things. And so you evoke the diagnoses that have a high evoking strength from the given manifestations. And then you do a scoring calculation based on those numbers. The details of this are probably all wrong, but that's the way they went about it. And then you form a differential around the highest scoring diagnosis. Now, this is actually an interesting idea. It's a heuristic idea, but it's one that worked pretty well. So suppose I have two diseases. D1 can cause manifestations 1 through 4. And D2 can cause 3 through 6.
So are these competing to explain the same case or could they be complementary? Well, until we know what symptoms the patient actually has, we don't know. But let's trace through this. So suppose I tell you that the patient has manifestations 3 and 4. OK. Well, you would say, there is no reason to think that the patient may have both diseases, because either of them can explain those manifestations, right? So you would consider them to be competitors. What about if I add M1? So here, it's getting a little dicier. Now you're more likely to think that it's D1. But if it's D1, that could explain all the manifestations, and D2 is still viewable as a competitor. On the other hand, if I also add M6, now neither disease can explain all the manifestations. And so it's more likely, somewhat more likely, that there may be two diseases present. So what Internist had was this interesting heuristic, which said that when you get that complementary situation, you form a differential around the top ranked hypothesis. In other words, you retain all those diseases that compete with that hypothesis. And that defines a subproblem that looks like the acute renal failure problem, because now you have one set of factors that you're trying to explain by one disease. And you set aside all of the other manifestations and all of the other diseases that are potentially complementary. And you don't worry about them for the moment. Just focus on this cluster of things that are competitors to explain some subset of the manifestations. And then there are different questioning strategies. So depending on the scores within these things, if one of those diseases has a very high score and the others have relatively low scores, you would choose a pursue strategy that says, OK, I'm interested in asking questions that will more likely convince me of the correctness of that leading hypothesis. So you look for the things that it predicts strongly. If you have a very large list in the differential, you might say, I'm going to try to reduce the size of the differential by looking for things that are likely in some of the less likely hypotheses so that I can rule them out if that thing is not present. So different strategies. And I'll come back to that in a few minutes. So their test, of course, based on their own evaluation was terrific. It did wonderfully well. The paper got published in The New England Journal of Medicine, which was an unbelievable breakthrough to have an AI program that the editors of The New England Journal considered interesting. Now, unfortunately, it didn't hold up very well. And so there was this paper by Eta Berner and her colleagues in 1994 where they evaluated QMR and three other programs. DXplain is very similar in structure to QMR. Iliad and Meditel are Bayesian network, or almost naive Bayesian types of models developed by other groups. And they looked for results, which is coverage. So what fraction of the real diagnoses in these 105 cases that they chose to test on could any of these programs actually diagnose? So if the program didn't know about a certain disease, then obviously it wasn't going to get it right. And then they said, OK, of the program's diagnoses, what fraction were considered correct by the experts? What was the rank order of that correct diagnosis among the list of diagnoses that the program gave? The experts were asked to list all the plausible diagnoses from these cases. What fraction of those showed up in the program's top 20? 
And then did the program have any value added by coming up with things that the experts had not thought about, but that they agreed when they saw them were reasonable explanations for this case? So here are the results. And what you see is that of the diagnoses in these 105 test cases, 91% of them appeared in the DXplain program, but, for example, only 73% of them in the QMR program. So that means that right off the bat it's missing about a quarter of the possible cases. And then if you look at correct diagnosis, you're seeing numbers like 0.69, 0.61, 0.71, et cetera. So these are-- it's like the dog who sings, but badly, right? It's remarkable that it can sing at all, but it's not something you want to listen to. And then the rank of the correct diagnosis in the program is like 12 or 10 or 13 or so. So it is in the top 20, but it's not at the top of the top 20. So the results were a bit disappointing. And depending on where you put the cutoff, you get the proportion of cases where a correct diagnosis is within the top n. And you see that at 20, you're up at a little over 0.5 for most of these programs. And it gets better if you extend the list longer and longer. Of course, if you extended the list to 100, then you reach 100%, but it wouldn't be practically very useful. AUDIENCE: Why didn't they somehow compare it to the human decision? PETER SZOLOVITS: Well, so first of all, they assumed that their experts were perfect. So they were the gold standard. So they were comparing it to a human in a way. AUDIENCE: Yeah. PETER SZOLOVITS: OK. So the bottom line is that although the sensitivity and specificity were not impressive, the programs were potentially useful, because they had interactive displays of signs and symptoms associated with diseases. They could give you the relative likelihood of various diagnoses. And they concluded that they needed to study whether a program like this actually helped a doctor practice medicine better. So here's just an example. I did a reconstruction of this program. This is the kind of exploration you could do. So if you click on angina pectoris, here are the findings that are associated with it. So you can browse through its database. You can type in an example case, or select an example case. So this is one of those clinical pathological conference cases, and then the manifestations that are present and absent, and then you can get an interpretation that says, OK, this is our differential. And these are the complementary hypotheses. And therefore these are the manifestations that we set aside, whereas these are the ones explained by that set of diseases. And so you could watch how the program does its reasoning. Well, then a group at Stanford came along when belief networks or Bayesian networks were created, and said, hey, why don't we treat this database as if it were a Bayesian network and see if we can evaluate things that way? So they had to fill in a lot of details. They wound up using the QMR database with a binary interpretation. So a disease was present or absent. The manifestation was present or absent. They used causal independence, or a leaky noisy-OR, which I think you've seen in other contexts. So this just says if there are multiple independent causes of something, how likely is it to happen depending on which of those is present or not. And there is a simplified way of doing that calculation, which corresponds to sort of causal independence and is computationally reasonably fast to do.
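The leaky noisy-OR just mentioned has a simple closed form: the finding fails to appear only if the leak doesn't produce it and none of the present diseases produces it, each independently. A minimal sketch, with made-up parameters:

    # Leaky noisy-OR: P(finding | diseases present) = 1 - (1 - leak) * product of (1 - p_i)
    # over the present diseases, where p_i is the probability that disease i alone
    # would cause the finding and the leak covers "all other causes".
    def noisy_or(cause_probs, leak=0.01):
        q = 1.0 - leak
        for p in cause_probs:
            q *= 1.0 - p
        return 1.0 - q

    # e.g. two present diseases that alone would cause the finding 80% and 30% of the time
    print(noisy_or([0.8, 0.3]))   # 0.8614

The computational convenience is that you only need one parameter per disease-finding edge plus a leak term, rather than a full conditional probability table over every combination of parent diseases.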
And then they also estimated priors on the various diagnoses from national health statistics, because the original data did not have prior data-- priors. They wound up not using the evoking strengths, because they were doing a pretty straight Bayesian model where all you need is the priors and the conditionals. They took the frequency as a kind of scaled conditional, and then built a system based on that. And I'll just show you the results. So they took a bunch of Scientific American medicine cases and said, what are the ranks assigned to the reference diagnoses of these 23 cases? And you see that like in case number one, QMR ranked the correct solution as number six, but their two methods, TB and iterative TB ranked it as number one. And then these are attempts to do a kind of ablation analysis to see how well the program works if you take away various of its clever features. But what you see is that it works reasonably well, except for a few cases. So case number 23, all variants of the program did badly. And then they excused themselves and said, well, there's actually a generalization of the disease that was in the Scientific American medicine conclusion, which the programs did find, and so that would have been number one across the board. So they can sort of make a kind of handwavy argument that it really got that one right. And so these were pretty good. And so this validated the idea of using this model in that way. Now, today you can go out and go to your favorite Google App store or Apple's app store or anybody's app store and download tons and tons and tons of symptom checkers. So I wanted to give you a demo of one of these if it works. OK. So I was playing earlier with having abdominal pain and headache. So let's start a new one. So type in how you're feeling today. Should we have a cough, or runny nose, abdominal pain, fever, sore throat, headache, back pain, fatigue, diarrhea, or phlegm? Phlegm? Phlegm is the winner. Phlegm is like coughing up crap in your throat. AUDIENCE: Oh, luckily, they visualize it. PETER SZOLOVITS: Right. So tell me about your phlegm. When did it start? AUDIENCE: Last week. PETER SZOLOVITS: Last week? OK. I signed in as Paul, because I didn't want to be associated with any of this data. So was the phlegm bloody or pus-like or watery or none of the above? AUDIENCE: None of the above. PETER SZOLOVITS: None of the above. So what was it like? AUDIENCE: I don't know. Paul? PETER SZOLOVITS: Is it any of these colors? AUDIENCE: Green. PETER SZOLOVITS: I think I'll make it yellow. Next. Does it happen in the morning, midday, evening, nighttime, or a specific time of year? AUDIENCE: Specific time of year. AUDIENCE: Yeah. Specific time of year. PETER SZOLOVITS: Specific time of year. And does lying down or physical activity make it worse? AUDIENCE: Well, it's generally not worse. So that's physical activity. PETER SZOLOVITS: Physical activity. How often is this a problem? I don't know. A couple times a week maybe. Did eating suspect food trigger your phlegm? AUDIENCE: No. PETER SZOLOVITS: I don't know. I don't know what a suspect food is. AUDIENCE: [INAUDIBLE] food. PETER SZOLOVITS: Yeah. This is going to kill most of my time. AUDIENCE: Is it getting better? PETER SZOLOVITS: Is it improving? Sure, it's improving. Can I think of another related symptom? No. I'm comparing your case to men aged 66 to 72. A number of similar cases gets more refined. Do I have shortness of breath? No. That's good. All right. Do I have a runny nose? Yeah, sure. I have a runny nose. 
It's-- I don't know-- a watery, runny nose. AUDIENCE: Does it say you've got to call [INAUDIBLE]? PETER SZOLOVITS: Well, I'm going to stop, because it will just take-- it takes too long to go through this, but you get the idea. So what this is doing is actually running an algorithm that is a cousin of the acute renal failure algorithm that I showed you. So it's trying to optimize the questions that it's asking, and it's trying to come up with a diagnostic conclusion. Now, in order not to get in trouble with things like the FDA, it winds up wimping out at the end, and it says, if you're feeling really bad, go see a doctor. But nevertheless, these kinds of things are now becoming real, and they're getting better because they're based on more and more data. Yeah. AUDIENCE: [INAUDIBLE] PETER SZOLOVITS: Well, I can't get to the end, because we're only at 36%. [INTERPOSING VOICES] Yeah. Here. All right. Somebody-- AUDIENCE: Oh, I think I need your finger. PETER SZOLOVITS: Oh. OK. Just don't drain my bank account. So The British Medical Journal did a test of a bunch of symptom checkers like this-- 23 of them-- about four years ago. And they asked, on 45 standardized patient vignettes, can they find at least the right level of urgency to recommend whether you should go to the emergency room, get other kinds of care, or just take care of yourself? And then the goals were that if the diagnosis is given by the program, it should be in the top 20 of the list that it gives you. And if triage is given, then it should be the right level of urgency. The correct diagnosis was first in 34% of the cases. It was within the top 20 in 58% of the cases. And the correct triage was 57% accurate. But notice it was more accurate in the emergent cases, which is good, because those are the ones where you really care. So we have-- OK. So based on what he said about me, I have an upper respiratory infection with 50% likelihood. And I can ask what to do next. Watch for symptoms like sore throat and fever. Physicians often perform a physical exam, explore other treatment options, and recovery for most cases like this is a matter of days to weeks. And I can go back and say, I might have the flu, or I might have allergic rhinitis. So that's actually reasonable. I don't know exactly what you put in about me. AUDIENCE: What is the less than 50? PETER SZOLOVITS: What is what? AUDIENCE: The less than 50. [INTERPOSING VOICES] AUDIENCE: Patients have to be the same demographics. PETER SZOLOVITS: Yeah. I don't know what the less than 50 is supposed to mean. AUDIENCE: It started with 200,000 or so. PETER SZOLOVITS: Oh, so this is based on a small number of patients. So what happens, of course, is as you slice and dice a population, it gets smaller and smaller. So that's what we're seeing. OK. Thank you. OK. So two more topics I'm going to rush through. One is that-- as I mentioned in one of the much earlier slides, every action has a cost. It at least takes time. And sometimes it induces potentially bad things to happen to a patient. And so people began studying a long time ago what it means to be rational under resource constraints rather than rational just in this Homo economicus model. And so Eric Horvitz, who's now a big cheese guy, he's head of Microsoft Research, but used to be just a lowly graduate student at Stanford when he started doing this work. He said, well, utility comes not only from what happens to the patient, but also from the reasoning process, from the computational process itself.
And so consider-- do you guys watch MacGyver? This is way out of date. So if MacGyver is defusing some bomb that's ticking down to zero and he runs out of time, then his utilities take a very sharp drop at that point. So that's what this work is really about, saying, well, what can we do when we don't have all the time in the world to do the computation, while still having to try to maximize utility to the patient? And Daniel Kahneman, who won the Nobel Prize in economics a few years ago for work related to this notion of bounded rationality-- the idea that the way we would like to be rational is not actually the way we behave-- wrote this popular book that I really like called Thinking, Fast and Slow. It says that if you're trying to figure out which house to buy, you have a lot of time to do it, so you can deliberate and list all the advantages and disadvantages and costs and so on of different houses and take your time making a decision. If you see a car barreling toward you as you are crossing in a crosswalk, you don't stop and say, well, let me figure out the pluses and minuses of moving to the left or moving to the right, because by the time you figure it out, you're dead. And so he claims that human beings have evolved in a way where we have a kind of instinctual very fast response, and that the deliberative process is only invoked relatively rarely. Now, he bemoans this fact, because he claims that people make too many decisions that they ought to be deliberative about based on these sort of gut instincts. For example, our current president. But never mind. So what Eric and his colleagues were doing was really trying to look at how this kind of meta-level reasoning-- about how much reasoning and what kind of reasoning is worth doing-- plays into the decision-making process. So the expected value of computation is a fundamental component of reflection about alternative inference strategies. So for example, I mentioned that QMR had these alternative questioning methods depending on the length of the differential that it was working on. So that's an example of a kind of meta-level reasoning that says that it may be more effective to do one kind of question-asking strategy than another. On the degree of refinement, people talk about things like anytime algorithms, where if you run out of time to think more deliberately, you can just take the best answer that's available to you now. And so taking the value of information, the value of computation, and the value of experimentation into account in doing this meta-level reasoning is important to come up with the most effective strategies. So he gives an example of a time-pressured decision problem where you have a patient, a 75-year-old woman in the ICU, and she develops sudden breathing difficulties. So what do you do? Well, it's a challenge, right? You could be very deliberative, but the problem is that she may die because she's not breathing well, or you could impulsively say, well, let's put her on a mechanical ventilator, because we know that that will prevent her from dying in the short term, but that may be the wrong decision, because that has bad side effects. She may get an infection, get pneumonia, and die that way. And you certainly don't want to subject her to that risk if she doesn't need to take that risk. So they designed an architecture that says, well, this is the decision that you're trying to make, which they're modeling by an influence diagram. So this is a Bayesian network with the addition of decision nodes and value nodes.
But you use Bayesian network techniques to calculate optimal decisions here. And then this is kind of the background knowledge of what we understand about the relationships among different things in the intensive care unit. And this is a representation of the meta-reasoning that says, which utility model should we use? Which reasoning technique should we use? And so on. And they built an architecture that integrates these various approaches. And then in my last 2 minutes, I just want to tell you about an interesting-- this is a modern view, not historical. So this was a paper presented at the last NeurIPS meeting, which said that the kinds of problems we've been talking about, like the acute renal failure problem or any of these others, can be reformulated as a reinforcement learning problem. So the idea is that if you treat all activities, including putting somebody on a ventilator or reaching a diagnostic conclusion or asking a question or any of the other things that we've contemplated, if you treat those all in a uniform way and say these are actions, we then model the universe as a Markov decision process, where every time that you take one of these actions, it changes the state of the patient, or the state of our knowledge about the patient. And then you do reinforcement learning to figure out what is the optimal policy to apply under all possible states in order to maximize the expected outcome. So that's exactly the approach that they're taking. The state space is the set of positive and negative findings. The action space is to ask about a finding or conclude a diagnosis. The reward is the correct or incorrect single diagnosis. So once you reach a diagnosis, the process stops, and you get your reward. It's finite horizon because they impose a limit on the number of questions. If you haven't reached a diagnosis by then, you lose. You get a minus-one reward. There is a discount factor so that the further away a reward is, the less value it has to you at any point, which encourages shorter question sequences. And they use a pretty standard Q-learning framework, or at least a modern Q-learning framework, using a double deep neural network strategy. And then there are two pieces of magic sauce that make this work better. And one of them is that they want to encourage asking questions that are likely to have positive answers rather than negative answers. And the reason is that in their world, there are hundreds and hundreds of questions. And of course, most patients don't have most of those findings. And so you don't want to ask a whole bunch of questions to which the answer is no, no, no, no, no, no, no, no, no, because that doesn't give you very much guidance. You want to ask questions where the answer is yes, because that helps you clue in on what's really going on. So they actually do this thing they call reward shaping, which basically adds some incremental reward for asking questions that will have a positive answer. And they have a nice proof that an optimal policy learned from that shaped reward function is also optimal for the reward function that does not include it. So that's kind of cool. And then the other thing they do is to try to identify a reduced space of findings by what they call feature rebuilding. And this is essentially a dimension reduction technique where, in this dual network architecture, they're co-training the policy model-- which is, of course, a neural network model, this being the 2010s.
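To make the formulation concrete, here is a toy, self-contained sketch of the kind of episodic environment being described: the state records which findings are unknown, present, or absent; the actions are either to ask about a finding or to conclude a diagnosis; a correct conclusion gets +1, an incorrect one -1, exhausting the question budget loses, and there is a small shaping bonus for questions that come back positive. This is an invented miniature consistent with the description above, not the actual REFUEL code; the diseases, findings, and probabilities are all made up, and the Q-learning agent, the discount factor, and the dual-network machinery would sit on top of it.

    import random

    # Invented toy problem: two diseases, three findings, with made-up
    # P(finding | disease) values.
    DISEASES = {"flu":     {"fever": 0.9, "cough": 0.8, "rash": 0.05},
                "measles": {"fever": 0.8, "cough": 0.3, "rash": 0.9}}
    FINDINGS = ["fever", "cough", "rash"]
    MAX_QUESTIONS = 3       # finite horizon: a budget on the number of questions
    SHAPING_BONUS = 0.1     # small incremental reward for a question answered "yes"

    class DiagnosisEnv:
        def reset(self):
            self.true_disease = random.choice(list(DISEASES))
            self.state = {f: 0 for f in FINDINGS}   # 0 = not asked, +1 = present, -1 = absent
            self.asked = 0
            return dict(self.state)

        def step(self, action):
            if action in FINDINGS:                   # "ask about a finding" action
                if self.asked >= MAX_QUESTIONS:      # question budget exhausted without a diagnosis
                    return dict(self.state), -1.0, True
                self.asked += 1
                present = random.random() < DISEASES[self.true_disease][action]
                self.state[action] = 1 if present else -1
                reward = SHAPING_BONUS if present else 0.0   # reward shaping for positive answers
                return dict(self.state), reward, False
            # Otherwise the action is "conclude this diagnosis", which ends the episode.
            reward = 1.0 if action == self.true_disease else -1.0
            return dict(self.state), reward, True

    # One hand-scripted episode; a learned policy would choose these actions instead.
    env = DiagnosisEnv()
    state = env.reset()
    state, r, done = env.step("fever")
    state, r, done = env.step("flu")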
And so they're generating a sequence, a deep, layered set of neural networks that generate an output, which is the m questions and the n conclusions that can be made. And I think there's a softmax over these to come up with the right policy for any particular situation. But at the same time, they co-train it in order to predict all of the manifestations from what they've observed before. So it's using-- it's learning a probabilistic model that says if you've answered the following questions in the following ways, here are the likely answers that you would give to the remaining manifestations. And the reason they can do that, of course, is because the manifestations really are not independent. They're very often co-varying. And so they learn that covariance, and therefore can predict which questions are going to get yes answers. And therefore, they can bias the learning toward doing that. So last slide. So this system is called REFUEL. It's been tested on a simulated data set of 650 diseases and 375 symptoms. And what they show is that the red line is their algorithm. The yellow line uses only the reward shaping. And the blue line is just a straight reinforcement learning approach. And you can see that they're doing much better after many fewer epochs of training. Now, take this with a grain of salt. This is all fake data. So they didn't have real data sets to test this on. They got statistics on what diseases are common and what symptoms are common in those diseases. And then they had a generative model that generated this fake data. And then they learned from that generative model. So of course it would be really important to redo the study with real data, but they've not done that. This was just published a few months ago. So that's sort of where we are at the moment in diagnosis and in differential diagnosis. And I wanted to start by introducing these ideas in a kind of historical framework. But it means that there are a tremendous number of papers, as you can imagine, that have been written since the 1980s and '90s work that I was showing you, and that are essentially elaborations on the same themes. And it's only in the past decade, with the advent of these neural network models, that people have changed strategy, so that instead of learning explicit probabilities, for example, like you do in a Bayesian network, you just say, well, this is simply a prediction task. And so we'll predict the way we predict everything else with neural network models, which is we build a CNN, or an RNN, or some combination of things, or some attention model, or something. And we throw that at it. And it typically does a slightly better job than any of the previous learning methods that we've used, but not always. OK. Peace.
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019
2_Overview_of_Clinical_Care.txt
PETER SZOLOVITS: As David said, I've been at this for a long time. I am not a medical doctor. But I've probably learned enough medicine to be able to play one on television over the years. And actually, that's relevant to today's lecture because today's lecture is really trying to set the scene for you to say, well, what are the kinds of problems that doctors are interested in, by looking at what it is that they do. OK. So that's our goal for today. So what we're going to do today is to talk about, in a very general way, what are the goals of health care. So how many of you are doctors? A couple. Great. OK. So fix me when I blow it. All right. Please feel free to interrupt. So that's going to be my first task. And then the second one is going to be what are the things that people actually do in order to try to achieve these goals. What is the practice of medicine like? What is the process that generates the data that we're going to be using to learn from? And then I can't resist talking a little bit about paying for health care at the end of the lecture, because a lot of the problems that come up and a lot of the interest that people show in doing the kind of analysis we're talking about is in fact motivated by money. They want to be able to save money, or spend less money, or something like that. So it's important to know that. OK. Medicine's been around for a long time. I think from probably the earliest of recorded history, there are discussions of people wondering what the cause of disease is, how to cure it. They came up with some fairly cockamamie theories because they didn't have a lot of scientific, modern approaches to it. But for example, this is a photo on the left of a shaman, I think from a Canadian Indian tribe, who's working on the boy lying there who's sick. And this shaman would use his knowledge and the experience that he'd had with other patients. They did know a lot about medicinal plants. They knew something about how to care for injuries and things like that. And so this was an effective form of health care. Not much record keeping. You don't see an electronic medical record system in that scene. On the right is a modern shaman practicing at one of the New York area hospitals. And so there are traditional cultures in which that sort of hands-on interaction with the healer is considered a very important part of the practice of medicine. And if you listen to futurist doctors talking about what medicine is likely to be like, they emphasize the fact that the role of a healer is not just to be a good automaton who figures out the right things, but also to persuade the patient to trust them and to do the things that he or she is suggesting. And there are a lot of placebo effects that we know from lots and lots of experiments that say that if you think you're going to get better, you are going to get better, on average. No guarantees. OK. Now modern medicine actually looks more like this. So this is an intensive care unit. And what you see is a patient who has got all kinds of electrical leads, and tubes, and things going into them and is surrounded by tons and tons of equipment which is monitoring him and perhaps keeping him alive. And so this is the high-tech medicine that we think of as the contemporary version of clinical care. Well, you might say, OK, what does it mean to be healthy? Right. If the goal of medicine is to make people or keep people healthy, what is health? So we turn to the World Health Organization.
And they have this lovely, very comprehensive notion of the definition of health. A state of complete physical, mental, and social well-being, not merely the absence of disease or infirmity. And then they categorize. This and they say, well, there's physical health, there's mental health, and there's social health. Social health is especially hard to measure. And I'll come back to that in a little while. So what's easiest to measure is how long people live. And so we've had data on survival analysis for a long time. And this is kind of shocking. If you look here, this lower curve is from around 1800. And what it shows you is that if you lived in India around 1800, your life expectancy was about 25 years. It's not very good. And if you lived in the richest countries, which in those days were typically European, in Belgium, your life expectancy was way up there at 40 years. Right. How many of you knew that? OK. Good. I didn't until I started looking at this. Now by 1950, which is not that long ago, it was like 69 years ago, in Norway your expectation was that you'd live into your early 70s, in the US that you would live into your late 60s on average. There was still a huge cliff where, if you lived in Bhutan, or Somalia, or something, you were still down around 30. Today, well in 2012, we're doing a lot better. And the thing that's striking is not only that the people who were doing well have gotten better but that a lot of the people who were doing very poorly have also gotten better. And so we're now up-- India, remember, was at 25 years of life expectancy and now it's in the high 60s. Now of course, these are averages. And so individuals vary a lot. But it's kind of interesting. So if you look at the numbers, you see that even on a shorter term, there are big changes. So for example, if you're a male living in Rwanda, which is among the worst places in terms of life expectancy, your life expectancy, if you were born today, is about 62 and 1/2 years. Right. If you were born in 2001, it was only 38 years. Now what was going on in Rwanda in 2001? Genocide. Yeah. They were killing each other. So that's sort of an exceptional situation. And that's gotten much better because they've stopped killing each other as genocidal attacks between the Hutu and Tutsi, I think, if I remember right. What about South Africa? What was going on in South Africa in 2001. I'm not sure I heard you. AUDIENCE: Failure to address the HIV crisis. PETER SZOLOVITS: Yes. The government at the time was claiming that HIV was not the cause of AIDS and therefore there was no point in controlling HIV infections because AIDS was caused by something else. Pretty crazy. So that was terrible. And they've gotten much better at it. So that's what you tend to see in a lot of African countries. And what you also see is that there has really been improvement everywhere. So in the US, we went from males expected to live 74 years to almost 78 years, so about a four year increase in life expectancy over a period of just 17 years. You women, by the way, are going to outlive us men, on average. There's some biological thing that seems to work that way. OK. So a typical way that people look at the survival of a population is to say, well, given a cohort of people born at some instant zero, what fraction of them are still alive after a certain period of time? And what you see is that, of course, 2031 we haven't reached yet. And so these are projections based on sort of theoretical extrapolations of actual data. But the older data is real. 
And what you see is that from 1851 to, you know, 2011 let's say, these numbers have gone way up. Now where have they gone up the most? Well, it used to be that childhood mortality was enormous. And so if you look at 1851, by age 10 about 30% of children had died. And so we've gotten a lot better at stopping that from happening. People also look at curves like this. So this is a distribution of death rates by age. This happens to be for Japan a few years ago. And again, females do better than males. The gold curve in the middle is the average of the two. And this is very typical of almost any country that you look at. The shape of this curve is pretty universal. So what does this say? It says that when you are born, there is a relatively high risk that you're going to die. So these are kids who have congenital abnormalities, have prenatal problems, have all kinds of difficulties. And they don't make it. So there's a fairly high death rate at birth. But once you make it to, I think, about two years old, the death rate is down to about one in 10,000 per year. Right. And then it stays quite low until you become a teenager. Now why might the death rate go up when you become a teenager? Well, suicide is the extreme example of that. But teenagers tend to be risk seeking rather than risk averse. You know, they start driving cars. They go skiing, skydiving, whatever it is that they're doing. And they start dying. But then if you make it to about 20, then there is a relatively flat region where by then you've developed enough sense to know what risks are worth taking and which ones aren't. And so it's relatively flat until about age 35 or 40 at which point it starts inexorably rising. And of course, as you get older and older, the probability that you're going to die in the next year becomes higher and higher. Right. This is uncomfortable for somebody with my amount of gray hair. Now there is a peculiarity in Japan which people puzzled over for a while. And that is that weird dip up at age 106. So first of all, that's a very small number of people that that represents. And it turned out that it was fraud. So there were families who failed to report the death of their ancient grandmother or great-grandmother because they wanted to continue collecting social security payments from the government. So that's an artifact. OK. Now this is a serious problem which we're going to return to in a more technical way later in the semester, which is this problem of disparities. So if you look at, for example, the difference between white and black female life expectancy in the United States, you see that everybody's life expectancy, as we've shown, is going up gradually in this case from 1975 to 2015. But there continues to be a gap between black and white females and between black and white males where black patients are more likely to die or less likely to survive longer given the disparities that exist socioeconomically. Maybe medically. We don't know exactly. And then if you look at Hispanics, however, they do pretty well. So in 2015, you're actually a little bit better off to be Hispanic, either male or female, than you are to be either white or black. But it's still worse to be black than to be white or Hispanic. Right. So these are the kinds of facts that drive some of the issues in what we do in medical care. Now what do people die of? Well, about a quarter of them die of heart disease. And a little over a fifth of them die of cancer. 
This is USA data from 2014 so it's not completely up to date but hasn't changed that much. And then there's a decreasing number of deaths from various other causes. So heart disease, cancer, or chronic lower respiratory disease. So this is like COPD that's caused by smoking, stuff like that. Accidents account for about 5% of deaths. Stroke, cerebral vascular events, Alzheimer's disease, diabetes, influenza, pneumonia, kidney disease, suicide, and then everything else is about another quarter. OK. Now take a look at those. What kind of diseases are these, the biggies? Well. They're chronic. Most of them are chronic. They're also not infectious. But except for influenza and pneumonia, nothing else there is infectious as far as we know. I mean, yeah. That asterisk you should put after every statement about current medicine. So that's interesting, because if you wrote the same table back in 1850, you would find that a lot of people were dying of infections. And they weren't typically living long enough to develop these lovely chronic diseases of the aging. So there have been big changes there. Now the other thing that's worth looking at is, in addition to the reasons that people die, they start getting sicker. And they are getting sort of less value out of life because they're developing all these other conditions. So if you look at people over 65, about half of them have some form of arthritis. And about 40%-- yeah. About 40% have hypertension. By the way, if you have trouble with any of the medicalese words, just interrupt. Hypertension is high blood pressure. Hearing impairment. Me, I'm wearing a hearing aid on one side because my ears are going bad. Heart disease, about a quarter. Orthostatic impairment-- that means people who wobble because their sense of balance is not so good-- 16%. Cataracts, chronic sinusitis, visual impairment, genitourinary problems, diabetes, et cetera. So these are all growing. Here's the list of the next 10. And varicose veins, hernia, hemorrhoids, psoriasis, hardening of the arteries, tinnitus, corns, calluses, constipation, hay fever, and cerebral vascular problems. All right. So people develop these by the time they're over 65. So one question we might ask is, well, what is the quality of life? So for example, a lot of the doctors that I started working with in the 1970s were great advocates of the application of decision analysis decision theory to making medical decisions. And so the problem is how do you evaluate an outcome? And they said, well, the way we evaluate an outcome is we look at your longevity. Obviously, the longer you live, the better typically. But we also look at your quality of life during that time. And we say that if you're confined to a wheelchair, let's say, your quality of life might not be as good as if you were able to run around, or if you're suffering from chronic pain, your quality of life might not be as good as if you were pain free. And so we came up with this model that says, well, the value of your life is essentially an integral from time zero to however long you're going to live of a function Q that says this is a measure of how good your quality of life is at that particular point in time and then some discount factor. Right. So what's the role of the discount factor? Well, it's just like in economics. If I offer you some horribly painful thing today versus 10 years from now, which are you going to choose? Most of us will say later. So that's what the discount factor does. Now who knows what the right discount rate is? 
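To make the decision-analytic model just described concrete, here is one standard way to write it down; the exponential form of the discount factor and the symbols below are illustrative assumptions rather than the exact formulation those colleagues used.

```latex
% Hypothetical formalization of the quality-adjusted value of a life:
%   T    = length of life
%   Q(t) = quality of life at time t, scaled so that 1 is perfect health
%   r    = discount rate (choosing r is exactly the open question raised above)
V \;=\; \int_{0}^{T} Q(t)\, e^{-r t}\, dt
```

With r = 0 this reduces to an undiscounted sum of quality-adjusted life years; a positive r down-weights outcomes far in the future, which is what makes the painful procedure ten years from now look more tolerable than the same procedure today.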
So in some of their work, they were doing crazy things like taking the financial discount factors about bank interest rates and things like that and applying them to these health things just because they didn't have any better numbers to do. That seems a little suspicious. But nevertheless, methodologically, it's a way of doing it. OK. So how do you measure the quality of life? Well, there is this notion of the activities of daily living. So can you bathe and shower? Can you brush your teeth and groom your hair? Can you get dressed? Can you go to the toilet, clean yourself up? Are you able to walk, get in and out of bed, get in and out of a chair? Can you feed yourself? And then there are a bunch of instrumental factors, things like are you able to clean your house and can you manage your money and so on. So these are typically for older people. But they are ways of trying to quantify that quality of life by saying how many of these things are you able to do. And there are a lot of federal regulations, for example, that take advantage of quantification like this. So if you're asking to be put on some sort of disability where the government sends you a check to keep you alive, you have to demonstrate that you are at a certain point on the scale that's derived from these capabilities in order to justifiably get that. So occupational therapy is one of the things that people try to teach the elderly. My parents died in their late 80s and my dad was 90. And I remember when he would have some medical problem, then he would be put into the clutches of an occupational therapist who would try to make sure that he was able to communicate, and get around, and not fall for tricks where people wanted to get him to send all his money to them, or meal preparation and stuff like that. So these are sort of the-- occupational is a funny term for it, because this typically applies to people who are retired so it's not really an occupation. But it's the sort of things that you need to do in order to be able to have a decent life. Well, now there's an interesting valuation question. So if you look at the top right model, we actually don't have very good data on anything other than mortality. So mortality is who dies. So the blue curve there is the curve that you've seen before, which is, of a cohort, how many people are still alive after a certain number of years? The red curve is a morbidity curve which says how many of those people are still alive and have no sort of problematic chronic diseases. So they're not in constant pain. And they're not immobilized. And they're not unable to do the things that I just listed on the previous slides. And then disability is when you really become incapable of taking care of yourself. And it typically involves moving into an assisted living facility, or a nursing home, or something like that, which is kind of a nightmare for many people and also very expensive for society. So as I said, the blue curve there is based on actual data for American females in 1980. The red curve is a hypothetical curve where I just assumed that the rate of developing a morbidity is about twice the rate of dying. And the green curve I assumed that the rate of developing a disability is-- I can't remember-- I think three times as high as the rate of dying, something like that. So that's why those curves are lower. And they look approximately right. But we don't have good data on those. Now the question that you have to ask is, how do you want to change this? 
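Here is a rough sketch of how one could generate curves like the hypothetical ones just described; the Gompertz-style mortality hazard and the multipliers for morbidity and disability are made-up illustrative numbers, not the actual 1980 data behind the slide, and competing-risk subtleties are ignored. The question of how we would want to change these curves continues below.

```python
import numpy as np

# Illustrative hazards only: a Gompertz-style death rate, with morbidity and
# disability onset rates assumed to be fixed multiples of it (echoing the
# "about twice" / "about three times" assumptions mentioned above).
ages = np.arange(0, 101)
h_death = 0.0001 * np.exp(0.09 * ages)       # assumed annual probability of dying
h_morbidity = 2.0 * h_death                  # assumed annual rate of developing a morbidity
h_disability = 3.0 * h_death                 # assumed annual rate of becoming disabled

def event_free(hazard):
    # Fraction of the starting cohort that has not yet experienced the event
    # by each age, treating the hazards as small per-year rates.
    return np.exp(-np.cumsum(hazard))

alive = event_free(h_death)                            # the "mortality" curve
morbidity_free = event_free(h_death + h_morbidity)     # alive and free of chronic morbidity
disability_free = event_free(h_death + h_disability)   # alive and free of disability

for age in (50, 70, 90):
    print(age, round(alive[age], 2), round(morbidity_free[age], 2),
          round(disability_free[age], 2))
```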
So for example, suppose that we kept the same situation. We reduced mortality to 20% of its actual rate. But we kept the disability and morbidity rates the same as they are on the top left. So what would that do? Well, what that would do is create a huge number of people who are disabled because they're going to live longer beyond the point where they're able to function fully. So this is-- yeah. AUDIENCE: Can I just ask, why does green not just mean healthy? To me, it just seems-- PETER SZOLOVITS: Green just means healthy. AUDIENCE: OK. It does. PETER SZOLOVITS: Yeah. So beyond green is what I meant is the morbidity curve. And beyond red is the disability curve. OK. I may have misspoken. So green is healthy. Red means suffering from some morbidity. And blue means disabled. So if we just extend life but we don't make it any better, then that's not a very attractive picture. So other possibilities are compression of morbidity. So for example, if we reduce the rate at which people get sick and die but we increase it-- we decrease it initially and then increase it so that, on average, people live about the same length of time that they now do, then we're going to have fewer people who are suffering from morbidity or who are disabled because you last doing well and then you die. So this is the wonderful one horse shay where everything falls apart at once, which, you know, frankly, as somebody who's a little closer to the end than you guys are, I wouldn't mind that kind of exit. Right. I don't want to be disabled for 20 years. I'd rather live a long time healthy and then drop dead at some point. OK. My dad used to say that he wanted to die by being hit by a meteor. Right. He wouldn't know it's coming. It's instant. No suffering, no pain. Perfect. He almost got it but not quite. OK. And then the final story is lifespan extension, which is where we simply lower the mortality rate and all the other rates in proportion. And what happens is that you start having healthy 107-year-olds and not so healthy 120-year-olds in the population in larger numbers than we do now. OK. Social quality of life. That's a tough one. All right. So here's a naive theory. We take the quality of life of every person on the planet and we sum them up. And we say, OK, that's the social quality of life. Is that a good idea? Probably not. Right. For one thing, it would say that we ought to have as much population explosion as possible, because then we'd have more people to integrate over, which doesn't seem sensible. Right. Now obviously if we start packing the world so that it's super crowded, then the quality of life will go down eventually enough that adding more people is probably not optimal. But nevertheless, that doesn't seem like a real satisfying solution. How about less? It was popular about 10 years ago for people to write these sort of speculative books about how would the world look if half the people died. Right. Other than the trauma of the half the people dying, you know, they were proposing that this would be some wonderful sylvan sort of ideal old fashioned kind of world. I didn't buy that. And of course, we don't know of a good way to get there, although it is true that in at least developed countries, birth rates keep falling to the point where populations are worried about underpopulation. The Japanese, for example, have very strict immigration policies so you can't become Japanese if you're not born Japanese. And Japanese people aren't having enough kids to replace themselves. 
And so the natural population of Japan is falling. Italy is in the same quandary except that Italy has all these immigrants coming in and trying to become Italians. And of course, this leads to big political fights about who gets to be an Italian and who doesn't. And then, of course, there's the question of money, which as I say we'll return to later. Now one other important thing to consider is that because of the increase in life expectancy, there has been a big change in timescale in the way people think about medical care. So it used to be long, long ago in the shaman era, you wouldn't go to a shaman to say keep me healthy. You would go to a shaman saying, you know, I broke my arm, or I have this pain in my leg, or fix me somehow. And so things were focused on the notion of cure. And that was applicable to acute illnesses. But as we've gotten better at treating acute illnesses-- which, by the way, didn't happen all that long ago. I mean antibiotics were only invented in the early 20th century. And that made a huge difference in stopping people from dying of infections. So then it became more a matter of managing long term chronic illnesses. And that's pretty much where we are now. The medical world at the moment, most of the action is in trying to understand things like diabetes, and heart disease, and cancer, and things like that that develop over a long time. And they don't kill you instantly like infectious diseases did. But they produce a real burden. And then, of course, the next step that everybody expects is, well, how do we prevent disease? So how can we change your exposures? How can we change your motivation? How can we change your diet? How can we change whatever it is that we need to change? How can we change your genes to prevent you from developing these diseases in the first place? And that's sort of the future. OK. So that's about what medicine tries to do. But how does it do it? And so we're going to talk a little bit about the traditional tasks that are attributed to health care practice. So traditionally, people talk about diagnosis, prognosis, and therapy. So diagnosis-- I go to my doctor. I say, doc, I've got this horrible headache. I've had it for two weeks. What's wrong with me? And his job-- in my case, it happens to be a "he." His job is to come up with an answer. What's wrong with me? Prognosis is he's supposed to predict what's going to happen to me, at least if he doesn't do anything. So is this headache going to go away or is it going to turn out to be a brain tumor that will kill me, or is it going to be some amoeba that's living in my brain and eat my neurons? All kinds of horrible things are possible. So it's the prospect of recovery as anticipated from the usual course of disease or peculiarities of the case. And then therapy, of course, is what do you do about it. And prognosis is definitely informed by diagnosis, because if you don't know what's wrong with me, then it's much harder to predict what's going to happen to me. And if you can't predict what's going to happen to me, then it's much harder to figure out what to do to prevent that from happening or to encourage that to happen, right? So this is kind of a serial process. And the way I look at it is that there's a kind of cyclic process of care. And the process starts with an initial presentation. So I show up at my doctor's office and I complain about something. 
And if you ever listen to a doctor interacting with a patient, the first time the patient comes in, the first question is always, what brought you here? Right? And that's called the presenting complaint. So if I say, you know, my ankle hurts like hell, that's very different than if I say, I stopped being able to hear in my right ear, or I have this horrible skin rash on my arm, or whatever. And that's going to take me in very different directions. So then what happens is that the doctor examines you and generates a bunch of data. So these are measurements. And of course, it used to be that those measurements were based on observation. So there are very famous doctors from 100 years ago who were spectacularly good at being able to look at a patient and figure out what's wrong with them by being very astute observers-- the Sherlock Holmes kind of subtle, oh, I see that you have a cut on the inside of your shoe, which means you must have been going through brambles, and you know, whatever. I'm making up a Sherlock Holmes story here. So that generates data. And then we interpret that data to generate some kind of information or interpreted data about the patient. And based on that, we determine a diagnosis. Now, do we determine a diagnosis? Maybe not. Maybe we guess a diagnosis. One of the things I learned early on working in this field is that doctors are actually quite willing to make guesses, because it's so useful to believe that you understand what's going on. If you say, well, there's some probability distribution over a vast number of possible things, that doesn't give you very good guidance on what to do next. Whereas if you can say, oh, I think this patient is developing type 2 diabetes, then you're locked into a set of questions and a set of approaches that you might try. Now, when we come back to looking at machine learning, machines don't have the same limitations as people. And so for a machine to integrate over a vast number of possibilities is not difficult. But for a human cognition to do that is very hard. And so this is actually an important characteristic of the way doctors think about diagnostic reasoning. So then, having made a diagnosis or made a guess, they plan some kind of therapy. They apply that therapy, and then they wait awhile and they see what happened. So if your diagnosis led you to a choice of therapy, you gave that therapy to the patient and the patient got better, then you say, well, it must have been the right diagnosis. If the patient didn't get better, then you say, well, how did what happened to the patient differ from what I expected to happen to the patient? And that drives your revision of this whole process. So we again examine what happened as a result of the therapy. We gather more data. We interpret it. We come up with the revised diagnostic hypothesis. We come up with a revised therapeutic plan, and we keep going around the cycle. Now, that cycle happens very quickly if you're a hospitalized patient, because you're there all the time. You're available. They're trying to do things to you constantly. And so this cycle happens on the order of hours or a day, whereas if you were an outpatient, then you're not dealing with some urgent problem. It may happen over a much slower period. It may be that your doctor says, well, we're going to adjust your drug dose and see if that helps bring down your cholesterol, or manage your pain, or whatever it is that he's trying to do. 
Or worse yet, we're going to try to convince you to eat more healthy food, and six months later we'll see if your hemoglobin A1C came down, that you're less close to being diabetic. So the time scale is very different. But that process of continually reinterpreting things is a really critical feature, I think, of all of medical care. And if you look back, Alan Turing actually talked, in the early 1950s, about health care as being one of the interesting application areas of artificial intelligence. Why? Well, because it was an important topic. And he had the vision that says, as we start getting more data about health care, we're going to be able to build the kinds of models that we're going to be talking about in this class. But a lot of the early work took a kind of one-shot approach. So they said, well, we're going to solve the diagnostic problem. So we're going to take a snapshot of a patient, all their data at a particular moment. We're going to feed it into an algorithm. It'll come up with a diagnosis. We're done. And that wasn't very useful, because it didn't obey the cyclic nature of the process of providing health care. And so this was a revolution that started, really, around the 1980s when people realized that you have to be in it for the long-run and not for sort of single interactions. OK, well, this is just some definitions of these care processes. So here I've listed some ideas that came from a 1976 paper by several of my colleagues, who said, well, here's a cognitive theory of diagnosis. From the initial complaints, guess a suitable hypothesis. Use the current active hypotheses to guide questioning-- so to order more tests, to ask questions of the patient. And it's the failure to satisfy expectations that's the strongest clue to how to develop a better hypothesis. And then the hypotheses could be in an activated, deactivated, confirmed, or rejected state. They actually built a computer program that implemented this theory of diagnostic reasoning. And these rules, essentially, about whether to activate, deactivate, confirm, or reject something could be based both on logical criteria and on a kind of very bad probabilistic model. So it was very bad, because what they really needed was Bayesian networks. And those were about a decade in the future at that point. So they and every other system built in the 1970s had really horrible probabilistic models, because we didn't understand how to do it correctly. Now, what's interesting is somebody noticed that if you strip away the medicine from this, this is kind of like the scientific method, right? If you're trying to understand something, you form a hypothesis. You perform an experiment. If the experiment is consistent with your expectations, then you go on and you've gotten a little bit more confident in your hypothesis. If your experiment is inconsistent with your expectations, then you have to change your theory, change your hypothesis. You go back and gather more data, and then keep doing this until you're satisfied that you've come up with an adequate theory. So this was a surprise to doctors, because they thought of themselves more as artists than as scientists. But in a way, they act like scientists, which is kind of cool. All right, this doesn't stop with caring for a single patient. So we have all these meta-level processes about the acquisition and the application of knowledge about education, quality control and process improvement, cost containment, and developing references. 
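Going back to the 1976 hypothesize-and-test theory sketched a moment ago, here is a toy version of that control loop in code. The findings, scoring rule, and rejection threshold are invented placeholders; the actual program combined categorical rules with a (crude) probabilistic score, as noted above.

```python
# A toy sketch of hypothesize-and-test diagnosis. The disease "knowledge base",
# scores, and thresholds are made up for illustration only.

KNOWLEDGE = {
    # hypothesis -> findings that support it (hypothetical)
    "migraine": {"headache", "light_sensitivity"},
    "tension_headache": {"headache", "neck_stiffness"},
    "brain_tumor": {"headache", "vomiting", "vision_loss"},
}

def score(hypothesis, present, absent):
    expected = KNOWLEDGE[hypothesis]
    # Reward expected findings that are present, penalize expected findings
    # that were checked and found absent ("failure to satisfy expectations").
    return len(expected & present) - len(expected & absent)

def diagnose(initial_complaints, answer_question):
    present, absent = set(initial_complaints), set()
    # Activate every hypothesis consistent with the presenting complaint.
    active = {h for h, findings in KNOWLEDGE.items() if findings & present}
    while active:
        best = max(active, key=lambda h: score(h, present, absent))
        unasked = KNOWLEDGE[best] - present - absent
        if not unasked:                  # all expectations for the leader checked
            return best                  # treat it as confirmed (toy rule)
        finding = unasked.pop()          # question guided by the top hypothesis
        (present if answer_question(finding) else absent).add(finding)
        # Reject hypotheses whose expectations have clearly failed.
        active = {h for h in active if score(h, present, absent) > -1}
    return None
```

A caller would supply answer_question, for example a function that asks the patient or looks up a test result; the behavior to notice is that the next question always probes the current leading hypothesis, and a hypothesis is dropped once its expectations fail badly enough.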
So this is a picture from David Margulies, who was chief information officer at Children's Hospital here. And the cycle that I described is the one right here. This is the care team taking care of a patient. But of course at some point, that patient is discharged. And then they're in community care and self-care, and then maybe they're in some sort of active health status management. And then if that goes badly, then there's some episode where they reconnect to the health care system. They get authorized to come back. They get scheduled for a visit, and they're back in the cycle. And so the process of care involves people going to the hospital, getting taken care of, they get better, they get discharged, they live their lives for a while. Maybe they get sick again. They come back. And so there's this larger cycle around that issue. And then around that are all kinds of things about health plan design and membership and what coverage you have and so on. And then I would add one more idea, which is that if you have a system like this, you actually want to study, at the next meta-level, that system, make observations about it, analyze it, model it, plan some improvements, and then intervene in the system and observe how it's working and try to make it better. So in terms of tasks that are important for us in this class, this class of tasks is central, because one of the things we're trying to do is to look at the way health care works and to figure out how to make it better by examining its operation. And that can be done at any of these three levels. It can be done in the more acute phase, where we're dealing with somebody who's in the middle of an illness. It can be done at the larger phase of somebody who's going through the cycle of being well for a while and then being sick, and then being well again and being sick again. And it can be in terms of the system itself, of how do you design a health care system that works better for the population? So this notion of a learning health care system is now a journal. So in 2017, Chuck Friedman at the University of Michigan started this new journal. And it's full of articles about this third level of external cycle. So how does the health care system learn? Well, I'll tell you an anecdote. In the mid-1980s, I was teaching an AI expert systems course. And I had just come back from a conference of medical informatics people, where they were talking about this great new idea called evidence-based medicine. And I remember mentioning this to a bunch of MIT engineering students. And one of them raised his hand and said, as opposed to what? Right? I mean, to an engineer, it's obvious that evidence is the basis on which you analyze things and make things better. But it wasn't obvious to doctors. And so this was almost a revolutionary change. And the idea that they fostered was the idea of the randomized controlled clinical trial. So I'm going to sketch what that looks like. Of course, there are many variations. But suppose that I'm one of the drug companies around Kendall Square, and I come up with a new drug and I want to prove that it's more effective for condition X than some existing drug B. So what do I do? I find some patients who are suffering from X and I try very hard to find patients who are not suffering from anything else. I want the purest case possible. I then go to my statisticians and I say, let's design an experiment where we're going to collect a standard set of data about all these patients. 
And then we're going to give some of Drug A and some of them Drug B, and we'll see which of them do better. And we'll pre-define what we mean by do better. So like, not dying is considered doing better, or not suffering some bad thing that people are suffering from is considered doing better. And then the statisticians also will tell me, given that you expect Drug A to be, let's say, 10% better than Drug B, how many patients do you have to enroll in this trial in order to be likely to get a statistically significant answer to that question? And then they do it. The statisticians analyze the data. Hopefully you've gotten p less than 0.05. You go to the Food and Drug Administration and say, give me permission to market this drug as the hottest new cure for something or other, and then you make billions of dollars. Right? This is the standard way that pharma works. Now, there are some problems. So most of the cases to which the results of a trial like this are applied wouldn't have qualified to be in the trial. For example, we talked about morbidities, about the chronic problems that people have. Well, if you're dealing with one disease, you want to make sure that those populations you're dealing with don't have any of these other diseases. But in the real world, people do. And so we've never actually measured what happens to those people if you give them this drug and they have these comorbidities. The other problem is that the drug company wants to start making these billions of dollars as soon as possible. And so they want the trial to be as short as possible. And they want it to be as small a sample as they need to get that 0.05 statistical significance. So these are all problematic, and they lead to real problems. So I didn't bring any examples, but there are plenty of stories where the FDA has approved some drug on this basis, and then later they discovered that although the drug works well in the short-term, it has horrible side effects in the long-term, or that it has interactions with other diseases where it doesn't work effectively for people except for these pure cases that it was tested on. So the other idea, the competing idea is to say, let's build this learning health care system in which progress in science, informatics-- whatever-- generates new knowledge as an ongoing natural byproduct of the care experience and we seamlessly refine and deliver best practices for continuous improvement in health and health care. Wonderful words from the Institute of Medicine, now called the National Academy of Medicine. But it's hard to do this. And the reason it's hard to do this is mainly for a very profound underlying reason, which is that people are not treated by experimental protocols. So it's very important in that sketch that I gave you of the Drug A versus Drug B that there is a randomization step where I flip a coin to decide which drug any particular individual is going to get. If I allow that decision to be biased by my expectations or by something else I know about the patient, then I'm no longer doing a fair trial. And of course, when I collect data about how actual patients are being treated, they're being treated according to what their doctors think is best for them, and so there is no randomization. I mean, if I went to Mass General and said, could you guys please treat everybody randomly so that we collect really good data, they would throw me out properly. OK, so we also need a whole lot of technical infrastructure. 
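Before moving on to the infrastructure point, here is a minimal sketch of the kind of sample-size calculation the statisticians are being asked for in that story, using the standard normal-approximation formula for comparing two proportions. The 60% versus 70% response rates and the 80% power target are made-up inputs, not numbers from the lecture.

```python
from scipy.stats import norm

def n_per_arm(p_control, p_treatment, alpha=0.05, power=0.8):
    """Approximate sample size per arm for a two-sided test of two proportions."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p_control + p_treatment) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p_control * (1 - p_control)
                          + p_treatment * (1 - p_treatment)) ** 0.5) ** 2
    return numerator / (p_control - p_treatment) ** 2

# Hypothetical example: Drug B helps 60% of patients, Drug A is expected to be
# 10 percentage points better; how many patients per arm for 80% power?
print(round(n_per_arm(0.60, 0.70)))   # roughly 356 per arm
```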
We need to capture all kinds of novel data sources, which we'll talk about in the next lecture. And then we need a technical infrastructure for truly big data. So just for example, Dana-Farber started doing this about five years ago-- it's a cancer hospital. And so for every solid tumor, they would take samples of the tumor and genotype it-- multiple samples, because tumors are not uniform. So just storing that stuff is a technical challenge, and being able to come up with it. You've got three gigabases, so a bit over a gigabyte of data-- from each sample, from each tumor, times all the people who come in and have this test. So you buy some big disk drives or you farm it out to Google or something. But then you need to organize it in some way so that it's usefully easy to find that data. So today's technique, today's prejudice is what I call the meat grinder story, which is you take medical records, genetic data, environmental data, data from wearables, you put them into an old-fashioned meat grinder, and out come bits, which you store on a disk. And then you have all the data from which you can build models. And that's what we do. You're going to see a lot of that in this course. OK, the other thing that medicine tries to do is not to cure people but to keep them healthy. And this has been pretty much the domain of the public health infrastructure. So if you go across the river to the Harvard Medical area, there are a couple of big buildings, which is the Harvard School of Public Health, and this is what they're all about. So they do things like tracking disease prevalence and tracking infections, and worrying about quarantining people. They also do a lot of the kind of work we're going to talk about in this class, which is modeling in order to try to understand what's going on in an individual's health, in the health of a population, in the operations of a health care system. So they're very much into this. Now historically, I looked back and it turns out there's something called the London Bills of Mortality in the 17th century, started by a gentleman named John Graunt. And he was interested just in figuring out, how long do people live? And so he came up with these bills of mortality, where he went around to different parts of London, talked to undertakers and hospitals and whatever health care providers existed at the time, and collected data on what people died of and how many people were living in that area. And so for example, he estimated that the mortality before age six in the 17th century-- this was a long time ago, 1600s-- was about 36%. So if you were a kid, your chances of making it to age six were only about 64%, so less than 2/3. Kind of shocking. In the 18th century, people you've never heard of-- and Linnaeus, who you probably have heard of, because he was one of the early taxonomists for biological and animal species and so on-- made the first attempts at systematic classification. In the mid-1800s, there was the First International Statistical Congress. And a gentleman named William Farr came up with an interesting categorization that said, well, if we're going to taxonomize the diseases, we should separate epidemic diseases from constitutional diseases from local diseases-- where by "local" he means affecting a particular part of the body-- from developmental diseases. So this is things like stunted growth or failure of mental development or speech development, and then diseases that are the direct result of violence.
So this is things like broken bones and bar fight results and stuff like that. So that was the first classification of disease in about 1853. Note, by the way, that this is before Louis Pasteur and his theory of the germ cause of disease. And so this was a pretty early attempt, and obviously could have benefited from what Pasteur later discovered. So by the 1890s, which is post-Pasteur, they came up with a classification that was a hierarchic classification, 44 top-level hierarchies broken up into 99 lower-level categories and 161 particular titles. And they adopted this as a way of getting, typically, mortality data of what was it that people were dying of. And by the 1920s, you've heard of ICD-- ICD-9, ICD-10. So this is currently used as a way of classifying diseases and disorders. The International List of the Causes of Death was the first ICD back in, I think, the 1920s. And then it kept developing through multiple versions. In 1975, ICD-9 was adopted. In 2015, ICD-10. And these are under the control of the World Health Organization, which is now a UN agency, although I think it predates the UN actually. And then we have ICD-9 CM and ICD-10 CM, are US Clinical Modifications that are an extension of the ICD-9 and 10 coding. And they are primarily used for billing. But they're also used for epidemiological research. And if you look at the Centers for Disease Control, CDC, they collect death certificates like this from all over the country. And so this is a person who died of a cerebral hemorrhage which was due to nephritis, which was due to cirrhosis of the liver. And so you can use this kind of data to say, well, here's the immediate cause of death, here's sort of the proximate cause of death, and here's the underlying cause of death. And so this is the sort of statistical data that we now have available. Now, do any of you watch the PBS show Victoria? Nobody? You're not television watchers. All right, that's pretty cool. I was stunned, because as I was preparing this lecture and I had the next slide, it turns out this plays a role in one of the episodes that was broadcast about a week and a half ago. So in the 1850s, there was a big outbreak of cholera in London. And John Snow was a doctor who did this amazing epidemiological study to try to figure out what was causing cholera. The accepted opinion was that cholera was caused by miasma. What's a miasma? STUDENT: Bad air. PETER SZOLOVITS: Bad air, OK? So it's bad air. Somehow, bad air was causing people to get sick and die. And several hundred people died. Whereas Snow started plotting on a map of the area in London where these were concentrated where everybody lived. And what he discovered, interestingly enough, is that right in the middle of Broad Street, which is pretty much at the epicenter of all these people dying, was a water pump that everybody in the neighborhood used. And that water pump, its supply had become infected by cholera. And so people were pumping water, taking it home, drinking it, and then dying, or at least getting very sick. And he looked at this and he said, well, if we turn off that pump, the epidemic will stop. And he actually went to the queen, Queen Victoria-- hence the tie-in to the television show-- and convinced her that this was worth trying, because they didn't have any better ideas. And they took the pump handle off the pump, and sure enough the cholera epidemic abated. Now, of course the underlying problem was sanitation. And they didn't fix that. That took longer. 
But what's interesting is that here is a 2003 study of the spread of West Nile virus. So this is mosquitoes that are biting people and infecting them with this nasty disease. And they actually used very similar techniques to figure out that maybe this was coming in on airplanes through JFK. So mosquitoes were hitching a ride on an airplane and coming into the US. We need to build a border wall against mosquitoes. There is also a very controversial practice that used to be used a lot by public health officials, which was to quarantine people. And so there are lots of examples. Anybody's relatives come through Ellis Island? Must be a few. OK, so they were subject to quarantine. If you were sick when you arrived at Ellis Island and they didn't know exactly how sick you were, they would put you in a building and wait a month and see if you got better or if you got worse, and then decide whether to send you back or to let you in. So that was a pretty common practice. There's a famous story of Typhoid Mary, who was a carrier of typhoid fever but was not herself affected. Unfortunately, she was a cook. And wherever she was employed, people got really sick. And eventually, the New York Department of Health forcibly essentially jailed her in some sanatorium in order to keep her from continuing to infect people-- it was a very controversial case, as you might imagine. You don't have to go that far back. Here's a 1987 article from UPI in The Chicago Tribune. So Jesse Helms, who was a senator, was calling for everybody who has AIDS to be quarantined. So fortunately, that didn't happen. But the idea is still around, to say, well, we're going to stop this infection by quarantining people. And then here's a recent report about the Ebola response in Africa over the past few years, when Ebola was ravaging parts of that continent. And their conclusions are that it's controversial, a debated issue, significant risks related to human rights. Quarantine should be used as a last resort. Quarantines in urban areas are really hard. Mobile populations make it hard. And this is the most technical conclusion, that if you're going to quarantine a bunch of people, you have a huge waste disposal problem on your hands. Because if you have people who might have Ebola, you can't just take their garbage and throw it out somewhere. You have to dispose of it properly. OK, so the last thing I wanted to talk about-- hopefully mercifully short-- will be paying for health care. So I remember reading about 20 years ago that if you bought a Chevy from General Motors, they spent more money on health insurance and health care for their employees than they did on the steel that went into your car. That's pretty amazing. So why is this? Well, essentially, there's an insatiable demand for health care, right? Nobody wants to die, except for the suicidal people. And so if I'm sick, I want the best care possible, and I want as much of it as possible. Because you know, what is more important in life than continuing to live? So we also have gotten better at making drugs and better tests, and so on. I remember when MRI machines became popular about 30 years ago. The state of Massachusetts, for example, had a commission that you had to convince in order to be allowed to buy an MRI machine for your hospital, because MRI machines were very expensive and MRI scans were, at that time, hugely expensive. And so they wanted to contain the cost of health care by limiting the number of such machines. Well, eventually the costs came down. And so we're doing better.
But if you read the newspapers, you see that drug therapy is very expensive. We have these wonder drugs for rare diseases or cancers that cost $1 million a year to pay for your dosage. And so there is a high human motivation to do this, and not much pushback-- except from insurance companies. But they just pass the cost on in insurance contracts. There's also waste. So there are lots of stories about how half of health care expenses are spent in the last year of somebody's life. Although, Ingelfinger, who was the editor of The New England Journal, gave a talk here about 25 years ago. And he said, you know, when I was a practicing doctor, no patient ever came into my office saying, doc, I'm in the last year of my life. And so that was a difficult criterion to try to operationalize. There are marginally useful procedures. The IOM estimated that there are sort of 40,000 to 100,000 quote unquote unnecessary deaths per year-- in other words, deaths that could be avoided by just being more careful and a little bit smarter. Well, so the result of this is that if you look at health care spending as a percentage of gross domestic product from 1970 to I think 2017, I believe, on this graph, what you see is one real outlier. And that's the United States on top. So I've selected a few of these just to look at. There's the US on top. France, Germany, a lot of the European countries are roughly at that highest-of-the-crowd level. Canada is down there. The UK is a little lower. Spain is a little lower. Israel is a little lower. Turkey is the lowest of the OECD countries in terms of percentage of GDP that they spend on health care. So well, that's OK. But maybe we're getting more for our money than these other countries. And so there are a lot of analyses that look at stuff like this. They say, well, if we spend so much money per patient per year, what do we get in terms of the simplest thing to measure, which is life expectancy? How long do people live? And what we discover is that in the United States, we're spending $9,000 a year on patients and we're getting a life span of somewhere in the high 70s. So this is 2015 data. Whereas in Switzerland, they're spending about $6,000-- so about 2/3-- and they're getting 83 years or something of life for the same price, and so on for all these different countries. By the way, this is all from Gapminder, which is a wonderful data visualization tool. And I don't have time to show you, but you can click on individual lines there and slide the slider for what year you're talking about, and the data moves. And it's beautiful, and it's a wonderful way of understanding it. So there's an important thing to remember, which was taught to me by my friend Chris Dede at the Harvard Education School. And that is that it's not even good enough to stand still. So he suggested the following scenario. If you look at the growth of productivity over this period of 10 years by industry, you discover that productivity went up by seven point something percent for durable goods and went down by something like 2% for mining. About 1% for construction. Information technologies grew by 5 and 1/2% or something. So if you ask the question, what happens if the demand for these goods remains constant over a period of time? And what you discover is that because the more productive things become cheaper, they wind up occupying a smaller fraction of the total amount of money that's being spent. So your computer, your laptop, is a lot cheaper today than it was 30 years ago.
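Here is a quick numerical version of the argument being built in this paragraph and completed in the next one: hold real demand for each sector's output fixed, let productivity grow at different rates, and watch the slow-improving sectors take over a growing share of total spending. The starting shares and growth rates below are rough placeholders, not the actual figures from the slide.

```python
# Toy Baumol-style calculation: constant real demand, differing productivity growth.
# Unit cost of a sector's output shrinks in proportion to its productivity gains,
# so slow-improving sectors take up a growing share of total spending.
sectors = {
    # name: (initial share of spending, assumed annual productivity growth)
    "durable goods": (0.25, 0.07),
    "information technology": (0.25, 0.055),
    "construction": (0.25, 0.01),
    "mining": (0.25, -0.02),   # negative growth: getting less productive
}

years = 30
costs = {name: share / ((1 + g) ** years) for name, (share, g) in sectors.items()}
total = sum(costs.values())
for name, cost in costs.items():
    print(f"{name:24s} share after {years} years: {cost / total:.1%}")
```

With these made-up numbers, mining's share of spending roughly doubles or triples over 30 years while the high-productivity sectors shrink to a few percent of the total, which is the shape of the conclusion drawn in the next paragraph.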
And so that means that the amount of spending that people do on things like information technology, at least per item, is much lower than it used to be. And if in the aggregate it's also lower than it used to be, that means that something else must be higher, right? Because it sums to 100%. And so what this shows is that if you spend 30 years at the same rates of productivity growth, mining grows from whatever fraction of the economy it was to something that's about three times as big a fraction. Right? And if productivity growth is better, then that sector of the economy shrinks. So I think something like this is happening in health care, which is that there's infinite demand. And health care is not terribly productive. We're not getting better as fast as electronics is getting better, for example. So people have tried doing various things. "Managed care" was the buzzword of the 1980s and '90s. And they said, well, what we're going to do is to prevent people from overusing medical services by requiring pre-admission review, continued stay review, second surgical opinion. We're going to have post-care management, where if you're released from the hospital and you're in that second of the cycles, people will call you at home and try to help you out and make sure that you're doing whatever is best to keep you out of the hospital. And we're going to try various experiments, like institutional arrangements, that say, well, if I as a doctor agree to refer all my patients to Mass General rather than to the Brigham or the BI, they'll pay me extra. And they'll get some kind of efficiencies from aggregation. And so maybe that's one way of controlling costs. So leakage is the idea of keeping people in-system. And capitation is an interesting idea, which says that instead of paying for what the hospital does for me or what the doctor does for me, we simply pay him a flat fee for the year to take care of me. And that takes away the incentive for him to do more and more and more to get paid more. But of course, it gives him an incentive to do less and less and less so that he doesn't have to spend as much money. And so it's sort of a knife-edge balance to figure out how to do this. But that's an important component. So if you look at the evaluation of managed care a long time ago, what they said was it helped reduce inpatient costs by increasing outpatient costs. So what it's done is it's pushed people from going to the hospital into going to their doctor's office. But it's been pretty much a wash in terms of overall spending. Doctors also hated managed care. I was sitting with one of my colleagues at a Boston hospital, and an insurance company clerk called him to dun him for having ordered a certain test on a heart patient. And so he was furious. And he turns to her and says, so which medical school do you have your diploma from? And of course, she doesn't have a medical degree. She's following some rules on a sheet about how to harass doctors not to order expensive tests. And so we have Edward Annis, the past president of the AMA, who says, well, in the glory days there was no bureaucratic regimentation, no forms, no blah, blah, blah. And all my patients were happy, and I was happy, and things were ideal. If you actually look back at those days, it wasn't as good as he cracks it up to be. And some of those issues of disparity that we were talking about were horrible. So for his patients who were rich and who could afford to see him, life was pretty good. But not so much for underserved populations.
So you've seen ObamaCare come and partly go, and continues to be controversial. But it's trying to foster better information technology as a basis for getting doctors to make better decisions. It's trying to foster these accountable care organizations, which is a version of capitation, to put the pressure on to reduce the amount of health services that people are asking for. There is a hospital readmission reduction program now that says if you are a Medicare patient, for example, and you're discharged from the hospital and you're readmitted within 30 days of your discharge, then they're going to dun you and not pay you for that readmission, or not pay you for part of that readmission. But if you look at the statistics, which I just did, that's the distribution of the payment adjustment factors. So it turns out the lowest number is 97%. So a 3% decrease in reimbursement is something that the CFO of your hospital would really care about. But it's not like a 25% reduction in reimbursements. So this has had a fairly minor effect. Let me just finish by saying that money determines much. From our point of view, one of the problems we face is that IT traditionally gets about 1% to 2% of spending in medical centers, whereas it gets about 6% or 7% in business overall and about 10% to 12% for banking. And a lot of these systems are managed by accountants, although that is slowly changing. So in the 1990s, HST with Harvard started a training program for medical doctors to become medical informaticians-- so practicing this sort of witchcraft. And our first two graduates, one of them is now CIO at the Beth Israel Deaconess. The second one is CIO at Children's Hospital. And so one of my big successes personally has been to displace some accountants by doctors who actually understand this technology to some extent. OK, I think that's all I'm going to say. There's a funny last slide here, which has a pointer that I would want you to remember. The slides will be up on our website. You can follow it. MIT has a program called GEMS. It's the General Education in Medical Sciences, intended as a minor program for people in the PhD program in some other field. And if you're serious about really concentrating on health care, developing at least the kind of understanding of the health care process that I've tried to give you today and that would allow you to play a doctor on television is really important. And there is a program that helps you achieve that, which I commend to people. OK, see you next week.
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019
19_Disease_Progression_Modeling_and_Subtyping_Part_2.txt
PROFESSOR: So I'm going to begin by trying to build some intuition for how one might be able to do staging from cross-sectional data, and we'll return to this question of combined staging and subtyping only much later. So imagine that we had data that lived in one dimension. Here, each data point is an individual, we observe their data at just one point in time, and suppose we knew exactly which biomarker to look at. Right? So I gave you an example of that here, where you might look at some antibody expression level, and that might be what I call biomarker A. So if you knew exactly what biomarker to look at, you might just put each person along this line and you might conjecture that maybe on one side of the line, this is the early disease, and that on the other side of the line, maybe that's the late disease. Why might that be a reasonable conjecture? What would be an alternative conjecture? Why don't you talk to your neighbors and see if you guys can come up with some alternative conjectures. Let's go. All right, that's enough. So hopefully simple questions, so I won't give you too much time. All right, so what would be another conjecture? So again, our goal is we have one observation per individual, each individual is in some unknown stage of the disease, we would like to be able to sort individuals and turn it into early and late stages of the disease. I gave you one conjecture of how to do that sorting; what would be another reasonable conjecture? Raise your hand. Yep? AUDIENCE: That there's the different-- that they have different types of the same diseases. They all have the same disease and it could-- just one of the subtypes might be sort of the-- PROFESSOR: Yeah. So you're going back to the example I gave here where you could conflate these things. I want to stick with a simpler story, let's suppose there's only one subtype of the disease. What would be another way to sort the patients given this data where the data is these points that you see here? Yeah? AUDIENCE: For any disease in the middle range, and then as you [INAUDIBLE] PROFESSOR: OK, so maybe early disease is right over here, and when things get bad, the patient-- this biomarker starts to become abnormal, and abnormality, for whatever reason, might be sort of to the right or to the left. Now I think that is a conjecture one could have. I would argue that that's perhaps not a very natural conjecture given what we know about common biomarkers that are measured from the human body and the way that they respond to disease progression. Unless you're in the situation of having multiple disease subtypes where, for example, going to the right might correspond to one disease subtype and going to the left might correspond to another disease subtype. What would be another conjecture? You guys are missing the easy one. Yeah, in the back. AUDIENCE: Well, it might just be one where the high values are [INAUDIBLE] stage and low values are later ones? PROFESSOR: Exactly. So this might be early disease and that might be late disease. AUDIENCE: It says vice versa on this slide. PROFESSOR: Oh, does it really? Oh shoot. [LAUGHTER] AUDIENCE: [INAUDIBLE] PROFESSOR: Right, right, OK, OK. Thank you. Next time I'll take out that lower vice versa. [LAUGHTER] That's why you guys aren't saying that. OK. OK, so this is good. Now I think we're all on the same page, and we had some idea of what are some of the assumptions that one might need to make in order to actually do anything here.
Like for example, we are making some-- we'll probably have to make some assumption about continuity, that there might be some gradual progression of the biomarker relevance from early to late, and it might be getting larger, it might be getting smaller. If it's indeed the scenario that we talked about earlier where we said like early disease might be here and late disease might be going to either side, in that case, I think one could easily argue that with just information we have here, disease progression-- disease stage is unidentifiable, right? Because you wouldn't know where would it-- where should you-- where should that transition point be? So here, here, here, here, here, here. In fact, the same problem arises here. Like you don't know, is it early disease-- is it going this way or is it going that way? What would be one way to try to disentangle this just to try to get us all on the same page, right? So suppose it was only going this direction or going that direction, how could we figure out which is which? Yeah? AUDIENCE: Maybe we had data on low key and other data about how much time we had taken PROFESSOR: Yeah. No, that's great. So maybe we have data on let's say death information, or even just age. And if we started from a very, very rough assumption that disease stage let's say grows monotonically with age, then-- and if you had made an additional assumption that the disease stages are-- that the people who are coming in are uniformly drawn from across disease stages, with those two assumptions alone, then you could, for example, look at the average age of individuals over here and the average age of individuals over here, and you'd say, the one with the larger average age is the late disease one. Or you could look at time to death if you had for each-- for each data point you also knew how long until that individual died, you could look at average time to death for these individuals versus those individuals and try to tease it apart in that way. That's what you meant. OK, so I'm just trying to give you some intuition for how this might be possible. What about if your data looked like this? So now you have two biomarkers. So we've only gone up by one dimension only, and we want to figure out where's early, where's late? Already starts to become more challenging, right? So the intuition that I want you to have is that we're going to have to make some assumptions about disease progression, such as the ones we were just discussing, and we also have to get lucky in some way. So for example, one way of getting lucky would be to have a real lot of data. So if you had a ton, ton of data, and you made an additional assumption that your data lives in some low dimensional manifold where on one side of manifold is early disease and the other side of manifold is late disease, then you might be able to discover that manifold from this data, and you might conjecture that the manifold is something like that, that trajectory that I'm outlining there with my hand. But for you to be able to do that, of course, you need to have enough data, all right? So it's going to be now a trade-off between just having cross-sectional data, it might be OK so long as you have a real lot of that data so you can sort of fill in the spaces and really identify that manifold. A different approach might be, well maybe you don't have just pure cross-sectional data, maybe you have two or maybe three samples from each patient. And then you can color code this. 
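Before the color-coded multi-patient example that comes next, here is a minimal sketch of the simplest form of this idea in code: project purely cross-sectional biomarker data onto a one-dimensional axis (a first principal component standing in for a learned manifold) and use average age to decide which end is early and which is late. The synthetic data and the use of PCA here are illustrative assumptions, not the method from the case study discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cross-sectional data: one row per patient, two biomarkers that
# drift with an unobserved disease stage, plus noise; age loosely tracks stage.
n = 500
true_stage = rng.uniform(0, 1, size=n)            # unobserved
biomarkers = np.column_stack([
    2.0 * true_stage + rng.normal(0, 0.3, n),
    -1.0 * true_stage + rng.normal(0, 0.3, n),
])
age = 50 + 25 * true_stage + rng.normal(0, 5, n)  # observed

# First principal component as a crude 1-D "pseudotime" axis.
centered = biomarkers - biomarkers.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pseudotime = centered @ vt[0]

# Orient the axis: assume stage increases with average age, so flip the sign
# if the high-pseudotime half of the cohort is younger on average.
high = pseudotime > np.median(pseudotime)
if age[high].mean() < age[~high].mean():
    pseudotime = -pseudotime

print("correlation with true stage:", np.corrcoef(pseudotime, true_stage)[0, 1])
```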
So you might say, OK, green is patient 1-- or patient A, we'll call it, and this is the first time point from patient A, second time point from patient A, third and fourth time points from patient A. Red is patient B, and you have two time points for patient B, and blue here is patient C, and you have 1, 2, 3 time points from patient C. OK? Now again, it's not very dense data, we can't really draw curves out, but now we can start to get a sense of the ordering. And again, now we can-- even though we don't-- we're not in a dense setting like we were here, here, we'd still nonetheless be able to figure out that probably the manifold looks a little bit like this, right? And so again, I'm just trying to build intuition around when disease progression modeling for cross-sectional data might be possible, but this is a wide open field. And so today, I'll be telling you about a few algorithms that try to build on some of these intuitions for doing disease progression modeling, but they will break down very, very easily. They'll break down when these assumptions I gave you don't hold, they'll break down when your data is high dimensional, they'll break down when your data looks like this where you don't just have a single subtype but perhaps multiple subtypes. And so this is a really very active area of research, and it's an area that I think we can make a lot of progress on in the field in the next several years. So I'll begin with one case study coming from my own work where we developed an algorithm for learning from cross-sectional data, and we evaluated it in the context of chronic obstructive pulmonary disease, or COPD. COPD is a condition of the lungs typically caused by air pollution or smoking, and it has a reasonably good staging mechanism. One uses what's called a spirometry device in order to measure the lung function of an individual at any one point in time. So for example, you take this spirometry device, you stick it in your mouth, and you breathe in, and then you exhale, and one measure is how long it takes in order to exhale all your air, and that is going to be a measure of how good your lungs are. And so then one can take that measure of your function and one can stage how severe the person's COPD is, and that goes by what's called the GOLD criteria. So for example, in stage 1 of COPD, common treatments involve just vaccinations and using a short-acting bronchodilator only when needed. When the disease stage gets much more severe, like stage 4, then often treatment is recommended to be inhaled glucocorticosteroids if there are repeated exacerbations of the disease, long-term oxygen if respiratory failure occurs, and so on. And so this is a disease that's reasonably well-understood because there exists a good staging mechanism. And I would argue that when we want to understand how to do disease staging in a data-driven fashion, we should first start by working either with synthetic data or with a disease where we have some idea of what the actual true disease staging is. And that way, we can look to see what our algorithms would recover in those scenarios, and does it align with what we would expect either from the way the data was generated or from the existing medical literature. And that's why we chose COPD.
Because it is well-understood, and there's a wealth of literature on it, and because we have data on it which is much messier than the type of data that went into the original studies, and we could ask, could we come to the same conclusions as those original studies? So in this work, we're going to use data from the electronic medical record. We're only going to look at a subset of the EMR, in particular, diagnosis codes that are recorded for a patient at any point in time, and we're going to assume that we do not have access to spirometry data at all. So we don't have any obvious way of staging the patient's disease. The general approach is going to be to build a generative model for disease progression. At a very high level, this is a Markov model. It's a model that specifies the distribution of the patient's data, which is shown here in the bottom, as it evolves over time. According to a number of hidden variables that are shown in the top, these S variables that denote disease stages, and these X variables that denote comorbidities that the patient might have at that point in time, these X and S variables are always assumed to the unobserved. So if you were to clump them together into one variable, this would look exactly like a hidden Markov model. And moreover, we're not going to assume that we have a lot of longitudinal data for a patient. In particular, COPD evolves over a 10 to 20 years, and the data that we'll be learning from here has data only over one to three-year time range. The challenge will be, can we take data in this one to three-year time range and somehow stitch it together across large numbers of patients to get a picture of what the 20-year progression of the disease might look like? The way that we're going to do that is by learning the parameters of this probabilistic model. And then from the parameters, we're going to either infer the patient's actual disease stage and thus sort them, or actually simulate data from this model to see what a 20-year trajectory might look like. Is the goal clear? All right. So now what I'm going to do is I'm going to step into this model piece by piece to tell you what each of these components are, and I'll start out with a very topmost piece shown here by the red box. So this is the model of the patient's disease progression at any one point in time. So this variable, S1, for example, might denote the patient's disease stage on March 2011; S2 might denote the patient's disease stage April 2011; S capital T might denote the patient's disease stage June 2012. So we're going to have one random variable for each observation of the patient's data that we have. And notice that the observations of the patient's data might be at very irregular time intervals, and that's going to be OK with this approach, OK? So notice that there is a one-month gap between S1 and S2, but a four-month gap between St minus 1 and St, OK? So we're going to model the patient's disease stage at the point in time when we have an observation for the patient. S denotes a discrete disease stage in this model. So S might be a value from 1 up to 4, maybe 1 up to 10 where 1 is denoting a early disease stage and 4 or 10 might denote a much later disease stage. 
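For orientation, the joint distribution that the next few paragraphs build up piece by piece can be written as follows; this is my own notation, reconstructed from the description rather than copied from the paper:

\[
P(S_{1:T}, X_{1:T}, O_{1:T}) \;=\; P(S_1)\,P(X_1 \mid S_1)\,P(O_1 \mid X_1)\;\prod_{t=2}^{T} P(S_t \mid S_{t-1}, \delta_t)\,P(X_t \mid X_{t-1}, S_t)\,P(O_t \mid X_t),
\]

where S_t is the hidden disease stage at the t-th observation, X_t is the vector of hidden comorbidity indicators, O_t are the observed clinical findings such as diagnosis codes, and delta_t is the elapsed time between observations t-1 and t. Clumping S_t and X_t together into one hidden variable recovers exactly the shape of a standard hidden Markov model.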
If we have a sequence of observations per patient-- for example, we might have an observation in March and then in April, and we're going to denote the disease stage by S1 and S2-- what this model is going to talk about is the probability distribution of transitioning from whatever the disease stage at S1 is to whatever the disease stage at S2 is. Now, because the time intervals between observations are not homogeneous, we have to have a transition distribution that takes into consideration that time gap. And to do that, we use what's known as a continuous time Markov process. Formally, we say that the transition distribution-- so the probability of transitioning from state i at time t minus 1 to state j at time t, given as input the difference in time intervals-- the difference in time points delta-- so delta is the number of months between the two observations. So this conditional distribution is going to be given by the matrix exponential of this time interval times a matrix Q. And then here, the matrix Q gives us the parameters that we want to learn. So let me contrast this to things that you might already be used to. In a typical hidden Markov model, you might have St minus 1 goes to St, and you might imagine just parametrizing St given St minus 1 by a lookup table. So for example, if the number of states for each random variable is 3, then you would have a 3 by 3 table where for each state St minus 1, you have some probability of transitioning to the corresponding state St, so this might be something like 0.9, 0.9, 0.9, where, notice, I'm putting a very large value along the diagonal, because if, let's say, only a small period of time has passed-- so a priori, we might believe that patients stay in the same disease stage-- and then one might imagine that the probability of transitioning from state 1 at time t minus 1 to state 2 at time t might be something like 0.09, and the probability of skipping state 2, going directly to state 3 from state 1, might be something much smaller like 0.01, OK? And we might say something like that the probability-- we might imagine that the probability of going in a backwards direction, going from stage 2 at time t minus 1 to let's say stage 1 at time t, that might be 0, all right? So you might imagine that actually this is the model, and what that's saying is something like you never go in the backwards direction, and you're more likely to transition to the state immediately adjacent to the current stage and very unlikely to skip a stage. So this would be an example of how you would parametrize the transition distribution in a typical discrete time Markov model, and the story here is going to be different specifically because the time intervals between observations are not fixed. So intuitively, if a lot of time has passed between the two observations, then we want to allow for an accelerated process. We want to allow for the fact that you might want to skip many different stages to go to your next time step, to go to the stage of the next time step, because so much time has passed. And that intuitively is what this scaling of this matrix Q by delta corresponds to. So the number of parameters in this parameterization is actually identical to the number of parameters in this parametrization, right? So you have a matrix Q which is given to you in essence by the number of states squared-- really, the number of states-- there's an additional redundancy there because it has to sum up to 1, but that's irrelevant.
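To make the continuous-time transition model concrete, here is a minimal numerical sketch (not the paper's code; the rate matrix Q below is made up purely for illustration). In the standard continuous-time Markov chain parameterization, Q is a generator matrix whose rows sum to zero, and the transition distribution for an elapsed time delta is the matrix exponential of delta times Q:

# Minimal sketch: P(S_t = j | S_{t-1} = i, delta) = [expm(delta * Q)]_{ij}.
# The generator Q is illustrative only: rows sum to zero, off-diagonal entries
# are non-negative, and the zeros below the diagonal encode "never move to an
# earlier stage."
import numpy as np
from scipy.linalg import expm

Q = np.array([
    [-0.10,  0.09,  0.01,  0.00],
    [ 0.00, -0.08,  0.07,  0.01],
    [ 0.00,  0.00, -0.05,  0.05],
    [ 0.00,  0.00,  0.00,  0.00],   # stage 4 is absorbing in this toy example
])

def transition_matrix(Q, delta_months):
    """Distribution over the next stage when delta_months have elapsed."""
    return expm(delta_months * Q)

print(np.round(transition_matrix(Q, 1.0), 3))   # 1-month gap: mostly stay put
print(np.round(transition_matrix(Q, 12.0), 3))  # 12-month gap: skipping stages is likely

A one-month gap keeps most of the probability mass on the diagonal, while a twelve-month gap spreads it toward later stages, which is exactly the accelerated-process behavior described above.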
And so the same story here, but we're going to now parametrize the process by, in some sense, the infinitesimal-time probability of transitioning. So if you were to take the derivative of this transition distribution as the time interval shrinks, and then you were to integrate over the time interval that was observed and the probability of transitioning from any state to any other state with that infinitesimally small probability of transitioning, what you get out is exactly this form. And I'll leave-- this paper is in the optional readings for today's lecture, and you can read through it to get more intuition about the continuous time Markov process. Any questions so far? Yep? AUDIENCE: Those Q's are the same for both versions, or-- PROFESSOR: Yes. And in this model, Q is essentially the same for all patients. And you might imagine, if there were disease subtypes, which there aren't in this approach, that Q might be different for each subtype. For example, you might transition between stages much more quickly for some subtypes than for others. Other questions? Yep? AUDIENCE: So-- OK, so for Q you said it's like a pre-specified number-- beforehand you kind of specify the number of stages that you pick [INAUDIBLE] PROFESSOR: Correct. Yes. So you pre-specify the number of stages that you want to model, and there are many ways to try to choose that parameter. For example, you could look at held-out likelihood under this model, which is learned for different numbers of stages. You could use typical model selection techniques from machine learning as another approach, where you try to penalize complexity in some way. Or, what we found here, because of some of the other things that I'm about to tell you, it doesn't actually matter that much. So similarly to when one does hard [INAUDIBLE] clustering or even K-means clustering or even learning a probabilistic topic model, if you use a very small number of topics or number of clusters, you tend to learn very coarse-grained topics or clusters. If you use very many more-- if you use a much larger number of topics, you tend to learn much more fine-grained topics. The same story is going to happen here. If you use a small number of disease stages, you're going to learn very coarse-grained notions of disease stages; if you use more disease stages, you're going to learn a fine-grained notion; but the overall sorting of the patients is going to end up being very similar. But to make that statement, we're going to need to make some additional assumptions, which I'm going to show you in a few minutes. Any other questions? These are great questions. Yep? AUDIENCE: So do we know the staging of the disease, because I-- PROFESSOR: No, and that's critical here. So I'm assuming that these variables-- these S's are all hidden variables here. And the way that we're going to learn this model is by maximum likelihood estimation where we marginalize over the hidden variables, just like you would do in any EM type algorithm. Any other questions? All right, so what I've just shown you is the topmost part of the model, now I'm going to talk about a vertical slice. So I'm going to talk about one of these time points. So if you were to look at the cross-section of one of those time points, what you would get out is this model. These X's are also hidden variables, and we have pre-specified them to characterize different axes by which we want to understand the patient's disease progression.
So in Thursday's lecture, we characterized the patient's disease as subtype by just a single number, and similarly in this example is just by a single number, but we might want to understand what's really unique about each subtype. So for example-- sorry, what's really unique about each disease stage. So for example, how is the patient's endocrine function in that disease stage? How is the patient's psychiatric status in that disease stage? Has the patient developed lung cancer yet and that disease stage? And so on. And so we're going to ask that we want to be able to read out from this model according to these axes, and this will become very clear at the end of this section where I show you a simulation of what 20 years looks like for COPD according to these quantities. When does the patient typically develop diabetes, when does the patient typically become depressed, when does the patient typically develop cancer, and so on. So these are the quantities in which we want to be able to really talk about what happens to a patient at any one disease stage, but the challenge is, we never actually observe these quantities in the data that we have. Rather, all we observe are things like laboratory test results or diagnosis codes or procedures that have been formed and so on, which I'm going to call the clinical findings in the bottom. And as we've been discussing throughout this course, one could think about things as diagnosis codes as giving you information about the disease status of the patient, but they're not one and the same as the diagnosis, because there's so much noise and bias that goes into the assigning of diagnosis codes for patients. And so the way that we're going to model the raw data as a function of these hidden variables that we want to characterize is using what's known as a noisy-OR network. So we're going to suppose that there is some generative distribution where the observations you see-- for example, diagnosis codes are likely to be observed as a function of whether the patient has these phenotypes or comorbidities with some probability, and that probability can be specified by these edge weights. So for example, a diagnosis code for diabetes is very likely to be observed in the patient data if the patient truly has diabetes, but of course, it may not be recorded in the data for every single visit the patient has to a clinician, there might be some visits to clinicians that have nothing to do with their patients endocrine function and diabetes-- the diagnosis code might not be recorded for that visit. So it's going to be a noisy process, and that noise rate is going to be captured by that edge. So part of the learning algorithm is going to be to learn that transition distributions-- for example, that Q matrix I showed you in the earlier slide, but the other role-- learning algorithm is to learn all of the parameters of this noisy-OR distribution, namely these edge weights. So that's going to be discovered as part of the learning algorithm. And a key question that you have to ask me is, if I know I want to read out from the model according to these axes, but these axes are never-- I'm never assuming that they're explicitly observed in the data, how do I ground the learning algorithm to give meaning to these hidden variables? 
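Here is a minimal sketch of what that noisy-OR observation model looks like; the two comorbidities, three diagnosis codes, weights, and leak probabilities below are all made up for illustration and are not the learned parameters from the paper:

# Noisy-OR: P(O_j = 1 | X) = 1 - (1 - leak_j) * prod_i (1 - W[i, j]) ** X[i].
# X is a binary vector of hidden comorbidities; W[i, j] is the probability that
# comorbidity i, if present, "turns on" diagnosis code j; leak[j] lets a code
# appear even when none of the modeled comorbidities are present.
import numpy as np

def noisy_or_prob(X, W, leak):
    fail = (1.0 - leak) * np.prod((1.0 - W) ** X[:, None], axis=0)
    return 1.0 - fail

W = np.array([[0.6, 0.0, 0.1],     # hypothetical diabetes -> three codes
              [0.0, 0.5, 0.3]])    # hypothetical kidney disease -> three codes
leak = np.array([0.01, 0.01, 0.05])

X = np.array([1, 0])               # patient has the first comorbidity only
print(noisy_or_prob(X, W, leak))   # code 0 is likely; code 1 stays near its leak rate

Learning fits the edge weights W (and the leak terms) by maximum likelihood, since the X's are never observed.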
Because otherwise if we left them otherwise unconstrained and you did maximum likelihood estimation just like in any factor analysis-type model, you might discover some factors here, but they might not be the factors you care about, and if the learning problem was not identifiable, as is often the case in unsupervised learning, then you might not discover what you're interested in. So to ground the hidden variables, we introduced-- we used a technique that you already saw in an earlier lecture from lecture 8 called anchors. So a domain expert is going to specify for each one of the comorbidites one or more anchors, which are observations, which we are going to conjecture could only have arisen from the corresponding hidden variable. So notice here that this diagnosis code, which is for type 2 diabetes, has only an edge from X1. That is an assumption that we're making in the learning algorithm. We are actually explicitly zeroing out all of the other edges from all of the other comorbidities to a 1. We're not going to pre-specify what this edge rate is, we're going to allow for the fact that this might be noisy, it's not always observed even if the patient has diabetes, but we're going to say, this could not be explained by any of the other comorbidities. And so for each one of the comorbidites or phenotypes that we want to model, we're going to specify some small number of anchors which correspond to a type of sparsity assumption on that graph. And these are the anchors that we chose for asthma, we chose a diagnosis code corresponding to asthma; for lung cancer, we chose several diagnosis codes correspond to lung cancer; for obesity, we chose a diagnosis code corresponding to morbid obesity; and so on. And so these are ways that we're going to give meaning to the hidden variables, but as you'll see in just a few minutes, it is not going to pre-specify too much of the model. The model's still going to learn a whole bunch of other interesting things. By the way, the way that we actually came up with this set was by an iterative process. We specified some of the hidden variables to have anchors, but we also left some of them to be unanchored, meaning free variables. We did our learning algorithm, and just like you would do in a topic model, we discovered that there were some phenotypes that really seemed to be characterized by the patient's disease-- that seemed to characterize a patient's disease progression. Then in order to really dig deeper, working collaboratively with a domain expert, we specified anchors for those and we iterated, and in this way, we discovered the full set of interesting variables that we wanted to model. Yep? AUDIENCE: Did you measure how good an anchor these were? Like are some comorbidities better anchors than others? PROFESSOR: Great. You'll see-- I think we'll answer that question in just a few minutes when I show you what the graph looks like that's learned. Yep? AUDIENCE: Were all the other weights in that X to O network 0? They weren't part of it here. So it looks like a pretty sparse-- PROFESSOR: They're explicitly nonzero, actually, it's opposite. So for an anchor, we say that it only has a single parent. Everything that's not an anchor can have arbitrarily many parents. Is that clear? OK. Yeah? AUDIENCE: Do the anchors that you have in that linear table, you itereated yourself on that or did the doctors say that these are the [INAUDIBLE]? PROFESSOR: We started out with just a subset of these conditions. 
As things that we wanted to model-- things that we wanted to understand what happens along disease progression according to these axes, but just a subset of them originally. And then we included a few additional hidden variables that didn't have any anchors associated to them, and after doing unsupervised learning and just a preliminary development stage, they discovered some topics and we realized, oh shoot, we should have included those in there, and then we added them in with corresponding anchors. And so you could think about this as an exploratory data analysis pipeline. Yep? AUDIENCE: Is there a chance that these aren't anchors? PROFESSOR: Yes. So there's definitely the chance that these may not be anchors related to the question was asked a second ago. So for example, there might be some chance that the morbid obesity diagnosis code might be coded for a patient more often for a patient who has, let's say, asthma-- this is a bad example. And in which case, that would correspond to there truly existing an edge from asthma to this anchor, which would be a violation of anchor assumption. All right, so we chose these to make that unlikely, but it could happen. And it's not easily testable. So this is another example of an untestable assumption, just like we saw lots of other examples already in today's lecture and the causal inference lectures. If we had some ground truth data-- like if we had done chart review for some number of patients and we actually label these conditions, then we could test that anchor assumption. But here, we're assuming that we don't actually know the ground truth of these conditions. AUDIENCE: Is there a reason why you choose such high-level comorbidites? Like I imagine you could go more specific. Even, say, the diabetes, you could try to subtype the diabetes based on this other model, sort of use that as a single layer, but it seems to-- at least this model seems to choose [INAUDIBLE] high level. I was just curious of the reason. PROFESSOR: Yes. So that was a design choice that we made. There are many, many directions for follow-up work, one of which would be to use a hierarchical model here. But we hadn't gone that direction. Another obvious direction for follow-up work would be to do something within the subtyping with this staging by introducing another random variable, which is, let's say, the disease subtype, and making everything a function of that. OK, so I've talked about the vertical slice and I've talked about the topmost slice, but what I still need to tell you about is how these phenotypes relate to the observed disease stage. So for this, we use-- I don't remember the exact technical terminology-- a factored Markov model? Is that right, Pete? Factorized Markov model-- I mean, this is a term that existed in the graphical model's literature, but I don't remember right now. So what we're saying is that each of these Markov chains-- so each of these X1 up to Xt-- so this, will say, is the first one I call diabetes. This is the second one which I'll say is depression. We're going to assume that each one of these Markov chains is conditionally independent of each other given the disease stage. So it's the disease stage which ties everything together. So the graphical model looks like this, and so on, OK? So in particular, there are no edges between, let's say, the diabetes variable and the depression variable. All correlations between these conditions is assumed to be mediated by the disease stage variable. 
And that's a critical assumption that we had to make. Does anyone know why? What would go wrong if we didn't make that assumption? So for example, what would go wrong if we had something look like this, x1-- what was my notation? X1,1, X1,2, X1,3, and suppose we had edges between them, a complete graph, and we had, let's say, also the S variable with edges to everything? What would happen in that case where we're not assuming that the X's are conditionally independent given S? So I want you to think about this in terms of distributions. So remember, we're going to learn how to do disease progression through learning the parameters of this model. And so if we set this up and-- if we set up the learning problem in a way which is unidentifiable, then we're going to be screwed, we're not going to able to learn anything about disease progression. So what would happen in this situation? Someone who hasn't spoken today ideally. So any of you remember from, let's say, perhaps an earlier machine learning class what types of distribution's a complete graph-- a complete Bayesian network could represent? So the answer is all distributions, because it corresponds to any factorization of the joint distribution. And so if you allowed these x variables to be fully connected to each other-- so for example, saying that depression depends on diabetes in addition to the stage, then in fact, you don't even need this stage variable in here. The marginal-- you can fit any distribution on these X variables even without the S variable at all. And so the model could learn to simply ignore the S variable, which would be exactly not our goal, because our goal is to learn something about the disease stage, and in fact, we're going to be wanting to make assumptions on the progression of disease stage, which is going to help us learn. So by assuming conditional independence between these X variables, it's going to force all of the correlations to have to be mediated by that S variable, and it's going to remove some of that unidentifiability that would otherwise exist. It's this subtle but very important point. So the way that we're going to parametrize the conditional distribution-- so first of all, I'm going to assume these X's are all binary. So either the patient has diabetes or they don't have diabetes. I'm going to suppose that-- and this is, again, another assumption we're making, I'm going to suppose that once you already have a comorbidity, then you always have it. So for example, once this is 1, then all subsequent ones are also going to be 1. Hold the questions for just a second. I'm also going to make an assumption that later stages of the disease are more likely to develop the comorbidity. So in particular, one can formalize that mathematically as probability of X-- I'll just say Xi being 1 given S-- I'll say St equals little s, comma, Xt minus 1 equals 0, and suppose that this is larger than or equal to probability of Xt equals 1 given St equals S prime and Xt minus 1 equals 0 for all S prime less than S, OK? So I'm saying, as you get further along in the disease stage, you're more likely to observe one of these complications. And again, this is an assumption that we're putting into the learning algorithm, but what we found that these types of assumptions are really critical in order to learn disease progression models when you don't have a large amount of data. And note that this is just a linear inequality on the parameters of the model. 
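Written out, the two assumptions just described are, in my notation (a reconstruction of the spoken formulas rather than a quote from the paper):

\[
P(X_{i,t} = 1 \mid X_{i,t-1} = 1) = 1,
\]

that is, once comorbidity i appears it persists, and

\[
P(X_{i,t} = 1 \mid S_t = s,\; X_{i,t-1} = 0) \;\ge\; P(X_{i,t} = 1 \mid S_t = s',\; X_{i,t-1} = 0) \quad \text{for all } s' < s,
\]

that is, later disease stages are at least as likely to newly develop comorbidity i.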
And so one can use a convex optimization algorithm during learning-- during the maximum likelihood estimation step of this algorithm-- and we just put a linear inequality into the convex optimization problem to enforce this constraint. There are a couple of questions. AUDIENCE: Is there generally like a quick way to check whether a model is unidentifiable or-- PROFESSOR: So there are ways to try to detect whether a model is unidentifiable. It's beyond the scope of the class, but I'll just briefly mention one of the techniques. So one could-- so you can ask the identifiability question by looking at moments of the distribution. For example, you could talk about it as a function of all of the observed moments of the distribution that you get from the data. Now the observed data here are not the S's and X's, but rather the O's. So you look at the joint distribution on the O's, and then you can ask questions about-- so suppose I was to choose a random set of parameters in the model, is there any way to do a perturbation of the parameters in the model which leaves the observed marginal distribution on the O's identical? And often when you're in the setting of non-identifiability, you can take the gradient of a function and you could sort of find that there is some wiggle room, and then you show that, OK, this objective function is actually unidentifiable. Now that type of technique is widely used when studying what are known as method of moments algorithms, or estimation algorithms for learning latent variable models, but they would be much, much harder to apply in this type of setting because, first of all, these are much more complex models, and estimating the corresponding moments is going to be very hard because they're very high dimensional. And second, because-- I'm actually conflating two different things when I talk about identifiability. One statement is the infinite-data identifiability, and the second question is your ability to actually learn a good model from a small amount of data, which is a question of sample complexity. And these constraints that I'm putting in, even if they don't affect the actual identifiability of the model, they could be extremely important for improving the sample complexity of the learning algorithm. Is there another question? So we evaluated this using a data set of almost 4,000 patients where, again, each patient we observed for only a couple of years-- one to three years. And the observations that we observed were 264 diagnosis codes, the presence or absence of each of those diagnosis codes during any three-month interval. Overall, there were almost 200,000 observations of diagnosis codes in this data set. The learning algorithm that we used was expectation maximization. Remember, there are a number of hidden variables here, and so if you want to learn the parameters that maximize the likelihood of those observations O, then you have to marginalize over those hidden variables, and EM is one way to try to find a local optimum of that likelihood function, with the key caveat that one has to do approximate inference during the E step here, because this model is not tractable, there's no closed form-- for example, dynamic programming-- algorithm for doing posterior inference in this model given its complexity.
And so what we used was a Gibbs sampler to do approximate inference within that E step, and we used-- we did block sampling of the Markov chains where we combined a Gibbs sampler with a dynamic programming algorithm, which improved the mixing rate of the Markov chain for those of you who are familiar with those concepts. And in the end step of the learning algorithm when one has to learn the parameters of the distribution, the only complex part of this model is the continuous time Markov process, and there's actually been previous literature from the physics community which shows how you can really-- which gives you analytic closed-form solutions for that M step of that continuous time Markov process. Now if I were to do this again today, I would have done it a little bit differently. I would still think about modeling this problem in a very similar way, but I would do learning using a variational lower bound of the likelihood with a recognition network in order to very quickly get you a lower bound in the likelihood. And for those of you who are familiar with variational autoencoders, that's precisely the idea that is used there for learning variational autoencoders. So that's the way I would approach this if I was to do it again. There's just one or two other extensions I want to mention. The first one is something which we-- one, more customization we made for COPD, which is that we enforced monotonic stage progression. So we said that-- so here I talked about a type of monotonically in terms of the conditional distribution of X given S, but one could also put an assumption in the-- I already talked about that, but one could also put an assumption on P of S-- S of t given S of t minus 1, which is implicitly an assumption on Q, and I gave you a hint of how one might do that over here when I said that you might put 0's to the left-hand side, meaning you can never go to the left. And indeed, we did something like that here as well, which is another type of constraint. And finally, we regularize the learning problem by asking about that graph involving the conditions, the comorbidities, and the diagnosis codes be sparse, by putting a beta prior on those edge weights. So here's what one learned. So the first thing I'm going to do is I'm going to show you the-- we talked about how we specified anchors, but I told you that the anchors weren't the whole story. That we were able to infer much more interesting things about the hidden variables given all of the observations we have. So here I'm showing you several of the phenotypes that were learned by this unsupervised learning algorithm. First, the phenotype for kidney disease. In red here, I'm showing you the anchor variables that we chose for kidney disease, and what you'll notice are a couple of things. First, the weight, which you should think about as being proportional in some way to how often you would see that diagnosis code given that the patient had kidney disease, the weights are all far less than one, all right? So there is some noise in this process of when you observe a diagnosis code for a patient. The second thing you observe is that there are a number of other diagnosis codes that are observed to be-- which are explained by this kidney disease comorbidity, such as anemia, urinary tract infections, and so on, and that aligns well with what's known in the medical literature about kidney disease. Look at another example for lung cancer. 
In red here I'm showing you the anchors that we had pre-specified for these, which means that these diagnosis codes could only be explained by the lung cancer comorbidity, and these are the noise rates that are learned for them, and that's everything else. And here's one more example of lung infection where there was only a single anchor that we specified, for pneumonia, and you see all of the other things that are automatically associated to it by the unsupervised learning algorithm. Yep? AUDIENCE: So how do you [INAUDIBLE] for the comorbidity, right? PROFESSOR: Right. So that's what the unsupervised learning algorithm is doing. So these weights are learned, and I'm showing you something like a point estimate of the parameters that are learned by the learning algorithm. AUDIENCE: And so we-- PROFESSOR: Just like if you're learning a Markov model, you learn some transition and [INAUDIBLE], same thing here. All right. And this should look a lot like what you would see when you do topic modeling on a text corpus, right? You would discover a topic-- this is analogous to a topic. It's a discrete topic, meaning it either occurs or it doesn't occur for a patient. And you would discover some word-topic distribution. This is analogous to that word-topic distribution for a topic in a topic model. So one could then use the model to answer a couple of the original questions we set out to solve. The first one is, given a patient's data, which I'm illustrating here on the bottom-- I have artificially separated it out into three different comorbidities, and a star denotes an observation of that type. But this was artificially done by us, it was not given to the learning algorithm. One can infer when the patient started with each one of these comorbidities, and also-- so for the full three-year time range that we have data for the patient-- what stage of the disease was the patient in at any one point in time? So this model infers that the patient starts out in stage 1, and about half a year through the data collection process, transitioned into stage 2 of COPD. Another thing that one could do using this model is to simulate from the model and answer the question of what would, let's say, a 20-year trajectory of the disease look like? And here, I'm showing a 10-year trajectory. And again, only one to three years of data was used for any one patient during learning. So this is the first time we see those axes, those comorbidities, really start to show up as being important as the way of reading out from the model. Here, we've thrown away those O's, those diagnosis codes, altogether; we only care about what we conjecture is truly happening to the patient, those X variables, which are unobserved during training. So what we conjecture is that kidney disease is very uncommon in stage 1 of the disease, and increases slowly as you transition from stage 2, stage 3, to stage 4 of the disease, and then really bumps up towards stage 5 and stage 6 of the disease. So you should read this as saying that in stage 6 of the disease, over 60% of people have kidney disease. Now, the time intervals here-- how I've chosen where to put these triangles-- I've chosen them based on the average amount of time it takes to transition from one stage to the next stage according to the learned parameters of the model.
And so you see that stages 1, 2, and 3, and 4 take a long period of-- amount of time to transition between those four stages, and then there's a very small amount of time between transitioning from stage 5 to stage 6 on average. So that's for kidney disease. One could also read this out for other comorbidities. So in orange here-- in yellow here is diabetes, in black here is musculoskeletal conditions, and in red here is cardiovascular disease. And so one of the interesting inferences made by this learning algorithm is that even in stage 1 of COPD, very early in the trajectory, we are seeing patients with large amounts of cardiovascular disease. And again, this is something that one can look at the medical literature to see does it align with what we expect? And it does, so even in patients with mild to moderate COPD, the leading cause of morbidity is cardiovascular disease. Again, this is just a sanity check that what this model is learning for a common disease actually aligns with the medical knowledge. So that's all I want to say about this probabilistic model approach to disease progression modeling from cross-sectional data. I want you to hold your questions so I can get through the rest of the material and you can ask me after class. So next I want to talk about these pseudo-time methods, which are a very different approach for trying to align patients into early-to-late disease stage. These approaches were really popularized in the last five years due to the explosion in single-cell sequencing experiments in the biology community. Single-cell sequencing is a way to really understand not just what is the average gene expression, but on a cell-by-cell basis can we understand what is expressed in each cell. So at a very high level, the way this works is you take a solid tissue, you do a number of procedures in order to isolate out individual cells from that tissue, then you're going to extract the RNA from those individual cells, you go through another complex procedure which somehow barcodes each of the RNA from each individual cell, mixes them all together, does sequencing of it, and then deconvolves it so that you can see what was the original RNA expression for each of the individual cells. Now the goal of these pseudo-time algorithms is to take that type of data and then to attempt to align cells to some trajectory. So if you look at the very top of this figure part figure a that's the picture that you should have in your mind. In the real world, cells are evolving with time-- for example, B cells will have a well-characterized evolution between different cellular states, and what we'd like to be understand, given that you have cross-sectional data-- so you can imagine-- imagine you have a whole collection of cells, each one in a different part, a different stage of differentiation, could you somehow order them into where they were in different stages of differentiation? So that's the goal. We want to take this-- so there exists some true ordering that I'm showing from dark to light. The capture process is going to ignore what the ordering information was, because all we're doing is getting a collection of cells that are in different stages. And then we're going to use this pseudo-time method to try to re-sort them so that you could figure out, oh, these were the cells in the early stage and these are the cells in the late stage. And of course, there's an analogy here to the pictures I showed you in the earlier part of the lecture. 
Once you have this alignment of cells into stages, then you can answer some really exciting scientific questions. For example, you could ask, for a variety of different genes, which genes are expressed at which point in time. So you might see that gene A is very highly expressed very early in this cell's differentiation and is not expressed very much towards the end, and that might give you new biological insights. So these methods could immediately be applied, I believe, to disease progression modeling, where I want you to think about each cell as now a patient, and that patient has a number of observations. For this data, the observations are gene expression for that cell, but in our data, the observations might be symptoms that we observe for the patient, for example. And then the goal is, given those cross-sectional observations, to sort them. And once you have that sorting, then you could answer scientific questions, such as, as I mentioned, for a number of different genes, which genes are expressed when. So here, I'm showing you sort of the density of when this particular gene is expressed as a function of pseudo-time. Analogously for disease progression modeling, you should think of that as being a symptom. You could ask, OK, suppose there is some true progression of the disease, when do patients typically develop diabetes or cardiovascular symptoms? And so for cardiovascular, going back to the COPD example, you might imagine that there's a peak very early in the disease stage; for diabetes, it might be in a later disease stage. All right? So that-- is the analogy clear? So this community, which has been developing methods for studying single-cell gene expression data, has just exploded in the last 10 years. So I lost count of how many different methods there are, but if I had to guess, I'd say 50 to 200 different methods for this problem. There was a paper, which is one of the optional readings for today's lecture, that just came out earlier this month which looks at a comparison of these different trajectory inference methods, and this picture gives a really interesting illustration of what are some of the assumptions made by these algorithms. So for example, the first question, when you try to figure out which method of these tons of methods to use, is, do you expect multiple disconnected trajectories? What might be a reason why you would expect multiple disconnected trajectories for disease progression modeling? TAs should not answer. AUDIENCE: [INAUDIBLE] PROFESSOR: Different subtypes would be an example. So suppose the answer is no, as we've been assuming for most of this lecture, then you might ask, OK, there might only be a single trajectory, because we're only assuming a single disease subtype, but do we expect a particular topology? Now everything that we've been talking about up until now has been a linear topology, meaning there's a linear progression, there's a notion of early and late stage, but in fact, the linear trajectory may not be realistic. Maybe the trajectory looks a little bit more like this bifurcation. Maybe patients look the same very early in the disease stage, but then suddenly something might happen which causes some patients to go this way and some patients to go that way. Any idea what that might be in a clinical setting? AUDIENCE: A treatment? PROFESSOR: Treatments, that's great. All right?
So maybe these patients got treatment equals 0, and maybe these patients got treatment equals 1, and maybe for whatever reason we don't even have good data on what treatments patients got, so we don't actually observe the treatment, right? Then you might want to be able to discover that bifurcation directly from the data, and that might suggest going back to the original source of the data to ask, what differentiated these patients at this point in time? And you might discover, oh, there was something in the data that we didn't record, such as treatment, all right? So there are a variety of methods to try to infer these pseudo-times under a variety of different assumptions. What I'll do in the next few minutes is just give you an inkling of how two of the methods work. And I chose these to be representative examples. The first example is an approach based on building a minimum spanning tree. And this algorithm I'm going to describe goes by the name of Monocle. It was published in 2014 in this paper by Trapnell et al, but it builds very heavily on an earlier paper published in 2003 that I'm mostly citing here. So the way that this algorithm works is as follows. It starts with, as we've been assuming all along, cross-sectional data, which lives in some high-dimensional space. I'm drawing that in the top-left here. Each data point corresponds to some patient or some cell. The first step of the algorithm is to do dimensionality reduction. And there are many ways to do dimensionality reduction. You could do principal components analysis, or, for example, you could do independent components analysis. This paper uses independent component analysis. What ICA is going to do is attempt to find a number of different components that seem to be as independent from one another as possible. Then you're going to represent the data in this low-dimensional space, and in many of these papers, it's quite astonishing to me, they actually use dimension 2. So they'll go all the way down to two-dimensional space where you can actually plot all of the data. It's not at all obvious to me why you would want to do that, and for clinical data, I think that might be a very poor choice. Then what they do is they build a minimum spanning tree on all of the patients or cells. So the way that one does that is you create a graph by drawing an edge between every pair of nodes where the weight of the edge is the Euclidean distance between those two points. And then-- so for example, there is this edge from here to here, there's an edge from here to here, and so on. And then given that weighted graph, we're going to find the minimum spanning tree of that graph, and what I'm showing you here is the minimum spanning tree of the corresponding graph, OK? Next, what one will do is go look for the longest path in that tree. Remember, finding the longest path in a graph-- in an arbitrary graph-- is an NP-hard problem, closely related to the traveling salesman problem. How do we get around that here? Well, this is not an arbitrary graph, this is actually a tree. So there is an efficient algorithm for finding the longest path in a tree. I won't talk about that. So one finds that longest path in the tree, and then what one does is one says, OK, one side of the path corresponds to, let's say, early disease stage and the other side of the path corresponds to late disease stage, and it allows for the fact that there might be some bifurcation.
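Here is a minimal sketch of that recipe; it is not Monocle's actual implementation (for brevity it uses PCA where the paper uses ICA, and it orders points by their tree distance from one endpoint of the longest path), and all names and parameters are illustrative:

# Monocle-style pseudotime sketch: reduce dimension, build an MST over the
# Euclidean distance graph, find the tree's longest path, and order samples by
# their tree distance from one endpoint of that path.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path
from sklearn.decomposition import PCA

def pseudotime_order(X, n_components=2):
    Z = PCA(n_components=n_components).fit_transform(X)         # dimensionality reduction
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)  # pairwise Euclidean distances
    mst = minimum_spanning_tree(D)                               # sparse tree over all samples
    dist = shortest_path(mst, directed=False)                    # tree (geodesic) distances
    a = np.argmax(dist.max(axis=1))                              # one endpoint of the longest path
    b = np.argmax(dist[a])                                       # the other endpoint
    order = np.argsort(dist[a])                                  # pseudotime ordering from endpoint a
    return order, (a, b)

order, endpoints = pseudotime_order(np.random.RandomState(0).randn(50, 20))

Which endpoint corresponds to "early" and which to "late" cannot be decided from the data alone; as discussed earlier, that requires side information such as age, time to death, or known markers.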
So for example, you see here that there is a bifurcation over here. And as we discussed earlier, you have to have some way of differentiating what the beginning is and what the end should be, and that's where some side information might become useful. So here's an illustration of applying that method to some real data. So every point here is a cell after doing dimensionality reduction. The edges between those points correspond to the edges of the minimum spanning tree, and now what the authors have done is they've actually used some side information that they had in order to color each of the nodes based on what part of the cell differentiation process that cell is believed to be in. And what one discovers is that, in fact, this is very sensible-- that all of these points are in a much earlier stage than [INAUDIBLE] than these points, and this is a sensible bifurcation. Next, I want to talk about a slightly different approach to-- this is the whole story, by the way, right? It's conceptually a very, very simple approach. Next I want to talk about a different approach, which now tries to return back to the probabilistic approaches that we had earlier in the lecture. This new approach is going to be based on Gaussian processes. So Gaussian processes have come up a couple of times in lecture, but I've never actually formally defined them for you. So in order for what I'm going to say next to make sense, I'm going to formally define for you what a Gaussian process is. A Gaussian process mu, for a collection of time points T1 through T capital N, is defined by a joint distribution over the function values mu at those time points, which is a Gaussian distribution. So we're going to say that the vector of function values for these N different time points is just a Gaussian, which for the purpose of today's lecture I'm going to assume is zero mean, and where the covariance function is given to you by this capital K; it's a covariance function of the time points-- of the input points. And so this has to be a matrix of dimension capital N by capital N. And if you look at the (i1, i2) entry of that matrix, we're defining it to be given by the following kernel function. It looks at the exponential of the negative Euclidean distance squared between those two time points. Intuitively what this is saying is that if you have two time points that are very close to one another, then this kernel function is going to be very large. If you have two time points that are very far from one another, then this is very large-- it's a very large negative number, and so this is going to be very small. So the kernel function for two inputs that are very far from one another is very small; the kernel function for inputs that are very close to each other is large; and thus, what we're saying here is that there's going to be some correlation between nearby data points. And that's the way in which we're going to specify a distribution over functions. If one were to sample from this Gaussian with a covariance function specified in the way I did, what one gets out is something that looks like this. So these curves look really dense, and that's because I'm assuming that N is extremely large here. If N were small, let's say 3, there'd only be three time points here, OK?
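Here is a minimal sketch of sampling functions from this kind of Gaussian process prior, using the squared-exponential kernel just described; the lengthscale parameter (the little l that comes up next) and all other values are illustrative:

# Sampling from a zero-mean GP prior with kernel K(t, t') = exp(-(t - t')^2 / (2 l^2)).
import numpy as np

def rbf_kernel(t, lengthscale):
    d2 = (t[:, None] - t[None, :]) ** 2
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

t = np.linspace(0, 10, 200)
rng = np.random.RandomState(0)
for lengthscale in (0.3, 3.0):                              # small l: spiky, large l: smooth
    K = rbf_kernel(t, lengthscale) + 1e-8 * np.eye(len(t))  # jitter for numerical stability
    f = rng.multivariate_normal(np.zeros(len(t)), K)        # one sampled function on the grid t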
And so if you can make this distribution of functions be arbitrarily complex by playing with this little l-- so for example, if you made a little l be very small, then what you get are these really spiky functions that I'm showing you in a very light color. If you make a little l be very large, you get these very smooth functions, right? So this is a way to get a function-- this is a way to get a distribution over functions just by sampling from this Gaussian process. What this paper does from Campbell and Yau published two years ago in Computational Biology is they assume that the observations that you have are drawn from a Gaussian distribution whose mean is given to you by the Gaussian process. So if you think back to the story that we drew earlier, suppose that the data lived in just one dimension, and suppose we actually knew the sorting of patients. So we actually knew which patients are very early in time, which patients are very late in time. You might imagine that that single biomarker, biomarker A, you might imagine that the function which tells you what the biomarker's value is as a function of time might be something like this, right? Or maybe it's something like that, OK? So it might be a function that's increasing or a function that's decreasing. And this function is precisely what this mu, this Gaussian process is meant to model. The only difference is that now, one can model the Gaussian process-- instead of just being a single dimension, one could imagine having several different dimensions. So this P denotes the number of dimensions, right? Which corresponds to, in some sense, to the number of synthetic biomarkers that you might conjecture exist. Now here, we truly don't know the sorting of patients into early versus late stage. And so the time points T are themselves assumed to be latent variables that are drawn from a truncated normal distribution that looks like this. So you might make some assumption that the time intervals for when a patient comes in might be, maybe patients come in really typically very in the middle of the disease stage, or maybe you're assuming it's something flat, an so patients come in throughout the disease stage. But the time point itself is latent. So now the generative process for the data is as follows. You first sample a time point from this truncated normal distribution, then you look to see-- oh, and you sample from the very beginning your sample this curve mu, and then you look to see, what is the value of mu for the sample time point, and that gives you the expected value you should expect to see for that patient. And the one, then, jointly optimizes this to try to find the most-- the curve, the curve mu which has highest posterior probability, and that is how you read out from the model both what the latent progression looks like, and if you look at the posterior distribution over the T's that are inferred for each individual, you get the inferred location along the trajectory for each individual. And I'll stop there. I'll post the slides online for this last piece, but I'll let you read the paper on your own.
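To summarize the generative story just described, here is a minimal sketch; it only simulates data in the forward direction (the hard part, inferring the latent pseudotimes and curves from observed data by maximizing the posterior, is not shown), and all parameter values are illustrative rather than taken from the Campbell and Yau paper:

# Forward simulation only: latent pseudotimes from a truncated normal, one
# latent curve per biomarker from a GP prior, and noisy observations around it.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.RandomState(1)
n_patients, n_biomarkers = 100, 2

# 1. Latent pseudotime for each patient, truncated to the interval [0, 1].
t = truncnorm.rvs(a=-2, b=2, loc=0.5, scale=0.25, size=n_patients, random_state=rng)

# 2. One latent curve per biomarker, drawn from a GP prior with an RBF kernel.
def rbf_kernel(t, lengthscale=0.3):
    return np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * lengthscale ** 2))

K = rbf_kernel(t) + 1e-8 * np.eye(n_patients)
mu = rng.multivariate_normal(np.zeros(n_patients), K, size=n_biomarkers).T  # (patients, biomarkers)

# 3. Observations are the curve values plus Gaussian noise; only y is observed,
#    and t and mu are what inference has to recover.
y = mu + 0.1 * rng.randn(n_patients, n_biomarkers)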
6_Physiological_TimeSeries.txt
DAVID SONTAG: So I'll begin today's lecture by giving a brief recap of risk stratification. We didn't get to finish talking about survival modeling on Thursday, and so I'll go a little bit more into that, and I'll answer some of the questions that arose during our discussions and on Piazza since. And then the vast majority of today's lecture we'll be talking about a new topic-- in particular, physiological time series modeling. I'll give two examples of physiological time series modeling-- the first one coming from monitoring patients in intensive care units, and the second one asking a very different type of question-- that of diagnosing patients' heart conditions using EKGs. And both of these correspond to readings that you had for today's lecture, and we'll go into much more depth in those papers today, and I'll provide much more color around them. So just to briefly remind you where we were on Thursday, we talked about how one could formalize risk stratification not as a classification problem of what would happen, let's say, in some predefined time period, but rather as a regression question, or regression task: given what you know about a patient at time zero, predicting time to event-- so for example, here the event might be death, divorce, college graduation. And patient one-- that event happened at time step nine. Patient two, that event happened at time step 12. And for patient four, we don't know when that event happened, because it was censored. In particular, after time step seven, we no longer get to view any of the patients' data, and so we don't know when that red dot would be-- sometime in the future or never. So this is what we mean by right-censored data, which is precisely what survival modeling is aiming to solve. Are there questions about this setup first? AUDIENCE: You flipped the x on-- DAVID SONTAG: Yeah, I realized that. I flipped the x and the o in today's presentation, but that's not relevant. So f of t is the probability of death, or the event, occurring at time step t. And although in this slide I'm showing it as an unconditional model, in general, you should think about this as a conditional density. So you might be conditioning on some covariates or features that you have for that patient at baseline. And very important for survival modeling and for the next things I'll tell you is the survival function, denoted as capital S of t. And that's simply 1 minus the cumulative distribution function. So it's the probability that the event time, which is denoted here as capital T, is greater than some little t. So it's this function, which is simply given to you by the integral from little t to infinity of the density. So in pictures, this is the density. On the x-axis is time. The y-axis is the density function. And this black curve is what I'm denoting as f of t. And this white area is capital S of t, the survival probability, or survival function. Yes? AUDIENCE: So I just want to be clear. So if you were to integrate the entire curve, [INAUDIBLE] by infinity you're going to be [INAUDIBLE]. DAVID SONTAG: In the way that I described it here, yes, because we're talking about the time to event. But often we might be in scenarios where the event may never occur, and so you can formalize that in a couple of different ways. You could put a point mass at infinity, or you could simply say that the integral from 0 to infinity is some quantity less than 1.
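As a reminder of how these pieces fit together, S(t) = P(T > t) = 1 - F(t), which equals the integral of the density f from t to infinity. Here is a minimal sketch using two standard parametric families (the parameter values are illustrative); scipy exposes the survival function directly as .sf:

import numpy as np
from scipy.stats import expon, weibull_min

t = np.linspace(0.0, 10.0, 6)

lam = 0.3                                     # exponential rate parameter
S_exp = expon.sf(t, scale=1.0 / lam)          # equals exp(-lam * t)
assert np.allclose(S_exp, 1.0 - expon.cdf(t, scale=1.0 / lam))  # S(t) = 1 - F(t)

S_wei = weibull_min.sf(t, c=1.5, scale=4.0)   # Weibull with shape 1.5, scale 4
f_wei = weibull_min.pdf(t, c=1.5, scale=4.0)  # the corresponding density f(t)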
And in the readings that I'm referencing in the very bottom of those slides-- it shows you how you can very easily modify all of the frameworks I'm telling you about here to deal with that scenario where the event may never occur. But for the purposes of my presentation, you can assume that the event will always occur at some point. It's a very minor modification where you, in essence, divide the densities by a constant, which accounts for the fact that it wouldn't integrate to one otherwise. Now, a key question that has to be solved when trying to use a parametric approach to survival modeling is, what should that f of t look like? What should that density function look like? And what I'm showing you here is a table of some very commonly used density functions. What you see in these two columns-- in the right-hand column is the density function f of t itself. Lambda denotes some parameter of the model. t is the time. And in this second, middle column is the survival function. So this is obtained for these particular parametric forms by an analytical solution, solving that integral from t to infinity. This is the analytic solution for that. And so these go by common names of exponential, Weibull, log-normal, and so on. And critically, all of these have support only on the positive real numbers, because the event can never occur at negative time. Now, we live in a day and age where we no longer have to make standard parametric assumptions for densities. We could, for example, try to formalize the density as the output of some deep neural network, and there are two ways to try to do that. One way would be to say that we're going to model the distribution, f of t, as one of these things, where lambda, or whatever the parameters of the distribution are, is given by the output of, let's say, a deep neural network applied to the covariates x. So that would be one approach. A very different approach would be a non-parametric distribution where you say, OK, I'm going to define f of t extremely flexibly, not as one of these forms. And there one runs into a slightly different challenge, because as I'll show you in the next slide, to do maximum likelihood estimation of these distributions from censored data, one needs to make use of this survival function, S of t. And so if your f of t is complex, and you don't have a nice analytic solution for S of t, then you're going to have to somehow use a numerical approximation of S of t during learning. So it's definitely possible, but it's going to be a little bit more effort. So now here's where I'm going to get into maximum likelihood estimation of these distributions, and to define for you the likelihood function, I'm going to break it down into two different settings. The first setting is an observation which is uncensored, meaning we do observe when the event-- death, for example-- occurs. And in that case, the probability of the event is very simple. It's just the probability that the event occurs at little t-- that the random variable capital T equals little t-- which is just f of t. Done. However, what happens if, for this data point, you don't observe when the event occurred because of censoring?
Well, of course, you could just throw away that data point, not use it in your estimation, but that's precisely what we mentioned at the very beginning of last week's lecture-- was the goal of survival modeling to not do that, because if we did that, it would introduce bias into our estimation procedure. So we would like to be able to use that observation that this data point was censored, but the only information we can get from that observation is that capital T, the event time, must have occurred some time larger than the observed-- the time of censoring, which is little t here. So we don't know precisely when capital T was, but we know it's something larger than the observed centering time little of t. And that, remember, is precisely what the survival function is capturing. So for a censored observation, we're going to use capital S of t within the likelihood. So now we can then combine these two for censored and uncensored data, and what we get is the following likelihood objective. This is-- I'm showing you here the log likelihood objective. Recall from last week that little b of i simply denotes is this observation censored or not? So if bi is 1, it means the time that you're given is the time of the censoring event. And if bi is 0, it means the time you're given is the time that the event occurs. So here what we're going to do is now sum over all of the data points in your data set from little i equals 1 to little n of bi times log of probability under the censored model plus 1 minus bi times log of probability under the uncensored model. And so this bi is just going to switch on which of these two you're going to use for that given data point. So the learning objective for maximum likelihood estimation here is very similar to what you're used to in learning distributions with the big difference that, for censored data, we're going to use the survival function to estimate its probability. Are there any questions? And this, of course, could then be optimized via your favorite algorithm, whether it be stochastic gradient descent, or second order method, and so on. Yep? AUDIENCE: I have a question about the a kind of side project. You mentioned that we could use [INAUDIBLE].. DAVID SONTAG: Yes. AUDIENCE: And then combine it with the parametric approach. DAVID SONTAG: Yes. AUDIENCE: So is that true that we just still have the parametric assumption that we kind of map the input to the parameters? DAVID SONTAG: Exactly. That's exactly right. So consider the following picture where for-- this is time, t. And this is f of t. You can imagine for any one patient you might have a different function. You might-- but they might all be of the same parametric form. So they might be like that, or maybe they're shifted a little bit. So you think about each of these three things as being from the same parametric family of distributions, but with different means. And in this case, then the mean is given to as the output of the deep neural network. And so that would be the way it would be used, and then one could just back propagate in the usual way to do learning. Yep? AUDIENCE: Can you repeat what b sub i is? DAVID SONTAG: Excuse me? AUDIENCE: Could you repeat what b sub i is? DAVID SONTAG: b sub i is just an indicator whether the i-th data point was censored or not censored. Yes? AUDIENCE: So [INAUDIBLE] equal it's more a probability density function [INAUDIBLE]. DAVID SONTAG: Cumulative density function. AUDIENCE: Yeah, but [INAUDIBLE] probability. 
No, for the [INAUDIBLE] it's probability density function. DAVID SONTAG Yes, so just to-- AUDIENCE: [INAUDIBLE] DAVID SONTAG: Excuse me? AUDIENCE: Will that be any problem to combine those two types there? DAVID SONTAG: That's a very good question. So the observation was that you have two different types of probabilities used here. In this case, we're using something like the cumulative density, whereas here we're using the probability density function. The question was, are these two on different scales? Does it make sense to combine them in this type of linear fashion with the same weighting? And I think it does make sense. So think about a setting where you have a very small time range. You're not exactly sure when this event occurs. It's something in this time range. In the setting of the censored data, where that time range could potentially be very large, your model is providing-- your log probability is somehow going to be much more flat, because you're covering much more probability mass. And so that observation, I think, intuitively is likely to have a much-- a bit of a smaller effect on the overall learning algorithm. These observations-- you know precisely where they are, and so as you deviate from that, you incur the corresponding log loss penalty. But I do think that it makes sense to have them in the same scale. If anyone in the room has done work with [INAUDIBLE] modeling and has a different answer to that, I'd love to hear it. Not today, but maybe someone in the future will answer this question differently. I'm going to move on for now. So the remaining question that I want to talk about today is how one evaluates survival models. So we talked about binary classification a lot in the context of risk stratification in the beginning, and we talked about how area under the ROC curve is one measure of classification performance, but here we're doing more-- something more akin to regression, not classification. A standard measure that's used to measure performance is known as the C-statistic, or concordance index. Those are one in the same-- and is defined as follows. And it has a very intuitive definition. It sums over pairs of data points that can be compared to one another, and it says, OK, what is the likelihood of the event happening for an event that occurs before an event-- another event. And what you want is that the likelihood of the event that, on average, in essence, should occur later should be larger than the event that should occur earlier. I'm going to first illustrate it with this picture, and then I'll work through the math. So here's the picture, and then we'll talk about the math. So what I'm showing you here are every single observation in your data set, and they're sorted by either the censoring time or the event time. So by black, I'm illustrating uncensored data points. And by red, I'm denoting censored data points. Now, here we see that this data point-- the event happened before this data point's censoring event. Now, since this data point was censored, it means it's true event time you could think about as sometime into the far future. So what we would want is that the model gives that the probability that this event happens by this time should be larger than the probability that this event happens by this time, because this actually occurred first. And these two are comparable together-- to each other. 
On the other hand, it wouldn't make sense to compare y2 and y4, because both of these were censored data points, and we don't know precisely when they occurred. So for example, it could have very well happened that event 2 happened after event 4. So what I'm showing you here with each of these lines are the pairwise comparisons that are actually possible to make. You can make pairwise comparisons, of course, between any pair of events that actually did occur, and you can make pairwise comparisons between censored events and events that occurred before them. Now, if you look at this formula, it's looking at an indicator comparing survival functions between pairs of data points. And which pairs of data points? Precisely those pairs of data points whose comparisons I'm showing with these blue lines here. So we're going to sum over i such that bi is equal to 0, and remember that means it is an uncensored data point. And then we compare yi to all other yj that have a value greater than yi-- both censored and uncensored. Now, if your data had no censored data points in it, then you can verify that, in fact, this corresponds-- well, there's one other assumption one has to make, which is that your outcome is binary. And if you wonder how you get a binary outcome from this, imagine that your density function looked a little bit like this, where the event could occur either at time 1 or at time 2. So something like that. So if the event can occur at only two times, not a whole range of times, then this is analogous to a binary outcome. And so if you have a binary outcome like this and no censoring, then, in fact, that C-statistic is exactly equal to the area under the ROC curve. So that just connects it a little bit back to things we're used to. Yep? AUDIENCE: Just to make sure that I understand. So y1 is going to be we observed an event, and y2 is going to be we know that no event occurred until that day? DAVID SONTAG: Every dot corresponds to one event, either censored or not. AUDIENCE: Thank you. DAVID SONTAG: And they're sorted. In this figure, they're sorted by the time of either the censoring or the event occurring. So the C-statistic is one way to measure performance of your survival model, but you might remember that when we talked about binary classification, we said how area under the ROC curve by itself is very limiting, and so we should think through other performance metrics of relevance. So here are a few other things that you could do. One thing you could do is you could use the mean squared error. So again, thinking about this as a regression problem. But of course, that only makes sense for uncensored data points. So focus just on the uncensored data points, and look to see how well we're doing at predicting when the event occurs. The second thing one could do, since you have the ability to define the likelihood of an observation, censored or not censored, one could hold out data, and look at the held-out likelihood or log likelihood of that held-out data. And the third thing you could do is, after learning using this survival modeling framework, one could then turn it into a binary classification problem by, for example, artificially choosing time ranges, like greater than three months is 1. Less than three months is 0. That would be one crude definition.
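Here is a minimal sketch pulling together the censored log-likelihood and the C-statistic described above. It assumes an exponential model whose rate depends on a covariate through exp(w . x), fit by plain gradient ascent; the synthetic data, learning rate, and variable names are all made-up choices for illustration, not anything from the lecture or the readings.

```python
import numpy as np

# Toy survival data: x is a single covariate, t is the observed time,
# b = 1 if the observation is censored, 0 if the event was observed.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=(n, 1))
true_rate = np.exp(0.8 * x[:, 0])              # higher covariate -> earlier event
event_time = rng.exponential(1.0 / true_rate)
censor_time = rng.exponential(2.0, size=n)     # independent right censoring
t = np.minimum(event_time, censor_time)
b = (censor_time < event_time).astype(float)   # 1 = censored

# Exponential model with covariate-dependent rate: lambda_i = exp(w . x_i).
# Censored log-likelihood: sum_i (1 - b_i) * log f(t_i) + b_i * log S(t_i),
# with f(t) = lambda * exp(-lambda * t) and S(t) = exp(-lambda * t).
def log_likelihood(w):
    lam = np.exp(x @ w)
    return np.sum((1 - b) * np.log(lam) - lam * t)

def gradient(w):
    lam = np.exp(x @ w)
    return x.T @ ((1 - b) - lam * t)

w = np.zeros(1)
for _ in range(2000):                          # plain gradient ascent
    w += 1e-3 * gradient(w)

# Concordance index: over comparable pairs (i uncensored, t_i < t_j),
# count how often the model assigns higher risk (larger rate) to i.
def c_index(risk, t, b):
    num, den = 0.0, 0.0
    for i in range(len(t)):
        if b[i] == 1:                          # censored points can't anchor a pair
            continue
        for j in range(len(t)):
            if t[j] > t[i]:
                den += 1
                num += 1.0 if risk[i] > risk[j] else (0.5 if risk[i] == risk[j] else 0.0)
    return num / den

print("fitted w:", w, " log-likelihood:", log_likelihood(w))
print("C-index:", c_index(np.exp(x @ w), t, b))
```

Swapping the linear score w . x for the output of a deep network is exactly the "network outputs the distribution's parameters" idea mentioned earlier.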
And then once you've done a reduction like that to a binary classification problem, you could use all of the existing performance metrics you're used to thinking about for binary classification to evaluate the performance there-- things like positive predictive value, for example. And you could, of course, choose different reductions and get different performance statistics out. So this is just a small subset of ways to try to evaluate survival models, but it's a very, very rich literature. And again, on the bottom of these slides, I pointed you to several references that you could go to to learn more. The final comment I wanted to make is that I only told you about one estimator in today's lecture, and that's known as the likelihood-based estimator. But there is a whole other estimation approach for survival modeling, which is very important to know about, called partial likelihood estimators. And for those of you who have heard of Cox proportional hazards models-- and I know they were discussed in Friday's recitation-- that's an example of a class of model that's commonly used within this partial likelihood framework. Now, at a very intuitive level, what this partial likelihood estimator is doing is it's working with something like the C-statistic. So notice how the C-statistic only looks at relative orderings of events-- of their event occurrences. It doesn't care about exactly when the event occurred or not. In some sense, there's a constant in this survival function which could be divided out from both sides of this inequality, and it wouldn't affect anything about the statistic. And so one could think about other ways of learning these models by saying, well, we want to learn a survival function such that it gets the ordering correct between data points. Now, such a survival function wouldn't necessarily do a very good job-- there's no reason it would do any good at getting the precise time of when an event occurs-- but if your goal were to just figure out what is the sorted order of patients by risk, so that you're going to do an intervention on the 10 most risky people, then getting that order correct is going to be enough. And that's precisely the intuition behind these partial likelihood estimators-- they focus on something which is a little bit less than the original goal, but in doing so, they can have much better statistical complexity, meaning the amount of data they need in order to fit these models well. And again, this is a very rich topic. All I wanted to do is give you a pointer to it so that you can go read more about it if this is something of interest to you. So now moving on into the recap, one of the most important points that we discussed last week was about non-stationarity. And there was a question posted to Piazza, which was really interesting, which is how do you actually deal with non-stationarity. And I spoke a lot about it existing, and I talked about how to test for it, but I didn't say what to do if you have it. So I thought this was such an interesting question that I would also talk about it a bit during lecture. So the short answer is, if you have to have a solution that you deploy tomorrow, then here's the hack that sometimes works. You take your most recent data, like the last three months' data, and you hope that there's not much non-stationarity within the last three months. You throw out all the historical data, and you just train using the most recent data.
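A trivially small sketch of that hack, assuming the training examples live in a pandas DataFrame with a timestamp column (the file and column names here are hypothetical):

```python
import pandas as pd

# Keep only the most recent 90 days of training data, on the hope that the
# distribution hasn't drifted much within that window.
df = pd.read_csv("training_examples.csv", parse_dates=["encounter_date"])
cutoff = df["encounter_date"].max() - pd.Timedelta(days=90)
recent = df[df["encounter_date"] >= cutoff]
print(f"kept {len(recent)} of {len(df)} rows for training")
```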
So a bit unsatisfying, because you might have now extremely little data left to learn with, but if you have enough volume, it might be good enough. But the real interesting question from a research perspective is how could you optimally use that historical data. So here are three different ways. So one way has to do with imputation. Imagine that the way in which your data was non-stationary was because there were, let's say, parts of time when certain features were just unavailable. I gave you this example last week of laboratory test results across time, and I showed you how there are sometimes these really big blocks of time where no lab tests are available, or very few are available. Well, luckily we live in a world with high dimensional data, and what that means is there's often a lot of redundancy in the data. So what you could imagine doing is imputing features that you observed to be missing, such that the missingness properties, in fact, aren't changing as much across time after imputation. And if you do that as a pre-processing step, it may allow you to make use of much more of the historical data. A different approach, which is intimately tied to that, has to do with transforming the data. Instead of imputing it, transforming it into another representation altogether, such that that presentation is invariant across time. And here I'm giving you a reference to this paper by Ganin et al from the Journal of Machine Learning Research 2016, which talks about how to do domain and variant learning of neural networks, and that's one approach to do so. And I view those two as being very similar-- imputation and transformations. A second approach is to re-weight the data to look like the current data. So imagine that you go back in time, and you say, you know what? I ICD-10 codes, for some very weird reason-- this is not true, by the way-- ICD-10 codes in this untrue world happen to be used between March and April of 2003. And then they weren't used again until 2015. So instead of throwing away all of the previous data, we're going to recognize that those-- that three month interval 10 years ago was actually drawn from a very similar distribution as what we're going to be testing on today. So we're going to weight those data points up very much, and down weight the data points that are less like the ones from today. That's the intuition behind these re-weighting approaches, and we're going to talk much more about that in the context of causal inference, not because these two have to do with each other, but they have-- they end up using a very similar technique for how to deal with datas that shift, or covariate shift. And the final technique that I'll mention is based on online learning algorithms. So the idea there is that there might be cut points, change points across time. So maybe the data looks one way up until this change point, and then suddenly the data looks really different until this change point, and then suddenly the data looks very different on into the future. So here I'm showing you there are two change points in which data set shift happens. What these online learning algorithms do is they say, OK, suppose we were forced to make predictions throughout this time period using only the historical data to make predictions at each point in time. Well, if we could somehow recognize that there might be these shifts, we could design algorithms that are going to be robust to those shifts. 
And then one could try to analyze-- mathematically analyze those algorithms based on the amount of regret they would have to, for example, an algorithm that knew exactly when those changes were. And of course, we don't know precisely when those changes were. And so there's a whole field of algorithms trying to do that, and here I'm just give me one citation for a recent work. So to conclude risk stratification-- this is the last slide here. Maybe ask your question after class. We've talked about two approaches for formalizing risk stratification-- first as binary classification. Second as regression. And in the regression framework, one has to think about censoring, which is why we call it survival modeling. Second, in our examples, and again in your homework assignment that's coming up next week, we'll see that often the variables, the features that are most predictive make a lot of sense. In the diabetes case, we said-- we saw how patients having comorbidities of diabetes, like hypertension, or patients being obese were very predictive of patients getting diabetes. So you might ask yourself, is there something causal there? Are those features that are very predictive in fact causing-- what's causing the patient to develop type 2 diabetes? Like, for example, obesity causing diabetes. And this is where I want to caution you. You shouldn't interpret these very predictive features in a causal fashion, particularly not when one starts to work with high dimensional data, as we do in this course. The reason for that is very subtle, and we'll talk about that in the causal inference lectures, but I just wanted to give you a pointer now that you shouldn't think about it in that way. And you'll understand why in just a few weeks. And finally we talked about ways of dealing with missing data. I gave you one feature representation for the diabetes case, which was designed to deal with missing data. It said, was there any diagnosis code 250.01 in the last three months? And if there was, you have a 1. If you don't, 0. So it's designed to recognize that you don't have information, perhaps, for some large chunk of time in that window. But that missing data could also be dangerous if that missingness itself has caused you to non-stationarity, which is then going to result in your test distribution looking different from your training distribution. And that's where approaches that are based on imputation could actually be very valuable, not because they improve your predictive accuracy when everything goes right, but because they might improve your predictive accuracy when things go wrong. And so one of your readings for last week's lecture was actually an example of that, where they used a Gaussian process model to impute much of the missing data in a patient's continuous vital signs, and then they used a recurrent neural network to predict based on that imputed data. So in that case, there are really two things going on. First is this robustness to data set shift, but there's a second thing, which is going on as well, which has to do with a trade-off between the amount of data you have and the complexity of the prediction problem. By doing imputations, sometimes you make your problem look a bit simpler, and simpler algorithms might succeed where otherwise they would fail because not having enough data. And that's something that you saw in that last week's reading. So I'm done with risk stratification. 
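Before leaving risk stratification, here is one minimal way to instantiate the re-weighting idea from the non-stationarity discussion above: train a classifier to distinguish recent from historical examples, and use its estimated density ratio as sample weights. The choice of logistic regression and the clipping value are my own assumptions for this sketch, not the specific method of the cited papers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def shift_weights(X_hist, X_recent, clip=10.0):
    """Importance weights for historical examples so they 'look like' recent data.

    Fits a probabilistic classifier to tell recent (label 1) from historical
    (label 0) examples; the odds p(recent|x)/p(historical|x), rescaled by the
    class ratio, estimates the density ratio p_recent(x) / p_hist(x).
    """
    X = np.vstack([X_hist, X_recent])
    d = np.concatenate([np.zeros(len(X_hist)), np.ones(len(X_recent))])
    clf = LogisticRegression(max_iter=1000).fit(X, d)
    p = clf.predict_proba(X_hist)[:, 1]
    ratio = (p / (1 - p)) * (len(X_hist) / len(X_recent))
    return np.clip(ratio, 0, clip)   # clip so a few points don't dominate training

# Usage sketch: pass the weights to any learner that accepts sample_weight,
# e.g. model.fit(X_hist, y_hist, sample_weight=shift_weights(X_hist, X_recent)).
```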
I'll take a one minute breather for everyone in the room, and then we'll start with the main topic of this lecture, which is physiological time-series modeling. Let's say started. So here's a baby that's not doing very well. This baby is in the intensive care unit. Maybe it was a premature infant. Maybe it's a baby who has some chronic disease, and, of course, parents are very worried. This baby is getting very close monitoring. It's connected to lots of different probes. In number one here, it's illustrating a three probe-- three lead ECG, which we'll be talking about much more, which is measuring its heart, how the baby's heart is doing. Over here, this number three is something attached to the baby's foot, which is measuring its-- it's a pulse oximeter, which is measuring the baby's oxygen saturation, the amount of oxygen in the blood. Number four is a probe which is measuring the baby's temperature and so on. And so we're really taking really close measurements of this baby, because we want to understand how is this baby doing. We recognize that there might be really sudden changes in the baby's state of health that we want to be able to recognize as early as possible. And so behind the scenes, next to this baby, you'll, of course, have a huge number of monitors, each of the monitors showing the readouts from each of these different signals. And this type of data is really prevalent in intensive care units, but you'll also see in today's lecture how some aspects of this data are now starting to make its way to the home, as well. So for example, EKGs are now available on Apple and Samsung watches to help understand-- help to help with diagnosis of arrhythmias, even for people at home. And so from this type of data, there are a number of really important use cases to think about. The first one is to recognize that often we're getting really noisy data, and we want to try to infer the true signal. So imagine, for example, the temperature probe. The baby's true temperature might be 98.5, but for whatever reason-- we'll see a few reasons here today-- maybe you're getting an observation of 93. And you didn't know. Is that actually the true baby temperature? In which case we-- it would be in a lot of trouble. Or is that an anomalous reading? So we like t be able to distinguish between those two things. And in other cases, we are interested in not necessarily fully understanding what's going on with the baby along each of those axes, but we just want to use that data for predictive purposes, for risk stratification, for example. And so the type of machine learning approach that we'll take here will depend on the following three factors. First, do we have label data available? For example, do we know the ground truth of what the baby's true temperature was, at least for a few of the babies in the training set? Second. Do we have a good mechanistic or statistical model of how this data might evolve across time? We know a lot about hearts, for example. Cardiology is one of those fields of medicine where it's really well studied. There are good simulators of hearts, and how they beat across time, and how that affects the electrical stimulation across the body. And if we have these good mechanistic or statistical models, that can often allow one to trade off not having much label data, or just not having much data period. 
And it's really these three points which I want to illustrate the extremes of in today's lecture-- what do you do when you don't have much data, and what you do when-- what you can do when you have a ton of data. And I think it's going to be really informative for us as we go out into the world and will have to tackle each of those two settings. So here's an example of two different babies with very different trajectories. One in the x-axis here is time in seconds. The y-axis here-- I think seconds, maybe minutes. The y-axis here is beats per minute of the baby's heart rate, and you see in some cases it's really fluctuating a lot up and down. In some cases, it's sort of going in a similar-- in one direction, and in all cases, the short term observations are very different from the long range trajectories. So the first problem that I want us to think about is one of trying to understand, how do we deconvolve between the truth of what's going on with, for example, the patient's blood pressure or oxygen versus interventions that are happening to them? So on the bottom here, I'm showing examples of interventions. Here in this oxygen uptake, we notice how between roughly 1,000 and 2,000 seconds suddenly there's no signal whatsoever. And that's an example of what's called dropout. Over here, we see a different type of-- the effect of a different intervention, which is due to a probe recalibration. Now, at that time, there was a drop out followed by a sudden change in the values, and that's really happening due to a recalibration step. And in both of these cases, what's going on with the individual might be relatively constant across time, but what's being observed is dramatically affected by those interventions. So we want to ask the question, can we identify those artifactual processes? Can we identify that these interventions were happening at those points in time? And then, if we could identify them, then we could potentially subtract their effect out. So we could impute the data, which we know-- now know to be missing, and then have this much higher quality signal used for some downstream predictive purpose, for example. And the second reason why this can be really important is to tackle this problem called alarm fatigue. Alarm fatigue is one of the most important challenges facing medicine today. As we get better and better in doing risk stratification, as we come up with more and more diagnostic tools and tests, that means these red flags are being raised more and more often. And each one of these has some associated false positive rate for it. And so the more tests you have-- suppose the false positive rate is kept constant-- the more tests you have, the more likely it is that the union of all of those is going to be some error. And so when you're in an intensive care unit, there are alarms going off all the time. And something that happens is that nurses end up starting to ignore those alarms, because so often those alarms are false positives, are due to, for example, artifacts like what I'm showing you here. And so if we had techniques, such as the ones we'll talk about right now, which could recognize when, for example, the sudden drop in a patient's heart rate is due to an artifact and not due to the patient's true heart rate dropping-- if we had enough confidence in that-- in distinguishing those two things, then we might not decide to raise that red flag. And that might reduce the amount of false alarms, and that then might reduce the amount of alarm fatigue. 
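To put a rough number on that union-of-false-positives point-- assuming, optimistically, that the alerts fire independently, each with false positive rate $\alpha$:

```latex
P(\text{at least one false alarm}) \;=\; 1 - (1-\alpha)^n,
\qquad \text{e.g. } \alpha = 0.01,\; n = 100 \;\Rightarrow\; 1 - 0.99^{100} \approx 0.63.
```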
And that could have a very big impact on health care. So the technique which we'll talk about today goes by the name of switching linear dynamical systems. Who here has seen a picture like this-- this picture on the bottom-- before? About half of the room. So for the other half of the room, I'm going to give a bit of a recap into probabilistic modeling. All of you are by now familiar with basic probability. So you're used to thinking about, for example, univariate Gaussian distributions. We talked about how one could model survival, which was an example of such a distribution, but for today's lecture, we're going to be thinking now about multivariate probability distributions. In particular, we'll be thinking about how a patient's state-- let's say their true blood pressure-- evolves across time. And so now we're interested in not just the random variable at one point in time, but that same random variable at the second point in time, third point in time, fourth point in time, fifth point in time, and so on. So what I'm showing you here is known as a graphical model, also known as a Bayesian network. And it's one way of illustrating a multivariate probability distribution that has particular conditional independence properties. Specifically, in this model, one node corresponds to one random variable. So this is describing a joint distribution on x1 through x6, y1 through y6. So it's this multivariate distribution on 12 random variables. The fact that this is shaded in simply denotes that, at test time, when we use these models, typically these y variables are observed. Whereas our goal is usually to infer the x variables. Those are typically unobserved, meaning that our typical task is one of doing posterior inference to infer the x's given the y's. Now, associated with this graph-- I already told you the nodes correspond to random variables-- the graph tells us how this joint distribution is factorized. In particular, it's going to be factorized in the following way-- as the product over random variables of the probability of the i-th random variable conditioned on the values of its parents. I'm going to use z to just denote a random variable-- think of z as the union of x and y-- so each factor is the probability of zi conditioned on the parents of zi. So I'm going to assume this factorization, and in particular for this graphical model, which goes by the name of a Markov model, it has a very specific factorization. And we're just going to read it off from this definition. So we're going to go in order-- first x1, then y1, then x2, then y2, and so on, which is based on a root-to-children traversal of this graph. So the first random variable is x1. The second variable is y1. And what are the parents of y1? Everyone can say out loud. AUDIENCE: x1. DAVID SONTAG: x1. So y1 in this factorization is only going to depend on x1. Next we have x2. What are the parents of x2? Everyone say out loud. AUDIENCE: x1. DAVID SONTAG: x1. Then we have y2. What are the parents of y2? Everyone say out loud. AUDIENCE: x2. DAVID SONTAG: x2, and so on. So this joint distribution is going to have a particularly simple form, which is given by this factorization shown here. And this factorization corresponds one-to-one with the particular graph in the way that I just told you. And in this way, we can define a very complex probability distribution by a number of much simpler conditional probability distributions.
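Written out for the six time steps drawn on the slide, the factorization being read off the graph is:

```latex
p(x_{1:6}, y_{1:6}) \;=\; p(x_1)\, p(y_1 \mid x_1) \prod_{t=2}^{6} p(x_t \mid x_{t-1})\, p(y_t \mid x_t).
```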
For example, if each of the random variables were binary, then to describe probability of y1 given x1, we only need two numbers. For each value of x1, either 0 or 1, we give the probability of y1 equals 1. And then, of course, probably y1 equals 0 is just 1 minus that. So we can describe that very complicated joint distribution by a number of much smaller distributions. Now, the reason why I'm drawing it in this way is because we're making some really strong assumptions about the temporal dynamics in this problem. In particular, the fact that x3 only has an arrow from x2 and not from x1 implies that x3 is conditionally independent of x1. If you knew x2's value. So in some sense, think about this as cutting. If you're to take x2 out of the model and remove all edges incident on it, then x1 and x3 are now separated from one another. They're independent. Now, for those of you who do know graphical models, you'll recognize that that type of independent statement that I made is only true for Markov models, and the semantics for Bayesian networks are a little bit different. But actually for this model, it's-- they're one and the same. So we're going to make the following assumptions for the conditional distributions shown here. First, we're going to suppose that xt is given to you by a Gaussian distribution. Remember xt-- t is denoting a time step. Let's say 3-- it only depends in this picture-- the conditional distribution only depends on the previous time step's value, x2, or xt minus 1. So you'll notice how I'm going to say here xt is going to distribute as something, but the only random variables in this something can be xt minus 1, according to these assumptions. In particular, we're going to assume that it's some Gaussian distribution, whose mean is some linear transformation of xt minus 1, and which has a fixed covariance matrix q. So at each step of this process, the next random variable is some random walk from the previous random variable where you're moving according to some Gaussian distribution. In a very similar way, we're going to assume that yt is drawn also as a Gaussian distribution, but now depending on xt. So I want you to think about xt as the true state of the patient. It's a vector that's summarizing their blood pressure, their oxygen saturation, a whole bunch of other parameters, or maybe even just one of those. And y1 are the observations that you do observe. So let's say x1 is the patient's true blood pressure. y1 is the observed blood pressure, what comes from your monitor. So then a reasonable assumption would be that, well, if all this were equal, if it was a true observation, then y1 should be very close to x1. So you might assume that this covariance matrix is-- the covariance is-- the variance is very, very small. y1 should be very close to x1 if it's a good observation. And of course, if it's a noisy observation-- like, for example, if the probe was disconnected from the baby, then y1 should have no relationship to x1. And that dependence on the actual state of the world I'm denoting here by these superscripts, s of t. I'm ignoring that right now, and I'll bring that in in the next slide. Similarly, the relationship between x2 and x1 should be one which captures some of the dynamics that I showed in the previous slides, where I showed over here now this is the patient's true heart rate evolving across time, let's say. Notice how, if you look very locally, it looks like there are some very, very big local dynamics. 
Whereas if you look more globally, again, there's some smoothness, but there are some-- again, it looks like some random changes across time. And so those-- that drift has to somehow be summarized in this model by that A random variable. And I'll get into more detail about that in just a moment. So what I just showed you was an example of a linear dynamical system, but it was assuming that there were none of these events happening, none of these artifacts happening. The actual model that we were going to want to be able to use then is going to also incorporate the fact that there might be artifacts. And to model that, we need to introduce additional random variables corresponding to whether those artifacts occurred or not. And so that's now this model. So I'm going to let these S's-- these are other random variables, which are denoting artifactual events. They are also evolving with time. For example, if there's artifactual factual event at three seconds, maybe there's also an artifactual event at four seconds. And we like to model the relationship between those. That's why you have these arrows. And then the way that we interpret the observations that we do get depends on both the true value of what's going on with the patient and whether there was an artifactual event or not. And you'll notice that there's also an edge going from the artifactual events to the true values to note the fact that those interventions might actually be affecting the patient. For example, if you give them a medication to change their blood pressure, then that procedure is going to affect the next time step's value of the patient's blood pressure. So when one wants to learn this model, you have to ask yourself, what types of data do you have available? Unfortunately, it's very hard to get data on both the ground truth, what's going on with the patient, and whether these artifacts truly occurred or not. Instead, what we actually have are just these observations. We get these very noisy blood pressure draws across time. So what this paper does is it uses a maximum likelihood estimation approach, where it recognizes that we're going to be learning from missing data. We're going to explicitly think of these x's and the s's as latent variables. And we're going to maximize the likelihood of the whole entire model, marginalizing over x and s. So just maximizing the marginal likelihood over the y's. Now, for those of you who have studied unsupervised learning before, you might recognize that as a very hard learning problem. In fact, it's-- that likelihood is non-convex. And one could imagine all sorts of a heuristics for learning, such as gradient descent, or, as this paper uses, expectation maximization, and because of that non-convexity, each of these algorithms typically will only reach a local maxima of the likelihood. So this paper uses EM, which intuitively iterates between inferring those missing variables-- so imputing the x's and the s's given the current model, and doing posterior inference to infer the missing variables given the observed variables, using the current model. And then, once you've imputed those variables, attempting to refit the model. So that's called the m-step for maximization, which updates the model and just iterates between those two things. That's one learning algorithm which is guaranteed to reach a local maxima of the likelihood under some regularity assumptions. 
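Here is a minimal generative sketch, in one dimension, of the switching model just described: a slowly drifting true state, a two-state artifact indicator with Markov switching, and an observation that tracks the state when there is no artifact and is unrelated noise when there is. All of the specific numbers, and the choice of what an artifact looks like, are made-up assumptions for illustration-- not the parameters or artifact types from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500

# Latent true state x_t: a slow Gaussian random walk, x_t ~ N(x_{t-1}, q).
x = np.zeros(T)
for t in range(1, T):
    x[t] = x[t - 1] + rng.normal(0, 0.1)

# Artifact indicator s_t: a two-state Markov chain (0 = normal, 1 = artifact),
# where artifacts tend to persist for a while once they start.
s = np.zeros(T, dtype=int)
stay = {0: 0.99, 1: 0.90}                  # made-up self-transition probabilities
for t in range(1, T):
    s[t] = s[t - 1] if rng.random() < stay[s[t - 1]] else 1 - s[t - 1]

# Observation y_t: close to x_t when normal, unrelated noise during an artifact
# (think of a disconnected probe reading ambient values).
y = np.where(s == 0,
             x + rng.normal(0, 0.05, T),   # faithful but noisy reading
             rng.normal(-3.0, 0.5, T))     # artifact: no dependence on x

# Learning in the paper goes the other way: given only y, treat x and s as
# latent and run EM -- E-step: posterior inference over (x, s) given y under
# the current parameters; M-step: re-estimate the parameters given those
# posteriors -- iterating to a local maximum of the marginal likelihood of y.
```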
And so this paper uses that algorithm, but you need to be asking yourself, if all you ever observe are the y's, then will this algorithm ever recover anything close to the true model? For example, there might be large amounts of non-identifiability here. It could be that you could swap the meaning of the s's, and you'd get a similar likelihood on the y's. That's where bringing in domain knowledge becomes critical. So this is going to be an example where we have no label data or very little label data. And we're going to do unsupervised learning of this model, but we're going to use a ton of domain knowledge in order to constrain the model as much as possible. So what is that domain knowledge? Well, first we're going to use the fact that we know that a true heart rate evolves in a fashion that can be very well modeled by an autoregressive process. So the autoregressive process that's used in this paper is used to model the normal heart rate dynamics. In a moment, I'll tell you how to model the abnormal heart rate observations. And intuitively-- I'll first go over the intuition, then I'll give you the math. Intuitively what it does is it recognizes that this complicated signal can be decomposed into two pieces. The first piece shown here is called a baseline signal, and that, if you squint your eyes and you sort or ignore the very local fluctuations, this is what you get out. And then you can look at the residual of subtracting this signal, subtracting this baseline from the signal. And what you get out looks like this. Notice here it's around 0 mean. So it's a 0 mean signal with some random fluctuations, and the fluctuations are happening here at a much faster rate than-- and for the original baseline. And so the sum of bt and this residual is a very-- it looks-- is exactly equal to the true heart rate. And each of these two things we can model very well. This we can model by a random walk with-- which goes very slowly, and this we can model by a random walk which goes very quickly. And that is exactly what I'm now going to show over here on the left hand side. bt, this baseline signal, we're going to model as a Gaussian distribution, which is parameterized as a function of not just bt minus 1, but also bt minus 2, and bt minus 3. And so we're going to be taking a weighted average of the previous few time steps, where we're smoothing out, in essence, the observation-- the previous few observations. If you were to-- if you're being a keen observer, you'll notice that this is no longer a Markov model. For example, if this p1 and p2 are equal to 2, this then corresponds to a second order Markov model, because each random variable depends on the previous two time steps of the Markov chain. And so after-- so you would model now bt by this process, and you would probably be averaging over a large number of previous time steps to get this smooth property. And then you'd model xt minus bt by this autoregressive process, where you might, for example, just be looking at just the previous couple of time steps. And you recognize that you're just doing much more random fluctuations. And then-- so that's how one would now model normal heart rate dynamics. And again, it's just-- this is an example of a statistical model. There is no mechanistic knowledge of hearts being used here, but we can fit the data of normal hearts pretty well using this. But the next question and the most interesting one is, how does one now model artifactual events? So for that, that's where some mechanistic knowledge comes in. 
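Before getting to the artifact models, here is the normal-dynamics decomposition just described, written out as a sketch (the actual autoregressive orders $p_1$, $p_2$ and the weights are fit in the paper and aren't reproduced here):

```latex
x_t = b_t + r_t, \qquad
b_t \sim \mathcal{N}\!\Big(\textstyle\sum_{j=1}^{p_1} \alpha_j\, b_{t-j},\; \sigma_b^2\Big), \qquad
r_t \sim \mathcal{N}\!\Big(\textstyle\sum_{j=1}^{p_2} \beta_j\, r_{t-j},\; \sigma_r^2\Big),
```

with the baseline $b_t$ averaging over many previous steps and drifting slowly, and the residual $r_t$ a zero-mean, faster-moving autoregressive process over just the last couple of steps.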
So one models that the probe dropouts are given by recognizing that, if a probe is removed from the baby, then there should no longer be-- or at least if you-- after a small amount of time, there should no longer be any dependence on the true value of the baby. For example, the blood pressure, once the blood pressure probe is removed, is no longer related to the baby's true blood pressure. But there might be some delay to that lack of dependence. And so-- and that is going to be encoded in some domain knowledge. So for example, in the temperature probe, when you remove the temperature probe from the baby, it starts heating up again-- or it starts cooling, so assuming that the ambient temperature is cooler than the baby's temperature. So you take it off the baby. It starts cooling down. How fast does it cool down? Well, you could assume that it cools down with some exponential decay from the baby's temperature. And this is something that is very reasonable, and you could imagine, maybe if you had label data for just a few of the babies, you could try to fit the parameters of the exponential very quickly. And in this way, now, we parameterize the conditional distribution of the temperature probe, given both the state and whether the artifact occurred or not, using this very simple exponential decay. And in this paper, they give a very similar type of-- they make similar types of-- analogous types of assumptions for all of the other artifactual probes. You should think about this as constraining these conditional distributions I showed you here. They're no longer allowed to be arbitrary distributions, and so that, when one does now expectation maximization to try to maximize the marginal likelihood of the data, you've now constrained it in a way that you hopefully are moved on to identifyability of the learning problem. It makes all of the difference in learning here. So in this paper, their evaluation did a little bit of fine tuning for each baby. In particular, they assumed that the first 30 minutes near the start consists of normal dynamics so that's there are no artifacts. That's, of course, a big assumption, but they use that to try to fine tune the dynamic model to fine tune it for each baby and for themselves. And then they looked at the ability to try to identify artifactual processes. Now, I want to go a little bit slowly through this plot, because it's quite interesting. So what I'm showing you here is a ROC curve of the ability to predict each of the four different types of artifacts. For example, at any one point in time, was there a blood sample being taken or not? At any one point in time, was there a core temperature disconnect of the core temperature probe? And to evaluate it, they're assuming that they have some label data for evaluation purposes only. And of course, you want to be at the very far top left corner up here. And what we're showing here are three different curves-- the very faint dotted line, which I'm going to trace out with my cursor, is the baseline. Think of that as a much worse algorithm. Sorry. That's that line over there. Everyone see it? And this approach are the other two lines. Now, what's differentiating those other two lines corresponds to the particular type of approximate inference algorithm that's used. To do this posterior inference, to infer the true value of the x's given your noisy observations in the model given here is actually a very hard inference problem. Mathematically, I think one can show that it's an NP-hard computational problem. 
And so they have to approximate it in some way, and they use two different approximations here. The first approximation is based on what they're calling a Gaussian sum approximation, and it's a deterministic approximation. The second approximation is based on a Monte Carlo method. And what you see here is that the Gaussian sum approximation is actually dramatically better. So for example, in this blood sample one, that the ROC curve looks like this for the Gaussian sum approximation. Whereas for the Monte Carlo approximation, it's actually significantly lower. And this is just to point out that, even in this setting, where we have very little data, we're using a lot of domain knowledge, the actual details of how one does the math-- in particular, the proximate inference-- can make a really big difference in the performance of this system. And so it's something that one should really think deeply about, as well. I'm going to skip that slide, and then just mention very briefly this one. This is showing an inference of the events. So here I'm showing you three different observations. And on the bottom here, I'm showing the prediction of when artifact-- two different artifactual events happened. And these predictions were actually quite good, using this model. So I'm done with that first example, and-- and the-- just to recap the important points of that example, it was that we had almost no label data. We're tackling this problem using a cleverly chosen statistical model with some domain knowledge built in, and that can go really far. So now we'll shift gears to talk about a different type of problem involving physiological data, and that's of detecting atrial fibrillation. So what I'm showing you here is an AliveCore device. I own one of these. So if you want to drop by my E25 545 office, you can-- you can play around with it. And if you attach it to your mobile phone, it'll show you your electric conductance through your heart as measured through your two fingers touching this device shown over here. And from that, one can try to detect whether the patient has atrial fibrillation. So what is atrial fibrillation? Good question. It's [INAUDIBLE]. So this is from the American Heart Association. They defined atrial fibrillation as a quivering or irregular heartbeat, also known as arrhythmia. And one of the big challenges is that it could lead to blood clot, stroke, heart failure, and so on. So here is how a patient might describe having atrial fibrillation. My heart flip-flops, skips beats, feels like it's banging against my chest wall, particularly when I'm carrying stuff up my stairs or bending down. Now let's try to look at a picture of it. So this is a normal heartbeat. Hearts move-- pumping like this. And if you were to look at the signal output of the EKG of a normal heartbeat, it would look like this. And it's roughly corresponding to the different-- the signal is corresponding to different cycles of the heartbeat. Now for a patient who has atrial fibrillation, it looks more like this. So much more obviously abnormal, at least in this figure. And if you look at the corresponding signal, it also looks very different. So this is just to give you some intuition about what I mean by atrial fibrillation. So what we're going to try to do now is to detect it. So we're going to take data like that and try to classify it into a number of different categories. 
Now this is something which has been studied for decades, and last year, 2017, there was a competition run by Professor Roger Mark, who is here at MIT, which is trying to see, well, how could-- how good are we at trying to figure out which patients have different types of heart rhythms based on data that looks like this? So this is a normal rhythm, which is also called a sinus rhythm. And over here it's atrial-- this is an example one patient who has atrial fibrillation. This is another type of rhythm that's not atrial fibrillation, but is abnormal. And this is a noisy recording-- for example, if a patient's-- doesn't really have their two fingers very well put on to the two leads of the device. So given one of these categories, can we predict-- one of these signals, could predict which category it came from? So if you looked at this, you might recognize that they look a bit different. So could some of you guess what might be predictive features that differentiate one of these signals from the other? In the back? AUDIENCE: The presence and absence of one of the peaks the QRS complex are [INAUDIBLE].. DAVID SONTAG: So speak in English for people who don't know what these terms mean. AUDIENCE: There is one large piece, which can-- probably we can consider one mV and there is another peak, which is sort of like-- they have reverse polarity between normal rhythm and [INAUDIBLE]. DAVID SONTAG: Good. So are you a cardiologist? AUDIENCE: No. DAVID SONTAG: No, OK. So what the student suggested is one could look for sort of these inversions to try to describe it a little bit differently. So here you're suggesting the lack of those inversions is predictive of an abnormal rhythm. What about another feature that could be predictive? Yep? AUDIENCE: The spacing between the peaks is more irregular with the AF. DAVID SONTAG: The spacing between beats is more irregular with the AF rhythm. So you're sort of looking at this. You see how here this spacing is very different from this spacing. Whereas in the normal rhythm, sort of the spacing looks pretty darn regular. All right, good. So if I was to show you 40 examples of these and then ask you to classify some new ones, how well do you think you'll be able to do? Pretty well? I would be surprised if you couldn't do reasonably well at least distinguishing between normal rhythm and AF rhythm, because there seem to be some pretty clear signals here. Of course, as you get into alternatives, then the story gets much more complex. But let me dig in a little bit deeper into what I mean by this. So let's define some of these terms. Well, cardiologists have studied this for a really long time, and they have-- so what I'm showing you here is one heart cycle. And they've-- you can put names to each of the peaks that you would see in a regular heart cycle-- so that-- for example, that very high peak is known as the R peak. And you could look at, for example, the interval-- so this is one beat. You could look at the interval between the R peak of one beat and the R peak of another peak, and define that to be the RR interval. In a similar way, one could take-- one could find different distinctive elements of the signal-- by the way, each-- each time step corresponds to the heart being in a different position. For a healthy heart, these are relatively deterministic. And so you could look at other distances and derive features from those distances, as well, just like we were talking about, both within a beat and across beats. Yep? 
AUDIENCE: So what's the difference between a segment and an interval again? DAVID SONTAG: I don't know what the difference between a segment and an interval is. Does anyone else know? I mean, I guess the interval is between probably the heads of peaks, whereas segments might refer to within a interval. That's my guess. Does someone know better? For the purpose of today's class, that's a good enough understanding. The point is this is well understood. One could derive features from this. AUDIENCE: By us. DAVID SONTAG: By us. So what would a traditional approach be to this problem? So this is-- I'm pulling this figure from a paper from 2002. What it'll do is it'll take in that signal. It'll do some filtering of it. Then it'll run a peak detection logic, which will find these peaks, and then it'll measure intervals between these peaks and within a beat. And it'll take those computations or make some decision based on it. So that's a traditional algorithm, and they work pretty reasonably. And so what do I mean by signal processing? Well, this is an example of that. I encourage any of you to go home today and try to code up a peaked finding algorithm. It's not that hard, at least not to get an OK one. You might imagine keeping a running tab of what's the highest signal you've seen so far. Then you look to see what is the first time it drops, and the second time-- and the next time it goes up larger than, let's say, the previous-- suppose that one of-- you want to look for when the drop is-- the maximum value-- recent maximum value divided by 2. And then you-- then you reset. And you can imagine in this way very quickly coding up a peak finding algorithm. And so this is just, again, to give you some intuition behind what a traditional approach would be. And then you can very quickly see that that-- once you start to look at some intervals between peaks, that alone is often good enough for predicting whether a patient has atrial fibrillation. So this is a figure taken from paper in 2001 showing a single patient's time series. So the x-axis is for that single patient, their heart beats across time. The y-axis is just showing the RR interval between the previous beat and the current beat. And down here in the bottom is the ground truth of whether the patient is assessed to have-- to be in-- to have a normal rhythm or atrial fibrillation, which is noted as this higher value here. So these are AF rhythms. This is normal. This is AF again. And what you can see is that the RR interval actually gets you pretty far. You notice how it's pretty high up here. Suddenly it drops. The RR interval drops for a while, and that's when the patient has AF. Then it goes up again. Then it drops again, and so on. And so it's not deterministic, the relationship, but there's definitely a lot of signal just from that. So you might say, OK, well, what's the next thing we could do to try to clean up the signal a little bit more? So flash backwards from 2001 to 1970 here at MIT, studied by-- actually, no, this is not MIT. This is somewhere else, sorry. But still 1970-- where they used a Markov model very similar to the Markov models we were just talking about in the previous example to model what a sequence of normal RR intervals looks like versus what a sequence of abnormal, for example, AF RR intervals looks like. 
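Picking up the invitation above to code a peak finder at home: here is one minimal sketch of the running-maximum heuristic that was described, plus the RR intervals and a simple irregularity feature computed from the detected peaks. The decaying-maximum factor, the half-of-maximum threshold, and the refractory period are my own made-up choices-- this is not a validated QRS detector.

```python
import numpy as np

def find_r_peaks(signal, fs, refractory_s=0.25):
    """Very crude R-peak detector: track a running maximum and call a peak when
    the signal is a local maximum above half of that recent maximum.
    fs is the sampling rate in Hz. Not a clinical-grade QRS detector."""
    peaks = []
    running_max = np.max(signal[: int(fs)])        # initialize from the first second
    last_peak = -np.inf
    for i in range(1, len(signal) - 1):
        running_max = max(0.98 * running_max, signal[i])   # slowly decaying maximum
        is_local_max = signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]
        if (is_local_max and signal[i] > 0.5 * running_max
                and (i - last_peak) / fs > refractory_s):
            peaks.append(i)
            last_peak = i
    return np.array(peaks)

def rr_intervals(peaks, fs):
    """RR intervals in seconds between consecutive detected R peaks."""
    return np.diff(peaks) / fs

# A simple AF-style feature, in the spirit of the interval-based detectors above:
# the variability of the RR intervals (irregular rhythms tend to score higher).
def rr_irregularity(peaks, fs):
    rr = rr_intervals(peaks, fs)
    return np.std(rr) / np.mean(rr) if len(rr) > 1 else 0.0
```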
And in that way-- by scoring a whole sequence of RR intervals under the normal model versus under the AF model-- one can recognize that any one observation of an RR interval might not by itself be perfectly predictive, but if you look at a sequence of them for a patient with atrial fibrillation, there is some common pattern to it. And one can detect it by just looking at the likelihood of that sequence under each of these two different models, normal and abnormal. And that did pretty well-- even better than the previous approaches for predicting atrial fibrillation. This is the paper I wanted to say was from MIT. Now, 1991-- this is also from Roger Mark's group. Now this is a neural network based approach, where it says, OK, we're going to take a bunch of these things. We're going to derive a bunch of these intervals, and then we're going to throw that through a black box supervised machine learning algorithm to predict whether a patient has AF or not. So first of all, there are some simple approaches here that work reasonably well, and using neural networks in this domain is not a new thing. But where are we as a field? So as I mentioned, there was this competition last year, and what I'm showing you here-- the citation is from one of the winning approaches. And this winning approach really brings the two paradigms together. It extracts a large number of expert-derived features-- shown here. And these are exactly the types of things you might expect, like the proportion and median RR interval of regular rhythms, or a max RR irregularity measure. And there's just a whole range of different things that you can imagine manually deriving from the data. And you throw all of these features into a machine learning algorithm, maybe a random forest, maybe a neural network, doesn't matter. And what you get out is a slightly better algorithm than if you had just come up with a simple rule on your own. That was the winning algorithm then. And in the summary paper, they said they had expected that convolutional neural networks would win, and they were surprised that none of the winning solutions involved convolutional neural networks. They conjectured that maybe the reason why is that the 8,000 patients they had [INAUDIBLE] just wasn't enough to give the more complex models an advantage. So flip forward now to this year and the article that you read in your readings in Nature Medicine, where the Stanford group now showed how a convolutional neural network approach, which is, in many ways, extremely naive-- all it does is it takes the sequence data in. It makes no attempt at trying to understand the underlying physiology, and just predicts from that-- can do really, really well. And so there are a couple of differences from the previous work that I want to emphasize. First, the sensor is different. Whereas the previous work used this AliveCor sensor, in this paper from Stanford, they're using a different sensor called the Zio patch, which is attached to the human body and conceivably much less noisy. So that's one big difference. The second big difference is that there's dramatically more data. Instead of 8,000 patients to train from, now they have over 90,000 records from 50,000 different patients to train from. The third major difference is that now, rather than just trying to classify into four categories-- normal, AF, other, or noisy-- now we're going to try to classify into 14 different categories.
We're, in essence, breaking apart that other class into much finer grain detail of different types of abnormal rhythms. And so here are some of those other abnormal rhythms, things like complete heart block, and a bunch of other names I can't pronounce. And from each one of these, they gathered a lot of data. And that actually did-- so it's not described in the paper, but I've talked to the authors, and they did-- they gathered this data in a very interesting way. So they sort of-- they did their training iteratively. They looked to see where their errors were, and then they went and gathered more data from patients with that subcategory. So many of these other categories are very under-- might be underrepresented in the general population, but they actually gather a lot of patients of that type in their data set for training purposes. And so I think those three things ended up making a very big difference. So what is their convolutional network? Well, first of all, it's a 1-D signal. So it's a little bit different from the con nets you typically see in computer vision, and I'll show you an illustration of that in the next slide. It's a very deep model. So it's 34 layers. So the input comes in on the very top in this picture. It's passed through a number of layers. Each layer consists of convolution followed by rectified linear units, and there is sub sampling at every other layer so that you go from a very wide signal-- so a very long-- I can't remember how long-- 1 second long signal summarized down into sort of much-- just many smaller number of dimensions, which you then have a sort of fully connected layer at the bottom to do for your predictions. And then they also have these shortcut connections, which allow you to pass information from earlier layers down to the very end of the network, or even into intermediate layers. And for those of you who are familiar with residual networks, it's the same idea. So what is a 1D convolution? Well, it looks a little bit like this. So this is the signal. I'm going to just approximate it by a bunch of 1's and 0's. I'll say this is a 1. This is a 0. This is a 1, 1, so on. A convolutional network has a filter associated with it. That filter is then applied in a 1D model. It's applied in a linear fashion. It's just taken a dot product with the filter's values, with the values of the signal at each point in time. So it looks a little bit like this, and this is what you get out. So this is the convolution of a single filter with the whole signal. And the computation I did there-- so for example, this first number came from taking the dot product of the first three numbers-- 1, 0, 1-- with the filter. So it's 1 times 2 plus 3 times 0 plus 1 times 1, which is 3. And so each of the subsequent numbers was computed in the same way. And I usually have you figure out what this last one is, but I'll leave that for you to do at home. And that's what a 1D convolution is. And so they have-- they do this for lots of different filters. Each of those filters might be of varying lengths, and each of those will detect different types of signal patterns. And in this way, after having many layers of these, one can, in an automatic fashion, extract many of the same types of signals used in that earlier work, but also be much more flexible to detect some new ones, as well. Hold your question, because I need to wrap up. So in the paper that you read, they talked about how they evaluated this. And so I'm not going to go into much depth in it now. 
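To make the 1D convolution concrete before getting to the evaluation, here is the dot-product computation from the example above written out in Python. The signal is the toy 0/1 approximation from the slide, the filter values [2, 3, 1] are inferred from the arithmetic in the example, and, as in deep learning libraries, the filter is slid along the signal without being flipped, so strictly speaking this is a cross-correlation.

```python
import numpy as np

signal = np.array([1, 0, 1, 1, 0, 1])   # toy 0/1 approximation of the trace
kernel = np.array([2, 3, 1])            # filter values inferred from the example

# Slide the filter along the signal and take a dot product at each position
# (a "valid" 1D convolution, with no padding).
out = np.array([signal[i:i + len(kernel)] @ kernel
                for i in range(len(signal) - len(kernel) + 1)])

print(out)  # first entry: 1*2 + 0*3 + 1*1 = 3
```

A real network stacks many such filters of different lengths, applies a nonlinearity after each one, and subsamples every other layer, which is the 34-layer architecture just described.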
I just want to point out two different metrics that they used. So the first metric they used was what they called a sequential error metric. What that looked at is you had this very long sequence for each patient, and they labeled different one second intervals of that sequence into abnormal, normal, and so on. So you could ask, how good are we at labeling each of the different points along the sequence? And that's the sequence metric. The different-- the second metric is the set metric, and that looks at, if the patient has something that's abnormal anywhere, did you detect it? So that's, in essence, taking an or of each of those 1 second intervals, and then looking across patients. And from a clinical diagnostic perspective, the set metric might be most useful, but then when you want to introspect and understand where is that happening, then the sequential metric is important. And the key take home message from the paper is that, if you compared the model's predictions-- this is, I think, using an f1 metric-- to what you would get from a panel of cardiologists, these models are doing as well, if not better than these panels of cardiologists. So this is extremely exciting. This is technology-- or variance of this is technology that you're going to see deployed now. So for those of you who have purchased these Apple watches, these Samsung watches, I don't know exactly what they're using, but I wouldn't be surprised if they're using techniques similar to this. And you're going to see much more of that in the future. So this is going to be really the first example in this course so far of something that's really been deployed. And so in summary, we're very often in the realm of not enough data. And in this lecture today, we gave two examples how you can deal with that. First, you can try to use mechanistic and statistical models to try to work in settings where you don't have much data. And in other extremes, you do have a lot of data, and you can try to ignore that, and just use these black box approaches. That's all for today.
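As a small coda on the evaluation: here is one way to make the sequence-versus-set distinction concrete, following the description of the set metric as an or over the one-second intervals. The aggregation and the micro-averaged F1 below are plausible choices, not necessarily the exact computation used in the paper, and the names are placeholders.

```python
def sequence_to_set(interval_labels):
    """Collapse per-interval rhythm labels (the sequence-level view) into the
    set of rhythms observed for that patient (the set-level view)."""
    return set(interval_labels)

def set_f1(true_sets, pred_sets):
    """Micro-averaged F1 over patient-level rhythm sets."""
    tp = sum(len(t & p) for t, p in zip(true_sets, pred_sets))
    fp = sum(len(p - t) for t, p in zip(true_sets, pred_sets))
    fn = sum(len(t - p) for t, p in zip(true_sets, pred_sets))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# e.g. sequence_to_set(["normal", "normal", "AF", "normal"]) == {"normal", "AF"}
```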
MIT_6S897_Machine_Learning_for_Healthcare_Spring_2019
23_Fairness.txt
PETER SZOLOVITS: OK, so a little over a year ago, I got a call from this committee. NASEM is the National Academy of Science, Engineering, and Medicine. So this is an august body of old people with lots of gray hair who have done something important enough to get elected to these academies. And their research arm is called the National Research Council and has a bunch of different committees. One of them is this Committee on Science, Technology, and the Law. It's a very interesting committee. It's chaired by David Baltimore, who used to be an MIT professor until he went and became president of Caltech. And he also happens to have a Nobel Prize in his pocket and he's a pretty famous guy. And Judge David Tatel is a member of the US Court of Appeals for the District of Columbia circuit, so this is probably the most important circuit court. It's one level below the Supreme Court. And he happens to sit in the seat that Ruth Bader Ginsburg occupied before she was elevated to the Supreme Court from that Court of Appeals, so this is a pretty big deal. So these are heavy hitters. And they convened a meeting to talk about the set of topics that I've listed here. So blockchain and distributed trust, artificial intelligence and decision making, which is obviously the part that I got invited to talk about, privacy and informed consent in an era of big data, science curricula for law schools, emerging issues, and science, technology, and law. The issue of using litigation to target scientists who have opinions that you don't like. And the more general issue of how do you communicate advances in life sciences to a skeptical public. So this is dealing with the sort of anti-science tenor of the times. So the group of us that talked about AI and decision making, I was a little bit surprised by the focus because Hank really is a law school professor at Stanford who's done a lot of work on fairness and prejudice in health care. Cherise Burdee is at something called the Pretrial Justice Institute, and her issue is a legal one which is that there are now a lot of companies that have software that predict, if you get bail while you're awaiting trial, are you likely to skip bail or not? And so this is influential in the decision that judges make about how much bail to impose and whether to let you out on bail at all or to keep you in prison, awaiting your trial. Matt Lundgren is a radiology professor at Stanford and has done some of the really cool work on building convolutional neural network models to detect pulmonary emboli and various other things in imaging data. You know the next guy, and Suresh Venkatasubramanian is a professor. He was originally a theorist at the University of Utah but has also gotten into thinking a lot about privacy and fairness. And so that that was our panel, and we each gave a brief talk and then had a very interesting discussion. One of the things that I was very surprised by is somebody raised the question of shouldn't Tatel as a judge on the Circuit Court of Appeals hire people like you guys to be clerks in his court? So people like you guys who also happen to go to law school, of which there are a number now of people who are trained in computational methods and machine learning but also have the legal background. And he said something very interesting to me. He said, no, he wouldn't want people like that, which kind of shocked me. And so we quizzed him a little bit on why, and he said, well, because he views the role of the judge not to be an expert but to be a judge. 
To be a balancer of arguments on both sides of an issue. And he was afraid that if he had a clerk who had a strong technical background, that person would have strong technical opinions which would bias his decision one way or another. So this reminded me-- my wife was a lawyer, and I remember, when she was in law school, she would tell me about the classes that she was taking. And it became obvious that studying law was learning how to win, not learning how to find the truth. And there's this philosophical notion in the law that says that the truth will come out from spirited argument on two sides of a question, but your duty as a lawyer is to argue as hard as you can for your side of the argument. And in fact, in law school, they teach them, like in debate, that you should be able to take either side of any case and be able to make a cogent argument for it. And so Tatel sort of reinforced that notion in what he said, which I thought was interesting. Well, just to talk a little bit about the justice area because this is the one that has gotten the most public attention, governments use decision automation for determining eligibility for various kinds of services, evaluating where to deploy health inspectors and law enforcement personnel, defining boundaries along voting districts. So all of the gerrymandering discussion that you hear about is all about using computers and actually machine learning techniques to try to figure out how to-- your objective function is to get Republicans or Democrats elected, depending on who's in charge of the redistricting. And then you tailor these gerrymandered districts in order to maximize the probability that you're going to have the majority in whatever congressional races or state legislative races. So in the law, people are in favor of these ideas to the extent that they inject clarity and precision into bail, parole, and sentencing decisions. Algorithmic technologies may minimize harms that are the products of human judgment. So we know that people are in fact prejudiced, and so there are prejudices by judges and by juries that play into the decisions made in the legal system. So by formalizing it, you might win. However, conversely, the use of technology to determine whose liberty is deprived and on what terms raises significant concerns about transparency and interpretability. So next week, we're going to talk some about transparency and interpretability, but today's is really about fairness. So here is an article from October of last year-- no, September of last year, saying that as of October of this year, if you get arrested in California, the decision of whether you get bail or not is going to be made by a computer algorithm, not by a human being, OK? So it's not 100%. There is some discretion on the part of this county official who will make a recommendation, and the judge ultimately decides, but I suspect that until there are some egregious outcomes from doing this, it will probably be quite commonly used. Now, the critique of these bail algorithms is based on a number of different factors. One is that the algorithms reflect a severe racial bias. So for example, if you are two identical people but one of you happens to be white and one of you happens to be black, the chances of you getting bail are much lower if you're black than if you're white. Now, you say, well, how could that be given that we're learning this algorithmically? 
Well, it's a complicated feedback loop because the algorithm is learning from historical data, and if historically, judges have been less likely to grant bail to an African-American than to a Caucasian-American, then the algorithm will learn that that's the right thing to do and will nicely incorporate exactly that prejudice. And then the second problem, which I consider to be really horrendous, is that in this particular field, the algorithms are developed privately by private companies which will not tell you what their algorithm is. You can just pay them and they will tell you the answer, but they won't tell you how they compute it. They won't tell you what data they used to train the algorithm. And so it's really a black box. And so you have no idea what's going on in that box other than by looking at its decisions. And so the data collection system is flawed in the same way as the judicial system itself. So not only are there algorithms that decide whether you get bail or not, which is after all a relatively temporary question until your trial comes up, although that may be a long time, but there are also algorithms that advise on things like sentencing. So they say, how likely is this patient to be a recidivist? Somebody who, when they get out of jail, they're going to offend again. And therefore, they deserve a longer jail sentence because you want to keep them off the streets. Well, so this is a particular story about a particular person in Wisconsin, and shockingly, the state Supreme Court ruled against this guy, saying that knowledge of the algorithm's output was a sufficient level of transparency in order to not violate his rights, which I think many people consider to be kind of an outrageous decision. I'm sure it'll be appealed and maybe overturned. Conversely-- I keep doing on the one hand and on the other-- algorithms could help keep people out of jail. So there's a Wired article not long ago that says we can use algorithms to analyze people's cases and say, oh, this person looks like they're really in need of psychiatric help rather than in need of jail time, and so perhaps we can divert him from the penal system into psychiatric care and keep him out of prison and get him help and so on. So that's the positive side of being able to use these kinds of algorithms. Now, it's not only in criminality. There is also a long discussion-- you can find this all over the web-- of, for example, can an algorithm hire better than a human being. So if you're a big company and you have a lot of people that you're trying to hire for various jobs, it's very tempting to say, hey, I've made lots and lots of hiring decisions and we have some outcome data. I know which people have turned out to be good employees and which people have turned out to be bad employees, and therefore, we can base a first-cut screening method on learning such an algorithm and using it on people who apply for jobs and say, OK, these are the ones that we're going to interview and maybe hire because they look like they're a better bet. Now, I have to tell you a personal story. When I was an undergraduate at Caltech, the Caltech faculty decided that they wanted to include student members of all the faculty committees. And so I was lucky enough that I served for three years as a member of the Undergraduate Admissions Committee at Caltech. And in those days, Caltech only took about 220, 230 students a year. It's a very small school. 
And we would actually fly around the country and interview about the top half of all the applicants in the applicant pool. So we would talk not only to the students but also to their teachers and their counselors and see what the environment was like, and I think we got a very good sense of how good a student was likely to be based on that. So one day, after the admissions decisions have been made, one of the professors, kind of as a thought experiment, said here's what we ought to do. We ought to take the 230 people that we've just offered admission to and we should reject them all and take the next 230 people, and then see whether the faculty notices. Because it seemed like a fairly flat distribution. Now, of course, I and others argued that this would be unfair and unethical and would be a waste of all the time that we had put into selecting these people, so we didn't do that. But then this guy went out and he looked at the data we had on people's ranking class, SAT scores, grade point average, the checkmarks on their recommendation letters about whether they were truly exceptional or merely outstanding. And he built a linear regression model that predicted the person's sophomore level grade point average, which seemed like a reasonable thing to try to predict. And he got a reasonably good fit, but what was disturbing about it is that in the Caltech population of students, it turned out that the beta for your SAT English performance was negative. So if you did particularly well in English on the SAT, you were likely to do worse as a sophomore at Caltech than if you didn't do as well. And so we thought about that a lot, and of course, we decided that that would be really unfair to penalize somebody for being good at something, especially when the school had this philosophical orientation that said we ought to look for people with broad educations. So that's just an example. And more, Science Friday had a nice show that you can listen to about this issue. So let me ask you, what do you mean by fairness? If we're going to define the concept, what is fair? What characteristics would you like to have an algorithm have that judges you for some particular purpose? Yeah? AUDIENCE: It's impossible to pin down sort of, at least might in my opinion, one specific definition, but for the pre-trial success rate for example, I think having the error rates be similar across populations, across the covariants you might care about, for example, fairness, I think is a good start. PETER SZOLOVITS: OK, so similar error rates is definitely one of the criteria that people use in talking about fairness. And you'll see later Irene-- where's Irene? Right there. Irene is a master of that notion of fairness. Yeah? AUDIENCE: When the model says some sort of observation that causally shouldn't be true, and what I want society to look like PETER SZOLOVITS: So I'm not sure how to capture that in a short phrase. Societal goals. But that's tricky, right? I mean, suppose that I would like it to be the case that the fraction of people of different ethnicity who are criminals should be the same. That seems like a good goal for fairness. How do I achieve that? I mean, I could pretend that it's the same, but it isn't the same today objectively, and the data wouldn't support that. So that's an issue. Yeah? AUDIENCE: People who are similar should be treated similarly, so engaged sort of independent of the [INAUDIBLE] attributes or independent of your covariate. PETER SZOLOVITS: Similar people should lead to similar treatment. 
Yeah, I like that. AUDIENCE: I didn't make it up. PETER SZOLOVITS: I know. It's another of the classic sort of notions of fairness. That puts a lot of weight on the distance function, right? In what way are to people similar? And what characteristics-- you obviously don't want to use the sensitive characteristics, the forbidden characteristics in order to decide similarity, because then people will be dissimilar in ways that you don't want, but defining that function is a challenge. All right, well, let me show you a more technical approach to thinking about this. So we all know about biases like selection bias, sampling bias, reporting bias, et cetera. These are in the conventional sense of the term bias. But I'll show you an example that I got involved in. Raj Manrai was a MIT Harvard HST student, and he started looking at the question of the genetics that was used in order to determine whether somebody is at risk for cardiomyopathy, hypertrophic cardiomyopathy. That's a big word. It means that your heart gets too big and it becomes sort of flabby and it stops pumping well, and eventually, you die of this disease at a relatively young age, if, in fact, you have it. So what happened is that there was a study that was done mostly with European populations where they discovered that a lot of people who had this disease had a certain genetic variant. And they said, well, that must be the cause of this disease, and so it became accepted wisdom that if you had that genetic variant, people would counsel you to not plan on living a long life. And this has all kinds of consequences. Imagine if you're thinking about having a kid when you're in your early 40s, and your life expectancy is 55. Would you want to die when you have a teenager that you leave to your spouse? So this was a consequential set of decisions that people have to make. Now, what happened is that in the US, there were tests of this sort done, but the problem was that a lot of African and African-American populations turned out to have this genetic variant frequently without developing this terrible disease, but they were all told that they were going to die, basically. And it was only after years when people noticed that these people who were supposed to die genetically weren't dying that they said, maybe we misunderstood something. And what they misunderstood was that the population that was used to develop the model was a European ancestry population and not an African ancestry population. So you go, well, we must have learned that lesson. So this paper was published in 2016, and this was one of the first in this area. Here's a paper that was published three weeks ago in Nature Scientific Reports that says, genetic risk factors identified in populations of European descent do not improve the prediction of osteoporotic fracture and bone mineral density in Chinese populations. So it's the same story. It's exactly the same story. Different disease, the consequence is probably less dire because being told that you're going to break your bones when you're old is not as bad as being told that your heart's going to stop working when you're in your 50s, but there we have it. OK, so technically, where does bias come from? Well, I mentioned the standard sources, but here is an interesting analysis. 
This comes from Constantine Aliferis from a number of years ago, 2006, and he says, well, look, in a perfect world, if I give you a data set, there's an uncountably infinite number of models that might possibly explain the relationships in that data. I cannot enumerate an uncountable number of models, and so what I'm going to do is choose some family of models to try to fit, and then I'm going to use some fitting technique, like stochastic gradient descent or something, that will find maybe a global optimum, but maybe not. Maybe it'll find the local optimum. And then there is noise. And so his observation is that if you count O as the optimal possible model over all possible model families, and if you count L as the best model that's learnable by a particular learning mechanism, and you call A the actual model that's learned, then the bias is essentially O minus L, so its limitation of learning method related to the target model. The variance is like L minus A, it's the error that's due to the particular way in which you learned things, like sampling and so on, and you can estimate the significance of differences between different models by just permuting the data, randomizing, essentially, the relationships in the data. And then you get a curve of performance of those models, and if yours lies outside the 95% confidence interval, then you have a P equal 0.05 result that this model is not random. So that's the typical way of going about this. Now, you might say, but isn't discrimination the very reason we do machine learning? Not discrimination in the legal sense, but discrimination in the sense of separating different populations. And so you could say, well, yes, but some basis for differentiation are justified and some basis for differentiation are not justified. So they're either practically irrelevant, or we decide for societal goals that we want them to be irrelevant and we're not going to take them into account. So one lesson from people who have studied this for a while is that discrimination is domain specific. So you can't define a universal notion of what it means to discriminate because it's very much tied to these questions of what is practically and morally irrelevant in the decisions that you're making. And so it's going to be different in criminal law than it is in medicine, than it is in hiring, than it is in various other fields, college admissions, for example. And it's feature-specific as well, so you have to take the individual features into account. Well, historically, the government has tried to regulate these domains, and so credit is regulated by the Equal Credit Opportunity Act, education by the Civil Rights Act and various amendments, employment by the Civil Rights Act, housing by the Fair Housing Act, public accommodation by the Civil Rights Act, more recently, marriage is regulated originally by the Defense of Marriage Act, which as you might tell from its title, was against things like people being able to marry who were not a traditional marriage that they wanted to defend, but it was struck down by the Supreme Court about six years ago as being discriminatory. It's interesting, if you look back to probably before you guys were born in 1967, until 1967, it was illegal for an African-American and a white to marry each other in Virginia. It was literally illegal. If you went to get a marriage license, you were denied, and if you got married out of state and came back, you could be arrested. This happened much later. 
Trevor Noah, if you know him from The Daily Show, wrote a book called Born a Crime, I think, and his father is white Swiss guy and his mother is a South African black, and so it was literally illegal for him to exist under the apartheid laws that they had. He had to pretend to be-- his mother was his caretaker rather than his mother in order to be able to go out in public, because otherwise, they would get arrested. So this has recently, of course, also disappeared, but these are some of the regulatory issues. So here are some of the legally recognized protected classes, race, color, sex, religion, national origin, citizenship, age, pregnancy, familial status, disability, veteran status, and more recently, sexual orientation in certain jurisdictions, but not everywhere around the country. OK, so given those examples, there are two legal doctrines about discrimination, and one of them talks about disparate treatment, which is sort of related to this one. And the other talks about disparate impact and says, no matter what the mechanism is, if the outcome is very different for different racial groups typically or gender groups, then there is prima facie evidence that there is something not right, that there is some sort of discrimination. Now, the problem is, how do you defend yourself against, for example, a disparate impact argument? Well, you say, in order to be disparate impact that's illegal, it has to be unjustified or avoidable. So for example, suppose I'm trying to hire people to climb 50-story buildings that are under construction, and you apply, but it turns out you have a medical condition which is that you get dizzy at times, I might say, you know what, I don't want to hire you, because I don't want you plopping off the 50th floor of a building that's under construction, and that's probably a reasonable defense. If I brought suit against you and said, hey, you're discriminating against me on the basis of this medical disability, a perfectly good defense is, yeah, it's true, but it's relevant to the job. So that's one way of dealing with it. Now, how do you demonstrate disparate impact? Well, the court has decided that you need to be able to show about a 20% difference in order to call something disparate impact. So the question, of course, is can we change our hiring policies or whatever policies we're using in order to achieve the same goals, but with less of a disparity in the impact. So that's the challenge. Now, what's interesting is that disparate treatment and disparate impact are really in conflict with each other. And you'll find that this is true in almost everything in this domain. So disparate impact is about distributive justice and minimizing equality of outcome. Disparate treatment is about procedural fairness and equality of opportunity, and those don't always mesh. In other words, it may well be that equality of opportunity still leads to differences in outcome, and you can't square that circle easily. Well, there's a lot of discrimination that keeps persisting. There's plenty of evidence in the literature. And one of the problems is that, for example, take an issue like the disparity between different races or different ethnicities. 
It turns out that we don't have a nicely balanced set where the number of people of European descent is equal to the number of people of African-American, or Hispanic, or Asian, or whatever population you choose, and therefore, we tend to know a lot more about the majority class than we know about these minority classes, and just that additional data and that additional knowledge might mean that we're able to reduce the error rate simply because we have a larger sample size. OK, so if you want to formalize this, this is Moritz Hardt's part of the tutorial that I'm stealing from in this talk. This was given at KDD about a year and a half ago, I think. And Moritz is a professor at Berkeley who actually teaches an entire semester-long course on fairness in machine learning, so there's a lot of material here. And so he formalizes the problem this way. He says, look, a decision problem, a model, in our terms, is that we have some X, which is the set of features we know about an individual, and we have some set A, which is the set of protected features, like your race, or your gender, or your age, or whatever it is we're trying to prevent from discriminating on, and then we have either a classifier or some score or predictive function, R, that's a function of X and A in either case, and then we have some Y, which is the outcome that we're interested in predicting. So now you can begin to tease apart some different notions of fairness by looking at the relationships between these elements. So there are three criteria that appear in the literature. One of them is the notion of independence of the scoring function from sensitive attributes. So this says that R is independent from A. Remember, on the previous slide, I said that R is a function of X and A, so obviously, that criterion says that it can't actually depend on A at all. Another notion is separation of the score and the sensitive attribute given the outcome. So this is the one that says the different groups are going to be treated similarly. In other words, if I tell you the outcome group-- the people who did well at the job and the people who did poorly at the job-- then the scoring function is independent of the protected attribute. So that allows a little more wiggle room because it says that the protected attribute can still predict something about the outcome, it's just that you can't use it in the scoring function given which outcome category that individual belongs to. And then sufficiency is the inverse of that. It says that given the scoring function, the outcome is independent of the protected attribute. So that says, can we build a fair scoring function that separates the outcome from the protected attribute? So here's some detail on those. If you look at independence-- this is also called by various other names-- basically, what it says is that the probability of a particular result, R equals 1, is the same whether you're in class A or class B of the protected attribute. So what does that tell you? That tells you that the scoring function has to be universal over the entire data set and has to not distinguish between people in class A versus class B. That's a pretty strong requirement. And then you can operationalize the notion of unfairness either by looking for an absolute difference between those probabilities.
If it's greater than some epsilon, then you have evidence that this is not a fair scoring function; or a ratio test that says, we look at the ratio, and if it differs from 1 significantly, then you have evidence that this is an unfair scoring function. And by the way, this relates to the 4/5 rule, because if you make epsilon 20%, then that's the same as the 4/5 rule. Now, there are problems with this notion of independence. So it only requires equal rates of decisions for hiring, or giving somebody a liver for transplant, or whatever topic you're interested in. And so what if hiring is based on a good score in group A, but random in B? So for example, what if we know a lot more information about group A than we do about group B, so we have a better way of scoring them than we do of scoring group B. So you might wind up with a situation where you wind up hiring the same number of people, the same ratio of people in both groups, but in one group, you've done a good job of selecting out the good candidates, and in the other group, you've essentially done it at random. Well, the outcomes are likely to be better for group A than for group B, which means that you're developing more data for the future that says, we really ought to be hiring people in group A because they have better outcomes. So there's this feedback loop. Or alternatively-- well, of course, it could be caused by malice also. I could just decide as a hiring manager I'm not hiring enough African-Americans, so I'm just going to take some random sample of African-Americans and hire them, and then maybe they'll do badly, and then I'll have more data to demonstrate that this was a bad idea. So that would be malicious. There's also a technical problem, which is that it's possible that the group is a perfect predictor of the outcome, in which case, of course, they can't be separated. They can't be independent of each other. Now, how do you achieve independence? Well, there are a number of different techniques. One of them is-- there's this article by Zemel about learning fair representations, and what it says is you create a new representation, Z, which is some combination of X and A, and you do this by maximizing the mutual information between X and Z and by minimizing the mutual information between A and Z. So this is an idea that I've seen used in machine learning for robustness rather than for fairness, where people say, the problem is that given a particular data set, you can overfit to that data set, and so one of the ideas is to do a GAN-like method where you say, I want to train my classifier, let's say, not only to work well on getting the right answer, but also to work as poorly as possible on identifying which data set my example came from. So this is the same sort of idea. It's a representation learning idea. And then you build your predictor, R, based on this representation, which is perhaps not perfectly independent of the protected attribute, but is as independent as possible. And usually, there are knobs in these learning algorithms, and depending on how you turn the knob, you can affect whether you're going to get a better classifier that's more discriminatory or a worse classifier that's less discriminatory. So you can do that in pre-processing.
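Before moving on to the other two places you can intervene-- in the loss function during training and in post-processing-- here is a minimal sketch of how the three criteria might be checked empirically on held-out data, assuming a binary decision or real-valued score R, a binary 0/1 outcome Y, and a discrete protected attribute A. The function names, the pairwise group comparisons, and the quantile binning of scores for sufficiency are illustrative choices, not anything prescribed by the tutorial.

```python
import numpy as np

def group_rates(values, groups):
    """Mean of `values` within each protected-attribute group,
    e.g. P(R = 1 | A = g) when `values` are 0/1 decisions."""
    values, groups = np.asarray(values, dtype=float), np.asarray(groups)
    return {g: values[groups == g].mean() for g in np.unique(groups)}

def independence_gaps(decisions, groups):
    """Independence (R independent of A): compare selection rates across groups.
    Returns, per pair of groups, the absolute gap and the ratio of rates
    (the epsilon test and, with a 0.8 cutoff, the 4/5 rule)."""
    rates = group_rates(decisions, groups)
    gs = sorted(rates)
    return {(g, h): (abs(rates[g] - rates[h]),
                     min(rates[g], rates[h]) / max(rates[g], rates[h]))
            for i, g in enumerate(gs) for h in gs[i + 1:]}

def separation_rates(decisions, groups, outcomes):
    """Separation (R independent of A given Y): selection rates per group
    within each true outcome; compare them across groups for each Y."""
    decisions, groups, outcomes = map(np.asarray, (decisions, groups, outcomes))
    return {y: group_rates(decisions[outcomes == y], groups[outcomes == y])
            for y in np.unique(outcomes)}

def sufficiency_rates(scores, groups, outcomes, n_bins=10):
    """Sufficiency (Y independent of A given R): outcome rates per group
    within each score bin; compare them across groups for each bin."""
    scores, groups, outcomes = map(np.asarray, (scores, groups, outcomes))
    edges = np.quantile(scores, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.digitize(scores, edges[1:-1]), 0, n_bins - 1)
    return {b: group_rates(outcomes[bins == b], groups[bins == b])
            for b in np.unique(bins)}
```

For example, independence_gaps(decisions, race) returns, for each pair of groups, the absolute gap in selection rates (the epsilon test) and their ratio, which with a 0.8 threshold is exactly the 4/5 rule.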
You can do some kind of incorporating in the loss function a dependence notion or an independence notion and say, we're going to train on a particular data set, imposing this notion of wanting this independence between A and R as part of our desiderata. And so you, again, are making trade-offs against other characteristics. Or you can do post-processing. So suppose I've built an optimal R, not worrying about discrimination, then I can do another learning problem that says I'm now going to build a new F, which takes R and the protected attribute into account, and it's going to minimize the cost of misclassifications. And again, there's a knob where you can say, how much do I want to emphasize misclassifications for the protected attribute or based on the protected attribute? So this was still talking about independence. The next notion is separation, that says given the outcome, I want to separate A and R. So that graphical model shows that the protected attribute is only related to the scoring function through the outcome. So there's nothing else that you can learn from one to the other than through the outcome. So this recognizes that the protected attribute may, in fact, be correlated with the target variable. An example might be different success rates in a drug trial for different ethnic populations. There are now some cardiac drugs where the manufacturer has determined that this drug works much better in certain subpopulations than it does in other populations, and the FDA has actually approved the marketing of that drug to those subpopulations. So you're not supposed to market it to the people for whom it doesn't work as well, but you're allowed to market it specifically for the people for whom it does work well. And if you think about the personalized medicine idea, which we've talked about earlier. The populations that we're interested in becomes smaller and smaller until it may just be you. And so there might be a drug that works for you and not for anybody else in the class, but it's exactly the right drug for you, and we may get to the point where that will happen and where we can build such drugs and where we can approve their use in human populations. Now, the idea here is that if I have two populations, blue and green, and I draw ROC curves for both of these populations, they're not going to be the same, because the drug will work differently for those two populations. But on the other hand, I can draw them on the same axes, and I can say, look any place within this colored region can be a fair region in that I'm going to get the same outcome for both populations. So I can't achieve this outcome for the blue population or this outcome for the green population, but I can achieve any of these outcomes for both populations simultaneously. And so that's one way of going about satisfying this requirement when it is not easily satisfied. So the advantage of separation over independence is that it allows correlation between R and Y, even a perfect predictor, so R could be a perfect predictor for Y. And it gives you incentives to learn to reduce the errors in all groups. So that issue about randomly choosing members of the minority group doesn't work here because that would suppress the ROC curve to the point where there would be no feasible region that you would like. So for example, if it's a coin flip, then you'd have the diagonal line and the only feasible region would be below that diagonal, no matter how good the predictor was for the other class. 
So that's a nice characteristic. And then the final criterion is sufficiency, which flips R and Y. So it says that the regressor or the predictive variable can depend on the protected class, but the protected class is separated from the outcome. So for example, the probability in a binary case of a true outcome of Y given that R is some particular value, R and A is a particular class, is the same as the probability of that same outcome given the same R value, but the different class. So that's related to the sort of similar people, similar treatment notion, qualitative notion, again. So it requires a parody of both the positive and the negative predictive values across different groups. So that's another popular way of looking at this. So for example, if the scoring function is a probability, or the set of all instances assigned the score R has an R fraction of positive instances among them, then the scoring function is said to be well-calibrated. So we've talked about that before in the class. If it turns out that R is not well-calibrated, you can hack it and you can make it well-calibrated by putting it through a logistic function that will then approximate the appropriately calibrated score, and then you hope that that calibration will give-- or the degree of calibration will give you a good approximation to this notion of sufficiency. These guys in the tutorial also point out that some data sets actually lead to good calibration without even trying very hard. So for example, this is the UCI census data set, and it's a binary prediction of whether somebody makes more than $50,000 a year if you have any income at all and if you're over 16 years old. And the feature, there are 14 features, age, type of work, weight of sample is some statistical hack from the Census Bureau, your education level, marital status, et cetera, and what you see is that the calibration for males and females is pretty decent. It's almost exactly along the 45 degree line without having done anything particularly dramatic in order to achieve that. On the other hand, if you look at the calibration curve by race for whites versus blacks, the whites, not surprisingly, are reasonably well-calibrated, and the blacks are not as well-calibrated. So you could imagine building some kind of a transformation function to improve that calibration, and that would get you separation. Now, there's a terrible piece of news, which is that you can prove, as they do in this tutorial, that it's not possible to jointly achieve any pair of these conditions. So you have three reasonable technical notions of what fairness means, and they're incompatible with each other except in some trivial cases. This is not good. And I'm not going to have time to go into it, but there's a very nice thing from Google where they illustrate the results of adopting one or another of these notions of fairness on a synthesized population of people, and you can see how the trade-offs vary and what the results are of choosing different notions of fairness. So it's a kind of nice graphical hack. Again, it'll be on the slides, and I urge you to check that out, but I'm not going to have time to go into it. There is one other problem that they point out which is interesting. So this was a scenario where you're trying to hire computer programmers, and you don't want to take gender into account because we know that women are underrepresented among computer people, and so we would like that not to be an allowed attribute in order to decide to hire someone. 
So they say, well, there are two scenarios. One of them is that gender, A, influences whether you're a programmer or not. And this is empirically true. There are fewer women who are programmers. It turns out that visiting Pinterest is slightly more common among women than men. Who knew? And then visiting GitHub is much more common among programmers than among non-programmers. That one's pretty obvious. So what they say is, if you want an optimal predictor of whether somebody's going to get hired, it should actually take both Pinterest visits and GitHub visits into account, but because those go back to gender, which is an unusable attribute, they don't like this model. And so they say, well, we could use an optimal separated score, because now, being a programmer separates your gender from the scoring function. And so we can create a different score which is not the same as the optimal score, but is permitted because it's no longer dependent on your sex, on your gender. Here's another scenario that, again, starts with gender and says, look, we know that there are more men than women who obtain college degrees in computer science, and so there's an influence there, and computer scientists are much more likely to be programmers than non-computer science majors. If you're were a woman-- has anybody visited the Grace Murray Hopper Conference? A couple, a few of you. So this is a really cool conference. Grace Murray Hopper invented the notion bug or the term bug and was a really famous computer scientist starting back in the 1940s when there were very few of them, and there is a yearly conference for women computer scientists in her honor. So clearly, the probability that you visited the Grace Hopper Conference is dependent on your gender. It's also dependent on whether you're a computer scientist, because if you're a historian, you're not likely to be interested in going to that conference. And so in this story, the optimal score is going to depend basically on whether you have a computer science degree or not, but the separated score will depend only on your gender, which is kind of funny, because that's the protected attribute. And what these guys point out is that despite the fact that you have these two scenarios, it could well turn out that the numerical data, the statistics from which you estimate these models are absolutely identical. In other words, the same fraction of people are men and women, the same fraction of people are programmers, they have the same relationship to those other factors, and so from a purely observational viewpoint, you can't tell which of these styles of model is correct or which version of fairness your data can support. So that's a problem because we know that these different notions of fairness are in conflict with each other. So I wanted to finish by showing you a couple of examples. So this was a paper based on Irene's work. So Irene, shout if I'm butchering the discussion. I got an invitation last year from the American Medical Association's Journal of Ethics, which I didn't know existed, to write a think piece for them about fairness in machine learning, and I decided that rather than just bloviate, I wanted to present some real work, and Irene had been doing some real work. And so Marcia, who was one of my students, and I convinced her to get into this, and we started looking at the question of how these machine learning models can identify and perhaps reduce disparities in general medical and mental health. Now, why those two areas? 
Because we had access to data in those areas. So the general medical was actually not that general. It's intensive care data from MIMIC, and mental health care is some data that we had access to from Mass General and McLean's hospital here in Boston, which both have big psychiatric clinics. So yeah, this is what I just said. So the question we were asking is, is there bias based on race, gender, and insurance type? So we were really interested in socioeconomic status, but we didn't have that in the database, but the type of insurance you have correlates pretty well with whether you're rich or poor. If you have Medicaid insurance, for example, you're poor, and if you have private insurance, the first approximation, you're rich. So we did that, and then we looked at the notes. So we wanted to see not the coded data, but whether the things that nurses and doctors said about you as you were in the hospital were predictive of readmission, of 30-day readmission, of whether you were likely to come back to the hospital. So these are some of the topics. We used LDA, standard topic modeling framework. And the topics, as usual, include some garbage, but also include a lot of recognizably useful topics. So for example, mass, cancer, metastatic, clearly associated with cancer, Afib, atrial, Coumadin, fibrillation, associated with heart function, et cetera, in the ICU domain. In the psychiatric domain, you have things like bipolar, lithium, manic episode, clearly associated with bipolar disease, pain, chronic, milligrams, the drug quantity, associated with chronic pain, et cetera. So these were the topics that we used. And so we said, what happens when you look at the different topics, how often the different topics arise in different subpopulations? And so what we found is that, for example, white patients have more topics that are enriched for anxiety and chronic pain, whereas black, Hispanic, and Asian patients had higher topic enrichment for psychosis. It's interesting. Male patients had more substance abuse problems. Female patients had more general depression and treatment-resistant depression. So if you want to create a stereotype, men are druggies and women are depressed, according to this data. What about insurance type? Well, private insurance patients had higher levels of anxiety and depression, and poorer patients or public insurance patients had more problems with substance abuse. Again, another stereotype that you could form. And then you could look at-- that was in the psychiatric population. In the ICU population, men still have substance abuse problems. Women have more pulmonary disease. And we were speculating on how this relates to sort of known data about underdiagnosis of COPD in women. By race, Asian patients have a lot of discussion of cancer, black patients have a lot of discussion of kidney problems, Hispanics of liver problems, and whites have atrial fibrillation. So again, stereotypes of what's most common in these different groups. And by insurance type, those with public insurance often have multiple chronic conditions. And so public insurance patients have atrial fibrillation, pacemakers, dialysis. These are indications of chronic heart disease and chronic kidney disease. And private insurance patients have higher topic enrichment values for fractures. So maybe they're richer, they play more sports and break their arms or something. Lymphoma and aneurysms. Just reporting the data. Just the facts. 
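A rough sketch of how that kind of topic-enrichment comparison could be computed (not the actual pipeline from the paper): fit a topic model to the free-text notes, then compare the average topic weight across demographic groups. The vectorizer settings, the number of topics, and the variable names below are placeholder assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def topic_enrichment(notes, groups, n_topics=50):
    """Fit LDA on a list of note strings and return, for each group,
    the mean topic weight per topic for later comparison across groups."""
    counts = CountVectorizer(max_features=5000, stop_words="english").fit_transform(notes)
    doc_topics = LatentDirichletAllocation(n_components=n_topics,
                                           random_state=0).fit_transform(counts)
    groups = np.asarray(groups)
    return {g: doc_topics[groups == g].mean(axis=0) for g in np.unique(groups)}
```

The per-group means can then be compared topic by topic, which is roughly the kind of enrichment summary described above.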
So these results are actually consistent with lots of analysis that have been done of this kind of data. Now, what I really wanted to look at was this question of, can we get similar error rates, or how similar are the error rates that we get, and the answer is, not so much. So for example, if you look at the ICU data, we find that the error rates on a zero-one loss metric are much lower for men than they are for women, statistically significantly lower. So we're able to more accurately model male response or male prediction of 30-day readmission than we are-- sorry, of ICU mortality for the ICU than we are for women. Similarly, we have much tighter ability to predict outcomes for private insurance patients than for public insurance patients with a huge gap in the confidence intervals between them. So this indicates that there is, in fact, a racial bias in the data that we have and in the models that we're building. These are particularly simple models. In psychiatry, when you look at the comparison for different ethnic populations, you see a fair amount of overlap. One reason we speculate is that we have a lot less data about psychiatric patients than we do about ICU patients. So the models are not going to give us as accurate predictions. But you still see, for example, a statistically significant difference between blacks and whites and other races, although there's a lot of overlap here. Again, between males and females, we get fewer errors in making predictions for males, but there is not a 95% confidence separation between them. And for private versus public insurance, we do see that separation where for some reason, in fact, we're able to make better predictions for the people on Medicare than we are-- or Medicaid than we are for patients in private insurance. So just to wrap that up, this is not a solution to the problem, but it's an examination of the problem. And this Journal of Ethics considered it interesting enough to publish just a couple of months ago. The last thing I want to talk about is some work of Willie's, so I'm taking the risk of speaking before the people who actually did the work here and embarrassing myself. So this is modeling mistrust in end-of-life care, and it's based on Willie's master's thesis and on some papers that came as a result of that. So here's the interesting data. If you look at African-American patients, and these are patients in the MIMIC data set, what you find is that for mechanical ventilation, blacks are on mechanical ventilation a lot longer than whites on average, and there's a pretty decent separation at the P equal 0.05 level, so 1/2% level between those two populations. So there's something going on where black patients are kept on mechanical ventilation longer than white patients. Now, of course, we don't know exactly why. We don't know whether it's because there is a physiological difference, or because it has something to do with their insurance, or because God knows. It could be any of a lot of different factors, but that's the case. The eICU data set we've mentioned, it's a larger, but less detailed data set, also of intensive care patients, that was donated to Roger Marks' Lab by Phillips Corporation. And there, we see, again, a separation of mechanical ventilation duration roughly comparable to what we saw in the MIMIC data set. So these are consistent with each other. 
On the other hand, if you look at the use of vasopressors, blacks versus whites, at the P equal 0.12 level, you say, well, there's a little bit of evidence, but not strong enough to reach any conclusions. Or in the eICU data, P equal 0.42 is clearly quite insignificant, so we're not making any claims there. So the question that Willie was asking, which I think is a really good question, is, could this difference be due not to physiological differences or even these sort of socioeconomic or social differences, but to a difference in the degree of trust between the patient and their doctors? It's an interesting idea. And of course, I wouldn't be telling you about this if the answer were no. And so the approach that he took was to look for cases where there's clearly mistrust. So there are red flags if you read the notes. For example, if a patient leaves the hospital against medical advice, that is a pretty good indication that they don't trust the medical system. If the family-- if the person dies and the family refuses to allow them to do an autopsy, this is another indication that maybe they don't trust the medical system. So there are these sort of red letter indicators of mistrust. For example, patient refused to sign ICU consent and expressed wishes to be do not resuscitate, do not intubate, seemingly very frustrated and mistrusting of the health care system, also with a history of poor medication compliance and follow-up. So that's a pretty clear indication. And you can build a relatively simple extraction or interpretation model that identifies those clear cases. This is what I was saying about autopsies. So the problem, of course, is that not every patient has such an obvious label. In fact, most of them don't. And so Willie's idea was, can we learn a model from these obvious examples and then apply them to the less obvious examples in order to get a kind of a bronze standard or remote supervision notion of a larger population that has a tendency to be mistrustful according to our model without having as explicit a clear case of mistrust, as in those examples. And so if you look at chart events in MIMIC, for example, you discover that associated with those cases of obvious mistrust are features like the person was in restraints. They were literally locked down to their bed because the nurses were afraid they would get up and do something bad. Not necessarily like attack a nurse, but more like fall out of bed or go wandering off the floor or something like that. If a person is in pain, that correlated with these mistrust measures as well. And conversely, if you saw that somebody had their hair washed or that there was a discussion of their status and comfort, then they were probably less likely to be mistrustful of the system. And so the approach that Willie took was to say, well, let's code these 620 binary indicators of trust and build a logistic regression model to the labeled examples and then apply it to the unlabeled examples of people for whom we don't have such a clear indication, and this gives us another population of people who are likely to be mistrustful and therefore, enough people that we can do further analysis on it. So if you look at the mistrust metrics, you have things like if the patient is agitated on some agitation scale, they're more likely to be mistrustful. If, conversely, they're alert, they're less likely to be mistrustful. So that means they're in some better mental shape. If they're not in pain, they're less likely to be mistrustful, et cetera. 
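Backing up to the modeling step for a moment, a minimal sketch of that distant-supervision idea might look like the following: the patients with an explicit red-flag label provide the training set, and the fitted model scores everyone else. The variable names and the use of an off-the-shelf logistic regression here are assumptions for illustration, not the actual code from the thesis.

```python
from sklearn.linear_model import LogisticRegression

def bronze_standard_mistrust(X_labeled, y_labeled, X_unlabeled):
    """Fit a logistic regression on the binary chart-event indicators for
    patients with an explicit mistrust signal (left against medical advice,
    autopsy refused, ...), then score the unlabeled patients to get a
    'bronze standard' mistrust measure for the rest of the population."""
    model = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
    return model.predict_proba(X_unlabeled)[:, 1]
```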
And if the patient was restrained, they're more likely to be mistrustful. Trustful patients have no pain, or they have a spokesperson who is their health care proxy, or there is a lot of family communication; but conversely, if restraints had to be reapplied, or if there are various other factors, then they're more likely to be mistrustful. So if you look at that prediction, what you find is that for both predicting the use of mechanical ventilation and vasopressors, the disparity between a population of black and white patients is actually less significant than the disparity between a population of high-trust and low-trust patients. So what this suggests is that the fundamental feature here that may be leading to that difference is, in fact, not race, but is something that correlates with race, because blacks are more likely to be distrustful of the medical system than whites. Now, why might that be? What do you know about history? I mean, you took the CITI training course that had you read the Belmont Report talking about things like the Tuskegee experiment. I'm sure that leaves a significant impression in people's minds about how the health care system is going to treat people of their race. I'm Jewish. My mother barely lived through Auschwitz, and so I understand some of the strong family feelings that happened as a result of some of these historical events. And there were medical people doing experiments on prisoners in the concentration camps as well, so I would expect that people in my situation might also have similar issues of mistrust. Now, it turns out, you might ask, well, is mistrust, in fact, just a proxy for severity? Are sicker people simply more mistrustful, and is what we're seeing just a reflection of the fact that they're sicker? And the answer seems to be, not so much. So if you look at these severity scores like OASIS and SAPS and look at their correlation with the noncompliance and autopsy indicators, those are pretty low correlation values, so they're not explanatory of this phenomenon. And then in the population, you see that, again, there is a significant difference in sentiment expressed in the notes between black and white patients. The autopsy-derived mistrust metrics don't show a strong relationship, a strong difference between them, but the noncompliance-derived mistrust metrics do. So I'm out of time. I'll just leave you with a final word. There is a lot more work that needs to be done in this area, and it's a very rich area both for technical work and for trying to understand what the desiderata are and how to match them to the technical capabilities. There are these various conferences. One pair of people active in this area, Mike Kearns and Aaron Roth at Penn, are coming out with a book called The Ethical Algorithm, due out this fall. It's a popular press book. I've not read it, but it looks like it should be quite interesting. And then we're starting to see whole classes in fairness popping up at different universities. The University of Pennsylvania has The Science of Data Ethics, and I've mentioned already this fairness in machine learning class at Berkeley. This is, in fact, one of the topics we've talked about. I'm on a committee that is planning the activities of the new Schwarzman College of Computing, and this notion of infusing ideas about fairness and ethics into the technical curriculum is one of the things that we've been discussing.
The college obviously hasn't started yet, so we don't have anything other than this lecture and a few other things like that in the works, but the plan is there to expand more in this area.
Stanford_CS236_Deep_Generative_Models_I_2023_I_Stefano_Ermon
Stanford_CS236_Deep_Generative_Models_I_2023_I_Lecture_11_Energy_Based_Models.txt
So the plan for today is to talk about energy-based models. It's going to be another family of generative models that is closely related to diffusion models, which is what we're going to talk about next. So as a recap, remember, this is the high-level picture, which I think summarizes the design space pretty well. When you're trying to build a generative model, you have data coming from some unknown data distribution and you have IID samples from it. You always need to define some kind of model family. And then you need to define a loss function that basically tells you how good your model is compared to the data distribution. And we've seen that likelihood or KL divergence is a very reasonable approach. And that's pretty natural to use with autoregressive models, normalizing flow models, and to some extent, variational autoencoders, because they give you ways to either exactly or approximately evaluate the probability of a data point. And so you can sort of score how close p theta is to the data distribution by basically computing the KL divergence up to a constant, which is just the likelihood assigned by the model to the data, which is like a compression-based type of objective. And as we know, maximum likelihood training is very good. It's a very principled way of training models. But you always have some kind of restrictions in terms of, OK, how do you define this set of probability distributions. You cannot pick an arbitrary neural network that takes as input the different x's, like the data points, and maps them to a scalar. It has to be a valid probability density or a valid probability mass function. And so in order to do that, you have to either use chain rule and break it down into a product of conditionals. Or you have to use some kind of invertible neural network to define the distribution. Or you have to deal with approximations and use a variational autoencoder. And then the other approach, or the other extreme, is to try to have as much flexibility as possible in terms of defining the model family. And specifically, we're just going to define the probability distribution implicitly by instead defining the sampling procedure. And the price that you have to pay is that you can no longer basically measure this kind of similarity up here using KL divergence. You have to essentially come up with a training objective that does not require you to evaluate the probability of data points. Essentially, the only thing you have access to at this point is samples from the data and samples from the model. And so you have to come up with some kind of two-sample test, some kind of likelihood-free way of comparing how similar the real samples and the fake samples are. And GANs are one way to do that, where you have this minimax objective, where you're trying to train a generator to produce samples that are hard to distinguish from the real ones, as measured by some discriminator that is trained in the innermost maximization problem to do as well as it can to distinguish real versus fake samples. And we've seen that under some conditions, this is principled in the sense that if you had access to an extremely powerful discriminator, then you could, to some extent, approximate the optimization of an f-divergence or even a Wasserstein distance.
But in practice, although it's true that you can use essentially any architecture you want to define the sampling procedure, training this kind of minimax-- with this minimax objectives is very tricky because we don't have likelihoods. You have to do minimax optimization, which is unstable. It's hard to track progress. It's hard to know whether you have converged or not. It's hard to evaluate whether one model is better than the other because you cannot just look at the loss. And you have mode collapses. And so all sorts of issues that arise in practice when you try to train an adversarial-type model. And so what we're going to see today is another way of defining a model family. So kind of a different way of parameterizing probabilistic models that is called an energy-based model. And what we'll see is that this will allow us to essentially lift all those restrictions that we had on the kind of neural networks you can use to define a valid probability density function or a valid probability mass function. So that's the main benefit of using this kind of energy-based models, extreme flexibility. And to some extent, they will allow us to have some fairly stable training procedure in the sense that it's still going to be based on maximum likelihood or other variants of losses that are fully-- that are taking advantage of the fact that you have a probabilistic model and not just a sampling procedure. And these models tend to work pretty well. They give you fairly high sample quality. And we'll see they are very closely related to diffusion models which are actually state of the art models right now for a lot of continuous modalities like images, videos, and audio, and others. And as another benefit, you can also compose energy-based models in interesting ways. And so we'll see that that's another thing you can do that you can take different kinds of generative models and you can combine them. Because essentially, it's one way of defining an energy-based model and that allows you to do interesting things like, essentially, combining different concepts and combining different kinds of model families together like a flow model and an autoregressive model and we'll see that that's also beneficial in some settings. So the high level motivation is the usual one. We want to define a probabilistic model. So we want to define a probability distribution because that's the key building block. We need to define this green set here that we're optimizing over. And if we can do that, then we can just couple that with a loss function and you get a new kind of generative model. And to some extent, this is just a function. It's a function that takes x as an input where x could be an image, or a sentence, or whatever and maps it to a scalar. So it seems pretty straightforward. But the key thing is that you cannot pick an arbitrary function. This valid probability density functions or probability mass functions in the discrete case, they are a special kind of functions in the sense that they need to satisfy certain constraints. The first one is that they have to be non-negative. So given any input x, the output scalar that you get from this function has to be a non-negative number. And this will say it's not a particularly hard constraint to enforce. The more difficult one is that they have to be normalized. So because we're working with probabilities, it has to be the case that if you look at all the possible things that can happen and you sum up their probabilities, you get 1. 
Or if you're working with continuous random variables, if you integrate, the probability density function over the whole space, you should get 1. And so again, this is basically due to the fact that essentially the probabilities, if you go through all the possible things that can happen, have to sum to 1. And that's a much trickier kind of constraint to enforce. That's really the hard constraint to enforce. And that's because essentially, the reason we have to enforce those strange architectures like autoregressive models or flow models is basically because we have to enforce this normalization constraint. And enforcing that is tricky. And if you take an arbitrary neural network, it's not going to enforce-- it's not going to satisfy that constraint and enforcing that is not it's not straightforward. And so again, if you think about the first constraint, it's not a very hard property to satisfy. It's not hard to come up with a very broad set of families of functions that are guaranteed to be non-negative given any input. And in fact, if you take an arbitrary function, let's say an arbitrary neural network, it's pretty trivial to change it a little bit and make sure that the output is guaranteed to be non-negative. And so one thing you can do is you can take an arbitrary neural network f theta. If you just square the output that it produces, you get a new neural network, g theta, which is also very flexible because it's basically very similar to the f theta that you started with and it's guaranteed to be non-negative given any input. Or you can take the exponential. Again, given an arbitrary neural network f theta, if you just basically add an extra layer at the end which takes the output and passes it through this exponential non-linearity, then you get a new neural network, which is guaranteed to be non-negative. Or you could take the absolute value. Or I'm sure you can cook up many, many other ways of transforming a neural network into one that is just as basically flexible, where you just add a new layer at the end that is guaranteed to make the output non-negative. So that's not hard to do. The tricky part is to guarantee that basically if you go through all the possible inputs that you can feed through this neural network and you sum up the outputs, you get 1. Or if you have a continuous setting where the inputs are continuous, then the integral over all possible inputs to this neural network has to be 1. And I guess one way to think about it is that-- and why this is important if you're building a probabilistic model is that this is basically enforcing that the total probability mass is fixed. So if you're thinking about the role of a probabilistic model as being-- or the meaning of outputting of computing p of x is you're saying, what is the probability that the model assigns to one particular x, which could be an image or it could be some sentence in a language modeling application. The fact that basically the sum of all the probabilities is 1 is enforcing this fact that essentially the total volume is fixed. So if you increase the probability of one data point, you're guaranteed that the probability of something else will have to go down. So the analogy is that there is a cake and you can divide it up in many different ways. But if you make one slice bigger, then the other ones will have to get smaller inevitably. 
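To make the point above concrete, here is a tiny sketch of the "non-negativity is the easy part" idea: take any real-valued function f_theta (a stand-in for an arbitrary neural network, with made-up parameters) and wrap it so the output is guaranteed non-negative. None of these wrapped functions is normalized, which is exactly the hard part discussed next.

```python
import numpy as np

def f_theta(x, w=1.5, b=-0.3):
    # Stand-in for an arbitrary neural network: any real-valued output is fine.
    return np.tanh(w * x + b) + 0.5 * x

# Three ways to get a non-negative g_theta out of the same f_theta:
g_square = lambda x: f_theta(x) ** 2        # g(x) = f(x)^2
g_exp    = lambda x: np.exp(f_theta(x))     # g(x) = exp(f(x))
g_abs    = lambda x: np.abs(f_theta(x))     # g(x) = |f(x)|

xs = np.linspace(-3, 3, 7)
assert (g_square(xs) >= 0).all() and (g_exp(xs) >= 0).all() and (g_abs(xs) >= 0).all()
# Their integrals over x still depend on w and b, so they are not valid densities yet.
```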
And we need this kind of guarantee so that when we increase the probability of the data that we have in the training set, by increasing the slice of the cake that we assign to the samples we like, the ones that are in the training set, we're automatically reducing the probability of other things, which are, in the case of a generative model, the things that basically we don't like. And again, enforcing the non-negativity constraint, which with this analogy is basically saying that the size of each slice is non-negative, is easy. But enforcing this constraint that the volume is fixed-- here in the definition it's 1, but as long as you can keep it fixed, that's fine, because you can always divide by the constant-- enforcing that, regardless of how you choose your parameters theta in your neural network, you're guaranteed that if you sum over all possible inputs, or you take an integral over all possible inputs, and you look at the output values, you get a constant which does not depend on theta, does not depend on the parameters of the neural network-- that's hard. You can always compute what the total unnormalized probability assigned by a neural network is. If you go through all possible inputs and you evaluate the output of the neural network and sum them up, you're going to get a value. But in general, that value is going to depend on theta. It's going to depend on the parameters of your neural network. And it's not going to be 1. It's not going to be something fixed. Unless you choose g theta-- you choose your function family-- in some very special way, like an autoregressive model or an invertible architecture, that's guaranteed by design so that no matter how you choose your parameters, the total mass or the total volume is basically fixed. And the analogy here is, in the discrete case, you sum over all possible inputs; in the continuous case, it's the integral that you have to worry about. And so that's basically the hard constraint to enforce. Somehow what we need to be able to do is come up with a family of functions that are parameterized by theta. Ideally, this function should be as flexible as possible, meaning that you would like to choose essentially an arbitrary neural network, or very deep neural networks, with no kind of constraints on the kind of layers that you can choose. It's easy to enforce that the function is non-negative, but it's very hard to enforce that the volume is fixed to some value. Yeah? I mean, we still need one function that satisfies this-- if you're going to redistribute the pie, we need a starting point? So yeah, basically, that's the idea of energy-based models: as long as you can compute the total area of the pie, the total amount of pie that you have, then you can define an energy-- you can define a valid probabilistic model-- by basically just dividing by that number. And that's basically the idea behind energy-based models. The fact is that given a non-negative function g theta, you can always define a valid probabilistic model by dividing by the total volume, by the total area, the total size of the pie-- by dividing by the integral over all possible inputs of the unnormalized probability. And that defines a valid probability distribution because this object is now normalized.
So if for every theta you can compute these unnormalized probabilities, the size of each slice of the cake, and at the same time you can also compute how big is the cake, and you divide these two, then you get something that is normalized because it's like a ratio. And that is basically the main idea behind energy-based models, is to just bite the bullet and be willing to work with probability density functions or probability mass functions that are defined by normalizing objects that are not necessarily normalized by design. By dividing by this quantity Z theta, which is often called the partition function, this normalization constant, the total volume, the total normal amount of unnormalized probability that we need to divide by to get a valid probabilistic model. And you can see that if you are willing to divide by this Z theta, you can get a valid-- you get an object that is normalized. Because if you integrate the left hand side here and you swap in the definition, which is g theta over the normalization constant, you basically get the integral over all the possible things that can happen in the numerator, the integral of all the possible things that can happen in the denominator. And when you divide them, you get 1 by definition, basically. And so basically, as long as you have a non-negative function, g theta, you can always define a valid normalized probabilistic model by dividing by this normalization constant, by this partition function, by the integral over the scalar that is well defined, which is just the integral over all possible inputs or the sum over all possible inputs in the discrete space of these unnormalized probabilities that you get by just using g theta. And as a few examples that you might have seen before is-- or one way to go about this is to choose functions g theta for which this denominator, this normalizing constant, this partition function is basically known analytically. In general, we might not know. This integral might be tricky to compute. But if we restrict ourselves to relatively simple functions, g theta, we might be able to compute that integral in closed form analytically. For example, if we choose g to have a very simple form, which is just the relationship that you have in a Gaussian PDF, so g is just basically a squared exponential. And g now has two parameters, mu and sigma. And this non-negative function is just a-- or e to the minus x minus mu squared basically is divided by the variance, sigma squared. This function is non-negative. By itself, it's not necessarily normalized. But it's a simple enough function that you can actually compute the integral analytically. We have a closed form solution to that. And the total volume is just the square root of 2 pi sigma squared. And indeed, if you take this expression of g and you divide it by the total volume, you get the Gaussian PDF. So you can think of that strange scaling factor that you have in front of the Gaussian PDF as being basically the total volume that you have to divide for if you want to get a normalized object. Or you could choose, OK, let's choose g to be just an exponential that looks like this. You have a single parameter lambda and g of x is e to the minus lambda x. Again, non-negative function by itself is not necessarily normalized. But you can compute the volume in closed form. It turns out to be just 1 over lambda. And so you can actually get-- if you divide these two things, you get a valid PDF which is the one corresponding to the exponential distribution. 
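A quick numerical check of the two examples just given (an editorial sketch, not lecture code): sum the unnormalized Gaussian and exponential functions on a grid and compare the result to the closed-form volumes, the square root of 2 pi sigma squared and 1 over lambda. The specific parameter values are made up.

```python
import numpy as np

mu, sigma, lam = 0.7, 1.3, 2.0

# Unnormalized Gaussian: g(x) = exp(-(x - mu)^2 / (2 sigma^2))
xs = np.linspace(-20.0, 20.0, 400001)
dx = xs[1] - xs[0]
g_gauss = np.exp(-(xs - mu) ** 2 / (2 * sigma ** 2))
Z_gauss = np.sum(g_gauss) * dx
print(Z_gauss, np.sqrt(2 * np.pi * sigma ** 2))     # both ~3.2587

# Unnormalized exponential: g(x) = exp(-lam * x) for x >= 0
ts = np.linspace(0.0, 50.0, 400001)
dt = ts[1] - ts[0]
Z_expo = np.sum(np.exp(-lam * ts)) * dt
print(Z_expo, 1.0 / lam)                             # both ~0.5

# Dividing g by its volume gives something that integrates to 1 by construction.
print(np.sum(g_gauss / Z_gauss) * dx)                # ~1.0
```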
And more generally, there is a broad family of distributions that have PDFs that basically have this form. It's similar to what we have up here. It's also an exponential of some dot product between a vector of parameters theta and a vector of functions of sufficient statistics T of x. Not super important but it turns out that there is a volume here, which is just, again, the integral of the unnormalized probability. And then if you divide by that quantity, you get this family of distributions that are called the exponential family which captures a lot of known commonly used distributions like normal, Poisson, exponential, Bernoulli, and many more. And so this kind of setting where you start with a non-negative function and you somehow restrict yourself to functional forms that are simple enough that you can compute the integrals analytically, they are pretty powerful in the sense that these are very useful building blocks, useful in many applications. But you can see that you can't choose an arbitrary g. If you choose some really complicated thing or you plug in a neural network, you might not be able to compute that integral analytically. There might not be a closed form for that partition function, the theta, or the total unnormalized probability. And that's where energy-based models come in. How do we go from this kind of setting where everything is simple, handcrafted, it can be solved analytically in closed form to something more flexible where we can start to plug in much more complicated functions here like neural networks essentially. And now these simple building blocks like Gaussians, exponentials, and so forth, they're still pretty useful in the sense that what we've been doing so far, like using autoregressive models or even variational autoencoders, latent variable models are essentially tricks for composing simple functions that are normalized by design and building more complicated probabilistic models that are, again, by construction, guaranteed to be normalized. But we can see that in some sense, an autoregressive model is basically just a way of defining joint probability distribution or joint probability density function that is normalized by design because it's a product of conditionals that are normalized, that sort of are Gaussians, or are exponentials, or they are the ones for which we know how to compute these integrals, these partition functions analytically. And so if you imagine you have two of these objects that are guaranteed to be normalized, like a family parameterized by theta and another family here that is parameterized by theta prime, where theta prime can be a function of x as long as, for every theta prime, the distribution that you get over y is normalized, this full object that you get by multiplying them together is guaranteed to be normalized. So if you look over-- if you try to-- if you multiply together two objects that are basically, by construction, normalized, like the marginal over x and the conditional over y, where the parameters depend on x, you get something that is normalized. This is basically what we do in an autoregressive model. You define the joint as a product of conditionals. And if you look at the-- if you look at the integral over all possible inputs of the joint, you get something that is, by design, basically normalized. And the reason is that, if the-- by construction, the distribution over y is such that it's normalized for any choice of the parameters. And the choice of the parameters can depend on the value that x can take. 
Then, by design, the integral over y is guaranteed to evaluate to 1, regardless of the choice that you have for x. And then when you integrate over x, again, that object is normalized. And so you get, once again, something that-- where the full joint distribution is guaranteed to be normalized and to integrate to 1. So the object in here is essentially-- one way of thinking of the conditional of y-- the probability over y is, let's say, a Gaussian, where the parameters depend on the value of x. This would be one setting where this would show up if you have an autoregressive model where, let's say, p of x is a Gaussian. p theta of x is a Gaussian. So here, theta could be the mean and the standard deviation. And the distribution over the second variable, or the second group of variables, is, again, a Gaussian, where the parameters of the Gaussian, theta prime, are allowed to depend on x. For example, you compute the mean and the standard deviation as a function of the previous variable in the ordering. Then you have an object that is guaranteed to be normalized by design. So you can think of autoregressive models as a way of combining objects that are normalized, simpler objects, and putting together a more complicated one, a joint distribution that is, again, guaranteed to be normalized by design, which is the product of normalized objects. And then if you slide these integrals in, you can-- all the integrals evaluate to 1. When you integrate out the conditionals, they all evaluate to 1, and the full object is guaranteed to be normalized. And to some extent, even latent variable models can be thought as a way of, again, combining normalized objects and building a more complicated one that is, again, guaranteed to be normalized. So if you have two densities, p theta and p theta prime, that are normalized, and then you take a convex [INAUDIBLE] like alpha p plus 1 minus alpha p prime for any choice of alpha, or between 0 and 1, you get another density, which is guaranteed to be normalized. Because if you integrate it out, again, you get something that-- basically, the first integral evaluates to alpha because p theta is normalized. The second integral evaluates to 1 minus alpha because theta prime is normalized. So again, you get an object that is normalized. And that's basically what happens in a variational autoencoder, where you have this mixture in behavior. The conditionals that you have in the encoder-- in the decoder are simple normalized objects, like Gaussians. And you're taking a mixture of them. And by doing that, you define a marginal, which is, again, normalized by construction. So you can think of what we've been doing, building autoregressive models or latent variable models, as trying to-- clever ways of combining simple normalized object and building more complicated objects that are normalized by design. But this is enforcing some restrictions still, in terms of how complicated the final object is. And you have to follow these rules to construct objects that are guaranteed to be normalized. So what energy-based models do is they try to break this constraint and try to go beyond, basically, probability densities or probability mass functions that are guaranteed to be normalized, for which, basically, the-- because the normalization constant is known analytically. 
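Before moving on, here are the two composition rules just described, written out as an editorial sketch. Here p with subscript theta prime of x denotes a conditional over y whose parameters are computed as a function of x, and alpha is the mixture weight; the notation is generic, not the lecture's slides.

```latex
% Product of normalized conditionals (autoregressive-style) stays normalized:
\int\!\!\int p_\theta(x)\, p_{\theta'(x)}(y)\, dy\, dx
  = \int p_\theta(x) \underbrace{\left(\int p_{\theta'(x)}(y)\, dy\right)}_{=\,1 \text{ for every } x} dx
  = \int p_\theta(x)\, dx = 1 .

% Convex mixture of normalized densities (latent-variable-style) stays normalized:
\int \big(\alpha\, p_\theta(x) + (1-\alpha)\, p_{\theta'}(x)\big)\, dx
  = \alpha \cdot 1 + (1-\alpha) \cdot 1 = 1 ,
  \qquad 0 \le \alpha \le 1 .
```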
And instead, we're going to be working with settings where this product normalization constant, this partition function Z theta, is something that we'll have to maybe deal with, that either we're not going to be able to compute it, or we're going to approximate it. But it's not going to be able-- it's not going to be something that is known. Say value 1 for any choice of theta is going to change as a function of theta in some complicated way. And we're just going to have to basically deal with it. And so, specifically, we're going to be looking at models of this form, where we have a probability density function over x, which is parameterized by theta and is defined as the exponential of f theta, because we need to make sure that the function is non-negative. So this is the unnormalized probability, the exponential of f theta. And then we divide by the partition function to get an object that is actually normalized. And so you can start with an arbitrary, basically, neural network, f theta. You take the exponential. You get a non-negative function. And then you define a valid probability density function by dividing by this partition function by this normalization constant, which is just the integral, basically, of this unnormalized probability. And so that's all, basically. That's the definition of an energy-based model. It's very flexible because you can choose, essentially, an arbitrary function, f theta. And that defines a valid probability density function. We chose specifically the exponential here instead of, let's say, squaring f theta for several reasons. The first one is that it allows us to capture pretty nicely, pretty easily, big variations in the probability that the model assigns to different x's. So if you're thinking about modeling images or even text, to some extent, you might expect very big variations in the probabilities that the model assigns to, let's say, well-formed images as opposed to pure noise. So it makes it easier to model these big variations in the probability if you take an exponential here because small changes in f theta, which is what your neural network does, will lead to big changes in the actual probabilities that are assigned by the model. You could also do it with-- just take a square here. But then that would require bigger changes in the outputs of the neural network. So it's going to be much less smooth. What is an energy by definition is? Yeah. So that will come up in the next bullet. Yeah. Yeah? Is this not softmax? Softmax is an example of that. Yeah. That's a good point. Yeah. A softmax is one way of doing this and, essentially, mapping the output of a neural network, which is not necessarily a valid probability-- a valid categorical distribution over, let's say, the outputs that you're trying to-- [INAUDIBLE] is that-- because you mentioned latent variable earlier. But in, say, our VAE, we use softmax to compute the reconstruction likelihood. So how is that a different-- how is that not an energy-based model that-- So "energy-based model" is a very general term, in the sense that-- you could even think of an autoregressive model as being a type of energy-based model, where, by construction, Z theta is always guaranteed to be 1. So this is just a very general type of model, where we're going to be able to take an arbitrary neural network, let's say f theta, and get a valid probability density function from it. It's not how to refer new concepts? It's not mutually exclusive? 
It's more general because it doesn't have to be-- Z theta doesn't have to be exactly 1, and it doesn't have to be, like in the Gaussian case, some unknown. Z theta might not be something that is known analytically, so you might not be able to know that the integral-- that this integral evaluates to the square root of 2 pi sigma squared. Because that only happens when f theta is very simple. If f theta is x minus mu squared, then you've get a Gaussian. And then you know how to compute that integral analytically. But in general, you don't have to. So the output for-- in the latent variable, the formula you showed in the previous slide-- so is-- that just shows the probability of a mixture of Gaussian prior or something? Yeah. It's basically-- if you're thinking about the problem more abstractly, as saying, how do I come up with a way of designing functions that are non-negative, and they are guaranteed to have some fixed integral? How would you go about it? One way is to define a set of rules, almost like an algebra, where you can start from objects that have the properties you want. And you can combine them to construct more complicated objects that, again, have the properties you want. And one way to do it is what I was showing here, is you can take linear combinations of these objects, convex combinations of these objects. And that's one way of defining a new object that still has the properties you want, in terms of simpler objects. But which part in this function represents the latent variables? The latent variable would basically be the alphas. The alphas are the probabilities that the latent-- basically, this would correspond to a latent variable model. But there is a single latent variable, which is-- can only take two different values. And it takes value-- the first value with probability alpha. The second value with probability 1 minus alpha. And so that gives you that sort of behavior. Thank you. But I think what you were saying about the softmax is another good example of, essentially, an energy-based model. Softmax is a way of defining-- it essentially has-- if you think about a softmax layer, it has exactly this kind of structure. And it is, in fact, a way of defining a probability distribution over a set of-- over a categorical, basically, random variable, which is the predicted label in terms of a function f theta, which is just the raw outputs of your neural network, which might not be necessarily normalized. So the softmax is exactly this kind of thing. But the softmax is a case where this partition function, this normalization constant, can be computed analytically because you only have, let's say, k different classes. So the softmax will involve-- in the denominator of the softmax, you have a sum over k different possible outputs. And so in that case, this-- yeah, this normalization constant can actually be computed exactly. We're going to be interested in settings where x, this integral, is going to be very difficult to compute exactly because x is very high-dimensional. So there is many different-- if you think about a distribution over images, x can take on an exponentially large number of different values. So if you have to integrate over all possible images, that's going to be a very expensive computation, practically impossible to compute. So that's the difference between the softmax style computation and what we're doing here. Cool. And so-- yeah. Why do we use exponential? Because we want to capture big variation. 
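To make the exponential-and-normalize recipe concrete, here is a toy numerical sketch of my own, not lecture code: an arbitrary scalar function plays the role of the neural network f_theta, we exponentiate it, and we normalize by a brute-force partition function computed on a one-dimensional grid. Brute force only works because x is one-dimensional here; the closing comment about the high-dimensional case is an illustration of the scaling, not a precise claim.

```python
import numpy as np

def f_theta(x, theta=(1.0, -0.5, 0.3)):
    # Stand-in for an arbitrary neural network mapping x to a scalar.
    a, b, c = theta
    return a * np.sin(x) + b * x ** 2 + c * x

xs = np.linspace(-6.0, 6.0, 100001)     # discretized 1-D input space
dx = xs[1] - xs[0]
unnorm = np.exp(f_theta(xs))            # exp(f_theta(x)): non-negative, not normalized
Z_theta = np.sum(unnorm) * dx           # brute-force partition function on the grid
p = unnorm / Z_theta                    # a valid (discretized) density

print(np.sum(p) * dx)                   # ~1.0: normalized by construction
# If x lived in R^100 instead of R, a grid like this would need on the order of
# 100001**100 points, which is the curse of dimensionality discussed below.
```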
The other reason is that, as we've seen, many common distributions like the Gaussian, and the exponential, and all the ones in the exponential family-- they have this functional form. They have this flavor of an exponential of some simple function in the argument of the exponential. And the reason these distributions are so common is that they actually arise under fairly general assumptions. If you know about maximum entropy modeling, which is basically this idea of trying to come up with a distribution that, in some sense, fits the data but minimizes all the other assumptions that you make about the model, then it turns out that the solution to that kind of modeling problem has the form of an exponential family. And this functional form also shows up a lot in physics-- think about the second law of thermodynamics. And in that case, minus f of x is called the energy. And there is a minus because, if you think about physics, you can imagine x are the possible states that the system can be in, and states that have lower energy, so high f theta, should be more likely. So that's why there is the minus sign, and that's why they are called energy-based models: they are inspired by statistical physics, essentially. Cool. So that's the basic paradigm of an energy-based model. You start with an essentially arbitrary neural network, f theta. You take an exponential to make it non-negative. And then you divide by this normalization constant, this partition function, which is just the integral of this unnormalized probability. And this, for any choice of theta, defines a valid probabilistic model. So it's guaranteed to be non-negative. It's guaranteed to sum, or integrate, to 1. And so from the point of view of flexibility, this is basically as good as it gets. There is no restriction, essentially, on the f thetas that you can choose, which means that you can plug in whatever architecture you want to model the data. The cons are-- there are many, as usual. There is usually some price to pay. If you want flexibility, you're basically making fewer assumptions about the structure of your model, and so there is a price to pay, computationally. And one big negative aspect of energy-based models is that sampling is going to be very hard. So even if you can fit the model, if you want to generate samples from it, it's going to be very tricky to do that. So it's going to be very slow to generate new samples from an energy-based model. And the reason is that, basically, evaluating probabilities is also hard. Because if you want to evaluate the probability of a data point, it's easy to get the unnormalized piece, this exponential of f theta. You just feed it through your neural network, and you get a number that gives you the unnormalized probability. But to actually evaluate a probability, you have to divide by this normalization constant, which is, in general, very expensive to compute-- which hints at why, also, sampling is hard. If you don't even know how to evaluate probabilities of data points efficiently, it's going to be pretty tricky to figure out how to generate-- how to pick an x with the right probability. Even evaluating the probability of a data point is hard. So why is sampling hard for this energy-based model, since we could already learn a form of p theta of x? If we could learn-- if we could already learn a form of p theta of x? Yeah. So sampling is hard.
Even if somebody gives you the p theta, the function-- it tells you, OK, here's the model-- basically, the problem is that sampling is hard because, first of all, there is no ordering; it's not like an autoregressive model, where there is an ordering you can follow. So the only thing you can do is-- as we'll see, there's going to be some kind of local-type procedure, where you can try to use, essentially, Markov chain Monte Carlo methods to try to go look for x's that are likely under the model. But even evaluating likelihoods is not really possible-- it's hard because that requires the normalization constant. And so, in general, there's not going to be an efficient way of generating samples from these kinds of models. So when you said the input x has no ordering-- Yeah. So you can imagine-- yeah, there is no ordering. x is just a vector. It's your data, and you just feed it into a neural network. And then you get a number that is the unnormalized probability. But that doesn't tell you how likely that data point is until you know how likely everything else is. So you need to know the normalizing constant, the partition function, even just to know how likely a data point is. And so, as you can imagine, even figuring out how you would sample from a distribution like that is pretty difficult, because if you wanted to even just do the invert-the-CDF thing, that would require you to be able to evaluate probabilities. And so it's just a very tricky thing to do. Yeah? So I guess the cons-- really, they nullify the pros, right? Yeah, as we'll see. It's hard, but possible. And, in fact, if you think about a diffusion model, it's essentially doing this. So it's not going to be as straightforward as sampling from a bunch of conditionals like in autoregressive models. We're going to have to do more work to sample from the model. Evaluating probabilities will also require some approximations or some other techniques that are much more sophisticated. But yeah, this idea of being able to essentially use an energy-based model and be able to use whatever arbitrary architectures to model your data actually paid off big time, if you think about the success of diffusion models, which I think largely depends on the fact that we're allowed to use very complicated neural networks to model the data. And yeah, there is also no feature learning, in the sense that-- at least in this vanilla formulation-- there are no latent variables. But, I guess, that's something you can add. So it's not really a big con in this case. And the fundamental issue-- the reason why all these tasks are so hard-- is the curse of dimensionality, which basically, in this case, means that because we want to have a lot of flexibility in choosing f theta, we're not going to be able to compute these integrals analytically. It's not like the Gaussian case. So we're not going to be able to compute that in closed form. And if you wanted to basically brute force it or use numerical methods to try to approximate the integral, the cost that you pay will basically scale exponentially in the number of variables that you're trying to model. Essentially, if you think about the discrete case, the number of possible x's that you would have to sum over grows exponentially in the number of dimensions that you have. And essentially, the same thing happens also in the continuous world.
If you were to discretize and have little units of volume that you use to cover the whole space, the number of little units of volume that you need will grow exponentially in the number of dimensions that you deal with. And so that's essentially the key challenge of this energy-based model. Computing this denominator is going to be hard. So on the one hand, we're going to get flexibility. On the other hand, there is this big computational bottleneck that we have to deal with the partition function, basically. And the good news is that there is a bunch of tasks that do not require knowing the partition function. For example, if all you have to do is to compare-- you have two data points, x and x prime. And all you have to do is to know which one is more likely. So you just want to do a relative comparison between two data points. So you cannot necessarily-- even though you might not be able to evaluate the probability of x and the probability of x prime under this model, because that would require knowing the partition function-- if you think about what happens if you take the ratios of two probabilities, that does not depend on the normalization constant. But if you take the ratio, both the numerator and the denominator-- they are both normalized by the same constant. And so that basically goes away. If you think about the slices of pie-- if you're trying to just look at the relative size, you can do that easily without knowing the actual size of the pie, and-- which means that we can check. Given two data points, we can check which one is more likely very easily. Even though we cannot know how likely it is, we can check which one is more likely between the two. And this is going to be quite useful when we design sampling procedures. And you can still use it to do things like anomaly detection. Denoising, as we'll see when we talk about diffusion models, also relies on this. And in fact, people have been using energy-based models for a long time, even for a variety of different basic discriminative tasks. If you think about object recognition-- if you have some kind of energy function that relates the label y to the image x, then-- and you're trying to figure out what is the most likely label, then as long as you can compare the labels between them, then you can basically solve object recognition. And these kind of energy-based models have been used to do sequence labeling, to do image restorations. As long as the application requires relative comparisons, the partition function is not needed. And as an example, we can think about the problem of doing denoising. And this is an old-school approach to denoising, where we have a probabilistic model that involves two groups of variables. We have the-- a true image y that is unknown. And then we have a corrupted image x, which we get to observe. And the goal is to infer the clean image given the corrupted image. And one way to do it is to have a joint probability distribution, which is going to be an energy-based model. And so we're saying that the probability of observing a clean image y and a corresponding noisy image x has this functional form, where there is the normalization constant, and then it's the exponential of some relatively simple function, which is the energy, or the negative energy in this case. And this function is basically saying something like-- there is some relationship between the i-th corrupted pixel and the i-th clean pixel. For example, they should be fairly similar. 
So whenever you plug in xi and yi, configurations where xi is similar to yi should be more likely because we expect the corrupted pixel to be more likely to be similar to the clean pixel than to be very different from it. And then maybe you have some kind of prior where the image is that is saying what kind of-- what choices of y are more likely a priori. And maybe you have some kind of prior that is saying, neighboring pixels tend to have a similar value. Then you sum up all these interaction terms, one per pixel. And then maybe you have a bunch of spatial, local interactions between pixels that are close to each other in the image, and that defines an energy function. And if you want to do denoising-- if you want to compute given an x-- if you want to figure out what is the corresponding y, what you would do is you would try to find a y that maximizes p of y given x. And even though p, the probability, depends on the normalization constant, basically, you can see that the normalization constant doesn't matter. So as long as you want to find the most likely solution-- what is the actual probability? So what is the-- 1 over z just becomes a scaling factor. And it doesn't actually affect the solution of the optimization problem. [INAUDIBLE]? Why is dimensionality kind of large? But in real images [INAUDIBLE]. So it might be still tricky to solve the optimization problem in that you're still optimizing on a very large space. But at least it does not depend-- as long as you're maximizing the actual value-- basically, all the y's are going to be divided by the same z. So again, it doesn't matter as long as you can-- you're going to be able to compare to y's. And that's all you need if you're trying to-- [INAUDIBLE] So x will show up in the energy. But again, x is fixed. So we don't have to sum over [INAUDIBLE]?? Yeah, exactly. Exactly. So it's really all about-- there are a bunch of tasks. Well, basically, what you care about is doing comparisons. And to the extent that the task only involves comparisons, then you don't actually need to know the partition function. You may still need to have the partition function if you want to train the models. That's what's going to come up next. But at least doing comparison is something you can do without knowing the partition function. Yeah. For comparison case, we end up getting away with knowing z value. I also observe that if we take the derivative of the energy functions, then it also goes away. Yeah. Will that have some implications? Yeah. So that's going to come up towards the-- some of the last slides. That's another nice thing, is that the derivative also does not depend-- the derivative of the log probability does not depend on the normalization constant. So we're going to be able to use it to define, basically, sampling schemes. Yeah. Cool. Now, another thing you can do is you can combine various models. So let's say that you have a bunch of probabilistic models. For example, it could be different model families, maybe a PixelCNN, a Flow model, whatnot. You could imagine that each one of them is an expert that will individually tell you how likely is a given x according to each one of these three models. And you could imagine what happens if you try to ensemble these experts. And, for example, you could say, if all these experts are making judgments independently, it might make sense to ensemble them by taking a product. And the product of these objects that are normalized by themselves is not going to be normalized. 
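Before continuing with the product-of-experts idea, here is the comparison point and the denoising objective from just above written out as an editorial sketch. The psi and phi terms are generic compatibility functions standing in for the per-pixel and neighboring-pixel terms; they are not the lecture's exact notation.

```latex
% Relative comparisons never need the partition function:
\frac{p_\theta(x)}{p_\theta(x')}
  = \frac{\exp(f_\theta(x))/Z(\theta)}{\exp(f_\theta(x'))/Z(\theta)}
  = \exp\!\big(f_\theta(x) - f_\theta(x')\big).

% Denoising as MAP inference in an energy-based model over (y, x):
p(y, x) \;\propto\; \exp\!\Big( \sum_i \psi_i(y_i, x_i)
        \;+\; \sum_{(i,j)\in\mathcal{N}} \phi_{ij}(y_i, y_j) \Big),
\qquad
\hat{y} \;=\; \arg\max_y \, p(y \mid x)
        \;=\; \arg\max_y \Big( \sum_i \psi_i(y_i, x_i) + \sum_{(i,j)\in\mathcal{N}} \phi_{ij}(y_i, y_j) \Big).
```

The 1 over Z factor, and p of x itself, is the same for every candidate y, so it simply drops out of the argmax; the optimization can still be hard, but not because of the normalization constant.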
But we can define a normalized object by dividing by this normalization constant. And intuitively, this way of ensembling behaves like an AND operator, where, if any one of the models assigns 0 probability, then the product evaluates to 0, and this ensemble model will assign 0 probability. While if you think about the mixture model, where you would say alpha p theta 1 plus 1 minus alpha p theta 2, that behaves more like an OR, where you're saying, as long as one of the models assigns some probability, then the ensemble model will also assign some probability. Taking a product behaves more like an AND. But it's much trickier to deal with because you have to take into account the partition function. But this allows you to combine energy-based models and combine models in an interesting way. You can have a model that produces young people, and then you have a model that produces females, and you can combine them by multiplying them together, and you get a model that puts most of the probability mass on the intersection of these two groups. And you can get that kind of behavior. So you can combine concepts. As long as the different models have learned different things, by ensembling them this way you can combine them in interesting ways. Another example-- yeah? What's the difference between product and mixture? If we use mixture, then this will look different? So the difference is, if you think about it, the product will-- if any one of them basically assigns 0 probability, then the whole product evaluates to 0, and so the ensemble model, the product of experts, will also assign 0 probability. If you think about a mixture, even if one of them assigns 0 probability, as long as the others think this thing is likely, that thing will still have some probability. And so it behaves more like an OR, in the sense that-- a soft OR, because it's a-- [INAUDIBLE] Yeah, exactly. Exactly. It's like an average. Yeah. How do you sample from this product of experts? And how do you deal with the Z-- how do you ignore the Z part of it? Yeah. How do we sample? Yeah, that will come up-- there are ways to do it. It's just expensive. So it's not impossible. It's just-- an autoregressive model is very fast. In an energy-based model, you're going to have to put more compute, basically, at inference time when you want to generate a sample. That's the price you pay. Yeah. How is the product of experts related to the energy-based model formalism? Yeah. So you can see that if you have individual probability density functions or probability mass functions, and you multiply them together, you get another function which is non-negative but is not necessarily normalized. So to normalize it, you have to divide by this partition function. And from that perspective, it's an energy-based model. And so you can think of the f theta-- the negative energy-- of the product of experts as being the sum of the log likelihoods of each individual model. Because you can write p theta 1 as exp log p theta 1 and the other one as exp log p theta 2, and then it's the exp of the sum of the logs. The problem is that the exponential of the sum of the log likelihoods is not guaranteed to be normalized by design. And so you have to then renormalize everything with this global partition function. Cool. Another example is the RBM, the Restricted Boltzmann Machine. This is actually an energy-based model with latent variables.
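Before getting into the RBM example, here is the product-of-experts construction just discussed, written out as an editorial sketch with three experts:

```latex
p_{\text{prod}}(x)
  \;=\; \frac{1}{Z(\theta_1,\theta_2,\theta_3)}\; p_{\theta_1}(x)\, p_{\theta_2}(x)\, p_{\theta_3}(x)
  \;=\; \frac{1}{Z}\,\exp\!\Big( \log p_{\theta_1}(x) + \log p_{\theta_2}(x) + \log p_{\theta_3}(x) \Big),
\qquad
Z(\theta_1,\theta_2,\theta_3) \;=\; \int p_{\theta_1}(x)\, p_{\theta_2}(x)\, p_{\theta_3}(x)\, dx .
```

The sum of the individual log-likelihoods plays the role of f theta, and the partition function Z is generally intractable even when each expert is normalized on its own.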
And the RBM is a discrete probabilistic model, where both the visible variables, let's say, are binary, and the latent variables are also binary. So you have n binary variables x and m latent variables z. All the variables here are going to be binary. And, for example, the x could represent pixel values, and the z's, as usual, are latent features. And the joint distribution between z and x is an energy-based model. And it's a pretty simple energy-based model in the sense that there is the usual normalization constant, there is the usual exponential, and then the energy is just a quadratic form, where you get the energy through a W matrix, a vector of biases b, and another vector of biases, c. And you map the values that the x variables have and the z variables have to a scalar by just taking this kind of expression, which is just a bunch of linear terms in the x's, a bunch of linear terms in the z's, and then this cross product between the xi's and the zj's, which are weighted by these weighting terms, this weight matrix W. And it's restricted-- it's called a Restricted Boltzmann Machine-- because, basically, in this expression, there is no connection among the visible units themselves or among the hidden units themselves. So, basically, there is no xi-xj term in here. There are interactions between the x variables and the z variables, but not between the x variables or between the z variables by themselves. Not super important, but the key thing is that this is actually one of the very first-- it's important for historical reasons. This is one of the first deep generative models that actually worked. They were able to train these models on image data by stacking multiple RBMs. So an RBM is basically a joint distribution between visible and hidden. And then if you stack a bunch of them-- so you have visibles at the bottom, then you have one RBM here, then you build an RBM between the hidden units of the first RBM and some other hidden units of the second RBM, and so forth-- you get a Deep Boltzmann Machine. And the idea is that, OK, you have the pixels at the bottom, and then you have a hierarchy of more and more abstract features at the top. And actually, it's pretty interesting that in the very early days of deep learning, people were not able to train deep neural networks very well. Even in the supervised learning setting, things didn't quite work. And the only way they were able to get good results was to actually pre-train the neural network as a generative model. So they would choose an architecture like this Deep Boltzmann Machine architecture. They would train the model in an unsupervised way as an RBM-- so just as an energy-based model, they would train the weights of these matrices through some technique that we'll talk about later in this lecture. And then they would use that as initialization for their supervised learning algorithms. And that was the first thing that made deep learning work. And it was the only thing that worked initially. Then they figured out other ways of making things work. But yeah, it was actually quite important for getting people on board with the idea of training large neural networks. And here, you can see some samples from these kinds of models. Again, this is a long time ago, 2009. But people were able to generate some reasonable-looking samples by training one of these Deep Boltzmann Machines.
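Here is a minimal sketch of the RBM just described: the unnormalized log-probability of a joint configuration is the quadratic form x transpose W z plus b transpose x plus c transpose z, and for a toy number of units you can still brute-force the partition function, which is exactly what stops being possible at realistic sizes. The sizes and parameter values are made up for illustration.

```python
import itertools
import numpy as np

n, m = 4, 3                                    # visible and hidden units (tiny on purpose)
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((n, m))          # pairwise visible-hidden weights
b = 0.1 * rng.standard_normal(n)               # visible biases
c = 0.1 * rng.standard_normal(m)               # hidden biases

def neg_energy(x, z):
    # Unnormalized log-probability of a joint configuration (x, z).
    return x @ W @ z + b @ x + c @ z

# Brute-force partition function: sum over all 2^n * 2^m binary configurations.
Z = sum(np.exp(neg_energy(np.array(x), np.array(z)))
        for x in itertools.product([0, 1], repeat=n)
        for z in itertools.product([0, 1], repeat=m))

x0, z0 = np.ones(n), np.zeros(m)
print(np.exp(neg_energy(x0, z0)) / Z)          # an actual joint probability
# With n = m = 100 the sum would have 2**200 terms, which is why exact
# likelihoods are out of reach and approximations are needed.
```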
And so the-- You can see that the fundamental issue here is the partition function is normalization constant. And just by looking through an example in the RBM setting, we can see why, indeed, computing the volume is hard. If you think about even just a single-layer RBM, where you have these x variables, these z variables-- you have this energy-based model. Computing the exponential of this energy function is super easy. It's just basically a bunch of dot products. But the normalization constant is very expensive. The normalization constant is going to be a function of w, b, and c. So the theta are the parameters that you have in the model, which, in this case, are the-- these biases, b and c, in this matrix W. But computing a normalization constant requires you to go through every possible configuration, every possible assignment to the x variable, every possible assignment to the z variables, and sum up all these unnormalized probabilities. And the problem is that there is basically 2 to the n terms in this sum, 2 to the m terms in this sum. So you can see that, even for small values of n and m, computing a normalization constant is super expensive. Is it possible to keep a running [INAUDIBLE] to approximate? Approximate? Yeah. That's what we're going to have to do, basically. It's just like the-- it's a well-defined function. It's just that if you want it to compute it-- it doesn't have a closed form, unlike the Gaussian case. There's no closed form for this expression. And brute forcing takes exponential time. So we'll have to basically do some kind of approximation. And in particular, the fact that the partition function is so hard to evaluate means that likelihood-based training is going to be almost impossible because just to evaluate the probability of a data point under the current choice of the parameters requires you to know the denominator in that expression. And that's not generally known, and you're not going to be able to compute it. And so that's the issue. Optimizing the unnormalized probability, which is just the exponential, is super easy. But you have to take into account, basically, during learning, you need to figure out, If you were to change the parameters by a little bit, how does that affect the numerator? which is easy in this expression. But then you also have to account, How does changing the parameters affect the total volume? How does that affect the probability that the model assigns to everything else, all the possible things that can happen? And that is tricky because we cannot even compute this quantity. So it's going to be hard to figure out how does that quantity change if we were to make a small change to any of the parameters. If I were to change, let's say, b by a little bit-- I know how to evaluate how this expression changes by a little bit. But I don't know how to evaluate how this partition function changes by a little bit. And that's what makes learning so hard. Could you expand the slide before, where you showed the samples-- the generated samples from the RBF-- RBM? Mhm. I didn't understand how they sum-- how were they able to generate that? So you put in the observed pixels. And then, with the stacking, you can learn the latent variables? Yeah. But how did they generate them? The-- So yeah. So how do they generate? How do they learn? We haven't talked about it. That's going to come up next, how do we do learning and how do you sample from the models. Yeah. The problem is that even learning is hard because it requires evaluating likelihoods. 
Learning requires the partition function, which you don't have. Sampling, as we'll see, is also hard. But there are approximations that you can do, and that's basically what they did. All right. And the intuition is that, if you want to do maximum likelihood learning, you have an expression that looks like this that you want to maximize. So you have a training data point, you want to evaluate its probability according to the model, and then you want to maximize this expression as a function of theta. And the probability of a data point, as usual, is the unnormalized probability divided by the partition function-- the total unnormalized probability assigned by the model to everything else. And so if you want to make that ratio as big as possible, you need to be able to do two things: you need to be able to increase the numerator and decrease the denominator, which makes sense. The intuition is that you want to figure out how to change your parameters so that you increase the unnormalized probability of the training data, while at the same time making sure that you're not increasing the probability of everything else by too much. What really matters is the relative probability of the training point you care about with respect to all the other things that could happen, which is what you have in the denominator: the total unnormalized probability of all the other things that could have happened. And so, essentially, when you train, you cannot just optimize the numerator. Because if you just increase the numerator-- like you just increased the size of the slice of the pie that you assign to the particular data point-- you might be increasing the size of the total pie by even more, and so the relative probability does not even go up. You need to be able to account for the effect that changes to the parameters theta have not only on the training data, but also on all the other points that could have been sampled by the model. So, somehow, you need to increase the normalized probability of the training point while pushing down the probability of everything else. And that's the intuition that you have here. If this is f theta, and you have the correct answer and some wrong answers, it's not sufficient to just push up the unnormalized probability of the correct answer, because everything else might also go up, so the relative probability doesn't actually go up. You need to push up the probability of the right answer while, at the same time, pushing down the probability of everything else-- of basically the wrong answers. And that's basically the idea. Instead of evaluating the partition function exactly, we're going to use some kind of Monte Carlo estimate. Instead of evaluating the actual total unnormalized probability of everything else, we're just going to sample a few other things, and we're going to compare the training point we care about to these other samples from the model, these other wrong answers, these negative samples. We have a positive sample, which is what we like in the training set, and there's going to be a bunch of negative samples that we're going to draw. And we're going to try to contrast them: increase the probability of the positive one and decrease the probability of everything else.
And that's basically the contrastive divergence algorithm that has been-- was used to train that RBM-- the DBM that we had before. Essentially, what you do is you make this intuition concrete by a fairly simple algorithm, where-- what you do is you sample from the model. So you generate a negative example by sampling from the model. And then you take the gradient of the difference between the log of f theta, basically, which is just the energy of the model, or the negative energy on the trainings, minus f theta evaluated on the negative one, which is exactly doing this thing of pushing up the correct answer while pushing down the wrong answer, where the wrong answer is what is defined as the sample from the model. How do we know that the sample we get is wrong? Yeah. So the sample is not necessarily wrong. It's just something else that could have come from the model. And we're considering it. It's a representative sample of something else that could have happened if you were to sample from the model. So we want the probability of the true data point to go up, as compared to some other typical scenario that you could have sampled from your model. And that's actually principled, as we'll see. This actually gives you an unbiased estimate of the true gradient that you would like to optimize. Is it still possible that the sample you get is also something that's from the training set? But the-- overall, the estimation-- Yes. The expectation is-- An expectation. Yeah, yeah, yeah, yeah. Was there-- Yeah. Presumably, how do you sample from that model? Yeah. OK. We haven't talked about how to sample. But if you could somehow sample from a model, then what I claim is that this algorithm would give you the right answer. And this idea of making the training data more likely than a typical sample from the model actually is what you want, so to the extent that you can, indeed, generate these samples, which we don't know how to do yet. But if you can, then this gives you a way of training, a model to make it better to fit to some data set. Yeah. So when you draw a sample you don't do whether you can go-- if the sample is not present in training data, you just draw this sample. You just draw a sample, yeah. Whether or not it's in the training set, it doesn't matter. And so why does this algorithm work? Well, if you think about it, what you want to do is if you look at the log of this expression, which is just the log-likelihood, you're going to get these two terms. You're going to get the f theta, which is just the neural network that you're using to parameterize your energy. And then you have this dependence on the partition function, the partition function has on the parameters that you're optimizing with respect to. And what we want is the gradient of this quantity with respect to theta. And so just like before, we want to increase the f theta on the training set while decreasing the total amount of unnormalized probability mass that we get by changing theta by a little bit. And so really what we want is the gradient of this difference, which is just the difference of the gradients. And the gradient of the f theta is trivial to compute. That's just your neural network. We know how to optimize that quantity. We know how to adjust the parameters so that we push up the output of the neural network on the training data points that we have access to. What's more tricky is to figure out how does changing theta affect the total amount of unnormalized probability mass. 
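Written out (for discrete x; sums become integrals in the continuous case), that tricky term is:

```latex
\nabla_\theta \log Z_\theta
  = \frac{\nabla_\theta Z_\theta}{Z_\theta}
  = \frac{1}{Z_\theta}\sum_{x}\nabla_\theta e^{f_\theta(x)}
  = \sum_{x}\frac{e^{f_\theta(x)}}{Z_\theta}\,\nabla_\theta f_\theta(x)
  = \mathbb{E}_{x\sim p_\theta}\!\big[\nabla_\theta f_\theta(x)\big],
```

so that

```latex
\nabla_\theta \log p_\theta(x_{\text{train}})
  = \nabla_\theta f_\theta(x_{\text{train}})
  - \mathbb{E}_{x\sim p_\theta}\!\big[\nabla_\theta f_\theta(x)\big],
```

which is exactly the expectation that contrastive divergence approximates with a sample from the model. The next part walks through this derivation step by step.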
And we know that the derivative of the log of Z theta is just the derivative of the argument of the log divided by Z theta-- just the derivative of a log expression. And now we can replace the Z theta in the numerator there with the expression that we have: we know that Z theta is just the sum of the unnormalized probabilities, and because the gradient is linear, we can push the gradient inside this sum, and that's basically the same thing. And we know how to compute that gradient using the chain rule, and it evaluates to that: it's just the gradient of f theta-- again, something we know how to compute-- rescaled by this exponential and the partition function. And if we push the partition function inside, you'll recognize that this is just the probability assigned by the model to a possible data point x. And so the true gradient of the log-likelihood, which is what we would like to do gradient ascent with respect to, is basically the gradient of f theta evaluated at the data point minus the expected gradient of f theta with respect to the model distribution, which makes sense. We need to figure out how changing theta by a little bit affects the unnormalized probability that you assign to the true data point we care about, and then we also need to understand how changing theta affects the probability of everything else that could have happened-- and we need to weigh all the possible x's with the probability assigned by the model for the current choice of theta. And now you see why contrastive divergence works. The contrastive divergence algorithm is just a Monte Carlo approximation of that expectation: we approximate the expectation with respect to the model distribution with a single sample, and that's an unbiased estimator of the true gradient. So the true gradient is basically this difference between the gradient evaluated at the true data point and the gradient evaluated at a typical sample-- what you get by sampling from the model. And if you follow this direction in your gradient ascent algorithm, you are making the relative probability of the data increase, basically because the numerator goes up by more than the partition function grows. And that's the key idea behind the contrastive divergence algorithm. The main thing that still remains to be seen is how you get samples-- we still don't know how to sample from these models. And the problem is that we don't have a direct way of sampling like in an autoregressive model, where we can just go through the variables one at a time and set them by sampling from the conditionals. And we cannot evaluate the probability of every data point, because that requires knowing the partition function. But what we can do is compare two data points-- or compare two possible samples, x and x prime. And the basic idea is that we can do some kind of local search or local optimization, where we start with a sample that might not be good, just by randomly initializing x0 somehow, and then try to locally perturb this sample to make it more likely according to the model. And because we can do comparisons, checking whether the sample or its perturbation is more likely is going to be tractable. This is a particular type of algorithm called a Markov chain Monte Carlo method. It's actually pretty simple.
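As a rough PyTorch-style sketch of one contrastive-divergence update for a generic energy-based model p_θ(x) ∝ exp(f_θ(x)): here `f_theta` stands for any network that outputs a scalar, and `sample_from_model` is a placeholder for the sampler described next.

```python
import torch

def cd_step(f_theta, optimizer, x_train, sample_from_model):
    """One contrastive-divergence gradient step.
    Pushes f_theta up on the training batch and down on a model ("negative") sample."""
    x_neg = sample_from_model().detach()              # negative sample from the model
    loss = -(f_theta(x_train).mean() - f_theta(x_neg).mean())
    optimizer.zero_grad()
    loss.backward()   # gradient of f on the data minus gradient of f on the negative sample
    optimizer.step()
    return loss.item()
```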
What you do is, again, you initialize the procedure somehow, and then at every step you propose basically some change to your sample. And it could be as simple as adding some noise to what you have right now. And then if what you get by perturbing your sample has higher probability than what you started from, which we can do this comparison because we don't need the partition function to compare the probability of x prime with what we have right now with xt, then we update our sample to this new candidate, x prime. And then what we need to do is we also need to add a little bit of noise to the process where basically, if you think of this as an optimization problem, we're always taking uphill moves. So if the probability goes up, we always take that step. But if x prime, this proposed transform, the sample that we get by adding noise actually has lower probability than what we started from, we occasionally take this downhill moves with probability proportional to this quantity. So basically proportional to how much worse this new sample we're proposing is compared to where we started from. And then you keep doing this. And it turns out that in theory, at least, if you repeat this procedure for a sufficiently large number of steps, you will eventually get a sample from the true distribution. Why do you need to probabilistically accept the noise one instead of just ignoring the first one? Yeah, great question. Why do we need to occasionally accept samples that are worse than what we started from? The reason is that we don't just want to do optimization. If you were to just do step 1, then you would do some kind of local search procedure where you would keep going around until you find a local optimum, then you would stop there, which is not what we want because we want a sample. So we want to somehow be able to explore the space more. And so we need to be able to accept downhill moves occasionally, and they are not too bad. And that basically allows the algorithm to explore the whole space of samples we could have generated, because maybe you're stuck in a local optimum that is not very good and if you had moved much further away, there would have been regions with very high probability. Discard every time you'll get the same sample. Also. Yeah. So why don't we just use the probability to go uphill as well? Why do we always go to the one with a higher value? Why we always go to the one with the higher likelihood is-- Why don't we use the probability? You could define other-- this is not the only way of doing it. You could define other variants of this where you don't always accept uphill moves. There is a certain something called the Metropolis-Hastings algorithm that you can use to define different variants of MCMC that would also work. This is the simplest version that is guaranteed to give samples, but yeah, there are other variants that you can use. The choice that you make in step 1 and 2 in that loop in that condition is that the same? I think in both cases you take the x prime, right? But in the second case, you only take it with some probability. OK. So you always take an uphill. If your probability goes up, you always take that step. If it's a downhill move, then you take it with some probability. And if it's about the same, then you're likely to take it. If it's much worse, then this probability is going to be very small and you're not going to take that kind of move. So that's the way you generate samples and that's what you do in the contrastive divergence algorithm.
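A minimal sketch of the local-search sampler just described, assuming a continuous x and a Gaussian perturbation; the noise scale and number of steps are arbitrary choices.

```python
import numpy as np

def mcmc_sample(f_theta, x0, num_steps=1000, noise_scale=0.1, seed=None):
    """Metropolis-style chain for p_theta(x) proportional to exp(f_theta(x)).
    Uphill proposals are always accepted; downhill ones are accepted with
    probability exp(f(x') - f(x)), so the partition function is never needed."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(num_steps):
        x_prime = x + noise_scale * rng.normal(size=x.shape)   # local perturbation
        log_ratio = f_theta(x_prime) - f_theta(x)
        if log_ratio >= 0 or rng.random() < np.exp(log_ratio):
            x = x_prime                                        # accept the move
    return x  # approximately a sample from p_theta after enough steps
```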
The plan for today is to continue talking about generative adversarial networks. As a recap, remember that the nice thing about generative adversarial networks is that they allow us to train models in a likelihood-free way, which basically means that you no longer have to choose special kinds of architectures or factorize the distribution according to chain rule-- things you would otherwise be forced to do so that you can evaluate the probability of each data point when you want to train by maximum likelihood. The idea is that there are ways to compare the probability distribution of your generative model to the data distribution that do not involve KL divergence and do not require you to evaluate the probability of samples according to your model. And in particular, we've seen that there is one very reasonable way of figuring out how well your generative model matches the data distribution, and that involves training a classifier. The classifier is often called a discriminator in this space. And the discriminator is trained to distinguish whether the samples it's receiving are real, meaning they come from the data distribution, or fake, meaning they come from the model distribution. And you can think of the performance of this classifier as an indicator of how well your generative model has been trained, or how similar the samples it produces are to the real data distribution. If the discriminator is having a very hard time distinguishing your samples from the real ones, there is a good chance that your samples are actually pretty good. And so based on this intuition, we have this training objective here, which involves a minimax game. It's an optimization problem where there are two players. There is a generator that is trying to produce samples-- that's your generative model. There is a discriminator D that is trying to distinguish real samples from fake samples. And there is this performance metric, which is basically the negative cross-entropy loss of the discriminator on this task of distinguishing real versus fake. And you have this minimax game where the discriminator is trying to do as well as it can on this binary classification problem, and the generator is trying to make the discriminator perform poorly. So they are playing a game, and it's a minimax game in the sense that they're pushing the objective function in two different directions. The generator is being trained to try to fool the discriminator-- to produce samples that are as close as possible to the ones in the data, as measured by the discriminator not being able to distinguish the two types of samples. And we've seen that, under some assumptions-- if you assume that somehow you are able to compute the optimal discriminator, and recall that the optimal discriminator is basically giving you density ratios-- if you plug that optimal discriminator into this loss function, then you get a mixture of two types of KL divergences. And we've seen that that divergence has a name: it's called the Jensen-Shannon divergence. And so, up to scaling and shifts, you can think of this training objective, from the perspective of the generator and assuming the discriminator is optimal, as trying to minimize the Jensen-Shannon divergence between the data distribution and the model distribution.
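For reference, the minimax objective being recapped here is the standard GAN game:

```latex
\min_{G}\;\max_{D}\;
\mathbb{E}_{x\sim p_{\text{data}}}\!\big[\log D(x)\big]
+\mathbb{E}_{z\sim p(z)}\!\big[\log\big(1-D(G(z))\big)\big].
```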
And so this is not too different from traditional maximum likelihood learning, where we minimizing KL divergence between the data and the model. Under these assumptions, you're trying to make-- to instead minimize some kind of mixture of KL divergences that are basically between the data and mixtures of models and data. Why was the shift and the scaling important considering there isn't much else there? Would the target not be the same? This shift and scaling is just it happens to show up if you define the loss this way. It just happens to be the case that Jensen-Shannon divergence is defined that way, and it doesn't have this the optimal loss that you can have. Not super important. It's just like if you've worked through the math, you get a shift in scale. But yeah, we don't care about-- of course, the loss is the same. Basically the-- you're just changing the landscape by shifting it. So it doesn't really matter. It just happens to show up if you do the derivation. And in practice, you know, of course, this is not feasible in the sense that you cannot get the optimal discriminator. But in practice, what you would do is you would have two neural networks, a generator and a discriminator, and they play this game. And then the generative distribution is defined as what you get by transforming simple samples from a prior distribution, like a Gaussian, through the generator network. And then you just optimize this sort of objective function. And there's been a lot of success based on this paradigm. This is a cool repo where you can see a lot of different GANs and variants of these ideas that have been proposed in the past. And what we're going to see today is that this idea of setting up a minimax game is actually very powerful. And not only you can use it to minimize the Jensen-Shannon divergence, but you can actually use it as a tool that, under some conditions, allows you to optimize a much broader class of divergences between the data and the model distribution, something called f divergences. And we'll see that there is also another kind of extension or similar framework that allows you to approximately minimize some notion of the Wasserstein distance between model and data distribution. And we'll also see how to get latent representations from generative adversarial networks. So similar to a VAE, we'll see to what extent it's possible to essentially not only generate samples but also map samples to latent representations. Then, you can use perhaps on to do semi-supervised learning or use them on other kinds of downstream tasks. And then we'll also see maybe cycle GANs that like conditional generative adversarial networks are also pretty cool. All right. So first, let's go back to the high-level picture. Again, remember that we've been-- in the first part of the course, we were always kind of choosing this divergence between the data and the model to be the KL divergence, which plays nicely with likelihood-based models. If you can evaluate probabilities under your model comparing similarity in terms of KL divergence makes a lot of sense. And we know that that's kind of optimal in a certain sense. We've seen that, to some extent, you can optimize the Jensen-Shannon divergence through the GAN objective. And what we'll see today is that you can actually optimize a broader class of divergences that are called the f-divergences. 
And an f-divergences is just defined-- is defined as follows, so if you have two densities, p and q, you can define a divergence between them by looking at the expectation with respect to the second argument, which is q of this f function applied to the density ratio between-- at each point between p and q, where f is basically a function, a scalar function that has to be convex, lower semicontinuous, and it has to evaluate to 0 when you plug in 1. And as you change basically this f function, you get different ways of basically comparing how similar p is to q. And just to be precise, what these technical conditions mean-- well, one is that you have the-- function f has to be convex, which hopefully you know what it means it means that if you-- the graph, basically, if you take two points and you connect them, that line is above the graph of the function, has to be lower semicontinuous, which is just a very technical sort of thing. It basically means something similar to continuous. And if it's discontinuous, then-- on one of the directions, then it has to be above the value of the function where there is a discontinuity, not super important. But the intuition is that somehow f is what tells you how much your being penalized. When p and q are assigned different probabilities to one of the possible things that can happen, let's say, one of the possible images. So it's similar in some sense to KL divergence, where, remember, what we were doing is we were going through all the possible things that can happen. And then, we're looking at the ratio of probabilities assigned by p and q, and then we're taking some kind of log. This is a generalization in the sense that you can use different kind of convex functions to score how happy or unhappy you are with different kinds of density ratios. And ideally, if p and q are the same, then they are going to assign the same probability to every x. And so the density ratio is 1. And then, this function f is going to give you a penalty of 0. And that's the best that can happen. But f is essentially measuring how much you care about p and q, assigning different probabilities to the various axes, to the various samples that can be generated by the model. And the interesting thing is that because f is convex, then you can still use the same trick that we did for KL-divergence to basically show that this f-divergences is nonnegative. And in particular, because we have an expectation of a convex function of some density ratio, this is always at least as large as the function applied to the expectation. And now that expectation, you can expand it. It's basically the integral with respect to this probability distribution q of the density ratio, which is just p over q. So the q's simplify and you're left with the integral of p. P is normalized. It evaluates to 1, that integral. And so this is f of 1, which is zero. And so you get the desirable property that, basically, this f divergence is nonnegative for any choice of p and q. And if you plug in p equals to q, then this density ratios here are always going to be 1. F of 1 is zero. And so this whole expectation is zero. And so it behaves similarly to KL-divergence in the sense that it's-- it tells you how similar or different p and q are by looking at all these density ratios and scoring them throughout. If the distributions are the same, then the density ratios, the two, p and q, are assigned exactly the same probabilities to everything that can happen, and that f-divergence is going to be zero. 
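Written out, the definition and the Jensen's-inequality argument are:

```latex
D_f(p\,\|\,q)=\mathbb{E}_{x\sim q}\!\left[f\!\left(\frac{p(x)}{q(x)}\right)\right],
\qquad f\ \text{convex, lower semicontinuous},\ f(1)=0,
```

and, because f is convex,

```latex
D_f(p\,\|\,q)\;\ge\; f\!\left(\mathbb{E}_{x\sim q}\!\left[\frac{p(x)}{q(x)}\right]\right)
= f\!\left(\int q(x)\,\frac{p(x)}{q(x)}\,dx\right)= f(1)=0,
```

with equality when p = q.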
In general, is going to be non-zero. It's going to be greater than or equal to zero. And so it makes for a reasonable objective function to try to minimize this quantity. So we could have one of p and q be the data distribution, the other one being the model distribution, and then we can try to minimize this as a function of the model. And if you the nice thing about f divergences is that if you plug-in different types of f's, you get many existing reasonable divergences that you might want to use to compare probability distributions. For example, if you choose f to be u log u and you plug it into this formula, then you will see that this expression evaluates to the usual KL-divergence, where the way you compare two probability distributions, p and q, is by going through everything that can happen, look at the density ratios and scoring them with respect to this log function. There are many other f-divergences. So the nice thing is that if you plug in different f's, you get different divergences. So we have the Jensen-Shannon divergence, which you get by choosing, for example, this odd-looking choice of u. You can get the usual KL-divergence by choosing u log u. You can get the reverse KL-divergence where you basically swap the argument of p and q in the regular KL-divergence by choosing minus log u as the function f. And you can get many more. You can get-- you can see here squared Hellinger total variation, alpha divergences, a lot of different kind of ways of comparing similarities between p and q by choosing a different f function. And what will turn out to be the case is that generative, adversarial network-like objectives can not only be used to minimize an approximate version of the Jensen-Shannon divergence, which corresponds to a very particular choice of f, but it can actually be used to optimize all of them. So you can pick any f that according to-- which satisfies those constraints that defines a valid f-divergence. And what we'll see is that we can use a GAN-like objective to minimize the corresponding f-divergence approximately. OK, now the basic setup is that, as usual, we're trying to, to train a generative model, so we have a data distribution, we have a model distribution. And it would be nice if we could choose an f and then either minimize the f-divergence between the model and the data or perhaps the f-divergence between data and model. Now, this is reasonable because we've seen that for any choice of f that satisfies those constraints, this objective function is nonnegative and is going to be 0 if the two arguments match. So if your generative model matches the data distribution, then this loss function is going to be 0, is nonnegative. And so if you set up a learning objective where you try to minimize this as a function of theta, you might be able to learn a good generative model. Now, the issue is that the expression, like when we started looking at KL-divergence the first time, doesn't look like something you can actually optimize. It doesn't look like something you can evaluate, and doesn't look like something you can actually optimize as a function of theta. First of all, you have an expectation outside with respect to, let's say, the data distribution. Well, we don't know p data, but we have access to samples, so we can approximate that expectation with a sample average. So that's not a problem. The real problem is that it requires you to evaluate the probability of x under the model and under the data distribution. 
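As a small numerical check of the definitions above, here is a sketch that evaluates a few f-divergences on toy discrete distributions where, unlike the real setting, both p and q can be evaluated exactly. The generator functions used (u log u for KL, minus log u for reverse KL, one half of |u minus 1| for total variation) are standard choices, and the toy p and q are made up.

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(p||q) = E_{x~q}[ f(p(x)/q(x)) ] for discrete p, q with full support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * f(p / q)))

kl         = lambda u: u * np.log(u)        # recovers KL(p || q)
reverse_kl = lambda u: -np.log(u)           # recovers KL(q || p)
total_var  = lambda u: 0.5 * np.abs(u - 1)  # recovers total variation distance

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
for name, f in [("KL", kl), ("reverse KL", reverse_kl), ("TV", total_var)]:
    print(name, f_divergence(p, q, f), f_divergence(p, p, f))  # second value is 0.0
```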
And even if you have a likelihood-based model, even if you can evaluate p theta, we can never evaluate probabilities under the data distribution. And so that density ratio is unknown. So like in the KL-divergence case where we have that log density ratio, and we couldn't actually evaluate it, and we could only actually optimize KL-divergence up to a shift, up to the entropy of the data. We have the same problem here, that this kind of objective function seems reasonable but doesn't look like something we can actually optimize. And if you try to swap, you try, OK, maybe we can do-- instead of doing f divergence between p theta and p data, we could try to do p data to p theta, and you kind of end up with something similar. We have, again, an expectation with respect to samples drawn from the model, which is fine, but again, you have this density ratio that is not something we can compute in general, even if you have a likelihood-based model. And so what we need to do is we need to somehow rewrite this divergence or approximate this expression. And write it into something that ideally does not depend on either the probabilities. Basically, it does not require you to be able to evaluate probabilities under the data distribution, and ideally not even according to the model distribution, because if the objective function does not involve neither p theta nor p data, and it only requires you to be able to sample from both of them, Then we're back in the setup, just like a generative adversarial network where we can basically use any sort of architecture to define p theta implicitly as whatever you get. If you were to, sample from a simple prior or feed the samples through a neural network, that which is the generator that defines a potentially very complicated p theta or x to the extent that we can write down the objective function in an equivalent way or approximately equivalent way, that does not require us to evaluate probabilities, then we can use a very flexible network architectures, like in generative adversarial networks. Questions? Yeah. [INAUDIBLE] p data of x, the probability of a-- if you're sampling from p data and it's just [INAUDIBLE] The question is, OK, is p theta-- p data x1-- and in general, no. that's basically the probability that the model assigns to every possible x. So that's just an there is an underlying, as usual, data-generating process that we assume we have access to only through samples. So we assume we have a training set that was sampled from p data and that distribution is not uniform. This is not the empirical distribution on the data set, which could be just like 1 over n, where n is the size of the data set. This is the true data generating process. You could set it up, trying to fit the empirical distribution on the data set, but that's not quite-- you could even think of that as an approximation of p data where you have a very simple kernel density estimator based on the training set. But that doesn't work particularly well because in high dimensions it's going to be-- it might not generalize. So you're kind of like overfitting too much to the training set. Is the reason why you do not even want p theta is to avoid likelihood-based learning? Yes, so this machinery works if you can evaluate p theta. But as we know, evaluating p theta constrains you in the models you can use. You have to then either use autoregressive models or you have to use invertible neural networks, which is undesirable. 
And if you could set up a learning objective where p theta is not even something that you have to evaluate-- you just need to be able to sample from it-- then that opens up the possibility of using implicit models: feed noise through a neural network, kind of like a simulator, where you don't even need to know how it assigns probabilities to data points; you just need to be able to sample from it. So that opens up a broader set of models, including these implicit ones. You mentioned that f should be convex, but [INAUDIBLE] a log, which is concave. But is it u log u because-- yeah. Just [INAUDIBLE], just related to that question, why is the KL-divergence-- why do we get the KL-divergence with u log u? Yeah, so the reason we need u log u is because, remember, KL-divergence is an expectation with respect to p of log p over q-- Over q, right? But we have-- [INAUDIBLE]? Yeah, so you have to multiply by u-- basically to change the expectation to one with respect to p. And in fact, if you want reverse KL, then it's just minus log u, because reverse KL would be an expectation with respect to the second argument. So the u in front is basically there to change the expectation from one under q to one under p. OK, so now let's see how we can actually move forward and come up with a GAN-like way of approximating this f-divergence that does not require likelihoods. Oh, a question. Why were we able to estimate the objective for GANs before, if GANs are an example of a [INAUDIBLE]? So the reason we were able to do it for, I guess, the Jensen-Shannon divergence is exactly what we're going to see now, which is basically a way to rewrite that expectation, which looks like something you might not be able to compute. If you look at the Jensen-Shannon divergence, it looks like something you're not able to compute. But if you have an optimal discriminator, intuitively, the optimal discriminator computes these density ratios for you. And so that's how you get around it: you are offloading this problem of computing the density ratios to a discriminator. And this might be good or bad, but the hope is that neural networks seem to work really well for supervised classification problems, and so we might be able to come up with reasonable estimates of these density ratios by training a classifier. Because to do well at classification-- if you're trying to classify real samples from fake samples-- you essentially need to estimate that: the optimal classifier requires you to know, for every x, how likely this point is to come from one versus the other. And so that's the trick. Cool. So the machinery for doing this goes through something called the Fenchel conjugate, or the convex conjugate of a function, which is defined like this. If you have a function f, you can obtain another function, f star, which is called the convex conjugate of f, by using the following expression. So f star is now going to be a function of t, and the value of f star at t is the solution to this optimization problem, where you're taking the supremum over all the u's in the domain of f, and then you have this relatively simple objective, which is just ut minus f of u. It seems a little bit random, but this convex conjugate has a bunch of useful properties. In particular, it's a convex function, even when f is not.
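The definition being introduced is:

```latex
f^\ast(t)\;=\;\sup_{u\,\in\,\operatorname{dom} f}\big(u\,t-f(u)\big).
```

For example, for the KL generator f(u) = u log u, setting the derivative t - log u - 1 to zero gives u = e^(t-1) and hence f*(t) = e^(t-1), which is indeed convex in t.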
And the reason is that the argument here in the supremum as a function of t is just basically a bunch of affine functions. It's just linear in t. And so the supremum of a bunch of convex functions is also convex. And so you can think of this as the supremum of a collection of functions that are indexed u. But as a function of t, they are all very simple. They're just linear functions. And then when you take the supremum of a bunch of convex functions, you get something convex. The other interesting property that we're going to use is that we can look at the conjugate of the conjugate, which we're going to denote as f star, which is just what you get if you take the conjugate of the conjugate of a function f. And again, you basically just apply the same definition, but now the function f becomes f star. And it turns out that this convex conjugate is a lower bound to the original function f. So it's always less than or equal to f. And so the proof is actually very simple. You can see that by the definition that we have up here, we have that for every choice of t. F star is at least as large as ut minus f of u because it's the supremum. So it has to be at least as large at all the possible values that you can get for any choice of u. And if you rearrange, you can move the F on the other side and you can write it as f of u is at least as large as ut minus f star if you just move this and this on the other side. And now this definition means that f of u-- because this holds for any t and for every u, then it means that f of u is at least as large as the sup, the supremum of ut minus f star of t, the convex conjugate, which is exactly the definition that we want. That's exactly the conjugate of the conjugate f double star. And so we see that this conjugate of the conjugate is always less than or equal to the original function that we started with. And it turns out that when f is convex, then this f the conjugate of the conjugate is actually equal to the original function. If you start with a function, you can get the conjugate, and then if you conjugate again. You go back to the original function when f is convex. Now, the reason this is going to be useful is that this is going to be similar to the ELBO, or the evidence lower bound. What we're going to do is we're going to try to write down f in our definition of the f-divergence in terms of the conjugate. And we're going to get bounds on the value of the f-divergence by going through this characterization of the f function and an f-divergence in terms of this convex conjugate. And so that's basically the idea that underlies this framework for training generative models based on f-divergences through a GAN-like objective. So what we do is we have the original definition of the f-divergence, which depends on this density ratio that we don't have access to. We can equivalently-- because f is convex, we can equivalently rewrite this in terms of the conjugate, which is just the conjugate of the conjugate f double star, which by definition is just this supremum. Recall that we're evaluating f double star at the density ratio, so we can write f double star as the supremum of t argument minus f star evaluated at t. That's just the definition of the conjugate of the conjugate. And now this is starting to look like something a little bit more manageable because we see that the density ratio that before was sort of inside the argument of this f function that we didn't know how to handle. Now, it becomes a linear dependence on the density ratio. 
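Putting the pieces together: since f is convex (and lower semicontinuous), f equals its double conjugate, so the f-divergence can be rewritten as

```latex
D_f(p\,\|\,q)
=\mathbb{E}_{x\sim q}\!\left[f^{\ast\ast}\!\left(\frac{p(x)}{q(x)}\right)\right]
=\mathbb{E}_{x\sim q}\!\left[\,\sup_{t}\Big(t\,\frac{p(x)}{q(x)}-f^\ast(t)\Big)\right],
```

so the density ratio now enters only linearly inside the supremum.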
Now, except for this annoying supremum, the dependence on p of x over q of x is outside the argument of f, which will allow us to basically simplify things. And what you can see is that for every value of x, there is going to be a different value of the density ratio, and there is going to be a different value of t that achieves the supremum. And we can denote the supremum that you get for any particular x as t star of x. This is just the value of the supremum when we're looking at data point x, and this is going to be what the discriminator is going to do later on. But you see that now we have an expression that is not too bad. It's an expectation with respect to q, which we know how to approximate using samples, and now the density ratio is outside the argument of this f function that we use to score it. And what this means is that, if you expand it, it will look something like this. The expectation with respect to q is just an integral where every x is weighted using q of x. And if you simplify it further, you notice that this q of x cancels with this q of x, so this whole expression basically just looks like the difference of two expectations: there is an expectation with respect to p, and there is an expectation with respect to q. But that's similar to what we had in the GAN framework, where we had an expectation of something with respect to the data distribution and an expectation of something else with respect to the model distribution, and that was giving us our estimate of the Jensen-Shannon divergence. Here you can see that the same idea holds more generally for different choices of f. Yeah, questions. What is a supremum [INAUDIBLE]? The supremum is the same as the max, basically. It's just that in some domains there does not necessarily exist a max, so it's a little bit of a technicality-- think of it as the max. Why is there a star sign for the t of x function? I'm just denoting it t star because it's just a way of denoting what this supremum over t evaluates to for any particular x. There's going to be a value of t that achieves the supremum, and I'm just going to denote it t star. Other questions? Yeah. In this case, do we still have to evaluate [INAUDIBLE]? So the good thing is that-- it still looks like, yeah, it still depends on p of x. But if you look at the formula-- do I have it? OK, maybe it comes up later-- it's an expectation with respect to p of x, and that you can approximate by taking samples, which we have because you have a training set. Yeah. I saw that you went ahead and pulled the supremum out over the t's. How do you parameterize, or how do you represent, T? Yeah, so the next step is that, equivalently, you can just say, well, there is going to be some function that we're going to call T that gives you this optimal value t star for every x. This doesn't change anything: for every x, there is an optimal choice of t, which comes from the supremum-- here I'm denoting it t star. Equivalently, you can say there is a function T that takes x as an input and gives you as an output the supremum in that definition of the convex conjugate. And then that's where you get the bound. You can say, well, this would require an arbitrarily flexible function T that can take any x and map it to the solution of this optimization problem.
Recall, this has a little bit of the flavor of amortized inference in a VAE, where you have this encoder that is supposed to take x as an input and then map it to the optimal variational parameters-- solving an optimization problem for you. This has the same flavor. But what we can say is, well, you can always optimize over a set of functions-- a set of neural networks-- and that would give you a lower bound on this f-divergence. So if, instead of optimizing over all possible functions, you optimize over a set of neural network architectures that you're willing to consider, you're always going to get something that is less than or equal, because you might not have sufficient flexibility for mapping x to the corresponding value t star of x that you would have gotten if you could actually solve this optimization problem exactly. But you definitely get a lower bound for any choice of this family of mappings that you use to map data points to, essentially, something that looks like an estimate of the density ratio. And the more flexible this family script T of neural networks is, the tighter this inequality is-- the better an approximation you get to the true value of the f-divergence that you started with. And back to your question: OK, does this still depend on p and q? You notice that these two terms are just expectations with respect to p and q, which, in our case, will be the data distribution and the model distribution. And so this is essentially the same as a generative adversarial network training objective. Remember, the objective that we were using for training a GAN is the min over g, and then we had the max over the discriminator of something that looked a lot like this: you were evaluating the discriminator on the data samples, you were evaluating the discriminator on the fake samples, and you were trying to distinguish them, contrast them, through the cross-entropy loss. And here we get something that has a very similar flavor, where we're evaluating this discriminator T over data samples and over model samples, and we're trying to essentially distinguish them by maximizing that quantity. So T is going to be [INAUDIBLE]? Yes, so when we do this optimization over T in this script T, that's where we optimize the discriminator, or a critic. And this script T is going to be a family of neural networks that we're going to choose T from. Usually, we represent our data distribution as p, and the proposed distribution, like p theta, is q, right? Yeah. So for this to hold, we don't need the discriminator to be optimal? So if you want a tight-- if you want to have an exact estimate of the f-divergence, then the discriminator has to be optimal. But if you don't, then you're going to get at least a lower bound. So the lower bound part holds even if the discriminator is not necessarily optimal. Don't you inherit the same problems we discussed before from having a lower bound? If you don't have an optimal discriminator, you just inherit all those problems, right? Yeah, it's a problem in the sense that you're optimizing a bound, so it might or might not be the right thing to do. And this is a lower bound, so minimizing a lower bound might not be going in the right direction. So, yeah, you still have those problems. In that sense, it's approximately optimizing an f-divergence.
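The resulting variational lower bound (the f-GAN bound) is:

```latex
D_f(p\,\|\,q)\;\ge\;\sup_{T\in\mathcal{T}}
\Big(\mathbb{E}_{x\sim p}\big[T(x)\big]-\mathbb{E}_{x\sim q}\big[f^\ast\!\big(T(x)\big)\big]\Big),
```

with equality when the family script T is flexible enough to contain the optimal T*(x) for every x; both expectations can be estimated with samples from p and q.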
If you could somehow optimize over all possible discriminators, then, and I guess you had infinite data and you were able to actually solve this optimization problem perfectly, then you could really optimize an f-divergence. But in practice, no, there is always approximations. So in this setting, not with the discriminator setting, T is supposed to represent the maximum value of x that exceeds the function of what is the-- your supremum definition of [INAUDIBLE]?? Yeah, so it's supposed to-- essentially-- it's essentially computing the conjugate of the conjugate of f, and it kind of like corresponds to finding supporting hyperplanes. So you are encoding the graph of the function as a convex hull. And that optimization problem is trying to find essentially tangent to the, to the graph of the function, so that's essentially what's going on in that optimization problem. Has anyone thought about how to do this for an upper bound instead of lower bound? Because I think it's more natural to minimize an upper bound, right? Yeah. So yeah, that's what I was saying, that in the outer optimization problem you're going to be minimizing this, and then this is a bound that goes in the wrong direction. And unfortunately, getting upper bounds is much harder. There is work where people have tried to come up with bounds, especially as it relates to-- it turns out that you need to do something similar if you want to estimate mutual information between random variables, which also basically involves some estimating density ratios. And there is literature and trying to get bounds there, but nothing that works particularly well. How is it better than KL, than just using KL-divergence? [INAUDIBLE] with respect to these versus KL-divergence? Is that the question? Uh-huh. Oh, it is because it doesn't need likelihood, and it achieves-- as we know, KL-divergence is all based on compression, which might or might not be what you want. These other f-divergences are not necessarily capturing a compression-like objective because you're evaluating the density ratios in a different way. You don't just care about the log of the density ratios. You can plug-in different fs that captures different preferences for how close is the model density ratios to the true density ratio that's captured through f. And that gives you more flexibility, basically, in defining a loss function for training your model. Which term in this pair of expectations, in the GAN world, corresponds to the discriminator and generator [INAUDIBLE]?? Yeah, so it depends what you want to choose. So it could either be p is data and q is model, or it could be vice versa. In both cases, you would end up with something that you can handle in the sense that it's a different of two expectations. And depending-- do you want KL data model, or do you want KL model data? Depending on what you want, you need to choose the right order and the right f that gives you the right thing. So it could be both. It doesn't matter. From the perspective of this, it's just the difference of the expectations. You have samples from both and you can estimate both of them. Yeah, Monte Carlo. Could you explain one more time how we went from the star x to having a supremum over all T, from the fourth last step to the third last step? How do we go from the supremum to the T star? From the T star to [INAUDIBLE]. OK, this one? Yeah. Yeah, so this one is basically saying that there's going to be an optimal T star for every x. 
If you are allowed to have an arbitrary function-- an arbitrarily complicated function that just maps every x to the corresponding t star of x-- then you get the same result. So you could say, for every x, I'm going to choose a t star, or you could say, I'm going to first choose a function that maps x's to t stars. And to the extent that this function can do whatever you want, there is no difference: you can memorize all the t stars into a table and then encode that table into the function T. So choosing the function, or choosing the individual outputs of the function across the different x's, is actually the same thing. So before, the motivation was that when we looked at GANs and at the optimal discriminator, we got the Jensen-Shannon divergence up to shifts and scales. But now we go the other way: we look at different f functions here, and we might get different discriminators? Yeah, essentially. Yes, that's the way to think about it. This is a generalization. In the original, simple GAN framework, we started from an expression that looked like this, and then we showed, oh, by the way, it gives you the Jensen-Shannon divergence. This is showing how you can actually start from any f-divergence you want and get a loss that looks like a GAN. And by the way, if you were to start from the Jensen-Shannon divergence, you would get exactly the GAN loss that we started with, up to shifts and scales. Cool. And so, yeah, the thing to note is that the lower bound is likelihood-free, in the sense that you can evaluate it just based on samples. And once you have this kind of lower bound on the f-divergence, you can get a GAN-like objective as follows. You choose an f-divergence of your choice. You let, let's say, p be the data distribution and q be the model distribution, defined implicitly through some generator g. And then you parameterize both using neural networks. So you have a set of neural networks with weights phi that define this function T that you have on the outside-- the discriminator, basically. And then you have some parameters theta that define the generator g. And then you have the f-GAN training objective, which is very similar to what we had before. It's, again, a minimax kind of optimization problem, where there is the inner maximization over phi, where you're trying to find a good approximation to the f-divergence by solving this optimization problem as well as you can-- trying to find weights phi that make this expression as big as possible. And again, this is no longer a cross-entropy, but it's something quite similar. And then, on the outside, you have a minimization over theta, because you're trying to minimize the divergence between the model and the data distribution. So just like in the GAN setting, the fake samples are coming from this implicit distribution defined by a generator with parameters theta, and you can try to choose the parameters theta that minimize this expression. And it's basically the same as what we had before-- if you were to choose the Jensen-Shannon divergence, this would correspond to what we had before. But fundamentally, what's going on here is that there is a generator that's trying to minimize the divergence estimate, and there is a discriminator that's trying to come up with the best possible bound on that f-divergence.
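A rough PyTorch-style sketch of one alternating f-GAN update, under the assumptions that `G` is the generator, `T_phi` is the critic network, and `f_star` is the convex conjugate of the chosen f (for the KL choice f(u) = u log u, f_star(t) = exp(t - 1)):

```python
import torch

def f_gan_step(G, T_phi, f_star, opt_g, opt_t, x_real, z):
    """One alternating update of
       min_theta max_phi  E_pdata[ T_phi(x) ] - E_ptheta[ f*(T_phi(G(z))) ]."""
    # Critic step: maximize the variational lower bound on the f-divergence.
    x_fake = G(z).detach()
    t_loss = -(T_phi(x_real).mean() - f_star(T_phi(x_fake)).mean())
    opt_t.zero_grad(); t_loss.backward(); opt_t.step()

    # Generator step: minimize the estimated divergence.
    # (The E_pdata term is constant in theta, so only the model term matters.)
    g_loss = -f_star(T_phi(G(z))).mean()
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return t_loss.item(), g_loss.item()
```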
So if I do take a KL-divergence, can we still show that-- using this equation itself distills down to maximal? So it's not going to give you exactly maximum likelihood because it's an approximation unless you have infinitely flexible discriminators. What people have shown is that if you were to-- in the original f-GAN paper, they basically tested a bunch of different f's for f-divergence. And what they've shown is that if you choose the f corresponding to KL-divergence, then you tend to get samples that indeed give you better likelihoods, as you would expect, because you're approximating the KL-divergence. But as we discussed, that's not necessarily the one that gives you the best sample quality. And you might be getting better sample quality if you were to choose different f's in that paper. Cool. So that's the high-level takeaway. You're not restricted to KL-divergence, exact KL-divergence, or Jensen-Shannon divergence. You can actually plug in other f-divergence. And using the f-GAN training objective, you can still approximately optimize that notion of that divergence. Now, the other thing that you can do using a very similar machinery is optimize, yet a different notion of divergence, which is based on this idea called the Wasserstein GAN. And the motivation for moving beyond f-divergence is that f-divergence are nice, they're very powerful. But there are issues when the distributions p and q don't share, they have, let's say, disjoint support, which can happen, especially early on in training. The samples coming from your generator could be very, very different from the ones that are in the training set. And if that happens, you can have this weird discontinuity where the KL-divergence is like a constant, maybe infinity or something, and then suddenly shifts to some better value the moment the supports match. And that's a problem because, during training, you don't get good signal to go in the direction of trying to make the support of your model distribution close to the support of the data distribution. And you can see an example here. Imagine that you have a super simple data distribution where all the probability mass is at zero, and then you have a model distribution where you put all the probability mass at this point, theta. So if theta is zero, then the two distributions are the same. But if theta is different from zero, then these two distributions don't share any-- the supports are different. And if you look at, let's say, the KL-divergence is going to be zero if the distributions match, and it's going to be infinity for any other choice of theta. So if we're trying to train this generative, adversarial network by optimizing theta to reduce the KL-divergence, you're not going to get any signal until you hit the exact right value that you're looking for. And if you look at the Jensen-Shannon divergence, you have a similar problem where basically it's zero if you have the right value, and then it's a constant for when you have the wrong value. But again, there is no signal. There is no notion that theta 0.5 is better than theta 10. Ideally, that's something you would want because if you have that, then you can do gradient descent, and you can try to get-- to move your theta closer and closer to the value you want. But this divergences can have trouble with these situations. And so the idea is to try to think about other notions of distance or divergences that work even when p and q have disjoint support. 
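Summarizing the point-mass example just described (all of p's mass at 0, all of q_theta's mass at theta, using the standard definitions of each divergence, and with the Wasserstein value taken from the calculation worked out a bit later in the lecture):

```latex
D_{\mathrm{KL}}(p\,\|\,q_\theta)=
\begin{cases}0 & \theta=0\\ \infty & \theta\neq 0\end{cases},
\qquad
D_{\mathrm{JS}}(p,q_\theta)=
\begin{cases}0 & \theta=0\\ \log 2 & \theta\neq 0\end{cases},
\qquad
W(p,q_\theta)=|\theta|,
```

so only the Wasserstein distance varies smoothly with theta and gives a useful training signal.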
And the support is just the set with-- of points that have non-zero probability under p or q. And so, that the one way to do this is to use this thing called the Wasserstein or the earth distance. And the intuition is something like this. Let's say that you have two distributions, p and q. And you can think of the-- and they are just, let's say, one-dimensional. So you have the densities that I'm showing there and they are just mixtures of Gaussians in this case. And you can ask, How similar are p and q? And one reasonable way of comparing how similar p and q are is to say if you think of the probability mass as being piles of earth or piles of dirt that you have laying out on this x-axis, you can imagine how much effort would it take you if you were to shovel all this dirt from this configuration to this other configuration. And intuitively, the further away you have to move this earth, the more cost you pay, because you have to take more time. And p and q-- they are both normalized. So the amount of earth that you have on the left is the same as the amount you have on the right, but the more similar p and q are the same that you don't have to do any work. If the probability mass under q is very far from the one under p, then you have to do a lot of work because you have to move all this earth from various points where you have it on the left to the points where you have it on the right. And the good thing about this is that we'll see that it can handle situations where the supports are different. This kind of definition doesn't care if the supports of p and q are disjoint or not. And it defines a very natural notion of distance, that is, that varies smoothly as you change the shape of p and q. And the way to mathematically write down this intuition of looking at the cost of transporting earth from configuration p to configuration q is to set up an optimization problem, which looks like this. So the Wasserstein distance between P and q is going to be this infimum, which think of it as the minimum, basically. And this infimum is over all joint probability distributions over x and y. You can think of x as being the distribution, p being defined over x and q being defined over y, let's say, as you look at joint distributions over x and y such that the marginal over x matches p and the marginal over y matches q. And what you do is, over all these probability, joint probability distributions, that you have here, that you are optimizing over, you look at the expected cost that you get when you draw x and y from this joint distribution. And the cost is the thing that we talked about, which is basically how much effort it takes to go from x to y. And in this case, this is measured with this L1 distance. You can choose other choices, but for now, you can think of it basically the absolute value in 1d of x minus y. And you can think of this gamma x, y, which is a joint distribution over x and y, as basically telling you how much probability I'm moving from x to y. And so what this is saying is that this condition here, that the marginal over x is p of x. This is saying that at the beginning, you can't you can't move more probability mass than what you started from at x. And the fact that the marginal over y is qy means that the final result, the amount of earth that you find at position y, is indeed the one that you want to get in the final configuration, which is the one you're trying to get. And this objective function here is telling you what is the cost of moving earth basically from x to y. 
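The optimization problem being described is:

```latex
W(p,q)\;=\;\inf_{\gamma\,\in\,\Pi(p,q)}\;
\mathbb{E}_{(x,y)\sim\gamma}\big[\,\|x-y\|_1\,\big],
\qquad
\Pi(p,q)=\Big\{\gamma:\ \textstyle\int\gamma(x,y)\,dy=p(x),\ \ \int\gamma(x,y)\,dx=q(y)\Big\},
```

where gamma is the transport plan and the two marginal constraints encode the initial and final piles of earth.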
So equivalently, you can think of the conditional distribution of y given x as telling you which fraction of the earth that I have at location x I am going to move to the different y's. And so you can see that then, if you look at this expectation, this is telling you, in expectation, how much is this going to cost you. Well, you look at x, you look at the y's you're moving the earth to, you look at the difference between the two, and that tells you how much it costs you-- for a given x, if you take the expectation with respect to gamma of y given x, it's telling you the cost of moving all the probability mass that you have at x to the places you want to move it to, which, because of this constraint here, has to match the final result that you want. And so that's basically the optimization problem that defines this intuition of how much work you have to do if you want to morph this probability distribution p into the probability distribution q that you have as an outcome. And just to get a sense of what this looks like, in the previous example where we had this data distribution, where all the probability mass is at zero, and this model distribution where all the probability mass is at theta, the KL-divergence between these two objects is not very useful. But if you think about what the earth mover distance is here, how much work do you need to do? You want to move all the probability mass from here to here-- basically you have a big spike at 0 on the left, and then you have a big spike at theta on the right. How much work do you need to do? X minus theta? Yeah, so it's the absolute value of theta, technically. And so you can see that now it's starting to be more reasonable, in the sense that the closer q theta is to the target p, the smaller this divergence is, which you might expect to give you a much better learning objective, because you have much better gradients-- you have a notion of how close you are, how much progress you're making towards achieving your goal. And to the extent that you can really compute this thing, and we'll see how to do that, this would be a pretty good learning objective to use. [INAUDIBLE] confused about definitions of p and q and the distributions. There are two distributions [INAUDIBLE]. Are you saying we can have multiple joint distributions of p and q? Yeah, there is an infinite number of joint distributions that have given marginals. If you think about it, this is actually a pretty mild set of constraints. I was just saying that for every x, the marginal under gamma has to match the distribution that you started from. So this is like saying that if you think of gamma of x, comma, y as the amount of earth that is moved from x to y, this is saying that the total amount of earth that you move has to be the amount that you had to begin with. And the other constraint is saying that if you look at the amount of earth that you get at the end, after you've moved everything from all the various x's, it has to match what you want, which is the final result, the final q of y-- the amount of earth that you want after you've done all this transport, after you've moved all the probability mass. So, yeah, if you have two random variables, you can think about many different joint distributions with the same marginals.
If you think about two binary random variables, these two random variables could be independent, they could be highly dependent, and the joint distribution is what tells you how they are related to each other. So there are many joint distributions with the same marginals. And in this case, the relationship between them is telling you how coupled they are and where you're going to move probability mass from one to the other. Yeah. Can you go over what exactly the [INAUDIBLE] term inside the expectation indicates [INAUDIBLE]? Yeah, so basically what this is saying is-- it's just the L1 norm, which in 1D you can think of as just the absolute value of x minus y-- and this is just saying that when x and y are far away, you're going to pay a higher cost, because transporting from here to Palo Alto is cheaper than from here to San Francisco. And so you can think of the x-axis as kind of like measured in kilometers or something. And then x minus y is just like the distance that you have to travel to go from one point to the other. And so ideally, you would want to choose a gamma such that when you sample from it, x and y are very close to each other, so you minimize the amount of work that you have to do. But that's non-trivial, because you also have to satisfy these constraints-- that at the end of the day you've moved all the probability mass that you have to move, and you get this q configuration as a final result, which is this constraint that is saying the marginal over y is q of y. And this constraint is just saying you cannot create earth out of nowhere. You have to move the earth that you started from-- you have to go from the configuration that you have on the left, which is p, to the configuration that you have on the right, which is q. And these constraints here on the marginals are basically just saying that that's the initial condition, that's the final condition. And that's the cost that you incur whenever you move earth from x to y. [INAUDIBLE] The infimum-- think of it as the minimum, yeah. So again, basically, you want to choose a gamma of y given x that puts as much probability mass on y's that are close to x as possible. But you can't always do it, because sometimes you do have to move mass far away. If you need probability mass out here and you didn't have any, then you have to take it from somewhere. And this optimization problem tells you what's the optimal transport plan that moves the mass from one setting to the other. And again, we're basically in a situation where the original objective function is reasonable, makes sense. It would be good to optimize, but it doesn't look like something we can actually compute, because, as usual, p and q would be a model and a data distribution. We don't know how to evaluate probabilities according to one or the other, so that doesn't look like something we can optimize. But it turns out that there is a variational characterization-- there is a way to basically write it down as the solution of an optimization problem-- that we can then approximate using some discriminator or some neural network. And it's possible to show that this Wasserstein distance or earth mover distance is equal to the solution to this optimization problem where you have a difference of expectations, one with respect to p and one with respect to q. Again, it's very similar to the GAN setting.
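The "difference of expectations" being referred to here is the Kantorovich-Rubinstein dual form of the Wasserstein distance. Writing it out (this is the standard statement, not copied from the slide):

```latex
D_w(p, q) \;=\; \sup_{\|f\|_{L} \le 1} \; \mathbb{E}_{x \sim p}\!\left[ f(x) \right] \;-\; \mathbb{E}_{x \sim q}\!\left[ f(x) \right],
```

where the constraint means f is 1-Lipschitz, that is, |f(x) - f(y)| is at most ||x - y|| for all x and y.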
And the only difference is that now what we're doing is we're optimizing over functions that have Lipschitz constant 1, which basically means you need to optimize over all functions that don't change too rapidly. And so the solution to this optimization problem over these scalar functions f is actually equal to the Wasserstein distance. And notice here, we don't have f stars anymore. This is really just the difference in expectations between p and q. And so if you didn't have any constraint, then you could make that thing blow up very easily, because you could just pick a point where the probabilities are different, and then you could just increase the value of f at that point arbitrarily. And then you could make this objective here extremely large or extremely small. But you cannot do it. You cannot choose an arbitrary function f, because you have this constraint that basically the shape of f cannot change too rapidly. It has to have Lipschitz constant 1, which basically means that if you go through the graph of f and you take any two points, x and y, the slope that you see is bounded by 1, essentially. And again, this optimization problem by itself is not quite something we can solve. But in practice, what you can do is approximate this inner optimization problem over all such functions with a discriminator. Think about what this objective is doing: you're looking for points where the probability mass under p and q is different. If you can find these points, then you can increase the value of f there, and you can get a high value in that difference of expectations. And so you can approximate that problem of trying to find x's that are given different probabilities under model and data by training a neural network, which is, again, going to be some kind of discriminator. And at this point, there is, again, no cross-entropy loss. You're just trying to find a network that takes high values on the data points and low values on the fake data. And to enforce the Lipschitzness-- enforcing Lipschitzness is hard. But in practice, what you can do is, as usual, you don't want this network to be changing arbitrarily fast. And so, in practice, what you do is you would either clip the weights, or you would enforce a penalty on the gradient of this discriminator so that, again, it cannot change too much, it cannot change too rapidly. I think you have mentioned this. I was trying to understand the relationship between the duality and the earth mover distance. So are they equivalent? So the earth mover distance is this quantity you have on the left. So to the extent that you could solve this optimization problem on the right, then you would be able to compute exactly the earth mover distance. OK, so then-- so basically, the optimized solutions on the right-hand side were equivalent to the-- Yeah. And intuitively, this function f is telling you where there is a discrepancy in probability mass between p and q. So if there are x's that are given different probabilities under p and q, then f will try to take a large value there, ideally. But because of this Lipschitzness constraint, you cannot make it arbitrarily big, you cannot go to infinity. So you have to somehow be smooth and, at the same time, try to find differences between p and q. [INAUDIBLE] this optimization problem is guaranteed to find the solution? So if you can solve this one, yes, this will give you exactly the Wasserstein.
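To make the gradient penalty idea concrete, here is a minimal PyTorch-style sketch of one critic update. The names (critic, generator, lambda_gp) are placeholders, and this is an illustrative sketch of the idea rather than the exact recipe from any particular paper:

```python
import torch

def critic_loss_wgan_gp(critic, generator, real_x, z, lambda_gp=10.0):
    fake_x = generator(z).detach()                 # samples from the model
    # "Difference of expectations": the critic should be high on real, low on fake.
    wasserstein_term = critic(fake_x).mean() - critic(real_x).mean()

    # Soft Lipschitz constraint: penalize critic gradients (w.r.t. its inputs) whose
    # norm deviates from 1, evaluated at random interpolates of real and fake points.
    eps = torch.rand(real_x.size(0), *([1] * (real_x.dim() - 1)), device=real_x.device)
    interp = (eps * real_x + (1 - eps) * fake_x).requires_grad_(True)
    grad = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    penalty = ((grad.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

    return wasserstein_term + lambda_gp * penalty  # minimize w.r.t. the critic's parameters
```

The weight-clipping alternative mentioned above would instead clamp the critic's parameters to a small interval after each update (for example, p.data.clamp_(-0.01, 0.01) for each parameter p).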
The problem is that in practice, you cannot-- like in the f-GAN setting before, you can't really optimize that exactly. And so in practice, you would use approximations where you just use some discriminator. And you try to make sure that the discriminator is not too powerful. And you try to restrict basically how powerful the discriminator is by, for example, having a penalty term on the gradient of the discriminator with respect to its inputs, so that it cannot change too much. And this doesn't give you bounds. So unlike the f-divergence setting, this is just an approximation. It doesn't necessarily give you bounds. So [INAUDIBLE] we have no guarantee that the Lipschitz constant will be [INAUDIBLE]? Yeah. OK. Yeah. It's very interesting to me. Even though the math derivations are very different, the end objectives-- they all look awfully similar to each other? [LAUGHTER] Yeah, so they're all based on a very similar idea, where you're trying to find a witness function, a discriminator, or some kind of classifier that is trying to distinguish samples coming from p from samples coming from q. And you have to restrict what this witness function or this classifier does in some way, and/or you change the way you're scoring what this classifier does. And depending on how you do that, you measure similarity basically in different ways. And if you restrict this discriminator to have a Lipschitz constant of at most 1, then you get Wasserstein. If you use an arbitrary function, but then you score it with respect to that f star, then you get an f-divergence, and so forth, yeah. But the main advantage of this is that it's much easier to train. So in practice, this is very often used. And you can see an example here where you can imagine a setting where you have real data that is just a Gaussian that is here. So you see all the samples that are coming-- these blue dots that are lying around here. And then you have a model. Let's say you start out with a bad initialization for your generator, and most of your samples are, again, a Gaussian, but somehow the means are different. And so all your samples are here, these green dots. And if you fit the discriminator, the discriminator will do a very good job at distinguishing between the blue samples and the green samples. And it will be a sigmoid, but it's extremely steep. So basically, everything to the left here will be classified as real, and everything to the right will be classified as fake. But it's almost entirely flat. And so when you update the generator to try to fool the discriminator, you don't get a whole lot of signal in terms of which direction you should move these data points, because the red curve is too flat. And so it's very tricky to actually get this model to learn-- to be able to learn how to push the fake data points towards the real data. But if you think about the Wasserstein GAN critic, which is just the discriminator, it's almost like a linear function. It's this light blue curve. And if you look at it from the perspective of the generator, and you're trying to reduce the same objective function that was being optimized by the critic, you have a much better learning signal to push your data points to the left. And you kind of know that, yeah, these data points out here are much better than the data points out there. How do we calculate this, the Wasserstein GAN curve, the critic?
In this case, I guess you can even do it in closed form. I don't know if they did it-- you could probably also approximate it somehow. But if it's just two Gaussians, I think you can do it in closed form. But why are the slopes different? They have different directions. [INAUDIBLE] discriminating? Yeah, so I guess you would still try to minimize the-- so this is the decision boundary, which is not-- well, yeah, I guess you would still go. You would try to-- yeah, because it would be the opposite. So you're trying to make it fake. So you would still push towards the left. And for the W-GAN, from the perspective of G, you would still try to minimize this expression that you have inside. And so again, you would push the points to the left, because this blue curve, the light blue curve, goes down. And so I think it's just that it's plotting the probability of fake instead of plotting the probability of real. So that's why it's going in the wrong direction. OK, yeah. Can you go over again how using this form of divergence is better than the f-divergence? Yeah, so the reason-- you can actually see it here. And it's just that you have basically a better learning signal. And it's similar to what we were talking about here, that if the distributions are too different, then with respect to the KL-divergence, you might not have good enough signal that tells you-- in this case, that putting the probability mass at 1/2 is better than putting the probability mass at 10. With respect to the Wasserstein distance, this would show up, because there would be a difference between those two settings. And 1/2 is closer to the ground truth than 10. And so you would be able to do gradient descent on that objective with respect to theta, and you would get closer and closer. With respect to KL, you don't quite see it. And in practice, you can also see it here, where basically doing optimization from the generator perspective by minimizing the light blue curve is much easier than trying to fool the discriminator in the regular GAN setting, because of these vanishing gradients-- it's too flat. So [INAUDIBLE] we understand this Wasserstein-- W-GANs have a better statistical test or something? Yeah, I don't know if you can formally prove that it's more powerful than a regular GAN. I think it probably-- It is like [INAUDIBLE] GANs tell if it's a fake or not, right? But this shows you the distance between the-- Yeah, they're just measuring distance in a different way. And I don't know, in general-- you would probably have to make some assumptions on p and q to say which one is better and which one is worse. I think, in general, from this, it's more like-- if you had access to infinitely powerful discriminators, I think in that world, both would probably work. I think in practice, you always have approximations, and you are optimizing over restricted families of discriminators. And you have this minimax thing where you cannot actually solve the problems to optimality. And it turns out that optimizing the Wasserstein-type objective is much more stable in practice. Cool. Maybe the last thing we can briefly talk about is how to actually infer latent representations in GANs. This is going to be a bit of a shift in terms of the topic. But once you train a GAN, you have these latent variables that are mapped to observed variables. And you might wonder-- it looks like a VAE-- to what extent are you able to recover z given x?
Let's say you wanted features or something like that. And one way to do it-- well, the problem is that it's no longer an invertible mapping, and you don't have an encoder. So in the flow model setting, you just invert the generator. So given an x, you figure out what was the z that would be mapped to that x. In the variational autoencoder, you have the inference model-- you have the encoder that is doing that job for you. In a GAN, you don't quite have it. So one way to get features from a GAN is to actually look at the discriminator. The discriminator is trying to distinguish real data from fake data. So presumably, to do well at that job, it has to figure out interesting features of the data. And so you can try to take the discriminator and fine-tune it on different tasks, or take the representations that you get towards the end of the neural network and hope that those features are actually useful for other kinds of tasks. If they were helpful for distinguishing real data from fake data, they might work for other tasks as well. But if you want to get the z variables of the generator, like in the VAE, then you need a different learning algorithm. And the problem is that in a regular GAN, you're basically just looking at the x part. And somehow, we need to change the training objective to also look at the z part, the latent variables. And the way to do it is to basically change the way you set up the two-sample test, or this likelihood-free learning objective, to not only compare the x samples that you get from the model to the real data samples, but to also look at the representations, the kind of z's that produced the samples that you see. And the thing is that when you sample from the model, you get to see both the x and the z part, because you're sampling them yourself. But in the data, you only get to see the x. There is no corresponding z. And so the way to do it is to essentially, just like in a VAE, introduce an encoder network that will map x to the corresponding latent representation z. And so the architecture looks like this. It's called the BiGAN because it goes in two directions. So you have latent features that get mapped to data through the generator, and then you have data that gets mapped to latent features through some encoder network. And then, the job of the discriminator is not only to distinguish G(z) from x-- fake samples from real samples-- but now the discriminator is going to try to distinguish fake samples paired with the latent variables from the model, from real samples paired with latent variables inferred from the encoder. So it's going to work on pairs of inputs, x and z. Sometimes the x's are real, sometimes they are generated by the model, and same thing for the z's-- sometimes they are produced from the prior, and sometimes they are produced by feeding real data to the encoder. And then, basically everything is the same. You train the generator, trying to fool the discriminator. You train the encoder, and you train the discriminator trying to distinguish the samples. And to the extent that this works-- so the discriminator observes these pairs, and the discriminator is trying to do as well as it can at distinguishing these two kinds of pairs. And after training, basically you can get the samples from G, and you use the encoder to get the latent representations. And yeah, that's sort of like the idea. It's pretty simple.
It's like an extension of GANs where you have another mapping, which is also deterministic, going from data to latent features. And then you let the discriminator operate not only on data, but on data, comma, latent. And so to the extent that the discriminator cannot distinguish the z's that are produced by the generative procedure from the z's that are produced by the encoder, then you might expect that the encoder is producing latent representations that are similar to the ones that the GAN would have used for generating a data point. And so effectively, the encoder is inverting the generative procedure. So it's very similar to a variational autoencoder, except that E is a deterministic mapping and is not trained by minimizing KL-divergences like in the ELBO, but it's trained by minimizing some kind of two-sample test that is being optimized by a discriminator. But it's the same high-level intuition. Yeah. Is that a concatenation of x and E(x) that we get in the [INAUDIBLE]? Yes, it's the concatenation. So you need to be able to distinguish pairs of real data and features produced by the encoder, from pairs of fake data and latents produced from the prior. So if you cannot distinguish them, then the features that you get from the encoder of x are going to be very similar to the z's that were actually used for generating data points. And so that's how the encoder is trained. Everything is trained adversarially. So the discriminator will see latents and images? Yeah. Awesome. It sees pairs. And you mentioned the pairs-- are there basically four different combinations, the real image and the deterministic picture, the encoded feature with the generated image, the encoded feature with the original image? In this version, no, there's only two options. But you could imagine a version where you're trying to enforce something stronger, where maybe you want-- it's more like a cycle consistency, which I guess we didn't have time to talk about today. Here there's only two-- there's basically samples from the model and the corresponding latents, versus real data and the corresponding latents. And it says after training is complete, new samples are generated via G, and the representations are inferred via E. But when we generate the new samples with G, didn't we first come up with the latent representation? Why do we still need to infer it [INAUDIBLE]? So yeah, that's meant to be on real data. So let's say that then you want to do transfer learning, or you want to do semi-supervised learning, or you want to do clustering or something-- how do you get the features from a data point x? You don't use G, because you don't know how to invert it. But you've trained a separate model, this encoder model, that is basically trained to invert G. And so on real data at test time, you just use E to get the corresponding latent. So it is two different-- Two different-- like a VAE, two different pieces that are trained together to fool a discriminator, in this case, instead of minimizing an ELBO. And right after [INAUDIBLE] there's something [INAUDIBLE] in this graph-- where do we sample the z's from? The z's are all the same. They are sampled from a prior, so it's the same training as a GAN. So the z part doesn't change. So the z's are from the top half. Those are the z's from the prior, and then you pass them through the generator to produce data. OK, so we're just training E to make it map those to-- map the real images to the [INAUDIBLE]?
Essentially, yes, except that in a VAE that matching is done via KL-divergence. Here, that matching is done adversarially, basically. So the outputs of the encoder should be indistinguishable from z's that are sampled from the prior, where indistinguishable is measured not with respect to KL. Now it's measured with respect to-- a discriminator should not be able to tell that the stuff that comes out of the encoder, when it's fed real data, is different from the real latent variables that you sampled yourself from the prior. So it has the same flavor. If you remember, in the VAE, we had a very similar kind of intuition, that what comes out from the encoder should be indistinguishable from the latents that you generate yourself. In that case, we were enforcing that indistinguishability using KL. Here, we're using a two-sample test, a discriminator. At inference time, the latents are sampled from E based on the last class selected? But E requires an x as the input. What do we input to E? The image that you want to get the representation for. So like in a VAE, you have an x, you feed it through the encoder, and you get the corresponding latents, and then you do whatever you need to do. Yeah, but in a VAE, at training time-- at inference time, we don't use the encoder at all, right? Yeah, at inference time, if you want to just generate, you don't use the encoder. But if you want to get features, then you still use the encoder, just like here.
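Putting the BiGAN discussion together, here is a minimal PyTorch-style sketch of how the (x, z) pairs are formed and scored. G, E, D, and the shapes are hypothetical placeholders, and the standard cross-entropy GAN loss is used here as one reasonable choice of adversarial objective, not necessarily the exact formulation from the paper:

```python
import torch
import torch.nn.functional as F

def bigan_losses(G, E, D, real_x, z_prior):
    fake_x = G(z_prior)            # decode a prior latent into a fake data point
    inferred_z = E(real_x)         # encode a real data point into a latent

    # The discriminator sees concatenated (x, z) pairs.
    real_pair = torch.cat([real_x.flatten(1), inferred_z], dim=1)   # (x, E(x))
    fake_pair = torch.cat([fake_x.flatten(1), z_prior], dim=1)      # (G(z), z)
    d_real, d_fake = D(real_pair), D(fake_pair)

    # D tries to tell the two kinds of pairs apart...
    d_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    # ...while G and E are trained jointly to fool it (labels flipped).
    ge_loss = F.binary_cross_entropy_with_logits(d_real, torch.zeros_like(d_real)) + \
              F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    # (In practice you alternate updates, detaching G/E outputs when updating D.)
    return d_loss, ge_loss

# At test time, features for a real image x are just E(x), as discussed above.
```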
Stanford CS236: Deep Generative Models (2023), Stefano Ermon -- Lecture 1: Introduction
Welcome. Super excited to see so many people interested in deep generative models. So I'm Stefano. I'm the instructor of this class. I've been teaching this course for a few years now. I guess we started back when before all the generative AI hype and before this topic was so popular in the industry. And so now you're lucky you get to experience a pretty mature version of this course. And it's going to be a pretty exciting quarter. This is one of the hottest topics in industry right now. That is, of course, a lot of excitement around the language models, about generative models of images, of videos. And the goal of this class is to give you really the foundations to understand how the methods that are used in industry and in academic papers actually work and hopefully get up to speed with all the really fundamental concepts that you need in order to build a generative model and maybe, in the future, develop better systems, develop better models, deploy them in industry, start your own company. That is sort of like leveraging these technologies. So at a high level, one of the reasons I think these models are becoming so important in AI and machine learning is that they really address a kind of like the fundamental challenge that we encounter in a lot of subfields of AI like computer vision, NLP, computational speech, even robotics, and so forth. If you think about it in a lot of these settings, the fundamental challenge that you have is to make sense of some complex high-dimensional signal or object like an image or a speech signal or a sequence of tokens or a sequence of characters written in some language. And this is challenging because from the perspective of a computer, if you think about an image, it's just like a big matrix of numbers. And the difficulty is making sense of it, trying to figure out how to map that very complex high-dimensional object to some kind of a representation that is useful for decision-making for a variety of tasks that we care about like figuring out what kind of objects are in the image or what kind of relationships they are in, what kind of materials they are made of, if they are moving, how fast, things like that. And now similarly, if you think about NLP, it's a similar story. You have a sequence of characters. And you need to make sense of it. You need to understand what's the meaning, and maybe you want to translate it in a different language. The challenge is really understanding what these complex objects really mean. And understanding these objects is hard. It's not even clear what it means to understand what an image means. But I like to use this analogy inspired by this quote from Richard Feynman. At some point, he said, what I cannot create, I do not understand. I think this was actually what they found on his whiteboard after he passed. And what he meant, in this case, is that he was talking about mathematical theorems, and he was saying, if I can't really derive a proof by myself, I'm not really understanding the concept well enough. But I think the analogy is that we can look at the contrapositive of this and kind of like the philosophy behind the generative modeling approaches in AI, is that if I claim I'm able to understand what an image means or what a piece of text means, then I should be able to create it, right? I should be able to generate new images. I should be able to generate new text. So if you claim you understand what an apple is, then you should be able to kind of like picture one in your head, right? 
Maybe you're not able to create a photo of an apple, but you know sort of like what it means. Or if you claim you can speak Italian, then you should be able to sort of like produce. You should be able to speak in that language. You should be able to write text in that language. And that's kind of like the philosophy behind this idea of building generative models of images or generative models of text or multimodal generative models. If you have these kind of capabilities-- so you're able to generate text that is coherent and make sense like in large language models, like ChatGPT, those kind of things-- then it probably means that you have a certain level of understanding, not only of the rules, the grammar of the language, but also about common sense, about what's going on in the world. And essentially, the only way you can do a good job at generating text that is meaningful is to have a certain level of understanding. And if you have that level of understanding, then you can leverage it, and you can use it to solve all the tasks that we care about. So how do we go about building a software, writing code that can generate, let's say, images or can generate text? This is not necessarily a new problem. It's not something that we are looking at for the first time. People in computer graphics, for example, have been thinking about writing code that can generate images for a very long time, and they made a lot of progress in this space. And so you can kind think of the setting as something like where you're given a high-level description of a scene. Maybe there are different kinds of objects of different colors, different shapes. Maybe you have a viewpoint. And the goal is to kind of write a renderer that can produce image that corresponds to that high-level description. And again, the idea is that if you can do this, then you probably have a reasonable understanding of what it means what the concept of a cube is, what the concept of a cylinder is, what colors mean, the relative position. And in fact, if you can do this well, then you can imagine a procedure where you try to invert this process. And given an image, you can try to figure out what was the high-level description that produced this scene and to the extent that you don't have sort of like computational constraints and you can do this efficiently. This gives you a way to think about computer vision in terms of inverse graphics. So if you have a process that can generate images well and you are somehow able to invert it, then you are making progress towards computer vision tasks because you are able to really understand this high-level descriptions of the scenes. And this is not going to be a course on computer graphics. We're going to be looking at very different kind of models, but they will have a similar structure. Many of them will have a similar structure where there's going to be a generative component. And then often, there's going to be latent variables that you can kind of like infer, given the raw sensory inputs, in this case. And you can use that to get features, to get representations. You can use them to fine-tune your models to solve computer vision tasks. And so this kind of philosophy and this kind of structure will actually show up in the kind of models that we'll build in the class. So the kind of models we're going to work on, they are not graphics-based. They are going to be statistical models, so we're only going to be talking about models that are based on machine learning techniques. 
And so the generative models that we're going to work with are going to be based on a combination of data and prior knowledge. And so priors are always necessary, but you can imagine that there is a spectrum, right? You can rely more on data, or you can rely more on priors. And you can kind of like think of computer graphics as sort of like lying on this extreme, where you leverage a lot of knowledge about physics, about light transport, about properties of objects to come up with good renderers. This course is going to be focusing on methods that are much more data-driven, where we're going to be trying to use as little prior knowledge as possible and instead leverage data, large data sets of images or text perhaps collected on the internet. And yeah. So at a very high level, these generative models are just going to be probability distributions over, let's say, images x or over sequences of text x. And so in that sense, they are statistical. And we're going to be building these models using a combination of data, which you can think of as samples from this probability distribution. And in this case, the prior knowledge is basically going to be a mix of the kind of architectures you're going to be using, the kind of loss functions that you're going to be using for training the models, the kind of optimizer that you're going to be using to try to reduce the loss function as much as possible. And this combination, having access to good data and the right kind of priors, is what enables you to build hopefully a good statistical generative model. But at the end of the day, kind of like the abstraction is that we're going to be working with probability distributions. And you can just think of it as a function that takes any input x, let's say, an image, and maps it to some kind of scalar probability value, which basically tells you how likely this particular input image x is according to my generative model. And this might not look like a generative model directly. Like, how do you actually generate data if you have access to this kind of object? The idea is that you can basically generate samples from this probability distribution to create new objects. So you train a model. You learn this probability distribution. And then you sample from it. And by doing that, you generate new images that hopefully look like the ones you've used for training the model. So that's the structure. So in some sense, what we're trying to do is we're trying to build data simulators. So we often think of data as an input to our machine learning problems. Here we're kind of like changing-- we're turning things around, and we're thinking of data as being an output. So we need to think about different kinds of machine learning models that we can use to simulate, to generate, data. Of course, this looks a little bit weird, because we just said we're going to use data to build these models. So indeed, the idea is that we're going to use data to build a model, but then we can use it to generate new data. And this is useful because often we're going to be interested in simulators that we can control through control signals. And we'll see some examples of the kind of control signals you might want to use to control your generative process. For example, you might have a model that can generate images. And you can control it by providing a caption of the kind of images you want.
Or you might have a model that can again generate images and you can control it by providing maybe black-and-white images. And you can use it to produce a colorized version of the image. Or maybe you have a data simulator that can produce text in English, and you can control the generative process by feeding in text in a different language, maybe in Chinese. And that's how you build machine translation tools. The API is going to be, again, that of a probability distribution. So really, you're going to be able to-- for a lot of these models, you're going to be able to also query the model with potential data points. And the model will be able to tell you whether or not they are likely to be generated by this data simulator or not. So in some sense, it also allows you to build a certain understanding over what kind of data points make sense and which ones don't, which is going to be useful for some applications. And really, this data simulator is, at the end of the day, a statistical model. It's what we call the machine learning generative model. And in particular, in this class, we're going to be thinking about deep generative models, where we're going to be using neural networks, deep learning kind of ideas to implement this piece of code that gives you these capabilities of generating data. And to give you a few examples, if you have a generative model of images, you might be able to control it. Let's say using sketches. Maybe you are not good at painting, and you can only produce a rough sketch of a bedroom. And then you fit it as a control signal into your generative model. And you can use it to produce realistic images that have the structure of the stroke painting that you provide, but they look much better. Or you can do maybe text to image kind of things where if you have a generative model that has been trained on paintings, then you can control it through captions. And you can ask the model to generate a new painting that corresponds to the description that is provided by the user. Other examples that you might not think about immediately-- it could be something like you have a generative model over medical images. And in this case, you might use an actual signal coming from an MRI machine or a CT scan machine. And you can use that signal to sort of like reconstruct the medical image, the thing you actually care about, given this kind of measurement that is coming from an actual machine. And in this kind of application, generative models have shown to be very effective because they can reduce kind of like the number of measurements, the amount of radiation that you have to give to the patient to get a measurement that is good enough to produce the medical images that the doctor needs to come up with a diagnosis. An example of the kind of thing you can do is you can evaluate probabilities, is to do outlier detection, which you are going to be playing with this in the homework, a variant of this. If you have a generative model that understands traffic signs, you might be able to say, OK, this looks like a reasonable traffic sign you might encounter on the streets. Well, if I feed you something like this, some kind of adversarial example, somebody is trying to cause trouble to your self-driving vehicle, the model might be able to say, no, this looks like a low-probability thing. This is weird. Do something about it. Maybe don't trust it. Ask a human for help or something like that. Right. 
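As a tiny illustration of this "probability distribution with a sample and a score" interface, here is a toy sketch in Python. A Gaussian stands in for a real learned model, and the outlier threshold is made up purely for illustration:

```python
import torch
from torch.distributions import MultivariateNormal

class ToyGenerativeModel:
    """Toy stand-in for a deep generative model: it can sample and it can score."""
    def __init__(self, dim=16):
        self.p = MultivariateNormal(torch.zeros(dim), torch.eye(dim))

    def sample(self, n):            # the "data simulator" part: generate new x's
        return self.p.sample((n,))

    def log_prob(self, x):          # the "scoring" part: how likely is this x?
        return self.p.log_prob(x)

model = ToyGenerativeModel()
x_new = model.sample(4)                          # generation
threshold = -40.0                                # made-up cutoff, for illustration only
is_outlier = model.log_prob(x_new) < threshold   # crude outlier detection: flag low-likelihood inputs
```

A real deep generative model replaces the Gaussian with a neural-network-based distribution, but the two operations, sampling and scoring, are exactly the interface described above.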
And this is really an exciting time to study generative models because there's been a lot of progress over many different modalities. I'm going to start with images because that's where I've done a lot of my research. When I started working in this space about 10 years ago, these were the sort of images that we were able to generate. And even that was already like a very, very remarkable. Like people were very surprised that it was possible to train a machine learning system to produce images of people that's sort of black and white, and they roughly had the right shape. People were very impressed by those sort of results. And you can see that over a few years, this progress was largely driven by generative adversarial networks, which is a class of generative models we're going to be talking about. You can kind of see how the generations are becoming better and better, higher resolution, more detail, more realistic kind of images of people. One of the big improvements that happened over the last two or three years which was actually largely coming out of Stanford-- Yang Song, who was a PhD student in my group, came up with this idea of using score-based diffusion models, which is a different kind of generative model that we're also going to be talking about in this course, and was able to further push the state of the art, for example, generating images, very high-resolution images that look like this like these people don't exist. They are completely synthesized, generated by one of these generative models. And this is really-- diffusion models are really the technology that drives a lot of the text-to-image systems that you might have seen. Things like Stable Diffusion or DALL-E or Midjourney we think are all based on this type of generative model, this way of representing probability distribution based on a diffusion model. And once you have a good diffusion model, you can try to control it using captions. And now you get this kind of really cool text-to-image systems where you can ask a user for an input. What kind of image do you want? A caption of what kind of image the system should be able to produce. For example, an astronaut riding a horse. And these are the kind of results that you can get with the systems we have today. This is really cool. These models have been trained on a lot of data. But presumably, they have not seen something like this on the internet. They might have seen an astronaut. They definitely have seen a horse. But they probably have not seen those two things together. So it's very impressive that the model is able to, again, understand the meaning of astronaut, understand the meaning of horse, putting them together. And the fact that it's able to generate this kind of picture tells me that there is some level of understanding of what it means, what an astronaut means, and what riding means, what a horse means, and even-- if you look at the landscape-- I don't know it could be-- it feels like it's probably on some other planet or something. So there is some level of understanding about these concepts that is showing here. And that's super exciting, I think, because it means that we're really making progress in this space and understanding the meaning of text, of images, that relationship. And that's what's driving a lot of the successes that we're seeing in ML these days. Here is another example. If you ask a system on a perfect Italian meal, you get-- here I'm generating multiple samples. 
Because it's a probability distribution, you can imagine-- you can sample from it. And it will generate different answers. So the generation is stochastic, different random seed. It will produce different outputs every time. And here, we can see four of them. Again, I think it does a pretty good job. I mean, some of the stuff is clearly made up, but it does-- it's interesting how it kind of like even captures out of the window the kind of buildings you would probably see in Italy. And it kind of has the right flavor, I think. It's pretty impressive kind of thing. Here's another example from a recent system developed in China. This is a teddy bear wearing a costume, standing in front of the Hall of Supreme Harmony, and singing Beijing opera. So again, a pretty crazy sort of caption. And it produces things like this. Pretty impressive. And this is the latest that came out very recently. We don't know yet what this model is built on, DALL-E 3 from OpenAI. This is an example from their blog post. Here, you're asking the model to generate. Then you can see the caption yourself. Pretty cool. Again, demonstrates a pretty sophisticated level of understanding of concepts and a good way of combining them together. Right. So this is a text-to-image generation. Again, the nice thing about these models is that you can often control them using different kinds of control signals. So here we're controlling using text, using captions, but there is a lot of inverse problems. Again, this is a field that has been studied for a long time. People have been thinking about how to colorize an image, how to do super resolution on an image, how to do inpainting on an image. These problems become pretty much easier to solve once you have a good assistant that really understand the relationship between all the pixel values that you typically see in an image. And so there's been a lot of progress in, let's say, super resolution. You go from low-resolution images like this to high-resolution images like that. Or colorization, you can take old black-and-white photos, and you can kind of like colorize them in a meaningful way. Or inpainting. So if you have an image where some of the pixels are masked out, you can ask the model to fill them in. And they do a pretty good job at doing these. These are probably not the most up-to-date references, but you can kind get a sense of why these models are so useful in the real world. And here is an example from SDEdit, which is one of the things that, again, one of my PhD students developed. This is back to the sketch-to-image, where you can start with a sketch of sort of like a painting or an image that you would like, the kind of thing I would be able to do. And then you can ask the model to refine it and produce some pretty picture that kind of has the right structure. But it's much nicer. I would never be able to produce the image at the bottom, but I could probably come up with a sketch you see on the top. And yeah, here you can see more examples where you can do sketch-to-image or you can do even stroke-based editing. Maybe you start with an image, and then you add some-- you want to change it based on some rough sense of what you want the image to have, and then the model will make it pretty for you. And it doesn't have to be editing, or sort of like you don't have to control it through strokes. Another natural way of controlling this kind of editing process is through text. So instead of actually drawing what you want, you can ask the model. 
You can tell the model how you want your images to be edited. So you might start with an image of a bird, but now you want to change it. So you want it to spread the wings. And you can tell the model now spread the wings, and it's able to do these kind of updates. Or you have an image with two birds and now you want the birds to be kissing. And then this is what you produce. Or you have an image with a box, and you want the box to be open. And you can kind of see some pretty impressive results in terms of image editing or changing the pose of this dog or even changing the style of the painting of the image. You go from a real image to some kind of drawings. And again, does a pretty good job. You can see it's making some mistakes. Like this knife here, it gets changed in a way that is not quite what we want. They are not perfect yet, but these capabilities are very impressive. They're already very useful. Cool. And yeah, back to the more exotic one that you might not necessarily think fits in this framework just to give you a sense of how general these ideas are. If you have a generative model of medical images, you can use it to essentially improve the way we do medical images. In this case, the control signal. It's an actual measurement that you get from let's say a CT scan machine. And then you can control the generative process using the measurement from the CT scan machine. And this can drastically reduce the amount of radiation that basically has the number of measurements that you need to get a crisp kind of image that you can show to the doctor. This is very similar to inpainting. It's just inpainting in a slightly different space. But you can kind get a sense it's roughly the same problem. And advances in generative models translate into big improvements in these real-world applications. All right. Now moving on to different modalities. Speech audio has been another modality where people have been able to build some pretty good generative models. This is one of the earliest one, the WaveNet model back-- I think it was 2016. And you can kind of see some examples of-- let's hope this works. This is an example-- this is kind of like the pre-deep learning thing. And these are not great text-to-speech-- The Blue Lagoon is a 1980 American romance and adventure film directed by Randal Kleiser. And then the WaveNet model, which is a deep learning-based model for text-to-speech, you're going to see is significantly better. The Blue Lagoon is a 1980 American romance and adventure film directed by Randal Kleiser. And these are maybe the latest ones that are based on diffusion models again. So this is-- well, this is a combination of diffusion models and autoregressive models, but here you can see some of the 2023 stuff. Once you have the first token, you want to predict the second token given the input and the first token using multi-head attention. So you can see it's much more realistic. There is a little bit of an accent here. There is a little bit of emotions that are-- it feels a lot less robotic, a lot less fake. Here's another example. This is a tex-- just text-to-speech. You input a text up, and you produce the speech corresponding to that text. CS236 is the best class at Stanford. So just another example. And again, you can sort of like use these things to solve inverse problems. So you can do super resolution in the audio space. So you can condition on the kind of like a low-quality signal, the kind of thing you can get maybe on phones. One is investment. One is reform. 
And then you can super resolve it. One is investment. One is reform. Again, this is the same problem as basically inpainting. Like you're missing some pixels. You're missing some frequencies. And you can ask the model to make them up for you. And to the extent that it understands the relationship between these values, you can also kind of think of as images. It can do a pretty good job of super resolving audio. Language, of course, that's another space where there's been a lot of progress and a lot of excitement around large language models. These are basically models that have been trained over large quantities of text collected on the internet often. And then they learn a probability distribution over which sentences make sense or not. And you can use it to, again, do some sort of inpainting where you can ask the model to create a sentence that starts with some kind of prompt. For example, this was an old language model, I guess, in 2019, I think, where you can ask the model to continue a sentence that starts with-- to get an A-plus in deep generative models, students have to-- and then, let's see what the language model does. And then it completes it for you, right? And then it says somewhat reasonable. They have to be willing to work with problems that are a whole lot more interesting. Not great, not perfect for today's standards, but again, for when this thing came out, it was pretty mind-blowing that you could build a model that can generate this quality of text. Now I tried something similar on ChatGPT. And this time, I try something harder. Like here, I said, to get an A-plus in deep generative models. Here I tried, what should I do to get an A-plus in CS236 at Stanford? So I didn't even tell the model, ChatGPT, what CS236 is. It actually knows that CS236 is deep generative models. And here it gives you some actually pretty good tips on how to do well in the class, attend lectures, read the materials, stay organized, seek help, do the homeworks. Then it gives you 15 of them. I cut the prompt here. But it's pretty impressive that you can do these kind of things. And again, it probably means that there is some level of understanding. And that's why these models are so powerful, and people are using them for doing all sorts of things. Because they can generate, it means they understand something, and then you can use the knowledge to solve a variety of tasks that we care about. Of course, the nice thing about this space is that you can often mix and match. So you can kind of like control these models using various sorts of control signals. Once you can do generation, you can steer the generative process using different control signals. A natural one here would be generate the text in English conditioned on some text in a different language. So maybe Chinese. So you have-- and this basically is machine translation, right? So progress is in generative models basically directly translate into progress in machine translation. If you have a model that really understands how to generate text in English and it can take advantage of the control signal well, then it means that essentially it's able to do the translation reasonably well. And a lot of the progress in the terms of the models and the architectures we're going to talk about in this class are the kind of ideas that are behind the pretty good machine translation systems that we have today. Another example is code. Of course, very exciting. As a computer scientist, many of you are computer scientists, so write a lot of code. 
At the end of the day, code is text. If you have a model that understands which sequences of text make sense and which ones don't, you can use it to write code for you. So here's an example of a system that exists today where you can kind of like try to get the model to autocomplete, let's say, the body of a function based on some description of what the function is supposed to do. Again, these systems are not perfect, but they are very-- they are already pretty good. Like they can do many-- they can solve many interesting tasks. They can solve programming assignments. They can solve-- they do reasonably well in competitive programming competitions. So again, pretty cool that they understand the natural language, they understand the syntax of the programming language, they know how to put things together so that they do the right thing. They're able to translate, in this case, from natural language to a formal language, Python, in this case, and do the right thing. So lots of excitement also around these sort of models. Another one that is pretty cool is video. This is one of the active ones where the first systems are being built. Again, you can imagine a variety of different interfaces where you can control the generative process through many different things. A natural one is text. You might say you start with a caption, and then you ask the model to generate a video corresponding to that caption. This is one example. The videos are pretty short right now. That's one of the limitations. But can you see it? Oh, yeah. OK, it shows up there. This is another example. You're asking it to generate a video of a couple sledding down a snowy hill on a tire roman chariot style. And this is sort of what it produces. They are pretty short videos. At the end of the day, you kind of think of a video as a sequence of images. So if you can generate images, it's believable that you can also generate a stack of images, which is essentially a video. But pretty impressive that there's a good amount of coherence across the frames. It captures roughly what's asked by the user. And the quality is pretty high. And if you're willing to work on this and stitch together many different videos, you can generate some pretty cool stuff. [VIDEO PLAYBACK] [MUSIC PLAYING] [END PLAYBACK] This is just basically stitching together a bunch of videos generated with the previous system. And again, you can see it's not perfect, but it's remarkable. I mean, we're not at the level where you can just ask the system to produce a movie for you with a certain plot or whatever with your favorite actor, but it's already able to produce pretty high-quality content that people are willing to look at and engage with. So that's an exciting kind of development that we're seeing generative models of videos I think when that starts to work. And we're seeing the kind of progress in this space that I showed you before for images. It's happening right now. I think when people figure this out and get really good systems, they can generate long videos of high quality. This could be really changing the way we-- a lot of the media industry is going to have to pay attention to this. Question? Yeah, yeah. [INAUDIBLE] video generation [INAUDIBLE].. Yeah. [INAUDIBLE] generates [? a ?] model or you have to give some inpaintings, like some hand stitches because nowadays I would say this is the most impressive video generation video I've ever seen because I would keep following this. So it's a [INAUDIBLE]. These ones are generated purely from the caption. 
OK. This one, if I have to-- I don't know exactly what went into this. I didn't make it myself, but I know the system allows you to also control it through a caption and a seed image. So if you maybe already know what you want your character to look like, then you can kind of use it and animate, let's say, a given image. And again, it's an example of controlling the generative process. Like you can control it through text. You can control it through images. There are many different ways to do this. Yeah. And this is actually from a PhD student, a former PhD student in our group. So yeah, it's a system they are developing. It's very good. I agree with you. Pretty impressive stuff. So that's the kind of thing you can do once you learn this material very well. All right. Other completely different sort of application area, sort of decision-making, robotics, these kind of-- a lot of these domains. What you care about is taking actions in the world to achieve a certain goal, let's say, driving a car or stacking some objects. And so at the end of the day, you can think of it as generating a sequence of actions that makes sense. And so again, the kind of machinery that we're going to talk about in this course translates pretty well to a lot of what we call imitation learning problems, where you are given examples of good behavior provided maybe by a human and you want your model to generate other behaviors that are good. For example, you want the model to learn how to drive the car or how to stack objects. So here's an example of how you can kind of like use these sort of techniques that we're going to talk about in the course to learn how to drive the car in this video game. And you have to figure out, of course, what sort of actions make sense to not crash into other cars and stay on the road and so forth. It's nontrivial again, but if you have a good generative model, then you can make good decisions in this simulator. This is an example where you can kind of like train a diffusion model, in this case, to stack objects. So again, you sort of need to figure out what sort of trajectories make sense. And if you have a good model that understands which trajectories have the right structure, then you can use it to stack a different set of objects, and you can control the model to produce high-quality policies. There's a lot of excitement in the scientific-- in science and engineering around generative models. One of your TAs is one of the world's experts on using generative models to synthesize molecules that have certain properties or like proteins that have certain properties and either at the level of their structure or even at the 3D level kind of like really understand the layout of these molecules. And yeah, there is a lot of interest in this space around building generative models to design drugs or to design better catalysts. At the end of the day, you can think of it as, again, some kind of generative model where you have to come up with a recipe that does well at a certain task. And if you train a model on a lot of data on what's kind of, let's say, proteins, perform well in a certain task, then you might be able to generate a sequence of amino acids that perform the task even better than the things we have. Or you might be able to design a drug that binds in a certain way because you're targeting, let's say, COVID or something. And so there is a lot of interest around building generative models over modalities that are somewhat different from the typical ones. It's not images. 
It's not text. But it's the same generative models. It's still diffusion models, or there's going to be autoregressive models. It's going to be the kind of models we're going to talk about in this course. And right. So lots of excitement. There is many other modalities that I didn't put in the slide deck where there's been progress. Generating 3D objects, that's another very exciting kind of area and many more. Of course, there is also a bit of worry. And hopefully, we'll get to talk about it a bit in the class around-- now, our computers are getting so good at generating content that is hard to distinguish from the real one. There's this big issue around deepfakes. Which one is real? Which one is fake? This was produced again by my students, but you can kind get a sense of the sort of dangers that these kind of technologies can have. And there is a lot of potential for misuse of these sort of systems. So hopefully, we'll get to talk about that in the class. So all right. So that was sort of like the intro. Hopefully, got you excited about the topic. And it kind of like showed you that it's really an exciting time to be working in this area. And that's why there is so much excitement also in the industry and in academia around these topics. Everybody's trying to innovate, build systems, figure out how to use them in the real world, find new applications. So it's really an exciting time to study this. The course is designed to really give you the-- uncover what we think are the core concepts in this space. Once you understand all the different building blocks, the kind of challenges, the kind of trade-offs that all these models do, then you can not only understand how existing systems work, but hopefully, you can also design the next generation of these systems, improve them, figure out how to use them on a new application area. Again, the course is designed to be pretty rigorous. There's going to be quite a bit of math. It's really going to delve deep into the key ideas. And so we're going to talk a lot about representation. As we discussed, the key building block is going to be statistical modeling. We're going to be using probability distributions. That's going to be the key building block. And so we're going to talk a lot about how to represent these probability distributions, how to use neural networks to model probability distributions, where we have many random variables. That is the challenge. I mean, you've seen simple probability distributions like Gaussians and things like that. That doesn't work in this space because you have so many different things that you have to consider and you have to model at the same time. And so you need to come up with clever ways to represent how all the different pixels in an image interact with each other or how the different words in a sentence, they're connected to each other. And so a lot of it will-- a lot of the course content will focus on different ideas, the different trade-offs that you have to make when you build these kind of models. We're going to talk about learning. Again, these are going to be statistical generative models, so there's always going to be data. And you're going to use the data to fit the models. And there's many different ways to fit models. There's many different kinds of loss functions that you can use. There's stuff that is used in diffusion model. There's the stuff that is used in generative adversarial networks. There is the stuff that is used in, let's say, large language models, autoregressive models. 
Those are essentially boiled down to different ways of comparing the probability distributions. You have a data distribution. You have the model distribution. And you want those two things to be similar so that when you generate samples from the model, they look like the ones that came from the data distribution. But probability distributions, again, going back to the first point, they are very complex if you have very complicated objects, very high-dimensional objects. So it's not straightforward to compare two probability distributions and kind of like measure how similar they are. So you have to sort of like have a data distribution. You have a family of models that you can pick from. And you kind of have to pick one that is close to the data. But measuring similarity is very difficult. And depending on how you measure similarity, you're going to get different kinds of models that work well in different kinds of scenarios. Then we're going to talk about inference. We're going to talk about how to generate samples from these models efficiently. Sometimes you have the probability distribution, but it might not be straightforward to sample from it. So we will talk about that. We will talk about how to invert the generative process, how to get representations from these objects. For example, kind of like-- sort of like following and making the idea of vision as inverse graphics a little bit more concrete. And so we'll touch on unsupervised learning and different ways of clustering. Because, at the end of the day, what these models do is they have to find similarity between data points. When you're trying to complete a sentence, what you have to do is you have to go through your training set. You have to find similar sentences. You have to figure out how to combine them. And you have to figure out how to complete the prompt that you're given. So once you have generative models, you can usually also get sort of like representations. You have ways of clustering data points that have similar meaning. And yeah, again, you can get features, and you can do kind of like the sort of things you would want to do in unsupervised learning, which is do machine learning when you don't have labels. You only have-- you have the x, but you don't have the y. And you want to do interesting things with the features themselves. And so those are sort of like the three key ideas that are going to show up quite a bit. In terms of models, we're going to be talking about first, perhaps the simplest kind of model, which is one where essentially you have access to a likelihood directly. And these are going to be-- there's going to be two kinds of models in this space, autoregressive models and flow-based models. So autoregressive models are the ones used in large language models and a few of other systems that I talked about today. Flow-based models are a different kind of idea that is often used for images and other kinds of continuous data. Then we'll talk about latent variable models, the idea of using latent variables to increase the expressive power essentially of your generative models. We'll talk about variational inference, variational learning, the variational autoencoder, hierarchical variational autoencoders, those sort of ideas. We'll talk about implicit generative models. Here the idea is that instead of representing the probability distribution p of x, you're going to represent the sampling process that you use to generate samples. And that has trade-offs. 
It allows you to generate samples very efficiently, but it becomes difficult to train the models because you don't have access to a likelihood anymore. So you cannot use maximum likelihood estimation, those kind of ideas that we understand very well and we know have good performance. So we'll talk about two sample tests, f-divergences and different ways of training these sort of systems. And in particular, we'll talk about generative adversarial networks and how to train them. Then we'll talk about energy-based models and diffusion models. Again, this is sort of like a state of the art in terms of image generation, audio generation. People are starting to use them also for text. That's what the technology behind the video generation that I showed you before. So we'll talk in depth about how they work and how you can think of them in terms of a latent variable model and the connections with all the other things. And yeah, we'll-- again, it's going to be a fairly mathematical class, so there's going to be a lot of theory. There's going to be algorithms. And then we'll go through applications. There is going to be homeworks where you're going to get to play around with these models. And yeah. So in terms of prereqs, we're expecting you to have taken at least a machine learning class. We'll try to cover-- try to do as much as possible from scratch, and we'll have some sections to go over some of the background content. But it might be pretty hard to take this class if you've never done any ML before. So you should be familiar with basic probability theory, calculus. We're going to use gradient descent, linear algebra, Bayes' rule, those sort of things, basic calculus sort of stuff, change of variable formula. Yeah, you should be familiar with that. Again, you can probably pick it up, but it might be pretty tricky if you don't-- if you've never seen this sort of ideas before. And then yeah, there is going to be programming assignments, so you should be familiar with the-- hopefully Python. That's what we're going to use, PyTorch. And so again, we'll have a section on that if you've never seen it before, but it might be tricky if you've not done any of this before. And in terms of logistics, we have a website that's not entirely updated, so some of the information may change. So keep checking it. We're finalizing some of the dates, and we're trying to get confirmation about the rooms for the midterm and the poster session, so. But hopefully, that will be done soon. We don't have a textbook. That actually doesn't exist. This was actually, I think, the first class when it was offered here a few years ago on this topic. Nothing like that existed, so we had to create it from scratch. And we put together a set of lecture notes that you can [? all ?] access there, where we try to cover what we-- basically the content that you can see in the slides. Some of it-- you can see some of the content is covered in the deep learning book that you can see there. So that's a useful reference. It's available online, so you might want to check it out. And yeah, we have a great team of teaching assistants. And yeah, there should be a calendar now on the website with our office hours. So, of course, you're welcome to-- most of them will start next week, but yeah, otherwise, feel free to reach out on Ed if the-- yeah, if you cannot find us in person this week. But yeah, we're always happy to chat. In terms of grading and coursework-- so it's going to be three homeworks. 
So the first one is going to be released the Monday next week. And they are worth 15% of the total grade each. So 45% total. And they go over a mix of theory. And again, there's going to be a programming assignment associated with all of them. We're going to have a midterm. It's going to be in-class, in-person midterm. And the big component of the class is going to be a project. We think there is so much to do in this space that it makes sense for you to really explore. It's going to account for 40% of the grade, so it's going to be a pretty significant component. And there's a bunch of milestones. You're going to start with a proposal. There's going to be a report that you have kind of like to turn in about how things are going. There's going to be a poster presentation towards the end and then a final report on the work you did. And yeah, projects. I think I like this class because it really gives you an opportunity to explore. And there's just so much that you can do in this space that-- there's lots of interesting project ideas that turned out to be papers, turned out to be company ideas. Lots of excitement here. You can work in groups, let's say, up to three students. And typically, it's one of these three things. Sometimes students apply an existing generative model to a new data set. Remember, they come from an application domain, and they find out a new interesting way to use the models on a new problem. Or they compare different generative models and a new kind of data set. Sometimes people work on trying to just improve the models. Again, these things are pretty new. It's unlikely that we found the best way of solving this problem, so there is still a lot of room for improvement. Often, you can combine different methods. You can take a diffusion model. You can add a little bit of generative adversarial training flavor to it, and you can get big improvements. Often, these things can be published in top machine learning conferences if they work well. And sometimes people do more theoretical analysis, and there's going to be quite a bit of theory, quite a bit of math. And so there is room for improvement in trying to understand why these models work, when they work, when they fail. Right now, it's all very, very empirical. And we really need a better theory of why things like the one I've shown you before are possible. And so there is lots of room for developing a better theory in this space. And we'll also be suggesting possible projects, so look out for some information about possible projects that can be suggested by TAs or other faculty on campus. And we are able to provide some Google Cloud coupons. Not much, unfortunately, but at least a little bit. And so we'll figure out a way to distribute them to students. And if you want to get some inspiration for what kind of projects people worked on in previous years, you can go to the old versions of the website at 2019 and 2021 version. You can get a sense of the kind of projects people worked on and the kind of-- so you can get a sense of sort of like what's enough for a project, what worked, what didn't, get some ideas. And yeah, I think that was pretty much what I wanted to cover today. And I'm happy to take questions. And then next week, we're going to start with the background autoregressive models and so forth.
Stanford CS236: Deep Generative Models (Stefano Ermon, 2023). Lecture 2: Background.
Welcome to lecture 2. The plan for today is to talk a little bit about what a generative model is. And we're going to see that we encounter the first challenge whenever we want to build a generative model of complex data sets like images, texts, which is the usual curse of dimensionality that you might have seen before in other machine learning classes. And so the plan for today is to discuss a little bit various ways that at a very high level that people have come up with to deal with the curse of dimensionality. And so we'll do a very brief crash course on graphical models. This is like my CS 228 class or a part of it compressed in a single lecture or half of a lecture. And then we'll talk a little bit about the distinction between a generative model and a discriminative model, which is something again, you might have seen before in ML classes. And finally, we'll get into the deep part of the deep generative models and we'll start to see how you can use neural networks to deal with the curse of dimensionality. So, all right. So this is going to be a high-level picture, a high-level overview that roughly corresponds to a lot of the ideas that we're going to talk about in this course. And it deals with this problem of learning a generative model, the kind of challenges that you encounter and the kind of different ways you can address them. And by changing different pieces, in this picture, you're going to get different classes of generative models. You might get autoregressive models like the ones that are usually used for language. You might get diffusion models, generative, adversarial networks, these kinds of things by changing ingredients into this high-level picture. So this picture will come up several times throughout this quarter. And it deals with this basic problem that you have whenever you want to train a generative model. So the basic problem is one where you're given a set of examples. This might be images or it might be sentences that you've collected on the internet, or it could be DNA sequences. It could really be anything. The assumption is that these data points that you have access to are sampled from some unknown probability distribution that we're often going to denote p data. This is like data-generating process. It's some kind of a complicated unknown process that has generated the data for you. And so the assumption is that all these different data points that you have access to are related to each other because they come from some true underlying common data generating process. And in the case of language, this might correspond to maybe if you have a corpora of text collected from the internet, this might correspond to the different ways people write text for websites or for whatever sites you've scraped to collect your data set, or it might correspond to the complicated physical processes that give rise to a natural distribution over images. The key point here is that this data distribution Pdata is unknown. You assume there is such an object, but the only thing you have access to are a bunch of examples or a bunch of samples from this distribution. And the whole problem in this class and in the space of generative models and generative AI is to basically come up with a good approximation of this data-generating process because the idea is that if you have access to a good approximation to this data generating process, this data distribution Pdata, then you can sample from this approximation that you have access to and you can generate new text. 
Or if you have a distribution over images, then you can sample from the distribution and you can generate new images that hopefully are close to the ones you've used for training or model to the extent that you're doing a good job. By coming up with a good approximation of this data distribution, hopefully, your samples are also going to be good. And so in order to do that, you need to define a model family, which is this set here in green. And you can think of this as a set of different probability distributions that are indexed or parameterized with this variable theta. So think of it as all possible Gaussian distributions that you can get as you change the mean and the covariance or all the kinds of distributions that you can get as you change the parameters of your neural network. And once you've defined this set, the goal becomes that of trying to find a good approximation of the data distribution within the set. And so in order to do that, you need to define some notion of distance. So you need to define a loss function. You need to specify what you care about and what you don't care about. Like these objects, this probability distribution, the data distribution, and your model distributions are going to be pretty complex. They are defined over high-dimensional spaces. So there's a lot of different-- let's say, images you can assign probability to. And you somehow need to specify what you care about and what you don't, or equivalently, you need to specify some notion of distance or similarity between the data distribution and your model. And then you have an optimization problem then it becomes a question of, how do you find the distribution in your set, in your model family that is as close as possible to the data distribution? And so you try to find this projection and try to find this point. And then hopefully, if you can solve this potentially hard optimization problem, you come up with your model, you come up with a distribution that is hopefully relatively close to your data distribution. And again, then you can use it. Then you have your language model or you have your diffusion model. And you can use it to generate images, you can use it to generate text, you can do many different things. And so we see that there are several components here. You always need to start with some data. Then you need to define a model family and then you need to define a loss function or a similarity metric between distributions that you should optimize over. And what we'll see is that you're going to get different classes of generative models by changing these kinds of ingredients. And the issue here is that it's not straightforward to come up with-- it's not an optimal solution here. And that's why there are many different kinds of generative models, which is not clear what's the right model family that we should use, it's not clear what's the right notion of similarity that we should use, for example, if you think about different data modalities. So that's why we're going to see different families of the generative models that will essentially make different choices with respect to the model family, with respect to the loss, and so forth. But at the end of the day, pretty much all of the models we'll see, we'll try to learn this probability distribution. And this is again useful because once you have this probability distribution, you can sample from it, you can generate new data, you can do density estimation. 
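As a toy illustration of those ingredients, here is a minimal sketch where the model family is just one-dimensional Gaussians indexed by theta = (mean, std), and the notion of closeness is taken to be average log-likelihood, which is only one of the possible choices discussed later; everything about the specific family and numbers here is made up purely for illustration.

import numpy as np

# "Data-generating process" we pretend not to know; we only ever see samples from it.
data = np.random.default_rng(0).normal(loc=2.0, scale=0.5, size=1000)

# Model family: all Gaussians p_theta with theta = (mu, sigma).
# Fitting = picking the member of the family closest to the data distribution.
# Under the average log-likelihood criterion, the optimum has a closed form:
mu_hat, sigma_hat = data.mean(), data.std()

def log_p_theta(x, mu=mu_hat, sigma=sigma_hat):
    # Density estimation: how likely is x under the fitted model?
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# Generation: sampling from the fitted model gives new "data-like" points.
new_samples = np.random.default_rng(1).normal(mu_hat, sigma_hat, size=5)
print(mu_hat, sigma_hat, log_p_theta(2.0), new_samples)

The whole course is about doing this same thing when x is an image or a sentence instead of a single number, which is where all the difficulty comes from.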
So if you have access to a probability distribution, then you can query your probability distribution for any input x and the model can tell you how likely this object is. So if you train a model over a bunch of images of dogs, then you come up with this p theta, this distribution here that this is as close as possible to the data distribution. Then you can fit in a new image and the model will tell you how likely was it that this image was generated basically by this model distribution that you've come up with. And this is potentially useful because you can do, for example, anomaly detection. You can do-- you can check how likely that object is and you can start reasoning about your own about the inputs that your models are seeing. You can identify anomalies. You can do many interesting things once you have access to a density. And finally, I know this is also useful because essentially, it's a clean way to think about unsupervised learning. If you think about it, if you're trying to build a model that assigns high probability to images that look like the ones you have in your training set, that again in order to do well, you need to understand what all these images have in common. And so maybe in this example, you might need to understand what does a dog look like, what kind of parts you need to have, what kind of colors exist in the real world, which ones don't, and things like that. And so implicitly by training these models, perhaps on large quantities of unlabeled data, you end up learning the structure of the data. You end up learning what all these data points have in common. You end up learning what are the axes of variation that this data set has. And this is useful because it allows you to, for example, essentially discover features in an unsupervised way. And so we'll see that at least some of the generative models we'll talk about will actually allow you explicitly to recover features for the data points. And you can use them to do controllable generation or you can use them to do-- maybe semi-supervised learning or few shot learning. Once you have good features, it should be relatively easy to, let's say, distinguish different breeds of dogs and things like that. So that's the high-level story. And we'll see all these different components in much detail throughout the course. The first big question is, how do you represent a probability distribution, and how do we actually come up with a reasonable set over which we can optimize if we want to recover a good approximation to the data distribution? And this is not going to be trivial because we care about objects that are pretty complicated in the sense that they have-- if you think about an image, it's going to have many pixels, or if you think about text, we typically care about many tokens. And so representing a probability distribution over a high-dimensional space is actually nontrivial. And that's the first challenge where you need to start making trade-offs. And if you're dealing with low-dimensional data, then the problem is not hard. And this is something you might have seen before. If you have, let's say, a single discrete, random variable, perhaps a binary random variable, then it's not hard to describe all the different things that can happen and assign probabilities to these events. So if you have a Bernoulli distribution or a Bernoulli random variable, then you only have two outcomes-- true/false, heads or tails, something like that. 
And in order to specify all the possible things that can happen, you just need one parameter. You just need a single number, which tells you the probability of heads. The probability of a tail is just going to be 1 minus the number p. And learning these distributions from data is of course trivial. And it's useful, but this is not quite going to be enough to deal with, let's say, models over images or models over text. The other building block that we're going to use are categorical distributions. So if you have more than two outcomes, you have, let's say, k different outcomes, then you're dealing with a categorical random variable, or you have m different outcomes here. And again, this is a useful building block. You can use it to model things like rolling a die, many other things. The challenge here or where you're starting to see where the issues might arise is that again, you basically need, if you have m different things that can happen, you need to specify a probability for each one of them. And so you basically need to have m numbers, and then these numbers have to sum to 1 because it's a valid probability distribution. And if you sum all the probabilities of all the different things that can happen, you have to get 1. And so these are the two building blocks. And then you can combine them to model more interesting objects. So let's say you want to build a generative model over images, then you're going to have to model many different pixels. And to model the color of a single pixel, perhaps you're going to use some kind of RGB encoding where you're going to have to specify three numbers. You're going to have to specify the intensity of the red channel, which let's say is a number between 0 and 255. You're going to have to specify a green channel intensity and a blue channel intensity. So you can imagine that with these three random variables, you're going to capture the space of possible colors that has been discretized according to that granularity that you've chosen. And now you are able to describe many different colors that you can get for that particular pixel, each one corresponding to an entry in this cube. And so now we have a richer model. And if you somehow are able to model this distribution well, so you are able to assign probabilities to all these entries, to all these different colors that this individual pixel can take, then if you were to sample from it, then you would generate values, colors for that pixel that are reasonable. Hopefully, they match whatever training data you had access to learn this distribution. And how many parameters do you need to specify this joint probability distribution? Three parameters. Other guesses? How many different things can happen here? There are basically 256 times 256 times 256 different colors that we're able to capture. And so if you want to be fully general, you have to specify a probability for each one of them. So there's basically 256 cube entries in the cube, and you have to be able to assign a nonnegative number to each one of them. And then, OK, you know that they all have to sum to 1. So you have slightly less parameters to fit, but it's still a reasonably high number. So here you start to see the issue with having multiple random variables where the space of possible outcomes, the possible things that can happen it grows exponentially in however many random variables you want to model. 
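Just to make the counting for the single-pixel example concrete, here is a quick back-of-the-envelope check; the independence figure is only there for contrast and assumes you were willing to treat the three channels as unrelated.

# A single RGB pixel with 256 levels per channel:
num_outcomes = 256 ** 3                 # 16,777,216 possible colors
full_joint_params = num_outcomes - 1    # one probability per color, minus the sum-to-one constraint
independent_params = 3 * (256 - 1)      # if you assumed the three channels were independent
print(full_joint_params, independent_params)  # 16777215 vs 765

So even one pixel, treated in full generality, already needs millions of parameters, and that is before we talk about images with many pixels.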
And so as another example, now let's say you want to model a distribution over images, And for simplicity, let's say the images are just black and white. So you're going to model an image as a collection of random variables. There's going to be one random variable for every pixel. Maybe there is 28 times 28 pixels. Each pixel by itself is a Bernoulli random variable. It can either be on or off, white or black. And let's say you have a training set, maybe MNIST. You have a bunch of images of handwritten digits and that's your training set. And then you would like to learn a probability distribution over all these black-and-white images. So how do you represent it? Well, again, we have this collection of random variables, and they're all binary. And we have n of them, where n is the number of pixels that you have in the image. So it depends on the resolution. And you can think about how many different images are there, how many different black and white images are there with n pixels. 2 to the 28th squared. Yeah, 2 to the-- whatever, 2 to the number of pixels that you have. So there's two possible colors, two possible values of the first pixel can take times 2 in the second times 2 times 2 times 2. You do it n times and you end up with 2 to the n. So there is a huge number of different images. Even in this simple scenario where they are just black and white-- very large state space, sometimes it's called. And so if you want it to be-- somehow if you are able to come up with this model, somehow you are able to come up with a probability distribution over this binary random variables, then you have this object that given any input image, it will tell you how likely it is according to the model. And if you can sample from it, you can assign values to all the pixels then it will generate an image. And if you've done a good job again at learning the distribution, it will generate, let's say, images that look like the ones that you had in the training set. So they will look, let's say like an MNIST digits. But again, you see the issue is how many parameters do you need to specify this object in full generality? Any guess? 2 to the n minus 1. Yeah, 2 to the n minus 1. That's the issue. There's 2 to the n possible things that can happen. You have to assign a probability to each one of them. Then, well, you save one parameter because you have to sum to 1, but this number quickly becomes huge. I guess for even a small number of n, this is more than the number of atoms in the universe. And so the question is, how do you store these parameters in a computer, how do you learn them from data? You need some tricks, you need some assumptions, you need somehow to deal with this complexity. That's a challenge that you always encounter whenever you want to build a generative model of anything interesting, whether it's text, DNA sequences, images, videos, whatever, audio, you always have this issue of representing a distribution. Now, one way to make progress is to assume something about how the random variables are related to each other. And that's always the assumption that you have to make. And one that is a strong assumption that you can make is to assume that all these random variables are independent of each other. And if you recall, if the random variables are independent, then it means that the joint distribution can be factored as a product of marginal distributions. Now if you're willing to make this assumption, then what happens? How many different images are there here? [INAUDIBLE] Cool. 
How many images are there? There is still 2 to the n possible images, right? You still have a probability distribution over the same space. You're still able to assign a probability number to every possible assignment of this n binary variables. So it's still a distribution over n binary variables. It's still a high-dimensional space. However, what happens is that you can drastically reduce the number of parameters that you need to store this object. How many parameters do you need to specify this joint distribution? n. Now it starts to become n, because you just need to be able to store each one of these entries, each one of these marginals. And these are just Bernoulli random variables, so you just need one parameter if they are binary. And so if these are binary variables, you need one parameter for each one of those marginal distributions. You basically just need to model each pixel separately. Modeling a single pixel is easy. And so if you're willing to make this kind of assumption, you are able to represent a complicated object, a probability distribution over images with a very small number of parameters, which means that this is something you can actually implement. You can afford to store these things very easily. Of course, the challenge is that this independence assumption is probably way too strong. You are literally saying that you can choose the values of the pixels independently. And if you think about modeling, let's say, images of digits, it's probably not going to work. Because you imagine when you sample from this distribution, you're not allowed to look at any other pixel value to choose a new pixel value. And so you're literally picking values at random independently. And so it's going to be very hard to be able to capture the right structure if you make such a strong independence assumption. So this is not quite going to work. What you can do is you can try to make progress by basically making conditional independence assumptions. And so one very important tool that is actually the thing behind autoregressive models, language models, large language models, they're all built on that first tool, which is the chain rule of probability, which hopefully you've seen before, the basic idea is that you can always write down the probability of a bunch of events happening at the same time as a product of conditional probabilities. So you can always say that the probability that S1 happens and S2 happens and S3 happens and so forth, you can always write it as the probability that S1 happens by itself and then the probability that S2 happens given that S1 happened, and so forth. And this is always the case that you can always factorize a distribution in that form. And I guess a corollary of that is the famous Bayes' rule, which allows you to basically write the conditional probability of one event given another one in terms of the prior probability and the likelihood of S2 happening given S1. The important one for now is going to be the first one, chain rule, although we're also going to use Bayes' rule later. But chain rule basically gives you a way of writing down a joint distribution as a product of potentially simpler objects, which are these marginals or conditional probabilities. And so this is how you would use it. You can always take a joint distribution over n variables and write it down as a product in this way, as the probability of x1 times the probability of x2 given x1, the probability of x3 given x1 and x2, and so forth. Using chain rule. 
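Written out, the identity is p(x1, x2, ..., xn) = p(x1) p(x2 | x1) p(x3 | x1, x2) ... p(xn | x1, ..., x(n-1)), and it holds for any distribution with no assumptions. A two-variable sanity check in code, with a made-up toy joint:

import numpy as np
rng = np.random.default_rng(0)

# A random joint distribution over two binary variables, p(x1, x2).
joint = rng.random((2, 2))
joint /= joint.sum()

p_x1 = joint.sum(axis=1)               # marginal p(x1)
p_x2_given_x1 = joint / p_x1[:, None]  # conditional p(x2 | x1)

# Chain rule: p(x1, x2) = p(x1) * p(x2 | x1), entry by entry.
assert np.allclose(joint, p_x1[:, None] * p_x2_given_x1)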
This is something you can always do. This is the kind of factorization that is used in autoregressive model, which is the first class of models that we're going to talk about, which is again, the same thing that is used in, for example, large language models. And here the idea is that you can write down the probability of observing a sequence of words, let's say, in a sentence as the probability of observing the first word times the probability of observing the second word given the first one, times the probability of observing the third word given the first two, and so forth. But this is fully general. You can apply it also to pixels. Any collection of random variables can always be factorized this way. Now, how many parameters do we need if you use this kind of factorization? It seems like maybe we've made progress because this object here is very complicated. But now p of x1, for example, is a simple object, is a marginal distribution over a single pixel, so perhaps we've made progress here. So let's do the math. How many parameters do we need? It turns out that we still need an exponentially large number of parameters, unfortunately. And the reason is that it's kind of like no free lunch. We haven't made any assumptions to get this factorization, so we cannot expect to get any savings. And you can kind of see it here, although the first distribution here is indeed simple, you can store it, represent it with a single parameter. Then how many parameters do you need for the second? Well, if the variables are binary, then x1 can take two different values, 0 and 1, and for each one of them, you have to specify a distribution over x, which will take you one parameter. So p of x2 given x1 will take two parameters. One for the case where the first bit or the first variable is 0 and 1 for the case where it's 1. And then if you look at the p of x3 given x1 and x2, there are four possible values that x1 and x2 can take, so you need four parameters. So that's where you get this kind of geometric series and that's where you get the exponential blow up. This last conditionals here are very expensive. And so if you do the sum, you still don't get anything here. But it gives us a way to perhaps make progress. It's still a useful building block. And, for example, one thing you can do is you can assume independence, conditional independence. For example, you might be willing to assume that the value of the i plus 1 word is conditionally independent from the previous-- given the i-th word, the value of the i-th plus 1 word is conditionally independent of all the previous words. So this is a Markov assumption. So if these x's maybe represent the weather, then you're saying the weather tomorrow is conditionally independent from the past given the weather today. And if you're willing to make this assumption and you get big savings. What this means is that if you think about the definition of conditional independence is that a lot of these conditional distributions will simplify. And so, in particular, this probability of x3 given x1 and x2 becomes the probability of x3 given x2. So if you are predicting the third word, this is saying you just need to know the second word, you can ignore the first word. And if you are predicting the last word, you don't need to remember the entire sequence, the previous word is sufficient. And if you do that then you get this nice expression where the conditionals are now simple. You're always conditioning on at most one variable, and so now we get big savings. 
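As a tiny sketch of what that Markov factorization looks like in code, with made-up numbers, binary variables, and each pixel treated as one variable in a fixed ordering:

import numpy as np
rng = np.random.default_rng(0)

n = 28 * 28                       # number of binary variables (pixels)
p_x1 = 0.3                        # p(x1 = 1)
p_next_given_prev = {0: 0.1, 1: 0.8}  # p(x_{i+1} = 1 | x_i), one entry per value of the previous variable

# Ancestral sampling: each variable only looks at the one right before it.
x = [int(rng.random() < p_x1)]
for _ in range(n - 1):
    x.append(int(rng.random() < p_next_given_prev[x[-1]]))

# Note how few numbers we had to write down, compared to a full joint table with 2**n - 1 entries.
print(len(x), x[:10])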
How many parameters do we need here? [INAUDIBLE] Yeah, something like that, this linear in n, basically. Depending on if the variables are binary, this is the formula. So big savings. And now we have a much more reasonable model. This is much more reasonable than the full independence model. These Markovian models are quite useful in practice. But again, if you think about language or you think about pixels in an image, it's probably not good enough. You're probably not going to do a great job if you're trying to predict-- if you think about your autocomplete in your phone, you're trying to predict the next word just based on the previous one and ignore everything else, you can do OK, but it's not going to be great. You need more context to be able to make a good prediction about the next word. And so although there is an exponential reduction, maybe this assumption is still a little bit too strong. And so one way to generalize this idea is to use something called a Bayesian network, which is essentially the same machinery in slightly more generality. The basic idea is, again, that instead of-- we're going to write down the joint as a product of conditionals. But instead of having these simple conditionals where it's always one variable given another variable, we're going to use conditional distributions where the i-th variable will depend on another set of random variables, which are the parents in this Bayesian network. And so intuitively, the idea is that we're going to try to write down the joint as a product of conditionals, but now the conditionals are a little bit more complex. Now each variable is allowed to depend on a subset of variables. It could be 1, it could be more so that buys you a little bit more flexibility. And the idea is that because we're using chain rule, as long as there is some ordering that you've used to come up with this joint distribution by simplifying the expression that you would get from chain rule, then this is guaranteed to correspond to a valid model. So essentially, you can specify any conditional independence-- any conditional distribution you want on the right-hand side. Once you multiply them together, you're going to get a valid probability distribution on the left-hand side. That's the key intuition behind the Bayesian network. More formally, a Bayesian network is a data structure that you can use to specify a probability distribution. It's a graph-based data structure where basically there's going to be an underlying directed acyclic graph, which basically gives you the ordering in that chain rule factorization. So there's going to be one node in the graph for every random variable that you're modeling. So if you're modeling images, one node for every pixel, if you're modeling text, one word for every token or every word that you have. And then what you do is for every node in the graph, you specify its conditional distribution given its parent in this directed acyclic graph. And that's the graph is the structure. And then by specifying different conditional distributions for each variable given the parents, you get different parameterizations of these joints. And the claim is that basically this is a valid probability distribution. And the reason is that it's essentially the same trick we did for the Markov model. You start with a directed acyclic graph. You can always come up with an ordering. You can just do topological sort on the graph. You get an ordering, you can apply chain rule. 
You factorize with respect to their ordering, then you simplify the conditionals based on this some conditional independence assumption. And that gives you a potentially compact data structure. It depends on how many parents, how dense the graph is, but this can give you savings. Again, the challenge is that we have this joint distribution. It takes too many parameters to represent this object. But if these conditionals are relatively simple, so you don't have too many parents for each variable, then these conditionals are simple enough that you can store this object, you can learn these parameters from data, and so forth. So it's like exponential in the number of parents that you have for each variable. So if you make a very dense graph, you're going to get a very expressive class of models and you're not going to get big savings. If you use a chain graph where there's only one parent per node, then you get the Markov assumption that we have before and there are things in between. For example, this is just-- what does it mean? A directed cycle would be something like this. So you need to make sure that there is no directed cycle, which means that there is an ordering and it means you can use chain rule. This is an example of a very simple Bayesian network. Here the idea is that you have these five random variables representing the difficulty of an exam, the intelligence of a student, the grade that you get, and so forth. And there is a joint distribution over these five random variables, which is obtained as a product of conditional distributions of each variable given the parent. And so for this particular graph, this node doesn't have any parent, so you just write the marginal probability of that node. This node doesn't have any parent. So again, it's just the probability of getting different intelligence values. The grade has two arrows incoming from difficulty and intelligence. So what you're saying is that the grades that you see depend essentially on the possible values of the difficulty of the exam and the intelligence of the student. And so you can basically write down the joint as a product of conditionals that would look like this. And in this case, this might be more economical than representing the joint because you basically just have to specify these tables, these conditional probability distributions. You only need to work out basically how these random variables are related to each other locally with respect to this graph. You only need to know how to assign grades given different values of difficulty and intelligence, but you're like breaking down the complexity of the joint in terms of smaller local interactions between the random variables. And again, by making this assumption the global dependencies can be broken down into simpler local dependencies. You get benefits because these conditionals are potentially much smaller, much simpler, and easier to represent. And the idea is that assuming this factorization is the same as assuming conditional independence. And you can see it here, we have this kind of factorization for the joint, which is implied by this graph. In general, we know that you can always have a more complicated factorization where every variable depends on all the variables that come before in some ordering. So in general, you would have to specify the probability of having a certain difficulty for the exam. 
You would have to specify the probability of some intelligence value given the difficulty, a probability of g, given i and d, the probability of the SAT score given everything else, and so forth. Now, if you're willing to assume that the intelligence of the student does not depend on the difficulty of the exam, then you can start simplifying these conditionals and they become like the ones you see above. So if you want to-- for example, the SAT score only depends on the intelligence, and you don't need to know the difficulty of the exam, you don't need to know the grade in the other exam to figure out the SAT score. And so this factorization basically corresponds to a bunch of conditional independencies. We're saying the difficulty and the intelligence are independent of each other. The SAT score is conditionally independent from the difficulty and the grade given the intelligence and so forth. And so Bayesian networks are basically a way to get to simplify complicated distributions based on conditional independence assumptions, which are more reasonable than full independence assumptions. Now, [CLEARS THROAT] so to summarize, we can basically represent-- use Bayesian networks as a tool to factorize distributions and write them down as a product of conditionals. You get the joint by multiplying together the conditionals. You can sample by basically going through the ordering. In this class, we're actually not going to be going this route. So that's the route that you're going to take if you want to build up a probabilistic graph like a graphical model, a PGM, a probabilistic graphical model. In this class, the graphical model-- we'll still be using a little bit of graphical models notations, but the graphical models are going to be relatively simple. They typically involve two or three random variables, random vectors. And instead, we're going to be making other kinds of a softer notion of conditional independence, which is going to be essentially this idea of-- let's use neural networks to try to represent how the different variables are related to each other. So it will still have somewhat the flavor of a Bayesian network, but it's going to be a little bit of a software kind of constraint between the variables. Now, obviously, this was a bit of a crash course. But again, we're not going to be leveraging these things too much. We're going to use a little bit of graphical models notation and a little bit of directed acyclic graph for some of the graphical models, but nothing too heavy. We're going to be using different assumptions and different modeling ideas to build deep generative models. And that's going to be again inspired by the use of neural networks for, let's say, classification or other discriminative tasks that you might have seen before. And so now is a good time to try to get a sense of what's the difference between building a generative model versus building-- as usual, discriminative model, and how do we get the ideas from the things that we know work when you're doing, let's say, image classification or these more standard machine learning problems and translate them back into the generative modeling world. And in order to see that, let's see-- again, one example where you could try to-- we're going to use a simple generative model to solve a discriminative task. And we'll see how that will differ compared to a traditional approach based on, let's say, a neural network. 
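First, though, a minimal sketch of the exam Bayesian network from above, using four of the five variables mentioned (the fifth is left out for brevity) and conditional probability table numbers that are entirely made up, just to make "one conditional per node given its parents" and "sample by going through the ordering" concrete:

import numpy as np
rng = np.random.default_rng(0)

# Toy CPTs: Difficulty and Intelligence have no parents,
# Grade depends on both, SAT depends on Intelligence.
p_difficulty_hard = 0.4
p_intelligence_high = 0.3
p_grade_A = {  # p(Grade = A | Difficulty, Intelligence)
    ('easy', 'low'): 0.5, ('easy', 'high'): 0.9,
    ('hard', 'low'): 0.1, ('hard', 'high'): 0.6,
}
p_sat_high = {'low': 0.2, 'high': 0.8}  # p(SAT = high | Intelligence)

def sample():
    # Ancestral sampling: follow a topological ordering of the DAG,
    # drawing each variable given its already-sampled parents.
    d = 'hard' if rng.random() < p_difficulty_hard else 'easy'
    i = 'high' if rng.random() < p_intelligence_high else 'low'
    g = 'A' if rng.random() < p_grade_A[(d, i)] else 'B'
    s = 'high' if rng.random() < p_sat_high[i] else 'low'
    return d, i, g, s

def joint_prob(d, i, g, s):
    # The joint is just the product of the local conditionals.
    pd = p_difficulty_hard if d == 'hard' else 1 - p_difficulty_hard
    pi = p_intelligence_high if i == 'high' else 1 - p_intelligence_high
    pg = p_grade_A[(d, i)] if g == 'A' else 1 - p_grade_A[(d, i)]
    ps = p_sat_high[i] if s == 'high' else 1 - p_sat_high[i]
    return pd * pi * pg * ps

print(sample(), joint_prob('easy', 'high', 'A', 'high'))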
So let's say that you want to solve a task where you're given a bunch of images, a bunch of emails, and the goal is to predict whether or not this email is spam. So there is a binary label Y that you're trying to predict and you're doing it using a bunch of features Xi. And let's say the features are just binary and they are on or off depending on whether or not different words in some vocabulary appear in the email. And the usual assumption is that there is some underlying data-generating process. And so there is some relationship between the different words that you see in the email, the X's and the Y variable, which is the label you're trying to predict. So one way to approach this is by building a Bayesian network. This is a Bayesian classifier called the Naive Bayes classifier, which is basically going to say, we want to model this joint distribution. This joint distribution has too many variables, we cannot afford to store it to learn the parameters from data. So we're going to make conditional independence assumptions and we're going to assume that the joint can be described by this directed acyclic graph. And if you are willing to make this Bayes' net assumption, what this means is that the features, the words, the Xi's are basically conditionally independent given the label, given the Y. If you're willing to make this assumption, then you're able to factorize the joint, which is usually complicated as a product of conditionals. So you can write it as the p of y because y doesn't have any parent, and then the probability of one variable given its parent, probability of this variable given its parent, and so forth-- which means that you can basically-- according to this very simplified model of the world, you can generate a data point by first choosing whether or not it's spam, and then choosing whether different words appear in the email based on whether the email is spam or not. And once you have the model, what you can do is you can try to estimate the parameters of this model from data. So you can try to estimate these probabilities by looking at how frequently do you see different words in different types of emails. And then you can do classification because at the end of the day, what you're trying to do is you're trying to classify whether or not a new email is spam or not. And you can use Bayes' rule to write down the conditional distribution of Y given x. So given a new email, you observe which words are there and which ones are not. And you can try to compute the probability of Y by basically using Bayes' rule probability of x, y divided by the probability of x, essentially, which is what you have at the denominator. And if you've done a good job at estimating these parameters, this thing will-- and to the extent that the assumption is true, this conditional independence assumption is true, this model might perform reasonably well at predicting the label Y given the features X. The challenge of course is once again that perhaps these conditional independence assumptions are not that great. If you think about it, you're saying that different words appear in an email independently of each other. So once you know why basically knowing whether a word appears or not doesn't help you predict whether some other word appears in the email or not, which is probably not reasonable. Nevertheless, this model tends to work OK in practice. So even though the assumption is not quite true, it might give you reasonable results in practice. 
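A minimal sketch of that Naive Bayes spam classifier, with a made-up three-word vocabulary and made-up labels; the add-one smoothing is an extra detail, not essential to the idea, included only so the toy counts never produce zero probabilities.

import numpy as np

# Rows are emails, columns are binary word-occurrence features.
X = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1], [1, 1, 1], [0, 0, 0]])
y = np.array([1, 1, 0, 0, 1, 0])  # 1 = spam, 0 = not spam

# Estimate p(y) and p(x_i = 1 | y) by counting word frequencies per class.
p_y1 = y.mean()
p_xi_given_y = {c: (X[y == c].sum(axis=0) + 1) / (np.sum(y == c) + 2) for c in (0, 1)}

def predict_proba_spam(x):
    # Naive Bayes assumption: features are conditionally independent given the label,
    # so p(x, y) is p(y) times a product of one factor per word.
    def joint(c):
        p_c = p_y1 if c == 1 else 1 - p_y1
        probs = p_xi_given_y[c]
        return p_c * np.prod(np.where(x == 1, probs, 1 - probs))
    # Bayes' rule: p(y = 1 | x) = p(x, y = 1) / (p(x, y = 0) + p(x, y = 1)).
    return joint(1) / (joint(0) + joint(1))

print(predict_proba_spam(np.array([1, 1, 0])))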
Now, how does this fit into the discriminative versus generative model of the problem? So at the end of the day, we're trying to model this joint distribution between features and a label Y. And using chain rule, we can write it like this as the probability of the label times the probability of the features given the label. This is exactly what we've done in the Naive Bayes model that we just saw. Alternatively, you can use chain rule based on a different ordering and you can say, I can write it as the probability of observing this feature vector times the probability that that particular feature vector has label Y. And so these are basically two Bayesian networks that capture the same joint distribution, one where we have y and then x, and then one where we have x and y. And the second one is basically the one that you deal with when you think about usual discriminative models. If you think about it, at the end of the day, if all you care about is predicting whether a new data point has label 0 or 1, all you care about is p of Y given X. And so the second modeling approach where you're modeling p of Y given X directly might be much more natural. In the last model, we were specifying p of Y, we were specifying p of X given Y, and then we would compute p of Y given X using Bayes rule. While in the second model you have access to p of Y given X, the probability of this variable given its parent directly. And so the idea is that if you know that all you care about Is p of Y given X, then there is no point in trying to learn or model or deal with this marginal distribution over the features. You know that you're always ever going to be given an email and you just try to predict y, why do you bother trying to figure out what kind of feature vectors x you're likely to see? p of X here will basically be a distribution over the features that your model is going to see. If you know you don't care because you just care about predicting y from x, then you don't even bother modeling p of X. And so that's more convenient. And that's why typically the kind of models that you're building, that you use in machine learning, they don't bother about modeling the distribution over the features. They just bother about modeling the relationship between a label and the features x. While in a generative model, it's the opposite. You're basically modeling the whole thing. You're modeling the full joint distribution. And so the discriminative model is basically only useful for discriminating y given x, while a generative model is also able to reason about its inputs, it's able to reason about the full relationship between x and y. And so now there is still no free lunch in the sense that if you think about it, it's true that you can do these two factorizations. You can use either factorized as p of Y and then p of X given Y, or you can do p of X and then p of Y given X. But in both cases, you end up with some of these conditionals which are pretty complicated. So in the generative model, you have a Bayesian-- if you were to actually unpack the fact that X is a random vector, so you have a bunch of individual features that you have to deal with, the two graphical models corresponding to the two chain rule factorizations would look like this. In the generative view of the world, you have Y and then you have all the features. In the discriminative view of the world, you have all the X's first and then you have Y given X. And you still need to deal with the fact that you have a lot of X's. 
You have potentially a lot of features that you have to take into account when you're predicting Y. And so in the generative modeling world, p of Y is simple, but then you have a bunch of these variables here that have a lot of parents. So there is a lot of complexity that you have to deal with when you need to decide what are the relationships between the features. In the discriminative modeling world, it's true that you're making some progress because maybe you don't need to model all these relationships between the x variables, but you still need to be able to model how Y depends on all the X's. And Y has a lot of parents. So again, that conditional distribution is potentially very complicated. And so one way to make progress is to say, OK, let's make conditional independence assumptions. So in general, a generative model would have to look like this. You would have to be able to capture all sorts of dependencies between the X's and the Y. If you're willing to make simplifying assumptions and say, oh, things are conditionally independent, then you basically chop some edges in the graph and you end up with something that is much simpler. Remember, the fewer parents the variables have, the simpler the relationships between the random variables are, and the simpler the model is. And so you're saying once I know Y, I can basically figure out the values of the X variables and there is no relationship between them. That's one way to make progress. Obviously, it's a strong assumption. It might or might not work in the real world. In the discriminative model, you still need to be able to model this conditional distribution of Y given all the X's. And again, that's not straightforward because if you think about all these features here, let's say they are binary, there are 2 to the n possible feature vectors that you have to deal with, and for each one of them, you would have to specify a value-- like when you look at this last conditional here, it's the same as before. You're conditioning on a lot of variables. There are 2 to the n possible combinations of those X variables and in full generality, you would have to assign a different number, a different value for the probability of Y for each one of them. And so again, the conditional distribution of Y, given all the parents, is not easy to deal with, even in a discriminative model. So the way you make progress, usually in a discriminative model, is to assume that the dependency is not fully general and it somehow takes a particular functional form. So it's true that this x vector can take many, many different values. And if you were to use a big table, that table would have 2 to the n possible rows. So you would not be able to store that, you would not be able to learn it from data, you would not be able to use it. But what you can assume is that there is some simple function that you can use to take x and map it to a probability value. And so the assumption that you have to make here to make progress is to assume that there is some simple function f that you can apply to the different values that the x variables can take and that will map it to this number that you care about, which is the conditional probability of Y given X. And there are many different ways to do it. There are some constraints here, and one way to do it is to do what's done in logistic regression, for example. So the idea-- and that's why it's called regression-- is that essentially it's not a table.
It's going to be some kind of function that will take different values of x and will regress them to probabilities for y. And it's not an arbitrary regression problem because what we're doing is we're trying to map these x's to conditional probabilities. And we know that that conditional probability is a number that has to be between 0 and 1. It doesn't make sense to say, oh, I feed in a certain feature vector x-- in the spam classification, it's a bunch of indicators of whether different words appear in the email-- and this function gives me a value of minus 1. That doesn't make sense, because we know that probabilities are numbers between 0 and 1. So there are some constraints on this regression problem. And in particular, we want the output to be between 0 and 1. We want the dependency to be simple but reasonable. If it's too complicated and it's a table, a lookup, then you're back to the previous setting. You don't gain anything. So somehow you want a simple dependency, but one that's sufficiently rich that it captures real ways in which changing x should change the probability of y. And one way to do it is to assume that there is some vector of parameters, alpha in this case. And then perhaps what you can do is you can assume some linear dependence, where you basically take a linear combination of these x's, these features, weighted by these coefficients alpha, and you try to do this as a regression. It's like linear regression at the end of the day. You take different values of x and you map them to different outputs. Now, by itself, this wouldn't work because remember, we need these numbers to be between 0 and 1. But that's something easy to fix. You can just transform that value with a function that rescales things and maps them to be between 0 and 1. For example, you can use the logistic function or the sigmoid. And if you do that, then you get what's known as logistic regression. It's a way to model a conditional distribution of y given x, where you're assuming that that conditional distribution takes a specific functional form. You're assuming that given different values of x, you can linearly combine them based on some vector of coefficients, alpha. And then you pass them through this sigmoid function, this S-shaped function that will take z values between minus infinity and plus infinity and will rescale them to be between 0 and 1. So then they are valid probabilities. And that's another way to make progress. It's another way to deal with the fact that, in general, you cannot represent this complicated dependency between y and all the x variables as a table. You have to either assume that there are conditional independencies, or that things don't even depend on some of the inputs, or you assume that there is some kind of specific functional form that allows you to compute these probabilities. And one such assumption is the logistic regression assumption. Question. Yeah. Yeah. When you make the linear dependence assumption, is that basically saying that the Bayesian network, that the x's are independent and they're not related to each other? And then is that also equivalent to just having the joint distribution being described by the product of the marginals? So the question is whether this implies some conditional independence assumptions.
You can actually show the other way around that basically if you assume the Naive Bayes factorization, then the conditional distribution of y given x will have this functional form, but not vice versa, not necessarily vice versa. And so in some sense, you're making a weaker statement about the relationships of the random variables, which is why this model is stronger in practice. You're assuming less about how the random variables are related. So to the extent that you have enough data to really learn the relationship, you're better off with this model because you are assuming less. If you have very limited data, you might be better off with the Naive Bayes model because you're making a strong assumption, but the prior helps you more because you don't have enough data to figure out how things are really related to each other. But this is a different assumption. You're really saying there is some functional form that tells you how the random variables are related to each other. So that doesn't imply that your joint is a product distribution? So the question is, does this imply that the joint is a product distribution? You're just working at the level of a single conditional. So what we'll see is that, in fact, an autoregressive model, a deep autoregressive model will essentially be just be built by assuming that there is a chain rule factorization and then modeling the conditionals using this functional relationship, maybe a linear regression model or a deep neural network. And that's how we will build the first type of useful, deep generative model. But this by itself is just for a single conditional. So it's not a statement about the joint. It's just saying I'm not even going to care about modeling the p of X. I'm not going to reason about the inputs that my logistic regression model is going to see, because at test time, I'm always ever going to be-- somebody is going to give me the x's. So I don't need to bother about figuring out how the different words are related to each other. I'm only going to bother about modeling how to predict y from x. That's already hard, but I'm going to do it based on this simplifying assumption. And by assuming that you're making this linear dependence, again, you're making some assumptions which might or might not be true in the real world. So in particular, this is a relatively simple dependency that you're assuming between y and x. And so what you're doing is you're saying that-- let's say if you have two features x1 and x2, then you're basically saying that equal probability contours are straight lines. So there are some straight lines such that all the points that lie on those straight lines they have the same conditional probability for y. Or it also means that the decision boundary-- so if you are using a threshold to decide whether a variable belongs to class 0 or 1 is going to be again, a straight line. So all the points on this side of the line are going to be positive, all the other ones are going to be negative. And specifically, basically, it means that if you think about how the probability changes as you change x and y, it has a very specific functional form. It looks like this S kind of thing, where the way you change the probability as you change x, the probability of y given x changes as you change x has a very specific functional form. If you think about the lookup version of this, it would be an arbitrary function. 
Here, you're saying, no, I'm willing to assume that it takes a very specific, relatively simple functional form, which again, might or might not be true in the real world. Maybe the probability of y given x should have a very different shape and then this model is not going to work well. It's like before, we were assuming conditional independence might or might not be true in the real world. Here we are assuming a specific functional form which might or might not be true in the real world, and that determines whether or not your model is going to work well or not in practice. And so, again, basically these are two are dealing with this issue of modeling distributions over high dimensional spaces. You have to make assumptions. Naive Bayes is one way to make progress, conditional independence assumption. The logistic regression model does not make that assumption explicitly. It does not assume that the features are conditionally independent given the label. So it's a little bit more powerful. If you think about the spam classification, there might be two words in your vocabulary like bank and account. Knowing whether one appears in the email, so knowing x1 tells you a lot about whether x2 appears in the email, But the Naive Bayes model assumes that it doesn't help. So that assumption is clearly wrong in the real world. The discriminative model does not make that assumption explicitly. And so let's say that in your data set, these two words always appear together. So whenever there is bank, there is also account. The Naive Bayes model is forced to assume by construction that they are independent. So whenever you see-- that both of them appear, it's going to double count the evidence. It's going to think both of them are telling me something about whether this is spam or not. I know that they are independent. So when I see both of them at the same time, I'm doubly confident that maybe this is spam. The logistic regression model can actually just set one of the coefficients to 0 and it doesn't double count the evidence. So you can see that you're making a weaker assumption. And it's actually powerful. And that's why this logistic regression model tends to work better in practice. However, the issue is that one thing you cannot do, let's say if you have a logistic regression model is that you cannot reason about your own inputs. So the only thing you can do is you can map x to y, but you cannot-- let's say somebody gives you maybe-- I mean, the same thing happens also in image classification. So let's say that you have a model that is predicting a label of an image given the image x, that's the only thing you can do-- predict y from x. So if somebody gives you a new image where some of the pixels are missing, there is no way for you to impute the missing values because you don't know what's the relationship between the x variables. You didn't model p of X at all. You only model p of Y given X. And so that's one thing you cannot do with a discriminative model that you can do with a generative model. A generative model is trying to model the full joint distribution between y and x. And so at least in principle, as long as you can do inference, as long as you can compute the right conditionals like modulo computational issues, you have enough information to predict anything from anything. So you can impute missing values, you can do more interesting things. 
But it's a harder problem because you're not only modeling the relationship between how to predict y from x, you are also modeling the full thing. You're modeling the relationship between the features, between the inputs as well. And then, OK, now, how do neural networks come in here? Well, as we said, one of the issues with a logistic regression model is that you're still making some kind of simplifying assumption on how y depends on x. We're assuming that there is this linear dependence. You take the x, the features, you combine them linearly, you pass them through the sigmoid, and that's what gives you y, which again, might not be true in the real world. And so one way to get a more expressive-- make even weaker assumptions in some sense is to basically allow for some nonlinear dependence. You could say instead of directly taking the x features and map them by linearly combining them to a probability value, I'm going to compute some features of the input x. Perhaps I'll do it by taking some linear combination of the features and then applying a nonlinear function to each value that I get out of this, and then I'm going to do linear regression on top of these features. So instead of directly applying linear regression to x, first, I transform x by multiplying it by a matrix A and then shifting by some vector of coefficients B, and then I do a logistic regression on these features. That's essentially a very simple one-layer neural network. Instead of predicting directly based on x, I transform x to get these features h and then I do linear regression based on that. And that's strictly more powerful because now I'm allowed to do more complicated computations. And if you think about that graph, that shape of that function of how y depends on x, now I have two more parameters. I have this matrix A, this vector of coefficients of biases b. And I can use this to change the shape of the function. I can get more complicated relationships between y and x. And so there's a trade-off here. I'm using more parameters to represent this conditional distribution. I no longer have just a vector of coefficients alpha, I also have a bunch of matrices for the previous layer in the neural network, but that gives me more flexibility in predicting y from x. And of course, you can imagine stacking this many, many times. And then you can use a deep neural network to predict y from x. Was there a question? Yeah. Why do we still need alpha here? [COUGH] I guess you still want to have a-- you still want to make it deeper, I guess, and you want to map it to a scalar eventually. So I guess ax plus b is a vector, and then I'm trying to map it to a scalar so I use this alpha vector. But it's just making it explicit that it's a strict generalization of what I had before, but you do want it to be eventually be mapped to a single scalar value. That would be like the softmax at the end. Although this one is just binary, so it's not quite a softmax, but it's essentially the softmax at the end of a neural network that maps the output to a valid probability value. So if y were a categorical random variable, then that would exactly be the softmax at the end. Thank you. OK. And then, yeah. Essentially, what you can do is you can repeat this multiple times and you can get a more expressive way of capturing the relationship between some y variable and the input variable x. And this is going to be the building block that we're going to use to build deep generative models. 
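As a rough sketch of the two conditionals just described-- plain logistic regression and the one-hidden-layer extension-- the following uses untrained, made-up weights purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # Squashes any real number into (0, 1) so it can be read as a probability.
    return 1.0 / (1.0 + np.exp(-z))

# Plain logistic regression: linearly combine the features, then rescale to (0, 1).
alpha = np.array([2.0, -1.0, 0.5])          # made-up coefficients, one per word
def p_y_given_x_logreg(x):
    return sigmoid(alpha @ x)

# One-hidden-layer version: first compute nonlinear features h of the input,
# then do logistic regression on top of those features.
A = rng.normal(size=(4, 3))                 # untrained placeholder weights
b = rng.normal(size=4)
w = rng.normal(size=4)
def p_y_given_x_mlp(x):
    h = np.tanh(A @ x + b)                  # h = nonlinearity(Ax + b)
    return sigmoid(w @ h)

x = np.array([1.0, 0.0, 1.0])               # which vocabulary words appear in the email
print(p_y_given_x_logreg(x), p_y_given_x_mlp(x))
```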
So what we're going to do is we're going to take advantage of this fact that neural networks seem to work very well at solving this kind of prediction task, and we're going to combine them to build generative models. And the simplest way to do it is to use chain rule and then use neural networks to represent each one of those conditionals. And that's essentially a neural autoregressive model, and essentially that's what large language models do. They use chain rule and then they simplify the conditionals by assuming that you can model them using a neural network. So you can predict the next word given the previous ones using a neural network. But there are going to be other ways. When we see other classes of generative models they are still going to use these kinds of ideas, but maybe we're going to combine them in different ways and we're going to get different types of generative models. So that's the story. There is the chain rule factorization, which is fully general. So given a joint, you can always write it as a product of conditionals with no assumptions. In a Bayesian network, you're going to try to simplify these conditionals somehow by assuming that the variables are conditionally independent. So whenever you're trying to predict x4, you don't really need x2 and x3, you just need x1, for example, which is usually too strong. And this doesn't work on high dimensional data sets, on images, text, the kind of things we care about. One class of deep generative models, a very successful one, conceptually does this. It just replaces all these conditionals that we don't know how to deal with with neural networks. And you can choose different architectures, but fundamentally, that's the whole idea. We're going to use a neural network to predict what's the fourth word given the first, the second, and the third. And again, there's no free lunch in the sense that what we're giving up is that we're assuming these conditional distributions can basically be captured by a neural network, which might or might not be the case in practice. But that's one way to get tractability, to the extent that these neural networks are not too big and somehow you're able to tie them together. You can see that you need a different neural network for every position in the sequence, which would be very tricky. So somehow you need to figure out a way to tie together the weights of this neural network so this can be done in practice. But essentially, this is one way to get a deep generative model. In the Bayes net, the last factor should be x4 given x3, right? Yeah, it could be anything, I guess. It depends what's the shape of the neural network. If you were to do a Markov model, it should be x3. But you could say that maybe the fourth word is completely specified by the first one. So you don't need the second and the third-- they don't help you in predicting the fourth beyond the first-- which is probably a very weird assumption. It wouldn't work in practice, but the underlying idea is that you're going to simplify those conditionals by dropping the dependence on some variables, and that gives you a Bayesian network. Depending on which variables you drop, you're going to get different graphs. If you were to not drop any variable, you get this. You get the fully general model and that makes no assumptions. So that's fully general, but it's too expensive because whenever you're conditioning on too many things, that conditional distribution is too complicated. And you cannot store it, you cannot learn it, and so you cannot actually use it in practice.
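As a toy illustration of the chain-rule idea (not the architecture discussed in the lecture), the sketch below makes each conditional p(x_i | x_{<i}) over binary variables its own small logistic regression; real autoregressive models would tie parameters and use much bigger networks.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
W = np.tril(rng.normal(size=(n, n)), k=-1)   # position i may only look at positions j < i
b = rng.normal(size=n)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_prob(x):
    # Chain rule: log p(x) = sum_i log p(x_i | x_{<i}), each conditional a logistic regression.
    p = sigmoid(W @ x + b)                   # p(x_i = 1 | x_{<i}) for every position i
    return np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

def sample():
    x = np.zeros(n)
    for i in range(n):                       # sample one variable at a time, left to right
        x[i] = rng.random() < sigmoid(W[i] @ x + b[i])
    return x

print(sample(), log_prob(np.array([1.0, 0.0, 1.0, 1.0])))
```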
Cool. The last thing I wanted to mention is how to deal with continuous variables. So we often want to model not just discrete data, but actually data that is more naturally thought of as continuous, so taking values over the whole real axis. And luckily the machinery is very similar. So here instead of working with probability mass functions, we work with probability density functions. And here you can start to see how the idea of working with tables already doesn't work, because there is an infinite number of different values that x can take. You cannot write down a table that will assign a number to each one of them. So you have to basically assume that there is some functional form, some function that you can use to map different values of x to a scalar. And for example, you can assume that x is Gaussian, which means that there is a relatively simple function that depends on two parameters, mu and sigma. And then you can plug them into this expression and you get back the density of the Gaussian at any particular point x. Mu and sigma here are the mean and the standard deviation of the Gaussian. Or you could say, OK, maybe a uniform random variable. Again, this is another relatively simple function that you can use to map x to densities. A uniform distribution over the interval between A and B would have that kind of functional form, et cetera. And the good news is that, again, we often care about modeling many random variables, which could be continuous or maybe a mix of continuous and discrete. In this case, we care about the joint probability density function. And in the same way, for example, a joint Gaussian would have that kind of functional form. So now x is a vector of numbers. And the good news is that the whole machinery of chain rule, Bayes' rule, they all still apply. So, for example, we can write down the joint PDF, the joint probability density function, over three random variables as a marginal PDF over the first one, a conditional over the second given the first, and so forth. And this is useful because we can again mix and match. We can use Bayesian networks or we can use neural networks plus Bayesian networks in different ways to get different types of generative models. So for example, you can get a mixture of two Gaussians using a simple Bayesian network with two random variables, z and x. So the Bayesian network has two random variables, z and x. x has z as a parent. z doesn't have any parent. And so what it means is that the joint over x and z can be factorized as the probability of z times the probability of x given z. And for example, you could say z is a Bernoulli random variable with parameter p. So z is binary. It's either 0 or 1, and you choose a value by flipping a biased coin with probability p. And then conditioned on z, you choose a value for x by, let's say, sampling from a Gaussian. And because z can take two different values, there are actually two Gaussians. There is one Gaussian when z is 0 and there is one Gaussian when z is 1. And these two Gaussians are allowed to have different means and different variances. So this would be a kind of graphical model that corresponds to a mixture of two Gaussians. And because you're mixing together two Gaussians, you have a slightly more flexible model. The parameters here are p, which is the probability of choosing 0 versus 1 for this latent variable, and then you have the means and the standard deviations.
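A minimal sketch of ancestral sampling from this two-node Bayesian network (the mixture of two Gaussians); the particular parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3                         # p(z = 1), the Bernoulli mixing probability
mus = {0: -2.0, 1: 3.0}         # a different mean for each value of z
sigmas = {0: 0.5, 1: 1.0}       # and a different standard deviation

def sample():
    # Ancestral sampling: first the parent z, then x given z.
    z = int(rng.random() < p)
    x = rng.normal(mus[z], sigmas[z])
    return z, x

samples = np.array([sample()[1] for _ in range(5)])
print(samples)
```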
Of course, you could choose other things. For example, you could choose z to be a uniform random variable between A and B. And then given z, x, let's say, is a Gaussian with a mean, which is z, and then maybe a fixed standard deviation. Just another example. A more interesting one is the variational autoencoder, which we're going to cover in depth in future lectures. But at the end of the day, a variational autoencoder is this Bayesian network with two nodes, z and x. And the assumption is that z is sampled from a Gaussian. So p of z is just a simple Gaussian random variable. And here you see how we are going to mix and match Bayesian networks and neural networks. Given z, x is again a Gaussian distribution. But the mean and the variance of this Gaussian are the outputs of some neural network or two neural networks mu theta and sigma phi, which depend on z. So the sampling process is like a generalization of the ones you see before where again, you first sample z, then you feed z into a neural network that will give you means and variances that you're using another Gaussian distribution to sample a value for x. And this kind of machinery is essentially a variational autoencoder. This corresponds to the generative process that you use in a VAE or a variational autoencoder. And we're going to have to talk about how you actually train these kinds of models and how to learn them, but fundamentally, you see how we take this idea so I can mix and match them. There's a little bit of Bayesian network, a little bit of chain rule, a little bit of neural networks to represent complicated conditionals, but everything can be stitched together. And that's how you get different kinds of generative models. And yeah, just as a note, even though mu and sigma could be very complicated, the conditional distribution of x given z is still Gaussian in this case. So there are some kind of trade-offs that you have to deal with. And yeah, this is it for today. And then next, we're going to talk about autoregressive models.
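For concreteness, the VAE sampling process just described might look like the following sketch, with tiny untrained placeholder networks standing in for the mean and standard deviation networks.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 2, 5
mu_net = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, data_dim))
log_sigma_net = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, data_dim))

def sample_x():
    z = torch.randn(latent_dim)               # z ~ N(0, I), the simple prior
    mu = mu_net(z)                            # neural networks map z to the parameters
    sigma = log_sigma_net(z).exp()            # of a Gaussian over x (exp keeps sigma positive)
    return mu + sigma * torch.randn(data_dim) # x | z ~ N(mu(z), sigma(z)^2)

print(sample_x())
```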
Stanford_CS236_Deep_Generative_Models_I_2023_I_Stefano_Ermon
Stanford_CS236_Deep_Generative_Models_I_2023_I_Lecture_16_Score_Based_Diffusion_Models.txt
All right. So let's get started. Today, we're back talking about diffusion models. I think there's still a few things that we didn't get a chance to cover. So specifically, we're going to see how to think about score-based models as a diffusion model. And so where does that name come from and what's the relationship between denoising score matching and other kinds of training objectives you might have seen before. We'll see how we can think of a diffusion model or even a score-based model to some extent as a type of variational autoencoder at the end of the day, a hierarchical one but essentially, a variational autoencoder. And there's going to be some connection between evidence lower bounds and the denoising score-matching losses that we've been seeing. Then we'll go back to interpreting diffusion models as normalizing flows. This is the idea of converting an SDE to an ODE that we briefly talked about but we didn't have time to go into a lot of detail, which will allow us to compute likelihoods exactly because it's a flow model. And then we'll talk about how to make sampling efficient. So take advantage of the fact that once you view generation as solving some kind of ordinary differential equation or stochastic differential equation then you can use advanced numerical methods to accelerate sampling. And then we'll talk about controllable generation. So if you want to build a text-to-image model or you want to use some kind of control, some kind of side information to let's say generate an image, how do you bring that into the equation, how do you change the models to allow you to do that. So let's start with a brief recap of score-based models. Recall that the underlying idea there was that we're going to model a probability distribution by working with the score function, which is this gradient of the log density of the log-likelihood essentially with respect to the input dimensions. So you think of it as a vector field that basically tells you in which direction you should move if you want to increase the likelihood. And we use a deep neural network to model it and this score model, which is like a neural network that takes, let's say, an image as an input and maps it to the corresponding score or gradient of the log-likelihood evaluated at that point. And we've seen that the score can be estimated from data using score-matching losses. And it's a relatively simple regression loss where you try to compare the estimated gradient to the true gradient. And you look at the L2 distance between these two vectors averaged over the data distribution. And we've seen that there are ways to rewrite that loss into one that you can, at least in principle compute and optimize as a function of theta. That's intractable or at least expensive with respect to the data dimension. But we've seen that there is something called denoising score matching, which basically is much more efficient, the basic idea that instead of trying to estimate the score of the data distribution, you estimate the score of a version of the data distribution that has been perturbed with, let's say, Gaussian noise. So you have some kind of kernel or noise kernel which is just like a Gaussian at the end of the day that will take a sample x and we'll add the noise to it. And that defines basically a new distribution q sigma, which is basically just what you get by convolving the original data distribution, which is unknown with some Gaussian kernel. So it's like a smoothed-out version of the data distribution. 
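As a small illustration of "the score is the gradient of the log density", here it is computed with autograd for a toy 1D mixture of two Gaussians; for real data the density is unknown, which is exactly why a score network has to be trained to approximate this vector field. The mixture weights and parameters are arbitrary.

```python
import math
import torch

def log_p(x):
    # Log density of a 1D mixture of two Gaussians with weights 0.3 and 0.7 (a toy choice).
    c1 = torch.distributions.Normal(-2.0, 0.5).log_prob(x) + math.log(0.3)
    c2 = torch.distributions.Normal(3.0, 1.0).log_prob(x) + math.log(0.7)
    return torch.logsumexp(torch.stack([c1, c2]), dim=0)

def score(x):
    # The score is just the gradient of the log density with respect to the input.
    x = x.clone().requires_grad_(True)
    log_p(x).sum().backward()
    return x.grad

print(score(torch.tensor([0.0, 2.5, -3.0])))
```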
And it turns out that estimating the score of this q sigma instead of pdata is actually efficient. If you look at the usual regression loss where you compare your model with the true score of the noisy data distribution, averaged over the noisy data distribution, it turns out that that objective can be rewritten into a denoising objective. So basically you can train a model s theta that can take a noisy image, x plus noise, and tries to basically estimate the noise vector that was added to the image. So if you can somehow go from this noisy image to the clean image, or equivalently you can figure out what was the vector of noise that was added to this image, which if you subtract it will give you back the clean image. So if you can denoise, then you can also estimate the score of the noisy data distribution q sigma. And this does not involve any trace of the Jacobian, it doesn't involve any differentiation, it's just a straightforward regression loss where the target is basically just the noise. The reason we're doing denoising is because by solving the denoising objective that you see here in the third line, you are actually learning the score of the noise-perturbed data distribution. And that's a good thing to have access to because if you have the score, then you can basically use Langevin dynamics to effectively generate samples. So if you know how to denoise, then you know in which direction perturbing your image would increase the likelihood most rapidly. So you have a Taylor approximation of the log-likelihood around every data point. And you can use that information to inform the way you explore this space and generate samples. The trade-off is that you're no longer estimating the score of the clean data distribution but you're estimating the score of the noisy data distribution. And so, yeah, it's much more scalable. It reduces to denoising, but the trade-off is you're not estimating the score of the data distribution, you're estimating the score of the noise-perturbed data distribution. And then once you have the score-- back to the question, if you somehow are able to estimate the scores, then you can generate samples by basically doing some kind of noisy stochastic gradient ascent procedure where you just initialize your particles somewhere and then you follow the arrows essentially, adding a little bit of noise at every step and trying to move towards high probability regions. And we've seen that in order to make this work, it actually makes sense to not only estimate the score of the data distribution perturbed with a single noise intensity, but you actually want to estimate the score of multiple versions of the data distribution, where each version has been perturbed with a different amount of noise. And so you have these different views of the data distribution that have been perturbed with increasingly small, in this case, amounts of noise. And what you do is you train a single model, a single score network, which is conditional on the noise level. So it takes a sigma as an input and it will estimate the score for all these different data distributions perturbed with different amounts of noise. And if you can train this model, which you estimate by denoising score matching, then you can basically do annealed Langevin dynamics, sketched right below.
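Here is a hedged sketch of both pieces: a denoising score-matching loss at a single noise level, and the annealed Langevin sampler that would use a noise-conditional score network. The tiny MLP, the way the noise level is fed in, the noise schedule, and the step sizes are all placeholder choices, not the lecture's actual setup.

```python
import math
import torch
import torch.nn as nn

data_dim = 2
# Placeholder noise-conditional score network s_theta(x, sigma); a real one would be much larger.
score_net = nn.Sequential(nn.Linear(data_dim + 1, 64), nn.ReLU(), nn.Linear(64, data_dim))

def score_fn(x, sigma):
    sig = torch.full(x.shape[:-1] + (1,), float(sigma))      # crude way to condition on sigma
    return score_net(torch.cat([x, sig], dim=-1))

def dsm_loss(x, sigma):
    # Denoising score matching at one noise level: perturb the data, then regress the network
    # onto the score of q_sigma(x_tilde | x), which is (x - x_tilde) / sigma^2 = -eps / sigma.
    eps = torch.randn_like(x)
    x_tilde = x + sigma * eps
    target = -eps / sigma
    return ((score_fn(x_tilde, sigma) - target) ** 2).sum(dim=-1).mean()

@torch.no_grad()
def annealed_langevin(sigmas, n_steps=100, step_base=1e-5):
    x = torch.randn(1, data_dim) * sigmas[0]                  # start from (roughly) pure noise
    for sigma in sigmas:                                      # largest noise level first
        step = step_base * (sigma / sigmas[-1]) ** 2          # shrink the step with the noise level
        for _ in range(n_steps):
            # Langevin update: follow the score of the sigma-perturbed distribution, add fresh noise.
            x = x + step * score_fn(x, sigma) + (2 * step).sqrt() * torch.randn_like(x)
    return x

x_data = torch.randn(128, data_dim)                           # stand-in for a batch of real data
sigmas = torch.exp(torch.linspace(math.log(10.0), math.log(0.01), 10))
dsm_loss(x_data, sigmas[3]).backward()                        # one training step's worth of gradients
sample = annealed_langevin(sigmas)                            # sampling with the (untrained) network
```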
What you would do is you would then do Langevin dynamics where you would initialize your particles somehow, then you follow the gradients corresponding to the data distribution perturbed with large amounts of noise. You improve the quality of your samples a little bit, and then you use these samples to initialize annealed Langevin dynamics chain where you're going to use the scores of the data distribution perturbed with a smaller amount of noise. And again, you follow these gradients a little bit, and then once again, you take these particles and you initialize a new chain for an even smaller amount of noise. And you keep doing that until the sigma is small enough that basically you're sampling from something very close to the true data distribution. And this is annealed Langevin dynamics and you can see here how it would work. So you would start with pure noise. And then you would run this sequence of Langevin dynamics chains and you would eventually generate something that is pretty close to a clean sample. So you can see that it has this denoising flavor where you would start with pure noise and then you slowly remove noise until you reveal a sample at the end. And this is just again a Langevin dynamics. At every step, you're just following the gradient more or less and you go towards a clean data sample at the end. So now this was all recap. Now, what we're going to do is we're going to start to think about this process as a variational autoencoder. So if you think about it, what's going on here is that we are going from right to left. If you think about multiple versions of the data distribution that has been perturbed with increasingly large amounts of noise, what we're doing is we're starting with pure noise, and then we are iteratively removing noise by running this Langevin chains. So we run a Langevin chain, we try to transform xt into a sample from the data distribution with a fairly large amount of noise. And then we use these particles to initialize a new chain where we follow the gradients corresponding to a data distribution with a little bit less noise. And then we run it for a little bit, and then we keep going until we generate a clean sample at the end. So we can think of the procedure that we were seeing before as basically trying to iteratively generate samples from these random variables, x0 to xt, where these random variables are essentially what you would get if you were to take a real data sample and you were to add noise to it, because that's we were estimating the scores of these noise-perturbed data distributions that were indeed obtained just by taking data and adding noise to it. That was the whole idea of the noise conditional score network. So this is essentially at an intuitive level what's going on. We are iteratively reducing the amount of noise that we have in the sample. So the inverse of this process is the one that we've used to basically train the network to generate samples for the denoising score-matching loss. And we can think about the inverse process, which is the one that you would use if you wanted to go from data to pure noise. And that's a very simple process where at every step, you just add a little bit of noise. So if you want to go from x0 to x1, you take a data point and you add a little bit of noise. If you want to go from x1 to x2, you take a sample from x1, you add a little bit more noise. 
And as you go from left to right, you add more and more noise until at the end there is no structure left and you are left with basically pure noise. And so you can start to see that this has the flavor of a little bit of a VAE, where there is an encoder process, and then there is a decoder process down here. And we'll make that more formal but that's the intuition. And so more specifically, basically, what's going on here is that there is a relatively simple procedure that we're using to generate these random variables x1, x2 all the way through xt. And that procedure is just adding noise. So at every step, what you do is if you have a sample from xt and you want to generate a sample from xt plus 1, what you do is you take xt and you add noise to it, just Gaussian noise. So that defines a set of conditional densities, q of xt given xt minus 1, which are just Gaussians, where these Gaussians have basically a given mean and a given variance. And the mean is just like the current sample, xt minus 1, rescaled. It's not super important that there is a rescaling there, but the way you would generate a sample xt given a sample xt minus 1 is you would draw a sample from a Gaussian with a mean, which is just xt minus 1 rescaled, and some fixed standard deviation or fixed covariance. So we can think of this process of going from data to noise as some kind of Markov process where at every step, we add a little bit of noise, and perhaps we rescale by some fixed constant that depends on beta t. Not super important that you do the rescaling, but that's how it's usually done. And so I'm having it here just to make it consistent with the literature. And this basically defines a joint distribution. So given an initial data point x0, there is a joint distribution over all these random variables x1, x2 all the way through xt, which is just the product of all these conditionals, which are just Gaussians. So that defines a joint-- given an initial data point x0, there is a joint distribution over all these other random variables x1 to xt, where the joint is given by a product of conditionals. It's like an autoregressive model but a little bit more simple because it's Markovian. So the distribution of xt does not depend on all the previous ones, but it basically only depends on xt minus 1, the previous time step. And I'm using the notation q because it will turn out that this is indeed the encoder in a variational autoencoder. So you can think of this process of taking x0 and mapping it through this vector of random variables, x1 through xt, as some kind of encoder. And the encoder happens to be pretty simple because all you have to do is you just have to add noise to the original data point x0. So in a typical VAE what you would do is you would take x0 and then you would maybe map it through some neural network that would give you a mean and a standard deviation for the distribution over the latents. Here, the way we get a distribution over the latents, which in this case are just x1 through xt, is through this procedure. So there is nothing learned, you just add noise to the original sample x0. So this defines some valid procedure of basically defining multiple views of an original data point x0, where every view is a version of the data point with different amounts of noise. The output, technically, for this encoder is higher dimensional than x0 in the sense that it's the whole collection of random variables. Each one of the random variables, xt, has the same dimension as x0.
It's t times the dimension of the original data point. And yes, the mapping is not invertible. We're adding noise at every step. So that defines some way of basically mapping a data point to some vector of latent variables through this very simple procedure where you just add noise to it. And it turns out that adding Gaussian noise is pretty convenient because you can also compute-- because everything is basically Gaussian, so the marginals of this distribution are also Gaussian. So if you want to compute what is the probability of observing a certain noisy view of a data point x0 after t steps, that's another Gaussian where the parameters of that Gaussian basically depend on these beta coefficients that we had before. Again, not super important how you take the betas and you combine them to get the alphas. What's important is that if you add a little bit of Gaussian noise at every step the result of applying this kernel multiple times is also another Gaussian, with just different mean and a different standard deviation but you can basically compute them in closed form. So the probability of transitioning from x0 to xt is some other Gaussian distribution where the parameters of this Gaussian basically depend on the effects of each of the individual transitions that you would do to get through to time xt. And this is important for a couple of reasons. First of all, basically, it's efficient to simulate this chain. So if you want to generate a sample at time step t, you don't have to generate the whole process of going through t steps. You can directly sample from this marginal distribution without having to simulate the whole chain. And, yeah, if you choose the parameters in the right way, this is essentially the same exact way we were generating training data for our denoising score-matching procedure. Remember in the denoising score-matching procedure what we were doing is we're taking clean data and we were adding different amounts of noise corresponding to different time steps or different noise levels sigma generating all these different views like the original data corresponding to different amounts of noise levels. So it still achieves the same kind of effect, but we're thinking of it as a process that adds noise incrementally at every step of this process, which you can also think of it as a diffusion process. You can think of what's going on here as a diffusion process where there is an initial distribution over data points, which is the data distribution, which could be, for example, a mixture of two Gaussians. It looks like this. Here are the colors basically indicate the intensity how large the PDF is at that point. So yellow points tend to have higher probability mass than let's say these blue points that are more closer to the tails of these two Gaussians. And what's going on is that we're basically defining these noise-perturbed data distributions by basically adding noise. So we randomly draw a sample from the data distribution and we add noise to it. And by doing that, we define all these noise-perturbed distributions. As you can see, the shape of this distribution changes as you add more and more noise. You see that there is no probability mass here in the middle. But if you add a little bit of noise to the original samples, then you're going to get a little bit of probability mass here in the middle. And then if you add a lot of noise, then basically everything just becomes Gaussian. 
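In code, the forward process and its closed-form marginal could look like the following sketch; the linear beta schedule is a common illustrative choice, and the alpha-bar bookkeeping follows the usual DDPM convention rather than anything specific to the slides.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # a typical linear noise schedule, for illustration
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)        # alpha_bar_t = prod_{s <= t} (1 - beta_s)

def forward_step(x_prev, t):
    # One step of the forward kernel q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) x_{t-1}, beta_t I):
    # rescale the current sample a little, then add a little Gaussian noise.
    return torch.sqrt(1 - betas[t]) * x_prev + torch.sqrt(betas[t]) * torch.randn_like(x_prev)

def sample_xt(x0, t):
    # Closed-form marginal q(x_t | x_0) = N(sqrt(alpha_bar_t) x_0, (1 - alpha_bar_t) I):
    # jump straight to time t with a single Gaussian draw, no need to simulate the whole chain.
    eps = torch.randn_like(x0)
    return torch.sqrt(alpha_bars[t]) * x0 + torch.sqrt(1 - alpha_bars[t]) * eps

x0 = torch.randn(3, 32, 32)                      # stand-in for a clean data point
x1 = forward_step(x0, t=0)                       # one noising step
x500 = sample_xt(x0, t=500)                      # an arbitrary intermediate noisy view
```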
And so you can think of it as a diffusion where basically giving an initial condition, which is just a data point on this line, you can imagine simulating this process where you add noise at every step. And eventually, the probability mass is going to be all spread out all over the space. And this behaves like the process of heat diffusing, let's say, in a solid or some sort. And so that's why it's called a diffusion because there is some kind of process that takes probability mass and then diffuses it over the whole space. And that this process essentially is defined by the transition kernel which is just basically the Gaussian. In theory, if you think of it as a-- well, maybe we'll come back to this in a few slides, but yes, to some extent, you need several things. You need to be able to smooth out. You destroy the structure so that you end up with a distribution at the end that is easy to sample from. Because essentially what we're going to do at inference time is we're going to try to invert this process and we're going to try to go from noise to data. So first, you have to define a process that destroys structure and goes from data to noise. And the noise that you get at the end has to be something simple. It has to be efficient so you need to be able to simulate any slice here efficiently because we'll see that the learning objective will end up being denoising score matching. And so you need to be able to sample from it efficiently if you want to use denoising score-matching-like objectives. Other than that, pretty much, yes, you get a valid probabilistic model. If you have those two things, then you can essentially use this machinery. And it turns out that the way to invert this process exactly involves the score. So if you have the score, then you can invert the process. Or if you think of it from a VAE perspective as we'll see, then you can also just try to basically invert the process by trying to learn some kind of decoder that will try to invert. You can think of this process are going from data to noise as an encoder, then you can try training an ELBO just by variationally try to learn an operator that goes in the opposite direction, which may not involve the score in general, but if everything is Gaussian, then it turns out that what you need is the score. The important thing is that the original data distribution here, OK, it's a mixture of Gaussians, but it can be anything. It doesn't have to be remotely close to a Gaussian distribution. It has to be continuous for this machinery to be applicable directly, although we'll see later when we talk about latent diffusion models that you can actually also embed discrete data into a continuous space and then it will fall out pretty naturally from our VAE perspective. But the initial distribution doesn't have to be Gaussian. It could be just a distribution over natural images, which is far from Gaussian. What's important is that the transition kernel that you use to spread out the probability mass is Gaussian so that you destroy structure in a controllable way. And you know that after adding a sufficiently large amount of Gaussian noise you have a Gaussian distribution. The signal-to-noise is basically extremely low and at that point, sampling from a pure noise is the same as starting from a data point and adding a huge amount of noise, essentially. So this is just OK. This is a diffusion and that basically maps from data on the left-hand side here to pure noise on the right-hand side. 
And what this suggests is that there might be a way of generating samples which basically involves the process of inverting this procedure. So we had a simple procedure that goes from data to noise just by adding Gaussian noise at every step. So we had a collection of random variables with some well-defined joint distribution, which was just like that Gaussian defined in terms of the q that given an xt minus 1, you define the next one by just adding noise to it. If we could, we could try to generate samples by inverting this process. So what we could do is we could try by initially sampling a value for this random variable x capital T. And we know that x capital T comes from some known distribution, for example, just pure Gaussian noise. And this notation here, this pi, you can think of it as a prior, some fixed distribution that this diffusion process basically converges to. So that's easy. And then what we could try to do is we could try to basically reverse this process by sampling from these conditionals. So you would sample xt minus 1 given x capital T, and then we could go back one step at a time going from pure noise to data. And this procedure would work perfectly if somehow we had a way of knowing what this distribution is. So we know how to define q of xt given xt minus 1 because that's just a Gaussian. But the reverse kernel which goes from xt to xt minus 1, the one that goes from right to left is actually unknown. And that's why this procedure cannot be directly used. But what we can try to do is we can try to learn some kind of approximation of this reverse kernel that goes from right to left so that we can learn basically how to remove noise from a sample. And so, basically, that's the core underlying idea. We're going to define-- now you can start to see we're going to define a decoder or an iterative decoder, which has the flavor of a VAE. You start by sampling a latent variable from a simple prior, which could be just a Gaussian distribution. And then we go from right to left by sampling from these conditionals, p of xt minus 1 given xt, which are defined variationally in a sense that this is our generative model, this is how we usually-- just like in a VAE, the decoder is defined through some sort of neural network. In this case, the probability density over xt minus 1 given xt is a Gaussian. And as usual, the parameters of the Gaussian are computed by some neural network. So it's the same flavor of VAE where you would sample z from a simple prior and you feed the z into some neural network like the mu theta here to get a parameter for a Gaussian distribution over x and then you would sample from that distribution. It has the similar flavor here in the sense that the reverse process is defined variationally through these conditionals which are parameterized in terms of neural networks. And so there is a true denoising distribution that would map you from xt through xt minus 1. We don't know what this object is. We're going to approximate it with some Gaussian where the parameters of the Gaussian are learned as usual like in a variational approximation. And we're going to try to choose theta so that these two distributions are close to each other intuitively. So that what we get by sampling from this variational approximation of the reverse process is close to what we would get if you were to sample from the true denoising distribution. 
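A minimal sketch of one step of such a learned reverse kernel, p theta of x t minus 1 given x t, parameterized as a Gaussian whose mean comes from a neural network; the tiny MLP, the crude time feature, and the fixed sigma t are placeholders.

```python
import torch
import torch.nn as nn

data_dim = 2
# Placeholder network: takes the noisy x_t together with the time step and predicts the mean of x_{t-1}.
mu_net = nn.Sequential(nn.Linear(data_dim + 1, 64), nn.ReLU(), nn.Linear(64, data_dim))

def reverse_step(x_t, t, sigma_t):
    # p_theta(x_{t-1} | x_t) = N(mu_theta(x_t, t), sigma_t^2 I): predict a mean, then sample around it.
    t_feat = torch.full((1,), float(t))
    mu = mu_net(torch.cat([x_t, t_feat]))
    return mu + sigma_t * torch.randn(data_dim)

x_t = torch.randn(data_dim)
x_tm1 = reverse_step(x_t, t=999, sigma_t=0.1)
```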
So more specifically, this basically defines a joint distribution, which is going to be our generative distribution where we basically first-- which essentially just corresponds to that sampling procedure that I just described where there is a prior distribution over the rightmost variable, this xt, which we know comes from a simple distribution like a Gaussian. And then you would sample from all the remaining variables one at a time going from right to left by sampling from these conditionals which are all Gaussian with parameters defined through some neural networks. The key thing here is that we choose the parameters. So we choose this alpha t such that basically there is no signal-to-noise at the end. And so you are basically left with pure noise. So basically, this alpha bar t goes to 0, essentially, by choosing basically-- you might imagine that if you had a sufficiently large amount of noise, it doesn't matter where you started from everything looks the same. So that's the trick. You have to define a diffusion process such that-- and you have to run it for a sufficiently long amount of time, which is the same thing, such that you forget about the initial condition, or you eventually reach a steady state which is known as some sort of Gaussian distribution, with some known mean and standard deviation that you can sample to the end. And so that's what's going on here. There is a distribution qt, which is going to be close indeed to some Gaussian, for example, which you can always set it up. The transition kernel is in the right way. As we'll see, you can actually do something similar by using Langevin dynamics, because it turns out that if you train this thing variationally, this mu that you learn is basically the score. And so as we'll see, there is basically one way of generating samples which basically just involves t steps, where you just do step, step, step, step t times until you get a good somewhat whatever-- hopefully, a good approximation of a clean data point but you don't really know. It only depends on how well you've learned this reverse process. If you're willing to throw more compute at it, you can actually do more computer-average step to try to invert the process better. One way to do it is to do Langevin dynamics. So Langevin dynamics is just a general way of generating samples. It's like an MCMC procedure to generate samples from Markov distribution. At the end of the day, we know what kind of distribution we're trying to sample from here, which is just the noisy data distribution. And if you had the score of that distribution then you can generate samples from it. And so we'll see that there is a way to correct the mistakes that you would do if you just were to use this vanilla procedure by putting in more compute. If you wanted to sample from this joint what you would do is you would sample xt, then you would sample xt minus 1, xt minus 2, all the way through x0. There is no Langevin dynamics at this point. So it's not deterministic, it's stochastic because this transition is a Gaussian. So you would have to sample from a Gaussian where the parameters are given by some neural network. So the neural network part would be deterministic, but then just like in a VAE decoder, it's stochastic, the mapping is stochastic. OK. So now we've defined two things. We basically have an encoder and we have a decoder essentially here, which is parameterized by these neural networks mu theta. 
What we can do is-- so we can start viewing this as a hierarchical VA or just a VA where there is an encoder that takes x0, a data point, and maps it stochastically to a sequence of latent variables, which are just the x1, x2, x3 all the way through xt. And there is some kind of prior distribution over the latents, which is just this p of xt, this just simple Gaussian. And then there is a decoder that would basically invert the process. And the decoder is, in this case, as usual just parameterized using neural networks, which are going to be learned. So just like in a VAE, there is this through-- there is an encoder and a decoder. So it has the flavor of VAE, except that like the latent variables, there is a sequence of latent variables that are indexed by time, and they have this specific structure where the encoder is actually fixed. There is nothing learned about the encoder. The encoder is just adding noise to the data. So it's like a VAE where the encoder is fixed to have a very special kind of structure. So recall that the vanilla VAE would look something like this. You have a latent variable model, you have a latent variable z, which has a simple prior distribution a Gaussian. And then there is a decoder, which would take a z, map it through a couple of neural networks mu and sigma, and then p of x given z is defined as a simple distribution of Gaussians where the parameters are given by these two neural networks. And then you have the encoder, which does the opposite. It takes x and it basically tries to predict z. And again, that's usually some simple Gaussian distribution where the parameters are usually computed by some other neural network that takes x as an input and gives you the parameters of the distribution over the latent. And we know that you will train this model by maximizing an ELBO, an evidence lower bound that would look something like this. So you would basically guess the values of the latent variables using the encoder, then you have the joint distribution over observed variables and latent variables as inferred by the encoder. And then you have this term that is basically just encouraging high entropy in the encoder. So this is the vanilla version. What we have here is like a hierarchical version of the vanilla VAE. So if you were to replace this single latent variable z with two latent variables, you would get something that looks like this, where the generative process would start by sampling z2 from a simple prior and then passing it through a first decoder to generate z1 and then another decoder to generate x. And so you have a joint distribution, which is just the product of all the prior over z2, the first encoder and the second encoder-- the second decoder, sorry. And then you have an encoder, which is what you would use to infer the latent variables given x. So it's a distribution over z1 and z2 given x. And you would have an evidence lower bound that would look like this just like before. Here we have a simple prior over z. If you replace this with a VAE, then you get what we have here. So that would be the training objective. And so that's exactly what we have in the diffusion model. So in the diffusion model, well, we don't have just two. We have a sequence of latent variables, but it's essentially the same thing. We have a joint decoding distribution, which is what you get by going from right to left. And then we have an encoder, which is fixed, which is just adding Gaussian noise to the images. 
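Written out in standard notation, the two-latent-variable lower bound referred to here (for the factorization p(x, z1, z2) = p(z2) p(z1 | z2) p(x | z1) and an approximate posterior q(z1, z2 | x)) is:

```latex
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z_1, z_2 \mid x)}
\Big[ \log p_\theta(x \mid z_1) + \log p_\theta(z_1 \mid z_2) + \log p(z_2) - \log q_\phi(z_1, z_2 \mid x) \Big]
```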
The way you would train this model is by minimizing some ELBO loss or maximizing the ELBO averaged over the data distribution. And so just like before the objective function would look something like this. So here q of x0 is just the data distribution. And so you would want to maximize the true log-likelihood over the data distribution. And we don't have access to it, so instead, you use the evidence lower bound, which is just like the usual thing: an expectation under q of z given x0 of log of p of x, z divided by q of z given x0. That's the usual evidence lower bound. And there is just a minus sign because I want to minimize that objective as a function of theta. And what would you plug in? So q is fixed. q is just this product of Gaussians, which is this process of adding noise at every step. And p theta is the interesting bit: it is this distribution that you get by starting from some simple prior, like pure Gaussian noise, and then passing it through the sequence of neural networks that will try to infer the parameters of other Gaussians that you sample from to go through this sampling procedure. So that's how this joint in the numerator is defined. It's defined in terms of this p theta xt minus 1 given xt. So you can optimize this loss as a function of theta. And so it's actually a little bit simpler than the usual VAE. In the usual VAE, q itself is learnable. Remember you have those phi parameters. Here q is fixed, so you don't have to actually optimize it. So it's just like a VAE, except that the encoder is fixed. It's not learnable. That's the usual ELBO objective. And recall that these decoders are all parameterized as Gaussians, and they have this simple form. To sample xt minus 1, you would take xt, you pass it through some neural network to get the mean, and then they will have fixed variance. And the interesting thing is that if you parameterize these neural networks that will give you the means of these Gaussians using this form, which again depends on these betas and these alphas-- it's not super important. But if you parameterize the network in terms of an epsilon network that takes xt as an input and then tries to predict the noise that was added to xt, and subtracts this estimated noise from xt to get a guess of what xt minus 1 should be, then you can show that this ELBO objective is actually equivalent to the usual denoising score-matching loss. So minimizing the negative ELBO or maximizing the lower bound on the average log-likelihood is exactly the same as trying to estimate the scores of these noise-perturbed data distributions. And so basically, if you parameterize these mean networks by saying, OK, take the data point that you have and subtract something to make it look more realistic, this network essentially ends up trying to estimate the score of these noise-perturbed data distributions. And so although we derived everything from the perspective of a variational autoencoder, it turns out that what you're actually doing is estimating scores. So the score-based model would sample differently. Here I'm just claiming that the loss would be the same, up to some scalings, but roughly, if you look at it, you're basically starting from data, you're sampling a noise vector, you're feeding the noisy image to this network epsilon, and you're trying to estimate the noise vector that was added to the data. And so the training objectives, if you choose this kind of parameterization, are the same.
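To pin that equivalence down, here is a rough sketch of the resulting training loss under the epsilon parameterization. The eps_theta network and the alpha_bar schedule (the cumulative products of the alphas, indexed 0 through T) are assumed to be given, and this is the simplified form of the objective rather than the exact ELBO weighting.

```python
import torch

def ddpm_loss(eps_theta, x0, alpha_bar, T):
    """Simple denoising objective the ELBO reduces to under the epsilon
    parameterization: predict the noise that was added to a clean sample.
    `alpha_bar` is assumed to be a tensor of length T + 1 with alpha_bar[0] = 1."""
    b = x0.shape[0]
    t = torch.randint(1, T + 1, (b,))                         # random noise level per example
    a_bar = alpha_bar[t].view(b, *([1] * (x0.dim() - 1)))     # broadcast over the data dims
    eps = torch.randn_like(x0)                                # the noise we will try to recover
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * eps        # closed-form sample from q(x_t | x_0)
    return ((eps - eps_theta(x_t, t)) ** 2).mean()            # denoising / score-matching loss
```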
Then you have different choices in terms of how you sample from it. In a score-based model, you would sample by doing Langevin. Here, you are not sampling using Langevin, you would sample by just going through the decoding process. So this is the training procedure of DDPM, or the denoising diffusion probabilistic model. And if you look at the loss, the loss is basically denoising score matching. If you look at the loss that you have here, that's the same as the denoising score-matching loss that we had before. The sampling is also very similar, actually, to what you would do in a score-based model. If you look at the way you would generate samples, you start from pure noise, same as score-based models. And then at every step, you basically follow the gradient, which is epsilon theta, and add a little bit of noise, because that's what you're supposed to do if you were to sample from a Gaussian that is p of xt minus 1 given xt. And so the sampling procedure that you get by iteratively sampling from these denoisers actually is very similar to the Langevin dynamics. It's basically just different scalings of the same ingredients: this is like follow the gradient. You take a step in the gradient direction and then add noise; you basically just use different amounts of noise, essentially. But it's roughly the same procedure. Usually, you would learn the encoder and the decoder together and they would try to help each other out in some sense. The encoder is trying to find structure and the decoder is trying to leverage the structure to generate data more efficiently. Here the encoder is fixed and it's just adding noise. And so the decoder is just trying its best, basically, at minimizing the KL divergences that you would have, or basically maximizing the ELBO, which is the same as inverting the generative process. And it turns out that in order to do that, you need to estimate the scores, and to the extent that you can do that well, you will be able to generate good samples. It seems like you would have an easier time by actually allowing yourself to learn the encoder as well, but that doesn't actually work in practice. So it would give you better ELBOs but a worse kind of sample quality. You can also do one step of Langevin. In practice, that's what you would do. So at that point, what's the difference? I mean, they are essentially the same thing. I think one advantage of the score-based model perspective is that you can actually think of it as the limit of an infinite number of noise levels, as we'll see, which would be a little bit trickier to get with the VAE perspective. But to some extent, they are essentially the same thing. From the ELBO perspective, you're going to get better numbers if you learn the encoder. But then, I don't know if it's an optimization issue, but in terms of the sample quality you're going to get blurry samples like in a VAE. Well, there are some intuitions related to progressive coding, and in practice, people don't actually optimize the ELBO. In practice, people optimize a scaled version of the ELBO. So the ELBO-- do I have it? Yeah. So the ELBO basically looks like this, where you have these lambda t's that basically control how much you care about the different time steps. And in the limit of infinite capacity, it doesn't matter. But then in practice, people would set them all to be 1, which is not what you should do if you wanted to optimize the ELBO. So it's a matter of optimizing likelihood.
It doesn't necessarily correlate with sample quality. So even if the encoder is fixed and it's just something really simple like adding Gaussian noise, the reverse is not. It requires you to have the score. So it's nontrivial to actually invert it. But you could argue that maybe if you were to destroy the structure in a more structured way, then maybe it would be even easier to invert the generative process. So they generalize to some extent, not out-of-distribution. So if you train it on images of cats, they're not going to generate images of dogs because they've never seen them and there's no point for them to put probability mass on those kinds of-- so it's really based on the actual data distribution that you're using for training the model. So when you add a lot of noise the best way to denoise is to basically predict the average image in the data set. And so there you already see that if you train it on images of cats, what the network will do when t is equal to capital T, it will basically output the average image in the training set. And so it's going to be completely off. I think one of the main reasons is that if you think about it, the amount of compute that you can put at generation time is very large because you're going pass it through 1,000 VAEs, essentially. Maybe t is usually-- capital T here is 1,000, usually. So it's a very deep stack of VAEs that you can use at generation time. However, because of how things are set up, at training time, you never have to actually look at this whole very deep, very expensive computation graph. You can train it layer by layer incrementally without actually having to look at the whole process. So even though you just train it locally to just get a little bit better at every step, which is very efficient. It's all breaking down. It's all breaking down, yeah, over level by level. So the stack of VAEs would exactly give you this. The problem is that-- [COUGH] --if the encoders are not structured in a certain way, you might not be able to do this trick of basically jumping forward. Remember that we had this-- oh, no, it's back a lot. This process here, it's very easy to go from 0 to xt. If these q's are arbitrary neural network there is no way for you to jump from x0 to xt in one step. And so the fact that these q's are very simple and you can compose them in closed form, it allows you to get a very efficient training process. So not all hierarchical VAEs would be very efficient to train, but this is a particular type of hierarchical VAEs, so there are certainly some that would be efficient. If you were to just train the VAE the usual way you would get a loss. If you go through the math, it ends up being the same thing. It seems counterintuitive that you would want to fix the encoder to be something strange like this where you just add noise and you're not reducing the dimensionality. But once you start making that choice, then the loss ends up being the same as the denoising score-matching loss. Historically, we came up with a score matching first and showing that it works. And then people show, OK, you can take a VAE and you can get something essentially identical and that saves you a little bit of trouble at inference time because you no longer have to do Langevin dynamics, you can just sample from VAE. So the lambda basically is just something that turns out to be how much you care about the different denoising losses over different noise intensities. 
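Since this weighting keeps coming up, here is a tiny sketch of what the weighted objective looks like. The dictionary of per-timestep losses and the specific lambda values are purely illustrative assumptions.

```python
def weighted_denoising_objective(per_t_losses, lambdas):
    """per_t_losses[t]: an estimate of E || eps - eps_theta(x_t, t) ||^2 at step t
    (for example a running average kept during training, an assumption here).
    The ELBO corresponds to one specific choice of the weights lambda_t, derived
    from the betas; setting every lambda_t = 1 gives the simplified objective
    that is used in practice."""
    return sum(lambdas[t] * per_t_losses[t] for t in per_t_losses)

# Two illustrative weightings over T = 1000 steps (the elbo_weight function is hypothetical):
# elbo_lambdas   = {t: elbo_weight(t) for t in range(1, 1001)}   # principled ELBO choice
# simple_lambdas = {t: 1.0 for t in range(1, 1001)}              # what people actually use
```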
And there is a principle here-- if you just do the math and you go through the ELBO, there's going to be a certain value of lambda t that you should choose if you really care about the evidence lower bound. In practice, people just choose that to be 1. So you're not actually optimizing an ELBO. Beta is the parameter, the beta t's. The alphas are computed in terms of the beta t's, and they basically control how quickly you add noise to the data, essentially. And you can choose it. So at inference time, we do start from random noise and then move it back to the clean data. But it makes sense to do it incrementally, as opposed to-- you could also do it in one step. You could imagine a VAE where the encoder is fixed, takes the image and adds a lot of noise. Presumably that inverse distribution that you would have to learn, which is this-- where do I have it? This procedure here that tries to invert going from noise to data, is going to be very complicated. While if q of xt given xt minus 1 is the same thing, just adding a little bit of noise, presumably inverting that is also going to be relatively easy. So we're breaking down this complicated problem of going from noise to data into 1,000 little subproblems where all you have to do is to just remove a little bit of noise. And that ends up being better in practice because the subproblems are much more tractable to learn. So what you do is you start with a data point, you randomly pick a t, which is like an index in this sequence of noisy random variables, then you start with just standard Gaussian noise, and then you generate an xt by basically adding the right amount of noise. So it's the same as denoising score matching. So this argument that you feed into epsilon theta is just a sample from q of xt given x0. So the architecture is the same as a noise conditional score model. So you have the same problem that you need to learn a bunch of decoders, one for every t. And instead of learning 1,000 different decoders, you learn one that is amortized across the different t's. So you have this epsilon theta network that is trying to predict noise. Basically, it takes the image, xt, the noisy image. It takes t, which is encoded somehow, and those are both inputs to the network that then are used to predict the noise. So that's the same as the noise conditional score network, where you take xt, you take sigma or t, and then you use it to predict the noise. And so yeah, the architecture would basically be some kind of U-Net, usually, because we're doing some kind of dense image prediction task where we go from image to image. And U-Net-type architectures tend to be pretty good at this-- people are using Transformers too, but this is still one of the best-performing models for learning denoising. So we have a paper that you can look up where we try to get a hierarchical VAE with a learned encoder to perform well, because one problem with the hierarchical VAE is that, basically, if you start learning the encoder, it's not identifiable. There are many different ways to do it-- you basically want to encode the information about the input across all these latent variables, but which bits of information you store where in the latent variables is very much up to the encoder and the decoder to figure out.
It turns out that if you use a certain kind of encoding strategy that is forcing the model to spread out the bits in a certain way, then you can get pretty close to the performance of-- with a learned encoder you can get pretty close to the performance of these kinds of models. But yeah, it's still not entirely clear how to do it. There are multiple papers on trying to figure out good noise schedules. There is many different choices. What's important is that at the end-- the only real constraint is that at the end, you basically-- the signal-to-noise ratio is zero, essentially. So you destroy all the information. Then there is a lot of flexibility in terms of how much noise you add at different steps such that you would get your that result. You can even try to optimize it. If you take the ELBO perspective, you could just maybe learn a simple function that kind of controls how much noise you add at different steps. So you still just add a Gaussian noise but you can try to learn how to-- so there are ways to even learn this schedule. I would go in the direction of maybe let's learn the encoder or let's make it more flexible in practice. I mean, there are a number of papers where people have tried different strategies, different ways of destroying structure called diffusion. They've shown some empirical success. But in practice, people still mostly use Gaussians. What we can do now is start thinking about what happens. We have this diffusion model perspective, hierarchical VAE perspective where we have clean data, and then we have 1,000, let's say, different versions of the data distribution, perturb the increasingly large amounts of noise. Really if you think about it in terms of a diffusion process, a diffusion process is a continuous time process. If you think about how heat diffuses where some metal bar, that process is not happening at discrete time intervals. It's really more naturally thought as something happening over continuous time, where time is continuous. Or another way to think about it is you can imagine making this discretization finer and finer. We're still going to take the hierarchical VAE perspective, but you can start thinking about what happens if we were to take more and more steps. If we go from 1,000, 2000, 4,000, we make these steps smaller and smaller and smaller and smaller until eventually, we get this continuum of distributions, which really correspond to the diffusion process. And so, as usual, on the left-hand side, we have the clean data distribution, which is this mixture of two Gaussians where there's these two spots where most of the probability mass is. And then there is this continuous time diffusion process happening here that is spreading out the probability mass over time until at the end. On the right-hand side, you get this pure noise kind of distribution. So literally what's happening here is we're thinking about a very, very, very fine-grained discretization or a lot of different steps over which we go from data to pure noise. So if you were to destroy the structure a very little bit at a time, you can imagine in the limit you get a process that is continuous. So instead of having 1,000 different distributions, we have an infinite number of distributions that are now indexed by t, where t is now a time variable going from 0 to capital T just like before. But instead of taking 1,000 discrete different values, it takes an infinite number of values. So it's a continuous variable going from 0 to capital T. 
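To see the schedule constraint concretely, here is a small sketch. The linear beta schedule below is just one common choice, assumed for illustration; the only real point is that the cumulative signal level goes to essentially zero at the end, and making T larger with correspondingly smaller betas is exactly the move toward the continuous-time picture.

```python
import torch

# An assumed linear beta schedule: the betas control how much noise is added at
# each step, and the alpha_bars are the resulting cumulative signal levels.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # beta_1 ... beta_T
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)      # alpha_bar_t = prod_{s <= t} alpha_s

# Signal-to-noise ratio at each step: alpha_bar_t / (1 - alpha_bar_t).
snr = alpha_bars / (1.0 - alpha_bars)
print(snr[0], snr[-1])   # large at t = 1, essentially zero at t = T: all structure destroyed

# Increasing T (with smaller betas per step) makes this discretization finer and
# finer, approaching the continuous-time diffusion process described above.
```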
So we have as usual data on the one hand and then pure noise on the other extreme. So how do we now describe the relationship between all these random variables that are now indexed by time? We can describe it in terms of a stochastic process. So there is basically a collection of random variables. And there is an infinite number of random variables now. In the VAE case, we had 1,000 different random variables, now we have an infinite number of random variables, xt. And all these random variables have densities that again are indexed by time. And instead of describing that relationship using these encoders, we can describe how they are related to each other through a stochastic differential equation, which is basically the way you would describe how the values of these random variables that are now indexed by a continuous time variable t are related to each other. And so what you're saying is that over a small time interval dt, x changes by an amount dx which is determined by some deterministic drift and a small amount of noise that you basically add at every step. It's not super important what that formula means, but without loss of generality, you can think about a very simple stochastic differential equation that describes a diffusion process, where all that's happening is that over a small time increment dt, what you do is you change the value of x by adding an infinitesimally small amount of noise, essentially. And that is basically how you describe the encoder, or how all these random variables are related to each other, through what is essentially a diffusion process. Now what's interesting is that, just like before, we can think about the reverse process of going from noise to data. And the random variables are the same. We're not changing them. But it turns out that they can be described equivalently through a different stochastic differential equation where time goes from large to small, from capital T to 0. And what's interesting is that this stochastic differential equation has a closed-form solution. And again, it's not super important what the formula is, but the only thing that you need to know to be able to characterize this stochastic differential equation is the score function. So just like in the discrete case, in the VAE case, if you knew the score, then you would get optimal decoders and you would be able to reverse the generation process. In continuous time, if you have all the score functions, you can reverse the generative process and go from pure noise to data. So it's closed form up to the score function, which is unknown. But basically, this is the equation. This exactly inverts the original stochastic differential equation if you know the score function, which you don't. So you are right that we don't know it. But if you knew it, then you would be able to exactly invert the process. So the stochasticity is basically this dwt. Basically, at every infinitesimal step, you add a little bit of noise. And in the reverse process, you're also doing it. So in that sense, it's a stochastic differential equation. So if that term was 0, so if you didn't have that here, then it would be an ordinary differential equation where the evolution is deterministic. So given the initial condition, if this gt here was 0, or this piece didn't exist, then you would have just a regular ordinary differential equation. Given the initial condition, you can integrate it and you would get a solution.
This one is a little bit more complicated because at every step you add a little bit of noise. And so that's why you have these paths here that are a little bit-- see all these little jags in the curve? That's because there is a little bit of noise that is added at every step. If you want to do things in continuous time, what we can do is we can try to learn a model of all these score functions, which is just like before is going to be a neural network. It takes as input x and t and tries to estimate the score of the noise-perturbed data density at time t evaluated at x. So this is just like the continuous-time version of what we had before. Before we were doing this for 1,000 different t's, now we do it for every t between 0 and capital T, where t is a real-valued variable. We estimate it again during denoising score matching. So it's the usual thing where we estimate scores of the noise-perturbed data density, we can do denoising score matching. And the solution to that regression problem is basically a denoising objective. And then what we can do is to sample. Instead of using the decoders and go through 1,000 steps of the decoders, we can actually just try to solve numerically the reverse time stochastic differential equation where we plug in our estimate of the score for the true score function. So here we have the exact stochastic differential equation. OK, sorry. This doesn't show right, but let's see-- yeah. So we had this differential equation which involves the true score, and now we are approximating that with our score model. And then what we can do is we can try to-- in practice, we can solve this in continuous time. In practice, you will still have to discretize it by taking small steps. And there are numerical solvers that you can use to solve a stochastic differential equation, and they all have the same flavor of basically, you update your x by following the score and then adding a little bit of noise at every step. If you were to take 1,000 different steps and you would essentially do that kind of machinery of using the decoders, that basically corresponds to a particular way of solving this stochastic differential equation, which is just this kind of discretization, Euler-Maruyama discretization. What a score-based model would do instead is it would attempt to correct. Because there are numerical errors you're going to make some mistakes, and so what a score-based model would do is it would try to basically fix the mistakes by running Langevin for that time step. So you can combine just regular sampling from a diffusion model where you would take 1,000 different steps or even less with the MCMC style sampling to correct the mistakes of a basic numerical SDE solver. It's normally distributed. So it's a normally distributed condition on the initial condition but the marginals are far from normal. The transitions are. Yes. That's key. Yeah, because then that's why we can simulate it forward very efficiently. But marginally, they are not. DDPM is just this. It's like a particular type of discretization of the underlying SDE. Score-based models would attempt to solve this SDE in a slightly different way. There is basically two types of solvers-- predictor solvers, corrector solvers. Basically, a score-based models, which is MCMC or Langevin dynamics. It's something called a corrector method for SDE solving. So it's just a different way of solving the same underlying stochastic differential equation. So DDPM is just predictor, score-based model is just corrector. 
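Here is a rough sketch of what one predictor step plus one corrector step could look like in code. The score model s(x, t), the diffusion coefficient g(t), and the step sizes are all assumptions for illustration, and the zero-forward-drift case is used to keep the update simple.

```python
import torch

@torch.no_grad()
def predictor_corrector_sample(score, x, ts, dt, g, corrector_steps=1, eps=1e-4):
    """Sketch of reverse-time SDE sampling with a learned score model score(x, t).
    Predictor: one Euler-Maruyama step of the reverse SDE (the DDPM-like update).
    Corrector: a few Langevin MCMC steps targeting the noisy density at level t.
    `ts` runs from large t (near T) down toward 0; `dt` is the positive step size."""
    for t in ts:
        # --- predictor: Euler-Maruyama step of the reverse-time SDE (forward drift f = 0) ---
        x = x + (g(t) ** 2) * score(x, t) * dt + g(t) * (dt ** 0.5) * torch.randn_like(x)
        # --- corrector: Langevin dynamics to fix up numerical errors at this noise level ---
        for _ in range(corrector_steps):
            x = x + eps * score(x, t) + (2 * eps) ** 0.5 * torch.randn_like(x)
    return x
```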
You can combine them and just get a more accurate solver for the underlying SDE. DDIM is a different beast. DDIM works by basically converting the-- let me skip this, but basically it converts the SDE into an ODE. So I guess we're out of time. But again, it turns out that it's possible to define an ordinary differential equation that has the same marginals, whatever it is, as the original stochastic differential equation that we started from. So now the evolution is entirely deterministic. There is no noise added at every step. So you see how these white trajectories, they are very-- there is no noise added at every step. They are straight. But marginally, they define exactly the same density. So the probability that you see across time are the same, whether you run this simple diffusion, kind of Brownian motion kind of thing, or you do this deterministic, you follow this deterministic paths, the marginals that you see how frequently do you see these trajectories going through different parts of the space are exactly the same. So there are two advantages. And again, it still depends on the score function. One advantage is that-- as you said, you can be more efficient. The other advantage is that now it's a deterministic invertible mapping. So now it's a flow model. So now we've converted a VAE into a flow model. Basically, what's happening is that if you recall you can think of these random variables here as latent variables in some kind of generative model. And in the VAE perspective, we're inferring these latent variables by stochastically simulating this diffusion process. But if you solve the ODE now you are inferring the latent variables deterministically. And because you ODEs have unique solutions, the mapping is invertible, and so you can also convert basically this model. Once you have the score, you can convert it into a flow model that has exactly the same marginal densities over time. And one advantage of a flow model is now you can compute the likelihoods exactly. So now you can use something similar to the change of variable formula to actually compute exactly what is the probability of landing at any particular point. You can just solve the ODE, which is the same as inverting the flow, and then compute the probability under the prior, and then you do change of variable formula, and you can get exact likelihoods. And so by converting a VAE into a flow, you also get exact likelihood evaluation.
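And here is the corresponding sketch of the deterministic version: integrating the probability flow ODE with a plain Euler step. Again the score model and g(t) are assumptions, and a real implementation would hand this off to a proper ODE solver.

```python
import torch

@torch.no_grad()
def probability_flow_ode_sample(score, x, ts, dt, g):
    """Deterministic sampling sketch: integrate the probability-flow ODE, which has
    the same marginals as the SDE but adds no noise along the way. With zero forward
    drift, the ODE drift is -0.5 * g(t)^2 * score(x, t). Because the map is
    deterministic and invertible, this is also what turns the model into a flow,
    which is what makes exact likelihood computation possible."""
    for t in ts:                                          # integrate backward from T toward 0
        x = x + 0.5 * (g(t) ** 2) * score(x, t) * dt      # simple Euler step, no noise term
    return x
```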
The plan for today is to talk about evaluation. So instead of talking about how to build new types of generative models, we're going to discuss how to actually evaluate how good they are. It's kind of a challenging topic where there's not really a consensus on what's the right way to do it, but we'll try to cover at least some of the ways that are out there. Nothing is perfect at this point, but we'll cover some of it. So just as a brief recap, we've talked a lot about modeling. We talked about different types of probabilistic models that you can use. You can work directly with the probability density or the probability mass function, in which case we've seen autoregressive models, normalizing flow models, latent variable models like the variational autoencoder. We've seen energy-based models. We've talked about probabilistic models or generative models where, instead of representing a probability density function, you represent directly the sampling procedure. So generative adversarial networks would be one example. And then we've talked about score-based models where instead of representing the density, you represent the score, which is just like the gradient, essentially. And that's yet another model family that you can use to model your data. And we've talked about a number of different training objectives that you can use to fit a model to data. We've talked about KL divergence-- minimizing KL divergence is the same as maximizing likelihood, which is a very natural kind of objective to use whenever the likelihood is accessible directly. So if you're modeling directly the probability density function or probability mass function, this is a very reasonable kind of objective to use. And so autoregressive models, flow models-- the ELBO in variational autoencoders is also kind of like an approximation to the maximum likelihood objective. And to some extent, contrastive divergence is also an approximation too, or it's exact to the extent that you can get perfect samples from the model. We've seen f-divergences and two-sample tests, which are very natural in the context of generative adversarial networks. If the only thing you have access to is samples from the distributions, then this is a reasonable way of training a generative model. And then we've seen Fisher divergence, which is essentially the same as score matching, which makes a lot of sense whenever you have access to scores or whenever you're working with energy-based models, because it allows you to bypass the normalizing constant. And we've seen noise contrastive estimation, which works for energy-based models. And the question is, at this point, there is a lot of different pieces, a lot of different ingredients that you can use. There are many different kinds of model families that you can pick from. There are different kinds of training objectives. And a natural question is, how do you pick which one you should use for a particular data set? And eventually, this boils down to the question of, which model is better? Should you train an autoregressive model on your data? Should you train a flow model? Should you train a GAN? And in order to answer that question, you need to be able to say model A is better than model B, essentially. And that requires you to be able to evaluate, basically, the quality of a generative model. And that's really, really important because it allows you to make comparisons and pick a model that is most suitable for your problem.
And it's kind of like if we think of it from a research perspective, it's kind of like a super important ingredient. We always want to make progress. We want to build better models. We want to get better and better, but in order to do that, we need to be able to measure how good a model is. And so we live in a world where it's pretty easy to just-- people make their models open source. You can clone a GitHub repo. You can improve. You can make a change to a model or to a training objective. You get something new out. It's very important to be able to quantify your proposed solution better than something that existed before. And again, that requires you to be able to evaluate different kinds of generative models. And unlike the case of discriminative models, typical machine learning models evaluating generative models is unfortunately, pretty hard. In the case of a typical machine learning model that you would use for a discriminative task. Let's say you're training a classifier to label data, to map inputs to labels, so a pretty kind of low-dimensional simple kind of output space. That's a setting that is pretty well understood how to measure progress. Somebody comes up with a new architecture for, let's say, computer vision tasks, you can train the models. And you can check what kind of losses they achieve. You can use it on-- you're going to define some kind of loss that quantifies what is it that you care about. Is it top-one accuracy, top-five accuracy, or whatever decision problem you intend to use the predictions that you get from the model in. You can specify a loss function, and then you can try to, given two models, you can evaluate the losses that they achieve on held-out unseen data. And that gives you a pretty good handle on the performance of the model. That tells you essentially, if you were to at test time, when you deploy the model, you work to fit in data that kind of looks like the one that you've been training on. This looks like the one that you have in the test set, and that's the performance that you would expect. And so that allows you to compare different models and decide which one is better. And unfortunately, things are not so easy for a generative model. It's not clear what is the task. Essentially that's the main challenge. What is it that we care about? Why are you training a generative model? And there is many different options and many different, and all of them are more or less valid. Perhaps you're training a generative model because you care about density estimation, you care about evaluating probabilities of, say, images or sentences. Maybe you care about compression. Maybe you care about generating samples. At the end of the day, you're training a diffusion model over images and what you care about is being able to generate pretty outputs that are aesthetically pleasing. Or maybe you're really just trying to do representation learning or unsupervised learning. At the end of the day, you have access to a lot of unlabeled data, maybe large collections of images or text that you've scraped from the internet. You'd like your model to learn something about the structure of this data. And you'd like to be able to get representations out of the models that then you can use to improve performance on downstream tasks. Instead of working directly on pixels, maybe you can work on representations obtained by a generative model, and then you can get better performance. You can reduce the amount of labeled data that you need to train a model. 
Or maybe you're thinking about many different tasks that you need to be able to use your model for. Perhaps you're trying to train a single good model over images that then you can use to do compressed sensing, semi-supervised learning, image translation. Or if you're thinking about language models, again, you are trying to find a single model that has been trained on a lot of text, a lot of collected from the internet. And what you really care about is being able to leverage all the knowledge that has been encoded in this big language model, an LLM and then what you really care about is being able to prompt the model to solve tasks using a small number of instructions or examples. So lots of different things you could do. And these different things will lead, and for each one of them, or for some of them, at least, there is many different metrics that you could use to-- even if you pick one of these tasks, it's not entirely obvious how you measure performance on each one of them. The simplest one is probability density estimation. If you really care about density estimation, if you really care about being able to accurately quantify probabilities using a generative model, then likelihood is a pretty good metric for that. So what you can do is, you can split your data into train, validation, and test. You can fit your model using a training set. Maybe you pick hyperparameters on the validation set, and then you can evaluate the performance on the test set, where the performance is just the average log likelihood that the model assigns to on test data, which is a pretty good approximation to the kind of average log likelihood that you would expect the model to assign to samples drawn from this data distribution. And essentially, this is the same thing as compression. We've seen that maximizing likelihood is the same as minimizing KL divergence, which is the same thing as trying to compress data, essentially. So at the end of the day, what we're saying, is that if you use that as a metric, you are comparing models based on how well they compress the data. And to see that, turns out that there is a way to take a probabilistic model and map it to a compression scheme, where what you would do is, you would encode a data point x to some string that can be decoded back in a unique way. And the length of the string, basically, depends on the probability of the data point. So if you have data points that are very likely, they are very frequent, then you want to assign short codes. And if they are very infrequent, then you can afford to assign very long codes if you're are not going to see them very often. And that's a way to compress data using a code. And it goes back to the intuition that we had before. If you think about the Morse code, it's based on this principle. So if you have vowels like e and a, they are common, so you want to assign a short code. And then if you have letters that are less frequent, then you want to assign a long code to that. And when if you train a generative model based on maximum likelihood, you're basically trying to do as well as you can at compression. And if you compare models based on likelihood, you are comparing how well they compress data, which might or might not be what you care about. 
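As a concrete version of that protocol, here is a small sketch of how you might report held-out likelihood for a model with tractable densities. The log_prob_fn interface and the bits-per-dimension convention are assumptions; the loop is just the obvious average over a test set.

```python
import math
import torch

@torch.no_grad()
def average_bits_per_dim(log_prob_fn, test_loader, num_dims):
    """Evaluate a model with tractable likelihood on held-out data: average
    negative log-likelihood, reported in bits per dimension.
    `log_prob_fn(x)` is assumed to return log p(x) in nats, one value per example."""
    total_nll, count = 0.0, 0
    for x in test_loader:
        total_nll += -log_prob_fn(x).sum().item()        # negative log-likelihood in nats
        count += x.shape[0]
    nats_per_example = total_nll / count
    return nats_per_example / (num_dims * math.log(2))   # convert nats to bits, per dimension
```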
And to see that, it's pretty clear that if the length of the code that you assigned to a data point x, basically, is proportional to-- it is very close to the log of 1 over p, then you can see that the average code length that you get is going to be this quantity, which is roughly-- if you get rid of the fact that the lengths have to be integers, if you approximate it, it's roughly equal to the negative log likelihood. So if you try to maximize the likelihood, you're minimizing the average length of the code that you get. So you maximize the compression that you can achieve. And in practice, if you use this kind of Shannon or Huffman codes that you might have seen before, it's actually expensive, and it's not tractable to actually build one of these codes. But there are ways to get practical compression schemes. So to the extent that you can get a good likelihood, there is an actual computational efficient way of constructing compression schemes that will perform well, as long as you get good likelihoods on the data. There's something called arithmetic coding, for example, that you can actually use. So if you are able to train a deep generative model that gets you good likelihoods, then you can potentially compress your data very well. And actually, if you've read papers on language models, the GPTs and those kind of things, that's essentially the same metric that they use when they compare language models. They call it perplexity in that setting, but it's essentially like a scaled version of the log likelihood. Now, the question is, why compression? Is that a reasonable thing to do? Is that what we really care about? It's reasonable in the sense that, as we've discussed, if you want to achieve good compression rates, then you need to, basically, be able to identify patterns in the data. The only way you can achieve good compression is by identifying redundancy, identifying patterns, identifying structure in the data. So it's a good learning objective, and we know that if you can get the KL divergence to zero, then it means that you've perfectly match the data distribution. And this makes sense. If you're trying to build, train a generative model to capture knowledge about the world, this is a reasonable objective. We're training the model to compress the data and by doing so, we're learning something about how the world works essentially because that's the only way to achieve compression schemes. So the intuition could be something like this. And if you think about physical laws, like Newton's law or something like that, you can think of it as one way of compressing data. If you notice there is some kind of relationship between variables you care about, F equals ma, then knowing that sort of relationship allows you to compress the data. Let's say if you have a sequence of accelerations and forces, you don't have to store both of them. You can store just the accelerations, and you can recover the forces through the equation, for example. So any kind of pattern or structure in the data like this, allows you to achieve better compression rates. And so by training a model to compress, you might be able to discover some interesting structure in the data, including maybe, knowledge about physical laws and things like that. And that's kind of like there's actually something called the Hutter prize. It's actually there's a half a million dollars for developing a good compression scheme for Wikipedia. 
And the quote from the prize website is, "being able to compress well is closely related to intelligence. While intelligence is a slippery concept, file sizes are hard numbers, Wikipedia is an extensive snapshot of human knowledge. If you can compress Wikipedia better than the predecessors, your decompressor is likely going to be smarter, basically." And the whole idea behind this prize is to basically encourage the development of intelligent compressors as a path towards achieving AGI. So the hypothesis here is that if you can really compress Wikipedia very well, then you must achieve a very high level of intelligence. And indeed, you can actually compare how well humans do at this, how good are humans at compressing text. There's actually an experiment that Shannon did, many years ago, and he was very interested in this kind of topic of compression. And he invented the whole field of information theory. And he actually did experiments checking how good-- humans have a lot of knowledge, a lot of context. If you see a string of text, you're probably going to be pretty good at predicting what comes next. And so he actually did an experiment with getting human subjects involved and trying to see how good are people, basically, at predicting the next character in English text. And what he found is that they achieve a compression rate of about 1.2, 1.3 bits per character. So there are 27 characters or something like that. So there's a lot of uncertainty. If you didn't know anything about it, you would need maybe four or five bits to encode a character. But people are able to do it with only one or two. So there's not too much uncertainty. When you predict the next character in English text, people are pretty good. There's kind of only if you think about one bit of information, it encodes two possibilities. And so that's the typical uncertainty that people have when they predict the next character in text. So kind of like one bit would correspond to, OK, there's two possibilities, and I'm uncertain about them, about which one it is. And you might ask, how well do large language models, neural network? They actually do better than humans already. And you can get something like people trained on Wikipedia and that Hutter price kind of data set, and they were able to get something like 0.94 bits per character. So even better than humans. And again, this is a reasonable objective, a reasonable way of comparing models. That's what people use for training large language models. They train them at maximum likelihood. It makes sense to compare them based on perplexity to some extent or try to forecast how good the perplexity is going to be if you were to increase data or you were to increase compute, scaling laws kind of things. But there are issues with compression. And the main issue is that it's probably not the task we actually care about or not entirely reflective of what we care about. And the issue is that, basically, not all bits of information are created equal. And so if you think about compression, a bit that is encoding a life or death kind of situation is worth exactly the same as something maybe less important, like, is it going to rain or not tomorrow? And so compressing one or compressing the other, is the same from the perspective of KL divergence or maximum likelihood, but obviously, it doesn't reflect the way we're going to use the information in downstream tasks. So there are some serious limitations of what you can say by just comparing models in terms of compression. 
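To connect those numbers to what gets reported for language models, here is a tiny sketch of the conversion between average negative log-likelihood, bits per character, and perplexity. The inputs are assumed to come from whichever model you evaluated.

```python
import math

def bits_per_character_and_perplexity(total_nll_nats, num_characters):
    """Convert a summed negative log-likelihood (in nats) over a test corpus into
    the two common reporting units: bits per character (the compression view, the
    quantity in Shannon's experiment) and perplexity (the language-model convention)."""
    nats_per_char = total_nll_nats / num_characters
    bpc = nats_per_char / math.log(2)        # bits per character
    perplexity = math.exp(nats_per_char)     # exp of the average NLL per character
    return bpc, perplexity

# For example, about 0.94 bits per character corresponds to
# exp(0.94 * ln 2) ~= 1.92 perplexity at the character level.
```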
Think about image data sets, same thing. There is certain pieces of information that are much less important to us. You could think about a slight change in color for a particular pixel. It doesn't matter too much. While there's information about what's the label of the image. That is much more important. But from this perspective, it is all the same. Basically, it doesn't matter. So that's main limitation of density estimation or compression. And yeah, we'll talk about this more later. The other thing to keep in mind, is that compression or likelihood is a reasonable metric for models which have tractable likelihood. But there is a bunch of models that don't even have it. So if you're working with VAEs or GANs or EBMs, it's not even obvious how you would compare models in terms of likelihood or compression. For VAEs, at least you can compare them based on ELBO values, which we know is kind of like a lower bound on likelihood. So it's a lower bound on how well they would compress data. But if you have GANs, for example, how would you compare, let's say, the likelihood that you achieve with a GAN to the one that you've achieved with an autoregressive model or a flow model. You can't even compare them because there is no way to get likelihoods out of a Generative Adversarial Network. I remember when we are kind of learning the GAN part, you mentioned kind of one big motivations about it is it's a lot likely to be [INAUDIBLE],, right. So I was just trying to understand your why-- so there's a evaluation metrics where it really depend how we're looking at the downstream task? So it depends what you care about GANs. It's a great question. I mean, do you really care about compression? Maybe not, but if you wanted to compare the compression capabilities of a GAN to something else, you would not even be able to do that. And we'll see that maybe that's not what you care about. Maybe you care about sample quality. And we'll see there are other evaluation metrics that maybe make more sense where you can say, OK, is a GAN better than an autoregressive model trained on the same data set. But if you cared about density estimation, then you need to at least be able to evaluate likelihoods, and it's not something you can directly do with a GAN. And so in general, it's a pretty tricky problem to figure out if you have a generative adversarial network, and you have, let's say, I have an image, and you want to know what is the probability that the model generated this particular image is pretty difficult to do. Even if you can generate a lot of samples from the GAN, it's actually pretty tricky to figure out what is the underlying probability density function. And typically, you have to use approximations. And one that is pretty common is called a kernel density estimation. That allows you to basically get an approximation of what is the underlying probability density function given only samples from the model. So it would look something like this. Suppose that you have a generative model for which you are not able to evaluate likelihoods directly, but you're able to sample from it. Then you can draw a bunch of samples. Here, I'm showing six of them. And just for simplicity, let's say that the samples are just scalars. So you generate six of them, and the first one is minus 2.1, minus 1.3, and so forth. And these are representative of what is the underlying distribution that generated this data. And the question is, what can we say about probabilities of other data points? 
So given that you have these six samples from the model, what is the probability, let's say, that we should assign it to the point minus 0.5? And one answer could be, well, the model never generated 0.5, or 0.5 is not among the six samples that we have access to. So we could say, since it doesn't belong to this set of samples, maybe we should set the probability to zero, which is probably not a great answer because we only have six samples. It could be just due to chance we didn't see that particular data point in our set of samples. So a better way of doing things is to do some kind of binning, build some kind of histogram over the possible values that these samples can take. For example, you can build a histogram, let's say, where we have bins with two here. And then you basically count how frequently the data points land in the different bins. And then you make sure that the object that you get is properly normalized, so that the area under the curve is actually one. So because we had a bunch of-- we have two data two data points landing between minus 2 and 2, then we have a little bit higher. We assign a little bit higher probability to that region, and then you can kind of see the shape of this histogram is related to where we're seeing the samples that we have access to in this set. And then you can evaluate probabilities of new data points by basically checking in which bin does this test data point land. Minus 0.5 lands in this bin where there is two data points. And so we assign probability density 1 over 6. And then if you take, let's say, minus 0.99, 1.99, I guess that's also in the first in this bin, where there's two data points. And so it should also be one sixth. And then the moment you step over to the next bin, on the left, then the probability goes down to 1 over 12 or something like that. So just basic histogram as a way of constructing an approximation of the probability density function based on samples. It's a reasonable thing, but you can kind of see that these transitions are probably not very natural. Perhaps there is something better we can do. And indeed, a better solution is to basically smooth these kind of hard thresholds that we had because we were using bins. And so the way a kernel density estimator works, is that when we evaluate the probability of a test data point x, we basically check how similar this data point is to all the samples that we have in our set. And we do that using this function k, a kernel function. And then we evaluate this probability by basically looking at all the n samples that we have access to checking, evaluating the kernel on the difference between the data point that we're testing the density on and the samples that we have access to and then where the distance is scaled by this parameter sigma, which is called the bandwidth of the kernel. And to make things concrete, you can think of the kernel as being just a Gaussian function that has that sort of functional form. And so the similarity between two data points decays exponentially based on that equation. And if you do that, then you get a smoother kind of interpolation. Before, we had these kind of bins that were sort of not very natural. Now what we're doing if you're doing a kernel density estimator using a Gaussian kernel, is we're basically putting little Gaussians centered around each data point that we have in the set of samples. And then we're summing up, basically, all these Gaussians. And we get an estimate of the density that is now much more smooth. 
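Here is a minimal NumPy sketch of that kernel density estimate with a Gaussian kernel, together with a simple held-out way of picking the bandwidth sigma. The array shapes, function names, candidate grid, and most of the example values are illustrative assumptions.

```python
import numpy as np

def kde_log_density(x, samples, sigma):
    """Gaussian kernel density estimate of log p_hat(x) given model samples:
    p_hat(x) = (1/n) * sum_i N(x; x_i, sigma^2 I)."""
    x = np.asarray(x, dtype=float).reshape(len(np.atleast_1d(x)), -1)       # (m, d)
    samples = np.asarray(samples, dtype=float).reshape(len(samples), -1)    # (n, d)
    d = samples.shape[1]
    sq_dist = ((x[:, None, :] - samples[None, :, :]) ** 2).sum(-1)          # (m, n)
    log_kernel = -sq_dist / (2 * sigma ** 2) - 0.5 * d * np.log(2 * np.pi * sigma ** 2)
    m = log_kernel.max(axis=1, keepdims=True)                                # log-mean-exp trick
    return m.squeeze(1) + np.log(np.exp(log_kernel - m).mean(axis=1))

def pick_bandwidth(model_samples, heldout_samples, candidates=(0.1, 0.3, 1.0, 3.0)):
    """Pick sigma by validation: the bandwidth that gives the best average
    log-density on held-out samples, a crude stand-in for cross-validation."""
    scores = [kde_log_density(heldout_samples, model_samples, s).mean() for s in candidates]
    return candidates[int(np.argmax(scores))]

samples = [-2.1, -1.3, -0.4, 1.9, 5.1, 6.2]   # first two from the lecture, rest made up
print(np.exp(kde_log_density([-0.5], samples, sigma=1.0)))   # density estimate at -0.5
```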
And so essentially, the probability is high if you are close to many data points, kind of like before, but now the notion of being close is smooth. It's not only about whether you are in the bin or not. Now there is some small effect, even if you're very far away, although the effect of a data point decays according to whatever function you choose for the kernel. Yeah? Are there some heuristics for how you choose the variance, specifically, for your kernel functions? Yeah, that's going to come up next. That's a great question. And it seems like, OK, you choose the kernel. The kernel should basically be a non-negative function that is normalized, so it integrates to 1, so that when you take the sum of n kernels, the total area is going to be n. And then you divide by n, and you get an object that is normalized. So you get a valid probability density. And then, I guess, it has to be symmetric because it's sort of intuitively like a notion of similarity between a pair of data points. And so the function value is going to be high when the difference is close to zero. And the bandwidth controls how smooth, basically, that interpolation looks like. And so what you see here, on the left, are different kinds of kernel functions you could choose. You could choose a Gaussian. You could choose more like a square kind of kernel. That determines what you think is the right way of comparing how similar two data points are. So if you choose a Gaussian, you have that sort of functional form. If you choose some kind of square kernel that looks like that, then it's more back to the histogram kind of thing, where two points are similar if their distance is relatively small. Once you're above this threshold, then the distance becomes extremely high. The bandwidth controls the smoothness. And so you can imagine that, ideally, you'd like to pick a bandwidth such that the distribution you get, like the black curve here, is as close as possible to the true curve that generated the data, which is shown in gray there. But you can see that if you were to choose a value of sigma that is too small, then you are going to get something like the red curve, which is very jaggy again. And so it's kind of under-smoothed. And if you were to choose a very high value of sigma, then everything is kind of similar to each other, and you're going to get a very smooth interpolation, and you get something like the green curve, which again, is not a good approximation of the density that actually generated the data. So back to the question, how do you choose sigma? What you can try to do is tune it by doing cross-validation, where you leave out some of your samples and then you try to see which kind of sigma fits the samples that you've left out as well as possible. And so, yeah, that's true. At least in principle, it's a way that would allow you to get an estimate for the underlying density given only samples. Unfortunately, it's actually extremely unreliable. The moment you go into high dimensions, just because of the curse of dimensionality, basically, you would need an extremely large number of samples to cover the whole space and all the possible things that can happen. And so the more dimensions you have, the more samples you need. And in practice, it's not going to work very well if you're working on something like images. So there are limitations of what you can do. Now, what if you have latent variable models?
If you have a latent variable model, again, but you would like to somehow get the likelihood, in theory, you can get it by integrate out over the latent variable z. We know that that's sort of the expression that you would need. If you want to evaluate the likelihood of a data point x, you can, in principle, get it by looking at all the possible values of the latent variables and then checking the conditional probability of generating the data point x given the different z's that you're integrating over. As we know, this can have very high variance if the distribution of the prior is very different from the posterior, which basically, means that, again, you're going to need a lot of samples to basically get a reasonable estimate of that likelihood. And there are ways to basically, make the estimate more accurate. There is something called annealed importance sampling, which is a procedure to basically do importance sampling by constructing a sequence of distributions to draw these z variables. That is kind of like interpolating between the bad, naive choice of just sampling from the prior p of z and the optimal thing that you would like to do, which is to sample from the posterior. And we're not going to go into the details. I me actually skip some of this stuff. But if you have in your project, you're working with latent variable models, you have a VA and somehow you need to compute likelihoods, you might want to look into these kind of things because they might help you get more accurate estimates of the likelihoods that you get from your variational autoencoder. Cool, now, what about sample quality? In a lot of these situations, we maybe don't care about likelihoods, we don't care about compression. We have two generative models, maybe, and we can produce samples from them. And we would like to know which one is producing better samples. Let's say if you're working on images, maybe you have two groups of samples, and you'd like to know which one is better. And how to do that is not very obvious. It's actually pretty tricky, to say, does this generative model that produce these samples better than the generative model that produce these samples, not obvious how you could do that. Probably the best way to go about it, would be to involve humans. So ask some annotators to essentially compare the samples and check which ones are better. And of course, that's not going to be scalable. Maybe it's not something you can use during training. But if we have the budget for it, and we have the time for to go through a human evaluation that's usually sort of like the gold standard. There is actually very interesting work in the HCI community where people have explored what are principled ways of getting feedback from humans and try to figure out and get them to compare the quality of different types of samples or different kinds of generative models. This paper is actually from Stanford looking at perceptual evaluation of generative models. And they're, which is based in psychology cognitive science kind of literature. What they suggest is that what you should do is you should take samples from your model. You have real data, And then you can check how much time people need to accurately decide whether or not the samples that they are seeing are fake or real. So if you can only look at a sample for a very small amount of time, you might not be able to perceive the difference from what is real and what is not. Maybe the hands are not rendered correctly. 
But if you don't have enough time to actually stare at the pictures long enough, you might not be able to see it. And so what they suggested is that we can look at this time to get a sense of quality: the longer it takes for people to distinguish real from fake, the better the samples are. And the other metric that they propose is more traditional. It would basically be the percentage of samples that deceive people when you're giving them an infinite amount of time to actually check whether each one is real or not. And so you can look at the website if you're interested. And this is sort of how it would work if you want to determine how much time it takes for people to figure out whether or not samples are real. What you do is, you might start with a fairly large amount of time, maybe 500 milliseconds, that you give them to decide whether or not the image is real. Maybe they always get it right, because they have a lot of time to figure out what kinds of mistakes are made by the generative model. Then you start decreasing the time you give them until you get to maybe around 300 milliseconds, where people start not being able to distinguish real from fake. And at that point, that would be sort of like the HYPE time score for this particular generative model. And then, as I mentioned, the longer it takes people to figure that out, the better the samples are. And here you can see some of the examples. And then you can also rank different samples based on how long it would take for human evaluators to distinguish different types of samples. Now, human evaluations are great, and maybe you can use them for your project. The problem with human evaluations is that they tend to be expensive. You actually have to pay people to go through the process of comparing samples, deciding which ones look better. They are hard to reproduce, and there are strange effects-- you need to be very careful about how you set up these human evaluations. The layout that you use to ask the questions affects the answers that you get. The way you phrase the questions affects the answers that you get. So it's actually very tricky to rely entirely on human evaluations. And they tend to be pretty hard to reproduce. And the other thing you might not be able to get if you just do this is a way to evaluate generalization. Again, if you imagine a generative model that is only just memorizing the training set, it would give you very good samples, just by definition. And even using humans, you might not be able to figure out that the model is actually just memorizing the training set and is not able to generalize in any meaningful way. And so it would be nice if there was some kind of automatic evaluation metric to actually figure out the quality of the samples. And some that are very popular, that are often used in the literature, and that you might need to implement or use for your projects, are Inception Scores, FID scores, and KID scores, which actually, I think, came up at some point in the last lecture, and there were questions about what they actually are. So now we're going to see how they are actually computed and what they actually mean. So the Inception Score is something you can use when you're working on labeled data sets. So if somehow you're in a setting where the images have associated labels, then what you can do is you can try to essentially predict the labels on synthetic samples.
And you can check what kind of distributions over the labels you get on synthetic samples versus real samples. So if you have access to a classifier that can essentially tell you what's the label for an image x, then what you can try to do is quantify how good a generative model is by looking at the behavior of the classifier on the samples that it produces. So there are two things that the inception score looks at. The first thing it looks at is something called sharpness. And essentially, you can imagine two sets of samples, one that looks like this and one that looks like this. This is a labeled data set. Every sample has a label, which is just a number. This is MNIST, so it's kind of like a toy example. But every digit, every image you produce, can be mapped to a number that it represents, and you can kind of see that the true samples are probably relatively easy to classify, while synthetic samples that are a little bit blurred, that are not very clear, are going to be harder, essentially, to classify if you have a good classifier. And so the intuition behind sharpness is to basically look at how confident the classifier is in making predictions on the synthetic samples, on the generated samples. And so the formula looks like this. It's essentially something related to the entropy of the classifier when evaluated on samples. So you generate samples from the model, and then you make predictions. You look at all the possible predictions that the classifier produces over the x's that are synthetic. Then this quantity here is basically related to the entropy of the classifier. And so when the classifier's predictive distribution has low entropy, so the classifier is putting all the probability mass on one single y, it's very confident in the prediction that it makes, then the sharpness value is going to be high. And the other thing we want to check is something called diversity. And the idea is that if you're working with a labeled data set, you'd like the model to essentially produce images of all the classes that are represented in the training set. So if you have, let's say, a GAN that generates samples that look like this, this would indicate something like mode collapse, where it's only producing ones. And we would like to somehow say, OK, these are not good samples, because there is not enough diversity. And the way to quantify it is to basically look at the marginal distribution over the labels that you get from the classifier when you evaluate it on the samples. And you basically try to make sure that this marginal distribution has high entropy, meaning that all the classes that are possible are actually predicted by the classifier over the synthetic samples. So the model is not just producing ones. That's the formula. Again, it's basically looking at the entropy of the marginal distribution. Then the way you get the inception score is you multiply together these two numbers. And so a high inception score is good, because it means that you have high diversity and you have high sharpness. So how do you evaluate classifier diversity? Because with something like this, it seems like you could just take a bunch of rollouts, and then just average the number of times it's predicted 0, 1, 2, 3, 4, 5, 6, 7, 8, just to make sure that the distributions are all about the same. But it seems a little bit harder to say, within the class, I have several different 1's drawn. Yeah, so it doesn't do that.
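For concreteness, here is a minimal sketch (my own, not the lecture's code) of the Inception Score as the product of the sharpness and diversity terms just described; it assumes you already have a matrix of classifier probabilities c(y|x) for the generated samples, and the toy example at the end is made up.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    # probs: (num_samples, num_classes), each row is c(y|x) for one generated sample x.
    # Sharpness: high when the classifier is confident, i.e. conditional entropy is low.
    cond_entropy = -np.sum(probs * np.log(probs + eps), axis=1).mean()
    sharpness = np.exp(-cond_entropy)
    # Diversity: high when the marginal c(y) = E_x[c(y|x)] has high entropy,
    # i.e. all classes are represented among the generated samples.
    marginal = probs.mean(axis=0)
    diversity = np.exp(-np.sum(marginal * np.log(marginal + eps)))
    return sharpness * diversity

# Example: confident but mode-collapsed predictions (always class 1)
# -> high sharpness, low diversity, so a low overall score.
collapsed = np.tile(np.eye(10)[1], (1000, 1)) * 0.99 + 0.001
print(inception_score(collapsed))
```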
So that's the problem, yeah. So it's not perfect. And that's one example of a failure mode. If it does represent all the digits, but only one kind of each digit, you would potentially have a high inception score, even though you're dropping modes within the clusters corresponding to the different labels. So not perfect for sure, but widely used nevertheless. And so a higher inception score corresponds to better quality. Why is it called the inception score? Well, if you don't have a classifier, so if you're not in an MNIST or toy-like situation, what you can do is use a classifier trained on ImageNet, typically the Inception net, which is what people use for this, and then you compute these metrics with respect to that. Can you go over this diversity? How is it exactly assessing diversity again? So it's basically checking-- this c of y, if you look at it, it's basically the marginal distribution over the predicted labels when you feed in synthetic samples. So if you were to only produce ones, then this c of y would be like a one-hot kind of vector. And then the entropy would be very low, and so you would be unhappy, basically. And so you want high entropy, meaning that, ideally, it should be uniform. The c of y should be uniform over the different y's that are possible. That means that all the classes are represented in equal numbers, essentially. Yeah? So doesn't that mean that if you want to increase diversity, you would decrease sharpness? Yes. They are competing. And you would like to have a high value of both. Cool, OK, so that was one. And it is often used. But as we discussed, it's not perfect, far from perfect. One issue is that you're kind of only looking at the synthetic samples, but you're never really comparing them to real data. If you think about these formulas, you're just looking at synthetic samples. You pass them through the classifier. Then you look at statistics of what comes out from the classifier, which seems suboptimal, because you're never even comparing synthetic samples to real samples. So there is something called the FID score, which tries to essentially compare the similarity of the features extracted by a large, pre-trained model on synthetic samples versus real samples. So what you do is this. You generate a bunch of samples from your generative model, and you have a bunch of real data from, let's say, the test set. And then you feed each sample through some kind of pre-trained neural network, like an Inception net, for example. That's why it's called the FID score, the Fréchet Inception Distance. And then you get features for each data point. There's going to be a distribution over these features, because every data point is going to have a different corresponding feature vector. And what you can try to do is fit a Gaussian to the features that you get from the synthetic samples and to the features that you get from the real samples. And you're going to get two different Gaussians, meaning that the Gaussians will have different means and different variances. And the closer these two Gaussians are, the higher the quality of the samples, essentially. Because if the samples from the synthetic model are very different from the real ones, then you might expect that the features that are extracted by a pre-trained model are going to be different. And therefore, these two Gaussians might be different-- maybe they have different means, or they have different standard deviations, different variances.
Then you get a scalar out of this by taking the Wasserstein-2 distance between these two Gaussians, which you can compute in closed form. And it's essentially looking at the difference between the means of the Gaussians and some quantity that quantifies how different the two covariances that you got by fitting a Gaussian to the real data and to the fake data are with respect to each other. So what's the intuition for using multivariate Gaussians? Why not any other statistic? Yeah, you could use other things. The reason they use multivariate Gaussians is that this Wasserstein distance can be computed in closed form, but yeah, it's not particularly principled. Yeah? Doesn't it depend on the samples that are being drawn? So considering an imaging model, the generated samples could refer to [INAUDIBLE] some other subject area, and the training ones could be some others. So are we really comparing apples to apples? Well, if the model is doing a good job at fitting the data distribution, then you would expect the statistics extracted by a pre-trained network to also be similar. So if, for example, this pre-trained network is extracting statistics, high-level features like what's in the image, where the objects are located, and things like that, which you might expect these networks to do because they perform pretty well when you fine-tune them on a variety of different tasks, then in some sense, you're hoping that looking at these statistics will tell you something about how similar the samples are in terms of whether they capture a similar distribution. So, two questions related to the features. The first one was, why are we fitting multivariate Gaussians to the features? What about checking the mean and variance of the generated and the test samples directly? And the second one was, can you elaborate on, for example, variational autoencoders? We try to compute a feature representation, right? So is that what you're referring to when you say feature representation here? So for the second question, the features are the ones that are extracted by a pre-trained model, which could be anything. In the FID case, it's the Inception net. That's why it's called the Inception distance. And that's a model pre-trained, typically, on some large-scale image data set with a lot of different classes, and in order to perform classification well, it probably has to extract reasonable features, so it makes sense to compare things in that feature space. Kind of makes sense? The other question was, why not just compare the means of the samples themselves? That would be a very simple kind of feature, where you're just looking at the individual pixels. You could, but it's maybe not exactly what we care about. It's more interesting to compare these higher-level features that are extracted by a model. When you're using FID, do people ever add auxiliary losses to the training itself, like [INAUDIBLE] or something, to make this metric better downstream, or is it separate? Yeah, you could train on FID. Then you can no longer use it as an evaluation metric. The moment you start training on something, it stops being a good evaluation metric. But you could, yeah, to the extent that it's not too expensive to compute, which I think it might be. But you could try, at the very least. And then, in this case, lower FID is better. And the other thing you can do is something called the kernel inception distance.
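Before going on to KID, here is a sketch of the FID computation, assuming the features have already been extracted by a pre-trained network such as an Inception net.

```python
# FID: fit a Gaussian to real features and to fake features, then compute the
# closed-form Wasserstein-2 distance between the two Gaussians.
import numpy as np
from scipy.linalg import sqrtm

def fid(real_feats, fake_feats):
    # real_feats, fake_feats: (n, d) arrays of features from the pre-trained model.
    mu_r, mu_f = real_feats.mean(0), fake_feats.mean(0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    # Squared difference of the means plus a term comparing the covariances.
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):        # small imaginary parts can appear numerically
        covmean = covmean.real
    return np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2 * covmean)
```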
And the idea is to basically do a two-sample test, kind of like the same thing we've used for training models, but instead of doing it at the level of the samples themselves, we're going to, again, do it at the level of the features extracted by a pre-trained model. And so the MMD is another kind of two-sample test, where you have samples from two distributions, p and q. And what you do is compare the difference in different moments, kind of like what was suggested by Dev right now, looking at the mean, the variance, and so forth. And more specifically, the way you do it is back to the kernel idea. You use a kernel to measure similarity between data points. And what you do is this. If you have distributions p and q, you check what is the average similarity between two samples, two real samples, let's say, what's the average similarity between two fake samples, and then you compare that to the average similarity between a real and a fake sample. And if p, again, is equal to q, then you can see that this thing evaluates to 0, because the similarity between a real and a fake sample is the same as the similarity between two real samples or two fake samples. And the idea is that we're now allowed to use a kernel to compare how similar two samples are. And so we don't necessarily have to compare samples in terms of their raw pixel values. What we can do is, again, do MMD in the feature space of our classifier. And so what you would do is use a kernel to compare the features of two samples-- two real samples, two fake samples, and a real and a fake sample, basically. And it's similar to FID. The key difference is that KID is a little bit more principled. But it's more expensive, because, if you have n samples, it has a quadratic cost, as opposed to a linear one, because you have to make all pairwise comparisons between the two groups of samples. But it has a similar flavor to FID. So how does this compare the two distributions' moments? Yeah, it's not obvious from this perspective, but you could also think of it as mapping the samples into the reproducing kernel Hilbert space of the kernel. So if the kernel is comparing data points based on some features, then this is basically the same thing as embedding the real data points and the fake data points in the feature space of the kernel and then comparing those two objects. But the nice thing is that the kernel could be looking at an infinite number of features. So it's kind of like the kernel trick, where you're allowed to compare data points using an infinite number of features without ever having to compute them. OK, so those were the three main metrics that are used for evaluating sample quality. There are many more that you might need to consider, especially if you're thinking about text-to-image models. Then there are many things you have to worry about. So if the generative model is supposed to take a caption and generate an image, then you do care about image quality, but you also care about other things. For example, you care about whether or not the images that you generate are consistent with the caption that was provided by the user. You might care about the kinds of biases that are shown by the model. You might care about whether or not it's producing toxic content that you might need to filter.
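Going back to KID for a moment, here is a sketch of the MMD estimate on pre-extracted features; the cubic polynomial kernel used here is a common choice for KID, but treat the specifics as an assumption of mine rather than exactly what was shown in the lecture.

```python
import numpy as np

def poly_kernel(a, b):
    # Cubic polynomial kernel on feature vectors of dimension d.
    d = a.shape[1]
    return (a @ b.T / d + 1.0) ** 3

def mmd2(real_feats, fake_feats):
    # Squared MMD: compare real-real and fake-fake similarities to real-fake similarity.
    k_rr = poly_kernel(real_feats, real_feats)
    k_ff = poly_kernel(fake_feats, fake_feats)
    k_rf = poly_kernel(real_feats, fake_feats)
    m, n = real_feats.shape[0], fake_feats.shape[0]
    # Unbiased estimate: exclude the diagonal (a sample compared with itself).
    term_rr = (k_rr.sum() - np.trace(k_rr)) / (m * (m - 1))
    term_ff = (k_ff.sum() - np.trace(k_ff)) / (n * (n - 1))
    return term_rr + term_ff - 2 * k_rf.mean()
```

Note the quadratic cost mentioned above: every pair of samples is compared, unlike FID, which only needs means and covariances.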
You might care about how good it is at reasoning: if the caption talks about different objects and their spatial relationships, how good is the model at understanding spatial relationships and spatial reasoning kinds of problems? So there's actually something pretty new that I was also involved in that came out of Stanford. We put together this benchmark called HEIM, Holistic Evaluation of Text-to-Image Models, where we've considered all the different metrics that we could think of. And we've considered different kinds of evaluation scenarios. And so you can see some examples here, trying to look at quality, where maybe we use the usual FID and Inception and KID scores that we just talked about. But then we also look at other things: how robust the models are if you change words in the captions, the alignment between the image that you generate and the caption, various kinds of aesthetic scores, various kinds of originality scores. So a lot of different metrics. I think today it is the most comprehensive evaluation of existing text-to-image models. We took a lot of existing models, and then we tried to compare them with respect to all these different metrics. And you can go there if you're interested and see the results, and see which model produces the highest quality images as measured by all these different metrics, both human and automated. And if you care about the biases that these models have, we have a bunch of metrics to measure that. So that might be a useful resource, again, as you develop your projects. Now, another thing you might want to do with the model is to get features. We've talked about this idea of doing unsupervised learning. You have a lot of unlabeled data. You might be able to get good features from the model. How do you evaluate whether you have good features or not? If you know already what task you are thinking about-- you're trying to get features because at the end of the day you care about classifying, you have a classification problem in mind-- then you can always measure the performance on the downstream task. So in that case, it's not too hard. If you don't have a task in mind, and you're just trying to say, OK, is this model producing better features than this other model, then it's a lot more tricky to be able to say something definitive there. And there are different aspects that you might want to consider if you are in the unsupervised kind of setting, where there is no task, there are no labels, so there's no objective way of comparing the different sorts of representations that you might get. You might care about how good the model is at clustering. Maybe you care about compression. Maybe you care about disentanglement. So maybe you care about this idea that we briefly talked about, that if you have a latent variable model, you would like the latent variables to have some kind of meaning. And maybe you want to be able to control different factors of variation by changing the variables individually. So that's what's referred to as disentanglement, where the different variables have separate meanings, and they control different aspects of the data-generating process. So if you care about clustering, ideally, you would like to be able to group together data points that have the same or similar meaning, or that are similar in some way. And this is all very circular, but that's the problem with unsupervised learning.
And one thing you can do is take your VAE, or your model that gives you latent representations, map points into this feature space, and then apply some kind of clustering algorithm, like k-means, to group them together. And so here's an example of the kind of thing. You train two generative models on MNIST, and then you map the data points to a two-dimensional latent space. And here, the colors represent the different classes. I don't even remember exactly what is B and what is D, but these are two different models, and they produce two different embeddings of the data. And which one is better? Is B better? Is D better? It's unclear, right? They both seem to be doing something reasonable-- data points belonging to the same class end up being grouped together in this latent space-- so it's not obvious which one you would prefer. And so for labeled data sets, again, there are many quantitative metrics. You use the unlabeled data to come up with the clusters, and then you use the labels to evaluate the quality of the clusters. There are a bunch of metrics-- things like the completeness score, the homogeneity score, the V-measure. I'm going to go through them quickly, but there are a bunch of measures that you can use. If you have a labeled data set, you pretend you don't have the labels, you get representations, you do clustering, and then you can use the labels to see how good the clusters that you get are. And intuitively, what you want to do is group together points that belong to the same class. And so maybe you care about making sure that all the points that belong to the same class land in the same cluster. Or maybe you care about homogeneity within the clusters, so you would like to make sure that all the points that land in the same cluster have the same label. Or maybe some combination of these two scores. So there are different metrics that you can use. And again, if your project involves something like this, you can look into this in more detail. Another thing you might want to do is to check how well, basically, the latent representations preserve information about the original data points-- so to what extent you can reconstruct data given the latent representations, which is the task you care about if you're trying to do lossy compression. So you have data. It might make sense to map it to a latent representation, especially if that latent representation is lower dimensional. And in this case, maybe you care about being able to reconstruct the original data point as accurately as possible. So here you see some examples of different representations, and you have the original images on the top. And then what you see here is what you get if you map them to the latent space and then try to reconstruct the image from the latent representation. And so you'd like the reconstructions to be as close as possible to the original data while reducing the size of the latent space as much as possible.
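Returning to the clustering metrics mentioned a moment ago, here is a minimal sketch using scikit-learn; the latent codes below are a toy stand-in for whatever your encoder actually produces.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import homogeneity_score, completeness_score, v_measure_score

# z: (n, latent_dim) latent codes from your encoder; y: (n,) labels used only for evaluation.
rng = np.random.default_rng(0)
y = rng.integers(0, 10, size=1000)
z = np.eye(10)[y] + 0.1 * rng.normal(size=(1000, 10))   # toy stand-in for learned latents

clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(z)
print("homogeneity:", homogeneity_score(y, clusters))    # each cluster contains a single class
print("completeness:", completeness_score(y, clusters))  # each class lands in a single cluster
print("V-measure:", v_measure_score(y, clusters))        # harmonic mean of the two
```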
So here, for example, they are looking at different kinds of representations where maybe, if you compress using JPEG, you would get something like a 17x compression of the images with a small loss in accuracy or quality, while there are these other representations that you get from training a generative model where maybe you can get something like a 90x compression, meaning the latent vectors that you get by mapping data to the latent space are much smaller than the original data points. And still, you're able to do very well at reconstructing the original data points, as measured by reconstruction metrics like mean squared error or PSNR or SSIM. Weren't we already trying to do this back when we were discussing VAEs and other latent variable models? Yeah, so VAEs have a reconstruction loss embedded in them, so it would make sense that they do reasonably well at that. But if you had a different model-- maybe you're looking at the representation that you get from a GAN, and you want to know whether those are better than the ones you get from a VAE-- it depends on what you want to do with these representations. Do you care about clustering? Do you care about reconstruction quality? So this is one aspect that you might care about if you're trying to compare two different types of representations that you get from generative models. Now, the other thing that you might care about in the latent space is disentanglement-- the idea that we would like these latent representations, the latent variables, to capture independent and interpretable attributes of the observed data. Something like, if you have a generative model of faces, maybe if you change one of the latent variables, you change the skin color of the image you produce, or maybe there is another latent variable that controls the age of the people you generate through this generative model, and so forth. And so, for example, maybe there is a latent variable z1 that is controlling the size of the objects you produce. So if you don't change z1, then the size of the object never changes. As soon as you change z1, you change the sizes of the objects you produce. That would be the ideal outcome-- kind of like PCA, but in a nonlinear way. You find important aspects, latent factors of variation in the data, and then you're able to control them separately, essentially. And again, there are many metrics that people have come up with-- for example, the accuracy of a linear classifier that tries to predict a fixed factor of variation-- and a bunch of others that I'm not going to go over. But there are some libraries that would allow you to compute these metrics. So if you're doing a project around disentanglement, this might be a good resource to look into. And the kind of unfortunate aspect here is that it's provably impossible to learn a generative model that is disentangled if you only have unlabeled data. So if you never get to see the true latent factors of variation-- there are no labels associated with these factors that you're trying to discover from data-- it's actually provably impossible to do this. So there has been some empirical success, but it's not well understood why these methods work, and there are some theoretical results showing that it's actually not possible. So I guess there are some limitations there. Cool, now, the other thing that, of course, is very popular these days is this idea that if you are working with a language model, perhaps you don't care about going through this process of: let's take the data.
Let's map it to a latent space, and then let's try to somehow use these representations to improve performance on some kind of downstream task. If you have a generative model of language, then you might be able to directly use the model to solve tasks that involve language by specifying the tasks in natural language. So there are two different ways of using the generative model. You could try to train the generative model in an unsupervised way and then try to leverage the knowledge that it discovered by mapping data points into this latent space and then training classifiers the usual way on these latent representations. Or if you're working with a language model, then there is this idea of pre-training the model using a lot of unlabeled data, and then trying to adapt it, for example, through prompts, to actually get it to solve a variety of different tasks. So even though these models have been trained by maximum likelihood, which we know is just compression, if they do well at compression, it means they've learned something about the structure of the data. They've memorized a lot of interesting things. And then the hope is that we can leverage that knowledge in different kinds of downstream tasks. So for example, let's say that you are doing sentiment analysis, where you're basically given a review, maybe of a movie, and the goal is to predict whether the sentiment of that review is positive or negative. It's a classic kind of NLP task. How would you use a language model to solve this problem? And the idea is that, because we're working with natural language, our interface is a model that takes a sentence and predicts the next word. Let's say it's an autoregressive model. It takes a piece of language, and then it predicts the next word. Then what you can do is craft the sentence such that this prediction-- the only thing the model can do, predict the next word given the previous text-- is actually solving the task for you. And so what you can do is construct a sentence like: classify the sentiment of the movie reviews below as either positive or negative. Then you give it an example of a movie review which is positive, maybe-- this has got to be one of the best episodes, blah, blah, blah-- with a positive sentiment. And then you give it another example, maybe with negative sentiment. And then you have this review that you'd like to classify. So you feed in the text of the review, and then you have "sentiment" and then a blank. Then you use the model to predict the next word. You use the model to predict what you should replace the blank with, which is exactly consistent with the API of the model, which is: predict the next word given some context. Then if the model predicts positive, you're going to classify this as a positive example. And if the model outputs negative there, then you're going to classify it as a negative example. And so this is an example of prompting. And of course, there are many smarter ways of doing it. There's a whole prompt engineering kind of job where people supposedly are good at extracting knowledge from the models by crafting smart prompts. But that's the basic idea: getting the knowledge out of these generative models by crafting prompts that encode the kind of task that you want to solve, without actually going through representations.
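Here is a sketch of that prompting recipe in code; `lm.next_token_logprob` is a hypothetical interface standing in for whatever API your autoregressive language model actually exposes, and the example reviews are made up.

```python
def classify_sentiment(lm, review, examples):
    # Build a few-shot prompt and read the label off the next-word prediction.
    prompt = "Classify the sentiment of the movie reviews below as positive or negative.\n\n"
    for text, label in examples:                      # few-shot demonstrations
        prompt += f"Review: {text}\nSentiment: {label}\n\n"
    prompt += f"Review: {review}\nSentiment:"         # the blank the model must fill in
    # Compare the probability the model assigns to each candidate next word.
    scores = {label: lm.next_token_logprob(prompt, " " + label)
              for label in ("positive", "negative")}
    return max(scores, key=scores.get)

examples = [("This has got to be one of the best episodes ever.", "positive"),
            ("A dull, lifeless remake with nothing new to say.", "negative")]
# label = classify_sentiment(lm, "I couldn't stop smiling the whole time.", examples)
```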
Of course, it's also possible to just fine-tune the model, which is closer to the idea of getting representations. You could also just take the model and then fine-tune it to solve the tasks you care about. So presumably, the pre-trained model is already mapping the inputs, like a sentence here, to some representation that is good for predicting the next word. So you might be able to fine-tune the model to do something interesting. That's also quite successful. I think prompting is perhaps nicer because it doesn't involve any training. That is somewhat special for language models. And it tends to work pretty well, especially if the language model is a very powerful one. And again, what kind of tasks are you going to consider? That's still very much an open problem in terms of which generative model of language is better. There are many of them out there. How can you say whether model A is better than model B? You can compute perplexity, which is the same as likelihood, but that does not quite reflect what we care about. Maybe what we care about is all these kinds of scenarios: we might want to be able to ask questions to the language model, or we might want to ask it to do movie reviews for us, or whatever it is that we care about, or do math, or solve riddles for us, or do question answering. And so, again, this is a space where it's not clear what is the right task to consider. And so one way to go about it is to consider a lot of different tasks, a lot of different scenarios, a lot of different metrics, because maybe you care about accuracy, but maybe you care about other things. And you can try to see how these different models that exist out there perform on all these different tasks. So you can consider different scenarios. You can consider different adaptation strategies, let's say different prompting strategies. You can have different metrics, for example, accuracy or whatever it is, when you use the model to solve the task that way. And then you can compare many of the existing models that are out there with respect to all these metrics. And that allows you to say, in a more precise way, this model is better than this other model with respect to these metrics on these kinds of scenarios. So there are different efforts out there. There is one from Stanford, HELM, that looked at a lot of different metrics, a lot of different scenarios, and is very thorough. There is one that was led by Google, which was more of a collaborative effort, where they asked people around the world to come up with tasks, and they are all part of this big benchmark where there are over 200 tasks that you can ask your language model to solve. And you can see the performance that you get across these different tasks. How could we adapt this idea of prompting to tasks where there is no natural way of sequentializing the data? So you're thinking of images or something like that? Yeah, I think it's a good question. Somehow the prompting idea has not quite been applied so heavily to that setting-- if you have a good generative model of images, how can you use it to solve tasks through prompts? And it's not as natural, because the output is an image. It's easy to map, say, labels to text, or even bounding boxes to text. And so if the API of your model has text as an output, it's relatively easy to use it to solve a variety of tasks.
I think it's a bit less natural if the API has images as an output, but it might be possible. I think it's an interesting kind of area that people are still exploring. And yeah, I don't think there is anything particularly good there yet. So GPT-4 actually answers questions zero-shot. There's no idea of prompting there. Do you have an idea how it works? Well, they don't say it, yeah. But the underlying mechanics is just predicting the next word, right? And then there has probably been some instruction fine-tuning done. So it has been pre-trained on a lot of unsupervised text, just predicting the next word. That's just compression. It probably wouldn't do very well if you start asking questions in a zero-shot way. And so what you have to do is fine-tune it on a slightly different kind of data set that emphasizes more the sorts of tasks that you might expect the model to be asked about at inference time. And again, there is a little bit of a question of what is the right way of doing this-- what kinds of tasks, what is the right distribution, do we care about movie reviews or do we care about question answering, and how do we weight those? That seems to help, but we don't yet have a good handle in terms of evaluating or seeing what works and what doesn't. It's very coarse at the moment. I think it's becoming clear when it comes to text, maybe-- all that instruction fine-tuning stuff [INAUDIBLE], and like Alpaca reproduced it, and since then other models. But what about the image stuff? We're actually using similar things. Just right now, we're basically working on applying this-- I mean, we are not the only ones. But people are trying to do basically this: you train a model on a lot of images from the internet, but then maybe there's some kind of underlying preference, like we would like the model to generate images with higher aesthetic value. Or maybe we would like the model to be non-toxic, or we would like the model to be more fair, less biased. And how do you adapt the model to that kind of downstream use? What you can do is collect preference data. Maybe you have a caption, you produce two images, and you ask a user, which one do you prefer? You get preference data on what we like and what we don't like, and then you can fine-tune the diffusion model to be more consistent with this kind of preference data. So that's possible too. Yeah? I have a question. So what are the pros and cons of solving the task by fine-tuning a large language model versus prompt engineering? Yeah, so prompting is great because you don't have to have access to any compute. You don't even need to know how to program, right? The only thing you need is to be able to specify in natural language what you want the model to do. And so it can be done completely in a black-box way, without even having to know what the model is doing or what the weights are. Fine-tuning requires you to actually train the model, for a little bit at least, on some new data or some new task or something new. So the bar is a lot higher in terms of the cost and the expertise that is required for that. [INAUDIBLE] It depends. Typically, it's a better way of doing things, I think, although people have gotten pretty good at prompting. So yeah, I think it depends on the task.
Cool, yeah, so I guess we're out of time, so it's perfect timing. But basically, the takeaway, the very high-level message, is that how to evaluate generative models is still a pretty open kind of area. There is still a lot that we don't know. We have some coarse ways of comparing models, and we have a sense of, OK, we're making progress over the years. But there is a lot more work to be done in this space in terms of coming up with better metrics. And even if you have all these large-scale benchmarks, where you have a lot of tasks, a lot of metrics, it's still not obvious how you weight them and what is the right distribution of tasks you might expect to use the model on. And so yeah, lots of work to be done in this space. But hopefully this was helpful. I know many of you are starting to get into the weeds of your projects, and so I'm sure you have a lot of questions about how to evaluate models. Hopefully, you got a sense of what's out there. Unfortunately, we don't have any definitive answers yet. But at least this gives you some ideas of the kinds of things you can use for the project.
Stanford_CS236_Deep_Generative_Models_I_2023_I_Stefano_Ermon
Stanford_CS236_Deep_Generative_Models_I_2023_I_Lecture_6_VAEs.txt
Let's get started. The plan for today is to finish up the variational autoencoder model, and so we'll talk about the ELBO again. We'll see how we can actually solve the corresponding optimization problem, and then we'll actually explain why this model is called the variational autoencoder. And so we'll show some connections with the autoencoders we've seen in the previous lectures, and we'll see how it generalizes them. And you can think of it as a way of turning an autoencoder into a generative model. So, as a recap, recall that we're talking about a generative model called the variational autoencoder, often denoted VAE for short. In the simplest form, you can think of it as something like this. It's a generative model where you first sample a simple latent variable z, for example, by just drawing from a multivariate Gaussian distribution with mean 0 and covariance the identity matrix, the simplest distribution you can think of. And then what you do is you pass this sample that you obtain, this random variable z, through two neural networks, mu theta and sigma theta. And these two neural networks will give you the parameters of another Gaussian distribution. So they will give you a mean vector and a covariance matrix, which will depend on z. And then you actually generate a data point by sampling from this conditional distribution p of x given z. And as we've seen, the nice thing about these models is that even though the building blocks are very simple-- you have a simple Gaussian prior p of z and you have a simple conditional distribution p of x given z, which is, again, just a Gaussian-- the marginal distribution over x that you get is potentially very flexible, very general, because you can think of it as a mixture of a very large, in fact infinite, number of Gaussian distributions, right? For every z, there is a corresponding Gaussian; you have an infinite number of z's, so you have an infinite number of Gaussians that you're mixing. If you want to figure out what was the probability of generating a data point x, you would have to integrate over all possible values of the latent variables, and you would have to see what was the probability that each latent variable would give you this data point, and that gives you a lot of flexibility. And as we've seen, the nice thing about this is that it gives you a very flexible marginal distribution over x. And it also gives you a way to do unsupervised learning, in the sense that you can try to infer z given x. And hopefully, you might discover some structure, some latent factors of variation that can describe a lot of the variability that you see in the data. So it can be used for unsupervised learning. You can think of it as an extension of k-means, where the latent variables are more flexible, and they can discover more complicated factors of variation. What we've seen is that there is no free lunch, in the sense that the price you pay is that these models are more difficult to train. And at the end of the day, it boils down to the fact that evaluating likelihoods is expensive, is difficult. So evaluating p of x is expensive, because you have to essentially check all the possible values of z that could have generated that data point x. And what this means is that you cannot evaluate likelihoods of data points-- or you can do it, but it's very expensive.
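As a concrete illustration of the generative process just recapped, here is a minimal sketch; the layer sizes and dimensions are illustrative choices of mine, not the lecture's.

```python
# VAE generative process: z ~ N(0, I), then x ~ N(mu_theta(z), sigma_theta(z)^2 I).
import torch
import torch.nn as nn

latent_dim, data_dim = 2, 784
mu_theta = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
log_sigma_theta = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))

def sample(n):
    z = torch.randn(n, latent_dim)                     # z ~ N(0, I), the simple prior
    mu, sigma = mu_theta(z), log_sigma_theta(z).exp()  # parameters of p(x|z)
    return mu + sigma * torch.randn_like(mu)           # x ~ N(mu_theta(z), diag(sigma_theta(z)^2))

x = sample(16)   # 16 samples from the (untrained) model
```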
And therefore, training is also very hard, because there is not an obvious way to just optimize the parameters to maximize the probability of a particular data set that you have access to, because computing likelihoods is hard. And this is different from autoregressive models where, on the other hand, it was trivial to evaluate likelihoods because you just multiply together a bunch of conditionals. Question? Quick question. Can we parameterize p of z itself, or is that-- will we have to have [INAUDIBLE]? Good question. Yeah, so the question is whether p of z could be learned, essentially, or does it have to be something fixed. It can be learned. This is like the simplest setup that is already powerful enough to do interesting things. Here p of z is fixed, but, as we'll see when you go through the math, nothing really stops you from using a more complex prior distribution over z. And it could be something that you can learn. For example, the simplest thing would be that the parameters of that Gaussian are learned. So instead of using mean 0 and a fixed covariance, the identity matrix, you could learn those parameters, or you could use an autoregressive model for z. Or what you could do is stack another autoencoder, another VAE, and then you have a hierarchical VAE, where the z is generated from another variational autoencoder. And that's a hierarchical VAE, and it's essentially what a diffusion model is going to be. If you stack these many, many times, you get a more powerful model. But this is the simplest interesting model that highlights the challenges, and it's already useful in practice. Would it make more sense to stack multiple [INAUDIBLE] on top of each other or to just increase the number of layers of these two neural networks that you define? Yeah, so, good question. So the question is, what's the difference between increasing the dimensionality of z, or adding multiple layers, versus increasing the depth of the neural networks that give you the mapping from z to x. Both of them would give you more flexibility, but the behavior is very different. You can make this network as deep as you want, but p of x given z is still a Gaussian, and so that restricts what you can do. As will become clear later when we talk about the training, that increases the flexibility to some extent, but it's not the same as adding more mixture components, which is what you would get if you were to either stack another VAE or maybe increase the dimensionality of z. So both of them go in the same direction, but they do it in a different way. Cool. So that's the no free lunch part. And basically, what we've seen is that we started looking into ways to train these models. And as we'll see, the way to train these latent variable models relies on this technique called variational inference, where we are essentially going to have an auxiliary model that we're going to use to try to infer the latent variables. And in this course, this auxiliary model is also going to be a neural network. It's all going to be deep. And basically, we are going to jointly train the generative model and an auxiliary inference model that we're going to use to try to reduce the problem to the one that we've seen before, where both the x and the z part are observed. That's the high-level idea, and it builds on that result that we've seen in the last lecture of building an evidence lower bound. Right?
So we've seen that we can obtain a lower bound through Jensen's inequality, basically, on this quantity that we would like to optimize, by essentially using this auxiliary proposal distribution q to try to infer the values of the latent variables. Remember, the challenge is that you only get to see x. You don't get to see the z part. You have to infer the z part somehow. The ELBO trick essentially uses a distribution q to try to infer the values of the z variables when only the x is observed. And it constructs that kind of lower bound on the marginal likelihood, the quantity on the left, the one you would like to optimize, as a function of theta. And what you can do is further decompose that objective into two pieces. You have a first piece, which is basically just the average log probability when both the x part and the z part are observed, where you infer the z part using this q model. This first piece looks a lot like the setting we've seen before, where everything is observed, both the x and the z part. The only difference is that you are essentially inferring the latent part using this q model. Then there is another piece, which does not depend on your generative model. It does not depend on p at all. It's only a function of q, and it's basically the negative expected value under q of log q, which is what we've called in previous lectures the entropy of q. Essentially, it's a quantity that tells you how random q is, how uncertain you should be about the outcome of drawing a sample from q. And we see that, basically, this ELBO has these two pieces. There is a term that depends on the entropy of q, and there is a term that depends on the average log probability when you guess the missing parts of the data using this q distribution. And the higher the sum of these two terms is, the closer you get to the evidence, the true value of the marginal likelihood. And what we briefly discussed in the last lecture is that this inequality holds for any choice of q, but if you were to choose q to be the posterior distribution of z given x under your generative model, then this inequality becomes an equality. So there is no longer an approximation involved, and the evidence lower bound becomes exactly equal to the marginal likelihood. And as an aside, this is exactly the quantity you would compute in the E-step of the EM algorithm as you've seen it before. This procedure has the flavor of an EM algorithm, where you have a way of filling in the missing values using this q distribution, and then you pretend that all the data is fully observed, which is this piece here, and then you have this entropy term. There is a connection between these learning methods, EM and variational learning, and that's what we're going to talk about today. They both try to address the same problem-- learning models when you have missing data. EM is not scalable and doesn't quite work in the context of deep generative models, but these two methods are closely related, and that's why you can see that the optimal choice of q would be the conditional distribution of the latent variables, the z's, given x. And now, how do you see this? Well, to derive it, you can work out this expression.
If you work out the KL divergence between this q distribution and this optimal way of inferring the latent variables, which is the conditional distribution of z given x, it's not too hard to see, if you do a little bit of algebra, that this expression is equal to what you see here on the right. So we see several pieces. We have the marginal probability of x-- the log probability of x, the marginal likelihood, the thing we care about. We have the entropy of q here, and then we have this average log joint probability over a fully observed data point, the same pieces we had before. And the key takeaway is that KL divergence, we know, is non-negative. For any choice of q, the left-hand side, this KL divergence, has to be non-negative. And so now, if you rearrange these terms, we re-derive the ELBO in a slightly different way. If you move this entropy of q and the first term here to the right-hand side, you get, once again, the ELBO. You get that the log marginal probability of a data point is lower bounded by the same expression that we had before. So this is another derivation of the ELBO that just leverages the non-negativity of KL divergence. This derivation is nicer because it actually shows you how loose or how tight this lower bound is. The bound holds for any choice of q, but you can clearly see that if you choose q to be the conditional distribution of z given x, then this KL divergence has to be 0, because the left and the right argument of that KL divergence are the same distribution. And so this inequality here becomes an equality. And so we get this result that the ELBO is tight, so it exactly matches the log marginal probability, when you infer the latent variables, the missing variables, using this optimal proposal distribution that uses the true conditional probability of z given x to guess the parts of the data that you don't know. Yeah? My understanding is that we want to do this [INAUDIBLE] because if we just do Monte Carlo, it's too sparse. So since this will give us the maximum likelihood, we can do this? Or can we actually do this, and how easy is it to compute p of z? Yeah, yeah. Great question. So I guess we said that we cannot really evaluate this, because it's too expensive. Now it seems like maybe we can do it if we were able to choose the optimal q. This is more aspirational. We're not going to be able to actually make this choice, but it's showing us that we should try to pick a q that is as close as possible to the optimal one. In practice, for something like a variational autoencoder, as we will see soon, this object here is too expensive to compute. If you were able to compute the posterior, then you would also be able to essentially compute the quantity on the left that we want to optimize. And that will motivate the whole idea of variational inference, which is basically saying, let's try to optimize over q to try to find the tightest possible lower bound. So we're going to have a separate neural network that will play the role of q, and we will jointly optimize both p and q to try to maximize this evidence lower bound. And one of the components will be the decoder in the VAE, which is p. The other component will be the encoder of the VAE, and that's going to be q. And they will have to work together to essentially try to maximize the ELBO as much as possible.
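For reference, the bound being discussed can be written as follows, with q(z) denoting the proposal distribution used to fill in the latent variables:

```latex
\log p_\theta(x)
  \;\ge\; \underbrace{\mathbb{E}_{q(z)}\big[\log p_\theta(x, z)\big]}_{\text{average log-prob.\ of the completed data}}
  \;+\; \underbrace{H(q)}_{\,-\,\mathbb{E}_{q(z)}[\log q(z)]}
  \;=\; \mathrm{ELBO}(x;\theta,q),
\qquad \text{with equality when } q(z) = p_\theta(z \mid x).
```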
And you can see from this expression that the optimal encoder, the optimal q, would be the true conditional distribution of z given x. So it's inverting the decoder. So again, it starts to have the flavor of an autoencoder, where there is a decoder model, which is the p of x given z, and then there is an encoder, which is trying to invert what the decoder does, because it's trying to compute the conditional of z given x. Hopefully, it will become clearer later. But essentially, this confirms our intuition that we're looking for likely completions. So given the evidence, given x, we're trying to find possible completions-- values of the z variables that are consistent with what we see, where the consistency is determined by this joint probability p of z comma x, which is essentially the generative model. It's the combination of the simple prior and then the other neural networks that map z to the parameters of a Gaussian that you then sample from, which is just the p of x given z. So now the problem, as was already brought up earlier, is that this posterior distribution in general is going to be tricky to-- well, we cannot actually compute it. And in some of the toy models that you might have done EM with, for example, sometimes you can compute it. Like if you have a mixture of Gaussians, you can actually compute this analytically. That's why you can do the E-step in EM. But in many cases, doing that E-step is intractable. So if you think about the VAE, essentially, what you're doing is you're trying to invert the decoder. So recall that in a VAE, the conditional distribution of x given z is given by this-- it's relatively simple. It's a Gaussian. But the parameters of the Gaussian depend on these two neural networks, mu and sigma. And so what are you doing when you're trying to compute p of z given x? You're basically trying to invert these neural networks. You are given x, and you're trying to find which z's were likely to have produced this x value that I'm seeing, which is potentially pretty tricky, because you have to understand how the neural networks map z's to their outputs, and you have to invert a neural network in a probabilistic sense. Does this make sense? Cool. And so the idea of the way we're going to train, again, this variational autoencoder is we're going to try to approximate this intractable posterior. We know that would be the optimal way to infer the latent variables given the observed ones. We would have to invert these two neural networks, we would have to invert this conditional, this decoder. In general, that's going to be tricky. And so instead, we're going to define a family of distributions over the latent variables, which are also going to be parameterized through a set of parameters phi, the variational parameters. And then we're going to try to jointly optimize both the q and the p to maximize the ELBO. Can you explain again why we want to even be able to compute the posterior z given x if our goal is just x, or x given z? Yeah, so the reason we might want to get this posterior distribution is that, as we've seen here, if you were to use that q here, this ELBO would be tight, so there would be no approximation. And by optimizing the right-hand side, you would actually be optimizing the left-hand side, which is what we want. This is a little bit of a chicken-and-egg problem, though, because if you think about it, how is p of z given x defined? It's the joint, p of x comma z, divided by p of x. And p of x is what we want, right?
So that's why it's like a chicken-and-egg problem. You can't really compute this thing. And if you could compute this thing, then you would know how to get the left-hand side, so you wouldn't even need to do the ELBO computation at all. But this is giving you a recipe to get a lower bound that holds for any choice of q, and the game is going to be: let's try to find something tractable that can get us as close as we can to what we know would be the optimal solution. And that will end up being tractable. Quick question about notation. What do variational parameters mean? Do you mean, by that, parameters of a neural network or something? Yeah, so let's see what that means. For example, q could be a family of Gaussian distributions, where phi denotes the mean and the covariance. So it could be something like this. Maybe one part of phi denotes the mean of the Gaussian, the other part denotes the covariance, and somehow you are trying to pick a good choice of these two parameters to get as close as possible to the true posterior that we know we would like to use for computing the ELBO. And that's basically what variational inference is going to do. It's going to reduce this to an optimization problem, where we're going to try to optimize these variational parameters to try to make this distribution q as close as possible to this intractable optimal choice that we know exists but we don't know how to compute. So in pictures, it might be something like this. There is a true conditional distribution p of z given x, which, for simplicity, is shown as a mixture of two Gaussians in blue here. And let's say that you're trying to approximate that distribution using a Gaussian. Then what you can do is change the mean and the variance of this Gaussian distribution to try to get as close as possible to the blue curve. So for example, you could choose the mean to be 2 and the variance to be 2, which would be two choices of phi 1 and phi 2, and maybe that would give you this orange curve. And if you could choose what's the best approximation here, would you choose the orange curve? Would you choose the green curve, corresponding to a mean at minus 4 and a standard deviation of 0.75? I guess what it looks like is that the orange curve is better, because it roughly has the shape of the distribution we want. It's not quite the true posterior distribution, but it's pretty close. And so if we somehow are able to come up with this variational approximation, with this simple distribution that is roughly close to what we want, that might be good enough for learning. And that's the idea behind variational inference. Let's try to optimize this distribution q over these variational parameters phi to try to make this Gaussian distribution as close as possible to this object that we know is there, that we know we would like to approximate as well as we can, but that is often intractable. [INAUDIBLE] by the dimensions of the latent variables and the data. So they are generally not the same dimensions? Yeah. So the latent variables don't necessarily have the same dimensions as the data. And for this, the only thing that matters are the latent variables. We're just trying to find a distribution over the latent variables that is as close as possible to the true posterior distribution. Similarly, [INAUDIBLE] like a simple latent, right? Yes.
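To make the picture concrete, here is a toy sketch (all numbers illustrative) of tuning the variational parameters phi, the mean and log standard deviation of a single Gaussian q, by gradient ascent on the ELBO, which is the same as minimizing the KL divergence to the true posterior up to a constant:

```python
import math
import torch

def log_p_tilde(z):   # unnormalized "true posterior": a mixture of N(-1, 0.5^2) and N(2, 0.5^2)
    return torch.log(torch.exp(-0.5 * ((z + 1) / 0.5) ** 2)
                     + torch.exp(-0.5 * ((z - 2) / 0.5) ** 2))

phi = torch.tensor([0.0, 0.0], requires_grad=True)           # variational parameters [mean, log_std]
opt = torch.optim.Adam([phi], lr=0.05)
for _ in range(2000):
    mean, std = phi[0], phi[1].exp()
    z = mean + std * torch.randn(256)                         # reparameterized samples from q
    log_q = -0.5 * ((z - mean) / std) ** 2 - torch.log(std) - 0.5 * math.log(2 * math.pi)
    elbo = (log_p_tilde(z) - log_q).mean()                    # E_q[log p(z) - log q(z)], up to a constant
    opt.zero_grad(); (-elbo).backward(); opt.step()
print("fitted mean and std of q:", phi[0].item(), phi[1].exp().item())
```

As in the picture, the single Gaussian cannot match both modes of the mixture; it ends up as a rough but hopefully useful approximation.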
So again, in practice, in a variational autoencoder, this q will actually be, again, a Gaussian. So everything will be relatively simple; that will come up soon. But essentially, even in a variational autoencoder, what you would do is you would try to optimize these parameters phi to try to match the true posterior distribution, the true p of z given x, as well as you can. Yeah? How would you evaluate q? Yeah, how do we evaluate q? The natural thing to do would be to look at KL divergence, right? We know the KL divergence is telling you how far off your ELBO is from the true thing. So it might make sense to try to choose a q that is as close as possible to p of z given x in KL divergence. And that might be what's going to happen when we optimize the ELBO and when we basically train a variational autoencoder. All right. So that's sort of like the idea. And in pictures, again, it looks something like this. Like for a given x and a given choice of theta, which are the parameters of your variational autoencoder, your decoder, there is a true value of the log probability of x, which is this blue line here. And then you can imagine that if you have a family of distributions q, which are parameterized by phi, as you change phi, you get lower bounds that could be tighter or looser depending on this KL divergence value, how close your distribution q is to the true posterior. And so, essentially, what we're going to do is we're going to define an evidence lower bound, which not only depends on theta, which are the parameters of the generative model, but also depends on this choice of variational parameters phi. And what we're going to do is we're going to try to jointly optimize this right hand side over theta and phi, so that by optimizing phi, we try to make this lower bound as close as possible to the thing we care about, and by optimizing theta, we are pushing up a lower bound on the marginal likelihood, which is again a surrogate for the maximum likelihood objective. And so it makes sense to jointly optimize this ELBO as a function of both theta and phi. So what we had here was one lower bound, which holds for some choice of q. And now we're saying we're going to define a family of lower bounds that are going to be indexed by phi, these variational parameters. And we're going to try to find the choice of phi that makes the lower bound as tight as possible because that means that we get the best approximation to the quantity we care about, which is the likelihood of a data point. And that's, basically, the way you train a variational autoencoder. You jointly optimize this expression as a function of theta and phi. And it will turn out that there's, basically, going to be two neural networks, a decoder and an encoder. The decoder is, basically, theta; the encoder is going to be the phi; and these two things work together to try to maximize the evidence lower bound. And we know that, again, the gap between the ELBO and the true marginal likelihood is given by this KL divergence. And so the better we can approximate the true conditional distribution of z given x, the tighter the bound becomes. So that's basically going to be the objective-- back to your question. By pushing up this quantity as a function of theta and phi, we're implicitly trying to reduce the KL divergence between the proposal distribution that we have, which is this q, and the true optimal one, which would require you to invert the neural networks exactly-- something we don't know how to do.
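The relationship being appealed to here can be written down compactly. For a single data point x and any choice of q (this is the standard identity, in the lecture's notation):

\[
\log p_\theta(x) \;=\; \underbrace{\mathbb{E}_{q_\phi(z)}\!\big[\log p_\theta(x,z) - \log q_\phi(z)\big]}_{\mathrm{ELBO}(\theta,\phi;\,x)} \;+\; D_{\mathrm{KL}}\!\big(q_\phi(z)\,\|\,p_\theta(z\mid x)\big).
\]

Since the KL term is non-negative, the ELBO is a valid lower bound for every choice of phi, and pushing the ELBO up in phi is the same thing as shrinking the KL gap to the true posterior.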
But we know that how far off we are with respect to this KL divergence, or how big this KL divergence is, determines how much slack there is between the lower bound and the blue curve. And so you can think of the E-step of EM as giving you the tightest possible lower bound. And that's why you do it as the first thing in EM is to compute the true conditional distribution because that gives you the tightest possible lower bound. So we can no longer do it here because we cannot compute the tightest possible lower bound, but we can try to get as close as we can. Yeah? [INAUDIBLE] if I'm understanding correctly. We're trying to compute log p so that we can do maximum likelihood estimation, right? OK, so there are two ways. So there are two optimization objectives here, right? So the first one is we're trying to maximize log p, and the other one is we're trying to maximize the ELBO to get close to p as possible. Am I understanding correctly? So the dream would be to just optimize log p, this quantity on the left hand side, as a function of theta. That if you could do that, then that's just like training an autoregressive model. That's the best thing we can do. That quantity, we don't know how to evaluate. But we can get a bound that holds for any choice of phi, which is the red curve shown here. So what you could do is you could try to jointly optimize over phi and theta to get a pretty good surrogate of what we would like to optimize. [INAUDIBLE] to maximize the lower bound? We are going to optimize the lower bound instead. And at the same time, we're going to try to make the-- you are optimizing a family of lower bounds and we're trying to find a tight as possible bound and increase the bound as a function of the theta parameters of your generative model to maximize the likelihood of a data set. Exactly. If you were to run this on inference, quote unquote, so you just basically throw out q and just worry about the p? Yeah, so the question is at inference time do you need a q? If you just want to generate, you don't need q. If you just want to generate, just have your optimal choice of theta that perhaps you obtain by optimizing the ELBO. And all you need is that. Right? You can sample z. You feed it through your decoder, your two neural networks, mu theta and sigma theta, produce a sample. Now, if you wanted to evaluate the likelihood of a data point because maybe you want to do anomaly detection or something like that, then you might still need the q because that helps you compute this quantity. At least it gives you a bound. And to the extent that the bound is good, you might need that. And so q is still useful. But if you just care about generation, you are right, you can throw it away after you train the model. Isn't KL Divergence no negative? Shouldn't it be like a minus KL divergence to be a lower bound? I think it's fine because, basically, L is below, right? So this one. So the log probability is always above. Yeah. Yeah. So actually the phi is actually somehow related to [? theta ?] right? It's a [INAUDIBLE] but when you do this optimization, you actually don't care about this. What exactly [INAUDIBLE] about that? That's a great question. Is phi related to theta? And the optimal phi would certainly be related because the optimal phi would try to give you the posterior distribution with respect to theta, right? So we know that the optimal choice-- actually I have it here. The optimal choice of phi would be this posterior distribution, which depends on theta. 
And so they are certainly coupled together. By jointly optimizing over one and the other, you are effectively trying to get as close as you can to that point, but it's not guaranteed to be exactly equal. So these two things are related to each other, but the final value of phi that you obtain is not necessarily the one that gives you this that matches the true conditional distribution. It's going to be close, hopefully, because if you've done a good job at optimizing, hopefully, this KL divergence is going to be small. But there is no guarantee because perhaps the true posterior is too complicated, and your q is too simple, so you might still be far off. But there is certainly an interplay between the two in the sense that at optimality, this KL divergence should be zero, and so they should match each. Cool. So that's basically how you train a variational autoencoder. You jointly optimize that expression here as a function of theta and phi. And again, this is the picture. It's a little bit tricky because there are two optimization problems that happen at the same time. But what happens is that there is this theta parameters, which are the parameters of the decoder, the thing that you would really like to optimize. And for different choices of thetas, there is going to be different likelihoods that are assigned to the data. And I'm showing this curve here, this black solid curve, as being the true marginal likelihood. If you could, you should just optimize that as a function of theta that would be maximum likelihood learning. That would be great. The problem is that we cannot quite compute that thing. And so we're going to settle for lower bounds, which you see here, meaning that these are curves that are always below the black curve. And there is a family of lower bounds. There is going to be many lower bounds. Any value of phi will give you a valid lower bound. And what we're going to try to do is we're going to try to find a good lower bound, meaning one that is as high as possible, that is as close as possible to the black line. So there's going to be a joint optimization over theta, which is what tries to maximize the probability of the data set. And we're going to achieve that by optimizing a bound. Let's say, optimizing the red curve or optimizing the orange curve, and at the same time, trying to pick a bound to pick a choice of phi that gets us as close as possible to the black line. If our goal is to fit the black line as well as possible, why are we constraining ourselves to a normal distribution? Would it not be maybe possible to learn aspects of the distribution as well? Which part is it you think-- is the normal in the conditional or the prior or the q? I'm talking about q. q. Yes, great question. So the more flexible q is-- so if instead of using just a Gaussian, maybe you use a mixture of Gaussians, maybe you use an autoregressive model that better this bound becomes. So the better you're going to do at fitting your original decoder. So there is many papers where people basically propose better choices for q, which essentially means more flexible families. And that can give you a better data fit by basically making this proposal distribution more flexible. So, indeed, that's a great way to make the model better. Make q more expressive, more flexible. [INAUDIBLE] lets you optimize your [? profiles ?] to fit a curve that matches the black line. But then afterwards, your black line is going to change the [? software ?] [INAUDIBLE] keep doing that over and over again? 
Yeah, so, that's a great question. I mean the thing is, it seems like as you change-- it goes back to the other points that were raised, that phi and theta are coupled together. So [INAUDIBLE] how good a bound is depends on the-- how good a phi is depends on the current choice of theta. And so if you are around here, maybe this-- as you can see here, around this choice of theta, maybe the red curve would be better than the orange. But then if you have a different choice of your variational parameters, then maybe the orange curve starts to become better. And so as we jointly optimize, we'll have to keep them in sync. In practice, what we do is we just do gradient ascent on both theta and phi. So we try to keep them in sync, but you could also just keep theta fixed, optimize as a function of phi as well as you can-- that gives you the tightest bound-- and then optimize theta by a little bit. That is actually what happens in EM. You can think of EM as giving you the tightest possible bound for the current choice of theta. And then in the M-step, you optimize the lower bound as well as you can. Here, we're not going to do that. We're going to do gradient-based updates, but it's the same philosophy, trying to jointly [INAUDIBLE] optimize one and the other. Cool. So all right. So let's see how we do that. So we know that for any choice of q, we get the lower bound. That's the ELBO. Now what we would like to do, recall, is that we have a data set and we would like to optimize the average log probability assigned to all the data points in the data set. So we don't just care about a single x, we care about all the x's in our data set D. And what we can do is, well, we know how to bound the log probability for any x and any theta through the ELBO. And so we can get a lower bound to the quantity on the left, which is the average log likelihood assigned to the data set, by just taking the sum of the ELBOs on each data point. All right. So this is the ELBO for a general x. We can get the ELBO for each data point, and we get this expression. Now the main complication here is that we're going to need different q's for different data points. And if you think about it, the posterior distribution, even for the same choice of theta, is going to be different across different data points. And so you might want to choose different variational parameters for different data points. And so you don't have a single set of variational parameters phi, but you have a single choice of theta because you have a single generative model for the whole data set. But then, at least if you do things this way, you would need to choose variational parameters differently for different data points. We'll see that this is not going to be scalable, and we'll have to introduce additional approximations to make things more scalable, but this would be the most natural thing to do. For every data point, you try to find the best approximation to the posterior for that particular choice of xi, and then you jointly optimize the whole thing. You try to make the lower bound for each data point as tight as possible so that the sum of the lower bounds is going to be as good of an approximation as you can to the true quantity you'd like to optimize, which is the true marginal likelihood here. Then why do you need a separate [INAUDIBLE] for each [INAUDIBLE]? Yeah, I think I have an example here. In this example, let's say that the latent variables are the pixels in an image.
So then, at least they are meaningful, and you can get a sense of what the posterior should be. So let's say that we have a distribution over images and the x variables are the bottom half of the image and the z variables are the top half of the image. But let's pretend that maybe you're fitting an autoregressive model, but we are in this situation where some parts of the image is missing, so you never get to see the top half of the image. So that's a latent variable. So it's no longer a VAE. It's a slightly different model, but it's just to give you the intuition of what we're trying to do here. So to fit your autoregressive model, your transformer model, or RNN, or whatever, you need to somehow guess the top half of the image if you want to evaluate this joint probability and you can optimize your parameters. And one way to do it is to basically use this variational trick of trying to guess the values of the missing pixels and then pretend that you have a fully observed data set and then just optimize. But there is different ways of guessing the missing values, the missing pixels. So you can define a family of distributions over the missing pixels. And here, just for simplicity, I'm saying the pixels are just binary 0 and 1. And so you have a bunch of variational parameters that will basically tell you the probability that you should choose each individual pixel that is not observed to be on or off. And so in this case, you have one variational parameter per missing pixel. And you can see that what's a good approximation depends on the bottom half of the image. If you get to see this part, would you choose phi i 0.5 as your approximation of the posterior, which basically means the way you're going to guess the missing values is by randomly flipping a coin on each location. It's probably not a good approximation to the true posterior. You kind of know that this is probably a nine, and so you want to guess that way. And so probably not a good one. Would turning everything on be a good approximation? Probably not. Again, you want to choose turn on the pixels that correspond to the nine. But you see that it depends on what you see. It depends on the evidence x. So if you see this, you might say it's a nine. But if you see a straight line, then maybe you'll think it's a one. So you want to choose different values of these variational parameters. And so even though theta is common across the data points, the values of the latent variables that you infer should be different. Again, going back to the variational autoencoder, if z now captures latent factors of variation, like the class of the data point or whatever, again, you'd like to make different choices for how you infer z depending on what you see, depending on the x part. And so that motivates this choice of, OK, we want to optimize, we want to choose different phis for the data points because the latent variables are going to be potentially very different across the different data points. Or another way to say it is that if you think about the generative procedure of the VAE, you're generating these x's by feeding random noise, essentially, into a neural network. And depending on the x you see, you might want to make very different guesses about what was the random noise that you've put through your decoder, through your neural network. And so you want to choose different choices of phis for the different x's that you have in your data set. 
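Written out for a whole data set, the objective being sketched here, with one set of variational parameters per data point and a single shared theta, is:

\[
\sum_{x^{(i)} \in \mathcal{D}} \log p_\theta\big(x^{(i)}\big) \;\ge\; \sum_{x^{(i)} \in \mathcal{D}} \mathbb{E}_{q_{\phi^{(i)}}(z)}\!\big[\log p_\theta\big(x^{(i)}, z\big) - \log q_{\phi^{(i)}}(z)\big],
\]

and the right hand side is jointly maximized over theta and over all of the per-datapoint parameters phi^(i).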
I don't know if you covered this already but does maximizing the right ELBO guarantee that it also maximize the left further? Could it be also to have-- maximize that but it actually might not be really optimal. Yeah, so that's a great question. To what extent is optimizing the right hand side a good approximation to optimizing the left hand side? And it's a reasonable approximation in the sense that whatever value you have here, the true thing can only be better than what you got. You could be far off. And in fact, you could be doing weird things where maybe-- let's see whether I have it here. But it could be that it looks like you're optimizing-- the lower bound goes up, but the true thing actually goes down. You could imagine a shape here where-- let's see if I have an example here. But basically, it looks like-- maybe if you go from here to here, it looks like the red line goes up, but the black line might actually go down. So in that sense, there is no guarantee because the bounds could be very far off from the true thing. But what you know is that the true objective function is going to be at least as good as what you get by optimizing these, which is not a bad guarantee to have. All right. So now, this is the how to choose them. Now how do we actually do this? The simplest version would be to just do gradient ascent on this objective function. Right? The ELBO, if you expand it, it would look like this. So for every data point xi, you would have this expectation, with respect to this variational distribution q, this way of inferring the latent variables given the observed ones. And then you would have the log probability in the fully observed case, and then you have this term, which is kind of like the entropy of q. And so what you could do is you could do initialize all the optimization variables somehow. And then you could randomly sample a data point, and then you could try to optimize this quantity as well as you can as a function of the variational parameters. So you compute a gradient of the quantity with respect to the variational parameters, you try to make that ELBO as tight as possible for the i-th data point, and then, until you can no longer improve, you find some local optimum. And then you can take a step on the theta parameters so your actual decoder, the actual VAE model that you use for generating data given the best possible lower bound. So this inner loop will find you the best lower bound. This step 4 will take a step on that optimal lower bound. Why do we get rid of the H [INAUDIBLE] from line 1 to 2? Oh, I'm just expanding it. So the H is the entropy, which is the expected log probability under q. This is not quite going to be the way we were going to train a variational autoencoder. It turns out that it's actually better to keep theta and phi in sync, but you can imagine that a strategy like this could actually work as an optimization objective. How computationally efficient is it going to be? It seems each optimization step is going to take a lot of iteration. Yeah, so how efficient it is? Well, first of all, we'll see that the first challenge is to figure out even how to compute these gradients. These gradients are going to be not too expensive, luckily, as we'll see. But there is a question of, should you take more steps on phi, less steps on theta? Should you do one step on theta, one on phi? There is many strategies that you can use, and it's not even known actually what's the best one. This is one that is reasonable. 
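A rough sketch of that procedure on a toy model might look like the following. The model (a one-dimensional linear-Gaussian decoder) and all the numbers are hypothetical, and the inner loop relies on the reparameterization trick, which the lecture introduces a bit later, to get gradients through the sampling step.

```python
import torch
from torch.distributions import Normal

torch.manual_seed(0)
data = 1.5 * torch.randn(100) + 0.5                    # hypothetical 1-D data set

theta = torch.tensor(0.1, requires_grad=True)           # decoder parameter: p(x|z) = N(theta*z, 1)
phi = torch.zeros(len(data), 2, requires_grad=True)     # per-datapoint variational params (mean, log-std)

def elbo_hat(i):
    # single-sample Monte Carlo estimate of the ELBO for data point i
    mu, log_std = phi[i, 0], phi[i, 1]
    eps = torch.randn(())                               # reparameterization: z = mu + std * eps
    z = mu + log_std.exp() * eps
    log_p_xz = Normal(0.0, 1.0).log_prob(z) + Normal(theta * z, 1.0).log_prob(data[i])
    log_q = Normal(mu, log_std.exp()).log_prob(z)
    return log_p_xz - log_q

opt_theta = torch.optim.SGD([theta], lr=1e-2)
opt_phi = torch.optim.SGD([phi], lr=1e-2)

for step in range(2000):
    i = torch.randint(len(data), ()).item()             # 1. sample a data point
    for _ in range(10):                                  # 2. inner loop: tighten the bound w.r.t. phi_i
        opt_phi.zero_grad()
        (-elbo_hat(i)).backward()
        opt_phi.step()
    opt_theta.zero_grad()                                # 3. one step on theta, on the (now tighter) bound
    (-elbo_hat(i)).backward()
    opt_theta.step()
```

The inner loop corresponds to tightening the bound for the sampled data point; the final step then moves the decoder parameters on that tightened bound.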
It's more like a coordinate ascent procedure. You fix theta, you find the optimal phi, and then you optimize theta a little bit. So the reason why, as I understand it, we need parameters per data point is because, in the real distribution, the best lower bound is by actually [INAUDIBLE] on x, or z given x. And over here, since we're not seeing x, we need one [INAUDIBLE] like parameters of p of z also on x? Yeah, yeah. So that's going to be the way we're going to make things more scalable. It's called amortized inference. That's going to be how we move from this vanilla version to something that is going to be worse from the perspective of the bound you get. But it's going to be more scalable because there is not going to be one optimization parameter per data point. We're basically going to do exactly what you suggested. We're going to tie things together. We're going to have a single q that is supposed to work well across different x's. There is going to be a neural network that will essentially try to guess this phi i star as a function of x. I think it's better to understand it through the lens of, OK, first you optimize, and then you try to approximate it. But that's going to be the-- how a VAE is actually trained is basically going to be a separate neural network that will take xi as an input and will produce a guess for this phi i star, the optimal choice of variational parameters for that data point, as an output, and that's going to be the encoder of the VAE. So as you said, we can use a neural network to guess the phi i, right? So without using a neural network, how would we choose the-- So without the neural network, you can do this. And it's actually going to work better than whatever you can get with the neural network because you're optimizing over-- you have less constraints, right? Like what was said before about, let's make q as expressive as possible. This is going to be better. But it's just going to be slower and not going to be scalable. But if you can afford to do this, it's going to be better, basically. You said that, previously, a good choice of phi i would probably be close to the top half of pixels of a number 9 or something. But if we don't choose the phi i's carefully, are we going to get stuck in a local optimum that we will just not get out of? Is that what's going to happen? Yeah, that's a problem. Yeah, because here we're jointly optimizing these two and hoping that we can find something. But you can imagine [INAUDIBLE] that if the choice of phi is really bad, initially, at least, it's probably going to be random or something. And so they're going to be doing a very bad job at guessing the latent variables. And so you might not be able to actually optimize the theta. And so you might get stuck in a very bad local optimum. And this is non-convex, so you have no guarantee in terms of being able to find a good solution for this optimization problem. And so those issues, indeed, we have them here. And you have to hope that gradient ascent will find you a good solution. But you could certainly get stuck. Cool. So that's, conceptually at least, a good way to think about how you would train a model like this. And the part that is still not obvious is how you compute these gradients. How do you compute the gradients with respect to theta? So we need two gradients. We need, at step 3.1, the gradient with respect to the variational parameters.
And at step 4, we need the gradients with respect to the model, the actual decoder, the actual neural networks that define the VAE. And these are expectations for which we don't know-- now you cannot compute them in closed form. There is no analytic expression that you can use to compute the expectation and then the gradients of, so we have to basically rely on Monte Carlo sampling. We're going to approximate these expectations with sample averages. And so what would it look like? If you want to approximate these expectation with respect to q, we can just do this. We can just sample a bunch of draws from q, and then approximate the expectation with a sample average. The usual trick, an expectation with respect to q is approximately equal to the sample average, if you were to sample the latent variables according to this proposal distribution q. And as usual, the larger K is, the more accurate this approximation is going to be. In practice, when you train a VAE, you probably choose k equals 1, and you would just use a single sample. But in general, you could use more if you wanted more accurate estimates of the expectation. And the key assumption here is that q has to be simple. You can't choose something very complicated because you need to be able to sample from it efficiently, and you need to be able to evaluate probabilities under q efficiently. When I asked a question about the q 10 minutes ago, you said, can be something much more complex. How do they do it? Yeah, so it has to be complex. But still it has to be a model for which you can evaluate probabilities efficiently and you have to sample from efficiently. So a VAE, for example, would not be a good choice because you can sample from it efficiently but you cannot evaluate probabilities efficiently. An autoregressive model would be a reasonable maybe choice because you can sample efficiently, and you can evaluate probabilities. But we will see generative adversarial networks will not be a good choice because it's easy to sample from, but you cannot evaluate probabilities. We will see something called a flow model, which is a class of generative models where you can sample from efficiently, and you can evaluate probabilities efficiently. That's a good choice. That's what people actually use in practice. So those are the two constraints. Sample efficiently, evaluate probabilities efficiently. And then we want to compute gradients of this quantity, right? We want to compute gradients with respect to theta and with respect to phi. And the gradients with respect to theta are trivial because, basically, the gradient of the expectation is just going to be approximately equal to the gradient of the sample average essentially. So if the gradient is just linear, you can push it inside. The q part does not depend on theta. So the gradient with respect to theta of this part is 0. So you can, basically, just take your samples, evaluate the gradients of the log probability with respect to theta, which is-- and this is fully observed, so this would be exactly the same gradient as in an autoregressive model. You have the z part, you have the x part, so you know how to evaluate these probabilities and you just take gradients and you just update your theta parameters that way. So this part is very easy. The tricky part is the gradients with respect to phi. And the reason is that you are sampling from a distribution that depends on phi. 
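Before getting to that difficulty, here is what the easy half looks like in code: a plain Monte Carlo estimate of the ELBO for one data point, and its gradient with respect to theta (toy one-dimensional model, made-up numbers, q held fixed).

```python
import torch
from torch.distributions import Normal

theta = torch.tensor(0.3, requires_grad=True)     # hypothetical decoder parameter: p(x|z) = N(theta*z, 1)
x = torch.tensor(1.2)                             # one observed data point
q = Normal(torch.tensor(0.5), torch.tensor(0.8))  # some fixed q(z) for this data point

K = 1                                             # in practice often a single sample
z = q.sample((K,))                                # plain sampling is fine here: q does not depend on theta
log_p_xz = Normal(0.0, 1.0).log_prob(z) + Normal(theta * z, 1.0).log_prob(x)
elbo_hat = (log_p_xz - q.log_prob(z)).mean()      # sample-average estimate of the ELBO

elbo_hat.backward()                               # gradient w.r.t. theta: push the gradient inside the average
print(theta.grad)
```

Ordinary sampling is enough here because q does not depend on theta; the same shortcut does not work for phi, which is exactly the problem discussed next.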
And so if you want to figure out how should I change my variational parameters phi to make this expectation as large as possible, you need to be able to understand how the change in phi change where the samples land essentially. You are sampling from this distribution, which depends on phi. And so you need to be able to understand, if I were to make a small change to phi, how would my samples change? And if you take gradients with respect to theta, you don't have to worry about it because the samples-- you're not sampling from a distribution that depends on theta, so you don't have to worry about how the samples themselves would change if you were to change phi. But if you're changing phi, then you need to understand how your sampling procedure here depends on phi. And so the gradient is not going to be as easy as this one. And that's essentially the problem. The problem is that you're taking an expectation with respect to a distribution that depends on phi. So if you want to take gradients, you need to understand how the sampling process basically is affected by small changes in the variational parameters, and that's more tricky. And because we would still like to do it through some efficient Monte Carlo thing, where you just sample once and then you compute some gradient through autodiff and you're done. And it's not super obvious how you would do this. And there is different ways of doing it. Later on, we'll see a technique called REINFORCE from reinforcement learning. Because you can think of this as a reinforcement learning problem, where you're-- if you think of z as being an action, you're trying to figure out your policy, you're trying to figure out how you should change your policy to perform well where the argument of the expectation is the reward that tells you how well you're doing. And it's tricky to figure out how changing your policy affects the value that you're getting. But there are techniques from reinforcement learning that you could use. Today was just simpler-- actually, better way of doing things that does not work in general. It only works for certain choices of q. For example, when q is Gaussian, you can use this trick. And it's more efficient in the sense that if it has lower variance, it's a better estimate. And this technique, it's called the reparameterization trick. It only works when these latent variables z are continuous. So it doesn't work when you have discrete latent variables. Only works when z is continuous, like when z is a Gaussian, for example. And so that this expectation is not a sum but it's really an integral. So it's an integral with respect to this probability density function q, which depends on phi, of some quantity which I'm going to denote r. r because it's like a reward. But r of z is just, basically, the argument of the expectation. I'm just changing the notation to make it a little bit more compact. But essentially, the argument doesn't matter too much. The tricky part is to figure out how to change phi so that the expectation becomes as large as possible essentially. And again, you see the connection with reinforcement learning. If z are actions, then you're trying to say-- you have a stochastic policy for choosing actions, and different actions have different rewards. You're asking, how should I choose actions in a stochastic way so that I get the highest possible reward? 
And you need to understand how changing phi changes which actions you pick, which z's are more likely and less likely under your policy, which is a little bit tricky. The good thing is that if, again, q has certain properties, for example, it's Gaussian, then there is two ways of sampling from q. You could sample from q directly, or you could sample from a Gaussian random variable with mean 0 and covariance the identity, and shift and rescale it. Right? So if you want to sample from a Gaussian with mean mu and covariance sigma squared the identity, you could always achieve that by sampling from a standard normal with 0 mean and identity covariance. So shifting and rescaling. And what this does is that we're basically rewriting this complicated random variable z as a deterministic transformation of something simple of a standard normal Gaussian random variable. This is why it's called the reparameterization trick because we're just writing z as a deterministic transformation of a fixed random variable, which does not depend on the optimization parameters. So we have some deterministic transformation which depends on the optimization parameters, the phi parameters, that we use to transform this basic random variable, epsilon, which does not depend on phi anymore. And then using this equivalence, we can compute the expectation in two ways. You can either sample from z, sample from q, and then evaluate r at the z's that you get by sampling from q. Or you can sample from epsilon, transform it through g, and evaluate r at that point. And the key thing is that now we have an expectation that no longer depends on the optimization parameters phi. Now it's an expectation with respect to epsilon, and so we can basically push the gradient inside, just like what we were doing before. Or in other words, basically, we understand how changing the parameters affects the kind of samples that we get because we're explicitly writing down the sampling procedure as a deterministic transformation of some simple fixed random variable. So if you want to figure out how would my performance change if I were to change phi by a little bit, which is essentially the gradient, now you know exactly how your samples would change because you have a deterministic transformation that gives you the new samples as a function of phi. And so taking the gradient of that would tell you how the samples would change by changing phi by a little bit. And so once you have this expression, or you have an expectation with respect to a quantity that no longer depends on phi, we're basically in a good shape because we can compute this gradient with respect to phi. So here, this one would be a little bit tricky because you have an expectation which depends on phi, and we don't know how to do this. But the expectation on the right is the kind of thing we know how to handle because it's an expectation with respect to epsilon, which no longer depends on phi, and then we can basically push the gradient inside. Yeah? Is r just an arbitrary function here? r is an arbitrary function. Yes. And this is something we can do by Monte Carlo, basically. All you do is you sample epsilon, or a bunch of epsilons, and then you approximate the expectation of the gradient with the sample average of the quantity. And basically by chain rule, you can figure out what would be the effect of changing phi [? B ?] on this expectation that you care about. Because you know that, basically, just by computing these gradients, you get what you want. 
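As a minimal illustration of the reparameterization trick in code, with a Gaussian q and a made-up differentiable reward r standing in for the argument of the expectation:

```python
import torch

mu = torch.tensor(0.0, requires_grad=True)        # variational parameters phi = (mu, log_sigma)
log_sigma = torch.tensor(0.0, requires_grad=True)

def r(z):
    # some differentiable "reward"; in the VAE it would be the ELBO integrand
    return -(z - 3.0) ** 2

# Reparameterization: z = mu + sigma * eps, with eps ~ N(0, 1) independent of phi
eps = torch.randn(10_000)
z = mu + log_sigma.exp() * eps
loss = -r(z).mean()                               # Monte Carlo estimate of -E_q[r(z)]
loss.backward()
print(mu.grad, log_sigma.grad)                    # gradients flow through the sampling step

# In torch.distributions, Normal(mu, sigma).rsample() implements exactly this transformation,
# while .sample() blocks gradients -- which is why rsample is what gets used when optimizing phi.
```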
You know how this epsilon would be transformed, and then you know what is the corresponding reward, r, that you would get if you were to transform the sample in a certain way. And so you know how you should adjust your parameters to maximize the reward as much as you can because you know exactly how changing phi affects the sampling procedure. Yeah? I'm curious about the continuity of the z part requirement for this. What if you had z that was, I don't know, from 1, 2, 3, 4, 5, 6, then you modeled it with a uniform distribution, and you either floor or put a ceiling function to it. Was it-- So it doesn't work for discrete random variables. If you have that kind of setting-- and it doesn't even work for all continuous distributions. You have to be able to write the sampling procedure as some kind of deterministic transformation of some basic distribution that you know how to sample from. If you can do that, then this machinery, you can see it goes through. But if you have something like a discrete, like categorically random variable, then it would be discontinuous. And at that point, you don't know. You can always sample it by inverting the CDF, essentially, but you would not be able to get gradients through, essentially. So for that, you either need to use REINFORCE or we'll talk about other ways to relax the optimization problem when dealing with these things. But this is only applicable to special cases like a Gaussian, which luckily, is what people often use in practice. And so this is actually a good solution when you can use it. OK. So now, we're basically almost there. Recall that what we wanted to was to compute the gradient of this ELBO, which is just an expectation with respect to q of some arbitrary function, which happens to depend on phi, which is a little bit annoying. Because before we had this r, which was not depending on phi. Now the argument of the expectation also depends on phi. But you can see that you can still use reparameterization. Just like before, as long as you know how to write down the sampling procedure in a differentiable way, then you just have the argument of the expectation that depends on phi in two ways. And then you just do chain rule would-- basically autodiff will take care of the gradient for you. So that's actually not an issue. Essentially, you use the same machinery for this reward function, which now depends on phi. But essentially, the same machinery goes through. OK. Now we know, essentially, how to do this. We know how to compute the gradients. The only other annoying piece is that we have one variational parameter per data point. So it would be expensive to have different variational parameters per data point, especially if you have a very large data set. And so the other missing piece is to have what's called as amortization, which basically means that we're not going to try to separately optimize over all these phis. Instead we're going to have a single set of parameters, which is going to be another neural network. It's going to be the encoder of the VAE, which we're going to denote as f lambda. And this function is going to try to guess a good choice of variational parameters. So he's going to try to somehow do regression on this mapping between xi and the optimal variational parameters. He's going to try to guess what's a good way of approximating the posterior for the i-th data point. And this is much more scalable because we have a fixed number of parameters now that we're trying to optimize. 
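A sketch of what that amortization looks like in code: instead of a table with one (mean, log-standard-deviation) pair per training example, a single encoder network guesses them from x. The sizes and architecture here are arbitrary placeholders.

```python
import torch
import torch.nn as nn

x_dim, z_dim = 784, 16                       # hypothetical dimensions

# Non-amortized: one pair of variational parameters per training example (grows with the data set)
n_train = 60_000
phi_table = nn.Parameter(torch.zeros(n_train, 2 * z_dim))

# Amortized: a single encoder f_lambda that maps x to (mu, log_sigma)
encoder = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, 2 * z_dim))

x = torch.rand(32, x_dim)                    # a mini-batch
mu, log_sigma = encoder(x).chunk(2, dim=-1)  # guessed variational parameters, one forward pass, any x
```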
We have the theta, and we have the encoder. Again, so let's say the q's are Gaussians. Instead of having one different mean vector per data point, you have a single neural network that will try to guess what's the mean of the Gaussian as a function of x, as a function of the data point, the observed values that you see in each data point. And now, we approximate this posterior distribution, given that the observed value is for the i-th data point using this distribution. So we take xi, we pass it through this neural network that will guess the variational parameters, and then that's going to be the q that we use in the ELBO. And the same gradient computation goes through, as long as the reparameterization works. You can see that the same machinery applies here. We are clearly losing a bit of our ability to model the data set by doing this. So is it worth to do this trade-off? What if we just trained on a hundred examples with a single phi for each, rather than doing this? What's the trade-off there? The trade-off in that case, like you're going to get a better-- I mean, to the extent that you can do the optimization well-- because it's non-convex, so weird things could happen. But to the extent that you can optimize, you would get a better average log likelihood. So it's going to be more expensive because you have more variational parameters to optimize over. You're also going to give up on the fact that if I give you a new test data point and you want to evaluate the likelihood of that test data point, you would have to solve an optimization problem and try to find variational parameters for that data point. If you have this neural network that is already trained to give good variational parameters, you have no cost. So it's all amortized. So it's called amortized because, essentially, there is a neural network that is amortizing the cost of solving this optimization problem or variational parameters. And the problem of solving the optimization problem and give you the optimal variational parameters is amortized by a single feedforward pass through this neural network. If you are learning the separate phi i's, how would that generalize to a new data [INAUDIBLE] anyways because you don't know the cipher [INAUDIBLE]---- Yeah, so if we generalize in the sense that you would have a p, and you could try to evaluate-- it defines a valid likelihood on any x. Optimizing through an encoder might have a regularization effect, in the sense that it's constraining p. Because you're jointly optimizing p and q, so you could say that, OK, you're optimizing phi to try to make the approximate posterior close to the true posterior. But you're also optimizing the true posterior to be close to one that you can approximate with your little neural network. And so it has a regularization effect over the generative model that you learn because it has to be a generative model on which you can do inference relatively well using this single neural network that we have here. So as you said, that might help you in terms of log likelihood on a new data point because the model is more constrained, and so it might perform well. It prevents overfitting to some extent. But I guess-- let's say you have a hundred phis that you've learned. For the past data point, you learn a new phi altogether. You try to figure out which of the hundred phis you already have best represents-- You could do both. 
If you wanted to get the best approximation to the likelihood on a new test data point, you would optimize a new phi, and that would give you the best lower bound on the ELBO for that data point. That would be the best. The marginal likelihood is defined regardless of how you choose the phis. And so the phi is just a computational thing that you need in order to evaluate the likelihoods. But if you just care about generation, you don't even need the phis. How many phis would be sufficient to train a good parametric function? How many phis? In practice, what you would do is you would have a single neural network that would essentially guess the optimal phis as a function of the data points. And these neural networks are typically relatively shallow. So you wouldn't need that many phis to train that network? You don't actually ever get the phis. So what you do is you just optimize the ELBO as a function of this, let's say, lambda parameters here. And so you just back-- I see. I see. And so you never actually compute this phi stars. You just restrict yourself to the phis that can be produced by this single neural network. But do we need to set the dimensions of phi to be our hyperparameters? Yeah, so the dimension of phi and the family that you choose, is it a Gaussian? Is it a Pois-- whatever variational family? That's a choice, the modeling choice. So, yeah. Yeah? So now, do we train lambda, phi, and theta jointly all at the same time with this? So in this setting, I guess-- I don't know if I have it here. But essentially, you wouldn't even have the phis. You would just have the theta and the lambda and then everything. Yeah. And so, let's see. What do I have here? Yeah. So essentially, again, this is saying what we were discussing before that for different data points, there is going to be different optimal variational parameters. And then you have this single map that will take xi as an input, and will output a good choice of variational parameters for that xi that you're going to use to infer the latent variables for that data point. So there is not even going to be any phis anymore. There's going to be a single neural network, f lambda, that does the work for you. And in the VAE language, that's often denoted q phi of z given x, meaning that the choice of variational distribution that you use is a function of x and phi. And their relationship is determined by this neural network, which is going to be the encoder in the VAE that predicts the parameters of this variational distribution over the latent variables given the x variables. So it's the same machinery except that there is less trainable parameters because there is a single neural network that will describe all this variational distributions that, in general, should be different. But just for computational efficiency reasons, you restrict yourself to things that can be described that way. And then basically that's how you actually do things. Then you have exactly the ELBO that we had before, which depends on the parameters of the decoder and the encoder, phi. So phi here now denotes the parameters of the separate inference neural network that takes x as an input and produces the variational posterior q for that x. And then everything is just optimized as a function of theta and phi through gradient ascent. So you initialize the decoder and the encoder somehow. And then what you would do is you would randomly sample a data point, then there is going to be a corresponding ELBO for that data point. 
And what you can try to do is you can try to figure out how should you adjust the theta, the decoder, and the encoder to maximize the ELBO for that particular data point. And this expression is just like what we had before, except that the variational parameters are produced through this neural network, which is the encoder. And you can just backprop through that additional neural network to figure out what this gradient should be. How should you adjust the gradients of the encoder so that you produce variational parameters for the i-th data point that perform well with respect to the ELBO. And you can still use the reparameterization trick. As long as q is a Gaussian, everything works, and then you just take steps. And in this version, which is the version that people use in practice, you jointly optimize theta and phi at the same time. So you try to keep them in sync so that-- because we know that they are related to each other, we know that phi should track the true conditional distribution of z given x given the current choice of theta. And so as you update theta, you might as well update phi, and vice versa. So it makes sense to just compute a single gradient over both and optimize both optimization variables at the same time. And how to compute gradients? Again, sort of like, let's say, reparameterization trick as before. And I think we're out of time. But you can see now that the autoencoder perspective q is the encoder. It takes an image of, let's say, an input, and then maps it to a mean and a standard deviation, which are the parameters of the approximate posterior for that x. And then the decoder takes a z variable and then maps it to an x. And that's the other neural network. And you can start to see how this has a autoencoder flavor. And in fact, what we'll see is that the ELBO can be interpreted as an autoencoding objective with some regularization over the kind of latents that you produce through an autoencoder. And so that's why it's called a variational autoencoder because, essentially, it is an encoder, which is the variational posterior, and there is a decoder. And they work together by optimizing the ELBO. And optimizing the ELBO is, essentially, a regularized type of autoencoding objective. But we'll see next time.
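Putting the pieces together, a compact sketch of one training step for a Gaussian-prior VAE with a Gaussian encoder and a Bernoulli decoder over, say, flattened images. The architecture and dimensions are one common choice rather than anything prescribed here, and the ELBO is written in the equivalent reconstruction-minus-KL form that the autoencoder interpretation above alludes to.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

x_dim, z_dim, h = 784, 32, 400

encoder = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU(), nn.Linear(h, 2 * z_dim))
decoder = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(), nn.Linear(h, x_dim))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

def training_step(x):                          # x: (batch, x_dim) with values in [0, 1]
    mu, log_sigma = encoder(x).chunk(2, dim=-1)
    q = Normal(mu, log_sigma.exp())
    z = q.rsample()                            # reparameterized sample, k = 1
    logits = decoder(z)
    # ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z)),  with prior p(z) = N(0, I)
    rec = -F.binary_cross_entropy_with_logits(logits, x, reduction="none").sum(-1)
    kl = kl_divergence(q, Normal(torch.zeros_like(mu), torch.ones_like(mu))).sum(-1)
    loss = (kl - rec).mean()                   # minimize the negative ELBO over theta and phi jointly
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

print(training_step(torch.rand(64, x_dim)))
```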
Stanford CS236: Deep Generative Models (Stefano Ermon, 2023). Lecture 13: Score-Based Models.
Today, we're going to start talking about score-based models or diffusion models, and we're going to see that, for what is kind of like the state-of-the-art class of generative models for images, video, speech, audio-- a lot of different continuous data modalities-- this is the way to go. And we'll see it's going to build on some of the techniques we talked about in the last lecture. So first of all, usual picture here, kind of like the overview of what we're talking about in this class. We've talked a lot about different kinds of model families, and we've seen two main classes of generative models. We've seen likelihood-based models, where, basically, the key object you're working with is the probability density or the probability mass function. So the model is basically just a function that takes as input some x and maps it to some scalar, which is how likely is that x according to the model. And we've seen that probability mass functions or probability density functions are tricky to model because they have to be normalized. They have to integrate to 1. So we've seen that one way to get there is to use autoregressive models. Another way to get there is to use flow models, but that always constrains the kind of architectures you can use. And the alternative way to go around it is to, well, give up in some sense on the normalization and use variational tricks to essentially evaluate the likelihood. So we've seen variational autoencoders. And we've seen energy-based models where you have to deal with this normalization constant that normalizes the probability density. And we've talked about a bunch of techniques to try to get around the fact that you have to evaluate Z theta and maybe avoid likelihood-based training and various ways of training energy-based models. And then the pro here is that you can do maximum likelihood training, which is principled, which is great. It's a loss that you can monitor. You can see how well it goes. It's optimal in a certain sense. You can compare models, but you have to deal with restricted architectures. You can't plug in an arbitrary neural network to model the likelihood. The alternative way to go about this is to just model the sampling process. So this is kind of like an implicit generative model, again, where we're just going to describe the way you produce samples. For example, you feed random noise through a neural network. Essentially, any neural network you can pick as the generator defines a valid sampling procedure. The problem is that given a sample, given an output from this network, evaluating how likely the model is to generate that is very hard. And so you have to give up on likelihoods again. And although these models tend to work pretty well, the key problem is that you can't train them in a very stable way. You have to do minimax optimization, and that's a problem. And so what we're going to talk about today is a different way of representing probability distributions, probability densities, that deals with the score. That's what these models are going to be. They're called score-based generative models. And this is only going to be applicable to probability density functions, so continuous random variables. But when we're dealing with continuous random variables, then we can start thinking about working with the gradient of the log density instead of working with the density itself. So in a likelihood-based model, you would normally work with p of x.
In a score-based model instead, the object that you work with is the gradient of the log density. And the gradient, again, is with respect to the inputs. So it's not with respect to the parameters of your model. And that's the score function. And we've seen this in the previous lecture, but the idea is that it provides you an alternative interpretation of the probability density function. You can alternatively think of the PDF as a function that maps every point to a scalar, which is non-negative. So you can think of it as the height of some kind of surface over this 2D space. In this case, it's a mixture of two Gaussians. And the score is just a vector-valued function: at every point, it gives you the gradient of the log density. And so it's a vector field, where at every point the arrow is telling you the direction you should follow if you want to increase the log likelihood most rapidly. And these two are sort of like equivalent views. So if you like analogies with physics, this is kind of like describing a physical system in terms of electric potentials or electric fields, which are in some sense the same thing. But computationally, it might be advantageous, as we'll see, to work with one versus the other. And in particular, the main challenge that we talked a lot about in this course when modeling probability density functions is that you have to make sure that these PDFs are normalized. So you need to figure out a way of parameterizing curves that are ideally flexible-- that can have as complicated a shape as possible as you change the parameters of your neural network. But somehow you need to make sure that the total area under the curve is fixed. It's equal to 1. So you need a normalized object, or some way of computing the area under the curve for any choice of the parameters. And that's potentially tricky, as we've seen. Often what it means is that you have to choose very specific architectures that either guarantee that the area under the curve is 1 or, as in a normalizing flow, let you compute it efficiently. And now if you think about the score in the one-D case, the score is just the derivative of the log of the function you see on the left. And the function on the right no longer needs to satisfy any kind of normalization constraint. And it's potentially much simpler to work with. You see here, you have this relatively complicated curve on the left, and the corresponding score function on the right is potentially much easier to work with. So the intuition behind a score-based model is that instead of modeling data using the density, we're going to model data using the score. So that's going to be the object that we're going to use to define our model family. And we've seen that this is useful in the context of energy-based models. Energy-based models are one way of defining very flexible probability density functions by saying, OK, I'm going to pick an arbitrary neural network. I'm going to make it non-negative. And then I'm going to renormalize by somehow computing the total area under the curve and then dividing by that number to get a valid probability density function. Super flexible. The problem is that evaluating likelihoods involves the log partition function.
So if you want to do maximum likelihood training, you have to go through either somehow estimate a partition function, or you need to do contrastive divergence things, where you have to sample from the model, which is expensive. On the other hand-- which is something you don't want to do. On the other hand, what we're seeing is that we can train energy-based models by instead of trying to match, basically, the density ratios using KL divergences, we can try to fit our energy-based model by trying to make sure that the corresponding vector field of gradients-- so the scores of the model match the scores of the data distribution. And recall that this was basically the Fisher divergence. And we were able to do-- through integration by parts, we were able to rewrite this objective function into 1 that basically only involves the score, which, as we've seen in the last lecture, does not require you to compute the partition function. So the score here, the critical thing to notice here is that the score function, the gradient of the log density according to the model, when you take the log of an EBM, you get your neural network. And then you get the log partition function. Critically, the log partition function does not depend on x. It's the same for every point. It's just the area under the curve. No matter where you are, the area under the curve is the same. And so when you take the gradient with respect to x, that's 0. And so we can compute this model score in terms of the original energy of the model. So in this expression here, we can basically compute this term efficiently without having to deal with the normalization constant. And so we have this expression. If you want to do score matching for an energy-based model, you have that loss which you can in principle optimize and try to minimize as a function of theta. And now you might wonder, I mean, can we only do score matching for EBMs? And if you think about it, you look at the loss. It's something that is well defined for any model family. As long as you're able to compute this gradient with respect to x of the log density according to the model, then you can do score matching. And you can train a model by minimizing the Fisher divergence. So in particular, what other kind of model families can we apply score matching to? Well, we can certainly apply it to continuous autoregressive models. If you can compute the log density, you can probably also differentiate through that and compute the score. You can do it on a normalizing flow models. Again, we can compute the log likelihood. And so we can also compute the score, although perhaps it doesn't make a lot of sense because you have access to the likelihood. So you might as well train these models by maximum likelihood. But in principle, you could apply score matching to these models, and you could train them that way as well. But you could also wonder, I mean, what's the most general model family that we can train using score matching? And you can think that while you can certainly apply it to autoregressive models, to flow models, you can think of EBMs as kind of like a generalization where autoregressive models and flow models are special kinds of EBMs, where the partition function is guaranteed to be one. But perhaps there is something even larger. Like, we can even optimize over an even broader set of model family. And that's the idea behind a score-based model. Instead of modeling the energy, we're basically directly going to model the score function. 
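One way to see the point about the partition function dropping out: for an energy-based model with p_theta(x) proportional to exp(f_theta(x)), the score is just an input-gradient of the unnormalized f_theta, which autograd gives you directly. The energy network below is an arbitrary made-up MLP.

```python
import torch
import torch.nn as nn

d = 2
# Scalar output f_theta(x); the model density is p_theta(x) proportional to exp(f_theta(x)).
f_theta = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, 1))

def ebm_score(x):
    # grad_x log p_theta(x) = grad_x f_theta(x), since log Z_theta does not depend on x
    x = x.clone().requires_grad_(True)
    energy = f_theta(x).sum()
    return torch.autograd.grad(energy, x, create_graph=True)[0]

x = torch.randn(5, d)
print(ebm_score(x).shape)   # (5, 2): one gradient vector per input point
```

A score-based model skips the scalar f_theta altogether and has a network output that d-dimensional vector directly, which is what gets set up next.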
And now you might wonder, can we only do score matching for EBMs? If you think about it and look at the loss, it's something that is well defined for any model family. As long as you're able to compute this gradient with respect to x of the log density according to the model, then you can do score matching, and you can train the model by minimizing the Fisher divergence. So in particular, what other kinds of model families can we apply score matching to? Well, we can certainly apply it to continuous autoregressive models: if you can compute the log density, you can probably also differentiate through it and compute the score. You can do it on normalizing flow models—again, we can compute the log likelihood, and so we can also compute the score, although perhaps it doesn't make a lot of sense because you have access to the likelihood, so you might as well train these models by maximum likelihood. But in principle, you could apply score matching to these models and train them that way as well. You could also wonder, what's the most general model family that we can train using score matching? You can certainly apply it to autoregressive models and flow models, and you can think of EBMs as a generalization, where autoregressive models and flow models are special kinds of EBMs whose partition function is guaranteed to be one. But perhaps there is something even larger—an even broader model family we can optimize over. And that's the idea behind a score-based model. Instead of modeling the energy, we're going to directly model the score function. So we're going to define our model family by specifying the corresponding vector field of gradients. The model is not going to be a likelihood. The model is not going to be an energy. The model is going to be a vector-valued function, or a set of vector-valued functions: as you change theta, as you change your neural network, you get different vector fields. And that's what we're going to use to describe the set of possible distributions that we are going to fit to our data distribution in the usual way. So the difference with respect to an EBM is that we're not going to model the energy and then take the gradient of it. Instead, we're going to directly think about the different kinds of vector fields that we can parameterize with a neural network. In this case, the neural network is a vector-valued function: for every x, s theta of x, the estimated score at that point, is a vector with the same number of dimensions as the input. So s theta is really a function from R^d to R^d—if you have d dimensions, the output of this neural network will also have d dimensions, because that's how many coordinates you need to specify one of these arrows at every point. And so that's the very high-level story here. As usual, we want to fit a model to a data density. There is a true underlying data density that is unknown, and we assume we have access to a bunch of samples from it. What we're going to try to do is find some function in our model family—choose parameters theta, or equivalently choose a vector field of gradients—that is hopefully as close as possible to the vector field of gradients of the original data density. That's going to be the learning objective: choose parameters theta such that the corresponding vector-valued function matches the true vector field of gradients of the data density. How can we make sure the gradients calculated from these sparse samples are as close as possible to those of the underlying distribution of our data? Yeah, so that's a great question. The only thing we have access to are samples, and so we don't have access to the true density. And so we're never going to be able to achieve this perfectly. There is a learning element in the sense that we only have access to a bunch of samples, so we need to make sure we're not overfitting, and there are going to be some limits to how well we can do this. But you have the same problem even if you're training by maximum likelihood: you're only given samples, and you try to get as close as possible to the empirical data distribution, hoping that by fitting the samples you're also fitting the true underlying data density. So we're going to have the same problem in the sense that we only have samples—we have limited data. The main difference is that instead of trying to fit one of these scalar functions that gives us the likelihood, we're going to try to fit this vector-valued function that gives us the gradient of the log likelihood, essentially. Yeah. So building off that—for instance, over here we have two clusters, but we want the model to be able to predict the score even between them, around the decision boundary, and all these sorts of things. Isn't that too much of an out-of-distribution problem? I know you said that even when you're going for the likelihood, it's fine.
But when you're going for the likelihood, you're sort of within a reasonable range if you're trying to push [INAUDIBLE] close to the modes where your samples are. Here, you're sort of learning, OK, I have some data here, and I have no other data in some valley, and yet I'm somehow expected to learn something there. Yeah, so I think in both cases it's a hard problem. I would say that even if you work with likelihoods, you don't just want to put probability mass around the training data, because you want the model to generalize to unseen data that is hopefully coming from the same distribution as the one you've used for training. You don't want to just fit the training distribution. If you're fitting a model over a training set of images, you don't just want to put probability mass around the images that you have in the training set. You want to spread it out. And you need to be able to say, there are other parts of the space where I need to put probability mass, even though I have not seen them during training. And so we have a similar problem. To some extent, the gradient and the function are essentially the same thing. If you have the gradient, you can integrate it and get the function—well, you can get the function up to a constant, and we know what the value of that constant needs to be, because everything has to be normalized. So in some sense, it's just as hard as the original problem. As far as overfitting is concerned, I think it will be strongest here compared to other models we have seen, because-- It depends on the loss that you use. As we'll see, there are going to be issues that are very specific to training with the Fisher divergence, which make it so that this vanilla approach will not quite work, and we'll need to do a bunch of different things to actually make it work in practice. But so far, up to here, I'm just saying it's going to be a different representation of the kinds of models we are willing to consider. I haven't even said how we're going to do the training, or how we prevent overfitting, and so forth. And I'm wondering, how are energy-based models not already the most general kind of model for score matching? Because from my understanding, if you're training by Fisher divergence, you don't estimate the energy or the partition function at all. You just look at the scores. So how is that not fully general? So the idea would be that, potentially, the vector field that you model might not be the gradient of a scalar function. So it might not necessarily be a conservative vector field. You can imagine that here, if you do things this way, f theta is a scalar function, which is kind of like the potential, if you think about it in physics terms. There is a potential, maybe an electric potential, and that's a scalar, and you get the vector field by taking the gradient of that. So it's a way of parameterizing a set of vector fields that have to satisfy certain properties, because they are the gradients of a scalar function. Here, I'm saying I'm no longer going to restrict myself to gradients of scalar functions. I'm going to allow myself arbitrary vector fields, where there might not be an underlying scalar function such that this vector field is its gradient. So we're just modeling raw gradients—we're not assuming any underlying potential function. Exactly.
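For concreteness, a directly parameterized score network can be as simple as the following sketch (the architecture and dimensions here are assumptions for illustration, not the lecture's actual model); note that nothing forces its output to be the gradient of some scalar energy:

import torch
import torch.nn as nn

d = 2  # data dimensionality (assumed for this sketch)

# s_theta: R^d -> R^d, an arbitrary vector field parameterized by a neural network.
score_net = nn.Sequential(
    nn.Linear(d, 128), nn.SiLU(),
    nn.Linear(128, 128), nn.SiLU(),
    nn.Linear(128, d),  # output dimension matches the input dimension
)

x = torch.randn(16, d)
print(score_net(x).shape)  # torch.Size([16, 2]): one estimated "arrow" per input point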
So building on that question, is it fair to say that in the energy-based model we still try to model the likelihood directly, versus here we directly model the scores, which are gradients? Yes. Oh, maybe here—are we missing the VAE? Or [INAUDIBLE]? A VAE is not technically an EBM, because you don't even get the-- yeah, there is no normalization constant, and the likelihood comes from an integral because of the latent variables. It's not directly something that would fit here. I was just wondering, there seems to be some semantic overloading. Why do we call these scores and not just gradients? Why do we have different terms for the same thing? The reason they're called scores is that they're called scores in the literature, and people use score matching for the losses—it's related to the Fisher score. That's why we chose that name. Yeah. What's the cost of looking at the gradients? Because then, if there's nothing special about them, we could just look at the second derivative, the third derivative—we can just keep going [INAUDIBLE]. Yeah, yeah, yeah. I haven't seen it done, but in principle, you could potentially look at those too. Maybe it's going to be too high dimensional or too complicated, but you could. If the gradients are with respect to [INAUDIBLE], how do we practically generate [INAUDIBLE]? Yeah, so we'll talk about how to do inference, how to do sampling, and those kinds of things. I want to go back to when you first talked about score matching: why will score matching lead to an identical distribution, or at least a [INAUDIBLE] similar one? Because the score, if we think about it, is kind of like a gradient of the PDF, right? Yeah. And then we do integration—what about the constant terms that will-- It has to integrate to 1 because it's a density. So when you integrate, you get the function up to a constant, and that constant is determined by the fact that the integral of the PDF has to be 1. So there is no loss of information. In general, when you go from a function to its derivative, you lose information about a shift, basically, because if you take a function and shift it by a constant, they will both have the same derivative. So it looks like you're losing information. But here, you don't, because we know that the functions have to integrate to 1. Cool. So that's the high-level idea. We're going to try to fit score models directly to data. So the problem is this. You're given IID samples from our data density, which is unknown—the usual learning setting, a training set of samples from some unknown data distribution—and you want to estimate the score of this data distribution. We're going to think about a model family, which is going to be a set of vector-valued functions parameterized by neural networks. As you change theta, you change the shape of the vector field. And the goal is to choose parameters so that the vector fields are similar. So you can imagine the first question is, how do we compare two vector fields? There's going to be the true vector field of gradients corresponding to the data density, and there's going to be an estimated vector field of gradients. How do we compare them? A reasonable way to do it is to overlap these two vector fields: at every point, there is going to be a true gradient and an estimated gradient.
And we can look at the difference between the two and average this over the whole space. And if you do that, you get back the Fisher divergence that we talked about before. So you go through every x, you look at the true gradient at that point according to the data density, and you look at the estimated gradient at that point according to the model. There's going to be some difference. You look at the norm of that vector, and you average with respect to the data density. That gives you a scalar value that tells you how far away your model is from the true vector field of gradients of the data distribution. So if you can get this quantity to 0 as a function of theta, then you know that the vector fields match, and you have a perfect model. And so trying to minimize this as a function of theta is a reasonable learning objective. And we know that even though it looks like something you cannot possibly optimize—because it depends on this unknown quantity here; recall, we only have access to samples—we can do integration by parts and rewrite it in terms of an objective that only depends on your model. It still involves an expectation with respect to the data, but you can approximate that with a sample average. So in order to train this kind of model, you need to be able to evaluate s theta efficiently, and you need to somehow be able to compute this trace of the Jacobian, which is basically a sum of partial derivatives. And then there is the question of whether we need this score model to be proper—to correspond to the gradient of some energy function. And we'll see that that's actually not really needed in practice. So the most straightforward way of parameterizing the score would be to just pick a vector-valued neural network. Let's say you have three inputs and three outputs, because we know that at every point this neural network has to estimate a gradient, which is a vector of the same dimension as the input. And then we need to be able to evaluate this loss, which involves the norm of the output of the neural network and the trace of the Jacobian. Evaluating the first term, the norm of the output, is easy: you just do a forward pass, you compute s theta, and then you can compute the squared norm of s theta. The more complicated piece is the trace of the Jacobian. The Jacobian is the matrix where you have all the partial derivatives—the gradients of every output with respect to the inputs. So the first term up here is the partial derivative of the first output with respect to the first input, and then you have all these other partial derivatives to deal with. And the problem is that we're trying to compute the trace of this matrix, which is the sum of the elements on the diagonal. So what you need to do is compute the partial derivative of the first output with respect to the first input, then this element here on the diagonal—the partial derivative of the second output with respect to the second input—and then the partial derivative of the third output with respect to the third input. Then you have to sum up these three numbers, because you need to sum the elements on the diagonal of this matrix.
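A sketch of that vanilla score matching loss, using the generic score_net from above (an illustrative implementation, not the lecture's code); the for loop over input dimensions is exactly the part that makes this expensive:

import torch

def vanilla_score_matching_loss(score_net, x):
    # Monte Carlo estimate of E[ 1/2 * ||s_theta(x)||^2 + tr(Jacobian_x s_theta(x)) ].
    x = x.detach().requires_grad_(True)
    s = score_net(x)                                  # (batch, d)
    sq_norm = 0.5 * (s ** 2).sum(dim=1)               # first term: squared norm of the score
    trace = torch.zeros(x.shape[0], device=x.device)
    for i in range(x.shape[1]):                       # one backprop per input dimension
        grad_i = torch.autograd.grad(s[:, i].sum(), x, create_graph=True)[0]
        trace = trace + grad_i[:, i]                  # diagonal entry d s_i / d x_i
    return (sq_norm + trace).mean()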
And although we can do backpropagation—so you can compute each of these derivatives relatively efficiently—naively doing this requires a number of backpropagation steps that scales linearly with the number of dimensions that you have. We don't know if there is a more efficient way of doing this; the only way we know is essentially extremely inefficient when the number of dimensions is very large. And so even though this loss does not involve partition functions, it still scales pretty poorly with the dimensionality of the data. So don't EBMs have the same problem? Like, we also have to calculate the trace of the Hessian and-- Yeah, so EBMs are even worse. Because in an EBM, you would need one more backprop to get the score and then one more to get these derivatives. So an EBM would be even more expensive. This at least saves you one backpropagation, because you are already modeling the gradient of something. But it's still expensive. Yeah. I thought in EBMs it reduced to f theta. Yeah, so you have the Hessian of f theta. When you take the first gradient with respect to x of f theta, you get essentially s theta, and then you have to do the Jacobian of s theta. So you need second-order derivatives in that case. It's even more expensive. Do the input and output dimensions have to be the same? Or can you use different ones? They have to be the same here, because you're modeling the score, which is the gradient of the log likelihood, and so that has to be the same dimension as the input. I guess you're trying to match with the gradient. So then, yeah. OK. So does this method only deal with data that has the same dimension, because we're trying to compare the scores at each point? So we can only deal with data of identical dimensions? Sorry, this is-- we're modeling a joint distribution over a set of random variables, and if some of them are missing, computing marginals might be expensive. Cool. So this vanilla version is something we briefly mentioned also in the last lecture—if you recall, we said, OK, this avoids the partition function, but doing integration by parts is still expensive because of this Hessian term, or trace of the Jacobian in this case. And so we need more scalable approximations that work in high dimensions. That's what we're going to talk about next: how to get this to scale to high-dimensional settings where this d is large. There are two approaches that we're going to talk about. The first one is called denoising score matching. The idea is that instead of trying to estimate the gradient of the data, we're going to try to estimate the gradient of the data perturbed with noise. So you can imagine that there is a data distribution that might look like this, and then there's going to be a noise-perturbed data distribution shown in orange, denoted q sigma, where we're basically just adding noise to the data—convolving the data density, in this case, with a noise distribution q sigma of x tilde given x, which might be something like a Gaussian. We're smoothing the original data density by adding noise. It turns out that estimating the score of the distribution you get after adding noise is a lot easier computationally. And so to the extent that you choose the noise level to be relatively small, this might be a reasonable approximation.
If you don't add too much noise, then this yellow density will be pretty close to the blue one, and so the scores that you estimate for the yellow density—the noise-perturbed density—are going to be pretty close to what you want, because q sigma is going to be pretty close to the original data density when sigma is small. That's the high-level idea. And it works like this. You have a data density, which could be over images. Then you add noise to the images using this Gaussian kernel q sigma, and you get a new distribution over images plus noise. We're going to try to estimate the score of that. The way we're going to fit our model to this noise-perturbed data density is again the Fisher divergence. But now, instead of a Fisher divergence between model and data, we use a Fisher divergence between the model and this noise-perturbed data density. So it's the same thing as before, except we replace p data with q sigma, which is data plus noise, basically—which is just this. The expectation is just this integral with respect to q sigma. So just like before, it's the norm of the difference between the estimated gradient and the true gradient, except that now, instead of the real data density, we use this q sigma, the noise-perturbed data density. And then, just like when we were doing integration by parts, we expand this square and get three terms: the norm of the first term, the norm of the second term, and then this inner product between the two pieces—the red term, which is going to be the complicated one. Just like in the integration by parts, you can see that the blue term does not depend on theta, so we can ignore it. The green term depends on theta in an easy way—it's just the usual thing. The complicated piece is the red one, where we have this dot product between the score of the noisy data and the estimated score. Yeah? Can you describe again how you start with [INAUDIBLE]? Are you supposed to formally sample the data and then-- Yeah, so q is defined as-- basically, you get a sample from q sigma by randomly drawing a data point, randomly drawing some Gaussian noise, and adding it to the data. Yeah. Other questions? Yeah. What does this q give us? What do we achieve by doing that? What we achieve is that this is going to be tractable, in the sense that we're going to get rid of that trace of the Jacobian term. So we're going to get a loss function that is scalable in high dimensions. That's why we're doing this—the trace of the Jacobian was too expensive. It does introduce an approximation, because you're no longer estimating the score of the data density; you're estimating the score of this other thing. But it turns out we're going to be able to do it much more efficiently. Yeah? Looking back, the loss somehow still depends on the gradient of q sigma with respect to x? Yeah, it looks like it does. But it turns out it actually simplifies to something pretty intuitive and very simple. It's going to reduce this problem to denoising. Basically, this score matching objective will end up being equivalent to the problem of: given this x tilde, try to remove the noise and estimate the original image you started with. It's going to be mathematically equivalent. Basically, we're going to rewrite this red term in a certain way.
And we're going to show it's going to be equivalent to denoising. OK, so we ignore the blue term—it doesn't depend on theta. Then we have this green term, which is easy. And then we have this red term, which is tricky, but we're going to rewrite it. So focusing on the red term, it looks like this. Just like in the integration by parts trick, we can write the gradient of the log as 1 over the argument of the log times the gradient of the argument of the log. I've just expanded the gradient of the log of q sigma. And now you see that this q sigma here and the q sigma down here cancel with each other, and so we end up with something a little bit simpler: just the dot product between the gradient of the noise-perturbed density and the score model at every point. And now we can write out the expression for q sigma, which is just this integral. Basically, the probability of any particular x tilde is the probability of sampling any data point x times the probability of generating x tilde by adding noise to x. Think about the sampling process: what is the probability of generating an x tilde? You have to look at every possible x, and you have to check what was the probability of generating x tilde by adding noise to that x. That's what this integral here is giving you—it's just the definition of q sigma that we had in the previous slide. Now we can see that this is linear, so we can push the gradient inside the integral. And that's where things become a lot simpler, because now we are getting a gradient of this Gaussian density, and we no longer have to deal with the gradient of the data density. And now we can again use the trick that the gradient of the log of q is 1 over q times the gradient of q, so we can rewrite the gradient of the Gaussian transition density as q times the gradient of log q: if you take the gradient of log q, you get the gradient of q times 1 over q, so these two things are obviously the same. And now you push the expectation out, and we have an expression that looks very much like the original one that we started with—but we no longer have to deal with the gradient of the log of the data density perturbed with noise. We have to look at the gradient of this conditional distribution of x tilde given x, which is just a Gaussian density. And so overall, we've rewritten this complicated object up here into something that is a little bit simpler, because it now only involves the score of this q sigma of x tilde given x, which is just going to be Gaussian. So bringing it together: this is what we started with, estimating the score of the data density perturbed with noise. We know we could write it this way, and through the algebra we just did, we could also rewrite the red term in terms of this. And now you can see that you can write the whole thing as the squared difference between s theta and the gradient of this Gaussian transition kernel that we have here: when you expand that square, one piece gives you the green term, another piece is constant in theta, which we can subtract out, and the cross term—the dot product between these two—is exactly this red term that we just derived.
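Concretely, since q sigma of x tilde given x is a Gaussian N(x, sigma^2 I), its log-gradient with respect to x tilde is (x - x tilde) / sigma^2 in closed form, and the rewritten objective becomes a simple regression. A sketch of this denoising score matching loss (again with the generic score_net; sigma is a hyperparameter you pick):

import torch

def denoising_score_matching_loss(score_net, x, sigma=0.1):
    # Perturb the data: x_tilde = x + sigma * eps, with eps ~ N(0, I).
    eps = torch.randn_like(x)
    x_tilde = x + sigma * eps
    # Closed-form target: grad_{x_tilde} log q_sigma(x_tilde | x) = (x - x_tilde) / sigma**2 = -eps / sigma.
    target = (x - x_tilde) / sigma**2
    s = score_net(x_tilde)  # the score model only ever sees the noisy data
    # Tweedie's-formula view (mentioned later in the lecture): for Gaussian noise, a denoised
    # estimate of x is x_tilde + sigma**2 * s, so matching this target is denoising.
    return 0.5 * ((s - target) ** 2).sum(dim=1).mean()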
So all in all, what we've shown is that if you want to estimate the score of q sigma, the noise-perturbed data density, it's equivalent to trying to estimate the score of this transition kernel—this Gaussian density that we use to add noise—across the different x's and the different x tildes sampled from the noise distribution. So a lot of algebra, but basically, up to constants, we can rewrite the score matching objective for the noise-perturbed data density into a new score matching objective that only involves terms that are easy to work with. In particular, if you look at this expression, this gradient of the log of q sigma of x tilde given x is easy to compute, because that's just a Gaussian. q sigma of x tilde given x is a Gaussian with mean x and covariance sigma squared times the identity. That's just a squared exponential: when you take the log, it becomes a quadratic form, and when you take the gradient, you get a relatively simple expression that looks like that. And when you plug that expression in here, you get something easy to work with. Maybe I don't have it here, but basically you end up with an objective that no longer involves traces of Jacobians. It's an L2 loss between s theta and this (x minus x tilde) over sigma squared term, which is basically a denoising objective, as we'll see in the next couple of slides. So the key takeaway here is that you don't have to estimate the trace of the Jacobian anymore, if you're willing to estimate the score not of the clean data but of this q sigma, which is data plus noise. So practically, the algorithm is something like this. You have a minibatch of data points sampled from the data. You perturb these data points by adding Gaussian noise—literally just add noise to each xi with variance sigma squared. And then you estimate the denoising score matching loss on the minibatch, which is just the loss on these data points, and it's just this expression. Recall that if q sigma is Gaussian, then the loss looks something like this. It has a very intuitive interpretation: the score model is evaluated at these noisy data points x tilde, and for each data point, what the score model is trying to do is estimate the noise that was added to xi to produce x tilde. Do we have some restriction on how noisy—like how large, let's say, the standard deviation sigma that we use to add noise to the original image can be? Yeah, so you'd want sigma to be as small as possible, because you want q sigma to be as close as possible to p data. On the other hand, the variance of this loss goes to infinity as sigma goes to 0, so you can't actually choose sigma to be too small. In practice, you need to choose sigma as small as possible such that you can still optimize the loss. But there is always an approximation—that's the trade-off. You don't have Hessians or traces of the Jacobian anymore, but you're not estimating the score of the clean data; you're estimating the score of the noisy data. I don't understand, because-- so they're different. But the gain here—getting rid of the trace of the derivative of the score—is because we know the closed form of the noise that was added?
Yeah, we're no longer estimating-- I mean, we're changing the goalposts. You can think of this as a numerical approximation: in some sense, we're adding Gaussian noise and trying to estimate derivatives through a finite difference. That's one way of deriving the same thing, if you like that sort of approximation route. It has the flavor of estimating the derivatives through a perturbation. Why is it called denoising score matching? Because we're matching to a perturbed data point—how does it help with denoising? So if you think about when this loss is zero-- maybe I have it on the next slide. Yeah, so if you think about it, the loss function looks like this. The original loss function was this, and we were able to rewrite it as this. So what are you doing? You start with a clean image, then you add noise to generate x tilde, then you look at this loss. What we're saying is that the score model takes x tilde as an input, and to make this L2 loss as small as possible, it has to match this x minus x tilde term—which, up to scaling, is just the noise that we added, pointing back toward the clean image. So to make this loss as small as possible, s theta has to figure out the vector of noise that was added to this image. And that's why it's a denoiser: it gets to see x tilde, and it needs to figure out what to subtract from this x tilde to get back a clean image. Because even though we're not directly comparing with the original image, we still somehow manage to-- Yes, that's called the Stein unbiased risk estimator. That's the key trick that is used. You can still evaluate the quality of an estimator without actually knowing the ground truth, in some sense. Yeah. So do I understand correctly that the only unknown in this optimization is basically the theta? You have the x, the x tilde, the sigma—you get everything except for the theta, right? Yes, the x's and the x tildes you are generating yourself. And then my s theta doesn't see the clean data? So s theta only sees the noisy data, and then you're trying to predict the noise. Yeah? So after we added this Gaussian perturbation, the s is trying to match the gradient of that Gaussian noise distribution. But the training goal was that this score function should be the gradient of the log likelihood of the original data distribution. So it seems that this trained s deviates from the original goal. For example, if instead of a Gaussian we use another noise distribution to add the perturbation, that changes the loss function for s and gives a different s. Is that what's happening? You could, yeah. So it is not restricted to Gaussian noise. If you look at the math, the only thing you need to be able to compute is this gradient of-- basically, as long as the distribution that you use to add noise is one where you can compute likelihoods and get the gradient in closed form, then you can get a denoising loss for it. And you're going to end up estimating the score of q sigma, which, if you're adding Gaussian noise, is basically data plus Gaussian noise. If you add another kind of perturbation, you're going to get another kind of perturbed data, and you're estimating the score of that. So you're right. We're not estimating the score of the clean data density. We're estimating the score of the data plus noise.
The hope is that you only need a small amount of noise—if sigma is small enough that these images are indistinguishable from the clean ones, then the approximation is not too bad. And what we gain by doing that is that it's much more scalable. It feels like instead of converging to the distribution of clean data plus noise, it gives me the feeling that we are converging to the distribution of the noise function. It doesn't—no, that's the key thing. That's the magic of denoising score matching: these two objectives are equivalent up to a constant. So by minimizing the bottom one, the denoising one, you are actually also minimizing the top objective, where you're really estimating the score of the distribution of the data convolved with Gaussian noise—the smoothed version of the data density—even though you only ever work at the level of the individual conditionals. That's the beauty of denoising score matching. Maybe taking a step back, the whole premise of this approach is that it is easier to model the vector field of gradients than the probability distribution directly, right? Yeah. And another way to say it, maybe, is that denoising is not too hard as a problem, and we have pretty good neural networks that can do denoising. So to some extent, we've reduced the problem of generating images to the problem of denoising, which is a relatively easy task for our neural networks. To the extent that you can do well at denoising, you're going to do well at estimating the score. And we know that the score is, to some extent, equivalent to having a likelihood. We haven't yet talked about how you actually generate samples from these models—essentially we'll do MCMC. So after all these steps, we've reduced generative modeling to denoising, which is an easy task—probably one of the easiest tasks you can think of. So, OK, is this result—that the gradient-matching objective turns out to be this denoising objective—specific to Gaussians, or is there some underlying thing that needs to [INAUDIBLE]? So it doesn't have to be Gaussian, as long as the machinery-- yeah, basically, as long as you can compute this gradient of whatever distribution you use to add noise, the math works out. And really, if you think about what happened in the proof, the only thing that matters is that the gradient is a linear operator, and so this whole machinery works. So I'm looking at the optimization objective. If we are optimizing well, the score function should be close to the noise divided by the standard deviation we chose, and it should also be close to the actual gradient of q. Yeah? So are we claiming that the gradient at the perturbed image should be similar to the noise? So let's see to what extent that is true. Here we've seen that score matching reduces to denoising—estimating the score is the same as estimating the noise that was added to the data point. And the reason this is true, or another way to think about it, is that there is something called Tweedie's formula, which is an alternative way of deriving the same result. It's telling you that, indeed, as you suggested, the optimal denoising strategy is to follow the gradient of the perturbed log likelihood. So you can imagine that if you had a data density that only has, like, three images—so it's like three deltas.
And this is a toy picture, just for visualization purposes. You can imagine that if you add noise to these three images, you're going to get a density that looks something like this. And then say you're trying to denoise. What we've just shown is that the best way to denoise is to follow the gradient. So if somebody gives you a data point over here on the left, how should you denoise it? You should follow the gradient, to move towards high-probability regions—which makes sense. You're trying to denoise: change the image and push it towards high-probability regions. And in fact, the optimal denoising strategy is to take the noisy sample and take a step, with the right scaling, along the gradient of the log of the noise-perturbed data density. Is that based on the assumption that the expectation of the noise is zero, or something like that? Because there must be some sort of constraint on the noise—I can imagine coming up with noise distributions for which following the gradient might not be optimal. So for those results, the denoising score matching part is still true. What is good about the Gaussian is the following—maybe that will clarify. Essentially, there is the clean data, there is the noisy data, and there is the posterior distribution of the clean data given the noisy data. And we know the definition of the noisy data distribution. Tweedie's formula is telling you that, given a noisy image x tilde, the expected value of the clean image is given by this expression. So if you want to minimize the L2 loss, the best thing you can do is output the conditional expectation of x given x tilde. And from that perspective, you want to follow the gradient. This particular version of the formula is only true for Gaussians. If you just give it noise to denoise—is there a notion of what would happen if you keep denoising noise? So if you keep denoising noise—are you asking what happens if you have a lot of noise, or if you keep repeating the process? I guess you start with a bunch of noise and just keep letting it denoise. Yeah, so that's going to be basically how we sample from the model. That's coming up soon. Essentially, we're going to keep following the gradient—keep denoising, if you want. And that's going to be MCMC in some sense; that's going to be Langevin dynamics, and that's how we're going to produce samples. Is that how diffusion models [INAUDIBLE]? Cool. So the other way to make things efficient is to take random projections. We still have time. Another alternative way of coming up with an efficient approximation to the original score matching loss that does not involve traces of Jacobians is to take random projections. You can imagine that, at the end of the day, what we're trying to do is match the estimated vector field to the true vector field. And if these vector fields are really the same, then they should also be the same when we project them along any direction. So you can take this direction and that direction, and you can project the arrows along that direction. If the vector fields are the same, then the projections should match.
In particular, if these projections are axis-aligned, then the individual components of these vectors should match. And the idea is that working in the projection space is going to be much more efficient, because it's now a one-dimensional problem. So that defines a variant of the Fisher divergence, which we call the sliced Fisher divergence, and it's exactly what we had before—but before comparing the data gradient to the model gradient, we project them along a random direction v. So you randomly pick a direction v, and at every data point you compare the true gradient and the estimated gradient along this direction v. And note that after you take this dot product, these are scalars—they are no longer vectors. And it turns out you can still do integration by parts, and you end up with an objective function that looks like this. It still involves the Jacobian, but crucially, it now involves Jacobian-vector products, which are basically directional derivatives, and those are things you can estimate efficiently using backpropagation. The second term is just the usual thing and is efficient—it's the output of the network dotted with a random vector, so that's efficient to evaluate. Then we have something that looks like this: the Jacobian matrix left-multiplied by this vector v and right-multiplied by the same vector v. It turns out that this is just a directional derivative, and that's something you can compute with backpropagation efficiently. So if you think about it, this is the expression we started with, which you can equivalently write as the gradient of the dot product. And that's something you would compute like this: you have a forward pass that computes s theta, then you take the dot product with v, which gives you a scalar. Now you do a single backpropagation to compute the gradient of that scalar with respect to all the inputs, and then you take another dot product with v to get back the quantity we want. So this can basically be done at roughly the cost of a single backpropagation step.
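To make that single-backpropagation claim concrete, here is a sketch of the sliced score matching loss with one random projection per data point (an illustrative implementation using the generic score_net; Gaussian directions are one of the valid choices discussed just after):

import torch

def sliced_score_matching_loss(score_net, x):
    x = x.detach().requires_grad_(True)
    v = torch.randn_like(x)                          # one random projection direction per sample
    s = score_net(x)                                 # (batch, d)
    sv = (s * v).sum()                               # scalar: sum over the batch of v . s_theta(x)
    grad_sv = torch.autograd.grad(sv, x, create_graph=True)[0]   # a single backprop
    v_J_v = (grad_sv * v).sum(dim=1)                 # v^T (Jacobian of s_theta) v, per sample
    proj_sq = 0.5 * (s * v).sum(dim=1) ** 2          # 1/2 * (v . s_theta(x))^2
    return (v_J_v + proj_sq).mean()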
What is the projection operation [INAUDIBLE]? Or is it just a vector dot product? It's a dot product. Dot product, I see. Yeah. So is the v predefined? The v is sampled from some distribution. And let me see if I have it here. So this is what it would look like. You would sample data, and for every data point, you would randomly sample a direction according to some distribution. Then you just optimize this objective function, which, as we've seen, is tractable to estimate and does not involve the trace of the Jacobian. And there's a lot of flexibility in terms of choosing this p_v—how you choose the directions. You can choose, for example, Gaussian or Rademacher vectors; they both work in theory, though the variance can vary. But basically, there's a lot of flexibility in choosing these random directions. So, OK, with this one you can run one backprop. Why in the other case did we have to run multiple backprops? Because before, you had to compute the partial derivatives of every output with respect to every input, so you needed d backprops. Here you can do a single one, because it's basically a directional derivative. Does it ever make sense to bias the projections that you sample towards certain directions? Yeah—it seems like an intuitive idea. We tried it for a long time and never saw any difference in practice. I don't have anything conclusive to say. It seems like a good idea but never worked. So if you have a high-dimensional output vector space, then depending on the structure of that output space, there are going to be certain directions where you have a lot of signal and then probably many more directions where you just don't have a lot of signal. And so it seems to me that just picking a projection vector from a Gaussian distribution wouldn't work very well. Some small fraction of the time you would get a projection with a high value, but a larger fraction of the time you would get a projection where your signal, your information, was very, very low. So, I guess, how does that work out in training or in practice? Yeah, so these are essentially unbiased estimators of the original objective—you can also think of it that way. There is variance that you're introducing, because you're comparing projections of the vectors instead of comparing the vectors fully, which is what the original score matching loss would do. That's the price you pay, basically. You can use variance reduction techniques to make things more stable in practice, or different distributions. What you can also do, if you are willing to pay a little more computational cost, is take multiple random projections per data point. Instead of evaluating every x_i along a single direction v_i, you can take a bunch of directions and average them. So there is a natural way of reducing variance by taking more projections, but then it becomes more expensive. Eventually, if you take d projections, where d is the dimensionality, and compare every single coordinate, it goes back to the original objective. And you are free to choose something in between. In practice, one projection works. So is the key idea behind why this doesn't take O(d) time that at each backpropagation step we do the vector product, and it becomes one single-- It becomes a scalar. Yes, yes. Exactly. Yeah. And we do all of this projection without worrying about noise? Here there is no noise. So the advantage of this is that you are actually estimating the score of the data density, as opposed to the data density plus noise. Yeah. And here you see some plots showing, if you do vanilla score matching, how long it takes per iteration as a function of the data dimension. It can go up to 300 or 400 dimensions, and then you run out of memory—this was a few years ago—but it scales poorly, linearly with respect to the dimension. The sliced versions are basically constant with respect to the data dimension. And in terms of model quality—it's not super important what this graph means—what you get with sliced versions of score matching matches pretty much what you would get with the exact score matching objective. Now, the final thing I wanted to talk about is how we actually do inference—how do we generate samples. Suppose that somehow we are able to-- Question. Yeah, I just had one more question on this, about creating the scalar and then doing one backprop. Why do you do this with a random vector and not just a scalar product with the corresponding-- so you have predicted this gradient, right? And you know, basically, the actual gradient, the ground truth gradient. You don't know it.
But didn't we take the L2 norm previously? Yeah, so you still need to do the integration by parts trick. This one, you don't know. The original loss would, at every x, take the dot product of the difference between the true gradient and the estimated gradient with itself—square the difference. You can't evaluate that loss, because it depends on the true gradient, which you don't know. But then you can do integration by parts and rewrite it as this thing, which is like what we had before, and it no longer depends on the true score. Cool. So the thing I wanted to talk about is how to do sampling. Let's say that somehow you've used vanilla score matching, or denoising score matching, or sliced score matching, and you are able to train your neural network s theta so that the estimated vector field of gradients is close to the true vector field of gradients of the data density. The question is, how do you use this? You no longer have access to a likelihood. There is no autoregressive generation. How do you generate samples? The intuition is that the scores are telling you in which direction you should perturb a sample to increase its likelihood most rapidly. So you could imagine a basic procedure—an MCMC-type procedure, like what we talked about before—where you initialize particles at random. Here I'm showing multiple particles, but you could imagine sampling x0 from some initial distribution. Then you repeatedly take this update, where you take a step in the direction of the estimated gradient. So we just do gradient ascent, using the estimated scores to decide the direction. And if you do that, you're going to get something like this, where the particles all converge to the local optima—the local maxima, hopefully—of this density, which is kind of right. You start with random noise, an image which is pure noise, and then you follow the gradient until you reach a local optimum where you can no longer improve. But we know that that's not the right way to generate a sample. The right way to generate a sample is to follow the noisy gradient. That's what we call Langevin MCMC, which is exactly the same procedure, except that we also add a little bit of Gaussian noise at every step. If you do that, then when you run it for long enough, this procedure is guaranteed to produce samples from the underlying density. Remember that this vector field corresponded to a density with a lot of probability mass here and a lot of probability mass there. And indeed, if you look at the distribution of these particles, they're going to have the right distribution, because, as we've seen, Langevin dynamics sampling is a valid MCMC procedure in the limit. So it's a way of sampling from a density when you only have access to the score. We know that it doesn't matter how you initialize your particle: you repeat this process of following the noisy gradient, and in the limit of small step sizes and an infinite number of steps, this will give you a sample from the underlying density. So literally all we're doing is replacing the true score function with the estimated score function. And that's one way of generating samples: you first estimate the score by score matching—you train this neural network to output arrows, output gradients, that are close to the true ones—and then you just follow the directions.
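A sketch of that Langevin sampling loop (illustrative; the step size, number of steps, and standard-normal initialization here are assumptions—the theoretical guarantee only holds in the limit of small steps and many iterations):

import torch

@torch.no_grad()
def langevin_sample(score_net, n_samples=64, d=2, step=1e-3, n_steps=1000):
    x = torch.randn(n_samples, d)          # arbitrary initialization, e.g. pure noise
    for _ in range(n_steps):
        z = torch.randn_like(x)
        # Noisy gradient ascent on the estimated log density:
        # follow the estimated score, plus a little Gaussian noise at every step.
        x = x + step * score_net(x) + (2 * step) ** 0.5 * z
    return x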
To the extent that you've done a good job at estimating the gradient, and to the extent that these technical conditions are satisfied, this will produce a valid sample. And so that's basically the full picture. The full pipeline is: you start with data, you estimate the score, and you generate samples by following the score—which corresponds to removing noise, because we know that the score is telling you the direction to follow if you want to remove noise. And so, back to what we were discussing before, it has a little bit of the flavor of removing noise and then adding noise, because that's what Langevin is telling you to do. And, unfortunately, if you just do this, it doesn't work. This is what you get if you use this procedure. You train a model on MNIST—even a simple data set like MNIST, CelebA, or CIFAR-10—and it just doesn't work. And this is what the Langevin procedure looks like: you start with pure noise, and then it gets stuck somewhere, but it doesn't produce good samples. There are several reasons for this. One is that real-world data tends to lie on a manifold, and if the data really lies on a manifold, the score might not be defined. You can see this intuitively: imagine you have a density concentrated on a ring—as you make the ring thinner and thinner, the magnitude of the gradient gets bigger and bigger, and at some point, it becomes undefined. So that's a problem. And, indeed, real data tends to lie on low-dimensional manifolds. If you take MNIST samples and keep the first 595 PCA components—so you project them down onto a linear manifold of dimension 595—there is almost no difference. It basically means that even if you restrict yourself to linear manifolds that you can get with PCA, there is almost no loss. And if you take CIFAR-10 and a 2,165-dimensional manifold, again, there is almost no difference after you project the data. So it seems like, indeed, that's an issue. And if you look at the training curve on CIFAR-10—that's the score matching loss—it's very, very bumpy, and it doesn't quite train. The other issue, which was hinted at before, is that we're going to have problems in the low data density regions. If you think about points that are likely under the data distribution, we're going to get a lot of samples from those regions. The loss is an expectation, with respect to the data distribution, of the difference between the estimated gradient and the true gradient, and we're approximating this expectation with a sample average. Most of our samples are going to be up here and down here, and we're never going to see samples in between. So if you think about the loss, the neural network is going to have a pretty hard time estimating the gradients in between. You can see here an example, where we have the true data scores in the middle panel and the estimated data scores on the right panel. The arrows match pretty well at the corners, where we see a lot of training data, but they're pretty bad the moment you go away from the high data density regions. Is there some way, when you're doing this MCMC, to look for the places where the vector field arrows are small?
So if you sample the whole thing, it'll be like, oh, wait, the bottom or the top right corner has really small arrows, so then I want more samples there. So, just generically, [INAUDIBLE]? Yeah, the problem is how you find it. I guess you're trying to find stationary points, or trying to maximize the log likelihood, and it's not obvious how you would do it. You could do gradient ascent and try to find a local maximum, but the problem is that the gradient is not estimated accurately. If you imagine randomly initializing a data point, very likely you're going to initialize in the red region, and then you're going to follow the gradients—but the gradients are estimated very inaccurately, and your Langevin dynamics procedure gets lost, basically. What happens is that a lot of those particles start out here, and you follow these arrows, but the arrows are pointing you in the wrong direction. So you're never going to be able to reach the high data density regions by following the wrong instructions, somehow. Yeah? What if we just initialize at one of our data points—would that help? You could try to initialize at one of the data points. The problem is that it's still not going to mix, which is what's coming up next: even though Langevin dynamics, in theory, converges, it can take a very long time. And you can see the extreme case here, where the data density is a mixture of two distributions, with mixture weights pi and 1 minus pi, but crucially, p1 and p2 have disjoint supports. So the density is pi times p1 when you are in A, and 1 minus pi times p2 when you are in B—two disjoint sets. You have a mixture of two distributions with disjoint supports; think of a mixture of two uniform distributions with disjoint supports. If you look at the score function, you'll see it has this expression: it's the gradient of the log of the first piece on the support of the first distribution and the gradient of the log of the second piece on the support of the second distribution. And you can see that when you take the gradient with respect to x, the pi disappears. So the score does not depend on the weight that you put on the two mixture modes. The problem here is that, because the score function does not depend on the weighting coefficient at all, if you were to sample just using the score function, you would not be able to recover the relative probability assigned to the first mode versus the second mode. This is an extreme case of Langevin not even mixing, basically. And, yeah, if you run Langevin, it will not reflect pi. Here you can see an example of this: among the true samples, there are more samples up here than down here—maybe 2/3 of them are up here and one third are down here. If you just run Langevin, you end up with half and half. So it's not reflecting the right weight, and that's an indication that, again, Langevin is mixing too slowly. And what we'll see in the next lecture is a way to fix it that will actually make it work. That's the idea behind diffusion models: essentially, figure out a way to estimate these scores more accurately all over the space and get better guidance. That will fix this problem, and we'll get to state-of-the-art diffusion models.
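To spell out the disjoint-support point above with one line of algebra: for any x in the support A of p1, the density is p(x) = pi * p1(x), so

\nabla_x \log p(x) = \nabla_x \big( \log \pi + \log p_1(x) \big) = \nabla_x \log p_1(x),

because log pi is a constant and its gradient with respect to x is 0 (and symmetrically the score is \nabla_x \log p_2(x) on B). The score field is therefore identical for every value of pi, which is why a sampler driven only by the score cannot recover the relative weights of the two modes.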
So today, we're going to be talking about generative adversarial networks—we're going to start introducing yet another class of generative models. Just as a recap, this is the high-level story, the high-level roadmap for the kinds of things we're going to be talking about in this course. The high-level idea when you build a generative model is that you start with some data, and you assume that the data is basically a set of IID samples from some unknown probability distribution that we denote P data. Then you have a model family, which is a set of probability distributions that are parameterized, usually by neural networks. And then what you do is you define some kind of notion of similarity between the data distribution and the model distribution, and you try to optimize over the set of probability distributions in your model family to find one that is close to the data distribution according to that notion of similarity. And we've seen different ways of constructing the probability distributions in this set. We've seen autoregressive models, where you have chain rule, and you break down the generative modeling problem into a sequence of simple prediction problems. We've seen variational autoencoders, where we are essentially modeling the density over the data using, essentially, a big mixture model. And the last class of models we've seen is normalizing flow models, which are kind of like a variational autoencoder with a special type of decoder—a deterministic, invertible transformation—where, again, we get densities through the change of variables rule. But the key thing is that we essentially always try to model the probability assigned by the model to any particular data point. And the reason we do that is that if we can do that, then we can do maximum likelihood training. If you know how to evaluate probabilities according to the model, then there is a very natural way of training the model, which is this idea of minimizing the KL divergence between the data distribution and the model distribution, which, as we know, is equivalent to maximizing likelihood. And so there's a very natural and very principled way of comparing probability distributions that works very well when you have access to likelihoods. And a lot of this machinery involves ways of setting up models such that you can evaluate likelihoods efficiently. That's one way of doing things. What we're going to see today is a different way of comparing, or measuring, similarity between probability distributions. So we're going to change this piece of the story: we're going to compare probability distributions in a different way. And by doing that, we will get a lot of flexibility in terms of defining the model family, because the training objective is not going to be based on maximum likelihood anymore, and so we're going to get more flexibility in defining the generative model itself. So remember, again, what we've been doing so far is training models by maximum likelihood. The idea is that we have access to the density or the probability mass function over each data point, so we can ask the model: how likely are you to generate this particular data point, x_i in this case?
And if we can do that, then we can also try to choose parameters such that we maximize the probability that the model generated-- the training data set that we have access to or equivalently we can choose parameters to try to maximize the average log probability assigned by the model to our training set. And there is good reasons for choosing this learning objective. In particular, it can be shown that this is optimal in a certain sense. And what I mean is that basically, under some assumptions, which are not necessarily true in practice-- but under some ideal assumptions and an ideal setting where you have a sufficiently powerful model, and there is some kind of identifiability condition-- not super important. But under some technical conditions, you can prove that basically trying to estimate the parameters of the models by maximizing likelihood-- but basically solving this particular optimization problem is the most efficient way of using the data. So basically, there is going to be other learning objectives that you can set up that would potentially give you estimates of the true parameters of the model. But among all these various techniques, the maximum likelihood one is the one that converges the fastest, which basically means that given a certain amount of data, this is the best thing you can do. It's the one that will give you the right answer, basically, using the least amount of data. And so that's why using maximum likelihood is a good idea, because in some sense you're making the best possible use of the data you have access to under some technical conditions. And the other reason that maximum likelihood is a good training objective is that we've seen that it corresponds to a compression problem. So if you can achieve high likelihood on a data set, then it means that you would do reasonably well at compressing the data. And we know that compression is a reasonable kind of learning objective. It's one of-- in some sense, if you're able to compress the data, then it means that you can kind predict the things that could happen pretty well. And it's a good way of forcing you to understand what-- the patterns in the data. And so compression is typically a pretty good learning objective. However, it might not be necessarily what we want. And so what we'll see first is that there are cases in which achieving high likelihood might not necessarily be correlated with, let's say, achieving good sample quality. So if you're thinking about training a generative model over images, for example, it's possible to construct models that would give you high likelihood and terrible samples in terms of quality. And vice versa, it's going to be possible to train models that have very good sample quality, meaning they produce images that are very realistic, but they have terrible likelihoods at the same time. And so although training about maximum likelihood has good properties, it might not be necessarily what we want if what you care about is, let's say, generating pretty samples or pretty images. And so that's going to be some motivation for choosing different sort of training objectives that are not necessarily going to be based on a maximum likelihood. So the-- let's see what does this mean a little bit more rigorously. 
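For reference, the objective being discussed is the familiar one; this just restates the formulas the lecture refers to, with no new content:

```latex
\max_{\theta}\; \frac{1}{|\mathcal{D}|}\sum_{x^{(i)}\in\mathcal{D}} \log p_\theta\!\big(x^{(i)}\big)
\;\approx\; \max_{\theta}\; \mathbb{E}_{x\sim p_{\text{data}}}\!\big[\log p_\theta(x)\big],
\qquad
\arg\min_{\theta} D_{\mathrm{KL}}\!\big(p_{\text{data}} \,\|\, p_\theta\big)
= \arg\max_{\theta}\; \mathbb{E}_{x\sim p_{\text{data}}}\!\big[\log p_\theta(x)\big].
```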
First, what we know is that if somehow you're able to find the true global optimum of this optimization problem-- so if you're really able to find a model distribution that perfectly matches the data distribution-- so somehow, if you go back to this picture, if you are able to make this distance exactly 0, the kl divergence between the data and the model is truly 0, so you're able to reach the global optimum of this optimization problem, then you are in good shape because, well, you get the best possible likelihood and the samples that you produce are perfect essentially by definition. Because your model is exactly equal to your data distribution, and so if you sample from the model, it's like sampling from the data, and so that's as good as it gets. But what we're going to see-- is that as long as the match is not perfect-- as long as there is a little bit of a gap which, in practice, is always going to be the case, then you being close in KL divergence or equivalently doing well with respect to likelihood, it doesn't necessarily mean that you are achieving good sample quality. But somehow, if you're really able to get the true global optimum, then you are good. But for imperfect models, achieving high likelihoods does not necessarily mean that you achieve good sample quality and vice versa. [? It sounds ?] like these [INAUDIBLE] around the optimum, where suddenly you get getting the best results, and up until that, you're getting lackluster but decent results? So we'll see that this, unfortunately, not really. And here is an example where you can get very good likelihoods, but very bad samples. And so to do that, you can basically imagine a situation like this, where you come up with this model, which is a mixture of two distribution. It's a mixture of the true data distribution and some garbage-- pure noise distribution. And so the sampling process is something like this here. You flip a coin or-- and then with 99% probability, you generate noise you generate garbage. And with 1% probability, you generate a true sample from the data distribution. Of course, in practice, you cannot really do this. But this is just to show that there exists models that achieve very good likelihoods as we'll see but very good sample quality. And what I mean-- the sample quality is bad because 99% of the time, you are generating pure garbage and only 1% of the time you're generating good samples. And what we'll see is that even though this model is generating very bad samples, it actually achieves very good likelihoods. And to see that, it's actually a relatively simple kind of derivation. When you evaluate the probability of a data point x, according to this model, you get a sum of two terms. It's the true probability under the data distribution, and it's the probability under the noise distribution. And even though the noise distribution could be really bad, the probability is at least as good as the-- there's a 1% probability of sampling from the data. And so the probability assigned to this data point is at least as large as this sum of two non-negative quantities, and so this log is at least as large as the log of that little contribution that comes from the data distribution. And because we're taking logs, the log of 1% times P data is equal to the log of P data minus this log of 100. 
So somehow, basically what we're seeing here is that the log probability assigned by this model to a data point is the best log probability you can get-- the one that you get according to the true data distribution shifted down by some constant. And in particular, what this means is that if you take an expectation of this with respect to the data distribution-- so you want to see what is the average log likelihood that this model achieves, if you take an expectation of the left-hand side, you take an expectation of the right-hand side, you get that on average, these models performs reasonably well. In the sense that it performs as well as what you would get if you were to use the true data distribution as a model shifted by some constant. And we know because KL divergence is non-negative that, somehow, this is the best you can do. The average log likelihood for any model cannot be possibly better than the log likelihood that you would get if you were to use the true data distribution to evaluate likelihoods of samples produced by the data distribution. This is just-- the KL divergence is non-negative, which just-- if you just move the log on the other side, it's just saying that the data distribution-- if the data is coming from the data distribution, the best model of the world you can possibly have is the one that produced the data-- is the data distribution. And no matter how clever you are in choosing data, you cannot possibly do better than using the true model that produced the data. And so you can see that kind of this performance that we get is kind of bounded above by this basically entropy of the data distribution, and below by the same thing shifted by a little bit. And what I argue is that constant doesn't matter too much. Because if you think about it, as we increase the number of dimensions-- so as we go in higher and higher dimensions, the likelihood-- so this piece will basically scale linearly in the number of dimensions while the constant is fixed. It doesn't depend on how many variables you're modeling. So-- and the intuition is that if, for example, you look at factorize-- this is a mistake. This is meant to be Pdata. Sorry. There's a typo here. But basically, if you use-- if you factorize the true data distribution according to the chain rule, you can see that this term here-- the log data scales linearly in the number of variables that you have, while the second piece does not depend on the number of variables. And so you can kind of see that in high dimensions, this model is basically doing as well as you can hope to do. The likelihood of this model, which is producing garbage in 99% of the time, is pretty close to the best you can possibly achieve. And so I think back to your question it means that there is a model that is very close to the optimal one, and it's still producing very, very bad results, especially in high dimensions. Yeah? [INAUDIBLE] but like doesn't this also [INAUDIBLE] models not only on like good images, but assigning bad qualities to bad images or [INAUDIBLE] images [INAUDIBLE] very easily generate bad images? Yeah, that's a good question. Yeah, to what extent could you use, let's say, bad data and somehow train the models that way. It's not obvious how you would do it with the maximum likelihood. But using GANs, which is what we're going to talk about today, it's actually pretty straightforward to incorporate negative data. 
So if you know that there are certain things that are clearly not possible or things you don't like, then it's pretty straightforward to incorporate that sort of negative data augmentation into your training objective. For example, if you take-- your training on images, and you apply some kind of Jigsaw operator, where you kind of like produce a puzzle and then you move the pieces around, you get an image that has the right local texture, but it's not something you want. And you can incorporate that kind of data augmentation, essentially-- or negative data augmentation to improve the training. So that applies generally. It's a little bit trickier to do with likelihood based models, but that's a good idea. Could you use some type of ensemble to model the noise separately? Like I can do it for like standard supervised learning, but not sure if you can also do it in a generation setting, where you just model noise separately, take it out, and then have, I guess, true data that you're modeling? So you're saying it kind of have a mixture that is trying to-- where would the noise come-- to try to filter it out or? Yeah, [INAUDIBLE] if you train several ensembles, one of the part that they pick up is [INAUDIBLE] the true distribution, and then the noise part is picked up separately. But if you average a lot of them, then you only reinforce the common signals, and then the noise components cancel each other out. I'm just curious if you could do something similar. I think in general, we don't'-- we're not in the setting where we are assuming that there is even noise in the training data, so-- or I think what we were talking about is a setting where what you don't want, and you kind of take advantage of that. But you don't have to figure out what is noise and what is not, because you already know. And here, we are in the setting where we're assuming the data is clean. The data is really just a bunch of samples from the data distribution, so you do want to use everything you have access to, and there is no need to filter the noise. This is just a model of the world that is kind of made up, but it's illustrating the point that optimizing likelihoods might not give you good sample quality, because at least conceptually, it's possible that by optimizing likelihood, you end up with a model like this, which would produce garbage 99% of the time but gives you high likelihoods. And so there is that potential issue. And conversely, it's possible to get models that produce great samples and very bad log likelihoods. Anybody have a guess on how you could do that? How would you-- yeah? Overfitting. Overfitting-- yeah, that's probably the simplest way to do it. Just memorize the training set. So you build a model that puts all the probability mass uniform, let's say, distribution over the training set. And then if you sample from this model, the samples would look great. I mean, they are by definition just training samples, so basically you cannot do better than that. But the test likelihood would be as bad as it gets, because it's going to assign basically 0 probability to anything that the model hasn't seen during training, and so again, a terrible log likelihood. So again, this is sort of suggesting that it might be useful to disentangle a little bit sample quality and likelihood. Even though we had some success training models by maximum likelihood, it's not guaranteed that that's always the case. And there might be other training objectives that will give us good results in practice. 
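To recap the likelihood arithmetic from the noise-mixture example above in symbols (nothing new here, just the bound written out):

```latex
\log p_\theta(x)
 = \log\!\big(0.01\, p_{\text{data}}(x) + 0.99\, p_{\text{noise}}(x)\big)
 \;\ge\; \log p_{\text{data}}(x) - \log 100,
\qquad\text{hence}\qquad
\mathbb{E}_{p_{\text{data}}}\!\big[\log p_{\text{data}}\big] - \log 100
 \;\le\; \mathbb{E}_{p_{\text{data}}}\!\big[\log p_\theta\big]
 \;\le\; \mathbb{E}_{p_{\text{data}}}\!\big[\log p_{\text{data}}\big].
```

The upper bound is just non-negativity of the KL divergence, and since the expected log-likelihood grows linearly with the number of dimensions while log 100 is a fixed constant, the per-dimension gap becomes negligible even though the model produces garbage 99% of the time. The opposite failure, a model that is uniform over the training set, gives perfect-looking samples but assigns zero probability to any held-out example.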
And so that's the main motivation behind-- the key idea behind generative adversarial networks. It's a different kind of training objective that will not depend on the likelihood function. And so back to our high level picture, basically, what we're going to do is we're going to change this performance measure here. We're going to change the way we're comparing how good our model is by throwing away KL divergence, which is what we've been doing so far. And we're going to try some alternative way of comparing two probability distributions that will not rely on likelihood. Yeah? So in the third case, how is examples looking exactly like the training set [? great ?] samples? Are we-- how do you define great samples? Yeah, it's a good question. What is a great sample? Maybe you want the samples to have diversity, in which case, maybe this wouldn't be. But if you think about images, if you were to look at them, they would look perfect. They would have the right structure, they would be good except that there is not maybe enough variety because you're not really generating anything new. But presumably, it would be-- in terms of just quality of the individual samples, this should be good. [INAUDIBLE] Yeah, for each individual sample, the quality would be good. Great. So let's see how we can do it. And so the basic idea is that we're going to think about exactly that problem of deciding whether or not two distributions are similar by looking at the kind of samples you get if you went through sample from one or the other. And so you can imagine this setting, where I give you a bunch of samples that are sampled from some distribution P. And I give you a bunch of samples that are coming from some distribution Q And what we can try to do is we can try to figure out if there is a way to tell whether or not P, this distribution that we use to generate the first batch-- this group of samples is actually the same as the distribution that we used to generate the right group of samples. If you can do that, then that could be a good way of comparing probability distributions, because if we don't have a way of telling whether or not the distributions are different, then it probably means that the distributions are close to each other. And so that's basically what we're going to do-- is we're going to use something called a two-sample test, which is basically a procedure for determining whether or not these two groups of samples, S1 and S2, are coming from the same distribution-- are generated by the same probability distribution or not. And so it's an hypothesis testing problem, which you might have seen before in other contexts, where basically there is a null hypothesis, which is the distributions are the same. So P is equal to Q, so the samples in the first group have the same distribution as the samples in the second group. And then there is an alternative hypothesis, which is that the distributions are different. Yeah? Can you explain one more time why do you want to compare the distributions? Yeah, we want to compare the distributions because, I guess, we need a training objective. So we typically use KL divergence to figure out how close our model is to the data. And we kind of said, OK, KL divergence is good, but perhaps not ideal in the sense that you can get pretty close in KL divergence and have terrible samples. 
So maybe there is room for choosing different comparisons-- different ways of comparing probability distributions that are closer to perceptual quality of the samples and will allow us to do things differently. And that's kind of the main motivation. So one way to go about it is to say, let's say I have a bunch of samples from Pdata, say I have a bunch of samples from P theta-- how do I compare them? And the most basic thing to do is to say, can I decide whether or not they are different, because if I cannot figure out if they are different, then it means that I'm close. They are the same. So if I fail at this hypothesis testing problem, or it's very hard to do well in this task, then it means the distribution are similar. And so I have a pretty good way of-- and then I have a reasonable way of comparing. Then maybe I'm doing well on my learning problem, so that's kind of like the intuition. But doesn't then also the issue of the randomness of the P value come into play? Because like for example, if you set the P value at 5%, then just like 5% of the time, so even if they are the same, you would reject. Right. So the way-- and this is getting at how do you do it. Typically, what you would do is you would come up with a test statistic, which is just a function that you use to compare these two sets. For example, you might try to look at what is the mean of the samples in the first group versus the mean of the samples in the second group. Or you could look at the variance-- the sample variance of the samples in S1 versus the sample variance in S2. And presumably, if indeed P is equal to Q, then you would expect the mean of the samples in here to be similar to the means of the samples in here. And you would expect the variances to be similar. So for example, one statistic you could try is to do something like this, where you compute the mean in the first group, you compute the mean in the second group, and then you compare them. And what you could do is you could say, if this statistic is larger than some threshold, then I reject the null hypothesis. Otherwise, I say that H0, so the null hypothesis is consistent with my observation. And as you were saying, there is always some kind of type I and type II error, in the sense that T is random, because S1 and S2 are random. And so even when P is equal to Q, it's possible that the means are different just because of randomness. It's not going to be super important to what we're going to do. But yeah, there is no-- it's a hard problem. It's-- even if you had a good statistic, there is still going to be some probability of error. But you could ask the question of, OK, what's the best statistic for determining whether or not the tool to solve this hypothesis testing problem. And you could try to minimize type I and type II errors. There's going to be false positives, there's going to be false negatives, but you can try to choose a statistic that minimizes these types of errors. Does it have to be like a differentiable [? function? ?] Can I have human preference as my assistant? We'll see how to come up with this T. This is just a kind of handcrafted and it's not the best one, but this is going to be similar to-- well, it's going to be learned, basically. Yeah. And so here, are you trying to distinguish between two different distributions where initial samples coming from, [? or ?] also two different types of generative models that are making the final output? It could be both. 
In practice, one is going to be real data, one is going to be-- let's say S1 is going to be samples from the data distribution and S2 is going to be sampled from our model, but it doesn't have to. It could be two models, it could be-- yeah, this is pretty generic. But the way we're going to actually use it for a generative adversarial network, it's going to be one group of samples are going to be real, one group of samples are going to come from that model. So they're going to be fake. And then we're going to use some kind of statistic to try to determine whether or not these two samples are similar. And yeah, the key observation here is that at least-- that there is some room for choosing different kind of test statistics. And you can choose some statistics which do not depend on the likelihood of P or Q. For example, if you just look at the means, you don't need to know the probability-- you don't need to be able to evaluate probabilities under P, you don't need to evaluate probabilities under Q. You can compute this function just based on a bunch of samples. You mentioned you just overfit and replicate Pdata. How would these two sigma tests deal with that? It's going to be the same [? samples. ?] Yeah, so the question is, does it solve the overfitting problem? You still have the overfitting problem just even if you do maximum likelihood. You can still have overfitting problem, so this does not directly address the overfitting directly. Although, you can sometimes-- at least you can use like validation and other things to see what would be the-- yeah, so we'll see. Yeah? So the [? test ?] statistic, which is the-- if you choose it to be your cross-entropy loss, then it's the same thing as maximizing your [? K ?] levels? You could-- yeah, so you could try to choose test statistics that are based on the likelihood of the model, but you don't have to. And so again, sort of like the setting of generative modeling with two sample tests, is one where we have there's going to be a bunch of samples that are just going to be coming from the data distribution. Recall that that's all we-- we always assume that somebody is giving us access to a bunch of samples that are coming from the data distribution, and that's our training set. And so that's going to be the first group of samples. Then, just like before, we have a set of models. We have a set of distributions P theta that we are willing to consider. And as long as these distributions are easy to sample from, which is not a terrible requirement, because presumably, you're going to use the sample the model to generate samples anyways-- but as long as you can somehow sample from the model, you can always generate this second set of samples S2, which are just basically samples from the model. And then what you can do is you can try to train-- basically, the high level idea is going to be instead of trying to find a model P theta, which is going to optimize over the set so that we minimize the KL divergence between these two distributions, we are going to try to find a model that tries to minimize this whatever test statistic that we've decided to use to compare two set of samples. And for example, in the previous example, it could be something try to make sure that the means of the samples that I produce matches the mean of the samples that I had in the training set, which would not be very useful. But that's the high level idea. And so the problem is that finding a good statistic is not easy. 
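To make the "handcrafted statistic" idea concrete, here is a minimal sketch of a difference-of-means test (NumPy; the permutation calibration of the threshold is a standard recipe added for completeness, not something specified in the lecture):

```python
import numpy as np

def mean_diff_statistic(s1, s2):
    """T = || mean(S1) - mean(S2) ||, a (weak) handcrafted two-sample statistic."""
    return np.linalg.norm(s1.mean(axis=0) - s2.mean(axis=0))

def permutation_threshold(s1, s2, alpha=0.05, n_perm=1000, rng=None):
    """Calibrate a rejection threshold for T under the null P = Q by reshuffling labels."""
    rng = np.random.default_rng(0) if rng is None else rng
    pooled = np.concatenate([s1, s2], axis=0)
    n1 = len(s1)
    stats = []
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        stats.append(mean_diff_statistic(pooled[perm[:n1]], pooled[perm[n1:]]))
    return np.quantile(stats, 1.0 - alpha)

# Same mean, different variance: the mean statistic has essentially no power here.
rng = np.random.default_rng(1)
s1 = rng.normal(0.0, 1.0, size=(500, 2))   # "data"
s2 = rng.normal(0.0, 3.0, size=(500, 2))   # "model" with the wrong variance
T = mean_diff_statistic(s1, s2)
print(T, T > permutation_threshold(s1, s2))  # usually fails to reject, despite P != Q
```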
For example, if you were to just try to minimize this kind of test that statistic here, you would end up with a generative model that produces samples with the same mean as the training set. Which is a-- the property that is desirable, but it's not sufficient. If you just match the mean, you can still produce very bad samples and fool this test statistic. And that's the problem-- is that in high dimensions, it's very hard to find some good test statistic. And so kind of intuitively, you could say, let's say that you start comparing probability distributions by matching the mean. And so if you compare the means, you would be able to distinguish-- let's say that this green Gaussian is different from this red Gaussian. But then you could say, just matching the means is not sufficient. Here are two Gaussians that have the same mean, but different variances. So if you just compare the means, you would not be able to distinguish between those two Gaussians. And then maybe you-- and if you match, let's say, the mean and the variance, you could have different distributions that have the same first moments or same mean and same variance, like a Gaussian and Laplace density. Same means and same variances, but different shapes. So kind of like-- you can kind get a sense that the more, especially the more dimensions you have-- they are modeling a lot of pixels or a lot of tokens-- there are many different ways in which two-- or many different things you could look at when you compare two probability distributions. There's many different features that you could try to compare samples with respect to. And so kind of like handcrafting a test statistic and just say, let's try to train a model based on that is unlikely to work, because you're going to match the test statistic, and there's going to be other differences that you didn't think about that actually matter in practice. So what we are going to do is we're going to try to-- instead of picking a hand-- a fixed handcrafted test statistic, we're going to try to learn one to automatically identify in which ways these two set of samples-- that one from the data and the one from the model-- they differ from each other. Instead of keeping it fixed and just say let's compare the mean, we're going to try to learn what makes these two samples different. And how to do that? Any guess? We're going to try to just basically train a classifier, essentially, which in this the language of generative adversarial networks is called a discriminator. So what we're going to-- machine learning-- that's exactly what you would do if you are trying to solve a classification problem. You're trying to figure out what distinguishes the positive class from the negative class. The job of a classifier or, for example, a deep neural network is to identify features that allow you to discriminate and distinguish these two groups of samples. So we're going to use the same kind of idea to figure out what sort of features of the data should we look at to discriminate between the green curve and the red curve, essentially. And so that's basically what we're going to do, is we're going to train a classifier-- again, it's called a discriminator in this context to basically classify. 
We're going to use a binary classifier-- for example, a neural network-- to try to distinguish real samples, which are basically the ones from the first set-- the one generated by the data distribution, which let's say label one, from fake samples which are the ones generated by P theta by our model, which we can say, for example, label 0. And we can train a classifier to do that. And as a test statistic, we can use the minus the loss of the classifier. Why do we do this? Well, what happens if the loss of the classifier is high? If-- well, let's [? say ?] if the loss of the classifier is low, then it means that you're doing a very good job at distinguishing the two. They are very well separated. Your classifier is doing a very good job as-- at distinguishing these two groups of samples. And so they are very different, so we want the test statistic to be small-- to be large, sorry. And if we have a high loss, then the real and the fake samples are hard to distinguish, and so we expect that them to be similar. Yeah? How does [INAUDIBLE]. I'm just a little bit confused, because if we're predicting the-- if we're doing this classification task, the likelihood-- like the loss is still [INAUDIBLE].. It's-- yeah, good-- so you're exactly right. So it's based on the likelihood of the classifier. It's not based on the likelihood of the generative model. So by doing things this way, we're going to use the likelihood, but it's going to be the likelihood of a classifier. And that's much easier because that's just going to be a likelihood over basically a binary variable. And so essentially, it doesn't really put any restriction on the neural network you can use as opposed to the likelihood of a high-- over x over the input, which requires you to either use autoregressive models or invertible neural networks. It puts a lot of restrictions on the kind of architectures you can use. Here, it's just going to be a likelihood over basically a binary random variable, which is the class label. And so all you have to do is you have to have a softmax at the end that maps it to a probability. But then you can do whatever you want with respect to the x. So you can extract whatever features you want of the input x to come up with a good classifier. And there is really no restriction on the kind of architecture. [INAUDIBLE] it's much easier to optimize. Yeah. And so the goal of this classifier-- the way we're going to train this classifier is to maximize the two sample test statistic to basically try to make the-- try to figure out what kind of statistic, basically, can maximally distinguish between the data and the model, which is the same as minimizing the classification loss. So we're going to train the classifier the usual way to minimize the loss, because that's also what gives us the most power to basically distinguish these two distributions-- these two set of samples. Yeah? To clarify, before, you were talking about what statistics should we use in order to figure out if they're different. In this case, if I'm understanding correctly, that we don't end up really even thinking about what that statistic necessarily is. We're just doing the classification in order to figure out if it exceeds or-- It's going to be actually a statistic. So this is the actual statistic. It could be something like this. So it's actually more like a family of statistics, which are all the ones that you can get as you change-- I mean, you have a classifier, and then you can imagine changing the parameters of the classifier. 
And then you can think about if you were to try to find a classifier that maximizes this objective function, which is just minimizing cross entropy, which you approximate based on data because you only have samples from Pdata and you only have samples from the model, you end up with something that looks like this, which is going to be the statistic. Remember before, we were just taking the mean of x in S1 and the mean of x in-- of the samples in S2. Now we don't just look at the means. Now we look at what the classifier says on these two samples. And that's what we're going to use. And basically, as we discussed before, if the [? log-- ?] which this is just the-- this is the negative loss of the classifier [INAUDIBLE] maximizing. And so if this quantity is large, then it means that you're doing a good job at separating them, and it means that they are different. And if the loss is-- if this quantity is low, then it means that you're very confused. You're not doing a good job at distinguishing them, which supports the idea that probably they are similar. It's not hard-- it's hard to distinguish. Can you just clarify the notation [INAUDIBLE]?? Yeah, so what we have here is-- the setting is the one we had before, where we're saying we're going to use-- as a statistic, we're going to use a classifier, which we're going to denote as the D5, because it's going to be trainable. It's going to be another neural network that we're going to train. And they're going to train this neural network to try to distinguish between the samples in S1 and the samples in S2, where the samples in S1 are the-- it's just a sample from-- a group of iid samples from the data distribution, and S2 is just a group of samples from the model distribution P theta. And if you maximize this objective function over the classifier, you are basically trying to do as well as you can-- you're basically just training the classifier the usual way by minimizing cross-entropy. And this is going to be our statistic in the sense that the loss of a classifier will tell us how well-- how similar, basically, these two groups of samples are. And basically, yeah, the discriminator-- the phi is performing binary classification with the cross-entropy objective. And you can kind of see that you are going to do well here, what you're supposed to do is you're supposed to assign probability 1 to all the samples that come from Pdata, and you're supposed to assign probability 0 to all the samples that come from the model if you want to maximize that quantity. Which again is basically what you would do if you were to train a classifier to distinguish the two groups of samples. And that's just like the negative cross-entropy. So for now, P theta is just fixed. It's the model distribution. The data distribution is, as usual, fixed. You just have a bunch of samples from it. That's S1. And what we're seeing is we're going to try to optimize the classifier to do as well as it can at this task of distinguishing between real samples and fake samples, because the loss of the classifier will basically tell us how similar the samples that come from the model are to samples that come from the data. Yeah? Can you please explain again how the loss is telling us if it's what we want? Yeah, so imagine-- do I have it here? Yeah, so imagine that somehow the-- maybe-- yeah. Imagine that p-- that the two distributions are the same. So P theta is the same as P data. Then these two samples would come from the same distribution. 
So the classifier basically cannot do better than chance, because you are literally just taking two groups of samples that come from the same distribution. And then there is no way to distinguish them because they are actually coming from the same distribution. So you cannot do better than chance, and so you would have a high loss. On the other hand, if the samples were very different, the classifier would do a pretty good job of separating them, and then in which case, the loss would be small. And so based on that, we can come up with a statistic that would basically say, based on the loss of the classifier, we're going to decide whether or not the samples are similar or not. So to the extent that you can separate them well using a classifier, then we say that they are different. If somehow, they are kind of all overlapping and there is no way of coming up with a good decision boundary, then we were saying, OK, then probably the two samples are similar. Why do we need [INAUDIBLE] discriminator is-- even if it's not optimal, we can just set that to [INAUDIBLE]?? You need the first term, because I guess you do need to contrast it to something. Otherwise, it would be trivial to-- as is the usual-- maybe if you need to-- if you have a classification problem, you do need the two data from the two classes. Otherwise, there is not really much to learn. [INAUDIBLE] if my model-- my generative model, it should just learn to fool the discriminator [INAUDIBLE].. So it should-- it can only control samples from [INAUDIBLE]. Oh, yeah. It has nothing to do with [INAUDIBLE].. Yeah, you are already one step ahead. You're already thinking about optimizing with respect to theta, which we are not doing here right now. Yeah, but for now, we're just optimizing phi, and that depends on both clearly. You are right. When you optimize with respect to theta, you only care about the second term. And indeed, the gradients will only involve that term. Yeah? [INAUDIBLE] notation again? Yeah, so the notation here is saying-- recall that we have a group of samples S1 that are coming from Pdata. This is your training set, or a mini batch of samples from Pdata. And then we have another group of samples S2 that are coming from a model distribution P theta. And then we have some kind of objective function here that depends on the model and the discriminator. And what we're seeing is that the discriminator is going to try to optimize this quantity, which depends on the model and the discriminator. And it's going to try to maximize it. And that's equivalent to basically trying to do as well as it can at distinguishing real samples from fake samples, which are coming from P theta. The reason we have this V is that we're going to also try to then optimize this function with respect to theta, because we want to train the generative model. And so what we will show up later is basically a minimax optimization problem, where we're going to optimize this V both with respect to theta and with respect to phi. So there's going to be a competing game where the discriminator is trying to optimize this V quantity-- it's trying to maximize this V quantity, and the model is going to try to minimize that quantity, because the model is-- we're trying to make it-- we're trying to change P theta to fool the discriminator, or try to make the classification problem as hard as possible. So later, there will be an outer minimization with respect to theta, and that's how we train the model. Cool. 
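As a sketch of how the discriminator's side of this objective looks in code (assuming PyTorch; `D` stands for any discriminator network ending in a single logit, and the helper names are illustrative, not from the lecture):

```python
import torch
import torch.nn.functional as F

def discriminator_objective(D, x_real, x_fake):
    """V(G, D) = E_pdata[log D(x)] + E_ptheta[log(1 - D(x))], estimated on minibatches.
    D outputs raw logits; logsigmoid keeps the log-probabilities numerically stable."""
    return (F.logsigmoid(D(x_real)).mean()        # log D(x),       x ~ p_data
            + F.logsigmoid(-D(x_fake)).mean())    # log(1 - D(x)),  x ~ p_theta

def discriminator_step(D, d_opt, x_real, x_fake):
    """Gradient *ascent* on V with respect to phi, i.e. descent on -V (ordinary cross-entropy training)."""
    d_opt.zero_grad()
    loss = -discriminator_objective(D, x_real, x_fake.detach())  # don't backprop into the generator here
    loss.backward()
    d_opt.step()
    return loss.item()
```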
And it turns out that the optimal discriminator actually has this form, which makes sense. This is just the-- if you just use Bayes' rule and you compute what is the true conditional probability of a point x belonging to the positive class, which in this case is, let's say, the data distribution real samples. Well, the true conditional distribution is basically the probability that point was generated by the data distribution divided by the total probability that that point was actually generated by either the model or the real data distribution. And so in particular, you can see that if x is only possible according to the data distribution, then the model should assign the optimal discriminator would assign 1, because you have basically 1 over 1 plus 0, and then it would be 1. While for example, if the two models are the same-- so if a point is equally likely to come from Pdata or P theta-- then this quantity should be 1/2. Which kind of makes sense, because if x is equally likely under P theta and the Pdata, then the best you can do is to say 1/2. probability While if a point is much more likely to have come from Pdata because this ratio is large, then the classifier should assign high probability to that point. And if a point is very unlikely to have come from Pdata because the numerator is small, then you should assign-- the classifier should assign low probability to that point. And so in this case, the most confused the discriminator can [? be-- ?] because if P theta equals Pdata, it seems like we're going back to the objective we were trying to get away from, which is that you don't want to minimize the KL divergence between the two distributions. So I guess how do we escape that problem? So we don't, because it's true that the KL divergence is going to be minimized when the two distributions are the same. And that's the global optimum of the KL divergence. So whatever we do, we're still going to go towards that global optimum. In practice, you cannot actually reach it. And. So really what matters is that if you have an imperfect model-- so you cannot really achieve this-- the KL divergence will take some non-zero value. This quantity might take some other non-zero value. And what we're-- the argument could be that perhaps among the suboptimal models, you should prefer one that cannot fool disc-- cannot fool a discriminator as opposed to one that gives you high compression, because maybe that's more aligned to what you care about. Can I have a discriminator that's just a generative model with a softmax over the final P of [? x? ?] [? Does ?] it need to be like a binary-- I can have another generative model trained with, let's say, a normal [? KL ?] divergence, and I use that as my discriminator, [INAUDIBLE].. So you're saying can you use a likelihood-based model to-- Do the discriminator. --do the discriminator? Yeah, you can. There are actually variants of ways of training generative models that are kind of along those lines, where-- we're going to talk a little bit about that when we talk about noise contrastive estimation. I think it's going to be pretty similar to what you're suggesting. So yeah, if you have access to a likelihood or part of the likelihood, then you can take advantage of it and try to design. But then that defeats the purpose. As what we'll see is that the main advantage of this is that you don't have to have access to a likelihood. 
The only thing you need is to be able to sample from the model efficiently, which means that you can use very-- essentially, an arbitrary neural network to define the generative process, which is a big advantage of this kind of procedure. Yeah, that's what I was saying. If you check, you can see that if P theta is equal to ptheta, then this quantity is going to be 1/2 for every x, which basically means that the best you can do-- the classifier will output 0.5 for every x, which is indeed the best you can do. If the distributions are the same, you cannot possibly do better than chance. So this is when the classifier is maximally confused, basically. So now, how do we get the next step? How do we use-- now that we've decided that that's going to be our notion of the way we're going to compare how similar basically Pdata is to P theta, now we can define a learning objective where we try to optimize P theta to basically fool the discriminator. And so it's going to be kind of like a game. It's going to be a minimax optimization problem between a generator, which is just your generative model and this discriminator-- this classifier. And the generator is just going to be a generative model typically that basically looks like a flow model, in the sense that you start with a latent variable z, and then you map it to a sample through some deterministic mapping, which is parameterized by a neural network, and we're going to call it G theta. And the-- so the sampling procedure is the same as a flow model. You sample z from a simple prior-- for example, a Gaussian-- and then you transform it through this neural network. And crucially, this is similar to a flow, but the mapping does not need to be invertible. It can be an arbitrary neural network. It's an arbitrary sampler. You start with some random vector, and you transform it into a sample. No restrictions on what G is. [INAUDIBLE] you have the same dimensions. [INAUDIBLE] could have-- Exactly. It doesn't need to have the same dimensions. No restrictions, basically, on what G is. Yeah? We can just use any generative model, like a [? VAE? ?] You could use any generative model. Yeah, but the-- yeah, the advantage-- the typical-- well, we'll see that it's actually convenient-- well, to train, it would be good if you can backprop through the generative process. But to some extent, you can use-- you can indeed use other generative models. But the advantage is that basically you don't have any restrictions on this neural network. So we don't have to-- there is going to be some distribution over the outputs of this neural network, but we're not ever going to compute it. So unlike autoregressive models, or flow models, or VAE where we were always very worried about being able to compute given an x what was the likelihood that my model produces that particular x, for these kind of models, we don't even care, because we're going to use two sample tests to compare to train them. And so we don't need to be able to evaluate likelihoods. And so we don't have any restriction basically on what this sampling procedure does. It can essentially be anything. And what we do then is we're going to train this generator to do the opposite, basically, of the discriminator. The generator is going to try to change this mapping, which implicitly also changes the samples it produces to try to minimize this statistic that we were using in support of the fact that-- of this null hypothesis that says the data is equal to the distribution of samples that I get by using this model. 
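For concreteness, a generator of the kind described here, a deterministic network pushing Gaussian noise forward with no invertibility or dimension constraints, could be as simple as the following sketch (assuming PyTorch; the layer sizes, the tanh output, and the `sample` helper are illustrative choices, not prescribed by the lecture):

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """x = G_theta(z), z ~ N(0, I). No likelihood is ever evaluated for this model;
    the only requirement is that sampling (a single forward pass) is cheap."""
    def __init__(self, z_dim=64, x_dim=784, hidden=256):
        super().__init__()
        self.z_dim = z_dim
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, x_dim), nn.Tanh(),   # e.g. image pixels scaled to [-1, 1]
        )

    def sample(self, n):
        z = torch.randn(n, self.z_dim)             # z from the simple prior
        return self.net(z)

    def forward(self, z):
        return self.net(z)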
And so the end result is this. You have this minimax optimization problem, where the function V is the same as this-- basically, the loss-- of the negative loss of the classifier. And then these two players in the game-- the generator and the discriminator, they have opposing objectives. The discriminator is going to try to maximize this which, again, this is the same as what we had before. This is just saying classifier-- the discriminator is trying to do as well as it can to distinguish samples coming from the data [? to ?] samples coming from this generative model-- from this generator. And the generator is going to do the opposite. It's going to try to minimize this objective function, which basically means the generator is trying to confuse the classifier as much as it can. So it's going to try to produce samples such that the best classifier you can throw at that-- when you compare them to the data distribution, the best classifier is going to perform poorly. Which supports the fact that if a classifier cannot distinguish the samples I produce from the samples that are in the data set, then I probably have pretty good samples. And that's like the training objective that we're going to use for training this class of generative models. And so now, it turns out that this is related to a notion of similarity between probability distributions. We know that-- what the optimal discriminator. It's just the density ratio Pdata over P theta plus P model, basically. And we know that the optimal discriminator is going to depend on what the generator does. And we can evaluate the value of this objective function when, basically, the second player-- the discriminator is picking the best possible thing it can do given what the generator is doing and-- because we know what that looks like. We know that when the discriminator is optimal, the discriminator is just going to give us these density ratios-- Pdata over P theta plus P model. So we can plug it into this expression, and we get this sort of equation. So this is the optimal loss that you get by choosing-- whenever you choose a generator G, If the classifier picks the-- if we pick the best classifier given that G and given the data distribution, this is the value of that objective function. And this kind of looks like a KL divergence. It's an expectation of a log of some density ratios. Remember, divergence is expectation of under P of log P over Q. This has the flavor. Now, the denominators here are not probability distributions. They are not normalized. You have to divide by 2 if you want to get something that integrates to 1. But you can basically just divide by 2 here and here, and then subtract off that logarithm of 4 that you just added in the denominators. And now, this is really just two KL divergences. You can see that this is the KL divergence between the data and a mixture of data and model. And this is KL divergence between model and a mixture-- the same thing-- mixture of data and model. And then it shifted by this log 4, which is just because I had-- I added these two here and here. And so you need a log 4 there to make it equal. And so what this is saying is that this objective as a function of the generator is equal to this sum of KL divergences, which actually has a name. It's called the Jensen-Shannon divergence between the data distribution and the model distribution. So this thing is essentially 2 times this quantity called the Jensen-Shannon divergence and-- which is also known as symmetric KL divergence. 
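Written out, the derivation just described gives the optimal discriminator and the resulting value of the objective (a restatement of the lecture's result):

```latex
D^{*}_{G}(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_{\theta}(x)},
\qquad
V\!\big(G, D^{*}_{G}\big)
= D_{\mathrm{KL}}\!\Big(p_{\text{data}} \,\Big\|\, \tfrac{p_{\text{data}}+p_{\theta}}{2}\Big)
+ D_{\mathrm{KL}}\!\Big(p_{\theta} \,\Big\|\, \tfrac{p_{\text{data}}+p_{\theta}}{2}\Big)
- \log 4
= 2\, D_{\mathrm{JSD}}\!\big(p_{\text{data}}, p_{\theta}\big) - \log 4 .
```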
If you look at that expression, it's basically saying that if you want to compute this Jensen-Shannon divergence between P and Q, you do the KL divergence between P and a mixture of 1/2 P and 1/2 Q, and then you do the reverse: the KL divergence between Q and that same mixture. And this has nice properties. We know KL divergence is non-negative, so a sum of two KL divergences also has to be non-negative. What is the global optimum of this? When can it be 0? [INAUDIBLE] Yeah, so it also has the nice property that the global optimum can be achieved if, and only if, the distributions are the same. It's symmetric, which is nice. Remember, KL divergence was not symmetric-- KL of P from Q is not the same as KL of Q from P. This is symmetrized, basically, by definition. And its square root also satisfies a triangle inequality, but that's not super important. And so what this means is that if somehow you can optimize this quantity here as a function of G-- so if you minimize this V as a function of G, which is what you do here on the outside-- you will basically choose a model distribution that matches the data distribution exactly. So the global optimum is the same as what you would get with KL divergence, and you would get that optimal loss. So the summary is that, basically, as a recap, what we're doing is we're changing the way we're comparing the data distribution and the model distribution. And we choose this similarity based on a two sample test statistic, and the statistic is obtained by optimizing a classifier. And under ideal conditions-- so that the classifier can basically be optimal, which in practice it's not going to be, because if you use a neural network, it might not be able to learn that density ratio exactly-- this basically corresponds to not using KL divergence here, and instead using this Jensen-Shannon divergence to compare the data to the model. And the pros are that, to evaluate the loss and optimize it, you only need samples from P theta. You don't need to evaluate likelihoods, which is great, because that means you don't have the restrictions of autoregressive models or normalizing flows. You don't have to worry about that, and you get lots of flexibility in choosing the architecture for the generator. Basically, it just has to define a valid sampling procedure, which is essentially always the case: if you feed random noise into a neural network, you get a valid sampling procedure. That's really all you need. And it's fast sampling, because you can generate a sample in a single pass through the generator. So it's not like autoregressive models-- one variable at a time. Everything is generated in a single pass. The con is that it's very difficult to actually train in practice, because you have this minimax optimization problem. And so in practice what you would have to do is something like this. You would start with a mini batch of training examples. Then you would sample noise vectors from the prior of the generator, and you would pass these noise vectors through G to generate m fake samples. And then you basically have these two mini batches: you have m real data points and m fake data points, which is what you get by passing the zi through the generator. And then what you do is you try to optimize your classifier-- your discriminator-- to maximize that objective, which is just the usual training of a classifier.
You just try to take a step in a gradient ascent in this case-- step on that objective function to try to improve this optimization objective to do better, basically, at classifying-- at distinguishing real data from fake data. And as it was mentioned before, then the generator is also trying to [INAUDIBLE]---- is also looking at the same objective function, but it has an opposite objective. The generator is trying to minimize that objective function. You can still do gradient descent. And what you do is you compute the gradient of this quantity with respect to theta, which are the generator parameters. And the first term does not depend on theta. That's just the data. So you cannot change what the data looks like, but what you can do is you can try to adjust the parameters of G-- so the parameters of the generator. So that the samples that you produce by passing random noise through G kind of look like the real data as measured by this discriminator D phi, which is what you get by taking this kind of gradient descent step on that objective. For that [INAUDIBLE] the gradient descent step, do we use the updated phi or the previous [? one? ?] Yeah, so it's unfortunately very tricky to train this, and this is not guaranteed to converge. And it can be-- in practice, you would use the new phi, and you would do-- you would keep going and trying to playing this game where each player is trying to play a little bit better and hope that it converges. You repeat this and hope that something good happens. And here is kind of a visualization of what happens if you do this. So what's happening here is you can imagine there are a bunch of z vectors that are then mapped by G to different locations. So here, z and x are just one-dimensional. And that is giving us a distribution, which is this green curve. So you see that most of the samples from the generator end up here. And then there is a data distribution, which is just this red curve. That's fixed. It is whatever it is. And then let's say you start with a discriminator, which is not very good, and it's this wiggly kind of blue line here. Now, given that you have a bunch of red samples and you have a bunch of green samples-- so real samples and fake ones coming from the current generator-- what you would do is you would try to come up with a good classifier. And the better classifier that you get after you update the discriminator is this blue curve. So as you can see, if x's are coming from this left side, then they're probably coming from the real data distribution. There is no chance they come from the generator, because the generator has very low probability here. The data is pretty high. And so the discriminator should say, samples around here should have high probability of being real-- samples around here should have high probability of being fake or low probability of being real. And then here in between, it's unclear. And then you just decrease the probability as you move towards the right. So that's what happens when you optimize phi. You basically come up with a better discriminator. Once you have, the better discriminator, you can try to update the generator to fool this discriminator. And so what would happen is you would change these arrows here, which are basically the G, which is telling you how you map the random noise from the prior-- the z to [? axis ?] that you like. And in particular, if you're trying to fool the discriminator, you are going to try to shift probability mass to the left. 
And so you might end up with a new generator that looks like this, and that confuses the discriminator more, because you can see it's overlapping more with the red curve, and the discriminator is going to have a hard time trying to distinguish these two samples. And then you keep going until you reach, hopefully, this kind of convergence, where the discriminator where the generator matches the data distribution. So the green and the red curves are overlapping. They're identical. And the discriminator is maximally confused and is producing 1/2 everywhere, because it cannot do better than chance at that point. So in terms of actual implementation, I guess this would not look like a nested loop. This is actually like a single for loop in which you are doing alternate [? updates. ?] Yeah. And so in-- as part of what you need to do, the job of the discriminator is to basically look at a bunch of samples, like these 2. And you need to decide which one is real and which one is fake, essentially. And so which one do you think is real and which one is fake? First one. It's actually both are fake. [CHUCKLING] Yeah. Wow. So yeah, and these kind of technologies-- these improved a lot over the years. You can see from 2014 all the way to 2018, and there are even better improvements now. Very successful in a lot of tasks, but it's very challenging in practice to get them to work because of the unstable optimization. [INAUDIBLE] generates? Just a random Gaussian noise [INAUDIBLE].. There's no caption here. Yeah, it's just random noise, and then you pass them through a neural network that turns them into images. Yeah, and there are several problems with GANs. The first one is unstable optimization kind of like-- because you have this minimax objective. It's very tricky to train them. It's very hard to even know when to stop, because it's no longer like likelihood. And you can see it going down, and at some point, you can just stop when-- or up, let's say. Or you're maximizing likelihood. You can see it goes, up. And at some point, you see it's not improving anymore. You can stop. Here is no longer the case that when to stop basically. And it can have this problem called mode collapse, which we'll see basically while KL divergence is mode covering will try to put probability mass everywhere, because otherwise you get a huge penalty if something is possible under Pdata. But you put 0 probability, then you have infinite penalty. This GAN tend to be much more mode seeking, and so they might just focus on a few types of data points and completely stop generating other kinds of data points that are present in the training set. And so in practice, you need a lot of tricks to train these models. And I'm going to point you to some reference for how this-- yeah, where you can see some of them. I mean, in theory under some very unrealistic assumptions, that kind of procedure where you do updates on the discriminator and the generator-- or at every step or like you find the optimal discriminator-- is supposed to work. In practice, it doesn't. In practice, what you see is that the loss keeps oscillating during training. So it might look something like this, where you have the generator loss, which is the green one. The discriminator loss, and the two types of samples the real and the fake ones. You can see it kind of keeps oscillating, because you're not reaching kind of like the-- yeah, it doesn't converge, basically, through this gradient procedure. And there is no robust stopping criteria. 
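For reference, the alternating procedure described above, written as a single loop. This is a sketch assuming PyTorch and the hypothetical `Generator` interface sketched earlier; the hyperparameters are placeholders, and the non-saturating generator loss mentioned in the comments is a common trick from the original GAN paper rather than something this lecture spells out.

```python
import torch

def train_gan(G, D, data_loader, n_epochs=10, lr=2e-4):
    g_opt = torch.optim.Adam(G.parameters(), lr=lr, betas=(0.5, 0.999))
    d_opt = torch.optim.Adam(D.parameters(), lr=lr, betas=(0.5, 0.999))
    logsig = torch.nn.functional.logsigmoid
    for epoch in range(n_epochs):
        for x_real in data_loader:                  # minibatch of m real examples
            m = x_real.shape[0]
            x_fake = G.sample(m)                    # m fake examples, z ~ prior

            # 1) Gradient ascent on V w.r.t. phi (discriminator): minimize -V.
            d_opt.zero_grad()
            d_loss = -(logsig(D(x_real)).mean() + logsig(-D(x_fake.detach())).mean())
            d_loss.backward()
            d_opt.step()

            # 2) Gradient descent on V w.r.t. theta (generator); only the fake term
            #    depends on theta. In practice, people often maximize log D(G(z))
            #    instead (the "non-saturating" loss), since it gives stronger
            #    gradients early in training.
            g_opt.zero_grad()
            g_loss = logsig(-D(G.sample(m))).mean()  # minimize E[log(1 - D(G(z)))]
            g_loss.backward()
            g_opt.step()
    return G, D
```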
You don't know when should you stop. You don't really know. The only thing you can do is you look at the samples, and kind of see it's doing something meaningful, and then you just stop. But it's a very hard to come up with a principled way of deciding when to stop the training. And so the other problem is that you have mode collapse, which is this problem that, again, the generator is basically collapsing on a few types of samples, and it doesn't generate the other ones. And you can see an example here, where if you look at the samples, it really likes this type of data points, and it keeps generating it over and over. And you can see a more toy example that gives you a sense of what's going on. Like imagine the data distribution is just kind of like a mixture of a bunch of Gaussians that are in 2D, and they are kind of like lying on this circle. And then what happens is that as you train your generator, it kind of keeps moving around. So maybe at some point it produces one of the modes. And then the discriminator is doing a good job at distinguishing what it does from the real data. Then the generator is moving the probability [? mass ?] on a different mode. And it keeps moving around, but it never actually covers all of them at the same time. [INAUDIBLE] --fit into one mode. Yeah, so there are all kinds of tricks that you have to use. And here in as another example on [? MNIST, ?] where you can see how it's kind of like collapsing on generating one particular digit, and it kind of like stops learning. And there's this great post-- blog post, ganhacks, where you can see all sorts of hacks that you can use to get GANs to work in practice. And there are all kinds of techniques, including noise and various tricks that you can look up on that website. Unfortunately, it's all very empirical. There is nothing that is guaranteed to work, and you kind of have to try. And there are better architectures that are-- tricks that sometimes works and sometimes don't. But I would say the fact that these models are so hard to train is why they are no longer kind of state of the art, and people have kind of like largely given up on GANs, and people are using diffusion models instead, because they are much easier to train, and they have a clean loss that you can evaluate during training, and you know how to stop, and there is no instability. It's just a single optimization problem that you have to do. And I would say this is the main reason GANs are no longer so much in fashion anymore. But people are still using them, and it's a powerful idea and it might come back. But I think that's the main drawback-- very, very hard to train. [INAUDIBLE] the loss of [INAUDIBLE]?? It seems like the generative model never really sees the actual data. It seems like it's being trained in a very-- signal from the discriminator. So it's like [INAUDIBLE]. Yeah, so the discriminator is seeing the data, and then you-- the generator is kind of learning from what the discriminator has learned about the data, essentially. So it's a very narrow signal. It's a very narrow signal. Yeah, and that's why you also keep moving around, because then depending on what kind of features the discriminator is looking for, you might kind of try to catch up with those. But then it just keep changing, and then you never really converge to anything. Can you combine GAN with the standard generative model and hope for the best? 
Yeah, in fact, there were even recent papers at ICML this year where they were taking a diffusion model, and then they had a clever way of incorporating a discriminator to improve performance. And often, if you can throw in a little bit of a discriminator to compare samples in a meaningful way, that helps. So yeah, that's why we're still talking about this idea, because it's actually powerful, and you can use it in combination with other existing models. And it's still valuable for other things. Yeah? Would it be a good idea to initialize the generative model here with a pre-trained VAE, or would you prefer to train it from scratch when you have enough data? People usually train it from scratch. Yeah. And I think we're pretty much at the end. But this, I think, was the first piece of art generated by a generative model to be auctioned at Christie's a few years ago. This is a painting generated by a GAN, one of the best models at the time. I think it was expected to sell for something in that range, and I think it went for almost half a million dollars. [CHUCKLING] So yeah. [INAUDIBLE] Oh, yeah, it was a novelty kind of thing. So yeah, cool. I think that's it.
Stanford CS236: Deep Generative Models (2023), Stefano Ermon
Lecture 7: Normalizing Flows
So the plan for today is to finish up the VAE slides that we didn't cover on Monday. And then we'll start talking about flow models, which are going to be yet another class of generative models with different sorts of trade-offs. So the thing that I really wanted to talk about is this interpretation of a variational autoencoder, or VAE, as an autoencoder. So we've derived it just from the perspective of, OK, there is a latent variable model. And then there is this variational inference technique for training the model, where you have the decoder, which defines the generative process, p. And then you have this encoder network q that is used to essentially output variational parameters that are supposed to give you a decent approximation of the posterior under the true generative model. And we've come up with this kind of training objective, where for whatever data point, you have a function that depends on the parameters of the decoder, the real generative model, theta, and the encoder, phi. And we've seen that this objective function is a lower bound to the true marginal probability of a data point. And it kind of makes sense to try to jointly maximize this as a function of both theta and phi. And you can kind of see intuitively what's going on here. We're saying that for every data point x, we're going to use q to try to guess possible completions, possible values for the latent variable z. So that's why there's an expectation with respect to this distribution. And then we basically look at the log likelihood of the data point after we've guessed what we don't know using this inference distribution, this encoder, this q distribution. And if you were to just optimize these first two pieces, essentially, q would be incentivized to try to find completions that are most likely under the original generative model. And instead, there is also this regularizer, this other term here, where we also look at the probability of the completions under q. And this basically corresponds to the entropy of the variational distribution q, a term that is encouraging the distribution q, the inference distribution, to spread out its probability mass. So not just try to find the most likely z, but also try to find all possible z's that are consistent with the x that you have access to. And we have seen that, to some extent, if your q is sufficiently flexible, then it's actually able to be equal to the true conditional distribution p of z given x. And then this objective function actually becomes exactly the log marginal probability over x, which is the traditional maximum likelihood objective. And so we've motivated it from that perspective and everything made sense. We haven't really discussed why it's called a variational autoencoder-- like, what's the autoencoding flavor here? And we can see it if you unpack this loss a little bit. In particular, what you can do is you can add and subtract the prior distribution over the latent variables that you used in your generative model, which, recall, usually is just a Gaussian distribution over z. So in your variational autoencoder, you sample a latent variable according to some prior p of z, then you feed the z into the decoder that produces parameters for p of x given z, and then you sample from p of x given z.
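For reference, the training objective being recapped here is the evidence lower bound (ELBO), written in standard notation since the slide is not reproduced in the transcript:

\mathcal{L}(x; \theta, \phi) \;=\; \mathbb{E}_{z \sim q_{\phi}(z \mid x)}\big[\log p_{\theta}(x, z) - \log q_{\phi}(z \mid x)\big] \;\le\; \log p_{\theta}(x)

The first term inside the expectation is the "find completions that are likely under the generative model" piece, the second is the entropy-like piece that pushes q to spread out its probability mass, and the bound is tight exactly when q_phi(z|x) equals the true posterior p_theta(z|x).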
So if you add and subtract this quantity in here, then you end up, and then you look at the joint over x and z, divided by the marginal over z is just the conditional distribution of x given z, which is just the decoder. And then you can see that you end up with another term here, which is the KL divergence between the inference distribution and the prior. And so what does this objective look like if you were to actually evaluate it and do some kind of Monte Carlo approximation? What you would do is you would have some data point, which gives you the x component. So it could be an image like the one you see on the left. That's the input. That's the i-th data point. Then when you want to compute this expectation with respect to q, what you would do is you can approximate that by Monte Carlo. So what you would do is you would draw a sample from q of z, given x. And recall that q of z, given x is just some other neural network that basically takes x as an input, you feed it in. And then as an output, some variational parameters over the distribution, over the latent variables. And so if q of z, given x describes Gaussian distributions, the output of this first neural network, which is the encoder, might be a mu and sigma, which basically defines the kind of Gaussian you're going to use to guess what are likely, what are reasonable values of the latent variables, given what you know, given xi. And then what you could do is you could sample from this distribution. So you sample with a Gaussian, with mean, and variance, defined by what you get by fitting the image through an encoder. Then we can look at-- so yeah, this is what I just said. So there is this encoder, one neural network that would give you parameters, and then you sample from that Gaussian distribution. Then we can essentially look at the first term here of the loss, which you can think of it as a reconstruction loss. So essentially, what we're doing is we're evaluating p of xi, given this latent variable z that we've sampled. And essentially, what we're saying is we are-- if you were to sample from this distribution, you would sample a data point from a Gaussian with parameters given by what you get from the decoder. And that would essentially kind of like produce another image out. And if you actually look at this likelihood term here, it would essentially tell you how likely was the original data point according to this scheme. And so it's kind of like if p of x, given z is a Gaussian, it's some kind of reconstruction loss that tells you how well can you reconstruct the original image given this latent variable z. And so the first term has some kind of autoencoding flavor. And if you didn't have the second term, it would essentially correspond to an autoencoder that is a little bit stochastic. So in a typical autoencoder, you would take an input, you would map it to a vector in a deterministic way. Then you would try to go from the vector back to the input. This is kind of like a stochastic autoencoder, where you take an input, you map it to a distribution over latent variables, and then these latent variables that you sample from the distribution should be useful, should be good at reconstructing the original input. And so yeah, the first term, essentially, encourages that what you get by feeding these latent variables, like you kind of like these autoencoding objective. So like the output that you get is similar to the input that you feed in. 
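As a concrete sketch of the procedure just described, here is a minimal PyTorch-style single-sample estimate of this loss: encode to get mu and sigma, draw one z with the Gaussian reparameterization, score the reconstruction under p(x|z), and add the KL term against the prior. The layer sizes and names are made up, and the Bernoulli decoder with a single Monte Carlo sample is just one common instantiation.

import torch
import torch.nn as nn
import torch.nn.functional as F

x_dim, z_dim, h_dim = 784, 20, 400
enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, 2 * z_dim))  # outputs (mu, log-variance)
dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))      # outputs Bernoulli logits

def negative_elbo(x):  # x assumed to be in [0, 1], shape (batch, 784)
    # q(z|x): a Gaussian whose parameters come from the encoder.
    mu, logvar = enc(x).chunk(2, dim=-1)
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)  # one reparameterized sample from q(z|x)

    # Reconstruction term: -log p(x|z) under a Bernoulli decoder (the autoencoding loss).
    recon = F.binary_cross_entropy_with_logits(dec(z), x, reduction="sum")

    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian q.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

    return recon + kl  # minimizing this is maximizing the ELBO

def sample(n):
    # Generation: ignore the encoder, draw z from the prior, decode.
    z = torch.randn(n, z_dim)
    return torch.sigmoid(dec(z))

The sample function is exactly the point that comes up below: at generation time the encoder is thrown away and the z's come from the prior instead.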
The reconstruction part, I'm curious, like what in the training objective causes are hidden representation to resemble a Gaussian? Yeah. So this is just the first term. So if you were to just do that, that's a fine way of training a model. And you would get some kind of autoencoder. Now, there is a second term here. That is this KL divergence term between q and the prior distribution that we used to define the VA. That term, so that's the autoencoding loss. The second term is basically encouraging this latent variables that you generate through the encoder to be distributed similar as measured by KL divergence to this Gaussian distribution that we use in the generative process. And so this is kind of like saying that not only you should be able to reconstruct well, but the kind of latent variables that you use to reconstruct should be distributed as a Gaussian random variable. And if that's the case, then you kind of see why we would get a generative model this way. Because if you just have the first piece, where you have an autoencoder, that's great. But you don't know how to generate new data points. But if you somehow have a way of generating z's just by sampling from a Gaussian or by sampling from a simple distribution, then you can kind of trick the decoder to generate reasonable samples. Because it has been trained to reconstruct images when the z's came from the-- were produced by the encoder. And now if these z's have some simple distribution, and so you have some way of generating the z's yourself just by sampling from a Gaussian, then you essentially have a generative model. And that's why it's called a variational autoencoder because you can think of it as an autoencoder that is regularized so that the latent variables have a specific shape, have a particular kind of distribution, which is just the prior of your VA. So that you can also generate-- you can use it as a generative model, essentially. So the classic setting with the autoencoder. It can only reconstruct images or inputs that it has seen before. Is that correct. Well, if you train an autoencoder, you train it on a training set, and then you hope that it generalizes. So you would hope that it might still be able to reconstruct images that are similar to the ones you've seen during training. And that would still be achieved by this first term to the extent that the model generalizes, which is always a bit tricky to quantify, but to the extent that the autoencoder generalizes, it's fine. But you still don't have a way of generating fresh data points because you don't have a way to start the process. The process always starts from data and produces data out. But somehow, you have to hijack this process and fit in latent variables by sampling from this prior distribution. And this term here, this KL divergence term here, encourages the fact that that's not going to cause a lot of trouble because the z's that you get by sampling from the prior are similar to the ones that you've seen when you train the autoencoder. Yeah. Kind of comment on that. So like for autoencoder, essentially, when we train, once it's trained and we just have a deterministic model, whenever we give our images and then sort of just compress the information to this lower dimension. No, it's not. So the question is whether the after training-- in a regular autoencoder or a variational autoencoder? Like the regular autoencoder. Regular autoencoder, yes, it's deterministic. Yes. But this [INAUDIBLE] sample from this distribution. Yeah. 
So this is a stochastic autoencoder in the sense that the mapping here. Q is stochastic. I guess, technically, you could make it very almost deterministic. Like you're allowed to choose any distribution you want, but that might not be the optimal way because there could be uncertainty over. Recall that this q should be close to the true conditional distribution of z, given x, under p. And so to the extent that you believe that conditional is not very concentrated, then you might want to use a q that is also somehow capturing that uncertainty. Yeah. It seems like this [INAUDIBLE] encouraging good samples and your second term is discouraging it from going away from the existing distribution. And I know you mentioned previously that the reinforcing might be using a reinforce argument. Does that also lead to this thing? So the reinforce algorithm is just a way to a different optimization algorithm for this loss. That works more generally, like for an arbitrary q. And it works for cases when the latent variable z, for example, are discrete. There is some similarity to the [INAUDIBLE] thing in the sense that one also has this flavor of kind of like optimizing a reward subject to some KL constraint. So it has that flavor of regularizing something. And so if you were to just optimize the first piece, it would not be useful as a generative model or not necessarily. And then you have to add this regularization term to allow you to do something. But it's not the [INAUDIBLE] case, where both p and q are generative models. This is slightly different in the sense that we're just regularizing the latent space, essentially, of an autoencoder. I'm just trying to understand, like for the second term, the KL divergence term, and there's-- what's the rationale between forcing the q. Because in this example, maybe the x is the observable part of the digit and the z is the unobservable part. And knowing x will give strong indication of what z is or what is the rationale behind forcing the q of z, given x, to be very close to unconditional p of z. Yeah. So the reason we're doing this is to basically be allowed to then essentially generate fresh latent variables by sampling from the prior without actually needing an x and feed it into the. So that's what allows us to basically use this generative model. I think what you are alluding to is that it would seem like maybe it would make sense to compare the marginal distribution of z under q to the marginal distribution of z under p. That would be a very reasonable objective too. It's just not tractable. And so meaning that again, you end up with some kind of very hard sort of like integral that you cannot necessarily evaluate. But there are other ways to enforce this. You can use discriminators to-- there are different flavors. The VAE uses this particular kind of regularization. It's not the only way to achieve this kind of behavior. So in the inference like when generating, how do we get the-- like how do we sample from it. Because we don't have the x. Exactly. Exactly. So for sampling, we don't have the x, so you cannot just use both the encoder and the decoder. So to sample, recall, we only have the decoder. So to generate samples, you don't need the encoder anymore. And the difference is that the z's-- during training, the z's are produced by encoding real data points. During sampling, during inference time, the Z's are produced just by sampling from this prior distribution p of z. [INAUDIBLE] p of z also have a neural network like you have to-- P of z, no. 
P of z is something super simple. Now VAE could be just a Gaussian distribution with 0 mean and identity covariance. That's kind of like that simple prior that we always use. So recall the sampling procedure is you sample z from a simple prior, then you feed it through this neural network, the decoder to get parameters. OK. OK. Just-- OK. Got you. Yeah. How close does p of z actually have to be for a Gaussian? I know Monday, you mentioned that we use-- that it's a Gaussian to do the parametrization. But let's say, it's like financial data where the tails are bigger. Would this still approach or you have to use a different approach to model? Yeah. So the extent that this works depends. Again, it's kind of related to the KL divergence between the true posterior and the approximate posterior. Like if you believe that the approximate, the true posterior is not Gaussian, it's something complicated, then you might want to use a more flexible distribution for q or something with heavy tails. So there is a lot of degrees of freedom in designing the model. I think that understanding how the ELBO is derived tells you what should work or shouldn't work. But yeah, it doesn't have to be Gaussian. That's just like the simplest instantiation. But there's a lot of freedom in terms of choosing the different pieces. Who was first? Yeah, go ahead. So why is the first term called autoencoding loss? I thought the [INAUDIBLE] is the log likelihood not the loss function. The first term is basically an autoencoding loss because it's saying that if you think about it, you are saying, you fit in an x, and then you check-- you produce a z, and then you check how likely is the original x, given that z, which if p of x, given z, is a Gaussian, it's basically some kind of L2 loss basically between what you feed in and what you get out, essentially. So in that sense, it's an autoencoding loss. But the true loss that we optimize is not just that. It's this ELBO L, which is the auto encoding loss plus regularization. Because we want to use it as a generative model. So since we're using the KL divergence between the conditional probability of q with the unconditional probability of p, wouldn't that just encourage the encoder model to generate the same distribution like for every-- Yes. So there is that. Yeah, that's a valid point. Like it's a pretty strong kind of regularization. And that is kind of like forcing it to try to do as well as it can to generate the same. Then there is also this other term that is sort of like forcing you to try to find different representation for different kinds of inputs. So you can do a good job at reconstructing them. So these two terms are kind of like fighting with each other. And you try to find the best solution you can. So if we just heard about the generative aspect, because it seems like if we just chose the mean instead of just 0 and 1. When we start sampling from this, we're just going to reconstruct it. So my question is, should we just not start with random noise and then reconstruct it and just forget about all the encoding of it? So they are suggesting a different kind of training objective, where we would sample fresh from the path? So if we're going to essentially take the latence and make it go between standard deviation of 0 and 1, then why don't we just start with that to begin with? You could. 
Basically, I think that would end up being something very similar as one of the original kind of like Monte Carlo approximation to the marginal likelihood, where you would just guess the z's and try to see what is the likelihood that you get as a result. And the problem is that most z's wouldn't make sense. And so that yeah, it would be potentially-- if I'm understanding correctly, you could probably cook up something that would be-- that might work if the z's are sufficiently low dimensional. But I think the problem is that if you just sample the z's just from the prior, they might not be consistent. So when we first started learning models, it is motivated that z's will refer to some features. So in real life, if we train a model and then I figured out what z or what the z is some image lives to, do we just look at it qualitatively? If I change this, what is happening in the image, and maybe then, I can get some social features. Yeah. So if you want to interpret the meaning of the z's, what you could do is you could, let's say, start with an image or even start from a random z, and then see what you get as an output. And then can try to change one axis, one of the latent variables, which recall z is a vector. So there's multiple ones. That you try to see if I change one or do I get maybe thicker digits or maybe I change the position of the digit, that was one of the factors of variation in the data. And nothing guarantees that that happens. But we'll see in the next slide that it kind of has the right-- it's encouraging something similar. So once we're done training and now we want to generate stuff, can you repeat again what you exactly said? Like you sample from q of z again or what you do? Yeah. So at generation time, the q can be ignored. You can throw away the q. And what you do is instead of generating the z's by sampling from q, which is what you would do during training, you generate the z's by sampling from p, which is the prior VA. So instead of going from kind of like left to right in this computational graph, you start in the middle and you generate the z's by sampling from the prior. And we do have the prior as part-- That's part of the generative model. And this term here encourages the fact that what the z's that you get by going from left to right versus just injecting them by sampling from the prior are similar. So you might expect similar behavior. So if the approximate posterior goes-- there's a problem. If the approximate posterior goes too close to the prior, I recall that there's a phenomenon called posterior collapse. And I remember that a few years ago that people tried to have a minimum distance. So let's say, we wanted the approximate posterior should not get too close to the prior as in any sort of progress on this. Yeah. So that's like if the posterior here is too close to the prior, then you're kind of ignoring the x, which might not be what you want because recall that we're trying to find good latent representations of the data. And so if there is zero mutual information between the z and the x, maybe that's not what you want. On the other hand, you can only achieve that if somehow, you're not really leveraging the mixture, all the kind of mixtures that you have access to when modeling the data. And so you can encourage, you can avoid that kind of behavior by choosing simple p of x, given z. Because then you're forced to use the z's to model different data points. 
Like if p of x, given z, is already a very powerful autoregressive model, then you don't need a mixture of complicated autoregressive models. You can use the same z's to model the entire data set, and then you're not going to use the latent variables. And you're going to have exactly that problem, where you can just choose this q to be just the prior, ignore the x completely. And everything would work because you're ignoring the z, you're not using it at all. And there are ways to try to encourage the VAE have more or less mutual information with respect between the x and the z. Sometimes, you want more mutual information. You want the latent variables to be highly informative about the inputs. Sometimes you want to discard information. Maybe you have sensitive attributes and you don't want the latent representations to capture sensitive attributes that you have in the data. And so maybe you want to reduce the mutual information. So there are flavors of this training objective, where you can encourage more or less mutual information between the latent variables and the observed ones. We are not really training p of z [INAUDIBLE],, right? P of z is not trained here. P of z is fixed. You could, but here, it's not. Right. So then maybe that's another way of thinking about what a variational autoencoder is doing that kind of gets at the compression kind of behavior and why we're sort of like discovering a latent structure that might be meaningful. Like, you can imagine this kind of setup where Alice is an astronaut and she goes on a space mission and she needs to send images back to earth, back to Bob. And the images are too big. And so maybe the only thing that she can do is she can only send one bit of information or just a single real number or something like that. And so the way she does it is by using this encoder, q. And given an image, she basically compresses it by obtaining a compact representation, z. And so if you imagine that z is just a single binary variable, then you can either map an image to a 0 or a 1. So you can only send one bit, that's the only thing you can do. If z is a real number, then you can map different images to different real numbers. But the only thing you can send to Bob is a single real number. And then what Bob does is Bob tries to reconstruct the original image. And you do that through this decoder, this decompressor, which tries to infer x, given the message that he receives. And if you think about this kind of scheme will work. Well, if this autoencoding loss-- well, if the loss is low. If this term is large, then it means that Bob is actually pretty-- is doing a pretty good job at assigning high probability to the image that Alice was sending, given the message that he receives. There's not a lot of information lost by sending the messages through by compressing down to a single z variable. And you can kind of imagine that, if you can only send maybe one bit of information, then there's going to be some loss of information. But you can what you're going to try to do is you're going to try to cluster together images that look similar. And you only have two groups of images, and you take one group and you say, OK, these are the zero bit. The other group is going to be the one bit. And that's the best you can do with that kind of setup. And so the fact that this is small, it's kind of like forcing you to maybe discover features. You might say, OK, there is a dog, it's running with a Frisbee. There's grass. 
That's a more compact kind of representation of the input that comes in. And that's the z variable. And the term, this KL divergence term is basically forcing the distribution of our messages to have a specific distribution. And if this term is small, then it means that basically Bob can kind of like generate messages by himself without actually receiving them from Alice. He can just sample from the prior, generate a message that looks realistic because it's very close in distribution to what Alice could have sent him. And then by just decoding that, he generates images. So instead of receiving the messages, the descriptions from Alice, it just generates the description himself by sampling from the prior. And that's how you generate images. And that's really what the objective is doing. When you're training, how do you know-- like, how do you compute this divergence? I feel like it assumes p of z. Yeah. How do you compute the divergence? So recall that this is just the ELBO. So I'm just rewriting the ELBO in a slightly different way. But if you look at the first line, everything is computable, everything is tractable. Everything is the same thing we derived before. OK. Questions on this? Yeah. So like, let's say, you have a data set with 1,000 dogs and five cats, would your latent representation start allocating more bits to, I guess, the class you're seeing more of? So you're better able to reconstruct that or you have to manually-- In that case, if you have a lot more data points belonging to some class, it would pay more attention to those because you're going to incur you're going to see them often and so you want to be able to encode them well. So if something is very rare, you never see it. You don't care about encoding it particularly well because you just care about the average performance across the data set. Got you. And I guess, the flip side to that question is like, let's say, there is a feature for some semantic attribute that you want to pay more attention to. I'm just wondering if there's work done so that the model specifically focuses on those portions of the image more than other portions. Yeah. So there's different ways to do it. Like one is if you know what you care about, you could try to change this reconstruction laws to pay more attention to things that matters, because right now the reconstruction loss is just L2, which might not be what you want. Maybe you have-- there are some features you care more. So you can change the reconstruction loss to pay more attention to those things. And that's the same thing as changing the shape of this distribution, essentially. To say, OK, I care more about certain things versus others. The other thing you can do is if you have labels and you know-- because right now, this is kind of made up. There is no-- it's discovered whatever it discovers. There is no guarantee that it finds anything semantically meaningful. So the only way to force that is if you have somehow labeled data and you know somebody is captioning the images for you or something, then you can try to change the training objective and make sure that when you know what the values of the z variables is, then your life is easier. You can just do maximum likelihood on those. That's going to force the model to use them in a certain way. So say, when training, should we always sample the most likely z or the sampling just sort of randomly help [INAUDIBLE]. Yeah. So the question is whether we should always choose the most likely z. 
And if you think about the ELBO derivation, the answer is no. You always want to sample according to the p of z, given x. So you would like to really invert the generative process and try to find z's that are likely under that posterior, which is intractable to compute. But we know that that will be the optimal choice for q. I was saying like should you always sample the most likely seed based on the encoder. We shouldn't. The objective is just to sample from it. Because there could be many. And there might be many other possible explanations or possible completions. And you will really want to cover all of them. So for each input x, is it better practice to sample multiple and do multiple generations? Yeah. So the question is, should you get more than one? Yes, in the sense that just like it's Monte Carlo. So the more z's you get, the more samples you get. Recall, you really want an expectation here. We cannot do the expectation. You can only approximate it with a sample average. The more samples you have in your average, the closer it is to the true expected value. So the better, more accurate of an estimate you have of the loss and the gradient. But it's going to be more expensive. So in practice, you might want to just choose one. Yeah. I'm just curious about if I have that much data, can I use like reconstruction to the training data to start out at all? So you would augment the training data with samples from the model, essentially? That's something that people are starting to explore using synthetic data to train generative models. And there is some theoretical studies showing what happens if you start using synthetic data and put it in the training set. And there are some theoretical results showing that under some assumptions, this procedure diverges. And this, I think is called generative models going mad or something. And meaning that bad things happen if you start doing that kind of thing. But it's under some assumptions that are not really very realistic in practice. So it's not clear. I guess, a question to the previous question that was asked, you said, we don't do an important sampling during inference time. We don't sample from the more likely outcomes. We sample from the conditional. We don't pick the most likely z. I see. So we're still more likely to pick the more likely. Yeah. The more likely they are, the more likely we pick them. But we don't just pick the most likely one. We just sample the distribution. OK. So now the other thing I wanted to talk about today is start talking about flow models, which is another variant, another way of going around the intractable, kind of like marginal probability in a latent variable model. So far, we've seen autoregressive models, we've seen variational autoencoders, where the marginal probability, marginal likelihood is given by this mixture model, the integral over the latent variables. And we've seen that autoregressive models are nice because you don't have to use variational inference. You directly have access to the probability of the data. And you don't have to deal with this encoders and decoders. And VAEs are nice because, well, you get the representation. Z, and you can actually define pretty flexible marginal distributions. You can generate in one shot. So they have some advantages that autoregressive models don't have. But the challenge with the latent variable model was that, well, we cannot evaluate this marginal probability. And so training was a pain and we have to come up with the ELBO. 
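The intractable quantity being referred to here, in standard notation, is the marginal likelihood of the latent variable model:

p_{\theta}(x) \;=\; \int p_{\theta}(x, z)\, dz \;=\; \int p(z)\, p_{\theta}(x \mid z)\, dz

Flow models, introduced next, are built so that this marginal can be evaluated exactly without the integral.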
And so what flow models do is it's a type of latent variable model, kind of like a VAE that has spatial structure so that you don't need to do variational inference. And you can train them in a more direct way. So it's actually very efficient to evaluate the probability of a observed data x even though you have latent variables. And so which means that you can train them by maximum likelihood. And so the kind of idea is that we would like to have a model distribution over the visible data, over the observed data that it's easy to evaluate and easy to sample from. Because then, we can generate efficiently at inference time. And we know that there is many simple distributions that would satisfy these properties like a Gaussian distribution or a uniform distribution. But what we want is some kind of distribution that has a complicated shape, kind of like the one you see here. So here, the colors represent probability, density, mass. And so if you have a Gaussian and it has this relatively simple shape, where all the probability mass is sort of like centered around the mean. And so if you think about modeling images with something like a Gaussian, it's not going to work very well because there's only going to be a single point and all the probability mass is shaped around it. Think about a uniform distribution. Again, it's not going to be very, very practical to model real data. And you want something much more multimodal, something that looks like this, where you can have a probability mass somewhere and then have no [INAUDIBLE],, the probability goes, decreases a lot and then goes up and then decreases like you want complicated kind of shapes for this p theta of x, which is kind of like the same reason we were using mixtures as one way of achieving this kind of complicated shapes by taking a mixture of many simple distributions. The way flow models do is they instead try to transform essentially simple distribution into more complicated ones by applying invertible transformations. And that's essentially of like a variational autoencoder. So it's going to be the same kind of generative model, where you have a latent variable z that you sample from. And that's again, sampled from a simple prior distribution like a unit Gaussian, fixed Gaussian with fixed mean and some kind of, let's say identity covariance. And then you transform it. In a VAE, what you would do is you would compute the conditional distribution of x given z by passing z through some two neural networks. And what we've seen is that this is one way of getting a potentially complicated marginal distribution because of this mixture in behavior. But you have this issue that when you want to evaluate the probability of one x, you have to go through all possible z's to figure out which ones are likely to produce that particular image, let's say, x that you have access to. And this enumeration, over all possible z's, is the tricky part, is the hard part. And there could be multiple z's that produce the image or even just finding which z is producing that is likely to have produced the image x that you have access to is not easy. And the way the VAE is get around this is by using the encoder that is essentially trying to guess given the x, which z's are likely to have produced the image that you have access to. And one way to try to get around the problem by design is to construct conditionals, such that inverting them is easy. And one way to do that is to apply a deterministic invertible function to the latent variable z. 
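In symbols, the contrast being set up here is roughly the following (standard notation, not taken from the slide):

\text{VAE:}\quad z \sim p(z), \quad x \sim p_{\theta}(x \mid z) \qquad \text{(stochastic decoder)}
\text{Flow:}\quad z \sim p(z), \quad x = f_{\theta}(z), \quad f_{\theta} \text{ invertible and deterministic}

Because f_theta is invertible, every x corresponds to exactly one z = f_theta^{-1}(x), which is what removes the need to search over, or approximate, possible completions.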
So instead of passing the z through these two neural network, and then sampling from a Gaussian, defined by the neural networks, we directly transform the latent variable by applying a single deterministic invertible function. And the reason this is good is that if we do this, then it's trivial to figure out what was the z to produce that x, because we know there is only one. And the only thing you have to do is you have to be able to invert the mapping. So to the extent, that by design, this functions that we use are invertible. And they're easy to invert, hopefully, and deterministic. So that there's always only one z that could have produced any particular x. Then that kind of solves the problem at the root that we had when we were dealing with variational autoencoders. And that's really the whole idea of this class of generative models called flow models, which you can think of it as a VAE, where the mapping from z to x is deterministic and invertible, which makes, as we'll see, learning much easier. [INAUDIBLE] the z, the latent, and priors that we choose has the same dimensions with x? Yeah. So that's going to be one that will come up. But that's a great observation that if we want this mapping to be invertible, then we're sort of requiring z and x to have the same dimensionality. And so one of the things you lose if you do this is that there is no longer this idea of a compressed kind of representation because now z and x end up having the same number of dimensions. OK. So that's the high level motivation, the high level idea. Now let's see how it's actually done in practice. So just as and let's start with a simple refresher on what happens if you take random variables and you transform them through, let's say, invertible functions. And so just to start, let's say that we have a single continuous random variable x. And you might recall that one way to describe the random variable is through the CDF, the cumulative density function, which basically tells you for every scalar a, what is the probability that the random variable is less than or equal to a? And then the other way to describe the random variable is through the PDF, which is just the derivative of the CDF. And typically, we describe random variables by specifying a kind of functional form for this PDF or CDF. In the case of a Gaussian, it might look something like this. You have two parameters, the mean and the standard deviation. And then you get the shape of the PDF by applying the function. And it could be uniform, in which case, the PDF would have this kind of functional form, where if it's uniform between and b, then the PDF is 0 outside that interval. And it's 1 over the length of the interval when x is between a and b. And same thing holds when you have random vectors. So if you have a collection of random variables, we can describe this collection of random variables through the joint probability density function. And again, an example would be if these random variables are jointly distributed as a Gaussian distribution, then the PDF would have that kind of functional form. So now, x is a vector. So it's a sequence of numbers. And you can get the probability density at a particular point by evaluating this function. And again, the problem here is that this kind of simple PDFs, they are easy to evaluate, they're easy to sample from, but they are not very complicated. Like the shape is pretty simple, kind of the probability only depends on how far x is from this mean vector mu. And kind of that determines the shape. 
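For completeness, since the slide formulas are only gestured at here, the densities being referred to are the standard ones:

p(x; \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(x - \mu)^2}{2\sigma^2}\right)
\qquad
p(x; a, b) = \begin{cases} \frac{1}{b - a} & a \le x \le b \\ 0 & \text{otherwise} \end{cases}

and for a random vector, the multivariate Gaussian

p(\mathbf{x}; \boldsymbol{\mu}, \Sigma) = \frac{1}{\sqrt{(2\pi)^{n} |\Sigma|}} \exp\!\left(-\tfrac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^{\top} \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu})\right),

whose shape is controlled only by mu and Sigma, which is the limited-flexibility point being made.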
And you don't have many parameters to change the shape of this function. OK. Now let's see what happens when we transform random variables by applying functions to them. So let's say that z is a uniform random variable between 0 and 2. And pz is the density of this random variable. Now, what is the density PDF evaluated at 1? 0.5. 0.5, yeah. So 1/2. And just a sanity check, if you integrate over the PDF over the interval, 0 to 2, you get 1. This is what you want. Now, let's say that we define a new random variable x by multiplying it by 4. And so now, we have two random variables, x and z. And x is just 4x. And now let's say that we want to evaluate the-- we want to figure out what is the PDF of this new random variable that we've constructed just by multiplying by 4. And one thing you might be tempted to do is to do something like this. And this is going to be wrong. So the probability, let's say, if you want to evaluate it at 4, this kind of the probability that x takes value 4. And we know that x is 4z. And so this is kind of the probability that z takes value 1, which is what we had before, which is 1/2. And this is wrong. This is not the right answer. It's pretty clear that 4z is just going to be a uniform random variable between 0 and 8. And so the density is actually 1/8 because it has to integrate to 1 over the interval. So this kind of replacing a change of variables inside the PDF calculation is not correct. And what you have to do is you have to use the change of variable formula, which you might have seen in previous classes. The idea is that when you apply some invertible transformation. And so you define z-- x as f of z. f is invertible. And so equivalently, you can get z by applying the inverse of f to x, which we're going to denote h. So h is the inverse of f. So h applied to f is the identity function. And if you want to get the density of this random variable that we get by transforming z through this invertible mapping, it is a kind of a p of z evaluated at h of x. So that's the kind of thing we're doing before. But you have to rescale by the absolute value of the derivative of this function h. And so in the previous example, let's say, the function is just multiplying by 4. And so if you were to apply the formula in this case, the inverse function is just dividing by 4. And then the derivative of h is just one quarter. It's just a constant. And so if we want to evaluate the probability of this transformed random variable evaluated at 4, what you do is you evaluate the probability of z at 4 over 4, which is 1. But then you have to multiply by this scaling factor, which is the derivative, evaluated at 4, or the absolute value of the derivative. And this is giving us the right answer, 1/8. So this part here of 1 is kind of like the naive thing we try to do that was wrong. And it becomes right if you account for this derivative of h term. That is rescaling things. So then we get 1/8. And more interesting example could be something like if instead of multiplying by 4, we apply an exponential function. So we have z, which again is something simple, a uniform random variable between 0 and 2. But now we define x as the exponential of z. Now we can work out what is the density of this new random variable that we get through this transformation. What is the inverse of the exponential function? The log. So h of x is the log. 
And then we can apply-- if you want to evaluate the density of this random variable x at a particular point, what we do is we evaluate the density of z at the inverse, and then we scale by the derivative. So we take x, we invert it to get the corresponding z. And there is only one z that maps to that x. We evaluate the density of that z under the prior distribution p of z, and then we always rescale by this derivative. And so in this case, p of z is uniform. So it's just the 1/2 everywhere because it's uniform between 0 and 2. And then the derivative of the logarithm is 1 over x. And so now we see that the PDF of this random variable x has this more interesting shape. It's kind of 1 over 2x. So we started out with something pretty simple, just a constant basically. And by applying an invertible transformation, we got a new random variable, which is a more interesting kind of shape. Again, hopefully, this is just a recap of formulas that you've seen before, but this is kind of like a change of variable that we're doing here. And you have to account for these derivatives when you apply a change of variables here. OK. Questions on this? Does this sound familiar? OK. Now let's see. This is the formula for the 1D case. And you can see our proof, actually. It's actually pretty simple. We can work out the level of the CDFs, so the probability that this new transformed random variable is less than a particular number is just the CDF evaluated at one point. Now we know that x is just f of z.. So the probability that x is less than or equal to some number. It's the probability that f of z is less than or equal to that number. Then if you apply the inverse of f on both sides of this inequality, which you can because it's a monotonic function, then you get that expression, which is just the CDF of z, evaluated at h of x. And now just we know that the PDF is just the derivative of the CDF. So if you want to get the density of this random variable, you just take the derivative of the left hand side. Or equivalently, you can take the derivative of this expression that we have here, and then you just use chain rule. So you get exactly what we had before, where you need to evaluate the original variable at h of x. You take x, you invert it, you evaluate the density at the corresponding z point. But then because of the chain rule, you have to multiply by h prime. That's where the formula comes from. You can see you need an absolute value because I guess, it could be decreasing. And now, there's an equivalent way of writing the same expression, which will turn out to be somewhat useful. If you want to compute the derivative of the inverse of a function, you can actually get it in terms of the derivative of the original function. There is this simple calculus kind of formula that you might have seen before. So if you want to compute the derivative of the inverse of a function, which is h prime, which is what we have here, you can get it in terms of the derivative of f, which is the original function evaluated at the inverse point. So an equivalent way of writing what we have here is that you can just evaluate the original PDF at the inverse point, and then you can multiply by 1 over f prime of z, where f is the kind of forward map. So you can basically either write it in terms of the derivative of the inverse, or you can write it in terms of the derivative of the forward map and you just do 1 over instead of-- these two things are the same. Yeah. We still need absolute values. Yeah. 
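Putting the two equivalent forms just described side by side: if X = f(Z) with f invertible and h = f^{-1}, then

p_X(x) \;=\; p_Z\big(h(x)\big)\,\big|h'(x)\big| \;=\; \frac{p_Z\big(h(x)\big)}{\big|f'\big(h(x)\big)\big|}.

Plugging in the examples above: for Z uniform on [0, 2] and X = 4Z, p_X(4) = p_Z(1) \cdot \tfrac{1}{4} = \tfrac{1}{2} \cdot \tfrac{1}{4} = \tfrac{1}{8}; and for X = \exp(Z), p_X(x) = \tfrac{1}{2} \cdot \tfrac{1}{x} on the interval [1, e^{2}].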
Why would it be problematic if [INAUDIBLE] approaches zero in the denominator? Yeah. So it has to be invertible, so that shouldn't happen. But yeah, I mean, you need to be able to somehow compute these derivatives, and that's not going to be necessarily easy. Yeah. Because if f prime is zero, then it means it's a constant, and then it's no longer invertible. OK. So that's the easy thing. Now let's see what happens when we transform random vectors. Because if you think about a VAE, we want to transform random vectors into random vectors. So we kind of need to understand what happens if we apply an invertible transformation to a random variable that has a simple distribution. So let's say our random variable, a random vector z, is now uniform over the unit hypercube. So we have n dimensions, and each one of them is uniform. And we want to understand what happens if we transform that random variable. And just to get some intuition, we can start with linear transformations. Just like before, we kind of started by saying multiply by 4 and see what happens. We can do the same thing and instead look at what happens if we linearly transform a random vector, which means that we basically just multiply it by a matrix A. And we want this transformation to be invertible, which in this case just means that the matrix itself has to be invertible, so that there is a unique correspondence between x and z. And we're going to denote the inverse of A with W, so the inverse mapping h is just multiplication by W. And the question is, how is x distributed? Or what happens if you start with basically a uniform and then you multiply it by a matrix-- you get another random variable, how is that distributed? And you can kind of see that A is linear. It's going to stretch things somehow. And essentially, what happens is that it's mapping the hypercube to a parallelotope, which is just kind of like a parallelogram. And so in 2D, it would look something like this. So if n is 2, then z is distributed uniformly between 0 and 1 in both directions. So it's just uniform over this square. And then if you multiply it by this matrix A, which is just a, b, c, d, you're going to get a uniform distribution over that parallelogram. You can kind of see that the vertices correspond to what you would get if you were to multiply the matrix by 0, 0-- you're going to get 0, 0. If you multiply this matrix by 1, 0, you're going to get this corner, a, b. And if you multiply by 0, 1, you get this corner, c, d. And if you multiply by 1, 1, you get this corner up here. And then it's all the other points. [INAUDIBLE] which was this kind of transformation from the unit square to this. And such a process is linear, right? But the one you showed us, when we apply the A, essentially, I think that's a nonlinear process. I was trying to imagine what is this transformation matrix. Is it like a linear operation? So here, it's linear. Everything is linear so far. Then we'll go into nonlinear, because we want to use neural networks. But it turns out that it's better to understand the linear case first. So far, it's all linear. We're just multiplying by A, and that's a linear transformation. OK. Even the A matrix itself is just a linear system. It's fixed. So it's just a linear mapping. And so it's a linear transformation. OK. So now we have some intuition for what happens here. x, which is what we got by multiplying A by z, should be a uniform random variable over this red area, essentially.
And so what is the density? Well, we need to figure out what is the area or the volume of that object. Because if it's uniform, then it's just going to be 1 over the total area of that parallelogram, of this red thing here. And I don't know if you might have seen this, but you can get the area of the parallelogram by basically computing the determinant of the matrix. And here there is a geometric proof showing that indeed, if you can get the area of the parallelogram by taking the area of this rectangle and subtracting off a bunch of parts, you get that expression. So this is the determinant, ad minus cb. And that's the area of the parallelogram. And so what this means is that x is going to be uniformly distributed over this parallelotope of area, absolute value of the determinant of a, which means that the density of x is going to be the density of the original variable evaluated at the inverse. And then again, we have to basically divide by the total area, which is the determinant of the absolute value, of the determinant of this matrix. Or equivalently, because, if W is the inverse of A, then the determinant of W is going to be 1 over the determinant of A. And so you can equivalently, just like before, write it in terms of the derivative of the inverse of the mapping defined by A, which is just the determinant of W here. So you take x, you take a point in here, you map it back to the unit cube by multiplying it by W, which gives you the corresponding x-- the corresponding z. You evaluate the density. And then you have to take into account the fact that basically the volume is stretched by applying this linear transformation. And so things have to be normalized. And so you have to divide by the total area to get a uniform, random variable. And just like before, you have to account basically by how much the volume is shrinked or stretched when you apply a linear transformation. Yeah. [INAUDIBLE] uniform hypercube with-- The question is, does it only apply to a unit hypercube? No. It applies for-- this formula here is general. Whatever is the density you begin with, as long as you apply an invertible transformation, you kind of get the density of the Wx-- or Wz, sorry, I think, I have here. Yeah. Or AZ. So if Z has an arbitrary density, pz, you can get the density of x through this formula. And in which case pz might not be uniform. It could be a Gaussian or something. This can still be used. So again, we are getting towards this idea of starting from something simple, the z, transforming it, and then being able to somehow evaluate the density of the transformed random variable. So recall this is kind of like a VAE. We have a latent variable z. We have the observed variable x, but now through these formulas, we are able to evaluate the marginal probability of a data point without having to compute integrals, without having to do variational inference. We get it through this calculus formula-- by these formulas, basically. And the key idea is that given an x, there is only one corresponding z. So it's just a matter of finding it by multiplying it by x, by W in this case, and then taking into account the fact that the geometry changes a little bit. And notice, yeah, this is the same kind of, not surprisingly, this is strictly more general than what we had before in the 1D case, but it's kind of like the same thing. Yeah. Once we do this linear transformation, will it guarantee us a distribution? 
Because it starts from a distribution, where the area sums to one, and then we do the linear transformation, it might not necessarily be a probability anymore. Yeah. So the question is, is this p of x a PDF? If you integrate over all possible values of x, do you get 1? And basically, the reason you have to divide by the determinant is to make sure that it is indeed normalized. Because if you were not to do that, it's kind of like the wrong calculation that we did at the beginning, where you just map back and evaluate. To make sure that things are normalized, you have to take into account the fact that the area of that parallelogram might grow a lot if you apply certain kinds of A's, or it might shrink a lot if you apply very small coefficients. So you have to renormalize everything through this determinant. That's why they are called normalizing flows: this change of variable formula takes care of the normalization for you and guarantees that what you get is indeed a valid PDF. Cool. Now we know how to do these things for linear transformations. What we want to do is use deep neural networks. So we need to understand what happens if we apply invertible, nonlinear transformations. So now, instead of just multiplying by a matrix, we want to feed z into some kind of neural network and get an output x. And assuming that somehow we are able to construct a neural network that is invertible, we still need to understand how that changes the distribution of the variables. So if you have a simple random variable z and you feed it through a neural network f, the output is some other random variable x, and we need to somehow understand what the PDF of that object is. And it turns out that it's basically the same thing: if you understand what happens in the linear case, all you have to do is linearize the function by doing essentially a Taylor approximation. So you compute the Jacobian of the function, and then it's the same formula. It's the determinant of the Jacobian, which is a linearized approximation to the function. And again, this is probably something you might have seen in some calculus class, but it's essentially the same formula. So if you have a random variable x that is obtained by feeding a simple random variable through some kind of invertible neural network f, you can work out the density of the output of the neural network, which is x, by inverting it and computing the density of the input that generated that particular output. And as usual, you have to account for how much the volume is stretched locally, which is just the determinant of the Jacobian of the inverse mapping. Just like before, we were always looking at the derivative of the inverse mapping; in the multivariate case, the extension of that derivative is the determinant of the Jacobian. And just as a sanity check, recall the simple formula that we proved in the 1D case-- it's exactly what you would get if f is just a scalar function. Instead of the absolute value of the determinant of the Jacobian, you just have the absolute value of the derivative of the inverse of the function. And just like before, if you have an invertible matrix, the determinant of the inverse is 1 over the determinant of the original matrix.
And so you can equivalently write things just like before in terms of the Jacobian of the forward mapping. So here, things are if you go from z to x, then the formula basically involves the Jacobian of the mapping from x to z. You can also write things in terms of the Jacobian of the mapping. And just like before, you just do 1 over instead. You can also compute directly the Jacobian of f, annd then you compute the determinant of that, and then you do 1 over. And that's the same thing, just like before. Remember, before, we had the formula where you could write things in terms of h or you could write things in terms of f, and this is the same thing. But this might be computationally, as we'll see, sometimes, it's easier depending on whether you model the-- when you start using neural networks to model f, it might be more convenient to use one or the other. And that's the reason these formulas are handy. All right. OK. Questions on this? Yeah. So the z's that we have been talking about, they are the latent variables in the previous variational autoencoder? Yeah. So you should still think of-- I mean, this is just math so far. We haven't really built a generative model. But yeah, you should think of the z as having some simple distribution. And then you pass them through a decoder f, which is now an invertible transformation, and then you get your samples, x images out. And now you can evaluate the density over the images, which is what you need if you want to do maximum likelihood through this formula. So you don't have to do variational inference. You don't have to compute ELBOs. You don't have to use an encoder to the extent that you can invert the mapping by construction, then you're done. You just need to invert and take into account this change in volume, essentially, given by the linearized transformation. That is like [INAUDIBLE] that need to be continuous and have the same dimension. So how does this fit into the image generations where x are discrete values? So yeah, the question is, do they have to have the same dimension? Do they need to be continuous? So yeah, they need to have the same dimension, which is kind of like what we were discussing before if you want things to be invertible. How does it apply to images? Well, you can think of images as being continuous. I guess, the kind of measurements that you get are often discrete because, you have maybe some kind of fixed resolution. But you can pretend that things are continuous or you can add a little bit of noise to the training data if you want. It's basically not a problem. You can train these models on images. In the case when f is a neural network, we will apply a chain rule to make the [INAUDIBLE] actually computable space. Yeah. And what if we cheat a little bit and make have [INAUDIBLE] actually invertible? For example, if the neural network capacity first, and then like get 2x. And in the second one, we don't really need the input of f. So we have the f, but it's not invertible. We can still compute the probability of px, of this-- So yeah. So if f is not really invertible, then the formula doesn't quite work. And you're back in kiind of VAE land. This basically means that there could be multiple z's that map to the same x. And so if you want to compute the probability of having generated this particular x, you're no longer guaranteed that there is only one z. You just have to compute it and apply the formula. You would still have to work through all the possible z's that produced that x. 
And people have looked at extensions of these models where maybe you're guaranteed that there are only up to k preimages, so it's almost invertible. All you have to know is basically all the possible z's that could have produced any particular x. As long as you construct things such that that's always the case, then you can still apply similar tricks. But in general, if there could be a lot of them-- if it's highly non-invertible-- then you're back in VAE land, and you need some kind of encoder that guesses the z, that kind of inverts the function for you. And you have to train them jointly so that the encoder is doing a good job of inverting the decoder, and then you might as well use the ELBO. Cool. And let's see one example, just worked out, of what that actually means, to be a little bit more concrete. You might imagine that you have a prior over just two random variables, z1 and z2, with some kind of joint density-- maybe it could be Gaussian. And then we have this invertible transformation, which is a multivariate function, two inputs to two outputs. So you can, for example, denote it in terms of two scalar functions, u1 and u2. Each one of them takes two inputs and maps them to one scalar. So it's multivariate: two inputs, two outputs. And we're assuming that these things are invertible, so there is an inverse mapping v, which always maps you back. And again, it's two inputs, two outputs in this case. And then we can define the outputs. If you take these simple random variables z1 and z2 and you feed them through this neural network, which takes two inputs and produces two outputs, you're going to get two random variables: x1 for the first output of the network and x2 for the second output of the network, just by applying u1 and u2. And similarly, you can go back: given the outputs, you can get the inputs by using these v functions, two inputs, two outputs again. And what you can try to do is get the density over the outputs of this neural network u. So how do you figure out what the density at x1 and x2 is, when these random variables are obtained by transforming through some neural network u? It's always the same thing. You take the outputs, which are x1 and x2, and you invert the network, so you figure out which two inputs produced the outputs that we have. Then you evaluate the original density, the input density, at those points. This is kind of like the wrong calculation that we did at the beginning: just invert and evaluate the original density at the inverted points. And then, as usual, you have to fix things by looking at how the volume is stretched locally. What you would do in this case is take the absolute value of the determinant of the Jacobian of the inverse mapping. The inverse mapping is v. The Jacobian is this matrix of partial derivatives: we have two outputs and two inputs, so there are four partial derivatives that you can get-- first output with respect to the first input, first output with respect to the second input, second output with respect to the first input, and so forth. That's a matrix. You take the determinant of that matrix and then the absolute value, and that gives you the density that you want. To the extent you can compute that, you have a way of evaluating densities for the outputs.
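Here is a worked version of that two-dimensional recipe as a short Python sketch. The map u(z1, z2) = (z1, z2·exp(z1)) is a made-up invertible example, not the one on the slide; its inverse v and the 2x2 Jacobian are written out by hand, and the result is cross-checked against a closed form that happens to be available for this particular map.

```python
import numpy as np
from scipy.stats import norm

# Made-up invertible map u: (z1, z2) -> (x1, x2) = (z1, z2 * exp(z1)),
# with inverse v: (x1, x2) -> (z1, z2) = (x1, x2 * exp(-x1)).
def v(x1, x2):
    return x1, x2 * np.exp(-x1)

def log_p_x(x1, x2):
    z1, z2 = v(x1, x2)
    # log p_z at the inverted point (z1, z2 are i.i.d. standard normal under the prior)
    log_pz = norm.logpdf(z1) + norm.logpdf(z2)
    # Jacobian of v: [[1, 0], [-x2*exp(-x1), exp(-x1)]]  ->  |det| = exp(-x1)
    log_abs_det_Jv = -x1
    return log_pz + log_abs_det_Jv

# Cross-check: for this map, x1 ~ N(0, 1) and x2 | x1 ~ N(0, exp(2*x1)).
x1, x2 = 0.4, -1.3
truth = norm.logpdf(x1) + norm.logpdf(x2, scale=np.exp(x1))
print(log_p_x(x1, x2), truth)   # the two values match
```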
Or equivalently, you can do it in terms of the Jacobian of u, which is the network that you use to transform the simple variable. So again, you can evaluate it directly at the z's, the corresponding inputs, and then you apply the Jacobian of the mapping in the other direction. And we'll see that sometimes one versus the other can be more convenient computationally. Yeah. This is more of a broader question, but are there domains in which these flow models are better suited than VAEs or the autoregressive ones? What are the applications where this approach to modeling the problem [INAUDIBLE]? Yeah. So the question is, when are flow models suitable? And flow models are actually pretty successful. In fact, you can even think of diffusion models as a certain kind of flow model. And if you look at diffusion models, which are kind of state of the art right now for images, video, speech, there is an interpretation of diffusion models as flow models-- kind of infinitely deep flow models. And these formulas here, or an extension of them, are basically what you use to evaluate likelihoods in diffusion models. I think you said that you can also think of diffusion models as stacked VAEs, right? Yeah. So there are going to be two interpretations of them. And flow models help you if you want to evaluate likelihoods, because if you think of a diffusion model as a stacked VAE, you don't have likelihoods. If you want likelihoods, then you can think of them as flow models, and then you can get likelihoods through exactly this formula. So what you're saying is, basically, the diffusion model approach unifies this flow modeling approach and the VAE approach. Yeah. Yeah. I think you might have covered this, but if we're dealing with a higher-dimensional problem, then the linear system that we're trying to solve here will, I guess, become intractable. The theory guarantees that these are invertible functions, but practicality-wise, it's hard to solve a linear system. Yeah. So the question is, how do you-- it seems like we need to do a bunch of things. You need to be able to invert the function, you need to be able to compute the Jacobian, and you need to get the determinant of the Jacobian, which in general is kind of an n-cubed operation. So what's going to come next is how you parameterize functions with neural networks that have the properties we want: they're easy to invert, it's easy to compute Jacobians, and it's easy to compute determinants of Jacobians. Now, for diffusions, we don't even do the whole [INAUDIBLE]. So it's kind of like a mixture of [INAUDIBLE]. The pure version of a diffusion model would be defined in pixel space, and the latent variables have the same dimension as the inputs. And that's why you can think of it as a flow model. So can we reduce the dimensionality at all? They don't. Yeah. Is the interpretation of the latent space as being some sort of intrinsic hidden variables still applicable when they're the same size? Or does it seem like you haven't reduced the problem at all? Yeah. So the question is, can you still think of them as latent variables? It is a latent variable to some extent, but it has the same dimensionality, so it doesn't really compress in any way. It does have a simple distribution-- it is distributed in a simple form-- which is desirable to some extent.
But it's really more like a change of variables. It's kind of like you're measuring things in pixels or meters, and then you change and you start measuring things in feet. But it's not really changing anything. You're just really changing the units of measure in some sense. And at least if you were to do just linear scaling, that would just be changing the units of measure. Here, you are doing nonlinear, so you're kind of changing the coordinate system in more complicated ways. There is no loss of information. Everything is invertible. And so it's really just looking at the data from a different angle that makes things more simpler to model. Because if you start looking at things through the lens of f inverse, then things become Gaussian. So somehow, by using the right units of measure or by looking at the-- by changing the axis in the right way and stretching them in the right way, things become much easier to model. So that's a better way to probably think what flow models are really doing. So for functions that some that are not generally continuous, for example, like discretizing the latent space and so on, what are some tricks that we can use to approximate continuity for such functions? Yeah. So the question, I guess, is whether this can be applied to discrete or whether-- yeah. So there are extensions of this sort of ideas to discrete, but then you lose a lot of the mathematical-- a lot of the mathematical and computational advantages really rely on continuity. So people have looked at-- I mean, the equivalent of an invertible mapping in a discrete space would be some kind of permutation-- some kind of yeah, kind of like a permutation, essentially. And so people have tried to discover ways to permute things in a way that makes them easier to model, but you lose a lot of the mathematical structure. And so it's not easy to actually-- [INAUDIBLE] After you've trained the model, then you can certainly discretize. But for training, you really want to think of things as being continuous.
Stanford_CS236_Deep_Generative_Models_I_2023_I_Stefano_Ermon
Stanford_CS236_Deep_Generative_Models_I_2023_I_Lecture_8_GANs.txt
The plan for today is to continue talking about normalizing flow models. Recall that in the last lecture, we introduced this idea of building a latent variable model that allows us to evaluate likelihoods exactly, without having to rely on variational inference. It's going to be similar to a variational autoencoder, in the sense that there are going to be two sets of variables: observed variables, X, and latent variables, Z. The key difference is that the relationship between these two sets of random variables is deterministic. In a VAE, you would sample X given Z using some simple distribution, like a Gaussian, where the parameters of X given Z might depend on the particular value of the latent variables. In a flow model, the relationship is deterministic and invertible. So you obtain X by applying this transformation, which we denote f theta here. And because the mapping is invertible, you can also go back. So inferring the latent variables given the observed ones only requires you to somehow be able to compute the inverse of this mapping. And here we are denoting these mappings f theta because they are going to be parameterized using neural networks. What we'll see today is ways to parameterize these kinds of invertible transformations using neural networks and then learn them from data. And the nice thing, recall, of using an invertible transformation is that the likelihood is tractable. So you can evaluate the probability that this particular model generates a data point x by essentially using the change of variable formula, which is fairly intuitive-- especially the first piece is very intuitive. You're saying, if you want to evaluate the probability of generating an image, let's say X, what you do is you invert to compute the corresponding Z, and then you evaluate how likely that Z was under the prior, which is this distribution, PZ. And then recall that this is not enough. If you just do that, you're not going to get a valid probability density function. To get something that is normalized, so that it integrates to 1 if you go through all possible x's, you have to rescale things by this absolute value of the determinant of the Jacobian of the inverse mapping. And that's basically telling you, intuitively: you linearize the function locally by looking at the Jacobian, and then the determinant of the Jacobian tells you how much or how little that transformation expands or shrinks a unit volume around the data point X. And so it's very similar to the example of the linear mapping we worked out in the last lecture, where we defined a random variable by transforming a simple random vector by multiplying it by a matrix. Essentially, that is what's going on here, and you have the same kind of expression. And the key thing to note is that this can be computed exactly, basically without introducing any approximation, to the extent that you can compute these things: you can invert the mapping and you can compute the determinant of the Jacobian. If you can do those things, then you can evaluate the likelihood exactly. So you don't have to rely on variational inference, where you have to use an encoder to try to guess the Z given an X, and you have to do that integral because there are many possible Z's that could give you any given X. You don't have to do any of this.
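Written out, the exact likelihood being described is:

```latex
p_X(x;\theta)
  = p_Z\!\bigl(f_\theta^{-1}(x)\bigr)\,
    \left|\det\!\left(\frac{\partial f_\theta^{-1}(x)}{\partial x}\right)\right|
  = p_Z(z)\,
    \left|\det\!\left(\frac{\partial f_\theta(z)}{\partial z}\right)\right|^{-1},
  \qquad z = f_\theta^{-1}(x),
```

where p_Z is the simple prior and f_theta is the invertible mapping from z to x.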
And so this is as nice as having something like an autoregressive model where you can evaluate likelihoods exactly just using this equation. And one of the various limitations of this kind of model family is that X and Z need to have the same dimensionality. And so that's different from a variational autoencoder, where we've seen that Z could be very low-dimensional, and you can use it to discover some compact representation of the data. That's no longer possible in a flow model because, for the mapping to be invertible, Z and X need to have the same dimensionality. Cool, so now, how do we actually do this? How do we turn this math, this general idea, into a model that you can actually use in practice? Well, the idea is the usual story, like in deep learning, is to combine relatively simple building blocks to define flexible architectures. And so a normalizing flow is essentially a generative model based on what we are going to use, essentially, neural networks to represent this mapping, F theta, which is really the only thing-- you have the prior over Z and the F theta mapping, that's the only thing you can really change. And it's called a normalizing flow because the change of variable formula gives us a normalized density if you account for the determinant of the Jacobian. And it's called a flow, exactly what I was saying, because it's like this deep learning idea of defining the mapping that we need by composing individual blocks, which could be relatively simple. So we're going to essentially define an architecture where there is going to be multiple layers of invertible mappings where we essentially start with a random vector, Z0, which could be, let's say, described by the prior Gaussian or something like that. And then what we do is-- or it could even be that, yeah, depending which way you want to see it, we start on one end, with a random vector, Z0, and then we apply these transformations-- F1, F2, F3, Fn, all the way through n in this case. And essentially, what this notation means is that what we do is we take that Z0, we pass it through the first neural network, and then we take the output of that, and we pass it to the second neural network, and so forth. And we denote this architecture that we get by stacking together multiple invertible layers F theta. And it's pretty easy to see that, as long as each individual layer is invertible, the combination of multiple layers that you get by doing this kind of operation is also invertible. And so to the extent that we are able to come up with reasonable neural network architectures that define an individual layer, we're going to be able to stack them together and get something more flexible. The thetas are not the same in each of the F functions, are there? Yeah, so that's a good question. I think this notation is a little bit overloading here, the meaning of theta. The parameters of the individual mappings are going to be different, so they are not necessarily tied together. There's going to be-- we're going to use theta to denote the union of all the parameters that you need to specify each individual layer. And so that's the story of this flow of transformations. You start with a simple distribution for Z0, the first, let's say, at the top, topmost level of your flow, for example, by sampling from a Gaussian distribution. This is the usual prior, the same thing you had in a variational autoencoder. And then you apply this sequence of invertible transformation to obtain your final sample. 
And so you feed it through all these different layers. And then, let's say, after M of them, you get your final sample, X. And the good thing is that, yeah, if each individual mapping is invertible, then the combination is also going to be invertible. And you can actually work out what's the corresponding change of variable formula. And to the extent that you understand the determinant of the Jacobian, of each individual layer, then you can work out the corresponding determinant of the Jacobian, of the combination of these mappings. So all you have to do is you have to be able to invert this function, F theta, that you get by combining all these neural networks. And if you can invert each individual layer, you can, of course, invert the full function. And to the extent that you can linearize, basically, and you understand how each of the individual layers behave locally, so you understand how that determinant of the Jacobian looks like, then you can get that determinant of the Jacobian of the full mapping. And this is because, yeah, basically, the determinant of the product equals the product of determinants. Or equivalently, you can also get this rule, like if you recursively apply change of variable formula, you get this sort of expression. Or to figure out, basically, by how much the full mapping distorts the volume locally. You just need to figure out by how much the individual layers distort the space, and then you just combine the cumulative effect of all these various layers. And so what this is saying is that we are in a good place if we can somehow define classes of neural networks that are invertible, ideally, that we can invert efficiently, and that we can compute the determinant of the Jacobian also efficiently. And here is a visualization of this, how that normalizing flow works. This is a particular type of normalizing flow called a planar flow. It's not super important, but to give you the intuition, you start on one end with this random variable, Z0, which let's say is Gaussian, and then you get a new random variable Z1 by transforming it through the first layer and Z2 by transforming the Z1 by another simple layer, and so forth. And you can see the effect of these transformations. So you start, let's say, with a two-dimensional random variable, Z0, which is just a unit, Gaussian, so this is just a Gaussian with spherical covariance, which basically has a density that like this. There is the mean in the middle and the probability mass, has a relatively simple shape. It's not something you can use to model complicated data sets. But then you apply, let's say, that first invertible transformation, and you get a new random variable, Z1, which now has a more complicated density. Then you apply another one, and you get something even more complicated. And after 10 layers, after 10 individual-- after 10 invertible transformations, you can get something that is much more interesting. And it has the flavor of a mixture distribution where you can spread out the probability mass in a much more flexible way. Is there that we can't go to the 10th transformation using some neural network that can represent all of them at once? Because if I'm thinking about it in terms of simple functions like matrices, I can show that [INAUDIBLE]. We just don't see it. Yeah, yeah, so there is certainly a mapping that could-- an invertible mapping that would get you from the beginning to the end, which is just the composition of those neural networks. 
The beauty of the deep learning strategy is that the individual layers are going to be relatively simple. So the individual f theta i that we will see are actually relatively simple transformations-- not quite linear, but something almost linear. And even though each one is simple, by stacking them together, you can get some very flexible transformations. It's similar to a deep neural network: the individual layers are not particularly complicated-- maybe it's just a linear combination and a nonlinearity-- but if you stack them together, you can get a very flexible map. And that's what's going on here. Yeah? You said there's a constraint that the layers are actually invertible themselves. You could have a sigmoid, which is a one-to-one mapping, but what about something like a quadratic? Because I feel that's where you'd actually have issues. Yeah, so the question, I think, is how do you ensure that, if you learn these thetas, you get a mapping that is invertible? What we will have to do is design architectures in a very special way, such that you're guaranteed that regardless of how you choose the parameters, the mapping is invertible. And not only that-- we'll also need to be able to invert it efficiently, ideally, because you need to be able to go both ways. And that's also not enough. You also need to be able to compute that determinant of the Jacobian relatively efficiently, because naively, it could take you n cubed, where n is the number of variables, the number of dimensions, the number of pixels. That's horrible, basically. So that's what's going to come up next: ways of defining these mappings and then how to learn them from data, which is going to be trivial because we have access to the likelihood. Right, and so here's another example. What you see at the bottom is different-- same idea, but the prior is uniform. So here the prior is Gaussian, and we transform it to something like this. Here the prior is a uniform random variable, again two-dimensional, so all the probability mass is between 0 and 1 in both dimensions-- it's a square. And then by applying these invertible transformations, you are able to map it to, again, something much more interesting. And you can see why it's normalizing: each individual random variable that you get by applying an invertible transformation is automatically normalized by the change of variable formula. And it's a flow because you're applying many transformations, one after the other, so it's like your probability mass is flowing around through these transformations. In the examples you showed, is each transformation the same transformation, or are they different? Yeah, good question. So this is a planar flow, which is one way of defining an invertible layer through a neural network. The functional form is the same at every layer, but the parameters are different, like what was asked before. So it's the same layer but with different parameters. And the takeaway is, this is the intuition. This is the only thing that is easy to visualize, but you can imagine we're going to try to do something similar over a much higher-dimensional space, where we're going to try to model, let's say, images on the right-hand side.
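As a minimal sketch of how these stacked invertible layers are typically organized in code (the forward/inverse interface and the layer classes are assumptions of the sketch, not a fixed API), the per-layer log-determinants are simply accumulated, exactly as in the product-of-determinants rule above:

```python
import torch
import torch.nn as nn

class Flow(nn.Module):
    """Composes invertible layers. Each layer is assumed to expose
    forward(z) -> (x, log_det) and inverse(x) -> (z, log_det)."""
    def __init__(self, layers, prior):
        super().__init__()
        self.layers = nn.ModuleList(layers)
        self.prior = prior   # e.g. torch.distributions.Normal(torch.zeros(d), torch.ones(d))

    def inverse(self, x):
        # x -> z, accumulating the log|det Jacobian| of each layer's inverse
        log_det = torch.zeros(x.shape[0], device=x.device)
        for layer in reversed(self.layers):
            x, ld = layer.inverse(x)
            log_det = log_det + ld
        return x, log_det

    def log_prob(self, x):
        z, log_det = self.inverse(x)
        # change of variables: log p_X(x) = log p_Z(z) + sum of per-layer log-dets
        return self.prior.log_prob(z).sum(dim=-1) + log_det

    def sample(self, num):
        z = self.prior.sample((num,))
        for layer in self.layers:
            z, _ = layer.forward(z)
        return z
```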
Cool, so how do we do-- the first thing is, we need to parameterize this mapping somehow, and that's going to be the main topic of this lecture. The other thing that we need to do is learning. So once you've defined a space of invertible mappings, how do you choose these parameters, theta, ideally to fit some data distribution? And it turns out that that's very easy. Because we have access to the likelihood, we can basically do the same thing that we did for autoregressive models. So the most natural way of training a flow model is to just pick parameters, theta, that maximize the probability of a given data set, or the log probability of a particular data set. Or equivalently, you go through your data set D, and you try to find parameters that maximize the average log probability across all the data points in your data set. So intuitively, you have a bunch of data points, which you can think of as points in this space, and you're trying to find the parameters of the flow to put as much probability mass as possible around the data points that you have in the training set. And the good thing, again, is that unlike a variational autoencoder, we can actually evaluate this likelihood. We can figure out what the probability of generating any particular data point X was. All you have to do is use the usual change of variable formula. You take the data point, you invert it, you find the probability with respect to the prior, which is something simple-- PZ is, again, what you have here on the left; it could be a Gaussian, it could be uniform, something simple. And then you account for that determinant of the Jacobian. And because it's a log of a product, it becomes a sum of logs. And so again, all you have to do-- well, maybe it's on the next-- no, it's not on this slide. But basically, you have to figure out what the log determinant of the Jacobian of the full transformation is, which can also be broken down into the log determinants of the Jacobians of the individual pieces that define your flow model. And then, if you can evaluate this loss-- or, I guess, this is not quite a loss, because we're trying to maximize it-- but if you can evaluate this function as a function of theta, then you can take gradients, and you can try to optimize it. And so to the extent that you can invert this function, and to the extent that you can evaluate that Jacobian determinant term efficiently, we have an objective that we can try to optimize as a function of theta. So you can do exact likelihood evaluation by using this inverse mapping-- go from the data to the prior-- and then, after you've trained the model, you can generate new samples. If you want to generate new images, or new speech, or sound, or whatever you're trying to model, we know how to do it. It's basically just the forward direction, just like in a VAE; that part has not changed. You sample Z from the prior, and then you transform it through your mapping, and that produces an output. And if you care about getting latent representations: in a VAE, you would use the encoder to try to infer Z given X. In a flow model, it's relatively easy, again, to figure out what the corresponding Z is for any particular x. All you have to do is invert the mapping.
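Given a model with a log_prob method like the sketch above, the maximum-likelihood training just described reduces to a standard loop; flow, dataloader, and num_epochs below are placeholders rather than anything defined in the lecture:

```python
import torch

# `flow` is assumed to expose log_prob(x) as in the earlier sketch, and
# `dataloader` is assumed to yield batches of training examples x of shape (batch, dim).
optimizer = torch.optim.Adam(flow.parameters(), lr=1e-3)

for epoch in range(num_epochs):
    for x in dataloader:
        loss = -flow.log_prob(x).mean()   # negative average log-likelihood
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```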
But again, it's questionable whether this can be thought of as a latent variable, because it has the same dimension as the data, and so it's not going to be compressed in any way. Yeah? But surely there is some structure, right? Because after all, even if it's one-to-one, it's no longer random? So there might be something. Yeah, so I'll show you: when you train these models on images, you can do interpolation in the latent space and you get reasonable results. So it's certainly doing something meaningful, but it might not be as compressive as you would expect from a VAE, for example. So it's a different kind of latent variable, but it's still a latent variable for sure. OK. Yeah? So do we parameterize both F theta and the inverse process? Good question. Do we parameterize F theta, or do we parameterize F theta inverse? You only parameterize one, because the other one is obtained directly by inverting it-- hopefully it's really invertible, and so hopefully you can actually do it. But it's a good question whether you should parameterize F theta, the direction that you need for sampling, or whether you should directly parameterize the inverse, because that's what you need during training. Those are two valid choices. And there might be cases where, let's say, you have to numerically invert the mapping-- maybe you know it's invertible, but it's not cheap, and maybe computing the inverse requires you to solve a linear system of equations or something. It's invertible-- it's possible to compute this F theta inverse-- but it's maybe too expensive if you have to do it over and over during training. So depending on what you want to do, you might want to parameterize one or the other, or you might choose an F theta that can be inverted very quickly. And we'll see the kind of trade-offs that you get by doing one or the other. Could F theta and the inverse have the same form, with some [INAUDIBLE] loss between them to ensure that they're roughly the same? Is that something we can do? Yeah, so the question is, what if it's not quite fully invertible, or could you parameterize both and try to make one the inverse of the other? People have explored these kinds of things, where they try to make sure that you can do both directions, or ways of distilling models that can be efficiently evaluated in one direction into ones that can be efficiently evaluated in the other direction. So yeah, we'll talk a little bit about this. Cool, all right, so what do we want from a flow model? We want a simple prior that you can sample from efficiently and that you can evaluate probabilities under, because we need that PZ here-- when you use this formula, you need to be able to evaluate probabilities under the prior. So typically, something like a Gaussian is used. We need invertible mappings that can be tractably evaluated. So if you want to evaluate likelihoods, you need to be able to go from X to Z efficiently, or as efficiently as possible. But if you want to sample, then you need to do the opposite. So again, going back to what we were just talking about, depending on what you want to do, or depending on which direction you want to be as fast as possible, you might want to parameterize one or the other. And then the other big thing is that we need to be able to compute these determinants of Jacobians. And these Jacobian matrices are pretty big-- they are n by n, where n is the data dimensionality. And if you recall, computing the determinant of a generic n by n matrix takes order of n cubed operations.
So even if n is relatively small, like 1,000, this is super expensive. Computing these kinds of determinants naively is very, very tricky. And so what we'll have to do is choose transformations such that not only are they invertible, but the Jacobian has a special structure, so that we can compute the determinant more efficiently. The simplest way of doing it is to choose matrices that are basically triangular, because in that case you can compute the determinant in basically linear time-- you just multiply together the entries on the diagonal of the matrix. So one way to do it is to define the function F, which again has n inputs and n outputs, such that the corresponding Jacobian, which is this matrix of partial derivatives, is triangular. So there needs to be a lot of 0s, basically, in the matrix. And recall, the Jacobian looks like this. F is a vector-valued function. It has n different outputs, so there are n scalar functions, F1 through Fn. And the Jacobian requires you to compute the gradients of each individual function with respect to the inputs. So you can think of each of these columns as being the gradient of a scalar-valued function with respect to the inputs-- not with respect to the parameters. And a triangular matrix is basically a matrix where all the elements above the diagonal, let's say, are 0. And so any guess on how we make, let's say, the derivative of F1 with respect to Zn equal to 0? Yeah, F1 just doesn't depend on it. So if you choose the computation graph such that the i-th output only depends on the previous inputs, like in an autoregressive model, then by definition, a lot of the derivatives are going to be 0, and you get a matrix that has the right kind of structure. It's lower triangular. And if it's lower triangular, you can get the determinant just by multiplying together all the entries on the diagonal. There are n of them, and so it becomes linear time. So that's one way of getting efficiency on this type of operation: choose the computation graph so that it reminds us a little bit of autoregressive models, in the sense that there is an ordering, and the i-th output only depends on the inputs that come before it in this ordering. And of course, you can also make it upper triangular. If Xi, the i-th output, only depends on entries of the inputs that come after it, then you're going to get a matrix that is upper triangular, and that's also something for which you can evaluate the determinant in linear time. So just to recap: normalizing flows transform a simple distribution into a complex one with a sequence of invertible transformations. You can think of it as a latent variable model with exact likelihood evaluation. We need invertible mappings, and Jacobians that have special structure so that we can compute the determinant of the Jacobian in the change of variable formula efficiently. And what we're going to see today is various ways of achieving this. There are a lot of different neural network architectures or layers that you can use that achieve these sorts of properties. The simplest one is something called NICE, and then we'll see more. Let's see-- NICE. The simplest building block is an additive coupling layer.
So what you do is you partition this Z variables into two groups. Again, there is an ordering of the Z variables, and you take the first D, Z1 through the D and then the remaining n minus D. So we have two groups of the Z variables, and you pick a D. Can be anything. And then to define the forward mapping that gets you from Z to X, what you do is you keep the first D components unchanged, so you just pass them through. And then you modify the remaining components, the remaining n minus D components, in the simplest possible way, which is just shift them. So there is a neural network which can be basically arbitrary, which I'm calling m theta, which takes the first D inputs to this layer and computes n minus D shifts that then you apply to the remaining n minus DZ variables. So you can see that the first D components remain the same. You just pass them through. The remaining n minus D components, you obtain them just by shifting the inputs by a little bit. And by how much you shift the inputs, it can be a learnable parameter. Now, is this mapping invertible? It's pretty easy to see that it's invertible. And how do you get the Z if you had access to the x? So how do you get Z1 through D if you have access to the X? Well, the first D components are not changed, so you just keep them. It's just, again, the identity transformation. How do we get the remaining, if you want to compute Z, D, the n minus D inputs? Given all the outputs, how do you get them? You just basically subtract. You just reverse this thing. You just write a Z equals X minus m theta. And crucially, we can't compute m theta, because we know that X-- because the first D component in the input and the output are the same, so when we do the inversion, we know by how much we should shift, because we're just passing through the first D components. And so we can apply this shift by doing this. You can just subtract off m theta, and we can compute m theta as a function of X1 through D, because X1 through D is the same as Z1 through D, which is what we used to compute the shift. And m theta here can be an arbitrary function, can be an arbitrary neural network, basically. Now, what do we need to-- the other thing we need to figure out is, can we compute the Jacobian of the forward mapping? So what are the matrices of partial derivatives? It has the right triangular structure. And you can see that the only thing that we're doing is shifting. And so when you look at the partial derivatives of what happens on the diagonal, it's just all going to be identity matrices. So if you look at, how does the function on the second line here depend on the various Z's, on the later Z's? You see that it's just a shift, so that matrix of partial derivatives that you get here at the bottom right is just another identity matrix. So what this means is that, what is the determinant of the Jacobian of this matrix? 1. Just 1. It's the product of all the elements on the diagonal. They are all 1s, so the determinant of the Jacobian is 1, which means that it's trivial to compute this term in the change of variable formula. It seems like Z and X are so connected. How can we find an easy distribution from where we can sample, will Z ever take a Gaussian form [INAUDIBLE]? Yeah, so great question. Is this flexible enough? I'll show you some empirical results that this model is not the best one. It's probably the simplest you can think of, but it's already quite powerful. 
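A minimal PyTorch-style sketch of the additive coupling layer just described (the network m_theta here is an arbitrary made-up MLP; only the coupling structure matters):

```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """NICE-style additive coupling layer (a minimal sketch).
    The first d components pass through unchanged; the remaining ones
    are shifted by m_theta(z[:d]), an arbitrary neural network."""
    def __init__(self, dim, d, hidden=128):
        super().__init__()
        self.d = d
        self.m = nn.Sequential(            # m_theta: R^d -> R^(dim - d)
            nn.Linear(d, hidden), nn.ReLU(),
            nn.Linear(hidden, dim - d),
        )

    def forward(self, z):                  # z -> x
        z1, z2 = z[:, :self.d], z[:, self.d:]
        x = torch.cat([z1, z2 + self.m(z1)], dim=1)
        log_det = torch.zeros(z.shape[0], device=z.device)   # volume preserving: det = 1
        return x, log_det

    def inverse(self, x):                  # x -> z
        x1, x2 = x[:, :self.d], x[:, self.d:]
        z = torch.cat([x1, x2 - self.m(x1)], dim=1)
        return z, torch.zeros(x.shape[0], device=x.device)
```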
You can actually already use this to model images, which is pretty surprising, because it means that by stacking a sufficiently large number of these very simple layers, you can actually transform a complicated distribution over images into, let's say, a Gaussian. Yeah? What if you have [INAUDIBLE] some entries equal to 0, so the diagonal is 0? Say again? What if we have a diagonal entry that's just 0? Then the determinant is always 0. So is that-- It basically means that it's not invertible, if you have zeros on the diagonal. And this coupling layer is called a volume-preserving transformation, because recall that if the determinant is 1, it means that you're not expanding that unit hypercube and you're not shrinking it-- it just stays the same. But you can still move probability mass around. And now the final component that you use in this model called NICE is rescaling. You can imagine stacking many of these coupling layers, where you can change the ordering of the variables in between. So you don't have to keep the same ordering in every layer-- any order you want is fine, as long as you satisfy the kind of property that we had before. And then the final layer is rescaling. Again, something super simple: you just element-wise rescale all the entries with some parameters Si, which are going to be learned. It's just a scaling that you apply-- again, the simplest transformation you can think of. What is the inverse? You just divide by the Si's, so it's trivial to compute the inverse mapping. And the determinant of the Jacobian? Well, if you think about it, the matrix of partial derivatives is a diagonal matrix, and the elements on the diagonal are these Si terms. So what is the determinant of this diagonal matrix? It's just going to be the product of all these Si's, basically. And you might think this is super simple-- how can you possibly learn something useful with a model like this? But if you stack enough of these layers, you can actually learn some decent models. If you train a NICE model on MNIST and then you generate samples, they look like this. If you train it on faces, you can get samples that look like that. So not the best generative model, of course, but it's already somewhat promising that something so simple already figured out how to map a complex distribution over pixels into a Gaussian, just by stacking a sufficiently large number of simple coupling layers like the one we saw before. Yeah? What would be the Z that we would use for generation in this case? It depends what you choose for the prior. In this model, you would typically use a unit Gaussian. So that's what you would use. An image of the same dimensions? Same dimension, every entry is Gaussian. And if you have unit covariance, then you can just sample each component independently. So you start with pure noise, then you feed it through, I guess, the inverse mapping, which we know how to compute-- you just invert layer by layer: the final layer, you invert like this; the previous layers, you invert by applying this transformation. And then you gradually turn that noise into an image, essentially. Yeah? Since the first half goes through an identity transformation, when we regenerate an image, does that mean we basically provide the first half as given information? Yeah-- the first D components are not changed, so yes, we're basically passing them through.
And it doesn't have to be half. It can be any D-- an arbitrary fraction that you keep unchanged, and then you modify the rest by just shifting, essentially. So that's perhaps the simplest version, and here you can see other examples. If you train it on SVHN-- these are house numbers-- or on CIFAR-10: again, not the best samples, but it's doing something. You can kind of see numbers in different colors on the left and samples on the right. Now, what's the natural extension of this? Instead of just shifting, we can shift and scale. And that's a much more powerful model called RealNVP, which is essentially identical to NICE, except that at every layer, we don't just shift, we shift and scale. So the forward mapping is like before. We pass through the first D components, so we apply an identity transformation. And then for the remaining ones, instead of just shifting, we shift-- that's this neural network, mu theta, which is the same m that I had before-- but now we also element-wise scale each entry. And again, the scaling coefficients are allowed to depend in a complicated way on the first D components. I'm taking an exponential here so that I'm guaranteed that these scaling factors are non-zero and I can invert them, but essentially, these functions, mu theta and alpha theta, can be arbitrary. How do we invert the mapping? Again, it's the same thing. These are neural networks with arbitrary parameters-- basically arbitrary neural networks parameterized by theta-- and that's what you actually learn. How do we get the inverse mapping? How do we get Z from X? Again, for the first D components, you don't do anything. You just look them up; it's again an identity mapping. And then for the second part, you have to figure out how to recover Z given X. What you do is you take X, you subtract mu, and then you divide by e to the alpha, and that gives you the Z, element-wise. Equivalently, you take X, subtract mu, and multiply by e to the minus alpha, which is the same as dividing by e to the alpha-- dividing by the coefficients that you multiplied by here. So again, trivial: very easy to do the forward, very easy to do the inverse. What about the determinant of the Jacobian? What does the Jacobian look like? Again, it has the nice property that it is lower triangular. And now it's a little bit more complicated, because the way you operate on this Z involves both a shift and a scale-- the scale is like the rescaling layer of NICE, except that it is now computed from the first D components rather than being a fixed learned parameter. So it's like what we were doing before: before, we were just shifting; now we're shifting and scaling, and so you have all these extra scaling factors that you're applying to the last n minus D dimensions of the data. And again, this is just what you would get if you compute partial derivatives of these outputs with respect to the inputs-- you're going to get this kind of expression. And how do you get the determinant? Well, you multiply together a bunch of 1s, and then you multiply together all the elements of the diagonal of this diagonal block. So it's just going to be the product of the individual scaling factors that you apply on every dimension. Or equivalently, it's the exponential of the sum of these log-scale parameters. So basically, you can choose arbitrary neural networks, mu theta and alpha theta.
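And here is the corresponding affine coupling layer as a minimal sketch, matching the shift-and-scale form just described, x2 = z2 * exp(alpha) + mu (again, the internal network is a made-up MLP):

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling layer (a minimal sketch): pass the first d
    components through, then shift by mu_theta and scale by exp(alpha_theta),
    both computed from the first d components."""
    def __init__(self, dim, d, hidden=128):
        super().__init__()
        self.d = d
        self.net = nn.Sequential(
            nn.Linear(d, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - d)),   # outputs [mu, alpha]
        )

    def _mu_alpha(self, h):
        mu, alpha = self.net(h).chunk(2, dim=1)
        return mu, alpha

    def forward(self, z):                       # z -> x
        z1, z2 = z[:, :self.d], z[:, self.d:]
        mu, alpha = self._mu_alpha(z1)
        x2 = z2 * torch.exp(alpha) + mu
        log_det = alpha.sum(dim=1)              # log|det J| = sum of the alphas
        return torch.cat([z1, x2], dim=1), log_det

    def inverse(self, x):                       # x -> z
        x1, x2 = x[:, :self.d], x[:, self.d:]
        mu, alpha = self._mu_alpha(x1)
        z2 = (x2 - mu) * torch.exp(-alpha)
        return torch.cat([x1, z2], dim=1), -alpha.sum(dim=1)
```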
And if you apply that kind of transformation, you get something that is invertible. It's easy to compute the forward mapping, it's easy to compute the reverse mapping, and it's easy to figure out by how much it shrinks or expands the volume locally, which is just these scalings. So if the scalings are all 1, then it's the same as the coupling layer that we had before. But because the alpha thetas are learned, this is strictly more flexible than what we had before, because you can learn to shrink or expand certain dimensions. And of course, this is, in general, non-volume-preserving, because the determinant of the Jacobian is not always one, so it can do more interesting transformations. Yeah? [INAUDIBLE] sample the latent images? So are you basically saying that, for every Z, we sample an associated image? Because it seems to me that if you have an invertible transformation, you need to actually also sample your Z's and associate each of them to an image-- would that be an issue here, or [INAUDIBLE]? So to sample, what you would do is randomly draw a Z, and then you would pass it through-- I'm parameterizing the forward direction here, so you would just pass it through the forward map. During training, do you need to sample, or do you just do it directly? So during training, you don't have to sample-- the Z's are computed by inverting the mapping. The Z's are obtained from the data. You just feed your image in at one end of this big, invertible neural network, and then you hopefully are able to invert it. And if each individual layer has this kind of shape, then you know how to invert it. Then you get a Z at the end, you evaluate how likely the Z is under the prior, and then you account for the local change in volume through the change of variables formula. So it's not going to be like a VAE, where you have to guess the Z. You get it exactly through the invertible mapping. Yeah? Can I say that during inference, once we sample a Z, the generation process is actually deterministic? Yes, the generation process is deterministic given Z, because the mapping itself is deterministic. That's a big limitation, but it's also what gives you tractability, basically. OK, now, this is a model that works much better. Here you can see some examples. Again, this might seem like a very simple kind of transformation that you're applying to the data, but if you train these models on image data sets, it starts to generate samples that are much better in terms of quality. You can see bedrooms or people. These models are pretty decent. They're somewhat low-resolution and everything, but they're generating samples that have the right structure. They're already pretty decent. Which part is [INAUDIBLE]? I think these are the training samples, and these are the generations that you see on the right. So maybe the first row is not so good, but for the bedrooms, I think this is [INAUDIBLE]. This is [INAUDIBLE], I think, you can see. It's pretty decent. Yeah? How many layers are these models using right now? Good question. I don't remember how many they use, but a pretty decent number. Yeah. Yeah? So you mentioned the different layers will have different parameters, so they're not sharing? They're not sharing, yeah. And back to the question of what the Z's actually mean: what you can try to do is interpolate between different Z's.
And what they show in this paper is that basically, if you start with the four actual samples, which are shown at the corner of this image here, and then you get the corresponding Z vectors, which are Z1, Z2, Z3, Z4, just by inverting the network, and then you interpolate them using this kind of strange formula, which is because the latent space is Gaussian, it doesn't matter too much. And then you get new Z's, and then you pass them through the forward mapping to generate images. You get the reasonable interpolations. You see that you go from one image from one person to another person, and it slowly drifts from one to the other. And you can see examples here on this buildings, and you can see, so basically, in each of these images, the four corners are real images. And what you see in-between is what you get if you were to interpolate the Z vectors of two real images and then decode them back into an image. So even though, yeah, the latent variables are not compressive, they have the same number of variables. They have meaningful structure, like as we were discussing before, in the sense that if you do interpolations, you get reasonable results. OK, now, what we can see is that actually, autoregressive models or certain kinds of autoregressive models, you can also think of them as normalizing flows. And so just to see this, you can think about an autoregressive model, a continuous one, where we are defining the density, the full joint, as a product of conditionals. And let's say that each conditional is a Gaussian with parameters computed as usual by some neural network that takes as input the previous variables and then computes a mean and a standard deviation, and that's how you define the Ith conditional in your autoregressive model. So this is not the language model version where each of these conditionals is like a categorical distribution. This is the continuous version where the conditionals themselves are, let's say, Gaussians in this case. And what I'm going to show you next is that you can think of this, actually, as an autoregressive model, as a flow model. As defined like this is just an autoregressive model. You can think about how you would generate samples from this model. And the way you would generate samples is something like this. You could imagine generating one sample from a standard normal distribution for every I, for every component, for every random variable, individual, random variable that you're modeling. And then what you do is, well, to sample from the conditionals, you have to-- the conditionals are Gaussian with certain means and standard deviations. So like using the reparameterization trick, you can obtain a sample for the-- as usual, you sample the first random variable, then the second sample, the second given the first. And to do that, you need to sample from these Gaussians, which have certain means and certain standard deviations. So you would generate a sample from, let's say, the first pixel or the first random variable by just shifting and scaling this unit standard, random Gaussian, random variable, which is just Gaussian-distributed. So it's starting to look a little bit like a RealNVP of model, where you have the Z's, and then you shift and scale them. How do you sample X2? Well, you sample X2 given X1. So you take the value of X1. You feed it into these two neural networks. You compute the mean and the standard deviation of the next conditional, and then you sample. 
And so you do that, and then you sample X2 by shifting and scaling this unit random variable, Z2. Remember that if ZI is a Gaussian with mean 0 and standard deviation 1, if you shift it by mu2 and you rescale it by this constant, you get a Gaussian with the right mean and the right standard deviation. And again, this feels a lot like a normalizing flow model that we saw before. Given X1 and X2, we can compute the parameters of the next conditional, so a mean and a standard deviation, and we can sample the third, let's say, pixel by shifting and scaling these basic random variables that we had access to. And so all in all, we can think of what you get by sampling from this autoregressive model as a flow, in the sense that you start with this random vector of simple normals. And then you shift them and scale them in some interesting way using these conditionals, using these neural networks, mu and alpha, that define the conditionals in the autoregressive model. These two are equivalent: sampling from the autoregressive model just by going through the conditionals one at a time is equivalent to doing this, which you can think of as taking a bunch of simple random variables, the ZI's, which are just Gaussians independent of each other, and then you just feed them through this interesting transformation to get your final output, X1 through XN. Questions on this? I see some confused faces. Yes? How can we get all of them backwards? Won't we need to go one at a time? Let's see. How do we invert it? Yeah, great question. I think that's going to come next. The forward mapping, you can think of it as a flow that basically does this. You use the Z's to compute the first X. And then what you do is you compute the new parameters, and then you get the new X, blah, blah. And you can see that sampling in this model is slow, like in autoregressive models, because in order to compute the parameters that you need to transform the Ith simple prior random variable, ZI, you need to have all the previous X's to figure out what's the right shift and scale that you need to apply. What is the inverse mapping? How do you go from X to Z? Well, the good news is that you can compute all these mu's and alphas in parallel, because once you have the image, you have all the X's, so you can compute all the mu's and alphas, or the shifts and scales, in parallel. And then you compute the corresponding Z's by just inverting that shift-and-scale transformation. So if you recall, if you want to, you know, compute Z1 from X1, what you do is you take X1, you subtract mu1, and you divide by this exponential, by this scaling. Just like in RealNVP, that's how you do the transformation. And so sampling, you can see, you go from Z to X, and you need to do it one at a time. But because these alphas and mu's depend on the X's, at inference time or during learning you can compute all the mu's and alphas in parallel. And then you can compute the Z's, again, all in parallel, just by shifting and scaling. And then the Jacobian is still lower triangular, and so you have an efficient determinant computation, and so you can evaluate likelihoods efficiently in parallel, just like in an autoregressive model. If you remember, the nice thing about an autoregressive model is that, in principle, you can evaluate all the conditionals in parallel, because you have all you need. Once you have the whole X vector, you can compute all the conditionals, and you can evaluate the loss on each individual component of your random variable.
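To make this flow view of a Gaussian autoregressive model concrete, here is a small sketch. The conditioner made is an assumption: a hypothetical MADE-style masked network that returns all means and log-standard-deviations in one pass, with mu[:, i] and alpha[:, i] depending only on x[:, :i]. With that property, going from X to Z (and evaluating likelihoods) is a single parallel pass, while sampling has to fill in one dimension at a time, exactly as described above.

import torch

def gaussian_ar_log_prob(made, x):
    # x -> z in parallel: invert x_i = mu_i + exp(alpha_i) * z_i for all i at once.
    mu, alpha = made(x)
    z = (x - mu) * torch.exp(-alpha)
    prior = torch.distributions.Normal(0.0, 1.0)
    # change of variables: log p(x) = log p(z) + log |det dz/dx| = log p(z) - sum_i alpha_i
    return prior.log_prob(z).sum(dim=1) - alpha.sum(dim=1)

@torch.no_grad()
def gaussian_ar_sample(made, n, batch=1):
    # z -> x sequentially: the shift and scale for dimension i need x_{<i} first.
    z = torch.randn(batch, n)
    x = torch.zeros(batch, n)
    for i in range(n):
        mu, alpha = made(x)                     # only column i is valid and needed here
        x[:, i] = mu[:, i] + torch.exp(alpha[:, i]) * z[:, i]
    return x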
And the same is true here. So you can basically define a model inspired by autoregressive models, which is called a masked autoregressive flow, MF, that basically transforms simple random variables, Z into X, or equivalently, X to Z's. And if you parameterize it this way, then you can get efficient learning, basically, because you can go from X to Z very efficiently in parallel. But as expected, sampling is slow, because it's just an autoregressive model at the end of the day. So if you want to sample, you to go through this process of, basically, transforming each individual ZI variable one at a time. So this is basically just interpreting an autoregressive model as a flow model, and it inherits the properties of autoregressive model, which is the same model. So sampling is sequential and slow, but as expected, you can evaluate likelihoods, because it's just basically a change of variable formula, and so it's possible to actually compute all these. The likelihood is exactly like in an autoregressive model. And so this is another way of building a flow model, which is basically, you start with an autoregressive model, a continuous one, and then you can essentially think of it-- at least if it's a Gaussian autoregressive model, you can interpret it as a continuous-- as a normalizing flow model. The other thing you can do is, if you need a model that you can sample from efficiently, we know that one of the issues with autoregressive models is that sampling is slow, because you have to generate one variable at a time. If you just-- once you start thinking of an autoregressive model as a flow model, you can just turn this picture around and call the X a Z and the Z an X. And at that point, it's just another invertible transformation. So which one is the input? Which one is the output? Doesn't actually matter. It's just an invertible neural network, and you can use it one way or you can use it the other way, and it's still an invertible neural network. And if you do that, you get something called an inverse autoregressive flow, which is basically just the same, neural network used in the other direction, where if you do it in the other direction, now you are allowed to do the forward mapping from Z to X in parallel. So you can actually generate in a single shot, essentially. You can generate each component of the output in parallel without waiting for the previous or the previous entries. Because we know that the computation in that direction is parallel, basically, you can sample all the Z's independently from the prior. And if the mu's and alphas depend on the Z's, then you already have them, and you can compute all of them, again, in parallel. And then you just shift and scale all the outputs by the right amount, and then you produce a sample. And so if you basically flip things around, you get a model where you can do very efficient sampling. It's no longer sequential, like an autoregressive model, but everything can be done in parallel. Of course, the downside of this is that now, inverting the model is sequential. So it's still an invertible mapping, but now if we want to go from X to Z, let's say, because we want to train this model, so we want to do maximum likelihood training, then we need to be able to go from images, let's say from X, to latent variables. You have to be able to do it for every single data point. And if you try to figure out, what does the computation graph look like? 
You can see that it becomes sequential, because what you have to do is you have to shift-- you have to compute Z1 by inverting this relationship. So you take the first pixel, you shift it and scale it, and you get the new latent variable. Now you can use that latent variable to compute the new shift and scale for the second dimension. These mu's, they still depend on the alphas, and the mules depend on the previous variables. So now that you have Z1, you can compute mu2 and alpha 2. And now you can shift and scale X2 to get Z2. And now you can use Z1 and Z2 to compute the new shift in scale and so forth. So that's basically the same thing you would have to do when you were-- that you normally do when you sample from an autoregressive model. So you have to generate one variable at a time. Here you have to invert one variable at a time before you can invert the next. And so this is a great model that allows you to sample very efficiently, but it's very expensive to actually compute likelihoods of data points. So this would be a tricky model to use during training, because you would have to go through each individual variable to be able to invert and to be able to compute, to be able to compute likelihoods. The good thing is that it's actually fast to evaluate likelihoods of a generated point. So if you generate the data yourself, then it's easy to evaluate likelihoods because you already have all the Z's. Then you map them to X, which you can do efficiently if you store the latent vector that you use to generate a particular data point, then you don't have to recompute it. You already have it, so you can actually evaluate likelihoods of data points you generate yourself very efficiently. Because all you need is you need to be able to evaluate the likelihood of these, Z1 through Zn and the prior. You need to be able to evaluate the determinant of the Jacobian, which depends on these alphas and which you can compute, because you have all the Z's to begin with, if you generate the data point yourself. And we'll see that this is going to be somewhat useful when we talk about how to distill models, so that if you have a model that is maybe autoregressive and is slow to sample from, we're going to see that it's possible to distill it into a model of this type, so different kind of flow that, after you train a model, like a student model that is much faster than the teacher model that you train autoregressively and can generate in one shot, in parallel, like this. And yeah, this property at the end here, the fact that you can evaluate likelihoods of points you generate yourself is going to be useful when we talk about that. And again, like these two normalizing flows, MAF, IAF, are actually the same model, essentially. It's just, if you swap the role of X and Z, they are essentially the same, the same kind of thing. If you think of it from the perspective of MAF, then you compute the X, the alphas, and the mu's, as a function of the X's. And that's the way you would do it in an autoregressive model. If you just flip things around, you can get an inverse autoregressive flow by just having the mus and the alphas depend on the Z's, which is basically what you get if you relabel Z and X in that figure. And so that's another way to get a flow model, is to basically start with a Gaussian autoregressive model, and then you can get a flow model that way. And yeah, so they're essentially do-- they're essentially the same thing. And so the trade-offs, like our MAF, it's basically an autoregressive model. 
So you have fast likelihood evaluation, slow sampling, one variable at a time. IAF is the opposite because it's the reverse, so you can get fast sampling, but then you have slow likelihood evaluation. MAF is good for training because we need to be able to evaluate likelihoods efficiently for every data point if we want to do maximum likelihood training. And so MAF is much better for that. On the other hand, if you need something where you need to be able to generate very, very quickly, IAF would be a better kind of solution. And the natural question is, can we get the best of both worlds? And that's what they did with this parallel WaveNet, which used to be a state of the art model for speech generation. And the basic idea was to start with a really good autoregressive model, which is just a MAF, basically, and then distill it into an IAF student model that is going to be hopefully close to the teacher and is going to be much faster to generate samples from. And so that's basically the strategy they used. They used a MAF, which is just an autoregressive model, to train a teacher model. You can compute likelihoods efficiently. It's just an autoregressive model, so it's easy to train the usual way. And once you've trained this teacher, you can train a student model to be close to the teacher. But because it's an IAF model by design, it would allow you to sample much more efficiently. And the key observation that we mentioned before is that you can actually evaluate likelihoods on your own samples. So if you generate the samples yourself, you can actually evaluate likelihoods efficiently. And then basically, one way to do it is this objective function, which is basically based on KL divergence, where what you would do is you would first train the teacher model by maximum likelihood. This is your autoregressive model that is expensive to sample from. And then you define some kind of KL divergence between the student distribution, which is an IAF model, efficient to sample from, and the teacher model. And this is just the KL divergence between student and teacher. And it's important that we're doing it in this direction. You could also do the divergence teacher-student, but here we're doing the divergence student-teacher. And the KL divergence, if you expand it, basically has this form. And you can see that this objective is good for training, because what we need to do in order to evaluate that objective and optimize it is we need to be able to generate samples from the student model efficiently. The student model is an IAF model, so it's very easy to sample from. We need to be able to evaluate the log probability of a sample according to the teacher model. That's, again, easy to do, because it's just an autoregressive model, so evaluating likelihoods is easy. To evaluate the likelihood of the data point that you generate yourself using the student model, which is what you need for this term, again, that's efficient to do if you have an IAF model, because you've generated the samples yourself, so you know the Z, so you know how to evaluate likelihoods. And so this kind of objective is very, very suitable for this kind of setting, where the student model is something you can sample from efficiently, you can evaluate likelihoods on your own samples efficiently, and then you have a teacher model for which you can evaluate likelihoods. Maybe it's expensive to sample from, but we don't care, because we never sample from the teacher model.
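A sketch of the distillation objective being described, under stated assumptions: student.sample_with_log_prob and teacher.log_prob are hypothetical interfaces standing in for an IAF student (cheap sampling, and cheap likelihoods on its own samples) and a trained autoregressive teacher (cheap likelihoods). This only sketches the KL term discussed here; the actual Parallel WaveNet objective adds further terms.

import torch

def distillation_loss(student, teacher, batch_size=16):
    # Monte Carlo estimate of KL(student || teacher) = E_student[log q(x) - log p(x)].
    x, log_q = student.sample_with_log_prob(batch_size)  # fast: parallel IAF sampling
    log_p = teacher.log_prob(x)   # teacher parameters stay frozen, but gradients still
                                  # flow through x (reparameterized student samples)
    return (log_q - log_p).mean()

# Training sketch: only the student is updated; the teacher stays fixed.
# opt = torch.optim.Adam(student.parameters(), lr=1e-4)
# for step in range(num_steps):
#     loss = distillation_loss(student, teacher)
#     opt.zero_grad(); loss.backward(); opt.step()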
You just need to be able to do good [INAUDIBLE] training, which we know we can do with autoregressive models. And to the extent that this KL divergence is small, then the student distribution is going to be close to the teacher distribution. So if you sample from the student model, you're going to get something similar to what you would have gotten if you were to sample from the teacher model, but it's much, much faster. And all the operations that you see there, they can be implemented efficiently. And that's what they did for this parallel WaveNet. You train a teacher model by maximum likelihood, and then you train a student IAF model to minimize this KL divergence. And at test time, you throw away your teacher. What you put on mobile phones to generate samples very efficiently is the student model. I am wondering, in the computation, we use the IAF first to sample and then to assign density. Then, doing backpropagation, do we also update the parameters for the density calculation part of the student, not only the sampling part? Yes, so you do optimize this function. So you do need to optimize both, including the sampling, but you don't need to optimize T. T is fixed. And because everything can be reparameterized, you can still backpropagate through that, because essentially, it's just like a big reparameterization trick that you're doing on the student model: it's just starting with simple noise and then transforming it. And so it's easy to figure out, if you were to update the parameters of the student model, how would the sample change? You can do it in this case, because it's the same as reparameterization. And yeah, that's what they did, and they were able to get very, very impressive speed-ups. This was, yeah, a paper from Google a few years ago, and that's how they were able to actually deploy the models in production. They trained a really good teacher model by training it autoregressively. That was too slow to generate samples from, but then, by thinking of it from this flow model perspective, there was a pretty natural way of distilling it down into something similar but that has the opposite property of being able to sample efficiently, even though you cannot get likelihoods. If you just care about inference, you just care about generation, that's a more convenient way of parameterizing the family of distributions. Is it possible to do something similar for language models? That would be great. The question is, can we do this for language models? The problem is that a language model is discrete, and so you can't necessarily think of it as a flow model, and so you can't really think of sampling from a language model as transforming a simple distribution, at least not in a differentiable, invertible way, because the X is discrete, and so there is not really a way to transform a continuous distribution into a discrete one. And so you can't do it this way, basically, unfortunately. [INAUDIBLE] why not being continuous prevents doing this flow model? Yeah, flow models are only applicable to probability density functions, so you cannot apply them to probability mass functions, where you would have discrete random variables. So it's only applicable to continuous random variables, because otherwise the change of variables formula does not apply, so you cannot use it anymore. So you cannot do the IAF distillation? Essentially, yes. OK. Cool, so that's another family.
And now for the remaining few minutes, we can just go through a few other options that you have if you want to build invertible mappings. One natural thing you might want to do, if you start thinking of autoregressive models as basically flow models, is this: we know that you can use convolutional networks in autoregressive models as long as you mask them in the right way. And so the natural thing you can ask is whether it's possible to define invertible layers that are convolutional in some way, because we know convolutions are great. And by itself, a convolution would not be invertible, but if you mask it in the right way, you can get the computation structure of an autoregressive model, and you can build up a layer that is actually invertible. And if you do things in the right way, you can actually make it such that it's not only invertible, but you can actually evaluate the determinant of the Jacobian efficiently. And like in autoregressive models, like in PixelCNN, really all you have to do is mask the convolutions so that there is some kind of ordering. So then it would not only allow you to invert things more efficiently, but it would also allow you to compute the determinant of the Jacobian efficiently, because it basically makes the Jacobian lower triangular, and so then we can compute the Jacobian's determinant efficiently. And basically, what you can do is you can try to enforce certain conditions on the parameters of the neural network so that the transformation is guaranteed to be invertible. And you can read the paper for more details, but essentially, what it boils down to is something like this, where if you have a three-channel input image, like the one you have on the left, and you have, let's say, a 3-by-3 convolutional kernel, which looks at, let's say, R, G, and B, what you can do is you can mask the parameters of that kernel, which in this case is just like this cube. That is a cube for the three channels, and you can mask them so that you only look at the pixels that come before you, basically, in the ordering. So you can see the receptive fields of these kernels here on the right. And when you produce the three values in the three channels, they are produced by a computation that is basically consistent with this ordering. And you can see that, just like in the PixelCNN, you have to decide which color you start from. And then you have to be causal, also, with respect to the channels that you have in the image. And yeah, so basically, there are ways to define convolutional kernels that would give you invertible mappings, and you're losing something because of the receptive fields: you're no longer able to look at everything in the image. You're restricted in what you can look at. But what you gain is that you get a tractable Jacobian, basically. And you can build a flow model by stacking these kinds of layers, and this works reasonably well. Here are some examples: MNIST, CIFAR-10, and ImageNet samples that you get by training, basically, a flow model where you have all these convolutional layers that are crafted in a certain way so that the filters are basically invertible. The other quick thing I wanted to mention is a different perspective on what happens when you train a flow model, this idea that you can either think about training a model such that the distribution of the samples that you get is close to the data distribution, or think about it in a second, dual way, which I'll come back to right after the sketch below.
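Before getting to that second perspective, here is a minimal sketch of the masking idea just described, in the PixelCNN style: zero out the kernel entries that would look at "future" pixels so the computation respects an ordering. This is illustrative only; it ignores the cross-channel (R, G, B) causality mentioned above and is not the exact parameterization used in the invertible-convolution papers.

import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    # Mask type "A" also hides the current pixel; "B" allows it.
    # Usage sketch: MaskedConv2d("A", in_channels=3, out_channels=16,
    #                            kernel_size=3, padding=1)
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        assert mask_type in ("A", "B")
        kH, kW = self.weight.shape[-2:]
        mask = torch.ones_like(self.weight)
        mask[:, :, kH // 2, kW // 2 + (mask_type == "B"):] = 0  # to the right of center
        mask[:, :, kH // 2 + 1:, :] = 0                          # rows below center
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Apply the convolution with the masked kernel, so each output location
        # only depends on pixels above and to the left in the raster ordering.
        return nn.functional.conv2d(
            x, self.weight * self.mask, self.bias,
            self.stride, self.padding, self.dilation, self.groups)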
That second way is to think of training the model as basically saying: if I were to transform my data according to the inverse mapping, I should be getting something that is close to the prior of the flow model, such as, for example, a Gaussian distribution. And so you can use this dual perspective to construct other kinds of layers, where every layer is basically trying to make the data look more Gaussian, essentially. And the basic intuition is something like this. If you have a flow model where you transform a Gaussian random variable into data X, and then you have some true data distribution, so a true random variable X tilde, which is the one that is distributed really according to the data, if you do maximum likelihood training, what you do is you minimize the KL divergence between the data distribution and the distribution that you get by sampling from this model, by transforming Gaussian noise through this invertible mapping F theta. Or equivalently, you're minimizing the KL divergence between the distribution of the true X tilde, which is distributed according to the data, and this new random variable, X, that you get by transforming Gaussian random noise. This is basically saying that if you take Gaussian samples and you transform them through this mapping, you should get something close to the data. Equivalently, because of properties of the KL divergence, which is invariant to invertible transformations, you can also think of this as trying to minimize the KL divergence between what you get by transforming the true data according to the inverse mapping and what you get by transforming the samples through the inverse mapping. And we know what we get. If we transform samples through the inverse mapping, we get the prior. And so equivalently, you can think of training a model as transforming this random vector, X tilde, which is distributed according to the data, into one that is distributed as a Gaussian. And so you can think of the flow model as basically Gaussian-izing the data. You start out with a complicated distribution. And if you go through the flow in the backward direction, you're mapping it into something that has to look like a Gaussian. And how do you achieve this? One natural way of doing it, at least if you have one-dimensional data, is through the CDF and its inverse. And so, going through this quickly because I don't have time: if you have a random variable that has some kind of data distribution, there is going to be a CDF for the data distribution. And if you apply the CDF of the data distribution to this random variable, you're going to get a uniform random variable. That's basically one of the ways to sample from a random variable with known CDF: you sample uniformly, you invert the CDF, and you get a sample from X tilde. And so basically, this kind of transformation, where you are transforming a data sample through the CDF, is a way to whiten the data. It's like the thing you would do by subtracting the mean and dividing by the standard deviation, something similar. If you apply this kind of transformation, you get something that is uniform. So it's guaranteed to be between 0 and 1. And once you have a uniform random variable, what you can do is you can apply the inverse CDF of a Gaussian, and you can transform it into something that is exactly Gaussian.
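Here is the two-step recipe just described, run numerically in one dimension: estimate the data CDF empirically, push the data through it to get something approximately uniform, then push that through the inverse Gaussian CDF. The exponential data and the rank-based CDF estimate are just illustrative choices.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=10_000)     # clearly non-Gaussian data

# Empirical CDF evaluated at each point (ranks rescaled into (0, 1)).
ranks = x.argsort().argsort()
u = (ranks + 0.5) / len(x)                      # approximately Uniform(0, 1)

z = norm.ppf(u)                                  # inverse Gaussian CDF -> approx N(0, 1)
print(z.mean(), z.std())                         # should be close to 0 and 1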
And this, basically, the composition of the true CDF of the data and the inverse CDF of a Gaussian will transform any random vector into a Gaussian one. And that's basically the idea of Gaussian-izing flows, is that you stack a bunch of transformations trying to make the data more and more Gaussian. And I guess I'm going to skip this, but if you know about copula models, these are a famous kind of statistical model. It's often used in Wall Street. You can think of it as a shallow kind of normalizing flow where you only apply one layer of Gaussianization each individual dimension. So you start with data that is not Gaussian-distributed, then you apply this Gaussian CDF trick to basically make each individual dimension Gaussian. And even though jointly, it's not Gaussian, that's your model of the data. And then you can stack them together, and then you keep doing this thing and you apply some rotations, then you can transform anything into a Gaussian. And this is another way of basically building invertible transformations. But I'm out of time, so I think this is a good point to stop.
Stanford CS236: Deep Generative Models (2023), Stefano Ermon
Lecture 3: Autoregressive Models
All right, so let's get started. The plan for today is to talk about autoregressive models, which is going to be the first type of or first family of generative models that we're going to consider in the class. This is the kind of technology behind large language models, things like ChatGPT. So yeah, just as a recap, remember, sort of like this high level overview. Whenever you want to train a generative model, you need data. So samples from some IID, unknown probability distribution P-data. And then you need to define a model family, which is going to be a set of probability distributions over the same space over which your data is defined. And these probability distributions are typically parameterized somehow. For example, using it could be conditional probability tables in the case of a Bayesian network, as we have seen in the last lecture. For the most part, we're going to be thinking about probability distributions that are defined in terms of neural networks. So you can think of theta there in that picture as being kind of like the parameters of the neural network that you're going to use to define this probability distribution. And then you're going to define some sort of notion of similarity or divergence between the data distribution and your model distribution. And then we're going to try to optimize the parameters of the neural network to make your model distribution as close as possible to the data distribution. The caveat being that you only have access to samples from the data distribution, right? So you can't evaluate the probability of an image under the data distribution. The only thing you have access to are a bunch of samples. And once you have this probability distribution, then you can do several things. You can sample from it. So you can choose a vector x with probability. There's many different x's that you could choose from. Each one of them is assigned a probability by your model. And you can choose one according to this probability distribution. So you sample from it. And this is what you need to generate new data. We're going to be interested in evaluating probabilities for several reasons. One is that evaluating probabilities is useful for training the models. So if somehow you have a way of figuring out how likely is any particular image according to your model, then that gives you a pretty natural way of training the model, kind of like solving this optimization problem or trying to find the point that is as close as possible to the data distribution. And one way to do that is to just do maximum likelihood. You can try to find the parameters of your model that maximize the probability of observing a particular data set. The other thing you can do, if you have access to probabilities, is you can do things like anomaly detection. So given an input, you can see, is this input likely or not, so kind of like what we discussed in the last lecture. One advantage of generative models compared to discriminative models is that you can reason about the possible inputs that you might be given access to. So you might, for example, try to detect adversarial examples. Because perhaps they are different from the natural images that you've used for training your model. And so if your generative model is good, you might be able to identify that something is odd about a particular input. Maybe the likelihood is lower than it should be. And so you can say, OK, this is perhaps an anomaly. 
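As a tiny illustration of the anomaly-detection use just mentioned: if a trained generative model exposes a log_prob method (an assumed interface here), inputs whose likelihood falls well below what is typical for training data can be flagged. The threshold is purely illustrative and would have to be calibrated on held-out data.

import torch

def flag_anomalies(model, x_batch, threshold):
    log_px = model.log_prob(x_batch)   # log-likelihood of each input under the model
    return log_px < threshold          # True where the input looks unusually unlikely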
Maybe I shouldn't be very confident about the kind of decisions or the kind of predictions that I make about this particular data point. And as we discussed another thing you can do is potentially unsupervised representation learning. And so in order to do well at learning a good approximation of the data distribution, you often need to understand the structure of the data. And so in some cases, it's going to be a little bit tricky for autoregressive models, which is what we're going to talk about today. But for other types of models, it's going to be pretty natural. There's going to be a pretty natural way of extracting features as a byproduct, basically, of training a good generative model. So the first question is, how to represent these probability distributions. So how do you define this set in a meaningful way? And today we're going to talk about autoregressive models, which are built on the idea of using Chain Rule, essentially. And next we're going to talk about how to learn it. So recall that there is this general result that you can take any probability distribution defined over an arbitrarily large number of variables n and you can always factor it as a product of conditionals. So if you have four random variables, x1 through x4, you can always write it down as the probability of x1, the probability of x2 given x1, and so forth. And this is just fully general. You don't need to make any assumptions on the distribution. Every distribution can be factorized this way exactly. And in particular, you can also use any ordering you want. So in this case, I'm factorizing it based on the ordering x1, x2, x3, and x4. But you could choose a different ordering. So you could decide, you could write it down as the probability of x4 times the probability of x3 given x4, and so forth. And here you start to see that, yeah, in general, you can always do it. But perhaps some orderings might be better than others. So if there is some kind of natural causal structure in the data, then perhaps modeling the data along that direction is easier. But the Chain Rule doesn't care. It works regardless of whatever ordering you're going to use. Bayes Net essentially exploit this idea. And they make progress by basically simplifying these conditionals. So we've seen that, in general, even when the random variables are discrete, representing those conditionals as tables doesn't scale, doesn't work. And so Bayesian networks essentially make some kind of conditional independence assumption. They assume that certain things are conditionally independent from other things. And then that gives you potentially simpler factors that you can represent as tables. And the other way to go about it is to use a neural model where instead you're going to give up on the tabular representation. So it's no longer a lookup table. Now it's going to be some kind of function parameterized by a neural network that you're going to use to map different kind of assignments to the variables you're conditioning on to parameters for the conditional distribution over the next variable in this ordering that you're using. So in this kind of neural models, what we're going to do is we're going to start from Chain Rule, and then we're going to try to approximate the true conditionals using neural networks. And this works to the extent that the neural network is sufficiently powerful that it can well approximate these conditional probabilities, which could be potentially very complicated. 
If you think about those as tables, there could be really complicated relationships between the entries in the table. And this kind of factorization using neural models works to the extent that the neural network is sufficiently flexible that it can capture the structure of what you would get if you had a fully [? general ?] tabular representation. And the good news is that a sufficiently deep neural network can, in principle, approximate any function. And so that's kind of where the magic of deep learning comes in. If you can use very deep neural networks, there's a good chance you might be able to actually come up with a decent approximation to these conditionals. And that's why these models tend to work in practice. So remember that the machinery that we're going to use is going to be the same as the one you use in regular, let's say, classification. So you want to predict a binary label, given a bunch of input features. You just care about the conditional distribution of a single variable, given a potentially large number of other variables. But the important thing is that you're just trying to predict one thing at a time, a single variable Y. And so you can use things like logistic regression or neural networks to do these kind of things. And in particular, we've seen that logistic regression is kind of like assuming a relatively simple dependency between the values of the covariates x, or the features that you are conditioning on, and the conditional probability of Y given x. It's basically assuming that there is a linear dependency that then is fed through a sigmoid to get a non-negative number that has the right kind of normalization. And you can make things more flexible by assuming some kind of non-linear dependence. And that's where you use neural networks, right? So you can take your inputs x, you can transform them by applying linear transformations, non-linearities. You can stack them in any way you want. And then at the end of the day, you still have some sort of transformation that gives you the parameters of this conditional distribution over what you're trying to predict, given what you have access to. And so maybe at the end, you use some kind of sigmoid function or a softmax function to basically normalize the output to a probability distribution. So it's more flexible. You have more parameters, which is good because the model, you can capture a richer set of dependencies between the variables. The price you pay is that you have more parameters to learn. You need more memory. And you might imagine that you might need more data. Cool. So that's the building block. And then basically, the whole idea of autoregressive models is that once you know how to predict one thing using a neural network, you can combine them. And you can always think of a high-dimensional output-- let's say an image-- as a number of individual components. And Chain Rule gives you a way of predicting the individual components, given the previous ones. And so then you can plug in your neural network to get a generative model. And that's what neural autoregressive models essentially do. All right? So for example, let's say that you wanted to learn a generative model over images. So just for simplicity, let's say that you wanted to work with the binarized MNIST. So MNIST is kind of like a classic data set of handwritten digits. So if you binarize them so that every pixel is either 0 or 1, black or white, then they might look like this. So you see that they kind of look like handwritten digits. 
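For concreteness, this is roughly what the binarized MNIST setup looks like in code, using torchvision's standard MNIST loader; the 0.5 threshold is just one common choice for binarizing the pixels.

import torch
from torchvision import datasets, transforms

to_binary = transforms.Compose([
    transforms.ToTensor(),                          # pixel values in [0, 1]
    transforms.Lambda(lambda t: (t > 0.5).float())  # threshold to {0, 1}
])
mnist = datasets.MNIST(root="./data", train=True, download=True, transform=to_binary)
x0, label0 = mnist[0]
print(x0.shape)   # torch.Size([1, 28, 28]) -> 784 binary random variables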
And each image has 28 by 28 pixels. So you have 28 times 28 random variables to model. And the variables are binary-- 0, 1, black or white. And the goal is to basically learn a probability distribution over these 784 random variables such that when you sample from it, the images that you get hopefully look like the ones that you have in the training set. Or in other words, you're hoping that the distribution that you learn is a good approximation to the data distribution that generated these samples IID, independent identically distributed samples that you have access to in the training set. And again this is challenging because there's a lot of possible images. And you need to be able to assign a probability to each one of them. And so recall the recipe is you define a family of probability distributions parameterized by theta, which we're going to see in this lecture. And then you define some kind of learning objective to search over the parameter space to do some kind of optimization. Reduce the learning problem to optimization over theta, over the parameters that define the distribution to try to find a good approximation of the data distribution, which is going to be the next lecture. So the way to use an autoregressive model to define this probability distribution is you first need to pick an ordering. So remember if you want to use Chain Rule, you have to pick an ordering. And for an image, it's not even obvious what the ordering should be. There is not an obvious kind of causal structure. Like, you're not modeling a time series where you might expect that there is some causal structure and maybe predicting the future given the past is easier than going backwards. But any ordering works, in principle. And so for example, you can take a raster scan ordering. And so you can go from top-left to bottom-right. You can order the 784 pixels that way. And then you can apply chain rule to this probability distribution. And so you know that without loss of generality, there is always a way to write down this distribution that way, basically as the probability of choosing an arbitrary value for the first random variable and then choosing a value for the second, given the first, and so forth. And so that's how you break down a generative modeling problem that is tricky to a sequence, a small number of classification, regression, something we know how to handle. Each one of these conditionals is only over a single random variable. And that's the kind of setting you know how to deal with or you typically consider when you think about classification, regression, those kind of problems. And you cannot do tabular form. So a Bayesian network is out of the question here. And so instead we're going to try to basically model these conditionals using some kind of neural model, some kind of functional form that will allow us to map the different configurations of the pixels we are conditioning on to a probability distribution over the next pixel that we need to work with in this particular ordering that we've chosen. And so in particular, I mean, if you think about the first probability distribution, you can represent it as a conditional probability table. That's just a binary random variable. You just need one parameter for that. So that's why I'm saying PCPT here means that you can actually store that one separately. But the other ones become complicated. And so you kind of have to make some sort of approximation. And one simple thing you can do is to just use logistic regression. 
So you can try to use logistic regression to basically predict the next pixel, given the previous pixels. And that gives you a generative model, basically. And if you do that, notice that you don't have a single classification problem. You have a sequence of classification problems. Like, you need to be able to predict the second pixel, given the first one. You need to be able to predict the third pixel, given the first two. You need to be able to predict the last pixel, the one in the bottom right, given everything else. And so all these classification problems are basically different and separate. You even have a different number of covariates or variables that you are conditioning on. And so, in general, you can potentially use different parameters, different models for each one of them. And this is kind of like what I'm alluding here. There is a different vector of coefficients alpha for your logistic regression model for each classification problem. And so more explicitly, for example, you would have the first prior distribution over the first pixel, which is just a single number. It tells you how often do you choose the first pixel to be white versus black. So if you think about the structure of these images, this pixel here at the top-left is almost always black. So you probably would want to choose this number to be close to 0, assuming 0 means black. Sort of like you want that pixel to be often black. And then you need to be able to specify a way of predicting the second pixel given the first one. And you can do it using a simple logistic regression model and so forth. Right? And that's a modeling assumption. Whether or not this type of generative model works well depends on whether or not it's easy to predict the value of a pixel given the previous ones in this particular arbitrary order that I've chosen for the pixels. And whether this works, again it depends on how good this approximation is. So it might work well or it might not work well. Because maybe these dependencies are too simple. Maybe regardless of how you choose this alpha, there is not a good way of figuring out how you should choose the value, whether or not a pixel is white or black in this case. But you can think of it as an autoregressive model. Because essentially what you're doing is you're trying to regress. You're trying to predict the structure of the data itself, right? So you're regressing on yourself. Like, you're trying to predict parts of each data point, given other parts of the data point. And this kind of modeling assumption has been tried before. This kind of model is called a fully visible sigmoid belief network. It's kind of like a relatively simple early type of generative model that as we'll see, is not going to work particularly well. But it's kind of useful to work it through so that you get a certain level of understanding of exactly what it means to model a joint distribution in terms of simple classification models. So when you think about what we're doing here, when you think about chain rule, we have all these individual pixels that we're modeling conditionally on all the ones that come before it in the order. And so when you model the probability of Xi given all the variables that come before it in the ordering, let's say, using a logistic regression model, you're basically outputting the conditional probability of that pixel being on or off, given the values of the previous pixels. 
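Here is a minimal sketch of the fully visible sigmoid belief network being described: one logistic regression classifier per pixel, each conditioning on the pixels x_{<i} that come before it in the ordering. The sample method generates one pixel at a time, which is exactly the recipe walked through in the next part of the lecture; the parameterization details are illustrative.

import torch
import torch.nn as nn

class FVSBN(nn.Module):
    def __init__(self, n):
        super().__init__()
        self.n = n
        # Classifier i takes the i previous pixels as input; the first "classifier"
        # has no real inputs, so it reduces to a single bias (the prior over pixel 1).
        self.cond = nn.ModuleList([nn.Linear(max(i, 1), 1) for i in range(n)])

    def _logit(self, x, i):
        prefix = x[:, :i] if i > 0 else torch.zeros(x.shape[0], 1)
        return self.cond[i](prefix).squeeze(-1)

    def log_prob(self, x):
        # x: (batch, n) float tensor of 0s and 1s
        logp = 0.0
        for i in range(self.n):
            bern = torch.distributions.Bernoulli(logits=self._logit(x, i))
            logp = logp + bern.log_prob(x[:, i])
        return logp

    def sample(self, batch=1):
        # Generate one pixel at a time, feeding previous samples back in.
        with torch.no_grad():
            x = torch.zeros(batch, self.n)
            for i in range(self.n):
                p_on = torch.sigmoid(self._logit(x, i))   # p(x_i = 1 | x_{<i})
                x[:, i] = torch.bernoulli(p_on)
            return x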
And we're often going to denote this using this symbol here, x is smaller than i, which basically means given all the indexes j are strictly smaller than i. And in the case of logistic regression, that conditional probability is given by this relatively simple expression, a linear combination. And then you pass it through a sigmoid. Now, how would you evaluate? If somebody gives you a data point and you want to know how likely is this data point according to my model, which is the kind of computation you would have to do if you want to train a model by maximum likelihood, how would you evaluate that joint probability, given that somehow you have all these values for alpha? So what you would have to do is you would go back to chain rule. So you basically just multiply together all these factors. And so more specifically, the first pixel X1 will have a value. Well, I guess here I have an example with, let's say, imagine that you only have 4 pixels. There's four random variable. And let's say that we are observing the value 0, 1, 1, 0. Then, you basically need to multiply together all these values, which are basically the predicted probability that a pixel takes a particular value given the others. And these predicted probabilities depend on the values of the previous pixels in the ordering. Right. So x-hat-i, which is the predicted probability for the i-th pixel, depends on all the pixels that come before it in the ordering. So a little bit more explicitly, it would look something like this where you would have to compute the conditional probability of the second pixel when the first pixel is 0. You would have to compute the conditional probability of the third pixel being, let's say, in this case, given that the previous two are 0 and 1, and so forth. And then you would basically replace that expression here for x-hat with the standard sigmoid logistic function thing. And that would give you the number. How would you sample from this distribution? So let's say that somehow you've trained the model and now you want to generate images according to this model. The good thing about an autoregressive model is that it also gives you a recipe to sample from it. In general, it might not be obvious how you do this. Like, OK, you have a recipe to evaluate how likely different samples are. But then how do you pick one with the right probability, right? So would you randomly generate one image, evaluate the probability, and then do some sort of rejection sampling? You could do things like that. You could use generic kind of like inference schemes. If you have a way of evaluating probabilities, you could try to even brute force and kind of like invert the CDF and try to do something like that that, of course, would never scale to the situation where you have hundreds of random variables. The good news is that you can basically do it, you can use chain rule again and decide the values of the pixels one by one. So what you would do is, we know what is the prior essentially probability that the first pixel is on or off. And we can just pick a value for the first pixel. Now, once we know the value of the first pixel, we know how to figure out a value, probabilistically, for the second pixel. So we can plug it into the previous expression. Or you could do something like this, just to be very pedantic. There is some prior probability. And perhaps you always choose it to be black because all the images are like that. But then you pick a value. 
And then you basically sample the second random variable, given the conditional distribution. And this conditional distribution, you can get the parameter by using this expression. So the logistic regression model will try to predict the second pixel, given the first one. And you're going to get a number from this. And then you can sample from it. Then you have two pixels now that you've chosen values for. Then you can feed them to the next logistic regression model. And you can keep generating the image one pixel at a time. So that's the recipe. And it's good news, because sampling is, to some extent, easy. I mean, it's not great, because you have to sequentially go through every random variable that you're working with. But it's better than alternatives like having to run Markov chain Monte Carlo methods or other more complicated techniques that we might have to resort to for other classes of models. The good news is that for these kinds of models, sampling is relatively easy. Conditional sampling might not be. So if you wanted to sample pixel values based on-- you know, if you wanted to do inpainting, because you already have a piece of the image and you want to generate the rest. Depending on what you know about the image, it might be easier or it might be hard. So it's not straightforward. The fact that you can do this efficiently is a nice benefit of these types of models. OK. Now, how many parameters do we have? So if we have a bunch of alpha vectors, these alpha vectors have different lengths, because they are logistic regression models of different sizes, basically. Any guess? For this model, basically, let's say two parameters, and for this one we have 3, and then 4 and 5. It's n-squared. Yeah. [INAUDIBLE] It's roughly n-squared-- 1 plus 2 plus all the way up to n. Right. So potentially not great, but maybe manageable. Cool. Now, as I kind of mentioned before, this doesn't actually work particularly well. So now I don't have the results on MNIST. But if you train it on this Caltech 101 data set, the samples are on the left. And you can see that they kind of have shapes. Like, there are objects of different types. And then you can train this simple model based on logistic regression classifiers. Then you can sample from it. And you get these kinds of blobs. So not great. And the reason is that basically the logistic regression model is not sufficiently powerful to describe these potentially relatively complicated dependencies that you have on the pixel values. So how can we make things better? Let's use a deeper neural network, right? That's the natural thing to do. And if you do that, you get a model that is called NADE, neural autoregressive density estimation. And the simplest thing you can do is to just use a single layer neural network to replace the logistic regression classifier. And so what would it look like? Basically, what you do is, for every index i, for every pixel, you take all the previous pixel values and you pass them through first a linear layer, then some non-linearity. And then you pass what you get-- these features, these hidden vectors-- through a logistic regression final output layer, and that would give you the parameters of this Bernoulli random variable. So it will tell you the probability that the i-th pixel is on or off. And as you can see, now we have a slightly more flexible model. Because you don't just have the alphas, the parameters of the logistic regression classifier of the final layer of the network.
But now you also have the first layer. So you have a slightly more flexible model. And so it would look something like this. And again, the issue here is that if there are n random variables, you have n separate kind of classification problems. So in general, you could use completely sort of like decoupled models. And so the first model would have, let's say, a single input x1. And so the shape of this matrix would be just a column vector, basically. And then if you have two inputs, x1 and x2, to predict the third pixel, then this matrix would have two columns essentially and so forth. And, yeah? Why do we have two sigmoids? Like, why do we have a sigmoid over h and then sigmoid over-- the second sigmoid makes sense. But why do we have a sigmoid for h? I don't think you necessarily have it to have it. It's just here I'm having an extra non-linearity there. But you don't necessarily need it. Yeah? If you don't have that sigmoid, wouldn't it just-- It'd just be another linear layer, yeah. Yeah. Well, it is gonna be the same [INAUDIBLE].. Yeah. So it's better to have a non-linearity, yeah. But this is just for illustration purposes. You can imagine different architectures. It doesn't have to be a sigmoid. It could be a ReLU. It could be other things. It's just-- yeah? So over here, you have three rows in your A matrix. Are we trying to predict three separate features for our y? I thought it was just one probability. Oh, I see what you mean. So this is just basically a hidden vector h, which could have, it's not necessarily a scalar. That hidden vector is then passed to a logistic regression classifier. And so it's then mapped down to a scalar through this expression here, which might be, so there's a dot product there. Right. And so this in principle all works. But you can kind of see the issue is that we're separately training different models for every pixel, which doesn't seem great. Perhaps there is some common structure. At the end of the day, we're kind of like solving related problems. We're kind of trying to predict a pixel, given part of an image, given the previous part of the image. And so there might be an opportunity for doing something slightly better by tying the weights to reduce the number of parameters and, as a byproduct, speed up the computation. And so what you can do here is you can basically tie together all these matrices, A2, A3, A4, that you would have if you were to think of them as separate classification problems. What you can do is you can basically just have a single matrix and then you kind of tie together all the weights that you use in the prediction problems. We're basically selecting the corresponding slice of some bigger matrix. All right, so before we had this the first matrix that we used to call A2 and then A3 and then A4. And they were completely decoupled. You could choose any values you want for the [? entries ?] of those matrices. What you can do here is you can basically choose the first column to take some set of values. And then you're gonna use that for all the subsequent kind of like classification problems. So you're equivalently kind of trying to extract the same features about x1. And then you're kind of going to use them throughout all the classification problems that you have when you're trying to model the full image. Yeah? Is reducing overfitting also a motivation for this? Yeah. So the question is overfitting also potentially a concern? Yeah, reducing the number of parameters is also good for overfitting issues. 
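A minimal sketch of NADE with tied weights as just described: a single shared matrix W and bias c, where the hidden vector for conditional i uses only the first i columns of W, plus per-pixel output weights. The hidden size and initialization are illustrative choices.

import torch
import torch.nn as nn

class NADE(nn.Module):
    def __init__(self, n, d=500):
        super().__init__()
        self.n, self.d = n, d
        self.W = nn.Parameter(0.01 * torch.randn(d, n))   # shared input weights
        self.c = nn.Parameter(torch.zeros(d))             # shared hidden bias
        self.V = nn.Parameter(0.01 * torch.randn(n, d))   # per-pixel output weights
        self.b = nn.Parameter(torch.zeros(n))             # per-pixel output bias

    def log_prob(self, x):
        # x: (batch, n) float tensor of 0s and 1s
        logp = 0.0
        for i in range(self.n):
            # Conditional i only uses the first i columns of the shared matrix W.
            h = torch.sigmoid(x[:, :i] @ self.W[:, :i].t() + self.c)   # (batch, d)
            logit = h @ self.V[i] + self.b[i]                           # (batch,)
            logp = logp + torch.distributions.Bernoulli(logits=logit).log_prob(x[:, i])
        return logp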
Tying together the classification problems might be good. You might learn a better solution that generalizes better. And as we'll see, it also makes it faster. I'm curious, like, empirically, if it makes more sense to invert your Xs. You're saying, like, you always [? depend ?] the same way based off the last thing you predict instead of, like, saying the n-th term should have the same weight for x1, for example. What's the suggestion? Sorry, I didn't quite-- I guess over here, we're always multiplying the first w1x1w1x1 for every single xi we're predicting. Instead of that, would it make more sense to invert your Xs so that W1 looks at Xi minus 1, W2 looks at Xi minus 2, and so on and so forth? So you're just looking at one preceding entry, two preceding entries, and so on. Oh, that could also work. Yeah, that's a different kind of parameterization. That is more like a convolutional kind of thing, I would say. We're gonna talk about that, too. This is what they did in this particular model. A question about notation. What is the w-dot comma smaller than [INAUDIBLE].. What is the dot? It's just the matrix. Yeah. I don't think you probably didn't need the dot. Or I guess it means the piece of a bigger matrix. I think that was the intended notation. But yeah, you get the idea sort of. And the good news is that this can reduce the number of parameters. So if you have a size d for this hidden vector h that you're using to make the predictions, how many parameters do you need? [INAUDIBLE] n squared over 2 times t? It's no longer quadratic in n. That's the kind of big takeaway. Before we had something that was quadratic in n. Now it's basically linear. Because there's a single matrix that you have to store. And then you can reuse it all the time. All right. So that's good. Now, the other advantage that you have with this kind of model is that you can evaluate probabilities more efficiently. Because basically, remember if you want to evaluate the probability of a data point, you have to evaluate all these conditionals. So you have to go through every conditional and you basically have to evaluate this kind of computation. If there is no structure on the matrices, then you have to redo the computation because there is nothing shared. But if you have some shared structure, then you can reuse the computation. So if you've already computed this dot product, this product here, this matrix vector product here, and then if you are adding an extra column, then you can reuse the computation that you've done before. You can just add in an extra [? column. ?] Is the bias vector c also shared amongst all the hidden layers? Yeah, I guess it could be. Or it doesn't have to be. I think you could make it either way. I think I actually forgot to because it didn't fit. But, yeah, there should be a c. And you could change it, yeah. The weight columns are updated in each step. So like in the fourth step here, it also updates the first and second column. What do you mean update? Like, my understanding was with the weight matrix, you basically build it column by column. But you try to learn by overseeing, like, many examples. So I was wondering, like, as the model learns, it basically updates also in every step all the previous columns that it has learned, right? Yeah, yeah, yeah. So it's all tied together. And then we haven't talked about how you would do learning, but yeah. So then you can see that the first column matters for all the prediction tasks. So you would be able to learn it. 
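The reuse of computation described above can be written out directly: because the weights are tied, the pre-activation for conditional i+1 is the pre-activation for conditional i plus one extra column of W scaled by x_i, so the whole likelihood costs O(n d) instead of O(n^2 d). A sketch, reusing the hypothetical NADE class from the previous snippet:

import torch

def nade_log_prob_fast(model, x):
    batch = x.shape[0]
    a = model.c.expand(batch, model.d)                     # running pre-activation
    logp = 0.0
    for i in range(model.n):
        h = torch.sigmoid(a)                                # hidden units for conditional i
        logit = h @ model.V[i] + model.b[i]
        logp = logp + torch.distributions.Bernoulli(logits=logit).log_prob(x[:, i])
        a = a + x[:, i:i+1] * model.W[:, i]                 # add one column for the next step
    return logp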
You would get some signal from every learning problem. Yeah? Yeah, just to clarify, in this model, you are sharing weights, right? Yeah. And does that imply any assumptions you're making about the data you're looking at? It's an assumption. Again, you're kind of saying that these conditional probability tables could be arbitrary, somehow can be captured by prediction models that have this sort of structure. So somehow there is some relationship between the way you would predict one pixel, different pixels in an image. Whether or not it's reasonable, it becomes an empirical question. I think I have the results here. And it tends to work significantly better than, let's say, the previous logistic regression model. So it does seem like this kind of structure helps modeling natural images or toy kind of images like MNIST. So here you can see some examples. You have MNIST binarized. Or no actually, I don't have the samples for MNIST. Here, what you have here is samples from the model trained on MNIST on the left and the conditional probabilities corresponding to these samples on the right. So remember that when you generate samples autoregressively, you actually get probabilities for each pixel, given the previous ones. And then you sample from them to actually pick a value. And so the images on the left are binary, 0 and 1. The images on the right are kind of soft because for every pixel, you get a number between 0 and 1 that you sample from to generate a color. In this case, 0, 1. And so you can see they look a little bit better because they're a little bit more soft. But you can see that it's doing a reasonable job at capturing the structure of these images. Why are the numbers on the right table look just almost exactly like the left one? Why don't they just create some variation, I mean, some other kind of variations? So the numbers are corresponding to the samples that you see. So basically what this is saying is that what you would actually do when you sample is you would take the first pixel. You have a probability. And then you plot it on the right. Then you sample a value from that on the left. Then based on that value, based on the actual binary value, you come up with a probability for the second pixel, which is just a number between 0 and 1. You plot it on the right image. Then you sample from it and you keep going. So the right table doesn't come from [INAUDIBLE] what [? actually ?] learning? Yeah, it does, it does. So it's basically these numbers, the predicted probabilities for every pixel, which are the x-hat-i, so the probability that pixel is on or off. And they are matching so that's why they look the same. Because the sample that you see on the left is what you get by sampling from those distributions. Yeah? I am noticing that it is agnostic of what the label should be. Is that the right call to make for generation? So the question is, should we take advantage of the fact that maybe we have labels for the data set and so we know that there is different types of digits, that there is maybe 10 digits and then we want to take advantage of that? So here I'm assuming that we don't have access to the label y. If you had access to the label y, you could imagine trying to learn a joint distribution between x and y. And perhaps you would get a better model. Or perhaps you can assume you don't have that kind of structure. You just learn a model. 
And you can try to use the model to see whether it indeed figured out that there are 10 clusters of data points and that there's a bunch of data points that kind of have this shape that look kind of like an oval. And that's a 0. And that's the kind of third point of, how do you get features out of these models? Like, presumably, if you have a model that can generate digits that have the right structure and it generates them in the right proportions, it has learned something about the structure of the images and what they have in common. And so that was kind of like the third point of getting features, unsupervised learning. We'll talk about how to do that. But, yeah, there is two ways to see it. You can either do it unsupervised. Or if you have access to the label, then perhaps you can include it into the model. You can do conditional generation or you can jointly learn a distribution over x and y. Yeah? So in this case, when you sample, you can get any one of the 10 digits? Well, if the model does well, yes. For example, to check whether the model is doing a good job, you could try to see what is the proportion. Like, if in the original training set, all the images, you see an equal proportion of the different digits, then you now apply an MNIST classifier to your samples and you can see does it generate digits in the right proportion? If it doesn't, then there's probably something wrong with the model. If it does, it's doing something right. Whether it's correct or not, it's hard to say. So, like, here it seems like you're injecting the stronger [? prior ?] into the model. So if you had an infinite data set, would you expect the original approach to perform better than this one? The original meaning the? [? That ?] one's got structure imposed by us, right? So the representation it should learn should theoretically be richer? That one is actually more structured. Like, you have less parameters. It's less flexible. If you had infinite data and infinite compute, the best thing would be conditional probability tables, Bayesian network. That one would be able to, in principle, capture any relationship. With infinite data, you would be able to learn that table. That would be perfect. Modular overfitting, I mean, but if you have infinite data, you don't have to worry about that either. I might have missed something. So on the left picture is the actual samples generated from the model. The right is we somehow code the conditional probabilities into a grayscale? Yeah, just between 0 and 1. The conditional probabilities would be numbers between 0 and 1 and [INAUDIBLE].. So [? that's a ?] grayscale. Yeah, it's just a grayscale. Cool. So that's the NADE. Now, you might wonder, what do you do if you want to model color images, let's say? So if the variables are no longer binary, but if they can take, let's say, K different values, maybe pixel intensities ranging from 0 to 255, how do you do it? Now, what you need to do is the output of the model has to be a categorical distribution over however many different values the random variables can take. So you can basically do the same thing. You first get this kind of hidden vector or latent representation h. 
And then instead of applying some kind of mapping it down to just the parameters of a Bernoulli random variable, you can use some kind of softmax output layer to map it down to a vector of-- if you have k different outputs that you care about, a vector of k probabilities, pi-1 through pi-k, which basically would represent the probability that the i-th random variable should take one of the K different values that the random variable can take. And that's the natural generalization of the sigmoid function we had before. It's just one way to take K numbers, which are not necessarily non-negative and they might not be normalized, and it's just a way to normalize them so that they become a valid probability distribution. So specifically, you just do something like this. If you have a vector of arbitrary numbers, you apply the softmax operation. It produces another vector. You apply an exponential to every component to make sure it's non-negative. And then you divide by the sum of these exponentials, which is basically making sure that the entries are normalized. So if you sum the probabilities of all the possible things that can happen, you get 1. And so natural generalization of what we had before. Now, you might wonder what do you do if you want to model continuous data. So maybe you're dealing with speech and it's not very natural to discretize the-- I mean, even for images perhaps you don't want to discretize the random variables and you want to model them as continuous random variables. So the solution is to basically, again, use the same architecture. But now the output of the neural network will be the parameters of some continuous distribution. So it's no longer the parameter of a Bernoulli or the parameters of a categorical. It could be the parameters of a Gaussian or a logistic or some continuous probability density function that you think should work well for your data set. And so, for example, one thing you could do is you could use a mixture of K Gaussians. So what you have to do is you need to make sure that the output of your neural network gives you the parameters of K different Gaussians, which are then [? mixtured ?] together, let's say, uniformly to obtain a relatively flexible kind of probability density function like you see here, an example where there is three Gaussians with different means and different standard deviations. Then you combine them together and you get a nice kind of red curve where you kind of allowed to move the probability mass. And you are allowed to say maybe there is two different values that the random variable can take, two modes, one here and one here. And you're allowed to move the probability mass around by changing the mean and the standard deviation of the Gaussians. So in this case, would xi have two K values? Yeah. Yeah. So I think I have, the more precise thing here. So you would say the conditional probability of xi, given all the previous values, is a mixture of K Gaussians, each one of them having a different mean and a different standard deviation. And as usual, you have to basically use the neural network to get the parameters of these distributions. So in this case, as was suggested, you could use the same trick. And then as an output layer, you can no longer use a softmax or a sigmoid. You have to use something else that gives you the parameters of these random variables. And so you need two K numbers, you need K means and you need K standard deviations. 
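As a rough PyTorch sketch of these two output layers: the same hidden vector h is mapped either to a categorical distribution over K pixel values through a softmax, or to the parameters of a mixture of K Gaussians for continuous values. The layer names and sizes are placeholders, and, following the lecture, the mixture components are mixed uniformly; letting the network also output mixture weights is a common variant.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    d, K = 64, 10                        # hidden size and number of values / mixture components
    h = torch.randn(d)                   # stand-in for the hidden vector summarizing x_{<i}

    # Categorical output: K logits -> softmax -> a valid distribution over K pixel values.
    to_logits = nn.Linear(d, K)
    probs = F.softmax(to_logits(h), dim=-1)              # non-negative, sums to 1
    value = torch.distributions.Categorical(probs).sample()

    # Mixture-of-Gaussians output: 2K numbers, K means and K standard deviations.
    to_mean = nn.Linear(d, K)
    to_log_std = nn.Linear(d, K)                         # exponentiate so the std is positive
    mu = to_mean(h)
    sigma = to_log_std(h).exp()
    pi = torch.full((K,), 1.0 / K)                       # uniform mixing weights

    def mog_log_prob(x_i):
        """log p(x_i | x_{<i}) = log sum_k pi_k N(x_i; mu_k, sigma_k)."""
        comp = torch.distributions.Normal(mu, sigma).log_prob(x_i)
        return torch.logsumexp(torch.log(pi) + comp, dim=-1)

    print(probs, mog_log_prob(torch.tensor(0.3)))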
And I guess, you need to be careful about, depending on how you parameterize, like if you parameterize a variance, then it has to be non-negative. But that's relatively easy to enforce. OK. Now, as a way to get a deeper understanding of what these kind of models do, you might notice that they look a lot like autoencoders. Like, if you look at this kind of computation graph that I have here where you have the data point x1, x2, x3, and x4 that is being mapped to this predicted probability, x1-hat, x2-hat, x3-hat, and so forth, it kind of looks a little bit like an autoencoder, where you take your input x and then you map it to some kind of predicted reconstruction of the input. And more specifically, an autoencoder is just a model that is often used again in unsupervised learning. It has two components. An encoder takes a data point and maps it to some kind of latent representation. And then, for example, it could be, again, a simple neural network, a two-layer net like this. And then there is a decoder whose job is to try to invert this transformation. And the job of the decoder is to take the output of the encoder and map it back to the original data point. And in this case in this graph that I have here, it could be another neural network that takes the output of the encoder and maps it back to some reconstruction of the input. And the loss function that you would use would be some kind of reconstruction loss. So you would try to train the encoder and the decoder so that for every data point, when you apply the decoder to the encoder, you get back something close to the original data point. So depending on whether the data is discrete or continuous, this could be something like a square loss where you try to make sure that at every coordinate, your reconstructed i-th variable is close to the original one. If you have discrete data, it's more like, is the model doing a good job at predicting the value for the i-th-- let's say, in this case, it's binary here-- for the i-th random variable that I'm actually observing. So if the i-th random variable is true, is 1, is the model giving me a high probability for the value 1. Not super important, but kind of like this is how you would try to learn the decoder and the encoder so that they satisfy this condition. And of course, there is a trivial solution. There is the identity mapping. So if the encoder is just an identity function and the decoder is some identity function, then you do very well at this. And it's not what you want typically. So typically, you would constrain the architecture somehow so that it cannot learn an identity function. But that has kind of like the flavor of what we're doing with these sort of autoregressive models. We're taking the data point and then we're trying to use parts of the data point to reconstruct itself. Or we feed it through these networks and then we output these predicted values. And if you were to think about how you would train one of these models by, let's say, maximum likelihood, you would get losses that are very similar to this. If you were to train these logistic regression classifiers, you would get something very similar to this, where you would try to predict the value that you actually see in the data point. I'm just trying to understand the encoder-decoder kind of mechanism. Is it the main point of the encoder is just to compress all the previous [? informations ?] into a very low dimensional kind of like vector? Is that the main-- Yeah, yeah. 
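Here is a minimal PyTorch version of the autoencoder just described: an encoder mapping x down to a low-dimensional code, a decoder mapping it back, and a reconstruction loss, binary cross-entropy for binary data or squared error for continuous data. The architecture and sizes are arbitrary placeholders; the small bottleneck dimension is one simple way to rule out the trivial identity solution mentioned above.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    n, d = 784, 32                                   # data dimension and bottleneck size (illustrative)
    encoder = nn.Sequential(nn.Linear(n, 128), nn.ReLU(), nn.Linear(128, d))
    decoder = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, n))

    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    x = (torch.rand(64, n) > 0.5).float()            # a fake batch standing in for binarized images

    for step in range(100):
        code = encoder(x)                            # e(x), the latent representation
        logits = decoder(code)                       # reconstruction, as logits over each pixel
        loss = F.binary_cross_entropy_with_logits(logits, x)   # use F.mse_loss for continuous data
        opt.zero_grad()
        loss.backward()
        opt.step()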
So the question is, what are autoencoders used for? Yes, one typical use case would be to learn a compressed representation of the data. Somehow if you can do this, maybe you force the output dimension of the encoder to be small. And then in order to do a good job at reconstruction, it has to capture the key factors of variation in the data. And so you can kind think of it as some sort of nonlinear PCA kind of thing that will try to discover structure in the data in an unsupervised way. Yeah? Right, can we do sampling when we train the autoencoder? The question is, can we do sampling with an autoencoder? No, an autoencoder is not quite a generative model. So these two things are not quite the same. But they are related. And that's why we're going to see next. So yeah, this was coming up. Typically, you would train this to do representation learning, try to find good representations. If you think about what we just said, if you have an autoencoder, it's not really a generative model. Like, how do you generate data from an autoencoder? [? Wouldn't ?] you just remove the encoder and just use the decoder? But what's the input to the-- so the suggestion is, OK, let's throw away the encoder. Let's just use the decoder. What do you feed into the decoder to generate data? Let's just handcraft the e of x. Yeah, that's the solution for a variational autoencoder actually. So the variational autoencoder will be, let's try to learn a simple generative model to feed inputs, fake inputs to your decoder. And so you can kind of fake the process and you can use it to generate. So that's the variational autoencoder solution that we'll talk about later. But there's not an obvious way to generate the inputs to the decoder, unless you have data. But at that point, you're not really sampling, right? Could you like feed in a latent variable that you can [INAUDIBLE]. Yeah, that's the [INAUDIBLE] solution, basically, that we'll talk about, yeah. What if you, like, had a regularization term that forces your hidden representation to just look like a Gaussian or something like that? Yes. So again that's the solution imposed by the-- that's basically a variational autoencoder. Literally, a variational autoencoder is this plus what you suggested, forcing the latent representations to be distributed according to a simple distribution, a Gaussian. And if that happens to work well, then you can sample from that distribution, feed the inputs to the decoder, and that works. But that requires a different kind of regularization. The relationship here is that although these two things look similar, it's not quite the same. And the reason is that we cannot get a generative model from an autoencoder because somehow we're not putting enough structure on this kind of computation graph. And there is not an ordering. Remember that to get an autoregressive model, we need an ordering. We need chain rule. So one way to actually get or to connect these two things is to enforce an ordering on the autoencoder. And if you do that, you get back basically an autoregressive model. And so basically, if you are willing to put constraints on the weight matrices of these neural networks so that there is a corresponding basically Bayesian network or chain rule factorization, then you can actually get an autoregressive model from an autoencoder. And the idea is that basically if you think about it, the issue is that we don't know what to feed to the decoder. 
So somehow we need a way to generate the data sequentially to feed it into this decoder that we have access to. And so one way to do it is to set up the computation graph so that the first reconstructed random variable does not depend on any of the inputs. If that's the case, then you can come up with the first output of this decoder yourself because you don't need any particular input to do that. And then you can feed your predicted first random variable into then, let's say at the generation time, then you don't need it. Now, if the predicted value for the second random variable depends on x1, that's fine because we can make up a value for x1. Then we can fit it into the computation and we can predict a value for x2. Then we can think of this value, we can take the first two, feed them into the autoencoder kind of thing, and predict a value for x3. And we can keep going. And it's the same thing as an autoregressive model. So if you look at this kind of computation graph, you can see that the predicted value for x1 depends on all the inputs, in general. And so if you look at the arrows, all the inputs have an effect on the first predicted value. And so that's a problem because we cannot get an autoregressive model if we do it that way. But if we somehow mask the weights in the right way, we can get an autoregressive model. And then as a bonus, then we have a single neural network that does the whole thing. So it's not like before that we had different classification models or that they were tied together somehow. If we can do this, then it's a single neural network that in a single forward pass can produce all the parameters that we need. I was wondering in some tasks, for example this digit task we earlier discussed, is there not a risk of the model learning just how to shift it by like one pixel or something? Not obvious that you can just shift because, I mean, you cannot cheat, right? So you cannot look at the next pixel. You have to pick an ordering and you have to predict. Yes, but you can begin with the left and then begin putting the pixels to the left of it, for example. [INAUDIBLE],, you haven't seen them. You haven't seen the right pixels. So you don't know exactly what to copy, right? No, you do because if before order goes left to right, you will know what the pixel in the input image to the left of it is. So you can just put that. And you can draw a black line on the left. So like I was wondering if our metric needs to really prevent that from happening? No, you don't need to prevent that from happening. And partially, it's because these models would then be trained by maximum likelihood. And that's a separate thing that we're going to talk about how to evaluate. So that solution might not actually give you a good score from the perspective of a learning algorithm, even though maybe the samples would look fine. But, yeah, I haven't seen that happening in practice. OK, so the bonus would be single pass you can get everything, as opposed to n different passes. And the way you do it is to basically mask. Right? So what you have to enforce is some kind of ordering. And so you basically have to take the general computation graph that you have from an autoencoder and you have to mask out some connections so that there is some ordering that then you can use to generate data. And the ordering can be anything. 
So for example, you can pick an ordering where we choose this x2, x3 and x1 which corresponds to the chain rule factorization of probability of x2, x3 given x2, and x1 given the other two. And then what you can do is you can mask out some connections in this neural network so that the reconstruction for x2 does not depend on any of the inputs. And then you can mask out the parameters of this neural network so that the parameter of x3 is only allowed to depend on x2. And the parameter of x1 is allowed to depend on everything, just like according to the chain rule factorization. And so one way to do it, yeah, so that's I think what I just said. One way to do it is you can basically keep track for every unit in your hidden layers, you can basically keep track of what inputs it depends on. And so what you could do is you could pick for every unit, you can pick an integer i. And you can say I'm only going to allow this unit to depend on the inputs up to index i. And so you can see here that there's this 2-1-2-2. This basically means it's only allowed to depend, for example, this unit is only allowed to depend on the units 1 and 2. This unit here is labeled 1. So it's only allowed to depend on the first input, according to the ordering, which is x2. And then you basically recursively add the masks to preserve this invariant. So when you go to the next layer and you have a node that is labeled 1, then you are only allowing a connection to the nodes that are labeled up to 1 in the previous layer. And the way you achieve it is by basically masking out and setting to 0, basically, some of the elements of the matrix that you would use for that layer of the neural network. And if you do that, then you preserve this invariant. And you can see that indeed the probability of x2, which is the second output of the neural network, does not depend on any input, which is what we want for a chain rule factorization. And if you look at the parameter of x3, which is the third output, you'll see that if you follow all these paths, they should only depend on basically the second, on x2, which is the variable that come before it in the ordering. And so by maintaining this invariant, you get an autoencoder, which is actually an autoregressive model. You are essentially forcing the model not to cheat by looking at future outputs to predict. And it can only use past inputs to predict future outputs, essentially. And this is one architecture that would enforce this kind of invariant. Yeah? Sorry, is this something that's like done during training? Or do you train an autoencoder and then mask during the generation? This is done during training. So during training, you basically have to set up an architecture that is masked so that it's not allowed to cheat while you train. Because if you didn't mask, then it could, when trying to predict the x2, you just look at the actual value and you use it, right? And so this is very similar if you've seen language models, you also have to mask to basically not allow it to look into future tokens to make a prediction. If you're allowed to look into the future to predict tokens, then it's going to cheat. And you're not going to do the right thing. And this is the same thing at that level, this is a different computation graph that basically achieves the same sort of result. And the benefits of single pass is during training time, right? For inference, we obviously still have to [INAUDIBLE].. Yes, good question. Yeah, yeah, yeah. 
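Loosely following that recipe, here is a small NumPy sketch of the mask construction for the ordering (x2, x3, x1) and one hidden layer whose units carry the labels 2-1-2-2 from the example. A hidden unit labeled m is allowed to see inputs whose position in the ordering is at most m, and the output for a variable at position p is allowed to see hidden units labeled strictly less than p, so it never sees the variable itself. Everything here, including the tiny sizes, is illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n, hidden = 3, 4

    m_in = np.array([3, 1, 2])            # position of x1, x2, x3 in the ordering (x2, x3, x1)
    m_hid = np.array([2, 1, 2, 2])        # labels of the hidden units, as in the example

    # Input-to-hidden mask: connect hidden unit k to input j only if m_hid[k] >= m_in[j].
    M1 = (m_hid[:, None] >= m_in[None, :]).astype(float)
    # Hidden-to-output mask: output for x_i sees hidden unit k only if m_in[i] > m_hid[k] (strict).
    M2 = (m_in[:, None] > m_hid[None, :]).astype(float)

    W1 = rng.normal(size=(hidden, n))
    b1 = np.zeros(hidden)
    W2 = rng.normal(size=(n, hidden))
    b2 = np.zeros(n)

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def made_forward(x):
        """One forward pass gives all conditionals: x_hat[i] = p(x_i = 1 | variables before x_i)."""
        h = sigmoid((W1 * M1) @ x + b1)   # masks are just multiplied elementwise into the weights
        return sigmoid((W2 * M2) @ h + b2)

With these masks the output for x2 depends on nothing but its bias, the output for x3 can only reach x2, and the output for x1 can reach both, matching the chain rule factorization p(x2) p(x3 | x2) p(x1 | x2, x3). For deeper networks you keep the same non-strict rule between hidden layers and use the strict inequality only at the output.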
So the question is, is the benefit only at training time or inference time? So the benefit is only at training time. Because at inference time, you still have the sequential thing that you would have to come up with a value for the first variable and fit it in. So it would still have to be sequential. That's unavoidable. Every autoregressive model has that kind of annoying flavor, basically. How do you choose the ordering? So the recipe in this paper is random. You mean the value or the ordering? The ordering. Oh, the ordering, that's also very hard. I think if you have something where you know the structure and you know again that there is time, maybe there is a reasonable way of picking an ordering. Otherwise, you would have to either choose many orderings. Maybe you have basically have a mixture. Choose one at random. But there is not a good way of basically selecting an ordering. There is actually research where people have been trying to learn autoregressive models and an ordering so you can define a family of models where you can search over possible orderings and search over factorizations over that ordering. But you can imagine, there is, like, n-factorial different orderings to search over and it's discrete. So it's a very tough kind of optimization problem to find the right ordering. If 1 is not dependent on anything, how does the model output 1? It should only output 2 and 3, right? 1 should be [INAUDIBLE]. You would have to. I mean, depending on the loss function. You cannot depend on anything. But you can still basically make a guess based on no evidence. So you would basically choose the prior, right? So if, let's say, the second variable is always true, then depending on the training objective, you would still try to choose an output here. It's a constant. But you will try to match, basically, the most likely value in the training set. Or if you have a proper scoring rule, then you would try to match the distribution that you see in the training set. Depending on the loss function, you still try to choose a value that makes sense. But it's fixed. So you can only choose one. And so you can't do much. But you would still try to do your best to capture the data depending on the training loss. Yeah? Are there redundancies? Like, in the second to last layer, there's two nodes which just have 1 as an input. So is it kind of redundant to have multiple nodes which just have one input? I mean, the weights are different. So even though there is multiple nodes that have only one input, they might be extracting different features for that input. So it's not necessarily redundant, I would say. So the objective of the autoencoder is to reconstruct. Yes. So how do you reconcile the loss function? It's predicting one thing at a time. How do you make the loss function for reconstruction? Yeah, so the loss function would be the ones that we have here, which would be basically you would try to make the predictions close to what you have in the data. So the loss function wouldn't change. It's just that the way you make predictions is you're not allowed to cheat, for example. You're not allowed to look at xi when you predict xi. And you're only allowed to predict it based on previous variables in some ordering. And it turns out that that would be exactly the same loss that you would have if you were to train the autoregressive model. It depends on the model family that you choose. But if you have logistic regression models, it would be exactly the same loss, for example. 
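Generation from such a masked autoencoder is still sequential, one forward pass per variable: sample the first variable in the ordering from its unconditional output, write it into the input, run the network again for the next one, and so on. A rough sketch, reusing the made_forward, m_in, n, and rng objects from the previous snippet:

    def made_sample():
        x = np.zeros(n)
        for pos in range(1, n + 1):                  # walk through the ordering: x2, then x3, then x1
            i = int(np.where(m_in == pos)[0][0])     # which variable sits at this position
            p_i = made_forward(x)[i]                 # the masks guarantee this ignores unsampled inputs
            x[i] = float(rng.random() < p_i)
        return x

    print(made_sample())

So the single-forward-pass advantage applies to training and likelihood evaluation, where all the conditionals come out at once, but generation still takes n passes.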
In your last layer, like you said, you pick, each of the number of times it looks previously randomly. So if you happen to pick 2-2-2, how would you predict the first entry after that? Yeah, so you would basically be, you're not allowed many connections. And you would do a pretty bad job because you would be less flexible than you could be. It would still be a valid model. It wouldn't be a good one, I guess. So that's why people often have kind of like an ensemble of these models where you have multiple masks and you just do it that way. Yeah. Cool. Let's see now an alternative way to approach this is to use RNN, some kind of recursive style of computation to basically predict the next random variable, given the previous ones, according to some ordering. Right. At the end of the day, this is what the key problem whenever you build an autoregressive model is, solving a bunch of coupled kind of prediction problems where you predict a single variable, given the other variables that come before it in some ordering. And the issue is that this history kind of keeps getting longer. So you are conditioning on more and more things. And RNNs are pretty good at or it's one way to handle this kind of situation and try to keep a summary of all the information of all the things you've conditioned on so far and recursively update it. And so a computation graph would look something like this. So there is a summary h. Let's say h of t or h of t-plus-1, which basically is a vector that summarizes all the inputs up to that time. And you initialize it somehow based on some initialization. And then you recursively update it by saying the new summary of the history is some transformation of the history I've seen so far. And the new input for that time step, xt-plus-1. And maybe this is one way to implement it. You do some kind of linear transformation of ht, xt-plus-1, you apply some non-linearity. And that gives you the new summary up to time t-plus-1. And then what you can do is, just like what we've done so far, is then you transform h and you map it to either, let's say, the parameters of a categorical random variable or a Bernoulli random variable or a mixture of Gaussians. Whatever it is that you need to predict, you do it through-- well, I guess you probably also would need some non-linearities here. But there is some output, which is the thing you use for prediction, which is going to depend only on this history vector or this summary vector of all the things you've seen so far. And the good thing about this is that basically it has a very small number of parameters. Like regardless of how long the history is, there is a fixed number of learnable parameters which are all these matrices that you use to recursively update your summary of all the information you've seen so far. And so it's constant with respect to n. Remember, we had the things that were linear in n and we had things that were quadratic in n. This thing is actually constant. The dimensions are fixed and you just keep applying them. [INAUDIBLE] squared sharing weights? Exactly. It's extreme weight sharing. And then you try to do everything through [INAUDIBLE].. Yes. [? We ?] are imposing a Markov assumption on the conditional probabilities? This is still not. So the question is, is this a Markov assumption? This is not a Markov assumption in the sense that if you think about xt, it's not just a function of the previous xt-minus-1, right? It still depends on all the past random variables in a, again, not entirely general way. 
So you can only capture the dependencies that you can write down in terms of this sort of recursion. And so it's definitely not a Markov assumption. And if you think about the computation graph, it does depend on all the previous inputs. And so this is an example of how you would use this kind of model to model text. So the idea is that in this simple example, we have only, let's say, four different characters, h, e, l, and o. Then you would basically encode them, let's say, using some kind of one-hot encoding. So h is 1-0-0-0, e is 0-1-0-0, and so forth. And then as usual you would use some kind of autoregressive factorization. So you write it down. In this case, the ordering is the one from left to right. So you write the probability of choosing the first character in your piece of text, then the probability of choosing the second character, given the first one, and so forth. And what you would do is you would basically obtain these probabilities from the hidden layer of this recurrent neural network. So you have these hidden layers that are updated according to that recursion that I showed you before. And then you would use the hidden layer. You would transform it into an output layer, which is just four numbers. And then you can take a softmax to basically map that to four non-negative numbers between 0 and 1 that sum to 1. And so in this case, for example, we have a hidden layer. And then we apply some linear transformation to get these four numbers. And we're trying to basically choose the values such that the second entry of that vector is very large. Because that would put a lot of probability on the second sort of possible character, which happens to be e, which is the one we want for the second position. And so then when you train these models, the game is to choose values for these matrices so that, let's say, you maximize the probability of observing a particular data point or data set. And, yeah, so again, the key thing here is that there are a very small number of parameters. And then you use the hidden state of the RNN to get the conditional probabilities that you need in an autoregressive factorization. And then you can see the recursion. Then you would compute the next hidden state by taking the current history, the new character that you have access to. You update your recursion and you get a new hidden state. You use that hidden state to come up with a vector of predicted probabilities for the next character, and so forth. So it's the same machinery as before. But instead of having multiple kind of logistic regression classifiers, we have a bunch of classifiers that are tied together by this recursion. And the pro is that you can apply it to sequences of arbitrary length. And, in theory at least, RNNs are pretty general in the sense that they can essentially represent any computable function. In practice, they are tricky to learn. And you still need to pick an ordering, which is always a problem for autoregressive models. The key issue with these sorts of RNNs is that they're very slow during training time. Because you have to unroll this recursion to compute the probabilities. And that's a problem. But I'll just show you some examples. And then I think we can end here. It actually works reasonably well. Like, if you take a simple three-layer RNN and you train it on all the works of Shakespeare at the character level, so it's literally what I just showed you, just a three-layer RNN.
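Before looking at the samples, here is roughly what that character-level setup looks like in PyTorch: one-hot characters over the vocabulary h, e, l, o, the recursive hidden-state update, a linear layer plus softmax over the four characters, and a cross-entropy (maximum likelihood) loss on the string "hello". This uses a single-layer vanilla RNN cell rather than three layers, leaves out the distribution over the first character, and all sizes are illustrative.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vocab = ['h', 'e', 'l', 'o']
    stoi = {ch: i for i, ch in enumerate(vocab)}
    V, d = len(vocab), 16                            # vocabulary size and hidden size (illustrative)

    Wxh = nn.Linear(V, d, bias=False)                # input-to-hidden
    Whh = nn.Linear(d, d)                            # hidden-to-hidden: carries the history forward
    Why = nn.Linear(d, V)                            # hidden-to-output: four numbers, then softmax

    params = list(Wxh.parameters()) + list(Whh.parameters()) + list(Why.parameters())
    opt = torch.optim.Adam(params, lr=1e-2)

    ids = torch.tensor([stoi[c] for c in "hello"])
    onehots = F.one_hot(ids, V).float()

    for step in range(200):
        h = torch.zeros(d)                           # summary of the (initially empty) history
        loss = torch.tensor(0.0)
        for t in range(len(ids) - 1):
            h = torch.tanh(Whh(h) + Wxh(onehots[t])) # recursive update of the summary
            logits = Why(h)                          # softmax of these gives p(next char | history)
            loss = loss + F.cross_entropy(logits.unsqueeze(0), ids[t + 1].unsqueeze(0))
        opt.zero_grad()
        loss.backward()
        opt.step()

The inner loop is exactly the unrolling mentioned above: the hidden states have to be computed one step at a time.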
And then you sample from the model and you can get things like this, which has a little bit of the flavor of Shakespeare, I guess. If you think about it, this is at the character level. It's literally generating character by character. It's actually pretty impressive. Like, it needs to learn which words are valid and which ones are not, the grammar, punctuation. It's pretty impressive that a relatively simple model like this working at the level of characters can do like this. You could train it on Wikipedia and then you can sample and you can make up fake Wikipedia pages like this one on the Italy that conquered India. It's pretty interesting made-up stuff. But again, you can see it's pretty interesting how it has the right markdown syntax and it's closing the brackets after opening them, which it has to remember through this single hidden state, right, that it's carrying over. Yeah. So, you know, it's even making up links for these made-up facts that it generates. And train it on baby names and then you can sample from the model. You can get new names. So, yeah, it works surprisingly well. I guess the main issue that hopefully then maybe we'll go over it next time. The reason this is not used for state-of-the-art language models is that you have this bottleneck that you need to capture all the information up to time t in a single vector, which is a problem. And the sequential evaluation, that's the main bottleneck. So it cannot take advantage of modern kind of GPUs because in order to compute the probabilities, you really have to unroll the computation. And you have to go through it step by step. And that's kind of the main challenge.
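Generating from such a model is sequential in the same way: you unroll the recursion one character at a time, sampling each character from the predicted softmax and feeding it back in. A sketch, assuming the Wxh, Whh, Why, vocab, stoi, V, and d objects from the previous snippet:

    def sample_text(first_char='h', length=20):
        with torch.no_grad():
            h = torch.zeros(d)
            idx = stoi[first_char]
            out = [first_char]
            for _ in range(length):
                x = F.one_hot(torch.tensor(idx), V).float()
                h = torch.tanh(Whh(h) + Wxh(x))          # update the single summary vector
                probs = F.softmax(Why(h), dim=-1)        # conditional distribution over the next character
                idx = int(torch.multinomial(probs, 1))   # sample, then feed it back in at the next step
                out.append(vocab[idx])
        return "".join(out)

    print(sample_text())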
The plan for today is to talk about latent variable models. So just as a recap, what we've seen so far is the first kind of family of generative models, the autoregressive ones, where the key idea is that we use chain rule to describe a joint probability distribution as a product of conditionals, and then we essentially try to approximate the conditionals using some kind of neural network. And we've seen several options for doing that, including RNNs, CNNs, transformers. At the end of the day, the core underlying idea is, really, this autoregressive factorization of the joint. And we've seen that autoregressive models are good because they give you access to the likelihood. It's relatively easy to evaluate the probability of any data point. You just multiply together the conditionals. And what this means is that you can train them by maximum likelihood. You have a training data set, you can evaluate the probability assigned by your model to the data, and you can optimize the parameters of your probability distribution to maximize the probability of the data set you are given. And you can use the likelihood to do other things like, for example, anomaly detection. The cons of autoregressive models is that, well, first of all, you have to pick an ordering and sometimes it's straightforward to do it, sometimes it can be tricky to figure out what is the right ordering that you're going to use to construct the chain rule factorization. Generation is slow. So even if you use an architecture that allows you to compute all the conditionals basically in parallel like a transformer, the challenge is that at generation, you have to generate, basically, one variable at a time. And so that can be slow. And another thing is that it's not obvious how you can get features from the data in an unsupervised way. And that, we'll see, is one of the things that we're going to be able to do using latent variable models. And so the plan for today is to cover-- start talking about latent variable models. We'll start from simple ones like mixture models and then we'll start the discussion of the variational autoencoder or the VA, and we'll see how to do inference and learning when you have latent variables. So the high level motivation for building or using a latent variable model is that when you're trying to model a complicated data set, for example, a data set of images of people like this one, the problem is typically hard because there is a lot of variability that you have to capture. For example, in this case, there might be a lot of variability because people have different age, people have different poses, people have different hair colors, eye colors. And so all these things lead to very different kind of values for the pixels that you have in the data set. And so the problem is that-- if you somehow had access to these sort of annotations, perhaps you would be able to do-- it would be easier to model the distribution because you could build separate models. That way, you're kind of conditioning on hair color or the eye color or the age or whatever attribute you have access to. But unless you have sort of annotations, all you have access to is a bunch of images. And although you believe that you can kind of see that there is this latent structure, it's not annotated. So it's not obvious how you take advantage of it. And so the idea of latent variable models is to essentially add a bunch of random variables, which we're going to denote z, which are supposed to capture all these latent factors of variation. 
So even though we only care about modeling pixels in the images, we're going to incorporate a bunch of other random variables in our model. And we're going to call these random variables latent or hidden because they are not observed in the data set. We only get to see the pixel values, the x part, but we don't get to see the corresponding values for the latent factors of variation. And by doing this so we got several advantages, we're going to get more flexible model families. And if you can fit a model reasonably well, then we might also be able to extract these latent variables given the pixel values. And if you're doing a good job at modeling these common characteristics that the different data points have, then you might use these features to do other things. You might try to predict-- if you have a classification task, it might be easier to train a model that works at the level of these latent variables as opposed to the direct pixel values. Because often, you might be able to-- you might need a small number of latent variables to describe a much more high-dimensional data set, like images, for example. So the high level, kind of trying to formalize a little bit of this intuition, what we want to do is we want to have a joint probability distribution between the x, which are basically all the pixel values that we have in an image, and these latent variables, z. And so here I'm showing the x shaded, meaning that it's observed, and the z variables are white and they're not shaded because this basically means for every data point, we don't get to see-- we don't have a notations for the corresponding latent variables. And conceptually, you can think of a Bayesian network that might look something like this, right. Where there is the pixel values that you get to see and then there is a bunch of latent factors of variation that would be helpful in describing the different types of images that you might have access to in your data set. And these latent variables, again, they might correspond to these high-level features. And if z is chosen properly, you get several advantages because it might be a lot easier to model p of x given z as opposed to the marginal distribution p of x. But if you somehow are able to cluster the data points and divide them into different groups, then modeling the images that belong to every particular group separately, which is kind of what this p of x given z would do, could be much easier because at that point, there is a lot less variation that you have to capture once you condition on these latent features. And the other good thing that you have access to if you do this is that if then you try to infer the latent variables for our new data point, x, then you can identify these features. And so, again, this is sort of like going towards the representation learning angle or the computer vision as inverse graphics. Somehow, if you have a good generative model that can produce images based on a set of latent variables, if you can then infer these latent variables, then you might be discovering features structure that you can use for different sorts of problems. And the problem is that it might be very hard to specify a graphical model like this and specify all the conditionals. And so as usual, instead of taking the graphical model view or the Bayesian network view that we have here, we're going to try to use deep neural networks to do the work for us. 
And so what we're going to do instead is we're still going to keep that kind of structure where we have a set of observed variables, x, and latent variables, z, but we're not going to have anything interpretable in terms of how the random variables are related to each other or what they mean. We're just going to assume that there is a set of random variables, z, that are somewhat simple. For example, they might be distributed according to a simple Gaussian distribution. And then we model the conditional distribution of x given z, again, using, basically, some kind of deep generative model where we have a simple distribution, let's say a Gaussian, but the parameters of this distribution depend, in some potentially complicated way, on the latent variables through, let's say, a couple of neural networks, mu, theta, and sigma theta, that are basically giving us the mean and the standard deviation that we're expecting for x given that the latent variables take a particular value. And so, again, because, at this point, the latent variables, they don't have any pre-specified semantic, then we're sort of like hoping that by fitting this model, let's say by maximum likelihood, we end up somehow discovering interesting latent structures. And as usual, this is basically-- this is an unsupervised learning problem. So it's kind of ill-defined because what does it mean that the structure is meaningful? What is it that we're actually after here? It's not obvious. But the intuition is that hopefully by trying to model the data using these latent variables, we might discover some interesting structure. Some interesting correspondence between x and z that then would, first of all, make learning easier because we are able to model a distribution over images, x, using something like a Gaussian. And then by inferring the latent variables given the observed one, given the x, we're hopefully going to discover interesting features that then we can use to analyze the data or do transfer learning or whatever you want. Question. [INAUDIBLE] Can you repeat. At z. How you managed to update z, the latent variable of [INAUDIBLE] neural network? So the question is, how do we change z when we fit the neural network? Yeah, so we'll see how we do learning. That's the challenge. So the challenge is that the z variables are not observed during training. And so it's not obvious how you should update the parameters of this neural network that gives you, essentially, the x as a function of z when you don't know what z was. And so, intuitively, you're going to have to guess what is the value of z for any given x, and you're going to use some kind of procedure to try to fit this model. So if you've seen EM, it's going to have the flavor of an EM-like procedure where we're going to try to guess a value for the latent variables and then we're going to try to fit the model. So is that image, x, here-- how is that being represented as autoregressively being generated? The question is, is x being represented autoregressively? In this case, there is no autoregressive structure. So x given z is just a Gaussian distribution. So something very simple. The parameters of this Gaussian are determined through this potentially very complicated non-linear relationship with respect to z. And as we'll see, even though p of x given z is very simple, it's just a Gaussian, and you would never expect that a single Gaussian is sufficiently flexible to model anything interesting because you have these latent variables. 
As we discussed before, if you somehow are able to cluster the data points in a reasonable way, then within the cluster, which is kind of what this object is, you might be able to get away with a very simple kind of distribution. And that's the idea behind a latent variable model. In the last lecture, we spoke about how we basically refactor the joint distribution and individual conditionals and then learn your networks for these individual conditionals, right. Is it common practice that these individual conditionals are modeled with a Gaussian or are there approaches, like classical-- like common approaches where we use different distributions? And also, then these models that we have for the mean and the variation, are they-- is this one model or for each conditional, is it a different model? Yeah, so the question is, I guess, what sort of mu's and z-- what kind of functions do we use here and are they different for every z? In this case, the functions are the same. So there is a single function that is then going to give you different outputs when you fit in the different z values. So the functions are fixed. The other question is, well, does it have to be a Gaussian? Not necessarily. You can use an autoregressive model there if you wanted to. The strategy behind the latent variable model is to usually choose these conditionals to be simple because, again, you have this clustering kind of behavior and so you might be able to get away with a simple p of x given z. But you can certainly-- this is sort of like the mix and match kind part of this course, you can get a different kind of generative model by replacing these p of x given z with an autoregressive model that gives you even more flexibility. But the story behind the variational autoencoder is to keep that simple. Maybe I don't fully understand but can you clarify the big picture motivation for why we want to model p of x given z and p of x in this case. So the goal is, as usual-- so the question is, why do we need p of x given z and p of x? So the goal is to always just model p of x. So that's the same as in the autoregressive model, you want to be able to fit a probability distribution over these x variables, which are the ones you have access to, the pixels, whatever. The motivation for using the z variable is that, well, one, it might make your life easier in the sense that if you somehow are able to cluster the data using the z variables, then learning becomes easier. The second one is that being able to infer the latent variables might be useful in itself because, really, maybe what you're after is not generating images but understanding what sort of latent factors or variations exist in your data set. And so-- [INAUDIBLE] The prior here in this case is very simple. It's just a Gaussian. But yeah, you could have more complicated priors, of course. [INAUDIBLE] in the previous slides that how z can be intuitively thought of as unique features. Over here, is it a way to get a sense of what these learned features are? Yeah, so that's what I was saying, sort of that you just hope that you discover something meaningful. You can certainly test. Once you've trained the model, you can certainly change, let's say, one of the z variables and see how that affects the images that you generate. So you can certainly test whether you've discovered something meaningful or not. It might not be-- whether you discover something meaningful or is not guaranteed by the learning objective. 
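To summarize the generative side of the model defined so far in code (no training yet, which is the hard part discussed below): a standard-normal prior over z, two heads giving mu_theta(z) and sigma_theta(z), a diagonal Gaussian for p(x given z), and ancestral sampling. The architecture, the dimensions, and the diagonal-covariance choice are illustrative assumptions, not a specific published model.

    import torch
    import torch.nn as nn

    z_dim, x_dim = 8, 784                              # latent and observed dimensions (illustrative)
    trunk = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU())
    mu_head = nn.Linear(128, x_dim)                    # mu_theta(z)
    log_sigma_head = nn.Linear(128, x_dim)             # log sigma_theta(z); exp keeps sigma positive

    def p_x_given_z(z):
        h = trunk(z)
        return torch.distributions.Normal(mu_head(h), log_sigma_head(h).exp())

    def sample_x():
        z = torch.randn(z_dim)                         # z ~ N(0, I), the simple prior
        return p_x_given_z(z).sample()                 # x ~ N(mu_theta(z), sigma_theta(z)^2)

    def log_joint(x, z):
        """log p(x, z) = log p(z) + log p(x | z); the marginal log p(x) is the hard part."""
        prior = torch.distributions.Normal(torch.zeros(z_dim), torch.ones(z_dim))
        return prior.log_prob(z).sum() + p_x_given_z(z).log_prob(x).sum()

    x = sample_x()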
Is the number of latent variables just a hyperparameter at the end of training? So the question is that is the number of latent variables a hyperparameter? Yes. So in this model for learning how to model p of x and then conditioning on z. [INAUDIBLE] How are we supposed to sample and then do estimation? So the question is, if we use this kind of model, how do we sample? How do we do density estimation? So sampling is easy because what you do is you first sample-- this is-- you can think of it as an autoregressive model over two groups of variables, the z's and the x's. And so what you would do is you would first choose a latent factor variation. So you sample z from a Gaussian, which we know how to do. It's trivial. Then you feed z through these two neural networks and you get a mean and and a covariance matrix. Then that defines another Gaussian, then you sample from that Gaussian. So sampling is very easy. Evaluating p of x, as we'll see, that's the challenge. That's kind of like the no free lunch part. Everything seems great except that evaluating p of x, which is kind of doing the estimation, becomes hard. And that's what's going to come up next. Yeah. This does not seem like end-to-end differentiable, right? Yeah. So the question is, is this end-to-end differentiable? How do you train it? Yeah, that's going to be the topic of this lecture. Yeah. All right, so let's see, first, as a warmup, the simplest kind of latent variable model that you can think of which you might have seen before, that's the mixture of Gaussians, right. So again, we have this simple Bayes' net z pointing to x and you can think of a mixture of Gaussian as being a shallow latent variable model where there is no deep neural network involved. In this case, z is just a categorical random variable, which determines the mixture component. Let's say there is k mixtures here. And then p of x given z, again, is a Gaussian, and then you have some kind of lookup table here that would tell you what is the mean and what is the covariance for mixture component k? There's k mixtures. And so you have k means and k covariances, and that defines a generative model. So to sample. Again, you would sample first a mixture component, the z, and then you would sample x from a Gaussian with the corresponding mean and covariance. And so it would look something like this. So if x is two-dimensional, so there is x1 and x2, then each of these Gaussians would be a two-dimensional Gaussian, and these Gaussians will have different means, say, mu1, mu2, and mu3. They will have different covariances and so it might look something like this. So the generative process, again, is you pick a component and then you sample a data point from that Gaussian. So maybe you uniformly pick or whatever is the prior over z. Maybe you sample k, you can sample z, you get two, and then you have to pick a point distributed according to a Gaussian with mean here and covariance shaped like that. And so forth. This is useful, again, kind of like if you think about the clustering interpretation. You can think of this kind of model as giving you one way of performing clustering, which is sort of like a basic kind of unsupervised learning task. This is an example where you have a data set collected for the Old Faithful geyser and Yellowstone National Park. And then each data point here corresponds to an eruption of the geyser. And then you can see there's two features here, the duration of the eruption and the time between two eruptions. And the data set looks like this. 
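Before looking at that data set more closely, the two-step sampling process just described is only a few lines. The mixture weights, means, and covariances below are arbitrary numbers standing in for the lookup table; with two well-chosen components in two dimensions you would get exactly the kind of model that gets fit to the geyser data next.

    import numpy as np

    rng = np.random.default_rng(0)
    K = 3
    pi = np.array([0.3, 0.4, 0.3])                          # p(z = k), the prior over components
    mus = np.array([[0.0, 0.0], [3.0, 3.0], [-3.0, 2.0]])   # lookup table of means
    covs = np.stack([np.eye(2) * s for s in (0.5, 1.0, 0.3)])  # lookup table of covariances

    def sample():
        z = rng.choice(K, p=pi)                             # step 1: pick a mixture component
        x = rng.multivariate_normal(mus[z], covs[z])        # step 2: sample from that Gaussian
        return z, x

    z, x = sample()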
So you can see there is some kind of relationship between these two things. And the larger the interval between two eruptions, the longer then the following eruption is. And you try to model this using a single Gaussian. If you fit the parameters, it's going to look like this. You're going to put the mean here and you're going to choose a covariance that kind of captures that correlation between the features. And you can see it's not doing a great job. You're putting a lot of probability mass here where there is no actual data. But that's the best you can do if you're forced to pick a Gaussian as your model. But if you look at the data, it kind of looks like there is two types of eruptions. There is type 1 that behaves like this, and type 2 that behaves like this. And so you're going to get a much better fit to the data if you have two Gaussians, a mixture of two Gaussians, that kind of look like that. And if you can somehow fit this model automatically, then by inferring the z variable given the x, figuring out if a point belongs to the blue or the red mixture, you can identify which type of eruption you're dealing with. Again, this is really this idea of identifying features based on the observed data. And again, you can see that this is kind of ill-posed because it's unsupervised learning and we're hoping to discover some meaningful structure, but it's not clear that this is always possible. It's not clear what it means to find good structure or what's a good cluster-- clustering, right. You might have different definitions of what a good clustering is and this will give you a clustering, whether it is the one that you want or the best one is not guaranteed. And so yeah, you can imagine that you can use it to do unsupervised learning. You can have-- if you have more mixture components, you have a data set that looks like this, then you might want to model it, let's say, using a mixture of three Gaussians. And again, identifying the mixture component, which is the color here, will tell you for which component the data point is coming from. And it tells you something about how to cluster the data points in different ways. Would it be reasonable to guess that this would fail very hard on image classification? So the question is, will this fail very hard on image? Probably. You wouldn't expect. Unless k is extremely large if you have, say, a mixture of two Gaussians, then you would-- let's say if you have a single Gaussian, then you would choose the mean to be the mean of all the images and then you put some kind of standard deviation, and you can imagine you're going to get a blob. It's not going to be very good. Even if you are able to divide your training set into two groups and fit two separate Gaussians, it's still not going to work very well. If k becomes extremely large, in theory, you can approximate anything so eventually it would work. But yeah, in practice, it would require a k that is extremely large. Cool. And here is actually an example on image data. Again, this is on MNIST and this is the latent space, z. This is a projection of that. But you can imagine that one axis is z1, another axis is z2. And then you take your MNIST data set and then you try to figure out where it lands in z. Each data point where it lands in z space. 
And you can kind of see that, again, it's able to do some reasonable clustering in the sense that data points that actually belong to the same class, which was not known to the generative model, for example, red points here corresponds to digit 2, and you can see that they are kind of like all grouped together. They all have similar z values after training this model. And so there's not a single cluster here for the 2s. There is two of them. Maybe the points in this cluster have a slightly different style than the points in this cluster. I mean, it's hard to say exactly what the clustering is doing here. And again, it hints at the fact that unsupervised learning is hard. But this is sort of like the intuition for what you might hope to get if you try to do this on an image data set. You might hope to be able to discover different classes. You might be able [INAUDIBLE] different styles. And you're hoping to discover that automatically by fitting a latent variable model and just looking at the kind of z's that you discover. [INAUDIBLE] Model, how you actually learn these posteriors. Yeah. So the question is, how do you learn them? And I haven't talked about it. It's going to be the-- we're going to go through that in this lecture. [INAUDIBLE] Trained the model before doing the dimensionality reduction into two dimensions, and then you plot it [INAUDIBLE].. It actually looks better. It looks like it would make more sense if you did the dimensionality reduction first and then tried to learn a mixture of Gaussians. It looks more like it could be expressed that way. So there is no mixture of Gaussians learned here. So this is sort of showing-- this is more like the results of training a deep generative models where the z's are actually not even categorical. The z's variables here are more like Gaussian or [? real-valued. ?] And so what I'm plotting here is for each data point, what is the corresponding inferred value of z, which is no longer a number, it's kind of like a point in this two-dimensional space. And it just so happened that it's finding something reasonable. But again, it's not guaranteed. Yeah. So when we start sampling from a VAE, do we normalize it to a mean and standard deviation of 0 and 1? But in this case, if we can do dimensionality reduction, we then fit a model on this to then have a [INAUDIBLE] sample from this. Not normal. Yeah. So I think the question is whether, I guess, in the model I had here, the p of z was a simple distribution, like a Gaussian. And perhaps if you look at this latent space, for example, which is actually not a VAE, I think, but then that's why it might not look like a Gaussian. It has a bunch of holes so you might be better off having a mixture of Gaussians, for example, for p of z. And you might actually try to learn the p of z as part of the model. And you can certainly do that. Cool. So an alternative motivation is that it's a very powerful way of combining simple models and get a more expressive one out. Using latent variables allows you to have this kind of mixture model behavior which is a very good way of building very flexible generative models. And you can see in the example of the mixture of Gaussian, if you have three Gaussians, a blue, an orange, and a green one, and you can see they have different means and different standard deviations so they have all these bell curves. If you think about the corresponding marginal distribution over x has this very interesting red shape. 
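To pin down what that red marginal actually is, here is the formula being described, written out explicitly; this is my reconstruction of the slide, so the symbols (K components, means mu_k, variances sigma_k squared) are only a notation choice:

p(x) \;=\; \sum_{k=1}^{K} p(z = k)\, \mathcal{N}\!\left(x;\ \mu_k,\ \sigma_k^2\right),

and in the variational autoencoder version described next, the categorical sum becomes an integral over a continuous latent variable:

p_\theta(x) \;=\; \int \mathcal{N}(z;\ 0,\ I)\ \mathcal{N}\!\left(x;\ \mu_\theta(z),\ \Sigma_\theta(z)\right)\, dz.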
So even though each of the individual components is pretty simple, it's just a bell curve and there's not too much you can do about changing the shape of the function. The moment you start mixing them together, you can get much more interesting shapes for the probability density that you get. And the reason is that when you want to evaluate the probability under this mixture model, the probability of a data point, x, what is that object? What is the marginal? You basically need to say, what was the probability of generating that point under the blue curve, plus the probability of generating that point under the orange curve, and plus the probability under the green curve? And this is just the definition of the marginal probability. You marginalize out the z. In this case, the joint is just something that looks like this, where p of z is just a categorical distribution. And the p of x given z is, again, something very simple. Just a Gaussian with different means and different standard deviations. And so you can see that even though the components, the p of x given z's, are super simple, just Gaussians, the marginal that you get is much more interesting in terms of the shape that you can get. And that's sort of one way to think about why this variational autoencoders are so powerful because it's basically the same thing, except that now you don't have a finite number of mixture components. So the z variable is no longer a categorical random variable, 1, 2, 3, 4, 5k. Now, z can take an infinite number of different values. There is a Gaussian distribution that you sample z from. So there is-- essentially, you can think of it as an infinite number of mixture components. So even though p of x given z is, again, a Gaussian, now we have a mixture of an infinite number of Gaussians. And what we're giving up is that in this Gaussian mixture model case, we were able to choose the mean and the standard deviations of these Gaussians any way we wanted because you basically have a lookup table. And so you have complete flexibility in choosing the mean and the standard deviation of the Gaussians. In the VAE world, the means and the standard deviations of all these Gaussians are not arbitrary. They are chosen by feeding z through this neural network. Through two neural networks, let's say mu and sigma, that will basically give you the mean and the standard deviation for that Gaussian component. There's no longer a lookup table. Now, it's whatever you can describe using a neural network. [INAUDIBLE] So there is still-- basically, z can take an infinite number of values because this continues. [INAUDIBLE] Before, it could only take k different values. Yeah. [INAUDIBLE] The question is that can't we just use a uniform distribution? Yeah, you can. This is just-- I'm just showing a Gaussian. But yeah, a uniform distribution would work as well. Yeah. You said z is the input to neural networks. Does that mean we first sample from z and then input into the neural network? To sample-- the process to sample, it would be the same as before. Like in the Gaussian mixture model, you pick a component, sample a value of z. Then you do your lookup, you get the mean, the standard deviation, and then you sample from the Gaussian. The sampling process here is the same. You sample a z, now it's a Gaussian. Then you feed it through the two neural networks to get the means and the standard deviations of the corresponding Gaussian, and then you sample from p of x given z. 
So you sample from a Gaussian with that mean and standard deviation. I'm a little confused about why the mixture makes sense. Let's say you have a unique cluster corresponding to 6 and another corresponding to 7, and now you're creating a mixture that's halfway in between. But in reality, there's probably no digit that's halfway between 6 and 7. So why does this work better in practice? Why does it work better than the finite mixture components? Yeah. So the question is, would you want z to be discrete, I guess, or continuous? And if you're trying to model discrete sorts of clusters, is this the right way of doing things? And yeah, you're right. That's sort of like here z is continuous so you have some way of transitioning between the clusters, which might or might not make sense. And it might need to find strange axes of variation to make that happen. You can also have a mixture of discrete and continuous, like this is the setting that the VAE uses and it tends to work well in practice. [INAUDIBLE] off of that, what does it mean, intuitively, to have a different number of clusters? Well, I mean, intuitively, it just means that z is no longer a categorical random variable so there's not a finite number of choices that you can make, but it's more like what I had here where the z variables can take values in this 2D space. And so there is not really even necessarily a notion of where you have to pick either here or here, you can be in between. It's like a point [INAUDIBLE] Yeah. Since z is described by a Gaussian distribution, does that mean that its mean is like some average of the possible latent variables or something-- of all [INAUDIBLE]? Can you repeat the question? Yeah. Can you interpret the mean of z as some average latent representation of all the data? Yeah, in some sense, yes. And then usually the average is 0, so you're basically forcing the average of all the latent representations of the data points to be centered at 0. So yeah. I mean, it depends on the training objective. But using the training objective that we will see in this lecture, yes, that would be the effect. Yeah. When you had a categorical representation for z, for the distribution, was it like a uniform probability across the categories? Yeah, the question is, in a mixture of Gaussians, does z have to be a uniform distribution? It doesn't have to. It can have-- I was being curious why-- how it works for z to be a normal distribution here, or versus a uniform, or how you're choosing z to be a normal distribution. Yeah. So that's kind of like, to some extent, an arbitrary choice, you can choose other priors. The key insight is that it just has to be something simple that you're going to sample from efficiently. The conditionals, the p of x given z, are also something simple. Here, I'm using a Gaussian, but you could use a factorized logistic, or it can be anything as long as it's simple. And all the complexity is really in these neural networks that would figure out how to map the latent variable to the parameters of this simple distribution. And you're going to get different results depending on the choices you make. This is sort of like the simplest Gaussian and [INAUDIBLE]. But yeah. In a GMM, do we have a neural network for each value of z or do we have just a single neural network that you're [INAUDIBLE]. One for the mean, one for the standard deviation. In a GMM, there is no neural network. It's just a lookup.
So it's the most flexible kind of mapping can think of because you're allowed to choose any value you want for the different values that z can take. So it's more like a Bayesian network world where you're allowed-- it's a lookup table. Which is great because it's super flexible, it's bad because it doesn't scale the moment you have many, many-- How do you actually feed those values? How to feed it? Yeah, we'll see that soon. So just to clarify, are we going to update the prior z while we are fitting the model? You could. In this world, I'm assuming that the prior is fixed. There is no learnable parameter. As usual, the learnable parameters are the theta, which, in this case, would be the parameters of these two neural networks. So you might have a very simple-- a very shallow linear neural network where to get the mean, you just take a linear combination of the z variables and then you apply some non-linearity. And then similarly, another simple neural network that would give you the parameters of the covariance matrix. Perhaps you can make a diagonal or something like this. I just want to clarify the intuition. So in the mixture, you're saying you're summing over a bunch of normals? Switching over here, you're saying, you can represent that sum as a single normal, and you're directly predicting the mean and variance for that normal to the matrices? So the marginal-- let's see if I have it. So the marginal basically becomes an integral. So instead of being a sum over all possible values of z, you have an integral over all the possible values that z can take. But it's the same machinery. Yeah. [INAUDIBLE] is a dimension of much lower or much higher than x? Good question. Yeah. So what is the dimension of z in practice? Typically, the dimension of z would be much lower than the dimensionality of x. And the hope is that, yeah, you might discover a small number of latent factors of variation that describe your data. Oh, the training? I will talk about it soon. Yeah. Does it even make sense for z to be greater than-- the dimensionality of z to be greater than x? That's it. Yeah. So we'll see another generative model that will basically be identical to this, except that z will have exactly the same dimensionality of x. And that's, for example, what a diffusion model does. So you might not necessarily always want to reduce the dimensionality. Having the same dimensionality will allow you to have nice computational properties. Yeah. Is it possible to have a more-- put more information into the prior as opposed to just, I don't know, something from standard Gaussian? Yes. Yes. You can certainly put more information. And I think there are two ways to think about it. One is if the prior is more complex, instead of having a Gaussian, you can put an autoregressive model over the latent variables. You're going to get an even more flexible kind of distribution. The other way to do it would be if you already have some prior knowledge about the types of-- maybe that there is a bunch of classes, there's 1,000 classes or 10 classes, then maybe you want to have one categorical random variable. And so if you have some prior over what kind of latent factors of variation you expect to exist in the data, you can try to capture that by choosing suitable priors. So in your shallow latent variable models, your number of Gaussians is equal to k. Here, since this is a normal distribution, your number of k is less [INAUDIBLE]. Yeah. Yeah. Exactly. Cool. 
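To make the generative process concrete, here is a minimal PyTorch sketch of sampling from a deep latent variable model of this kind; the layer sizes, the tanh nonlinearity, and the diagonal covariance are illustrative assumptions, not details taken from the lecture, and the networks are untrained, just to show the mechanics.

import torch
import torch.nn as nn

latent_dim, data_dim = 2, 784   # hypothetical sizes, e.g. MNIST-like data

# Two small networks map z to the parameters of p(x | z).
mu_net = nn.Sequential(nn.Linear(latent_dim, 128), nn.Tanh(), nn.Linear(128, data_dim))
# The second network outputs log standard deviations; exponentiating makes them positive
# (this is the reason behind the "exponentiate the output" point that comes up below).
log_sigma_net = nn.Sequential(nn.Linear(latent_dim, 128), nn.Tanh(), nn.Linear(128, data_dim))

def sample(n):
    # 1. Sample the latent factors from the fixed prior p(z) = N(0, I).
    z = torch.randn(n, latent_dim)
    # 2. Feed z through the two networks to get the parameters of p(x | z).
    mu, sigma = mu_net(z), torch.exp(log_sigma_net(z))
    # 3. Sample x from the resulting diagonal Gaussian.
    return mu + sigma * torch.randn(n, data_dim)

x = sample(16)   # 16 samples from the marginal p(x)

Compare this with the GMM, where the mapping from z to a mean and covariance is a lookup table rather than a pair of networks.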
And then so what you would do is you would somehow try to fit this model to data. And in this case, the parameters that you can choose from-- that you can choose are all in these neural networks, the parameters of these two neural networks, mu and sigma. And again, the takeaway is the same as the Gaussian mixture model. Even though p of x given z is super simple, it's just a Gaussian, the marginal that you get over the x variables is very flexible. It's kind of like a big mixture model. And so that's kind of like the recap, two ways to think about it. One is to define complicated marginals in terms of simple conditionals, and then this idea of using the latent variables to cluster data points. And, again, being able to model them through relatively simple conditionals once you've clustered them. On the previous slide, for our covariance matrix, why are we exponentiating the output of [INAUDIBLE]? Just because it has to be positive semi-definite. Then we're just modeling it as a diagonal? Yeah, which is, again, a modeling choice but you could make it not diagonal. Cool. And now we'll see the no free lunch part, which is that it's going to be much harder to learn these models compared to the autoregressive, fully observed models that we've seen so far. And the problem is that, basically, you have missing values. And so what happens is something like this. Imagine that you still want to train an autoregressive model but now some of your data is missing. So you still want to fit an autoregressive model over the pixel values but now you don't know the value of the top half of the images. So what do you need to do? Well, there are two sets of variables here again. There is the part that you get to see and then there is some part that you don't get to see that is latent. And then there is a joint distribution. So your PixelCNN would tell you the relationship between the x variables and the z variables. So you can choose-- you can complete the green part, the missing part, any way you want. And let's say your autoregressive model will tell you how likely the full image is, because you have a joint distribution over z and x. The challenge is that you only get to see the observed part, so you only get to see the x part. And so you need to be able to evaluate what is the probability of observing, let's say, this bottom half of a digit. And in order to do that, again, we have to marginalize. So you have to basically look at all possible ways of completing that image and you have to sum the probabilities of all these possible completions. And even though the joint is easy to evaluate, because maybe it's just a product of conditionals or, like in the VAE case, it's just the product of two Gaussians, basically, you have this mixture behavior. Just like in the mixture of Gaussians, when you evaluate the probability over just the x part, you have to sum out all the things over all possible values of the unobserved variables. You have to look at all possible completions and you have to check how likely the different completions are and you sum them up. Just like in the mixture of Gaussians case, you need to sum the probability under each mixture component. The same thing. And the problem is that there are potentially too many possible completions. Like in the mixture of Gaussians case, maybe you only have k possible values that the z variable can take. And so this thing is easy to evaluate. You can just brute force it.
But if you have a high-dimensional latent variable z, this sum can be extremely expensive to evaluate. You got flexibility because you're mixing a lot of distributions but then you pay a price because it's hard to evaluate that quantity. [INAUDIBLE] will get elevated? That's how we [INAUDIBLE] know those z's. So how would you sum over the z's? Well, you sum over all possible completions. So you would have to put all white pixels and then you check how likely is that? Probably very low. Then you try all black pixels, and then you try all possible combinations, and then you check the probability of each one of them and you sum them up. Instead of having the z to be the full completion, can the z just be like some digit? Like the 2 digit-- I mean, assume I know the true digit, that's the 9 or 6 or something. Yeah. In that case, you would have a latent variable that is maybe categorical, and that's what you would do if you're trying to infer a digit identity. Cool. And variational autoencoder, you have the same thing. The z's are not observed at training time. So at training time, you only get to see the x part. So when you want to evaluate the probability of observing a particular x, you have to go through all possible values that the z variables can take and you have to figure out how likely was I to generate that particular x? And the z variable is not even discrete so if you want to evaluate the probability of generating a particular x, you have to actually integrate over all possible values that the variables can take. You have to go through all possible choices of the z variable, you have to see where it would map to. You would check the probability under that Gaussian mixture component and then you integrate them up. Again, you can imagine that this is super expensive because-- yeah. Especially if you have a reasonable number of latent variables, the curse of dimensionality, this is very expensive to even numerically approximate this kind of integral. But that's where the flexibility comes from. If you're integrating over an infinite number of mixture components. [INAUDIBLE] Not just for each z. Do you just calculate the probability of x given z? Yeah. So for every z, you can evaluate the joint. Just like here, for every value of z, I can evaluate p of x comma z. I can just check the z and map it through the neural networks. I get a Gaussian, I, can evaluate the probabilities. But-- [INAUDIBLE] [? It's not ?] possible because I don't know what was the z corresponding to that data point, right. The z is not observed so I have to try all of them. Just like here, I only get to see the bottom part. I don't know what was the top part for that particular image. I have to guess. I have to try every possible way of completing that data point and I have to sum them up. [INAUDIBLE] Z's, right? Of the z's. How does-- Yes. How does all possibility of z relate to comparing all possible ways [INAUDIBLE]?? In this case, I'm assuming that the z variables represent the top part, the unobserved part. Cool. So that's sort of the challenge, evaluating this marginal probability of x, you need to integrate over all the possible values of the z variables. And so that's the setting, you have a joint distribution over x and z. Yeah. [INAUDIBLE] assuming you knew what we were looking at, probably observing x, x bar. So you know what x bar is and you want to predict what is the most likely. [INAUDIBLE] that should be over there, then you also have to go through every single possible x value? 
Click over here, you're looking at the probability of x equals x bar and now you have to loop over all possibilities again. But what if you don't actually know that the training data point is x bar and you want to actually solve that? So that can also work. So the setting that we're going to consider is one where in the data set, the x variables are always observed. You could also think about a setting where you have some missing data and some of the x variables are missing themselves. And then you have-- You have [? through. ?] Yeah. Yeah. So in the previous slide, the capital Z, is it the same as the normal distribution [INAUDIBLE]?? Yeah. Yeah. OK. And so we have to-- so instead of summation, we have integral now. How are we supposed to compute the integral? Do we sample infinitely? [INAUDIBLE] So we'll see how to do [INAUDIBLE].. So that's kind of like the setting. We have a data set but for every data point, we only get to see the x variables and the z variables are missing. They are unobserved. And then-- so you can think of the data set as being a collection of [INAUDIBLE] images, x1 through xn. And what we would like to do is we would like to still do maximum likelihood learning. So we would still like to try to find a choice of parameters that maximize the probability of basically generating that particular data set. It's the same objective that we had before. Let's try to find theta that maximizes the probability of the data maximum likelihood estimation. Or equivalently, the average log likelihood of the data points, if you apply a log. And the problem is that evaluating the probability of a data point under this mixture model is expensive because you have to sum over all possible values that the z variable can take for that data point. And so evaluating this quantity can be intractable. And just as an example, let's say that you have 30 binary latent variables, then that sum involves 2 to the 30 terms. So it's just way too expensive to compute this thing. So if the z variables can only take k different values like a Gaussian mixture model, you can do it. You can brute force it. But if you have many latent variables, you cannot evaluate that quantity efficiently. And for continuous variables, you have an integral instead, which, again, is tricky to evaluate. And if you are hoping that maybe we only need gradients because at the end of the day, we just care about optimizing, gradients are also expensive to compute. So trying to do gradient ascent on that quantity is not feasible directly. So we need some kind of approximations and it has to be very cheap because think about it, you need to be able to go over the data set many times, and you need to be able to evaluate the gradient for every data point, possibly, many times. So this approximation here has to be very cheap. And one natural way to try it would be to try to do Monte Carlo kind of thing, right. Basically, this quantity would require us to sum over all possible values of z, and instead, we could try to just sample a few and get an approximation. And that's the usual recipe that we've seen in the last lecture. The idea is that we have a sum that we're trying to compute, we can try to rewrite that sum as an expectation, essentially. So we can-- if there are capital Z, basically, possible values that these z variables can take, we can multiply and divide by the total number of entries in this sum. And then this object here becomes an expectation with respect to a uniform distribution. And now we can apply Monte-carlo. 
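Written out, the rewriting just described looks like this (my notation, reconstructing the slide): the sum over all completions is turned into an expectation under a uniform distribution, which can then be approximated with k samples.

p_\theta(x) \;=\; \sum_{z \in \mathcal{Z}} p_\theta(x, z)
\;=\; |\mathcal{Z}| \sum_{z \in \mathcal{Z}} \frac{1}{|\mathcal{Z}|}\, p_\theta(x, z)
\;=\; |\mathcal{Z}|\; \mathbb{E}_{z \sim \mathrm{Uniform}(\mathcal{Z})}\!\left[\, p_\theta(x, z) \,\right]
\;\approx\; \frac{|\mathcal{Z}|}{k} \sum_{j=1}^{k} p_\theta\!\left(x, z^{(j)}\right), \qquad z^{(j)} \sim \mathrm{Uniform}(\mathcal{Z}).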
Whenever you have an expectation, you can approximate it with a sample average. So you could say, let's approximate this sum with a sample average. So essentially, you would randomly sample a bunch of values of z, and then you would approximate the expectation with the sample average. You check how likely these completions are under the joint and then you rescale appropriately. This would be cheaper because you just need to check k completions instead of all the possible completions that you would have to deal with. Yeah. Is the z an element of [INAUDIBLE], is that just a subset of all possible values of z? That would be all possible values of z. It's just I needed a set. [INAUDIBLE] We're sampling. So we sample uniformly at random, and then we rescale. And this is more tractable than [INAUDIBLE] before, because [INAUDIBLE] sampling a discrete number of values. And a small number, that's the key thing. You just need k of them, and k could be 1. The cheapest way is to choose k equals 1 here, just sample one. You look at the joint likelihood of that completion, and then you rescale appropriately. And that would be a valid estimator for the quantity of interest. Over here, you're sampling from the uniform z. I think when we did Monte Carlo last class, we were sampling from the probability distribution of z. So is there a reason why we treat every z as equally [INAUDIBLE] before? It's just because we have a sample. So if you can see, I'm multiplying and dividing by the total number of things so that then this becomes an expectation with respect to a uniform distribution. That's the trick, basically. [INAUDIBLE] five times more likely than [INAUDIBLE]. That's the problem. That's why this is not going to work, basically. I'm just trying to understand the rationale of trying to sample the z from the uniform distribution. Is it because we knew that we're never going to see the z, so we gave up on that, so we just do like the uniform, which is like almost no priors? So this is not going to be-- you are getting at why this is not a great solution. This is a first attempt, doing things uniformly. It's cheap, but it's not going to work in practice because it-- I think what you're suggesting is that if you think about randomly guessing the z's, most likely you're not going to hit the values of z's that have enough probability under the joint. And so most of the completions that you guess by choosing uniformly at random don't make sense, so they would have a very low value of p theta. And so although this, technically speaking, is an unbiased estimator, the variance would be so big that it's not going to work in practice. So somehow, as I think you were suggesting, we need a smarter way of selecting these latent variables. We don't want to sample uniformly, we want to sample them trying to guess the ones that make sense. I'm sorry, this is kind of like a big picture question. But when we started off, you were describing z as important features, of which, I think, when we tend to think of features, a person can have eye color and also hair color. But I feel like the way that we've been treating z so far has been as like a class or membership in a group. So for example, even in the variational autoencoder case, it's like you choose one scalar value as between 0 and 1. But it's like one value-- are we just like not there yet in terms of different model [INAUDIBLE] get to the feature representation? Or is there a connection there? I think it's a great question, and it's what these z's would even end up representing.
Well, there is a first question, whether they are discrete or continuous, and that depends on just how you model them and doesn't matter too much. Whether they end up representing the things that matter, like the hair color or the eye color, it's questionable. Right here, we just try to do maximum likelihood, we just try to fit the data as well as we can. And we're going to try to use these latent variables to fit the data. That's what the model is going to try to do if you use this objective. Whether you end up with something meaningful or not is not necessarily guaranteed. You end up with some latent variables, such that if you sample from them and you fit them through this model, you get, hopefully, good images or good distribution that is very similar to the one in the training set, which means that these latent variables do capture the important latent factors of variation in the data. Whether they correspond to something semantically meaningful is absolutely not guaranteed. Does that answer your question? Not really, but I'll keep listening. Ask again. I'm sure somebody else has the same question here, too. Oh you wanted me to say it again? Sure, yeah. Sorry to take up everyone's time. I guess just the idea is even with the variational autoencoder, we're sampling z from a distribution. But then ultimately, z is like one scalar value. Correct? Yeah. But I feel like when-- so for example, with the MNIST, it was like, maybe just to go back to the categorical z model, you sample like which digit it is you're trying to represent, and then that gives you some Gaussian distribution. But if it's something like a picture of someone, it's not the case that there's only one class they fall into. There are many features, which can coexist. So somebody can have blue eyes and black hair, someone can have brown eyes and black hair. How are we representing that here? Or are we not representing that yet? We are to some extent, except up to the fact that they're not going to be necessarily meaningful, but we are using multiple latent factors of variation. So every x will basically be mapped. There's many different I guess-- there is many different z's that could generate that particular x. There is some that are more likely than others. Given x, when you try to infer p of z given x, you're guessing what are the latent features for this particular data point. And if you look at what you get, indeed, you end up with soft-- if the z variables are continuous, then you don't end up with a discrete clustering thing, you end up with two values. You end up with 0.5, 0.5. These yellow points end up having z1 right around 0.5, 0.5. It doesn't have a specific meaning, except that all the points that correspond to that class, that digit end up having similar values of the variables. [INAUDIBLE] an attempt of trying to answer that question. So if my understanding is correct, the z you are dealing with right now does not have to be a scalar, it can be [INAUDIBLE].. So the first element of the z vector, if it makes sense, it could be the hair color, and the second one could be the eye color, so is a combination of local features right? Yes, there could be multiple z's. So there's not a single z, there is multiple z's. In this case, there's two, z1 and z2, and they capture two out of [INAUDIBLE] factors of variation. [INAUDIBLE] That's the problem. What I have here is you have the 30 binary features that you-- binary latent features. 
They can all be just zero one then you have two to the 30 basically different possible cluster assignments and then you can't sum them up. Basically, that's the problem. What is the semantic meaning of the z's? Can you do anything to increase whether they're independent or you're not based in parameters? There is a whole field of disentangled representation learning, where people have been trying many different ways of trying to come up with models where the latent variables have a better-- are more meaningful. There are unfortunately theorems showing that it's impossible to do it in general. Practically, people have been able to get reasonable results. But there are some fundamental limitations to what you can do, because the problem is essentially ill-posed. [INAUDIBLE] Disentangled representation. The above example where we have 30 binary latent features, so we have 30 z's, so do we say that these 30 z's follows a normal distribution, or that each of the z follows normal distribution? So if the z's are discrete, then it wouldn't be normal, it would be like each one of them can come from a simple Bernoulli distribution. If z could be a Gaussian random vector, in which case, you would have the integral, both cases are pretty hard, basically. So big picture questions-- so if we're having the data set that have the labels, can I say we just-- because here, we're using these latent variables to try to make our day easier to compute the marginal distribution. So I was just trying to think [INAUDIBLE] connections. So I guess there are two answers there, if sometimes, you get to see the z values, maybe you have an annotator willing to label these things for you, you can imagine that you can-- it's not hard to modify this learning objective, where when the value of that variable, you don't sum over it, you just plug in the true value. And so you can do some kind of semi-supervised learning, where sometimes this variable is observed, and sometimes it's not. [INAUDIBLE] unsupervised learning manner. This is pure, but it seems very easy to adjust the settings. Sometimes the z [? variable will be ?] [? observed ?] is the same thing, except the notation becomes ugly, because then you have some points where you have it, some don't. [INAUDIBLE] learn some latent variables that have semantic meanings or they could not. But if you have have labels, that's a good way to steer them in the direction that you want. Cool. OK, so this was the vanilla kind of setting, where you could just try a bunch of choices for the random variable at random and hope that you get something. But this is not quite going to work. And so we need a better way of guessing the latent variables for each data point. And so the way to do it is using something called importance sampling where instead of sampling uniformly at random we're going to try to sample the important completions more often. So recall, this is the object we want it's this marginal probability where you have to sum over all possible values of the latent variables. And now, what we can do is we can multiply and divide by this q of z, where q is an arbitrary distribution that you can use to choose completions, choose values for the latent variables. And this is one, so you can multiply and divide it by q, is fine. 
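In symbols, the multiply-and-divide trick and the resulting importance sampling estimator look like this (again my reconstruction of the notation on the slide):

p_\theta(x) \;=\; \sum_{z} p_\theta(x, z)
\;=\; \sum_{z} q(z)\, \frac{p_\theta(x, z)}{q(z)}
\;=\; \mathbb{E}_{z \sim q}\!\left[ \frac{p_\theta(x, z)}{q(z)} \right]
\;\approx\; \frac{1}{k} \sum_{j=1}^{k} \frac{p_\theta\!\left(x, z^{(j)}\right)}{q\!\left(z^{(j)}\right)}, \qquad z^{(j)} \sim q.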
And now again, we're back to the setting where we have an expectation with respect to q of this ratio of probabilities, the probability under the true model and the probability under this proposal distribution, or this way that you're using to guess the completion for the latent variables. And now, what we can do, again, is just Monte Carlo. Again, this is still an expectation, it's still intractable in general, but we can try to do the usual trick of let's sample a bunch of z's. Now we don't sample them uniformly, we sample them according to this proposal distribution q, which can be anything you want. And then we approximate the expectation with this sample average, just like before. Now the sample average has the importance weight that you have to account for in the denominator, because the expression inside the expectation has this q in the denominator, so we have to put it here. This q is just something that we're modeling. So we'll see how to choose q. For now this works regardless of how you choose q. I think what's a good choice for q? Intuitively, you want to put probability mass on the z's that are likely under the joint distribution. You'd like to somehow be able to sample the z's that make sense. So you have a current joint distribution between x and z, and you want to choose the z's that make sense, that are the completions that are consistent with what you observe. Then you're not just sampling from z, like ideally you want to sample from z given x. So this is for a particular data point, so I'm doing it for a single x. You're perfectly right that this choice of q has to depend on the x, on what you see. But for now, this is a single data point, so I can just have a single q, that's supposed to work. And regardless, basically, of how you choose q, this is an unbiased estimator, meaning that even if you choose a single sample, we know the expected value of the sample average is the object we want. So equivalently, if you want to think about it, you could say if you were to repeat this experiment a very large number of times and average the results, you would get the true value. So this is a reasonable kind of estimate. Now the slight issue is that what we care about is not the probability of a data point, but we care about the log probability of a data point. What we care about is optimizing the average log likelihood of the data points, so we need to apply a log to the expression. So we could try to just apply a log on both sides of this equation and get this kind of estimate for the log likelihood, but there is a problem. So for example, if you were to choose a single data point, a single sample, so if k here is 1, so you just sample a single possible completion, and then you evaluate that estimator that way, so it's just the ratio of the two probabilities. You can see that this is no longer unbiased, the expectation of the log is not the same as the log of the expectation. So if you take an expectation of this object here, even though the expectation of the right-hand side here is what we want, when you apply a log, there is bias. And we can actually figure out what that bias is, so recall that what we want is this, we want the log marginal probability, which we can write down using this importance sampling distribution. And we know that the log is a concave function, which means that if you have two points, x and x prime, and you take a combination of the two, and you evaluate the log, this is above the linear combination of the two values of the function.
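Since this concavity property is about to be used, it may help to have it written down; this is the standard statement rather than a formula copied from the slides:

\log\!\big(\lambda x + (1 - \lambda)\, x'\big) \;\ge\; \lambda \log x + (1 - \lambda) \log x' \quad \text{for } \lambda \in [0, 1],

and more generally, Jensen's inequality for the concave logarithm:

\log \mathbb{E}_{z \sim q}\!\left[ f(z) \right] \;\ge\; \mathbb{E}_{z \sim q}\!\left[ \log f(z) \right].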
And what this means is that, because of this concavity property, we can basically work out what happens if we swap the order of logarithm and expectation. So if we put the expectation outside of the log, we're going to get a bound on the quantity that we want. So there is this thing called Jensen's inequality, which basically says that the logarithm of the expectation of some function, any function, which is just this quantity here, is at least as large as the expectation of the log. And the picture is, again, you have a log which is a concave function, and so if you have two points f of z1 and f of z2, and you take the linear combination of that, you're always below what you would get if you were to apply the log. And so in our world, what this means is that, for this particular choice of f of z, which is what we have here, this density ratio, the log of the estimator is at least as large as the average of the log. So what we have here, if we do this, is a lower bound on the object that we want. So on the left, we have the thing we want, which is the log marginal probability of a data point, and on the right, we have a quantity which we can estimate, which is a bunch of samples from q, and we evaluate this log, and that's a lower bound. Which is not bad, because what this means is that if we were to optimize the quantity on the right, the thing we care about would also go up, hopefully. It has to be at least as large as whatever we find by optimizing the quantity on the right. Just thinking of why we care about the log of this, and not the quantity itself. Yeah, because if you recall, what we want to do is we care about doing maximum likelihood, so what we care about is this, so we want to go through all the data points, and for every data point we want to evaluate the log probability of that data point. And so that's the quantity that we'd like to take gradients of and would like to optimize. And the good news is that we can get a lower bound on that quantity through this machinery. Where is it, I think here. And then the strategy is basically going to be, let's try to optimize this lower bound. And what we will see soon is that the choice of q, the way you decide how to sample the latent variables, basically controls how tight this lower bound is. So if you have a good choice for q then the lower bound is tight, and this basically becomes a very good approximation to the quantity we actually care about, which is the log marginal probability. What's stopping us from taking the sample average of the logarithms of the [INAUDIBLE]? That's what we're going to do. So this one, the right-hand side, is what we're going to actually optimize. OK, the left-hand side is what we want, exactly the log marginal probability; the right-hand side is the thing we can actually easily evaluate: you sample a bunch of z's from q and then you get this log density ratio. So whether this method works or not depends on how well we choose q [INAUDIBLE]. Yes. And this thing that you see on the right is something you might have heard of before, it's the evidence lower bound, the ELBO, which is a lower bound on the probability of the evidence, which is basically the probability of x. So x is the evidence, x is the thing you get to see, the observed part. The log probability of the evidence is the thing you would like to optimize, but it's tricky to evaluate that. And so instead, we have this evidence lower bound, the ELBO, which is the thing we can actually compute and optimize. Can you remind me again, why we need [INAUDIBLE]?
Is it because the initial function that we have is not something that we can very easily compute the minimum of? Yeah, so the original thing that you would like to have is this, which is still tricky. The expectation itself is not something you can evaluate, so you would have to do a sample average. And you can do the sample average inside or outside. If you do the simplest case where, let's say, you choose k equals 1, which would be the cheapest way of doing things, where you take a single sample, so what people would do in practice, then you see that if you take the expectation of that, you end up with the expectation of the log instead of the log of the expectation. So since this is not a correct approximation, [INAUDIBLE] it's an approximation-- It happens to be a decent one, because it's a lower bound, and so it's not going to hurt us too much, because we are optimizing a lower bound. If you maximize the lower bound, the true quantity is always going to be above, and so it's also going to go up. Yeah, so what you're saying is that if we sample from here, we just end up getting a log of the expectation, which is not what we want, since we can use Jensen's inequality to describe the function as the expectation of a log, and we can solve for whatever its minimum is, and that'll be a minimum of the other function as well. So Jensen basically just tells you what happens if you were to do this approximation, where you take the expectation, and then the log. So now we can just find the maximum of one of these functions, which is what we're actually maximizing here, and we can see this one, which is the same as maximizing [INAUDIBLE] the other one is above. And so right, and we'll see that the gap here is not too bad, as long as we choose a good q. Let's talk about [INAUDIBLE] stuff, in that case, if your objective is minimizing the quantity on the left, then you can use Jensen's inequality, is that the right solution, that this only works [INAUDIBLE]? Yeah. It only makes sense to maximize a lower bound to the function you want, because otherwise, yeah, it's not clear what the relationship would look like. [INAUDIBLE] because we want to go in the reverse direction, it's very hard [INAUDIBLE]. CUBOs, there is the ELBO and the CUBO, and then there's a bunch of techniques that people have come up with for upper bounds to these quantities, but then it's much trickier to get an upper bound than a lower bound. And intuitively, it's because, if you just sample a few of these, it's very hard to know whether you're missing some very important ones, which is what you would need to get an upper bound. While it's relatively easy to say that if I've seen so many z's that have a certain amount of probability mass, there must be others. So it's always easier to get a lower bound than an upper bound, because the upper bound would require you to say you rule out that there are many z's that have a very high probability somewhere and you haven't seen them, that's the intuition. Yeah, I guess this is not related, but is there a way to quantify how tight the bound is? So there is a way to quantify how tight the bound is. So we know that, for any choice of q, you have this nice lower bound on the quantity we care about. This is the quantity we care about, and we got a lower bound for any choice of q.
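For reference, the bound being discussed can be written compactly as follows (my reconstruction of the expression on the slide):

\log p_\theta(x) \;=\; \log \mathbb{E}_{z \sim q}\!\left[ \frac{p_\theta(x, z)}{q(z)} \right]
\;\ge\; \mathbb{E}_{z \sim q}\!\left[ \log \frac{p_\theta(x, z)}{q(z)} \right] \;=\; \mathrm{ELBO}(x;\ \theta,\ q).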
If you expand this thing, you're going to get a decomposition where the log of the ratio is just the difference of the logs, and you can see that this quantity here is what we've seen in the last lecture as being the entropy of q. And so you can also rewrite this expression as the sum of two terms: the average log joint probability under q, and then you have the entropy under q. And it turns out that if q is chosen to be the conditional distribution of z given x under the model, then this inequality becomes an equality. So the bound becomes tight, and there is no approximation basically at that point. And so essentially what this is saying is that the best way of guessing the z variables is to actually use the posterior distribution according to the model. You have a joint distribution between x and z that defines a conditional for the z variables given the x's, and that would be the optimal way of guessing the latent variables. The problem is that this is not going to be easy to evaluate, and so that's why we'll need other things, but this would be the optimal way of choosing the distribution. And incidentally, if you've seen the EM algorithm, that's what you need in the E step of EM. And there are some very close connections between EM and what we're doing here. So this says that the best way of inferring the latent variables is to use the true posterior distribution. Originally, we had a z, and we were going to x. But if you also need to go from x to z, then you have a cycle. How do you consider the computational graph in this case? Yeah, so essentially what this would require you to do is to say, given an x, if you have a VAE, you would have to figure out what kind of inputs should I feed into my neural networks that would produce this kind of x. So you have to invert the neural network, and you need to figure out what were the likely inputs to the neural networks that would produce the x that I'm given, which is in general pretty hard, as we'll see, but we can try to approximate that. Cool. I think we're out of time, so this is probably a good place to stop. What we'll see then is that the machinery for training a VAE will involve optimizing both p and q, and that's what we're going to see in the next lecture.
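As a quick numerical check of the tightness claim above, here is a small self-contained sketch (a toy example of my own, not something from the lecture) on a mixture of two 1D Gaussians, where the true posterior p(z given x) is available in closed form. With q equal to the posterior the ELBO matches log p(x) exactly; with a uniform q it is strictly smaller.

import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

# Toy mixture of two Gaussians: p(z) uniform over {0, 1}, p(x | z = k) = N(mu_k, s_k^2).
mus, sigmas, prior = np.array([-2.0, 3.0]), np.array([1.0, 0.5]), np.array([0.5, 0.5])
x = 0.7  # a single "observed" data point

# Exact log p(x): sum over the two mixture components.
log_joint = np.log(prior) + norm.logpdf(x, mus, sigmas)   # log p(x, z = k) for k = 0, 1
log_px = logsumexp(log_joint)

def elbo(q):
    # E_q[log p(x, z) - log q(z)] for a distribution q over the two components.
    return np.sum(q * (log_joint - np.log(q)))

q_posterior = np.exp(log_joint - log_px)      # true p(z | x), obtained with Bayes' rule
q_uniform = np.array([0.5, 0.5])              # a deliberately crude guess

print("log p(x)             :", log_px)
print("ELBO with q = p(z|x) :", elbo(q_posterior))   # equals log p(x): the bound is tight
print("ELBO with uniform q  :", elbo(q_uniform))     # strictly smaller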
Stanford CS236: Deep Generative Models (2023), Stefano Ermon. Lecture 17: Discrete Latent Variable Models.
All right. OK. So I thought we could finish the lecture from last time and keep talking about diffusion models. I have another lecture ready on training latent variable models with discrete variables, but I thought we didn't finish this, and there was quite a bit of interest, so we can go through the remaining slides and really see the connections with all these efficient sampling strategies and all that good stuff. So as a reminder, we've seen that we can think of-- there is this close connection between score-based models and denoising diffusion-- DDPMs, denoising diffusion probabilistic models. The basic idea is that you can think of score-based models as basically trying to go from noise to data by essentially running these Langevin dynamics chains. And alternatively, we can think about a process that does something very similar from the perspective of a variational autoencoder. So there is a process that basically adds noise to the data, which you can think of as an encoder. And all these transitions here, q of xt given xt minus 1, this is just a Gaussian, which is centered at xt minus 1, and you just add a little bit of noise to get xt. And so at every step you add a little bit of noise, and then eventually, after many steps, you've added so much noise to the data that all the structure is lost and you're left with pure noise at the end of this chain. And as in a regular VAE, there is a decoder, which is a joint distribution over the same random variables. And we parameterize it in the reverse direction, so we go from noise to clean data. And we have this sequence of decoders, and the decoders are basically this p theta of xt minus 1 given xt. And so given xt, you try to guess what is the value of xt minus 1. And these decoders, in the DDPM formulation, are also simple in the sense that they are Gaussian distributions, and the parameters of these Gaussian distributions are computed using neural networks, just like in a regular VAE. And what we've seen is that we can train these models the usual way, which is by optimizing an evidence lower bound, which essentially tries to minimize the KL divergence between the distribution defined by the decoder and the distribution defined by the encoder. It's trying to match those two joint distributions. And if you look at the ELBO objective, it looks like this. And it turns out that if you do a little bit of math, this objective ends up being exactly the denoising score-matching objective. So in order to-- essentially, if you want to learn the best decoder, the best way of guessing xt minus 1 given xt, essentially what you have to do is you have to learn the score of the noise-perturbed data density, which we know corresponds to-- can be done by solving a denoising problem. And so essentially optimizing the ELBO corresponds to learning a sequence of denoisers. The same as in the noise conditional score models, essentially. And the main thing here is that we can interpret the whole thing as a variational autoencoder; optimizing the ELBO corresponds to essentially a sum of denoising score-matching objectives, each one corresponding to a different noise level that we have in this chain. And so the resulting training and inference procedure in a denoising diffusion probabilistic model is very, very similar to the one in a score-based model. During training time, you are essentially learning a sequence of denoisers, one for every time step.
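To make "learning a sequence of denoisers" concrete, here is a minimal sketch of a DDPM-style training step in the noise-prediction parameterization; the linear beta schedule, the tiny network, the crude time embedding, and the uniform weighting over time steps are all illustrative assumptions rather than the exact choices from the lecture.

import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)              # a common linear noise schedule (an assumption)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)     # cumulative product, one value per time step

# A stand-in denoiser: takes (noisy x, t) and predicts the noise that was added.
class TinyDenoiser(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.ReLU(), nn.Linear(256, dim))
    def forward(self, x_t, t):
        t_feat = (t.float() / T).unsqueeze(-1)     # crude time embedding, just for the sketch
        return self.net(torch.cat([x_t, t_feat], dim=-1))

def training_step(model, x0):
    # 1. Pick a random noise level t for each example in the batch.
    t = torch.randint(0, T, (x0.shape[0],))
    # 2. Jump directly to the corresponding noisy version of the data.
    a = alpha_bars[t].unsqueeze(-1)
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps
    # 3. The ELBO (up to weighting) reduces to a denoising objective: predict the noise.
    return ((model(x_t, t) - eps) ** 2).mean()

model = TinyDenoiser(dim=2)
loss = training_step(model, torch.randn(32, 2))    # toy 2D "data" just to exercise the code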
And once you have the denoisers, to generate samples what you do is you just use the decoders, just like you would do in a normal VAE. And because basically the means of these Gaussians that are defined in the decoders at optimality essentially correspond to the score functions, the updates that you do end up looking very, very similar to the ones you would do in an annealed Langevin dynamics procedure, where at every step you would essentially follow the score. And you add a little bit of noise at every step because the decoders are Gaussians. And so in order to sample from a Gaussian, you would compute its mean, and then you would add a little bit of noise to that vector. And so it's very similar to the procedure that we would do in Langevin dynamics where, again, you would follow the gradient and you would add a little bit of noise. And, yeah, we've seen the architectures are also very similar. But where we stopped last time was to think about the diffusion version of this, which is really the case when we have an infinite number of noise levels. So instead of having, let's say, 1,000 different versions of the data density that have been perturbed with increasingly large amounts of noise, we can consider this continuum, this spectrum of distributions that are now indexed by this variable t, which you can think of as time that goes from 0 to capital T. And so just like before, on the one end, we have the clean data distribution. At the other end, we have a pure noise kind of distribution, but now we have a continuum. And this continuum is actually going to be useful because it exposes additional structure in the model that we can take advantage of for coming up with more efficient samplers, for evaluating likelihoods exactly, and so forth. So you can think of the VAE perspective that we talked about so far as some kind of discretization of this continuous version of the process, where we only look at, let's say, 1,000 different slices in this sequence. But it makes sense to think about the continuous version because, as we'll see, it allows us to do more, essentially.
Just like in the VAE perspective, there is a density probability density function associated with each one of these random variables. And it turns out that we can describe how these random variables are related to each other through a stochastic differential equation, which you can think of it as a way that would allow you to sample values for these random variables. It would take a whole quarter to explain exactly what that notation mean and what the stochastic differential equation is, but essentially, you can imagine that this is really what happens if you take the previous VAE perspective and you make the intervals, the time steps between one slice and the next one very, very small. So dxt is basically the difference between xt and xt plus delta and a neighboring slice. And the difference between these two random variables is given by some deterministic value, which is just like the drift it's called, plus a little bit of noise, an infinitesimal amount of noise. And for simplicity here we can think about-- if you think about the process of just adding noise, you can describe it with a very simple stochastic differential equation where the difference between the value of the random variable at time t and the value of the random variable at time t plus epsilon or t plus delta t is just an infinitesimally small amount of noise, which is what this equation really means. Yeah. So the drift is basically telling you how you should change, basically. You can imagine that there is some kind of velocity like you think about the dynamics, like if you think about how xt evolves. So xt is, let's say, an image. And as you increase time, the value of the pixels change. And if you don't have the drift, then the change is entirely driven by noise, which is what we're doing here. What we will see is that when we reverse the direction of time, then it becomes very important to actually take into account the drift because we want to have some velocity field that is pushing the images towards the directions where we know there is a high probability mass. And so it's going to be important to have a drift because if you think about-- if you flip the direction of time and you want to go from noise to data, it's not a purely random process. You have to change the values of the pixels in a very structured way to generate at the other end, something that is indeed looking like an image. And so if you see at this end, all the probability mass is spread out according to a Gaussian. But if you want to get to something that looks like this where all the probability mass is here and here, then you have to have some velocity that is pushing the particles, that is pushing this trajectory to go either here or here, essentially because you want to have the right probability mass at the other end. This is a special case of this where the drift is 0. And it turns out this kind of stochastic differential equation is the one that captures this relatively simple behavior of just adding noise to the data. More generally, you could have a drift, and we'll see that we need the drift to talk about the reverse process. But what I'm saying here is that this process of adding noise to the data, which is just a very fine discretization of what happens in the previous VAE can be described by this simple stochastic differential equation where the dxt is the difference in the value of the random variables take at a very small time increment is just an infinitesimally small amount of noise that you get. 
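Operationally, that equation just says: repeatedly add a small Gaussian increment whose standard deviation is sigma(t) times the square root of the step size. A short Euler-Maruyama simulation makes this explicit; the particular sigma(t) schedule and step count here are arbitrary choices for the sketch.

import numpy as np

def sigma(t):
    # Hypothetical diffusion coefficient; any positive schedule works for the illustration.
    return 0.5 + 4.5 * t

def forward_sde(x0, T=1.0, n_steps=1000, seed=0):
    # Euler-Maruyama discretization of dx = sigma(t) dw: at each small step,
    # add Gaussian noise with standard deviation sigma(t) * sqrt(dt).
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    for i in range(n_steps):
        t = i * dt
        x = x + sigma(t) * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

x0 = np.array([1.0, -2.0])          # a "data point"
xT = forward_sde(x0)                # after integrating all the noise, the structure is destroyed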
So I guess if you add Gaussian noise at the level of the densities, you are essentially convolving. So there is an implicit convolution that is happening here. So if you think about the shape of the densities, like you have a density here and then, if you take one of the slices here, you get a different density, which is actually the previous density convolved with a Gaussian kernel, because that's what happens if you sum up two independent random variables. So there is an implicit convolution happening here at the level of the densities. Cool. Now, the reason this is interesting is that we can think about just changing the direction of time. So we can think about the process as going from right to left in the previous slide. So going from noise to data. And so again, we have these trajectories which are samples from this stochastic process. So these are realizations of these random variables that are consistent with the underlying stochastic differential equation. And if you could somehow sample from this stochastic process, then you would be able to generate data by basically just discarding everything and just looking at the final endpoint of this trajectory. And the beauty is that if you essentially do a change of variables-- and it's a little bit more complicated because this is stochastic, but essentially if you apply a change of variables and you replace t with capital T minus t prime, so you just literally flip the direction of the time axis-- you can obtain a new stochastic differential equation that describes exactly the reverse process. And the interesting bit is that this stochastic differential equation now has a drift term, which is this red term here, which is the score of the corresponding perturbed data density at time t. So this is the exact reverse of this SDE. The forward SDE doesn't have a drift term. If it had a drift term, then you would have to account for it in the reverse SDE. You would have to basically flip it. But we don't have to here because, for now, there is no forward drift-- but you could include it if you wanted. This is the simplest case where I don't have the drift. If you had it, you could include it. So in the forward process, there is no drift. It's purely driven by noise. So you can think of it literally-- it's like a random walk. So at every step, let's say if it was a one-dimensional random walk, you can go either left or right with some probability. And then after a sufficiently large amount of time, you forget about the initial condition and you end up with a pure noise distribution. This is like the continuous time version of that, where at every step you move by a little bit. And the amount you move is basically this dw, which is the amount that you move towards the left or towards the right, essentially. But it's essentially a random walk. And it turns out that you can reverse it, and that there is a way to describe this reverse random walk where you go from noise to data, and it can be captured exactly if you know the score function. So they're describing exactly the same thing. So both of these SDEs describe these kinds of trajectories. The only thing that has changed here is that we're changing the direction of time. So if you flip the direction of time and you start from noise and you solve this SDE, you get exactly the same traces that you would have gotten if you were to start from data and add noise to it in the other direction.
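As a hedged reconstruction of the equation on the slide, the reverse-time SDE for this drift-free forward process takes the standard form below, with the score of the perturbed density p_t playing the role of the drift (the red term being described).

```latex
% Reverse-time SDE (time now runs from T down to 0); \bar{\mathbf{w}}_t is a reverse-time Wiener process.
\mathrm{d}\mathbf{x}_t = -\,\sigma(t)^2\,\nabla_{\mathbf{x}}\log p_t(\mathbf{x}_t)\,\mathrm{d}t
  \;+\; \sigma(t)\,\mathrm{d}\bar{\mathbf{w}}_t.
% With a general forward drift f, the reverse drift would instead be
% f(\mathbf{x}_t, t) - g(t)^2 \nabla_{\mathbf{x}} \log p_t(\mathbf{x}_t).
```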
These two are exactly equivalent to the extent that, you know, the score function. So in this case, you have to go towards-- because this is trying to sample from the data distribution, so if the data distribution had a mixture of two Gaussians, so there's two possible images, let's say, and it's either one or the other, then you would want the process to go there. What we'll see, and that's towards the end of this lecture, is that how to do controllable generation, which is the idea if you want it to only sample one of them, maybe one is cat and the other one is dog. And let's say that you had a classifier that tells you if you're dealing with a cat or a dog, this kind of perspective allows you to do it in a very principled way. So there is a relatively simple algebra that allows you to basically change the drift. And essentially, all you have to do is you just basically apply Bayes' rule and you change the drift to also push you towards x's that are likely to be classified as, let's say, you want a dog that are likely to be classified as a dog. So just by changing the-- it is basically a principled way to change the drift, to push you in a certain direction. And that is probably the right way of basically sampling from a conditional distribution that might be defined in terms of, say, a classifier or the more relevant example would be text to image, where you have a distribution over images that have corresponding captions, now you want to be able to sample images with a particular caption. Then you don't want to necessarily sample from the marginal distribution over all the images that you had in the training set but you want to be able to sample from the condition. So we'll see that there is a way to change this reverse SDE to actually sample from not the data distribution, but some version of the data distribution that is, let's say, more skewed towards a particular caption that you want. So this is the gradient only with respect to x at a given t. There is actually ways to also try to estimate the score with respect to t, which is the partial derivative with respect to t. It turns out you can also estimate the score-matching losses. We actually had a paper on doing these things where you-- the nice thing about that is that-- I guess a lot of the structure deals with the-- I guess very specific to the diffusion kind of math. And the moment that one doesn't hold anymore, let's say, if you're trying to interpolate between two different data sets, then it's no longer a diffusion. And so the math is different and in that case, you do need the score with the gradient with respect to T0's. So you can do more interesting things if you had it, but here you don't need it. It turns out-- because of the Fokker-Planck equation, the gradient with respect to t is completely determined by these objects. In the forward SDE, there is no drift. This is just a random walk where you are essentially just adding noise to the data. So there is no particular direction. If you're going from data to noise, there is no particular direction that you want to bias your trajectories towards. The score is deterministic drift. Yeah, yeah, yeah. And then there is still noise. As you said, there is also a little bit of noise at every step. In the second one, we have both deterministic drift and random noise. So it still has both. 
And you can think of it as-- if you were to discretize this SDE, you would get essentially Langevin dynamics or essentially the same sampling procedure of DDPM where you would follow the gradient and add a little bit of noise at every step. You can think of that as basically just the discretization of the system. And then basically, what you can do is you can build a generative model here by learning this-- as usual, if you knew this score functions, then you could just solve this SDE, generate trajectories like-- wait, I can't get the animation going again. Yeah. So if you could somehow simulate this process, this SDE, you solve this SDE, then we would be able to generate samples at the end. But to do that, you need to know this red term. You need to know the score, which we know it exists, but we don't know the value of it. The only thing we have access to as usual, is data. And so that you can get a generative model by basically trying to learn these score functions using a neural network. Just like before there is a neural network that parameterized by theta, that every x tries to estimate the corresponding score at that x for the density corresponding to time t. So this is the same as in the DDPM case. You had exactly this thing, but you only cared about, let's say, 1,000 different time indexes, which were those 1,000 different views of the original data density that you were considering in your variational autoencoder. Now, again, we have an infinite collection of score functions because these are real value here. And you can-- as usual, you can basically estimate these things using score-matching. So you have the usual L2 regression loss, where you try to find and make sure that your estimated score at every x is close to the true score as measured by L2 distance on average with respect to the data distribution. And whenever you want to estimate the score of data plus noise, this is something that we can do with denoising score-matching, essentially. So again, solving this training objective corresponds to learning a sequence of denoisers. And it's not a collection of 1,000 different denoisers. It's an infinite collection of denoisers once for every t, but again, it's the usual thing. And now what you can do is now you can plug that in into that reverse time SDE. And if you discretize this SDE, which basically means that you just discretize the time axis, and so you just look at instead of dx, you have xt plus 1 minus xt, essentially. And you integrate that stochastic differential equation, you get once again some update rule that is essentially the same-- that is very similar to Langevin dynamics and it's exactly the same update rule that you would use in DDPM, which is an average step, follow the gradient, and then add a little bit of noise, which is the same thing as Langevin dynamics, follow the score, add a little bit of noise. But you can think of this process as basically trying to start from noise and then you're trying to compute this red curve to get to a good approximation of a data point. We are dealing with a computer, so you cannot deal with infinite truly continuous time processes. So you have to discretize time. You have to discretize the time axis and you can try to approximate this red trajectory with essentially some Taylor expansion. So this is really what this thing is, is just a Taylor expansion to what you should be doing. And that's what DDPM does. So DDPM basically has 1,000 different time slices. 
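Going back to the training objective just described, here is a minimal PyTorch-style sketch of the continuous-time denoising score-matching loss. The interface score_net(x, t), the geometric noise schedule, and the sigma-squared weighting are illustrative assumptions, not the exact recipe from the lecture.

```python
import torch

def dsm_loss(score_net, x0, sigma_min=0.01, sigma_max=50.0):
    """One step of a (hypothetical) continuous-time denoising score-matching loss.

    score_net(x, t) is assumed to estimate grad_x log p_t(x) for the noise-perturbed
    density p_t, where the noise level sigma(t) grows with t.
    """
    batch = x0.shape[0]
    # Sample a continuous time index t in [0, 1) for every example.
    t = torch.rand(batch, device=x0.device)
    # Noise schedule (an assumption): geometric interpolation between sigma_min and sigma_max.
    sigma = sigma_min * (sigma_max / sigma_min) ** t
    sigma = sigma.view(-1, *([1] * (x0.dim() - 1)))
    # Perturb the data: x_t = x_0 + sigma * z, with z ~ N(0, I).
    z = torch.randn_like(x0)
    xt = x0 + sigma * z
    # Target score of the Gaussian perturbation kernel: -(x_t - x_0) / sigma^2 = -z / sigma.
    target = -z / sigma
    pred = score_net(xt, t)
    # Weight by sigma^2 so that all noise levels contribute comparably.
    return ((sigma ** 2) * (pred - target) ** 2).mean()
```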
And then you will try to approximate this red curve by taking steps according to the-- essentially, following this white arrow corresponds to sampling from one of the decoders that defines the DDPM model. And because you're like discretizing time, there is going to be some error that is happening. There's going to be some numerical errors that can accumulate over time. And you can think of what a score-based model, MCMC, does as essentially trying to use Langevin dynamics to get a good sample from the density corresponding to that time. And so you can actually combine the sampling procedure of DDPM with the sampling procedure of a score-based model. And you're going to increase compute cost, but you can get a closer approximation to the solution of this stochastic differential equation that is defined over a continuous time. One of the nice things about this whole SDE perspective is you might wonder where are we going through all of this, is that there is a way to obtain an equivalent-- there is basically a way to convert the SDE to an ordinary differential equation where there is no longer noise added at every step. And so that basically corresponds to converting a VAE into a flow model, because this is essentially an infinitely deep VAE where there is a lot of-- as you go through this trajectory, you're sampling from a lot of different decoders. And a VAE is the decoders are stochastic. So you would always add a little bit of noise at every step. If you think about a flow model that would be deterministic. So there is randomness in the prior, which is basically the initial condition of this process, but then the dynamics are completely deterministic, all the transformations are deterministic. And that it's also possible to do it because we have the continuous-time formulation. If you don't use it, then you have a score-based model. So don't use the predictor, you just use corrector, then you have a score-based model. If you just use predictor, then you have a DDPM. If you use both, you get something a little bit more expensive that actually gives you better samples because it's a closer approximation basically, underlying red curve, which is what you would really want, essentially. Recall that basically the ELBO objective is trying to essentially invert-- the decoder is trying to invert the encoder. And the decoder is forced to be Gaussian just by definition. And basically, the true denoising process is not necessarily Gaussian, the one that you would get if you were to really invert the denoising process. And so no matter how clever you are in selecting the mean and the variance of your Gaussian decoder, there might always be a gap, if you think about the ELBO between the encoder and the inverse of the decoder. And so you might not be-- which means that the ELBO is not tight and means that you are not modeling the data distribution perfectly. And they only-- basically, another way to think about this math is that only in the limit of continuous-time or basically an infinitely large number of steps it's possible to essentially get a tight ELBO, where if the forward process is Gaussian, the reverse process is also Gaussian. So you're not losing anything, but basically assuming that the decoders are Gaussian. But that's only true if you really have an infinite number of steps. So it's only true in continuous time. So the predictor would just take one step. The corrector is just using Langevin to try to generate a sample from the density corresponding to that. 
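To make the predictor and corrector steps concrete, here is a hedged sketch of a sampler that alternates one Euler-Maruyama step of the reverse SDE (the predictor) with a few Langevin steps (the corrector). The interface score_net(x, t), the schedule sigma(t), and the step-size heuristics are assumptions chosen only for illustration.

```python
import math
import torch

@torch.no_grad()
def pc_sample(score_net, sigma, shape, n_steps=1000, corrector_steps=1, snr=0.16, device="cpu"):
    """Minimal predictor-corrector sampler sketch.

    Assumptions: score_net(x, t) estimates grad_x log p_t(x); sigma(t) is the noise
    schedule of the drift-free forward SDE dx = sigma(t) dw; step sizes are heuristics.
    """
    ts = torch.linspace(1.0, 1e-3, n_steps, device=device)
    x = sigma(float(ts[0])) * torch.randn(shape, device=device)   # start from ~pure noise
    for i in range(n_steps - 1):
        t, t_next = float(ts[i]), float(ts[i + 1])
        dt = t - t_next                                           # positive step, moving backward in time
        t_vec = torch.full((shape[0],), t, device=device)
        # Predictor: one Euler-Maruyama step of the reverse-time SDE.
        score = score_net(x, t_vec)
        x = x + (sigma(t) ** 2) * score * dt + sigma(t) * math.sqrt(dt) * torch.randn_like(x)
        # Corrector: a few Langevin MCMC steps targeting the density at time t_next.
        for _ in range(corrector_steps):
            score = score_net(x, torch.full((shape[0],), t_next, device=device))
            eps = 2.0 * (snr * sigma(t_next)) ** 2                # heuristic step size (an assumption)
            x = x + eps * score + math.sqrt(2.0 * eps) * torch.randn_like(x)
    return x
```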
So you would still do, let's say, 1,000 different steps, but not an infinite number. In this case, I guess I'm showing three steps. In reality, you would have 1,000 of these different white arrows. Cool. So the interesting thing is that so far, we've been talking about stochastic differential equations where we have these paths that you can think of either in forward or reverse time, going from data to noise or noise to data. It turns out that it's possible to define a process where the dynamics are entirely deterministic. And it's equivalent in the sense that the densities that you get at every time step are the same as the ones you would get by solving the stochastic differential equation either forward or in reverse time. So we basically have two different stochastic processes: one where you have a stochastic initial condition and then deterministic dynamics-- those are those white trajectories that you see-- and another one where there is stochasticity at the beginning and then also at every step. And the processes are the same in the sense that for every slice that you want to take, so for every time index, the marginal distributions are the same. So if you look at how frequently you see a white line versus one of the colored lines passing through a point, you will get exactly the same kind of density, including at time 0, which is the one that we care about, which is the one corresponding to data, basically. So what this means is that we can basically define a process that is entirely deterministic. And as we were saying before, this essentially corresponds to converting a VAE into a flow model. So in a VAE, you would have this process where at every step, you have a sample from a decoder, which has stochasticity. In a flow model, you would have all these layers that are just transforming the data through some invertible transformation. And this is essentially what's going on here. What we're doing is we're converting the model into an infinitely deep flow model, a continuous time normalizing flow model, where there is an infinite sequence of invertible transformations which basically correspond to the dynamics defined by this ordinary differential equation. So the difference here is that if you look at this equation, this is no longer a stochastic differential equation; there is no noise added at every step. Now, we only have a drift term here, and there is absolutely no noise added during the sampling process. And again, you can see that the only thing that you need in order to be able to define this ordinary differential equation is the score function. So if you have the score function, or the sequence of score functions, one for every time step, then you can equivalently generate data from your data distribution by solving an ordinary differential equation. So you just initialize this trajectory, basically. Once again, flip the direction of time. You sample an initial condition by sampling from the prior, which is this usual pure noise distribution. And then you follow one of these white trajectories. And at the end, you get a data point, which is exactly the kind of thing you would do in a flow model, where you sample from the prior and then you transform it using a deterministic invertible transformation to get a data point. And so that's basically what you would do. We have this process and we can think of this basically as a continuous time normalizing flow.
And the reason this is, you know, this is indeed or intuitively the reason you can think of this as a normalizing flow is because these ordinary differential equations have a unique solution. So basically, these white trajectories they cannot cross each other. They cannot overlap, which means that there is some kind of mapping which is invertible that goes from here to here which is the mapping defined by this solution of the ordinary differential equation, which up to some technical condition exists and is unique. And so we can think of this as a very flexible flow model where the invertible mapping is defined by the dynamics of our ordinary differential equation, where the dynamics are defined by a neural network. So it's a neural ODE if you've seen these kinds of models, a neural ordinary differential equation. So it's a deep learning model where the computation is defined by what you get by solving an ordinary differential equation, where the dynamics are defined by a neural network, which in this case is the score function. The ODE is equivalent to the SDE, so they define exactly the same kind of distribution at every step. So the distribution that you get at this end. So a capital T is exactly the same that you would have gotten if you were to just add-- and we're doing a random walk where you add the noise at every step. This is true if you have the exact score function or the average step, which is never the case in practice, but to the extent that you have the exact score function, the mapping between the SDE and the ODE is exact. So they're exactly defining the same. It's a different stochastic process with exactly the same marginal distributions. Another way to think about it is that you're essentially reparameterizing the randomness. So remember that when we're thinking about variational inference and how to backprop through basically the encoder, which is like a stochastic kind of computation, we were showing that it's possible to sample from a Gaussian by basically transforming some simple noise through a deterministic kind of transformation. And in some sense what's happening here is that we are reparameterizing. This is a computation graph where we add randomness at every step. And we're defining a somewhat equivalent computation graph where we're putting all the randomness in the initial condition. And then we're transforming it deterministically. But, yeah. The key thing here is that the mapping is invertible because if you think about the ordinary differential equation, there is a deterministic dynamic. So whenever you are somewhere, the ordinary differential equation will push you somewhere in the next location based on the dynamics. And they cannot bifurcate. There is only one next stage that you get by solving the ODE. And so there is no way for two things to possibly cross. And we can invert it by basically going backwards from capital T to 0, which is from noise to data. And this is important for several reasons. The main one is that we can now think-- if you think of this process of going from some prior, simple prior, a Gaussian distribution to data through an invertible mapping, this is once again a normalizing flow. So what you can do is you can actually compute the likelihood of any x or any image by using essentially a change of variable formula. 
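In symbols, and as a hedged reconstruction consistent with the description above, the deterministic dynamics are given by the so-called probability flow ODE:

```latex
% Probability flow ODE: deterministic dynamics whose marginals p_t match those of the SDE
% (assuming access to the exact score).
\frac{\mathrm{d}\mathbf{x}_t}{\mathrm{d}t}
  = -\tfrac{1}{2}\,\sigma(t)^2\,\nabla_{\mathbf{x}}\log p_t(\mathbf{x}_t).
% With a general forward drift f and diffusion coefficient g, this would read
% \frac{\mathrm{d}\mathbf{x}_t}{\mathrm{d}t} = f(\mathbf{x}_t, t) - \tfrac{1}{2}\,g(t)^2\,\nabla_{\mathbf{x}}\log p_t(\mathbf{x}_t).
```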
As in a regular flow model, if you want to evaluate the likelihood of some x under the flow model, what you would do is you would invert the flow to go to the prior space, which in this case corresponds to solving the ODE backwards and finding the corresponding point in the latent space. You evaluate the likelihood of that point under the prior-- and the prior is known and fixed, so we can do that efficiently-- and then, as usual, you have to keep track of the change of variable formula, essentially. So you have to keep track of how much the volume is squeezed or expanded as you transform a data point by integrating it through this ordinary differential equation. So it looks like this. So you have to integrate the-- so it's an ordinary differential equation. So you can imagine, if you were to discretize it, the score would give you the direction that you should move by a little bit, and then you need to recompute it. So you still need to somehow solve an ordinary differential equation on a computer, which involves discretizations. But what you get is that people have spent 50 years or more developing really good methods for solving ordinary differential equations very efficiently. There are very clever schemes for choosing the step size, very clever schemes for reducing the numerical errors that you get as you go from left to right, and all that machinery can be used and has been used to basically accelerate sampling and generate higher quality samples. And that's one of the main reasons this perspective is so powerful: because once you reduce sampling to solving an ODE, you suddenly have access to a lot of really smart techniques that people have developed to come up with good numerical approximations to ordinary differential equations. If you recall, you can think of DDPM as a VAE with a fixed encoder, which happens to also have the same dimension. And that's very important for getting the method to work in practice. We'll talk about latent diffusion models in a few slides. And that basically embraces more the VAE perspective of saying, well, let's have a first encoder and decoder that will map the data to a lower dimensional space and then learn a diffusion model over that latent space. And so you get the best of both worlds, where you've both reduced the dimensionality and you can still use this machinery that we know in practice works very well. The Fokker-Planck equation is basically telling you how the densities change. It's a partial differential equation that relates the partial derivative of pt of x-- so how the probability of x changes as you change t-- to spatial derivatives, which is essentially the trace of the Jacobian. And so that's why the things work out. And that's actually how you do the conversion from the SDE to the ODE. You just basically work through the Fokker-Planck equation-- everything is relying on this underlying diffusion structure. Yeah. But basically, what I'm saying here is that you can use something that is very similar to the vanilla change of variable formula that we were using in flow models to actually compute exact likelihoods using these models. And again, basically, if you want to evaluate the probability of a data point x0, what you would do is you would solve the ODE backwards, so you would go from data to noise to get xT. You would evaluate the probability of that latent variable under the prior-- and that's this piece-- and then you have to look at basically how the volume is changed along the trajectory.
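Written out, and again as a hedged reconstruction, the continuous-time change of variables gives the log-likelihood as an integral of a trace term along the ODE trajectory, which is the formula discussed next:

```latex
% Writing the probability flow ODE as \mathrm{d}\mathbf{x}_t/\mathrm{d}t = \tilde{f}_\theta(\mathbf{x}_t, t):
\log p_0(\mathbf{x}_0)
  = \log p_T(\mathbf{x}_T)
  + \int_0^T \operatorname{tr}\!\left(\frac{\partial \tilde{f}_\theta}{\partial \mathbf{x}}(\mathbf{x}_t, t)\right)\mathrm{d}t,
% where \mathbf{x}_T is obtained by solving the ODE forward from \mathbf{x}_0, p_T is the simple Gaussian prior,
% and the trace is typically estimated with Hutchinson's trick rather than computed exactly.
```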
And it's no longer the determinant of the Jacobian that you have to look at. It turns out that what you have to do is you have to integrate the trace of the Jacobian, which is something that you can actually evaluate pretty efficiently. So that's basically what our consistency model does. There is this recent model was developed by Yang actually at OpenAI. And essentially what they do is they try to learn a neural network that directly outputs the solution of the ODE in one step. And because there is an underlying ODE, there is some clever objectives that you can use for training the neural network. But yeah, as we'll see, once you take this perspective you can distill down. Then you can get very fast sampling procedures by taking advantage of this underlying ODE perspective. You're just trying to solve ODEs and there is a lot of tricks that you can use to get very fast solvers. But one nice thing you get is you can compute likelihoods. So you can convert the VAE into a flow model, and then you can compute likelihoods. So the good thing is that once you learn the score once and then it's opening up. There's many different ways of using the score at inference time to generate samples. And ODEs are good to generate samples very efficiently. The SDE is still valuable. In some cases, it can actually generate higher-quality samples. And the reason is that if you think about what happens when you solve the ODE, you start with pure noise and then you follow this denoiser essentially to try to approximate one of these trajectories. But then let's say that the denoiser is not perfect and you're making some small mistakes, then the kind of images that you see around the middle of this trajectory, they're supposed to look like data plus noise. But they're not quite going to be data plus noise because your score function is not perfect. Then you're starting to feed the data that is a little bit different from the ones you've seen during training in your denoiser, which is your score model. And so you're going to have compounding errors because the images that you're feeding into your denoiser, which is basically what you get by following these trajectories, are not quite going to be exactly the ones that you've used for training the model, which is what you get by actually going from data to data plus noise by really just adding noise to the data. And so if you think about the SDE, on the other hand, you are actually adding noise at every step. And that's good because you're making the inputs to the denoiser look more like the ones you've seen during training. The problem is that solving SDEs efficiently is a lot harder. And so if you want fast sampling, the ODE perspective is much more convenient to work with. Yep. So we can get likelihoods and-- what you have to do is to basically solve an ODE where you solve this integral over time by-- you can literally call a black box ODE solver and compute this quantity. And it turns out that it's very competitive. So even though-- these models are not trained by maximum likelihood, so they are not trained as a flow model. And the reason they are not trained as a flow model-- because you could, in principle. You could try to optimize. You could do max exact maximum likelihood because you could try to evaluate this expression over your data set and optimize it as a function of theta. 
But it's numerically very tricky and very, very expensive, because you have to differentiate through an ODE solver-- you'd have to optimize the parameters theta such that the result of the ODE solver gives you high likelihood, which is extremely difficult to do in practice. So you don't train the model by maximum likelihood. You still train it by score-matching, but still, you get very good likelihoods. This is actually achieving state-of-the-art results on image data sets. You might wonder why, but we know sort of why: as we've seen, score matching has an ELBO interpretation. So it's not too surprising that by matching gradients, by doing score-matching, you're optimizing an ELBO, an evidence lower bound. So it's not too surprising that the likelihoods are good, too. But yeah, the results are very, very good in terms of likelihoods. The other thing that you can do is you can get accelerated sampling. Specifically, DDIM is very often used as a sampling strategy where, instead of having to go through, let's say, 1,000 different denoising steps, which is what you would do if you had a DDPM model, essentially what you can do is you can coarsely discretize the time axis. Let's say you only look at 30 different steps instead of 1,000, and then you take big steps, essentially. You take big jumps. And you're going to pay a price because there are going to be more numerical errors, but it's much faster. And in practice, this is what people use. And it can be a little bit more clever than this, because there is some special structure-- like a piece of the ODE is linear, so you can actually solve that part in closed form-- but essentially, this is how you get fast sampling. You just coarsely discretize the time axis and you take big steps. So you can generate an image, and instead of doing 1,000 steps, you maybe only need to do 30 steps. And that becomes a parameter that you can use to decide how much compute you want to use, how much effort you want to put in at inference time. The more steps you take, the more accurate the solution to the ODE becomes, but of course, the more expensive it actually is. Just to clarify, there is not a score function for the ODE and one for the SDE. There is just a single score function, which is the score function of the data density plus noise. And so it's the same whether you take the ODE perspective or the SDE perspective. The marginals that you get with the two perspectives are the same. And so the scores are the same, and they are always learned by score-matching. Then at inference time, you can do different things. At inference time, you can solve the SDE, you can solve the ODE, but the scores are the same. This is one way to get very fast sampling, and there are a lot of other clever ways of solving ordinary differential equations-- Heun kind of solvers where you take half steps. There are a lot of clever ideas that people have developed for numerically solving ordinary differential equations pretty accurately with relatively small amounts of compute. And yeah, this can give you very, very big speed-ups with comparable sample quality. Another fun thing you can do is you can actually use parallel sampling.
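Before moving on to parallel sampling, here is a minimal sketch of the coarse-discretization idea just described: take a few big Euler steps along the probability flow ODE. This is only DDIM-like in spirit; the real DDIM update exploits additional closed-form structure, and the interface and schedule below are assumptions.

```python
import torch

@torch.no_grad()
def fast_ode_sample(score_net, sigma, shape, n_steps=30, device="cpu"):
    """Sketch of accelerated sampling: a few big Euler steps along the probability flow ODE.

    Assumptions: score_net(x, t) estimates grad_x log p_t(x); sigma(t) is the noise
    schedule of the drift-free forward SDE. A production sampler would use a more
    careful update rule than plain Euler.
    """
    ts = torch.linspace(1.0, 1e-3, n_steps + 1, device=device)
    x = sigma(float(ts[0])) * torch.randn(shape, device=device)
    for i in range(n_steps):
        t, t_next = float(ts[i]), float(ts[i + 1])
        t_vec = torch.full((shape[0],), t, device=device)
        score = score_net(x, t_vec)
        # Euler step of dx/dt = -0.5 * sigma(t)^2 * score, integrated backwards in time.
        x = x + 0.5 * (sigma(t) ** 2) * score * (t - t_next)
    return x
```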
This is actually a recent paper that we have on using these fancy ODE solvers which are basically parallel in time, where instead of trying to compute the trajectory of the solution of the ODE one step at a time, which is what DDPM would do, you try to guess the whole trajectory by leveraging many, many GPUs. And so instead of trying to go one step at a time, trying to find a good approximation to the true underlying trajectories, you use multiple GPUs to denoise a whole bunch of images, basically a batch of images, in parallel. And so if you're willing to trade more compute for speed, you can actually get exactly the same solution that you would have gotten if you were to go through individual steps, using a lot more parallel compute but in a vastly smaller amount of wall clock time. Now, I don't want to go into too much detail, but basically, there are tricks for using, again, advanced ODE solvers to further speed up the sampling process. And let's see whether I can get that. Basically, instead of going the DDPM way, where you would go one step at a time trying to compute the trajectory-- which is the brown kind of dot that you see moving slowly-- what we're doing is we're using multiple GPUs to compute a whole piece of the trajectory in parallel. So it's a way to basically trade off compute for reduced wall clock time. Another thing you can do is distillation. The basic idea is that you can think of DDIM as a teacher model. So you have a model that would compute, let's say, the solution of the ODE based on some kind of discretization of the time axis. And then what you do is you train a student model that basically does in one step what DDIM would do in two steps. So DDIM maybe would take two steps to go from here to here and then from here to here. And you can train a separate student model, which is another score-based model that is trying to skip steps-- it's trying to define a new score function such that if you were to take one step according to that score function, you would get the same result as what you would have gotten if you were to take two steps of the original score function under DDIM. So again, it's trying to distill the solution of the ODE according to a different-- or find a different ODE that would give you the same solution but using fewer steps. And then what you do is you recursively apply this. So then you use this new student model as the teacher, and you get another student that tries to do in one step what the other thing does in two steps, which becomes four steps of the original model. And you keep doing this until you can distill down to a very small number of steps. So these are some of the results. Not quite one step, but these are from the recent paper we have on this progressive distillation. And this is on text-to-image models. This is with Stability AI's Stable Diffusion, where you can see here images generated in just two steps or four steps or eight steps by distilling a model that originally had 1,000 steps, essentially using this trick of reducing in half and half and half until you get down to 2, 4, or 8 steps. And you can see the quality is pretty good in terms of the sample quality. And this is, of course, much more efficient. It's also tempting to get at the idea of generating in one step. Consistency Models directly try to do it in just one step. And so they directly try to learn the mapping from what you would get at the end of this progressive distillation.
And they do it with a different loss, so there is no progressive distillation. They just do it in one shot, essentially. But they're trying to get at a very similar kind of result. Cool. And so, yeah, distillation is a good way to achieve fast sampling. There's also this thing called Consistency Models that is essentially anything you might have seen in Stability AI. They recently released a model yesterday I think on real-time that allows you to do real-time generation. It's something like this, some version of score distillation plus some GAN that they throw in. But it's like a combination of this thing plus a GAN. And they were apparently able to generate to get a model that is so fast that it's basically real-time. It's a text-to-image model where you can type what you want and it generates images in real time. Yeah, combination. Basing, yes. Speaking of Stable Diffusion and Stability AI, the key difference between what they do and what we've been talking so far is that their use of latent diffusion model. And essentially, what they do is they add an extra-- to think about diffusion model as a VAE, what they do is they add another encoder and decoder layer at the beginning. And the purpose of this encoder and decoder is to reduce the dimensionality of the data. So instead of having to do diffusion model over pixels, you train a diffusion model over the latent space of an autoencoder or variational autoencoder. But literally, you can think of what's happening as just an extra-- if you think of the hierarchical VAE, you just add an extra encoder, an extra decoder at the very end of the stack. So those distilled models were actually distilled latent diffusion models. So the reason you might want to do this is that it's a lot faster to train models, let's say, on low-resolution images or low-dimensional data in terms of the memory that you need for training a diffusion model. It's actually much faster to train a diffusion model if you could somehow train it not on the original pixel space but you could do it over some sort of low-dimensional representation space. And the other advantage of this, if you take this perspective, is that now you can suddenly use diffusion models for essentially any data modality, including text. So some of the diffusion models that people have tried on text essentially take this perspective. So you start with discrete inputs x and then you encode them into a continuous latent space. And then you decode them back with the decoder, and then you train the diffusion model over the latent space, which is now continuous, and so the math works out. And, of course, this only works to the extent that the original encoder and decoder does a pretty good job, but basically reconstructing the data. And yeah, what Stable Diffusion does is they actually pretrain the autoencoder. So it's not trained end to end even though you could. Because it's just a VAE, so you could actually train the whole thing end to end. What they do is they pretrain the autoencoder, and they really just care about getting good reconstruction. So they don't care too much about the distribution of the latent space to be similar to a Gaussian. They just care about getting a good autoencoder essentially. And then in a separate stage, they keep the initial autoencoder fixed and they just train the diffusion model over the latent space. And that works really well. And these were some of the-- they got a lot of success with this approach. 
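A minimal two-stage sketch of the latent diffusion recipe just described is below. The encoder and decoder are assumed to come from a separately pretrained autoencoder, and dsm_loss is a denoising score-matching loss like the one sketched earlier; all names are illustrative.

```python
import torch

def train_latent_diffusion(encoder, decoder, score_net, data_loader, opt, dsm_loss, n_epochs=1):
    """Two-stage latent diffusion sketch (stage 1, pretraining the autoencoder, is assumed done).

    encoder/decoder come from a pretrained (variational) autoencoder and are kept frozen;
    score_net is the score model trained over latents; dsm_loss is a denoising
    score-matching loss. All names here are illustrative assumptions.
    """
    encoder.eval()
    for _ in range(n_epochs):
        for x in data_loader:
            with torch.no_grad():
                z = encoder(x)             # map images to the lower-dimensional latent space
            loss = dsm_loss(score_net, z)  # ordinary diffusion training, but over latents
            opt.zero_grad()
            loss.backward()
            opt.step()

@torch.no_grad()
def sample_latent_diffusion(decoder, sample_latents):
    """Sampling: run the diffusion sampler in latent space, then decode once at the end."""
    z = sample_latents()  # e.g. the PC or ODE sampler sketched above, applied in latent space
    return decoder(z)
```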
They were one of the first to train a large-scale model on a lot of online large-scale image data sets. And it's been widely adopted by a lot of the community. People have actually been successful even in training diffusion models in pixel space. But the most successful versions are usually either on the latent space or downscaled versions of the images. So they have-- this encoder and decoder is more like a downscaling and an upscaling. But essentially the trick is being able to train a model over a low-resolution data. Literally, what you do is you encode all your data set that pretend that the data is whatever comes out from the original encoder and train your diffusion model over whatever you get. So you regularize it to be close to a Gaussian, if I remember correctly, but it's a very weak regularization. Really, all they care about is reconstruction. So if you think about the ELBO as reconstruction plus matching the prior, they don't care too much about matching the prior because they're not really going to sample from-- essentially, they use the diffusion model as the prior and the diffusion model can generate anything. It's a very powerful prior distribution. So you don't have to regularize the VAE to have a distribution over latent that is close to Gaussian because anyways, then you're going to fit a VAE to whatever comes out from the original-- you're going to fit a diffusion model to whatever comes out from the encoder of this nature of VAE. So it's not really necessary to regularize. So maybe there are two priors. So if you just think about the basic VAE that goes from high-dimensional to low-dimensional data, you could have a prior there when you pretrain this model. But since you're not really going to use that-- wanted to sample from, you don't really care about matching the prior. In the diffusion model, the prior at this end is the usual Gaussian. So the diffusion model that you learn over the latent space of the initial autoencoder has a Gaussian prior. How do you get text into one of these models? And there are several ways to do it. So let's say that you have a data set that is not just images x but it's images and captions, where I'm denoting the caption with y here because it could also be a class label, let's say. So really what you're trying to do is you're trying not to learn the joint distribution over x, y because you don't care about generating images and the corresponding captions, you just care about learning the conditional distribution of an image x given the corresponding label or given the corresponding caption y. And essentially, if you want to use a diffusion model for this or a score-based model, this boils down to learning a score function for this conditional distribution of x given y. So now the score function or the denoiser as usual needs to take as input xt, which is a noisy image. It needs to take as input t, which is the time variable in the diffusion process or the sigma level, the amount of noise that you're adding to the image. And now basically the denoiser or the score function gets this side information y as an extra input. So the denoiser knows what is the caption of the image. And it's allowed to take advantage of that information while it's trying to guess the denoise level or equivalently denoise the image. And so in some sense, you can think of it as a slightly easier problem because the denoiser has access to the class label y or the caption y while it's trying to denoise images, essentially. 
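To make that interface concrete before the architecture discussion that follows, here is an illustrative signature for a conditional score model that sees the noisy image, the time or noise level, and the side information y. The backbone, the text encoder, and the embedding sizes are placeholders, not a prescribed architecture.

```python
import torch
import torch.nn as nn

class ConditionalScoreNet(nn.Module):
    """Illustrative conditional score model: s_theta(x_t, t, y) ~ grad_x log p_t(x | y).

    The backbone and the text encoder are placeholders; in practice this might be a U-Net
    with cross-attention over caption embeddings from a pretrained language model.
    """
    def __init__(self, backbone, text_encoder, emb_dim=512):
        super().__init__()
        self.backbone = backbone          # assumed to accept (x_t, time embedding, condition embedding)
        self.text_encoder = text_encoder  # e.g. a frozen pretrained caption encoder
        self.time_emb = nn.Sequential(nn.Linear(1, emb_dim), nn.SiLU(), nn.Linear(emb_dim, emb_dim))

    def forward(self, x_t, t, caption_tokens):
        t_emb = self.time_emb(t.float().view(-1, 1))  # embed the (continuous) noise level
        y_emb = self.text_encoder(caption_tokens)     # embed the side information y
        return self.backbone(x_t, t_emb, y_emb)       # the denoiser sees x_t, t, and y
```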
And so then it becomes a matter of cooking up a suitable architecture where you're feeding things into the U-Net: you're feeding in t, you're feeding in the image xt, and then you need to feed in y, which is, let's say, a caption, into the architecture. And the way to do it would be, for example, to have some pretrained language model that somehow can take text and map it to a vector representation of the meaning of the caption, and then you incorporate it in the neural network. And there are different ways of doing it, maybe doing some kind of cross-attention-- there are different architectures-- but essentially, you want to add the caption y as an additional input to your neural network architecture. This is the most vanilla way of doing things, which is to just train a conditional model. Now, the more interesting thing, I think, is when you want to control the generation process but you don't want to train a different model. So the idea is that you might have trained a generative model over images. And let's say there are two types of images-- dogs and cats. And then let's say that now we only want to generate-- back to the question that was asked earlier during the class-- what if you want to generate an image only of dogs? So if you have some kind of classifier, p of y given x, that can tell you whether an image x corresponds to the label dog or not, how do we generate an image x that would be labeled as a dog, with a label y equals dog? Mathematically, what we want to do is we want to combine this prior distribution p of x with this likelihood p of y given x, which, let's say, is given by a classifier, and what we want to do is we want to sample from the posterior distribution of x given y. So we know that we want a dog, and we want a sample from the conditional distribution of x, given that the label is dog. And if you recall, this conditional distribution of x given y is completely determined by p of x and p of y given x through Bayes' rule. This inverse distribution is obtained by that equation that you see there, which is just Bayes' rule. So if you want to get p of x given y, you multiply the prior with the likelihood and then you normalize to get a valid distribution. The denominator here is in principle something you can obtain by integrating the numerator over x. So what you have in the numerator is p of x, y, and in the denominator, you have p of y, which is what you would get if you integrate over all possible x's, p of x, y. So everything is completely determined in terms of the prior and this classifier. So in theory, if you have a pretrained, let's say, generative model of images and somebody gives you a classifier, you have all the information that you need to define this conditional distribution of x given y. It's just a matter of computing that expression using Bayes' rule. And unfortunately, even though in theory you have access to the prior and you have access to the likelihood, computing the denominator is the usual hard part. It's the same problem as computing normalization constants in energy-based models, basically. It requires an integral over x, and you cannot really compute it. And so in practice, even though everything is well defined and you have all the information that you need, it's not tractable. But if you work at the level of scores, so if you take the gradients of the log of that expression, you get that the score of the inverse distribution is given by the score of the prior plus the score of the likelihood.
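In symbols, and keeping the normalization term explicit since it is discussed next, taking gradients of the log of Bayes' rule with respect to x gives:

```latex
\nabla_{\mathbf{x}} \log p(\mathbf{x}\mid y)
  = \nabla_{\mathbf{x}} \log p(\mathbf{x})
  + \nabla_{\mathbf{x}} \log p(y\mid \mathbf{x})
  - \nabla_{\mathbf{x}} \log p(y).
```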
And then we have this term which is the score of the normalization constant. And just like in energy-based models, remember that that term goes to 0 because it does not depend on X. And so basically, if you're working at the level of scores, there is very simple algebra that you need to do to get the score of the posterior is you just sum up the score of the prior and the score of the likelihood. And what this means is that basically, when you think about that SDE or the ODE, all you have to do is you have to just replace the score of the prior with the score of the posterior. And really, all you have to do is if you know the score of the prior, you have a pretrained model. And let's say you have a classifier that is able to tell you what is the label y for a given x, as long as it can take gradients of that object with respect to x, which is basically if you have a, let's say, an image classifier, you just need to be able to take gradients of the classifier with respect to the inputs. Then you can just sum them up and you have the exact score of the posterior. So if basically you do Langevin dynamics or you solve the SDE or the ODE, and instead of following the gradient of the prior, you follow the gradient of the prior plus the likelihood, you do the right thing. So intuitively, if you think about Langevin dynamics, what you're doing is you're trying to follow the direction that increases the likelihood of the image with respect to the prior. But at the same time, you're trying to make sure that the classifier will predict that image as belonging to the class dog. And so you're just changing the drift in the ODE to push the samples towards the ones that will be classified as a dog. In reality, you would need to have this classifier with respect to xt, which is like a noisy version of the image, but roughly. And if you had a latent variable model, then yeah, it's a little bit more complicated because you also have to go through the original encoder and decoder. But if you're working on pixel space, this can be used. And we've used it for a number of things. You can use it to do editing if you want to go from strokes to images. Basically, it's possible to define a forward model in closed form and you can follow it, and you can do image synthesis, or if y is a caption and then you have some kind of image captioning network, you can steer a generative model towards one that is producing images that are consistent, that would be captioned in a particular way. And you can use it to do conditional generation and you can do text generation and so forth. You can actually also do medical imaging problems where the likelihood is specified by how the machine works like the MRI machine works and why it's a measurement that you get from the machine. And then you can try to create a medical image that is likely under the prior and is consistent with a particular measurement that you get from the machine. So a lot of interesting problems can be solved this way. And even classifier-free guidance is basically based on this kind of idea. And I guess we don't have time to go through it, but it's a trick to essentially get the classifier as the difference of two diffusion models, but roughly the same thing. In practice, you can either approximate it just with a classifier that works on clean data by using the denoiser to go from noisy to clean and then use the classifier. 
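A hedged sketch of classifier guidance along these lines is below: the prior score from a pretrained score model is combined with the gradient of a classifier's log-probability. In practice the classifier should itself be trained on noisy inputs x_t, and the guidance_scale knob is a common heuristic rather than part of the derivation.

```python
import torch

def guided_score(score_net, classifier, x_t, t, y, guidance_scale=1.0):
    """Classifier-guidance sketch: posterior score = prior score + likelihood score.

    Assumptions: score_net(x, t) estimates grad_x log p_t(x); classifier(x, t) returns class
    logits for noisy inputs x_t; y holds the target class indices.
    """
    prior_score = score_net(x_t, t)
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        log_probs = torch.log_softmax(classifier(x_in, t), dim=-1)
        selected = log_probs[torch.arange(x_t.shape[0]), y].sum()
        likelihood_score = torch.autograd.grad(selected, x_in)[0]  # grad_x log p(y | x_t)
    return prior_score + guidance_scale * likelihood_score
```

The resulting guided score can then be plugged into any of the samplers sketched earlier in place of the unconditional score.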
In some cases, it can be done in closed form, or you can do this trick where you basically train two diffusion models, one that is conditional on some side information, one that is not. And then you can get the classifier implicitly by taking the difference of the two, which is what classifier-free guidance does which is widely used in state-of-the-art models. But essentially, they avoid training the classifier by taking the difference of two diffusion models. So they train one. Let's say that is the p of x given y, which would be just a diffusion model that takes a caption y as an input. Then they have another model that is essentially not looking at the captions. And then during the sampling, you push the model to go in the direction of the images that are consistent with the given caption, and away from the ones that are-- from the typical image under the prior. And that's the trick that they used to generate good-quality images.
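For completeness, here is a minimal sketch of the classifier-free guidance combination just described, assuming a single score network that was trained with captions randomly replaced by a special null token so it can act both conditionally and unconditionally; the weight w is a tunable guidance strength.

```python
import torch

def cfg_score(score_net, x_t, t, y, null_token, w=5.0):
    """Classifier-free guidance sketch: combine conditional and unconditional score estimates.

    Assumptions: score_net(x, t, y) was trained with y sometimes set to null_token,
    so it provides both a conditional and an unconditional estimate.
    """
    s_cond = score_net(x_t, t, y)             # score estimate given the caption
    s_uncond = score_net(x_t, t, null_token)  # score estimate ignoring the caption
    # Equivalent to s_uncond + (1 + w) * (s_cond - s_uncond), where the difference
    # plays the role of an implicit classifier gradient.
    return (1.0 + w) * s_cond - w * s_uncond
```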
Stanford CS236: Deep Generative Models (2023), Stefano Ermon. Lecture 14: Energy-Based Models / Score-Based Models.
All right. So the plan for today is to  continue our discussion of score-based models, and we'll see how they are connected to  diffusion models. And we'll see some of the state-of-the-art stuff that currently  has been used to generate images, videos, some of the things we've seen in  the very first introductory lecture. So brief reminder-- this is the usual  roadmap slide for the course. Today, we're talking about diffusion models or  score-based models. You can think of them as one way of defining this model family by  parameterizing the score and then learning the data distribution by essentially  using some kind of score-matching loss. And we've seen that-- yeah, that's  the key underlying idea is that to represent the probability distribution,  we're going to use a neural network, which is vector-valued. So for every point,  it gives you a vector. And that vector is supposed to represent the gradient of the  log likelihood at that point. And so can think of it as a vector field like the one  you see here that is parameterized by some neural network. And so as you change the  weights, you get different vector fields. And we've seen that it's possible to fit these  models to data by doing score matching. So we've seen that the machinery that we talked about  in the context of energy-based models can be applied very naturally to these settings. And so  there is a way to fit the estimated gradients to the true gradients by minimizing this kind  of loss, which only depends on the model. This is the thing we derived by doing integration  by parts. And it's a principled way of fitting the model. The issue is that it's not going to work in  practice if you're dealing with high-dimensional settings because of this trace of the Jacobian  term that basically would require a lot of back propagation steps, and so it's not going to  work if you're trying to, say, model images. And so in the last lecture, we talked about two  ways of making score matching, more scalable. The first one is denoising score matching, where  the idea is that instead of trying to model the score of the data distribution, we're going to  try to model the score of this noise-perturbed data distribution. And typically, the way we  obtain this noise-perturbed data distribution is by starting from a data point and then applying  this perturbation kernel, which gives you the probability of error. Given that you have a clean  image x, what is the distribution over noisy images x tilde? And it could be something  as simple as let's add Gaussian noise to x. And it turns out that estimating the score  of this noise-perturbed data distribution is actually much more efficient computationally.  And so the usual kind of score matching loss where you do regression, some kind of  l2 loss between the estimated score, and the true score of the noise-perturbed  data density-- that's the key difference here. We're no longer estimating the score of  pdata. We're estimating the score of q sigma. It turns out that it can be rewritten in terms of  the score of this transition kernel, perturbation kernel, q sigma of x tilde given x, which is just,  let's say, a Gaussian. And so in the case of a Gaussian distribution, this objective function  basically corresponds to denoising because the-- yeah, basically the score of a Gaussian is just  like the difference from the mean, essentially. 
And so you can equivalently think of denoising  score matching as solving a denoising problem, where what you're doing is you're sampling a  data point, you're sampling a noise vector, and then you're feeding data plus noise to the  score model as data. And the goal of the score model is to try to estimate z, essentially, which  is the amount of noise that you've added to the, to the clean data x. And so there's  this equivalence between learning the score of the noise-perturbed data  density and performing denoising. And as you see, this is much more  efficient because we no longer have to deal with traces of Jacobians. It's a  loss that you can efficiently optimize as a function of theta. And so the pros is, yeah,  it's much more scalable. It has this intuitive correspondence to denoising, meaning that  probably architectures that work well for denoising are going to work well for  this kind of score estimation task. The negative side of this approach is  that we're no longer estimating the score of the clean data distribution. We're now  estimating the score of this noise-perturbed data density. And so we're shifting the goal  post here because we're no longer estimating the score of the true data density, but  we're estimating the score. Even if we're doing very well at solving this problem,  even if we can drive the loss to zero, we don't overfit, everything works well,  we're no longer estimating what we started out with, but we're estimating this  noise-perturbed data density score. And then we've seen the alternative is  to do some kind of random projection, and that's the sliced score matching approach,  where essentially instead of trying to match the true gradient with the estimated gradient  at every point, we try to just match their projections along some random direction  v. And so at every point, we sample a direction vector v based on some distribution,  we project the true score, the estimated score at every point. After the projection, you get  scalars, and then you compare the projections. And if the vector fields are indeed the  same, then the projections should also be the same. And it turns out that, again, this  objective function can be rewritten into one that only depends on your model-- kind of  the same integration by parts trick. And now this is something that can be evaluated  efficiently, you can optimize efficiently as a function of theta because essentially  it only involves directional derivatives. And so it's much more scalable than vanilla  score matching. It also estimates the score of the true data density as opposed to the  data density plus noise. But it's a little bit slower than denoising score matching because  you still have to take derivatives, basically. So that's where we ended our last lecture.  And then the other thing we talked about is how to do inference. And we said, well,  if you somehow are able to estimate the underlying vector field of gradients by doing  some kind of score matching, then there are ways of generating samples by using some kind of  Langevin dynamics procedure where these scores are telling you in which direction you should  go if you want to increase the probability of your data point. And so you just follow these  arrows, and you can generate samples, basically. And what we've seen is that this didn't actually  work in practice. This variant of the approach, it makes sense, but it doesn't work  for several reasons. 
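For reference, here is a minimal sketch of the vanilla Langevin dynamics sampler being described (follow the estimated score, add a little noise); as discussed next, this naive version tends to fail without the noise-perturbation fixes. The interface score_net(x) and the step size are assumptions.

```python
import torch

@torch.no_grad()
def langevin_sample(score_net, shape, n_steps=1000, step_size=1e-4, device="cpu"):
    """Vanilla Langevin dynamics sketch: x <- x + (eps/2) * score(x) + sqrt(eps) * z.

    score_net(x) is assumed to estimate grad_x log p(x); without the tricks introduced
    next (adding noise to the data at multiple scales), this tends not to work in practice.
    """
    x = torch.rand(shape, device=device)  # arbitrary initialization
    for _ in range(n_steps):
        z = torch.randn_like(x)
        x = x + 0.5 * step_size * score_net(x) + (step_size ** 0.5) * z
    return x
```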
One is that, at least for images, we expect the data to lie  on a low-dimensional manifold, meaning that the score is not really a well-defined object. We  have this intuition that we're not expecting to be able to learn accurate scores when we're  far away from the high data density regions. If you think about the loss, it depends  on samples that you draw from the data distribution. Most of the samples are  going to come from high-probability regions. When you're far away, you  have an object that looks nothing, let's say, like an image. You've never  seen these things during training. It's unlikely that you're going to be able to  estimate the score very accurately. And that's a problem because then Langevin dynamics depends  on this information to find high-probability regions. And so you might get lost, and you  might not be able to generate good samples. And then we've seen that there are issues with the  convergence speed of Langevin dynamics. It might not even converge. If you have zero probability  regions somewhere, it might not be able to go from one region of the space of possible images  to another one. And so that's also an issue. And so what we are going to see today is that  there is actually a very simple solution to all of these three issues that we just talked about.  And that basically involves adding noise-- adding, let's say, Gaussian noise to the data.  And to see this, we notice that, well, one issue is that if the data lies on a  manifold, then the score is not really defined. But the moment you add the noise to the data, then it becomes supported over the whole  space. Noisy data, you are adding noise, so any possible combination of pixel values  has some probability under this noise-perturbed distribution. And so even though the  original data lies on a manifold, the moment you add noise, you fall off the manifold,  and it becomes supported over the whole space. Score matching on noisy data will allow us to  basically estimate the score much more accurately. This is some empirical evidence showing if you try  to do score matching on CIFAR-10 on clean images, the loss is very, very bumpy. You're  not learning very well. But the moment you add noise to the data, a tiny little  amount of noise to the data with some tiny, little standard deviation, then the  loss converges much more nicely. And it solves the issue of the fact that score  matching is not accurate in low data density regions. But remember the intuition was that most  of your data points are going to come from-- let's say, if your data is a mixture of two Gaussians--  one here, and one here-- most of the data will be-- the samples that you see during training are  going to come from this region or this region, the two corners where the data is distributed.  And as a result, if you try to use data, fit a score model, there is a true score in the  middle, there is an estimated score on the right. It's going to be accurate around the  high data density regions. It's going to be inaccurate the moment you go far away.  But if you think about adding noise, again, it's a good thing for us because if you add  a noise to the data, then the samples of the noise-perturbed data densities are going to  be, again, spread out all over the space. And so what happens is that now if  you think about where you're going to see your samples during training, if you  add a sufficiently large amount of noise, the samples are going to be all over the space.  They're going to be spread around the whole space. 
And what this means is that if you are willing to  add noise to your data and you add a sufficiently large amount of noise, then we might be able to  estimate the score accurately all over the space. And now, of course, this is good because  it means that we might be able to get good information from our Langevin dynamics  sampler. If we are relying on these arrows to go towards high probability regions,  Langevin dynamics will probably work if we do this. The problem is that  we're no longer approximating. We're no longer-- if you do Langevin  dynamics over these estimated scores, you're going to be producing samples from a  noisy data distribution. So you're going to be generating images that look like this instead  of generating images that look like this. So that's the trade off here. Yes, we're going to  be able to estimate the score more accurately, but we're estimating the score of the wrong thing. Basically, you talked about the  score estimation being the same as denoising. So I guess, how is this  different than that concept? Because we showed that score matching is  equal to denoising. So to me, I'm a bit confused that we're just adding noise  to the data. So what are we doing differently? So before, what we were doing is we were  estimating the score of the noisy data distribution. And so here, if you were to  do this, yeah, you would be using the other score matching. You would solve a denoising  problem. You would learn the score of the noisy data distribution. Now you follow that  score, and you are producing noisy samples. I see. So if we followed what we  did, we would get noisy images. We would get noisy images. That's  the problem. That's the trick. So on the one hand, you would like sigma,  the amount of noise that you add, to be as small as possible because then you're learning  the score of the clean data. So presumably, if you follow those scores, you're going  to generate clean samples. On the other hand, if you do that, we're not  going to be expected to learn the score very accurately. And so that's  the dilemma that we have here, basically. So it's essentially the difference between this  and denoising score matching that denoising score matching, we are putting noise on the  real data samples and estimating that noise sample. But eventually, we're going to learn the  new data sample if the noise is small. But here, we are estimating-- we're using the noise data  samples from the first place. And if we're using denoising score matching, we're going to be adding  noise to the noise samples and then [INAUDIBLE] No, no, no. Actually, you could use denoising  score matching to estimate the score. In fact, that's what we would end up doing. So if  you were to use denoising score matching, you would take data, you would add noise,  you would solve the denoising problem. What you end up learning is the score of the perturbed data density. So you end up  learning this, but that's not this, which is what you wanted. It's not  the score of the clean data density. So in particular, if you were to then  follow those scores that you have here, you would produce samples according to  their noise-perturbed data density. And so in particular, the images would  look like this, not like this. Are you saying this is equivalent to  just doing denoising score matching? So I understand how you do this. 
Or you could even do sliced score matching here or vanilla score-- you could do sliced score matching, for example, not vanilla, but you could do sliced score matching here to estimate this. Denoising score matching would be a much more natural choice because it's faster, and it automatically gives you the score of a noise-perturbed data density. So here, I'm just saying, even if you were able to estimate the scores, they are not what you want. Using denoising score matching would be a very natural way of estimating these scores, and that's what we're actually going to do. So you were saying how real data lies on low-dimensional manifolds. Can you say that in different words? What does it mean for real-life data, the kind of data this looks at? What does that mean? How does it even work? What is it? The manifold? Yeah. Yeah. It basically means that-- if you recall from the last slide, we were saying, OK, if you were to do a PCA of the data and you keep a sufficiently large number of components, you would reconstruct the data almost perfectly, which basically means that the different pixels in an image-- they are not linearly independent. Once you know a subset of them, you get all the others automatically, which basically means that the images lie on some kind of plane, essentially, which is what I'm visualizing here with this shape. So not all possible pixel values are actually valid in the data distribution, essentially. There are some kinds of constraints, which you can think of as encoding this kind of curve. And all the images that we have in the data, they lie on this surface or this curve in a high-dimensional space. And so the score is not quite well defined because what does it mean to go off the curve? The probability is zero the moment you go off the curve, and so it can explode, basically. How does that work? Well, the moment you add noise, then basically any combination of pixel values is valid because there's always some probability of adding the right amount of noise such that that combination was possible. So if you imagine data that lies on a plane or that kind of surface and then you add noise, you're moving the value by a little bit, and then it no longer lies on that plane or no longer lies on the surface. So you're breaking that: the constraint that held for the real data no longer holds for noise-perturbed data, and that helps with estimating the gradient more accurately. Real data, if it lies on a low-dimensional manifold, then there are points that just never occur. But then here, when you're adding such small amounts of noise, it still feels like there would be a type of low-data problem where any really far-out, random assortment of pixels is basically never going to occur, even if it has some very minimal probability. So basically, with a too-small amount of noise, you add that amount, and nothing really changes. So yeah. So adding noise does make things more stable, but you do have this problem that, as you said, if you add a sufficiently small amount of noise, it's going to be more or less the same. There is not really a discontinuity. So yeah, if you add a very small amount of noise, your noise-perturbed data distribution is very close to what you wanted, so that's great. But you're not really solving the problems that we have here, basically. That's what I have in the next slide here. That's the question, how much noise do we want to add? Do you want to add a very small amount of noise? Do you want to add a lot of noise?
If you think about the  different amount of noise that you can add, you're going to get different trade offs.  So you can imagine that there is the real data density. There is the real scores. And if  you try to estimate them using score matching, there's going to be a lot of error  in this region, as we discussed. Then you could say, OK, now I'm going to add a  little bit of noise. So I'm no longer estimating the right thing, so there's going to be  a little bit of error everywhere because I'm estimating noise-perturbed scores instead  of true scores. But my estimation starts to become a little bit better. And then you can  add even more noise. And then, at some point, you are doing a great job at estimating the  scores, but you're estimating the scores of something completely wrong because  you added too much noise to your data. And maybe that's the extreme where you add a  ton of noise. You've completely destroyed the structure that was there in the data. So what  you're estimating has nothing to do with the clean images that you started from, but you're  doing a very good job at estimating the score because it becomes very easy. So those  are the things that you need to balance. We want to be able to estimate the score  accurately, and so we would like to add as much noise as possible to do that. But at the  same time, adding noise reduces the quality of the things we generate because we're estimating  the score of a noise-perturbed data density. So we're no longer estimating what we wanted,  which is the thing that we have up here, but we're estimating the score of a data  distribution with a lot of noise added. So the denoising score matching thing-- so that is just the first example and that  is what it was restricted to? You can estimate any of these. You  can use the noisy score matching to estimate the score of any of these  slices. But it's going to perform-- it might become very bad if the amount  of noise that you add is very small. And so that's what you see here.  Well, this is maybe the clean score, or maybe this is a little bit of noise. Denoising  score matching is not going to work very well. So denoising score matching is a technique that you can use to estimate the  scores for any given thing. For any amount of noise. We're actually changing the points, or by  adding noise, we're getting much more points. No. So the noisy score matching is a way of  estimating the score of a noise-perturbed data density with any amount of noise that you  want. The question is, how much noise do you want to add? You'd like to add as little noise as  possible because you want to estimate something close to the real data. But the more noise you  add, the better estimation becomes. And so that's the problem that you want to trade off these  two things, and it's not clear how to do that. Don't you feel like when we add a little  noise, it's working with the function that has discontinuity features that if you do a  perturbation figure out the gradient, it's very easy to overshoot and undershoot, and then we  make it function smoother? So that we easy to find a gradient, but it's not accurate anymore because  you're estimating the gradient on the wrong thing. Yeah. So this is perhaps another way to  think about it. Imagine that somehow the data lies on this curve, and this is  just a curve in a 2D space. You know, most of your samples are going to be close to that  thick line that we have here. What's happening? 
And so if you were to estimate the  score far away from the black curve, it's going to be fairly inaccurate. Then you  can imagine, OK, let's add a lot of noise, sigma 3. Then, most of the samples are going to  be pretty far away from the black curve. And so we're going to get pretty good directional  information when you're far away from a clean sample. But it's going to be inaccurate the  moment you get closer to where the real data lies. And then you can imagine a setting where you  have a ensemble of different noise levels. You're not just considering a single noise  level, but you are considering many of them. And so that you are able to get good directional  information, both when you're far away and when we are a little bit closer and a little  bit closer to the real data distribution. And that's the main underlying idea of  a diffusion model or score-based model. The key idea is that we're not just going to  learn the score of the data, or we're not just going to learn the score of the data plus a single  amount of noise. But we're going to try to learn the score of the data perturbed with different  kinds of amounts of noise. That's the intuition. And so specifically, we're going to consider  different amounts of noise-- sigma 1, sigma 2, all the way to sigma L. And  we're going to use something called annealed Langevin dynamics to basically  generate samples. And the basic idea is that when we initialize our Langevin dynamics  procedure, there's probably going to be very little structure in the samples. They don't look  like natural images. And so what we can do is we can follow the scores that were estimated for  the data distribution plus a lot of noise. And for a little bit, if you  were to keep running this thing, then you would be able to generate samples  from the data distribution plus a lot of noise, which is not what we want. But what we can  do is we can use these samples to initialize another Langevin dynamics procedure where  we've decreased the amount of noise by a little bit. And then you basically keep  running your Langevin dynamics procedure, following the scores corresponding to the data  density plus a smaller amount of noise at sigma 2. Then you decrease it even more, and you initialize  because you got closer and closer to the high data density regions, then we know that now we are  starting to see more structure in the data. And so we should follow the score for the data density  plus, let's say, a very small amount of noise. And then again, you follow the arrows, and then you're  generating samples like the ones we actually want. But at this point, we can get the best of both  worlds because at the end of this procedure, we're generating samples from data plus a very  small amount of noise. But throughout the sampling procedure, we are always getting relatively  accurate estimates of the scores because we are considering multiple noise scales. So at  the very beginning where there was no structure in the data, we were following the score  corresponding to data plus a lot of noise. And then as we add more and more structure to  the data, because we are moving towards higher probability regions by following these arrows,  then we can afford to reduce the-- basically consider the gradients of data that was perturbed  with smaller amount of noise. And this procedure will get us the best of both worlds because  Langevin dynamics is never lost. We're always following a pretty accurate estimates of the  gradient. 
But at the same time, at the end, we're able to generate samples from a distribution of data plus noise where this noise level, sigma 3, can be very, very small. So the final samples that you produce are going to be almost clean. Are there any studies on how many different noise levels and what the sequence of decay should look like? Yeah. So typically, people use 1,000. That's the magic number. But in the second part of the lecture, we'll talk about an infinite number of noise levels. So the natural way to do things is to actually consider continuous numbers. That's what gets you the best of-- a lot of structure, a lot of interesting things. So that's the intuition. And you can see here another example of what happens if you were to just run Langevin dynamics. It has this problem where you're seeing too many particles down here because it doesn't mix sufficiently rapidly. And even though there should be more probability mass up here, meaning that more particles should end up up here, there are too many down here because the arrows are basically pointing down. You're not estimating things accurately. And if you do annealed Langevin dynamics, so you use this procedure where you run multiple Langevin dynamics chains corresponding to different amounts of noise, then it ends up giving you the right distribution, where you see there are many fewer particles down here, representing the fact that there should be less probability mass down there. And yeah, here is another example showing this, but let me skip. So what does it mean in practice? What it means in practice is that in order to do this, you need to be able to estimate the score, not just of the data density, not just of the data density plus a certain fixed amount of noise, but you need to be able to jointly estimate the score of the data plus different amounts of noise-- various noise levels. So you need to know what the score of data plus a lot of noise is. You need to know what the score of data plus a little bit less noise is, all the way down to a very, very small amount of noise added to the data, where it's almost the true data density. And that's fine because if you do annealed Langevin dynamics, even though this score is only ever going to be estimated accurately close to the high data density regions, we still have that problem. This score here is not going to be estimated accurately everywhere. It's only ever going to be estimated accurately when you're very close to, let's say, a real image. But that's fine because we're using the Langevin dynamics procedure, and we're only going to use this score model towards the end of the sampling, where we already have a pretty good guess of the kind of images we want. While this score here, which is data plus a ton of noise, is going to be estimated pretty accurately everywhere, so it's going to be good at the beginning of the sampling. But we don't want to just keep following that because we want to be able to sample from something close to the clean data distribution. And so to make things efficient, what we would do is we would have-- you could potentially train separate score networks, one for every noise level. If you have, let's say, 1,000 noise levels, that would mean 1,000 different neural networks to train, each one being trained on a different kind of vector field.
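To make the annealed procedure concrete, here is a minimal sketch of the sampler just described. It assumes a hypothetical score_model(x, sigma) that estimates the score of the sigma-perturbed data density; a single network conditioned on sigma in exactly this way is what comes next. The step-size schedule is one common choice, not the only one.

```python
# A minimal sketch of annealed Langevin dynamics over decreasing noise levels,
# assuming a hypothetical score_model(x, sigma) that estimates the score of the
# sigma-perturbed data density (a sigma-conditioned network is described below).
import torch

@torch.no_grad()
def annealed_langevin(score_model, sigmas, shape, steps_per_level=100, eps=2e-5):
    # sigmas: decreasing noise levels sigma_1 > sigma_2 > ... > sigma_L.
    x = torch.randn(shape) * sigmas[0]   # start from (roughly) pure noise
    for sigma in sigmas:
        # Step size shrinks with the noise level (an NCSN-style schedule).
        alpha = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(steps_per_level):
            z = torch.randn_like(x)
            x = x + 0.5 * alpha * score_model(x, sigma) + (alpha ** 0.5) * z
        # The samples at this level initialize the chain for the next, smaller sigma.
    return x
```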
To make things more efficient in practice, what  you can do is you can have a single neural network that takes an additional input parameter sigma,  which is basically just the amount of noise that we're considering. And the single neural network  will jointly estimate all these different vector fields. So when you fit in a large value  of sigma here as an input, then the network knows that it should be estimating the vector  field for, let's say, data plus a lot of noise. While when you fit in as an input, a small  value of sigma, then the network knows that it needs to estimate the vector field for  data plus a small amount of noise. And so this is just basically a way to make the  computation a lot more efficient because we have now a single model that is trained to  solve all these different estimation problems. It's going to be worse than just training 1,000  separate models, but it's going to be much more efficient because we're just training  a single neural network at that point. So in this case, we're learning a vector field,  not a probability distribution, right? But when you do the score matching for energy-based  models, in that case, are we learning the solution to an ODE that has a closed form solution? Or are  we implicitly assuming that because we're taking a gradient of a probability distribution  instead of just learning a vector field? So yeah. So what we're learning here-- so these  vector fields are not necessarily conservative unless you parameterize the network in a certain  way. You could potentially parameterize the network such that it's the gradient of an energy  function. Actually, it doesn't hurt performance too much if you do that, but it doesn't actually  seem to help. So in practice, you can just use a free-form neural network that goes from, say,  images to images, and that's not a problem. But you're right that it's not necessarily the  gradient of a potential of an energy function, and so weird things can happen,  where if you follow a loop, the probability can go up or down, even though  if there was really an underlying energy, it shouldn't change the probability. So that could  be a problem. But in practice, it works. Yeah? For a bigger noise level, you can  generate multiple perturbed data points with one real data point. So  we got a hyperparameter based on-- The number of noise levels? Is  that the question, whether that's-- Given the noise level, you can still generate  multiple data points with one real data point. So you mean when we run the Langevin procedure? When you generate the training data for the score. Yeah. Yeah, you can generate-- so we haven't  talked about how we actually learn it. Right now, I'm just saying, what does the model  look like? So the model is going to be a single neural network that will  try to jointly estimate all these scores. How do we actually learn it?  We're going to learn it by denoising score matching. So there's going to be  this noise conditional score network, which is going to be a network that jointly  estimates all these vector field of scores. And how should we train this? You could  do slight score matching. It's much more natural to just use denoising score matching.  Since denoising score matching already gives you the score of a noise-perturbed data  density, you might as well directly use that. So since we were trying to model  scores of data plus noise, we might as well just directly use denoising score matching  because that's a little bit more efficient. 
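Here is a toy sketch of that kind of sigma-conditioned network. Real implementations use U-Net-style architectures with learned noise embeddings; the MLP, the log-sigma feature, and the 1/sigma output scaling below are illustrative assumptions, not the course's implementation.

```python
# A toy sketch of a noise-conditional score network: one model, conditioned on
# sigma, jointly estimating the scores at all noise levels. This MLP assumes
# flattened inputs of size dim and is only meant to illustrate the conditioning.
import torch
import torch.nn as nn

class NoiseConditionalScoreNet(nn.Module):
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x, sigma):
        # sigma: scalar tensor or per-example tensor; broadcast to one feature per example.
        sigma = torch.as_tensor(sigma, dtype=x.dtype).reshape(-1, 1).expand(x.shape[0], 1)
        h = torch.cat([x, sigma.log()], dim=1)
        # Dividing by sigma keeps the raw network output roughly unit-scale,
        # since the true score has magnitude on the order of 1/sigma.
        return self.net(h) / sigma
```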
And then the loss function is going to be  a weighted combination of denoising score matching losses because we want to jointly solve,  let's say, 1,000 different tasks. And so the loss function might look something like this, where  we have this noise-conditional score network that takes as input a data point and a noise  level and tries to estimate the score of the data distribution perturbed with that noise level  at that point x. And we want to train this score network to perform well across different  noise levels. So if you have L noise levels, this noise-conditional score network should be  able to solve all these different regression problems as well as possible. And so there is  a lambda sigma i parameter here that basically controls how much you care about estimating  accurately the scores at different noise levels. Since the data is already perturbed now,  we don't add an additional perturbations? Yeah. So the log would look like this.  So the data is clean, as you said. And then you add noise corresponding to the  sigma that you care about, and then you try to denoise that data point. And so if you  think of it from the denoising perspective, we're not just learning to denoise data that has  been perturbed with a fixed amount of noise, but we're basically learning a family of denoisers,  each one working with different amounts of noise. So there is a denoiser that works when the data  is corrupted with a very large amount of noise, and there's going to be a denoiser that  works when the data is corrupted with a smaller amount of noise, all the way down  to almost no noise being added to the data. And these are different denoising  problems and equivalently corresponding to estimating the scores of different  noise-perturbed data distributions. [INAUDIBLE] longer than asked to the real samples. Yeah. So when the noise is very large,  basically denoising is very hard. If I add an infinite amount of noise, basically  all the structure and the original data is lost. And the best you can do is to  basically output the average image, essentially. That's the only thing you can do.  There is no information in x tilde about x. So the best thing you can do is to basically--  if you're trying to minimize an L2 loss to predict the mean. And you can imagine  that if that's the only thing you have, you're not going to be able to generate good  images. But because you know also how to denoise images with less amounts of noise, then if you  know all these score models, then you can do that annealed Langevin dynamics procedure, and you  can actually generate clean images at the end. You said we can customize the Lambda  function to assign weights. But from a practical standpoint, what sort  of a lambda are we looking for it? So yeah, we'll get to what lambdas make sense.  In theory, if you had infinite capacity, this model is arbitrarily powerful. It doesn't  even matter because it could solve perfectly each task. In practice, it matters how you  weight this different score matching losses. We'll get to that. And yeah, but the loss function  basically looks like this. It's a mixture of denoising score-matching objectives across all the  different noise levels that we are considering. Would I be correct in saying that  if you have different noise levels, then would the gradients be in some way  related by some scaling factor? Because the general direction is the same. So to  me, I'm a bit-- I guess, why can't we just learn one model and just scale it? 
Because  like learning rate-- so if you're far away, try to estimate it from-- try to get an  estimate of the gradient, just scale it by a big amount and move in a direction. And  as you get closer, just taper it down so that you'll converge. So why not do something like  that? Because if you assume a Gaussian shape, then I assume that we should be able to scale  gradients also-- or scale the score, I mean. So it's true that they are all related to each  other. So in theory, if you know the score at a particular noise level, you can, in theory,  recover the score of different noise levels. But it's not just the scaling. So there is something  called the Fokker-Planck equation. And if you were to solve of that equation, which is just the PDE  at the end of the day, but if you were able to solve that PDE, that tells you basically  how the scores are related to each other. So that's deterministic but intractable, I see. Yeah. You can try to enforce it. There are  papers. We have a number of papers trying to enforce that condition because, in some  sense, this is just treating all these tasks as being independent. But we know that they are  related to each other, as you were suggesting. And so you might be able to do better if you  tie together the losses because you know how to go from one solution to the other solution.  In practice, it doesn't seem to help a lot. But you're right. There is something called-- yeah,  if you could solve the Fokker-Planck equation, you could go from-- yeah, go from any score to  any other score, at least in continuous time. Cool. So now we have several choices to make. We need to choose what kind of  noise scales are we going to consider. So we need to decide on what is the maximum  amount of noise that we're going to add. We need to decide the minimum amount of noise that  we're going to add, and we need to decide how to step in between these two extremes, essentially.  And for the maximum noise scale, you probably want to choose it to be roughly the maximum  pairwise distance between any two data points. And the idea is like if you have two images in  the data, x1 and x2, you want the amount of noise to be sufficiently large so that basically it's  possible to go from one data point to the other data point if you were to add noise, essentially.  So if you start from data point 1 and you were to add a sufficiently large amount of noise, there  should be a reasonable probability of generating a data point 2-- an equivalently of going back on  the other direction. And this basically ensures that-- like at the beginning when you start out  your Langevin dynamics procedure with a lot of noise, that's going to mix. It's going to explore  the space pretty efficiently because there is a way to go basically from any point to any other  point. That's the intuition for this choice. The minimum noise scale, you probably want it  to be sufficiently small so that the image plus noise is hard to distinguish from just a clean  image. So the minimum noise scale should be very, very small. And the other thing to decide is  how you go from the maximum to the minimum. So how do you interpolate between these two  extremes? And again, the idea is that if you think about that Langevin dynamics procedure,  we want to make sure that these different noise scales have sufficiently overlap so that  when you initialize the Langevin dynamics chain corresponding to the next noise level,  you're starting with something that makes sense. 
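Before getting to how the intermediate scales are spaced, here is a small sketch of the heuristic just mentioned for the largest noise scale: roughly the maximum pairwise Euclidean distance between training points. The subsampling below is an assumption to keep the pairwise computation cheap; it is not part of the lecture's recipe.

```python
# A small sketch of the heuristic above: pick sigma_max to be roughly the
# maximum pairwise Euclidean distance between training points, computed here
# on a random subsample since the full pairwise matrix scales quadratically.
import torch

def suggest_sigma_max(data, subsample=1000):
    idx = torch.randperm(len(data))[:subsample]
    x = data[idx].flatten(1).float()
    return torch.cdist(x, x).max().item()
```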
So if you imagine that you have these kinds of spheres, or shells, corresponding to where the probability mass ends up as you increase the amount of noise that you add, and you go sigma 1, sigma 2, sigma 3, essentially what you want to make sure is that when you have, let's say, data plus noise level sigma 2, there should be sufficient overlap with the kind of data points you expect when you have data plus noise level sigma 3, so that when you take the samples that were obtained by running Langevin dynamics with the sigma 2 noise level and you use them to initialize the Langevin chain corresponding to noise level sigma 3, you have something that makes sense. Because if there is no overlap-- say you have a drastic reduction in noise level, so you go from a lot of noise to very little noise-- then after running your Langevin chain for the large amount of noise, you're not going to get a good initialization for the next noise level. But if there is a decent amount of overlap between these noise levels, then you expect this annealed Langevin dynamics procedure to actually work pretty well. Yeah? How can we estimate that other than just guessing? Yeah. So we know we are deciding the sigmas, right? And so what you can do is you can actually work out a little bit of math about what makes sense. And it makes sense, according to this heuristic, to basically use some kind of geometric progression between the different noise levels. This ensures that there is sufficient overlap between the different shells that you get as you increase the-- or as you decrease the amount of noise that you add. And this is a heuristic. It's not necessarily the only valid choice, but this is the first one that we tried that seemed to work. The other kind of thing we can decide is the weighting factor. Remember, we're jointly solving all these estimation problems corresponding to different noise levels. How much should we care about the different noise levels? That is controlled by this lambda sigma i hyperparameter, which decides how much weight you put on the different components of the loss. And so how do we choose this weighting function? The idea is that you want to balance the importance of the different components in the loss. And a reasonable heuristic that, again, works well in practice is to choose it based on the amount of noise that you're adding at that level. Doesn't this encourage us to ignore the finer noise levels? It doesn't, because the size of the arrow also changes with the scale. Do I have it here? So the loss would look something like this. And basically-- yeah. Essentially, you actually end up caring the same amount about the different noise levels if you make that choice, because of the various scaling factors. So remember, there is a sigma i here. So again, it's a choice. Other choices can work as well, but this is the thing that we did. Yeah? I noticed this is similar to what I think someone asked about before, that the epsilon theta is defined as basically a scaled version of the score. Is there some meaning behind what epsilon theta represents? Yeah. So epsilon theta here is basically a noise prediction, because it's basically literally just estimating the noise that was added. There is a different parameterization where you might want to predict-- as we discussed, when the noise level is very high, then you might want to predict the mean. So there are different ways of parameterizing what the network is supposed to output.
This is the simplest one where you're just predicting  the it's a noise prediction kind of task. And so the final loss looks something like this.  You basically sample a mini-batch of data points. Then, you sample a mini-batch of noise indices.  So you basically equivalently choose a bunch of noise levels, one per data point, let's say,  uniformly across the different noise scales that we're willing to consider. Maybe L here could  be 1,000 if you have 1,000 different noise levels. And then what you do is you sample noise, IID one noise vector per data point. And then,  you basically train the score network to solve the denoising problem for each data point,  essentially, with the weighting function that we had before. And then you basically just  do stochastic gradient descent on this loss, trying to essentially find parameters  theta that minimize this denoising loss, which is equivalent to essentially estimating  the scores of the data density perturbed with these various noise levels as well as you can. And  so basically, everything is just as efficient as training a single score model because everything  is amortized, and there's a single core network that is jointly trained to estimate the score of  data plus noise at different noise intensities. And so the final thing looks like  this. You have data, clean data, and then you have noise at the end. And so there  is going to be correspondingly different versions of the data distribution that has been perturbed  with increasingly large amounts of noise going from clean data, mediumly perturbed  data, all the way to data plus a ton of noise where basically the structure  is completely destroyed. So visually, you can think of clean data, data plus a little  bit of noise, more noise, more noise, more noise, all the way to huge amount of noise, where you  don't even recognize what you started from. In terms of noise scheduling, you  always have go from the large to small. The geometry sequence will always  have or have different noise? So you're thinking of increasing and  decreasing and doing other things? I think, in principle, you could do this kind  of. Actually, yeah, people-- I mean, you can. So during learning, I guess there is  always going to be-- I mean, it's a scalar, and so there's always going to be an ordering. And  so you're always going to be considering. You're always going to go from the smallest  amount to the largest amount of noise. During inference, it might make sense to do  different things than what I described is annealed Langevin dynamics, where you would  start from a lot of noise and you go all the way to clean data. There are versions  where you might not want to do this, and so you might want to go from noise to  data, and then maybe a little bit of noise, and then go back. So yeah, there is a  lot of flexibility then at the inference. A training that you need to estimate all of  them. And there's always going to be a mean, and there's going to be a max. And then you  can think about how you space them. Say again? A parallel process training--  train them in parallel. We train all of them in parallel, yeah.  Yeah, yeah, yeah. So all of them is-- there is a single model that is jointly trained  to solve all these denoising tasks. And in fact, you can get better performance if you're  willing to consider multiple-- like, some of the state-of-the-art models that  are out there, they don't have a single one. They might have a few of them. 
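Returning briefly to the training recipe described above (a mini-batch of data, one randomly chosen noise level per example, Gaussian noise, a weighted denoising loss, stochastic gradient descent), here is a minimal sketch of a single step. The geometric ladder of sigmas and the lambda(sigma) = sigma squared weighting are common choices consistent with the earlier discussion; the specific values and names are assumptions, not the lecture's code.

```python
# A minimal sketch of one training step for the noise-conditional score network,
# using a geometric ladder of noise levels and the lambda(sigma) = sigma^2
# weighting (a standard choice), which turns the objective into a plain
# noise-prediction loss: 0.5 * || sigma * s_theta(x + sigma*z, sigma) + z ||^2.
import math
import torch

# Assumed hyperparameters, for illustration only.
sigma_max, sigma_min, L = 50.0, 0.01, 1000
sigmas = torch.exp(torch.linspace(math.log(sigma_max), math.log(sigma_min), L))

def training_step(score_model, optimizer, x):
    B = x.shape[0]
    idx = torch.randint(0, L, (B,))                      # one noise level per example
    sigma = sigmas[idx].reshape(B, *([1] * (x.dim() - 1)))
    z = torch.randn_like(x)
    x_tilde = x + sigma * z                              # perturbed data
    pred = score_model(x_tilde, sigma)                   # estimated score at level sigma
    loss = 0.5 * ((sigma * pred + z) ** 2).flatten(1).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```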
Because they can  afford to train a bunch of different noise models, it might make sense to do it because this  is purely like a computational trick. If you could, it would be better to separately  estimate the scores for every noise level. And so then there's going to be different  noise levels. For every noise level, there's going to be a corresponding data  density plus noise, which we're going to denote sigma 1. And this is the same as the q  sigma that we had at the beginning. So you can think of it as data becoming increasingly  corrupted as you go from left to right. For each one of these noise levels, there is  going to be a corresponding score model. So all these scores are going to be different  because we are adding increasingly large amounts of noise. And then there's going to  be a single neural network that is jointly trained to estimate all these vector fields.  And the network takes the data x where you are in this plot and the sigma, and it will  estimate the score for that noise level. And we jointly train them by this mixture  of denoising score-matching objectives, Which is just a mixture of the usual score  matching loss. And then we do, again, Langevin dynamics to sample. So what you would  do is you would initialize your particles at random. Presumably, that is pretty close to just  pure noise. So we initialize our particles here, and then we follow the scores for p sigma  3, which is data plus a lot of noise. We're getting a little bit closer  to the higher data density regions, and then we use these samples to initialize a  new Langevin dynamics chain for the data density plus a little bit less noise.  We follow these arrows again, and you see that the particles are moving  towards the high probability regions of the original data density. So they're becoming  more and more structured in some sense. And then we use these particles to  initialize another Langevin chain, corresponding to an even smaller amount of noise.  And at the end, we get samples that are very close to the original data density because sigma 1 is  very, very small. So this is almost the same as clean data. And throughout the process, we  are getting good directional information from our score models because we're using the  corresponding noise level in the data density. It seems like more of a deterministic process.  For a given noise level, you always get a particular mode in a distribution. In a sense, if  it were truly generative, you would expect like a more multimodal, I guess, score function in the  sense that you should have a slight probability to go to any of the modes or any of the, I  guess, the clusters over here. And I guess, that's something which I can see over here.  I was just curious what your perspective is. Yeah. So that's a good question. And although  the arrow is pointing you in one direction because it's deterministic, recall the  Langevin dynamics is following the arrow, but it's also adding noise at every step. And  so regardless of where you are, if you run it for a sufficiently long amount of time, you  might end up somewhere completely differently. So even though you initialize the particle in  the same point, maybe you initialize it here, because of the randomness in the Langevin  dynamics, you might end up in completely different places, meaning that you're going  to generate completely different images. And so that's the procedure, annealed Langevin  dynamics. And here, you can see how it works. 
This is what happens if you follow that exact  procedure on real data sets. Let me play it again. So you see how you start from pure noise,  and then you're following these arrows, you're following these gradients. And you're  slowly basically turning noise into data. And here, you can see examples on some image data  sets-- MNIST, CIFAR-10, and so forth. And you can see that it has this flavor of going from noise  to data by following these score models. Yeah? How many times do we update the tilt converter? Yeah, great question. How many steps?  Ideally, you would want to do as many as possible. In practice, that's expensive, so  you might want to do one, or two, or maybe 10. Each step is expensive because it requires a  full neural network evaluation. And so that's, again, a hyperparameter. You should do  as many as you can. But the more you do, the more expensive it is to generate a sample. All the pictures are generated? These are all generated, yeah, yeah, through  the annealed Langevin dynamics procedure. [INAUDIBLE] Oh, yeah. So MNIST is basically a data set of  handwritten digits. They kind look like that, and then you have people's faces, and  you have CIFAR-10 samples. So they are pretty good samples that were generated by  this model. They have the right structure, and the model is able to generate  reasonable looking images. Yeah? In training, we pick a few discrete  sigma values and fit into the model. And during the inversion we anneal, can  we pick from the continuous [INAUDIBLE]? So we will come to continuous soon,  yeah. For now, yeah, It's all discrete, And you would anneal down following  the same kind of schedule. Yeah? I guess, these days, you see mostly  these type of models being used. I guess, is there an objective benefit in using  diffusion models compared to GANs? I guess, more from a theoretical perspective, because  we see they get better results in practice, but is that theoretically  grounded for some reason? It's not theoretically grounded. I think one  key advantage is that if you think about it, they can be much more expensive at the  inference time. Imagine you're running a Langevin dynamics chain, which might involve  evaluating a neural network 1,000 times, 10,000 times. So if you think  of it from that perspective, you have a very deep computation graph that  you're allowed to use at inference time. But at training time, you never have  to actually unroll it. So that's the key insight that you're allowed to use  a lot of computation at inference time, but it's not very expensive to train because  it's not trained by, again, generating a full sample and then checking how should I change my  parameters to make the sample better. It's trained very incrementally to slightly improve the samples  by a little bit. And then you keep repeating that procedure at inference time, and you get great  samples. So I think that's a key insight for why. They would both maximize likelihood. But-- This is not maximizing. Oh, sorry. Neither is. I guess, they would both be equivalent,  at least from some perspective in terms of data generation or some-- you know, matching  some distribution or something. They would be equivalent from that perspective. It's just that  the efficient models are at a finer scale then. Yeah. I guess they are trained in a very different way. There is no two-sample test.  They are trained by score matching. The architectures are different. The  amount of compute that you use during training and inference time are different.  
So there's a lot of things that change. It's hard to say. There's  no theoretical argument for why this is better. But in practice,  yeah, it seems to be dominating. It's also much more stable to train because it's  all just score matching, so no minimax. Yeah? So I think when you were discussing the model, there is an analogy that diffusion models  are like stacking of these. How is that? We're not yet at diffusion models yet, so  we'll hopefully talk about that later. But we'll see the flow model soon, yeah. So yeah,  if you look at certain kind of metrics that we'll talk about more in a few lectures, but  it was kind of like the first model that was actually able to beat GANs back then, or  the state of the art on image generation, that was kind of like the first hint that these  kind of models could actually outperform GANs despite a lot of years and years and lots of  resources that were spent in optimizing GANs. This thing was actually able to improve sample  quality according to some metrics. And indeed, these are different kinds of data sets.  Scaling up the model a little bit, it can generate pretty reasonable faces of people,  and monuments, and things like that. Yeah? That's metric inception like metrics? Yeah. We'll talk about the metrics more in  future lectures, but there are metrics that try to quantify how good the samples are, how  closely they match the-- they relate to how visually appealing the samples are. And they are  automated, so there's no human in the loop. And they correlate with how good the samples are--  not super important what they are exactly. But the important bit was that there was the first  model that was actually competitive with GANs, and that's what prompted a lot of the follow-up  work on really scaling these models up. So the noise level you used in sampling  as we sample mass most often used for? Yeah. And yeah, so that was very  promising. And you might also wonder, which I think was also brought up here, do the  noise levels that you use during inference have to match the ones that you see during training?  Can we use more noise levels, less? And so it's pretty natural to think about what happens if  you have an infinite number of noise levels. Right now, we have the clean data distribution,  which, let's say, is just a mixture of two Gaussians. So here, yellow denotes high  probability density, and blue denotes low probability density. And then so far, what we  said is that we're going to consider multiple versions of this data distribution perturbed with  increasingly large amounts of noise-- so sigma 1, sigma, 2, sigma 3, where sigma 3 is a  very large amounts of Gaussian noise. So that's the structure in the data  is completely lost. So if you start out with a distribution, it's just a  mixture of two. Gaussians. After you add a sufficiently large amount of noise, you  are left with just pure noise, essentially. And you could imagine using-- maybe here, I  have three different noise levels. You could imagine-- well, and you can always plot these  densities. So you have most of the probability mass is here and here because it's a mixture  of two Gaussians. Then, you can see that the probability mass spreads out as you add  more and more Gaussian noise to the data. And now you might wonder, well, what happens if  we were to consider multiple noise levels that are in between? What happens if we add a noise  level that is in between 0 and sigma 1? Then, you're going to get a density that is in between  these two. 
And in the limit, you can think about what happens if you were to consider an infinite number of noise levels going from 0 to whatever was the maximum amount. And what you're going to get is an infinite number of data densities perturbed with increasingly large amounts of noise that are now indexed by t, where t is a continuous variable from 0 to the maximum that you have right at the other end. So each slice here, each vertical slice, is basically the density of data where we've added noise corresponding to this continuous index t. And so you can see how I got here. We started out with a finite number of noise levels where all the probability mass was here, and then it spreads out. And then you can think about a finer interpolation, and finer, and finer, until you have something continuous. And so you go from pure data at time 0 here on the left to pure noise on the other extreme, corresponding to the maximum amount of noise that you're adding to the data. So now, instead of having 1,000 different versions of the data perturbed with increasingly large amounts of noise, you have an infinite number of data densities that have been perturbed with increasingly large amounts of noise. And so you can think of what we were doing before as selecting 1,000 different slices here and modeling those data distributions. But really, there is an infinite number of them. And that perspective is actually quite useful, as we'll see. And so you have this sequence of distributions. p0 is just the clean data, and pT at the other extreme is what you get if you add the maximum amount of noise, which you can think of as some kind of noise distribution where the structure in the data is completely lost, and that corresponds to basically pure noise. So as you go from left to right, you are increasing the amount of noise that you add to the data as you go from pure data to pure noise at the other extreme. And now you can imagine what happens if you perturb data with increasingly large amounts of noise. What happens is that you start with points that are distributed according to p0, that are distributed according to the data density. For example, you start with these four images. And then as you go from left to right, you are increasingly adding noise to these data samples until you are left with pure noise. And so you can think of having a collection of random variables, xt, one for each time step, which is basically describing what happens as you go from left to right, as you go from pure data to pure noise. And all these random variables, which you can think of as a stochastic process-- which is just a collection of an infinite number of random variables-- all have densities, the pt's, which are just these data densities plus noise that we've been talking about for a while. And we can describe the evolution of this, or how these random variables change over time-- if you think of this axis as a time-like dimension, then all these random variables are related to each other. And we can describe their relationship using something called a stochastic differential equation. It's not super important what it is, but it's basically a simple formula that relates the values that these random variables take. And it's similar to an ordinary differential equation, which would just be some kind of deterministic evolution, where the difference is that we add basically a little bit of noise at every step.
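As a tiny numerical illustration of "adding a little bit of noise at every step," the sketch below simulates such a random walk and checks that its end point is distributed like the starting point plus one big Gaussian perturbation. The linear sigma(t) schedule and the constants are assumptions chosen only for illustration.

```python
# A tiny illustration (with an assumed linear sigma(t) = sigma_T * t schedule):
# adding a little Gaussian noise at every step of a random walk gives the same
# marginal as perturbing the data once with the cumulative amount of noise.
import torch

def forward_random_walk(x0, sigma_T=10.0, N=1000):
    x, dt = x0.clone(), 1.0 / N
    for i in range(1, N + 1):
        t_prev, t = (i - 1) * dt, i * dt
        dvar = (sigma_T * t) ** 2 - (sigma_T * t_prev) ** 2   # increment of sigma^2(t)
        x = x + dvar ** 0.5 * torch.randn_like(x)             # a little noise per step
    return x

x0 = torch.zeros(100_000)
print(forward_random_walk(x0).std())   # close to sigma_T = 10, i.e. x0 + sigma_T * z
```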
So you can imagine particles that evolve  from left to right, following some kind of deterministic dynamics where we add a little  bit of noise at every step. And in particular, it turns out that if all you want to  do is to go from data to pure noise, all you have to do is you can describe this  process very simply with a stochastic differential equation where you're basically just-- the way  xd changes infinitesimally is by adding a little bit of noise at every step. You can think  of this as some kind of random walk where, at every step, you add a little bit of noise.  And if you keep running this random walk for a sufficiently large amount of time, you  end up with a pure noise distribution. And not super important, but what's interesting  is that we can start thinking about what happens if we reverse the direction of time.  Now, we have this stochastic process that evolves over time. Going from left to  right here, you go from data to noise. Now you can start thinking about what happens if  you were to reverse the direction of time, and you go from capital T to 0. And  so you go from pure noise to data. That's what we want to do if we want to generate  samples. If you want to generate samples, you want to basically invert this process.  You want to change the direction of time. And it turns out that there is a simple,  stochastic differential equation that describes the process in reverse time. And the  interesting thing is that you can describe this process with a stochastic differential equation,  which is relatively simple. And really, the only thing that you need is the score functions  of these noise-perturbed data densities. So if you somehow have access to the score  functions of these densities pts corresponding to data plus noise corresponding to time t,  then there is a simple, stochastic differential equation that you can use to describe  this process of going from noise to data. And so if you somehow knew this score  function, which, to some extent, we can approximate with a score-based model,  we can build a generative model out of this interpretation. And so the idea is that  we're going to train a neural network, just like before, to estimate all these score  models or these ground truth scores. These are scores of data plus noise where there is  an infinite number of noise levels now. So this is exactly what we're doing  before, except that now t doesn't take 1,000 different possible values. T can  take an infinite number of different values, but it's exactly the same thing. Before, we  were just estimating the scores of pt for 1,000 different chosen noise levels. Now  we do it for an infinite number of them. And we do that by doing the usual mixture of  denoising score-matching objectives. So we want to be able to train a single neural  network as theta that jointly estimates all these infinite number of scores. And so  it's exactly what we had before. Except that instead of being a sum over 1,000 different  noise levels, now it's an integral over an infinite number of different ts-- all  the time steps that we have in that plot. And if you can somehow train this model well,  you can derive this loss to a small number, which means that this score model approximates  the true score accurately. Then, you can basically plug in your score model in that reverse time  stochastic differential equation. So recall we had this SDE here such that-- so if you knew this  score, then you could just solve this stochastic differential equation, and you would go from  noise to data. 
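For reference, one standard way to write the two equations being described is below; the exact notation used on the slides may differ, so treat this as a hedged reconstruction. The forward SDE injects noise over time, and the reverse-time SDE depends only on the scores of the noise-perturbed densities.

```latex
% Forward SDE (data to noise); the "just add noise" case corresponds to f = 0:
\mathrm{d}x_t = f(x_t, t)\,\mathrm{d}t + g(t)\,\mathrm{d}w_t
% Reverse-time SDE (noise to data), which depends only on the score of p_t:
\mathrm{d}x_t = \left[ f(x_t, t) - g(t)^2 \,\nabla_{x} \log p_t(x_t) \right] \mathrm{d}t + g(t)\,\mathrm{d}\bar{w}_t
```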
You take this exact equation, you replace the true score with the  estimated score, and you get that. And the advantage of this is that now this  is a well-studied problem. You just have a stochastic differential equation. You  just need to solve it. And there is a lot of numerical methods that you can  use to solve a stochastic differential equation. The simplest one is basically  something similar to Euler method for ODEs, where you basically just discretize time,  and you just step through this equation. So this is a continuous time kind of evolution.  You can just take increments of delta t, and you just basically discretize this. So  I guess, delta t here is negative. So you decrease time, and then you take a step  here following the deterministic part, which is given by the score. And then you  add a little bit of noise at every step. So you see how this is basically the same as  Langevin dynamics. It's always some kind of follow the gradient and add a little bit of  noise at every step. And so you can interpret that as being just a discretization of  this stochastic differential equation that tells you exactly what you should  do if you want to go from noise to data. You're sampling t samples here?  How is it continuous? We're no longer doing this noise level by noise levels. So there is a continuous number of them. So  t is a uniform continuous random variable between 0 and t. So there is an infinite  number of-- let me see if I have it here. So there is an infinite number of noise  levels. And what we do is we're basically numerically integrating these stochastic  differential equation that goes from noise to data. So you start out here by basically  sampling from a pure noise distribution. Yeah. How was it happening for you? So then you take small steps. You still need to,  but you have a freedom to choose. At that point, you don't necessarily have to  take always 1,000 of the length. You can apply whatever. And there are many  more advanced numerical schemes for solving stochastic differential equations. At  the moment you've managed to formulate the problem of sampling to solving a  stochastic differential equation, then you can use a lot of advanced numerical methods  for solving stochastic differential equations. And so you step through time, and you discretize--  you try to find a solution to this trajectory that goes from noise to data, basically. And  that's the main idea of score-based diffusion models. And there is actually a connection  with what we were seeing before. Before, we were doing Langevin dynamics. How is  that related to this numerical SDE solver? You can think of it as kind of like the numerical solver will take a step. It's trying to  approximate the true solution of the SDE, which is like this red trajectory that  I'm showing here. You can use a numerical SDE solver, and you can help it basically  at every step by doing Langevin dynamics. So Langevin dynamics is a procedure that would  allow you to sample from this slice. And so what you can do is you can combine or you can either  just use character steps, in which case basically you get the procedure that I talked about before,  where you do annealed Langevin dynamics. You just follow Langevin for a little bit, and  then you follow the Langevin corresponding to the next slice, and you follow the Langevin  corresponding to the next slide, and so forth. Or you can apply these numerical methods  to try to jump across time. 
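As a concrete reference for that simple discretization (step along the estimated score, add fresh noise, decrease time, repeat), here is a sketch for the "just add noise" forward process. The score_model(x, t), the noise schedule sigma_fn, and the step count are assumptions; this is an Euler-Maruyama-style illustration, not the lecture's code.

```python
# A minimal sketch of an Euler-Maruyama-style discretization of the reverse-time
# SDE for the "just add noise" forward process, assuming a hypothetical
# score_model(x, t) and an increasing noise schedule sigma_fn(t).
import torch

@torch.no_grad()
def reverse_sde_sample(score_model, sigma_fn, shape, T=1.0, N=1000):
    dt = T / N
    x = sigma_fn(T) * torch.randn(shape)      # start from (roughly) pure noise at t = T
    for i in range(N, 0, -1):
        t = i * dt
        # g(t)^2 = d[sigma^2(t)]/dt, approximated with a finite difference.
        g2 = (sigma_fn(t) ** 2 - sigma_fn(t - dt) ** 2) / dt
        score = score_model(x, torch.tensor(t))
        # Deterministic drift along the estimated score, plus fresh noise.
        x = x + g2 * score * dt + (g2 * dt) ** 0.5 * torch.randn_like(x)
    return x
```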
And you can combine the two of them to eventually end up  with something that can generate samples. So you can think of it as, once you view  it from this perspective, there is, again, many different ways of solving the  SDE, including using Langevin dynamics to sample from these intermediate densities that  you get as you change the time dimension. And yeah, this is the kind of thing that  really works extremely well. This kind of model-- again, I guess we haven't  talked exactly about the metrics, but it achieves state-of-the-art results on  a variety of image benchmarks. And you can see some of the high-resolution images  that can be generated by this model. These are fake samples. These people don't exist, but you can see that the model is able  to generate very high-quality data by basically solving this stochastic differential  equation and mapping pure noise to images that have the right structure. And they're  almost indistinguishable from real samples. Possibly, this question has been answered before, but I see most of the images, when they have  fingers in them, that's where the problem is. They have-- Fingers. Oh, yeah. Fingers are a hard one  to do. It's a hard problem for these models to learn how to make  hands. I guess in this data sets, it's typically just the face, so  you don't have to worry about it. But I think people have made  progress on that as well with more training data. I think people  have been able to specialize. If you show a lot of hands in the training  set, the model learns how to do them. Where does the [INAUDIBLE]? There is a bunch of data sets  where we have celebrities' faces, and then you can train a model. Pictures? Yeah, those are real pictures. Oh, they use real pictures  to create a fake picture. You train a model, and then you  generate new people that like the ones you have in your training  set, but don't really exist. Is that from the real picture, which also  from people's face or from other categories? It was always faces. So you train on faces,  and then you generate other people's face. So how do you validate that  the image that's generated isn't just a sample from your training set? Yeah, that's a great question. How do  you prevent overfitting? You can look at it from the perspective of the loss.  If you believe the score matching loss, you can see how well it generalizes to  validation or test data. We also did extensive tests on trying to find our nearest  neighbor in the data set. And we're pretty confident that it's often able to generate  new images that you haven't seen before. There are certainly cases where especially  text-to-image diffusion models actually memorize, which might be OK. I mean, I don't think it's  necessarily wrong behavior to memorize some of the data points. But yeah, people have been able  to craft captions such that if you ask the model to generate you an image with that caption, it  produces exactly an image from the training set. So memorization does happen, but  not to the extent that it's only replicating images in the training set.  It's certainly able to generate new images, including composing things in interesting  ways that possibly have been seen, I think, even on the internet. So it's certainly  showing some generalization capabilities. And I think looking at the loss is a pretty good  way to see that, indeed, it's not overfitting. The score matching loss that you see in a training  set is pretty close to the one you see on the validation on seen data. So it's not overfitting,  at least. 
We're not yet at that level. So the reason why the samples are  more realistic-- is that because a diffusion model can estimate the  true data distribution more accurate than other generative models again or  something? Or is there another reason? Yeah, it's a mix. I think it goes back to what  we were saying before. The models are trained by score matching, so it's a much more stable  training objective. From the perspective of the computation graph, like if you think  about what happens if you solve an SDE, that's an infinitely deep  computation graph because, at this point, you're discretizing  it, of course. But in principle, you can make it as deep as you want because  you're choosing increasingly small time steps. This can become a very deep computation graph  that you can use at inference time. So again, that's the key advantage that you can  have a very deep. You can use a lot of computation at inference time to generate  images without having to pay a huge price at training time because the models are  trained through this score-matching kind of-- kind of like working at the level  of small changes, figuring out how to improve an image by a little bit. And then  you stack all these little improvements, and you get a very deep computation graph  that can generate very high-quality data sets. So it's a lot bigger tree, right? So if do this for some latent variables because  you have your breakdown is smaller? Yeah. So latent variables-- I guess, I  don't have time to talk about it today, unfortunately. But there is a way to think  of it from the perspective of-- it turns out that there is a way to convert this model  into a normalizing flow, at which point, you would have latent variables. And  the machinery looks something like this. We have this stochastic differential equation  that goes from data to noise. It turns out that it's possible to describe a stochastic  process that has exactly the same marginals, but it's purely deterministic. . So there is  an ordinary differential equation-- the kind of things you probably have seen in other classes--  that has exactly the same marginals over time. And again, this ordinary differential equation  depends on the score. So if you are able to estimate the score, you can actually generate  samples. You can go from noise to data by solving an ordinary differential equation instead of  solving a stochastic differential equation. And at that point, because you can think  of the ordinary differential equation as basically defining a normalizing flow because the  mapping from the initial condition to the final condition of the ordinary differential equation is  invertible. So you can go from left to right along these white trajectories or from left to right or  right to left. And that's an invertible mapping. So essentially, this machinery defines  a continuous time normalizing flow where the invertible mapping is given by  solving an ordinary differential equation. These white trajectories that are the solutions  of that same ordinary differential equation with different initial conditions-- they cannot cross  because that's how ordinary differential equation. So the paths corresponding to different initial  conditions, they can never cross, which basically means that the mapping is invertible, which  basically means that this is a normalizing flow. And so that-- I guess, I don't  have time to talk about it, unfortunately. But if you're willing to  take the normalizing flow perspective, then you can go from data to noise. 
And the noise is a latent vector that encodes the data. The latent vector has the same dimension as the data, because a clean image and an image plus noise have the same dimension. So it's really just a normalizing flow. The latent variables indeed have a simple distribution, because it's pure noise, and the mapping from noise to data is given by solving an ordinary differential equation, which is defined by the score model. So it's a flow model that is not trained by maximum likelihood; it's trained by score matching. And you can think of it as a flow with infinite depth. That's another way to think about it, which means you can also get likelihoods. That's the other interesting bit you get if you take that perspective.
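Here is a small sketch of the probability flow ODE view just described, again assuming a drift-free forward SDE so that the ODE is dx/dt = -0.5 g(t)^2 score(x, t); solve_ivp is just one convenient off-the-shelf solver, and the toy score is the same closed-form Gaussian example as above.

    import numpy as np
    from scipy.integrate import solve_ivp

    def probability_flow_ode(score_fn, g, x_start, t_start, t_end):
        # Deterministic process with the same marginals as the SDE:
        #   dx/dt = -0.5 * g(t)^2 * score(x, t).
        # Integrating from t_start = T down to t_end = 0 maps noise -> data;
        # integrating the other way maps data -> its latent code (the encoding).
        shape = np.shape(x_start)
        def rhs(t, x_flat):
            x = x_flat.reshape(shape)
            return (-0.5 * g(t) ** 2 * score_fn(x, t)).ravel()
        sol = solve_ivp(rhs, (t_start, t_end), np.ravel(x_start), rtol=1e-6, atol=1e-6)
        return sol.y[:, -1].reshape(shape)

    # toy example: N(0, I) data, forward dx = dw, true score -x / (1 + t)
    score = lambda x, t: -x / (1 + t)
    T = 5.0
    z = np.sqrt(1 + T) * np.random.default_rng(0).standard_normal(2)   # a "latent" / noise vector
    x = probability_flow_ode(score, lambda t: 1.0, z, T, 0.0)          # decode: noise -> data
    z_back = probability_flow_ode(score, lambda t: 1.0, x, 0.0, T)     # encode: data -> noise
    print(np.allclose(z, z_back, atol=1e-3))                           # invertible up to solver error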
Stanford_CS236_Deep_Generative_Models_I_2023_I_Stefano_Ermon
Stanford_CS236_Deep_Generative_Models_I_2023_I_Lecture_18_Diffusion_Models_for_Discrete_Data.txt
All right, so we're ready to get started. Today, we're going to continue talking about diffusion models. But we're going to see how we can use diffusion models to model discrete data and, in particular, text. And we have a guest lecturer by Aaron, who is a PhD student in my lab. And he did some groundbreaking work in this space of using diffusion models for discrete data and language. And so, yeah, take it away, Aaron. Thanks, Stefano, for the introduction and glad to get started. Let's get started. To start, I'd like to talk a bit about the general framing of our generative model problem and how things work, generally. So typically, we're given a dataset x1 to xn, which we assume is sampled IID from some data distribution p data. Our goal is to fit a parameterized model p theta, often parameterized by a neural network that approximates our ground truth data distribution p data. And assuming we can learn our p theta well enough, we can generate new samples, maybe new interesting samples would be the interesting part using our parameterized p theta. And now, if we can do everything together, everything works out, we profit. But there's kind of a bit of math that goes in between, as you all know. So in this class, you guys have learned a lot about different generative modeling paradigms such as GANs, VAEs, and diffusion models. And the thing that you'll notice for all of these different models or most of these models that you've learned is that whenever they draw a schematic diagram about what you should be doing, they normally have a picture-- they normally use an image as your most common data modality. So here, we have a picture of a dog, a picture of a number, and a picture of a smaller, cuter dog. And this type of coincidence actually-- it's not just a coincidence. It's actually a very fundamental reason why we do this. And the reason why is because all of our different data-- all of these different generative models, they're building on the fact that we're working over a continuous data space. So here, our data space X is equal to some R d, where you can think of R as each pixel value and d as a total number of pixels, so the values of the pixels. And if we visualize this using some spectrum like as follows, then what's nice is that we can sample points like here, here, or there. And these three different samples are all valid images, so to speak. And this is a fundamental property of continuous spaces and the fact that we can interpolate. Now, what I and a lot of other people are interested in, which is a bit converse to this type of setup, is discrete data space as follows. So instead of having X equals R d, we have X is equal to 1 to n to the power of d, where n is the total number of-- or is the vocabulary size, so to speak and d is the number of dimensions-- is a number of dimensions. We replace our data points with data points x where x is just a sequence of tokens x1 to xd. And then if we have the setup, we can visualize it with another diagram. This is a lattice, which is the simplest version of a discrete space. And while it's true we can generate samples like here and here, which are valid discrete data point samples, we can't really generate samples here or there. We can't generate samples in between or outside of the values because that just doesn't make any sense for discrete data distribution. And as such, this makes discrete data fundamentally a harder problem as we'll see. 
So now, you might be asking yourself the question, OK, Aaron, we've learned about GANs, diffusion models, VAEs. These all work pretty nice. Why do we have to go to a completely different regime? Why do we have to go to discrete data, and why does this matter? And I would be remiss if I didn't mention our good friends at OpenAI who have released these big large language models like ChatGPT, which have really like transformed the world in the last couple of years. Also, I would be remiss if I didn't mention other competitors. But yeah, fundamentally, we have this new novel paradigm of large language modeling, which is perhaps arguably the largest advancement in computer science machine learning in the last couple of years. And what's interesting about this data domain is that sentences are fundamentally discrete. So for sentences, it's a sequence of discrete tokens or discrete words that we build up. So as such, it would make the most sense to have a probabilistic model that can generate discrete data like sentences as such. And in particular, if you are familiar with the LLM natural language processing in general, you may have heard of something called language model pre-training. This is kind of the core step for many of these models where you learn distribution over all of your input sentences. And really what they mean by language model pre-training is you're just fitting a discrete probabilistic model to your internet scale data. So we can see that this idea is pretty fundamental here. And other applications include stuff in like natural biology and natural sciences more broadly. We have data modalities such as DNA, molecules and proteins. And all of these different data modalities are fundamentally discrete, and it would make the most sense to try to generate new novel DNA sequences, novel molecules, and novel proteins which can have a big impact in our day to day lives. And this all requires a discrete generative model. And finally, and this is a bit counterintuitive. We also see a return to discreteness for stuff like images. So this is the schematic for a VQVAE backbone or VQGAN is one of the many building blocks in systems like Stable Diffusion. And in the middle, we have this discretized representation, this discretized latent space vectors. And more recent work-- and this is extremely recent like in the last couple of months out of Google and CMU-- has shown that if you just like throw away any continuous notion of your discrete latent space, you only have the discrete component. This actually leads to a broader improvement in results. And results like these tend to show that maybe in the future it's possible to reconcile images into this broadly discrete paradigm as well. But, yeah. So now, we know why discrete data is important. So let's ask the question, why is it so hard? And you might say something like, hey, Aaron, this is all very interesting. Why can we just adapt existing continuous space model like a flow or a GAN? Why can't we just take that and just adapt it to the discrete case? And, well, we have something like this. So we have this diagram. We take some random noise. We push it through some f theta neural network, and it generates an image. And this is a good way to do our sampling and whatnot. And the intuitive idea here would be like why can we just parametrize f theta to output only discrete values? And since it only outputs discrete values, then we can generate something like text. And we can go through some examples here. 
But let's say for flows, we have this kind of core. We have this coupling where we take noise to data, data to noise through our f theta or f theta inverse that you guys have seen. And the way that this works is that you can stretch and pull your space. And this allows you to take the model complex data distribution with a simple base distribution and a change of variables formula as such. Now, if we replace all of this type of stuff with discrete data, so let's say we go from a discrete random sequence to we map it like bijectively to another real sequence that we want to model. Well, we have a change of variables. Well, we don't really have a change of variables. In fact, the best that we can get is this type of setup where our x has the same probability as this other x. And because of this, your base distribution has to be as expressive as your data distribution, which is why this type of setup struggles really hard. And for this question, yeah, we have this flow. It doesn't really generalize. Also, we have let's say GANs. OK, we take a noise. We map it to an image. We have a discriminator. And then the idea here is we can backpropagate gradients to update our f theta from our discriminator. And if we replace the components with discrete values. So we parametrize it to only allow for discrete outputs, then these gradients don't really backpropagate through a discrete valued input. And the reason why this doesn't work is because we don't have calculus. So from these two examples, we can broadly see that our-- we go through this slide quickly. But our conclusion is that our models are currently too reliant on calculus, and it's hard to extend it. Before transformers, this is kind of a modeling question. It is kind of an architectural problem, not a modeling problem. If that makes sense. So for transformers, when you map it like the input sequence to a discrete-- or to a continuous value, really the reason why this works and the reason why people do it is because it like takes your 50,000 or whatever token space down to 700, which is much more amenable for computation. But you don't really have to do it. So this is kind of an architectural decision and has nothing to do with the fundamental modeling component or probabilistic modeling part. OK, why can't we just embed the tokens into continuous space and do this type of we embed the tokens in continuous space, and then when we generate, we just generate the values, and we discretize kind of. And actually, this is kind of a-- people actually do this for some things. Like, in particular, we can take a look at something like images. For images, an image is-- we don't actually store like the whole continuous values of an image because that would be impossible in a computer. We only have finite [INAUDIBLE]. Generally speaking, we discretize it up to 0 to 255. We have this discretized representation. This is what we store as our quote-unquote "ground truth image." And the idea here is that for our generative modeling-- what people do use for a system like Stable Diffusion or any generative model broadly is a 2-step procedure. First, you have your continuous generative model. You generate a continuous image. And then you discretize it to the nearest value, and this becomes your discrete image, which is kind of what people do generally speaking. So this is how they get around the discrete nature of images and use continuous models here. And the reason why this works for images, in particular, is because if we have the images values 0, 255. 
This is a typical range for an image pixel value. And then we can just embed this into some continuous space directly. So we just embed it on the real number line. And if we generate any one of these three different generations, well, what we can do is that we can just easily discretize because we just round to the nearest number. This is very simple. Now, if we have something like tokens, let's say, for natural language, OK, there's no way we embed this into a continuous number line like that. And generally, the way that people do this is like something very high dimensional. Like this is two dimensions, people generally do much higher dimensions than this when we try to do our embeddings. And if we try to generate stuff here and try to generate and discretize something here, what you're going to end up with is like, OK, yeah, sometimes, your generations will be good. OK, so if we have generally tokens like in the green X marks there, It's all good. But if we are-- most of the space is empty. So we end up having a lot of empty room between tokens that maybe it's possible though to discretize it into a nearest neighbor token, but it's kind of much more-- like it's much more like not-- it's much less obvious why this would work. Sometimes, when we go between tokens, it doesn't really make sense. And this is kind of a fundamentally hard problem in graph theory, is actually the reality. Yeah, so this would work if your model is like perfect. And it would work if your model is perfect. But in practice, this is kind of not the inductive bias we want to build into our model. It typically makes it pretty hard to learn, for instance. So for something like, let's say, a diffusion model. And there's like language diffusion models that do this exact same procedure. You continuously embed your tokens. You take a diffusion process there. The issue that we'll see for those models that we've seen for those models is that it's kind of they're not really competitive with your standard autoregressive model. And also, they take way too long because you can't have any error whatsoever. If you have any error whatsoever, you're just kind of lost. It doesn't work. For autoregressive modeling, we model the probabilities, which is a different quantity than modeling the actual value. So we can model the probabilities. So like, let's say, the probability of the word "the" or versus the probability of the word "times." and this is like a very continuous quantity. But if we were to say like, hey, let's just take a transformer. We push it through a linear layer, and you select one value. This would be the setup. We would select one value from your continuous space. You don't have probabilities for all the other tokens. You just select one value. This will become more difficult. We try to generate something that's in distribution. So something near one of these tokens. But yeah, this is the case. Yeah, basically, we're modeling a probabilistic model. We try to discretize. And this is just a lot of empty space. This is not a good inductive bias to learn over. And so because of these various issues, we only have one really good discrete probabilistic model. You notice that's the autoregressive model, transformers, just very typical. The idea here is that instead of-- the idea here is you model the probability of each of your sequences x by decomposing it by token. So you model the first token here. Then you take the second token given the first token. 
That's the next probability, and you just multiply it out to get the probability of the last token given every token beforehand, which is your typical setup. And, yeah. And for language in particular, this decomposes as the idea of context in green. So you have a context tokens and you have a next word prediction in purple basically. And this is the reason why this works so well. And, yeah, there's several good upsides to this autoregressive modeling paradigm. So in particular, it's very scalable. The idea here is that when you compute each next token probability. You only need to compute a probability over your D total tokens or your n total values. And this is very scalable. It's very easy to do this as long as you can build your architecture sufficiently good. This should work out pretty well. Another thing is that if you have a very strong neural network, if you have a neural network that is sufficiently powerful, then you can theoretically represent any probability over your sequences by this decomposition, which is counterintuitive, but it actually works itself out due to this decomposition nature. And finally, it's actually a pretty reasonable inductive bias for stuff like natural language. So for natural language, we speak, we write from left to right. So it's pretty natural that we would do so here for modeling language. For modeling languages, this will make sense as well. There are several downsides though, which have been largely unaddressed by most people because of this overreliance on autoregressive modeling. One famous argument that people like Yann LeCun have really been proponents of is the idea that sampling and autoregressive modeling tends to drift. So when you sample from an autoregressive sequence, you just generate new tokens, but you can accumulate an error. As you continuously accumulate the error, this will cause your generation to veer off course. This is a very famous argument for why we're not going to get AGI through autoregressive modeling. And another issue is that for non-language tasks like, let's say, DNA sequence. Well, there's DNA sequencing. There's like no reason why DNA sequences have to be generated from left to right. I mean, this is not very-- this doesn't make sense as an inductive bias. Furthermore, and this is something that people haven't really been thinking of. But actually, when we have an autoregressive transformer, there's actually a lot of constraints that we need to place on our autoregressive transformer, in particular, making sure that the attention mask is causal, that people haven't really been like a cognizant of. But they are definitely still-- they are definitely still problems for like this probabilistic modeling paradigm. And finally, because we sample iteratively, in autoregressive models, we generate next tokens. This is actually a pretty slow technique. Because of the fact that it's like rather iterative, you have to generate tokens one at a time, which is not great. So yeah, we have all these problems and benefits of autoregressive modeling. So the question that we need to ask ourselves, is there something more to this? And we can think about it in terms of score matching, which I'm sure you're all aware of. And yeah, the key idea behind why we can't just model p theta of x directly instead of why we have to do this autoregressive decomposition is because p theta of x, we have to make sure that as we sum over all the different sequences, we have to sum up to be 1. 
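As a tiny illustration of the factorization just described, the log-probability of a sequence under an autoregressive model is simply a sum of per-token conditional log-probabilities; the toy "model" below is only a stand-in for a real network with a causal attention mask.

    import math

    def autoregressive_log_prob(tokens, next_token_logprob):
        # log p(x_1 .. x_d) = sum_i log p(x_i | x_1 .. x_{i-1})
        # next_token_logprob(prefix, token) can be any callable, e.g. a causal transformer.
        return sum(next_token_logprob(tokens[:i], tokens[i]) for i in range(len(tokens)))

    # toy "model": uniform over a small vocabulary, ignoring the prefix entirely
    vocab = ["it", "was", "the", "best", "of", "times"]
    toy_logprob = lambda prefix, tok: math.log(1 / len(vocab))
    print(autoregressive_log_prob(["it", "was", "the", "best"], toy_logprob))  # 4 * log(1/6)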
And making that sum equal to 1 is basically impossible, because there's an exponential number of sequences that we need to sum over. And so this is very similar to the idea of score matching that we've been talking about for the last couple of lectures, where we model the gradient of the log probability. When we do that, we don't have to sum over all possible values; we don't have to integrate the distribution out to one, and that tends to work really well when you combine it with things like diffusion models. So the real question here, and the thing we'll talk about for the rest of the lecture, is how we can generalize these score matching techniques to the discrete case. That is the real question of this lecture: can we do it, and how well does it work? So let's take a look at the outline of how we go about doing this. There are three steps. The first is, how do we extend score matching to discrete spaces? This is a pretty well-known problem, and not many good solutions have been proposed previously. The next question is, once we learn the score-- which, in the discrete case, is called the concrete score-- how do we generate new samples using concrete scores? And finally, when we build this generative model to generate new sequences, can we evaluate likelihoods? The reason we want to evaluate likelihoods is to compare fairly with autoregressive modeling on perplexity-style tasks. So, first point: can we generalize score matching to discrete spaces? When we think about the core building block of score matching, we really think about the gradient. And there's a nice generalization of the gradient to a discrete space. The idea is that the gradient of a function f evaluated at a position x becomes a finite difference, because a finite difference is the generalization of a derivative when the space is not continuous. So the gradient becomes the collection of f of y minus f of x, indexed over all other y's. That's the generalization of a gradient. And using this, we can build a generalization of the score function. The score function is the gradient at position x of the log probability, and by the chain rule that's really the gradient of the probability divided by the probability. When we substitute the finite difference in place of the gradient in that second line, what we get is the collection of all p of y over p of x, minus 1. This p of y over p of x, indexed over all y, is what we'll learn, and it's called the concrete score. It directly generalizes the score function from continuous space to discrete space, which is nice. But how do we learn it? Well, there's one issue: if we try to model all of the ratios p of y over p of x for every other sequence y, that doesn't make sense computationally. If we let y be any value that's not x, we end up having to model way too many quantities-- order n to the 2d, which is exponential, so that doesn't work. But instead, we can model the ratios between two sequences that differ at only one position.
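Before moving to the one-position restriction, here is the basic concrete score on a toy three-state distribution, exactly the finite-difference construction above; the numbers are purely illustrative.

    import numpy as np

    p = np.array([0.5, 0.3, 0.2])            # a toy distribution over 3 states

    def concrete_score(p, x):
        # discrete analogue of the score: replace the gradient with finite differences,
        # giving the collection p(y)/p(x) - 1 over all y != x
        return np.array([p[y] / p[x] - 1.0 for y in range(len(p)) if y != x])

    print(concrete_score(p, 0))   # [0.3/0.5 - 1, 0.2/0.5 - 1] = [-0.4, -0.6]
    print(concrete_score(p, 2))   # [0.5/0.2 - 1, 0.3/0.2 - 1] = [ 1.5,  0.5]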
So, instead of all pairs, let's model the ratios between any two sequences that differ at only one position, which is a much more local construction. If we do this, the complexity is only O of n times d-- vocabulary size times sequence length-- which is much more computationally feasible. And when we model these ratios between sequences that differ at one position, we can parameterize them directly. In the derivations we'll normally just write out a single ratio, p of y over p of x, because it's simpler, but everything generalizes pretty easily to this multi-dimensional case; just keep that in mind for the rest of the talk. We can also model these ratios pretty easily with a neural network. We feed in a sequence x1 to xd, we push it through a sequence-to-sequence neural network, and we get back another sequence with an extra dimension attached. So at the first position, for example, we get the ratios of the probability of y, x2, up to xd over the probability of x1, x2, up to xd, for every possible y, and similarly at every other position. We directly model the ratios of all neighbors that differ at only one position this way. It's sequence to sequence: you go from a length-d sequence of tokens to a length-d sequence of ratio vectors, and we just push it through the network in parallel. You can think of it as a non-autoregressive, BERT-style transformer. OK, so that's the idea: we have a sequence-to-sequence model that outputs the ratios for sequences that differ at only one position. So how do we learn it? That's the obvious question: how do we learn this concrete score? Our goal is to learn a neural network s theta such that s theta of x, at a position y, gives us the relative ratio p of y over p of x. And we need to do this in a principled manner: we can't allow negative values for s theta, and with enough data and enough model capacity, we should be able to recover the ground truth. The way we do this is very similar to score matching, with the following loss function. We're calling it score entropy because it's closely related to cross entropy. The idea is that it's a discrete generalization of score matching: we take an expectation over x under our probability distribution p, we sum over all of the neighbors y, and we minimize this new type of divergence function in the middle in order to optimize it. We'll see soon why it needs to be constructed this way. But the idea is that score entropy is a generalization of score matching, for our discrete scores instead. And you might not believe me, but this score entropy loss actually does recover the ground truth. If we simplify the notation a bit and just look at the quantity we want to minimize, we can set the derivative to zero, and we get 1 minus p of y over p of x times 1 over s equal to 0. Rearranging, we get that when we minimize it correctly, s should be equal to p of y over p of x.
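Here is the per-pair piece of the score entropy loss in code, for a single predicted ratio s and true ratio a = p(y)/p(x). The constant term a*log(a) - a is one common choice that just shifts the minimum to zero and is not needed for the derivative argument above; the full loss averages such terms over x and its neighbors, possibly with weights.

    import numpy as np

    def score_entropy_pair(s, a):
        # l(s) = s - a*log(s) + (a*log(a) - a);  dl/ds = 1 - a/s, which is zero at s = a,
        # so minimizing this term recovers the ground-truth ratio a = p(y)/p(x)
        return s - a * np.log(s) + (a * np.log(a) - a)

    a = 0.2                                     # the ground-truth ratio used in the lecture's plot
    s_grid = np.linspace(0.01, 1.0, 1000)
    losses = score_entropy_pair(s_grid, a)
    print(s_grid[np.argmin(losses)])            # ~0.2: the minimizer is the true ratio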
And we can also visualize the loss function for something like a ground truth ratio of 0.2. And we can clearly see that it satisfies all of our requirements. Basically, it's convex. And it will recover the true value if we just minimize this. And finally, we can do this between for all pairs x and y. So we can just do this independently for all of our x and y. Which means that if we learn everything correctly, we should recover the ground truth for every pair x and y basically. So, yeah. We have this score entropy loss function, and how do we actually optimize it? Very similarly to score matching. Here's the issue, right, we have this loss function, but this py over px is ground truth value, is not known to us at all. I mean, if we knew it, we could just use it. So it would make sense that we have to find a different way around this in order to learn it. And we have two different ways of doing this, one of these alternative loss functions, which we're calling implicit score entropy. This is a natural generalization of implicit score matching. But we won't be covering it in this lecture. But just nice to know that exists. And we also have another loss function called denoising score entropy or denoising score entropy. And this is analogous to denoising score matching for our score entropy case. The look at denoising score entropy. Here's the idea. If we assume that our px is equal to a convolution between a base distribution p0 and some kernel p, well, then, we can write out our probability in this summation over all x0. And when we do that, we can take a look at our initial score entropy loss. And then yeah, the idea here is that we can just remove the expectation as first. So we instead, we just move in px, and this gets rid of the px and the denominator, but the summation over all x. Then in order to look at things more concretely, we take a look at this decomposition above. And we just apply it to this py term to get out this following decomposition, which is basically, we add in an expectation over x0. We can basically move around our values a bit. So we can move the summation term to the front by Fubini theorem. And we can also add in a p of x given x0-- all over p of x given x0 so we can rework it here. And then once we have everything in this setup, then we can basically just take the last two terms into our expectation. We just move those terms, take them away from the summation, and move it into our expectation. And this gives us an equivalent form. And the nice thing that we'll notice for this equivalent form here is that we only have this relative ratio of p of y given x0 over p of x given x0, not p of y over p of x. And as such, this is possible to compute. And this is something that's possible to compute because we can assume that our transition kernel p is tractable, but we can't assume that our data distribution p is tractable basically. This x0 is kind of your base data point. And this transition kernel, it can be anything basically. It can be anything. So much more than just this type of noise. But in the continuous case, you would also write out like this but for practical reasons. That's why people choose to use a small Gaussian addition in order to do this same exact thing. But this is basically the same exact thing. So we have this way to get rid of the y over px. And as such, we have the following denoising score entropy loss function. And it's particularly scalable because we can sample an x0. 
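A brute-force version of the denoising score entropy loss on a tiny state space may help make the derivation concrete. Here s_theta is just a table of predicted ratios, K is a known transition kernel with K[x, x0] = p(x | x0), and constant terms that don't depend on s_theta are dropped; this is only a toy Monte Carlo sketch, not how it would be implemented at scale.

    import numpy as np

    def denoising_score_entropy(s_theta, p0, K, rng, n_samples=20_000):
        # E_{x0 ~ p0, x ~ p(.|x0)} sum_{y != x} [ s_theta[x, y] - (p(y|x0)/p(x|x0)) * log s_theta[x, y] ]
        # Only ratios of the known perturbation kernel appear, so the loss is tractable
        # even though the perturbed marginal itself is never needed.
        n = len(p0)
        total = 0.0
        for _ in range(n_samples):
            x0 = rng.choice(n, p=p0)                 # clean sample
            x = rng.choice(n, p=K[:, x0])            # perturbed sample x ~ p(. | x0)
            for y in range(n):
                if y != x:
                    ratio = K[y, x0] / K[x, x0]
                    total += s_theta[x, y] - ratio * np.log(s_theta[x, y])
        return total / n_samples

    rng = np.random.default_rng(0)
    p0 = np.array([0.5, 0.3, 0.2])
    K = 0.7 * np.eye(3) + 0.1                        # column-stochastic perturbation kernel
    pt = K @ p0                                      # perturbed marginal (used only to build an oracle)
    s_true = pt[None, :] / pt[:, None]               # s_true[x, y] = pt[y] / pt[x]
    s_wrong = np.ones((3, 3))
    print(denoising_score_entropy(s_true, p0, K, rng))    # lower loss
    print(denoising_score_entropy(s_wrong, p0, K, rng))   # higher loss: the true ratios win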
We can sample through this-- we can sample through the perturbation kernel. And then we only need to compute this s theta of x once for this summation value because we're only using theta of x. And then, finally, we can compute this ratio of transition kernels by just the way that we define it. So everything becomes computationally tractable, and we can optimize this loss. So now, we have a way of learning the concrete score, the next question is, how can we sample using a concrete score? We have this way of estimating the concrete score, learning the ratio of the data distribution, how do we generate new samples? And this is really diffusion oriented. So in order to do this, we have to define a diffusion process for our discrete tokens kind of. And as we all know, diffusion is just a probabilistic evolution, a way to go from p0 to some pt. So we can work off of this direction directly. Our pt now is just a big vector. We can think about it as a big vector. So because each of our probability at a certain sequence is basically just like some number that is greater than or equal to 0 and everything sums up to be 1. So we can think about it as a big vector kind of. And in a way that we evolve our distribution is with an ordinary differential equation. This is the most natural way of doing things. So our pt is a vector. We take a time derivative with respect to that. And then we can compute the transition based off of this like matrix Qt times our initial vector pt. So we do a matrix vector multiplication. And some things about this diffusion matrix that are not obvious but these are just hard requirements. We need to make sure that this diffusion matrix has columns which sum up to be 0. And also, we need to make sure that this diffusion matrix is non-negative at all, like non-diagonal points basically. And the idea here is that Qt controls how often we go if we jump from one state to another. And we can do this pretty directly here. So basically, if we want to jump from a state i to state j over a period of delta t, then basically, we just take a look at whether or not we stay at the current function, and then we just add the following matrix term times delta t. And then we have some second order term that we get rid of for practical purposes. This is kind of like the analog of Euler-Maruyama sampling for diffusion models, this time discretization of our sampling process. And so we clearly see here that our Qt is-- these matrix entries are the jump transition rates between i to j kind of. And so, once we have this set up, let's take a look at a couple of examples. This is not a very intuitive thing. But let's take a look at the following Qt. Our Qt is given by this matrix, this negative 2, negative 2, negative 2 on the diagonal 1, 1, 1, every 1 everywhere else matrix. And let's say we take an initial distribution of 0.5, 0.2, 0.3. When we multiply this stuff out. We get a transition rate of negative 0.5, 0.1, 0.4. And what's interesting about this is that the values sum up to be 0, which is important in order to maintain the fact that our probability is always sums up to be 1. And also, it's always a valid transition rate between different states. For this type of setup, we can actually compute the intermediate densities pt by just exponentiating out this matrix times this 0.5, 0.3, 0.2 initial vector, which allows us to compute intermediate densities by solving the ODE basically. 
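The worked example can be checked directly in a few lines of NumPy, using the initial vector (0.5, 0.3, 0.2), which reproduces the quoted rates of -0.5, 0.1, 0.4:

    import numpy as np
    from scipy.linalg import expm

    Q = np.array([[-2.0,  1.0,  1.0],
                  [ 1.0, -2.0,  1.0],
                  [ 1.0,  1.0, -2.0]])      # columns sum to 0, off-diagonals are non-negative
    p0 = np.array([0.5, 0.3, 0.2])

    print(Q @ p0)                           # [-0.5, 0.1, 0.4]: rates of change
    print((Q @ p0).sum())                   # ~0: total probability mass is conserved

    # one small Euler-style step of dp/dt = Q p
    dt = 0.01
    p_step = p0 + dt * (Q @ p0)
    print(p_step, p_step.sum())             # still a valid distribution

    # intermediate densities by solving the linear ODE: p_t = expm(t * Q) @ p0
    for t in (0.1, 1.0, 10.0):
        print(t, expm(t * Q) @ p0)          # drifts toward the uniform distribution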
And if we do this, we can actually also check to make sure that the transition actually satisfies this above statement basically. Or for the first value, you're losing mass at a rate of negative 0.5. And then the other two are gaining mass at 0.1, 0.4. So the total mass remains the same, but like the relative ratios change. Building off of that, basically generally speaking, we'll take a Qt is equal to a noise level times a Q matrix. And then once we have that, this becomes a linear ODE. Everything linearizes. We have a linear ODE. And in order to solve basically just very general, we can solve the intermediate densities by doing this matrix exponentiation in order to solve the linear ODE. Basically here, in many ways, compute this like exponentiation. But simpler is better. And the idea here is that we can calculate the transition rates with a long horizon transition rates by taking column and-- by taking entries of our exponentiated matrix basically. So, yeah, this is great. And another thing that's also important for diffusion is that as t goes to infinity or pt will go to p base basically. So this is just making sure that we approach a nice base distribution basically. I guess other thing to-- I mean, in this case, we can take a look at the following matrix as negative 2 matrix. We exponentiate out with respect to some t, and we get this thing basically. It's not as bad as it looks. And then as we go to infinite time, we just go to a random value basically. So this is a uniform transition matrix. We just go from an initial point to any other point randomly eventually. Similarly, we have this masking thing where we add a new dimension. We add a new dimension to our three-dimensional case. And basically, we only have transitions to this new state. And our exponentiated matrix looks like this. And as we take infinite time, the diagonal disappears and everything goes to mask basically. This is a mask transition. Well, the first case basically the idea here is you just randomly go from your initial value to any other random value. And in the second case is you randomly go from your initial value to a mask value basically. It just determines where you're moving. We have this continuous time Markov chain setup. And generally, we're looking at sequences. The idea here is a set of-- coming from sequence to sequence, which would be very expensive because we have to consider the transitions between in our sequence to any other sequence. And this is a computationally intractable. We instead like go from token to token. So instead, we just flip one token at a time would be the idea. And as such, this is O of d squared because we only have to consider one token. And because of this, when we do our overall transition between sequences, this becomes the overall transitions between tokens. So it factorizes basically. This is just like another point there. And what's nice about this is that we can change this with our score entropy to estimate the intermediate density ratios. So if we assume that our samples are from-- assume we have some samples x0 given from p0, then we can learn our s theta. We now add a t value in order to estimate the pt values, PT ratios. We have another extra t input, but it's the same setup. And now, we have our denoising score entropy loss function. And the idea here is now, we can take these transition values, this transition between two different states. This is all given by our initial rate matrix Q basically. 
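For concreteness, here is what the two transition structures discussed above look like as rate matrices on a tiny vocabulary (sketch only; the extra row and column in the absorbing case is the MASK state):

    import numpy as np
    from scipy.linalg import expm

    n = 3
    # uniform transitions: jump from the current value to any other value at equal rates
    Q_uniform = np.ones((n, n)) - n * np.eye(n)

    # absorbing ("masking") transitions: add one extra MASK state; every real token
    # flows into MASK, and nothing ever leaves it
    Q_absorb = np.zeros((n + 1, n + 1))
    Q_absorb[:n, :n] = -np.eye(n)           # each real token decays...
    Q_absorb[n, :n] = 1.0                   # ...into the MASK state (last index)

    p0 = np.array([0.5, 0.3, 0.2])
    for t in (0.5, 2.0, 10.0):
        print(t, expm(t * Q_uniform) @ p0)                    # -> uniform over the 3 states
        print(t, expm(t * Q_absorb) @ np.append(p0, 0.0))     # -> all mass ends up on MASK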
So more or less, what this is saying is that we can optimize our denoising score entropy using this Q setup, using this diffusion setup. It's all very natural. And so the question here is that, OK, now, we have a way of going from data to noise. We also have a way of estimating the intermediate ratios. What can we do with this? Well, the idea here is we can reverse the diffusion process. So if we go from p0 to pt, which is p data to p base roughly speaking, the idea here is, can we go back from p base back to p data? And actually, there is a way of doing this. So there is this other type of diffusion, reverse diffusion process where, basically, we take the time derivative. But in here, we're going backwards in time. And we have a new matrix Q bar, which is a bit more different. And the idea behind Q bar is that Q bar is-- an input j and i this is equal to the density ratio pt of j over pt of i and times this initial Qtij for any i and j not equal. Basically, we have this following relationship between the forward and reverse diffusion matrices, which is pretty neat. And also, I guess the other thing to note here-- I won't write it out-- is that for Qti of i or bar QTi of i. I'm not going to write that out because you just need to make sure the columns sum to 0. So we just assume it's just some-- we can extract it from the other values basically. It's i and j represents an index basically. So, I mean, for our purposes will be like a sequence, but this is kind of hard to write out. But you can think about in matrix, you just take the-- matrix and vector, so the matrix, you just take the jth row ith column entry. And then you take the ratio between the two corresponding entries in the vector, which is the probability vector. But, yeah. So, yeah, we have this reverse setup. And again, what's nice is that we have this appearance of our ratio basically of our concrete score. So in particular, we can approximate it with our learned concrete score network as theta. And this goes back to the reason why we like parametrize everything this way is that the way that we do it is that we have initial state i, and then we basically compute the concrete score of s theta it. And this goes over all the various j indices. And if we do this, it allows us to, in parallel, jump to any of the other states that we want to jump in because of the way that we parameterize things. So everything kind of works its way together in this setup. As an example, we can have this initial matrix here. We multiply it out. This is the rate, negative 0.5, 0.1, 0.4. And then we can construct like the corresponding reverse matrix here. This reverse matrix, if you work itself out, it looks something like this basically, where we add in the ratios of the data vector at the time. And then we multiply this reverse matrix by this probability vector. And actually, what you'll get out is the exact reverse. It's the exact reverse, the 0.5, negative 0.1, negative 0.4. So here, we can see that just it works basically. And as an example, we can visualize as follows between the uniform where we basically just go with two other random values, and eventually, denoises to some initial sequence. And also, we have it for the mask basically, where we can go from mask to our initial tokens. So this is all pretty nice, and we have this nice setup. 
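The reverse rate matrix from the example can also be checked numerically: build Qbar[j, i] = (p[j]/p[i]) * Q[i, j] for j != i, fill the diagonal so the columns sum to zero, and multiplying by the current distribution gives exactly the negated forward rates.

    import numpy as np

    def reverse_rate_matrix(Q, p):
        # Qbar[j, i] = (p[j] / p[i]) * Q[i, j] for j != i; diagonal chosen so columns sum to 0
        n = len(p)
        Qbar = np.zeros_like(Q)
        for i in range(n):
            for j in range(n):
                if i != j:
                    Qbar[j, i] = (p[j] / p[i]) * Q[i, j]
        Qbar -= np.diag(Qbar.sum(axis=0))
        return Qbar

    Q = np.array([[-2.0,  1.0,  1.0],
                  [ 1.0, -2.0,  1.0],
                  [ 1.0,  1.0, -2.0]])
    p = np.array([0.5, 0.3, 0.2])
    Qbar = reverse_rate_matrix(Q, p)
    print(Q @ p)       # forward rates:  [-0.5,  0.1,  0.4]
    print(Qbar @ p)    # reverse rates:  [ 0.5, -0.1, -0.4] -- the exact reverse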
And, well, there's only one other problem that we kind of have, which is basically, when we try to actually do this reverse sampling, when we try to go through the various-- when we try to simulate the reverse, it's pretty slow. The reason why it's so slow is because we are jumping from-- this is all fundamentally comes down to our computational consideration. Basically, our x1 to xd, we're only jumping between that and another sequence which only differs at one point or at one position. And so when we construct a reverse, we can only also jump between sequences that differ only by one position, which you can imagine would be very expensive, especially if you need a jump, if you to continuously refine the individual position like as such. And so we cheat basically, and that's how we sample. We basically allow multiple steps within-- we allow one to sample multiple jumps in one sample step basically. So instead of going individually unmasking the tokens, let's say, it was the mask of mask. We just unmask both of these tokens simultaneously. We can do this pretty easily given our setup. But it's more or less kind of a way we can do this. And instead of sample, it was the best of times in one step allowing us to go through two different jumps at once. So, yeah, we can put everything together. We have an entire setup built. So the first idea is that we get some samples from our desired data distribution that we want the model. We define a forward diffusion process, whether it be like the uniform or the mask or whatever or maybe something more exotic of these for diffusion process given the transitions. And then we can learn the ratios using our score entropy loss function that we've defined. And then we can use these ratios to reverse the diffusion process, including adding and some discretization to make sampling faster. And let's see how this works. So this is an example of a text sequence that we were able to generate just randomly from our corpus. This is a GPT-2 level sampling procedure or GPT-2 level dataset and model size. And yeah, it's reasonably coherent, and everything is-- it works. That's kind of an idea. It works. But the idea here is like how does it compare with autoregressive modeling on the scale of dataset? So we can compare like samples as such. And we have a GPT-2. We're calling our model score entropy discrete diffusion, so S-E-D-D, SEDD. And so we have the GPT-2 model at the top. We have a SEDD model with an absorbing transition, which is like you go to the mask token. And we have a set with a uniform transition SEDD-U, which means that you go from your token to another random token whenever you transition. And generally, we're able to see that our S-E-D-D, SEDD models tend to outperform GPT-2 in terms of coherence when we do this baseline sampling method, when we try to sample from the distribution. And we can also visualize this more like as a function of number of sampling steps versus quality. So like in this graph on the right here, we have our two GPT-2 models. And if we try to generate out long sequences, it tends to look something like this where we generate out-- it takes 1,024 network evaluation functional evaluations in order to generate one of these outputs. And yeah, it tends to be pretty high. When we feed in these generated sequences into another larger model, they tend to say, hey, these sequences are very high perplexity. These sequences are very low likelihood. These sequences don't make sense basically. 
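As a purely schematic illustration of the "several jumps per step" shortcut described earlier in this passage for the masking process (not the exact update rule from the paper), one can start from an all-mask sequence and, at each step, unmask a fraction of the remaining positions, choosing each new token in proportion to the predicted ratios; the stand-in score function below is just a placeholder for the trained network.

    import numpy as np

    MASK = 0   # illustrative index for the mask token

    def reverse_mask_sampler(score_fn, seq_len, vocab_size, n_steps, rng):
        # Start from the all-mask sequence; at every step, unmask roughly
        # 1/(remaining steps) of the still-masked positions in parallel, sampling each
        # new token proportionally to the predicted ratios score_fn(x, t)[pos, token].
        x = np.full(seq_len, MASK)
        for step in range(n_steps):
            t = 1.0 - step / n_steps
            ratios = score_fn(x, t)                 # shape (seq_len, vocab_size)
            frac = 1.0 / (n_steps - step)           # the last step unmasks everything left
            for pos in np.where(x == MASK)[0]:
                if rng.random() < frac:
                    probs = ratios[pos].copy()
                    probs[MASK] = 0.0               # never jump back into the mask state
                    x[pos] = rng.choice(vocab_size, p=probs / probs.sum())
        return x

    # stand-in "network": uniform ratios, so this just fills the sequence with random tokens
    rng = np.random.default_rng(0)
    dummy_score = lambda x, t: np.ones((len(x), 8))
    print(reverse_mask_sampler(dummy_score, seq_len=12, vocab_size=8, n_steps=6, rng=rng))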
So we can see here that like as GPT-2 tends to pretty-- it tends to be pretty bad in terms of our evaluation as such, even from both the small and the medium. But these lines, which are our SEDD models basically, we can kind of trade off the compute versus the number of-- we can try to trade off quality and compute basically. So if we only take 64 steps, which means that we're doing a lot of discretization, we take a lot of simultaneous jumps, we end up with this model that matches GPT-2 in terms of its generated quality. But it's much faster basically. And also, if we really crank up the number of iteration steps. So we take, let's say 1,024 or even 2,048 sampling discretization steps, what we see here is that our quality gets progressively better and better in a log log linear type of fashion. So basically, we're able to generate sequences that are just significantly lower in terms of generative perplexity, which means that they're much better sequences if we just crank up the number of steps. We can't do it with GPT-3 or GPT-4 mostly because of model size. Like in this case, our model sizes are pretty small, like 100 million parameters, 400 million parameters. So we're matching the models by color. So blue models are small. Orange models are medium. For GPT-3 and GPT-4, the other issue is that the dataset is private basically. But for our GPT-2, the dataset is like web text or open web text, which is why we can do an apples to apples comparison. But, yeah. So the conclusion here is that, yeah, so quite surprisingly and pretty nice, and this is a pretty strong motivating factor is that this discrete diffusion model with score entropy tends to outperform autoregressive transformers, at least for generation quality and speed. And I guess another interesting thing, and this is another important thing is that for this type of generative modeling technique, what we need to do is we need to have controllable generation. We need to be able to control how we generate. And at least, in this case, we can do something similar. We can do prompting. But the new and interesting thing is that we can prompt from an arbitrary location. So if we have this top one here, we can take our blue prompt text. And the idea here is that when we generate our new sequence, we just generate around it. We don't change the prompt text, but we just generate everything else around it. This actually is principled if you go through the math. And it allows you to fill in the rest of the information there. So we also have something like in the middle, where we have these two prompt tokens, sequences of prompt tokens to the middle. We just generate around it. And this allows us to infill. And yeah, it typically tends to produce pretty coherent statements basically, which means that we're able to control the generation process in a new more interesting way kind of. Yeah, you can't do this with a typical autoregressive model. So now that we have our generation quality, the last thing that we need to look at is how do we actually evaluate likelihoods of this generative process. So we've shown how we can learn. We've shown how we can generate, how can we evaluate for likelihoods. So the typical metric that people use for evaluating likelihoods is perplexity. The perplexity of an input sequence x is basically just this e to the power of negative 1 over d times the log probability of x1 to xd. This is a very typical metric for use for autoregressive modeling. 
The reason why is because it's a relatively principled way of measuring the model ability. So if we can have a very low perplexity on some other dataset, it means that we're generalizing pretty well. We're compressing things, which is typically a good sign. Also, we can directly-- this is directly computable for autoregressive modeling because we can compute this p theta directly. And finally, we also tend to optimize with respect to something like this. Because at least for autoregressive modeling, we optimize with respect to this negative log probability and just taking the exponential or effectively optimizing this something similar. So which is why we can report something like this basically. And so for diffusion models, long story short, we can also do something similar. The math actually tends to be kind of a bit involved. But the key insight here is that we can take this under some mild conditions, like some very mild conditions on our base distribution, how long we diffuse for. Generative process has the following likelihood bound basically. So our negative log likelihood is bounded above by this integral and with this expected value and all this stuff. And when we also add some known constants C The C constant is known, a priori. And what's interesting here is that this integral or whatnot, this is exactly our denoising score entropy loss if we recall back to a couple of slides ago. And the only new thing is that we have to wait about this Qt of xty, which is this other weighting. It doesn't really affect anything for any of the computations basically. It's just this other weighting. But yeah, which means we can basically train with respect to this loss, this log likelihood bound loss. And so we end up getting a perplexity bound because we can just take the perplexity of the input sequence. We just feed it through this denoising score entropy loss with this weighting. And we basically get an upper bound just by the fact that things are-- like the E is monotonic basically, which allows us to report perplexity values as well. How does it work in practice? Well, across these different models, we do this whole GPT-2 train on open web text, evaluate on other types of datasets type of setup. And what we see here pretty consistently is that our GPT-2 model, it does tend to produce the best likelihood values. But SEDD with absorbing, the absorbing masking transition, it tends to be very close basically. So for most of these datasets , if we have a pretty close value within plus 10% or so, we underline it. And the reason why we have this plus 10% cut off is because of the fact that we're only reporting a bound and not the true ground truth like non-bound likelihood perplexity. But we have this underline here, and we also had the best results bolded. And what we consistently see is that our SEDD model, it can basically match on like WikiText2, WikiText103. It has to fall within this perplexity bound. And for something like PTB, it actually outperforms the existing model, like an existing GPT-2 pre-trained model. And sometimes, by a pretty considerable margin as shown in this middle bar here. Basically this middle line here. And so this is great because, now, we can show that. Basically, we can challenge autoregressive modeling not only on like generation quality, which is a bit more like of-- there's more moving parts there, but also on perplexity which is a more streamlined, more compact way of comparing between two different autoregressive models. 
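In code, the perplexity computation is just an exponentiated average of per-token negative log-probabilities; for the diffusion model, the same formula is applied to the likelihood bound, so the reported number is an upper bound on the true perplexity (exp is monotonic). A tiny self-contained example:

    import math

    def perplexity(token_log_probs):
        # perplexity = exp( -(1/d) * sum_i log p(x_i | context) )
        d = len(token_log_probs)
        return math.exp(-sum(token_log_probs) / d)

    def perplexity_from_bound(nll_bound, d):
        # if L >= -log p(x_1 .. x_d), then exp(L / d) is an upper bound on the perplexity
        return math.exp(nll_bound / d)

    # toy example: a 4-token sequence with probability 1/4 assigned to every token
    print(perplexity([math.log(0.25)] * 4))                # 4.0 -- as confused as a uniform 4-way choice
    print(perplexity_from_bound(4 * -math.log(0.25), 4))   # same number, via a (tight) bound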
Well, what you would do here is you would just like generate up to a end of text token, and you just post-process it kind of. And typically, for this open web text data, sequences are pretty long. Like 700 or so tokens out of the 1,024. So it's pretty comparable basically. But, yeah, OK, just to summarize. So the first thing is that it's pretty hard to build probabilistic models for discrete spaces. We have GANs, VAEs, diffusion models. And a lot of these things are pretty hard to naively extend from continuous space to discrete space, which is why we only really have autoregressive modeling as a way of doing things. Basically, so, autoregressive modeling is the only really viable paradigm in this space. The idea here is that we can extend score based models to discrete spaces. And we can do this by instead of modeling the gradients of the data distribution, we model the ratios of the data distribution, also known as a concrete score. We optimize this new score matching loss called score entropy, which we can also have these denoising and implicit variants of which make it tractable. And then we can sample from our process. We can sample from our score based model using a diffusion process, using a forward and a reverse diffusion process. So in particular, the forward diffusion process synergizes with our denoising score entropy loss, which makes everything pretty seamless integrate together. And we can make it fast and controllable for our generation, which is nice. And finally, our generation quality can surpass autoregressive modeling because of the fact that we don't have to worry about contacts. We can just generate a whole sequence in parallel. And this allows us more information during the generation process. Finally, we also have our likelihood bound based off of score entropy. This basically lines up perfectly with-- our score entropy loss basically lines up perfectly with the likelihood bound that one would hope to optimize or compare with. And for this task, we basically challenge autoregressive dominance for the first time on any large enough sequence like GPT-2 level result. For this case, we're computing a bound on the negative log likelihood. And the negative log likelihood goes into the perplexity here. So perplexity will kind of be like a forward KL divergence. And this is like a reverse KL divergence, will be the way this model. So basically, so GPT- 2, if we remove the fact that we're only reporting a bound, it tends to outperform. Really, it's much closer than that, which means that it's covering the modes of the data distribution sufficiently well. But it also has leakage basically. We can generate sequences that are low probability, but this doesn't show up in our KL divergence loss. So previously, you had this embed into continuous space. Generally, the issue that people have found is that it doesn't really work as well. Like the log likelihoods are-- I mean, so we had-- typically, let's take a look at this graph, thing here. So for previous continuous diffusion models, it would be way worse basically, like much, much worse like 2.5 times worse, something like this. This is typically like the range for discontinuous discrete diffusion model, where we discretize the tokens. And also, the issue is that for generation quality, it's much slower. Basically, when we try to generate the sequences, it becomes like-- because it's like so sparse, we have to make sure we don't have much error. So we have to take a lot of discretization steps. 
So for example, for some models, we'd have to take like 4,000 steps in order to generate a 1,000 length sequence, which is just too much basically. The idea is that hopefully the error isn't that much, and you can jump between the two. There are some principled way of doing it. And it's called tau leaping, if you're familiar with the-- it's called tau leaping in the chemical engineering literature or whatever. And it kind of works. So if you take very small steps, it's going to be reasonably conditionally independent assuming your ratios don't change too much. Your model doesn't change too much. So it's a discretization scheme basically. So like diffusion models, we also have a similar discretization scheme in [INAUDIBLE].. Presumably, you can learn any probability distribution over your discrete space with both methods. But the question here is which one builds in a better inductive bias and is more amenable for optimization? Qt is kind of like the transition rate but we exponentiate it. So like in particular, we have this-- so basically, Qt is a transition rate. And this exponentiated matrix is a transition kernel. We can do it for many time steps. But the issue here is that it's better to put in a time step so it becomes easier. Basically, the fundamental Q tends to stay the same, but just we just multiply it by some noise level. So it's kind of like all built in. Q is just a transition rate so something like this basically. It would go from uniformly. This is how we go other things uniformly kind of. Or in this case, where we go to a mask token. At each time step, the Q basically is scaled effectively. So we have a scaling to how much noise we add in at each time step. Yeah, a true scale is controlled by sigma. So this bounds basically the elbow bound from a VAE. So you would assume that your diffusion model, you have your forward diffusion process, which is your encoding layer and your VAE. And you're learning the reverse, like, real reverse diffusion process, which is your decoder. And then if you just work that out, you plug it in, this is the ELBO that you get out basically. This architecture right here is the key idea here. And the sequence to sequence neural network is just like a transformer. We basically make a transformer. But we have a non-causal mask, which allows us to go like-- which allows the attention layer to go from [INAUDIBLE] completely from everything to everything, basically. So it's like BERT basically. It would be like this-- for question answering, it would be like this basically. You just fill it in. You fill it in. We're separating out between the GPT-2 small, the small models and the medium models. And between the medium models, we have the absorbing and uniform state basically. So we have this uniform transition matrix and what is masking transition matrix basically. And typically, we see that the uniform tends to produce worse results than the masking basically, so like just randomly flipping words. And this makes sense because if you randomly flip words, then you're going to end up with sequences that kind of don't make sense. Whereas if you just mask the word, then the sequences still make sense broadly. I mean, if you assume that we can fill in the masks. And in this case, this is our generative perplexity, which is basically we generate a new sequence, and then we take a GPT-2 large model. And we evaluate the perplexity of this generated sequence on our GPT-2 large. And it's a pretty common evaluation to use like GPT-2 large evaluate the things. 
There are a bunch of different metrics built off of this generative-perplexity idea. We also looked at a Fréchet-distance-style metric, and we see an improvement there too. The basic recipe is that you take a larger model, extract feature representations or statistics from it, and use those to evaluate the outputs of your smaller model. The practical issue for us is that we need to compute this matrix exponential quickly. If Q is the size of the GPT-2 tokenizer, around 50,000 tokens, then computing the matrix exponential just takes way too long, on the order of 10 seconds per evaluation even on a GPU, because of how massive the matrix is. We tried experimenting with other, more structured choices of Q that would make this computation easier, but they didn't tend to work: it's a fundamentally different design choice, and the hardware isn't really set up to do that kind of matrix exponential efficiently. Thanks, everyone, for attending. Thanks, everyone, for listening. I hope you learned something.
Stanford_CS236_Deep_Generative_Models_I_2023_I_Stefano_Ermon
Stanford_CS236_Deep_Generative_Models_I_2023_I_Lecture_12_Energy_Based_Models.txt
Cool. So the plan for today is to continue talking about energy-based models, which is going to provide a lot of the foundation also to discuss score-based models and diffusion models. Just as a recap, this is our usual slide providing an overview of all the different things we've been discussing in this course so far. Energy-based models provide you yet another way of defining a very broad set of probability distributions and expanding that green set which potentially would allow you to get closer to the true data distribution. The nice thing about energy-based models is that they are defined in terms of this energy function f theta which can basically be anything. So you can pick whatever neural network architecture you want. And by using the expression that you see there, you get a valid probabilistic model where essentially, you can get the likelihood of a data point by looking at the unnormalized probability, which is what you get in the numerator of that expression and then dividing by the total like unnormalized probability that exists And so which is just the sum of the numerator over all possible things that can happen. So probabilities are defined relatively to this partition function, normalization constant which depends on the parameters of the model. That's the crucial thing. And the problem is that typically evaluating Z theta is intractable because we are interested in modeling random variables. So multiple random variables or random vectors x with many different components, which means that there is a huge number of possible x's that you would have to consider in order to compute the normalization constant. Which means that evaluating the probabilities of data points is generally going to be intractable. You can always evaluate the numerator very easily but it's very hard to evaluate the denominator in that expression. And the good thing is that comparing the probabilities of two data points is actually easy. And this is important for sampling. So if you want to know you have an x and an x prime, which could be two images, for example, you cannot easily evaluate how likely is any of the two according to the model. But it's easy to figure out which one is more likely because the ratios of two probabilities when you take the ratio, basically the two normalization constants, they cancel. And so it's easy to evaluate that expression in terms of whatever energy function, whatever neural network you use to represent f theta. And the price you pay is that once again, evaluating likelihoods is expensive. And so if you wanted to train the model by maximum likelihood, you would need to somehow be able to evaluate for every data point this expression or the log of this expression, which would be something like this. And the problem is that you have two terms that depend on theta. And so whenever you want to figure out how to adjust theta or how to pick theta to maximize the probability of a training data point, you need to figure out how to adjust the parameters of your neural network to increase the numerator, the unnormalized probability of this training data point, which is always easy. But then you have to worry about how does changing theta affect the normalization constant. So by how much are you changing the probabilities of everything else that could have happened. And so you need to figure out how to change theta so that this increases while the partition function the log normalization constant ideally also goes down. 
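To make the recap concrete, here is a minimal sketch (not from the course materials), assuming an arbitrary MLP energy, of which quantities are and aren't tractable in an energy-based model: the unnormalized log-probability f_theta(x) and log-ratios between two points are cheap, while log Z_theta is not.

```python
import torch
import torch.nn as nn

class EnergyModel(nn.Module):
    """f_theta: any network mapping x to a scalar unnormalized log-probability."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)        # f_theta(x); log p(x) = f_theta(x) - log Z_theta

ebm = EnergyModel(dim=2)
x, x_prime = torch.randn(4, 2), torch.randn(4, 2)

# log p(x) itself needs log Z_theta, which is intractable, but the log-ratio comes for free:
log_ratio = ebm(x) - ebm(x_prime)             # log p(x) - log p(x'); Z_theta cancels
```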
So that the relative importance, the relative weight of this training data point goes up as much as possible. Again, doing this is hard because we don't know how to evaluate the normalization constant exactly. So it's different from a likelihood-based model, like an autoregressive model where this partition function Z theta is guaranteed to be 1 regardless of how you choose the parameters of your conditionals, for example. In which case, you don't have to worry about if you were to change some parameters in your neural network, how does the partition function change because it's constructed by design to be 1 regardless of how you choose your parameters. So you basically only have the first term when you train an autoregressive model and it's easy to optimize and you don't have the issues that we have here. What we've seen is that it's relatively straightforward to come up with a sample based way of describing an approximation of the log partition function. And in particular, we've seen that there is this contrastive divergence algorithm that will give us a way of evaluating gradients of the log likelihoods, which is what you will need if you wanted to update your parameters to maximize the probability of a data point, you will need to evaluate the gradient of this expression here that we're maximizing. And it turns out that it's not too hard actually to figure out how the gradient of the log partition function, how the log partition function changes as a function of theta or what's the gradient of the log partition function if you have access to samples from the model. And so if you can somehow sample from the model, which we know unfortunately, is also relatively hard. But if you have access to samples from the model then you can figure out-- you can get an estimate for the gradient of what you care about by basically looking at the gradient of the energy on the training data versus the gradient of the energy on the samples that you generate from your model. So this is a fairly intuitive explanation where we're saying is, we're trying to figure out in which direction we should now update theta to increase the probability of the training data, we're decreasing the probability of some alternative fake synthetic data that is produced by our model. And by doing that, you're actually figuring out how the log partition function changes as a function of theta, that's the right expression, so to the extent that you can generate samples from your model. Then you have this contrastive divergence. And it's contrastive because you're comparing or contrasting the real data from and you're contrasting two samples from the model. And so you just need to figure out how to adjust your parameters to basically by following that expression that contrasts real data to fake samples from the model. So the gradient of log Z theta would be the figuring out if you were to change the parameters by a little bit, how does the partition function change. So how does the total unnormalized probability that you have change. So if you remember the analogy of the cake that we're dividing up into slices, this term is basically saying, what is the size of the slice that we assign to a particular data point. The other term is telling you how much does the size of the whole cake changes. And because everything is relative to the size, you have to figure out that to figure out how to push up the probability of a data point. 
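A minimal sketch of the contrastive-divergence gradient estimate just described, assuming `ebm` is an energy network as above and `x_model` are approximate samples from the current model, for example produced by the MCMC procedures discussed next:

```python
def contrastive_divergence_loss(ebm, x_data, x_model):
    """Surrogate loss whose gradient w.r.t. theta matches the (approximate)
    maximum-likelihood gradient: push f_theta up on real data and down on
    samples from the model (the samples stand in for grad of log Z_theta)."""
    return -(ebm(x_data).mean() - ebm(x_model).mean())
```

Minimizing this with a standard optimizer increases the energy on real data relative to model samples, which is exactly the contrast being described; the catch is that producing `x_model` requires running a sampler inside the training loop.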
Because it's not the size of the slice that matters, it's the relative size of the slice versus the total cake, the total amount of unnormalized probability. And this is the gradient of the log partition function which we can approximate with samples basically. [INAUDIBLE] The log partition function would be the log of this size of the whole cake basically. Cool. So that was the recap. And so training energy-based models by maximum likelihood is feasible to the extent that you can generate samples. And we've seen one recipe for generating samples from an energy-based model, which is this idea of setting up a Markov chain. So using this technique called Markov chain Monte Carlo, where essentially the way to generate a sample is to initialize some procedure by sampling x0 from some distribution. Turns out it doesn't matter what that is. But if you think about you're trying to sample a distribution over images, you start with some image, doesn't matter what that image is, at time 0. And then you basically try to make changes to this image, to this candidate sample that you have to essentially try to make it more likely. If you sample from this distribution pi which you initialize your algorithm with, this could be really bad. It could be just set values of the variables uniformly at random. So you start with pure noise and then you need to figure out how to change the pixel values to go towards high probability regions. And there is a principle way to do it, which basically involves trying to perturb, try to change your data point a little bit. If it's continuous, you might want to add noise. If it's discrete, maybe you change the value of a single pixel, something like that. Turns out can basically do many different things and they all work. And that way you propose a new sample x prime. And then sometimes making this little change by adding noise is good in the sense that you go towards higher probability regions and sometimes it's not. And so what the algorithm does is it checks basically how good this proposed sample is compared to where you are right now. And remember, this is good because in an energy-based model, although we cannot evaluate likelihoods, we can always compare two data points. So we can always check whether this sample x prime that we generate by making some local small change to the current best guess is better or worse than what we have. And if it's better, meaning that the unnormalized probability of x prime is larger than a normalized probability that we have before we did the perturbation, then we accept the transition and we say, OK, we're making progress. The state at time t plus 1 is this new sample x prime that we generated. And if not, then with some probability, which depends on basically how bad this proposed sample x prime is, we accept the transition anyways. So I mean the reason we're-- Adding noise is because we want a new sample, which is much more likely in our model. It can't be a guarantee that if we take the derivative of the model with respect to x and then learn that instead of being noisy, would that be proper? Yeah. So that's going to come up in the next slide, where we're going to use the gradient of the energy of the theta as a way to perturb the sample basically. In general, this machinery works regardless of how you do it. 
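A rough sketch of the accept/reject procedure just described, in random-walk Metropolis-Hastings form; this is an illustration rather than course code, and note that it only ever needs energy differences, never the partition function:

```python
import torch

@torch.no_grad()
def metropolis_hastings(ebm, x0, n_steps=1000, noise_scale=0.1):
    """Random-walk Metropolis sampler for p(x) proportional to exp(f_theta(x))."""
    x = x0.clone()
    for _ in range(n_steps):
        x_prop = x + noise_scale * torch.randn_like(x)          # small local perturbation
        log_ratio = ebm(x_prop) - ebm(x)                        # log p(x') - log p(x); Z cancels
        # Always accept uphill moves; accept downhill moves with probability exp(log_ratio).
        accept = torch.rand_like(log_ratio).log() < log_ratio
        x = torch.where(accept.unsqueeze(-1), x_prop, x)
    return x
```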
The reason and meaning that in theory at least, under some mild conditions on how you add noise, if you were to repeat this procedure for a sufficiently large number of steps, what you get converges to a sample from the true energy-based model. So you can picture this in your head as some local search or some kind of stochastic hill climbing procedure where you're trying to go-- trying to move around this space of possible samples looking for high probability regions. And the way you do it is you always accept uphill moves and with some small probability and occasionally, you accept downhill moves when the height of the hill would be the likelihood, or the log likelihood, or unnormalized log probability assigned by the model. And the reason this works is because this operator satisfies something called detailed balance, meaning that if we denote T x, x prime to be the probability of transitioning from one state to another state x prime, we have the following condition that the probability of being an x under the true distribution we're trying to sample from and transitioning to x prime is the same as the probability of being in x prime and doing the reverse move going back to x. You can see that this is true because one, either x or x prime is going to have higher-- let's say x prime has higher probability than x. Then the transition from x to x prime is this T is 1 and the probability of going from x prime to x is exactly the ratio of p theta x over p theta x prime, which is the probability with which we accept a downhill move. And it turns out that if that condition is true, then basically p theta is basically a fixed point of this kind of operator that we're using to propose new states. Meaning that if at some point xt is distributed according to p theta, then xt plus 1 is also distributed according to p theta. And what you can show is that under some condition, you actually converge to this fixed point. So p theta is a fixed point of this sort of operator and you get there regardless of where you start from. So regardless of how you choose this pi, how you initialize your sample, eventually, xt is going to be distributed as p theta, which is what you want because it's a fixed point of this operator. And I'm not doing justice to this topic. You could probably do a whole course on MCMC methods. But for our purposes, the important thing to note is that there are ways of sampling from energy-based models, namely MCMC. In principle, they work. In practice, what happens is that you typically need a very large number of steps before you get something good. So you can imagine if you were to start, let's say x is an image, and you start with random pixel values, and then you change them one at a time, it's going to take a lot of changes before you get to something that has the right structure. Even though you have guidance provided by this f theta so know when you're making mistakes and when you don't, it's still going to take a lot of steps before you get something that is good. And so that's the problem of energy-based models is that even if you have an energy-based model trained or somebody gives you the right f theta, generating samples is expensive. So that's the price you pay. You have a very flexible model but sampling from it is expensive. And note that if you wanted to train a model by contrastive divergence, you have to generate samples over and over during training. So it's not just something you have to do during inference time but even during training. 
If you wanted to use contrastive divergence, you would have to somehow use this procedure. So very, very expensive and very, very difficult. A slightly better version which was just proposed of this procedure is something called Langevin dynamics which is essentially a special case of what we've seen before. And it basically works the same in the sense that you start by initializing this process somehow, let's say, a random image, and then you still do your steps. You still an iterative procedure where you're trying to locally change your sample into something better. But the way you do it is by trying to go in a direction that increases-- that should increase the probability of your sample. So what you do is the way you produce this perturbed version of xt is by doing a step of noisy gradient ascent where you modify xt in the direction of the gradient of the log likelihood. Here I'm assuming that x is continuous. This only works on continuous state spaces. And so the gradient of the log likelihood evaluated at xt tells you in which direction you should perturb your sample if you want it to increase the likelihood most rapidly. And then you basically follow the gradient but you add a little bit of noise. And the reason is that just like before, we don't want to be greedy. We don't want to always optimize the likelihood. We want to also explore. So we want to occasionally take moves that decrease the probability of our sample just because we want to be able to move around and explore the space of possible images. But essentially, it is really take your sample, follow the gradient, and add a little bit of Gaussian noise at every step rescaled in some way. And you always accept the transition, at least in this version of the algorithm. There is also a version of this where you accept and reject kind of like the previous algorithm I described. But it turns out you don't even have to accept or reject. You can always move to xt plus 1 regardless of whether you land in a state that has a higher or lower probability than where you start from. And you can prove that under some technical conditions, again, this procedure converges to a sample from the distribution defined by the energy-based model in the limit of a large number of iterations. Yeah? Epsilon on both the actual gradient and the noise should be-- Yeah. So the reason we're using epsilon is that that controls the step size. So it's the step size in gradient ascent or descent. And it turns out that for things to work, you have to balance-- the amount of noise that you add has to be scaled with respect to how much you scale the gradient. So the signal to noise has to scale in that way for things too. [INAUDIBLE] where the square root of 2 is? Yeah. So it basically needs to keep the ratio between the amount of noise-- the signal to noise. Gradient to noise ratio has to be scaled that way to be able to guarantee this kind of condition. [INAUDIBLE] very small? In theory, yes. So it's only guaranteed to work in the limit of basically a step size is going through zero. In practice, you would use a small step size and hope that it works. Is that [INAUDIBLE] step size of near 0 due to the noise? We move the noise and instead, we have a higher-- So the step size because we are not doing accept or reject here, so if you remember this version here, sometimes we stay where we are and sometimes we accept or reject based on that. If you wanted, you can think-- basically here, I didn't really say how I produced this perturbed version. 
I just said add noise. But in practice, it turns out you can do it any way you want and it still gives you a valid algorithm basically. So if you define a way you add noise to it by saying, I follow the gradient and I add a little bit of Gaussian noise, that defines a valid procedure of proposing new data points. And as long as you balance it, then you would have a valid procedure regardless, even when epsilon is large. I mean you would still have the problem that basically you might accept-- you need to make-- then you have accept and reject. So sometimes you get stuck where you are so you don't want to take too much of a-- If the step size is too large, you might be-- the Taylor expansion is no longer accurate and so the probability might actually go down. And so then you might get stuck where you are. So it's still non-trivial but you can-- I guess, yeah, this is called the unadjusted version. That is the adjusted version which is basically you accept and reject and that one can work with finite step sizes. Cool. Oh, question. Is this faster than just regular morning? Or why would we use this over the previous one? So great question. And in general, I mean it still, in theory, can require a larger number of steps and the convergence is only guaranteed to be in the limit. But in practice, you can imagine that it's a much better proposal because you are following the-- you have a lot more information. Before, we were blindly making changes to the image. Well, now we're saying, OK, if you have access to the gradient information, it can be much more informed in the way you make proposed moves. And in practice, this is much better in terms of the number of steps that you need to converge. And the good thing is that even though the log likelihood depends on the partition function-- or maybe I don't have it here. But if you work out the expression, you will see that the Z theta depends on-- the partition function depends on theta but does not depend on x. So all x's have the same partition function. So when you take the gradient with respect to x, you just get the gradient of the energy of the neural network. And so computing the gradient of the log likelihood is actually easy even when you have an energy-based model. And so this kind of sampling procedure is very suitable for EBMs. And I mean, it's still problematic. In theory at least, the more dimensions you have, the slower things tend to be. And this kind of thing is reasonable to do at inference time. But even if you maybe need, let's say, 1,000 steps, or maybe 10,000 steps, or something like that of this procedure to generate a sample, it might be something tolerable at inference time. If you're generating, let's say, a million pixels, it's fine to do 1,000 steps of this procedure. That might require you to evaluate a big neural network, let's say, 1,000 times. Might not be too bad. But if you have to do it during training, then things become very, very expensive. So training energy-based models by sampling in an inner loop where you're doing gradient ascent on the log likelihood is actually very, very expensive. And even though this is a reasonable way of sampling from an energy-based model, it's just not fast enough. If you want to plug this is in this kind of contrastive divergence subroutine where for every training data point you have to generate a sample from the model, you have to generate the sample, you have to run a Langevin chain with 1,000 steps. Things will become just too expensive basically. 
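A minimal sketch (again an illustration, not course code) of the unadjusted Langevin sampler described above: follow the gradient of f_theta with step size epsilon and add Gaussian noise scaled by sqrt(2 * epsilon). The partition function never appears, because the gradient of log p_theta with respect to x is just the gradient of f_theta.

```python
import torch

def langevin_sample(ebm, x0, n_steps=1000, step_size=1e-2):
    """Unadjusted Langevin dynamics targeting p(x) proportional to exp(f_theta(x))."""
    x = x0.clone().requires_grad_(True)
    for _ in range(n_steps):
        grad = torch.autograd.grad(ebm(x).sum(), x)[0]   # grad_x f_theta(x) = grad_x log p_theta(x)
        noise = torch.randn_like(x)
        x = x + step_size * grad + (2 * step_size) ** 0.5 * noise
        x = x.detach().requires_grad_(True)               # every step is accepted in this variant
    return x.detach()
```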
So what we're going to see today are other ways of training energy-based models that do not basically require sampling. Yeah? Could you go one slide back? I don't see how point 2 is true. Why is that equivalence true? Which one? So the gradient of the log p theta equal to the gradient. Yeah, sure. So let's see. When we have it here, this expression here is the gradient of the log likelihood. Sorry, just the log likelihood, f theta minus log Z, which is just the log of this expression. Now if you take the gradient with respect to x of this thing, log Z theta does not depend on x, and so it's 0, and so it drops out. And so that's why basically this is true. Great. And so the plan is to essentially introduce ways of training energy-based models that do not require sampling during training at least. And so think of them as alternatives to contrastive divergence which was an approximation to the KL divergence between data and model. So an approximation to maximum likelihood training. That's how we introduced contrastive divergence. What we'll see is the usual trick that is going to be some other kind of divergence, some other way of comparing model to data that does not involve the-- where the loss function basically does not involve the partition function. And if we train by that instead of by approximating the KL divergence then we get much faster training procedures. And so we'll see a few of them. We'll see score matching, which is the key building block also behind diffusion models, noise contrastive estimation, and adversarial training. So recall that we have an energy-based model which is defined like that. And if you take the log of that expression, back until we get this sort of difference between the energy, which is whatever neural network you're using to model the distribution and then you have the log partition function. And the key thing is that the score function or the gradient of the log likelihood with respect to x. So note this is not the gradient of the log likelihood with respect to theta, which are the parameters of the model. This is the gradient with respect to x. So this is basically how does the probability change if I were to make small changes to the sample itself. Not how the likelihood would change if I were to make changes to the parameters of the neural network. So this is gradient with respect to x, not with respect to theta. This is also a function of x in the sense that at every axis, there is going to be different gradients and a function of theta because the log likelihood itself is parameterized by a neural network with weights theta. And just kind of like what we just saw before, the gradient of the log likelihood does not depend on the partition function. So here I guess it's showing a little bit better than what I was trying to show before but if the log likelihood is the difference of these two terms, the log partition function is the same for every x. It depends on theta but it does not depend on x. And so when you take the gradient with respect to x, the log partition function doesn't change. And so the gradient is 0. And so that's why we were able to use the score function or the gradient of the log likelihood in the previous sampling procedure. It's easy to compute if you have access to the energy function f theta. And you can see it here, this kind of idea in play. 
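Written out, the identity being used here is simply that the log-partition function has no x-dependence, so it drops out of the score:

```latex
s_\theta(x) \;:=\; \nabla_x \log p_\theta(x)
\;=\; \nabla_x \big( f_\theta(x) - \log Z_\theta \big)
\;=\; \nabla_x f_\theta(x).
```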
If you have a Gaussian distribution where as usual the parameters would be the mean and the standard deviation, remember the partition function is this normalization constant that you have in front that guarantees that the integral of this function is actually 1. If you take the log, you're going to get the log of the normalization constant and then you get the log of this exponential. And then when you take the derivative with respect to x, you get, again, a function of x and the parameters of the model, which is relatively simple. It's just like x minus the mean scaled by the variance. And if you have a gamma distribution, again, some potentially nasty normalization constant in front. But the moment you take the score, that's normalization constant disappears and you get a much simpler function to work with. And so the intuition is that S theta, which is the score, provides you an alternative view of the original function where you are looking at things from the perspective of the gradient instead of looking at things from the perspective of the likelihood itself. So if you imagine you have a p theta which is just a mixture of two Gaussians, let's say, in 2D, so there is a Gaussian here and a Gaussian up here, so it's a mixture of two. So you have this kind of fairly complicated level curves. The likelihood is just a scalar function. For every x, it gives you a scalar, which is the height of this curve where you can imagine you have two bell curves, one centered here and one here. The score is basically the gradient at every point. It's a function that every x gives you the gradient of the log likelihood. And so it's a vector field. You can imagine that every point there is an arrow and the arrow tells you what is the direction that you should follow if you wanted to increase the log likelihood most rapidly. And so as expected, you can see that these arrows are pointing towards the means of the Gaussian, which is what you see here in the sense that if you are at a data point and you want to increase the likelihood, you should push it towards the mean if the model is a Gaussian. [INAUDIBLE] fixing theta-- the model parameters are fixed? Well, they are not necessarily fixed. So we're still going to learn them. But when we take gradients, we take gradients with respect to x and so theta does not depend on x. And so when you take the gradient with respect to x, the log partition function disappears. But we're still going to be learning theta. So here, of course, I'm just showing a snapshot where theta is fixed and theta would represent the means and the variances of these two Gaussians. And if you change those, the score function itself would change. And you can see it here, it's still a function of theta but it's a simple function of theta that does not depend on the normalization constant. So you can compute it without knowing the relative-- you don't need to know relative-- Remember that the gradient is telling you how the likelihood changes. But if you were to make small changes to x and we know how to compare the probabilities of two data points in an energy-based models so it makes sense that it does not depend on the partition function. Yeah? Yeah, in this case, the score function is basically a vector field representing the gradients. In general, is it always like a vector field or what is it supposed to represent in general? Yeah, good question. 
So the score function as defined is always a vector field in the sense representing the gradient because by definition, it's just the gradient of f theta with respect to x. And so in general, f theta would be much more complicated than a mixture of two Gaussians. And so you can imagine that these arrows would be much more complicated. And if you have probability mass spread out in a complicated way, the gradient could be-- I mean, it's still going to be a vector field. It might not have that simple structure where it's just pointing you towards these two points. But it's still always going to be a vector field of gradient. [INAUDIBLE] Or it can be a vector field of some other quantity? So it's a vector field of gradients if it's defined like this because it's actually a conservative vector field because there is an actual underlying energy function. When we talk about score-based models, we'll see that we'll just use an arbitrary neural network to model this. But for now we're assuming that there is an underlying f theta, an energy function, and this is just the vector field. So if you like analogies with physics, you can think of f theta as being an electric potential and S theta as being the gradient of that, which is like a field basically, an electric field. And they describe the same object but in slightly different ways. And so there is no loss of information. We're just thinking of things in a slightly different-- taking a slightly different view that is going to be beneficial from a computational reason because we don't have to worry about the partition function. So how do we do-- a key observation is the score function gradient of the log likelihood with respect to the inputs is independent of the partition function. And so the idea is that we're going to define a training objective where we're going to compare two probability distributions or two probability densities p and q by comparing their respective vector field of gradients. So the idea is that if p and q are similar, then they should also have similar vector field of gradients. If p and q are similar, a different axis would have similar gradients. So one reasonable way of comparing how similar p and q are is to say what is the average L2 difference between the score of p and the score of q. So at every point, we look at what is the direction that you should follow if you wanted to increase the likelihood of p most rapidly, what is the direction that you should follow if you wanted to increase the likelihood of q most rapidly, and we check how different they are. So it's a vector. So to turn it into a scalar, we take the norm of this vector and then we're averaging with respect to p in this case. And what I claim is that you can imagine that this is a reasonable loss function because if p is actually equal to q, then the gradients are going to be the same. So gradient of log p is going to be the same as gradient of log q. This vector is going to be 0 everywhere. And the norm is going to be 0. The average is going to be 0. And so what's called the Fisher divergence between p and Q is also going to be 0. So it's a reasonable way of checking how p and q are different from each other. And crucially, the reason we're doing it is that at the end of the day, we're interested in training an energy-based model so let's say p is going to be the data, q is going to be the model. But crucially this loss function only involves the scores, it only involves this gradient which we do not depend on the partition function. 
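Concretely, the divergence being proposed is the Fisher divergence between the two densities, which for data and model reads:

```latex
D_F\big(p_{\mathrm{data}} \,\|\, p_\theta\big)
\;=\; \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\mathrm{data}}}
\Big[ \big\| \nabla_x \log p_{\mathrm{data}}(x) \;-\; \nabla_x f_\theta(x) \big\|_2^2 \Big].
```

The model score is just the gradient of f_theta, so Z_theta never appears; the remaining obstacle, addressed next, is the unknown data score.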
And so this might give us a loss function that is actually very suitable for energy-based models because it does not require you to know the log partition function of the model. That's like why we're looking at this. [INAUDIBLE] So it's a different loss function. It's a different way of comparing probability density functions. They're actually related to each other. So in a certain sense, the Fisher divergence is the derivative of the KL divergence in a certain way. So if you take two densities, and you convolve them with Gaussian noise, and you take the derivative of that with respect to the size of the noise, it turns out that that's the Fisher divergence. But just think of it as a different kind of divergence. [INAUDIBLE] p theta the ground truth, why do we need a model? Yeah. So let's see how-- it's not going to be as easy but that's the idea, is that let's define a loss in terms of the score because we know how to compute the score and we don't know how to compute the log likelihood. So that's the score matching idea. p is going to be pdata. q is going to be the energy-based model which is parameterized by this energy function. And then if you evaluate that Fisher divergence between the data density and the model density, you get this kind of thing or equivalently this sort of thing, where you take an expectation with respect to the data distribution of the difference between the gradient of the true data generating process and what's given by the model. [INAUDIBLE] assume that theta is continuous? Yes, this only works with continuous densities, yes. And so basically we're comparing the gradients of the true data distribution with the gradients of the model, which are things we can compute. And even though p theta is an energy-based model, this loss function only depends on the score which we know we can compute efficiently without having to worry about the normalization constant. So that's the idea. Now as was pointed out, it feels like it's not very useful because it involves the gradient of the log data density which we don't know. It seems like a reasonable loss function but not exactly one we can evaluate or optimize because although we have access to samples from pdata so presumably you can approximate this expectation with respect to pdata with samples, it looks like we need to know the gradient of the log data density which is unknown. If we knew what log pdata is for every x, then we wouldn't have to build a generative model. Why would [INAUDIBLE] with respect to theta from the first [INAUDIBLE]? It's a square. We'll see that there is a square there that creates a coupling unfortunately. But it turns out that it's almost easy to do. [INAUDIBLE] is just that-- Yeah. Well, not as easy as that. But it turns out that-- so that's sort of the expression. And the problem is that we only have samples from pdata. And so it looks like that first term, the score of the data distribution is unknown. So we don't know how to optimize that objective function and try to make it as small as possible as a function of theta because we cannot compute this first term here. We only have access to samples from pdata. That's the usual setting in our generative modeling problem. But it turns out that you can rewrite this loss function into an equivalent one that no longer depends on the unknown score function of the data distribution by using integration by parts. And so just to see how this works, let's start with the univariate case. 
So x is just a one-dimensional, scalar random variable so the gradients are actually just derivatives. And just because integration by parts is a little bit easier to see that way so I'm still using the gradient notation but these are actually derivatives. And then we don't have to worry about the norm of the vector because again, the derivatives are just scalars and so the square norm is just the difference of these two scalars squared. So that's what the loss function looks like when x is just a single scalar random variable. This basically is the same exact expression, except that it's no longer a vector. It's just the difference of two scalars and that's what's happening there. And then we can expand this by just explicitly writing this out as an expectation with respect to the data distribution. So you go through every x that can possibly happen, you weight it with the data density, and then you look at the difference between the derivatives of the log data distribution and the log model distribution at every point. So you kind of have these two curves. You look at the slopes at every point and you compare them. And you can expand the square. It's a square of a difference and so if you expand it, you're going to get three terms. You're going to get a blue term which is just the square of this first, gradient of the log data density squared. Then you have the gradient of the log model density squared. And then you have this red term where you have basically the dot product between-- the cross product between model and data. And you can see that the first term does not depend on theta. So we can ignore it. For the purposes of optimization with respect to theta we can ignore the blue term. The green term is easy. It just depends on the model. So again, we're good. The problem is the red term because that one still involves this gradient of the log data density in some non-trivial way. And what we're going to do is we're going to use integration by parts, which is usually, remember from basic calculus, it's a way to write the integral of the f prime g in terms of the integral of g prime f. Basically switch which function you're taking derivative with respect to and we apply to that red term, which is the annoying term. Recall that this is an expectation with respect to the data of the gradient log data density gradient of the log model density. Now what is the gradient of the log of pdata? Gradient of log is the argument of the log-- 1 over the argument times the derivative of the argument of the log. So it should look like this. Just by expanding out, this gradient of log pdata 1 over pdata times the derivative of pdata. And the reason we're doing it is that now this pdata here and this pdata here will cancel. And now it looks something where we can apply integration by parts. So this is the derivative of pdata times the derivative of the log p model. And we can apply integration by parts and rewrite it in terms of pdata. So here we had a derivative of pdata. So we rewrite it in terms of just the-- instead of f prime, we go to f. So pdata prime, it becomes pdata. And then we take an extra derivative of the log on the score of the model. And so we've basically rewritten it in terms of an expectation with respect to the data distribution of a second derivative of the model score essentially. Now we still have to deal with this the term here, fg which is the integrand evaluated at the two extremes. 
And under some reasonable assumption, you can assume that in the limit as x goes to plus and minus infinity, this pdata goes to 0. It's a density so there cannot be too much probability mass at the boundaries. And if you are willing to make that assumption, this simplifies into something that now basically no longer depends on the score of the data density. It only depends on things we can manage. It's still an expectation with respect to the data density but it only involves the-- it no longer involves the score. And so that's basically the trick. If you can assume-- if you are willing to assume that this term here is 0, basically that the data distribution decays sufficiently fast, then you can use integration by parts and you can rewrite this thing, the original score matching loss. Recall, it had three pieces. If we apply that trick to rewrite the red term into this brown term that we just derived using integration by parts, now we get a loss function that we can actually evaluate and we can optimize as a function of theta. We have the first term which is constant with respect to theta so we can ignore it. We have an expectation with respect to pdata of the derivative squared. And then we have an expectation with respect to pdata of the second derivative of the log likelihood. And so this is basically what the-- you can write the two expectations as a single expectation and now we basically derive the loss function that is equivalent up to a constant to where we started from but now it only involves things we have access to. It only involves the model score and the further derivative of the model score, which is the second derivative of the log likelihood. But again, derivatives are always with respect to x. And so that's where the magic happens. This is how you get rid of that dependence on the score of the data density and write it down using elementary calculus into an expression that is now something we can actually optimize. You can evaluate and optimize as a function of theta. So that's sort of like at least in the one d case. And it turns out that there is something, you might have seen it, in multivariate calculus, there is an equivalent of integration by parts. That's actually Gauss's theorem where you can basically do the same trick for when you have a vector-- so when x is a vector and you really have gradients, you can basically use the same trick and you derive something very similar where instead of looking at the square of the derivative, you have the L2 norm of the gradient. And instead of having just the second derivative of the log likelihood, you have the trace of the Hessian of the log probability. So again, you have to look at second order derivatives. But things become a little bit more complicated when you have the vector valued function. So the Hessian is basically this matrix, n by n if you have n variables where you have all the mixed second derivatives. Partial derivatives of the log p theta x with respect to xi, xj for all pairs of variables that you have access to. So again, kind of like a Taylor expansion up to second order if you want. [INAUDIBLE] that's trace operator? And that's the trace operator. Oh, so just take all the-- Elements on the diagonal, yeah. So all the second derivatives. So it's also return a vector? The trace will return a scalar [INAUDIBLE].. And so that's how basically using the same derivation we're using integration by parts, you again write it down in terms of a quantity that no longer depends on the score of pdata. 
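Putting the integration-by-parts result together, the objective that comes out, up to a constant that does not depend on theta, is the following, first in one dimension and then in the general multivariate case:

```latex
% 1D:
\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\Big[ \tfrac{1}{2} \big( \nabla_x \log p_\theta(x) \big)^2
\;+\; \nabla_x^2 \log p_\theta(x) \Big] \;+\; \text{const},

% multivariate:
\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\Big[ \tfrac{1}{2} \big\| \nabla_x f_\theta(x) \big\|_2^2
\;+\; \operatorname{tr}\!\big( \nabla_x^2 f_\theta(x) \big) \Big] \;+\; \text{const}.
```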
And that's an objective function that we can now optimize. If you're willing to approximate this expectation with a sample average, we always have access to samples from pdata so we can approximate that expectation using samples. Then you get an algorithm or a loss that looks like this. You have samples of data points that you sample from pdata, training data. And then you can estimate the score matching loss with the sample mean, which would look like this. So you go through individual data points, you evaluate the gradient of the energy at each data point, you look at the square of the norm of that vector, and then you need to look at the trace of the Hessian of the log likelihood, which is this, again, Hessian of f theta in this case, which is the model. And then this is now a function of theta that you can try to optimize and minimize with respect to theta. You recall, we're trying to minimize-- this is equivalent up to a shift independent from theta. It's equivalent to the Fisher divergence. So if you're able to make this as small as possible with respect to theta, you're trying to match the scores of the data distribution and the model distribution. [INAUDIBLE] any better so getting the Hessian of some model or even the trace of the Hessian for that matter is not very trivial, is it? It's not trivial. Yeah, that's a good point. And I think it's going to come up soon. It still has issues with respect to very high dimensional settings, like the trace of the Hessian, I know it requires higher order differentiation and it's somewhat expensive. But there's going to be ways to approximate it. The key takeaway is that it does not require you to sample from the energy-based model. This is the kind of loss where you just need to training data, you evaluate your neural network, and you don't need to sample from the energy-based model during a training loop, which is key if you want to get something efficient. And the last function actually have-- this is what you just brought up that indeed, the Hessian is tricky. But it has a reasonable flavor. If you think about it, what is this loss saying? You try to minimize this quantity as a function of theta. So what you're saying is that you should look at your data points and you should look at the gradient of the log likelihood evaluated at every data point and you're trying to make that small, which basically means that you're trying to make the data points stationary points for the log likelihood. So the data points should either be local maxima or local minima for the log likelihood because the gradients at the data points should be small. So you should not be able to somehow perturb the data points by a little bit and increase the likelihood by a lot because the gradients should be very small evaluated at the data. That's what this piece is doing. And this piece is say, loosely trying to make sure that the data points are local maxima instead of local minima of the log likelihood. And to do that, you need to look at the second order derivative and that's what that term is doing. Which is very reasonable. It's saying if you want to fit the model, try to choose parameters so that the data points are local maxima somehow of the log likelihood. And that can be evaluated just by looking at first order gradients and second order gradients. Yeah? So [INAUDIBLE] trace of [INAUDIBLE] could it be figure out this local minima, maxima when you start looking at points of value, and seeing if they're part of a whole? Essentially, yeah. 
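As a rough sketch (an illustration, not course code, assuming `ebm` maps each row of a batch independently to a scalar energy) of the sample-based loss above: an exact version, whose Hessian trace needs one extra backward pass per input dimension, and a Hutchinson-style randomized trace estimate in the spirit of the sliced variant mentioned next, which needs only one extra backward pass per random projection.

```python
import torch

def score_matching_loss(ebm, x):
    """Exact score matching: 0.5 * ||grad f||^2 + tr(Hessian of f); feasible only in low dimension."""
    x = x.clone().requires_grad_(True)
    score = torch.autograd.grad(ebm(x).sum(), x, create_graph=True)[0]    # grad_x f_theta
    loss = 0.5 * (score ** 2).sum(dim=-1)
    trace = torch.zeros(x.shape[0], device=x.device)
    for i in range(x.shape[-1]):                                          # n extra backward passes
        trace = trace + torch.autograd.grad(score[:, i].sum(), x, create_graph=True)[0][:, i]
    return (loss + trace).mean()

def sliced_score_matching_loss(ebm, x, n_projections=1):
    """Replace tr(Hessian) with an unbiased estimate v^T (Hessian v) over random directions v."""
    x = x.clone().requires_grad_(True)
    score = torch.autograd.grad(ebm(x).sum(), x, create_graph=True)[0]
    loss = 0.5 * (score ** 2).sum(dim=-1)
    for _ in range(n_projections):
        v = torch.randn_like(x)
        hvp = torch.autograd.grad((score * v).sum(), x, create_graph=True)[0]  # Hessian-vector product
        loss = loss + (v * hvp).sum(dim=-1) / n_projections
    return loss.mean()
```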
So that's essentially what we're going to do, is we're going to-- there's two ways of doing it. One is to, I guess, something called slice score matching where you're taking random directions and you're checking whether the likelihood goes up or down along those directions, which is the same as-- if you know about the Hutchinson trick for estimating the Hessian, it's basically the same thing where it's an estimator for the trace of a matrix that looks at a random projection around or a random direction around it. And the other thing is denoising score matching which also has this flavor of adding a little bit of noise and checking whether the likelihood goes up or down in the neighborhood of a data point. And so it has that flavor basically. And those things are going to be scalable with respect to the dimension. Yeah? Does the Hessian have a fixed analytical form when you're taking derivative with respect to x? So it doesn't in general. So the question has the Hessian an analytical form if f theta is a neural network? You can't. There is no close-- I mean, you have to use autodiff to basically compute it. The problem is that it needs many backward passes. Because you're not computing just a single partial derivative, you're computing n partial derivatives with respect to every input because you have to compute all the diagonal elements of the Hessian and we don't know of an efficient way of doing it other than doing backprop basically n times, which is also expensive when n is large. But the good thing is this avoids sampling and this is going to be the key building block also for training diffusion models. But more of this in the next lecture. Oh, yeah, question? So this would converge with [INAUDIBLE] a sharper point peak rather than-- so there might be a really sharp thing that's pretty low and then a smoother higher. It doesn't happen in the sense that I just proved you that this is equivalent to the Fisher divergence and the Fisher divergence is 0 if and only if the distributions match. So even though you might think that this is not quite doing the right-- it's not quite the right objective, in the limit of infinite data, this would be giving you exactly the-- if you were to optimize it globally, this would give you exactly the data distribution. Because it's really just the equivalent up to a shift to the true Fisher divergence that we started with, which is this thing here which is 0 only basically if the densities match. Cool. Now the other cool technique that you can use for training-- so the takeaway so far is that KL divergence-- approximations to KL divergence [INAUDIBLE] require sampling, too expensive. But if you are willing to instead measure similarity up here using this Fisher divergence, then again, you get a loss function that is much more-- that is very suitable for training energy-based models because it does not require you to-- even though it looks tricky to compute and optimize, it actually can be rewritten in terms of something that only depends on the model and you can optimize as a function of theta. Now there is another way of training energy-based models which is going to be somewhat loosely similar to generative adversarial networks which is essentially a way to fit an energy-based model by instead of contrasting data to samples to from the model, we're going to contrast the data to samples from some noise distribution which is not necessarily the model distribution itself. So that's how it works. You have the data distribution. 
And then there's going to be a noise distribution which is any distribution you can sample from and for which you can evaluate probabilities. And what we're going to do is we're essentially going to go back through, again, idea of training a discriminator to distinguish between data samples and noise samples. So far there is no energy-based models. Just the usual GAN-like objective. And the reason I'm bringing this up is that if you had the optimal discriminator, then you would somehow get these density ratios between the noise distribution and the data distribution. So recall that if you train a discriminator optimally by minimizing cross-entropy and so if you're trying to discriminate between real data and samples from the noise distribution, what is the optimal discriminator? It has to basically give you the density ratio. For every x, it has to be able to know how likely x is under data and how likely x is under the noise distribution. So useful recap for the midterm. This is the optimal discriminator, is the density ratio between-- for every x, you need to figure out how likely it is under the data versus how likely it is under the data and the alternative noise distribution. And the reason I'm bringing this up because what we could try to do is we could try to basically parameterize the discriminator in terms of our generative model which could be an energy-based model. So we know that the optimal discriminator has this form, pdata over pdata plus noise distribution. So we could try to just define a discriminator. So instead of having whatever MLP, whatever neural network you want to discriminate between data versus noise, we're going to define a special type of discriminator where when we evaluate the probability of x being real data, we get the number-- instead of just feeding x through a neural network arbitrarily, we get it by evaluating the likelihood of x under a model p theta versus the probability under the noise distribution, which again, we're assuming is known because we're generating the noise distribution, the noise data points ourselves. And so the good thing is that if you could somehow come up with the optimal discriminator that distinguishes data versus noise, we know that the optimal discriminator will have this form and this has to match the pdata or pdata plus noise. And so you can see that somehow if this classifier is doing very well at distinguishing data from noise, it has to learn-- basically p theta has to match pdata. So the classifier is forced to make decisions based on the likelihood of x under p theta. And then if it's able to make good decisions, then this p theta has to match the data distribution basically. That's essentially the trick that we're leveraging here. And then what we're going to do is we're going to actually parameterize the p theta using an energy-based model. But that's the key idea. Instead of using an arbitrary neural network as the discriminator, as you would do in a GAN, we're defining a discriminator in terms of another generative model. And the idea is that by training the discriminator the usual way by minimizing cross-entropy loss, we're forcing it to learn a p theta that matches the data distribution because that's the only way it can do well at this binary classification task. It really needs to know which x's are likely under pdata to get good cross-entropy loss. And that's only possible when p theta matches pdata. And we're going to see that this is suitable when p theta is defined up to a constant. 
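The two facts being combined here, written out: the Bayes-optimal discriminator between data and noise, and the constrained discriminator we get when its only access to x is through the model density,

```latex
D^\star(x) \;=\; \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_n(x)},
\qquad
D_\theta(x) \;=\; \frac{p_\theta(x)}{p_\theta(x) + p_n(x)}
\;=\; \sigma\!\big( \log p_\theta(x) - \log p_n(x) \big),
```

where sigma is the logistic sigmoid. Driving the cross-entropy loss to its optimum forces the two expressions to agree, which is only possible when p_theta matches the data distribution.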
[INAUDIBLE] p theta is going to be an energy-based model. So well, maybe let me skip this since we're running out of time. But you can also use the classifiers to correct the noise distribution. But for now, let's assume that p theta is an energy-based model. So we're going to parameterize p theta in that previous expression in terms of an energy, use a trick. Let's define up to a constant. And what we're going to further do is in general, this normalization constant Z theta is a function of the parameters f theta and it's a complicated function because we don't know how to compute that integral, that sum over all possible things that can happen. And so what we're going to do is we're going to treat Z theta as being an additional trainable parameter. So not only we're going to optimize f theta, but we're going to treat Z theta itself as an additional trainable parameter which is not explicitly constrained to take the value of the normalization constant. It's going to be some other scalar parameter that we can optimize over. And so if you do that, then basically the density model that we're going to use in the classifier now depends on theta and depends on Z. And then we just plug this-- the idea is that basically if we plug in this expression into the classifier, into the discriminator and we train the discriminator the usual way by minimizing cross-entropy, we know that under the optimal parameters, this classifier will have-- the density model that we're using to build the classifier we'll have to match the data distribution. And what this means is that the optimal theta and the optimal Z are going to be such that the energy-based model is equal to the data distribution. But crucially now, Z is just a learnable parameter. It happens to be the correct partition function in the limit because you take the integral of both sides with respect to x, you're going to see that the integral of this optimal energy-based model is equal to the integral of the data, which is 1 by definition. So even though we treat Z as a learnable parameter, in the limit of learning an optimal classifier, this learnable parameter that is not constrained to be the actual partition function will take the value of the true partition function of the model because that's what the optimal classifier should do if it does really well with this binary cross-entropy classification of loss. This case is like a-- It's not an [INAUDIBLE]. Actually, so the loss function ends up being-- which ends up being let's see something like this. So if you plug it in, recall, we're basically saying, instead of picking an arbitrary neural network for the discriminator like in a GAN, we're going to pick a neural network that has a very specific functional form so that when you evaluate what is the probability that x is real, you have to get it through this kind of computation where you have an energy-based model that tells you how likely x is under the model. Where both f theta and Z are learnable parameters. And then if you just multiply numerator and denominator by Z, you get an expression that, again, as it should, it depends on f theta, and Z, And the noise distribution which is known. pn, the noise distribution is, again, something that we are deciding. You can pick whatever you want as long as you can sample from it and you can evaluate probabilities under the noise distribution. And then literally what we do is we still train the classifier by doing binary classification with cross-entropy loss. 
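A minimal sketch (an illustration, not course code) of this noise contrastive estimation setup: f_theta plus a free scalar log Z treated as a trainable parameter, a fixed noise distribution we can sample from and score (assumed here to be a `torch.distributions` object), and a plain binary cross-entropy loss on the constrained discriminator. The `energy_net` argument is any network mapping a batch of inputs to a batch of scalar energies.

```python
import torch
import torch.nn.functional as F

class NCEModel(torch.nn.Module):
    """Energy network f_theta plus log Z as an extra, unconstrained trainable scalar."""
    def __init__(self, energy_net):
        super().__init__()
        self.f = energy_net
        self.log_Z = torch.nn.Parameter(torch.zeros(()))

    def log_prob(self, x):
        return self.f(x) - self.log_Z             # log p_{theta, Z}(x)

def nce_loss(model, x_data, noise_dist):
    """Cross-entropy for 'real vs noise', with logit = log p_{theta,Z}(x) - log p_n(x)."""
    x_noise = noise_dist.sample(x_data.shape[:1])
    logit_real = model.log_prob(x_data) - noise_dist.log_prob(x_data)
    logit_fake = model.log_prob(x_noise) - noise_dist.log_prob(x_noise)
    loss_real = F.binary_cross_entropy_with_logits(logit_real, torch.ones_like(logit_real))
    loss_fake = F.binary_cross_entropy_with_logits(logit_fake, torch.zeros_like(logit_fake))
    return loss_real + loss_fake
```

In the idealized limit discussed below, the learned log_Z coincides with the true log-partition function; with finite data and imperfect optimization it is only an estimate.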
So we have just like in a GAN, we have data, we have real data. We have fake data which is generated by this noise distribution which we decide ourselves. So this is different from, again, the fake data is coming from a fixed noise distribution. So we're contrasting the real data to fake samples generated by the noise distribution and we're training the classifier to distinguish between these two. The classifier has this very specific functional form where it's defined in terms of an energy based model where the partition function is itself a learnable parameter. And then we optimize this with respect to both theta and Z trying to do as well as we can at this classification task. Yeah? With the scheme, how will the samples you end up generating in the end be good? It doesn't seem like it would be very hard to discriminate between even a crappy generated sample and noise. Yeah. So what happens is that in theory, this works regardless of what is the noise distribution. In practice, what you want is you want a noise distribution that is very close to the data distribution so that the classifier is really forced to learn what makes for a good sample, what makes for the real-- what kind of structures do the real samples have. At the end of the day, what you learn is you learn an energy-based model. So you learn an f theta and you learn a partition function. And in the limit of infinite data, perfect optimization, then if you optimize this loss perfectly, the energy-based model matches the data distribution and the partition function, which is just the value of these learnable parameters Z that you get, actually is the true partition function of the energy-based model. So even though we're just training it in an unconstrained way so there is no relationship here between theta and Z, it just so happens that the best thing to do is to actually properly normalize the model where Z theta becomes the partition function of the energy-based model. So in principle, this does the right thing. In practice, it heavily depends on how good the noise distribution is. We're not really explicitly training a discriminator. We're just adding a new trainable parameters to our agenda? So we are training an actual-- so there is no generator. The generator is fixed or you can think of it as being fixed. So the noise distribution, it would be the generator and that's fixed. We are training a discriminator but it's a very special discriminator. So you are not allowed to take x and then feed it through a ConvNet or an MLP and then map it to a probability of being real versus fake. You have to get the probability by using this expression. Yeah. Just [INAUDIBLE] the f theta is just [INAUDIBLE].. That's [INAUDIBLE]. Yeah. In that sense, yes. There is only a discriminator. Once you've trained it, you can extract an energy-based model, which is the f theta, from the discriminator. [INAUDIBLE] noise distribution or you can sample the noise distribution? Yeah. So in this flavor which is the simpler version, the noise distribution is fixed. We'll see if we have time in a few couple of slides that indeed, it makes sense to change the noise distribution and trying to adapt it and make it as close as possible to the data or the current best guess of the model distribution. So that's an improvement over this basic version of things where the noise distribution is fixed for now. How would you sample in this procedure? 
So we're assuming that the noise distribution is something you can sample from efficiently so you can always basically get-- do some kind of stochastic gradient ascent here on this. Once you train them on-- so the learning is fine. It's just efficient. As long as pn is efficient to sample from, you never have to sample from p theta. Once you've trained a model, you have an EBM. And so if you want to generate samples from it, you have to go through the MCMC, Langevin. So at inference time, you don't get any benefit. This is just at training time. This loss function does not involve sampling from the model. So why can we not allow Z0 to be identical? Why is that fair deal? It's fair game in the sense only to the extent that in the limit, you will learn the partition function. In general, you will not. And so the solution to this optimization problem will give you a Z, in practice, that is not the true partition function of the model. It's just going to be an estimate. And you're going to end up with an energy-based model that is suboptimal because you're short of the Z that you estimated is not the true partition function for that model. So when you have finite data, imperfect optimization, you pay a price for this approximation. But in the limit of things being perfect, this is not an issue basically. So pn, p theta [INAUDIBLE] probably converge to the integral? Yeah. So if you have infinite data and somehow you're able to perfectly optimize over theta and Z, then we know that the optimal solution over theta and Z will be one where this matches the data distribution. And so the only way that for that to happen is for Z star to be the true partition function of that energy-based model. But in practice, this is not going to happen. So you just get an estimate. Yeah? So if it's not the true partition function, you will have a valid probability distribution for pdata, is that fine? So yeah. So that's a great question. So if it's not a true partition function, you still have an energy-based model for which there is going to be a real partition function. It's just not the one you've estimated. So f theta still defines a valid energy-based model. It's just that the partition function for that model is not going to be the solution to this optimization problem over Z. So it's not going to satisfy the constraint. But there's going to be a partition function for that f theta so that's going to be a valid energy-based model. Implementationally, this seems like-- because you're letting go of the constraints and all these sorts of things, so implementationally, this must be a harder optimization problem compared to something like a score-based model or implementationally, somehow it works? So this is the implementation that's actually not to-- we'll see soon. And then you can ask again if how it's implemented. It's a relatively simple loss to optimize and write down. It's actually trivial to implement. [INAUDIBLE] because you are almost guaranteed that you're going to get a suboptimal f theta, xi because your Z theta is [INAUDIBLE] approximation. So my question is that if I train this in a score-based model, am I guaranteed that the score-based model for the extra effort I put in there is going to be a better model? So it turns out that they are actually very much related. And then if the noise distribution is what you get by perturbing data, by adding a little bit of Gaussian noise essentially, then this turns out to be exactly denoising score matching. 
So it very much depends on the noise distribution that you choose. But there are instances where this becomes exactly score matching so I don't think it's fair to say that this is always bad. It's just a different thing. Yeah? So we are going back a bit. Is the partition function something you can normally compute for energy-based models or-- No, you can't. Yeah, that's the problem. And so generally [INAUDIBLE] using these different methods? So either you do contrastive divergence where you would sample from it and so in some sense, it involves the partition function in the sense that you would estimate the gradient of the log partition function by using samples from the model but that's also too expensive. Or that's exactly what we're doing right now. Let's come up with a training objective that does not depend on the partition function. So it's going to be efficient. Cool. And so then for numerical stability, let me see what do I have here. So that's the objective. And then you plug in the expression for the discriminator in here and you get a loss that looks like this. And you have the log of sum of two things. And so for numerical stability, it's actually easier to use the log sumexp trick where the log of e of f theta plus Zpn which is what you have in the denominator, it's more numerically stable to write as a log sumexp. But then practically speaking, the implementation is very simple. You start with a batch of data points. You have a batch of noise samples. And basically, you have this classifier which has a very specific functional form. And just you evaluate the cross-entropy loss of that classifier on this mini batch which happens to have this kind of functional form. And then you optimize it as a function of theta and Z. And that's just basically what we had before. So you're evaluating the loss of the classifier where these two batches are real and fake or real and samples from the noise distribution. And then you try to maximize these as a function of theta and Z. And stochastic gradient ascent with respect to theta and Z. And again, key thing you don't need to sample from the model. And you can see that the dependence on Z is non-trivial in the sense that sometimes, it's not optimal to just make Z as small as possible or as big as possible. It depends on Z on some non-trivial way and so there is some interesting learning happening here over both theta and Z. But at the end of the day, you end up with an estimate of the energy of the model f theta and an estimate of the log partition function. And everything can be trained without using samples from the energy-based model. So it looks a lot like a GAN, Generative Adversarial Network, in the sense that in both cases, you are training a discriminator with binary cross-entropy. So that part is the same. Both are likelihood free. We don't have likelihoods in EBM. So it better be. There is never a need to evaluate likelihoods under the EBM or under the data distribution because we don't have either of them. So it's all just a standard cross-entropy loss basically on a classification task reduced to a discriminative modeling-- generative model to discriminative classifier training. The key difference is that in a GAN, you actually have a minimax optimization where you are also training the noise, you're training the generator. Here we are not. Here this is table. It's easy to train. The noise distribution is fixed. And you're just maximizing that objective function as a function of theta. It's non-convex but there is no minimax. 
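A minimal sketch of the mini-batch training step just described, assuming the NCEDiscriminator sketched earlier and a fixed noise distribution (binary_cross_entropy_with_logits is the numerically stable log-sum-exp form mentioned above):

import torch
import torch.nn.functional as F

def nce_step(disc, optimizer, x_data, noise_dist):
    # One NCE update: real data vs. samples from the fixed noise distribution p_n.
    # Assumes disc.noise_log_prob is noise_dist.log_prob (per-example log densities).
    x_noise = noise_dist.sample((x_data.shape[0],))
    logit_real = disc.real_logit(x_data)      # should be pushed towards "real" (label 1)
    logit_noise = disc.real_logit(x_noise)    # should be pushed towards "fake" (label 0)
    logits = torch.cat([logit_real, logit_noise])
    labels = torch.cat([torch.ones_like(logit_real), torch.zeros_like(logit_noise)])
    loss = F.binary_cross_entropy_with_logits(logits, labels)
    optimizer.zero_grad()
    loss.backward()                           # gradients flow to both f_theta and log Z
    optimizer.step()
    return loss.item()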
There is no instability. It's actually relatively stable to train. And the slight difference is that in noise contrastive estimation, you need to be able to evaluate the likelihoods of the contrastive samples that you generate from the noise distribution while in a GAN, you just need to be able to sample from the generator. So if you look at the loss here we need to be able to evaluate when we generate from pn, from the noise distribution, we also need to be able to evaluate how likely these noisy samples are. In a GAN, you don't have to. You just need to be able to generate them fast. So that's slightly different. And when you're train the NCE model, you just train the discriminator. And then from the discriminator, you get an energy function which defines an energy-based model. While in a GAN, you're actually training deterministic sample generator. So the outcome of the learning is going to be different. And maybe the last thing that I'll say is what was suggested before, is that it might make sense to adapt the noise distribution as you go during training. And so instead of keeping a fixed noise distribution, we can try to learn it jointly with the discriminator. So recall we need an energy-- we need a noise distribution that we can sample from efficiently and we can evaluate probabilities over efficiently. And so the natural candidate is a flow-based model for this. And intuitively, we're training the noise distribution to make the classification problem as hard as possible so that the noise distribution is close to pdata. And so the flow contrastive estimation is basically this idea where the noise distribution is defined by a normalizing flow with parameters phi. And then it's basically the same, except that now the discriminator depends on the noise distribution which is a flow model. So it will depend on the parameters of the flow. Flow model, you can sample from efficiently. You can evaluate likelihoods efficiently. So it fits with this kind of API. And then now we optimize the discriminator over theta and Z the usual way by noise contrastive estimation. And then what they propose is to train the flow model in a minimax way, so it goes back to GANs in some way by train the flow model to confuse the discriminator as much as possible. So that's their proposal. Really curious. In the end does the flow model perform better or does [INAUDIBLE]? In the end, they use the flow model. So here are some samples and they are actually generated from the flow model. [CHUCKLES] Although, technically, they get both. They get an energy-based model and they get a flow model. And they show that for some things, you're better off using the energy-based model. But you get both at the end of the day. It's more [INAUDIBLE]. It's more like a GAN, yeah. So basically, noise contrastive estimation where the noise distribution is a flow that is learned adversarially. Recall that the inside this max here, inside is basically the loss of a discriminator in a GAN. It tells you how confused the discriminator is. Well, not how confused. How not confused. And so by minimizing it, you're trying to make the life of the discriminator as hard as possible. And so you're learning something by minimizing a two-sample test essentially. And so it's the same as the usual GAN training. And then here are some samples that they generated in a model. And I think this is probably a good time to stop. I think we're out of time.
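Since training with NCE never requires sampling from the energy-based model, but generating from the learned EBM at inference time still does, here is a minimal unadjusted Langevin dynamics sketch of that step (step size and step count are illustrative, not from the lecture):

import torch

def langevin_sample(energy_net, x_init, n_steps=200, step_size=1e-2):
    # Unadjusted Langevin dynamics: x <- x + (eps/2) * grad_x f_theta(x) + sqrt(eps) * noise.
    # Note grad_x log p_theta(x) = grad_x f_theta(x), since log Z does not depend on x.
    x = x_init.clone()
    for _ in range(n_steps):
        x = x.detach().requires_grad_(True)
        grad = torch.autograd.grad(energy_net(x).sum(), x)[0]
        x = x + 0.5 * step_size * grad + (step_size ** 0.5) * torch.randn_like(x)
    return x.detach()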
All right, let's get started. So the plan for today is to finish up the material we didn't cover in the last lecture on autoregressive models, and then we'll talk about learning. So towards the end of the last lecture, we talked about RNNs as being another way to parameterize autoregressive models. And remember, the key idea is that you have a small number of parameters, actually a constant number of parameters, with respect to the length of the sequence you're trying to model. And you're going to use these parameters to basically keep track of the context that you use to predict the next token or the next pixel. And you keep track of all this information through a single hidden vector that is supposed to summarize all the information that you've seen so far and that you're going to use to make the next prediction. Like, in this example here, where I'm looking at, let's say, building an RNN to model text. So you have tokens, and you might have some prefix, like "My friend opened the," and then you're going to use all this information. You pass it through your RNN, and the RNN will update its state, its hidden vector, and you end up with a hidden vector H4 here. And then you're going to use that vector to predict the next token. And maybe if you're doing a good job, then you'll put high probability on reasonable ways to continue this sentence, like the door or the window, and you're going to put low probability on things that don't make sense. And as we've seen, these RNN models work reasonably well, even if you build them at the character level, which is pretty hard. One challenge is that this single hidden vector that you have here basically has to summarize all the information that you've seen so far, and that's the only thing you can use to make the next prediction. And that can be a problem because you have to do a pretty good job of summarizing the meaning. Let's say if you're building a language model, this single vector has to capture the entire meaning of all the previous elements in the sequence, which can be challenging. The other problem of RNNs is that, basically, you have to unroll the computation if you want to compute these probabilities and you want to come up with reasonable losses at training time, which makes them pretty slow and pretty hard to train. And the other problem is that, yeah, they can be a bit problematic to train because you have these long dependencies from, let's say, early on in the sequence towards, let's say, the present. It can take many, many updates to get there. And this can lead to exploding or vanishing gradients, and it can be problematic. So this is not what's actually used in state-of-the-art autoregressive language models. Existing state-of-the-art models use attention, and the basic idea is that they look more like a NADE or like a MADE, and these other models that we've seen before, where you essentially are able to use the entire sequence of inputs up to time T to make the next prediction. And so instead of just using the hidden vector corresponding to the last time step to make the prediction, you look at all the hidden vectors from previous time steps to predict what's going to come next. And the way to make this effective in practice is to use an attention mechanism to try to figure out which parts, which elements of this sequence are useful and which ones are not, which one you should pay attention to, and which one you shouldn't pay attention to when you make a prediction.
And so roughly, at a very high level, the way these methods work is that there is some attention mechanism that will tell you how relevant a query vector is, with respect to a key vector. So this is similar to when you search in a database. You have a query. You have a set of keys, and you want to figure out want to do retrieval. This has a similar flavor, and it will basically tell you how relevant is the hidden vector, let's say, corresponding to the first time step, with respect to the hidden vector that you have at the current time step. And this could be something as similar as just taking a dot product between the two vectors. Once you have the similarity vectors, then you turn them into an attention distribution, which is the thing that we were talking about before, the thing that tells you which elements of the sequence matter and which ones don't. And one simple way to do it is to just take all these attention scores and pass them through a softmax to get an actual distribution. Yeah? [INAUDIBLE] Are we not assuming any conditional independence? So the question is whether this kind of model assumes conditional independence? If you build a model like this, again, there is no conditional independence explicitly stated. Because in principle, as long as this is just an autoregressive model, and we're just parameterizing the conditionals using a function that has a very specific functional form. And so we're not going to be able to capture all possible dependencies, but we're not explicitly making any conditional independence assumption, so far. Well, if you were to make conditional independence assumptions, yet, typically, performance would drop significantly. As we'll see, the nice thing about this architecture is that it allows you to take into account the full context when you make a prediction, while, at the same time, being selective and be able to ignore things that are not relevant and pay attention to things that are relevant. For example, in this simplified version of an attention mechanism, what you could do is, you could take an average of the hidden vectors that you've seen before in your RNN. And you weigh them with the attention distribution scores that you have. You average them, and you get a new vector, then you're going to combine it with the current vector to make a prediction for the next token. And you see that now. We're no longer bottlenecked. We're not just using this green vector to make the prediction. We're able to use the whole history. So we're able to really compare every pair, essentially, of tokens in the sequence. And that's pretty, pretty powerful. And as you can see, for example, in this little example here, I have a robot must obey the orders given. And then you need to make a prediction. And if you want to make a prediction, you need to figure out what "it" refers to. And the attention mechanism can help you to figure out that this "it" is probably referring to, that when you're trying to figure out what "it" means, you should pay attention to these two tokens, a robot. And so that's the flavor of why this attention mechanism is helpful. Because you can take advantage of the whole sequence. As usual, in practice, you need to be careful about making sure that the model is autoregressive. So you cannot pay attention to future vectors when you do these kind of things. 
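A minimal sketch of this computation, including the causal mask that prevents attending to future positions. This is a deliberately stripped-down version: a single head, no learned query/key/value projections, with the hidden vectors themselves playing all three roles.

import torch
import torch.nn.functional as F

def causal_self_attention(h):
    # h: (T, d) hidden vectors. Each position attends only to itself and earlier positions,
    # and everything is computed in parallel over the whole sequence.
    T, d = h.shape
    scores = (h @ h.t()) / d ** 0.5                     # (T, T) pairwise similarity scores
    mask = torch.tril(torch.ones(T, T, dtype=torch.bool))
    scores = scores.masked_fill(~mask, float('-inf'))   # forbid attention to future positions
    weights = F.softmax(scores, dim=-1)                 # attention distribution per position
    return weights @ h                                   # (T, d) attention-weighted summaries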
So you have to use a mask mechanism, just like you made, just like in these other models so that you can only basically pay attention to the tokens or the random variables that come before it in the sequence, in the ordering. The other thing that is important is that in an actual system that is used in practice, you would not use any recurrent architecture. So you wouldn't you wouldn't even need this recurrent computation here, where you update the state, recursively using an RNN. You just use feed forward computation. You stack multiple layers of attention. And the key advantage of this is that we're back to the previous MADE-like setting, where you can actually evaluate. You can evaluate the architecture in parallel, so you can do the computation necessary to make a prediction at every index, in parallel across indexes. This is a training time, of course. And this is really what makes these systems, these models good in practice compared to an RNN. I think, actually, an RNN would be reasonably good in terms of modeling power. It's just too slow to train. And these transformers, because they allow for massive parallelism, and that was-- and we'll come back to this when we talk exactly how these models are trained. But the key advantage is that you can basically evaluate the loss very efficiently without having to unroll the recursion corresponding to an RNN. And that's why they are one of the reasons they've achieved this great success in practice is because they can be evaluated in parallel. They can take advantage of GPUs, and you can scale them to very large sizes. And you can see some of the demos of the systems that-- like the GPT, GPT-2, 3, 4 that we've seen in the first lecture, that the amazing LLMs that everybody's talking about. Llama, other systems that are available online, you can play around with, are essentially based on these kind of on this architecture. Autoregressive models using this self-attention mechanism that we're going to talk about more in one of the section that is going to be dedicated to two neural architectures. So this is the high-level idea of one of the key ingredients that is behind state-of-the-art language models. Cool. Now back to RNNs. People have been using them not only for text. You can use them to model images. So you can just think of an image as a sequence of pixels. You can generate them in top left to bottom right, one at a time, and you can use RNN to basically model all the conditionals in your autoregressive model. So each pixel, you're going to have one conditional per pixel, giving you the distribution of that pixel, given all the ones that come before it in the sequence. And each conditional is going to be a categorical distribution over the colors that, that pixel can take. And if you're modeling pixels using an RGB encoding, then you have three channels, red, green, and blue. And so you need to capture the distribution over the colors of a pixel, given all the previous pixels. And one way to do it is to use an autoregressive structure inside every pixel, every conditional, defining the pixel. So a pixel is going to involve three random variables, the red, the green, and the blue channel. And you can generate them, let's say, in that order. So you can compute the conditional probability of the red channel, given the previous context. And you can do the green channel, given the previous context, and the value of the red channel, and so forth. 
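The per-pixel factorization just described can be written as:

\[
p(x_i \mid x_{<i}) \;=\;
p\big(x_i^{R} \mid x_{<i}\big)\,
p\big(x_i^{G} \mid x_i^{R}, x_{<i}\big)\,
p\big(x_i^{B} \mid x_i^{R}, x_i^{G}, x_{<i}\big),
\]

where each factor is a categorical distribution over the intensity values of that channel (e.g., 256 of them for 8-bit images).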
And in practice, you can basically use an RNN style architecture with some masking, the same kind of masking we've seen in MADE that enforces this ordering. So first, you try to compute the conditional probability of the red pixel, and that can depend on everything you've seen before, but you cannot pick. You cannot look at the green channel or the blue channel. When you try to predict the green channel, it's fine to look at the value of the red channel for that pixel and so forth. And so again, it's basically the same idea, but you're going to use some masking to enforce autoregressive structure. And this was one-- these are some examples of the results you can get from an RNN at the pixel level, trained on ImageNet, downscaled ImageNet. Again, you can see that these results are not great, but they're pretty decent. Like, what you see here is you take an image. You see the rightmost column is an actual image. And then what you do is you can remove the top, the bottom half, and then you can let the model complete. So it's similar to a language model. You have a prompt, which, in this case, is going to be just the top half of the image. And then you let your autoregressive model generate the next pixel, and then the next pixel, and then the next pixel, and so forth. And you can see that it's coming up with somewhat reasonable completions,, has the right structure. It has the right symmetries. It's doing a reasonable job of capturing the dependencies between the pixels. There is some variability in the samples, like here, this one versus this one. Of course, there is stochasticity. So if you sample from the-- even given the same initial condition, if you sample, there is randomness in the way you sample, so you can generate different completions every time. Every time you sample, you're going to get a different possible way of completing that image. And you can see that they have not always-- I mean, some of them don't make a lot of sense, but some of the completions are actually decent, and there is some variability, which is good. The challenge is that, again, because you have to evaluate the probability of an image sequentially, you have to unroll the recursion. These models are very slow. And so in practice, what tends to works much better on images is convolutional architectures. These are the kind of architectures that work well when you're building classification models. And so it would be natural to try to use a convolutional architecture to build a generative model of images. The challenge, once again, is that you need to make sure that the model is consistent with an autoregressive one. So what you need to make sure is that when you make a prediction for a pixel, you only use information that is consistent with the ordering you've chosen. So if the ordering is, once again, from top left to bottom right, when you make a prediction for this pixel, it's fine to use information from all the shaded area in the image. But you cannot pick. You cannot look at information coming from the future or coming from any of the white region of the image. And the way to do it is, once again, relatively simple. It's always masking at the end of the day. So when you think about if you want to enforce autoregressive structure, one way to do it is to set up the kernels of your convolutions, to be consistent, to have zeros in the right places so that the way the computation occurs is consistent with the autoregressive nature of the model. 
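A minimal PyTorch-style sketch of such a masked kernel (a PixelCNN-style "type A" mask when mask_center=True; setting it to False gives the "type B" variant used in later layers). This simplified version ignores the within-pixel channel ordering, and it is exactly the naive masking that leads to the blind-spot issue mentioned shortly.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    # Convolution whose kernel is zeroed at and after the center position
    # (raster-scan ordering: only pixels above, or to the left in the same row, are visible).
    def __init__(self, *args, mask_center=True, **kwargs):
        super().__init__(*args, **kwargs)
        kH, kW = self.kernel_size
        mask = torch.ones(kH, kW)
        mask[kH // 2, kW // 2 + (0 if mask_center else 1):] = 0  # center row: zero from center on
        mask[kH // 2 + 1:, :] = 0                                # all rows below the center
        self.register_buffer('mask', mask)

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)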
So if you have a simple 3 by 3 convolutional kernel, and you zero out all these entries in the kernel, then if you look at the computation graph, whenever you make a prediction for this red pixel, you're only going to use the blue pixels to make that prediction. And so that's consistent with the ordering that we had before. So again, it's very similar to MADE. It's very similar to transformers or self-attention. You basically mask to make sure that things are consistent with the ordering. I was wondering with regards to the. [INAUDIBLE] convolutional approach [INAUDIBLE] over the pixels. Do you recover something, like a convolutional structure, where you get more attention for the pixels in the neighborhood? Yeah, so the question is whether you can use, I think, attention or self-attention for modeling images and whether that would recover the right inductive biases. And yeah, you can use masked, once again, attention on images. And there have been autoregressive models that are essentially using the transformer-like architecture on images. And they've been very, very successful. As far as I know, they are not in the public domain. So these have been built in industry, but they have not been actually released. I think they tend to be more computationally intensive to train. And so other models seem to-- diffusion models that we're going to talk about later tend to work better in practice. But there has been reported in the literature some good success using transformer-based architectures on images. How do you model autoregressive architecture for data that's not very clearly sequential? So if you have a language model, it's obviously token, token, token. But I can generate an image is a matter of how we're actually starting off at the order that we're doing. Yeah, that's a great question. The question is, what's the right ordering for images. For text, maybe left to right seems reasonable. But for images, what's the right order? That's a great question. And we don't have a great answer. Right now, the typical ordering is top left to bottom right. But as you said, it's probably not the right one. And you could imagine a different mechanism. There are people, and there's been research where people have tried to learn the optimal ordering. Like you can imagine, there's a combinatorially large number of orderings, but you could try to somehow set up an optimization problem, where you search for the right ordering first. And then you find the autoregressive model consistent with that order that maximizes the data fit with moderate success. And incidentally, as far as I know, even for language, you can model right to left, and it works OK, too. So maybe the ordering is not that important, even for language. Even if we're using a CNN right now, since it's autoregressive, if we're trying to evaluate the likelihood of, say, an image, don't we still have to unroll the chain rule and results in a really long computation? So the question is whether these convolutional models can be evaluated in parallel? And to some extent, convolutions can be evaluated pretty efficiently. Components can be evaluated in, basically, just matrix multiplications. And they can be done very efficiently on modern hardware. In fact, that's another way to build very efficient language models is actually based on convolutions, one deconvolutions. You can get pretty close to transformers like models, using convolutions that are, of course-- of course, they need to be causal, so you cannot look into the future. 
You can only look into the past. But using convolutional models has shown to work reasonably well on language as well. It matches the performance of transformers. So that's another way to get fast parallel computation and reasonably good modeling performance. [INAUDIBLE] problem is fundamentally different. Because I guess in any [INAUDIBLE] industry, I use outside context. I'm curious, if you pre-train with-- I mean, you can still generate or if that's only applicable to the case where you have only certain classes? Yeah, so the question is whether you could train a generative model based on inpainting, where you maybe mask out parts of an image, and you train a model to predict the remaining parts. And in general, that wouldn't give you a generative model. Although, there are ways to generate samples from that architecture. Because in some sense, it's still trying to learned something. You need to learn something about the joint, if you want to do well at that. But it doesn't give you directly a way to generate samples, at least left to right. You would need to use more expensive sampling procedures that make these models harder to use in practice. Although, there are variants like masked out encoders that are used generatively, but that's a little bit more complicated. Because, obviously, on transformers, they're very good for parallelization, but ignoring the computational efficiency, just in terms of the raw representation power of these models, can we say that transformers, the representation space, like [INAUDIBLE] is more powerful than LSTM-based models. I think that would be hard to claim. So the question is whether transformers are more powerful than an RNN. And I think that's a little bit tricky because another an RNN, by itself, is already Turing complete in general, so it can implement any function at relatively small end, in theory, it could do that. So it's been proven that they are essentially arbitrarily. Yeah. So it's really probably more about the efficiency of training or maybe inductive biases. Then there is not a good understanding about the flexibility by itself. Why would anyone be using an RNN today versus the transformer architecture? The question is why would you use an RNN? One advantage is that then at inference time, having keeping track of a single state is actually pretty good because you don't have to do a lot of computation over and over, if you had a vanilla model, where nothing is tied, like you need to do a lot of computation at inference time. An RNN is nice because all you have to do is you keep track of a small state, and you can throw away everything. All the past doesn't matter. You just need to keep track of the hidden state, and you just keep on folding the computation. I mean, sequential, but all these models are sequential anyways. But the fact that you have this very small vector, and that's the only thing you need to keep track of with respect to the state is very appealing. So that's why people are trying to actually get back to RNN-like architectures because they could be much more efficient at inference. Cool. So yeah, the other thing you have to keep in mind is if you do this mask convolution is that you might end up with this blind spot thing, where if you look at the receptive field that you get when you use kernels that are masked, when you make a prediction, if you have a stack of convolutions, and you make a prediction for this pixel, you're not actually going to take into account this grayed out pixels because of the blind spot. 
If you see what happens, if you recurse on this computation structure and you do a bunch of convolutions one on top of the other, you end up with this blind spot. And so there are some other tricks that you have to do at the level of the architecture to basically combine multiple convolutions with different masking to solve that issue. And here, you can see some samples; it tends to work well. If you replace the RNN with a CNN, you get significantly better samples, and it's much faster. And these models tend to actually not only generate reasonable samples, but they seem to get a pretty good understanding of what is the structure of the images that they see at training time. And one indication that this is, indeed, the case is that you can use them to do anomaly detection. So you might have heard that machine learning models are pretty vulnerable to adversarial examples, adversarial attacks. So you take an image like this one. That would be classified as a dog image. And then you add this noise, and you get back an image that looks identical to the original one but would be classified with very high confidence by state-of-the-art models to be something completely wrong. So these two images are different, but in very subtle ways. And there is a natural question of whether you can detect these differences in the images. And if you could do it, maybe you can build more robust machine learning models. And one way to do it is to try to feed these two types of inputs, like natural images and adversarial attacks, into a pre-trained generative model and see whether it would assign different probabilities to these two types of inputs. If the model is doing a good job, it might be able to detect that this is a natural image. It should be assigned fairly high probability, versus this one, where something weird is going on, and so it should be assigned a lower probability. And indeed, a pre-trained PixelCNN model does a pretty good job at discriminating between natural images and ones that have been tampered with. And so what you see here is basically a histogram of the likelihoods. I guess they are written in bits per dimension, but it's the same thing as the probability that the different samples are given by the model, and that is on the x-axis. And on the y-axis, you see how frequently different images, let's say, in the training set are given that probability by the model. And you see that the train and test set are here, while the adversarial attacks are significantly separated from the natural images, meaning they are assigned much lower probability by the model. So if you use a threshold to try to distinguish and say, if the probability of my input is significantly lower than the one I expected, then I can maybe say that's an adversarial attack and I can reject it, these models seem to perform reasonably well, which means that they are not only getting the high-level semantics of the image, but they really are able to understand the subtle dependencies between the pixel values that exist in natural images. And yeah, question? Can people do adversarial attacks if they don't have access to the model? The question is whether people can do adversarial attacks if they don't have access to the model. To some extent, yes. It depends. There are different kinds of adversarial methods. You can assume that you know the weights, or maybe you can only see the outputs of the model. Sometimes, you don't even have access to anything.
And you have to somehow hope that an attack built for a model transfers to a different one. So to some extent there have been some success even in black box settings. [INAUDIBLE] regressive architecture are more robust to adversarial examples? So the question-- so the question is whether the autoregressive-- is it the auto regressive? [INAUDIBLE] defend against adversarial attacks better than other models. Yeah, so it's not necessarily better. I think the idea is that this is just to show that the generative model, the pixel CNN that was just trained by maximizing the likelihood of a data set is able to understand the structure of the images and the likelihood itself is useful. So it's not just a matter of sampling from the model, but the likelihood can actually be used to discriminate between different kinds of inputs. And in order to do well, you really need to understand the relationship between all the pixels. You need to figure out that this image is actually different from this image. And so it means that those conditionals that you learn through the autoregressive model are actually doing a pretty good job at discriminating this very subtle differences. So for the challenge model, how we evaluate the P is half? [AUDIO OUT] So you're basically just-- just the same thing. I think I have it here. Basically, if you want to compute the probability, you just use the autoregressive chain rule computation. And so you evaluate the probability of the first pixel, the second pixel, given the first one. Just multiply all those things together, and that gives you the likelihood. That's the formula from an autoregressive model. And you do that for every input image, the same logic, the same function. And then you get different results because the images are different in some fundamental way. Yeah? Can you explain one more time what the x-dimension and y-dimension on this graph represents. Yeah, so the x dimension is essentially p of x, the different probability values that are assigned by the model. And it's in bits per dimension because it's normalized by the number of dimensions that you have in the images. But think of it as p of x, rescaled so that it's a little bit more meaningful. But roughly, it's the probability. And on the y-axis, you have the how many images are assigned. It's a histogram. How many images are assigned different probability values. And so you get this Gaussian where even all the images in the training set, they are given different probability values, but roughly, they range-- they are usually in this range between 1 and 4. And if you look at adversarial attacks, they are significantly separated. So they're different in probability. Cool. And then they can also be used for speech. But let me skip that. And the summary is that autoregressive models are pretty general. They're good because it's easy to sample from them. It's easy to evaluate probabilities, which are useful in itself because you can do things like anomaly detection. You can extend it to continuous variables. One issue with autoregressive models is that there is not really a natural way to cluster data points or get features. We'll see that latent variable models are going to be much more natural for that. And now the next thing is learning, which is going to be pretty straightforward for autoregressive models. You can probably guess how you would train them. An autoregressive model is just a sequence of basically classifiers, and then you just train all the classifiers the usual way, essentially. 
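A sketch that ties these two threads together: the chain-rule likelihood used for the anomaly-detection histograms above is also, averaged over a training set, exactly the maximum-likelihood objective discussed next. This assumes a PixelCNN-style model that returns per-pixel categorical logits in one parallel pass; the interface is illustrative, not from the lecture.

import math
import torch
import torch.nn.functional as F

def bits_per_dim(model, x):
    # x: (B, C, H, W) integer pixel values in [0, 255].
    # model(x) -> (B, 256, C, H, W) logits for each conditional p(x_i | x_<i).
    logits = model(x)                                   # all conditionals in one parallel pass
    log_probs = F.log_softmax(logits, dim=1)
    ll = log_probs.gather(1, x.long().unsqueeze(1)).squeeze(1)  # log p(x_i | x_<i) per pixel
    nll_nats = -ll.flatten(1).sum(dim=1)                # chain rule: sum over all dimensions
    n_dims = x[0].numel()
    return nll_nats / (n_dims * math.log(2))            # nats -> bits, averaged per dimension

Thresholding this score (flagging inputs whose bits per dimension is unusually high, i.e. whose likelihood is unusually low) is the anomaly-detection idea from the histogram above.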
And at a high level, remember, we have your model family, which could be autoregressive models. You have data to train the model. You have to specify some notion of distance. So how good your model distribution is, how similar it is to the data distribution. And we've seen how to define a set of distributions using neural networks. And now the question is, how do you optimize the parameters of the neural network to become as close as possible to the data distribution? And the setting is one where we assume we're given a data set of samples from the data distribution. And each sample is basically an assignment to all the variables in the model. So it could be the pixel intensities, every pixel intensity in each image in the model or which is the same as a standard classification problem where you might have features and label. You get to see the values of all the random variables. And the assumption is that each data point is coming from the same distribution, so they all sampled from the same data distribution. So they are identically distributed, and they are independent of each other, which is a standard assumption in machine learning. And then you're given a family of models, and the goal is to pick a good model in this family. So the model family could be all Bayesian networks with a given structure, or it could be fully visible sigmoid belief network where you can think of a bunch of logistic regression classifiers. They each have a bunch of parameters. And the question is, how do choose the parameters, such that you get a good model. Well, when the only thing you have access to is a bunch of samples from some unknown data distribution. And the goal is to come up with a model that is a good approximation to this unknown data-generating process. And the problem is that you don't know what data is. I cannot evaluate data on an arbitrary input. The only thing I have access to is a bunch of samples from this distribution. And in general, this is pretty tricky because you can imagine samples tell us something about which axis, let's say, are likely under the data distribution. But there is a lot of information that is just loss that we're just losing whenever we just sample from our distribution. All right, so let's say that we're trying to model MNIST again. And so, let's say, modeling 784 binary variables, black and white pixels. And what I claim is that this is a really, really hard problem because x is so high-dimensional that there is just so many different possible images, that even, basically, regardless how large your training set is, this is a really, really hard problem. If you think about it, how many possible images are there? If we have binary variables, you have 784 of them. There is 2 to the 784, which is roughly 10 to the 236 different images. And somehow, you need to be able to assign a probability to each one of them. So let's say that you have maybe 10 million training examples or 100 million or a billion training examples. There is still such a huge gap between however many samples you have and all the possible things that can happen, that this is just fundamentally a really, really hard problem. This is way more than the number of atoms in the universe. So there's just so many different possible combinations. And somehow, you need to be able to assign a probability value to each one of them. And so you have sparse coverage. And so, this is just fundamentally a pretty hard problem. 
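The count quoted above is just a back-of-the-envelope conversion:

\[
2^{784} \;=\; 10^{\,784\,\log_{10}2} \;\approx\; 10^{\,784 \times 0.301} \;\approx\; 10^{236}.
\]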
And then, there are computational reasons, even if you had infinite data, training, these models might not be-- might still be challenging, just because you have finite compute. And so somehow we'll have to be OK with approximations, and we'll still try to find, given the data we have, we're going to try to find a good approximation. And so the natural question is, what do we mean by best? What's a good approximation? What should we even try to achieve to do, try to achieve here, given that there are fundamental limits on what we can do. And so the setting, what best means really depends on what you want to do. One goal could be to just do density estimation. So if you think about anomaly detection, we just talked about, you really care about being able to assign reasonable probabilities to every possible inputs because you care about-- because, let's say, you care about that. And if you are really able to estimate this full joint probability distribution accurately, then you can do many other things. Then, you can condition on a subset of the variables. You can infer the others. You can do basically everything you want. But it's a pretty tall order. It's a pretty challenging problem, as we've just seen before. Another thing you can do is maybe you have a specific task in mind, right? If you already know how you're going to use this model, perhaps, you can try to train a model that performs well at that particular task. If you only care about classifying images in spam versus not spam, then maybe you actually want to build a discriminative model that just predicts y given x. Or if you know that you just care about captioning an image or generating images, given captions, then maybe you don't need to learn a joint distribution between images and captions. You just need to learn the conditional distribution of what you're trying to predict, given what you have access to what test time. That can make your life a little bit easier because you don't think about density estimation. You're saying, I don't have any preference about the task the model is going to be given. I want to do well at every single possible task. But if you know that there is a very specific way you're going to use the model, then you might want to train the model so that it does well at that specific task you care about. Other times you might care about structure, like knowledge, discovery, but we're not going to talk about that in this class. And so, we'll see, first, how to do one, and then we'll see how to do two. And so let's say that really what you want to do is you want to learn a joint probability distribution of the random variables that is as good as possible-- as good an approximation as possible to the data distribution that generated your data. How do you do that? This is basically density estimation. It's a regression problem. You can think of it as an estimation problem because, again, you want to be able to assign a probability value to every possible assignment of values to the random variables you have. You're trying to build a model over. And so at this point, really, we just want the joint, defined by the data distribution, which is unknown. But we have access to samples to be close to this model that to some distribution in your, in your model family, P theta. And so the setting is like this. So there is this unknown P data. There is a bunch of samples that you have access to it. 
There is a bunch of distributions in this set, say, all the distributions that you can get as you change the parameters of your logistic regression classifiers or your transformer model. It doesn't matter. And somehow, we want to find a point that is close, with respect to some notion of similarity or distance, to the true underlying data distribution. So the first question is, how do we evaluate whether or not two joint probability distributions are similar to each other? And there are many ways to do it. And as we'll see, we're going to get different generative models by changing the way we measure similarity between two probability distributions. There are some ways of comparing probability distributions that are more information theoretic. We're going to see today, like maximum likelihood, based on compression, that will give you certain kinds of models. There's going to be other ways that are more based on whether you can generate-- you could say, OK, if I generate samples from P data, and I generate samples from P theta, you should not be able to distinguish between the two. That would give rise to something like a generative adversarial network. So there's going to be different ways of defining similarities between distributions, and that will be one of the axes, one of the ingredients that you can use to define different types of generative models. For autoregressive models, a natural way to build a notion of similarity is to use the likelihood because we have access to it. And so we can use a notion of similarity that is known as the KL-divergence, which is defined like this. The KL-divergence between distributions p and q is just basically this expectation with respect to all the possible things that can happen. All the possible things that can happen are weighted with respect to the probability under p. And then you look at the log of the ratio of the probabilities assigned by p and q. And it turns out that this quantity is non-negative, and it's zero if and only if p is equal to q. And so it's a reasonable notion of similarity because it tells you, if you somehow are able to choose one of them, let's say, p to be p data, q to be your model distribution, if you are able to drive this quantity as small as possible, then it means that you're trying to make your model closer to the data. And if you are able to drive this loss to zero, then you know that you have a perfect model. Well, I have a one-line proof, but I'm going to skip it, showing that it's non-negative. The important thing is that this quantity is asymmetric. So the KL-divergence between p and q is not the same as the KL-divergence between q and p. In fact, the KL-divergence, if you use one versus the other, it's going to give us both reasonable ways of comparing similarity. One will give us maximum likelihood training. One will be more natural and will come up again when we talk about generative adversarial networks. It's going to be harder to deal with computationally, but it's also like a reasonable way of comparing similarity between p and q. Yeah, so they are asymmetric. And the intuition, as I mentioned before, is that this kind of quantity has an information theoretic interpretation, and it tells you something to do with compression. So the idea is that when you're building a generative model, you're essentially trying to learn a distribution. If you have access to a good probability distribution over all the possible things that can happen, then you also have access to a good way of compressing data.
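For reference, the definition being described is:

\[
D_{\mathrm{KL}}(p\,\|\,q)
\;=\; \mathbb{E}_{x\sim p}\!\left[\log\frac{p(x)}{q(x)}\right]
\;=\; \sum_{x} p(x)\,\log\frac{p(x)}{q(x)}
\;\ge\; 0,
\]

with equality if and only if \(p = q\), and in general \(D_{\mathrm{KL}}(p\,\|\,q) \neq D_{\mathrm{KL}}(q\,\|\,p)\).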
And essentially, the KL-divergence between p and q tells you how well compression schemes based on p versus q would perform. And so specifically, it's telling you, if the data is truly coming from p, and you use an optimization, a compression scheme that is optimized for q, how much worse is it going to be than a compression scheme that was actually based on the true distribution of the data? So intuitively, as I mentioned, knowing the distribution that generates the data is useful for compression. And so imagine that you have 100 binary random variables, coin flips. If the coin flips are unbiased, so 50/50, heads, tails, then there is not much you can do. The best way to compress the result of flipping this coin 100 times is to basically use one bit. Let's see zero to encode head, one to encode tails, and on average, you're going to use one bit per sample. And that's the best thing you can do. But imagine now that the coin is biased. So imagine that heads is much more likely than tail. Then you know that you are going to-- out of this 100 flips, you're expecting to see many more heads than tails. So it might make sense to come up with a compression scheme that assigns low short codes to things that are going to be much more frequent. So you could say that you could batch things together, and you could say sequences like HHHH are going to be much more common than sequences like TTTT. And so you might want to assign a short code to sequences that you know are going to be frequent and a long code to sequences that you think are going to be infrequent. And that gives you to savings in practice. So an example that many of you are probably familiar with is Morse code. That's a way to encode letters to symbols, like dots and dashes. And if you think about it, there is a reason why the vowels, like E and A are assigned to this very short code, while a letter like U is a sign, a very long code with four elements. And that's because vowels are much more common in English, so you are much more likely to use-- if are trying to send a message to somebody, you are much more likely to use vowels. And so if you want to minimize the length of the message, you want to use a short encoding for frequent letters and a long encoding for infrequent letters. And so all this to say is that KL-divergence has this interpretation, and it's basically saying if the data is truly distributed according to p, and you try to build a compression scheme that is optimized for q, you're going to be suboptimal. Maybe in your model of the world, the vowels are much more frequent than are much more infrequent than q. So you have a bad generative model for text. Then if you try to optimize, come up with a scheme based on this wrong assumption, it's not going to be as efficient as the one based on the true frequencies of the characters. And how much more ineffective your code is, is exactly the KL-divergence. So the KL-divergence exactly measures how much more inefficient your compression scheme is going to be. And so if you try to optimize KL-divergence, you're equivalently trying to optimize for compression. So you're trying to build a model, such that you can compress data pretty well or as well as possible, which is, again, a reasonable kind of way of thinking about modeling the world. Because, in some sense, if you can compress well, then it means that you're understanding the structure of the data, you know, which things are common, and which ones are not. 
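The compression statement can be written as the standard identity relating expected code length, entropy, and KL:

\[
\mathbb{E}_{x\sim p}\!\left[-\log_2 q(x)\right]
\;=\; H(p) \;+\; D_{\mathrm{KL}}(p\,\|\,q),
\]

so, in expectation, a code built from the wrong model \(q\) costs exactly \(D_{\mathrm{KL}}(p\,\|\,q)\) extra bits per symbol (with base-2 logarithms) compared to the optimal code built from the true distribution \(p\).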
And that's the philosophy that you take if you train a model using KL-divergence as the objective function. So now that we've chosen KL-divergence as one of the ways of measuring similarity between distributions, we can set up our learning problem as saying, OK, there is a true data generating process. There is a family of distributions that I can choose from. I can measure how similar my model is to the data distribution by looking at this object. And so, intuitively, if you think about this formula, this thing is saying, I'm going to look at all possible, let's say, images that could come from the data distribution. And I'm going to look at the ratio of probability assigned by the data distribution and the model. So I care about how different the probabilities are under the model and under the data distribution. If those two match, so if they assign exactly the same probability, then this ratio becomes one. The logarithm of 1 is 0. And you see that the KL-divergence is exactly zero. So you have a perfect model. If you assign exactly the same probability to every x, then you have a perfect model. Otherwise, you're going to pay a price. And that price depends on how likely x is under the data distribution and how far off your estimated probability is from the true probability under the data distribution. Yeah? [INAUDIBLE] through density estimation. Great question. The question is, OK, this looks reasonable, but how do you compute this quantity? How do you optimize it? It looks like it depends on the true probability assigned under the data distribution, which we don't have access to, so it doesn't look like something we can optimize. And we'll see that it simplifies into something that we can actually optimize. Yeah, a great question. So what would the interpretation be of the KL-divergence of T theta, like two lines of theta, just flipped it. And what is that-- It be more like-- the question is, what happens if we flip the argument of here, and we have what's called as the reverse KL. So the Kl-divergence between P theta and P data. So it would be the same thing. But in that case, we would be looking at all possible things that can happen. We would weigh them with respect to the model P theta, and then the ratio here would again be flipped. So we care about the ratios, but in a different sign, basically. And so the quantity would be zero if and only if they are identical. But you can see that it has a different flavor. Because if you look at this expression, we're saying, it does what happens outside, let's say, of the support of the data distribution doesn't matter with respect to these laws. Well, if you had P theta here, then you would say, I really care about the loss that I achieve on things I generate myself. And if you think about how these models are used, that actually seems like a more reasonable thing to do. Because maybe it really matters. You really want to score the generations that you produce, as opposed to what's available in the training set. But it will turn out that the nice property that we're going to see soon that makes this tractable doesn't hold when you do reverse KL-divergence. So that's why you can't really optimize it in practice. Yeah? So even though, say, the KL-divergence of p and q is not equal to KL-divergence of q and p, so if you have two sets of p2q1q1, if KL p1q1 is smaller than p2q2, with the reverse-- with a KL q2q1q1, those will be smaller than KL of q2p2p2. As far as I know, not necessarily. Yeah. Yeah. 
I wonder, do we use other metrics to evaluate the distance between two distributions, let's say, do integrations between the two distributions? Yeah, so the question is, do we ever want to use other metrics? Yes, we'll see that in future lectures. We'll get different generative models, simply by changing this one ingredient. So you can still define your family in any way you want, but we might change the way we compare distributions. Because, at the end of the day, here, we're saying we care about compression, which might or might not be what you want. If you just care about generating pretty images, maybe you don't care about compression. Maybe you care about something else. And we'll see that there is going to be other types of learning objectives that are reasonable. And they give rise to generative models that tend to work well in practice. I'm just curious, for this one, for this P data key, you mean this order, does it make more-- does it have more meaning because the P data is really the empirical distribution. But if you flip it, and then does it make it less meaningful, meaning although it seems to be like a symmetrical, but it doesn't make it much less meaningful because we don't really trust-- like the Monte Carlo simulation, you yourself imagined out of-- So the question is, again, should the expectation, with respect to the true data distribution, or should we with respect to the model, which is what you would get if you were to flip the order here? And the quantities will be zero. Both of them will be zero if and only if you have perfect matching. But in the real world, where you would have finite data, you would have limited modeling capacity. You would not have perfect optimization. You would get very different results. And in fact, you get a much more-- if you were to do the KL-divergence between P theta and P data, you would get a much more mode seeking-like behavior, where you can imagine if you put all the probability mass into a single mode. It might look like you're still performing pretty well according to this objective. So it tends to have a much more mode-seeking objective compared to the KL-divergence, which is forcing you to spread out all the probability mass over all the possible things that can happen. So if there is an x that is possible under P data, and you assign it zero probability, you're going to get an infinite loss. So it's going to be very, very bad. So you're forced to spread out the probability mass. If you do reverse KL, that is an incentive to concentrate the probability mass. So the behavior, as you said, are going to be very different. And depending on what you want, one might be better than the other. So we've been discussing about the flavor when we do one way of KL-divergence and the flipped way. Is it analogous to precision and recall? Because in one case, we're considering the denominator is the all true things. And another, we are considering all the positive predictions. That happens. The question is, does this have the flavor of precision and recall? And yes, it has a very similar. It's not exactly precision and recall. It's a softer thing, but it has the flavor of you care more about precision versus recall. It's a good way to put it. All right, so we have this loss, which you can expand the expectation. That's something like this. And now we know that this divergence is zero if and only if the distributions are the same. 
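To see the mass-covering versus mode-seeking behavior just described, here is a brute-force sketch: it fits a single Gaussian to a discretized bimodal target by minimizing either the forward or the reverse KL over a grid of means and standard deviations. All of the numbers are made up for illustration.

```python
import numpy as np

# Discretized bimodal "data" distribution: a mixture of two well-separated bumps.
x = np.linspace(-8, 8, 1601)

def normal(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

p_data = 0.5 * normal(x, -3, 0.7) + 0.5 * normal(x, 3, 0.7)
p_data /= p_data.sum()                      # normalize on the grid

def kl(a, b):
    return np.sum(a * np.log(a / b))

# Model family: a single Gaussian, searched by brute force over (mu, sigma).
best_fwd, best_rev = None, None
for mu in np.linspace(-4, 4, 81):
    for s in np.linspace(0.3, 5, 48):
        q = normal(x, mu, s)
        q /= q.sum()
        fwd, rev = kl(p_data, q), kl(q, p_data)   # forward vs. reverse KL
        if best_fwd is None or fwd < best_fwd[0]:
            best_fwd = (fwd, mu, s)
        if best_rev is None or rev < best_rev[0]:
            best_rev = (rev, mu, s)

print("forward KL pick (mu, sigma):", best_fwd[1:])  # wide Gaussian covering both bumps
print("reverse KL pick (mu, sigma):", best_rev[1:])  # narrow Gaussian sitting on one bump
```

With a target like this, the forward-KL fit spreads out to cover both bumps, while the reverse-KL fit locks onto one of them, which is the trade-off the precision-versus-recall question was getting at.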
So if you can optimize this as a function of theta to make it as small as possible, it's a reasonable learning objective. Measure compression loss. The challenge is, as was mentioned before, is that it might look like it depends on something you cannot even compute it because it depends on the probability assigned to all the possible things that can happen under the true model-- under the true data distribution, which you don't know. But if you just decompose the log of the ratio as the difference of the logs, you get an expression that looks like this. And now you can note that the first term here does not depend on theta. It's just like a shift. It's a constant that is independent on how you choose the parameters of your model. And so for the purposes of optimizing theta, you can ignore the first term. Actually, if you're trying to make this quantity as small as possible, regardless of how you choose theta, this is going to be the same, as you can effectively ignore it for the purposes of optimization. And so if you try to find a theta that minimizes this expression because there is a minus here, the best thing you can do is to basically make this thing here as large as possible, what I have here. And this term here should be somewhat familiar. What we're saying is that we should pick the distribution that assigns basically the highest probability to the axes that are sampled from the data distribution. And so this is really a maximum likelihood estimation. We're trying to choose a model that puts high probability on the things that you have in the training set, essentially, which is the training objective that you've seen before, probably in other classes of trying to pick parameters that basically maximize the probability of observing a particular data set. We're trying to choose parameters, such that, in expectation, the average log likelihood of the data is as high as possible. So you can see that that's equivalent to minimizing our KL-divergence, which, as we've seen, is the same as trying to do as well as you can at this compression task. And one caveat here is because we've ignored this term, it's possible to compare two models. So you have a theta 1 and a theta 2. I can tell you which one is doing a better job, but you can never know how close you truly are to the data distribution. You can only evaluate the loss up to a constant. So you'll never know how much better could it have been. You can't really evaluate that. And that's one of the problems here, is that we don't know how much better could we have been, because there's always this shift that cannot be evaluated. And for those who have seen this in other classes, that's basically the entropy of the data distribution. And that's telling you how hard is it to model the data distribution or what's the-- yeah, how random is the data distribution to begin with, how hard is it to model the data distribution, if you had access to the perfect model. That doesn't affect how well your particular model is doing, but it's the thing you need to know how close you are truly to the data distribution. As you mentioned, you could compare two models using this, but you'd only get the divergence between the two, and you still wouldn't know how they [INAUDIBLE] So you could get-- if you take the-- let's say, you have a P theta 1 and a P theta 2, and you take the difference between the KL-divergence between data and P theta 1 minus the KL-divergence between theta and P theta 2, the constant cancels out. 
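In symbols, the decomposition being used here is:

```latex
D_{\mathrm{KL}}\!\left(p_{\mathrm{data}} \,\|\, p_\theta\right)
  = \underbrace{\mathbb{E}_{x\sim p_{\mathrm{data}}}\!\left[\log p_{\mathrm{data}}(x)\right]}_{=\,-H(p_{\mathrm{data}}),\ \text{independent of }\theta}
    \;-\; \mathbb{E}_{x\sim p_{\mathrm{data}}}\!\left[\log p_\theta(x)\right]

% so minimizing the divergence over theta is maximizing expected log-likelihood:
\arg\min_\theta D_{\mathrm{KL}}\!\left(p_{\mathrm{data}} \,\|\, p_\theta\right)
  = \arg\max_\theta \; \mathbb{E}_{x\sim p_{\mathrm{data}}}\!\left[\log p_\theta(x)\right]
```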
And so you know which one is closer to the data distribution. But you never know how close you are. So going back to this picture, I guess what I'm saying is that-- maybe it's too many-- given two points in here, you can tell which one is closer to the data distribution, but you never know the length of this segment, like you don't know how close you actually are. [INAUDIBLE] we have two models and the second term is [INAUDIBLE]. Is there any way to objectively compare which of the two is better, relative to the [INAUDIBLE] other than to just have zero. Yeah, so if you have models that achieve exactly the same average log likelihood, which one is better? Occam's razor would tell you pick the simplest one. And that's usually a good inductive bias. OK. Now, one further problem is that this quantity here still involves an expectation, with respect to the data distribution, which we still don't have access to. So you can't still optimize this quantity. However, we can approximate the expected log likelihood with the empirical log likelihood or the average log likelihood on the training set. So remember that what we would really care about is the average log likelihood, with respect to all the things that can possibly happen when you weight them with the probability given by the data distribution that we don't have access to. But we can approximate that by going through our data set and checking the log probabilities assigned by the model to all the data points in the data set. And to the extent that the data set is sufficiently large, I claim that this is a good approximation to the expected value. And the intuition is that you have an expectation. You have a sample average. To the extent that you take an average with respect to a large enough number of samples, the sample average will be pretty close to the expectation. And now this is a loss that you can compute. You just go through your training set. You look what's the likelihood assigned by the model to every data point, and you try to make that as large as possible. And so that's maximum likelihood learning. That's the thing that you've seen before. Try to find the distribution that maximizes the average log probability over all the data points in your training set D. And as usual, you can ignore this one over D. That's just a constant. It doesn't involve, doesn't depend on theta. And so you get the usual loss function. And note, this is exactly the same thing as saying because the data points are independent, maximizing this expression is exactly the same thing as maximizing the probability of observing the data set that you have access to. So this is a reasonable learning objective. You have a bunch of data, and you're trying to find the parameters that maximize the probability of sampling data set, like the one you have access to. If you take a log of this expression, the log of a product becomes a sum of logs, and then you get that these two things are exactly the same. So again, very reasonable training objective. Let's find parameters that maximize the probability of observing the data set that we have access to. Yeah? We use similar numerical methods to estimate that constant term from before, where you could-- like in your data set, based on x, and then estimate the entropy of that. Yeah, so the question is, can you use similar tricks to estimate this? So you can certainly estimate the expectation, but then the problem is this log probability. And that one is much harder to estimate. 
And you can try to do kernel density estimates, or you could even use P theta in there, if you believe you have a good approximation, then you can plug it in, but you'll never know how far off you are. So there are always approximations. So obviously, there's a rule of thumb that says samples where p theta of x is close to zero weigh heavily in the objective. In practice, do people do anything to maybe alleviate that a bit? I think for discrete distributions, maybe you have a softmax that will make it a bit further away from zero. Is that something that happens in continuous distributions, or how do you help alleviate that? Yeah, so that goes back to what we were saying, what is this model doing? It's trying to make sure that if something can happen in the training set, you're going to be forced to put some probability mass there, which is a good thing, right? You're going to be forced to spread out the probability mass so that the entire support of the data set is covered by your model. Now, the problem is that you're going to always have finite modeling capacity, right? So if you put probability mass there, you might be forced to put probability mass somewhere that you didn't want to. And maybe then your model will hallucinate weird stuff that was not in the training set, but you have to generate them because you're forced by this objective to spread out the probability mass. Again, back to precision and recall. You need to have a very high recall. Everything in the training set has to get non-zero probability. And as a result, maybe your precision goes down because then you start to generate stuff that should not have been generated. So that's the takeaway of that line. Cool. Now, why does this work? Why can you approximate this expectation with this sample average? This is something that is basically a Monte Carlo estimate. You might have seen it before. The idea is that if you have an expectation of some function-- there's a random variable x, there is a function g of x, and you want to get the expected value of g of x, which is just this thing. The true thing would look at all the things that can happen, and it would weight them with the probability under p. Alternatively, what you can do is you can just generate T scenarios, T samples, and look at the average value of g under these T samples. And that should be a reasonable approximation, right? You can approximate the expectation by looking at the value of the function on these T representative samples. And this g hat is a random variable because it depends on these samples, x1 through xT. But it has good properties in the sense that, in expectation, it gives you back what you wanted. So although this g hat is now a random variable, in expectation this random variable has the right value, which is the true expectation of the function, the thing you wanted to compute. And the more samples you get, the better the approximation is. So although g hat is random, as you increase the number of samples T, g hat converges pretty strongly to this expected value. So the more samples you take, the less randomness there is, and the more likely you are to get close to the true answer you're looking for, which is the expectation of the function. And the variance also goes down as the number of samples increases. So you have a random variable that, in expectation, gives you the answer you want.
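A quick numerical sketch of the Monte Carlo estimator just described; the distribution and the function here are arbitrary choices made for illustration. The sample average is unbiased, and its spread shrinks as the number of samples T grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# We want E[g(X)] for X ~ Exponential(1) and g(x) = x**2.
# For this choice the exact answer is known (it is 2), which makes it
# easy to see how good the sample-average estimate is.
g = lambda x: x ** 2
exact = 2.0

for T in [10, 100, 10_000]:
    # Build many independent estimates, each using T samples,
    # to see how much the estimator fluctuates around the truth.
    estimates = [g(rng.exponential(1.0, size=T)).mean() for _ in range(1000)]
    print(T, np.mean(estimates), np.std(estimates))
# The mean of the estimates stays near 2 (unbiasedness), while their
# standard deviation shrinks roughly like 1/sqrt(T).
```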
And as you increase the number of samples, the variance of this random variables becomes smaller and smaller, which means that your approximation becomes more and more reliable. The less unlikely you are that the estimate you have is wildly off. And that's exactly what we're doing here. We are approximating. This expectation is a number. It's not random. We're approximating it with a quantity that depends on the training set, so different training sets would give you different answers. But if the training set is sufficiently large, this sample average would be very close to the expectation. And the larger the training set is, the more likely it is that this sample average that you get on the data set is actually going to be pretty close to the true expected value that you care about. Cool. And we'll see this idea come up often, this idea that there is an intractable expectation that you have to deal with, and you're going to approximate it using samples from the distribution. It's a pretty convenient way of making algorithm is more computationally tractable, essentially. Was there a question? OK, right. So now back to learning. I mean, you've probably seen maximum likelihood learning and examples, like learning the parameters of a Bernoulli random variable. So let's say you have two outcomes, heads and tails. You have a data set. So you've seen that you flipped the coin five times, and the first two times were heads, then you have a tail, then a heads and a tail. You assume that there is some underlying data distribution that produced the results of this experiment that you did with five tosses of the coin. And then you model all these Bernoulli distributions. And then, again, you just need one parameter to describe the probability of heads versus the probability of tail. And then you could try to fit, and you try to find a model of the world that is as close as possible to the true data generating process. For example, you might see that there is three heads out of five of coin flips, and then you try to find a good model for this data. And a way to do it is maximum likelihood. So in this case, P theta would be really, really simple. It's just a single Bernoulli random variable. You have one parameter, which is the probability two heads. One minus theta is the probability of tails. And then you have your data set, which is three heads and two tails, and then you can evaluate the likelihood of the data. And it's just that expression. So you have theta, theta, 1 minus theta, because the third result is a tail and so forth. And now this is a function of theta, as you change theta, the probability that your model assigns to the data set changes. And if you plot it, it has the shape. And then maximum likelihood would tell you pick the theta that maximizes the probability of observing this particular data set. And that basically corresponds to trying to find a maximum of this function. And in this case, what's the solution? That's 0.6, right? In this case, you can actually solve this in closed form, and you can work out what is the optimal theta, and it's going to be 0.6. And so we're basically going to do the same thing now but for autoregressive models, so this is the same idea. You have, except that now theta is very high-dimensional. It's all possible parameters of a neural network, but the y-axis is the same. It's basically the probability that your model assigns to the data set. 
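For the record, the 0.6 in the coin-flip example above comes straight out of the likelihood that was written down:

```latex
L(\theta) = \theta^{3}\,(1-\theta)^{2},
\qquad
\log L(\theta) = 3\log\theta + 2\log(1-\theta)

\frac{d}{d\theta}\log L(\theta) = \frac{3}{\theta} - \frac{2}{1-\theta} = 0
\;\Rightarrow\;
\hat{\theta}_{\mathrm{MLE}} = \frac{3}{5} = 0.6
```

That is, the maximum likelihood estimate is just the empirical fraction of heads, which is the closed-form answer the lecture refers to.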
And then you try to find theta that maximizes the probability of observing the data set that you have access to. And the good news is that in an autoregressive model, evaluating likelihoods is relatively easy. If you want to evaluate the probability that the model assigns to a particular image or sentence or whatever, the probability of x is just given by chain rule. It's the product of the conditional probabilities. And so evaluating the probability of a single data point is very easy. It's exactly the same computation we did before when we were trying to do anomaly detection. We just go through all the conditionals, and you multiply them together. And how to evaluate the probability of a data set. While the probability of the data set is just the product of the probabilities of the individual data points, and the individual data points are just obtained through chain rule. And so, again, it's all pretty simple. If you want to maximize the probability of observing the data set that you have access to, you can also take a log. And you can maximize the log likelihood. And you get an expression that when you can turn the log of a product into a sum of logs. But we no longer have a closed form solution. So before, for the Bernoulli coin flips, you all knew the answer is 0.6. If you have a deep neural network here, you no longer have a closed form way of choosing theta, and you have to rely on some optimization algorithm to try to make this objective function as high as possible. You negate it and try to make it as small as possible. And so, for example, you can use gradient descent. So that's the objective function that we're trying to optimize. And if you take a log, I guess it boils down to this, which is much more natural. So you go through all the data points. You go through all the variables in each data point, and you look at the log probability assigned of that variable, given all the ones that come before it in that data point. So equivalently, what you're doing is-- remember that this P neural here are basically classifiers that try to predict the next value, given everything before it. So this loss is basically just evaluating the average loss of all these classifiers across data points and across variables. And so again, basically minimizing KL-divergence is the same as maximizing log likelihood, which is the same as basically trying to make these classifiers perform as well as they can. They should do a pretty good job at predicting overall data points. J overall variables I. They should do a pretty good job at predicting the next variable, given what they've seen so far for that particular data point. And so all of this is basically boiling down to let's try to make these classifiers that predict the next variable, given the ones before it, as efficient, as good as possible in terms of the essentially cross entropy. So one way to do it is you can initialize all the parameters at random, and then you can compute gradients on this loss by backpropagation, and then you just do gradient ascent on this thing. It's non-convex. But in practice, basically, that's how you would train all these models. And it tends to work pretty well in practice. One thing to note is that as written, this quantity involves a sum over an entire data set. Like, if you want to know what's the effect of changing the parameter of one of these classifiers. 
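As a minimal sketch of what this loss looks like in code, assuming a PyTorch-style setup in which `model` is a stand-in for whatever network parameterizes the conditionals (the names and shapes here are placeholder assumptions, not a specific library's API):

```python
import torch
import torch.nn.functional as F

def sequence_nll(model, x):
    """Average negative log-likelihood of integer token sequences x
    under the chain-rule factorization p(x) = prod_t p(x_t | x_<t).

    x: LongTensor of shape (batch, seq_len).
    model(inputs) is assumed to return logits of shape
    (batch, seq_len - 1, vocab_size) for the next-token predictions.
    """
    logits = model(x[:, :-1])            # predict token t from tokens < t
    targets = x[:, 1:]                   # the tokens actually observed
    return F.cross_entropy(              # cross-entropy of the next-token classifiers
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
    )
```

Minimizing this cross-entropy over the training set is exactly the maximum likelihood objective above; the next question, which the lecture turns to, is how to get its gradients without touching the whole data set at every step.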
Let's say you want to get the gradient of the loss with respect of, let's say, theta I, where theta I is basically the parameters of the i-th conditional, you would have to sum over the whole data set to get this gradient, which would be, of course, way too expensive. Because you would have to go through the whole data set to figure out how to adjust the parameters of your classifier. And that's tricky. But well, here I'm actually-- OK, the good news is, OK, each condition can be optimized separately, if there is no parameter sharing. In practice, there is always parameter sharing. The challenge is that you have this big sum over all the data points in the data set. But again, what we can do is, we can use a Monte Carlo estimate. So instead of going through the whole data set, we can try to estimate what is the gradient, just by looking at a small sample of data points. Just like before, we were approximating an expectation with a sample average. We can think of this sum over m over all the data points. We can multiply by m and divide by 1/m, and then we can think of this sum over 1/m as an expectation, with respect to a uniform distribution over the data points in the data set. And so you can write down the gradient as the expectation of the gradient, with respect to a uniform distribution over the data set. And so far, we haven't gained anything. But now you can do. Monte Carlo. You can approximate this expectation by taking a bunch of samples and evaluating the gradient only on those samples. And that's basically stochastic gradient descent or mini batch, where you would basically select a small subset of data points. You will evaluate the gradient on those data points, and you would update your model accordingly. And so we see another layer of Monte Carlo simulation or Monte Carlo estimate, where instead of evaluating the full gradient, you evaluate the gradient on a subset of data points to make things scalable. And what else? Yeah, the other thing to keep in mind is that, well, there is always the risk of overfitting that came up before. If you just blindly optimize that objective, you could just memorize the data set. So the data becomes the model. You're going to perform pretty well at this prediction task, but that's not what we want. So we don't care about the performance on the data set. We care about performance on unseen samples that come from the same distribution as the one we've used for training. So the same problems that we've seen when you train a machine learning model, apply here. Blindly minimizing this loss might not do what we want because you can do very well on the training set, but you might not be doing well in general. You might not be generalizing. And so what you would need to do is to, somehow, restrict the hypothesis space or regularize the model somehow so that this doesn't happen. So that it doesn't just memorize the training set, and it doesn't-- and so you don't get this overfitting behavior. And the problem-- and then you get the usual bias variance trade-offs, where if you limit the model too much, if you restrict the modeling capacity too much, instead of using deep neural network, you use logistic regressions, or you make very strong conditional independence assumptions. Your modeling capacity or hypothesis space becomes too limited, and you might not be able to do well at minimizing that loss on the training set. 
And this is basically bias because it limits how well you can approximate the target distribution, even if you could optimize as well as possible. And then the trade-off here is that if you choose model families that are too flexible, then you encounter the other issue, which is variance. So your model might be fitting too well. It might be fitting even better than the true model that generated the data. And even small changes to the data set could have huge changes to the parameters that you output. And that's variance. And so you want to find a sweet spot where you balance the effect of bias and variance on the performance of your model. And visually, I think, this is an example. Let's say that you have a bunch of data points, and you're trying to fit a curve, trying to predict y from x. If you choose a very simple space of possible relationships, like all linear models, you can do your best at fitting, but if the model class is too simple, you're not going to be able to capture the true trend in the data. And so the bias here will hurt you too much. You'd say that underfits. If you choose a very flexible model, lots of parameters, you're going to be fitting the data set extremely well, but you can see that, perhaps, it's too flexible, the model. If you were to change one single data point a little bit, the predictions would change drastically. And that's maybe overfitting. And so you want, maybe, that sweet spot, where you have a low-degree polynomial that fits the data a little bit worse than this high-degree polynomial, but it will generalize. And it will perform OK in practice. And yeah, question? Is there any way we can-- because when we train our models, we usually keep those hyperparameters fixed, and then we train the model. So it's like the case you show here. We say this is a polynomial with order three. Is there any way for us to make those choices trainable, or is there a more elegant framework? Well, yeah. So there's a few things you can do. One thing you can do to prevent overfitting is you could be Bayesian, but that's very hard computationally. Another thing you can do is you can try to do cross validation, where you keep some held-out data to evaluate the performance of your model. And if you see that there is a big gap between the performance that you had at training time versus what you had on the validation set, then you know you're overfitting, and so maybe you want to reduce the complexity of your model. And so, yeah, one thing you can do is you can reduce the complexity of your neural networks, reduce the number of parameters, share parameters, make the hypothesis space smaller in some way. Another thing that was mentioned before is you could try to use some soft preference for simpler models so that if you have two models that fit the data equally well-- they achieve the same loss-- maybe you have a regularization term that says prefer the simpler one. Maybe the one with fewer parameters or the one where the magnitude of the parameters is smaller. And the other thing is what I just mentioned. You can always evaluate performance on some held-out validation set. This is actually what people do in practice. And you can check, if there is a big gap between training and validation loss, then you know that you're probably overfitting. And maybe you want to reduce the size of the hypothesis space, or you want to do something to prevent that overfitting. And yeah, I think that's probably a good place to stop. I think training conditional models is pretty much the same thing.
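As a recap of the held-out-validation recipe described above, here is a small sketch using polynomial curve fitting, as in the underfitting and overfitting example; the data and the degrees are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: a smooth trend plus noise, split into train and validation.
x = np.sort(rng.uniform(-3, 3, size=40))
y = np.sin(x) + 0.3 * rng.normal(size=x.size)
x_tr, y_tr = x[::2], y[::2]
x_va, y_va = x[1::2], y[1::2]

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

# Fit polynomials of increasing degree and compare train vs. validation error.
for degree in [1, 3, 15]:
    coeffs = np.polyfit(x_tr, y_tr, deg=degree)
    train_err = mse(y_tr, np.polyval(coeffs, x_tr))
    val_err = mse(y_va, np.polyval(coeffs, x_va))
    print(degree, round(train_err, 3), round(val_err, 3))
# Degree 1 underfits (both errors stay high); degree 15 drives the training
# error way down but usually does worse on the held-out points; a middling
# degree tends to be the sweet spot, which is what the validation gap reveals.
```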
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
Unique_Aspects_of_the_Course.txt
LORNA GIBSON: So I think one of the special things about this course and about my kind of research on cellular materials is that I spend a fair amount of time talking about materials in nature. So I talk about wood, for instance, and what it is about the cellular structure that gives rise to the density dependence of wood properties and the anisotropy in wood properties. I talk about trabecular bone, and I talk about the structure and properties of the bone, but also we do a little bit of modeling on how you might look at bone loss and osteoporosis. So if you lose a certain fraction of the bone density, what residual strength would you expect the bone to have. We have a project on bamboo right now. We talk about the structure of bamboo. Bamboo is actually a grass. And this is a Chinese species of bamboo called moso bamboo, and you can see how big this one is. They even get bigger, maybe six or eight inches across. And what we're interested in doing is making something called structural bamboo products. And this is an example of a bamboo oriented strand board. So the same way people take wood, and they chop wood up into strands and make oriented strand boards for housing construction, you could, in principal, do the same thing with bamboo. And we have a project that's in collaboration with some colleagues at the University of British Columbia. They're the ones who are actually making the bamboo oriented strand board. And with some architects in England, in Cambridge, England, we're looking at things like how you might modify wood building codes that talk about wood structural products, how you would modify that for bamboo structural products. And what we're doing here at MIT is we're looking at the structure of the bamboo and doing some modeling of the mechanical properties of the bamboo itself. So that's one example. Here's another example. This is a bamboo laminate. So this is a little bit like a glue laminated wood member, but in fact, this one is made out of bamboo instead of out of wood. So it's the same kind of idea. Let's see. We've also had a project in the past on cork. So this is a cork from a wine bottle, and we've looked at that. Cork has an unusual mechanical property. You know, if you take a rubber band and you pull on it, if you can make it longer this way, it gets narrower that way. Well, if you load cork in one direction, if you, say, pull on it or compress it, it doesn't get any wider or narrower in the other direction. It just stays the same kind of size. And you can show that that's related to the structure of the cells. The cells are like little bellows, or like a little concertina. So you can imagine if you have a little concertina, and you push on it this way, it doesn't really get any bigger that way or smaller that way. It just stays the same dimension. And the cork cells look a little bit like that, and that's why they do that. So we have all these natural materials. We talk a little about the hierarchical structure in plants. The cell walls are fiber composites, and then there's a cellular structure. And plant materials have a sort of hierarchical structure, with several different levels of hierarchy, and we talk about that in the class as well. And we talk a little bit about why that makes these materials mechanically efficient and how you might look at designing engineering materials based on that. So there's a little bit of biomimicking in the class as well.
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
8_Foams_Nonlinear_Elasticity.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. LORNA GIBSON: All right. And I really wanted to show you my little hook video and I downloaded it so I thought we'd start just by watching that and then I'll pick up about modeling phones. So this takes like nine or 10 minutes, but I just thought it was cute. And I made it and I want you to see it. So let's do that to start. [VIDEO PLAYBACK] We're here at the Harvard University Botany Library, looking at a first edition of Robert Hooke's Micrographia, published-- How do I get rid of the bar, Greg? Oh, there it is. show the microscopic structure of materials. And it has a number of remarkable drawings in it. Here we see drawings of silk. These are two different silks. On the top here, we have a fine-waled silk. And in this more details drawing down here, you can see the patterned weaving of the silk. The bottom image here is a drawing of watered silk. And over here, there's another higher magnification image. And you can see the pattern here is more sharply angled. And it appears that this sharper angle here gives the different texture to the surface finish of the silk. So here we see a drawing of charred wood. And one of the things I find interesting about this drawing is how similar it is to modern electron micrographs which we've seen before. And in this drawing, we can see two of the main features. We see these small cells, which are fibers that provide structural support to the tree. And we see these larger cells here, which are vessels which allow fluids to go up and down the tree. And here we see a drawing of the surface of a rosemary leaf, with the unexpected, tiny, little bars. And this is something that you can only see with the microscope. You wouldn't expect to see those when you just feel the surface of the rosemary leaf. So it's kind of interesting that with the microscope, you can see these features that are invisible to the naked eye. One of the main themes of material science is that the property of materials are related to their structure. And so being able to see the structure at a microscopic scale is very helpful. And today, we can even see the structure at the atomic scale. Robert Hooke understood this idea. And in the description of the cork, Hooke states, "I no sooner discerned these-- which were the first microscopical pores I ever saw-- but methought that I had with the discovery of them, perfectly hinted to me the true and intelligible reason for all of the phenomena of cork." So what he's saying here is that by looking at the structure and looking at the cells here in the drawing, he thinks he can understand the properties of cork or the phenomena of cork. What was it about Robert Hooke that allowed him to make this book? Why was it him and not somebody else? Well, Robert Hooke had kind of an interesting history. He grew up on the Isle of Wight. And as a boy, he loved making drawings. And he got quite skilled at making drawings. The other thing was, he loved making models of things. He made models of ships. He made a wooden clock that was a working clock when he was a kid. And as a teenager, he moved to London. And he became an apprentice to Sir Peter Lilley, who was a famous painter of the time. 
So his drawing was good enough that he would be working with a very well-known painter. After he did that, he went to the Westminister School. And he studied classics. He studied mathematics. But he also learned to use a lathe. And this was also very helpful in him making various sorts of apparatus. And as a student at Oxford, he worked in the lab of Robert Boyle. And his job in that lab was to develop scientific apparatus. And he did things like he built pumps that allowed Robert Boyle to do the experiments that led to Boyle's Law. When he returned to London after Oxford, he became the Curator of Experiments at the Royal Society. And one of the things he did was he got a microscope. He improved that microscope, increasing their magnification, which was what allowed him to make the beautiful drawings that we see today. And here in the preface of the book, we see that he even made a drawing of his microscope. So this thing down here-- this is Robert Hooke's microscope. The development of new microscopes with higher and higher magnifications continues to this day. Scanning electron microscopes were invented in the 1960s. And today, we have transmission electron microscopes and atomic force microscopes with even higher magnifications. At these higher magnifications, we can see details that Hooke was unable to see because of the limitations of the microscope that he had-- the optical microscope. But it's interesting to see today the images we see in a scanning electron microscope at a similar magnification to those that Hooke saw in his optical microscope. And it's remarkable to see how many of the features that we see in these much more fancy microscopes that he was able to capture in his drawings with his simple optical microscope. So here we have a picture of cork. We have Hooke's drawings showing two perpendicular planes. We also have this nice, little drawing of a cork branch here. Cork is the bark from the cork oak tree. And in Hooke's drawings of the microstructure, we can see these cells here are roughly box-like. They're more or less rectangular. And these cells here look more or less circular. So there's these two different perpendicular planes in the cork. And when we look at these scanning electron micrographs, we can see very similar structure. There are some cells that are roughly boxlike, and others that are more or less hexagonal or roughly rounded. One feature that Hooke was not able to see, though, that you do see on the scanning electron micrographs, is the waviness in the cell walls. And that was because the resolution of his microscope was insufficient to see that level of detail. And here in this illustration on the bottom here is a drawing of sponge. And when we look at the scanning electron micrograph, we see that the structure is remarkably similar to what Hooke has drawn. So here we have Hooke's drawing of feathers. And we can see he's made several drawings at different length scales. And if we look at this one here, we see the barbule. And you can see these little hooked regions there. And those hooks lock into the little feathers over on this side over here of the adjacent barbule. And in the higher magnification picture, you can see on one barbule, there's hooks on one side but not on the other. And it's this hooking of the two sections together that allows the feathers to maintain a smooth surface for the wing when the bird is flying. And you can see the same sort of thing when you look at the electron micrograph. 
So you can see the little hooks on one side of the barbules. And you can see how they interconnect together with the next barb. One of the most reproduced images from Hooke's book is that of the flea-- this image we see here. And you can see why. It's a gorgeous image. And it shows details that people had never seen before. People were amazed to see that the little flea that they might have found on their dog or something was actually made up of this compound body, with all these little plates and little hairs here. And you can see these little tiny claws on the legs, and the legs have all these hairs. Nobody had any idea that this is what a flea actually looked like. And so it was an amazing drawing. And it was something that people were just stunned by when Hooke's book came out. And if we look at a modern electron micrograph, we can see it's remarkably similar if we look at the same magnification. So Hooke showed many of the same details, showed some of the same hairs on the legs, showed the same sorts of plates, showed the claws at the ends of the legs. And our modern image is probably from a different species of flea. We don't know what species of flea that Hooke actually looked at. But you can see there's a tremendous similarity between the two images. And it's remarkable how many of the features that Hooke was able to capture in his drawing. And here we have the compound eye of the fly. And this, again, was astonishing to people in Hooke's day. And even today, people look at this image, and they're pretty amazed at the detail in this drawing. And again, we can compare this with a modern electron micrograph. And again, you can see the similarities between what Hooke saw and what we see in a modern scanning electron microscope at a similar magnification. In the 1980s, atomic force microscopes were invented, which have a resolution down to tens of nanometers. And today, there's transmission electron microscopes, which allow you to see the atomic structure. So for instance in a crystal lattice, you can see the individual atoms and the regular crystal structure. Today, most experimental studies of materials include photographs of the microscopic structure of the material taken through some sort of microscope. And the remarkable thing is that all of these studies really trace back to this book here that we're looking at today-- to Robert Hooke's Micrographia. [END PLAYBACK] There you have it. So I just thought that was kind of cute. You might enjoy that. So that was that. All right, let's get out of there. Stop. So let's go back to the foams. So I think last time, we got as far as talking about the linear elastic behavior of foams and modeling that. But we didn't quite get to looking at the compressive strength of the foam. So I think we got as far as comparing the models with these equations here, and these plots of the data. And what I wanted to pick up with today was looking at the compressive strength. And we'll look at the fracture toughness as well in tension. So we're going to start with nonlinear elasticity and the elastic collapse stress. So if we have an open-cell foam, the derivation for the elastic collapse stress is really pretty straightforward. We say the elastic collapse occurs when the cell walls buckles. So in this schematic here, you can see the vertical cell walls have buckled. And so there's going to be some Euler load that's related to that buckling. And that's just the usual Euler load-- n squared, the n constraint factor, pi squared E. 
This is going to be E of the solid, I over l squared-- the length of the member. And then the stress that corresponds to that is just going to be proportional to that buckling load over the area of the cell, which is just l squared, so just P critical over l squared. So that just goes as Es. I is going to go as t to the fourth, because we have that square sectioned member. And now this is going to be l squared. And that's an l squared. So that's l to the fourth. And so if I combine all of that together, I can say that the elastic buckling stress is going to be some constant-- and I think we're up to C4 now-- times the Young's modulus of the solid times the relative density of the foam squared. So that's our equation for the elastic buckling stress. And if you compare this with data, you can make an estimate of what C4 is. And we find that C4 is about 0.05. And you can also say that 0.05 really corresponds to the strain at which the buckling occurs. Because the Young's modulus goes as the constants 1 times Es times the relative density squared. So the strain's just going to be the stress over the modulus. So that does correspond to the strain. So that's saying that buckling compressive stress occurs at a strain of about 5%. So that's open cells. And then if we look at closed cells, if you recall when we looked at the moduli we looked at a couple of extra terms. One was associated with face stretching for the modulus. And the other was associated with the compression of the gas. For the buckling, the faces don't really contribute that much, because typically the faces are very thin relative to the struts. And because they're so thin, they buckle at a much lower load, and they don't contribute too much. So we're not going to worry about that contribution. So I'm just going to say that the thickness of the face is often small compared to the edges. And that really is from the surface tension in processing that draws material away from the face and into the edges. There can be some contribution from the internal pressure. So if the internal pressure is greater than atmospheric pressure, then the cell walls are pre-tensioned, and you'd have to account for that. So the buckling would have to overcome that pressure as well. So then you would have the buckling stress would just be what we have up there-- C4 times Es times the relative density squared. And then we just add on that factor P0 minus P atmospheric. The thing with the gas which tends to affect more than the buckling stress, though, is the post-collapse behavior. So let me just show you a couple of things here. So here's some data for the elastic collapse stress. And you can see on the y-axis, we've got the stress normalized by the Young's modulus of the solid. And on the x-axis, we've got the relative density. And that solid line there-- sort of solid, dark line-- is that equation there, which is the same as this one up here. And you can see the data lie fairly close to that. But what's interesting is if you look at the-- why is this not working? Maybe my batteries finally died. If we look at the post-collapse behavior, you can see if these are the stress-strain curves, they're not flat here. They have some rise to them. And this is a closed-cell foam. And you can imagine as you're compressing the closed-cell foam, you're reducing the volume of the cell. And as you doing that, you're increasing the pressure inside the cell from the gas. And you can calculate what that is. And I'll do that in a second. 
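Just to collect the open-cell result in one place before the gas-pressure calculation (using the lecture's notation: E_s for the solid modulus, rho*/rho_s for the relative density):

```latex
P_{\mathrm{crit}} = \frac{n^{2}\pi^{2} E_{s} I}{l^{2}},
\qquad I \propto t^{4},
\qquad
\sigma^{*}_{el} \propto \frac{P_{\mathrm{crit}}}{l^{2}}
\;\Rightarrow\;
\frac{\sigma^{*}_{el}}{E_{s}} = C_{4}\left(\frac{\rho^{*}}{\rho_{s}}\right)^{2},
\quad C_{4}\approx 0.05

% and since E^{*} = C_{1} E_{s} (\rho^{*}/\rho_{s})^{2} with C_{1}\approx 1,
% the collapse strain is \varepsilon_{el} = \sigma^{*}_{el}/E^{*} \approx C_{4}/C_{1} \approx 0.05.
```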
And if you subtract off that gas pressure contribution, that works out to this line here. Then these lines will be more flat, like this. And we already really pretty much worked out that gas contribution. So I'll just say for the post-collapse behavior, the stress rises due to the gas compression. And that's as long as the faces don't rupture. So if you have an elastomeric foam, typically they don't rupture. And what we had worked out before was that that pressure-- we called it P prime-- it was P0 minus P atmospheric-- that was equal to P0 times the amount of strain, epsilon, times 1 minus 2 times the Poisson's ratio divided by 1 minus epsilon times 1 minus 2 nu minus the relative density. And once you get to the buckling stress, then the Poisson's ratio becomes 0. So if you take a foam-- so I brought a little foam in so you can play around with this one-- so if you take a foam like this and you compress it, once you've buckled it like this, it's not getting any wider this way. And part of the reason for that is you've got all these pores in here. And the cells just collapse into the pores. They don't really need to move out sideways. So you can smush that yourself, and try to convince yourself that the Poisson's ratio is just 0. Yes, Matt. AUDIENCE: [INAUDIBLE] I guess I want to measure [INAUDIBLE] the gas contribution? LORNA GIBSON: Yes, so there is a strain rate effect with these things. But I wasn't going to get into that here. If you look in the book, it's described in the book. So I think there's two things. One is that the solid itself can have a rate dependency. And then there could be something connected. AUDIENCE: [INAUDIBLE]. LORNA GIBSON: Yeah, I mean, I'm not going to go into that here. But one could look at that. So let me just write down one more thing here, because if we let nu be 0, then this thing here becomes simpler. So we could say the stress post collapse as a function of strain would be our buckling stress and then plus this factor here. So that curve on the bottom over here-- if this is the stress-strain curve-- this little dashed line here-- that's the gas contribution. And that is this term here. So you can kind of see how the shape of the curves reflects that gas contribution. And when you subtract it out, you get pretty much a horizontal plateau over here. Are we happy? Yeah? AUDIENCE: [INAUDIBLE]? LORNA GIBSON: This is for the closed cell. Because the closed cell are the ones that are going to have the gas pressure. If it's open cells, the gas can just move out of the cells. AUDIENCE: [INAUDIBLE]? LORNA GIBSON: Oh, sorry, that was to show you that the Poisson's ratio was 0. And that's true for both of them. So then we can look at the plastic collapse stress. Say we had a metal foam. And we do a calculation a little bit like the one we did for the honeycombs, too. So we say the failure occurs when the applied moment equals the plastic moment. And the applied moment is proportional to the applied stress times the length cubed. So I'm going to call that applied stress-- our strength sigma star plastic times the length cubed. So if you think of, say, the little schematic up here, the force is going to go with stress times the length squared. And the moment's going to force times the length. So it's the stress times the length cubed. And then the plastic moment goes as the yield strength times the thickness cubed. 
And then if I just combine those, I get that the plastic collapse stress in compression is another constant-- I'm going to call it C5-- times the yield strength times the relative density to the 3/2 power. And if we look at data, we find that the constant is about equal to 0.3. And if I go to the next slide, here's a plot of the yield strength or the plastic collapse strength of the foam divided by the yield strength of the solid, plotted against the relative density. And that dark, bold line is this equation here. And you can see the data lie pretty well on that line. There's one data set that's a little bit above the line. But you can see the slope of that data set is still about 3/2. OK, and the same as in the honeycombs, we could say that we can get elastic collapse before the plastic collapse if we were at a low density. You can get the same thing in the foams. And you calculate out what the critical relative density is for that the same kind of way. So we can say we can get elastic collapse precedes the plastic collapse if the elastic buckling stress is less than the plastic collapse stress. So all we do is make those two things equal to figure out the critical relative density where you get the transition from one to the other. So the relative density has to be less than 36 times the yield strength of the solid over the Young's modulus of the solid squared in order to get buckling before yielding. And let's see, where can I put that? So for rigid polymers, that ratio of the strength of the solid over the modulus of the solid is about one over 30. And so the critical relative density for the transition is about 0.04. So you'd have to have a pretty low-density foam, but it's possible. And for metals, that ratio is about 1/1,000. And then the critical transition density is less than 10 to the minus 5. So essentially, it never happens for the metal foams. And then for the closed-cell foams, we could include the terms for face stretching and for the gas. But in practice, the faces don't really contribute very much. And typically for foams like say metal foams or a rigid polymer that had a yield point, the faces rupture. And then if the faces rupture, then you don't get the gas compression term, either. So I'll just write the full thing down. But typically, you don't need to use it. So the first term would be from the edges bending. And the second term would be from the faces stretching. And this would be from the gas. But in practice, the first term is really the only one that is significant. So for closed-cell foam, this equation works pretty well, too-- the same one as for the open-cell foams. OK, so if we had, say, a ceramic foam that was brittle, there'd be a brittle crushing strength. And then we get failure when the applied moment M is equal to the fracture moment Mf. And this works very similar to the plastic yield strength. So we find the applied moment goes as the global stress times the length cubed. And the fracture moment goes into the cell wall strength times the cell wall thickness cubed. So the brittle crushing strength goes as another constant-- let's call it C6-- times the wall strength times the relative density to the 3/2 again. And C6 is about equal to 0.2. And typically, ceramic foams have open cells. So I'm just going to leave it at the open-celled formula there. So there's one last thing for the compressive behavior, and that's the densification strain. And we just have an empirical relationship for the densification strain. 
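Before the densification strain, a small calculator pulling together the compressive strength scalings quoted so far, with the fitted constants from the lecture (0.05, 0.3, 0.2). This is only a sketch of the open-cell scaling relations, and the example numbers at the bottom are illustrative, not a specific material.

```python
def open_cell_foam_strengths(relative_density, E_s, sigma_ys=None, sigma_fs=None):
    """Compressive collapse stresses for an open-cell foam, using the
    bending-dominated scaling laws and fitted constants from the lecture.

    relative_density: rho*/rho_s (dimensionless)
    E_s: solid Young's modulus; sigma_ys: solid yield strength (plastic foams);
    sigma_fs: solid modulus of rupture (brittle foams). Units are whatever
    you pass in (e.g. MPa).
    """
    rd = relative_density
    out = {"elastic_buckling": 0.05 * E_s * rd ** 2}
    if sigma_ys is not None:
        out["plastic_collapse"] = 0.3 * sigma_ys * rd ** 1.5
        # Buckling precedes yield only below a critical relative density:
        out["critical_rd_for_buckling_first"] = 36 * (sigma_ys / E_s) ** 2
    if sigma_fs is not None:
        out["brittle_crushing"] = 0.2 * sigma_fs * rd ** 1.5
    return out

# Example: a rigid polymer foam with rho*/rho_s = 0.1, E_s = 1600 MPa,
# sigma_ys = 50 MPa (illustrative numbers only).
print(open_cell_foam_strengths(0.1, E_s=1600.0, sigma_ys=50.0))
```

For a solid with sigma_ys/E_s of about 1/30, the critical relative density comes out near 0.04, matching the transition quoted above for rigid polymers.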
So if you compress the foam and you get to very large strains, then the cell walls start to touch, and the stress starts to rise steeply. And there's some strain at which that occurs. And we call that the densification strain. And in the limit, the modulus at that point would go to the modulus of the solid. If you could completely squeeze all the pores out, the stiffness of that would go to the modulus of the solid. And you might expect that that densification strain is just 1 minus the relative density, but it actually occurs at a slightly smaller strain. So in a large compressive stress, or strain, I guess we could say, cell walls touch, and we start to get this densification. So the modulus in the limit would go to the modulus of the solid. And you might expect that the densification strain was just equal to 1 minus the relative density. But it occurs at a little bit less than that. So empirically, we find that it's just 1 minus 1.4 times the relative density. And then I have this plot here, which is really just fitting a line to that data for densification strain. So those equations describe the compressive stress or the compressive behavior of the foam. So we've got the moduli, we've got the three compressive strengths, and we've got the densification strain. So what we're going to do later on in the course is we'll use those models to look at how we can use foams and things like sandwich panels and looking at energy absorption. And we'll also look at these equations in terms of some biomedical materials-- looking at trabecular bone, and looking at tissue engineering scaffolds. So there's one last property I wanted to go over, and that's the fracture toughness. So if we were pulling the foam in tension, and we had a crack in the foam, we'd want to know what the fracture toughness would be for a brittle foam. And this follows the same sort of argument as we had for the honeycomb. So all of these equations really are just following the same kinds of arguments. But you can kind of see how having the honeycomb calculations makes it easier to do the foam ones. So we'll do the fracture toughness calculation, and then I want to talk a little bit about material selection and selection charts for foams. So that's less equation-y. OK, and we're just going to look at open cells here. So imagine we have a crack of length 2a. And we have some remote stress applied, so remote tensile stress, so I'm going to call that sigma infinity-- the far-away stress. And then we have a local stress on the cell walls. I'm going to call that signal local. So I have a little schematic that kind of shows what we're doing. So we're pulling on it. There's some crack. The crack length is large compared to the cell size. And we want to know what the fracture toughness is. So we can say from fracture mechanics the local stress is going to be equal to some constant times the faraway stress times the square root of pi a over the square root of 2 pi r. And that's at a distance r from the head of the crack tip. And if we look at our little schematic here, we could say it's hard to say exactly where the crack tip is, but it would be somewhere in here. And we'd say this next unbroken cell wall is a distance r ahead of the crack tip. And that r is going to be related to l. It's going to be some function of l. So I can say the next unbroken wall ahead of the crack tip at some distance r is going to be related to l. And that's subject to a force, which is going to be the local stress times l squared. 
So that force is going to go as local stress times l squared. And the local stress-- I can substitute this thing here in-- that's going to be proportional to the faraway stress. And I'm going to get rid of the pi's. And I'm going to substitute for r. I'm going to put in l. So it's going to be proportional to the faraway stress times the root of a over l and times l squared there. And then we're going to say, again, the edges are going to fail when the applied moment equals the fracture moment. And the fracture moment is going to go as the modulus of rupture of the cell walls times t cubed. And the applied moment is going to go as f times l. And I've got f from up there, so that goes as the faraway stress, sigma infinite, times the root of a over l. And now I've got l cubed, because there's an l squared there and there's an l down here. And then if I just equate those, then this is going to go as sigma fs times t cubed, like that. So then I can say the fracture strength is equal to my faraway stress. That's going to go as my modulus of rupture times the root of l for a times t over l cubed. And then my fracture toughness is going to be this tensile stress times the root of pi a. So there's going to be some other constant here, which I'm going to call that C8. We've got the modulus of rupture of the solid. I've got the square root of l, and I'm going to multiply it by pi so it's like other fraction mechanics kinds of equations. And then we multiply that times the relative density to the 3/2 power. And here, if we look at data, we find that that constant is about equal to 0.65. And here's another one of these plots. So here I've normalized the fracture toughness of the foams by the modulus of rupture of the cell walls times the root of pi l. So I've taken the cell size into account here, and I've plotted against the relative density. And that equation there is the same as this equation I've got down on the board. And this is the only one of the properties that we've looked at that depends on the cell size. There's a cell size dependence here. All right, so I think that's all the modeling of the foams. Are we good? I gave you a lot of equations. We're good? All right. So I want to talk about how we might design foams to improve their properties. And then I want to talk about how we might select foams for certain applications and look at selection charts. So when we've been talking about the foams, especially the open-cell foams, we've been saying their deformation is largely by bending of the cell edges. And if we could do something to increase the stiffness of the edges or the strength of the edges, then that would increase the overall properties of the foam. And there's a couple of ways to think about doing that. So the foam properties-- if the foam is controlled by bending of the edges, and the edges have some flexural rigidity, EI, if we could increase that EI of the edges, we would increase the properties of the foam. And one way to do that is by making the edges hollow. So if we had hollow edges, and you had a tube, then that would increase the EI. And we can work out how much it's going to increase them. And I have a little example here of-- a natural example of hollow foam struts. So this is a grass. I don't know what kind of grass it is. I just saw this grass. And we picked some different grasses, and we took some SEM pictures. And it has a really kind of common structure for grasses. It's very common for grass stems to have sort of a solid outer part and then a foam-like inner part. 
It's so common that botanists have a name for it. They call it the core-rind structure. And if you take one of these grass stems, and you look at the sort of foamy bit in the middle, and you do a SEM picture of that, you can see that the little cell walls are actually little hollow tubes. So one of these things-- it's a little hollow tube. So what I wanted to do is work out how much the modulus of the foam would increase if you could make all the edges into little hollow tubes. So we're going to start by saying the foam behavior is dominated by cell bending, so edge bending. And the foam properties can be increased by increasing the EI of the cell wall. So there's a couple of ways we could do that. So the first one is looking at hollow walls. So imagine I have a thin-walled tube-- just a circular, thin-walled tube. There's my little wall there. It has some radius little r, and a wall thickness t. And then imagine I have the same amount of mass, but now I have a solid circular section. And I'm going to say the radius of that is big R. So for our thin-walled tube, the moment of inertia is pi r cubed times the thickness, t, if it's thin. And for our solid circular section, I is going to be pi big R to the 4th over 4. And if I say I want to set this up so that the masses are equal, then the areas of the cross-sections have to be equal-- say it's from the same material. So the masses are going to be equal if pi R squared is equal to 2 pi r t. So I'm going to solve here for R. So the pi's are going to cancel out. So the masses are equal if R is equal to the square root of 2 times r times t. And then what we're going to do is see how the big is the moment of inertia of the tube relative to the solid. And the tube is pi little r cubed t. And the solid was pi R to the 4th, divided by 4. And I'm going to get rid of the R here, and get rid of the pi's there. So R to the 4th is going to be 4r squared t squared. So the 4s are going to go. And this boils down to r over t. So if I had a thin-walled tube, the moment of inertia is going to be r over t bigger than if I had the same mass in a solid circular section. So you can see for the little plant here, by making a thin-walled tube, you're increasing the stiffness of the foam with the same amount of material. That's the idea. And you can do a similar kind of analysis for other properties. So that's if we have hollow tubes. So another option is we could have cell walls that are sandwich structures. So imagine if the cell walls themselves were little, tiny sandwich structures. So when you have a sandwich beam, what you have is too stiff, strong faces that are separated by some sort of porous core, like a honeycomb or a foam or balsa wood. And the idea with the sandwich structure-- if I draw a little sketch of the sandwich, here's my faces. So imagine those are solid. So they might be aluminum sheets, or they might be fiber reinforced composites. And then we have some sort of cellular thing here as the core. And the idea is, that's analogous to an I-beam. So in the sandwich beam, we have two, stiff, strong faces separated by a lightweight core. So the core is typically a honeycomb, or a foam, or balsa wood. And the idea is, you increase the moment of inertia of the cross-section with little increase in weight. And if you think of an I-beam, an I-beam has a large moment of inertia, because you're separating the flanges by the web. And the sandwich beam works in the same way. You're separating the faces by the core. 
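(A quick numerical check of that r-over-t result before carrying on with the sandwich idea; this is just a sketch, and the radius and wall thickness below are assumed, illustrative values.)

import math

r, t = 1.0e-3, 1.0e-4           # tube radius and wall thickness in metres (assumed values)
R = math.sqrt(2.0 * r * t)      # solid radius with the same area: pi R^2 = 2 pi r t
I_tube = math.pi * r**3 * t     # thin-walled tube: I = pi r^3 t
I_solid = math.pi * R**4 / 4.0  # solid circular section: I = pi R^4 / 4
print(I_tube / I_solid, r / t)  # both are 10 here: same mass, but the tube is r/t times stiffer in bending

Back to the sandwich walls and the I-beam analogy.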
But the core doesn't weigh very much, because it's a cellular thing. So the faces of the sandwich are like the flanges in the I-beam. And then the core is like the web. So the idea is to make something called a micro-sandwich foam. So what you want to do is make the cell walls into sandwiches. And one way to do that is to disperse a large volume fraction of thin-walled spheres into the foam. And you have to get the geometry right to make it work. So let me draw a little kind of sketch here of how it works. So here's our thin-walled spheres. And then you're going to distribute those in a foam. Here's another sphere over here. The spheres are not perfect. Let's say there's another one in here. And then the idea is this stuff in here would be the foam. So these guys are hollow spheres. And say the spheres have a diameter D. And say they have a wall thickness here of t. And say that the separation of the spheres I'm going to call c. You can see that there. And then the cell size of the foam I'm going to call e. So there's a bunch of parameters you have to kind of play with to get this to work. So you have to have thin-walled spheres so the faces are thin. The sandwich panels work best when the faces are thin. So you need the thickness of the sphere to be much less than D. You need the faces to be stiff relative to the foam. So you need the modulus of the sphere material to be greater than the modulus of the foam. And you need the volume fraction of the spheres to be relatively high to get the spheres close enough together for this to work. So you want that volume fraction to be something like 50% to 60%. And for the foam, you need to have the foam cell size less than the separation between the spheres. You need to have a number of-- you can't just have one pore in here. That's not really like a foam. It won't behave like a foam as a continuum. So you need to have a number of different cell sizes in between each sphere. And so you need the cell size of the foam to be a lot less than the separation of the spheres there, c. But if you can control this geometry, you can get the sandwich effect. And you can get improved properties by doing that. So there's ways you can play around with the structure of the foams to improve their properties. So that was one thing I wanted to say. Another way to improve the properties of a foam-like material is to use one of those lattice materials. So we've been talking about ways to improve the bending stiffness. But if you could get rid of the bending altogether and have axial deformation in the cell walls, that would be much stiffer. And you can get axial deformation by having those 3D truss kind of materials. So I have a picture of this. There we go, so there's one of those 3D truss materials. So another alternative is to sort of get rid of the bending altogether, and to try to make a truss-type material. So there's various ways to make these. I think that we talked about a few of them earlier on. And you can analyze them as truss-type structures. And I can just run through a sort of little dimensional argument to get the modulus. So the modulus is going to go as the stress over the strain. The stress is going to go as a force over a length squared. The strain's going to go as a deformation over l. So this is just like what we had before for the foams. But in this case, the deformation is going to go with the force times the length over the area of the cross-section divided by Es, because we're pulling it or pushing it axially. 
So that goes as Fl over t squared Es. And if I just put that back in the equation here for the modulus, I get that we've got F over l. And I've got delta here, so that's F l t squared Es. And you just get the modulus goes as the modulus of the solid times t over l squared. And that goes as the modulus of the solid times the relative density. So for the open-celled foams, the modulus went as the relative density squared. So if it was 10% solid, the modulus would be 0.01. And this is saying if it's 10% solid, the modulus is 0.1. So it's much bigger. So this is all sort of well and good. The only difficulty is that when you look at the modulus, you can do reasonably well. But when you look at the strength, some of the members are going to be inevitably in compression. When you have these truss materials, some members are going to be in tension. Some members are going to be in compression. And the compression members tend to buckle. And once the compression members buckle, then you're back to the same kind of strength relationship that you have for the foam. So that's one of the difficulties of this. So let me say that the strength-- so if the strength was controlled by uni-axial yield, it would go linearly with relative density. But if it goes with buckling, it goes as the square. So I'll just say the compression members can buckle. And say you had a metal lattice. Then there's some interaction between the plastic behavior and the buckling. And you use what's called the tangent modulus instead of just the Young's modulus. And the tangent modulus is lower. And there's also what's called knock-down factors that can be large, too. So the knock-down factor can be like 50%. So the measured strength can be half of what you thought it was going to be. This should be a squared over here. Sorry. So even though the stiffness of these 3D trusses can be quite good, the strength often isn't quite as good as one might hope. So that's one of the issues with them. All right. So do you see the idea, though, with all these different micro structures, is that you can control the structure in a way to try to increase the bending stiffness or get rid of the bending stiffness and increase the axial stiffness? So there's things you can do to play around with that. And I wanted to talk a bit today about material selection charts for foams. So when we talked about woods, we started talking about this. Remember, I derived a little performance index. We said if we had a material and we wanted to have a given stiffness, and we wanted to minimize the mass, we had that performance index that was E to the 1/2 over rho. And we had a chart of modulus versus density. And we saw that wood was really good. You can do that for other sorts of properties, not just modulus. So you can make-- depending on what the mechanical requirement is, you can work out different performance indices. So I want to go into that in a little bit more detail. So the question is, how do we select the best material for some mechanical requirement? So in the wood section, we looked at the minimum mass of a beam of a given stiffness. And we saw that the performance index was E to the 1/2 over rho. So let me do another one of these little examples, and then I'll show you some more of them. So another example would be what material-- minimize the mass of a beam of a given strength or a given failure load. So we'll call the failure load Pf. 
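(Before working through that beam example, here is the stiffness comparison just made for the lattices, in a couple of lines; the relative densities are arbitrary illustrative values and the order-one constants are dropped.)

for rel_density in (0.05, 0.10, 0.20):
    E_foam_over_Es = rel_density**2    # bending-dominated open-cell foam: E*/Es ~ (rho*/rho_s)^2
    E_truss_over_Es = rel_density      # stretch-dominated 3D truss: E*/Es ~ (rho*/rho_s)
    print(rel_density, E_foam_over_Es, E_truss_over_Es)
# At 10% relative density the truss is about ten times stiffer than the foam, which is the
# attraction; the catch, as noted above, is buckling of the compression members in the strength.

Now, back to the beam of a given strength.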
And we can see the maximum stress in the beam is going to be the moment in the beam times the distance from the neutral axis y, and divided by the moment of inertia. So here, M is the maximum moment in the beam. And y is the maximum distance from the neutral axis. And I is the moment of inertia. And I'm going to say i goes as t to the 4. And I'm going to define a failure stress of the material sigma f. So sigma max is going to go as my failure load times the length. That would be the moment. The distance from the neutral axis is going to go as t. And the moment of inertia is going to go as t to the 4th. And that's going to be the failure strength there. So I can solve this for t. And then I'm going to write the mass in terms of t, and put that in there. So here t goes as Pf l divided by sigma f. And that's going to be to the 1/3 power. I guess I can scoot over here. Then we can say that the mass M goes as the density of times t squared times l. So the mass M is going to go as rho times l times t squared. So that whole thing goes to the 2/3 power. So if we look at the material properties, the mass goes as the density times the failure stress raised to the 2/3 power. So if we want to minimize the mass, we want to minimize rho over sigma f to the 2/3, or we want to maximize sigma F to the 2/3 over rho. So that's the performance index for that case. So we can obtain these performance indices for different loading configurations and different mechanical requirements. And I don't want to go through a whole lot of them, but I'm going to put this up with the notes. So this is from Mike Ashby's book on Material Selection in Mechanical Design. And this is a whole series of these performance indices for different situations, for things loaded in torsion, for columns and buckling, for panels and bending. So these ones are all for stiffness. And they all involve a modulus raised to some power divided by a density. So a tie in tension, c over rho, the beam in bending is E to the 1/2 over rho. A plate in bending is E to 1/3 over rho. So you don't need to memorize those. But you can see you can derive these for different situations. And here's another one for strength-limited design. So the shaft is, depending on what the specifications are, it's the strength raised to the 2/3 power over rho. The beam loaded in bending-- the top one there-- sigma f to the 2/3 over rho. That's what we just did. So there's all these different kind of performance indices. So depending on what your situation is, you would pick one of these indices. And then what you can do is use these material selection charts, which plot one property against another on log-log scales. And because all of these performance indices involve a power, they always end up being a straight line on your log-log plot. And here this one, I think, is the same as what I showed you for the wood. This one's the modulus here plotted against density. So foams are down here. And other engineering materials are over here. And these guidelines here are the different performance indices. So this one's E over rho. This one's E to the 1/2 over rho. This one's E to the 1/3 over rho. And for this case here, as you move the lines up to the top left-hand corner, E is getting bigger. Rho is getting smaller. And so the actual value of the performance index is getting bigger. So you can use this to select a material. So we've made these charts for foams as well. So here's a couple of charts for foams. And I think what I'm going to do is just go through them quickly. 
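(Written out, the beam-strength index derivation above goes like this: with \sigma_{max} = My/I, M \propto P_f\,l, y \propto t and I \propto t^4,)

\sigma_f \propto \frac{P_f\,l}{t^3} \;\Rightarrow\; t \propto \left(\frac{P_f\,l}{\sigma_f}\right)^{1/3}, \qquad m \propto \rho\,l\,t^2 \propto \rho\,l\left(\frac{P_f\,l}{\sigma_f}\right)^{2/3}

so minimizing the mass at a fixed failure load and span means minimizing \rho/\sigma_f^{2/3}, i.e. maximizing \sigma_f^{2/3}/\rho.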
And there aren't really that many notes, so I'll just put the notes on the website. And you can come and write all the notes down. And then we can finish this today. So this one here is the Young's modulus versus density. And these are all sorts of different foams. So the low modulus ones tend to be flexible. The higher modulus ones tend to be more rigid. And you could use this to select foams, if you wanted. You can also see what the range of values is. So the values of the modulus here goes from a little less than a 100-- because this is two orders of magnitude here, I think, each one of these-- down to about 10 to the minus 4 or a little less than that. So there's a huge range. There's almost a range of a factor of a million in those moduli. And the same with the strengths here. The strengths go from 10 to the minus 3 mega-pascals up to about maybe 30 mega-pascals, something like that. And you can see for the modulus and the strength, things like the metal foams are good. The balsa's good. Here's the balsa up here. Metal foam's up there. So you can kind of see the range of properties that you could get. And then you could also-- need a drink, hang on. You can also plot the specific property. So here's the compressive strength divided by the density plotted against the Young's modulus divided by the density. And here you want to be up at this end. So you would have a high strength and a high stiffness. So the balsa and the metal foams are good up here. This next plot-- this is the compressive stress at 25% strain. And this is the densification strain. And if you think of having your stress-strain curve looks like this, something like that, so you could say that's a strain of 0.25 and that's the stress that corresponds to that. So that stress times the densification strain, which is out here someplace, is an estimate of the energy underneath the stress-strain curve. So you can think of this right-hand plot here-- those dashed lines-- these lines like this and this and this-- each one of those corresponds to how much energy you would absorb under the stress-strain curve. So points that lie on here would have an energy of 0.001 megajoules per cubic meter. And over here, we're at 10 joules per cubic meter. So again, the balsa and the metal foams are good over here. So you can use these plots to try to identify foams for particular applications. And I think there's a couple more. It doesn't have to be mechanical properties. Here is thermal conductivity versus compressive strength. So you can imagine if you wanted some insulation, you wanted to have a certain thermal conductivity value, you probably also need at least some minimal compressive strength. You could also have something like a maximum service temperature, that maybe the foam is going to melt at some temperature. You can't go beyond that. So there's some property there. And I think there's one more here. You can look at things like the density in terms of the buoyancy of a foam, if you have some buoyancy application. And you can look at cell size on this one here. And cell size can be important for things like filtration and catalysis. So the amount of surface area goes as 1 over the cell size-- the surface area per unit volume. And so the cell size can be important for those sorts of applications. So the idea is, you can make these material selection charts for foam. And you can put data on there. And you can compare foams. And you can use these performance indices. So I'm going to leave it at that. 
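(One of those charts can be read quantitatively: the stress at 25% strain times the densification strain approximates the area under the stress-strain curve, i.e. the energy absorbed per unit volume. The numbers below are assumed, illustrative values rather than data taken from the chart.)

sigma_25 = 1.5e6            # compressive stress at 25% strain, Pa (assumed)
eps_densification = 0.8     # densification strain (assumed)
W = sigma_25 * eps_densification   # rough energy absorbed per unit volume, J/m^3
print(W / 1.0e6, "MJ/m^3")         # ~1.2 MJ/m^3, i.e. where such a foam would sit among the dashed contours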
There are a few more notes, but I'll just put them on the website, and you can get them from there. So I think we're good for today.
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
9_Foams_Thermal_Properties.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at OCW.MIT.edu. PROFESSOR: So what I wanted to do today was talk about thermal properties of foams. And foams are often used for thermal insulation. And that's always closed cell foams that are used for thermal insulation. And we'll see why. And the foams tend to have a low thermal conductivity. And that's largely because gases have lower conductivity than solids. And if you have mostly gas, you're going to have a lower conductivity. So they have a low conductivity because they have a high volume fraction of gas. And they've got a low volume fraction of the solid. They also have cells. And the heat is transferred partly by radiation and convection. And if you have small cells, you reduce the amount of convection and radiation. And we'll see that. So that, by having a cellular structure, and in particular, by having small cells, you can decrease the heat transfer. OK, so let me write some of the stuff down. So closed cell foams are widely used for thermal insulation. And the only materials with lower thermal conductivity than closed cell foams are aerogels gels. And I'll I talk a little bit more about aerogels later on today. But the difficulty with aerogels is that they tend to be very weak and brittle, like ridiculously weak and brittle. So we had a project on aerogels a couple of years ago. And the students who I was collaborating with would make aerogels. And they'd bring it up to my office. And I would pick it up and like-- I would pick it up like this, and it would break. So they have very low thermal conductivity, but they're very brittle. And I brought a few of our samples of aerogels, just so you can see what they look like. And I'll pass them around in that little tube, so you can kind of play with them. OK, so we're going to focus on foams. And whoops-- And we can say the low thermal conductivity of foam arises mostly from the high volume fraction of gas and that the gas has a low lambda, a low thermal conductivity. So lambda is thermal conductivity, so I'm just going to put lambda there. Then it has a small volume fraction of solid, which has a higher thermal conductivity. And then the foams have a relatively small cell size. So one of the things we're going to look at is how does the cell size effect the thermal transfer, the thermal conductivity. OK, so there's lots of applications for foams. And I guess one of the main ones is in buildings, insulating buildings-- also insulating refrigerated vehicles, things like LNG tankers. So there's lots of the applications for using foams for thermal insulation. Foams, in addition to having a low thermal conductivity, they also have good thermal shock resistance. So thermal shock is if you have a material, you heat it up, and then you suddenly cool the surface of it, for example. So say you takes something and you quench it in water or quench it in some fluid, then the surface, it wants to shrink because its temperature drops, but it's connected to everything underneath it and it can't really shrink. And so it's constrained and you can get cracking and spalling. And so, it turns out foams have a good resistance to that thermal shock kind of loading. And we'll see why that is, too. 
Roughly, you can see if the thermal expansion strain is the thermal expansion coefficient times the change in temperature. And the stress that you might generate is just going to be related to the modulus times alpha times delta-t. And because we're going to see that alpha for the foam is the same as alpha for the solid, but the E foam is going to be a lot less that E of a solid would be. So because the modulus is smaller, you would get a better thermal shock resistance. OK, so I wanted to go over a couple of sort of laws of heat conduction, so we can talk about what thermal conductivity is and how we define it. So the first one here-- --first one here is for steady state conduction. So when we say steady state conduction, what we mean is that the temperature is constant with time, the temperature doesn't change with time. So time's not going to come into the equation here. And heat transfer for steady state conduction, where there is no change in the temperature with time, described by Fourier's Law. And that says that the heat flux q is equal to minus lambda times the gradient in temperature. And if you want to think about just a one diversion of that, it's equal to minus lambda times dt by dx. So here, q is our heat flux. So that would have units of joules per meter squared per second. So how much heat transfer per unit area per unit time. Lambda is the thermal conductivity. And it has units of watts per meter k, so degrees kelvin. And then delta or-- and then this is our temperature gradient. OK, so that's Fourier's Law, and we're going to use that later on when we talk about the foams. And then, just so that we have things a little more complete, if you have a non-steady heat conduction, if the temperature varies with time, then there's a difference equation that involves the thermal diffusivity. So if we have non-steady heat conduction-- so t varies with time. I'm going to call a time tau. Then a partial differentiation, the partial derivative of temperature with respect to time is equal to the diffusivity, that's given the symbol a, times the second derivative of temperature with respect of distance, so with respect x squared. So here a is the thermal diffusivity. And it is equal to the thermal conductivity divided by the density and divided by the specific heat. So here, rho is the density and cp is the specific heat. The specific heat is the heat required to raise the temperature of a unit mass by the unit temperature. And so, the density times cp is the volumetric heat capacity. It's how much energy you would need to raise a certain volume by, say, 1 degree k instead of a certain mass. OK, so on the table here, on the screen, we have different materials. And we have the thermal conductivity lambda. And we have the thermal diffusivity, a. And I guess I should also say a has units of meters squared per second. So this table is arranged in order of decreasing thermal conductivity. So here's copper at the top, 384, watts per meter k. Here's, you know, different metals. You've got aluminum. Here's a couple of ceramics. They're about a factor of 10 less than the metals. Here's the polymers, another factor of 10 less than that. And here's some gases. Air is about 0.025. Carbon dioxide is less than that. Triclorofluoromethane, which used to be used as a gas in foams because it's got such low thermal conductivity, is 0.008. But it's no longer used because it's a-- what you call it? A fluorocarbon. Anyway, it decreases our ozone layer. So they don't use that anymore. Now here's some wood. 
So that's one sort of cellular solid. And they're around 0.04-- something like that. And here's a group of polymer foams. And they're a little over 0.025. So if you think of-- if you had the gas, air-- if the air was the gas inside the foam, 0.025 is lambda for the gas. So you're not going to get lower than that. And you have to use a low conductivity gas to get these values, like 0.025 here, 0.020, 0.017. And then, hear some other sorts of sort of mineral fibers, glass foams, glass wools. OK, so that's just a table so that you have some data there. All right? Yes? STUDENT: [INAUDIBLE] --foams, if they are closed cell, with a different gas rate. Because if they're open cell-- PROFESSOR: --Right. The gas is going to-- STUDENT: --it would just always be air. PROFESSOR: It's going to go. And in fact, one of the difficulties with using the lower conductivity of any gases is there's a phenomenon called aging, that if, you know, you've got your gas inside your foam, it's going to diffuse out into the air. And air's going to diffuse in. So over time, the thermal conductivity tends to increase because you're getting air coming and the local-- conductivity gas going out. But I think, typically, that process takes a number of years. It doesn't happen in a week. But if you're designing a building and want the building to be there for 50 years, it occurs faster than that. So it's not ideal from that point of view. All right. So let me talk a little bit more about thermal diffusivity. Let me scoot over here. So materials with a high value of that thermal diffusivity, a. They rapidly adjust their temperature to their surroundings. So if they have a high value of a, what it really means they've got a, say, a high value of lamda-- so high thermal conductivity. And, say, a low value of this volumetric heat capacity. So it doesn't take much energy to change their temperature. And they also conduct heat well. So they tend to adjust their temperature to their surroundings quickly. OK, so then, let's talk about the thermal conductivity of a foam. So I'm going to call that lambda star. So the star is the foam. And then we'll talk about-- lambda s will be the lambda for the solid that it's made from. So if you think of the thermal conductivity of the foam, there's contributions from different types of heat transfer. So you could have conduction through the solid. I'm going to call that lambda s. You could have conduction through the gas. You could have convection within the cell. So convection has to do with having, say, within the cell, it might be a different temperature on one side of the cell to the other side of the cell. And the warmer side of the cell, the gas is going to tend to rise to the warmer side and fall to the cooler side. And you get a convection current set up. So you can get heat transfer from that. And you can also get heat transfer by radiation. So radiation can cause heat transfer, as well. So we're going to have contributions from conduction through the solid. So the amount of conduction in the foam from the solid-- I'm going to call lambda star s. So lambda s would be the conductivity of the solid. And lambda star s is the thermal conductivity contribution from the solid in the foam. So we get kind of-- through the solid. We have conductivity through the gas. So it's lambda star g for gas. And then we could have convection within the cells. We'll call that lambda star c. And then we could get radiation through the cell walls and across the voids. We'll call that lambda star r. 
And so, the thermal conductivity of the foam is just the sum of those four contributions. So we're just going to go through each of those contributions, in turn, and work out how much thermal conductivity you get from each of them. And it turns out most of the thermal conductivity comes through the gas. So if we first look at just conduction through the solid, we've got that contribution to the conductivity of the foam from the solid, it's just equal to some efficiency factor times the thermal conductivity of the solid times the volume fraction of the solid or the relative density. And here, eta is an efficiency factor. And it accounts for that tortuosity in the foam. So if you think of the solid in the foam, it's not like we have little fibers that just go from one side to the other like this and the heat just moves along those fibers. You know, the foam cells have some complicated geometry and the heat has to kind of run along that complicated geometry. And people have made estimates of what this is. And it's roughly a factor of 2/3. So I guess it would depend on exactly the foam cell geometry. But typically it's around 2/3. So that's conduction through a solid. That's straight forward. Conduction through gas is similarly straightforward. It's just the conductivity of the gas times the amount of the gas. And the volume fraction of the gas is just 1 minus the volume fraction of the solid. So it's just 1 minus the relative density. So the conduction through the gas is just lambda g times 1 minus the relative density. So we can do a little example here. And you can see how much of the conduction comes from the solid in the gas. So for example, if we look at a foam that's 2.5% dense and say it's a closed cell poly-- what are we doing-- polystyrene. So the total thermal conductivity of the foam is about 0.04 watts per meter k. And the thermal conductivity of polystyrene is 0.15 watts per meter k. And the thermal conductivity of air is 0.025. So let's assume it's just blown with air. And then if I just add up, what's the contribution of conduction through the solid and conduction through the gas-- so I just use those two little equations-- conduction through the solid-- it's going to be 2/3 of this value of lambda s times the amount of the solids-- that's 0.025 and then plus lambda g, which is 0.025 times the amount of the gas, which is 0.975. And if I work those two things out, this is 0.003 and this is 0.024. So that total is 0.027 watts per meter k. So you can see if the total is 0.04, most of it's come from the gas. A little bit's come from the solid. And the rest is going to be from convection and radiation. And that's typical. And that's the reason that they sometimes use low thermal conductivity gases to blow foams for thermal insulation because the gas makes up such a big fraction of the total conductivity. If you can reduce that, you reduce the overall conductivity. So, we'll say foams for insulation are blown with low conductivity gases. But as I mentioned, you have this problem with aging that, over time, that gas is going to diffuse out and air is going to diffuse in. Then the overall thermal conductivity of the foam is going to increase. So that's the conduction. And then the next contribution is from convection. So imagine we have one of our little cells here. And it's hotter on that side than it is on that side. And hot air is going to rise. Cold air is going to fall. So you get a convection current set up. And because of the density changes, you get a buoyancy force in the air. 
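(Before the convection term is worked out, the conduction estimate just given can be checked in a few lines, using the lecture's numbers for a 2.5% dense, air-filled closed-cell polystyrene foam.)

eta = 2.0 / 3.0               # efficiency factor for the tortuosity of the solid network
lam_s, lam_g = 0.15, 0.025    # thermal conductivities in W/(m.K): solid polystyrene, air
rel_density = 0.025
lam_solid = eta * lam_s * rel_density    # ~0.0025 W/(m.K): conduction through the solid
lam_gas = lam_g * (1.0 - rel_density)    # ~0.024 W/(m.K): conduction through the gas
print(lam_solid + lam_gas)               # ~0.027 W/(m.K); the measured total is ~0.04, so the
                                         # rest comes mainly from radiation

Back to the buoyancy force and convection.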
So that's kind of driving the convection. But you also have a viscous drag. So the air is moving past the wall of the foam. And there's going to be some viscous drag associated. And how much convection you can get depends on the balance between this buoyancy force and the viscous drag. So we'll say the gas rises and falls due to density changes with temperature. And the density changes give rise to buoyancy forces. But we also have these viscous forces from the drag of the air against the walls of the cell. So air moving past the walls-- this is kind of a fluid mechanics thing-- so that air is a fluid. And in fluid mechanics, they often use dimensionless numbers. And there's a dimensionless number called the Rayleigh number. And the Rayleigh number, you can think of it-- it's not quite the balance of the buoyancy force against the viscous forces. But it involves those forces. And convection is important if this Raleigh number's over 1,000. And here's what the Rayleigh number is. It's the density of the fluid times the acceleration of gravity times beta. Beta's the volume expansion coefficient for the gas-- times the temperature change. And we're going to look at a temperature change across a cell. And then, times the length. That's going to be the cell size. And we divide that by the fluid viscosity and the thermal diffusivity. So let me write down what all these things are. So rho is the density of the gas. So the g's gravitational acceleration. Beta is the volume expansion of the gas. And for a constant pressure that's equal to 1 over the temperature. Then delta tc is the temperature difference across a cell. And l is the cell size. Mu is the dynamics viscosity the fluid. And a is our thermal diffusivity. So what I'm going to do is just work out, for a typical example, how big of a cell size do you need to get this Rayleigh number to be 1,000. And we're going to see that, typically, that cell size is big. It's like 20 millimeters. So in most foams, the convection really isn't very important at all. So it's typically-- people don't worry about convection. And let me just show you how that works. So for our Rayleigh number, which is ra-- for the Rayleigh number to be 1,000-- say we had air in the cells. And say the temperature was room temperature. Then the volume coefficient of expansion is just 1 over t. So it's 1 over 300, say. degrees k to the minus 1. Let's say our change in temperature across one cell was 1 degree k. Bless you. The viscosity of air is 2 times 10 to the minus 5, pascal seconds. The density of air is 1.2 kilograms per cubic meter. And the thermal diffusivity for air is 2 times 10 to the minus 5 meters squared per second. And if you plug all of these into that equation for the Rayleigh number and you solve for the cell size, you find that the cell size, l, is 20 millimeters. So that says convection is only important if the cell size is bigger than that. And so most foams have cells much smaller than that. And convection is negligible. So I have enclosed cells and the heat's not transferred so easily from one cell to another by the gas moving. And by having small cells the convection drops out. So you don't have to worry about that. So the last contribution to heat transfer is from radiation. And there's something called Stefan's law that describes the heat flux for radiated heat transfer from a surface at one temperature to another surface at a different temperature across a vacuum. So we can say we have a heat flux qr not from a surface of one temperature. 
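(Stepping back for a moment before the radiation terms: the cell-size cutoff for convection can be recovered numerically; in the standard form of the Rayleigh number the cell size enters cubed, and the property values below are the ones quoted in the lecture.)

rho_air, g = 1.2, 9.81     # air density (kg/m^3) and gravitational acceleration (m/s^2)
beta = 1.0 / 300.0         # volume expansion coefficient of air at about 300 K (1/K)
dT_cell = 1.0              # temperature difference across one cell (K)
mu, a = 2.0e-5, 2.0e-5     # dynamic viscosity (Pa.s) and thermal diffusivity (m^2/s) of air
Ra_crit = 1000.0           # convection only matters above roughly this Rayleigh number
l = (Ra_crit * mu * a / (rho_air * g * beta * dT_cell)) ** (1.0 / 3.0)
print(l * 1000.0, "mm")    # roughly 20 mm, as quoted above; real foam cells are far smaller,
                           # so convection is negligible

Back to the radiation setup, with the two surfaces at different temperatures.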
So I'm going to call that t1-- to one at a lower temperature. I'm going to call tnot-- with a vacuum in between them. So this is [? Stefan's ?] law so this is the radiative heat flux is equal to the emissivity of the surfaces, which is beta 1 times a constant called Stefan's constant-- sigma times the fourth power of temperatures. I'm taking the difference of the temperatures so here are the Stefan's Constant-- is sigma. And that's equal to 5.67 times the 10 to the minus 8. And that's in watts per meter squared per k to the fourth. And beta is a constant describing the emissivity of the surfaces. So it gives the radiant heat flux per unit area of the sample relative to a black body. And that's a characteristic of the emissivity. All right, so then, so if we-- yes? STUDENT: [INAUDIBLE] PROFESSOR: Now-- so right now, forget the foam. We have no foam. We just have two surfaces with a vacuum between them. And now I'm going to stick a foam between the surfaces. And we're going to see how that changes the heat flux, OK? So the next step is we put the foam between those two surfaces. And the heat flux is going to be reduced because the radiation is going to be absorbed by the solid and reflected by the cell walls. And so we're going to characterize how much it's reduced. So there's another law called Beer's Law, which characterizes the reduction in the heat flux. Piece of chalk's getting to small OK, so Beer's Law gives us the attenuation, so the sort of reduction in the heat flow. So qr is equal to qr not. That would be the heat flux, if we just had the vacuum. And then there's an exponential law. And it's the exponential of minus k star t star. And here, k stars in an extinction coefficient for the foam. Talk a little bit more about that in a minute. and t star is just the thickness of the foam. And then this thing is called Beer's Law. So we have very thin walls and struts. And we're just going to consider optically thin walls and struts to make life easy. Then we can say that, if they're optically thin, they're transparent to radiation. They're optically thin if they're less than about 10 microns. Then this extinction coefficient is just the amount of solid times the extension coefficient for the solids. So it's just the relative density times the extinction coefficient for the solid. OK, and then I can say, the heat flux by radiation. I can use two equations to write that down now. And then I'm going to let them be equal to get the thermal conductivity. I can say qr is going equal to lambda r times dt by dx. So that's the Fourier's Law that we started out with. And then I've also got the qr that I'm going to get by combining the Stefan's Law with the Beer's Law up there. So if I do that, I get that qr is beta 1 times sigma times t1 to the fourth minus t not the fourth. So that's the qr not up there from down there. And then I've got an exponential for the attenuation. And instead of k star, I'm going to put the relative density of times ks. and then I've got the thickness of the foam, t star, as well. OK, so that's qr, but that has to equal lambda times dt by dx. So I'm going to use some approximations. Here and I'm going to end up with an expression for the contribution from radiation to heat transfer in the foam. Yeah? STUDENT: So when you say optically thin walls, where t is less than 10 microns, you mean like the walls of the foam? PROFESSOR: Yeah, yea. STUDENT: So it's different t than the-- PROFESSOR: t star is the thickness of the whole thing, yeah. So imagine we had our two surfaces. 
And they might be like 100 millimeters apart or something. t star is the sort of thickness of the foam in between the two surfaces. And the optically thin is the cell walls, which are microns kind of thickness. OK. So I'm going to make some approximations here. And that's going to allow me to solve for t star. So I'm going to say that dt by dx x is approximately equal to just t1 minus t not over the thickness of the foam or I'll call that delta t over t star. And then, the other approximation I'm going to use is that t1 to the 4th minus t not to the fourth is equal to 4 times delta t times the average temperature cubed. So here t bar is the average temperature, t1 plus t not over 2. So then, if I use those two approximations, I can write that qr, our heat flux from radiative transfer. I got the beta 1. I've got the sigma. And instead of the difference of the fourth power, I'm going to write 4 delta t t bar cubed. And then I've got my exponential. Blah, blah, blah, blah, blah. So then, here's the relative density times ks times t star, the overall thickness. That's going to equal the radiative contribution to the thermal conductivity of the foam. And instead of dt by dx, I'm going to have delta t over t star here. So part of the reason for doing these approximations I end up with a delta t term on both sides. Now I can cancel that out. And if I just take this mess here and multiply it by t star, then I've got lambda r star. That's our thermal conductivity contribution from radiation. So one of the things to notice here is that, as the relative density goes down, then the contribution from radiation to the thermal conductivity of the foam goes up. OK, so this chart here shows thermal conductivity as a function of relative density. And it breaks down the contributions from the gas, g, the solid, s, and the radiation, r. And you kind of see the gas contribution doesn't change that much. These are relative densities between a little over 2 and a little less than 5%. So the amount of gas-- it's mostly gas in all of these things. The solid contribution increases as the relative density increases. So you'd expect that. And then, as I just said, as the relative density goes down, the amount of radiation contribution goes up. And so you can kind of see how that all fits together. Another plot that shows the thermal conductivity versus the relative density. These are for a few different types of foams. You can see for this plot here, you reach a minimum in the thermal conductivity. And that's because you've got this trade off between the contribution from the solid and the contribution from the radiation. And those two kind of trade off and you get to a minimum. So let me write some of this down. So I'll just say that-- hang on. Write this over again. This is looking at the overall thermal conductivity. And we can see the relative contributions of lambda, solid, lambda, gas, lambda, radiation. I'll just say this shown in the figure. I'm going to say the next figure shows a minimum in the thermal conductivity. Then I'll just say there's a trade off between the conduction through the solid and I can direction from the radiation. And then we also have a plot here that shows the conductivity versus the cell size. And you can see that the conductivity increases with cell size. And the reason for that is the bigger the cells get, the radiation is reflected less often. 
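(Collecting the radiation steps into a single expression: combining Stefan's law, Beer's law with K^* \approx (\rho^*/\rho_s)K_s for optically thin walls, and the two approximations above gives, consistent with what was written on the board,)

\lambda_r^* \;\approx\; 4\,\beta_1\,\sigma\,\bar{T}^{\,3}\,t^*\exp\!\left(-\frac{\rho^*}{\rho_s}\,K_s\,t^*\right)

so as the relative density falls the attenuation weakens and the radiative contribution grows, which is the trend shown in the plots.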
And one thing I wanted to mention with the cell size is that if you look at aerogels, the way aerogels shells work is that they have a very small cell size, a very small pore size. So typically, it's less than 100 nanometers. And the mean free path of air is 68 nanometers. So the mean free path is the average distance the molecules move before they collide with another molecule. And if your pore size is less than the mean free path, then that reduces the thermal conductivity. It reduces the ability of the atoms to pass the heat along between one another. So the way the aerogels work is they have a very small pore size. And what's important is how big the pores are relative to the mean free path of air. OK, so that's the thermal conductivity. I wanted to talk about a few other thermal properties of foams, as well, today. So one is the specific heat. And since the specific heat is the energy required to raise the temperature by a unit mass, then the mass is the same-- you know, if you have a certain mass of foam or a certain mass of solid-- the specific heat from the foam is the same as the solid. So the specific heat for the foam is the same as the specific heat for the solid. So that would have units of joules per kilogram per degree k. And the next property is the thermal expansion coefficient. And it's a similar thing. The thermal expansion coefficient for the foam is equal to the thermal coefficient of expansion for the solid. So imagine you have-- say you had something like a honeycomb. If you heat it up a certain amount, every member is going to expand by alpha. And if every member expands by alpha, the whole thing expands by alpha. And this is the same. And it's the same idea with the foam. So if every member just gets longer by alpha, then the whole thing gets bigger by alpha. OK, so the last topic I wanted to talk about was the thermal shock resistance. And thermal shock is the idea is that if you have something that's hot, and say you quench it in a liquid-- so you put it suddenly in a liquid-- the surface is going to cool down faster than the bulk of it. And because the surface is trying to contract because it's cooling down, but it's attached to the bulk of it and it's constrained, it can't really cool down, then you generate stresses. And if the stresses are big enough, you can cause fracture and have the thing crack and spall. So we'll say if the materials is subjected to a sudden change in the surface temperature, that induces thermal stresses at the surface and can induce spalling and cracking. So we're going to think about a material at one temperature that's dropped into, say, a liquid at a different temperature. So the surface temperature is going to drop to the cooler liquid temperature and it's going to contract the surface layers. And the fact that they're bound to the layers underneath that are not contracting as quickly, it means that you generate a thermal strain. So the thermal strain is going to be the coefficient of thermal expansion times the change in temperature. So you're going to constrain the surface to the original dimensions. And then you're going to induce the stress. So if it's a plane or thing, it's e alpha delta t. And then, there's a factor of 1 minus nu, just because it's a plane, in a plane. And then you'll get cracking or spalling when that stress equals some failure, stress. So I can rearrange this and solve for the critical delta t that you can withstand without getting cracking. So I just rearranged this and say sigma's equal to sigma f. 
That would be sigma f times 1 minus nu over e and over alpha. So that's the critical change in temperature to just cause cracking. So now what I can do is I can substitute in there for what you would have for the foam. And I'm going to do it just for the open cells just because it's easier to write the equations. So for the foam, I would have some sort of fracture strength. So when we did the modeling of the foams, we said that was equal to about 0.2 times the modulus of rupture times the relative density to the 3/2's power and 1 minus nu. And if I divide by the modulus of the foam, that's es times the relative density squared. And then we just had alpha for the foam was the same as alpha s. So then, I can rearrange this slightly and say it's equal to 0.2 over the relative density to the 1/2 power. So I'm canceling out these relative densities here. And then I can combine all the solid properties together. And I'm going to say that nu for the solid is about equal to the same as nu for the foam. So what I can do here is I can group all the solid properties together. And this just is delta t critical for the solid, right? So this is saying that the critical temperature range before you get cracking in the foam is equal to the range for the solid, but multiplied by this factor of 0.2 and divided by the square root of the relative density. So if the square of-- the relative density is going to less than 1. So this number here is going to be bigger than 1. So it's saying that the temperature range that will give you spalling in the foam is going to be bigger than the temperature range in the solid. So the foam's going to be better than the solid, OK? And that uses our little models from before. So I think I'm going to stop there, probably cause my throat is starting to get too sore. There's a little case study in the notes. And I'll just put that on the notes on the Stellar site. It's like one page and it's really straightforward. You can just read that, OK? So this is the end of the bit on thermal conductivity. That's just this one lecture. And this is really the end of the whole section on modeling of the honey combs and the foams. So that's kind of the first half of the term is modeling the honey combs and the foams. And the second half of the term, we're kind of applying those models to different situations. So next week, we'll have the review on Monday, have a test on Wednesday, week after that is Spring break. I can't believe we're at Spring break already. And then after that we'll start we'll do the trabecular bone for a week. We'll do tissue engineering scaffolds and cell mechanics for two or three lectures. We'll look at some other applications to engineering design, look at energy absorption and sandwich panels. And then, I'm going to talk about plants a little bit at the very end, OK? So we've already covered a lot of the kind of deriving equations part of the course. The rest of the course is more applying the equations to lots of different situations, OK? So I'm going to stop there just because my throat is giving out.
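(Collecting the thermal-shock argument from the last two paragraphs into one place: with \sigma_f^* \approx 0.2\,\sigma_{fs}(\rho^*/\rho_s)^{3/2}, E^* \approx E_s(\rho^*/\rho_s)^2, \alpha^* = \alpha_s and \nu^* \approx \nu_s,)

\Delta T_c^* \;=\; \frac{\sigma_f^*\,(1-\nu)}{E^*\,\alpha^*} \;\approx\; \frac{0.2}{\sqrt{\rho^*/\rho_s}}\;\frac{\sigma_{fs}\,(1-\nu_s)}{E_s\,\alpha_s} \;=\; \frac{0.2}{\sqrt{\rho^*/\rho_s}}\;\Delta T_{c,s}

so for sufficiently low relative densities the prefactor exceeds one and the foam tolerates a larger sudden temperature change than the solid it is made from, which is the point of the comparison above.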
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
11_Trabecular_Bone_and_Osteoporosis.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. LORNA GIBSON: So we're going to start talking about trabecular bone, and we're going to do a bone today and on Wednesday. And I'm hoping we can more or less finish it on Wednesday. So hello. Hold on a sec. So these are some images of trabecular bone, and you can see that it has a foam-like structure. And trabecular bone exists in certain places in the body. There's three main places that it exists. So it exists at the end of the long bones. And over here this is a femur. This is the very top of the femur, and you can see this is all trabecular bone in here. This is a tibia, and this is the top of your knee there. And you can see how the bone get more bulbous at the ends, and it's filled with a trabecular bone. This is a vertebrae. So vertebrae are actually mostly trabecular bone, and they have a really thin shell of what's called cortical bone, the dense bone, on top of it. So trabecular bone exists at the ends of the long bones, it exists in the core of the vertebrae, and it also exists in sort of shell or plate-like bones. So in your skull, for example, there's a layer of trabecular bone in between two layers of the compact dense bone. And in your pelvis, it's the same thing. So those of you who took 3032, you remember when I passed around the bird skulls, there was that very porous kind of trabecular bone. So trabecular bone is of interest medically in three main kind of medical situations. So the first one is osteoporosis. So I want to talk a little bit about osteoporosis now, and then we'll talk about it in more detail later on. I guess, we'll probably start today. Another medical issue is osteoarthritis, and the properties of the trabecular bone are important in arthritis, and the third issue is in joint replacements. And so we're going to talk a little bit about osteoporosis, osteoarthritis, and then joint replacements. And then I'll talk a little bit more about modeling bone like a foam and how it deforms and how it fails. And then we'll talk a little bit how we can model osteoporosis. So let me write down some of these things. I guess we'll start here. So trabecular bone has a foam-like structure, and what we're going to see is that we can use the models for foams to describe the mechanical behavior of the bone. It exists at the ends of the long bones. And at the ends of the long bones, the bones become more bulbous. And really what that's for is to increase the surface area so that there's cartilage between the ends of the two bones. So there would be a bone here and a bone there, and there's cartilage in between them that sort of lubricates that joint and makes low friction at the joint. And the bone gets larger to decrease the stresses on the cartilage. So if you have the same force and you have a larger area, you're going to have a smaller stress. So that's why the bone gets bulbous like that. And then by having the trabecular bone, because it's so porous and lightweight, you're not having a big, dense hunk of bone at the end of the long bones. So it exists at the ends of the long bones. And I'll just say the ends have a larger area than the shafts. And that's to distribute the loads on the cartilage or to reduce the stress on the cartilage. And then the trabecular bone reduces the weight. 
So it also exists in the core of the vertebrae, and in fact, it makes up most of the vertebrae and then in things like the skull and the pelvic bones in shell and plate-like bones. And so it's the core of a sandwich structure there. So it's of interest in osteoporosis, in osteoarthritis, and in joint replacements. So if we start by thinking about osteoporosis, you probably know that osteoporosis is a disease where the bone mass becomes reduced and there's a greater risk of fracture, so there's especially a greater risk of hip fractures and vertebral fractures. And it turns out in both of those sites, if you just look at these bones here, typically if you have a hip fracture, what happens is the neck of the femur breaks. So this is called the neck here. This is called the head, this spherical bit there. So the neck has a fracture, and you can see most of the bone there is trabecular bone, so it's really carrying most of the load and the same with the vertebrae. This sort of cylindrical part of the vertebrae here carries most of the load. It has a shell of really thin cortical bone, but it's mostly trabecular bone. And when the loads are vertical like this, that's really the trabecular bone that's carrying most of the load. And people get sometimes what are called wedge fractures where instead of having a sort of a cylinder with parallel faces like this, the trabecular bone fails, and the bone ends up like that so that there's-- yeah, I know. You make that wincing expression. It's like ouchy. And in fact, it's very ouchy for people who get that. And when you see little old ladies who are all hunched over, that's why. The bone has actually failed. It's actually been crushed into these wedge fractures, and there's no way they can straighten it out, and it's quite painful. So people who look at osteoporosis are quite interested in the mechanical properties of trabecular bone for this sort of reason. The hip fractures are particularly serious because people become immobilized and then sometimes because they're immobilized, they get pneumonia, and in elderly people they sometimes die. So something like 40% of elderly patients who are over 65 die within a year of having hip fracture. So it's not that the hip fracture kills them, it's that they become so immobile, and they can't move, and they can't walk around, and they end up getting pneumonia. So it's quite a serious thing. And there's something like 300,000 hip fractures a year in the US, and the cost of treating these hip fractures is something like $19 billion. So yeah, it's a huge problem. And as the population is aging, as baby boomers like me get older and older, there's going more people having hip fractures. So it's a huge deal, osteoporosis. So we'll say bone mass decreases with age, and osteoporosis is extreme bone loss. And a little later today I'll show you some pictures of what it looks like when people have osteoporosis. So the most common fractures are of the hip and the vertebrae, and at both sites, most of the load is carried by the trabecular bone. And the hip fractures you are the most serious. 40% of elderly patients pass away within a year. So that's sort of a little introduction to osteoporosis. The next issue that people are interested in is osteoarthritis. And in osteoarthritis, there's a degradation of cartilage at the joints, and the stress on the cartilage is affected by the modulus of the bone that presses against the cartilage. 
You can kind of magic if you have a fiber compass, for instance, most of the stresses, if you're loading it along the fibers, is carried by the fibers because they're stiffer. So if you have like say trabecular bone that has varying density, the denser bits are going to have higher moduli, and it's there's going more stress associated with that. And so the modulus of the trabecular bone can affect how the loads are distributed in the cartilage, and that can affect the damage in the cartilage. And the shell, as I mentioned before, this sort of shell of cortical bone or the dense bone at the joints can be quite thin. It can be less than a millimeter. So I brought my little bones along with me again. So this is the head of a femur here, and this is a piece of a knee joint here from a tibia. And you can see just looking at these how thin the cortical shell is. So you can get an idea of how thin that is. So osteoarthritis involves a degradation of the cartilage at the joints. And the stress on the cartilage is affected by the moduli of the underlying bone, and the cortical shell, the totally dense bone, can be quite thin. So the mechanical properties of the trabecular bone can affect the stress distribution on the cartilage. And if osteoarthritis gets particularly bad, then sometimes people have joint replacements. So when it gets really bad, the cartilage is degraded completely, and the bone is rubbing on bone, and that's quite painful. And when it gets to that point, people generally have a joint replacement. And so the way the joint replacements are done is say somebody who is going to have a hip replacement, what they do is they chop off the top of the femur. So they would chop the femur off somewhere around here, and then they have a metal implant that has a spherical ball. That's like the head of the femur. And then it has a sort of stem and a shaft here that goes into the hollow part of the long part of the shaft of the femur. And so they use a number of different metals for this, titanium and stainless steel, and there's a cobalt-chromium alloy are also used. So you need metals that are biocompatible, aren't going to corrode, aren't going to have degradation products. And then the bone grows around that implant, and the bone grows in response to mechanical loads. So the density of the bone depends on the magnitude of the load, and the orientation of the trabeculae depends on the orientation of the principle stresses that are applied. So let me write that down. So they cut off the end of the bone, and they insert the implant into the hollow shaft of the remaining bone. And the metals they use are titanium, stainless steel, and a chromium-cobalt alloy. And then the bone grows into that implant. And the bone grows in response to mechanical loads. So the density of the bone depends on the magnitude of the stresses, and the orientation of the bone depends on the principle stresses. So one of the issues that comes up in joint replacements is that there's a mismatch in the moduli between the metal and the bone. So if you think the metal, like something like stainless steel, has a modulus of around 200, 210 gigapascals. And the cortical bone has a modulus of about 18 gigapascals, and the trabecular bone has a modulus between about 0.01 and 2 gigapascals, depending on its density. So you're taking the bone out, and you're replacing it with something that's much, much stiffer, and that changes the stress distribution around the remaining bone. 
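(A crude way to see how big that stiffness mismatch is: if the implant stem and the surrounding bone are forced to strain together, the stress each carries scales with its modulus. This iso-strain picture is only a rough illustration, not a real implant model, and it uses the round modulus values just quoted.)

E_metal = 210e9         # Pa: stainless steel or cobalt-chromium alloy
E_cortical = 18e9       # Pa: cortical bone
E_trabecular = 0.5e9    # Pa: a mid-range value for trabecular bone, which spans ~0.01-2 GPa
print(E_metal / E_cortical)     # ~12: at equal strain the metal carries about 12 times the stress
print(E_metal / E_trabecular)   # ~420: the mismatch with trabecular bone is larger still
# With the metal carrying the lion's share of the load, the surrounding bone sees far less
# stress than it did before the implant went in.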
And one of the things that can happen is you can get a loosening of the implant. So the bone can grow in initially, but over time, you get a different stress field in the bone. And if you have a different stress field, then the bone can resorb away from the implant and cause what's called loosening. So if the implant becomes loose, that's clearly not a good thing. It's a bad thing. And often orthopedic surgeons don't like to do these joint replacements in young people partly because of this loosening-- they don't always loosen, but occasionally they do. And if they loosen they can go back and do a revision. But you can kind of imagine after they've chopped the head of the femur off and they put one implant in, it's not that easy to go back in and replace that with another one. You would need one with a longer stem, and the whole thing becomes a little bit more complicated. So this issue of stress shielding is what it's called when you have something much stiffer that's shielding the stresses in the bone. The issue of stress shielding means that they don't like to do the replacements on younger patients because you can get stress shielding. And if we just compare-- if we look at the cobalt and chromium alloy, the modulus of that in gigapascals is about 210. If we look at the titanium alloys that are used, the modulus is about 110. If we look at the stainless steel-- it's 316 stainless steel-- it has a modulus of around 210. And then if we look at the bone, the cortical bone has a modulus of about 18, and the trabecular bone has a modulus 0.01 to 2 gigapascals depending on the density. So after the joint replacement happens, the remodeling of the bone is affected. So the idea is that the stiffer metal carries more of the load, and then the bone carries less load, and then it resorbs. And that can lead to this thing called loosening, which is not desirable. Now, this typically doesn't happen till about 15 years after you've had the implant, so it's not something that would happen right away, but it can happen later on. So these are all sort of medical reasons why people are interested in trabecular bone because of osteoporosis, osteoarthritis, and joint replacements. So I wanted to start by talking about the structure of trabecular bone. And then we'll talk about what the stress-strain curves look like in compression and tension, what are the mechanisms of deformation and failure, and how we can apply our models for foams to the trabecular bone. So the idea is that the structure of the bone resembles a foam, and here's some SEM images of trabecular bone. And you can see that the bone has a varying structure. If it's relatively low density, this is a bone that's almost like an open-cell foam-- if I didn't tell you that was a bone, you might actually think it was an open-cell foam. And here's a denser piece of bone, and you can see there's still interconnections between all the openings, so it's not exactly like a closed-cell foam, but it's much denser, and it's almost like there's perforated plates in the structure. And then as I said the bone can grow in response to loads. So if you have loads that are more or less vertical, the trabeculae tend to line up and be more or less vertical with some sort of horizontal bracing. So this is a piece of bone from a knee, the condyle is sort of towards the top of the knee. And you can see these are sort of plate-like pieces of bone.
They're almost parallel, and not too surprisingly in your knee, the loads are typically vertical, and then there are little bracing bits that go horizontally here. So you can get different structures depending on the loading on the bone, and the density of the bone corresponds to the magnitude of the load, and the orientation of the trabeculae corresponds to the orientation of the load. AUDIENCE: [INAUDIBLE]. LORNA GIBSON: Resorb. So when the bone density goes down, when you lose bone mass, that's called resorption. So the idea is that the trabecular bone resembles a foam. And in fact, the word trabecula comes from Latin, and in Latin, it means little beam. So the foams deform by bending. They act like little beams, and so the trabeculae are like little beams, even in Latin. There's a range of relative densities, and you can see in that image up there, you can see that there's a range. And they range typically between about 5% dense and 50% dense. So something like 0.1 or 0.2 might be typical. And the low-density bone resembles an open-cell foam. And the higher density, it becomes more like perforated plates. And the structure can be highly anisotropic depending on the stress field. And then I've got another image here of the trabecular bone. These images are using what's called micro computed tomography. So you've probably heard of computed tomography. Say somebody has cancer, they get put in a CT machine, and they do a scan. The micro CT is more of a research tool. It's the same kind of technology, but it's got a much finer resolution, and typically, you put a small specimen into a machine to do this. So the specimen might be half an inch in diameter and an inch tall, something like that. So these are done by a colleague, Ralph Muller, who's in Zurich, and this is one of his bread and butter things that he has these images, and he looks at osteoporosis. And you can see here the difference in the structure for the different densities. So here's a 26% dense piece of bone in the femoral head. It looks pretty sturdy and substantial. Here's an 11% dense piece from the lumbar spine, and here's a 6% dense piece. And you can kind of see when you go from 26 to 11, the struts get a little bit thinner. And when you go from 11 to 6, the struts get very thin, and in fact, if they get too thin, the struts resorb altogether, and some of the struts can just disappear. So when people get osteoporosis, what happens is they first lose bone mass by thinning the struts, but then at some point, the struts just resorb altogether. And if you think of the struts as a biological material, they have bone cells in them. So there's little osteoclasts and osteoblasts and osteocytes that live in the bone, the mineral thing, the bony thing. And those cells have dimensions of 10s of microns, so maybe 20, 30 microns, something like that. So the struts can't get any thinner than that. If they get thinner than that, then the cells can't live, and the thing just disappears altogether. And you can think of from a mechanical point of view, if you lose density by thinning the struts, you can use our sort of foam equations. And say the density went from 0.2 to 0.1, you could make some estimate of how the modulus and how the strength would vary depending on our foam models. But if you lose density by resorbing the struts, the struts just disappear altogether, then it's as if you had a steel scaffold or a steel structure of a building. And now you're starting to remove columns and remove beams. Yes, I know.
That's not good, not good. And so we'll talk a little bit more about that when we talk more about osteoporosis, and you can see the consequences of that. But this image here kind of gives you a little bit of a picture of what the bone structure looks like as it gets less dense. So I want to talk a little bit more about the bone growing in response to load. Let me rub off the board. So you're probably already a little bit familiar with this idea. So when astronauts go up into space, they often do exercises where they have a treadmill, and they've got springs, and they're pulling on the springs to try to exercise themselves. And the reason they do that is when they're in microgravity, if they weren't doing some kind of exercise, they would lose bone mass. And they would get back to Earth where we have Earth gravity, and they would have a problem. So you see it in astronauts, in microgravity. The other place you see this just in everyday life is in professional tennis players. People have done like x-rays of the bones of professional tennis players, and obviously, they have one arm that they hit the ball with their racquet. The bones in that arm actually get bigger because they're loading that bone over and over again pretty much every day when they're playing tennis, and they're not loading the other arm. So their two arms are not symmetrical because of this loading from hitting the racquet over and over. And the people in 3.032 have already seen this, but I couldn't resist bringing up the Guinea fowl experiments again. So obviously, you can only do x-rays on humans. You can't sacrifice the humans and look at their bones, but you can with Guinea fowl. And so people have done experiments where they run Guinea fowl on treadmills, and they have one set of Guinea fowl that they run on the treadmill that's horizontal. They have another set of Guinea fowl that they run on a treadmill that's inclined to 20 degrees, so one would think they might have more stress on their bones from that, and then they have a control group that they don't run on the treadmill at all. And then what they do is they have a force plate on the treadmill, so as the Guinea fowl is running, they measure the maximum force and they're taking high-speed video. And then they measure the angle of the knee at that point at which the force is maximum. And they can see there's a change in the angle of the knee when they put them on the inclined treadmill, not too surprisingly. And then these are juvenile Guinea fowl that haven't completely matured their bones. And then after about six weeks of this, they sacrifice the Guinea fowl, and they do scans on the bone, and they look at the orientation of the bone, and they measure what's called the orientation of the peak trabecular density, which is a way of characterizing the orientation of the bone. And they find that the angle of the knee when the Guinea fowl are running changes by about 14 degrees. And it turns out the angle of the bone, the orientation of the bone also changes by about 14 degrees. So the bone has remodeled to match that change in the forces that are applied as the Guinea fowl are running on a treadmill. So this is all a demonstration just to show that bone grows in response to load. So let me write down some of this stuff. So I will say astronauts-- did you see Michael Collins is going to come to the talk at MIT? When I was a kid in the '60s, he was one of the Apollo astronauts. He was like one of the first NASA astronauts.
Anyway astronauts, so in microgravity, they would lose bone if they don't exercise. And tennis players, the bones get larger in the arm that they hold the racket with. And then I'll just write a little bit of notes about the Guinea fowl experiments. So this was done by-- it's in a paper, Ponzer et al 2006. So they have one set of Guinea fowl that run on a level treadmill, they have another set that run on a inclined treadmill, and it's inclined at 20 degrees. And then they have a control group that doesn't run on the treadmill. And then they measure the angle at the knee at the moment of peak force on the treadmill. And after six weeks, they sacrificed the Guinea fowl, and they measured the orientation of the peak trabecular density. And they find that the knee flexion angle changed by 13.7 degrees. And if you compared the inclined versus the level treadmill, and they found the orientation of the peak trabecular density, which they called OPDD, also changed by 13.6 degrees. So the idea is that the orientation of the trabeculae changed to match the orientation of the loading. Then I have a little video here. Do you like video? So I have a colleague who's at Harvard who studies animal locomotion, and they didn't do this set of experiments, but they do do experiments on Guinea fowl running on treadmills, and thought you might find this amusing. So let me see if I can make this work. [VIDEO PLAYBACK] -Sometimes you walk into a lab and you just think this is what science is all about. -I just put the Guinea fowl on the treadmill, and this is something that we commonly do. -Welcome to the Concord Field Station, a defunct Nike missile base turned scientific menagerie. It's owned by Harvard, and biologist Andy Biewener is the director here. So think of it as-- -A research lab facility for doing comparative biomechanics and physiology of largely animal movement. -And the birds are just the tip of the iceberg. -So do you want to see the baby goat and the emu? -Obviously. -OK. We keep it because it's sort of like a mascot. There used to be a lizard colony. You can hear the African greys. Then the jerboas are housed in this room here. This is where we originally did our pigeon flight studies. So usually the ones with claws and sharp teeth and aggressive behaviors, you want to watch out for them. -As you might expect. But did you know that Guinea fowl-- -They're really lovely to work with. -Sometimes. Or that-- -Rats are not very good on treadmills. -Yes. That's what a rat treadmill looks like. And this? -And this was historically a treadmill of note, the treadmill that they first taught kangaroos on and showed that kangaroos stored energy in their tendons enough that they don't actually increase their metabolic rate when they hop at faster speeds. -These are the kind of discoveries made here with the use of high-speed video and x-ray machines and semi cooperative animals. But beyond the basic biology, Biewener says engineers are using this research to build better robots, and it can help improve medical treatment for people with movement disorders. Today the big excitement at the lab is happening here. Ivo Ros is studying how heart rate changes when cockatiels fly at different speeds. So this is a way to look at how much energy it takes to fly, and that cord is measuring heart rate. But instead of the birds flying faster, the wind changes speed. -I'm going to turn it on then. -It's hard to fly fast, and it's hard to fly slow, Ros says. 
So the expectation is that the heart rate should be shaped like a U. But so far Ros is finding that it's a flat line. It's like the bird goes into a stress reaction when it takes off. Is that just because of the wind tunnel? What Ros wants to know is-- - --whether or not they need to be stressed to fly in the first place. -That's something that Ros is looking into, but today is mostly about training. -Keep going. Come on. -Imagine you're a cockatiel. A wind tunnel is kind of a strange experience. -A bird in a wind tunnel has to confront the fact that the world is not moving past, which defies its normal sensory cues. -Which pretty well sums up the Concord Field Station generally. For Science Friday, I'm Flora Lichtman. [END PLAYBACK] And if you read The New York Times, Flora Lichtman used to work for NPR and would make these Science Friday videos for them. But now she has a gig doing things for The New York Times, and she does science videos still. I don't know if you quite call them videos, but what they do is they have these little paper puppets, and the paper puppets are animated and re-enact different episodes in science. And it's kind of amazing how they do these little science videos. So if you Google Flora Lichtman, you'll see more science videos with all sorts of things. I guess the other interesting anecdote is I went to the Concord Field Station once. And I had done a study on quills and animals that have quills because the quills have a foamy structure in the middle. So they're carrot, and they have sort of like carrot-like structures. And they have an outer shell that's dense, and then they have a foamy thing in the middle. Anyway I did this paper on quills and how they work mechanically. And Technology Review did a little article about it, and they said they wanted to take a picture of me with a hedgehog. The hedgehogs are little European-- they're like little small, cute things. And I said, well, if you can find a hedgehog, I'm happy to have my photograph taken. And they had a hedgehog at the Concord Field Station. So we went out there, and we had these big leather gloves and took a picture. And I don't know if it was Andy or I don't know who it was, but I said to one of the people there, what did you do with the hedgehog? And he said, well, we tried to do the treadmill study. But hedgehogs are like porcupines. When they get scared, they curl up into a little ball. And so they said they would put the hedgehog down on the treadmill, and they would start it up, and it would make a noise, and it would get scared. And it would just go into a little ball and kind of slide along to the end, and then it would kind of get flopped off. So that was the end of the hedgehog experiments. But they did have wallabies there the day I went. So they have all sorts of animals that they put on to treadmills, birds that they fly, so it's kind of interesting to go there. But the main idea here is that there was this set of experiments with Guinea fowl that showed just how precisely the orientation of the bone matches the orientation of the loads. AUDIENCE: Was there a difference between the control groups? LORNA GIBSON: Ah, so I have slides. I have slides. Hang on a sec. I got distracted by my video. Sorry. So here's the sort of schematic of Guinea fowl on treadmill. And where's the little doo-da here? So on the level, the knee flexion angle was whatever this is, 76.3, and here was the 62.6, so the difference is 13.7. And then this kind of table here summarizes these results.
So this is the maximum trabecular density, and this is the angle. And here we have the incline. Let's see, the control was the yellow, and the level was the blue, and they've got the values for that peak trabecular density orientation. So they've got that for the level. And then they looked at the difference between the incline and the level in the knee angle. That's what this thing here is. And then between the level and the control, there wasn't really any difference in the knee angle because the control ones, they were just walking around. So that's the sort of slide that has the actual data on it. All right. And then I showed you the video. All right. So we need to do a couple more things before we get to the modeling. Let me get a drink. So if we want to use the models for foams to try to describe the trabecular bone, we need to know something about the properties of the solid. Remember we used the properties of the solid in the models. So we want to get the properties of the solid in the trabeculae, and there's a couple of ways you can do this. To get the moduli, you can use an ultrasonic wave propagation method, and you can measure a modulus that way. And if they do that, they measure a modulus between about 15 and 18 gigapascals. Another way to do it is to take a piece of bone, do a compression test on it, measure the modulus. Before you do the test, you put it in the micro CT machine, and you get a picture of the structure, and then you use that as the input to a finite element analysis. So the finite element analysis is a computer numerical analysis to do mechanical calculations. And if you know what the modulus of the structure is, you can back out what the modulus of the solid must have been from the finite element analysis. And those sorts of experiments also showed that the modulus was around 18. And it turns out that modulus is about the same as cortical bone, and the properties of the solid trabeculae are very similar to the solid cortical bone. So let me scoot over here. So if you use an ultrasonic wave propagation, people have measured a modulus for the solid in trabecular bone of 18 gigapascals, or you can do a finite element calculation based on micro CT data for the structure. And then you measure the overall modulus for the trabecular bone, and then you back out the modulus of the solid. And people who've done that have gotten values of around 18 gigapascals too, and that's very similar to what the cortical bone is. And so we're going to use the following properties for the solid in the trabecular bone. We're going to say the density is 1,800 kilograms per cubic meter. The Young's modulus is 18 gigapascals. The yield strength has different values in tension and compression. It's about 182 megapascals in compression. And it's about 115 megapascals in tension. So those are the solid properties. So then if we look at the compressive stress-strain curves, they have the shape shown on the screen there. And you can see how similar the stress-strain curves are for those for a foam. So there's the same three regimes that we see for the foam. There is a linear elastic regime over here, there's a stress plateau here, and there's some densification regime here. These are three curves for three different relative densities. As the relative density goes up, the stiffness goes up, the plateau stress goes up, and the densification strain goes down. So it's the same as the foams that we've looked at before.
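As an aside, here is a small Python sketch (mine, not from the lecture) that just collects the solid cell-wall properties quoted above in one place, since they are what feed the foam-type estimates later; the helper converts an apparent density into a relative density, and the example value of 300 kg/m^3 is an assumption for illustration only.

```python
# Solid (cell wall) properties quoted in the lecture for trabecular bone.
SOLID = {
    "density_kg_m3": 1800.0,        # rho_s
    "E_GPa": 18.0,                  # Young's modulus of the solid
    "yield_compression_MPa": 182.0, # sigma_ys in compression
    "yield_tension_MPa": 115.0,     # sigma_ys in tension
}

def relative_density(apparent_density_kg_m3: float) -> float:
    """Relative density rho*/rho_s of a trabecular bone specimen."""
    return apparent_density_kg_m3 / SOLID["density_kg_m3"]

# Example: an assumed apparent density of 300 kg/m^3 gives ~0.17,
# i.e. a fairly typical relative density for trabecular bone.
print(relative_density(300.0))
```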
And if we look at the mechanisms of deformation and failure, people have looked at this. These are on a whale vertebra. So these are tests that are done in a micro CT machine, again, by Ralph Muller's group. And here the specimen is unloaded, and here's the same specimen loaded. So you can see this platen has come down a little bit. And if you look at this column here, this trabecula here, you can see it's bent out and bowed out more. And people have found that usually the linear elastic behavior is controlled by bending of the trabeculae, and the plateau stress is usually controlled by some sort of buckling. But it's not elastic buckling. You don't recover it. If you take a piece of bone and you compress it, it's going to have a permanent deformation. So it's inelastic buckling. And I think we have some more pictures. This is another example from whale bone from Ralph Muller's group. So here's the bone unloaded. Here it's loaded to 4% strain, here it's to 8%. And you can start seeing right in this area here if you compare with up there, it's starting to deform. And if you go up here to 12% strain, you see that strut right there. That was this guy up here, and you can see that it's buckled right over. So people have made measurements like this and observations, and you can actually see the buckling. And people have also done finite element modeling. They can take a micro CT scan and input that to the finite element model. And then if they do the compression and they input the properties of the solid, they can see that they get a buckling kind of failure. If you have trabeculae that are very aligned-- we have more. Here's one more in the buckling. So this is one of Ralph's little movies. So when it unloads, it looks like it recovers, but this is all just an animation. He takes several stills and puts them together, and it doesn't actually recover. It's just the way that it shows. But again, these are two different specimens of different densities. You can see how the struts deform. They bend and then they buckle. Let me stop there, and I'll put some stuff on the board. So we'll say the compressive stress-strain curve has the characteristic shape of cellular solids. And the mechanisms of deformation and failure, usually there is bending followed by inelastic or plastic buckling. And sometimes, if the trabeculae are aligned like that knee that I showed you, or if they're very dense, then the axial deformation is important. And I'll just say people have found this by making observations using micro computed tomography or by finite element calculations. And this is a stress-strain curve in tension here; in tension you get failure at small strains, and you get micro cracks in the bone. And these next plots just show some data for the bone. So we're plotting the Young's modulus here. So this is a relative Young's modulus, the modulus of the bone divided by the solid cell wall material. Here's the relative density. Here's data for lots of different specimens of bone. So some of this data is for human bones, some is for bovine bone. Sometimes the data is taken where the orientation of the trabeculae doesn't line up with the direction of the loading. So you might have trabeculae that are oriented this way, but you're loading it this way. There's sometimes different strain rates. Let's see. There's different groups, and so there's a huge scatter in the range of the data.
But you can see if you look at it broadly and you look at that whole cluster of data, the data lie close to a line of a slope of 2. And if you think of the open-celled foam model and you had bending of the cell walls, you'd expect that the modulus would vary [? to the ?] density squared. So that's kind of the limit of how we do the modeling. We're really just interested in seeing how the properties vary with density. If you had a particular piece of bone, I don't think you could use the models to exactly predict what the modulus of that bone would be. And here's the compressive strength here. So this is the relative compressive strength. Here we've normalized it with the yield stress of the solid bone, and here's the relative density. And you can see, again, this line is of slope 2, so that kind of speaks to the buckling-type failure mode. And I think I have another one here. This is the tensile strength. So if you pull the bone in tension, you wouldn't expect to get buckling, you'd expect to get plastic yielding. And if you got yielding and you use the open-cell foam model, you'd expect a slope of 3/2, so this line has a slope of 3/2. And this line is sort of towards the upper bound of that set of data. You can imagine a line that went through it a little bit lower but the same slope. And so these open-celled foam models, they don't predict the properties of a particular piece of bone because the bone can have some anisotropy to it. The orientation of these things may not be perfectly lined up with the loading. But overall the models give you a sense of how the bone is deforming and failing. So let me write some of this down. So that's data for the modulus, the compressive strength, and the tensile strength. And those have been on those plots. Those values are normalized by data for cortical bone. I thought somebody was talking. It's just the chair squeaking. And as I said the spread in the data is large, and that's due to anisotropy in the bone and misalignment between the bone orientation and the loading direction. So when people first started doing tests on trabecular bone, they typically were orthopedics labs. And the orthopedics labs tended to initially cut the bone specimens on anatomical axes. So they would do you know the superior-inferior, or the medial-lateral, or the posterior-anterior. But the bone orientation didn't line up with those directions. So the bone might have been this way, but they were loading it this way, and so that gave this misalignment. And there could be some variation in the solid properties too. So you could imagine some solid might have more micro cracks than another. So if you took say human bone of different ages, you might expect the older bone to have more micro cracks in it. So these plots put a lot of data together, and then the lines are based on models for open-cell foams. So the relative modulus goes roughly as a relative density squared, and the cell walls are bending. And the compressive strength goes roughly as the modulus squared, and that's related to this plastic buckling. And then the tensile stress or tensile strength depends on the formation of plastic hinges, and it goes roughly as the density to the 3/2 power. And one observation that people have made is that in compression, if the modulus and the strength both go as a density squared, then the ratio of the strength to the modulus is just a constant, and that, in fact, is just the strain at failure, or the strain for that say, the plateau. 
And that's a strain of about 0.7%, and that's pretty consistent in trabecular bone. Let's see. And we said sometimes the bone was relatively aligned. So here's that picture of the femoral condyle again in the knee, and you can see the bones lined up. If you have bone that's lined up like that and you load it along the direction of alignment, then you can get axial deformation in the trabeculae. And then you would expect the moduli would go linearly with the density. And here's some data for the Young's modulus and the compressive strength of bone that was fairly aligned. So this was selected to be aligned. So here's the modulus here. And the square data points are the longitudinal direction, and the little diamond, these little stars, are transverse. So here's a line of slope 1, and again, they don't all lie perfectly on that line, but roughly the slope is about 1 there. And then similarly here, this is the compressive strength. Now the little squares are the longitudinal data, and they're not exactly on a slope of 1, but they're more or less on a slope of 1. So I'll just say in some regions, the bone may be aligned. And then axial deformation is important. And then you would expect the modulus to go linearly with the density and the strength to go linearly with the density in the longitudinal direction. Then finally I wanted to finish up the bit on the modeling by making one of these plots a little bit like we did for wood. So here's the Young's modulus of bone plotted against the density. The trabecular bone is down here. It's sort of the lowest density. And then this is the collagen that's in the solid part of the bone, and this is hydroxyapatite, the mineral. So the modulus of hydroxyapatite is around 120 gigapascals, and the modulus of collagen is somewhere around 5. And if you make composites of collagen and hydroxyapatite, their moduli are going to be in this envelope here, and compact bone, the modulus fits in around here. Remember I said it was around 18 gigapascals. So then if you take a compact bone and you turn it into trabecular bone, you'd expect the modulus would go down along a slope of 2. So here's our little slope of 2, and more or less that's what you see with the trabecular bone. So the idea is that the models give you kind of a general idea of how the bone is behaving, but it's not really meant to predict a particular piece of bone. Because a particular piece is going to have a particular geometry. Typically they're not equiaxed and isotropic. All right. So are we good with the general overview? Are we good with how fewer equations there are now that we got past the first part of the course? So I'm going to talk a bit more about osteoporosis, and I'm going to talk about some modeling that my group did to look at the consequences of osteoporosis. And then later on we're going to talk a little bit about using metal foams as a possible replacement material for a trabecular bone as well. And I have a little bit of a talk on using trabecular bone in evolutionary studies to see whether or not a species was bipedal or quadrupedal. So I think I talked about this a little bit in 3.032, but I have more slides and more stuff I'm going to talk about. So let me get myself organized. So osteoporosis comes from the Latin, and it actually means porous bones. So osteo means bone, and not too surprisingly porosis means porous. So this next slide gives you some idea what osteoporotic bone looks like. So the top slide is normal bone in a 55-year-old woman.
These are sections from the lumbar spine. And that bone up here is 17% dense, so the relative density is 0.17. And this is a section from the same area of bone in an 86-year-old woman, and it's 7% dense. So you can see there's a huge difference in the density, and you can start to see what happens when you lose bone mass. So if you look at this bone up here, it's all well connected. Each little trabecula is connected to its neighboring friends. And you can see down here, I mean, you look at this and you kind of go ouch just looking at it. Because this piece of bone here is just kind of dangling off, not connected to anything. And you can see the struts have gotten thinner, so they've lost bone mass by thinning. And then as I said when the thickness gets to be less than or roughly equal to the size of the cells, then the cells can't live anymore, and the bone strut just disappears altogether. So it's not too surprising that if you lose this much bone mass, there's mechanical consequences, and there's a greater risk of fracture. And as I said the two most common types of fractures are hip fractures and vertebral fractures. So let's see here. So as people age, everybody loses bone mass. And happily for you and not so happily for me, the bone mass peaks at about 25 years old. So you're probably either not at the peak or just barely at the peak. And then it decreases after that every year. I'm considerably older than you. And in women, when you go through menopause, the cessation of estrogen production increases the bone loss. And so typically, osteoporosis is most common in post menopausal women. And osteoporosis is defined as a bone mass 2.5 standard deviations or more below that of a young, normal mean. So it's not like you fall and break your hip and they say you have osteoporosis. It's based on the bone mass. And as I said, the trabeculae thin and then they resorb completely. So anybody here take Latin? Yes. I did Latin for one year in high school. So trabeculae, with an E on the end here, trabeculae, I suppose, is the plural. Trabecula with an A is singular. And that comes from Latin. So you don't say trabeculas. That's a no-no. All right. Let me get rid of these. So since we saw that the strength of the bone varies as the density squared, you can begin to see how sensitive the strength is going to be to this bone mass loss. So say you went from a density of 0.2 to a density of 0.1, then the density has changed by a factor of 2-- the density has gone down by half-- but the strength is going to go down by a factor of 4. The strength is going to be a quarter of what it was. And so you're going to have a big change in the strength. And you can imagine as the trabeculae thin, this buckling gets easier to happen. And then once the trabeculae begin to resorb, as they disappear altogether, it's like I said, it's like having a building's framework. Now, you're removing beams and columns, and the strength is going to go down even more dramatically. And the way we model the osteoporotic bone is we use finite element analysis. So before we talked about using the unit cell for the honeycomb. But to use the unit cell, you have to have repeating unit cells, and obviously, you don't have that. You've got local variations in what the structure looks like. And we also used a dimensional analysis. And the dimensional analysis relies on the geometry being similar from one specimen to another, and you can't really rely on that either for the osteoporotic bone.
And so what we've done is-- and this is what other people do as well, is use finite element modeling to represent the bone. So initially what we did was we used a 2D Voronoi model. So remember we talked about Voronoi honeycombs and Voronoi foams. So I like to start out with simple things, so we started out with 2D Voronoi model for a honeycomb. Then we did a 2D representation of vertebral bone. And then we had a 3D Voronoi. And I had a couple of students who did this. Matt Silver was the one who did the first two, and Sereca Vagilla was the one who did the last one. So let's see. I think that's probably a good place to stop there for today. So next time I'll talk about the modeling of osteoporotic bone, and we might talk a little bit about metal bones as a substitute for trabecular-- metal foams as a substitute. I don't think we'll get to the evolution stuff. We probably won't quite finish next time.
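As a footnote to this lecture, here is a minimal numerical sketch (my own, with all proportionality constants assumed to be 1 purely for illustration) of the open-cell foam scaling relations used above for trabecular bone, including the factor-of-4 strength loss when the density halves by strut thinning; the lecture's point stands that these models capture trends with density, not exact values for a particular specimen.

```python
# Foam-type scaling relations discussed in the lecture, with assumed
# constants of 1 (the real constants are of order 1 and geometry-dependent).
def modulus_ratio(rel_density: float) -> float:
    """E*/Es for bending-dominated behavior: goes as (rho*/rho_s)^2."""
    return rel_density ** 2

def compressive_strength_ratio(rel_density: float) -> float:
    """sigma*/sigma_ys in compression (plastic buckling): ~ (rho*/rho_s)^2."""
    return rel_density ** 2

def tensile_strength_ratio(rel_density: float) -> float:
    """sigma*/sigma_ys in tension (plastic hinges): ~ (rho*/rho_s)^(3/2)."""
    return rel_density ** 1.5

# The osteoporosis example from the lecture: density drops from 0.2 to 0.1
# by thinning of the struts, so the strength falls by a factor of 4.
drop = compressive_strength_ratio(0.2) / compressive_strength_ratio(0.1)
print(drop)  # 4.0 -- and this ignores the further loss when struts resorb entirely
```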
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. LORNA GIBSON: All right. I should probably start. Last time, we were talking about the honeycombs and doing some modeling of the mechanical behavior and we started off talking about the in plane behavior. We're talking about loading it in this direction or that direction there. And we talked about the elastic modulus. I think I derived a Young's modulus for the one direction, a Poisson's ratio for loading in the one direction. And then we started talking about the stress plateau and we went over the elastic buckling stress, for one of these elastomeric honeycombs like this. And we went through the plastic collapse stress, for, say, a metal honeycomb that would yield. And I think I started talking about a brittle honeycomb and brittle crushing. The idea with a brittle honeycomb-- like a ceramic honeycomb-- is it could fail in a brittle manner. And the failure is going to be controlled by the cell wall in bending. And when that bending stress reaches the modulus of rupture, or the bending strength of the material, then you get wall fracture. I think that's where we left it last time, right? I had written down something about cell wall fracture. Now, I wanted to do the little derivation. Here's our little schematic up here. Here's the honeycomb. You've loaded it with sigma 1 here to such an extent that one of these cell walls has reached the modulus of rupture and has broken. And this is the little free body diagram that corresponds. I'm going to go through sigma 1 for loading in the one direction. This is the same thing for loading in the two direction. And the result for that's in the book. OK. If I have loading in the one direction, I can relate that horizontal force p to the stress in the one direction. The little p is equal to sigma 1 times h plus sin theta times b. And remember, b's the depth into the page. And I'm going to define sigma fs as the modulus of rupture of the cell wall material. It's the bending strength of the cell wall material. And we're going to say that we get fracture of that bent wall when the applied moment is equal to the fracture moment. From the plastic collapse stress from last time, we had the applied moment was equal to p times l sin theta over 2. That was just using static equilibrium, looking at that free body diagram of the beam. And if I write p, in terms of sigma 1 up here, I can just write that like this sigma 1 times h plus l sin theta times b. And then I've got this other term of l sin theta and we divide that whole thing by 2. That's the applied moment. And we're going to get fracture when we reach the fracture moment. I'm going to call that mf-- the moment at fracture. Last time, we figured out a plastic moment to form a plastic hinge. And this is an analogous thing. But in this case, remember, if we have a beam and we have the stress profile through the cross section of the beam, it's going to look something like that. So for our beam, that's going to be the thickness of the beam there. So if it's linear elastic, we get the maximum stress at the top and the bottom. And the neutral axis is here in the middle. There's no stress there. This is the normal stress distribution here. 
And as we increase the stress for a brittle material that's going to be linear elastic till fracture, this is going to stay linear like this until we reach this modulus of rupture stress here. When we reach that stress, then we're going to get fracture of the beam. And we can say that there's some moment associated with that. I could say that this stress block here is equivalent to some concentrated force and this stress block down here is also equivalent to the-- it's going to be the same magnitude, but the opposite direction force. And I can get the fracture moment by figuring out how big those forces are and multiplying by this moment arm between the two forces. OK? The magnitude of those forces is just going to be the volume, essentially, of this stress block here. Imagine there's stresses there. It's a triangle, so the area of it's going to be a half times t over 2 times sigma fs. And it's going to go b into the page. So if you think of the force-- if this was the stress-- if that stress was constant, it would just be sigma fs times b times t. But it's not constant. It's a linear relationship. So I'm taking the area of that triangle. That's the force. And then I want to multiply that times the moment arm. And the moment arm between those two forces-- each of these forces acts through the centroid of the area. The centroid of the area is not in the middle for a triangle, and that total distance is 2/3 of the thickness, t. OK? That's the moment arm that you get by figuring out where the centroid of these areas are. I multiply that times 2/3 t and one of the 2's is going to cancel. I can rewrite that as sigma fs times b times t squared over 6. OK? I think, last time when we talked about the plastic moment, we did a similar calculation and it worked out to sigma y bt squared over 4. So now this is sigma fs, the modulus of rupture, times bt squared over 6. The 6 is just slightly different because we've got a triangle here instead of a square shape like we had before. And now I can get the brittle crushing strength in compression by just equating that applied moment to this fracture moment. And if you do that, the result you get is this plateau stress for brittle crushing in compression. In the one direction, it's sigma fs-- the modulus of rupture of the cell wall-- times t over l squared. And then divided by a geometrical factor. And for regular hexagons, it works out to 4/9 of the modulus of rupture times t over l squared. OK? Are we good? We've got the in plane compressive properties now. We've got the elastic moduli and we've got the three plateau stresses that correspond to the three mechanisms-- to the elastic buckling failure mechanism, the plastic yielding mechanism, and then the fracture mechanism for brittle crushing. OK? If you think of the stress-strain curve of these materials in compression, the stress strain curves all look something like that. And now we've figured out equations that give us the modulus here and our collapse stress there. OK? So we can describe that stress-strain curve. All right. That's compression. And the next thing I wanted to talk about is tension. And if we think about the tensile behavior, the elastic moduli are going to be just the same. So the moduli are the same in tension and compression. And then, if we think about the stress plateau, we don't really have a stress plateau for an elastomeric material because there's no elastic buckling. If you pull it in tension, you're not going to get buckling in tension.
You only get buckling if it's in compression. If you have a material that yields like a metal, you can get a plastic collapse stress and a plastic plateau. And that's very similar in tension and compression. There's a very small geometrical difference, but you can, essentially, ignore it. If you're loading the material in compression-- and imagine this was a metal-- if you load it in compression, the cell walls are getting a little further apart when I compress it. And if you're loading in tension, like this, the cell walls are getting a little closer together. So there's a small geometrical difference. But if we ignore that, we can say that the plateau stress for plastic behavior is about the same in tension and compression. And so really, the only property that's left is to look at a brittle honeycomb. And for a brittle honeycomb, you can have fast fracture and we can calculate a fracture toughness. So this next slide describes the fracture toughness calculation that we're going to do. Here's our honeycomb. I'm going to load it in the sigma 1 direction here. I've turned the honeycomb 90 degrees, so this is still sigma 1. And imagine now that we've got a crack here. And I'm going to consider a situation where the crack is very large, relative to the cell size. So it's not a crack in the cell walls. It's a crack that goes through multiple cells. I'm going to assume the crack is large, relative to the cell size. I'm going to assume that the bending is the main deformation mode. And what I'm going to do is look at-- if I have my crack tip here, I'm going to look at this cell wall a just ahead of the crack tip. And I'm going to say, that cell wall is bent. And I'm going to figure out something about the stress in that cell wall and look at when that fails. And I'm going to assume that the cell wall has a constant modulus of rupture. So the cell wall has a constant strength. You can imagine the cell wall could have little tiny cracks in it, too. And if a cell wall has a bigger crack, it's going to fail at a lower stress. But let's imagine that the cell walls are all the same strength and they all have a constant modulus of rupture. Let me write some of this down. In tension, the elastic moduli are going to the same as in compression. There's no elastic buckling in tension, so that's not going to happen. The plastic plateau stress in tension is going to be very similar to that in compression. As I mentioned, there's a small geometrical difference, but we're going to ignore that. And then, if we had a brittle honeycomb, like one of those ceramic honeycombs I showed you, then we can have fast fracture. What we want to calculate is the fracture toughness. And I'm going to make a few assumptions here. I'm going to assume that the crack length is large compared to the cell size. And if I do that, I can say that I'm going to use the continuum assumption. Hello. We'll come back to that. I'm going to say that axial forces can be neglected. We're just going to look at bending forces. And I'm also going to assume that the modulus of rupture is constant for the cell wall. First, let's just think again about the continuum. Imagine we just had a solid and we have a plate of the solid and it's loaded in tension with some remote stress-- some far away stress-- sigma 1. And the plate has a crack of length 2c perpendicular to that normal stress. And we're going to look at the stress-- local stress at the crack tip-- some distance r ahead of the crack tip there. 
In fracture mechanics, it's been worked out what that local stress field is. And it depends on the crack length, and then how far ahead of the crack tip you are. So you can say that if you've got a crack length of 2c in a linear elastic solid, and the crack is normal to a remote tensile stress-- which I'm going to call sigma 1-- then that crack is going to create a local stress field at the crack tip. And we're going to use this equation for the local stress field. The local stress field is equal to the far away field multiplied by the square root of pi c and divided by the square root of 2 pi r. So there's a stress singularity at the crack tip. And then the local stress decays as you move away from the crack tip. AUDIENCE: And what is r? LORNA GIBSON: r is the distance from the crack tip. So if that's the tip of my crack there, then r is my distance out. OK. In the honeycomb wall, if we look at the crack here, and then we look at that cell wall a that's just ahead of the crack tip, that cell wall is bent. So in the honeycomb, we're going to be looking at the bent cell wall. And that wall is going to fail when the applied moment equals the fracture moment. If we look at wall a, we could say that the applied moment is going to be proportional to p times l. Getting ahead of myself there. I'm going to do this-- because it's hard to say exactly where the crack tip is because there's a void there. I'm going to use that argument here where I make everything proportional. The moment's going to be proportional to p times l on wall a. And the fracture moment is going to be proportional to sigma fs times bt squared. Earlier, we said it was sigma fs bt squared over 6. It's the same thing. I'm just dropping the 6 out. And then I can also say that this applied moment, if it goes as pl-- p is just going to be my local stress times lb. And then I multiply times l. So if you think of just thinking about-- if you got a load p on this member here, l, there's going to be some local stress there. And p is just going to be that local stress times the cell wall length times the width into the page. And then, that local stress, sigma l, I can replace with that equation over there. So that local stress is going to go as sigma 1 times the root of c over the root of r. And I'm going to say the distance ahead of the crack tip goes as l. Instead of having r, I'm going to say it's l. It's not necessarily exactly l, but it's going to be some fraction of l. That's my local stress there. And then I've got an l squared times b. And if I set that equal to the fracture moment, that's going to be proportional to sigma fs bt squared. Are we good here? You have to think of the crack tip. And there's some local stress field ahead of the crack tip. And we're saying that the load p is equal to that local stress times a cell length times the depth into the board. And then multiply it times l to get the moment. And then I replace that local stress with this standard equation for the remote stress and the crack length and the distance ahead of the crack tip. So here, the b's are going to cancel out. And now I can solve for a fracture stress in the one direction. And that's going to-- well, let me get proportional-- that's going to be proportional to sigma fs. Then there's going to be a t over l squared. And then, this is going to go as the square root of l over c like that.
And now, if I want to get a fracture toughness, the fracture toughness is just the strength times the square root of pi times the half crack length c. So here, my fracture toughness is just that strength times the root of pi c. So I can say that's equal to a constant times sigma fs times t over l squared. And now, times the square root of l. These root of c's have canceled out. So that's my equation for the fracture toughness. And one of the interesting things here is that the fracture toughness depends on the cell size. This is the first property that we've derived an equation for where it depends on the cell size. OK. And here, that C out front is just going to be a constant. All right. Now we've got a set of equations that describe the in plane properties. We've got equations that describe the linear elastic moduli in the plane. We've got three equations that describe the compressive stress for elastic buckling failure, for plastic yielding failure, and for a brittle crushing failure. And we've got an equation for the fracture toughness, as well. OK? We've got a description of the in plane behavior of these hexagonal honeycombs. The next thing I wanted to do was talk a little bit about in plane behavior, but for a different cell shape-- for triangular honeycombs. Because they deform by a different mechanism. And they can be used to represent the lattice materials that we looked at earlier, too. If we have a honeycomb with triangular cells, triangulated cells behave like a truss. And you can analyze trusses by just saying that the joints are pin jointed. There are no moments at the joints or the ends of the members. And the forces are all axial and so the behavior's a little bit different. I wanted to show you how these triangular honeycombs work, too. I can scoot this up. Imagine that you've got a honeycomb that's an array of triangular cells like this. And say we're applying some bulk stress sigma to it, like that. And say it's got a depth b into the page. When we have a triangulated structure like this, it behaves like a truss. And we can analyze it as being a pin-jointed structure. There's no moments at the joint. And if it's pin-jointed and there's no moments at the nodes, then we just get axial forces along the members. And even if the nodes were fixed-- as they are in these ceramic honeycombs-- you can show that if it's triangulated, even if you accounted for any bending, it really is a very tiny contribution to the deformation and the forces. It's less than a couple of percent. I'll say even if the ends are fixed-- I'll just say the bending contributes less than 2% to the forces and the deformation. If I have a triangular cell like that, and say I pick a unit cell like this, and I say that the bulk stress produces a load of p on the top and p over 2 at each of the bottom nodes there, then the force in each member is going to be proportional to p. And for a given geometry of triangle, you can figure out exactly what the force distribution would be in each of the members. But I'm going to use one of these proportional arguments again, just to get a general result. Because I don't really care that much about the details of the geometry. OK. If I have a little set up like this, I can say that the overall stress is going to be proportional to p over lb. And the stress in each member is going to be proportional to p over l times the thickness-- or b times the thickness. This is the overall stress.
The overall strain is going to be proportional to some deflection of the triangle divided by the length. So if I said, say, this length here was a length l. And then the deflection of each member is going to be proportional to p times l over es times the cross sectional area of the member, and that's just b times t. OK? So this is the stress on the whole thing, the strain on the whole thing, and relating the delta to the p. And then, the modulus of the whole honeycomb is going to go as the stress over the strain. So that's p over lb divided by delta over l. These l's here cancel. And delta here is pl over es bt, and so the b's cancel and the p's cancel. And the modulus I get for the honeycomb is just some constant related to the cell geometry times the modulus of the solid times t over l. And if you did an exact calculation for equilateral triangles, you'd find that that constant's 1.15. The interesting thing to note here is that the modulus for these triangular honeycombs goes as t over l, not as t over l cubed. For the hexagonal honeycomb, it went as t over l cubed. And here, because the deformations are axial-- not bending-- it's much stiffer. And it's much stiffer to have one of these triangulated structures. I'll just say, here, that the modulus goes as t over l cubed for the hexagonal honeycombs due to the bending. One of the reasons that people are interested in those lattice materials is that they, too, have moduli that go as t over l. That basically go with the relative density, rather than with the relative density cubed. So they're much stiffer than, say, a hexagonal honeycomb. OK? Are we good with the triangulated honeycombs? Yes? AUDIENCE: What is c? LORNA GIBSON: c's just a constant related to the cell geometry. For equilateral triangles, it's 1.15. You could work it out, but it just makes the whole thing a little more complicated to do that. OK. That's the in-plane behavior. And next, I wanted to talk about the out-of-plane behavior. Remember, we said the hexagonal honeycombs are orthotropic and the orthotropic materials have nine elastic constants. And we've figured out four so far. We've figured out the four in-plane elastic constants. There's five out-of-plane elastic constants to describe the elastic behavior completely. And so we want to talk about these other elastic constants. The honeycombs are also-- I should just back up a little bit. The honeycombs are used in sandwich panels. And when they're used in sandwich panels-- I brought a little panel in with carbon fiber faces and a nomex core. If you bend that panel like that, you're going to get shear stresses in the core. And the shear stresses are going to be going this way and this way on, and that way, that way on. And so those shear stresses are out-of-plane. They're in the x1, x3, or x2, x3 planes. And so you need the out-of-plane properties for the shear properties in the sandwich panels. Honeycombs are also sometimes used as energy absorption devices. Not these rubber ones, but imagine there was a metal one. And when they're used for energy absorption devices, they're typically loaded this way on. Again, that's the out-of-plane direction and you need the out-of-plane properties. And for the out-of-plane properties, the cell walls don't tend to bend. Instead, they just extend or contract. And you get stiffer and stronger properties. Let me just write something down and then we'll start to derive some of those properties. 
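Before the board work on the out-of-plane constants, here is a quick numerical sketch (mine) of the stiffness gap just described between triangulated and hexagonal honeycombs loaded in-plane. The 1.15 factor for equilateral triangles is the value quoted above; the roughly 2.3 factor for the regular hexagon's in-plane Young's modulus is an assumption here, since only the (t/l)^3 scaling is quoted in this part of the lecture, and the cell-wall modulus is an assumed aluminum-like value.

```python
# In-plane Young's modulus: stretch-dominated triangles vs bending-dominated hexagons.
def E_triangular(Es: float, t_over_l: float) -> float:
    return 1.15 * Es * t_over_l          # axial deformation: linear in t/l (from the lecture)

def E_hexagonal_inplane(Es: float, t_over_l: float) -> float:
    return 2.3 * Es * t_over_l ** 3      # wall bending: cubic in t/l (constant ~2.3 assumed)

Es = 70e9        # Pa, assumed aluminum-like cell wall
t_over_l = 0.1
print(E_triangular(Es, t_over_l) / E_hexagonal_inplane(Es, t_over_l))
# ~50x stiffer at t/l = 0.1 -- the reason lattice materials are attractive.
```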
The cell walls contract or expand instead of bending, and that gives stiffer and stronger properties. OK. OK. There's five elastic constants in the out-of-plane directions. We'll start with the Young's modulus. And if I take my honeycomb and I load it in the x3 direction-- just taking this thing here and just loading it like that-- the cell walls just axially contract and the stiffness just depends on how much cell wall I've got. So the modulus in the three direction is just equal to the area fraction times the modulus of the solid. That's just the same as the volume fraction, or the relative density. So it's quite straightforward. The cell walls contract or extend axially. e3 is just es times the relative density. And that's just es times t over l, and then there's a geometrical factor here: h over l plus 2, divided by 2 times cos theta times h over l plus sin theta. Again, a little bit like those triangular honeycombs. The thing to notice here is that in the three direction, the modulus goes linearly with t over l, whereas in the in-plane directions, it goes with t over l cubed. So there's a huge anisotropy in the honeycombs because of this difference. Imagine a honeycomb might be 10% dense. t over l might be something like a tenth-- 0.1. So e star 3 is going to be 0.1 of es, roughly, and in the other direction, it's going to be 1/1000th. So there's a huge anisotropy because of this. Let me just-- square honeycombs. This just shows looking at the out-of-plane directions and the different stresses and properties that we're going to look at here. The next one we're going to look at is Poisson's ratio. And first, we're going to look at loading in the x3 direction. And if we load it in the x3 direction, the cell walls just strain by whatever the Poisson's ratio is for the solid times the strain in the three direction in the other two directions. We'll say for loading in the x3 direction, the cell walls strain by nu of the solid times whatever the strain is in the three direction in the other two directions. If we load it in the x3 direction and everything contracts by that much in the other two directions, that just means that the Poisson's ratios-- nu 3 1 and nu 3 2 are going to be the same. And they're just going to be equal to the Poisson's ratio of the solid. So if each wall is going to contract by that amount, the whole thing's going to contract by that amount. And that's going to give you that Poisson's ratio. Let me just say, here, also-- and remember that I'm defining nu ij as minus epsilon j over epsilon i. We're loading in the three direction here. And then you can get the other two Poisson's ratios using those reciprocal relationships. So nu 1 3 and nu 2 3 can be found from the reciprocal relations. And remember, those relationships come from saying that the compliance tensor, or the stiffness tensor, is symmetric. We can write, for instance, that nu 1 3 over e1 is equal to nu 3 1 over e3. So I can write that like that. And then I can say nu 1 3-- that is going to equal to nu 3 1 times e1 over e3. And we just saw that nu 3 1 was equal to nu s. And we saw, from before, that e1 is equal to some constant. Let me just call it c1 times es times t over l cubed. And e3 is going to be some other constant times es times t over l. The es's are going to go. And if t over l is small-- even if it's say, a 10th-- and this is going as t over l cubed and that's going as t over l, then I can say this thing is about equal to 0. It's going to be small.
It's not going to be exactly equal to 0, but it's going to be small so we're going to say it's 0. So I'll just say for small t over l. And then, similarly, nu 2 3 is going to be close to 0, as well. So there's the Poisson's ratios. We've got the Young's modulus, the Poisson's ratios, and next we want to get the shear moduli. And the shear moduli are a little more complicated. The cell walls are loaded in shear but the neighboring cell walls constrain them and they produce some non-uniform strain. I'm talking about shearing it this way on. You can see on this figure here, we're talking about shearing it, like tau 2 3 or tau 1 3, this way. And so each wall is going to shear, but the walls are attached to each other so they can't just do it independently. They have to be constrained by each other. And the exact solution is a little bit complicated. And I'm just going to give you an estimate of what that modulus is. And we're going to see that it depends linearly on t over l, as well. I'll just say the cell walls are loaded in shear. An estimate is g star 1 3 is equal to g of the solid times t over l times a geometric function. It's cos theta over h over l plus sin theta. And for regular hexagonal honeycombs, it's 1 over root 3 times gs times t over l. Again, just note the linear dependence of the modulus on t over l. And in the book, there's a method using upper and lower bounds that gives an estimate for g 2 3. I'm not going to go into it. I just want you to notice that the shear moduli go as t over l, just like the Young's modulus does. And Sardar, who's sitting in the back there, has done even more involved calculations and analysis of the shear moduli of the honeycombs in this direction. So I'm not going to go into all the gory details on that. OK. That gives us the moduli now. So now we've got all nine elastic moduli. OK? And the next thing to do is, then, to figure out the compressive strength. So we're going to look at compression again, and then we'll look at tension. If we look at compressive strengths, again, we've got different modes of failure. And if I have an elastomeric honeycomb like this one here-- if these cell walls were a little longer, I might be able to actually do it. If you compress this enough, you produce buckling in the cell walls. And this is a schematic of this buckling pattern here. And you can see there's a diamond pattern where it alternates up and down in the different cell walls. We're going to do some approximate calculations, but you can see the idea of how the material behaves in this direction, just from these approximate calculations. OK. Say we have our honeycomb like this, and here's the prism axis this way. And now, we're going to load it up with some stress in the three direction. I'm going to call this sigma star elastic 3 when it buckles. And what we're going to do is just look at a single plate. And look at the buckling of a plate. We're going to analyze it just looking at a single plate and then adding up how many plates we have per unit cell. It's actually more complicated than this because, obviously, the plates are attached together and there's some constraint by attaching the plates. But we're not going to worry about that. If you have a column-- just a, say, circular cross-section column-- and you apply a compressive load to it, it buckles at the Euler load. And similarly, there's an Euler load for plates. And that equation is usually written as: p critical is equal to some end constraint factor. For plates, it's usually called k instead of n.
So this is an end constraint factor. It depends on the modulus of the plate. It goes as t cubed. Then, there's a factor of 1 minus mu of the solid squared and the length of the plate. Say this plate here-- actually the width of the plate there is h and the length here is b. And this thickness here is t like that. Here, k is an end constraint factor. And it's going to depend on the stiffness of the adjacent cell walls. If I had a honeycomb, and say it was-- these walls here-- the adjacent walls-- were thicker, then you can imagine those thicker walls-- it'd be harder to get them to deform. And the end constraint for the plate is going to depend on those thicker walls. So that the end constraint, k, depends on these-- say I'm looking at this wall here of width h here, then how stiff these other two walls are is going to affect that end constraint factor. What we're going to do is just do something very approximate. We're going to say if these vertical edges here-- if this edge here and that one there-- if they were simply supported-- if they're just pinned to the next column, the next member-- then k has some value. And if they're fixed, it has some other value. And we're going to pick a value in between. So we're going to do something very approximate. I'll say if those vertical edges are simply supported-- that means they're free to rotate-- then k is equal to 2.0. And this is if b is bigger than three times the length. So this is h here, or we could say l. Either way. It's really the-- it's the length when we look at the honeycomb this way on, but it's the width in that picture there. And if the vertical edges are clamped, or fixed, then k is equal to 6.2. These are values you can look up in tables of plate buckling. And we're just going to approximate it by saying k is equal to 4. We're just picking a value that's in between those two. And then, the p total is going to be the sum of the p criticals for the columns that make up a unit cell. For the unit cell, I have one wall of length h and two of length l. And if you just take that total load and divide by the area of the cell, you get that this compressive strength for elastic buckling is approximately equal to es over 1 minus nu s squared times t over l cubed. And then there's a geometrical factor here. And if you had regular hexagonal cells, this buckling stress works out to 5.2 times es times t over l cubed. If you remember, for the loading in the two direction-- in the in-plane direction-- it has the same form. And goes as es times t over l cubed, but it's much smaller. This number here was, I think, 0.2. It was much smaller. So it has the same form, but it's a lot bigger. OK? Are we good with that? The idea is we just use the standard equations for plate buckling. We make some estimate of what that end constraint factor is. And we just have an approximate calculation here. OK. That's the elastic buckling. If I had a metal honeycomb, then it might not fail by elastic buckling like that. Instead, we'd probably get yielding. If it was dense enough, we could just get axial yielding that-- if you just loaded it, you'd have axial forces. And at some point, you'd reach the yield stress. And so you can get failure by just uniaxial yield. That's one option. And if you get that, then it just depends on how much solid you've got again. So it's just the yield strength of the solid times the relative density. But usually, the honeycomb is thinner walled than that. And usually, you get plastic buckling proceeding that. 
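Before the plastic-buckling calculation, here is a quick numerical comparison of the two elastic buckling results quoted in these lectures: roughly 5.2 Es (t/l)^3 out-of-plane versus roughly 0.22 Es (t/l)^3 in-plane for regular hexagons. This is a back-of-envelope Python sketch with illustrative values, not a design formula.

```python
# Elastic buckling (plateau) stresses for a regular hexagonal honeycomb,
# using the prefactors quoted in the lectures; both are approximate.
def sigma_el_out_of_plane(Es, t_over_l):
    """Loading along the prism axis x3: plate buckling of the walls."""
    return 5.2 * Es * t_over_l**3

def sigma_el_in_plane(Es, t_over_l):
    """Loading in x2: Euler buckling of the walls of length h."""
    return 0.22 * Es * t_over_l**3

Es, t_over_l = 3e9, 0.05   # illustrative: a polymer solid with slender walls
s3 = sigma_el_out_of_plane(Es, t_over_l)
s2 = sigma_el_in_plane(Es, t_over_l)
print(f"sigma*_el3 = {s3/1e6:.2f} MPa, sigma*_el2 = {s2/1e6:.3f} MPa, ratio ~ {s3/s2:.0f}x")
```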
In plastic buckling, you can think of it as-- say if you have a tube-- this is just shown for an individual tube here. You can see how the tube folds up. And you can get that same kind of thing with the honeycomb. Here's a single tube. It's been loaded along the prism axis of the tube. And you can see, you get these folds, and the more you load it, the more number of folds you get. And the more the folds concertina up. To do an exact analysis for the honeycombs, you would have to take into account not just one tube, but the constraint of the neighboring tubes again. And again, that gets to be a complicated, messy thing. So again, we're going to do a more approximate thing. What we're going to do is just say that we have members that are folding up like that. So the same geometry. But we're just going to look at a single cell wall and see what the single cell wall does. And someone else has done the more exact calculation. We'll just compare our approximate calculation to the exact one. OK. We're going to consider an approximate calculation. What we're going do is look at our isolated cell wall. And if you look at the figure here, the wall is going to be vertical, initially. And as we load it, eventually it's going to buckle and we're going to form one of those plastic hinges in the middle here. And then, the thing is then going to rotate about that plastic hinge and just fold up. So at the bottom here, it's completely folded up. OK? And we're going to do a little work calculations. We're going to look at the internal work done and we're going to look at the external work done. The external work is just going to be this load p times that deflection delta that the p moves through. And if we say this is half of a wavelength-- if you think of this thing going through multiple wavelengths, just consider when it folds up like that, that's a half of a wavelength. It would go two of those to get a full wavelength. That's lambda over 2. And so to go from this stage to that stage over here, the external work done is going to be approximately p times lambda over 2. Say that it's thin and that 2t is small compared to lambda. So it's going to be about p times lambda over 2. And then, we're also going to look at the work done by the plastic moment. And when we form the plastic hinge here, there's a plastic moment. And that moment is going to rotate through an angle of pi. So we start out straight here, we end up folded up like that, and we've gone from straight to that. We had to go through 180 degrees to get there. So it goes through an angle of pi. And if you have a moment going through a rotation, the work done is the moment times the rotation. We're going to equate those two works done. We're going to look at the rotation of the cell wall by an angle of pi at the plastic hinge. Our plastic moment-- it's going to be the yield strength of the solid again times t squared over 4, the same as when we were talking about the plastic moment before for the other loading direction. But now, instead of multiplying this times b, we're multiplying it times 2l plus h. That's the length of the cell wall that's associated with one cell. And now, it's not b because now we've turned the thing the other way on. We're loading it the other way on. And this plastic hinge-- if I think of-- if this was b before. And now that b is l plus 2h-- or 2l plus h, rather. That's the dimension of the-- let me draw a little hexagon so maybe you can see. OK. Now we're forming a plastic hinge halfway down the board. 
Imagine that this has some length b that way and we're halfway down the board. And now, the plastic hinge has to form all the way around these members here for one cell. Or you could think about it as this guy plus these guys is one cell. You can think about the unit cell different ways, but it's one h plus two l's. OK? Are we OK with that? OK. Then the internal plastic work is that plastic moment times the rotation pho-- or pi, rather. Sorry. Are we OK with this? That the work done is m times our angle? Imagine-- let me get rid of my honeycomb here. Imagine you have a point here and you have some force over here. Let's call that f. And say, the force is at distance r from f. And say that it moves through some distance. The moment here would be r times f. And if that rotates, say, through some angle-- let's call it alpha-- and here is f here, then this distance here that the force moves through is just r times alpha. So the work done is going to be r times alpha times f, or just the moment times alpha. OK? So that's all that we're doing. OK. That's the internal plastic work. And now we have to look at the external work done. And that's equal to p times lambda over 2. Here, lambda is the half wavelength of the buckling. I'm going to say for these tubular kinds of things, it's in the order of l. So if you look at that last slide here-- oops. Rats. How'd that happen? Let me scoot back down here. There. If we look at that guy again, the magnitude of the buckling wavelength is on the order of l. And here, below p, can be related to the stress in the three direction. We'll just multiply it times the area of the unit cell. And so if I equate the internal work and the external work, I can say p times lambda over 2 is equal to pi times my plastic moment. And then, for p, I can write sigma 3 h plus l sin theta times 2l cos theta. And then, lambda is l divided by 2 is equal to pi. And then I've got my plastic moment over there. And then if I solve for sigma 3, that's my compressor strength. I've got pi by 4, the strength of the solid, sigma ys, times t over l squared. Then h over l plus 2 divided by h over l plus sin theta times cos theta. And for the regular hexagons, this works out to about 2 sigma ys times to over l squared. And the exact calculation for regular honeycombs is equal to 5.6 times sigma ys times t over l to the 5/3 power. This power here-- 5/3-- is a little less than 2. And that's because the additional constraint of the neighboring cell walls. But the main thing we're interested in, in these sorts of calculations, is the power dependence on the density and this simple calculation. Obviously, it's not exact, but it gets you close. OK. I'm just going to wait for people to catch up a little. OK. The next property I'm going to look at is out-of-plane brittle fractures. Say we loaded in tension, and if we had no cracks in the walls, we'd just see uniaxial tension and the strength would just be the strength of the solid times the relative density times the amount of solid. We'll just say if defect free, the walls see uniaxial tension. And then the fracture stress in the three direction is just equal to the relative density times the fracture strength of the solid. If the cell walls are cracked, and if the crack length is very much bigger than the cell length, then the crack propagates normal to x3. Then we can say the toughness gc-- or the critical strain energy release rate-- is just equal to the volume fraction of solid times gc for the solid. 
And then the fracture toughness, k1c, is equal to the square root of the Young's modulus times gc. And that's just equal to the relative density times the modulus of the solid. And then the relative density times the toughness of the solid. So it's just equal to the relative density times the fracture toughness of the solid. It's just straightforward there. Then we've got one last out-of-plane property. And that's brittle crushing and compression. And if we have some compressive strength of the cell wall-- say I call it cs-- then it's just the relative density times that strength. And for brittle materials, that crushing strength is typically around 12 times the modulus of rupture, or fracture strength. We could say that's about equal to 12 times the relative density times sigma fs, a fracture strength. OK. That's the modeling of the honeycombs. I know there's been a lot of equations and derivations, but that's the basis of a lot of the things we're going to do in the rest of the course. The modeling we're going to do on the foams is based on this and the mathematics is just easier because we're going to use these dimensional arguments. We're not going to figure out all these geometrical parameters. Before we get to the foams, I wanted to talk a little bit about honeycombs in nature. And we've only got a couple minutes left, so I won't really get that far. But I wanted to talk a little bit about honeycomb materials in nature. And the two examples we're going to talk about are wood and cork. I'm going to talk a little bit about the structure of wood next time. Then, we'll see how we can apply these models to understanding how wood behaves. And we'll see how you can use these models to predict the density dependence of wood properties and also the anisotropy in wood properties. And I guess we'll probably, maybe, start it Wednesday next week. We'll talk about cork, as well. Those of you who took 3032 know that I like cork because of Robert Hooke and his drawing of cork. And I made a new video that I'm going to show you. Remember in 3032, I showed you the video from the Bodleian Library, where they had the first edition of Hooke's Micrographia. Well, it turns out Harvard has a first edition. Harvard has three first editions. Yeah. Exactly. MIT has zero first editions. Gee, why does that surprise me? And I have a friend who's a librarian at Harvard and she arranged for me to go and make a little video with the first edition of Micrographia. So I can-- I don't if we'll play the whole thing, but I'll show you the first little bit of it. And you can watch it at your leisure. And Sardar came. You came and saw it with me. You came and saw the first edition with me, right? AUDIENCE: Yes. LORNA GIBSON: Yeah. Yeah. It's very beautiful and you'll see some of the nice drawings. And I talk about the cellular structure of some of the drawings. So we'll talk about wood and cork next time. But I think I'm going to stop there because that seems like enough equations for now.
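As a closing numerical note on the out-of-plane plastic collapse results above, here is a small Python sketch (not part of the lecture) evaluating the single-wall work-balance estimate next to the exact regular-hexagon expression that was quoted; the yield strength and geometry are illustrative.

```python
import math

def sigma_pl3_approx(sigma_ys, t, l, h, theta):
    """Single-wall work-balance estimate: plastic hinge folding, neighbour constraint ignored."""
    geom = (h / l + 2) / ((h / l + math.sin(theta)) * math.cos(theta))
    return (math.pi / 4) * sigma_ys * (t / l)**2 * geom

def sigma_pl3_exact_regular(sigma_ys, t_over_l):
    """Exact regular-hexagon result quoted in the lecture (includes the neighbour constraint)."""
    return 5.6 * sigma_ys * t_over_l**(5 / 3)

sigma_ys = 100e6                                   # illustrative wall yield strength, Pa
t, l, h, theta = 0.1, 1.0, 1.0, math.radians(30)   # regular hexagon, t/l = 0.1

print(sigma_pl3_approx(sigma_ys, t, l, h, theta) / 1e6)   # ~2 * sigma_ys * (t/l)^2 -> ~1.8 MPa
print(sigma_pl3_exact_regular(sigma_ys, t / l) / 1e6)     # 5.6 * sigma_ys * (t/l)^(5/3) -> ~12 MPa
# The two differ here because the approximate calculation ignores the constraint of the
# neighbouring walls; the main point is the power dependence on t/l, as noted in the lecture.
```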
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
Student_Project_Examples.txt
PROFESSOR: Another part of the course is that I also have a project in the course. And students do the projects in pairs. Partly, because I just think it's nice for them to have somebody to work with. And they've done a huge range of projects. I let them do whatever project they want. It has to have something to do with cellular solids. And some of them have done, sort of, analytical things where they've done numerical finite element analysis of some cellular structure. Some of them done very experimental things. And some have been a lot of fun. So, for instance, almost every year there are students who want to work on food foams. And one year, for example, there was a group made bread. And they looked at sort of the processing of the bread. And they looked at how if you used more yeast, or you let the yeast rise, or the bread rise for longer, how that affected the cellular structures. So they sort of changed the chemistry of how they made the bread. And then they looked at the micro structure of the bread that they got. I think they even did some mechanical tests of the bread. So that was kind of fun. And I think, probably, the most interesting project any student has done was on elephants skulls. So you could imagine an elephant skull is huge, and it's bony. So it would be very, very heavy, if it didn't have some pores in it. And elephant skulls, it turns out, have some very large pores in them. I think partly to reduce the weight of the skull, and the head, and the bone. The neck has to, kind of, carry it all. So these two students came to me. And they said they had heard somewhere or they'd read somewhere that these pores in the elephant skull had an effect on how the elephants perceived sound and the acoustic transmission of sound waves through the skull. And they wanted to do a project on elephant skulls. So I was kind of intrigued by this. I love all natural history kinds of things. And I've worked before with people at Harvard's Museum of Comparative Zoology, where they have bones. They have stuffed bodies. They have all kinds of animals over there. And I called up a colleague over there, and it turned out they had elephant skulls over there. So I went with one of the students. And it was an attic of the building, this kind of dingy place. And there was this huge room. And it was full of elephant bones. And they had several skulls, which are like the size of this table. They're huge. They're this big. And some of the skulls were cracked. And you could see these big pores in the skulls. And then the students found out that University of Texas at Austin has CT scans, Computed Tomography scans of all sorts of bones. And sure enough, they had elephant skulls scanned. And so they got a three-dimensional representation of the elephant skull through this University of Texas at Austin program. And then they used that as input to a 3D printing set up. So they 3D printed a sort of mimic of the elephant skull with some ceramic powder. And they made a skull was about this big. And then they wanted to look at the acoustic properties of it. So what they did was they suspended the skull from a string. And they had a speaker, and the speaker had a sound. And they put an accelerometer on the skull. And they measured the vibration of the skull. And then to compare it with the skull that didn't have these pores. And they got the CT image from Austin. And they 3D printed this dolphin skull. And they did the same thing. They suspended the skull from as a thread. And they measured the vibration. 
And they could show-- and I've forgotten the details of their results-- but they could show, basically, that the two skulls had a different frequency response to the vibrations. And they thought maybe part of it was because the pores. So these two sections here are two sections of their 3D printed elephant skull. I don't have the entire thing. But you can see this is the orbit, where the eyes would have gone in here. And if I turn it over and you look inside, you can see these pores in here. And also if you look at this section lower down on the skull, you can see this whole porous structure here, as well. And if we flip it over there's a little bit more over there. So that was probably the most intriguing project that the students did as part of this course. That was quite something.
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
4_Honeycombs_Inplane_Behavior.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. LORNA GIBSON: OK, so it's five after. We should probably start. So last time we were talking about honeycombs, and I just wanted to quickly kind of review what we had talked about, and then today I'm going to start deriving equations for the mechanical properties of the honeycombs, OK? So this is a slide of our honeycomb setup here. These are the hexagonal cells we're going to look at. We talked about the stress-strain behavior. The curves on the left-hand side are for compression and the ones on the right-hand side are for tension. And so what we're going to be doing today is we're going to start out by calculating a Young's modulus, this slope here. We're going to calculate the stress plateaus for failure by elastic buckling in elastomeric honeycombs, by failure from plastic yielding in, say, a metal honeycomb, and by failure by a brittle crushing in, say, a ceramic honeycomb. And if we have time, we'll get to the tension stuff. I don't know if we'll get to that today or next time. So we're going to start calculating those properties today. And these were the deformation mechanisms. Remember, we said the linear elastic behavior was related to bending of the cell walls, and then the plateau was related to buckling if it was an elastomer. And the plateau was related to yielding if it was, say, a metal that had a yield point. And then this was sort of an overview of the stress-strain curve showing those different regions, OK? So what I'm going to talk about today to start is the linear elastic behavior. And we're going to be starting with the in-plane behavior. So in-plane means in the plane of the hexagonal cells. And then next time we'll do the out-of-plane behavior, this way on. So if I had to form my little honeycomb like this, what initially happens is the inclined cell walls bend. So if you can see over here, we've kind of exaggerated it on this sketch. So this wall here is bent. This one here just kind of moves along, goes for the ride. And this guy here is bent. So this is for loading in the what we're calling the x1 direction, sigma 1. And the same kind of thing happens when we load in the other direction, in the sigma 2 direction, these guys still bend. Now the honeycomb gets wider that way. It gets shorter this way, wider that way. And we can calculate the Young's modulus if we can relate the load on the beam in the moments to this deflection here, right? So the Young's modulus is going to be related to the stiffness and the stiffness is going to be related to how much deformation you get for a certain amount of load that you put on the beam. So I'm going to calculate the modulus for the x1 direction, the thing on the left there. And you can do the same thing for the x2 direction on the right, but I won't calculate that because it's exactly the same kind of process. OK, so let me start here. Get my chalk. So I'm going to draw a one-unit cell here. So here's my unit cell there, like that. And this member here is of length h. That member there is of length l. That angle there is theta. I'm going to say all the walls have equal thickness, and I'm going to call it t. And I'm going to define an x1 and x2 axis like this. So the horizontal is x1 and the vertical axis is x2. 
And I'm going to say that I apply a sort of global stress to it, sigma 1. So there's a stress in the one direction there, sigma 1, OK? And I'm going to say my honeycomb has a depth b into the page, but the depth s-- is because the honeycomb's prismatic, the b's are always going to cancel out of all the equations that we're going to get, because everything's uniform in that direction. And we can think about a unit cell here, and in the x1 direction, we could say the length of our unit cell is 2l cos theta. So in the x1 direction, that's our unit cell there, and that's 2l cos of theta. And in the x2 direction, you might think that you go from this vertex up here down to that vertex there, but if you did that, then on the next layer of cells, you wouldn't have the same distance. So the unit length in the x2 direction is actually from here to here, and then you can see the next cell, you would get the same thing. You get this bit here from the inclined member, and then you would get h down here from the next member. So this bit here is equal to h plus l times sine of theta. So I can say in the x2 direction, the length of the unit cell is h plus l sine theta, OK? So that's kind of the setup. And then what we want to look at is that inclined member that bends, we want to look at how this guy bends under the load. And if we can relate the forces on it to the deflection, and we need the component of the deflection in the one direction, then we're going to be able to get the modulus. So I'm going to draw that inclined member again over here, and it's going to see some loads that I'm going to call p, and that's going to cause this thing to bend. So I've kind of exaggerated it there, but there's the bending. And there's some end deflection there, delta. And there's moments at either end of the beam, or either end of that member, as well. And this member here has a length. That length is l. OK, are we good? So it's just kind of the setup. And I'm going to draw the deflection delta bigger over here. So say that's delta. That's the same parallel as this guy here. What I'm going to want is the deflection in the x1 direction, and when I come to calculate the Poisson's ratio, I'm going to want the deflection in the x2 direction. And if this angle here is theta, between the horizontal and the inclined member, then this angle up here is also theta, and so this bit here is delta sine theta. And this bit here is delta cos theta. Ba-doop-ba-doop-ba-doop. So the Young's modulus is going to be the stress in the one direction divided by the strain in the one direction. So I need to get the stress and the strain in the one direction. So here the stress in the one direction. If I'm applying my load, like this, sigma 1, the stress in the one direction is going to be this load p-- so this load, say, p on this member here, divided by this length here, the unit cell length, and then divided by b into the board, the width end of the board. So sigma 1 is going to be p divided by h plus l sine theta times b. And epsilon 1 is going to be the strain in the one direction, is going to be the deformation in the one direction divided by the unit cell length in the one direction. So that's going to be delta sine theta divided by l cos theta. So here, even though I've said this unit cell is 2l cos theta, there'd be two of the members here that would be twice that deflection in the one direction. So it's, for one beam, it's delta sine theta over l cos theta. Are we OK so far? So far so good? 
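A small Python sketch of this bookkeeping may help; the function names are hypothetical and the dimensions are illustrative, but the expressions follow the unit-cell definitions above.

```python
import math

def unit_cell_lengths(h, l, theta):
    """Unit-cell dimensions in the x1 and x2 directions: 2*l*cos(theta) and h + l*sin(theta)."""
    return 2 * l * math.cos(theta), h + l * math.sin(theta)

def sigma_1(P, h, l, b, theta):
    """Remote stress in x1: load P carried over the face (h + l*sin(theta)) * b."""
    return P / ((h + l * math.sin(theta)) * b)

def epsilon_1(delta, l, theta):
    """Strain in x1: the delta*sin(theta) component of the wall deflection over l*cos(theta)."""
    return delta * math.sin(theta) / (l * math.cos(theta))

# e.g. a regular hexagon with l = h = 1 mm, theta = 30 degrees
X1, X2 = unit_cell_lengths(1e-3, 1e-3, math.radians(30))
print(X1, X2)   # about 1.73 mm and 1.5 mm
```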
So for the hexagons, because we're going to figure out the equations more or less exactly, we're going to keep track of all the geometrical factors. When we come to the foams, we're not going to keep track of all the geometrical factors. So one of the things that makes this look a little kind of hairy is just the fact that we're keeping track of all these sines and cosines and all the dimensions and things. All right, whoop. So I need to be able to relate my load p to my deformation delta to get a stiffness out of this, to get a modulus, OK? So the way I do that is, remember in 3032, we did those bending moment diagrams and we did the deflection of the beams? This is where this comes in handy. So I'm going to draw my beam a little bit differently now. I'm going to turn it on its side. So this is still my length l, but I'm going to turn it on its side just so that you can see it the same kind of way we did the bending moment diagrams. So this is still my length l across here. And there's end moments, M and M here. And P sin theta is just the perpendicular component of the load p. So p sin theta is just the component perpendicular to my beam. So I could draw a shear diagram here and I could draw a bending moment diagram here. And, if you remember, the shear diagram, if I have no concentrated load along here, and I have no distributed load along here, if this is zero, down here, it's just going to go up by P sin theta, and then be horizontal, and then come down by P sin theta, OK? So that's the shear diagram. And then the bending moment diagram, I'm going to draw down here. So I've got some moment at the end here, and this would tend to bend like that. So this would be a negative moment. Remember, bending moments were negative if there was tension on the top, and they were positive if there was tension on the bottom. So over here we'd have tension on the top, so that would give us a negative bending moment. And then, if you also remember, the moment at a particular point is equal to the integral of the shear diagram from, say, A to B-- well maybe I should write this another way. Mb minus Ma is the integral of the shear diagram between the two points. A little sloppy. OK. So if I know I have some moment here, minus M, and I integrate this shear diagram up, then this is just going to be linear here, and then I'm going to be at plus M over there. So if you look at this shear and bending moment diagram, it's really just the same as the shear and bending moment diagram for two cantilevers that are attached to each other. So let me just draw over here what the cantilever looks like. Let's see. So imagine I just had a cantilever like this, and I have some force F on it like that. And I call this distance here capital L, and I'm going to call that deflection capital delta, like that. If I drew the shear diagram for that, there'd be a reaction here, F, there would be a moment here, FL. Doot, doot, yup. So this would look-- whoops-- it's a little too long. Shear diagram here would look like this. That would be zero. This would be FL. And the moment diagram would look like this. Whoops. A little too long again. And that would be minus FL. And that would be zero. So do you see how the shear and the bending moment diagram here are really just like two cantilevers, OK? So I know that the deflection for a cantilever, delta, is equal to F capital L cubed over 3EI. It's kind of a standard result. And so I can take this and apply that to this beam here.
So instead of working everything out from first principles, I'm just going to say that my beam here is like two cantilevers, and instead of F, I've got P sin theta. And instead of capital L here as the length, I've got l/2 because l/2 would be the length of one of the cantilevers. OK, so for the honeycomb, I've got two cantilevers of length l/2. So delta for the inclined member on the honeycomb is going to be 2-- because I've got two cantilevers-- the force, instead of having F, I'm going to have P sin theta. And instead of having capital L, I'm going to have l/2, all cubed. So this is like F capital L cubed over 3. And here, the modulus that I want is the modulus of the solid cell wall material, so I'm going to call that ES, and over the moment of inertia. So you see how I've done it? Is that OK? So then I can just kind of simplify this thing here. I've got P sin theta l cubed. 1/2 cubed is going to be 1/8. So this is going to be 2, if that's 1/8, times 3 is 24. So 2/24 is 12. So I've got delta for my honeycomb member is P sin theta l cubed over 12 EsI. And here I is the moment of inertia of that inclined member of the honeycomb. And that's BT cubed over 12, OK? So B is the depth into the board, and T is the thickness. We'll cube that and divide by 12. It's a rectangular section. Yeah? AUDIENCE: What was ES again? LORNA GIBSON: ES is the Young's modulus of the solid that it's made from. So clearly, if my honeycomb is made up of these members, whatever material the members are made of is going to affect the stiffness of the whole thing. Are we good? Because once we have this part, then we just combine these equations for the stress and the strain in the one direction. And we have this equation relating delta and P, and we're going to be able to get our Young's modulus, OK? We're happy? OK. All right. So I'm going to call the Young's modulus in the one direction E star 1. So everything with a star refers to a cellular solid property, and 1 because it's in one direction. So that's going to be sigma 1 over epsilon 1. So if I go back up there, I can say sigma 1 is equal to P divided by h plus l sin theta b. And epsilon 1 is equal to delta sine theta in the denominator over l cos theta. And now instead of having delta here, I can substitute this thing here in for delta. And then I'm going to able to cancel the P's out. So delta was equal to P sine theta l cubed over 12 Es, and there was an I, a moment of inertia, and I was equal to bt cubed over 12. And let's see here. So that's delta. And there's another sine theta here so I'm just going to square that sine theta there. So now the P's cancel out. The b's are going to cancel out. The 12s are going to cancel out. And I'm going to rearrange this a little bit. So I'm going to write Young's modulus of the solid out in the front. Then I've got a term here of t cubed and I'm going to multiply that by 1/l squared, and then everything else-- well, let's see. We can take this l cubed here. I can take that. Put it underneath that, so that's going to give me t/l cubed. And then I've got an h plus l sine theta here, and I've got an l there, so I'm going to take that to be h/l plus sine theta. Boop-boop-da-doop. So I've got this term with [? h/l's ?] in the thetas. There's a cos theta from the numerator here. This term here turns into h/l plus sine theta, and then I've got my sine squared thetas down there. And that's my result for the Young's modulus in the one direction. OK? Let's make sure that seems right. It seems good. OK. 
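Here is a short Python sketch (not from the lecture) that assembles E*1 step by step exactly as above and checks it against the closed-form expression; the load P and depth b are arbitrary and cancel out, and the material values are illustrative.

```python
import math

def E1_closed_form(Es, t, l, h, theta):
    """E*_1 = Es * (t/l)^3 * cos(theta) / ((h/l + sin(theta)) * sin^2(theta))."""
    return Es * (t / l)**3 * math.cos(theta) / ((h / l + math.sin(theta)) * math.sin(theta)**2)

def E1_assembled(Es, t, l, h, b, theta, P=1.0):
    """Assemble E*_1 = sigma_1 / epsilon_1 following the derivation above."""
    I = b * t**3 / 12                                    # wall second moment of area
    delta = P * math.sin(theta) * l**3 / (12 * Es * I)   # bending deflection of the inclined wall
    sigma1 = P / ((h + l * math.sin(theta)) * b)         # remote stress in x1
    eps1 = delta * math.sin(theta) / (l * math.cos(theta))
    return sigma1 / eps1

Es, t, l, h, b, theta = 10e9, 0.1e-3, 1e-3, 1e-3, 5e-3, math.radians(30)
print(E1_closed_form(Es, t, l, h, theta))    # the two values should agree (~2.3e7 Pa here)
print(E1_assembled(Es, t, l, h, b, theta))   # and P, b drop out of the assembled result
```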
So one of the things to notice here is that there are three types of parameters that are important. So one is the solid properties. So the Young's modulus of the solid comes into this. So the stiffness of the whole thing depends on the stiffness of whatever it's made from. There's this factor of t/l cubed-- that's directly related to the relative density or the volume fraction of solids. So what this is saying is the relative density goes as t/l, so the Young's modulus depends on the cube of the relative density. So it's very sensitive to the relative density. And then this factor here really is just a factor that depends on the cell geometry. Remember when we talked about the structure of the honeycomb, we said we could define the cell geometry by the ratio of h/l and theta, OK? And since we often deal with regular hexagonal honeycombs, I'm just going to write down what this works out to be for regular hexagonal honeycombs. So for a regular hexagonal honeycomb, h/l is 1. All the members have the same length. And theta's 30 degrees, and the modulus works out to 4 over root 3 times Es times t/l cubed, OK? So do you see how we do these things? So all the other properties work in a similar kind of way. You have to say something about what the sort of bulk stress is on the whole thing and relate that to the loads on the members. You have to say something about how the loads are related to deflections, or when we look at the strengths, we're going to look at moments and how the moments are related to failure moments of one sort or another. But it's all just like a little structural analysis, OK? Are we good? You good, Teddy? I thought you were going to put your hand up? No? You're OK? OK. OK, so the next property we're going to look at is Poisson's ratio. And I'm going to look at it for loading in the one direction. So nu 1 2-- say we load uniaxially in the one direction, we want to know what the strain is in the two direction, it's minus epsilon 2 over epsilon 1. And again, if I look at my inclined member, and I say that member's going to bend something like that, and that's my deflection delta there, and, say, I've got the same x1 and x2 axes. And again, if I look at delta here, it's the same little sketch I had before. That's delta sine theta. And this is delta cos theta. I'm going to need those components to get the two strains in the different directions. So epsilon 1 is going to be delta sine theta over l cos theta. And if I'm compressing it, that would get shorter. And we get-- and epsilon 2 is going to be delta cos theta divided by h plus l sine theta. And that would get longer. So these two have opposite signs, and so the minus sign is going to disappear here. Doodle-doodle-doot. So then I can get my Poisson's ratio by just taking the ratio of those two guys. So I could put a minus sign there and say that's the opposite sign to epsilon 2. Then this would be delta cos theta divided by h plus l sine theta. And epsilon 1 would be delta sine theta over l cos theta. And the thing that's convenient here is that the two deltas just cancel out. So the Poisson's ratio is the ratio of two strains. Each one of the strains is going to be proportional to delta, and so the two deltas are just going to cancel out. And so I can rewrite this thing here as cos squared theta divided by (h/l plus sine theta) times sine theta. And so one of the interesting things to notice is that the Poisson's ratio only depends on the cell geometry.
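A quick numerical check of this Poisson's ratio expression, in Python (my own sketch, not from the lecture); the second case uses the re-entrant geometry that comes up just below.

```python
import math

def nu_12(h_over_l, theta):
    """In-plane Poisson's ratio for loading in x1; it depends only on the cell geometry."""
    return math.cos(theta)**2 / ((h_over_l + math.sin(theta)) * math.sin(theta))

print(nu_12(1.0, math.radians(30)))    # regular hexagon: 1.0
print(nu_12(2.0, math.radians(-30)))   # re-entrant cell (h/l = 2, theta = -30 degrees): -1.0
```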
It doesn't depend on the relative density. It only depends on the cell geometry. Oops. OK, and then we can also work out what the value is for a regular hexagonal cell. And if we plug-in h is equal to l and theta's equal to 30, you get that it's equal to 1. So one is kind of an unusual number for a Poisson's ratio. When we think of most materials, it's around 0.3, so it's kind of unusual that it's that large. The other thing that's interesting is that it can be negative. So if theta is less than 0, then you can get a negative value. If the cos squared is going to be a positive value, but you've got a sine theta down here, then that's going to give you a negative value. So you can get negative values. So let me just plug in an example. So say h/l is equal to 2, and theta is equal to minus 30 degrees, then this turns out to be 3/4. So cos of 30 is root 3/2, so square of that is 3/4. h/l is 2, sine theta is 1/2, but it's minus 1/2. So 2 minus 1/2 is 1 and 1/2. And then the sine theta is minus 1/2. And so it works out to be minus 1 for that particular combination. And I brought my little honeycomb that has a negative Poisson's ratio in. So this guy here-- let's see, I don't think there's an overhead here. No overhead? Guess not. I'll just pass it around. So if you take it, put your hands on the flat side and load it like this, and don't smoosh it like that. Just load it a little bit, because you [? want to be ?] linear elastic. If you load it just a little bit, you can see that as you push it this way, it contracts in sideways that way. So don't smash it. Just load it a little bit and you can kind of see it with your hands. And if you put on a piece of lined paper, it's easier to see it. OK, so that's kind of interesting. So are we good with getting the Young's modulus in the one direction and the Poisson's ratio for loading in one direction? OK, so you can do the same sort of thing to get the Young's modulus in the two direction and the Poisson's ratio for loading in the two direction. And you get slightly different formulas, but it's the same idea. And you can also get a shear modulus this way, and in-plane shear modulus. It's a little bit-- the geometry of it's a little bit more complicated. So all of those things are derived in the book, in the cellular solids book. So if you wanted to figure those out, look at that, you could look at the book. So let me just comment on that. All right, so those are the in-plane linear elastic moduli, and remember we said that four of them describe the in-plane properties for an anisotropic honeycomb. And you can use that reciprocal relationship to relate the two Young's moduli and the two Poisson's ratios. All right, so the next thing I wanted to talk about was the compressive strength. So let me just back up here a second. So if we go back to here, remember we had for an elastomeric honeycomb, this stress plateau was related to elastic buckling. So we're going to look at that buckling stress first. And this plateau here is related to yielding. And then we'll look at the yielding stress next. And then this plateau here is related to a brittle sort of crushing, and we'll do that one third. So we're going to go through each of those next. And this is kind of a schematic for the elastic buckling. So when you look at the elastic buckling, one of the things to note is that when you load the honeycomb this way on, if you load it in the one direction, you don't get buckling. It just sort of continues to-- whoops, if I can keep it in plane. 
It all just kind of folds up, so you just get larger and larger bending deflections. You don't really get buckling. But when you load it this way on, these vertical members here, the ones of length h, they're going to buckle. So, see if I do that, my honeycomb looks like those cells up on the schematic there, OK? So, whoops. So we're going to look at the compressive stress or strength next. That's sometimes called the plateau stress. So we can get cell collapse by elastic buckling, if, for instance, the honeycomb is made of a polymer. And then the stress-strain curve looks something like that. And what happens is you get buckling of those vertical struts throughout the honeycomb And then you could also get a stress plateau by plastic yielding. And what happens when you get plastic yielding is you get localization of the deformation. So one band of cells will begin to yield initially, and then as the deformation proceeds, that deformation ban will propagate and get bigger and bigger, and you get a wider and wider band of cells yielding and failing. So you get localization of yield, and then as deformation progresses, the deformation band widens throughout the material. So if I go back and look-- if I look at this one here, when you look at this middle picture here, you can see how one band of cells has started to collapse and started to fail. And as you continue to compress that in the one direction, this way on, then more and more neighboring cells are going to collapse and the whole thing will get wider until the whole thing has collapsed. And that's kind of characteristic of the plastic failure. And then the third possibility is brittle crushing. And then you get these kind of serrated plateau. And the peaks and valleys correspond to fractures of individual cell walls. OK, so we're going to start off with the elastic buckling failure. And I'm going to call these plateau stresses sigma star, for the sort of compressive strength. And el means it's by elastic buckling. And as I mentioned, you don't get it in the one direction. The cells just fold up. You only get it for loading in the two direction, so it's going to be sigma star el 2. Oops, need a different piece of chalk. So you get this elastic buckling for loading in the x2 direction, and the cell walls of length h buckle. And you don't get it for loading in the one direction, the cells just fold up. So again, let me draw a little kind of unit cell here. And here is our stress sigma 2, like that. And here's our little wall of length h that's going to buckle. So if I load it up, initially it'll be linear elastic. And then eventually, at some stress, it will get large enough that this wall here will buckle. And we can relate to that plateau stress, or that compressive stress, to some Euler buckling load. So you remember, if we have a pin-ended column, so just a single column, pins on either end, the Euler buckling load says you get buckling when the critical load is equal to some end constraint factor, n squared. So n squared pi squared E, and here it's E of the solid, I over the length of the column, and in this case, the column length is h-- so h squared. OK, so that's just the Euler formula. And here, n is an end constraint factor. And if you remember for a pin column, so if our column is pinned at both ends like that, and just buckles out like that, then n is equal to 1. And if the column is fixed at both ends, something like that, then the column looks like that and then it's equal to 2, OK? 
So if I know what the end condition is, I know what n is and I can use my Euler formula here. So the trick to this is that it's not so obvious what n is. Yes? AUDIENCE: So, when you're loading in the x2 direction here, the first thing you're going to get is the incline members deforming? LORNA GIBSON: Yeah. AUDIENCE: And then at some point, you hit a P critical that will cause the vertical members to buckle? LORNA GIBSON: Exactly. AUDIENCE: OK. LORNA GIBSON: Exactly. That's exactly right. Hello. So the trick here is that we don't really know what this n is, initially. They're not really pinned, pinned; they're not fixed, fixed. And if you think about the setup with the honeycomb here, the constraint on that vertical member depends on how stiff the adjacent members are. So you can kind of imagine, if I'm looking at one of these vertical members here, if these two adjacent inclined members were big honking thick things, it would be more constrained. And if they were little thin, kind of teeny little membranes, it would be less constrained. And you can think of it in terms of a rotational stiffness, that when the honeycomb buckles, you kind of see the member h goes from being horizontal to sort of it buckles over like this. But that whole end joint, see the end joint at the top here or the end joint at the bottom, that whole joint rotates a little bit. And so there's some rotational stiffness of that joint. And that rotational stiffness depends on how stiff the member h is and how stiff those inclined members are. So there's a thing called the elastic line analysis that you can use to calculate what n is. And basically what that does is it matches the rotational stiffness of the column h with the rotational stiffness of those inclined members. So we're not going to get into that. I'm just going to tell you what the answer is. But if you want to go through it, it's in an appendix in the book. So you can look at it, if you want. So here I'm just going to say that the constraint n depends on the stiffness of the adjacent inclined members. And we can find that by something called the elastic line analysis. And if you have the book, you can look in the appendix and see how that works. But essentially what it does is it matches the rotational stiffness of the column h with the rotational stiffness of the inclined members. So what you find is that n depends on the ratio of h/l. And I'm just going to give you a table with a few values. So for h/l equal to 1, then n is equal to 0.686. For h equal to 1.5, it's equal to 0.76. And for h/l equal to 2, it's equal to 0.806. OK, so now if we have values for n, we can just substitute in to get the critical buckling load. And if I take that load and divide it by the area of the unit cell, I'm going to get my buckling stress. So it's pretty straightforward from this. So my buckling stress is going to be that critical load divided by my unit cell area. So it's divided by the unit cell length in the x1 direction to l cos theta times the depth b into the page. So it's equal to n squared pi squared Es times I. And I is bt cubed over 12. Divided by the length of the column, h squared, and then divided by the area of the unit cell, 2l cos theta b, OK? And I can rearrange that somewhat to put it in terms of dimensionless groups. So if I pull all the constants out, it's n squared pi squared over 24 times the modulus of the solid, t/l cubed in the numerator divided by h/l squared times cos theta in the denominator. 
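Here is a small Python sketch (not from the lecture) evaluating this buckling stress with the tabulated n values; the regular-hexagon case should come out near 0.22 Es (t/l)^3, and dividing by the regular-hexagon in-plane modulus 4/root-3 Es (t/l)^3 gives the buckling strain mentioned just below.

```python
import math

# End-constraint factor n from the elastic-line analysis, as tabulated in the lecture.
n_table = {1.0: 0.686, 1.5: 0.760, 2.0: 0.806}

def sigma_el2(Es, t_over_l, h_over_l, theta):
    """In-plane elastic buckling (plateau) stress for loading in x2."""
    n = n_table[h_over_l]
    return (n**2 * math.pi**2 / 24) * Es * t_over_l**3 / (h_over_l**2 * math.cos(theta))

Es, t_over_l = 1.0, 0.1
s = sigma_el2(Es, t_over_l, 1.0, math.radians(30))
print(s)                                                 # ~0.22 * Es * (t/l)^3 = 2.2e-4 here
print(s / ((4 / math.sqrt(3)) * Es * t_over_l**3))       # buckling strain ~0.10 for regular hexagons
```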
So again, you can see that the buckling stress, the compressive sort of elastic collapse stress, depends on the solid property. So here is the modulus of the cell wall in here. Depends on the relative density through t/l cubed. And then it depends on the cell geometry through h/l cos theta, and n depends on h/l as well, OK? And then we can do the same thing where we figure out what it is for regular hexagonal cells. And it's 0.22 Es times t/l cubed. And then we can also notice that since E in the 2 direction, for a regular hexagonal cell, E is the same in the 2 direction and the 1 direction. It's isotropic. So E2 is also equal to 4 over root 3 Es times t/l cubed. That's equal to-- whoops-- it's equals to 2.31 Es t/l cubed. And we can say that the strain at which that buckling happens is just equal to a constant. And for regular hexagonal honeycombs, it works out to a strain of 10%. Are we good? So we have a buckling load. We divide by the area. The only complicated thing is finding n. And you can find it by this elastic line analysis thing. So each of these calculations is like a little structural analysis, only on a little teeny weeny scale of the cells. So you see where my background in civil engineering comes in handy. Yup. OK. So the honeycombs involve the most sort of complicated equations. When we come to do the foams, we're going to use a dimensional analysis and all the equations are going to be much simpler. So this is the most kind of tedious part of the whole thing. So the next property I want to look at is the plastic collapse stress. Say we had a metal honeycomb and we wanted to calculate the stress plateau for a metal honeycomb. So we have this little schematic here, and say we load it in the one direction again. So we're loading it here. And we've got some load P, like that. And if we have our honeycomb, we load it this way on, initially, the cell walls bend. And you have linear elasticity and you have some Young's modulus. But if you have a metal, if you continue to deform it and you continue to load it more and more, eventually you're going to hit the yield stress and the cell wall. So the stresses in the cell wall are going to hit the yield stress. And initially, the stresses are just going to be-- remember, if you have a beam, the stresses are maximum at the top and the bottom of the beam. So initially you're going to hit the yield stress at the top and the bottom of the beam first. But as you continue to load it, you're going to end up yielding the cross-section through the entire section. So the entire section is going to be yielded. And once the entire section yields, it forms what's called a plastic hinge. Once the whole thing's yielded, then you can add more force and the thing just rotates. And because it rotates, it's called a plastic hinge. You know, if you take a coat hanger, and you bend it back and forth and bend it back and forth. If you bend it enough, you form a plastic hinge because it just can bend easily. So these little schematics here, if you look at the, say, one of these inclined members, the moments are maximum at the end. So Remember when we had the linear elastic deformation and I looked at the little bending moment diagram? The moments are maximum at the ends, and you're going to form those plastic hinges initially at the ends. And so these little ellipsey things here, all the ends, those kind of show where the plastic hinges are. So those plastic hinges are forming. 
So here's for loading in the x1 direction, and here's for loading in the x2 direction, there. So the thing we want to calculate is what stress does it take to form those plastic hinges and get this kind of plastic plateau stress? OK, so we can say we get failure by yielding in the cell walls. And I'm going to say the yield strength of the cell wall is sigma ys. So sigma y for yield and s for the solid. And the plastic hinge forms when the cross-section has fully yielded. So let's look at the stress distribution through the cross-section when it's first linear elastic. So say that's the thickness t of the member. And if the beam was linear elastic, the stress would just vary linearly, like that, right? And this would be the neutral axis, here, where there is no normal stress. So that's what happens if it's linear elastic, and I'm hoping you remember something vaguely like that. Sounds good? But as we increase the load on it, and we increase the sort of external stress, this stress in the member is going to get bigger and bigger, and eventually, that's going to reach the yield stress, OK? And once that reaches the yield stress, if we continue to load it, what happens is the yielding propagates down through the thickness of the thing here. So we get yielding through the whole cross-section. So let me scoot over here. AUDIENCE: Professor? LORNA GIBSON: Yup? AUDIENCE: When it starts to yield, does this curve change? LORNA GIBSON: Yes. I'm going to draw it for you. AUDIENCE: Oh, OK. LORNA GIBSON: That's the next step. That would be the next thing. OK, so once the stress at the outer fiber is the yield strength of the solid, then the yielding begins and it progresses through the section as the load increases. So the stress distribution starts to look something like this once it yields. OK, so that's sigma y of the solid. Actually, let me rub that out because then I can show you something else. So in 3D, this would be through the thickness of the beam. That would be the thickness of the beam there. And boop, boop. It would look something like that. OK? And then this is still our neutral axis here. And then eventually, as you load it more and more, the whole cross-section is going to yield. Whoops. And I'm assuming that the material is elastic, perfectly plastic. So the stress-strain curve for the solid I'm idealizing as-- whoops. That's not quite right. I'm idealizing as that, OK? So when you get to this point here, the entire cross-section has yielded, and that means you form the plastic hinge. The idea here is that the section then just rotates like a pin. All right. So we can figure out the plateau stress that corresponds to this by looking at the moment that's associated with the plastic hinge formation. So there's some internal moment associated with that. And then equating that to the applied moment from the applied stress. So doodle-loodle-oot. Let me see, maybe back up here. So there's some-- if I have the stress distribution here, I could say this whole kind of stress block is equivalent to some force acting out like that and some force acting out like that. It would be sigma ys times b times t/2-- that would be F. And I can say there's some plastic moment. If I think of the force here and the force there, they act as a couple and they have some moment, and that's called the plastic moment. So that's like an internal moment when the plastic hinge forms. So I'll say the internal moment at the formation of a plastic hinge. I'm going to call that Mp, for plastic moment.
And we can work out Mp by looking at that stress distribution when the entire cross section has yielded. The force F is going to be sigma ys times b times t/2. It's the stress times that area. And then the moment arm between the two forces is also t/2. And so that plastic moment is just sigma ys bt squared over 4, OK? Are we good? Sonya? AUDIENCE: What's the second [INAUDIBLE]? LORNA GIBSON: OK, so this is the force. This thing here is the force F. And I have to-- if I'm getting a moment, I'm saying that that force, if I doot-doot-doot-- the distance between those two forces there is t/2. So each force acts through the middle of the block, and so the distance between them is t/2. And I'm going to equate that moment to the applied moment from the sort of applied stress. And then if I go back to my inclined member-- whoops, let's see. Let me get a little more inclined. That's my inclined member, there, of length l. I've got loads P that are applied at the end from sigma 1. And I've got moments that are induced at the ends. And that angle there would be theta. This length here is l, like that. And if I just use static equilibrium on that, I can say that I've got 2 times the moment, so I've got one at each end-- they're both the same sign-- minus P. And then the distance between these two P's, say I take moments about here, I've got M applied plus M applied, I've got minus P times l sine theta. That's equal to 0. So the applied moment there is just Pl sine theta over 2. So now what I'm going to do is I'm going to equate this applied moment with this plastic moment, and I'm going to relate P to my applied stress sigma 1. And then I'm going to get a strength in terms of the yield strength of the solid, there's going to be a t/l factor and there's going to be some geometrical factor. So that's just the last step. Boop-ba-doop-ba-doop. So we get plastic collapse of the honeycomb. And the stress I'm going to call sigma star plastic with a 1, because I'm going to look at the one direction. And that happens when that internal plastic moment equals the applied moment. So let's see. I've got that. Let me also write down over here, I've also got this sigma 1 is equal to P over h plus l sine theta times b. So here I can write P in terms of sigma 1, in this thing. And then write that, get the applied moment in terms of that, and then equate it to that. So this term on the left-hand side corresponds to this expression for the applied moment where I've plugged in. For P, I've plugged in sigma 1 times h plus l sine theta times b. And that's my plastic moment on the right-hand side. So if I just rearrange this, I can then solve for this plastic collapse stress. So it's equal to the yield strength of the solid times t/l squared, divided by a geometrical factor: 2 times (h/l plus sine theta) times sine theta. Doop-doop-doop. So the same kind of thing, there's a solid property, a t/l relative density term, and then a cell geometry term. And we can calculate what this is for regular hexagonal cells. And we can do a similar kind of calculation for loading in the other direction. And you can get a shear strength if you want to do that, too. AUDIENCE: If you're going in the other direction, only the E or the M applied changes, right? Or like that section. LORNA GIBSON: Yeah, this thing here is the same. AUDIENCE: That stays. LORNA GIBSON: Right. And this is-- there's a different geometry to it. Because now you're loading it this way on.
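As a check, here is a short Python sketch (not from the lecture) that evaluates the closed-form plastic collapse stress and also rebuilds it from the plastic moment and the applied moment; the yield strength and geometry are illustrative.

```python
import math

def sigma_pl1(sigma_ys, t_over_l, h_over_l, theta):
    """In-plane plastic collapse stress for loading in x1 (plastic hinges at the wall ends)."""
    return sigma_ys * t_over_l**2 / (2 * (h_over_l + math.sin(theta)) * math.sin(theta))

def sigma_pl1_from_hinge(sigma_ys, t, l, h, b, theta):
    """Same result assembled from Mp = sigma_ys*b*t^2/4 and M_applied = P*l*sin(theta)/2."""
    Mp = sigma_ys * b * t**2 / 4
    P = 2 * Mp / (l * math.sin(theta))            # load at which the hinges form
    return P / ((h + l * math.sin(theta)) * b)    # convert back to a remote stress sigma_1

sigma_ys, t, l, h, b, theta = 100e6, 0.1e-3, 1e-3, 1e-3, 5e-3, math.radians(30)
print(sigma_pl1(sigma_ys, t / l, h / l, theta) / 1e6)         # ~0.67 MPa for this geometry
print(sigma_pl1_from_hinge(sigma_ys, t, l, h, b, theta) / 1e6)  # same value; b cancels
```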
OK, so we've calculated an elastic buckling plateau stress and a sort of plastic collapse plateau stress. And if you have thin enough walled, say, even aluminum honeycombs, then the elastic buckling could precede the plastic collapse. And so I'm just going to work out what the criterion would be for that to happen. So the two stresses can be equated. And then that's going to give us some criterion. So the two are equal, I'm just going to write down the equations that we had. So the buckling stress was n squared pi squared over 24 times E of the solid times t/l cubed divided by h/l squared times cos of theta. And the plastic collapse stress for the 2 direction was sigma ys times t/l squared divided by 2 cos squared theta. So I can write this-- because this has a t/l cubed term, and that has a t/l squared term, I can write this in terms of a t/l critical. So if I leave it t/l here and I put everything else on the other side, I've got 12 over n squared pi squared, then h/l squared over cos theta times sigma ys over Es. So if t/l is less than that, I'm going to get elastic buckling first. And if it's more than that, I'm going to get plastic yielding first. And we can work out an exact number for regular hexagonal honeycombs, so I'm going to do that. So if I have a particular geometry, I can figure out what n is. So for regular hexagonal honeycombs, t/l critical just works out to 3 times the yield strength of the solid over the Young's modulus of the solid. So if we know that ratio of the yield strength of the modulus of the solid, we can get some idea of what that critical t/l would be. So we'll do that next. AUDIENCE: And you said if t/l is less than that critical, then you're going to get the yielding first. LORNA GIBSON: No, if it's less, you get the buckling first. If it's really skinny, it tends to buckle first. So, for example, for metals, the yield strength over the modulus is roughly 0.002, like the 0.2% yield strength. And so that means that t/l, the sort of transition or the critical value is at 0.6%. So most metal honeycombs are denser than that. That's a pretty low density. But if we look at polymers, you can get polymers with a yield strength relative to the modulus of about 3% to 5%, and then that critical t/l is equal to about 10%, 15%. So low-density polymers with yield points may buckle before they yield. So we have one more of these compressive plateau stresses, and that's for the brittle honeycomb. So I don't think I'm going to finish this today, but let me set it up and then we'll finish it next time. So the idea here is that if you have a ceramic honeycomb-- remember I showed you some of those ceramic honeycombs-- that if you compress them, they can fail by a brittle crushing mode. So ceramic honeycombs can fail in a brittle manner. And again, initially there would be some cell wall bending, but at some point, you're going to reach the bending strength of the material. And bending strengths are called modulus of rupture. So you reach the modulus of rupture of the cell wall. So I'm not going to write the equations down today because we're not going to get very far, so I'll do that next time. But we're going to set this up exactly the same as we did for the last one, for the plastic yielding. But instead of getting that sort of blocky, fully yielded cross-section stress distribution, we're just going to have the linear elastic stress distribution, and when the maximum stress reaches that modulus [? rupture, ?] the thing's going to fail. 
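Here is a small sketch of that transition criterion (again mine, not from the lecture), with the end constraint factor n left as a parameter in the general form, and with the regular hexagon shortcut of roughly 3 sigma_ys / Es from above. The yield strains of 0.002 for metals and 0.04 for polymers are the illustrative values from the discussion.

import math

def t_over_l_critical(n, h_over_l, theta_deg, sigma_ys, Es):
    # General transition t/l at which elastic buckling and plastic collapse
    # (loading in the x2 direction) occur at the same stress.
    theta = math.radians(theta_deg)
    return (12.0 / (n**2 * math.pi**2)) * (h_over_l**2 / math.cos(theta)) * (sigma_ys / Es)

def t_over_l_critical_regular_hex(sigma_ys, Es):
    # Regular hexagons: the geometry factors collapse to roughly 3*sigma_ys/Es.
    return 3.0 * sigma_ys / Es

# Metal-like cell wall, sigma_ys/Es ~ 0.002: critical t/l ~ 0.006, so most
# metal honeycombs (denser than that) yield before they buckle.
print(t_over_l_critical_regular_hex(0.002, 1.0))
# Polymer-like cell wall, sigma_ys/Es ~ 0.04: critical t/l ~ 0.12, so
# low-density polymer honeycombs with a yield point can buckle first.
print(t_over_l_critical_regular_hex(0.04, 1.0))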
So the form of the equations is going to be very similar to what we had for the plastic collapse stress, but there's a slightly different geometrical factor-- that's all. So we'll do that next time. And then next time we're also going to talk about the tensile behavior of honeycombs in-plane. We'll work out a fracture toughness and then we'll start talking about the out-of-plane properties, as well. So on Wednesday, we'll do the out-of-plane properties, OK? So hopefully we'll finish the out-of-plane properties Wednesday. And then next week, I was going to talk about some natural materials that have honeycomb-like structures, so things like wood and cork, OK? All right, so this is the kind of most equationy lecture in the whole course.
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
17_Sandwich_Panels.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. LORNA GIBSON: OK, so we should probably start. So last time we finished up talking about energy absorption in foamy cellular materials. And today I wanted to start a new topic. We're going to talk about sandwich panels. So sandwich panels have two stiff, strong skins that are separated by some sort of lightweight core. So the skins are typically, say, a metal like aluminum, or some sort of fiber composite. And the core is usually some sort of cellular material. Sometimes it's an engineering honeycomb. Sometimes it's a foam. Sometimes it's balsa wood. And the idea is that what you're doing with the core is you're using a light material to separate the faces, and if you think about an I-beam-- so if you remember when we talk about bending and we talk about I-beams, the whole idea is that in bending, you want to increase the moment of inertia. So you want to make as much material as far away from the middle of the beam as possible to increase the moment of inertia. So if you think about an I-beam, you put the flanges far apart with the web, and that increases the moment of inertia. And the sandwich panels and the sandwich beams essentially do the same thing, but they're using a lightweight core instead of a web. And so the idea is you use a lightweight core. It separates the faces. It increases the moment of inertia. But you don't add a whole lot of weight because you've got this lightweight core in the middle. So I brought some examples that I'll pass around and we can play with. So these are some examples up on the screen, and some of those I have down here. So for instance, the top-- turn my little gizmo on-- the top left here, this is a helicopter rotor blade, and that has a honeycomb core in it. This is an aircraft flooring panel that has a honeycomb core and has carbon fiber faces. So that's this thing here. I'll pass that around in a minute. This is a downhill ski. This has aluminum faces and a polyurethane foam core. And that's the ski here. And it's quite common in skis now to have these sandwich panels. This is a little piece of a small sailing boat. It had, I think, glass fiber faces and a balsa wood core. And I don't know if any of you sail, but MIT has new sailing boats. Do you sail? AUDIENCE: I do not much these days. LORNA GIBSON: OK, but those little tech dinghies that you see out in the river, those have sandwich panel holes to them. So those are little sandwich panels. This is an example from a building panel. This has a dry wall face and a plywood face and a foam core, and the idea with panels for buildings is that usually they use a foam core because the foam has some thermal insulation. So as well as sort of separating the faces and having a structural role, it has a role in thermally insulating the building. The foams are a little less efficient than using a honeycomb core. So for the same weight, you get a stiffer structure with a honeycomb core than a foam core. But if you want thermal insulation as well as a structural requirement, then the foam cores are good. And these are a couple examples of sandwiches in nature. This is the human skull. 
And your skull is a sandwich of two dense layers of the compact bone, and you can see there's a little thin layer of the trabecular bone in between. So your head is like a sandwich, your skull is like a sandwich. And I don't know if I'll get to it next time, but in the next couple of lectures, I'm going to talk a little bit about sandwich panels in nature, sandwich shells in nature. You see this all the time. And this is a bird wing, here. And so you can see there's got the dense bone on the top and the bottom, and it's got this kind of almost trust-like structure in the middle. And obviously birds want to reduce their weight because they want to fly, so reducing the weight's very important. And so this is one of the ways that birds reduce their weight, is by having a sandwich kind of structure. So I have a couple of things here. These are the two panels at the top there. This is the ski, and you can yank those around. I also have a few panels that people at MIT have made. And I have the pieces that they're made from. So you can see how effective the sandwich thing is. So this was made by a guy called Dirk Moore. He was a graduate student in ocean engineering. And it has aluminum faces and a little thin aluminum core. So you can see, if you try and bend that with your fingers, you really can't bend it any noticeable amount. And this panel here is roughly the same thickness of the face on that. And you can see how easy it is for me to bend that-- very easy. And this is the same kind of thing as the core. It's thicker than that core, but you can see how easy it is for me to bend this, too. So each of the pieces is not very stiff at all. But when you put them all together, it's very stiff. So that's really the beauty of this. You can have lightweight components, and by putting them together in the right way, they're quite stiff. So here's another example here. This is a panel that one of my students, Kevin Chang made. And this has actually already been broken a little bit, so it's not quite as stiff as it used to be. And you can kind of hear, it squeaks. But you can feel that and see how stiff that is. And this is the face panel here. And you can see, I can bend that quite easily with my hands. Doodle-doot. And then this is the core piece here. And again, this is very flexible. So it's really about putting all those pieces together. So you get this sandwich construction and you get that effect, OK? [? Oop-loo. ?] All right, so what we're going to do is first of all look at the stiffness of these panels, calculate their deflection. We're going to look at the minimum weight design of them. So we're going to look at how for, say, given materials in a given span, how do we minimize the weight of the beam for a given stiffness? And then we're going to look at the stresses in the sandwich beams. So there's going to be one set of stresses in the faces, and a different kind of stress distribution in the core. So we'll look at the stress distribution. And then we'll talk about failure modes, how these things can fail, and then how to figure out which failure mode is dominant, which one occurs at the lowest load. And then we'll look at optimizing the design, minimizing the design for a certain strength and stiffness. So we're not going to get all that way through everything today, but we'll kind of make a start on that. OK. So let me start. So the idea here is we have two stiff, strong skins, or faces, separated by a lightweight core. 
And the idea is that by separating the faces, you increase the moment of inertia with little increase in weight. So these are particularly good if you want to resist bending, or if you want to resist buckling. Because both of those involve the moment of inertia. And they work like an I-beam. So the faces of the sandwich are like the flanges of the I-beam, and the core is like the web. And the faces are typically made of either fiber reinforced composites or metals. So typically, something like aluminum, usually you're trying to reduce the weight if you use these things, so a lightweight metal like aluminum is sometimes used. And the cores are usually honeycombs, or foams, or balsa. And when they use balsa wood, what they do is-- I brought a piece of balsa here-- what they do is they would take a block like this and chop it into pieces around here. And then they would lay those pieces on a cloth mat. So typically the pieces are maybe two inches by two inches. They lay them on a cloth mat, and because they're not one monolithic piece, they can then shape that mat to curved shapes. So it doesn't have to be just a flat panel. They can curve it around a curved surface if they want. So we'll say the honeycombs are lighter than the foams for a given stiffness or strength. But the foams provide thermal insulation as well as a mechanical support. And the overall mechanical properties of the honeycomb depend on the properties of each of the two parts, of the faces and the core, and also the geometry of the whole thing-- how thick's the core, how thick's the face, how dense is the core? That kind of thing. And typically, the panel has to have some required stiffness or strength. And often what you want to do is minimize the weight for that required stiffness or strength. So often these panels are used in some sort of vehicle, like we talked about the sailboat, or like a helicopter, or like an airplane. They're also used in like refrigerated trucks-- they would have a foam core because they'd want the thermal insulation. So if you were going to use it in some sort of a vehicle, you want to reduce the mass of the vehicle and you want to have the lightest panel that you can. Yup? AUDIENCE: So if you saw the base material and you'd have the [INAUDIBLE] sandwich panel, that piece [INAUDIBLE] the sandwich panel with something [? solid ?] in the middle? LORNA GIBSON: Well-- AUDIENCE: So, as we're getting [INAUDIBLE] aluminum piece that's was as thick as a sandwich panel. LORNA GIBSON: Yeah. AUDIENCE: [INAUDIBLE]. LORNA GIBSON: Oh, well, if you have the solid aluminum piece that was as thick as the sandwich, it's going to be stiffer, but it's going to be a lot, lot heavier. So the stiffness per unit weight would not be as good. OK? So we're going to calculate the stiffness in just one minute. And then we're going to look at how we minimize the weight, OK? OK. So what I'm going to do is set this up as kind of a general thing. We're just going to look at sandwich beams rather than plates, just because it's simpler. But the plates, everything we say for the beams basically applies to the plates. The equation's just a little bit more complicated. So we're going to start with analyzing beams. And I'm just going to start with a beam, say, in three-point bending. So there's my faces there. Boop. And I've made it kind of more stumpy than it would be in real life, just because it makes it easier to draw it. And then if I look at it the other way on, it would look something like that. So say there's some load P here. 
Say the span of the beam is l. Say the load's in the middle, so each of the supports just sees a load of P/2. And then let me just define some geometrical parameters here. I'm going to say the width of the beam is b. And I'm going to say the face thicknesses are each t. So the thickness of each face is t. And the thickness of the core is c, OK? So that's just sort of definitions. And I'm going to say the face has a set of properties, the core has a set of properties, and then the solid from which the core is made has another set of properties. So the face properties that we're going to use are a density of the face. We'll call that row f. The modulus of the face, Ef, and some sort of strength of the face, let's imagine it's aluminum and it yields, that would be sigma y of the face. And then the core similarly is going to have a density, rho star c. It's going to have a modulus, E star c. And it's going to have some strength, I'm going to call sigma star c. And then the solid from which the core is made is going to have a density row s, a modulus Es, and some strength, sigma ys, OK? So the core is going to be some kind of cellular material, a honeycomb, or a foam, or balsa. And typically, the modulus of the core is going to be a lot less than the modulus of the face. So I'm just going to say here that the E star c is typically much greater than Ef. And we're going to use that later on. So we're going to derive some equations, for example, for an equivalent flexural rigidity for the section, an Ei equivalent. And that has several terms. But if we can say the core stiffness is much less than the face thickness, and also if we can say the core-- the stiffness is less and also the thickness of the core is much greater than the thickness of the face, a lot of the expressions we're going to use simplify. So we're going to make those assumptions. So let me just draw the shear diagram here. So V is shear, so that's the shear diagram. We have some load P/2 at the support. There's no other load applied until we get to here. Than the shear diagram goes down by P, so we're at minus P/2. Then there's no load here, so this just stays constant, and then we go back up to 0. And then let me just draw bending moment diagram. The bending moment diagram for this is just going to look like a triangle. Remember, if we integrate the shear diagram, we get the bending moment diagram. And that maximum moment there is going to Pl over 4. OK. So initially, I'm going to calculate the deflections. And I don't really need those diagrams for that, but then I'm going to calculate the stresses, and I'm going to need those diagrams for the stresses. So just kind of keep those in mind for now. So to calculate the deflections, sandwich panels are a little bit different from homogeneous beams. In a sandwich panel, the core is not very stiff compared to the faces. And we've got some shear stresses acting on the thing. And the shear stresses are largely carried by the core. So the core is actually going to shear, and there's going to be a significant deflection of the core and shear as well as the overall bending of the whole panel. So you have to count for that. So we're going to have a bending term and a shear term-- that's what those two terms are there. So we're going to say there's a bending deflection and a shear deflection. And that shearing deflection arises from the core being sheared and the fact that the core, say, Young's modulus or also the shear modulus, is quite a bit less than the face modulus. 
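Because the maximum shear force and bending moment from these diagrams get reused later for the stresses, here is a tiny sketch (not from the lecture) of V(x) and M(x) for the central point load. It just confirms that the shear force is plus or minus P/2 and that the midspan moment is Pl/4.

def shear_and_moment(P, l, x):
    # Shear force V(x) and bending moment M(x) for a simply supported beam of
    # span l with a central point load P (three-point bending).
    if x <= l / 2.0:
        return P / 2.0, P * x / 2.0
    return -P / 2.0, P * (l - x) / 2.0

P, l = 100.0, 0.5
print(shear_and_moment(P, l, 0.25 * l))  # (50.0, 6.25)
print(shear_and_moment(P, l, 0.50 * l))  # (50.0, 12.5), i.e. M_max = P*l/4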
So if you think of the core as being much more compliant than the face, then the core is going to have some deflection from that shear stress. OK, so we're going to start out with this term here, the bending term. And if I just had a homogeneous beam in three-point bending, the central deflections-- so these are all the central deflections I'm calculating here-- with Pl cubed over it turns out to be 48 is the number, and divided by EI. And because we don't have a homogeneous beam here, I'm going to call that equivalent EI. And to make it a little bit more general, instead of putting 48, that number, I'm just going to put a constant B1. And that B1 constant is just going to depend on the loading geometry. So any time I have a concentrated load on a beam, the deflection's always Pl cubed over EI, and then the sum number in the denominator and that number just depends on the loading configuration. So for three-point bending, it's 48. For the flexion of a cantilever, B1 would be 3. So think of that as just a number that you can work out for the particular loading configuration. So here we'll say B1 is just a constant that depends on the loading configuration. And I'll say, for example, for three-point bending, B1 is 48. For a cantilever end deflection, then B1 would be 3. So it's just a number. So the next thing we have to figure out is what's the EI equivalent. So if this was just a homogeneous beam, and it was rectangular, E would just be E of the material and I would be the width B times the height H cubed divided by 12. So here we don't quite have that because we have two different materials. So here we have to use something called the parallel axis theorem, which I'm hoping you may have seen somewhere in calculus, maybe? But, yeah, somebody is nodding yes. OK, so what we do, what we want to do is get the equivalent EI-- I'm going to put it back up, don't panic-- of this thing here, right? So I want-- this is the neutral axis here, and I want the EI about that neutral axis there. So, OK, you happy? There. OK, so I've got a term for the core. OK, the core, that is the middle of the core, right? So for the core, it's just going to be E of the core times bc cubed over 12. Remember, for a rectangular section, it's bh cubed over 12 is the moment of inertia. And here our height for the core is just c, OK? And then if I took the moment of inertia for, say, one face about its own centroidal axis, I would get E of the face now times bt cubed over 12. So that's taking the moment of inertia of one face about the middle of the face. And I have two of those, right? Because I have two faces. And the parallel axis theorem tells you what the moment of inertia is going to be if you move it, not to the-- you don't use the centroid of the area, but you use some other parallel axis. And what that tells you to do is take the area that you're interested in-- so the area of the face is bt, and you multiply by the square of the distance between the two axes that you're interested in. Oop, yeah. Let me change my little brackets. Boop. So, oop-a-doop-a-doop. Maybe I'll stick this, make a little sketch over here again. OK, all right. So this term here, Ef bt cubed over 12, that would be the moment of inertia of this piece here, about the axis that goes through the middle of that, right? Its own centroidal axis. But what I want to do is I want to know what the moment of inertia of this piece is about this axis here. This is the neutral axis. So let's call this the centroidal axis. 
And the parallel axis theorem tells me what I do is I take the area of this little thing here, so that's the b times t, and I multiply by the square of the distance between those two axes. So the distance between those axes is just c plus t over 2, and I square it. And then I multiply that whole thing by 2 because I've got two faces. Are we good? AUDIENCE: [INAUDIBLE]. LORNA GIBSON: Yeah? AUDIENCE: The center [INAUDIBLE] and the [INAUDIBLE], are those Ed's or Ef's? LORNA GIBSON: These are Ef's because this is the face now, right? So this term here is for the core. So here the core is E star c. And these two Ef's are for the face up there, OK? Because you have to account for the modulus of the material of the bit that you're getting the moment of inertia for. Are we good? OK. So now I'm just going to simplify these guys a little bit. Doodle-doodle-doodle-do-doot. OK? So I've just multiplied the twos, and maybe I'll just write down here this is the parallel axis theorem. Doot-doot-doot. Yes, sorry? AUDIENCE: So for the term that comes from the parallel axis theorem, why do we only consider Ef and not [INAUDIBLE]. LORNA GIBSON: Because I'm taking-- what I'm looking at-- so the very first term, this guy, here-- AUDIENCE: Yeah, [INAUDIBLE]. LORNA GIBSON: Accounts for this, right? And these two terms both account for the face. AUDIENCE: Oh, OK, so the face acting-- LORNA GIBSON: Yeah, about this axis. So the parallel axis theorem says you take the moment of inertia of your area about its own centroidal axis, and then you add this term here. But it's really referring to that face, OK? Let me scoot that down and then scoot over here. And this is where we get to say the modulus of the face is much greater than the modulus of the core. And also, typically c, the core thickness, is much greater than t, the face thickness. So if that's true, then it turns out this term is small compared to that one. And also this term is small compared to this one. And also this term, instead of having c plus t squared, if c is big compared to t, then I can just call it c squared, OK? So you can see here, if Ec is small, then this is going to be small compared to these. If t is small, then this guy is going to be small. So even though it looks ugly, many times we can make this simpler approximation. OK, so we can just approximate it as Ef times btc squared over 2. So then this bending term here, we've got everything we need now to get that bit there. So the next bit we want to get is the shearing deflection. So what's the shearing deflection equal to? So say we just thought about the core, and all we're interested in here is what's the deflection of the core and shear? And so say that's P/2, that's P/2, that's l/2. We'll say that's-- oops. That's our shearing deflection there. We can say the shear stress in the core is going to equal the shear modulus times the shear strain, so we can say P over the area of the core is going to be proportional to the Young's modulus times delta s over l. And let's not worry about the constant just yet. So delta s is going to be proportional to-- well, let me [? make it ?] proportional at this point. Delta s is going to equal Pl divided by some other constant that I'm going to call B2, and divided by the shear modulus of the core, and essentially the area of the core. And here B2 is another constant. So again, B2 just depends on the loading configuration. Yeah, this is a little bit of an approximation here, but I'm just going to leave it at that. 
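To see how much the neglected terms actually matter, here is a short sketch comparing the full three-term parallel axis expression for (EI)eq with the single-term approximation Ef b t c squared over 2. The material and geometry numbers (aluminum-like faces, a soft foam core) are my own illustrative choices, not values from the lecture.

def EI_equivalent(Ef, Ec, b, t, c, approximate=False):
    # Equivalent flexural rigidity of a sandwich cross-section. The exact form
    # keeps the core term and the faces' own-axis term; the approximation keeps
    # only the parallel-axis term for the faces.
    if approximate:
        return Ef * b * t * c**2 / 2.0
    core_term = Ec * b * c**3 / 12.0
    face_own_axis = 2.0 * Ef * b * t**3 / 12.0
    face_parallel_axis = 2.0 * Ef * b * t * ((c + t) / 2.0)**2
    return core_term + face_own_axis + face_parallel_axis

# Faces: Ef = 70 GPa, t = 1 mm. Core: Ec = 50 MPa, c = 20 mm. Width b = 40 mm.
exact = EI_equivalent(70e9, 50e6, 0.040, 0.001, 0.020)
approx = EI_equivalent(70e9, 50e6, 0.040, 0.001, 0.020, approximate=True)
print(exact, approx, approx / exact)  # the ratio comes out around 0.9 here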
OK, so then we have these two terms and we just add them up to get the final thing. Start another board. OK. So that would give us an equation for the deflection. And one thing to note here is that this shear modulus of the core, if the core is a foam, then we have an equation for that. We also could use an equation if it's a honeycomb. But I'm just going to write for foam cores. Whoops. This is for-- that will be for open-cell foam cores. Oops, don't want to-- and get rid of that. We won't update just now, thank you. OK, so the next thing I want to think about is how we would minimize the weight for a given stiffness. So say if we're given a stiffness, we're given P over delta, so I could take out the two P's here. If I divide it through by P, delta over P would be the compliance, P over delta would be the stiffness. So imagine that you're given the face and core materials, and you're told how long the span has to be, you're told how wide the beam is going to be, and you're told the loading configuration. So you know if it's three-point bending, or four-point bending, or a cantilever-- whatever it is. And you might be asked to find the core thickness, the face thickness, and the core density that would minimize the weight. So I have a little schematic here. I don't know if you're going be able to read it. So I'm going to walk through it and then I'll write things on the board. Whoops, hit the wrong button. OK, so we start with the weight equation here. The weight's obviously the sum of the weight of the faces, the weight of the core, so those two terms there. So I'll write that down in a minute. And then we have the stiffness constraint here. So this equation here is just this equation that I have down here on the board, OK? Then what you do is you solve that stiffness constraint for the density of the core. So this equation here just solves-- we're solving this equation here in terms of the density, and we get the density by substituting in this equation here for the shear modulus of the core. So you substitute that there. It's kind of a messy thing, but you solve that in terms of the density. Then you put that version of the density here in terms of this weight equation up here. So then you've eliminated the density out of the weight equation, now you've just got it in terms of the other variables. And then you take the partial derivative of the weight with respect to the core thickness c, set that equal to 0, and you take the partial derivative of the weight with respect to the face thickness, t, and you set that equal to 0. And that then gives you two equations and two unknowns. You've got the core thickness and the face thickness are the two unknowns. And you've got the two equations, so then you solve those. So the value you get for the core thickness is then the optimum, so it's going to be some function of the stiffness, the material properties you started with in the beam geometry. And similarly, you get some equation for the optimum face thickness, t. And again, it's a function of the stiffness and the material properties in the beam geometry. Then you take those two values for c and t, those two optimum values, and plug it back into this equation here, and get the optimum value of the core density. And so what you end up are three equations for the optimal values of the core thickness, the face thickness, and the core density in terms of the required stiffness, the material properties, and then the loading geometry. 
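Putting the bending and shear terms together, here is a sketch of the central deflection and stiffness for three-point bending. B1 = 48 is the value given in the lecture; B2 = 4 for a central point load and the open-cell foam constant of about 3/8 in G*c = C Es (rho*c/rho_s)^2 are assumptions on my part, so treat them as placeholders for whatever constants you are actually given.

def foam_shear_modulus(Es, rel_density, C=3.0/8.0):
    # Open-cell foam core: G*c ~ C * Es * (rho*c/rho_s)^2. The constant C of
    # about 3/8 is assumed here, not quoted in this lecture.
    return C * Es * rel_density**2

def midspan_deflection(P, l, b, t, c, Ef, Gc, B1=48.0, B2=4.0):
    # Central deflection of a sandwich beam: bending term plus core shear term.
    EI_eq = Ef * b * t * c**2 / 2.0   # stiff, thin-face approximation from above
    AG_eq = b * c * Gc                # shear rigidity of the core
    return P * l**3 / (B1 * EI_eq) + P * l / (B2 * AG_eq)

# Same illustrative section as before, span 0.5 m, P = 100 N, foam core of
# relative density 0.1 made from a solid with Es = 1.6 GPa.
Gc = foam_shear_modulus(1.6e9, 0.10)
delta = midspan_deflection(100.0, 0.5, 0.040, 0.001, 0.020, 70e9, Gc)
print(delta * 1000.0, "mm central deflection;", 100.0 / delta, "N/m stiffness")
# With a compliant foam core the shear term can easily dominate the total.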
So I'm going to write down some more notes, because I'll put this on the Stellar site. But it's hard to read just here. So let me write it down and I'll also write out the equations so that you have the equations for calculating those optimum values. So before I do that, though, one of the interesting things though is if you figure out the optimal values of the core thickness and the face thickness and the core density, and you substitute it back into the weight, and you calculate this is the weight of the face relative to the weight of the core, no matter what the geometry is, and what the loading configuration is, the weight of the face is always a quarter of the weight of the core. So the ratio of how much material is in the core and the face is constant, regardless of the core-- of the loading configuration. And this is the bending deflection relative to the total deflection. It's always 1/3. And the shearing deflection relative to the total deflection is always 2/3. So regardless of how you set things up, the ratio of what weight the face is relative to the core and the amount of shearing and bending deflections is always a constant at the optimum. OK, so let's say we're given the face and the core materials. So that means we're given their material property, too. And say we're given the beam length and width and the loading configuration. So that means we're given those constants, B1 and B2. If I told you it was three-point bending, you would know what B1 and B2 are. So then what you need to do is find the core thickness, c, the face thickness, t, and the core density, rho c, to minimize the weight of the beam. So there's two faces, so the weight of the face is 2 rho f g times btl. And then the weight of the core is rho c g times bcl. So I'm going to write down the steps and then I'll write down the solution. So you solve. So you put this equation for the shear modulus of the core into here, and then you rearrange this equation in terms of the density of the core here. So you have an equation for the core density in terms of that stiffness, and then you solve the partial derivatives of the weight equation with respect to the core thickness, c, and put that equal to 0. And then the partial of dw [? over ?] dt and set that to 0. And if you do that, you can then solve for the optimal values of the face and core thicknesses. Yes? AUDIENCE: [INAUDIBLE] for weight, what is g? LORNA GIBSON: Gravity. AUDIENCE: OK. LORNA GIBSON: Just density is mass, mass times gravity-- weight. That's all it is. And then you've got a version of this that's in terms of the core density. You can substitute those values of the optimum face and core thicknesses into that equation and get the optimum core density. And then in the final equations, you get, when you do all that, and I'm going to make them all dimensionless, so this is the core thickness normalized by the span of the beam is equal to this thing, here. So you can see each of these parameters here, the design parameters that we're calculating the optimum of. I've grouped the constants B1 and B2 together that describe the loading configuration so you'd be given those. C2 is this constant-- oop, which I just rubbed off-- that relates the shear modulus of the foam core. So you'd be given that. These are the material properties of the-- you know, say, it's a polyurethane foam core, this would be the density of the polyurethane. Say it's aluminum faces, that would be the density of the aluminum. so you'd be given that. 
You'd be given the stiffnesses of the two materials, the solid from which the core is made and the face material. And then this is the stiffness here that you're given, just divided by the width of the beam, B. So the stiffness, you'd be given the width B. So you're given all those things, then you could calculate what that optimum design would be. So the next slide here just shows some experiments. And these were done on sandwiches with aluminum faces and a rigid polyurethane foam core. And here we knew what the relationship was for the shear modulus. We measured that. And what we did here was we designed the beams to all have the same stiffness, and they all had the same span in the width, B, then we kept one parameter at the optimum value and we varied the other ones. So here, on this beam, this set of beams here, the density was at the optimum. And we varied the core thickness, and we varied the face thickness, and the solid line was our model or our sort of optimization. And the little X's were the experiments. So you can see there's pretty good agreement there. Then the second set here, we kept the face thickness at the optimum value and we varied the core thickness, we varied the core density. So the same thing, the solid line is the sort of theory and the X's are the experiments. And here we had the core thickness of the optimum value, and we varied the face thickness and the core density. So you can kind of see how you can see this here. And over here, just because I forgot to say it, this is the stiffness per unit weight, over here, OK? So these are the optimum designs here, all right? So there was pretty good agreement between these calculations and what we measured on some beams. Do I need to write anything down? Do you think you've got that? Yeah? AUDIENCE: I was just going to ask, for the optimum design column that you have there, do those numbers like fall out of these equations if you do the math? LORNA GIBSON: They do, yeah. I mean, it's-- yeah, exactly. So if you remember the equation we had for the weight, so the weight is equal to 2 rho f gbtl plus the density of the core, bcl, so if you plug these things into there, then-- so this is the way to the face, that's the way to the core, then it drops out to be a quarter. So it's kind of magical. I mean, you have this big, long, complicated gory thing, and then, poof, everything disappears except a factor of 1/4. And the same for the bending deflection. So we had those two terms, so there was the bending and the shear. If you just calculate each of those terms and take the ratio of 1 over the total, or the one over the other, everything drops out except that number. So that's why I pointed it out, because it seemed kind of amazing that everything would drop out except for that one thing. OK, so then the next thing-- so that's the stiffness in optimizing the stiffness. Are we happy-ish? Yeah? OK. So the next thing-- oh, well, let's see. I don't think I need to write any. I think if you have that graph, I don't really need to write much down. So the next thing then is the strength of the sandwich beams. So let me get rid of that. You guys OK? Yeah? AUDIENCE: Yeah. LORNA GIBSON: Yeah, but you're shaking your head like this is very, very helpful for me. AUDIENCE: [INAUDIBLE]. LORNA GIBSON: Oh OK, that's OK. AUDIENCE: [INAUDIBLE]. LORNA GIBSON: That's OK, you can do that. I don't mind. But as long as you don't have questions for me. 
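For completeness, here is a brute-force version of the minimum weight procedure described above: sweep the face and core thicknesses, back out the core density needed to meet the required stiffness, and keep the lightest design. The lecture does this analytically with the partial derivatives; this is just a numerical sketch of the same idea. The loading constants (B1 = 48, B2 = 4), the foam constant C = 3/8, and all of the material numbers in the example are assumptions for illustration only. A useful sanity check on the result is that the face weight should come out near a quarter of the core weight, and the shear deflection near two thirds of the total, as noted above.

import math

def optimal_sandwich(stiffness, l, b, rho_f, Ef, rho_s, Es,
                     B1=48.0, B2=4.0, C=3.0/8.0, g=9.81,
                     t_range=(0.1e-3, 5e-3), c_range=(2e-3, 80e-3), n=400):
    # For each (t, c) pair, find the core shear modulus (and hence relative
    # density) required to meet the stiffness P/delta, compute the weight, and
    # keep the minimum. Returns (weight, t, c, relative density), or None if no
    # design in the search window can meet the stiffness.
    best = None
    for i in range(n):
        t = t_range[0] + (t_range[1] - t_range[0]) * i / (n - 1)
        for j in range(n):
            c = c_range[0] + (c_range[1] - c_range[0]) * j / (n - 1)
            bend_compliance = 2.0 * l**3 / (B1 * Ef * b * t * c**2)
            shear_budget = 1.0 / stiffness - bend_compliance
            if shear_budget <= 0.0:
                continue  # bending alone is already too compliant
            Gc = l / (B2 * b * c * shear_budget)
            rel_density = math.sqrt(Gc / (C * Es))
            if rel_density >= 1.0:
                continue  # would need a core denser than the solid
            weight = 2.0 * rho_f * g * b * t * l + rel_density * rho_s * g * b * c * l
            if best is None or weight < best[0]:
                best = (weight, t, c, rel_density)
    return best

# Aluminum faces (2700 kg/m^3, 70 GPa), polyurethane foam core made from a
# solid of 1200 kg/m^3 and 1.6 GPa, span 0.5 m, width 40 mm, required
# stiffness P/delta of 10 kN/m.
w, t, c, rd = optimal_sandwich(10e3, 0.5, 0.040, 2700.0, 70e9, 1200.0, 1.6e9)
print("weight %.3f N, t %.2f mm, c %.1f mm, core relative density %.3f"
      % (w, t * 1e3, c * 1e3, rd))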
OK, and so the first step in trying to figure out about this strength is we need to figure out the stresses in the beams. So we need to find out about the stresses. And we're going to have normal stresses and we're going to have shear stresses. So I'm going to do the normal stresses first and then we'll do the shear stresses. So you do this in a way that's just analogous to how you figure out the stresses in a homogeneous beam. So we'll say the stresses in the face-- normally it would be My over I. M is the moment, y is the distance from the neutral axis, I is the moment of inertia. So this time, instead of having a moment of inertia, we have this equivalent moment of inertia. And we multiply by E of the face. So you can think of this as being the strain essentially. And then you multiply by E of the face to get the stress. The maximum distance from the neutral axis, we can call c/2. So that's y. Then EI equivalent we had Ef btc squared over 2. And then I have a term of Ef here. c squared. So one of the c's goes, the 2's go, the Ef's go. Then you just get that the normal stress in the face is the moment at that section divided by the width, b, the face thickness, t, the core thickness, c. And I can do the same kind of thing for the stress in the core, except now I multiply by the core modulus. So if I go through the same kind of thing, it's the same factor of M over btc, but now I multiply times E of core over E of the face. And since E of the core is a lot smaller than E of the face, typically these normal stresses in the core are much smaller than the normal stresses in the face. So the faces carry almost all of the normal stresses. And if you look at an I-beam, the flanges of the I-beam carry almost the normal stresses. So I want to do one more thing here. I want to relate the moment to some concentrated load. So let's say we have a beam with a concentrated load, P. So for example, something in three-point bending, typically we're interested in the maximum stresses, so we want the maximum moment. So M max is going to be P times l over some number. And this B3 is another constant that depends on the loading configuration. So if it was three-point bending, B3 would be 4. If it was a cantilever, B3 would be 1. So if I put those things together, the normal stress in the face is Pl B3 divided by btc. OK, so that's the normal stresses. And then the next thing is the shear stresses, and the shear stresses are going to be carried largely by the core. And if you do all the exact calculations, they vary parabolically through the core. But if we make those same approximations that the face is stiff compared to the core, and that the face is thin compared to the core, then you can say that the shear stress is just constant through the core. So we'll say the shear stresses vary parabolically through the core. But if the face is much stiffer than the core and the core is much thicker than the face, then you can say that the shear stress in the core is just equal to the sheer force over the area of the core, bc. So here, V is the shear force of the cross-section you're interested in. And bc is just the area of the core. And we could say the maximum shear force is just going to be V over-- actually, let's make it P, P over yet another constant. And B4 also depends on the loading configuration. So if I was giving you a problem, I would give you all these B1, B2, B3, B4's and everything. 
So the maximum shear stress in the core is in just the applied load, P, divided by this B4 and divided by the area of the core. OK, so this next figure up here just shows those stress distributions. So here's a piece of the cross-section here. So there's the face thickness and the core thickness. You can think of that as a piece along the length, if you want. This is the normal stress distribution, here. So this is all really from saying plane sections remain plane. These are the stresses, the normal stresses in the core. And you can see they're a lot smaller in this schematic than the ones in the face. And then this is the parabolic stress in the core. And similarly, there'd be a different parabola in the face. And these are the approximations. Typically these approximations are made so the normal stress in the face is just taken as a constant. The normal stress in the core is often neglected. And here the shear stress in the core is just a constant here. So the two things you need to worry about are the normal stress for the face and the shear stress for the core. Are we good? We're good? Yeah, good-ish. OK, so if we want to talk about the strength of the beam, we now have to talk about different failure modes. And the next slide just shows some schematics of the failure modes. So there's different ways the beam can fail. Say it's in three-point bending just for the sake of convenience. One way it can fail is, say it had aluminum faces. This face here would be in tension, and the face could just yield. So you could just get yielding of the aluminum. That would be one way. It could be a composite face and you could have some sort of composite failure mode. You can get more complicated failure modes for composites, but there could be some sort of failure mode. This face up here is in compression, and if you compress that face, you can get something called face wrinkling. You get sort of a local buckling mode. So imagine you have the face, that you're pressing on it, but the core is kind of acting like an elastic foundation underneath it. And you can get this kind of local buckling, and that's called wrinkling. That's another mode of failure. You can also get the core failing in shear. So here's these two little cracks, denoting shear failure in the core. And there's a couple of other modes you can get, but we're going to not pay much attention to those. The whole thing can delaminate, and, as you might guess, if the whole thing delaminates, you're in deep doo-doo. Because, remember when I passed those samples around, how flexible the face was by itself and how flexible the core is by itself. If the whole thing delaminates, you lose that whole sandwich effect and the whole thing kind of falls apart. We're going to assume we have a perfect bond and that we don't have to worry about that. The other sort of failure mode you can get is called indentation. So imagine that you apply this load here over a very small area. The load can just transfer straight through the face and just kind of indent the core underneath it. We're going to assume that you distribute this load over a big enough area here, that you don't indent the core. So we're going to worry about these three failure modes here-- the face yielding, the face wrinkling, and the core failing and shear, OK? So let me just write that down. And then you also can have debonding or delamination, and we're going to assume perfect bond. 
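Here is a short sketch (not from the lecture) that evaluates the two stresses you need to worry about, the face normal stress and the core shear stress, for a given concentrated load. B3 = 4 is the three-point-bending value given above, and B4 = 2 follows from the maximum shear force being P/2 for a central point load. The section and material numbers are the same illustrative ones as before.

def sandwich_stresses(P, l, b, t, c, Ef, Ec, B3=4.0, B4=2.0):
    # Maximum normal stress in the faces, normal stress in the core, and shear
    # stress in the core for a concentrated load P in three-point bending.
    sigma_face = P * l / (B3 * b * t * c)
    sigma_core = sigma_face * Ec / Ef   # much smaller, since Ec << Ef
    tau_core = P / (B4 * b * c)
    return sigma_face, sigma_core, tau_core

sf, sc, tc = sandwich_stresses(100.0, 0.5, 0.040, 0.001, 0.020, 70e9, 50e6)
print(sf / 1e6, sc / 1e6, tc / 1e6)  # in MPa; the faces carry the normal stress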
And then you can have indentation, and we're going to assume the loads are applied over a large enough area that you don't get-- So you can have different modes of failure, and the question becomes which mode is going to be dominant? So whichever one occurs at the lowest load is going to be the dominant failure mode. So you'd like to know what that lowest failure mode is. So we want to write equations for each of these failure modes and then figure out which one occurs first. So we'll look at the face yielding here. And face yielding is going to occur just when the normal stress in the face is equal to the yield stress of the face. So this is fairly straightforward. So this was our equation for the stress in the face. And when that's equal to the face yield strength, then you'll get failure. And the face wrinkling occurs when the normal compressive stress in the face equals a local buckling stress. And people have worked that out by looking at what's called buckling on an elastic foundation. So the core acts as elastic support. You can think that as the face is trying to buckle into the core, the core is pushing back on the face. And so the core is acting like a spring that pushes back, and that's called an elastic foundation. So people have calculated this local buckling stress, and they found that's equal to 0.57 times the modulus of the face to the 1/3 power times the modulus of the core to the 2/3 power. And here, if we use our model for open cell foams, we can say the core modulus goes as the relative density squared times the solid modulus. And so you can plug that in there. So then the wrinkling occurs when the stress in the face, the Pl over the B3 btc is equal to this thing here. OK, so one more failure mode that's the core shear, and that's going to occur when the shear stress in the core is just equal to the sheer strength of the core. So the shear stress is P over B4 times bc, and the shear strength is some constant, I think it's C11, times the relative density of the core to the three halves power times the yield strength of the solid. And here, this constant is about equal to 0.15, something like that. So now we have a set of equations for the different failure modes, and we could solve each of them, not in terms of a stress, but in terms of a load P. The load P is what's applied to the beam, right? So we could solve each of these in terms of the load, P. And then we can see which one occurs at the lowest load, P. And that's going to be the dominant failure mode. So one way to do it would be to, for every time you wanted to do this, to work out all these three equations and figure out which one's the lowest load. But there's actually something called a failure mode map, which we're going to talk about. So let me just show you it and we'll start now. I don't know if we'll get finished this. But there's a way that you can manipulate these equations and plot the results as this failure mode map. And you'll end up plotting the core density on this plot, on this axis here, and the face thickness to span ratio here, and so this will kind of tell you, for different configurations of the beam, different designs, for these ones here, the face is going to wrinkle, for those ones there, the face is going to yield, and for these ones here, the core is going to shear. So I'm going to work through these equations, but I don't think we're going to finish it today. So this is just kind of where we're headed is to getting this map. 
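A small sketch that turns those three criteria into failure loads and picks the lowest one, which is the dominant mode. The foam scalings Ec = (rho*c/rho_s)^2 Es and the core shear strength of about 0.15 (rho*c/rho_s)^(3/2) sigma_ys are the ones quoted above; the face yield strength and the other example numbers are only illustrative.

def failure_loads(l, b, t, c, Ef, sigma_yf, Es, sigma_ys, rel_density,
                  B3=4.0, B4=2.0, C11=0.15):
    # Load at which each failure mode is reached; the dominant mode is the one
    # reached at the lowest load.
    Ec = rel_density**2 * Es
    P_yield = B3 * b * c * (t / l) * sigma_yf
    P_wrinkle = B3 * b * c * (t / l) * 0.57 * Ef**(1.0 / 3.0) * Ec**(2.0 / 3.0)
    P_shear = B4 * b * c * C11 * rel_density**1.5 * sigma_ys
    modes = {"face yielding": P_yield,
             "face wrinkling": P_wrinkle,
             "core shear": P_shear}
    return modes, min(modes, key=modes.get)

# Aluminum-like faces (sigma_yf = 100 MPa), foam core of relative density 0.1
# made from a solid with Es = 1.6 GPa and sigma_ys = 127 MPa.
modes, dominant = failure_loads(0.5, 0.040, 0.001, 0.020, 70e9, 100e6, 1.6e9, 127e6, 0.10)
print(modes, "->", dominant)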
So we'll say the dominant failure mode is the one that occurs at the lowest load. So the question we're going to answer is how does the failure mode depend on the beam design? And we're going to do this by looking at the transition from one failure mode to another. So at the transition from one mode to another, the two modes occur at the same load. So I'm going to take those equations I had for each of the failure modes, and instead of writing this in terms of, say, the stress in the face, I'm going to write it in terms of the load, P. So using that first one over there, the load for face yielding, I'm just rearranging that. It's B3 times bc times t/l times the yield strength of the face. And similarly for face wrinkling, I can take this equation down here and solve it for this P here, OK? And then I can take that equation at the top and solve that for P2 for the core shear, and that's equal to C11 times B4 times bc times sigma ys times-- oops, wrong thing-- times the relative density to the 2/3 power. OK? And then the next step is to equate these guys. So you get a transition from one mode to the other when two of these guys are equal to each other, right? So there's going to be a transition from face yielding to face wrinkling when these guys are equal. And I'm not going to start that because we're going to run out of time. But let me just say that I can pair these two up and say there's a transition between those two. And that transition is going to correspond to this line here, OK? So at this line here, that means you get face yielding and face wrinkling at the same load, OK? And then if I paired up-- let's see here. If I paired up face wrinkling and core shear, these two guys here, I'm going to get this equation here on that plot. And then if I paired up these two guys here, the face shielding and the core shear, I would get that line there, OK? So once I have those lines, that tells me, you know, anything with a lower density core and a smaller face thickness is going to fail by face wrinkling. Anything with a bigger density is going to fail by face yielding. And anything with a larger face thickness and a larger density is going to fail by core shearing. And so you can start to see that it-- I'll work out the equations next time, but you can start to see that it kind physically makes sense. Intuitively, this face wrinkling, it depends on the normal stress in the face, in compression. So obviously the thinner the face gets, the more likely that's going to be to happen. So it's going to happen at this end of the diagram. And it also depends on that elastic foundation, on how much spring support the foundation has, right? So the lower the core density, the more likely that is to happen. Then if you, say you have small t, so the face is going to fail before the core, as you increase the core density, you're making that elastic foundation stiffer and stiffer, and you're making it harder for the buckling to occur. It can't buckle into the elastic foundation, so then you're going to push it up to the yielding. And then as you make the face thickness bigger, as t gets bigger, then the face isn't going to fail and the core is going to fail. So you can kind of see just looking at the relative position of those things, they all kind of make physical sense. So I'm going to stop there for today and I'll finish the equations for that next time. And we'll also talk about how to optimize for strength next time. And we'll talk about a few other things on sandwich panels.
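And here is a rough text-mode version of that failure mode map, built by simply asking, at each combination of face thickness to span ratio and core relative density, which of the three failure loads is lowest. The width b and core thickness c appear in all three load expressions, so they cancel out of the comparison, and the map depends only on t/l, the core relative density, and the material properties. Note that the core shear load uses the 3/2 power of relative density, as in the shear strength expression above. The material numbers are the same illustrative aluminum-face, polyurethane-foam set as before, so the boundary positions are only indicative.

def dominant_mode(t_over_l, rel_density, Ef, sigma_yf, Es, sigma_ys,
                  B3=4.0, B4=2.0, C11=0.15):
    # Which failure mode is reached at the lowest load, per unit of b*c.
    Ec = rel_density**2 * Es
    loads = {"Y": B3 * t_over_l * sigma_yf,                                  # face yielding
             "W": B3 * t_over_l * 0.57 * Ef**(1.0 / 3.0) * Ec**(2.0 / 3.0),  # face wrinkling
             "S": B4 * C11 * rel_density**1.5 * sigma_ys}                    # core shear
    return min(loads, key=loads.get)

# Rows are t/l, columns are core relative density from 0.02 to 0.4.
densities = [0.02, 0.05, 0.1, 0.2, 0.3, 0.4]
for t_over_l in [0.05, 0.02, 0.01, 0.005, 0.002, 0.001]:
    row = "".join(dominant_mode(t_over_l, rd, 70e9, 100e6, 1.6e9, 127e6)
                  for rd in densities)
    print("t/l = %.3f  %s" % (t_over_l, row))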
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
Role_of_Images.txt
PROFESSOR: So there's a lot of images in the class, partly because we're studying these materials and you can see just with the ones sitting in front of me, they have this porous, cellular structure. So I show lots of images of materials. We also look at how the images deform under load. And I think, perhaps, that's something that might be a little unexpected. So for instance, we have a stage in the electron microscope where we can actually deform things in the microscope. And the stage is set up that it has a load cell on it. And we could also measure how much it deforms, the materials. So we can actually watch the materials as they're deforming. So even though the cells may be 100 micron size, you can watch how they deform and how they fail, and you're going to learn a lot about the mechanics from these sorts of observations. So we have both video of these deformations and also still photography at different time points that we show. Another interesting little video clip that I show in the class is we look at the interactions between biological cells and tissue engineering scaffolds. So for example, people who've looked at trying to heal, say, burns in skin where there's a large area of skin missing, one of the ways that you can do that is by using a collagen-based tissue engineering scaffold. And when you have a burn in your skin and it just heals the normal way and you get all the scar tissue forming, that scar tissue is thought to form in conjunction with a process of what's called wound contraction. So cells will actually migrate into the wound bed and actually mechanically pull the edges of the wound together, and that partly closes the wound and then scar forms as well. And those two processes are thought to be related to each other. So I've done a collaboration with Professor Ioannis Yannas here at MIT who developed one of these scaffolds for burn patients. And he and I have been interested in this wound contraction problem. And it turns out if you just put fibroblast skin cells into a dish of culture medium with one of these tissue engineering scaffolds, they will contract the scaffold. And you can use an optical microscope to actually watch the cells do this. So you can focus on an individual cell and you can see the cell elongating. You can see the scaffold itself contracting. And you can see this whole process. So this is one of the little video clips that we show in class. And the students always find that fascinating.
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
Faculty_Introduction_and_Background.txt
LORNA GIBSON: My name's Lorna Gibson. I'm the professor for 3.054, it's a course on cellular solids. And I've been working on cellular solids since I was a graduate student, since I did my Ph.D. And cellular solids are materials that are made up of an interconnected network of struts or plates. And there's examples like engineering honeycombs and foams, and there's lots of examples in nature. Things like wood and cork and there's a type of porous bone. And there's lots of examples in medicine too. Tissue engineering scaffolds, for example. So my background is in civil engineering, and in civil engineering we study structures. And typically people think of large structures like bridges or buildings. But in fact when we analyze the cellular solids, we use the same kind of mechanics. It's just the scale is very much smaller. So we're looking at structures where the scale might be hundreds of microns or millimeters, things like that, but the same sort of mechanical principles apply to that. OK, so I grew up in Niagara Falls, in Ontario. And people always think of Niagara Falls as being the waterfall and all the tourist stuff, there's a casino there now. But in fact, there's loads and loads of big civil engineering works in Niagara Falls, mostly associated with the hydroelectric power station. So when they make hydroelectric power in Niagara Falls, the power station is actually about a mile downstream from the Falls. And what they do is they have a big hydraulic gate that goes into the river and it diverts water from the river above the Falls into a whole series of canals and tunnels and there's a big reservoir where they store water. And then the water from this reservoir goes into the penstocks, the tubes that go down to the turbines and then make the electricity. Niagara Falls is not a big town, but if you drive around Niagara Falls, you see these canals, you see the reservoir, you see the big power station. And so there's these really huge, impressive civil engineering works. And my father worked for an engineering company in Niagara Falls and they specialized in the design of hydroelectric power stations, and I think that's how I got interested in engineering. So I've been interested in bird watching for some time. Mostly just because birds are beautiful and there's all sorts of interesting behaviors you can see with birds. But since I started doing research on cellular solids and, in particular, teaching this course, I realize there's lots of examples of things about birds that have to do with cellular materials. So for instance, some people had once told me that woodpeckers avoid head injury and brain injury by having a special cellular material in between their brain and their skull. And that this acted kind of like a foam in a bicycle helmet. That it would absorb the energy of the impact. And I thought oh, well, I like bird watching and I study cellular materials, I should find out about this. So I started looking into it and people had looked at the anatomy of the woodpecker skull and brain. And, in fact, there is no special cellular material. But by that point, I was kind of hooked. And I actually did a project at one point looking at why it was that woodpeckers don't get brain injury. And it's largely a scaling law. It has to do with the fact that their brains are very small. Another aspect of birds that has to do with cellular solids is how birds make themselves very light. And here we have an owl skull. This owl, unfortunately, had an accident with a car. 
But somebody picked up its body and took it to Mass Audubon, and I got this from somebody at the Massachusetts Audubon Society. And if you look at the skull-- I don't know if you can do a close-up here-- if you look at the skull, you can see there's a dense layer of bone on the outside and there's another dense layer bone on the inside, and there's a sort of foamy layer bone in between. And that's called a sandwich structure. And this foamy type of bone is called trabecular bone. And that's one of the things that I study. And it turns out that particular structure gives you a very stiff, strong, lightweight structure. So you can see an example of how cellular materials are used in engineering but here sort of manifested in the owl's skull in making the skull very light.
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
Project_Logistics_and_Support.txt
PROFESSOR: So typically in the course we start off talking about the structure of these cellular materials. We do some modeling of honeycomb and foam type materials, and that takes not quite half the term, but most of the first part of the term anyway. And I give them more problem sets at the beginning of the term, and they don't really start on the projects at the beginning. So I give them some background information to get them going. And I assign the project at the beginning of the course, and I think there's three kind of deadlines. One is they have to give me a proposal. I think that's typically about a month into the course, and that can be fairly brief, but I want to at least know they've got a team. They've got an idea. They've got some idea of how they're going to carry out their project. Then in about another month they have to send me a sort of update on the project, and by that point, I would have expected them, if they're doing a literature review, to start reading some papers and be able to tell me something about the background to their project. If they're doing experiments, I would have expected them to at least gone into the lab and maybe made some materials, or bought some materials, and done some preliminary tests. And I give them feedback at that point. And at the end of the term, they hand in the final project. And I see them twice a week in class, and I don't really have a formal recitation, but what I do do is every week that a problem set is due I have office hours. And in fact, I don't actually have it in my office. I book a room, and we do a little tutorial. So there's times throughout the term they can see me and come and ask me questions about the project as well. So the question is what kind of feedback do I give the students and how do they interact with me on the projects, and typically there's about 20 students that take the course. So if they do it in pairs, there's roughly 10 projects. Some of the projects students just do literature searches, and I don't really get that involved with those. They're perfectly capable of going to the library and doing a literature search. And some of the students who are taking it are graduate students, and they often do finite element numerical calculations. And again, I give them some advice about how to set it up, but often they have experience doing this, and it's sort of applying what they already know to something new, but they kind of know what to do. The way I get the most involved is if students do experimental projects. So for instance, in this elephant skull project, I had this connection at the Museum of Comparatives Zoology, and I took the student up there. They had this idea about 3D printing it, and we have a 3D printer in the department. And one of the technical staff, Mike Tarkanian, is very, very good with the students and very helpful in getting them set up on the 3D printer, so he helps with that. And I give them sort of general advice. When they said now we want to measure some sort of acoustical response, I suggested maybe you could suspend it from a wire, or thread, or something and then put an accelerometer and measure the vibrations. So I try to give them some general advice like that, but they then carry the experiment out themselves. They have to figure out how to actually put it into practice and how to do it. And I think they get a big kick out of that. MIT students enjoy that kind of thing so that's kind of fun, and obviously I was very delighted too with this elephant skull project. 
So there's different things I try to help with-- mostly giving general advice about how they can do their experimental projects. So what are some of the challenges that students encounter in their projects? So I think one thing is students sometimes are a little overambitious in what's possible to accomplish, because typically we have to cover some material before they can even start the project. So typically they don't start the project until a month or six weeks into the term, and the term is only three months long, more or less. And so they really have a fairly limited amount of time-- maybe six weeks or eight weeks, something like that-- to actually do the project. So if they want to make materials, if they want to do some sort of processing to make a foam, for example, they don't really have a lot of time, because often it takes some trial and error to be able to do that. So the projects have to be fairly focused so that they can actually get something interesting out of it in a fairly short amount of time. And sometimes students might want to do something where they would have to order materials, and it may take a few weeks for the materials to come in. So again, I try to discourage them from doing projects where there's going to be a long lead time on getting some critical thing that they need for the project. So there are some limitations, partly because of just the time we have for them to actually do it. And they're not just taking my course. They're taking other courses, so there's sort of a limited amount of time they can spend on it, too.
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
10_Exam_Review.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. LORNA GIBSON: So I was just going to be here to answer questions. AUDIENCE: Just clarifying, what was the material that we were covering? LORNA GIBSON: In the test? AUDIENCE: Yeah. LORNA GIBSON: So the test covers everything up to the end of the part on modeling foams, but not the bit on the performance indices, and the material selection charts for foams. So I think up to the end of the fracture toughness of foams. AUDIENCE: I see. OK. So not covering past [INAUDIBLE]. LORNA GIBSON: Not covering thermal properties, no. It doesn't cover thermal properties. Here you go. So you know I got this MacVicar award, and we had a lunch on Friday at the Catalyst restaurant. And I had to get up and speak. And just as I got up and spoke, a red-tailed hawk swooped by the window. It was perfect. It was perfect. Yeah. Hi. So I'm just here to answer questions. So come on. Somebody must have questions. It's all perfectly clear? You want me to do-- AUDIENCE: Talk through test one from last year a little bit. LORNA GIBSON: You would have to give me test one from last year. I didn't bring it with me. I just brought the problem sets. AUDIENCE: I have it on my computer, and I could read you the problems or just hand you the laptop. Whichever you prefer. LORNA GIBSON: Why don't you hand me the laptop and I'll try to do it. Is that OK? OK. So the question is about the test for 2014. OK. So the first question was, describe four processes for making honeycombs, and comment on the type of material usually used for each process. So I did post the solutions, right? Did you look at that? AUDIENCE: Yeah, I looked at them. I guess I just feel like I don't fully understand why things are there. But I can look at it some more. LORNA GIBSON: Well, I can go over it, if you want. AUDIENCE: I'll just look at it some more. LORNA GIBSON: No, I can go over it. So four processes. Let's see. So there's the expansion process, where you take sheets and you glue the sheets together, and then you pull them apart. So you can only really use that process for materials that are going to have large plastic deformations. So you could use it for metals, some polymers. But you couldn't really use it for ceramics. You couldn't use it for glass because as soon as you yanked on it, you'd break the sheets, right? So-- AUDIENCE: Is it rigid polymers that you can use it for? LORNA GIBSON: Well, something like nylon you can use it for. Something that's got a little yield, that will have some sort of yield point. Not like if you had epoxy, you couldn't use it for epoxy. So, OK. So that's one process. Another process is a corrugation process where you have a wheel that has little gear knobs on it. And you run your flat sheet through that and it comes out with the half hexagonal profile, and you glue those together. So again, you need something that's going to yield. So that would typically be a metal that you would use that with. Let's see. Another process for making honeycombs is 3D printing. You can 3D print honeycombs. And there's different ways to do it. One way is by having an ink. So if you want to print an ink, typically that's some sort of polymer that you're printing. 
I suppose physically it's possible to print glass or to print a metal, but you'd have to have some very high temperature setup to do that. So typically a resin of some sort. Let's see. Other ways to make honeycombs. You can extrude honeycombs. So the ceramic honeycombs we saw were made by extruding a ceramic slurry. And typically, you would do that with a ceramic that's a slurry and a powder. You wouldn't necessarily do that with a metal. I don't think I've seen any metal honeycombs that are extruded like that. OK? AUDIENCE: OK. LORNA GIBSON: Are we good with number one? AUDIENCE: Yeah, that makes sense. LORNA GIBSON: We now need your password. AUDIENCE: Oh, sorry. So is there an actual difference between 3D printing and the extrusion process? LORNA GIBSON: Yeah. So with extrusion you have a die, and you squeeze the material through the die, right? So extrusion's kind of like the toothpastey thing. And 3D printing, you can have an ink or you can do 3D printing where you have, let's say, a powder. And then you print the binder. And then you heat it up some way to get the binder to cure. And then you get rid of the powder that's not bound. That's another way to do the 3D printing. OK? AUDIENCE: Can you also pour it into a mold? LORNA GIBSON: Yeah. Yeah, those silicone rubber honeycombs that I showed you, those are all made by pouring a liquid into a mold and then curing it. Yeah. Yeah, there's other ways, it just asked for four, so I randomly thought of four. OK. OK, are we good? So the next one is, a hexagonal titanium alloy honeycomb has h/l is two, theta is 45, and t/l is 0.05. It says, the end constraint factor for elastic buckling is n equals 0.806. The titanium has a modulus of 110 gigapascals, and yield strength of 880. And then you have to calculate some properties. So would you like me to do that? AUDIENCE: Yeah. LORNA GIBSON: Yeah, you would? OK. AUDIENCE: Does anybody else want that? LORNA GIBSON: I don't see a lot of other people wanting anything else, so I might as well do that. Let's see. I would need a piece of chalk. Here we go. OK. So it's a titanium alloy honeycomb, and we're told h/l is 2, theta's 45, and t/l is 0.05. And we're told that n-- why don't I put the n over here-- n is 0.806. And we're told the modulus of the solid is 110 gigapascals, and the yield strength of the solid is 880 megapascals. OK. So it says calculate the value of and describe the mechanism of deformation or failure for-- and the first part is the Young's modulus in the two direction, e star 2. OK. So I don't remember these formulas either, so I need to look at my notes. And oh, I don't have the formula for e 2 in my notes. Let me see. Is it in any of the problems? AUDIENCE: I have the formula sheet. LORNA GIBSON: You have the formula sheet? I didn't bring the formula sheet with me. You have it? Yeah, I think it's there. I'm pretty sure it's there. OK, so this is just like substituting, and there's nothing complicated about this. So it's equal to Es times t/l cubed times h/l plus sine theta divided by cos cubed theta. So then you just plug everything in. So this is 110 gigapascals. And I'm going to put 110,000 megapascals, because it's probably going to be less than a gigapascal. Then t/l is 0.05. So that's 0.05 cubed. And then h/l is 2, plus sine of 45 is 0.707. And then we divide by cos theta cubed, 0.707 cubed. And then I'm not going to work it out, but that's OK. I assume that's what's in the solution. Are we good? So it's just substituting. That's all it is. In the second part-- OK. 
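A minimal numerical sketch of the substitution she just walked through, assuming the in-plane modulus formula quoted from the formula sheet; the variable names and the print-out are only illustrative:

```python
import math

# E2* = Es * (t/l)^3 * (h/l + sin(theta)) / cos(theta)^3, with the test values:
# Es = 110 GPa, t/l = 0.05, h/l = 2, theta = 45 degrees.
Es = 110e3                      # solid modulus in megapascals (110 GPa)
t_over_l, h_over_l = 0.05, 2.0
theta = math.radians(45)

E2_star = Es * t_over_l**3 * (h_over_l + math.sin(theta)) / math.cos(theta)**3
print(f"E2* is roughly {E2_star:.0f} MPa")   # works out to about 105 MPa
```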
You need to change the time on this, because it just keeps timing out. AUDIENCE: Sorry, I don't know how to change it, but I'll try. LORNA GIBSON: Oh, is that what this is? OK, this is the test. Oh, this is from 2013. Oh, the next-- if we keep going. Here we go. He's got it, so you take your computer. AUDIENCE: That's probably better. LORNA GIBSON: That's perfect for everybody. OK. The second part is the plateau stress for loading in the x2 direction. So that's that. For loading in the x2 direction, it could either be an elastic buckling collapse stress, or a plastic collapse stress. So I'm going to calculate both, and then whichever one is lower, that's the one it would be. So let's see here. So that's going to be-- I'm missing my formulas again. Here we are. So here's the buckling one. It's Es times n squared pi squared over 24, times t cubed over l h squared, times 1 over cos theta. So you put 0.806 squared in here. Pi squared over 24. So here you go. This is t over l cubed divided by h over l squared. So that would be 0.05 cubed. And that would be 1 over 2 squared, and then 1 over 0.707, and then whatever that equals. OK? Are we good with that? And then you'd want to calculate sigma star plastic. And that's equal to sigma ys, t over l squared, and 1 over 2 cos squared theta. All right? So then sigma ys was 880 megapascals. And t over l was our 0.05. And that's 1 over 2 times 0.707 squared, and then whatever that equals. OK? And then whichever one of those would be less is the plateau stress. So I don't-- yeah? AUDIENCE: So I know-- I think I made this mistake in the problem set, but here, because it's titanium, we don't consider it brittle. LORNA GIBSON: Right. Right. I mean, you could. But you don't need to. AUDIENCE: That was something with ceramics? LORNA GIBSON: Yeah, or if it was a glass, or ceramic, or maybe an epoxy. Something that was brittle. And besides which, if you look at the question, I only give you a yield strength and a solid modulus. So to get the brittle thing, I would have to give you a fracture strength. AUDIENCE: That's true with all the problems. LORNA GIBSON: I usually give you what you need. Especially on the test, I'm going to give you what you need. You're not going to be looking things up. OK? So we're good so far? OK, let me go back to the question. So then the third one is the out of plane Young's modulus in the x3 direction. And that's just going to be Es times the relative density. And the relative density-- let's see. So it's Es times rho star over rho s. So that's 110,000 megapascals. And then there's also the very first equation on this, which is the relative density. So it's t/l times h/l plus 2, divided by 2 cos theta times h/l plus sine theta. So I'm not going to substitute everything in, OK? So that was it. It was very plug and chug. Are you good? AUDIENCE: Can I ask a question about a specific question? LORNA GIBSON: I'm going to go through the rest of them. She wants me to do the whole test. You want me to do the whole test, right? AUDIENCE: That would be great, but if other people have other questions it's fine. LORNA GIBSON: Why don't I do the whole test. And then there's going to be time, I think. AUDIENCE: [INAUDIBLE]. LORNA GIBSON: Let me do the test from last year. OK, so that's number one. Or that's one and two. And then three is, a closed cell elastomeric polyethylene foam has a relative density of 0.05 and a volume fraction of solid in the edges of 0.6. They give you the Young's modulus of the solid is 0.2 gigapascals. 
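Before the foam question, a quick sketch of parts two and three of the honeycomb question above, assuming the standard formulas she quotes (elastic buckling, plastic collapse, and the out-of-plane modulus as Es times the relative density); the plateau stress is whichever collapse stress comes out lower:

```python
import math

# sigma_el* = Es * n^2 * pi^2 / 24 * (t/l)^3 / ((h/l)^2 * cos(theta))   (elastic buckling)
# sigma_pl* = sigma_ys * (t/l)^2 / (2 * cos(theta)^2)                   (plastic collapse)
# E3*       = Es * (rho*/rho_s), with
# rho*/rho_s = (t/l) * (h/l + 2) / (2 * cos(theta) * (h/l + sin(theta)))
Es, sigma_ys = 110e3, 880.0                 # MPa
t_l, h_l, n = 0.05, 2.0, 0.806
theta = math.radians(45)

sigma_el = Es * n**2 * math.pi**2 / 24 * t_l**3 / (h_l**2 * math.cos(theta))
sigma_pl = sigma_ys * t_l**2 / (2 * math.cos(theta)**2)
rel_density = t_l * (h_l + 2) / (2 * math.cos(theta) * (h_l + math.sin(theta)))

print(f"elastic buckling ~ {sigma_el:.1f} MPa, plastic collapse ~ {sigma_pl:.1f} MPa")
print(f"plateau stress = min of the two ~ {min(sigma_el, sigma_pl):.1f} MPa")
print(f"relative density ~ {rel_density:.3f}, E3* ~ {Es * rel_density:.0f} MPa")
```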
The pressure within the cells is atmospheric, 0.1 megapascals. And the Poisson's ratio of the foam is 0.3. And you're asked to get the Young's modulus of the foam, the compressive plateau stress. And then there's a question about why does the Young's modulus depend on the solid modulus and relative density, while the Poisson's ratio does not. So let me go through, then. So the first one is, what's e star. And we're told relative density is 0.05. The volume fraction in the edges is 0.6. So remember, that was what we called phi. So phi's 0.6. The Young's modulus of the solid is 0.2 gigapascals. The initial pressure within the cells is 0.1 megapascals. And Poisson's ratio for the foam is 0.3. And it's a closed cell. So if you remember-- oh, let's see. I think back here, was I supposed to-- I was supposed to say something about the mechanism of deformation and failure for the first one. So the mechanism of deformation for the modulus in the 2 direction is bending. The mechanism of failure here is buckling. The mechanism of failure here is yielding. And the mechanism of deformation here was axial deformation, OK? So I forgot to say that. OK, let me go back here. So for this one, if it's a closed cell foam, remember there were three terms to the modulus. There was one from bending of the edges, one from stretching of the faces, and one from the gas contribution if you've got gas inside the cells. So again, I'm going to have to peek at the equation. Foams. Here we go, foams. OK. And then this gas one. Get rid of that. OK, so this one is just the same kind of thing. It's just plug and chug. Do I need to put all the numbers in? AUDIENCE: No. I guess I'm a little bit confused why you need the pressure term. Because you talked about faces bursting. LORNA GIBSON: Ah, so if it's the modulus, remember the modulus-- so the question is, why do you need to worry about the pressure because I talked about the faces bursting. And remember, the stress strain curve looks something like that. Maybe the slope of the curve is a little bit higher over here. But this is the modulus down here, right? The modulus is related to the initial stress strain relationship. And initially, they're not going to burst. You'd have to load it up to some amount of stress before the faces burst, right? So when you're down here, the faces certainly down at the beginning, they're not burst, right? You have to get some stress before they're going to burst. And in some materials, when you get up around here, around the plateau stress-- let's just call that sigma star-- then they might burst. And then the pressure term would disappear and the face term would disappear. So if I don't tell you to ignore them, if I don't say they're going to burst, or I don't say they're negligible, I would calculate them. And then if they're small, then you say, well, they're negligible. OK? Are we good with that? You're good? You're good? Sardar, you don't need to be here. But you can stay if you want, but you don't need to be here. OK, are you good? Everybody else good? OK. That was A. B, what's the compressive plateau stress of the foam? So here, they want to know what sigma star is. You're told it's elastomeric. So if it's elastomeric, it's like a rubber. It's rubbery. So if it's rubbery, it's going to buckle. It's not going to yield. It's not going to be brittle. So you can just calculate the elastic stress here. And if we flip over to our handy dandy list of equations-- blah, blah, blah, blah. Oh, pooh. 
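A sketch of part A, assuming the three-term closed-cell modulus from the formula sheet (edge bending, face stretching, and the gas contribution); the numbers are the ones given in the question, and the plug-and-chug below is only illustrative:

```python
# E*/Es = phi^2*(rho*/rho_s)^2 + (1-phi)*(rho*/rho_s) + p0*(1-2*nu*)/(Es*(1-rho*/rho_s))
Es = 200.0          # MPa (0.2 GPa)
rel_density = 0.05
phi = 0.6           # volume fraction of solid in the edges
p0 = 0.1            # MPa, atmospheric
nu_star = 0.3

edge = phi**2 * rel_density**2
face = (1 - phi) * rel_density
gas = p0 * (1 - 2 * nu_star) / (Es * (1 - rel_density))
E_star = Es * (edge + face + gas)
print(f"E* is roughly {E_star:.2f} MPa")   # about 4.2 MPa; the gas term is tiny here
```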
I'm realizing-- yeah, so I don't have the term here for the faces or for the gas, so here we could assume it's going to rupture. So let's assume it's going to rupture. So if we assume that it's going to rupture, it would be just like the open-cell foams. And then you can just use that. And that's been found to work fairly well for the open-cell and the closed-cell foams. AUDIENCE: And we assume that faces rupture because-- LORNA GIBSON: Well, to be honest, I can't remember if last year I got to this point in the test and somebody said, we don't have the equation, and I gave them the equation with the other terms, or if we just assumed that the faces ruptured. I can't remember. To be honest, I think probably for elastomeric foam, you could assume that they probably don't rupture unless they're very, very thin. AUDIENCE: So what kind of foams do they typically rupture in? LORNA GIBSON: So certainly if you had a metal foam, they'd probably rupture. If you had, say, a polymer foam that was more rigid, like a rigid polyurethane. So polyurethanes can be flexible, which means they're made out of an elastomer, or they can be rigid. And the foams that are typically used for insulation, thermal insulation, are typically closed-cell polyurethane foams. And those typically have very thin faces, and they would rupture. Yeah? AUDIENCE: Are you looking for the Young's modulus in this problem? LORNA GIBSON: The Young's modulus was the first part, right? So part A was the Young's modulus. AUDIENCE: In B-- LORNA GIBSON: In B it's the collapse stress, the compressive strength. AUDIENCE: So I think in my notes, I think it has this. If the-- LORNA GIBSON: Right, if p0 is bigger than-- so what she's showing me in her notes is I had a little note in the class, in the lecture, that if the initial pressure in the cells is greater than atmospheric, then the cell walls are pre-stressed and you have to overcome that in the buckling. AUDIENCE: Is that atmospheric? LORNA GIBSON: That is atmospheric pressure. AUDIENCE: So you don't need the [INAUDIBLE]. LORNA GIBSON: No. Yeah. OK? OK. Yeah, you don't know that that's atmospheric. So do you ever do things in PSI? No, you don't. Because when I was a student a long time ago, the thing I remember learning was atmospheric pressure is 14.7 PSI, more or less. And the conversion between megapascals and PSI is there's more or less 145 PSI to a megapascal, so the atmospheric pressure is about 0.1 megapascals. Yeah? AUDIENCE: I don't know if this is a silly question, but for part B, how do we get from the Young's modulus of the foam-- LORNA GIBSON: Oh, sorry, sorry, sorry, sorry. I put the wrong thing down here. Sorry, my mistake. OK, now you happy? Sorry. OK, shall I move on to the next part? Another question? AUDIENCE: Well, I had a question about number two, but maybe we can come back to that-- LORNA GIBSON: OK, let me finish this, and then we'll go back to number two. So this one here, the part C is why does the Young's modulus foam depend on the solid modulus and the relative density while the Poisson's ratio does not. So when I write the equation for the Young's modulus, the solid modulus comes into it, the relative density comes into it. And remember when we had the Poisson's ratio, it's just a constant that depends on the cell geometry. So here's C, nu star just as a constant. And that constant just depends on the cell geometry. So if you think of the Poisson's ratio, it's the ratio of two strains, right? So say I have my foam here. 
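And, before she gets to the Poisson's ratio part, a sketch of part B: treating the elastomeric closed-cell foam as if it collapses like an open-cell foam by elastic buckling. The constant of roughly 0.05 is the usual fitted value and is my assumption here, not something stated above:

```python
# sigma_el* ~ 0.05 * Es * (rho*/rho_s)^2  -- open-cell elastic collapse approximation
Es, rel_density = 200.0, 0.05     # MPa and relative density from the question
sigma_el = 0.05 * Es * rel_density**2
print(f"sigma_el* is roughly {sigma_el:.3f} MPa")   # about 0.025 MPa
```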
So say that's just a block of foam. Little cells in it here. Little cells. And say I press on it this way here. And let's call this the one direction and the two direction. If I press it in the two direction, the Poisson's ratio is then just nu would be-- let's see, this would be 2, 1. It'd be the strain in the one direction over the strain in the two direction. So it's the ratio of two strains, right? And if you think of our model for the elastic behavior of the foam, each of those strains is going to be related to some bending deformation in the cell walls or the cell struts. And this strain here is going to be-- let me make this proportional. It's going to be proportional to a delta over l. And this one here is going to be proportional to the delta over l. So this might be delta in the one direction, and this will be delta in the two direction. But those two things are both going to be related to the bending deflection of the beams, right? And since both of those deltas are related to the bending deflection of the beams, we could write them-- if you want, I could write that as f l cubed over E of the solid times t to the fourth. And then that's times 1 over l. And then this thing here is also f l cubed over E of the solid t to the fourth 1 over l. So everything cancels out except the geometrical constant. And if you remember when we did the honeycombs, it looked exactly the same. When we looked at the Poisson's ratio of the honeycombs, we had the strain in one direction over the strain in another direction. And each of those strains was related to some component of delta. There was a delta 1 and a delta 2. But the delta 1 might be delta sine theta and delta 2 was delta cos theta. So if the deltas are the same, then it all just cancels out. And all you're left with is a geometrical constant. OK? Do you get physically why that is? AUDIENCE: So I have a question about this. Because we're given most of the equations that we need, is it only in conceptual questions that we should know how we actually derived that version? LORNA GIBSON: I'm not going to ask you to derive that equation for a closed cell foam. AUDIENCE: I meant this last part. LORNA GIBSON: Well yeah, you should be able to explain that. But I mean just at this level. Nothing very mathematically involved. AUDIENCE: OK, cool. LORNA GIBSON: OK, are we good? So that was the end of the test for the undergraduates, OK? And then for the graduate students, just like the problem sets, I just have an extra question. And that's what I did this year, too. So the graduate students have one extra question. So you and you. Is anybody else a graduate student? I think it's just the two of you. You're post post-graduate. OK. OK. So let's see. So this one says, the performance-- so this is on the performance indices which I told you you didn't need to know for this test, partly because, remember, we missed two lectures. We're not exactly on the same spot as we were last year. But I can do it if you want. Do you want me to do it, or should we do other questions? AUDIENCE: What do the grad students think? LORNA GIBSON: Sure. OK. So the question is, the performance index to minimize the mass of a beam of a given bending stiffness, length and square cross-section is e to the one half over rho. So you remember, we derived that e to the one half over rho in class. In the section on wood, we saw that this performance index for wood is higher than that for the solid cell wall material in wood. Do you remember that? 
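The Poisson's ratio argument she just made on the board can be written compactly: both strains come from the same wall-bending deflection, so the material terms cancel. This is only a restatement, with delta standing for the bending deflection of a cell wall:

```latex
\nu^* = \frac{\varepsilon_1}{\varepsilon_2}
      = \frac{\delta_1 / l}{\delta_2 / l},
\qquad
\delta_1 \propto \delta_2 \propto \frac{F l^3}{E_s t^4}
\;\;\Rightarrow\;\;
\nu^* = \text{a constant depending only on the cell geometry.}
```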
The e to the one half over rho for the wood was, I think, rho s over rho star to the one half times Es to the one half over rho s. So explain why wood has a higher value of e to the one half over rho than the solid cell wall material. And then part B is, suggest a design for an engineering material based on wood that has high values of e to the one half over rho. So one way to explain it is to say that if you're looking at e to the one half over rho, you can say for wood, E over Es is equal to rho star over rho s for loading in the axial direction. So this will be for loading axially, along the grain. And that's what we were looking at. So that's what I'm talking about here. So I think-- let me just see if this is right. Yeah. So this equation here is exactly the same as that equation there, right? And this is basically saying that this is the performance index for the wood. This is the performance index for the solid. And this factor here is bigger than 1, because the solid density is higher than the wood density. OK? So really, all you have to do is say that for the wood, the modulus in the longitudinal or the axial direction along the grain varies linearly with the relative density. And it probably would be a good idea to say that this is a result of the cell walls deforming axially. So when you take the cells, if you think of the wood cells as being something like that and you're loading it this way on, the cells just axially shorten, and the modulus depends on the-- it just is the volume fraction of solid times the modulus of the solid. And that's where this comes from. And once you have this, that basically gives you that. OK? Are we good with that? So that's why it's higher. Another way to look at it, as sort of more of a hand-wavy argument, is that if you have a certain amount of solid-- so say you have a certain mass of solid. If it's solid, it takes up a certain cross-sectional area. So say that your beam's a certain length, that's going to have a certain cross-sectional area. And if you have wood, if you have a cellular material, if you have the same mass, you're essentially making the dimensions of that piece bigger. So you're moving the material further away. And as you're making it bigger, you're increasing the moment of inertia. And so you're increasing the bending resistance of it. That's another, more hand-waving way to talk about it. And then the second part is to suggest a design for an engineering material based on wood that would have high values of e to the one half over rho. So remember when we looked at those material performance charts, we said that wood was similar to engineering fiber composites. But those data for fiber composites are assuming that it's solid, the fiber composite's a solid. So if you could take fiber composites and make little tubes of fiber composites and assemble the tubes together so that it was like wood, you would get something that would be even higher. So if you could make, say, a fiber composite honeycomb material, and you'd want to have the fibers aligned along the prism axis of the honeycomb, then you would get higher values. It would be the same-- it'd be this sort of argument again, but now with a fiber composite. So you'd want-- if this was your fiber composite like this, you'd want the fibers-- well, in wood, they're at a little bit of an angle. But say they were lined up like that. You'd want them something like that, then loading it that way on, right? And one way to think about those charts is if you-- say we had a plot. 
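Before the chart picture, the comparison she just wrote out can be summarized in one line, using E* = Es(rho*/rho_s) for wood loaded along the grain:

```latex
\frac{(E^*)^{1/2}}{\rho^*}
  = \frac{\bigl(E_s\,\rho^*/\rho_s\bigr)^{1/2}}{\rho^*}
  = \left(\frac{\rho_s}{\rho^*}\right)^{1/2}\frac{E_s^{1/2}}{\rho_s}
  \;>\; \frac{E_s^{1/2}}{\rho_s}
  \quad\text{since}\quad \rho^* < \rho_s .
```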
And say this was log of the modulus and that was log of the density. And I think I'll just draw the envelope. So foams were somewhere down here, metals were somewhere over here, and elastomers were somewhere in here. I think ceramics were up here. And then I can't remember exactly where composites were, but composites were around about here. I'll just say FRC for fiber reinforced composites. And I think woods were kind of in here. Something like that. And then we had our performance index, right? So remember, there was a performance index, something like that. And the slope of that was e to the one half over rho. So every point on that line had the same value of e to the one half over rho. And essentially, if you had the fiber composite and you made a honeycomb out of it, you would be taking the data from here and shifting them out that way. You'd be pushing them out over here, so you'd get a higher value of that performance index. OK? So that's the test from last year. That's the end-- yeah? AUDIENCE: So for along right here or different it would be cubed. LORNA GIBSON: Yeah, so the thing about the honeycombs is-- AUDIENCE: The opposite. LORNA GIBSON: Right. AUDIENCE: [INAUDIBLE]. LORNA GIBSON: It'd be worse, that's true. So the thing about the honeycombs is they're very stiff in the axial direction, but you pay for that in the other directions. And it's the same for wood. So the wood is very good when you load it along the grain, but you pay for it the other way. But if you think of it from the tree's point of view-- if you're a tree. So here's my little tree. So here, say we have a tree trunk, and we have some branch. Branch over here, branches, tree. So the grain is lined up this way. And then when there's a branch, the grain turns around and goes that way, right? So if you think of the tree as a whole, the whole tree blows in the wind like this. So it's like a column like this, and everything's lined up that way. And you're loading it this way. So that is the stiff direction, right? And if you're a branch, the branches are more loaded by gravity. So they're loaded that way. And then because the fibers, the grain turns around, they're also oriented in the good direction. So from the tree's point of view, it's optimized things. Then you remember when I talked about the old wooden sailing ships, when they made the old wooden sailing ships, if this was the deck here and that was the hull there, they would get pieces of wood to fit in here that were called the knees. That was the knee. And they would try to get a piece that was from a branch like this. And they would try to match the curve of that joint with the branch with the curve that they needed in here so that the grain followed the pattern of what they needed for the boat. OK. Other questions? AUDIENCE: I'm not quite sure what the difference is between tangential versus radial here. LORNA GIBSON: Oh, OK. So in the wood, you mean? AUDIENCE: Yes. LORNA GIBSON: OK. So can I rub this stuff off? We're happy? Let's see. Say again? So tangential and radial. OK. So say the wood cells look something like this. So these would be the fiber cells or the tracheids. And then the rays typically are more rectangular cells. So they might look something like that. And then there would be some more fibers or tracheids, depending on if it was a softwood or a hardwood. So these would be either fibers or tracheids in a hardwood or a softwood. And then these would be the ray cells in here. So they have a different structure. They just look different. 
They're different shape. AUDIENCE: This is the top? From the top? LORNA GIBSON: Yeah, this is looking from the top down. And then if you think of the tree-- so the tree's going to have growth rings, right? So the growth rings are going to look-- obviously I'm not making perfect circles, but you get the idea-- something like that. And then the rays go this way. They go radially. OK? So this would be the radial direction, and then those are the rays. Are we good? AUDIENCE: So which way's the tangential? LORNA GIBSON: So the tangential would be this way on, OK? So if I loaded this way like that, that would be loading it in the tangential direction. And if I loaded it this way, that would be loading it in the radial direction. The length of the rays runs in the radial direction. The length this way on. So this thing here corresponds to one of these lines I've drawn here. And then these guys here are the stuff in between here. AUDIENCE: How do you know what the tangential, the Young's modulus and stiffness is? LORNA GIBSON: So say we were loading it tangentially, we're loading it like that. Then-- AUDIENCE: If you have a tree, how do you apply a tangential load on it? LORNA GIBSON: Oh, well it's not the whole tree, right? So say we have a piece of wood that we cut out like this. So say I have that. And if I loaded it this way on, I'd be loading it tangentially. The tree's big, right? So I'm not talking about loading the whole tree, I'm talking about taking a piece of wood out of the tree and loading it. AUDIENCE: OK, so you can't really load tangentially for the entire trunk. LORNA GIBSON: No, I'm talking about taking a piece out and loading that piece. AUDIENCE: How about a ray here? Do you take the-- LORNA GIBSON: So the same thing. You'd take-- say this was the piece of wood that you were looking at. Now you would just load it this way on. OK? I think-- I brought my thing because I have the slides. Let me see if I can find-- I think there was a slide that showed this. OK. That was Furry Fridays. That was the wood sculptor. Here we go. OK, so imagine that that cube is your piece of wood that you're loading, right? So imagine this is the-- you cut a little piece out. Then you're loading it tangent. Can you see, then? You can load it-- you're not loading the whole tree. AUDIENCE: Yeah, I was thinking about loading the entire tree and then applying the tangential load on it. LORNA GIBSON: Yeah, because then-- I see the problem. AUDIENCE: It's going to give us shears. LORNA GIBSON: Yeah. Yeah, because I could say, well, if I was trying to load the whole thing. Say I was loading it from here to there. Well if you look at it one way, it looks tangential. If you look at it the other way, it looks radial. So think of cutting a piece out, because that is what you're going to do. You're going to cut a piece out. OK? All right. Are there other questions? Yes? AUDIENCE: With that formula sheet, do you only give that formula for the honeycombs, or also for the foams? LORNA GIBSON: I'm going to give you-- if you look at, I think, problem set 2, I gave you a sheet that had three pages of equations. And it looked exactly like this. So there was one saying, properties of two dimensional cellular solids-- honeycombs. There was a whole thing of in plane properties and out of plane properties. That was one page. The next page was properties of regular hexagonal honeycombs. And then the next page was properties of three dimensional cellular solids foams. AUDIENCE: OK, excellent. Thank you. LORNA GIBSON: OK? 
Like I just said, I think there's maybe one or two equations missing from this. But if it was something you needed, I would give it to you. I would give it to you. OK? AUDIENCE: So I should have scrolled down for it? LORNA GIBSON: What? You should have scrolled? So you're like me. This happens to me all the time. I have some website. I'm looking at it. I'm like, OK. I got it. I think I've got everything. And then I realized I'm supposed to-- I missed something because I was supposed to scroll down. AUDIENCE: Problem set two was only the honeycombs. That's why. LORNA GIBSON: Well, I think that was probably all we covered was the honeycombs on that problem set. AUDIENCE: So that's why I only got that part. LORNA GIBSON: OK. All right, yeah. So I'm going to give you, this will be attached to the test. OK? So I think on the test that I posted it was attached, wasn't it? Yeah. [INAUDIBLE], did you have a question? AUDIENCE: For 2 part D, I don't think we went over that. I was confused as to how-- is it just that equilateral triangular cells always have-- is always truss behavior? LORNA GIBSON: Let's see. AUDIENCE: Oh, sorry. It's the 2014 test. LORNA GIBSON: 2014-- oh, sorry. I missed a part. Sorry. Yeah, so it says, the same titanium alloy is used to make a honeycomb with equilateral triangular cells. And what is the in plane Young's modulus for loading in the x 2 direction of the triangular honeycomb? So this is-- say you have cells that look like that now. And that's x 1. That's x 2. OK. So the Young's modulus for this-- so I happen to remember the formula. So I guess I'm thinking you might have put this on your cheat sheets. It's 1.15 times Es times that, times the relative density. So I'm trying to remember. Do I have that on here? I don't have it on here. AUDIENCE: Are we just supposed to know that? Are we just supposed to-- LORNA GIBSON: Well, I guess what I would hope that you would know is maybe not the constant, but that it should go as Es and linearly with the relative density. Because it's a truss and because it deforms axially. I don't really expect that you would remember the 1.15. Yeah? AUDIENCE: So does that mean for all the foams, of which there were like 10 constants, we don't need to write all down, like what C1 equals-- LORNA GIBSON: I think that's what this thing gives you. Let's see. It gives you all the Cs. All right, then I'll make sure I give you the Cs. I'll make sure I give you the Cs. But who's got a pen so I can write that down to make sure that I do that? Does somebody got a piece of paper. Or I could write it down here. Oh, here we go. I can write it down this little sticky thing here. OK. So I'll stick that on there so I remember to do that. AUDIENCE: Will you also be writing what Cs? Because you have, in your equation, C1, C2-- LORNA GIBSON: Well, I think I would say-- say I asked you for, I don't know, the yield stress for a foam in compression that yields plastically. I would say, the constant for that is 0.3. I wouldn't give you C2. I wouldn't do it by numbers, I would tell you what the number was for the thing you needed, because I mean the way the numbers are, the only reason they're numbered is because that's the number they are in the book. They're just ordered sequentially in the book. But I don't expect you to remember which-- it's C6, or C5 or something. So anyone else? AUDIENCE: Can you explain the difference between uniaxial yield and plastic buckling? LORNA GIBSON: Oh, OK. 
So if you have something and it fails by uniaxial yield-- so say you have a honeycomb like this, and you're loading it this way on, OK? So if you're loading it that way on, these walls of the honeycomb are just axially deforming, initially. Right? So for the elastic behavior, they just axially deform. So it works out that if these cell walls are very thick, then you can reach a yield stress before any buckling occurs. And then the strength would just be that yield stress of the cell wall material times the relative density. OK? But the cell walls have to be thick for that to happen. So then imagine that the cell walls aren't thick. Imagine that the cell walls are thin. So say I have the same honeycomb like this. If the walls are thin, and say the solid material itself-- so this is for the solid-- it has some stress strain curve. And it may have a linear elastic part, and then a yield thing like that. So say this is the yield strength here. Say we compress that this way on the same thing in the three direction. Then if a material's got a yield point, there can be an interaction between plastic yielding and elastic buckling. And you can get plastic buckling. And the plastic buckling, you're going to get the wrinkles that go along the length of it this way. Remember I showed you that tube that kind of collapsed and folded up? That's plastic buckling. OK? And typically, people use what's called the tangent modulus to calculate the buckling stress for plastic buckling. And the tangent modulus would be something related to the tangent over there. I don't expect you to be able to derive plastic buckling equations. But the plastic buckling-- you know what elastic buckling is, right? Yeah. Yeah. So one way to think about plastic buckling is, if you have-- and I'm trying to remember. This is called the slenderness ratio. And I'm trying to remember, is that l over r? Imagine you had just a circular cross section and you had a length, l. So you have a column here, and it's got a length, l, and it's got a radius, r. Like that, OK? So the longer it gets, the more slender it is, the higher the slenderness ratio is. And this, I think, is some sort of stress. If the slenderness ratio of just a single column is short-- if it's stubby, if you had a column that looked kind of like that, it's not going to buckle. It's going to yield. And so if you compress that, it would just yield. And it's just going to yield at the yield stress, right? It's just going to yield at sigma y of the solid, whatever the solid is. If you have a long column, it would buckle elastically by Euler buckling. And Euler buckling-- let's see. I'm going to run out of room here. If you think of it in terms of a stress instead of a load, it's going to be n squared pi squared Es I. Let's say I goes as r to the fourth. And this is going to be l squared r squared. Right? This is going to be a pi in here. There's going to be a pi in here. I might have lost a factor of 4, but it's going to be-- let me make this proportional, OK? So the slenderness ratio, there's going to be an l over r squared term here. So I could cancel out the four there and put a squared. So sigma Euler is going to go as Es times r over l squared. Like that. And so this is the Euler buckling stress here, OK? So this would be elastic. And right here at this little corner, it turns out life isn't quite that mathematically exact. If you're near that corner, it's not like here it's buckling elastically, and here, it's buckling plastically. 
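Before the data picture, a small sketch of that slenderness argument for a single pin-ended, solid circular column: it yields if the yield stress is reached first, and buckles elastically if the Euler stress is lower. The geometry and numbers below are purely illustrative, not part of the test question:

```python
import math

def euler_stress(E, r, L, n=1.0):
    # sigma_Euler = n^2 * pi^2 * E * I / (A * L^2), and I/A = r^2/4 for a solid circle,
    # so sigma_Euler goes as E * (r/L)^2, which is her proportionality above.
    return n**2 * math.pi**2 * E * r**2 / (4 * L**2)

E, sigma_y = 110e3, 880.0        # MPa, borrowing the titanium alloy values
r = 1.0                          # mm, illustrative radius
for L in (5.0, 20.0, 80.0):      # mm, stubby to slender
    s = euler_stress(E, r, L)
    mode = "yields first" if sigma_y < s else "buckles elastically first"
    print(f"L/r = {L/r:4.0f}: Euler stress ~ {s:8.1f} MPa -> {mode}")
```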
What happens is, if you looked at data, data might do something like that. So there's some interaction between the elastic and the plastic. And that's kind of what's going on with this thing here. Does that make sense? AUDIENCE: Plastic buckling can-- OK, so if you unload plastic buckling, you get some of the elastic part back? LORNA GIBSON: You're not going to get much back. AUDIENCE: You're not? LORNA GIBSON: No, because once-- to get the plastic buckling, you're very close to this. By the time you get that deformation, you've got-- locally, it's yielded. It's not all elastic everywhere. It's going to yield in places. And once it starts yielding, it's-- if you think of these buckles forming, it's not like you're at one spot on this curve throughout the whole thing. Some of it's more deformed, and some of it's less deformed. Let me pull up those plastically buckled columns, those tubes. Get rid of that one. Let me try and remind myself where they were. I think-- honeycombs, I want honeycombs. Out of plane, that's what I want. It was this thing here. So you see when you have-- that's just one tube, but the whole honeycomb would-- imagine that you have groups of tubes put together. They would have to fail in some compatible way. But the deformation and the stresses are not going to be uniform through this whole thing, right? One part of it's going to be at one stress, and something else is going to be at another stress. So parts of it are going to yield plastically, and you're not going to recover that. OK? So in fact, they use these sorts of things for energy absorption devices, like in cars and things like that. To absorb the energy from the impact. More questions? Let him have a turn. AUDIENCE: Sorry, I have a question about, I think, 2013. LORNA GIBSON: 2013, OK. AUDIENCE: The last question. It's about the plastic. LORNA GIBSON: 2013. Let me rub some of this stuff off. OK. Here we go. OK. Oh, we haven't covered this at all. So the last question, this one here? AUDIENCE: Right. LORNA GIBSON: Yeah, so this question's on energy absorption. We haven't got there yet. AUDIENCE: How about-- OK. LORNA GIBSON: And the third question's on sandwich structures. So when I taught the course in 2013, I did the topics in a different order. So I did honeycombs, and I did foams. And then I think I did sandwich panels, and I did energy absorption. And I left the stuff on the wood and the cork to the end. So we haven't done that. So don't panic if you haven't-- if you can't do that. OK? You should have known. Come on, you should have known that if it talked about things we haven't covered yet, I'm not going to give it on the test. OK, what else? AUDIENCE: Can you explain more about plastic hinges? LORNA GIBSON: Plastic hinge, OK. So let's just say we have a beam in bending, OK? And say it just has a load p in the middle, all right? Are we good? So this load in the middle, then this reaction is p over 2, and that reaction is p over 2. And if I drew the shear force diagram, it'd go p over 2 up, we go over, go down by p-- like two p over 2's down-- over, and back up. OK? And then if I drew the bending moment diagram, it would go up and down like that. And that would be zero. And that would be zero. And this would be PL over 4. OK? Are we OK with that? We haven't got to the end of the answer, but-- AUDIENCE: I have a question. We're not expected to-- LORNA GIBSON: No, no, you don't need to do this. I'm just trying to explain it now. You don't need to retain that information, for heaven's sakes. No. Come on, I'm so disappointed. 
AUDIENCE: For the moment, is it positive for the counterclockwise turns? LORNA GIBSON: Oh, so you remember for the beam bending, there's a different convention. It's positive if it's tension on the bottom. Are you in mechanical engineering? I thought you were in mechanical engineering. AUDIENCE: No, I take physics. LORNA GIBSON: Oh, you do physics. All I really want to say is the moment's maximum in the middle, OK? So let me just say the moment's maximum in the middle, OK? So then let's look at the cross section. So say I look at a cross section here. Let's just make it rectangular to make it easy for me to draw. So it has width, b, and a height, h. OK? So this would be h on this picture over here. And remember, the neutral axis goes through the middle on the cross-section here. And so one half of the beam is in tension, and the other half of the beam is in compression. So for this situation here, this half of the beam is going to see compression, and that half of the beam is going to see tension, OK? Are we happy with that? We're happy with that. OK. Now let me draw the stress distribution. So if it's linear elastic and it hasn't yielded yet, the stress distribution is going to look like this. So this is h again. That's the height. And now b is into the board. And I'm plotting the stress this way. So this thing here is my neutral axis. It has no stress. Remember, there was one plane that has no stress, and for a rectangular cross-section, it goes through the middle of the cross-section. It goes through the centroid. Is this ringing a bell? I'm hoping this is ringing a bell. Come on. I know we did this in 302, too. I know we did. OK. OK, so this is all linear elastic, right? So at some point-- so I'm going to get to the plastic hinge. At some point, if you keep loading it and p gets bigger and bigger, the moment gets bigger and bigger, the stress gets bigger and bigger. Remember, the equation here for the stress is equal to My over I. This moment, the maximum moment's going to be this moment here. The maximum y is going to be h over 2, the distance from the neutral axis. And I is going to be bh cubed over 12 for the rectangular section. So if I keep loading it up, at some point, the maximum stress is going to equal the yield stress, right? And in our cellular things, it's going to equal the yield stress of the solid. So our beam is one of our edges in the foam, or struts in the honeycomb. So at some point, it's going to equal the yield strength of the solid. So let me draw the stress distribution again, where we start to have plasticity. So here, the stress is equal to the yield stress. And it's going to equal the yield stress at the bottom, too, because it's all symmetric, right? And the neutral axis is still going to be in the middle here. So it's going to be 0 down there. So initially, when it's just barely reached the yield stress at the outer part of the beam, then this stress distribution would still be linear in between. But once you start to load it more than that, then the plastic region starts to seep in from the outside inwards. And what we do here is we assume that the solid is elastic perfectly plastic. And if you remember, when we said things were perfectly plastic, or if they were elastic perfectly plastic, they look like that. The stress strain curve for the solid, I'm assuming, looks like that. So I'm assuming the yield stress in the solid is just a constant. That if I strain it more, there's no work hardening. I'm neglecting work hardening. 
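A small sketch of the bookkeeping up to first yield for that centrally loaded rectangular beam: maximum moment PL/4, outer-fibre stress My/I with y = h/2 and I = bh^3/12. The numbers below are made up purely for illustration:

```python
# First yield occurs when sigma_ys = M*(h/2)/(b*h**3/12), i.e. M_y = sigma_ys*b*h**2/6,
# and with the central point load, M = P*L/4, so P_y = 4*M_y/L.
sigma_ys = 880.0            # MPa
b, h, L = 2.0, 2.0, 20.0    # mm, an arbitrary stubby beam
M_y = sigma_ys * b * h**2 / 6          # N*mm
P_y = 4 * M_y / L                      # N
print(f"moment at first yield ~ {M_y:.0f} N*mm, load at first yield ~ {P_y:.0f} N")
```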
AUDIENCE: You said inward that the-- LORNA GIBSON: In board? AUDIENCE: Inward, you said something like-- LORNA GIBSON: Inward. So this is-- let's see. I didn't bring a beam with me today. No beam. Do we have anything beam-like? Ah, here we have a beam-like thing. OK. So say this is my beam, and I'm loading it this way on, OK? And this is b, and that's h. So this picture here is looking at it that way on, OK? And this picture here, I've drawn h, but now I'm just looking at the stress distribution across h. And b is into the board. Is that OK? Does that answer your question? AUDIENCE: No, I mean for the plastic. LORNA GIBSON: For this part? OK. I'm working up. I haven't finished it yet. So this is the same kind of view as here. I drew it a little bigger. I shouldn't have drawn it bigger, I should have drawn it the same height. But it's the same thing. OK? So you'll buy that at some point, we reach the yield strength here. And if I keep loading it up and I assume that the solid is perfectly plastic, that there's no work hardening, then the stress distribution would look like this. OK? And then if it yields more, then it's going to look like that. And if it yields more, eventually I'm going to get to the stage here, where it's-- let me redraw this. That would be-- you get the idea, OK? This will go over here, and down here, and like that. OK? OK. So are we happy with this stress distribution across the cross-section? Yeah, OK. So that's when it forms the plastic hinge. So when it forms a plastic hinge, the stress distribution looks like this. So these are supposed to be the same size. They're not quite. Let's see. So one of the things I talked about was the plastic moment that kind of characterized that plastic hinge. And the plastic moment is just the internal amount of moment that the beam can withstand when it's yielded completely across the whole cross-section. So when we're at this point here. So you calculate that by saying, that stress there is equivalent to a force. That stress there is equivalent to a force. And you get the moment by multiplying those forces times that distance there. OK? Because you think of those two forces as being a couple, and the moment's the force times the distance between them. So the plastic moment was sigma ys times-- and say we're talking about our honeycomb or foam or something-- that was our cell wall thickness, t. So let me call it t instead of h for the foams and the honeycombs. So this force here is going to be the stress times the area over which it acts. And let's say we look at it for a honeycomb. Then I've got the stress acting over this distance here, and then times the depth into the page, right? And if it's the honeycomb, that depth into the page is just b, OK? And then this moment arm here is just t over 2 as well, because that's t over 4, and that's t over 4. So it's t over 2 again. So it's sigma ys bt squared over 4 for the honeycomb. And say we have an open-cell foam. The edges aren't of thickness b, they're of thickness t. So then Mp is just sigma ys t cubed over 4. Are we happy? And really, physically, what that means is that the beam can't hold any more force. You can't apply any more force to it. It's just going to rotate like this once you've gotten to that plastic moment. That's why it's called a hinge, because it can just rotate like a hinge rotates. Like a door hinge, OK? AUDIENCE: How does it rotate? LORNA GIBSON: Well, where's my original picture. 
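Picking up the plastic moment she just wrote down, before the sketch of the hinge rotating: the two yield-stress blocks act as a couple with lever arm t/2, giving Mp = sigma_ys b t^2 / 4 for a honeycomb wall of depth b, and Mp = sigma_ys t^3 / 4 for a square open-cell foam edge. The wall thickness and depth below are chosen arbitrarily, just to show the arithmetic:

```python
sigma_ys = 880.0          # MPa
t, b = 0.1, 1.0           # mm: wall thickness and honeycomb depth, illustrative only

Mp_honeycomb_wall = sigma_ys * b * t**2 / 4    # N*mm
Mp_foam_edge = sigma_ys * t**3 / 4             # N*mm
print(f"Mp (honeycomb wall) ~ {Mp_honeycomb_wall:.2f} N*mm")
print(f"Mp (open-cell foam edge) ~ {Mp_foam_edge:.3f} N*mm")
```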
So if this was the beam, when you form the plastic hinge, your beam would just look like that. And this would be your hinge point. I'm a civil engineer originally. We try to avoid this. So that's why, in the foam and in the honeycomb, that's when it fails is when you get that plastic hinge forming. OK? All right. We have a few more minutes. AUDIENCE: That kind of looks like plastic buckling. LORNA GIBSON: Well yeah, it's not buckling, but it's plastic, yeah. It's permanent. Anyone else? OK. No other questions? Should we call it a day? Is that helpful? All right, then. It's what I do. Come on. It's what I do. All right. So I'll see you Wednesday. And my plan is to grade the tests before spring break, so I shall have it back to you.
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
2_Processing_of_Cellular_Solids.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. LORNA GIBSON: I think, last time, we got as far as talking about processing of foams and we talked a bit about processing of polymer foams. And today, I want to pick up where we left off. And I was going to talk about processing metal foams. We'll talk a little bit about carbon foams, ceramic foams, and glass foams. We'll finish this section on processing today and then we'll start talking about the structure of cellular materials. And hopefully-- we won't finish that today, but we'll finish it at the start of tomorrow's lecture. And then, we'll start doing mechanics of honeycombs tomorrow. OK? So that's the scheme. I thought what I'd do-- I have a whole series of slides that show, schematically, a variety of different processes for making metal foams. So I thought I would just go through the slides-- and I think I did this really quickly last time-- but I would write down a little bit of notes on each of the slides so that you've got some notes on it, too. This is the first method here. And many of these methods were developed for aluminum foams. But you could, in principle, use them for other types of foams, as well. This first method here-- let me see if I can get my little pointer. [INAUDIBLE] The idea here is that you take molten aluminum. Down in this bath here, you've got molten aluminum. And they put, into the aluminum, silicon carbide particles. And the silicon carbide particles adjust the viscosity of the melt. They make it more viscous. And then they just have a-- I've got this thing here-- they've got a tube here that they blow gas in with. And if you go to the bottom of the tube, there's an impeller, or a little paddle, that stirs the gas up. And so the gas just forms bubbles in the molten aluminum here. And then they have conveyors which pull off the molt-- or the metal foam, and then it cools. The idea is that if you just had the aluminum, you couldn't really do this because the bubbles would collapse before they cooled down and became a solid. But adding the silicon carbide particles increases the viscosity of the melt. It helps prevent drainage of the foam. Normally, if you have a liquid foam, just from gravity, you're going to have some drainage. It's going to tend to-- some of the liquid is going to tend to sink, just from gravity. By putting the silicon carbide particles in, you increase the viscosity. It helps prevent the drainage. And you can get the foam bubbles to be stable. Let me just write one little note here. The first method here involves just bubbling gas into the molten aluminum. And that molten aluminum is stabilized by silicon carbide particles. Sometimes people use alumina particles, and the particles increase the viscosity of the melt. And that reduces the drainage and it stabilizes the foam. That process was developed at Alcan in Canada and at Norsk Hydro in Norway. And this foam here is an example of the foam that they've made. And you can kind of see-- there's a density gradient in this foam. I'll pass it around. You can see it. But the bubbles are smaller down here and there's fewer of them than there are up here. And that's partly from the drainage. The molten aluminum is draining down and you get more liquid at the bottom and then you get more solid at the bottom. 
So that's the Alcan foam. OK. Another-- there's half a dozen of these processes for metal foams. Another version of the process involves taking metal powder and combining it with titanium hydride powder. And then you consolidate it and you heat it. So if I can show through the schematics here. In the first step, you take the two powders, you mix them up. So the second thing down here is mixing the powders up. And then you press them together. There's a die here that you press it in, and then you get pieces. And then you can heat that up. And the way that this works is that the titanium hydride decomposes at a temperature of about 300 degrees C. And if the other powder is something like aluminum-- aluminum melts at something like 660 degrees C-- the aluminum has become soft at 300 degrees C, but it's not molten. And the titanium hydride-- when it decomposes-- forms hydrogen gas. So the hydrogen gas forms the bubbles that you need to make the foam. Are you riding your bicycle? You are tough. Very tough. I ride my bike, but I gave up a couple weeks ago. Well, three weeks ago, I guess. When it started snowing, I gave up. The idea with this process is you can use titanium hydride powder with aluminum and the aluminum becomes soft at the temperature that the titanium hydride decomposes. And when it forms the hydrogen gas, that gives you the bubbles, and so you get the foam made by that process there. And that was developed by a place called Fraunhofer. And I have one of their little foams here. This is an example of their foam made by that powder metallurgy process there. OK. Let me just write a little note about that. So you can mix up titanium hydride powder with a metal powder and then heat that up. When you do this, you need to have a metal powder that's going to be deforming by, say, high temperature creep at the temperature that the decomposition of the titanium hydride happens. And for aluminum, that works. So you need to have the metal material be able to deform fairly easily, in order to get these bubbles to form a nice foam. But that's another way you can do it, is by consolidating these two powders. Here's another way to do it. You can also make use of this property of the titanium hydride-- that it'll decompose and form hydrogen gas-- by just putting it into molten aluminum. And in this example here, you've got an aluminum melt in here. And this time, they've added 2% calcium to it. Again, to adjust the viscosity. And then they add the titanium hydride in this step here and they've got a little mixer thing here that's spinning around and will mix it. And then they'll put a lid on it to control the pressure and the titanium hydride will decompose, the hydrogen gas will evolve, and you'll get the foam made by that method, too. So you can stir titanium hydride right into a molten metal, as well. OK. So that's another method. And I think that was made by something called the Alporas process. And this is an example of one of their foams. I'm pretty sure that's an Alporas foam there. Yep. AUDIENCE: [INAUDIBLE] LORNA GIBSON: They can't really control it perfectly. In that first example that's going around-- maybe you might have missed it-- there's this drainage. The first one's made by a molten process and you get drainage. And you get different sized bubbles. And you get different-- you get a density gradient in the thing, as well. So you can't control these things perfectly. OK. Here's another method for the metal foams. This one here involves replication. 
In this method here, what you do is you start out with an open-cell polymer foam. In this step up here, there's an open-cell polymer foam. You fill that with sand. So you fill up all the open parts of the cells with sand. Then you burn off the polymer. And so you've got little channels where the polymer used to be. And then you infiltrate those channels with the molten metal that you want to use. So you replicate the polymer foam structure. This is the infiltration process here. This little thing here is your furnace. And then you get rid of the sand and then you're left with a metal replica of the-- a replica of the original polymer foam. And this example here is one of these things that's made by this replication process. So that's an open-celled aluminum. I think the-- if you look at the density of these things, they're fairly low. And so there's quite a large volume of pores and they're all interconnected. So it's not that hard to get the sand out. This method would involve replication of the open-cell foam-- the polymer foam-- by casting. So you fill the open-cell polymer foam with sand. You burn off the polymer. And then you infiltrate the metal into that. And then you remove the sand. OK? Then, another process involves just using vapor deposition. So you take an open-cell polymer foam again. Here-- let me use my little arrow. Here's the open-cell polymer foam up here. And you have a furnace here with a vapor deposition system. And they use a nickel carbonyl-- Ni(CO)4-- system to do this. You then burn out the polymer. You're left with a metal foam with hollow cell walls, where the polymer used to be. And then usually, what they do is they sinter it. They heat it up again to try to densify the walls. The only teeny weeny problem with this process is that the gas they use is incredibly toxic. And so it's not cheap and it's got health hazards, as well, associated with it. But you can do it. That gives you a nickel foam when you're finished with that. And you could also use an electrodeposition technique that's similar. OK. That's another method. Another method is shown here. This is the entrapped gas expansion method. And what they do in this method is they have a can and the can has a metal powder in it. It's whatever metal you want to make the foam out of. In this example here-- it's probably too small for you to read in the seats-- but it's titanium. They've taken a titanium alloy. They've got a powder of titanium alloy. They then evacuate the can. They take all the air out of it. And then they back fill it with argon gas. They put in an inert gas in there. And then what they do is they pressurize and heat the thing up and so the gas is internally pressurized by doing that. And then, sometimes when they do this, they want to have a skin on the two faces. So in this next little image here, it's done where they roll the can and produce a panel that's got solid faces. And then when you heat it up, the gas expands and you end up with a sandwich panel by doing this. So this bottom figure down here-- I'm not having much luck with the pointer. This bottom figure down here, they've heated it up. The gas expands and you've got this solid skin on the thing, which is from where the can used to be. So that's trapping of a gas. OK? There's a couple more methods for the metal foams. One involves sintering hollow metal spheres together. And the trick there is to make the hollow spheres. And the way that can be done is by taking titanium hydride again, if you want to make titanium spheres.
You put it in an organic binder-- in a solvent-- so you've got a slurry here. And you've got a tube that you blow gas through. And as you do that, you get hollow titanium hydride bubbles. And then you can do the same thing where you heat that up. The hydrogen gas evolves off, and you're left with titanium. You're left with titanium spheres down at the bottom here. And then you can pack those together and press those and form a cellular material that way. You can also sinter hollow metal spheres. And the last method I've got for the metal foams is that you can use a fugitive phase. With the fugitive phase, you would take some material that you could get rid of at the end of the process. Say, something like salt, that you could leach out. Here, we have our bed filled with salt. And then you would infiltrate that with a liquid metal, typically under pressure. And then, after the metal's cooled, you leach out the salt. You get rid of that. You can pressure infiltrate a leachable bed of particles, and then leach the particles out. OK. We have a whole variety of methods that have been developed to make metal foams. And most of these have been developed, probably, in the last 20 years. Something like that. Some of them are a little bit older than that. But there's been a lot of interest in this recently. Those are all methods to make metal foams. I wanted to talk just a little bit about a few other types of foams. People make carbon foams. And they use the same kind of method as they do to make those bio carbon templates I told you about. When you take wood and you heat it up in an inert atmosphere and it turns into a carbon template, you can do the same thing where you take a polymer foam, you heat it up in an inert atmosphere, and everything except the carbon is driven off. And you're left with a carbon foam. It's the same process they used to make carbon fibers. There's carbon foams. There's also ceramic foams that you can make. I brought the little sample of ceramic foam in again. You can pass that around. And those are typically made by taking an open-cell polymer foam and passing a ceramic slurry through the polymer foam so that you coat the cell walls. And then you fire it so that you bond the ceramic together and you burn the polymer off. And you're left with a foam that's got hollow cell walls. You can also make ceramic foams by doing a CVD process on the carbon foam that you could make by the previous process. And people also make glass foams. And to make glass foams, they use some of the same kinds of processes as people use for polymer foams. I'll just say similar processes to polymer foams. OK. That covers making the foams, and I think we talked about the honeycombs last time. I wanted to talk a little bit about making what are called 3D lattice materials, or 3D truss materials, as well. Let me [? strip ?] that up there. Yeah? AUDIENCE: [INAUDIBLE] LORNA GIBSON: Chemical vapor deposition. AUDIENCE: [INAUDIBLE] use this for metal foams? LORNA GIBSON: Well, people were quite interested in using them for sandwich panels-- the cores of sandwich panels-- lightweight panels. There was interest in using them for energy absorption, say, car bumpers. The automotive industry was quite interested in this, in terms of trying to make components with sandwich structures that would be lighter weight, or energy absorption for bumpers. Or filling up-- if you take-- say you take a metal tube.
If you think of a car chassis and it's made of tubes, if you fill those tubes with a foam-- especially if you fill them with a metal foam-- you can increase the energy absorption quite substantially. So when you have a tube in a chassis, if it's loaded axially, it will fold up and you get all these wavelengths of buckling. And if you've put a foam in there, it changes the buckling wavelength and it increases the amount of energy you can absorb. So not only is the energy absorbed by the foam itself, it actually changes the buckling of the tube so you can absorb more energy. So there was a lot of interest in that. There was an interest in using them for cooling devices for, say, electronic components. The idea was you would take, say, an aluminium open-cell foam and you would flow air through that. And say you have your device that's generating heat, you'd have a foam underneath it. And the aluminum conducts the heat fairly well. And then you would blow air through it to try to cool it off. So there were a bunch of different applications people have had in mind for them. AUDIENCE: What about glass? LORNA GIBSON: Glass foams, I think, are largely used for insulation in buildings, believe it or not. I think, actually, one of the dorms at MIT-- maybe the Simmons dorm-- has a glass foam insulation. AUDIENCE: [INAUDIBLE] LORNA GIBSON: Well, I think because the foam-- because the cells are closed, the gas is trapped in the cell. Whereas with a fiberglass, gas could move through the fibers more easily. So I think that's partly how it works. OK. Well, let me talk a little bit about the lattice materials, too, and how we make those. We're going to start talking about the modeling of honeycombs and foams. And when we do that, we're going to see that if we have a structure that deforms by bending, the properties vary with the amount of material, in a certain way. But if we have materials where the deformation is controlled by axial deformation, the stiffness and the strength are going to be higher at the same density. People made these lattice-type materials to try to get something with a more regular structure, and especially a triangulated structure. You see how these things are like little trusses? Triangulated? Triangulated structures, when you load them-- say I load this like this-- there's just axial components-- axial forces in each of the members. And so, theoretically, this would be higher stiffness and strength for a given weight than, say, a foam would be. So people were interested in these lattice material. This one here is made of aluminum. And I wanted to talk a little bit about how you can make these things. One way you can do it is by injection molding. And this here is just the centerpiece of something that would look like this. So there would be a-- I didn't bring it, but there's a top face and a bottom face that fit onto this. And they would be injection molded as three different pieces, and then assembled together. So you can make a mold in this complicated geometry, and you can make a lattice material by injection molding. We'll start with polymer lattices first. One way is injection molding. Another way to do it is by 3-D printing. You can generate a structure like that by 3-D printing. You can also make trusses in 2D and you can make them so that you can snap fit those together. So you can make little 2D trusses. Here's a little truss here and here's a little piece of a truss here. And you can make a little snap fit joints. Do you see how these ones have little divots in them? 
And you can make it so that these things will fit together. I think these guys-- can I do it? No. You'd have to take-- oops. Wait a minute. No, it's not that way. There we go. So you can snap them together like that. OK. I can't get it to-- there we go. So you can do that. And you do that over and over again. And if you do it over and over again, you get something that looks like that. OK? You can make a snap fit thing. Let me pass all these guys around and you can play with those. That's the injection molding one. This is the snap-fit one. Got that? Let's see. I think I have a little picture here. This is an example of the snap fit truss here. It's the thing that's getting passed around. And another clever way that was developed was by taking a monomer that's sensitive to light. You take a photo sensitive monomer. And you put a mask on top of it and the mask has holes in it. And then you shine collimated light on it. You shine, say, a laser on it. And the light goes through the holes in the mask. And it starts to polymerize the polymer because it's photosensitive. And then, as the polymer-- as it polymerizes and becomes solid, it then acts as a waveguide and draws the light down deeper into the monomer. And so the polymer acts as a wave guide. It brings the light down. And this is a schematic over here. This is a schematic showing the set up. And these are some examples of some 3D trusses that they've made using this technique. And one of the nice things about this is you can get a very small size cell size. So this is-- I think that bar-- it says 1,500 microns. That's what? One and a half millimeters. So you can get a nice, small cell size if you want that. Let me write that down. You take a photosensitive monomer and then you have it in a mold beneath a mask. And then you shine collimated light on it. And as the light shines on it, it polymerizes the monomer. So then it solidifies and then it guides the light deeper into the monomer. OK. That's that. And then finally, there's metal lattices, as well. And so this is, obviously, a metal lattice here. It's an aluminum alloy. And the metal lattice is made by taking that polymer lattice that was made by the injection molding technique. You coat that with a ceramic slurry. You burn off the polymer, and then you infiltrate the metal where the polymer used to be. OK. That's the section on processing of the honeycombs and the foams and the lattices. So there's a variety of different techniques that people have developed for making these kinds of materials. And I thought it'd be useful to just describe some of the techniques. As I think I said last time, this isn't comprehensive. This doesn't cover every technique. But it gives you a flavor of what techniques people have developed for making these kinds of materials. OK? Are we good? We're good. OK. The next part, I want to do on the structure of cellular materials. And I have a little overview. I don't think we'll finish this today, but we'll finish it tomorrow. Yeah? AUDIENCE: [INAUDIBLE] LORNA GIBSON: Down here? AUDIENCE: What happens to the ceramic? LORNA GIBSON: They get rid of the ceramic. Typically, the ceramic is not very strong and it's just fired enough so they can infiltrate it with the metal. And then they-- yeah, I think with mechanical smushing around, you can get rid of the ceramic. And the ceramic's brittle, so if you break the ceramic, you're not going to break the metal. AUDIENCE: I'm wondering if you could make a type of metal lattice [INAUDIBLE] with reducing [? the ?] oxides? 
LORNA GIBSON: I guess you could, if you could-- but you'd have to then make the oxide in that shape, too. You've always got to make something in that shape. AUDIENCE: [INAUDIBLE] LORNA GIBSON: Yeah, maybe you could make a foam. But to make these lattices, you need this really regular kind of structure and be able to control the structure. OK. Let me scoot out of this set of slides and get the next set up. OK. We want to talk about the structure of cellular solids. And we classify cellular materials into two main groups. One's called honeycombs. This thing down here is a honeycomb. And honeycombs have polygonal cells that fill a plane and then they're prismatic in the third direction. So you can think of them as just being a prismatic-- and they can be hexagons, they can be squares, they can be triangles-- but you can think of them as prismatic cells. And the cells are just in a 2D plane. And then we also have foams. All of these ones over here are foamed materials. And they're made up of polyhedral cells. The cells themselves are three-dimensional polyhedra. And this slide here shows a number of different types of foams. These ones are polymers up here. These are two metals. These are two ceramics. This is a glass foam down here. And this is another polymer foam down here. OK? AUDIENCE: [INAUDIBLE] LORNA GIBSON: No. I just know that. AUDIENCE: OK. LORNA GIBSON: I took those pictures so I know that. No, you can't tell by looking at them. In fact, that's one of the things about how we model the cellular materials. The fact that their structure is so similar is what gives them similar properties. And they behave in similar ways because they've got similar structures. OK. We've got 2D honeycombs, where we have polygonal cells that pack to fill a plane. And then they're prismatic in the third direction. And then we have what we call 3D cellular materials, which are foams, which have polyhedral cells. And then they pack to fill space. The properties of all of these materials depend, essentially, on three things. They're going to depend on the solid that you make it from. If you make the material from a rubber or from an aluminum, you're going to get different properties. So they depend on the properties of the solid. And some of the properties that we're going to use that are important for this type of modeling are a density of the solid-- which I'm going to call rho s-- a Young's modulus of the solid-- which I'm going to call es-- and some sort of strength of the solid-- which I'm going to call sigma ys for now. And you could think of other things. There could be a fractured toughness of the solid. There could be other kinds of things. One thing that the properties of the cellular material depend on is the properties of the solid. Another is the relative density of the cellular material. And that's the density of the cellular thing divided by the density of the solid. And that's equivalent to the volume fraction of solids. So it makes sense that the more solid you've got, the stiffer and stronger the material's going to be. So it's going to depend on how much material you've got. And it also depends-- the properties also depend on the cell geometry. The cell shape can control things like whether or not the honeycomb or the foam is isotropic or anisotropic. You can imagine, if you have a foam, and you've got equiaxed cells, you might expect to have the same properties in all directions. 
But if you had cells that were elongated in some way, you might expect you'd have different properties in the direction that they're elongated relative to the other plane. So cell shape can lead to anisotropy. For the foams, you can also have what we call open-cell and closed-cell foams. If you look at this slide here, and we look at these top right images-- these two up here-- the one on the left in the top is an open-celled foam. There's just material in the edges. There's no faces. And so a gas can flow between one cell and another. And then if you look at the one on the right, this is a closed-celled foam. There's faces. If you think of the polyhedra, you've got solid faces covering the faces of the polyhedra. For an open-cell foam, you've only got solid in the edges of the polyhedra. And the voids are continuous, so they're connected together. And for a closed-cell foam, you've got solid in the edges and the faces. And then the voids are separated off from each other. So we'll say, the cells are closed off from one another. Another feature of the cell geometry is the cell size. And the cell size can be important for things like the thermal properties of foams. It's important for things like the surface area per unit volume. But typically, for the mechanical properties, it's not that important. And we'll see why that is when we do the modeling. OK. Yes? AUDIENCE: For the closed-cell foams-- because we can't really see it without cutting it, is it that all of the faces are closed? Or is it like some fraction of the faces are closed? LORNA GIBSON: If you look at this one on the top right here, they're pretty much all closed. But the reason we have this little picture down here is some of them are closed and some of them are open. So you can get ones that are in between. But typically-- this is kind of unusual. Usually, they're either all open or all closed. If we look at the mechanical properties of cellular materials, typically the cell geometry doesn't have that much of an effect. The relative density is much more important. The relative density, we define as the density of the cellular solid. And when I use a parameter like rho or e or something, if it's got a star, it's for the cellular thing and if it's got an s, it's for the solid. So rho star is going to be the density of the cellular material. And rho s is going to be the density of the solid it's made from. And so the relative density is just rho star over rho s. And I just wanted to show you how this is the volume fraction of solids. So rho star is going to be the mass of solid over the total volume. Imagine you've got a honeycomb or a foam and you've got, say, a unit cube of it, the sum total volume of the whole thing-- the density of the cellular material is going to be the mass of the solid over the whole volume. The density of the solid is going to be the mass of the solid over the volume of the solid. This is really just equivalent to the volume fraction of solids, how much solids you've got. And that's also equal to 1 minus the porosity. Typical values for cellular materials-- I think last time I passed around one of those collagen scaffolds-- those tissue engineering scaffolds. It was in a little plastic bag. That collagen scaffold has a relative density of 0.005, so it's 0.5% solid and 99.5% air. And if we look at typical polymer foams, the relative density is typically between about 2% and 20%. And if we look at something like softwoods-- wood is a cellular material.
And we look at softwoods, the relative density is usually between about 15% and about 40%. Something like that. OK? As the relative density increases, you get more material on the cell edges, and if it's closed-cell foam, on the cell faces. And the pore volume decreases. And you can think of some limit. If you keep increasing the relative density more and more and more, eventually you've got-- it's not really a cellular material anymore. It's more like a solid with little isolated pores in it. And so there's two bounds. And the density has to be less than a certain amount for you to consider it a cellular material in the models that we're going to derive to be valid. And if the relevant density is more than a certain amount, people model it as a solid with isolated holes. If I have a unit square of material, if it's a cellular material, you might expect that you've got pores that would look like this. And you've got relatively thin cell walls, relative to the length of the material. And for a cellular material, typically, the relative density is less than about 0.3. And when we come to the modeling for the honeycombs and the foams, we're going to see that the cell walls deform, in many cases, by bending. And that you can model the deformation by modeling the bending. And that the bending dominates the behavior if the density is less than about that. And at the other extreme, you can imagine if you had just little teeny pores. I have a little pore here and one here and one there and one there. That's not really a cellular material. It's just got a teeny weeny little bit of pores. And that could be modeled as isolated pores in a solid. Each one is acting independently. And people have found that that is appropriate if the relative density is greater than about 0.8. And then, in between, there's a transition in behavior between the cellular solid and the isolated pores in the solid. OK? Are we OK? The next thing I wanted to talk about was unit cells. Especially for honeycombs, people often use unit cells. A hexagonal cell is an obvious one to use to model this kind of behavior. For honeycomb materials, you can have unit cells and you can have different ones. On the left here, we've got triangles, in the middle, I've got squares. On the right-hand side, I've got hexagons. And you can see, even if you have a certain unit cell, there's also different ways to stack it. So the number of edges that meet at a vertex is different for, say, this example on the top left and this example on the bottom left. Here, we've got six members coming into each vertex, and here, we've got four. And again, this stacking for the two square cells is also different. So you can have different numbers of edges per vertex. Another thing to note that's kind of interesting-- if you look at the honeycomb cells here, this one on the top left-- this equilateral triangle one with the stacking-- and this one on the top right-- the regular hexagonal cells-- those two are isotropic for linear elastic behavior, whereas all the other ones are not. So we have 2D honeycomb unit cells. We can have triangles, squares, and hexagons. They can be stacked in more than one way. And that gives different numbers of edges per vertex. And in that figure, a and e are isotropic, for linear elasticity. OK. When we come to modeling the honeycombs, we're going to focus on the hexagonal cells. We'll talk a little bit about the square and triangular cells, as well. 
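[EDITOR'S NOTE] Before the discussion moves on to foams, here is the relative-density bookkeeping from a little earlier written out in symbols. Nothing here goes beyond what was said in the lecture; the symbols are the lecturer's (rho star for the cellular material, rho s for the solid):

\[
\rho^* = \frac{M_s}{V}, \qquad \rho_s = \frac{M_s}{V_s}
\quad\Rightarrow\quad
\frac{\rho^*}{\rho_s} = \frac{V_s}{V} = \text{volume fraction of solid} = 1 - \text{porosity}.
\]

The typical values quoted above are a collagen scaffold at about 0.005, polymer foams at roughly 0.02 to 0.2, and softwoods at roughly 0.15 to 0.4; the bending-dominated cellular-solid models apply below a relative density of about 0.3, and above about 0.8 the material is better treated as a solid with isolated holes.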
And then, for foams, when you look at the structure of a foam, it's obviously not a unit cell that repeats over and over again. But people started off trying to model the mechanical behavior of foams by looking at periodic repeating polyhedral cells. And there's three cells here that are prismatic. We're not really going to talk about those beyond this. So they're not really physically realistic or interesting. But people would use these two cells here in initial attempts to model foams. And this one here is called the rhombic dodecahedra. Rhombic because each of the faces has four sides and dodecahedra because each polyhedra has 12 faces. I forget if I've bored you with my Latin already. Hedron means face in-- oh, this is Greek, I think. Hedron means face. Do is two, deca is 10. So dodeca is two plus 10. It's got 12 faces. OK? So that's the rhombic dodecahedra over here. And then this bottom one down here is a tetrakaidecahedra. It's a similar thing. Tetra's four, kai mean and. Four and 10-- tetra kai deca-- it's got 14 faces. OK? And those two pack to fill space. I think those are the only uniform polyhedra that pack to fill space. Here is the 3D foams. We have the rhombic dodecahedron and the tetrakaidecahedron. And the tetrakaidecahedron packs in a bcc packing. Initial models for foams-- they took these two unit cells. And what they would do is have an infinite array of them to make up the whole material. And then they would isolate a unit cell. And they would apply loads-- some say compressive stress, for example. And then they would figure out what the load, or force, was in every single member, and how much that member deformed. And they would figure out the component of the deformation in the same direction that they were putting the load on. And they would figure out things like a Young's modulus, or they would figure out when there was some failure of one of these struts, and they would figure out a strength for the foam. But you can kind of imagine, geometrically, not that easy to keep straight. A little bit complicated. So one way to model foams is by using these unit cells. But we're going to talk about a different way to do it, as well. OK. So those are unit cells. When they make foams, as we just talked about, one way to make a foam is by blowing a gas into a liquid. And if you blow a gas into a liquid, then the surface tension is going to have an effect on the cell geometry and on the shape of the cells. And if the surface tension is isotropic-- if it's the same in all directions-- then the structure that you get is one that minimizes the surface area per unit volume. And so people were interested in what sort of cell shape minimizes the surface area per unit volume. And Lord Kelvin, in the 1800s, was the person who worked this out. And this is called the Kelvin tetrakaidecahedron. And it's not just a straight tetrakaidecahedron. There's a slight curvature to the cells here, to the faces. And you can kind of see it in some of the edges here. Like if we-- let me get my little pointer. If you look at that edge, it's not straight. This edge here is not straight. It's got a little bit of a curvature to it. But this minimizes the surface area per unit volume. And then more recently, in the 1990's, there were two people-- Dennis Weaire and Robert Phelan-- discovered that this structure here-- which isn't a single polyhedron, but it's made up of a few polyhedra. That has a slightly smaller surface area per unit volume. Smaller by 0.3%. So, a tiny bit smaller. OK. Let's see. 
What I'll say here is that foams are often made by blowing a gas into a liquid. And if the surface tension controls and it's isotropic, then the structure will minimize the surface area per unit volume. OK. That's relevant if the foam is made by blowing a gas into a liquid and surface tension is the controlling factor. Sometimes foams are made by supersaturating a liquid with a gas, and then you nucleate bubbles, and then the bubbles grow. So there's a nucleation and growth process. So that's a little bit different. And if you have a nucleation and growth process, you get a structure that is similar to something called a Voronoi structure. In an idealized case, imagine that you have random points that are nucleation points and that you start to grow bubbles at those nucleation points. So you start off with these random points. And the bubbles all start to grow at the same time and they all grow at the same linear rate. If you have that situation, then you end up with this Voronoi kind of structure. And I've shown a 2D version of it here just because it's easier to see in 2D, but you can imagine a 3D system. And in order to make one of these Voronoi honeycombs, you can imagine-- if you have random points-- here, say that little point there is one of the nucleation points, and here's another point here-- you form the structure by drawing the perpendicular bisectors between each pair of points. This is the bisector between these two points. Here's a bisector between those two points. And then you form the envelope of those lines around each nucleation point. And that, then, gives you that structure. And you can see, this structure here is kind of angular. It doesn't look that representative of something like a foam. But if you have an exclusion distance, where you say that you're not going to allow the nucleation points to be closer than some given distance-- your exclusion distance-- then you get this structure here. And this starts to look a lot more like a foamy kind of structure. So these Voronoi structures are representative of structures that are related to nucleation and growth of the bubbles, or nucleation and growth processes. Let me write down something about Voronoi things. And these Voronoi structures were first developed to look at grain growth in metals. They weren't developed for cellular materials. But you can use them to model cellular materials, as well, as long as it's a nucleation and growth process. We'll say that foams are sometimes made by supersaturating a liquid with a gas, and then reducing the pressure so that the bubbles nucleate and grow. So initially, the bubbles are going to form spheres. But as the spheres grow, they start to intersect with each other and form polyhedral cells. And you get the Voronoi structure by thinking about an idealized case in which you randomly nucleate the-- you have nucleation points at a randomly distributed space. They start to grow at the same time and they grow at the same linear rate. OK. The Voronoi honeycomb, or the foam-- you can form that by drawing the perpendicular bisectors between the random points. So each cell contains the points that are closer to the nucleation point than any other point-- or any other nucleation point. And if we just do this process as I've described here, you end up with a Voronoi structure, where the cells appear kind of angular. 
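[EDITOR'S NOTE] The nucleation-and-growth construction just described is easy to play with numerically. The sketch below is an editor's illustration, not the code used in the course: it drops random seed points, enforces a minimum separation (the exclusion distance mentioned above) by simple rejection sampling, and builds the Voronoi diagram with scipy.spatial.Voronoi. The particular numbers (50 points, a unit square, an exclusion distance of 0.1) are arbitrary choices for the example.

import numpy as np
from scipy.spatial import Voronoi

def seed_points(n, exclusion, rng, size=1.0):
    # Rejection sampling: keep a new point only if it is at least
    # 'exclusion' away from every point accepted so far.
    pts = []
    while len(pts) < n:
        p = rng.random(2) * size
        if all(np.linalg.norm(p - q) >= exclusion for q in pts):
            pts.append(p)
    return np.array(pts)

rng = np.random.default_rng(0)

# exclusion = 0 gives the angular Voronoi structure described above;
# a larger exclusion distance gives more equiaxed, foam-like cells.
for exclusion in (0.0, 0.1):
    pts = seed_points(50, exclusion, rng)
    vor = Voronoi(pts)
    # Each finite region is one polygonal cell; -1 marks regions open at the boundary.
    n_closed = sum(1 for r in vor.regions if r and -1 not in r)
    print(f"exclusion = {exclusion:.2f}: {n_closed} closed cells")

If matplotlib is installed, scipy.spatial.voronoi_plot_2d(vor) will draw the resulting honeycomb, which makes the difference between the two cases easy to see.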
And if you specify an exclusion distance, where you say the nucleation points can't be closer than a certain distance, then the cells become less angular, and of more similar size. OK. So are we good with the Voronoi honeycomb nucleation and growth idea? Alrighty. All right. If we think about cell shape-- if we start with honeycombs and we just think about hexagonal honeycombs, if I have a regular hexagonal honeycomb so that all the edges are the same length and this angle here is 30 degrees, then that is an isotropic material in the plane in the linear elastic regime. One of the things we're going to do is calculate-- if I load it this way on, what's the Young's modulus? If I load it that way on, what's the Young's modulus? And we're going to find they're the same, in fact, no matter which way on I loaded it. It would be the same. But if I now have my honeycomb, and imagine that I stretched it out-- and I'm kind of exaggerating how much we might stretch it out. But if we did something like that, it wouldn't be too surprising to think that the properties are going to be different if I loaded it this way on and that way on. And in terms of the cell geometry, I'm going to call that vertical cell edge length h. And I'm going to call this one-- the inclined one-- of length l. I'm going to say that angle is the angle theta. And the cell shape can be defined by the ratio of h over l and that angle theta. OK. When we derive equations for the mechanical properties of the honeycombs, we're going to find that they depend on some solid properties. Say, the modulus of the honeycombs can depend on the modulus of the solid. It's going to depend on the relative density raised to some power. And we're going to figure out what that is. And then it's going to depend, also, on some function of h over l and theta. And that function really represents the contribution of the cell geometry to the mechanical properties. OK. That's the honeycombs. It's fairly straightforward to characterize the cell shape for the honeycombs. It's a little more involved to do it for the foams. And the technique that's used is called the mean intercept length. At least, that's one technique that's used. Let me wait until you've finished writing because I want you to see the picture as I talk about it. OK? OK. Here's-- whoops. My pointer keeps disappearing. This top left picture here shows an SEM image of a foam. And you can see, you've got some big cells and little cells and there's no obvious way to characterize the cell shape. And what people do to calculate this mean intercept length is they would take an image. They would then sketch out just the cell edges that touch a plane's surface. So all these black lines are just the-- if you took your-- if you put ink on your foam and you just put it on a pad and put it on a piece of paper, you would get this outline of the edges of the cells, where they intersect that plane. And then what people do is they draw test circles. Here's the test circle here. And they draw parallel, equally spaced-- or equidistant-- lines. So the lines here are parallel. They're all at, say, zero degrees. And they're all the same distance apart. And then they count the number of intercepts. They count-- say we went out here. The cell wall intercepts here. There's one that intercepts here. And then, it'd go up here. Here's another intercept. Here's another intercept. So they count the number of intercepts of the cell wall with the lines.
And then they get a mean intercept length, which is characteristic of the cell dimension. And then what they do-- because this is just in one orientation-- you would then rotate those parallel lines by, say, 5 degrees and do it all over again. And get another length at 5 degrees and one at 10 degrees one at 15. And so you get different lengths for the intercepts as you rotate your parallel lines around. And then you make a polar plot, and that's what the thing is down at the bottom here. And so you plot your mean intercept length as a function of the angle that you measured it at. And you can fit it to an ellipse. And if you do it in three dimensions, you fit it to an ellipsoid. And the major and minor axes of that ellipse, or ellipsoid, are characteristic of how elongated the cell is in the different directions. And the orientation of that ellipsoid is characteristic of the orientation of the cells. Those of you who took 303, too, you remember Mohr's circles? Is this beginning to look familiar? It's the same kind of idea as Mohr's circles. Same way we have principal stresses and orientation of principal stresses, now we have principal dimensions and the orientation of the principal dimensions. So it's the same kind of idea. OK? Let's see. I feel like I'm getting to the end here. Maybe I'll stop there for today. But next time, I'll write down the whole technique about how we get these mean intercepts and get this ellipsoid. And I'm going to write the mean intercepts down as a matrix. But you could also write it as a tensor. And there's something called the fabric tensor, which characterizes the shape of the cells. And as you might imagine, the same is with the honeycomb. If you have equiaxed cells in the foams, you might expect you would get isotropic properties. If you have cells that are stretched out in some way-- so you've got different principal dimensions for them-- then you've got anisotropic material. And you can relate how much anisotropy to the shape of the cells. OK. I'm going to stop there for today. I'll see you tomorrow. Seems very sudden. I'll see you tomorrow. I'll pick up and I'll finish this section on the structure. We've got a bit more to do. And then we'll start looking at honeycombs and modeling honeycombs. The honeycombs are simpler to model just because they have this nice simple unit cell. So we'll start with that, and then we'll move from there to the foams. OK?
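[EDITOR'S NOTE] The lecturer says she will write out the full mean-intercept-length procedure next time, so the sketch below is only a rough editor's illustration of the counting step described above, for a 2D binary image (1 where there is cell wall, 0 in the voids). It counts void-to-wall crossings along equally spaced parallel test lines at a given angle, and takes the mean intercept length as the total in-image test-line length divided by the number of crossings. The synthetic test image (rectangular cells, twice as long in x as in y) and the numbers of lines and angles are placeholders, not data from the course.

import numpy as np

def mean_intercept_length(img, angle_deg, n_lines=50, step=1.0):
    # Rough MIL estimate for one orientation of the parallel test lines.
    h, w = img.shape
    theta = np.radians(angle_deg)
    d = np.array([np.cos(theta), np.sin(theta)])     # direction along the lines
    nrm = np.array([-np.sin(theta), np.cos(theta)])  # spacing direction
    centre = np.array([w, h]) / 2.0
    half_len = 0.5 * np.hypot(w, h)
    offsets = np.linspace(-half_len, half_len, n_lines)
    t = np.arange(-half_len, half_len, step)

    total_length, intercepts = 0.0, 0
    for off in offsets:
        xy = centre + off * nrm + t[:, None] * d     # sample points along one line
        x = np.round(xy[:, 0]).astype(int)
        y = np.round(xy[:, 1]).astype(int)
        inside = (x >= 0) & (x < w) & (y >= 0) & (y < h)
        vals = img[y[inside], x[inside]].astype(int)
        if vals.size < 2:
            continue
        total_length += vals.size * step
        intercepts += np.count_nonzero(np.diff(vals) == 1)  # void -> wall crossings
    return total_length / max(intercepts, 1)

# Hypothetical anisotropic "honeycomb": walls every 10 pixels in y, every 20 in x,
# so the mean intercept length should come out larger at 0 degrees than at 90.
img = np.zeros((200, 200), dtype=np.uint8)
img[::10, :] = 1
img[:, ::20] = 1
for ang in (0, 45, 90, 135):
    print(ang, round(mean_intercept_length(img, ang), 1))

Plotting these lengths against angle on a polar plot and fitting an ellipse, as described above, then gives the principal dimensions and orientation of the cells.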
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
12_Trabecular_Bone_Osteoporosis_and_Evolution.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. LORNA GIBSON: So last time we were talking about trabecular bone and that it's this porous kind of foam-like type of bone. And we talked a little bit about the modeling. And I think I got as far as starting to talk about osteoporosis. And I wanted to talk today about how we can model osteoporosis using those voronoi honeycombs that we talked about a while ago when we were talking about the structure. So Bruno has a project on trabecular bone for the class. And he needed bone samples. And so we talked about different kind of bone samples we could get. And the thing is if you get human bone-- well, there's all sorts of issues about just handling human bone and permissions, and it's complicated. So that was too complicated. We've used bovine bone before. You just go to the slaughterhouse and get bovine bone. But one of the things with trabecular bone is because it grows in response to loads, the geometry of it can be different, sort of architecture can vary from one spot in the bone to another. And I have a colleague who started doing tests on whale bones, just because it's a way of getting nice, uniform bone. So I was on a Ph.D. Committee a few years ago for a student who was in the Woods Hole program. She was at MIT, but was doing Woods Hole thing. And I don't know if you've heard of the Atlantic right whales, the North Atlantic right whales. They're endangered. There's about 500 of them left in the world. And they migrate between like typically the Bay of Fundy and off the Florida coast. So they go up and down the coast. And they sometimes get hit by ships. And then bones break and that kills them. And her study was on ship impacts on right whales. And so I got to know people at Woods Hole who worked on whales. So Bruno, I called up my friend at Woods Hole. And the Woods Hole guy didn't have any bones. But he put me in touch with somebody at the Mass Fish and Wildlife Department. And he had a couple of whale vertebrae that he was willing to give up. So I got one of them for you. So here it is. So I went out to the Mass Fish and Wildlife yesterday. And they produced this bone for me. I could either pass it around. It's not too heavy. Or you could come up and look afterwards. Shall I pass it around? Or do you want came up afterwards? Maybe come up-- pass it? OK. So one of the things is our like vertebrae, there's these things that stick up like this. There's one that's missing off of this bone. There should have been one down here too. But this is sort of what's called the body of the vertebrae. And in human vertebrae it's about that big. But it's the same kind of general structure. There's these kind of bony plates that come off. And the body is almost all trabecular bone. So you can see on this side, this is a growth plate here. But on this side, there normally would have been a thin shell of the cortical bone. And when I pass it around, if you look up at this point here, you can see there's just a little bit of that left. But it's kind of gotten worn out. And the rest of this, if you look, you can see it's the trabecular bone. And you can see it's pretty uniform, which is why I thought this might be good for your tests. So I thought you should get in touch with Mike Tarkanian. 
And I talked to him a little bit about cutting it with a water jet cutter. So I think we could use the water jet cutter. And I think I emailed you. If you could cut it in half, I'd like to have some pictures of it cut in half. And he's got a diamond corer, cylindrical corer. So you could make little cylindrical specimens. And I know if you want to do compression tests or beams. AUDIENCE: Well, I was planning to do compression tests. LORNA GIBSON: Yeah, so you could probably use some kind of a bands or something to cut them up. So, and this is where the spinal column goes through in the whale. So there you have it. Anybody want to ask me anything about the whale bone? OK, so let me pass that around. And I guess I have to tell you a couple of other stories. So while I was there, they have a brand new building. And it's all got solar panels and geothermal heat. And it's all very groovy. And the guy who I was talking to about the bone, he wanted to give me a little tour of the building. And he said, oh, you've got to see your freezer. OK, so the freezer. So he opens the freezer door. The freezer's like a room. And there's like a bear. I'm serious, like a bear on the floor, like a dead bear on the floor of the freezer. And he said it was like a two-year-old bear that had, I guess, just come out of hibernation a couple weeks ago. I think it had gotten hit by a car or something. And somebody must have called them up. And they have it. So they had this bear. They had like several deer. They had a coyote. They had like boxes full of all kinds of animals. So anyway it was kind interesting to see all these animals there. And you know what he said about the deer? He said, normally deer in Massachusetts don't starve. Like if you live in the suburbs, you actually have a problem with deer eating your vegetable garden because there's deer all over the place. But he said, this winter, you know how much snow we've had and how cold it's been? He said, deer have been starving this winter. And I think a couple of the deer they'd had had actually starved to death. And people had called up. And they had come and kind of collected the carcasses. So anyway that was my little trip out to Mass Fish and Wildlife yesterday. OK, so let's go back to talking about the bone. And I think last time we kind of left off more or less here. So this was a slide of what osteoporosis looks like. And you can see the bone loss is a combination of thinning of the struts and resorption of the struts. And we wanted to try to model this, sort of an engineering sense. And the way we did that is we used our voronoi honeycombs. So the bone has an irregular structure. And we wanted to look and see if we could model something that had an irregular structure. So we used the voronoi honeycomb. And if you remember when we talked about the structure of cellular materials earlier on in the course, we said that these are generated by putting down random seed points and then drawing the perpendicular bisectors. And if you have a constraint that the seed points can't be closer than some exclusion distance, then you get structure where you have cells that are not all exactly the same size, but roughly the same size. So that's what we did here. We got this structure here. And I had a graduate student, Matt Silva, who did a lot of these studies. And then he used this and analyzed it using finite element analysis. So the first thing he did was he calculated the elastic moduli of the structure. So he applied loads. We calculated deformations. 
Figured out elastic moduli. And so this plot here shows a comparison between the analytical equations that we derived in the first part of the course and what he calculated for these voronoi honeycombs from the finite element analysis. So here's Young's modulus down here, for example. In the closed form, the line, that's just the analytical equation we had originally. And the little dashed line is his finite element. Here's the shear modulus down here. And here's the Poisson's ratio up here. So you can see, there's a pretty good agreement between these two things here. So let me write some of this on the board. And then I can keep going after that. So for the 2D voronoi honeycomb, we have the random seed points. And we used the perpendicular bisectors. And we used a minimum separation distance between the points. So we generated the structure. And then we did a finite element analysis. And from that, we calculated the modulus. And what we found was the finite element analysis results were pretty close to our closed form analytical model. And if we think about the modulus, the modulus is the average stiffness over the whole honeycomb. And when we look at the strengths next, the strength is going to be related to the weakest few struts. And we're going to find that the strength doesn't work quite the same way. So first, we got the modulus. That's sort of the simplest thing to calculate. And then after that, we wanted to calculate the compressive strength. So we did a similar thing. We set up the voronoi honeycomb in the finite element analysis. We had a few more elements along the length of each strut. And then we modeled the elastic buckling and the plastic failure behavior. And we looked at honeycombs that had different densities. And the lowest densities failed by buckling. And the higher densities failed by a sort of plastic yielding. And we assumed that the cell walls were elastic, perfectly plastic. Remember we said if we have a material where-- this is for the solids-- so say this is the stress in the solid and that's the strain in the solid. If it behaves like that, we say that's elastic, perfectly plastic. So we modeled the walls as elastic, perfectly plastic. And we made the ratio of the yield strength to the solid modulus similar to what it would be for bone, which is about 0.01. And for that particular value, the transition between the elastic buckling and the plastic yielding failure was at a relative density of about 0.035. So then what we did was we analyzed structures that were a little lower than that, equal to that, and then a little higher than that. So we would try and see what happened with these different failure modes. And then we found that if we look at the compressive stress strain curve, this model here had a relative density of 15%. And the transition occurred at a relative density of about 3.5%. So this one here failed by a plastic failure. You can see, if we unload it, there's some plastic deformation. We get a little strain softening here, which is kind of characteristic of the plastic failure. When we look at the overall deformation of the honeycomb, we saw a local kind of failure, like we do in aluminum honeycombs. So that was the kind of stress strain curve for a relatively dense honeycomb that failed by yielding. And then this is one for a much lower density. This is now 1.5%. And this one fails by elastic buckling. And if we load it up and unload it, we recover most of the deformation. Barry, do you think you could make that stop? Yeah.
AUDIENCE: There's a chance it could be-- LORNA GIBSON: Below. Just because we're recording it. It just doesn't seem very good. OK, so what we did was we made five different voronoi honeycomb. So we had five sets of seed points. And we had five slightly different geometries. And then we averaged the results of those. So if we make that calculation, this is the strength of the voronoi honeycomb here divided by the strength of the periodic regular hexagonal honeycomb. And this was plotted against relative density. And you can see the strengths for the voronoi structure are a little bit lower than for the regular periodic hexagonal honeycomb. And they reach a minimum here. And the minimum's around about 0.05 relative density. So this was the 1.5. That was a 3.5. That's 5% and 15%. AUDIENCE: Why is there like a wide gap between 0.05 and-- LORNA GIBSON: Well, I think because we felt-- I think this one failed by some combination of-- there was a limit to how many of these we were going to do. And this is what we chose to do. We wanted one that we knew was going to fail by plastic yielding. And that was this one. And we wanted one that we knew it was going to fail by elastic buckling. And we wanted a couple in between. So we didn't do-- I guess we were lazy is the real reason there's not another point in the middle there. AUDIENCE: It just seems like there might be a variable-- LORNA GIBSON: You think it might go [CAREENING NOISE] like that. Except there's a physical reason why this happens. And I'm going to get to that in a minute. So let me I'll finish explaining this, and then I'll put the notes on the board. So first of all, the strength is less than the regular hexagonal honeycomb. And then there's also this minimum here around about 5% density. And I think the reason that the strength is not the same as in the voronoi in the periodic honeycomb is because if you look at the distribution of strains-- or you can think of the distribution of stresses-- so these were the normal strains at the nodes in the honeycombs. And the distribution here is for the voronoi honeycomb. So the voronoi honeycomb, you have lots of members of different lengths. There are different orientations. And so there's some distribution of stresses and strains in each member. And that gives you the distribution. In the regular hexagonal honeycomb, if you look at just the nodes, there's really just the vertical member, which has a certain strain. And all the vertical members are going to have the same strain at the nodes. And then the obliques members, if you just look at the nodes, there's just going to be a maximum tension and a maximum compression at the nodes. Because there's a unit cell and it repeats, all the oblique ones are going to have the same maximum and the same minimum. So the dashed lines here are for the regular hexagonal honeycomb. So the thing to observe here is that the voronoi has some strains and correspondingly some stresses that are outside the range of the regular periodic honeycomb. And so if there's parts of it that are seeing higher strains and higher stresses, it's going to fail at a lower load. So I think it's this distribution because you've got this random structure, and you've got different lengths and different orientations of the members. So that's one reason why these strengths are less than the periodic structure. I think there's a minimum here, because-- I think before the test I mentioned there's some interaction between elastic buckling and plastic yielding. 
And when you get that interaction, that also reduces the strength. And so there's a minimum near where the crossover is between the elastic buckling and the plastic yielding. So let me write some notes on the compressive strength. So we'll say the cell wall-- so the cell wall is elastic, perfectly plastic. And the yield strength relative to the modulus for the solid was 0.01, which is pretty much what it is for bone. And we assumed that the Poisson's ratio of the solid was 0.3. And for this value of sigma ys over Es, the transition between elastic buckling and plastic yielding is at about 3.5% relative density. So then we made models with densities that were a little bit less than that and a little bit more than that. And then the strengths we got from the voronoi were between about 0.6 and 0.8 times what we got from the periodic. So then what we looked at were these maximum strains of the nodes. And we found that because the voronoi honeycomb had a much broader distribution of those strains, that led to the lower strengths. And then we found the minimum strength was at a density of about 5%. And if you think just about a pin-ended column, and you make a plot of the strength-- so I'm just going to say the strength of that column-- against l/r, the slenderness ratio. So say it's a cylinder, l would be the length. r would be the radius. If you just had Euler buckling, you get a curve that looked like that. The Euler buckling load is pi squared EI over l squared. So I goes as r to the 4th. And if we get the stress, then it's going to be that divided by the area of the column. So it's pi squared E times the moment of inertia-- pi r to the 4th over 4-- over l squared. And the area is pi r squared. So it goes as r over l, squared, or 1 over l over r, squared. So this would be the Euler buckling. So that's elastic buckling. But at some point, the column is going to yield. If I make it really, really short, then it's going to yield before it buckles. And at some point, this would be the real stress here. And this would be failure by the plastic yielding. And in practice, there's not like a sharp corner here. You know, if you made columns of progressively longer length, and you tested them, the little short ones would yield. So they'd be along here. But you don't get one that yields and then the next one buckles. In fact, you get something like that. So that when you're near that transition, when you're near this point here, the actual failure stress is a little bit less than that. And I think that's partly what's going on over there. It's the minimum. OK, so those two studies, we just looked at the modeling and the strength of honeycombs. And we were kind of looking at how does the random structure change the properties. So the randomness of the structure didn't really change the modulus much at all. But it did affect the strength. And then the next thing we did was we looked at putting defects into the bone. So we knew that in the bone, the density is reduced partly by thinning of the struts and partly by resorption of struts when there's a lot of density loss, a lot of bone loss. So we wanted to look at putting defects where we actually removed some of the struts. So we did another series of voronoi models. So we did another series of tests where we looked at-- if we have a certain density that we start with, and then we look at losing the same bone mass or relative density in our honeycomb-- what happens if we thin the struts versus if we remove struts. So these plots here show that. And Matt Silva also did this.
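[EDITOR'S NOTE] For reference, the pin-ended column picture sketched just above can be written out. The Euler formula and the circular cross-section are exactly what the lecturer quotes:

\[
P_{cr} = \frac{\pi^2 E I}{l^2}, \qquad I = \frac{\pi r^4}{4}, \qquad A = \pi r^2,
\qquad
\sigma_{cr} = \frac{P_{cr}}{A} = \frac{\pi^2 E}{4}\left(\frac{r}{l}\right)^2 = \frac{\pi^2 E}{4\,(l/r)^2}.
\]

Short columns sit on the yield plateau, long ones on the Euler curve falling off as 1 over (l/r) squared, and near the crossover the measured strength dips below both curves, which is the buckling-yielding interaction she describes. As an editor's check on the honeycomb version of the same argument, using the usual closed-form regular-hexagon strength results (coefficients of roughly 0.22 for elastic buckling and 2/3 for plastic collapse, and relative density of roughly 1.15 t/l), which are not restated in this lecture:

\[
0.22\,E_s\left(\frac{t}{l}\right)^3 = \tfrac{2}{3}\,\sigma_{ys}\left(\frac{t}{l}\right)^2
\;\Rightarrow\;
\frac{t}{l} \approx 3\,\frac{\sigma_{ys}}{E_s} = 0.03,
\qquad
\frac{\rho^*}{\rho_s} \approx 1.15\,\frac{t}{l} \approx 0.035,
\]

which is consistent with the 3.5% transition density quoted above for sigma ys over Es equal to 0.01.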
So this is the residual modulus plotted against the reduction in relative density. So residual modulus is the modulus after we've reduced the density by some amount relative to the initial modulus. And if we have an intact honeycomb where we just thin the struts, we don't remove any of the struts, if we have an intact honeycomb, as you reduce the density, the modulus just goes down like that. So that's really just the same as those analytical formulas that we had. But if you start removing struts, not too surprisingly, the modulus goes down substantially more. And at this value here, I think it was 30% or 35% density reduction, you reach what's called the percolation threshold. At the percolation threshold, you have two separate pieces of material. So obviously the mechanical properties are going to go down to zero when you reach that percolation threshold. And then this plot over here is the same sort of thing, but now for strength. So it's the strength of the bone, or the strength of the honeycomb, after you've either thinned or resorbed the wall, divided by the strength of the intact honeycomb. And again, you're reducing the density here. And this is for the intact model where you're just thinning the struts. And this is for the models where you're removing the struts. So you can see that if you think of-- this is kind of a simple model, but if you think of the bone, if you first thin the struts, you're going to lose a little bit of strength. But then if you start removing the struts, you're going to lose a lot of strength. So that's really where people run into-- there's much higher risk of fracture once you get to the point where you might be resorbing struts. OK, so let me see, what's next thing? OK, let me just finish the slides, and then I'll put the notes up. So this is the same sort of thing, just plotting the strength and the modulus on the same plot. So you can see the shape of the curves is very similar. The modulus is a little bit more sensitive than the strength. And here we are thinning, and here we are removing the struts. And then the next step was that we made a model that was more similar to bones. So let me write down the notes for this. And then we'll do that one that's more similar to bone. Thought it didn't makes sense. OK, so then we were interested in trying to model something that was a little closer to the structure of bone. And so we set up this model here. So we started with just a square voronoi. So you just force the points into a square, voronoi, or a square pattern, you get a square voronoi. And then what we did is we just perturbed the points a little bit. And we got this perturbed voronoi array. And so we made this model here. And we took a piece of vertebral bone. And we measured the angle of orientation of a lot of the struts. And we matched our voronoi model to that distribution in the bone. So our model looked something like this. So you can kind of see how it's more or less vertical and horizontal, but not exactly. And here's a sort of comparison with a slice of vertebral bone. And again, because the loads are more or less vertical in the bone, the trabeculae tend to orient that way. And then have some horizontal struts. So here you can see on the left, we've got a voronoi model that's more or less representative of the bone. And we've removed some of the struts. So we're going to the same thing with this model. We thin the struts. And we remove the struts. And we try to see what the residual strength is. 
And you can see that, for at least a 2D model, this isn't a bad representation of the vertebral trabecular bone. And this was the stress strain curve for both the specimen of the vertebral bone that was tested and then the honeycomb model that we made to kind of match it. So a similar kind of behavior. This is how our model failed, this sort of a local band of struts that fail. Let's call it a local deformation band. And then what we did was we thought about changing the density. And what we did was we removed either horizontal or vertical struts, or we thinned either the vertical or the horizontal struts. So these are the same kinds of plots as I showed before. This is the residual modulus. This is the residual strength. This is the density reduction. And here we're reducing the density by making struts thinner. So it's still intact. We haven't removed any struts. But each of the struts gets thinner. And this top line is for thinning the longitudinal or the vertical struts. And this was for thinning the transverse or horizontal struts. And then this is for removing struts here. So we're removing a bigger number. So the more we remove, the more the density changes. And then this plot here is for the strength. So again, this is thinning. So we're moving either the horizontal or the vertical ones-- I'm sorry, thinning either the horizontal ones or the vertical ones. And then this is for removing struts. And again removing more to reduce the density more. So you can kind of see we're kind of working our way to more complex models here. So this one here was looking at the bone. Let me write some notes for this. And then we did a 3D voronoi model. So I'll do that one next. Oop, over here. So I'll just say the model was adapted to reflect the trabecular bone studied in the vertebrae more. So we perturbed a square array of seed points to get a structure that was more like the bone. And then we looked at the reduction in the thickness and the number of struts in the longitudinal and transverse directions. OK, and then the next model we did was the 3D version. So we made the same kind of thing. But with the 3D model, we had fewer cells in that model, because once it goes to 3D, you've got more struts in each cell. But this was the same idea. We uniformly thinned the struts, or we removed the struts. And again, you can see removing the struts is a much bigger effect. And also the other thing to look at here is for 3D the percolation threshold is 50%. So it kind of makes sense that if it's in 3D, and you've got struts in all directions, you're going to have to remove more of them. You're going to reduce the density more to break it into two separate pieces at the percolation threshold. So that was the 3D model there. And then this is just a comparison of the 3D with the 2D for the modulus. We just did the modulus for the 3D structure because it sort of gets computationally more involved. So the 3D, these two lines here corresponded to removing the struts and the change in the modulus. And the little triangles were the 3D voronoi calculation that we did. The little crosses here were the same sort of calculation done for a tetrakaidecahedron. One of my former students had modeled that. And then at the bottom here are the lines for the 2D structures, for either a regular hexagonal cell or for a 2D voronoi cell. And you can see, there's not a huge difference whether or not you take a regular structure or a voronoi random structure. But there's a fairly significant difference between the 2D and the 3D.
So the 3D, just you have to remove more material before you get the same reduction in modulus. So let me write some notes for that. So it's the same kind of analysis, but just with a 3D model. So do you see the idea with these models? It was an attempt to look at a way that you could computationally estimate how much modulus loss or strength loss you get by either thinning the struts or removing the struts. So I gave you a way to model the osteoperotic bone. Yes. AUDIENCE: Can you clarify the percolation threshold? LORNA GIBSON: So the percolation threshold is say you have some network and you start removing things. If your remove enough, you have two separate pieces of things. If you remove enough struts, you have two separate pieces. That's called the percolation threshold. So I think it's-- you want to know why it's called that? So I think that originated because it wasn't used in this kind of context. I think instead it was used in a context where imagine you have 2D plate. So you put 2D holes in it, little circular holes. And they we're looking at flow of a fluid through the plate. So you can imagine, if you put enough holes, eventually they line up or they-- line up is not the right term-- but there's enough of them that they connect. And you end up with two separate. You have a path through for the fluid. That's called the percolation threshold. But in mechanics it's sort of two separate pieces. OK, does that makes sense? So you know, at the percolation threshold, the stiffness is zero because you have two separate things and the strength is 0. AUDIENCE: So the two doesn't have to be like separated by-- LORNA GIBSON: It could be-- yeah, yeah, it doesn't have to be a straight line. AUDIENCE: Does it make a threshold between whether everything is interconnected? LORNA GIBSON: Yes. Or there's some path that separates them. Yeah, that's what it is. OK, so that's the end of the bit on osteoporosis. I had a couple more things I wanted to talk about on bone. So the next part is on the idea of using metal foams as a bone substitute materials. So when they make hip implants-- so they're typically metals. They're titanium or stainless steel or something. Often what they do is they coat the outside with some sort of porous coating. And the idea is the porous coating allows the bone to grow in. So you can kind of imagine, like especially on the stem and around the head of a femur, you'd like the bone to grow into that to attach. And having a porous coating helps. And there's a couple of ways they do it currently. They use porous sintered beads. So they put little beads of metal on the outside. And the idea is that the bone grows into the little gaps between the beads. Or sometimes if they have-- not so much for hip implants but sometimes when people have say car accidents and their face get sort of smashed up, and they have to have, say, a plate put in their face, and they need like a flatter plate, they use like a wire mesh. And they have sort of a wire mesh that goes on the outside. And it's the same thing. It's the idea to try to get the bone to grow into that plate. So some people are thinking instead of using the porous sintered beads, or instead of using one of these wire mesh plates, that you could use metal foams. And so there's been some interest in using metal foams in coatings of implants. And longer term, there's been some interest in trying to make sort of a vertebral body that would involve using a metal foam from the vertebral body. 
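Returning for a moment to the percolation threshold question: here is a toy illustration of the idea (an assumption-laden sketch, not the voronoi finite element models from the studies above). It randomly removes struts from a small square lattice and uses a union-find structure to check whether the top edge is still connected to the bottom edge; once it isn't, the network is in separate pieces and the stiffness and strength are zero.

```python
# Toy bond-percolation check on an n x n square lattice (illustration only;
# not the voronoi finite element models discussed in lecture).
import random

def connected_top_to_bottom(n: int, keep_fraction: float, seed: int = 0) -> bool:
    """Keep each strut (bond) with probability keep_fraction; return True if
    some path of surviving struts still links the top row to the bottom row."""
    rng = random.Random(seed)
    parent = list(range(n * n))          # union-find over lattice nodes

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for r in range(n):
        for c in range(n):
            node = r * n + c
            if c + 1 < n and rng.random() < keep_fraction:   # horizontal strut survives
                union(node, node + 1)
            if r + 1 < n and rng.random() < keep_fraction:   # vertical strut survives
                union(node, node + n)

    top = {find(c) for c in range(n)}                  # roots of top-row nodes
    bottom = {find((n - 1) * n + c) for c in range(n)} # roots of bottom-row nodes
    return bool(top & bottom)

if __name__ == "__main__":
    n = 40
    for keep in (0.9, 0.7, 0.5, 0.3):
        ok = connected_top_to_bottom(n, keep)
        print(f"keep {keep:.0%} of struts: still one connected piece? {ok}")
```

For this square lattice, connectivity is typically lost once roughly half the struts are gone; the actual thresholds in the 2D and 3D voronoi models quoted above come out differently because the connectivity of those networks is different.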
So you know that whale bone that we just passed around, that vertebral body, that cylindrical part, it's almost all trabecular bone. So there's some interest in trying to use metal foams for spinal surgeries. Maybe not to replace the whole body. But to use in part of the surgery. So I have a little slide here which shows a bunch of metal foams that people have made with the idea in mind that perhaps some of this could be used in orthopedic surgery. So these are some different kinds of metal foams. And these ones are made from titanium or tantalum. So typically, the metals that they use in orthopedic implants are the cobalt chromium alloys and titanium; they sometimes use tantalum or stainless steel. And they use those because they're biocompatible. And they're very corrosion resistant. So these ones here are mostly titanium. And this is one that's a tantalum. So let me just go over how they make them. And then I've got another slide that goes over it in more detail. And I'll write a few notes down. So this guy on the top left up here, that's made by taking an open cell polyurethane foam, like a seating foam, like a cushion. And then what they do is they heat that up in an inert atmosphere, so that they are left just with the carbon. So it's a sort of vitreous carbon. And then what they do is they coat that carbon by a CVD process with tantalum. And so they end up with a tantalum foam with a very thin layer of carbon at the core. The carbon makes up something like 1% of the final composition and the tantalum is the 99%. So that's sort of a replica process there. This one here is made by another replica process. They take an open cell polyurethane foam. They infiltrate it with a slurry, which has the titanium hydride particles in it. Remember when we talked about processing of the foams at the beginning, I said, if you heat up the titanium hydride, eventually the hydrogen would be driven off, and you'd be left with the titanium. So they do that, and then they sinter it, and they get a titanium foam. This one is made by a fugitive phase process. So the idea with a fugitive phase is you burn off some phase. So you could mix powders, consolidate the powders, then you burn off one of the powders. And you're left with the other one. And then you need to sinter that together to make it have some reasonable mechanical properties. This one here is made by using a foaming agent. This one's made by expansion of argon gas. I think when we talked about the metal foams, we talked about the idea of packing, say, titanium powder in a can. And then you evacuate the can. And you then pressurize the can with argon gas. And then you heat treat it. So you heat it up. And as you heat it up, the argon gas expands. And you're left with the pores. This one's made by a freeze casting process. I have a slide. And I'll talk about that in a minute. This one's made by selective laser sintering. So it's like a 3D printing. But instead of printing in ink, this time you have a bed of powder. And you've got a laser that selectively sinters. So you turn the laser on and off. And where it's on, it's going to bind the material. When it's off, it's not going to bind the material. And then you raise the thing. You spread a little bit more powder. You do it all over again. And you can get different patterns. And then this is made by a sort of process in which they take powders and press them and ignite them. But I think that's not very commonly used. So this is some more details about how they might do it.
So this is the fugitive phase process here. You could take a titanium powder and a filler powder, pack them together. You know, you'd mix them up, pack them together. You would raise it to a certain temperature to decompose the filler. So typically the filler decomposes at a lower temperature than the one you use to sinter the metal powder that's left. So you decompose the filler. You drive that off. And then you sinter the metal powder. And you're left with porous titanium. This one's the expansion of the foaming agent. So you take your titanium powder. You might have a binder and then a foaming agent. Mix those all together. They heat them up until typically the binder becomes a liquid. And the foaming agent foams up the liquid binder. Then they drive off the binder. And then they sinter the titanium at a higher temperature. So you've got a porous titanium left. This is the freeze casting or the freeze drying process. So here they would take titanium powder and put it in agar. And the agar's in water. So the agar is like a jelly, like a gel. But it's mostly water. And then if you freeze it, what happens is the water freezes. And it drives the titanium and the agar into the interstitial zones between the frozen ice crystals. So these little dots here are the ice crystals. The ice crystals are growing as it gets colder. And as the ice crystals get bigger and bigger, you're left with the titanium and the agar in between those ice crystals. And then if you sublimate the ice off, you're left with a porous thing. And you can sinter the titanium if you want to make it a little more dense. And this is a rapid prototyping thing here. So you spread the powder. You could either print a binder or you could use a laser to sort of almost weld the particles together. Then you would drop the piston, put more powder down, have the binder go again until you made your product. And then you would get rid of all the unbound material. And you'd have your final product. OK, so these are some of the methods they can use for making metal foams. I don't know, should I write notes? Or are you good if I just put the notes on the website? I'll put the notes on the website. And then this is a stress strain curve for a titanium foam. So it looks like all these other kinds of foams and bone and wood and everything else that we've looked at. And this is some data for titanium foams that are made by different processes. And we've just taken different data from the literature and put it together. So this is the modulus here. This is a slope of 2. You can see some of the data is sort of near that slope, but below it. And there is obviously a large spread in the data too. But if you go back and look at the different types of structures, then not all of these have the typical foam-like structure. The structures aren't all quite like a foam. Yeah? AUDIENCE: So is there any objective in this to make the foam similar in structure to the bone that will be growing into it? Or does it just need to be scaffolding? LORNA GIBSON: I think for this, they just want to make a porous thing. And they're thinking about coatings. So the coatings aren't necessarily similar to the bone. When we finish the section on bone, we're going to start talking, I think, about tissue engineering. And when we talk about tissue engineering, people have made scaffolds to try to make them the same shape as the anatomical part that they're trying to mimic. And then they make a cellular kind of core. So they have made some more of an attempt to do that.
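As an aside on those titanium foam plots: the "slope of 2" line on the modulus plot, and the 3/2 slope that shows up on the strength plot in a moment, are just the usual open-cell foam scaling relations plotted on log-log axes. Roughly, with C1 and C2 constants of order one, and with the data scattering around these lines because the structures aren't all truly foam-like:

```latex
% Open-cell foam scaling relations (relative density rho*/rho_s):
\frac{E^{*}}{E_{s}} \approx C_{1}\left(\frac{\rho^{*}}{\rho_{s}}\right)^{2}
\quad\text{(slope of 2)},
\qquad
\frac{\sigma^{*}_{pl}}{\sigma_{ys}} \approx C_{2}\left(\frac{\rho^{*}}{\rho_{s}}\right)^{3/2}
\quad\text{(slope of 3/2)}
```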
I could give you a sneak preview. Would you like a preview? So these are some scaffolds that are generated for regenerating different kinds of tissue. These aren't all from bone. But this one here, for example, is for regenerating bone. And they've printed it in this exact geometry, because that's going to replace some anatomical piece. And they want it in that geometry. So I'll talk more about that. But so for the scaffolds, they sometimes do that. But not so much for these bone coatings. Let's see. So we did that. We did that. So these are the data. And then over here, there's the compressive strength. And again, you know, this is our line with a slope of 3/2. Again, there's a lot of scatter, because there's a lot of variation in the structure of these things. But it's in the ballpark. So are we good with metal foams? And there's just like a page and a half of little notes. Should I just put that on the website? You're good? OK. OK, so the last topic I wanted to talk about on bone has to do with how people look at the structure of bone in evolution and in evolutionary studies. So the idea here is, in particular in looking at sort of pre-human species, sort of hominid species, people are interested in when primates went from being quadrupeds to bipeds. So obviously, bipedalism, walking on two legs, is characteristic of us. And people would like to know if they find some fossil-- you can't just tell from the fossil directly whether it's a biped or a quadruped. You can't see the species moving because it doesn't exist anymore. So you'd like to have some way of making some estimate of whether or not it was a biped or a quadruped. Let me get my notes together here. So I wanted to kind of look at the big picture a little bit and look at the evolution of different species. And this is a phylogenetic sort of chart. And this is kind of-- have you heard of the tree of life? This is like the tree of life. So this piece of it is-- Metazoa is for animals. So not plants, animals. And this goes back about 1.2 billion years. So these are millions of years ago. And then these are different sort of eras and ages that are defined. And when we have a branch here, this is a common ancestor. And then this is a branch in one direction, and that's a branch in another direction. So these points here, like 1 and 2 and so on, the implication there is that there was a common ancestor to everything that traces back to there. So this point here, 1, is 1.2 billion years ago. So this was sort of very early kind of species. And the very first things were, well, multicellular things. I mean, there were little amoeba type things. But the more sort of sophisticated animals were sponges. And there are three classes of the sponges. There's Calcarea. And they're called Calcarea because they're mineralized. And they're mineralized with a calcium carbonate. And then there's another branch of them called-- I don't know if I'm going to say this right, but Hexactinellida. And those have glass. Those are called glass sponges because SiO2 is the mineral in those. And then there's these guys here, the Demospongiae. I think some of those have calcium carbonate and some of them don't. Oh, no, some of them have silica. And some of them don't. So these things here are sort of very early multicellular structures. And the mineral in them is either calcium carbonate or silica. And then if we move up, I've sort of highlighted a few of them.
The Cnidaria-- I know there's a C, but that's actually-- when I say s-nideria, my biologist friends laugh at me. And they say no, no, no, it's nideria. The Cnidaria are the corals and the jellyfish. And corals are also mineralized with calcium carbonate. And you can see they branched off something like a billion years ago. Then we get up here. These are the mollusks. So the mollusks are things like bivalves, like if you like to eat clams, things like that. So bivalves, snails, and things like octopi, octopus. So those are all mollusks. And mollusks, when they're mineralized, also are calcium carbonate. So those are the calcium carbonate kind of shell. So we haven't got up to anything bony yet. Bone is collagen plus a calcium phosphate. So we haven't gotten to anything that's a calcium phosphate yet. Then another large class is Arthropoda. That's insects and spiders and crustaceans. Those all have a chitin skeleton. So if you think of like the exoskeleton of an insect or a spider, that's chitin. And crustaceans, things like lobsters, those also have a chitin shell. And in crustaceans, it might be mineralized. But again, the mineral is a calcium carbonate. So all the way up here, most of these things, if there is any mineral, it's calcium carbonate. And if we get up finally to the vertebrates, the vertebrates have the calcium phosphate and have bone, like what we think of as real bone. So the vertebrates obviously involve things like mammals, birds, snakes, and fish. So this is kind of the big picture going back. And sort of one of the interesting things is that bone doesn't come along until you get somewhere over here. And I have one more little, nice slide here. So when I talked about the sponges, they were these guys here with the glass sponges. Joanna Aizenberg at Harvard did a nice study on glass sponges. And she looked at this one here. It's called the Venus flower basket-- that's kind of the common name. And it has this hierarchical structure. And it's remarkably stiff and tough. And what she did in her paper was look at optical and mechanical properties. But they looked at the structure at different length scales. So there's kind of a cellular structure at this length scale. It's kind of a tube. This was just a picture I took in a natural history museum. But if you look at higher magnification, each little strut is made up of sort of fibers and has a hierarchical structure itself. So that's just one of the sponges. And here's a similar chart for the vertebrates. So this point here is where the other chart kind of branched off. And if we start with the earliest things again, the earliest ones with the most common ancestor back here is something called Cyclostomata. And those are things like jawless fish, so things like lamprey and hagfish. Do you know what a hagfish is? It's this kind of eely thing. And I have the video for you if you don't know what a hagfish is. So let me get out of here because it's just an amazing thing, the hagfish. OK, so let's see, I got my sound on. [VIDEO PLAYBACK] REPORTER: Here, at the University of Guelph, about an hour outside Toronto, materials scientist Atsuko Negishi and biologist Julia Herr think that these lovely creatures, called hagfish, may revolutionize how we make strong materials. ATSUKO NEGISHI: These are Pacific hagfish. They're well known for their unique defense mechanism. REPORTER: So if I wanted to see this, what would we do? Like could we poke at it with a stick? JULIA HERR: I think the best way to do it is to reach in there and grab one.
REPORTER: Oh, my gosh, look at that disgusting-- oh, no, I've been slimed. I feel like an outtake from Ghostbusters. Look at the quantities of this stuff. [END PLAYBACK] LORNA GIBSON: He used to do this for The New York Times. And I think he's got his own thing going on, but he used to make these little videos. And he had a show on PBS a year or two ago all about materials. And there were like four different episodes. And he talked about different kinds of materials. And he went to different labs. But he's quite a character. But anyway the hagfish have this defense mechanism where they make the slime. And I don't know if you know Professor McKinley over in mechanical engineering here at MIT, but he collaborates with those people at Guelph. And he studies what he calls non-Newtonian fluids. Well, a lot of people call them non-Newtonian fluids. A Newtonian fluid is a thing like water, where the viscosity is a constant no matter what sort of strain rate you shear it at. And a non-Newtonian fluid does not have that property. The viscosity changes. And some of them have a strength, like the hagfish one has a strength. So Gareth studies things like this kind of hagfish slime. I don't know if he still has them. He used to have hagfish in his lab over in building, whatever it is, 1 or 3 or something over there. OK, so that's what the hagfish are, just because that's amusing. And they don't have bone. So they and the jawless fish don't have bone, even though they're called vertebrates. Then the next sort of most recent thing is the Chondrichthyes. Those are the cartilaginous fish, so things like sharks. So sharks don't actually have bones. They have cartilage. And they have a little bit of mineralization in the cartilage, but they don't actually have bone. And the first thing that actually has bone is the ray-finned fish, which are these guys here. And that occurred about 450 million years ago. And then everything in the vertebrates since then is bony. So there's the coelacanth-- I don't know if you've ever seen those. Every now and then they find one of these things in Florida or something-- lungfish. There's these guys here, which are the frogs and the toads and the salamanders. So it's finally getting to be spring after the winter from hell. And the salamanders are going to come out into the vernal pools and mate. And it's cute. So anyway, they come out this time of year. And then there's the Amniota, things that have eggs of one sort or another. So that includes us, the mammals, the birds, the snakes, and the turtles. And so those would have branched off about there. So the last thing I wanted to point out here is that there's this huge kind of diversity of animals that have evolved over millions and millions of years. And that the first ones that were mineralized used the calcium carbonate and that the bony type materials didn't really evolve or didn't appear until about 450 million years ago. And then these are the vertebrates that have these kind of bony things. So as I've said many times now, the bone grows in response to loads. And the bone structure reflects the mechanical loads and the function. And evolutionary studies have looked at both cortical bone and trabecular bone architecture to try to say something about the locomotion of the animal or of the species. So there's ones that look at cortical bone. But I'm just going to talk about one that deals with trabecular bone. So this study was done by a group of people, and the first author was Rook. And what they studied was the ilium.
So this is a pelvis. And the ilium is one of the bones in the pelvis. And they found fossils of a hominid species that was about 8 million years old, called Oreopithecus bambolii. And bambolii refers to the place in Italy where these fossils were found. And so they found two-- or at least somebody found two pieces of an ilium. And they took x-rays. And they made a digital reconstruction so that they would get one ilium-- it turned out that the two pieces were two different parts-- so they could make a whole one out of the two pieces. And then they looked at the structure of the trabecular bone. And they compared that structure to the structure of the trabecular bone in humans and other primates. And they wanted to see is it more like the humans, which are obviously bipedal, or is it more like some of the primates, which are quadrupeds. So this just shows for a human and a non-human primate what the structure of the ilium is. And these little black boxes with the letters are what are called anatomical landmarks. So they're sort of comparable spots on the bone of different species. And what they were doing was looking at the trabecular architecture at these different spots. So you can kind of see how they're more or less analogous in the two different species. And this is the digitally reconstructed ilium that they put together from their fossil species. And again, these little letters refer to these anatomical landmarks. And then what they did was they compared the Oreopithecus, the fossil, with the human and then three non-human primates. So these four images are all from the fossil. These are the human. These are chimp, siamang, and baboon. And this square here corresponds to that one there. This is B, corresponds to that one. And C and D are those two there. So they had two fossils, they made the digital reconstruction. They looked at certain areas. And then they looked at the same or analogous areas in these other species. And they tried to look for similarities and differences in the bone structure. So let's look at this first box A. You can see there's a very white bit here. And the white corresponds to more dense. So there's a white bit that's more dense in their fossil. And in the human bone, you see a similar sort of band right there. And if you look at the other non-human primates, that band is missing. So that says to them, OK, this feature, this one feature at least, is more similar to the-- sorry, in the fossil here is more similar to the human than it is to the non-human primates. And then they had three other landmarks that they looked at like that. So this one here again is a sort of dense region. So you can see that white dense region. There's some there. There's some here. So those are the fossil and the human. But there's a teeny bit here and a teeny bit there. But it's not as pronounced in the non-human primates. Yes. AUDIENCE: From what I get, at least so far, the portions of the bone that are dense versus not dense seem less relevant to the direction of loading than the orientation of the foamy parts of the trabecular bone. LORNA GIBSON: So the density reflects more or less the magnitude of the stresses. So if the stresses are higher, it's going to be denser. And the orientation of the bone, like which way the struts are oriented, that reflects the sort of ratio of the principal stresses. So if the principal stresses go in a certain direction, the bone's going to tend to line up in that direction. That's what that Guinea fowl study was kind of about.
OK? Are we good? OK. And then these other two, so in B-- and you can't really see it from here, but they looked at the sort of architecture of the trabecular bone. And they said that it was more similar in the fossil in the human than it was in these other three primates. And this region here, this looks very similar to that bit there to me. But I think there was some other feature about this region that they were looking at. And again, it was more similar to the human than it was to the non-human primates. So by just looking at the pattern of the bone and the density of the bone and comparing it to these other species, they said that the-- and you know the hip, because we're standing like this, you would kind of expect to see differences in the hip. That's why they wanted to look at the ilium. So the conclusion they made was that these observations suggested that the species was bipedal, or at least spent some of its time as a bipedal animal. And I think that might be it. Do I have more I have one little summary here. So just to summarize what we've done on bone this week. We talked about the structure of the bone. We talked about mechanical properties in the foam models. We talked about osteoporosis in the voronoi models, how you can try to represent the loss of bone and the loss of bone strength using those models. We talked just a little bit about the idea of using metal foams as bone substitutes or coatings as implants, and then this little bit on trabecular architecture and evolutionary studies. I have some notes, but I think we've got just a couple minutes left. So maybe I'll just scan those and put them on the website? Are we good with that? AUDIENCE: I have a question about what we just looked at about the different species. We always consider on the density changes. Can there always be changes in the solids? LORNA GIBSON: So it changes a little bit from one species to another. So the question is, does the solid properties of the bone, the solid bone itself change? So if you look at cortical bone in different species, it changes a little bit, but like 10%, not a huge amount. So the two most common things people have compared are bovine bone and human bone. And there is a slight difference between them. But it's not a huge difference. One of the other things people have looked at in osteoporosis that I didn't really talk about was there's another whole set of things that can go on that reflect what you're talking about. So they talk about bone quality as well. And when they talk about bone quality, they're talking about are there micro cracks in the solid. So you might imagine as you get older, it's not just that you lose the struts or that the struts get thinner, but the solid bit itself has more cracks in it. So you can imagine if the solid bone itself had little micro cracks, then it too would be weaker. And then you think what I put up was bad. It gets even worse if you put that in as well. So, yes, people do look at bone quality, which is sort of looking at with age. And typically it's fairly elderly people that the bone quality is an issue. I guess there are certain diseases where it's an issue. But in osteoporosis, it's sort of elderly people. Any thing else? Should I stop there for today? So there were hardly any equations in this. Did you know that? So we got to the part where there's lots of equations. So next week I'm going to talk about tissue engineering. 
I think I'm going to talk a little bit about different kinds of scaffolds, how they make scaffolds, how the scaffolds fit sort of into anatomical things, what that's supposed to represent, how they use them clinically a little bit. And we had a research project on osteochondral scaffolds, so scaffolds for replacing bone and cartilage. And I'm going to talk a little bit about that as sort of a case study in tissue engineering scaffolds. And I have some stuff on cell mechanics and how biological cells interact with scaffolds. I don't know if we're going to get to that next week or not, but somewhere close to that. So the next bit is on tissue engineering scaffolds, osteochondral scaffolds, cell mechanics. And there's not that many equations in that part either. So OK.
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
18_Natural_Sandwich_Structures_Density_Gradients.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: All right, then, I guess we may as well start. So what I wanted to talk about today was natural sandwich panels and sandwich beams. So there's lots of examples of sandwich structures in nature, and we've been looking at the engineering sandwich structures. And we've seen that you can get a lightweight structure by having this sandwich construction. And so there are several examples I was going to talk about today. And I think because this isn't really on the test, I'm not going to write a lot on the board. So there's some notes. I'll just put them on the website, and you can look at that if you want. Because we have kind of a shorter time today. I'll just try and talk and explain what's what. Hey, Bruno. How are you? So this is the first example. So many leaves of Monocotyledon plants have a sandwich structure. And this is an iris plant and iris leaves. And for those of you in 3032, I think you know that these are glass flowers. So the Harvard Museum of Natural History has a glass flower collection that was made in the 1800s. And there was a botany professor there who made these as sort of a lecture demonstration vehicle. And so he would bring them to class, and he would show different things about the plants with the glass flowers. But now they're just in the museum, and they're very realistic. So I just wanted to show you those. So let's see, it's not working. Turn it on. There we go. So if we look at a cross-section of an iris leaf, it looks like the diagram on the left. So here's the iris. And you can see there's these kind of solid fibers, and those solid fibers are called sclerenchyma. And they only exist at the top and the bottom of the leaf. So I went out this morning. And if you look outside of the Stata building, there's that little kind of river-y thing, and there's some iris leaves growing there. So I went and got some iris leaves. And you can tell we had a horrible winter because usually when I give this lecture in the spring, the leaves are like twice as big. But this year, they're just little, short, wimpy ones. But I'm going to pass it around. And if you just like move your thumb over the top, you can feel little ridges, little bumps. And those little ridges that you can feel are these little sclerenchyma fibers. So you kind of see they kind of stick up a little. And so when you move your thumb over it, you can feel that. And then you can see that the middle of the iris leaf has this kind of foamy-type structure here, and those are called parenchyma cells. So you can think of the leaf as very much like one of the sandwiches. This is like a fiber-reinforced composite at the top and at the bottom. And then this is kind of like a foam core in between, separating the fiber-reinforced faces. And so the iris leaf behaves mechanically like a sandwich beam. So I'm going to talk a little bit about how we can actually demonstrate that using the equations that we developed in class. This is another example. This is, I guess, what Americans call the cattail, but Canadians and English people call it a bulrush. And you can see this is a slightly different construction, but there's the same sort of idea.
So instead of having a foamy core as in the iris leaf, you've got these kind of webs here that go in between the top and the bottom, and that forms like a series of I-beams almost. And you can think of that also like a sandwich panel or a sandwich beam. So you've got two stiff top and bottom pieces, and then you've got these kind of webs that separate them, kind of like a honeycomb core would be. So that's another example of a leaf that has the sandwich-type structure. And this is very common in these Monocotyledon leaves. So if you think of a cattail or you think of an iris, they tend to be kind of narrow at the base, maybe an inch or two wide at the base, and they can be quite tall. The iris leaves can get two or three feet tall. The cattails can get five or six feet tall. And they stand up more or less straight. They bend over a little, but they stand up more or less straight. And this sandwich structure is one of the things that lets them stand up straight at a fairly low weight. And from the plant's point of view, there's a sort of metabolic cost associated with making more material. So if we can minimize the amount of material, it's a better thing for the plant. These are some other examples of grasses that are sandwich-type constructions. This is from some papers by Julian Vincent. And the little black circles here are the sclerenchyma, those sort of dense fibers. Then you can see in both of these cases, the dense fibers are on the outside, and the parenchyma cells, which are the white, are on the inside. And so this is sort of another set of micrographs of the iris. So this is just showing the outside, and these are the ribs viewed from the outside. And this is the core, just sort of viewed along the length of it. And so you can idealize the structure as being like a sandwich that's got sort of fibers on the top and on the bottom. So the top and the bottom are like a fiber composite. And the middle part, with the parenchyma cells, is kind of like a foam. And so we did a little project on iris leaves, and we wanted to see if you could show that they behave mechanically like a sandwich beam. So you remember that we had that equation for the deflection of the sandwich beam. There were two terms. There was a bending term, and then there was a shearing term. And so we took some little sandwich beams. We cut little kind of rectangular beams. We hung little weights. We measured how much they deflected, and we wanted to see if we could use this equation to predict their stiffness and how much they deflected. So to do that, we needed to know a bunch of things. We needed to know some of the geometrical parameters. So we needed to know what volume fraction of the face is those solid ribs, how thick's the core, how thick's the face? And so we measured a bunch of these geometrical parameters. We tested it like a cantilever so we knew what B1 and B2 were for the cantilever. We knew how long the beam was, so we knew what l is. We knew what loads we applied, so we knew what P was. But we needed to make some estimate of what the face modulus was and what the core shear modulus was, too. And so we made some estimates of that. So this table here just shows some of the dimensions of the leaf. The leaf tapers, and this is at the thin end, so here's the face thickness. Here's the sort of length of the little square cells in the face. This is the core thickness here. This is the dimensions of the core cells.
This is the diameter of the ribs, the spacing of the ribs, the volume fraction of solids in the ribs. And we did that at different positions along the length of the rib, or the length of the leaf. So we had the geometrical parameters, but we needed to get this E of the face and G of the core. And to do that, we looked at the literature. And people had done tests on the fiber parts of leaves. They'd done little tensile tests, and they'd measured moduli between about two and 20 gigapascals. And then we did some tension tests on the iris leaf. And in tension, those ribs are going to take most of the stress. And if you know the volume fraction of the ribs, you can back out what the stiffness of the ribs must have been. If you know the stiffness of the ribs, you can figure out the stiffness of the face. So we calculated that, and then we looked at the literature. And people have done tests on parenchyma cells and different types of tissue on things like apples and potatoes and carrots. And these are the values for the Young's modulus they get. They're between about 1 megapascal and, at the highest, 14 megapascals. But most of these values for the Young's modulus are around about four. And the shear modulus is roughly about half of the Young's modulus. So we said the shear modulus was around two. So we had these values we could plug in and then calculate what the stiffness would be for the iris leaf. And so this was a little analysis we did. So this was the measured beam stiffness up here. We had four beams, and they were different stiffnesses. They all had the same length. They all had the same face thickness. The core thickness varied. They all had the same width. We cut them to have the same width so we could calculate a flexural rigidity. That's the EI equivalent. We could calculate the bending deflection term, the shear deflection term. And this is the calculated beam stiffness. And then this is the ratio of the calculated over the measured. So it's not exactly right. Obviously, there's some difference here. But it's in the same order of magnitude. It's in the same ballpark. And one of the complications that we didn't really try to take into account was that the leaf isn't a nice rectangular structure. The leaf has this kind of curved cross-section to it. And we made a bit of an approximation to that, but it wasn't that close, really. We could have probably done better on that. But I think the idea that the iris behaves like a sandwich is a reasonable one. So that was the iris leaf. And then I wanted to show you some other structures in nature that are sandwiches. So this is a sea kelp, like a seaweed thing, in New Zealand. This is the largest intertidal seaweed. The fronds, the sort of long pieces of it, are up to 12 meters long. So that's almost 40 feet. So 40 feet is probably like from one side of this room to the other side of the room. It's quite long. And you can see, if you look at this section here, this is all like a honeycomb-type section here. And the honeycomb is like a honeycomb in a sandwich, and the top and the bottom faces are like the faces of the sandwich. So this would be like the face here. That would be the honeycomb core. And that would be the other face on the other side over there. And those honeycomb-like cores, apparently, have some gas-filled pockets that then provide buoyancy to keep the whole thing floating. So it photosynthesizes. So one of the things about these leaves is that they have multiple functions.
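To make that bending-plus-shear calculation concrete, here is a minimal sketch of it for a tip-loaded sandwich cantilever. It uses the standard thin-face approximations for the equivalent flexural rigidity and shear rigidity, with B1 = 3 and B2 = 1 for a cantilever with an end load; the numbers below are made-up, iris-leaf-scale values for illustration only, not the measured ones from the table.

```python
# Hedged sketch of the two-term sandwich-beam deflection (bending + shear)
# for a cantilever with a tip load P. Thin-face approximations assumed:
#   (EI)_eq ~ E_f * b * t * c**2 / 2      (faces carry the bending)
#   (AG)_eq ~ b * c * G_c                 (core carries the shear)
#   delta   = P*L**3 / (B1*(EI)_eq) + P*L / (B2*(AG)_eq)
# with B1 = 3, B2 = 1 for a tip-loaded cantilever in the usual convention.
# All input numbers are illustrative only, not the measured leaf values.

def sandwich_tip_deflection(P, L, b, t, c, E_f, G_c, B1=3.0, B2=1.0):
    EI_eq = E_f * b * t * c**2 / 2.0   # equivalent flexural rigidity
    AG_eq = b * c * G_c                # equivalent shear rigidity
    bending = P * L**3 / (B1 * EI_eq)
    shear = P * L / (B2 * AG_eq)
    return bending, shear

if __name__ == "__main__":
    # made-up, iris-leaf-scale numbers (SI units)
    P = 0.05       # N, small hung weight
    L = 0.10       # m, cantilever length
    b = 0.010      # m, beam width
    t = 0.0002     # m, face thickness
    c = 0.001      # m, core thickness
    E_f = 5e9      # Pa, effective face modulus (fiber-dominated)
    G_c = 2e6      # Pa, core (parenchyma) shear modulus
    db, ds = sandwich_tip_deflection(P, L, b, t, c, E_f, G_c)
    print(f"bending term: {db*1000:.2f} mm, shear term: {ds*1000:.2f} mm, "
          f"total: {(db+ds)*1000:.2f} mm, stiffness P/delta: {P/(db+ds):.1f} N/m")
```

Even with made-up numbers you can see the point of the two-term formula: the shear term through the compliant parenchyma core is small here but not negligible, and it grows in importance as the beam gets shorter.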
It's not just that they have to have a certain stiffness so they don't fall over. The plant wants to photosynthesize, so you want to maximize the surface area as well, and you want to have exposure to the sunlight. So there's a number of things that the plant's trying to do in having this structure. So that seakelp is one example. These are skulls from birds. And so this is a pigeon here. This is a magpie. If you come from the West you see magpies out West. You see them in Europe as well. And this is a long-eared owl. This long-eared owl's around here. And I brought in a couple of bird skulls as well. And you can see that all of those birds skulls are sandwich structures. The one for the pigeon has sort of a foam-like core here. And you can see that the two faces aren't sort of concentric for the pigeon skull. They sort of not following each other. But here, this would be, say, on the top shell of the magpie, where the two, the inner and outer face, are sort of concentric. Then you get these kind of little ribs of trabecular bone in between them, and then the same with a long-eared owl. You get these little ribs in between them. And so you can see that there's a sandwich structure there. And obviously, birds want to be light. They have to be light to fly, to take off, and so they want to be light. So I've got two skulls here. And I'll pass them around. Please be careful because they're kind of delicate. This one is from a screech owl, and you see screech owls around here. This was a screech owl that had an intersection with a car. Yeah, so the skull fractured, but you can see the sandwich right there. You see the two little bits? So you can see the inner plate and the outer plate and the foam, the trabecular bone. So that's the screech owl. And this is a red tail hawk. So you can't really see the shell and the sandwich structure here. But I want to pass it around just so you can see how light it is. So it's amazingly light. So a red tail hawk is probably about this big, something like that. And this is one of the things that makes them very light. So those are the bird skulls. Oh, yes, so now I have to tell you about the owl. So I think the people in 3032 have heard this before. But the other people haven't. So one of the things about the owl is if you look at the whole skull, if you look at this picture here, one of the things is that this bone here is not symmetrical with that bone there. Normally, when you think of a body, you think of the bones being symmetrical. But those bones are not symmetrical, and those bones are near where the ear is. And it turns out on owls, at least on some owls, the ears are at different heights on their heads. And people think that one of the things that allows the owls to do is it allows their hearing to sort of pinpoint where something is. And owls can catch little creatures at night, but they can also catch little creatures underneath the snow. So they can catch things that they can't even see. And they have a number of adaptations to improve their hearing, but this is one of them. So here's a little owl Allison Curtis is a Canadian friend who lives in northern Ontario, and this is looking out of her living room window. And that's a barred owl. And you can see the barred owl has caught this little vole here. And you can see in the background it's winter in Canada. and there's snow all over the place. So this owl has probably caught that little vole underneath the snow. And then it's come to eat it. 
And this is another picture of-- you can see this is where an owl landed in the snow. It's wings hit the snow, trying to catch something underneath. And this is another kind of beautiful print of the owl's wings hitting the snow in the winter time. So did I show you the fox video? Should I show you the fox video? You saw it, right? I think I showed it last time in 3032. But you guys haven't seen it. Let me show you the fox video because foxes do the same kind of thing. Their ears are the same as ours. They're in the same position. But they have this-- let me see. Where's the sound thing? We don't really need the sound for this, but there's BBC sound. So we get this music, even though the fox can't hear the music. Here we go, fox no drive. Check this out. Is it going to come up? Is that going to play? OK [VIDEO PLAYBACK] -It listens for the tiny sounds of its prey moving about below. PROFESSOR: So you see how it cocks its head, and it does this with its head? It's putting its ears at different heights when it does that. So check this out. And look carefully, you can see the little animal it's got in its mouth when it comes out. There's a little tail. So part of the reason dogs and foxes and coyotes do that thing, I think, is because they put their ears at different heights, and it helps them pinpoint where something is. [END PLAYBACK] You know I love these Nature videos, right? So that's the fox video. Let me see if I can stop that. So that's one of the interesting things about owls. Let me go back to my little PowerPoints. So here's another example of a creature that has a sandwich-type structures. So here's the sandwich here. Here is the ever so charming looking cuttlefish. And the cuttlefish is not actually a fish. It's a mollusk. So it's related to things like octopus, things like that, and squids. It's a cephalopod. And you can't see it so well in this picture, but I'm going to show you something else and you see it. It's got like little tentacles. These things here are actually separate little tentacles. And because it's not a fish, it doesn't have like fins that can kind of swim with. And it's got this thing called the cuttlefish bone. And this is a cuttlefish bone here. And that bone has the sandwich structure here. And it's not actually a bone. It's really a shell. It's a calcium carbonate thing, not a calcium phosphate thing. But the cuttlefish can control how much air goes into those little pockets. And it can control its buoyancy by controlling how much air goes into those little pockets. And I brought with me a cuttlefish bone. Have you ever owned like, I don't know, like a parrot or a pet bird? Apparently, pet birds love to sharpen their beaks on this cuttlefish bone. So if you go to a pet store, you can buy this stuff. So you won't be able to see the little sandwich structure because it's a very small length scale. But you can kind of see there's a sort of different material on the inside than there is on the outside of that. So do people know the other thing that cuttlefish are famous for, besides the bone? Change colors. Can I show you a video of cuttlefish changing color? Yeah, of course. So let me get rid of this again. Go back to this. Let's see, somewhere-- where's the cuttlefish? Here we go. Did I do it? Is it thinking? Here we go. Where's the cuttlefish? So this is another one of these Science Friday videos from National Public Radio with Flora Lichtman. [VIDEO PLAYBACK] -OK, let's play a game. [GAME SHOW MUSIC PLAYING] [APPLAUSE] PROFESSOR: See it? 
-Biologist Sarah Zielinski took these shots. And if you needed a helping hand to find the cuttlefish, don't feel bad. -I've certainly taken photos in the past then come back to look at them and gone, I'm sure there was a cuttlefish in there somewhere! -These cephalopods are master camouflagers. But while they're hiding their body, they're revealing something about their mind, or at least their visual system. -In very simple terms, they can tell us what they can see by the body patterns they produce on their skin. -They produce these body patterns by expanding or contracting chromatophores, these little ink sacks on their skin. And they use different displays for different reasons, like for male-to-male combat. -Two males will turn into each other and pass these kind of waves of dark chromatophores over a really bright sort of iridescent stripey body pattern and somehow solve these combats. Eventually, one male gives up and goes away. -And then there's this unsolved mystery. It changes color when it grabs a snack. -That doesn't make perfect sense because it seems to make it very conspicuous. So one theory is that it's just a happy signal of how excited it is to have caught something, some response that it doesn't have any control over. -But most of the time they seem to be using their chromatophores more intentionally, primarily to blend in. -Because otherwise they're more likely to be eaten, so it's very important they don't make mistakes about ambiguous visual information. -And ambiguous visual information is specifically what Zielinski's interested in. So here's the experimental setup. Print out laminated patterns, like this checkerboard, and stick them in a tank. -And we place the animals in the tank. And we record the body patterns that they produce. -You're seeing them on squares, but they do the same thing on top of circles. They produce-- - --the disruptive pattern, where you get these blocky components of high-contrast components. -But when you put a cuttlefish over squiggles, it produces-- - --a sort of mottley pattern, where you get these little groups of dark spots showing across the body. -So what happens when you put a cuttlefish on something in between, when you put them on incomplete circles? When we see something like this, our visual system likes to fill in the blanks, something we do constantly, Zielinski says. -The reason why cartoons and sketches work is because we can recognize objects based on their edges alone. -And we can identify objects even if they're broken up or-- - --have an object that is occluded by another object. That's no problem for us. We can still work out what the object is most of the time. And I was interested to know whether cuttlefish can solve similar problems. -And Zielinski and colleagues report this week that cuttlefish do seem to-- - --fill in those gaps and interpret those little segments as a whole circle. -Or anyway, the broken circles prompted the same camo pattern as full circles. So if you're wondering, uh, I see these as circles, too. What's the big deal? The weird thing here is that there's no reason why cuttlefish, which are-- - --invertebrates, and they're in the same group as slugs and snails. - --should see the world the way we do. -Yes, it's like they're alien, but we also seem to have so much in common with them. -So the next step? -Because we can't share the perceptive experience of a cuttlefish, it's hard to know exactly what it is that they're doing to fill in that missing information. 
And I want to try to get a better grasp on that and also see whether they actually respond to true illusory contours. -So you're going to show optical illusions to cuttlefish? -(LAUGHING) That's what I'm hoping to do, yes. [END PLAYBACK] PROFESSOR: So let's go back to sandwiches. I think I have-- do I have one more? There we go. So horseshoe crab shells, so different sorts of arthropods, the shells are sandwiched too. This is from Mark Myers' work. So we're looking at the cross-section of a horseshoe crab shell. So again, it's the same idea-- the animal wants to minimize the amount of material or minimize the weight, and this is a way of doing that. And I went to the Galapagos about a year ago. And there was a place where they had these giant Galapagos tortoise shells. And one of them was broken, and you could see there was a sandwich structure in the Galapagos tortoise shells. These Galapagos tortoises, their shell is like this big. They're gigantic. They're huge. So those are my examples of sandwich panels and beams and shells and whatnot in nature. So the idea is that nature too wants to minimize weight and minimize the amount of material, and the sandwich structure is a way of doing that. So I have one more thing I wanted to talk about today. So this isn't quite sandwich structures, but it's looking at another kind of natural structure that is designed to reduce the weight of plant stems, in this case, palm stems. And there's a couple of interesting things about this. So when you look at palms, like let's pretend we're not in Boston. We're in California, where they have palms. And we're in LA, and they don't have winter. And if you look at the palms growing, when the palm's short, it's about this big in diameter. And as it gets taller and taller, the diameter doesn't really change. It gets taller and taller and taller, but the diameter doesn't change, at least in some species. Whereas if you think of a tree, a tree starts out with a little skinny diameter. And as the tree gets taller, the diameter gets bigger. And it sort of tapers and does that whole thing. So palms don't do that. And palms are not trees. They're a botanically different thing from trees. So here's a coconut palm. And so the question is, as the stem gets taller and taller, how does it resist the bending loads that get bigger and bigger? So probably, the main load on these sorts of things is from the wind. And often these plants are in areas where they have hurricanes. And you see them in hurricanes, you see the pictures of the palm stem blowing way over. And so how do they resist the larger internal stresses as they get taller and taller, if the diameter doesn't get bigger and bigger? And the way they do that is that they deposit additional layers of cell wall as the plant ages. So if you think of a tree, when a tree grows, it just deposits more and more cells. And the cells have roughly the same thickness. So there's ones that are deposited in the spring have thinner walls. Then the summer and the fall have thicker walls. But more or less, it's similar. Whereas the palm, it deposit cells, and then as the trunk of the palm gets taller, as the stem gets taller, it deposits more layers on the cell wall. So this is an example in an SCM. You can see here this is a young cell, and it's got-- this one that's not marked is a primary cell wall, and then this is the first layer of the secondary cell wall. And then this is an older palm. And you can see here it's got more layers, and so the cell wall itself has gotten thicker. 
So that means that the density of the tissue changes as the palm ages. And it does so in a very kind of clever way. If you think of the palm as being like a cantilever that's vertical and it's bending in the wind, when we have a cantilever beam or any kind of beam, the stresses are going to be biggest on the periphery, right? They're going to be biggest on the outside. And if you think of the palm as having a circular cross-section, that outer periphery is going to see the biggest stresses. So it would make the most sense if that was the densest tissue. And that's exactly what the palm does. So there was a nice study done by Paul Rich quite a number of years ago. And he studied palms in Central America and looked at the density and measured the mechanical properties. And I'm going to talk about his stuff today. So the white is the low density. The gray's the medium, and the black's the high. So you can see the low density's on the middle of the young stem, and just at the very base and then the periphery is the dense tissue. But as the stem gets taller and gets older, then stuff that was low density is now high density. And only the very middle here is the low density. And that some stuff that was low density has turned to middle density. And some stuff that was low density has turned to high density. So it's done this by adding more and more layers to the cell wall, making the cell wall thicker and making the cells themselves denser. So this is looking just at a single palm. So each one of these lines is a single palm. And this is looking at how the density changes from the periphery to the center of the palm. So if you cut the palm down and, say, we take a little sample radially from the middle to the outside or from the outside to the middle, he then measured the density. And it's probably easiest to think about the dry ones because that's kind of what you would compare wood to. So the dry densities varied from about one gram per CC, that's about 1,000 kilograms per cubic meter, down to almost zero in this particular species here, probably like 50 or something like that. And if you compare this with woods, this little arrow here is the density of most common woods. So if you looked at pine and spruce an oak and maple and ash and hickory, they would all be in that little range there. So a single palm stem can have a bigger range of densities than many different species of wood. So it has this kind of profile of the density. And the thing I was interested in is seeing how mechanically efficient that was to put the denser material at the outside. So I looked at the stiffness of the palm, and I also looked at the strength. So I just replotted that data on this slightly different axes here. So this is the radial position relative to the outer radius, and this is the density. And I subtracted off the minimum and then took the range. And for this species here, the minimum density was almost zero. So this expression simplifies to something like that. And just because it's mathematically simpler, that's what we're going to look at. So the density goes roughly as the radius squared. And Paul Rich also did a lot of mechanical tests on the palm, and he took out little beams of different densities. And he measured the stiffness and the strength of the beams. So he measured the modulus of elasticity here versus density. And he measured the modulus of rupture here. And these are all along the grain. 
And he found that the Young's modulus varied with the density to the 2.5 power, and the strength varied as the density squared. And if the-- [BUZZING SOUND] Oh, hello. [LAUGHTER] So these were just sorts of empirical findings that he made. If you have prismatic cells and you deform them axially, and the cell wall was the same in the different specimens, then the solid modulus would be a constant. And you would expect that the modulus of the beam would go just linearly with the density, sort of like a honeycomb loaded out of plane. But what he measured was that the modulus and the strength varied with some power of the density. And the reason for that really was that the cell walls of the denser material had more layers. And in the additional layers, the cellulose microfibril angle was probably different, so that the different layers had different stiffnesses. And if you have layers with different stiffnesses, then you're going to get this power relationship. So what I then did was I took his data, and I tried to see how efficient that would be in bending. So he had found that the density varied with the radius raised to some power. This power n was 2, but I wanted to do it just for a general case, so I said it was just n. And he said that he found that the modulus varied with the density raised to some other power m. And for him, m was 2 and 1/2. And so I could write just another equation saying that the modulus goes as the radius to the mn power. And then you could do a little calculation where you work out what the equivalent flexural rigidity is. So you have to integrate up. You kind of say you have a little band at a certain radius. That radius has a certain modulus. And you can figure out the moment of inertia that goes with that particular radius. And then if you integrate it up over the whole thing, you can say that the flexural rigidity for the gradient density is some constant times pi times the outer radius to the fourth power divided by those two powers mn plus 4. So m was the power here for the modulus. And n was the power there for the density. And then you could compare that with having the same mass just uniformly distributed over the whole cross-section. And then if you take the ratio of the flexural rigidity for the density gradient versus the flexural rigidity for the uniform density, you can show that it's this equation here. And then if you plug in these measured values for those exponents for n and m, you find that the flexural rigidity with the gradient density relative to the uniform density is a factor of 2 and 1/2. So the stem is 2 and 1/2 times stiffer by having that density profile. So there's a huge sort of mechanical advantage to doing that. And just sort of physically, if you know the stresses are biggest on the outside, it would make sense to put the denser material on the outside. And then the other thing I looked at was the strength of the palm. So imagine this is our very schematic palm here, and then there's a circular cross section. So I wanted to compare the bending stress distribution with the bending strength distribution. So the stress goes as the modulus times the strain, just Hooke's law. And here we're assuming that plane sections remain plane, like that's the standard assumption of bending. So if you assume plane sections remain plane, then the strain goes with the curvature times the distance y from the neutral axis, the distance from the middle. So this distance here would be the distance-- say the stem is loaded with a load P here.
That distance would be y there. And then I can plug in some things here. So instead of E, I'm going to plug in my relationship with the radius to that mn power. And here's my curvature, and instead of y, if I say we're at some radius r, I'm going to say y is r cos theta. And so I'm going to say that the stress goes-- [SNEEZE] Bless you. Goes as radius raised to some power mn plus 1. And again, for the species I know what n and m are, so the stress goes as the radius to the sixth power. And then I can also compare with what Paul Rich had found for the strength. He found that the strength-- so sigma star is the strength-- was proportional to the density raised to some power q, and that power was 2 in the measurements that he made. And so I can say that the strength goes as the radius to this power nq, so to the fourth power. And then if I plot the stress distribution and the strength distribution-- so imagine, this is through the cross-section here. So this is the diameter of the stem. And this is the neutral axis here in the middle. The strength goes as that solid line there. It goes as the fourth power. And the stress goes as that dashed line there, as the sixth power. So they're not exactly on top of each other, but they're very close to being on top of each other. So basically what the palm has done is it's arranged the material in such a way that the strength matches the stresses that are applied to it. So if I just had a constant density, my stress profile would look like that. And if I had a constant density, the strength profile would look kind of like that. So the strength here would be a constant, and this would be the stress here. So the stuff in the middle, it's much stronger than it needs to be. Whereas the palm has arranged things so that it's got just the right amount of strength for the stress, as a function of the radial position. So it's kind of a clever thing. So that's kind of a beautiful thing. And I think that is it. I think that's-- yeah, that's the end of it. So all these images came from this other book that we wrote. And if you wanted to get the sources, you could get them from there. So that's all I wanted to talk about today-- some examples of sort of efficient mechanical design in nature, and the sandwich panel structures as one, and these radial density gradients as another. We have a project on bamboo right now, and the bamboo also has a radial density gradient, and it's the same thing. The densest material's on the outside, and the least dense is on the inside. So I think I'm going to stop there for today. So what I was going to do on Monday is talk a little bit about bio-mimicking. And that won't take the whole class at all. And I thought we could spend the rest of the class on Monday just doing a review. So the test's on Wednesday. So if you want to bring questions, that would be a beautiful thing. I can't really review the whole last six weeks or something in an hour and a half. So if you want to bring questions, I'll be here and we can just go over questions. Does that sound good?
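[A quick numerical check of the palm-stem calculations above -- a minimal sketch in Python, not part of the lecture. It assumes only the scalings quoted above (density going as r to the power n with n = 2, modulus going as density to the power m with m = 2.5, strength going as density to the power q with q = 2) and reproduces the factor of about 2.5 for the flexural rigidity and the exponents of 6 and 4 for the stress and strength profiles.]

```python
# Check of the palm-stem numbers quoted in the lecture (a sketch, not Rich's analysis).
# Assumptions: density rho(r) = rho_0*(r/a)**n over a circular section of outer
# radius a; modulus ~ density**m; strength ~ density**q.
n, m, q = 2, 2.5, 2   # exponents quoted in the lecture

# Flexural rigidity with the radial gradient:
#   (EI)_grad = integral of E(r)*y**2 dA = (constant)*pi*a**4/(m*n + 4)
# Same mass spread uniformly: rho_uniform = 2*rho_0/(n + 2), E_uniform ~ rho_uniform**m,
#   (EI)_uniform = E_uniform*pi*a**4/4
# In the ratio, the outer radius a and the material constant cancel:
ratio = (4.0 / (m * n + 4.0)) * ((n + 2.0) / 2.0) ** m
print(f"(EI)_gradient / (EI)_uniform = {ratio:.2f}")   # about 2.5

# Bending stress at the extreme fibre goes as r**(m*n + 1);
# bending strength goes as r**(n*q) -- the two profiles nearly coincide.
print("stress exponent  m*n + 1 =", m * n + 1)   # 6.0
print("strength exponent  n*q  =", n * q)        # 4
```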
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
Lecture_Preparation.txt
LORNA GIBSON: So really the way I set up the lectures is I write out notes for myself of what I want to cover, and the notes are pretty detailed. And in the past I always just had the notes for me. And even though they were reasonably neat and I could read them, I didn't hand them out to the students. But because I've turned my fall course into an MITx course and I want to make the lecture notes available for that, when I was doing that course for MITx I made really nice notes. And a friend of mine who lectures, I was asking her how she did it, and she said she actually goes and measures the chalkboard, and she measures the aspect ratio, how tall to wide the chalkboard is. And then she sets up her notes so that they're the same aspect ratio, and she plans out exactly where she's going to put everything on the board on her notes. And so I started doing that. And when I'm doing the lecture I put it all on the board, and I find a lot of students-- even though I hand the notes out now-- they like to write their own notes. And I think it helps them pay attention in class and helps them kind of focus on the material. So another thing I do in the lectures, when I first started lecturing and for a long time I just focused on the engineering, you know, on the equations, this is the derivation, this is an example, this is how you use this. And it was all just about kind of the engineering of whatever I was working on. And I found over the last few years, actually in both courses, in the fall one on mechanical behavior and in the spring course on cellular solids, I look for more interesting examples and stories, kind of stories about the people who discovered some of the principles that we talk about. I tell them stories about engineering situations that came up and there was some interesting thing happened. And the students love it. I mean, they really like having the kind of hard core mechanics broken up with some sort of stories. So I do that a lot more now than I used to do. So probably most lecturers have some kind of interesting example or historical thing that I talk about. So for instance, when I teach the fall course, the mechanical behavior materials, one of the first things we talk about is stress. So stress is a force per unit area. If I take this piece a wood and I pull on it like this I'm pulling on it with a force that goes out like this. Stress is just that force divided by that area. And the unit of stress in the SI system is called the Pascal. And it's named after Blaise Pascal who's a French mathematician. And a couple of years ago I was in France for a conference and I was in a little town called Clermont-Ferrand, which is in the middle of France. It's a pretty little town. And I'm just walking around one day kind of seeing the square and the cathedral and all this stuff, and I see there is sign, there's like Pascal something or another. And this was apparently the site of Blaise Pascal's house. And right next door to it is Cafe Pascal. So of course, I have to take a photograph of Cafe Pascal. And in this other course, when we get to the bit about stress and I tell them about the Pascal I show them the picture of the Cafe Pascal. And the other amusing thing that they kind of get a kick out of because of Boston, you know how in Boston there's the Freedom Trail and there's a red line goes around all these historical sites all this colonial and revolutionary stuff around Boston. It's kind of cool, all the tourists to it. Well in Clermont-Ferrand there's a Pascal Trail. 
And there's little metal medallions of the portrait of Blaise Pascal put in the sidewalk and you can walk around Clermont-Ferrand doing the Pascal Trail. So I kind of keep an eye out for stuff like that and I put that into the course now, and I never used to do that kind of stuff. I have a picture from the Library of Congress, which I went to just for fun a while ago. And one of the main buildings is this old, beautiful historical building for the Library of Congress. And they have a marble staircase that goes up the middle. And the staircase has all these little cherubs. And there's an agriculture cherub and he's holding a sheaf of wheat. And there's a wine cherub and he's holding a little thing of grapes. Well, it turns out there's a mechanics cherub and he's holding a gear. So at the end of the first lecture I show them the mechanic's cherub with the gear. So there's these cute little things. So I just keep an eye out for these cute little things. And last fall, this past fall, one of the students came up at the end of the first lecture and he said, I really like art. And he says, is there going to be more art in the class? And I'm like, not usually. This has kind of exhausted my art in mechanics. And I said, but I'll keep an eye out. And through the term there actually were different things that I saw that had to do with mechanics and art I showed the class. So one of them was at the Peabody Essex Museum this fall there's been and display of the mobile sculptures of Alexander Calder. And they also made these very large, I think they're called stabile sculptures, as well. The big sail at MIT is one of his sculptures. And these mobiles are actually a really nice example of free body diagrams in mechanics and balancing the forces. So anyway, I got a picture of one of his mobiles and I went up to the exhibit. And when I was at the exhibit I found that he did a degree in mechanical engineering. He actually was a mechanical engineer. And so the students were very, very tickled by the sculpture, the fact that he studied engineering. So anyway, I look out for stuff like that.
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
13_Tissue_Engineering_Scaffolds_Processing_and_Properties.txt
NARRATOR: The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. LORNA GIBSON: So last time, we finished talking about trabecular bone. And what I wanted to talk about this week was tissue engineering scaffolds. So the idea with tissue engineering is you want to be able to repair damaged or diseased tissue. And typically, that's done by regenerating the tissue in some way. So in your body, many types of cells, like maybe not blood cells, but most types of cells for sort of structural tissues are attached to an extracellular matrix. And this is sort of a schematic of the extracellular matrix here. And the composition depends on the type of tissue that's involved. For example, in skin, it would be collagen and something called glycosaminoglycans and elastin. In bone, it would be collagen and a mineral-- a calcium phosphate mineral, hydroxyapatite. So the composition varies. But the idea with the tissue engineering scaffold is that you want to make a material that essentially substitutes for the extracellular matrix. And it does so on a sort of temporary basis. So the idea is you put something in the body. The cells attach to that, whatever scaffold you put in. And the scaffold has to be made in such a way that the material-- that the cells, they can migrate through it. They can attach to it. They can differentiate. They can proliferate. So the cells can do all their normal function. And the idea is that as the cells are doing their normal function, they then secrete the natural extracellular matrix. And the engineered thing that you put in is resorbed. So there has to be a balance between the rate at which the scaffold you've made resorbs and the rate at which the cells are depositing the new native extracellular matrix. So that's one of the key things about this. So this is just an example of sort of a schematic of an extracellular matrix. In this case here, there's collagen fibers. So these guys here collagen fibers. And these kind of hairy-looking things are proteoglycans. So they have a core of protein with sugars kind of hanging off of them. And there's different kinds of GAGs, they're called, that hang off of them. So one's chondroitin sulfate. The CS here stands for Chondroitin Sulfate. There's one called dermatan sulfate. And there's one called heparin sulfate. So there's different of these glycosaminoglycans. So let me write down some of this, and then we'll kind of get into this. So what I wanted to do today was show you some examples of tissue engineering scaffolds. Show you some of the sort of design requirements. So you have to have, obviously, a material that's porous so the cells can get in there. And that's where the cellular solids comes in. So we'll talk about some scaffolds, some of the sort of design requirements for them. And also, we'll talk a little bit about processing of the scaffolds and mechanical properties of the scaffolds. So I'm hoping I can finish most of that today. And then next time, I have a little case study on osteochondral scaffolds. So osteo means bone. And the chondral means cartilage. So this was sort of a two-layer scaffold that we developed in collaboration with some other people at MIT and some people at Cambridge University. And it went from a research thing to a startup thing and being used clinically. 
So I was going to talk about that Wednesday. OK. So let me just get started here. So the goal of tissue engineering is to regenerate diseased or damaged tissues. And in the body, the cells attach to the extracellular matrix. And that's sometimes called the ECM. Sometimes, the scaffolds are called scaffolds. And sometimes, they're called matrix because the extracellular matrix is called matrix. So then, the composition of the ECM depends on the tissue, but it usually involves some sort of structural protein. So something like collagen or elastin. It also typically involves some sort adhesive proteins. So something like fibronectin or laminin. And it involves these proteoglycans, which are the core of protein with a sugar hanging off them. Whoops. And the sugars are typically glycosaminoglycans. And for short, people call them GAGs, just because it's easier to say. So some examples of the GAGs are chondroitin sulfate. And we're going to talk more about that a little later because that's one that we've used in making collagen-based scaffolds with [INAUDIBLE]. So for example, if you look at the composition of extracellular matrix in something like cartilage, it has a collagen component and a hyaluronic acid component and some GAGs. And this hyaluronic acid is a proteoglycan. If you look at bone, extracellular matrix in bone, it's made up mostly of collagen and hydroxyapatite. And if you look at skin, skin is made up of collagen, elastin, and proteoglycans. And the idea is that the cells have to be attached to this extracellular matrix in order to function. Or, they have to be attached to this or to other cells in most cases. So next week, I'm going to talk a bit about cell mechanics. And we have some video where we watch a cell deforming a scaffold. And I don't have videos to show you, but I had a student who took videos of cells migrating along with scaffolds. We'll look at how the stiffness of the scaffold affects how the cells migrate along the scaffold. So that's kind of the native ECM in the body. And the idea with tissue engineering is that you want to make a scaffold that's porous that mimics the extracellular matrix in the body. So, say, there's some tissue that's damaged. Say there's damaged cartilage. You want to provide sort of an extracellular matrix that's a sort of synthetic thing that's going to provide the same function as the ECM in the native tissue. So people have been working on scaffolds for regenerating all sorts of different tissues. And probably, the most successful one has been used to regenerate skin. And there's been scaffolds available for regenerating skin for probably almost 20 years now. And one of the first ones was developed by Professor Yannas in mechanical engineering here at MIT. And it's actually still sold by a company called Integra. But this research to develop scaffolds for lots of different tissues, orthopedic tissues, things like bone and cartilage, cardiovascular tissues, nerve-- like Professor Yannas works on peripheral nerve these days. People have looked at trying to make scaffolds for gastrointestinal tissues. So all sorts of different tissues. And at MIT, there's quite a lot of interest in this. There's a lot of people working on it. So Bob Langer works on it. Linda Griffith. Sangeeta Bhatia. There's really quite a number of people at MIT who work on it. Those are just some of the people at MIT. So the idea is in the body, the cells are constantly resorbing the extracellular matrix and depositing more. So if you think about bone, for example. 
Remember we said bone grows in response to load? Even in healthy bone, the bone is constantly being resorbed and deposited. And in normal bone, the rate of resorption and the rate of deposition is roughly the same. When people get osteoporosis, the thing that happens is that that balance gets out of whack. And so it's not being deposited at the same rate it's being resorbed. And the idea with the tissue engineering scaffolds is that they degrade over time. And that the cells that were attached to them are forming their own extracellular matrix. So there's kind of a balancing act between the cells depositing the native ECM and the tissue engineering scaffold that was provided, say, by the clinician being resorbed. So the scaffolds are actually designed to degrade. And controlling that degradation rate is one of the design parameters of the scaffolds. OK. So that's kind of the overall, kind of big picture. And what I wanted to talk about next was some design requirements for the scaffolds. So if you think about it, there's different sort of ways you can think about what the requirements are. So you have to make the scaffold out of some solid. And there's some requirements for the solid. So obviously, you want a solid that's biocompatible. That's one kind of main requirement. Another requirement is that not only the solid has to be biocompatible, but when the solid decomposes, if it decomposes into other components, they have to be biocompatible too. So you don't want the solid to degrade during this resorption process into toxic components. That would be a bad idea. And then, the other thing is the solid itself has to promote cell attachment, and cell proliferation, and cell migration, all these kinds of things. So we're going to talk about some different materials for the scaffolds. And some of them are sort of native proteins. So some of them are things like collagen. Collagen already has binding sites for cells to attach to it. Obviously, it's one of the proteins in the native ECM. There's also a number of synthetic polymers you can use. And with the synthetic polymers, they don't have natural binding sites for the cells. And so you have to coat them with something else. So you have to coat them with, say, adhesive proteins so that the cells will attach to them. So we're going to talk about the requirements for the solid. Then, you make the solid into some sort of porous, foamy thing. I think I have some slides here. Here's an example of a collagen GAG scaffold. And this is one of the ones that is made in Yannas' lab. And you can see it looks a lot like a foam. It's very, very porous. And there's some requirements for the sort of cellular structure, the foamy structure of the scaffold as well. So typically, you want interconnected pores so it's easy for the cells to migrate in. Typically, you want pores to be within a certain range. It turns out if the pores are too small, it makes it difficult for the cells to get in. Sometimes, they can by eating away at the material. But typically, the pores-- you want them to be bigger than a certain size. You also want them to be smaller than a certain size because how much specific surface area, how much surface area per unit volume you've got, depends on the pore size. The smaller the pores, the more surface area per unit volume you have. And then, the number of binding sites you have for cells to attach to it depends on that specific surface area. I'm going to write all this down, so I'll do that. 
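[A small illustration of that surface-area argument -- a minimal sketch, not from the lecture. It treats each pore as a sphere of diameter d, which is a rough assumption: surface area per pore scales as d squared while the number of pores per unit volume scales as one over d cubed, so the specific surface area goes as 1/d. The 20 and 150 micron values are just the skin-scaffold pore range quoted in the lecture.]

```python
# Toy model: why smaller pores give more surface area (and binding sites) per unit volume.
import math

def specific_surface_area(d, porosity=0.995):
    """Approximate pore surface area per unit volume (m^2/m^3) for pore diameter d (m)."""
    pores_per_volume = porosity / (math.pi * d**3 / 6.0)  # number density of spherical pores
    return pores_per_volume * math.pi * d**2              # simplifies to 6*porosity/d

for d in (20e-6, 150e-6):  # pore-size range quoted for skin scaffolds
    print(f"d = {d*1e6:5.0f} um  ->  S_v ~ {specific_surface_area(d):,.0f} m^2 per m^3")

# For the same porosity, 20 um pores give 150/20 = 7.5 times the surface area
# of 150 um pores.
```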
So there's requirements for sort of the pore structure. And then, there's also some requirements for the whole scaffold itself. So for instance, it's got to have some minimal mechanical integrity. So there's sort of some requirements for that. So there's requirements for the solid. There's requirements for the sort of cellular structure, the porous structure. And there's requirements for the overall scaffold. So let me write some of these things down. So there's some requirements for the solid. So it must be biocompatible. It must also promote cell attachment and proliferation in the cell functions. And then it must degrade into nontoxic components. So there's some requirements for the cellular structure, too. And what you want to have is a large volume fraction of interconnected pores. And so you want that to facilitate the cell migration. And also, the transport of nutrients into the cells. So you also want the pore size to be within a critical range. So you need the pores bigger than a lower limit so the cells can migrate through easily, can kind of get in there. And you want the pore size to be less than an upper limit to have enough surface area to have enough binding sites to actually attach cells. And for different tissues, there's different critical ranges of the pore size. So for example, for skin they found that you want to have a pore size between about 20 microns and about 150 microns. And for bone, the pore sizes that people tend to use are between about 100 and 500 microns. So there's the pore size. And one other feature is that the pore geometry should be conducive for the cell type. So lots of cells are somewhat equiaxed. Maybe a little elongated. But if you look at something like nerve cells, like peripheral nerve, they're incredibly elongated. So you want to have pores in the scaffold that are also very elongated. And then for the overall scaffold, it needs to have some mechanical integrity. You guys OK? Yeah? We're good. Oh, achy. Yeah. I know. So you want to have some overall mechanical integrity. I mean, the thing has to be put into the body in surgery. And people are going to be pushing and poking at it. And so it has to have some just overall mechanical integrity. Also, it turns out that if you put stem cells into scaffolds, the types of cells they differentiate into depends in part on the stiffness of the scaffold. So you want to be able to control the stiffness of the scaffold. AUDIENCE: How do they do this research? Do they use animals? Or they do it in vitro? LORNA GIBSON: Yeah. So the question is, how do they do the research? So they do a sort of series of different things. So at one level, you could have the scaffold and you'd put cells on it. Say you're making a bone scaffold. You'd put osteocytes onto it. So one level, you just put cells onto it. And you want to see, are the cells attaching? Are they dying? Are they proliferating? So sometimes, what people will do is put the-- they'll seed a certain number of cells at a certain time. Say, time 0. Then, they'll look at how many cells are attached at 24 hours or 48 hours. And you kind of see the cell attachment. You can measure relatively easily. Another thing people do is animal studies. So for instance, Yannas does research on peripheral nerves and scaffolds for peripheral nerves. And they cut a piece out of the sciatic nerve of rats. So obviously, they have a surgeon and do a surgery thing with it. You can't just kind of do this in the lab. You've got to get permissions and stuff to do it. 
And then, they put in the scaffold. And the scaffold's actually in a tube. And so they put the two stumps of the nerve end at either end of the tube, and then the tube's filled with a sort of porous scaffold that we're talking about here. And then, they wait some period of time. And they take video of the rat running, things like that. They then sacrifice the rat and they do histology. And they look at the sort of cross-sections and see what it looks like. And so I'm going to talk next time about this osteochondral scaffold we worked on. So we did cell studies. We did goat studies. We put it into goat knees. There was a longer term sheep study. And then, the student who's in Cambridge, England, started up a company and he ended up getting approval in Europe to start clinical trials. And then he worked with an orthopedic surgeon who started putting it in people. But typically, they're looking at cells, looking at animals before you get to the people stage. And one of the things people do when they're making these scaffolds is you want to use materials that already have some sort of regulatory approval. So say FDA approval or approval in Europe. So typically, people don't start with a brand new material from scratch. Because to get approval for that would just take a very long time. So typically, people start with-- the solid material is already approved for some other sort of use. OK. So one requirement for the overall scaffold is it has to have sufficient mechanical integrity. Sufficient. And then also, as I mentioned, the stiffness of the scaffold can affect differentiation of cells. And the other thing that is really a factor for the overall scaffold is you want to control the rate of degradation of the overall scaffold. So you want that rate to be matched to the rate at which the new tissue is forming. So it has to degrade at a controllable rate. OK. So I want to talk about the materials that people use. And you can kind of break them down into a few classes. So one class is natural polymers. So things like collagen. So you can get collagen. And that's an example up there of a scaffold that's made with collagen. Another class of materials is synthetic biomaterials. And if you've had stitches or surgery, you may know that some of the sutures they use are resorbable. So some of those polymers that they use for resorbable sutures are also used for tissue engineering scaffolds. And then, there's also hydrogels that people use as well. So those are probably the three main groups: sort of natural polymers, synthetic biopolymers, and I guess the hydrogels are sort of a subset of the synthetic biopolymers. So collagen is probably the most common kind of natural polymer that's used. They also use GAGs. And this scaffold up here is made by making a coprecipitate of collagen with a GAG, chondroitin sulfate. People also use alginate. I think one of the project groups-- you guys are going to make some sort of alginate scaffold, right? No? [INAUDIBLE] foamy thing? Yeah. And people also use something called chitosan, which is a derivative of chitin, which is what's in the exoskeleton of insects and things like lobsters. So those would be all examples of natural polymers that can be used and people have tried. I'm going to talk a bit more about collagen, just because it's the most common one. So collagen is a major component in the natural extracellular matrix. And not surprisingly, it has binding sites for cells to attach to it. So if you use that, that kind of takes care of that issue.
Let me put it down here. So collagen exists in many types of tissues. Exists in skin. Exists in bone, cartilage, ligament, tendon. So it's very common. It has surface binding sites for cells. It has a relatively low Young's modulus. So the Young's modulus is a little less than a gigapascal. But you can increase the modulus by either cross-linking or by using it in conjunction with some synthetic polymers. And I'm going to talk a little bit about how you make these scaffolds up here. And the first step in making those scaffolds is you put the collagen in acetic acid, and then you add the glycosaminoglycan and it forms a coprecipitate. And the fact that it forms a coprecipitate with the glycosaminoglycan, the GAG, means that you can use a freeze-drying process. And that's how that's made. Collagen is one option. So then synthetic biopolymers are another option. And as I said, typically they use the materials that are used for resorbable sutures. So there's several of those. There's something called PGA. That's polyglycolic acid. And something called PLA. That's polylactic acid. And then you can combine those two and make something called PLGA, poly(lactic-co-glycolic acid). And you can control the degradation rate of these things by controlling the molecular weight. And in this case, you can also control it by controlling how much of each of those things you put in. And there's another one called polycaprolactone. So those are several synthetic biopolymers that people use. There's lots of different materials, but these are just some typical ones. And then another class are hydrogels, which are produced by cross-linking water soluble polymers to form an insoluble network. And those are typically used for soft tissues. Sometimes, they're used for things like cartilage. And again, there's a few different materials that are commonly used. One's PEG, polyethylene glycol. One's PVA, polyvinyl alcohol. And another one's PAA, polyacrylic acid. So for these synthetic polymers, there's many different processing techniques available. And I'll talk a little bit about some of the processing techniques. But one of the limitations is they don't have natural binding sites. And you have to coat them with some sort of binding agent, like an adhesive protein, to get the cells to attach to them. And then, as I mentioned before, you have to make sure that whatever material you choose, if it's a synthetic material, that when it degrades, it's not toxic to the cells. Because you don't want to have some sort of toxic reaction or inflammation. OK. So there's a couple more things about materials. So those are all polymer-based materials. When people are trying to make scaffolds for bone tissue engineering, they also include a calcium phosphate mineral. And there's different versions of the calcium phosphate. So they can include-- you can buy, for instance, hydroxyapatite powders now. There's another calcium phosphate called octacalcium phosphate, which will, with water, turn into hydroxyapatite. So typically, this mineral, some sort of calcium phosphate, is combined with either collagen or with one of these synthetic biopolymers. And one other option is something called an acellular scaffold. And what that is, is they take some natural tissue and they remove all the cell material from it. And so when they remove all the cell material, what they're left with is the native ECM. And that's called an acellular scaffold. So it's a native ECM with all the cell matter removed.
And they remove the cells by-- they can use sort of a physical agitation or chemical, or using enzymatic methods. Using something like trypsin to get rid of the cells. So there's ways that they can get rid of the cells. OK. So are we good so far? So there are some requirements for what materials we kind of use, what the cell structure should be, and these are some examples of typical materials. So I wanted to talk a little bit about the processing of the materials. Let me wait until people catch up a little. Oh, and I have some scaffolds I was going to pass around. So this big sheet is a piece of the collagen GAG scaffold that I showed a minute ago. And then this little piece is a mineralized version of that. So this has the collagen plus calcium phosphate plus hydroxyapatite in it. OK. So this slide shows some examples of different scaffolds that people have made. And I was going to talk a little bit about some of these methods. And why don't I talk about them, and then I'll write some notes on the board. So this top one here on the top left, that's the collagen GAG scaffold that's made in Yannas' group. And that's made by a freeze-drying process. So you put the collagen in acetic acid, then you put in the GAG. The GAG and the collagen form a coprecipitate. And then you can freeze that. And if you freeze it, what happens is-- it's just like if you freeze saltwater. The water freezes. The pure water freezes. And you've got increasingly higher brine content in the bit in between the water grains, or in between the ice. So you get the sort of solid ice forming. And the collagen and the GAG are kind of squeezed into the interstitial bits between the ice crystals. And then if you sublimate the ice off, you're left with this porous kind of structure that looks like a foam. So that's made by a freeze-drying process. And I'll go over it in a bit more detail when I write the notes on the board. You could also foam some of these polymers. So just blowing a gas. The same way you can blow a gas to make an engineering foam. You do the same thing with some of these polymers. This one's made by foaming. You can have a fugitive phase process. So this is made by salt leaching, the second row on the left there. So you could imagine you could take a polymer powder. You could mix it with salt. You can sort of mix them up, combine them together. You heat it up to get the polymer to melt and to sort of form a connected mass. And then you leach out the salt. And then you get pores where the salt was. This one here is made by an electrospinning process. So you have a nozzle. You feed the polymer through the nozzle. Then you have plates that are charged and you get fibers forming and kind of scattered in different directions by the electrospinning process. Then, this one here represents scaffolds that are made by things like 3D printing, selective laser sintering. You can have a laser-sensitive polymer. And you can produce scaffolds that way. I think the geometry of this one matches some part in the body. I think it was a knee or something like that. And then, these two examples down here are the acellular scaffolds. That's what I was talking about at the very end there. So those are from porcine heart tissue. You know, pig heart tissue. And those are mostly elastin. And they've had all the cell matter removed from them. OK. So you can kind of see that these synthetic scaffolds here have a structure that's not so different from these native ECM scaffolds down here. OK.
So let me write some of the things about the processing on the board. Let me rub this off. Start over here. So these freeze-dried scaffolds are used for skin regeneration. And I think I have some more slides here. So it's kind of a two-step process. In the first step, you make what they call a slurry. So you make the slurry by taking the collagen. And for skin, I think you want type 1 collagen. Yeah, skin is type 1 collagen. There's different types. You put it in acetic acid, and then you add the GAG. And we use chondroitin 6-sulfate is just the particular GAG that we use. And one of the things that the acid does is that it swells the collagen. And collagen has a sort of periodic structure in it, sort of periodic banding. And the acid destroys that periodic banding structure. And that helps increase the resistance to having some host immune response. So that you remove the immunological markers and it makes it less likely that the scaffold's going to get rejected by the body. So then when you put the GAG in, you form a coprecipitate. So this next step just shows kind of mixing the whole thing up. And then you've got kind of a little slurry that you can store. And then, this is the freeze-drying step here. So you put the slurry, the suspension, into a pan. Kind of just like a cookie sheet, really. And then you freeze it. So if you think of this phase diagram here, where you have temperature and pressure. So here we have liquid, solid, and vapor. So if you start off at this point here, you freeze it. So you've reduced the temperature. So that forms the ice. And the ice is surrounded by the collagen and the GAG fibers. And then if you do the sublimation step, you reduce the pressure and increase the temperature a bit. And then you get over to the vapor end of the world. And then you're left with this porous scaffold. And then, let me see. Let me do one more step here. And then you can control the size of the pores by controlling the freezing temperature. So the size of the pores is exactly related to the size of the ice grains that are forming. And the faster it freezes, the smaller the grains are going to be. And then, the smaller your pore size are going to be. So you can control that. The type 1 collagen is mixed with acetic acid. And it then swells the collagen and disrupts periodic banding. And it removes immunological markers. And then you add the GAG, the chondroitin 6-sulfate. And then that cross links with the collagen and forms a coprecipitate. And then you can freeze dry that to get the porous scaffold. And typically, the relative densities of these scaffolds is very low. So typically, they're 0.5% dense. The relative density is 0.005. So they're 99.5% air and the rest of it's the collagen and the GAG. And the pore sizes are typically between about 100 and 150 microns. And Yannas uses the same scaffolds for the nerve regeneration. And he uses a directional cooling. And that then elongates the pores, so that-- the idea is that they elongate so the nerves kind of grow along that length. So that's one way. Another way is leaching a fugitive phase. So let's see. I think-- yep. Here we go. Back to there. So if you look at the one on the second row on the left, that's done by using salt as the fugitive phase. People use other things. You can use wax. Paraffin wax works as well. So it doesn't have to be salt. There's different things you can use. So you combine a powder of the polymer with your fugitive phase Say, salt. Then, you heat it up to get the polymer to bind. 
And then you leach out the salt. So you can control the porosity by the volume fraction of the fugitive phase, and then the pore size by the size of whatever the fugitive phase is. Another technique is electrospinning. The idea is you produce fibers from a polymer solution that you extrude through a nozzle. And then you apply a voltage across some plates to spin the fibers. And then you get a network of these fibers. And typically, they're micron-scale in diameter. And the last method I'm going to talk about is rapid prototyping. So you can think of using 3D printing. Or you could use selective laser sintering or stereolithography using a photo-sensitive polymer. So the idea is-- you know how this works. You just build up layers of solid, one layer at a time. And then you can make complex geometries with that. So that's one of the advantages. If you wanted to make a part to fit a particular place in the tissue, then it's convenient that you can control the geometry of the whole part. OK. So that just kind of summarizes very briefly, kind of how the tissue engineering scaffolds are meant to work, what kinds of materials people make them from, and a few of the processes. And there's many, many processes. These are just sort of a few common processes. I wanted to talk a little bit about the mechanical behavior because that's kind of what I do. And so this next plot just shows a stress strain curve in compression for a collagen GAG scaffold. And I'm hoping that by now you're getting the idea that all of these cellular materials have this kind of shape of a curve. So there's the same kind of linear elastic regime, and then a collapse plateau. These collagen things, as you can imagine just pressing them in your hand, they fail by buckling, by elastic buckling. And then there's a densification regime. So they look like all the other kinds of curves that we've got. One of the things we've done in the modeling is typically in the model, we want to be able to calculate or measure the modulus of the solid or the strength of the solid from which the thing's made. So Brendan Harley was one of my PhD students. And he took a little microscope, cut a little strut. The struts are very small. The pore size is 100 microns. So the struts are on that order. He glued one end of the strut to a glass slide, and then he used an AFM probe to do a little bending test on that little strut. If he could measure the deflection, he knew the length. He knew the geometry of the strut. He could figure out what the modulus was for the solid. So he backs out what the modulus is. So he did these measurements on a dry strut. And then by comparing the overall modulus of a dry scaffold with a wet scaffold, he estimated what the modulus of the wet scaffold or the wet strut would be. So the modulus of the dry collagen GAG was 672 megapascals. A little less than a gigapascal. And wet it was about 5. So there's a huge difference between the wet and the dry. OK. So let me just write a few notes about that. So in compression, there's the three regimes that we see for all these cellular materials. And so you can estimate the modulus by using the model we have for the foam for the modulus. The modulus of the foam divided by the modulus of the solid goes as the relative density squared. And that's related to bending in the cell wall. And then, the collapse plateau is related to elastic buckling. And so that's equal to some constant times E of the solid times the relative density squared. And that's related to elastic buckling.
And then we measured E of the solid doing this little AFM beam bending test. OK. And for one of these low-density scaffolds, we measured the modulus. We measured the buckling strength. And we got pretty good agreement by using these equations here. And the good agreement was if this constant was 0.2-- that's the strain at buckling. Yeah. AUDIENCE: So what does it mean to be wet in here? And why is it so much lower? LORNA GIBSON: Well, we make the scaffolds by this freeze-drying thing. And like this thing I passed around, that was dry. We just immerse it in water. We just put it in water. And then, he does the test. So he does the test on the whole scaffold dry, and then he does the test on the whole scaffold wet. And I assume there's some sort of bonding that gets disrupted by having the water. I don't know the details of how that works, but there must be some change in the bonding to make that happen. So let's see. I'm trying to see. What else should we do today? Oh, yeah. So we get pretty good agreement with this sort of simple foamy model. One of the things we did find was that when we tested higher-density scaffolds, the agreement wasn't so good. And I think this was because when you get higher density, it's just hard to get the collagen GAG mixture to mix in with the acetic acid. And so we ended up getting inhomogeneous scaffolds with sort of large voids in them. So in order for the modeling to work, you have to have a scaffold that's relatively homogeneous. You don't have kind of big defects in it. I think that's all I'm going to say about that. I have one more slide here. So those are sort of three-dimensional scaffolds we've been talking about. Sort of foam-like scaffolds. People have also made honeycomb-like scaffolds as well. And this just shows some examples of some honeycomb-type scaffolds. So this one here, I think was made in Sangeeta Bhatia's lab. You can sort of think of it as a hexagon, but it's also kind of triangulated as well. These two here were made by George Engelmayr. He worked with Bob Langer at one point. And these two scaffolds here, they're both sort of rectangular cells. And they were designed to look at how the cell geometry or the pore geometry affected the sort of morphology of the cells that attached. So if you have different, say, pore shapes, you get different morphology in the cells that you're trying to attach. And then, George Engelmayr also made these scaffolds here. And those were designed to be anisotropic and have different mechanical properties in different directions. And I think what they had done was try to match the anisotropy in the mechanical properties to anisotropy in heart tissue, in cardiac tissue. So these are some examples of honeycomb-type scaffolds. So let me just write down a few things about those. So I think these were more-- obviously, the scaffolds are used-- the ultimate goal is to use them clinically. But sometimes, people make scaffolds just to study cell behavior. And some of these, I think, were made just to study how the cells would behave on them. So they're sort of idealized to do that. So that triangulated one. It kind of looks like a hexagon, but there's also sort of triangles in there. I think the thing they were looking at with that was transport of nutrients to cells. And from a mechanical point of view, if it's triangulated you'd expect, say, the modulus of that to go linearly with how much solid there is there.
So the rectangular honeycomb and the diamond shaped pores, they were used to study the effect of pore geometry on the cell orientation. They used fibroblasts. And I'm going to call it the accordion-like honeycomb. That's mechanically anisotropic. And the mechanical anisotropy is matched to the cardiac tissue. OK. So I think I'm going to stop there for today. That's sort of the end of this part. And next time, I'm going to talk about the osteochondral scaffolds. I don't think it really makes sense to start it for two or three minutes. So I'll start that tomorrow. Or Wednesday. And we should be able to finish that on Wednesday. So one of the things that these honeycomb-type scaffolds kind of suggests is that the scaffolds are used both to try to regenerate tissue in the body in clinical applications, but they're also used as sort of an environment for cells, in order to study cell behavior. So next time, I'm going to talk about an osteochondral scaffold. And the idea with that was to try to use it clinically. But next week, I'm going to talk about cell mechanics a bit. And when people study cell mechanics, or look at the mechanics of biological cells, not the cellular structures. So when people look at trying to study how cells behave, they need some environment to put them on. Typically, people started by just using flat 2D substrates. But the flat 2D substrates are kind of easy to study, but they don't really represent the tissue in the body. And so people are now using tissue engineering scaffold as an environment that they can control to study how cells behave. So they study cell attachment, cell proliferation, cell migration, and cell differentiation all by using the scaffolds as kind of a controlled environment. So we'll talk about that, probably not-- I don't know if we'll get to it on Wednesday. But we might start it on Wednesday and finish it next week. OK?
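[A quick numerical sketch of the foam model for the collagen-GAG scaffold described a little earlier -- not from the lecture, just a check using the numbers quoted above: relative density of about 0.005, dry cell-wall modulus of 672 MPa, a wet cell-wall modulus of roughly 5 MPa, and a buckling constant of 0.2, which is also the strain at which the plateau starts. The exact constants in the published work may differ.]

```python
# Open-cell foam model applied to the collagen-GAG scaffold, using the numbers
# quoted in the lecture (the 5 MPa wet modulus is the approximate value mentioned).
rel_density = 0.005         # scaffold is about 99.5% air
C_buckling = 0.2            # constant in the elastic-buckling plateau, per the lecture

for label, E_s in [("dry", 672e6), ("wet", 5e6)]:   # cell-wall modulus, Pa
    E_star = E_s * rel_density**2                   # E*/E_s = (rho*/rho_s)^2 (strut bending)
    sigma_el = C_buckling * E_s * rel_density**2    # collapse plateau (elastic buckling)
    strain_el = sigma_el / E_star                   # equals C_buckling, the buckling strain
    print(f"{label}: E* ~ {E_star:.3g} Pa, plateau stress ~ {sigma_el:.3g} Pa, "
          f"buckling strain ~ {strain_el:.1f}")
```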
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
6_Natural_Honeycombs_Wood.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. LORNA GIBSON: So I should probably get started. So I just wanted to mention this Friday the libraries are having Furry Friday. So they have therapy dogs come, and if you like dogs, it's kind of fun to go get cuddled by the dog. The other thing I wanted to mention, last term, there was a student taking 3032 who was interested in art. And I kept trying to find art pictures for him, and he's not here. But I thought everybody else can like the art too. So I belong to the Peabody Essex Museum in Salem, Mass. And they have an exhibit right now on wood and on sort of using wood as a sculptural material. And this is one of their posters to advertise it. So this thing was carved out of a single piece of wood, I think. And they've got lots of other sort of sculptural wood. So I thought you might like to see that. If you wanted to go to Salem, there's a couple of options, if you don't have a car. You can take a commuter rail to Salem. You can also take the ferry. And if you take the ferry to Salem, it's like a five minute walk to where the ferry lets you off to get to the Peabody Essex. And it's a kind of neat museum. It's not too big. It's kind of small. But it's a beautiful building, and they have neat stuff there. So you could go to the Peabody Essex Museum. Hmm? STUDENT: It's very nice. LORNA GIBSON: Yeah, you've been there? Yeah, it's really nice. So I was going to talk about honeycomb-like materials in nature today. And I'm going to talk about wood today, and I might finish this today. I might not. And then I'm going to talk about cork for a little bit on Wednesday, and then we'll start talking about foams after that. So I have a couple of sort of cute little language historically things. And you know how I like that stuff too. So I have two things about words that are related to wood. So the word "materials--" you know where the word materials comes from? It comes from the Latin. So there's a Latin "materies materia." And materies materia means "wood" or "the trunk of a tree." So if you think of studying materials, in olden times that was like studying wood. And another cute thing that I found was that in old Irish the names of the first few letters of the alphabet are named after trees. So the letter A, that's called alem in old Irish, and alem is the word for elm. And B is-- I don't know if I'm saying these right. It's called beith, and that's the word for birch. And C is call. That's the word for hazel. And D is dair, and that's the word for oak. And so they sort of named the letters of the alphabet after different kinds of trees, different kinds of woods. So I just thought those were kind of interesting historical things. So I wanted to start by talking about wood structure. And then we're going to look at how would deforms and fails, and talk about the data that people have measured for the wood properties, things like stiffness and strength. And then we'll talk a little bit about how the honeycomb models can be applied to understanding the mechanical properties of woods. So this is kind of a generic trunk of the tree here. And we're defining three axes. The radial axis comes radially out of the tree. There's the tangential axis, so that's the x1 and the x2 axes. 
And then there's the longitudinal or the axial axis, x3. So if you think of the wood as being, in a very, very simple way, just like the honeycomb, the radial would be this way on. The tangential would be that way on. And the axial would be that way on. So it's like that. And if you neglect the growth rings, you can say that the wood is orthotropic, and that's typically what people do. They neglect the growth rings, and they say that it's orthotropic. And the density of the woods, the relative density ranges from about 5% for balsa wood to about 80% for lignum vitae. So I brought in some pieces of wood. So this is balsa wood. You're probably familiar, making different kinds of models with balsa. So balsa's very light. It grows in Ecuador. And it's the lightest wood. And this is lignum vitae. You're probably not so familiar with that. This actually grows in Florida, and it's the densest wood. It has a relative density of 0.8. And it's so dense, that if you put it in water, it sinks. So it's a very dense wood. And the way the wood cells grow is that if you look at the sort of structure here of a tree, there's the bark on the outside here, and then there's the kind of wood cells inside the bark. And there's a layer of cells in between the bark and the wood called the cambial layer. And that's really the layer of cells that are alive and are dividing. So if you think of the wood cells, they're living when they're in that little cambial layer there. And they're dividing. And that cambial layer, the cells have a plasma membrane and a protoplast. And then they sort of exude the plant cell wall. So a little like bone cells, like if you think of the bone in your body, there's osteoblasts and osteoclasts, different kinds of bone cells. But the bone cells secrete the sort of collagen and the calcium phosphate that are the sort of hard mineral part of the bone that you think about in the bone. And that's not a living thing. The cells are the living thing. That's like an extracellular matrix. In the trees, it's a little bit the same. So there's the living cells that are just under the bark, and they have this plasma membrane and the protoplasm. And over a few weeks they excrete the plant cell wall, and then they die. So the living cells die, and you're left with the plant cell walls. And then as the tree grows, you're always having a layer of these cambial cells, and it forms bark on the outside and wood on the inside. So there's sort of a layer of cells that are differentiated, such that on the outer layer they form the bark, and on the inner layer they form the wood. And as the tree grows, that cambial layer is kind of expanding out radially. So let me write some of these things down. Let's see. So let me just write down those two little word things because I think they're cute. So the word materials is from the Latin materies materia. And that means "wood" or the "trunk of a tree." And here's the little old Irish thing. It's not like I think-- I'm not going to put this on the test or something. I just thought it was cute. So the letter A is alem, which is elm. And letter B is beith, which is birch. Letter C is call. That's hazel. And D was dair, and that's oak. So that's just for general interest. So then the wood structure we can think of it as orthotropic, if we ignored the growth rings. And if you have a sort of large diameter tree and you take a piece of wood not from the very center, but from somewhere near the outside, then that's not a bad approximation.
Now, the relative density of the woods ranges from about 0.05 for balsa to about 0.8 for lignum vitae. Any Latin scholars here? I took one year of Latin in high school. Anybody take Latin? No, no Latin. So lignum vitae, I think, is "tree of life." "Vitae" is the sort of "life" part. And when it has this ending A-E it means "of life." So I think lignum vitae is the "tree of life." So trees have a cambial layer beneath the bark. And the cell division occurs in that cambial layer. So the new cells on the outer part turn into bark, and the new cells on the inner part turn into wood. And then we have the living plant cells that have the plasma membrane and the protoplast. And those cells then secrete the plant cell wall, which sort of surrounds them. So in trees, the living cells lay down the plant cell wall over a period of a few weeks. And then the living cells die. Oops. Back. Here. Now you always retain a layer of those cambial cells. So you may have heard if you have a tree and you cut a ring around the tree through the bark, if you go into those cambial cells and you destroy them, you kill the tree, because you're killing that layer of living cells. So then we want to look at the cellular structure of the woods as well. And I've got a couple of slides here. This one is of softwoods. And softwoods have two types of cells. They have tracheids, which are the bulk of the cells here, and the tracheids provide structural support. And the tracheids also have little holes along the length of them at their ends called pits, and those pits allow fluid transport up and down the tree. And then the softwood also has these ray cells here. So those are examples of ray cells. So this is a transverse section. This is a longitudinal section here. And the rays are parenchyma cells which store sugars. So softwoods have tracheids and rays. And then hardwoods, here's an example of a hardwood oak. They have three types of cells. There's cells called fibers, so these guys all in here would be fibers. They provide the structural support. They have vessels, these really large cells that provide fluid transport up and down the tree. And they also have rays. So here are some rays here. And again, those rays are parenchyma cells that store sugars in the tree. So let me just write down what all these cells are. So in softwoods most of the cells are these tracheids, so they make up the bulk of the tree, something like 90% of the tree. And they provide structural support. They have holes in the cell wall for fluid transport, and those are called pits. And to give you some idea of what size they are, they're a few millimeters long, so something like 2 and 1/2 to seven millimeters long. And then they're tens of microns in the other two directions, so they're something like 20 to 80 microns across. And the cell wall thickness, t, is usually a few microns, so something between about two and seven microns. So typically, the denser the wood, the thicker the cell wall's going to be. Whoops, let's see if I can get the rays down here. Put it on the same board. So the rays are parenchyma cells that store sugar. And then the hardwoods have three types of cells. They have the fibers that provide the structural support. And the amount of cells that are fibers varies, depending on the species, but it's usually somewhere around 35% to 70% of the cells. And then they have the vessels, which are the sap channels. Those provide for the conduction of fluids. And that's between about 6% and 55% of the cells.
And then, again, there's rays that store sugars, and they usually make up about 10% to 30% of the cells. So there's the structure-- this sort of cellular structure-- at this kind of length scale of tens of microns. And then there's also a structure within the cell wall itself. And the cell wall itself is made up of cellulose fibrils in a matrix of lignin and something called hemicellulose. So if you look at the cellulose structure, the cellulose has a regular structure, a sort of periodic lattice. And it's crystalline for most of the length of the fibrils. So this is the structure of the cellulose here, and this is showing it at a slightly larger length scale. It might have a crystalline region here and then a non-crystalline region here. And these macrofibrils, which are made up of bundles of microfibrils, are about 10 to 25 nanometers. And each one of the microfibrils might be three to four nanometers across. So you have these cellulose fibers. And then the cell wall is made up of different layers. So there's what's called the primary wall here, which has a random arrangement of the cellulose fibrils. Then there's an outer layer here. These are all called secondary layers. This is S-- I think that's S1. Yeah, it's S1. And it has this arrangement of the fibrils. Then there's a layer called S2, and it's generally the thickest layer in the cell wall. And the cellulose fibrils are aligned not perfectly vertical, but a little off the vertical. And the angle between the vertical and the orientation of the cellulose fibers is called the microfibril angle. And then there's a third layer here, S3, with, again, a different winding of the fibers. So because the S2 layer is the thickest layer and because the fibrils are closest to the vertical axis, the S2 layer actually contributes the most to the longitudinal modulus and stiffness and strength of the cell wall. So that's kind of the arrangement of the cell wall. And then so that one cell would have that. Another cell would have that. And in between the two, there's a layer called the middle lamella that kind of glues them together. So that's the arrangement of the cells. Let me scoot over here. So the cells are often modeled as a fiber-reinforced composite that has four layers to it. And in each layer there's a different volume fraction of the fibers and a different orientation of the fibers. So the cell wall has this fiber-reinforced structure. Here's the cellulose fibers in a matrix of lignin and hemicellulose. And there's four layers, each with the fibers in a different orientation. And then there's the middle lamella between the two cells. So in doing the modeling of a material like wood, you need to know what the properties of the cell wall material are, because, obviously, the properties of the wood would depend on the cell wall properties. And it turns out that they're similar. They're not exactly the same, but they're similar in different species of wood, so we're going to call them more or less the same. So the density of the solid is 1,500 kilograms per cubic meter. The modulus of the solid in the axial direction is 35 gigapascals. The modulus in the tangential direction or transverse direction is 10 gigapascals. And the strength of the solid in the axial direction is 350 megapascals. And the strength in the transverse direction is about 135 megapascals. So here A means the axial direction, and T is transverse. And just for comparison, if you just look at cellulose, cellulose has some pretty amazing properties.
The modulus of cellulose is about 140 gigapascals, which is very high for a polymer. And the strength of cellulose fibers runs between about 700 and 900 megapascals. So the cellulose fibers have very impressive properties. And that's one of the things that gives wood very good properties. So the next thing is I want to show you some stress-strain curves for wood. And you'll see how similar they are to the honeycombs that we looked at before. And then we'll look at how the cells are deforming as they're getting loaded. And from that, we're going to do some modeling. So let me just wait till people get caught up. Are we caught up? More or less? OK. So these are all compression curves, so I'm just going to talk about compression. So these are curves for different types of woods. And on the left, the wood is loaded in the tangential direction. So in terms of the sort of honeycomb model, it's loaded kind of this way on, like that. And on the right are a set of curves for wood loaded in axial compression. So in axial compression loading, we're loading it that way on. And we've got different species here. So the lowest density is balsa, around about 100 kilograms per cubic meter. The densest species on this plot is beech, which is around 700 kilograms per cubic meter. And then there's pine and willow, some other species in between here. So you can see the shapes of the curve look just like the curves that we had for the honeycombs. So here there's a linear elastic bit. There's a stress plateau. And there's a densification bit. And then, as the density goes up, it gets stiffer, and the strain at which the densification occurs gets smaller, and the strength gets higher. And if we look at the axial properties, the shape of the curve is similar. We get linear elastic, stress plateau, densification. But if you look at the scale here, this scale goes from 0 to 100, whereas that scale went from 0 to 20. And so the stiffness and the strength along the grain are much higher than they are across the grain. And you probably already know that. Wood is stronger and stiffer along the grain than across the grain. So that's what the stress-strain curves looked like. And the fact that we're getting the curves that look like that makes us think maybe the mechanisms of deformation and failure are similar to the honeycomb, too. So here's a set of curves for balsa all plotted on the same scale. And, again, you can see for loading across the grain, either in the radial or the tangential direction, the stiffness and the strength is a lot less than if you load it in the axial direction. So a number of years ago, we had a project on balsa. And the thing we were interested in doing was looking at how the cells deformed and failed. And because balsa's a low-density wood, it was easier to see the deformation in the cells, because the cells were thin. So that's why we chose balsa. I actually have a project on balsa right now. And [? Sardar ?], my postdoc, is doing more detailed kind of finite element modeling, trying to represent the structure of balsa. And I think I mentioned, the reason we're interested in it is that balsa's used as a core in sandwich panels in wind turbine blades. It's actually the best material that they can find; it's better than any engineering material. So that's comparing the three curves for balsa. And then if you look at a specimen that's loaded in the
SEM, with a loading stage, you can measure the stress-strain curve, and you can take photographs of what the cellular structure looks like at different stages of loading. So here, this picture one, is unloaded. And these four images here are looking at the same section of cells, the same area of cells. And you can see there's a big vessel here, and that's the same vessel there. So here, this image two, is at this point on the stress-strain curve. Here's three, at that point. And four is at that point. So if you look at this carefully-- and I've got another higher mag picture I'll show you in a second-- you can see that what's happening is the cell walls are bending. So it's kind of like taking my honeycomb like this, and I'm doing that to it, and the cell walls are bending, so just the same as the honeycomb. And then eventually, if I load it enough, you get to this sort of densified stage. And you're doing this, and the stress-strain curve increases sharply. So here you can see how the cells have densified over here. It kind of looks a lot like my honeycomb when I-- maybe I do it this way-- when I smush it up like that. It looks kind of similar. So if we look at the higher mag picture, again, these four images are the same area of the cells. And if you look at that little bit of crud, it's the same on all four of them there. So this is the unloaded one. And these were loaded from top to bottom. And this is loaded to some extent, that's loaded more, and that's loaded more. So if you look at this cell here, it's got this little tear on it. So you can sort of find it again. If you look at that cell there, that's what it looks like unloaded. And here you can see-- see that wall there? You can see how it's bent up. So it's bent like the honeycomb walls. And here it's bent even more. And eventually, it has this sort of a shape here, and it's deformed permanently. It's formed one of these plastic hinges. So it's like the aluminum honeycombs almost, that it's failed like that. So in the balsa wood, when we load it in the tangential direction, we're getting bending of the cell walls and then yielding and plastic hinges forming, just the same as we would in an aluminum honeycomb. Are we good with that? Well, I'll go through all three directions. And then I'll write down the notes. So this is loading the balsa in the radial direction. And these things here are the rays. So we're loading it in that direction. And here you see that bending also occurs, but the rays act a little bit like fiber reinforcement. So the rays are a little bit stiffer, and they sort of reinforce the thing a bit. And this is the loading platen here, and you can kind of see that the failure starts at the loading platen, and as you sort of load up more, it progresses in from the loading platen. So we're going to look at the modeling of the balsa in the radial direction, and we're going to account for the rays, at least in a crude way. And then when you load the balsa in the axial direction, initially you don't really see much happening. So if one is unloaded, one's down here. And two is at this peak stress up here. And really, if you look between one and two, you just don't see an awful lot of difference. And that's because what you're doing is you're taking the wood and you're loading it this way on. And it's so stiff, you just don't see much deformation. So there's not really much to see. But then eventually, something starts to fail. And in this case, what fails are the end caps.
So the balsa wood has these long cells here. But then at the end of the cells, there's little caps on the ends. And the cells kind of fit together like that. So that eventually, if you keep smushing it, those end caps start to fail. Here you can see how bright it gets, and the cells are starting to crush together and kind of fail those end caps. And in fact, each one of these serrations here, if you look at, say, from that peak up to that peak, that corresponds to a length of about the length of the cell, or the length of the cell between the end caps. So in axial deformation you're just actually deforming the cells until you break those end caps. If you look at denser woods, they fail in slightly different ways. This is a Douglas fir, which is much denser. This particular specimen, the whole thing is kind of buckled over. So it's not really so representative of the structure itself. This is Douglas fir in radial compression. You can see this picture, it looks just like what I showed you for the balsa wood, that sort of propagation of the failure. These long things here are the rays. And this is a Norway spruce in axial compression. And this is fairly common in denser woods. You get this buckling formation. And what happens is, I think, you get some yielding of the cell walls initially, but that leads to buckling, like a plastic buckling. And you can see on this higher mag picture down here, you get these really small wavelength buckles in the cell wall. And the two-- you get a plane that kind of shears over itself. And you can see in the top image, this top half has sheared over relative to the bottom half. And all the deformation is in this little band here. So this stuff here is all going on in that band up there. So let me write down some notes about how these things deform and fail. And then we'll get to the modeling in a little bit. So we can say the stress-strain curves resemble those for honeycombs. And I'll say the mechanisms of deformation and failure are most easily identified in low density balsa wood. So for balsa, if we look at the tangential loading, we see bending of the cell walls and then eventually plastic yielding. And for radial loading, the rays act as reinforcing. And for axial loading, you get axial deformation and then the failure of the end caps. And I'll just say failure by plastic buckling is also observed, say, in the denser woods. [INAUDIBLE] So then we can look at some data for the properties of woods. And these charts plot relative Young's modulus and relative strength against relative density. So here the modulus of the wood is divided by the modulus of the solid cell wall material. And here we've normalized everything by the modulus of the solid cell wall material in the axial direction, because the cell wall itself is anisotropic. And so here's the relative modulus, and here's the relative density. These are log-log plots. And we see that when we load the wood in the axial direction, the modulus is just linearly related to the density. And when we load it across the grain, it varies with the cube of the relative density. So do you remember our little honeycomb models? If I took the honeycomb and I loaded it this way, it went as the cube of T over L. And that's because of the bending. And so the wood doesn't lie perfectly on that cube line, but it's fairly close. And then similarly, if we took the honeycomb and we loaded it this way on, it deformed axially. The modulus depended linearly on the density. So you get the same kind of relationships there.
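To make the scaling concrete, here is a minimal Python sketch of the two relationships just described, using the solid cell wall modulus along the grain quoted earlier (about 35 gigapascals). The proportionality constants and the example relative densities are assumptions for illustration only; the real constants depend on the cell geometry and the cell wall anisotropy.

```python
# A rough sketch of the honeycomb-type scaling laws for wood moduli.
# Assumes proportionality constants of about 1 (an assumption, not a fitted value).

E_S_AXIAL = 35.0  # GPa, solid cell wall modulus in the axial direction (from the lecture)

def axial_modulus(rel_density, c=1.0):
    """Along the grain: axial cell wall deformation, so E scales linearly with density."""
    return c * E_S_AXIAL * rel_density

def tangential_modulus(rel_density, c=1.0):
    """Across the grain: cell wall bending, so E scales as relative density cubed."""
    return c * E_S_AXIAL * rel_density ** 3

for rho_rel in (0.1, 0.2, 0.5):  # illustrative relative densities, balsa-like to mid-density
    print(f"rho*/rho_s = {rho_rel}: "
          f"E_axial ~ {axial_modulus(rho_rel):.2f} GPa, "
          f"E_tangential ~ {tangential_modulus(rho_rel):.3f} GPa")
```

The point of the sketch is only the anisotropy: at low relative density, the cubed relationship makes the tangential modulus orders of magnitude lower than the axial one, which is what the log-log data show.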
And then if you look at the strength, the strength along the grain goes linearly, and the strength across the grain goes with the square. And we'll see when we get to the modeling in a minute, that if we loaded, say, an aluminum honeycomb this way on, the strength would go linearly with the density, if we're just yielding the cell walls. And if we loaded it this way on, it went as the square of T over L. So these things kind of correspond. And you can see the structure of the wood is a lot more complicated than just a simple honeycomb. And so these models are sort of first order, and they're fairly crude. They don't try to capture every detail of the wood structure. But they can give you a sense of where the wood properties are coming from. So let me just write down some of these observations. So the data for the wood-- the modulus along the grain goes linearly with density. It goes more or less as the cube for loading in the tangential direction. And the radial direction is somewhat stiffer than that. The strength in the axial direction goes linearly with the density. And the strength across the grain goes with the square of the density. And then there's data for the Poisson's ratios too. So let me just write them down. So the modeling based on the honeycomb is sort of a simplified model that gives you kind of a first-order description of the behavior. And it doesn't really attempt to capture all the details of the softwood and hardwood structure. And in the equations, I'm going to take the cell wall properties along the grain, or along the axial direction. And we're going to have a bunch of constants that describe the cell geometry, and those constants are also going to reflect the cell wall anisotropy. So we can model the wood structure as something that's a bit more of a simplified thing, just like this. And we say we've got cells that are roughly hexagonal, and then we've got some cells that are more or less rectangular that are the ray cells. And if you look at lots of micrographs, you can get some idea what the dimensions of the cells are. And these dimensions were measured for a particular density of balsa wood. So if we look at the linear elastic moduli, we can start off with the tangential loading. And if we have the tangential loading, we can model it as a honeycomb loaded in the plane, and we get cell wall bending. And from the cell wall bending in the honeycomb model, you would get that the tangential modulus varies with the relative density cubed. And the structure's not quite that simple. There's ray cells. There's end caps. And they act to stiffen it a little bit. And the data lie a little bit above this line. Then if we look at the radial loading, the rays kind of line up with the radial direction, and the rays act as reinforcing plates. And so you can just use kind of an upper-bound composites idea to get the modulus. And the rays tend to be a bit denser than the fibers. So if I say Vr is the volume fraction of rays, and R is the ratio of the relative density of the rays compared to the fibers, so it's rho over rho S for the rays divided by rho over rho S for the fibers. And that varies a little bit from one species to another, one specimen to another. But it's something a little over 1, something between 1 and 2. Then I can say the modulus in the radial direction is the volume fraction of the rays times R cubed times the tangential modulus plus 1 minus the volume fraction of rays times the tangential modulus.
And that works out to be about 1.5 times the tangential modulus. I wanted to work this out in terms of the tangential modulus, so I've put this in terms of the tangential modulus in the first term there. So we get that the radial modulus is slightly larger than the tangential, but also goes roughly as the cube of the density. And then for the axial loading, we just have axial deformation in the cell wall. And the Young's modulus just varies linearly with the density. So these are kind of simple models, but they kind of explain to first order the density dependence of the wood moduli and the anisotropy. So it's kind of nice because they're fairly simple models, and it gives you kind of a big picture. So if you wanted to know the modulus of a particular piece of wood, this probably isn't the best way to figure it out. But if you wanted to kind of compare how woods behave in general and how does the density affect the properties and why are they anisotropic, this is a pretty good way to do it. We could also look at the Poisson's ratios. And just because I didn't want to write them down again, I've just left on what the data were down here. But let me just write what the model would give us for nu RT and nu TR, the model would give us one if we had regular hexagonal cells. And these are the values we get here. This might be 0.6, 0.7 would be a typical value, somewhere around 0.4 in there, so they're not quite one, but they're close to it. And I think the reason they're a little less is because the rays and the end caps provide some constraint. If you have the honeycomb, if I just had these cells, and I squeeze it like this, these guys can move out. If it's a regular hexagonal honeycomb, the strain that I'm applying here is equal to the strain going out that way. But if I have rays this way that sort of constrain it or end caps, it means that the Poisson's ratio is going to be a little bit less. So I'll just say constraining effect of the end caps and rays-- constraining. Then for nu RA and nu TA, the model says the value we would get would be zero. And these are pretty close to zero. They're not quite zero, but pretty close. And then the last pair nu AR and nu AT, the model says that we would get nu of the solid. And the data's close to 0.4, which we might expect would be about the nu of the solid. So, again, there's some variation in the Poisson's ratios. They're not all just one number. But you can see these ones here are about zero, and that's roughly what the model says. These ones here are closer to 1. And then these ones here are closer to what you might expect for a solid material. So it gives you the kind of general idea. Are we good? We're good, yeah? So we can do a similar thing for the compressive strength. So for tangential loading, we get plastic hinges forming in the bent cell walls, just like in an aluminum honeycomb. Then we get that the strength over the cell wall strength goes as the relative density squared, so just like the honeycomb. For the radial loading, we can do the composites thing again. So we can say the strength in the radial direction is about equal to the volume fraction of rays times R squared times the tangential strength plus 1 minus the volume fraction of rays times the tangential strength. And for balsa, I have some values here. VR is about equal to 0.14. R is about equal to 2. And so the radial strength is about equal to 1.4 times the tangential.
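As a quick check on those rule-of-mixtures estimates, here is a short Python sketch. The formulas are the ones just stated; the strength numbers (Vr of about 0.14 and R of about 2) come from the lecture, while R for the modulus was only said to be somewhere between 1 and 2, so it is swept here as an assumption (the quoted factor of about 1.5 corresponds to an R in that range).

```python
# Sketch of the ray-reinforcement (rule-of-mixtures) estimates described above.
# Vr is the volume fraction of rays; R is the relative density of the rays
# divided by the relative density of the fibers.

def radial_to_tangential_modulus_ratio(vr, r):
    """E_R / E_T = Vr * R^3 + (1 - Vr), rays acting as stiff reinforcing plates."""
    return vr * r ** 3 + (1.0 - vr)

def radial_to_tangential_strength_ratio(vr, r):
    """sigma_R / sigma_T = Vr * R^2 + (1 - Vr), same idea for the plateau strength."""
    return vr * r ** 2 + (1.0 - vr)

vr = 0.14  # volume fraction of rays for balsa, from the lecture
print("sigma_R / sigma_T ~", radial_to_tangential_strength_ratio(vr, 2.0))  # about 1.4
for r in (1.2, 1.5, 1.8):  # assumed density ratios for the modulus estimate
    print(f"R = {r}: E_R / E_T ~ {radial_to_tangential_modulus_ratio(vr, r):.2f}")
```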
And in higher density woods, the value of R is a little bit smaller, and in general, the radial strength is a bit larger than the tangential, and both depend on the density squared. And then for axial loading, if the failure's initiated by yielding in the cell walls, then the axial strength's just going to depend linearly on the density. So the idea with these models isn't that they kind of describe a particular piece of wood exactly. It's more that it gives you a general picture of how the cells are deforming and failing, and how the properties scale with density and why the wood's anisotropic. Are we good? Yeah? Caught up. So there are a couple more sort of interesting things we can do with looking at the wood properties. So we've been talking about how to model the cellular structure. But people have also looked at how to model the cell wall as a fiber composite. And this plot and the next one kind of show you how you can combine all of that together. So remember, I said the modulus of the cellulose was around 140 gigapascals. So here's the modulus of the cellulose, at least the crystalline part of the cellulose, plotted in that little envelope there. The lignin and the hemicellulose have a modulus around 2 or 3 gigapascals, so it's down there. And if you made composites with cellulose fibers in a lignin and hemicellulose matrix, those composites would have a modulus that fell in this envelope here. They've got to be in between those two limits, right? The modulus has to be between those two limits. The density has to be within the densities of the constituents. And if you look at the modulus of the wood cell wall, it lies in this envelope here. Along the grain it'd be here, and then across the grain is further down here. So the cell wall modulus is in here. And then if you take that cell wall and you make it into the honeycomb-type material that wood is, if you load it along the grain, you're going to get this linear dependence of modulus on density. And if you load it across the grain in the radial or the tangential direction, you're going to get this cubed dependence here. So here's a set of data for different woods of different densities. And that envelope kind of encompasses all of them. But if you look at the slope of that data, it's roughly equal to a slope of 1. And so it corresponds to that equation there. And similarly, here's a set of data for different species of woods of different densities loaded perpendicular to the grain. And they lie on a line that has more or less a slope of 3. And this set of data here along the grain intersects the wood cell wall towards the top of that envelope, and this set of data here intersects closer to the bottom of that envelope for the cell wall material. So this gives you a way of sort of putting everything together on one plot-- the cell wall as well as the cellular structure. So that plot does it for the modulus. And you can do the same kind of thing for the strength. Here's the cellulose up here. Here's the lignin down there. Here's the wood cell wall, the composite made from those two. And then here's data for different kinds of woods loaded along the grain and for loading across the grain. So it gives you a way of putting all this modeling into one set of plots. So let me just write a couple of little things about that. So we could say you could model the cell wall as a fiber composite. And you can use the composite upper and lower bounds to give an envelope. And then you can also show the cellular solids models on the same plot.
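The envelope for the cell wall can be sketched the same way. Below is a minimal Python version of the composite upper and lower bounds (Voigt and Reuss), using the cellulose modulus of about 140 gigapascals and a matrix modulus of about 2.5 gigapascals quoted above; the fibril volume fraction here is an assumption chosen only to illustrate the calculation, not a measured value.

```python
# Sketch of the composite upper (Voigt) and lower (Reuss) bounds for the wood
# cell wall, treated as cellulose fibrils in a lignin/hemicellulose matrix.

E_CELLULOSE = 140.0  # GPa, from the lecture
E_MATRIX = 2.5       # GPa, lignin/hemicellulose ("around 2 or 3" in the lecture)

def voigt_upper_bound(vf):
    """Equal-strain bound, roughly the along-the-fibril direction."""
    return vf * E_CELLULOSE + (1.0 - vf) * E_MATRIX

def reuss_lower_bound(vf):
    """Equal-stress bound, roughly the across-the-fibril direction."""
    return 1.0 / (vf / E_CELLULOSE + (1.0 - vf) / E_MATRIX)

vf = 0.45  # assumed fibril volume fraction, for illustration only
print(f"upper bound ~ {voigt_upper_bound(vf):.1f} GPa, "
      f"lower bound ~ {reuss_lower_bound(vf):.1f} GPa")
# The quoted cell wall moduli (about 35 GPa along and 10 GPa across the grain)
# fall between these two bounds, which is the point of the envelope on the chart.
```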
So overall, it shows you how the hierarchical structure fits together and can be modeled. Now there's some more cute things we can see. So another thing I want to talk about is material selection, because it turns out wood is very good compared to other materials in certain applications. So we're going to look at, say, having a beam of a given stiffness at a given span, and say it's just a square cross-section beam of edge length T. And the question is, what material would minimize the mass of the beam? So say we have some span we have to have. It's got to have some rectangular cross-section, some given stiffness. And the question is, what's the material that minimizes the mass? So there's a little short calculation we can do to figure that out. And then I've got another plot, and you can compare different materials on this other plot. Then you'll see how good wood is compared to other materials. So for a beam of a given stiffness and given span, and say it's a square cross-section, then the question is, what material minimizes the mass of the beam? So the mass is just going to be the density times t squared times l. And if it's a beam, say it's got some central load on it, a concentrated load, the deflection's going to go as Pl cubed divided by some constant and divided by the Young's modulus and the moment of inertia I. So the stiffness, if I just rearrange this, the stiffness, P over delta, that's going to go as CEI over l cubed, and I's going to go as t to the fourth. And then I can solve that for t squared. And I want t squared because I'm going to plug it back into the equation for the mass. So if I solve this for t squared, I've got my stiffness P over delta. I've got l cubed divided by CE. And then I take that whole thing to the 1/2 power like that. And then I plug the t squared back into the little equation for the mass. So I've got density times P over delta times l cubed over CE. And we'll take that whole thing to the 1/2 power-- times another l. And so to minimize the mass, you want to look at the material properties. And here, the material properties are the density and the Young's modulus. And to minimize the mass, you want to minimize rho over E to the 1/2 power. Or conversely, you want to maximize E to the 1/2 over rho. So if you just had a bar that you were just pulling on, you would just want to maximize E over rho. But if it's a beam in bending, it works out that you want to maximize E to the 1/2 over rho. And if we look at the next slide, this next slide plots, on a log-log scale, the modulus on this axis and the density on that axis. And here there's plotted data for lots of different materials. So there's engineering alloys. Metals are up here. Engineering ceramics are here. Composites are here. Polymers are down here. Elastomers are way down here. Foamy things, down here. And this envelope here is woods. And notice the log scale here. The lowest stiffness polymer foams here are 0.1 gigapascal, and diamond is up here at 1,000 gigapascals. So there's like five orders of magnitude difference in the modulus here. So then, if you look at the bottom right corner here, there's a bunch of dashed lines. And this red one here is E to the 1/2 over rho. So if it's a log-log plot, E to the 1/2 over rho's going to show up as a straight line. And every point on that line has the same value of E to the 1/2 over rho. And the material that would be the best for a beam of a given stiffness would be the one that has the biggest value of E to the 1/2 over rho.
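Here is a small Python sketch of that minimum-mass argument: for a fixed span, stiffness, and loading constant, solve for the section size each candidate material needs and compare the resulting masses. The ranking comes out the same as ranking by E to the 1/2 over rho. The candidate property values are typical handbook figures used here as assumptions, not data from the lecture.

```python
# Sketch of the beam material-selection argument.
# Square section of side t, span l, stiffness target S = P/delta,
# deflection delta = P l^3 / (C E I), I = t^4 / 12, mass m = rho t^2 l.
# Solving for t^2 shows that m is proportional to rho / sqrt(E).

def beam_mass(rho, E, stiffness=1.0e5, span=1.0, C=48.0):
    """Mass (kg) of the lightest square-section beam meeting the stiffness target.
    C = 48 corresponds to a simply supported beam with a central load."""
    I_required = stiffness * span ** 3 / (C * E)  # m^4
    t_squared = (12.0 * I_required) ** 0.5        # m^2, since I = t^4 / 12
    return rho * t_squared * span

# Typical handbook values, used only as assumptions for illustration:
candidates = {
    "wood (along grain)": (500.0, 10e9),   # density kg/m^3, Young's modulus Pa
    "aluminum alloy":     (2700.0, 70e9),
    "steel":              (7800.0, 210e9),
}

for name, (rho, E) in candidates.items():
    index = E ** 0.5 / rho  # performance index E^(1/2) / rho
    print(f"{name:20s} mass ~ {beam_mass(rho, E):.2f} kg, E^0.5/rho ~ {index:.0f}")
```

With these assumed values the wood beam comes out the lightest, and it also has the largest performance index, which is the same conclusion the chart gives.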
And if I move the line up to the top left here, I'm increasing E. I'm decreasing rho. It's got the biggest value of E to the 1/2 over rho. So the materials that are on this line here, they all have the same value of E to the 1/2 over rho. And they've got the biggest value-- well, virtually the biggest value. So let's look at what those materials are. There's things like engineering ceramics, like diamond, that maybe are not the most convenient thing to make our beam out of, and tend to be brittle and might break. So we have some issues. There's engineering composites, so things like carbon fiber reinforced plastics. And at this sort of tip of the composites, there'd be things like unidirectional fiber composites. And then here's the woods down here. So the woods have the same performance index, this is called, the same value of E to the 1/2 over rho as the best engineering composites. And so they have very good properties for their weight. And one of the interesting things is if you look at this performance index of E to the 1/2 over rho, this is the performance index for the wood. This is for the solid cell wall material that the wood's made from, so E to the 1/2 of the solid over rho for the solid. And from the modeling of the wood, just looking at the axial modulus, this thing here is equal to that times rho S over rho to the 1/2 power. So if you look at this, this is the performance index for the wood. This is the solid it's made from. This number here is bigger than 1, right? Because the density of the solid is bigger than the density of the wood, and so this is saying the wood is more efficient than the thing that it's made from, than the solid that it's made from. And so that's the sort of plot for the stiffness. And there's a similar plot for the strength. If you do the same little kind of calculation, you find that the performance index for the strength is the failure strength raised to the 2/3 power over rho. And again, here we're plotting strength versus density on a log-log plot. And this red line here is the strength to the 2/3 power over rho. And again, if we scoot over here so we have a parallel line, every point on that line has the same value of the strength to the 2/3 power over the density. And these are the materials that have the highest values. And again, here's engineering composites. These are ceramics. But the ceramics, they have a high compressive strength, but they tend to be brittle. So it's not really a practical strength. These are metals in here. And here's the woods down here. So it's kind of interesting just to see that the wood has such a good property. Yes? STUDENT: So I realize why this is valuable setting up the problem this way. But if you're actually trying to design something, why would you want to fix your cross-section? You could change your material and change your cross-section. LORNA GIBSON: So this is the starter version of this problem. And there's another part, part two of the problem, which is to change the shape. And you could look at what shape's efficient. There's something called a shape factor that gives you the efficient shapes. So you could take the material and turn it into a different shape and have a more efficient thing because it was a different shape. So you can account for that. STUDENT: So then if you varied, like let's say you made your cross-section smaller, like even if it was still square, you could just still make it smaller. LORNA GIBSON: Yeah. I'm saying we've got a given stiffness.
So if we're given a certain stiffness and a certain span, we would need a certain cross-section to get to that stiffness. Are we happy? OK. So that's one thing. Let's see here. So let me just write a few more notes about the material selection, and then there's one more thing I wanted to show you about the woods. Hmm? C is just a constant. So it's just a number. So if you had a beam in three-point bending, then C would be three. If you had a beam that was simply supported with a central load, C would be 48. C is just a number. STUDENT: One more question. So follow your line there, and the choice is really just about cost. LORNA GIBSON: No, it's not about cost. There's nothing on cost here. It's all really about the properties. What's the best combination of properties to minimize the mass, and then which material has that combination of properties. You can do charts like this that include cost. You can make these charts with whatever property. STUDENT: [INAUDIBLE]. I guess maybe there's a difference off two or so of strength. LORNA GIBSON: Between pine and balsa? Yeah, maybe more than that. I think-- I can't quite see where-- pine's close to 100, and balsa's, I don't know, 20 or something. STUDENT: [INAUDIBLE] LORNA GIBSON: Yeah, and it's not-- the point of this isn't so much looking at the absolute value of a strength. It's looking at the value of this performance index. And what you want to do is maximize that index to get the material that's going to minimize the mass. So let me just read a couple notes about this. So we have these-- these are called material selection charts. So you plot the log of one property versus the log of some other property. And then we have a line of constant E to the 1/2 over rho. I'll just say it's shown in red because you're going have the same plots. And the materials with the largest values are in the upper left. So the woods have similar values to engineering composites. And you can do a similar thing for strength. So I have a few more minutes. So I have a few minutes, and I want to talk about a couple of uses of woods. So one is in old ships. So I don't know if you know Professor Lechtman has this course Materials and the Human Experience, and they talk about sort of ancient uses of materials. And I did a section, a module, on woods and the use of woods in old colonial ships, like The Constitution that's in Boston Harbor. So this is kind of a schematic of an old ship. And the thing that was interesting and the thing I talked about in this module was that people chose particular species for particular parts of the boat. And they would choose a particular species depending on its properties. And a lot of the hull was made of oak. So oak's a very dense wood. But they would get something they called straight oak, and they would get something they called compass oak. And you can see this little thing down here, this little kind of schematic here, this little sketch. This is straight oak, just a straight trunk. And this thing here would be the compass oak. And what they would do is they would use the straight oak for straight parts of the boat, so something like this, these pieces here. And then they would actually look for trees that had the curve of the branches to match some part of the boat that they were looking for. So, for instance, if you have the hull out here and the deck here and they had their cannons here, there's something called a knee, which is sort of a bracing piece that goes between the deck and the hull. And that bracing piece is curved. 
And they would actually look for trees in which the branches curved at the same kind of curvature as they were looking for in that piece. And then they would use it for that piece. And the advantage of this is they basically had the grain running along the curve, and so they got the best properties out of the wood by doing that. So they had this straight oak and compass oak, and that was one cute thing. And often they used white oaks. And I brought a piece of white oak in. You can see how dense it is. And the US Navy often used something called live oak. Live Oak grows in the South. Anybody from the South? You see these big trees with huge sort of spreading branches. Those are the live oaks. And apparently, the US Navy, I read somewhere, still has a forest somewhere with live oak for doing things like repairing The Constitution. So let me just pass those guys around. So those are a couple of oaks they would use for the hull. Then they would use white pine for the masts. And the reason they used white pine for the mast is because the white pine grows very, very tall and very straight. And white pine was actually like a strategic resource in the 1600s, the 1700s. And it turns out that when the British Royal Navy was doing all that colonial stuff in the 1600 and 1700s, Britain actually ran out of trees for masts for boats, and they would actually import masts from New England. And there were these people called surveyors who would go around and they would mark certain trees that were supposed to be saved for these masts for the British Royal Navy. And the thing was that the size of the boat and how many cannons you could put on the boat depended on how big the mast was. So the size of the boat depended on the mast, because the mast height controlled how much sail area you could get. So the taller the mast, the more sail/ the more sail, the bigger the ship. The bigger the ship, the more cannons. And so having these tall Eastern white pines was a sort of a strategic resource. And I have a piece of white pine. Unfortunately, my dog got to this one. And be careful. It's a bit splintery. But you can see it's a lighter kind of wood. And if you go around New England, if you go to the arboretum, you can see white oak. You can see Eastern white pine. The other wood they used is lignum vitae, that first dense one that I passed around. And if you notice that lignum vitae has kind of a waxy feel to it. And they used that in the block and tackle, so like pulleys and stuff like that. And it was thought to be self-lubricating because of that kind of waxy layer on it. And because it's very dense, if you think of like a block and tackle and you've got like a rope going over a pulley, you've got a pressure from everything sort of fitting together and the bits bearing against each other. And the fact that was very dense made it very good for the block and tackle. And so they used the lignum vitae for that. And there's one other cute story about lignum vitae. I don't know if any of you've ever read Dava Sobel's Longitude. Anybody read that? I'm a sucker for those history of science books. So her book Longitude is about the development of an instrument to measure longitude. Originally, they could get the latitude from the stars, but they were really bad at getting the longitude. And so boats would go off, and they wouldn't really be able to figure out where they were, until they had a method to measure longitude. And there was some British board of something or another. 
They put forward a prize for somebody who could produce a way of measuring longitude accurately. And there was a guy called John Harrison, and he built a clock. He built a very accurate chronometer. And if you knew when sunrise was and sunset was, and you knew the time and where you left, you could figure out where you-- it's kind of like time zones. You could figure out where you are today. And he built a chronometer, and one version of his chronometer used a lignum vitae for the same reason, because it was very dense, and it was very stable. And the clock that he eventually won the prize with was in the 1700s, 1759. I think they went on some trip with it. It was 81 days at sea, and it lost five seconds over 81 days. So that's pretty impressive for 250 years ago. So that was the lignum vitae in the clock. I have one more picture, and then I can finish up the thing on wood. And we'll start the cork next time. So this is another example of using wood. And this is sort of a more modern use. So this bridge here is made with a glue-laminated wood. So this big beam here, the big arch, is made up of sections of wood which are glued together. And you can glue the sections in a curved shape if you want. They sort of have molds to do that. And when they make this glue-laminated wood, they cut the defects out. So they cut knots out, and they control the pieces of each laminate that they use to get the best quality. And the glue-laminated wood actually has better properties than just two-by-fours or whatever you would cut down, lumber that you would cut from a tree. So glue-laminated wood is kind of a nice kind of wood structure that's used now. And you see it all the time in things like ice rink arenas, like large spans. It's kind of beautiful. You can see the wood grain in the curve in the wood grain when they make these things. So that's the wood lecture. I'm going to stop there. So next time I'll talk about cork. I just have a little bit about cork. And then we'll start talking about foams.
MIT_3054_Cellular_Solids_Structure_Properties_and_Applications_Spring_2015
19_Biomimicking.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. LORNA GIBSON: All right, well, I guess I may as well start. I don't know if anybody else is going to come. So I wanted to finish up by talking a little bit about the biomimicking. And some of these examples, you've seen before. But I just thought I'd put them all together, and we could look at them as one thing. So if you remember, when we talked about wood, one of the things that I showed you was that people have taken wood and pyrolized it. So they get a carbon template of the wood cells. And then they infiltrate that with silicon carbide-- or, with a silicon vapor infiltration. And they make a silicon carbide ceramic. So they can get a replica of the structure. So sometimes when people say "biomimicking," some people think of it as replicating something. But it doesn't have to be just replicating something. It can be also just some design inspired by the biological material. But this thing really is a replica. And this was another version where there was the-- they took the silicon carbide material and then infiltrated that with liquid silicon to get a fiber-reinforced material, really. So there was the wood composites we talked about. Jennifer Lewis's group up at Harvard is doing 3-D printing of honeycombs. And one of the things they've been interested in is not just printing of the pure resin, but having a fiber-reinforced resin. And the first thing they did was they made these honeycombs like this. And they had small silicon carbon fibers-- or, were they silicon carbide? Maybe it was carbon fibers-- in the ink. And so they would just-- if the ink was being laid down, the fibers would just line up in the direction of the ink. And so the fibers tended to be in the plane of the honeycomb. And if you think of things like wood, you want the fibers to normal to that plane. And more recently, they've-- hello. Oh, hello. Oh, look. Oh, look, almost everybody is here. So more recently, they've got a technology now where they're rotating the nozzle as they print the honeycomb. And as they rotate the nozzle, they get some change in the orientation of the fiber. So they're beginning to be able to make honeycombs that are fiber-reinforced. And they can get the fibers aligned with the prism axis of the honeycomb, which is more or less what the wood does. So I wasn't going to write anything on the board today. I was just going to go over some of the slides and do the review. So they're beginning to make honeycombs that have the same sort of structure on the cell wall level, or at least a similar structure, as to what the wood composites have. So that's another example there. This is just another close-up of their fiber-reinforced walls in honeycomb specimens. We talked about trabecular bone, and we talked about the fact that people are starting to look at using foamed metals as coatings on orthopedic implants. And there's been some interest in looking at using foamed metals for more permanent parts of the body, more permanent bone parts, things like vertebral cages, stuff like that. And so this is just an example here, with the trabecular bone on the left and the tantalum foam that's made by replicating an open-celled polyurethane foam on the right. 
And you can see the similarity in the structures of those two things there. Then we talked about tissue engineering scaffolds. And if you remember, these two scaffolds here on the bottom were made from pig heart tissue. And they're made by just removing all the cells. So that actually is the natural extracellular matrix. And then, these other structures up here, these were all engineered tissue engineering scaffolds that are made in a synthetic way. And the idea is to try to mimic the extracellular matrix in the body. So you can see the similarities there. And then more recently, we were talking about sandwich panels. So this is the example from the helicopter rotor blade. That's from an aircraft flooring panel. And this was the iris leaf, and these were the bird skulls. So the same sort of idea, that there's these engineering lightweight structures, and there's also similar things in nature. And then, I think, last time, we also were talking about palms. And the palm stems had density gradients in them. And one of the things we showed was that by having that density gradient, the stress distribution across, say, the radius of the palm was almost matched by the strength distribution. So it was a very efficient way to use the material. And at MIT, there was a student in architecture who was looking at doing this with concretes-- with aerated or foamed concretes-- and making a radial density distribution. So he made beams. He made columns. He made different kinds of things. With the concrete, there's a little bit of a limitation, because the concrete is much stronger in compression than it is in tension. So if you had a beam loaded in tension or a concrete column that might buckle, you'd still have to have some reinforcing bars in there to take the tensile loads. And one thing I think I didn't really talk about was animal quills and other sorts of plant stems. And many of these have a structure that's made up of a cylindrical shell with a foam or a honeycomb core. So here are some examples in nature. If you look at grass stems, there's a dense layer on the outside, and then a foamy layer on the inside, and then just a hollow layer in the middle. And if you look at porcupine quills-- this is a porcupine quill-- these are made of keratin. They're like modified hairs. So it has a dense layer on the outside, and then this foamy stuff on the inside. This is a hedgehog spine here. And again, you can see there's a dense layer on the outside and these ribs on the inside. And this is the toucan beak-- you know, the toucans that live in Central America. And the beak has a foam core. Again, they're keratin structures, and yet the outside is solid. But the inside has a foamy structure. And Mark Myers did a paper on this a while ago. So we got interested in these structures that have a solid shell on the outside but a foamy thing on the inside. And we wondered if there was a mechanical reason for that. And you can show that, at least in some of them, if, say, you have a grass stem-- it's really common in plant stems. Do I have some more plant stems? Yeah, here we go. Here's a milkweed stem. So it's got these dense fibers with this foamy core here. And blue jay feathers-- feathers have this as well. So they have an outside layer on the quill that's solid, and then an inside layer that's foamy. If you look at things like plant stems, they blow in the wind. And you can look at the buckling resistance of the stem.
And because there's this shell with the foam-like core, it's not just the overall buckling of the whole thing. You can get that local face-wrinkling mode again. So the outer shell can wrinkle. And you can show that having a foam-like core helps prevent that wrinkling from happening, the same as with the sandwich panel. You remember we talked about the face wrinkling on the sandwich panel? Well, on these stems, you can get wrinkling of the outer shell. And the foam helps prevent that from happening. And you can show that you can actually-- for the same buckling load, you can reduce the weight of the plant stem, or the bird feather quill, or whatever by having that foamy core. So people have looked at this too. And there's a group in Germany who had looked at the idea of mimicking the horsetail stem. So this is a plant stem here, the horsetail plant. And they made something they called a technical plant stem, where they made this structure here. And they made it out of fiber-reinforced composites. And you can see, the little holes here represent those holes there, in the plant stem. So the idea was to try to get something that was good at resisting the buckling, but at a lower weight. And they were doing that with this thing here. And there was a group in Japan that did a similar thing. They used-- I think they took a copper tube, and then they took copper and aluminum wires and filled the copper tube with the wires. They extruded that. And then they melted out the aluminum, which, I think, also helped to soften and make the copper bond together. And they got these structures here. And you can see, that's similar to some of the plant stems as well. So these are all examples of cellular structures that have-- mechanically efficient structures. They're lightweight, and they're strong, and they're stiff. And these natural structures have been mimicked in engineering applications. So that's really all I wanted to talk about today. But I think we wanted to use the rest of the class as a review. So I haven't made a one-hour summary of the last six weeks, because that's not really possible. So I thought I would just answer questions. If you have questions, I'll try and answer them. So for the test-- so, the test is on Wednesday. You can bring one 8 and 1/2 by 11 sheet. I wasn't going to give you all those honeycomb and foam equations, partly because-- the only thing I would really want you know is open-cell foams. And I was hoping, by now, that you might have registered those equations somewhere in your brain. So I'm not going to ask you for-- some equations are obscure. I might ask you-- expect you to know what the Young's modulus of an open-celled foam is by now, or the axial modulus of a honeycomb or something. But I don't think I'm going to ask you anything like, calculate air pressure contributions to the modulus of the closed-cell foam. I don't think we're going to do that on this test. So I think you should know what the modulus of a-- the Young's modulus of an open-celled foam is, the shear modulus of a foam, because you need that for the sandwich panels. But you don't need reams and reams of those equations, so I wasn't going to give you those this time. OK? So Jenny, did you have-- so I finished the biomimicking thing. We're just going to do a review. I don't know if you want to stay or if you want to go. You want to stay? Well, whatever. So do you have questions, Jenny? AUDIENCE: I do. I just wanted to have question 4 from the last pset reexplained. 
Because I know that you explained it to in office hours, and I know that the solutions are online, and I looked at them. I'm still confused. LORNA GIBSON: OK. So, you're going to have to remind me what problem 4 is, because I don't remember. AUDIENCE: Question 4 says, polymethacrylate foam at solid strength of 3.0-- or, solid Young's modulus, rather, of 3.0-- gigapascals is being considered for the energy absorption layer in a bicycle helmet. LORNA GIBSON: OK. I'll tell you what, I think I have it on my little disk here. Maybe that's easier, because then I can read it. AUDIENCE: [INAUDIBLE]. LORNA GIBSON: Yeah, let me just see if that's going to come up. There it is. OK. So this one here about the foam, about the-- AUDIENCE: The one with the graphs. LORNA GIBSON: --energy absorption? AUDIENCE: I'm just a little bit confused by the graphs. LORNA GIBSON: OK. Let me see if I can make this bigger. Hang on a sec, my computer's thinking. OK. OK. And I think I gave you-- right, I gave you this graph here. Right? AUDIENCE: Yes. LORNA GIBSON: OK. So can you read what I've got here, or is that-- should I make it bigger? AUDIENCE: I can't really read it, [INAUDIBLE]. LORNA GIBSON: Does that help? OK. So I have to admit, when I put this together, somebody-- I can't remember who it was-- told me you thought it was overconstrained. And it turned out it was overconstrained. And then I said, forget the thickness. Forget that I've given you the thickness, right? So disregard the thickness. AUDIENCE: No, the velocity. LORNA GIBSON: Oh, speed-- the speed? The speed-- all right, OK, the speed. No, I don't want to register. OK. So from what I gave you, you can figure out the normalized peak stress, right? So you can get-- so if I do this, is this good? You can see what I'm pointing at? So you can get, the peak stress is just the mass times the acceleration over the area-- are we good with that-- divided by Es, which I gave you. So I think most people probably got the peak stress here. And then, because I had given you the velocity and the thickness, I was thinking you could calculate the strain rate. But if I don't give you the velocity, say we're not calculating the strain rate at this point here, OK? So here, we're-- did I put this-- I don't know if I have the graph on this solution. There we go. So this point here is the 2.5 times 10 to the minus 4 for the peak stress. And if you're not given the velocity, I think what I thought you would do then would be just assume a velocity. And then you can check it at the end. So since I had given you this velocity of 12, let's just say that's what we assumed. OK? So then you get a strain rate of 480 per second. So let's just say we assumed that velocity. Then we could scoot back over here. So we know we're on this line here for the sigma p over Es. And we want to be up towards the top of these different strain rates. So if you look at the strain rates, see, the very last one at the top is 1,000 per second, and the next one's 100 per second. So we're halfway in between those. And they're so close together, you can't really read the difference. But we're up here somewhere. So then we read off a w over Es for that. OK? Are we good? Then, if we know the w over Es, this is the number, here, that I read off. You know what the Es is, so you can get w. If you can get w, w is in joules per cubic meter. It's in energy per unit volume. But you know the area, and you know the thickness. So you can get the energy in joules. And then-- oh, did I-- I must have rubbed that off. 
Didn't have it on this version. The version in my notebook, I think, calculated what the velocity would be that corresponds to this. And I think it turned out to be 8 meters per second or something. So I had assumed 12, and I think it worked out to 8. And on those log log graphs, whether or not it's a strain rate of 480, or a little bit less, or a little bit more, you can't read the difference on these things. OK? AUDIENCE: Can you explain how this graph relates to the other graphs from lecture that were simpler? Because we had ones that were all density, and ones that were all the same strain rate. And I guess I'm just confused why. LORNA GIBSON: Oh, OK. Hang on. So let me see if I can-- hang on. No, I think I know what you mean. Let me see. I want to pull up the lecture notes. I think I'm finding the right thing. Here we are. So in the lecture notes, there was a thing that looked like this. Is that what you're talking about? Yeah. So the top set are the stress strain curves. So those are OK. You do a compression test, you measure that. Then, the middle set, you take, say, for one density, for one stress strain curve-- say that's your curve, the middle one there, 0.03. Say you loaded it up to some point, or say you looked at some point here, on the curve. You would figure out, for that stress, what's the area under the curve up to that stress. So you'd have a stress and an area under the curve up to that stress. And you could then-- say we know what this foam is. It's-- I don't know-- a polyurethane or something. So say we know Es. Then we could divide those two numbers by Es. And we would plot that one thing. Let me just walk over here. So this is the 0.03. So if it's in the linear elastic regime, it would be somewhere in here. Then I would scoot along, say, to here someplace. I would do a whole bunch of points all the way along there. And at every point, I would say, what's the stress, and what's the energy absorbed up to that stress. OK? And then I would plot-- doot, doot, doot, doot, doot-- up here, I would plot all those points. OK? And then when I got to this part here, that corresponds to that part over there. OK? Are we all good with that? OK. So then we repeat-- so we get one curve on the middle chart. Then, for the different densities and the different stress strain curves, we plot a different curve for each of the different densities doing the same process. And the thing we notice is that these points here are really the optimum point. Because at that point there, you absorb as much energy as you possibly can for that stress. OK? And we notice, happily, that those points lie on a line, basically, on a straight line. And we tick off what the different densities are-- so 0.01, 0.03, 0.1, 0.3. OK? And that line there, and all of these stress strain curves, and all these lines here, curves here, they all correspond to one strain rate. All right? So now I could take that line there for that first strain rate, and it would be one of these thinner lines here for a particular strain rate. And I would mark off-- just the same as I've got here, 0.01, 0.03-- I'd go 0.01, 0.03, 0.1, 0.3. All right? And then I would repeat this whole process again for a different strain rate. So I'd go back to doing some mechanical tests at, now, a new strain rate. And typically, the new strain rate is going to be bigger or smaller by a factor of 10 or so, because you're not going to see much difference in the behavior unless you get big changes in the strain rate.
So you're going to change the strain rate significantly. You get a new series of these stress strain curves. Then you get a similar-- very similar, but not quite the same-- series of these curves here. And what you'd find is you'd have another line here that would be offset a little from the first one. And the positions of where these densities were would also be offset a little from the first one. And then you'd draw the second one. So the second strain rate line would go here. And the little density positions would be offset a little bit, and you would mark them off. And you basically repeat it for different strain rates, and then you build this thing up. And then the density lines connect too. Is that OK? So that's how you generate that. So the idea is, this bottom diagram is really summarizing all of those shoulder points. The bottom diagram is really summarizing all of these points where the stress starts to scoot up at the densification strain. Yeah? AUDIENCE: So in question 3, we were constructing the graph in the middle for this particular one. What confused me is that on the graph, the variable on the x-axis is stress, whereas in the question, we were given the stress. So I drew something that looked as it should have looked. And it turned out to be right, but I'm very confused as to why. LORNA GIBSON: OK. Let me try and get rid of that. And I'm going to have to remind myself what the question is again. OK. So we have an open-celled aluminum foam. I asked you to write the equations for each of the different regimes. And those were straight out of the notes, I think. Right? AUDIENCE: Yeah, I think the three inputs were different relative densities. LORNA GIBSON: Yeah, and then you had to construct the energy absorption curve based on those equations. And I give you three relative densities. So this was my solution. So all this stuff here, I think, was straight out of the notes. So there was the-- oops, let me back up so we start at the beginning. So there was the linear elastic part. There was the stress plateau. That was just as it starts to densify. And then, there's that line joining up the points. So you were OK with all of that? AUDIENCE: Yeah. No, the only part that confused me is because once I simplified-- I inputted all of what we were given such that I had equations in terms of the relative density so I could just apply it to the different ones given. But then I wasn't very sure how-- because I got constants, I wasn't really sure how-- the slopes should look like, because they were constants. LORNA GIBSON: So you got-- I'm not sure what you mean about you got constants. So what I did was I said, well, I know that the diagram has to have this basic shape, right? In the first part here is where it's linear elastic. And this part here is the stress plateau. And that part there is the densification. OK? And I said, well, if I can find these two points that correspond to the change between the linear elastic part and the stress plateau and the point that corresponds to the stress plateau and the densification, then I've got the diagram. Right? OK? AUDIENCE: Yeah, I think that-- LORNA GIBSON: Oh, can I stop talking now? AUDIENCE: I mean, if you wanted to-- LORNA GIBSON: Well, it's for you. So you're OK? AUDIENCE: I think. LORNA GIBSON: You. AUDIENCE: So what confused me about this question was that in order to find the two points, you'd need to know the strain at which that [INAUDIBLE]. So I wasn't sure how you would find that strain.
LORNA GIBSON: Yeah, I think I didn't tell you. Let's see. Well, let's see. Do you have to have the strain? You have that. I think if you have-- I think you don't have to have it in terms of the strain. I suppose you could put it in terms of the strain. I mean, that would be a different way to do it. AUDIENCE: But isn't the equation for the stress plateau already in terms of the strain? LORNA GIBSON: Yeah, but this assumes that the stress plateau is just perfectly flat, right? So this thing here would be useful, the strain at which the plateau starts. But I think that's the same as saying-- that's the same point as saying you're at that point there, because that's where the plateau starts. OK? So I think there's different ways you could try to approach this. But I think-- let's see, I'm just trying to remember what I did here. Yeah, so what I did here-- so to get point a, I said, well, this is the equation for the linear elastic bit, right? And this is the equation for the stress plateau. Where I'm at point a here, this stress is the stress plateau. So that's why I've put sigma star plastic there. And then I said, this is-- the sigma star plastic is-- and this works out to some number here, some number times the relative density to the 3/2 power. And then I just made this little table here, where I said, OK, these are the densities I need to get the curve for. This is going to be my sigma star plastic for those densities. And from that, then I can get this column here. It's basically just that equation with this substituted in. And do you see that that corresponds to the point a? So I mean, you could do it by figuring out what that strain is and then figuring out all the points along that line up to that strain. But you don't have to do it that way. Do you know what I'm saying? So you're saying-- if I had-- so say I have my idealized stress strain curve like that, right? You're saying, well, I had to know that strain there so that I know where this stops, right? And I'm saying that corres-- and you could do it this way if you wanted to. You could say that this is the energy absorbed up to that point. And if you know that the modulus goes as the relative density squared times Es, and you know this, if you know those two things, you can figure out what that strain is. So you could do it that way if you wanted to, but I just did it a slightly different way. AUDIENCE: Yeah, so I don't understand how you just were able to get rid of the epsilon minus epsilon on that curve. LORNA GIBSON: Oh, let's see. So I think, in the first part, where it's linear elastic, I just go up to epsilon 0, right? So this part here doesn't really involve the epsilon, because I've gotten rid of it by taking the square of the stress. And then here, the other point I'm looking at is this point here. And I've assumed that epsilon 0 is much smaller than epsilon d, and I've ignored it. And that's, I think, how I did it in the notes in the class. OK? So when you get to that point, this strain here is usually a few percent at most. And this strain here is typically 80% or 90%. So it's pretty common to do that. OK? Other questions? So don't forget, the test covers everything from the thermal conductivity of the foams, it covers the stuff on trabecular bone, sandwich panels, and then, the energy absorption stuff. Yeah? Are you a little exhausted already? Yeah. So I'm guessing that this week and next week, you have everything due. You have papers, and projects, and tests. Do you have very many exams left, like final exams? 
Yeah, you've got finals? AUDIENCE: [INAUDIBLE]. LORNA GIBSON: Oh. All right. Anybody else have any questions? Because I think we can just go and do other things if nobody has any other questions. So the test is on-- so let's just review where we're at for the rest of the term. So the test's on Wednesday. And you can bring one cheat sheet, but I am not giving you all those equations with honey combs and foams. So I think, on your cheat sheet, you would want to put Young's modulus of an open-celled foam, shear modulus of an open-celled foam, compressive strength of an open-celled foam, shear strength of an open-celled foam. But I don't think there's going to be anything more complicated than that. And Monday, I was going to do the how I became a professor talk. So if you've seen it and you don't want to see it again, you're welcome to not come. But for you guys, I just talk about how I got here. And it's about my life. It's not about cellular solids or anything. And I don't know, you guys liked it. You like it, don't you? AUDIENCE: Yeah. LORNA GIBSON: Yeah. Yeah. So if you want to come, I'll do that. And then Wednesday, I thought, I just need to collect the projects. I wasn't going to do anything on Wednesday. And so I've been thinking about the how I became a professor talk. And I think-- so I have this idea that students would like to hear more of these talks from other faculty. Would that be true? AUDIENCE: Yes. LORNA GIBSON: Ah. Because I've been in touch with Cindy Barnhart, and we're going to try and organize something for the fall. So I'm going to approach some other professors-- not just in our department, across all of MIT-- and see if I can get other people to do the how I became a professor talk. So you would like that? OK, yeah. So we'll see if we can make that happen.
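For reference, here is a compact sketch of the two energy-absorption calculations that came up in the pset review above. It is only a sketch: the solid modulus of 3.0 GPa, the peak-stress ratio of 2.5 x 10^-4, the trial impact velocity of 12 m/s, and the resulting strain rate of about 480 per second are the numbers quoted in the discussion; everything marked ASSUMED (the pad mass, loaded area, foam thickness, the chart reading for W/Es, and the coefficients in the open-cell foam scaling relations) is a placeholder filled in so the arithmetic runs end to end, and should be checked against the actual problem statement and the course notes.

```python
import math

# ---- Question 4 style calculation (helmet foam), as walked through in the review ----
Es = 3.0e9                 # Pa, solid Young's modulus (given as 3.0 GPa)
sigma_p_over_Es = 2.5e-4   # normalized peak stress quoted in the discussion
sigma_p = sigma_p_over_Es * Es            # ~750 kPa allowable peak stress

v_assumed = 12.0           # m/s, the impact velocity assumed in the discussion
t_foam = 0.025             # m, ASSUMED foam thickness (gives the quoted ~480 /s)
strain_rate = v_assumed / t_foam

W_over_Es = 1.0e-4         # ASSUMED reading of W/Es from the energy-absorption diagram
W = W_over_Es * Es         # J/m^3, energy absorbed per unit volume

A_pad = 0.01               # m^2, ASSUMED loaded area
m_impact = 3.0             # kg, ASSUMED impacting mass
U = W * A_pad * t_foam     # J, energy the foam layer can absorb
v_check = math.sqrt(2.0 * U / m_impact)   # velocity consistent with that energy

print(f"peak stress {sigma_p/1e3:.0f} kPa, strain rate {strain_rate:.0f} /s")
print(f"energy absorbed {U:.0f} J, back-checked velocity {v_check:.1f} m/s "
      f"(vs assumed {v_assumed} m/s; re-read the chart if these differ a lot)")

# ---- Question 3 style construction: the two corner points of the idealized ----
# ---- energy-absorption curve for an open-cell foam of given relative density ----
# Point a: end of the linear elastic part, W_a = sigma_pl^2 / (2 E*); the strain
#          drops out by "taking the square of the stress", as described above.
# Point b: onset of densification, W_b ~ sigma_pl * eps_D, neglecting the small
#          elastic strain compared with eps_D.
# The coefficients C_E, C_pl and the densification-strain rule are ASSUMED
# placeholders, to be checked against the course notes.

def corner_points(rel_density, Es, sigma_ys, C_E=1.0, C_pl=0.3):
    E_star = C_E * rel_density**2 * Es              # open-cell foam Young's modulus
    sigma_pl = C_pl * rel_density**1.5 * sigma_ys   # plastic collapse (plateau) stress
    eps_D = 1.0 - 1.4 * rel_density                 # ASSUMED densification strain rule
    W_a = sigma_pl**2 / (2.0 * E_star)
    W_b = sigma_pl * eps_D
    return sigma_pl / Es, W_a / Es, W_b / Es        # normalized, as on the diagram

for rd in (0.01, 0.03, 0.1):                        # example relative densities
    s, wa, wb = corner_points(rd, Es=70e9, sigma_ys=70e6)  # ASSUMED aluminum-like solid
    print(f"rel. density {rd}: sigma_pl/Es = {s:.2e}, W_a/Es = {wa:.2e}, W_b/Es = {wb:.2e}")
```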
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. LORNA GIBSON: OK, so what we're going to talk about in this course are materials that have a cellular structure. So they're all very porous. And typically they have low volume fractions of solids, like less than 30% solid. And we're going to talk about different types of cellular solids. So one type are honeycombs. And this would be kind of your standard hexagonal honeycomb, something like that. We're going to talk about foams. And I'm sure you've all seen polymer foams. Here's an aluminum foam. We'll talk about other sorts of foams as well. We're going to talk about some medical materials. And I brought in some bone samples, some trabecular bone samples here. I brought in a little tissue engineering sample here. And then, we're going to talk about materials in nature that have a cellular structure too. So we're going to talk a little bit about wood. And I brought in a piece of balsa wood. I'll pass these around in a minute so you can play with them. This is the lightest wood. And I brought in lignum vitae. This is the densest wood. Lignum vitae is so dense that it sinks in water. And I have a couple of projects right now on natural materials. So I thought I'd talk a little bit about those too. We have one on bamboo and making structural products out of bamboo. So this is like a beam made out of bamboo. And this is a piece of oriented strand board made out of bamboo. So the same way you have like wood oriented strand board, you can do the same thing with bamboo. And we have a project that involves this, so I thought talk a little bit about that later on in the course. So we'll talk a little bit about the processing, how you make these materials, the structure. We'll talk a little bit about how we do the modeling. So we're going to start with the honeycombs, partly because they have a nice, simple unit cell that you can repeat and you can analyze fairly exactly. And then we're going to talk about modeling the foams. And then once we've got the modeling background, so we're going to get equations describing the mechanical properties, the stiffness and the strength, once we've got that, then we can apply it to lots of things. So we can apply it to understanding the trabecular bone, the tissue engineering scaffolds, or how cells interact with scaffolds. We're going to look at energy absorption in foams because they're good for absorbing energy. We're going to look at a lightweight sandwich panels as well. So we're going to look at all these different things. And I have some slides I'm going to go over today that have more pictures of all of this. And I'll pass this around to a sec, but let me go through the logistics first. So this first hand out has a few of the kind of details. There's two books that I've coauthored with colleagues. The one on cellular solids, if you wanted to buy a book, is probably the most relevant one to buy. If you don't want to buy it, you certainly don't have to. I'm sure the library has multiple copies of it. And the other more recent one is called Cellular Materials in Nature and Medicine. And that's a little more specialized. And again the library has that. So you don't need to run out and buy that. 
You can just get it there, but let me just mention there's that reference and it might be helpful. OK, so let me talk a little bit about the projects. What we do in this class is everybody does a project. And I like for you to do it in pairs, just so that you have somebody to work with. I think it's nice to have somebody to do it with. And the project has to be something on cellular materials, but I really leave it pretty open ended what the project is. It's really up to you to decide what you'd like to do. And to give you some idea what people have done, people have done all kinds of stuff in the past. So people have worked on negative Poisson's ratio honeycombs. I didn't bring any of those in with me, but you can design these honeycombs so that they have this property that when you push it this way-- instead of if you make the strain smaller in this direction, it gets wider in this direction normally, but with negative Poisson's ratio materials, it will contract in that direction. So I've had people do projects on that. I've had people work on osteoporosis. I had a group once who worked on elephant skulls and I brought part of their project with me. So it turns out elephant skulls have a sort of porous layer to them. And this is-- they have these large pores in the top of the skull. I don't have a whole skull, but this is part of it. And what they did was they had heard that elephant skulls have these pores and that the pores somehow affect sound transmission and how the elephant hears. And so they wanted to do a project on that. And that was really all they knew to start with, elephants have pores and we want to do a project on it because it's cool. So I helped them put together this project. We went up to Harvard. We went up to the Museum of Comparative Zoology where they have elephant skulls, which are like about this big around, huge skulls. And some of them had the outer part of the bone was broken and you could see these big pores. So they got some nice pictures of elephant skulls. And then they found that the University of Texas at Austin has computer tomography images of all sorts of bones of different animals. And sure enough, they had elephant skulls. So they got the file for the micro CT image, which gives you the sort of 3-D picture of the skull. And then they use that to 3-D print small versions of the skull. So that's what this is, they printed smaller versions. And these were just a couple of slices that I got from them at the end project. And then what they did with one of the skulls, they suspended it from a wire, and they took speakers, and they had sound that vibrated the skull, and then they put an accelerometer on the skull and they measured the vibration response of the skull. And they also made, I think, it was a dolphin skull, which did not have these pores, and they kind of compared the vibration response from the sound for these two skulls with two different structures. So that was a project for this class. People have worked on tissue engineering scaffolds. Often there's people who work on food foams. So one year, people worked on bread. They did bread processing. They made bread by having different amounts of yeast, different rise times, different ingredients. And they made bread. And I have a little-- I like these historical things. And I have a little thing here I wanted to show you. So the people in 3032 last year, last fall, will know that I went to England in November. And I went to the Royal Society for an editorial board meeting. 
But I also went and looked at their archives. And one of the things they showed me was this article, which is in the sort of an original, sort of archives of the Royal Society from the 1600s. And it's by a guy called John Evelyn. He's famous for writing a book called Silva about trees and wood. But he's written this article here. And I love the title. It's "The Several Manners of Making Bread in France, Where by General Consent, the Best Bread in the World is Eaten," by Mr. Evelyn. So here we have in the Official Royal Society bread making science. And in fact, the article was several pages long. There was quite a lot on bread and how you make it in France, where the best bread is eaten. So if you want to do something on food foams, people have done meringue before, various things. They look at typically changing something about the recipe, or the composition, or the processing, or the baking. And they look at the structure. Sometimes they look at mechanical properties too. So anyway, there's a whole list of things there. You can think of other things. If you look through the books, you might get some ideas as well for projects. So what I'd thought I'd do next is I wanted to just kind of give an overview of what we're going to talk about. And I've got a bunch of slides. Let's see I forgot some things, because I do that. Books, yeah, OK, let me pass some of these things around so you get to play with them too. So here's some honeycombs. I don't know if we can pass all these things around because it's a little unwieldy. So this is an aluminum honeycomb. I like bringing toys in. These are little rubber honeycombs. This is a little ceramic honeycomb. This is a little paper honeycomb. And let's see, we have some foams here. So here's a metal foam and a ceramic foam. And this is a sort of, not quite a foam, it's made of hollow spheres by sintering hollow spheres together. So you can play with that. And what else do we have? We have little lattice things here. So here's a sort of 3-D lattice material. So this has sort of a cellular structure, but it's very, very regular. It's not like a foam. So that's called a 3-D lattice material or sometimes a 3-D truss. And what else should we pass around? We need to do the bones. Here's the wood. You can feel how different the densities of the two woods are. I would like these all at the end because I show these around for different classes. Here's some bone, you can see the bone looks like the foam. And this is a tissue engineering scaffold for generating skin. All right, so while those are getting passed around, I'll talk a little bit about what we're going to cover in the class with some slides. OK, let's see, should I dim the lights? Would that be a good thing, Craig, if I dimmed the lights? CRAIG: They're kind of preset, you can try maybe two. LORNA GIBSON: OK. Doesn't seem to-- there we go. How about that? That OK? All right, so I like these historical things. So this is a picture of Robert Hooke's drawing of cork from his book Micrographia. And he was the first person who used the word cell to describe a biological cell. And it comes from the Latin cella, which means a small compartment. So you can think of the cells as small compartments. That kind of makes sense. And he kind of very modestly says, "I no sooner discerned these, which were indeed the first microscopical pores I ever saw, but me thought I had with the discovery of them perfectly hinted to me the true and intelligible reason for all the phenomena of cork."
So he's saying by looking at the structure of cork, he thinks he understands everything about the properties of cork-- very modest. But in fact, this is kind of the foundation of material science. So material science is all about looking at the structure of materials and trying to say something about the properties of the behavior of the materials. And that sentence kind of sums that up. So that's why I like that sentence. So what we're going to do is look at different kinds of cellular materials. We'll look at engineering ones. And these we typically refer to as honeycombs or foams. Honeycombs have two dimensional prismatic cells, while foams have three dimensional polyhedral cells. And we'll look at applications for the honeycombs and foams in things like lightweight sandwich panels, in energy absorption devices, and things like thermal insulation. We'll talk a little bit about the thermal properties. We're also going to talk about cellular materials in medicine, so trabecular bone. We'll talk about osteoporosis and how loss of bone reduces the strength, and how you might estimate that. We'll talk about tissue engineering scaffolds, something about their mechanical properties. You may think the mechanical properties aren't probably the most important thing. But in fact, the mechanical properties do have some effect on how the cells interact with the scaffold. So we'll talk a little bit about cell scaffold mechanics. And then we're going to talk about cellular materials in nature at the end of the course. So we'll talk a little bit about honeycomb like materials, like wood and cork, and foam like materials, like the trabecular bone. There's also a type of tissue in plants called parenchyma that looks just like a foam. And there's some sponges that have some interesting features. And often, in nature the cellular material appears in combination with some solid material. And so it's sort of a structural component. And you can see sandwich structures in nature, leaves and skulls. You can see materials that have density gradients, palm stems and bamboo are examples of that. And you can see materials that have cylindrical shells with compliant cores, and things like plant stems and animal quills are like that. So that's kind of the range of materials that we're going to talk about. And one of the interesting things about cellular materials is that you can make cellular materials out of almost anything now. And this is really a huge range of materials and lots of different applications for this. So one of the fundamental things we're going to do is look at the mechanisms by which the materials deform and how they fail. And we'll use a structural analysis to obtain the bulk mechanical properties, so things like the stiffness, the moduli, the strength, the fracture, toughness. We'll look at how you can control the design of the microstructure to get the properties that you might want, and also how you might select for the best material for a given engineering application in engineering design. So we're going to get those three things. So let me start by showing you some more examples of engineering cellular solids, and in particular, some micrographs. So that you can see what it looks like on a small scale too. So these are the honeycombs. These are the sorts of things that I'm passing around now. The aluminum one, paper resin, and the ceramic ones. The aluminum and the paper resin ones are typically used in the cores of sandwich panels. 
And the ceramic ones are used in the catalytic converter in your car. So what they do is they block off every other cell at one end and then the opposite cells at the other end and exhaust gas from your car is forced to go through those channels in the honeycomb. And the walls, if you look at that triangular one, those triangular walls themselves are porous, and they're coated with the catalyst, which is platinum. And that forces the exhaust gas through the wall in contact with the platinum, and then comes out the other end. And they're ceramic, obviously, because the gas is hot and they need something that has a high thermal resistance. So these are some examples of honeycombs. These are some examples of engineering foams. When I tell people I work on foams, they always think of polymer foams, like polystyrene or something. And there's lots of polymer foams. But you can actually foam any materials now. There's metal foams. There's ceramic foams, and glass foams, carbon foams, all sorts of foams. So those are some examples. You can see when you look at these images here that the foams have a low volume fraction of solids, like if you look at say this polyethylene one here. Say we look at this guy up here, then you can see there's not much solid, there's a lot of gas. So the volume fraction of solids is fairly low on that foam there. So one of the things we're going to talk about is how the volume fraction of solids affects the properties. You can also see on the top left and the top right, the top left one has what we call open cells. There's just edges along the polyhedra, there's no faces or membranes. And the right hand one is a closed cell foam. So there's like membranes that cover the faces of the cells. So we're going to talk a little bit about the differences and the behavior of open cell and closed cell foams too. These are food foams. So I've already said you might want to do a project on food foams. And these are just some examples of different kinds of foods that are in fact foams. And it turns out the food industry spends quite a lot of time and effort thinking about the mechanical properties of food. And it turns out if the texture of the food isn't right, then people don't like the way it feels in their mouth. There's something they actually called mouth feel. So it turns out if your cereal's too soggy, it's icky. If it's too crunchy, it's icky. So it's sort of a happy medium. And food companies spend quite a lot of time and money worrying about the mechanical properties of food. This is an example of showing that the cells could be anisotropic, the cells could be elongated in one direction. For instance, in the top one on the polyurethane foam. And if they're elongated in one direction, it's not too surprising, you might have different properties in that direction from the plane perpendicular to it. And then the bottom image is of pumice. Pumice is a volcanic rock. And you can see how the pores are kind of flattened out there. And they're flattened out because that was once molten lava. And the molten lava was flowing down a mountain side of a volcano. And as it flowed, it got sheared. And the shape of those pores reflects the shearing as the molten lava flowed down the volcano. And so this kind of sort of stretched out cell shape is going to give you anisotropic properties, different properties in different directions. This is the 3-D truss that I'm passing around. I don't know if it's exactly the same one, but it's a similar one.
And these trusses are triangulated structures. And we'll talk a little bit about their properties too. And then we also are going to talk about some applications. So obviously, these materials are mostly air. And that gives them a low weight. And that means they're often used in structural sandwich panels as the core of the panel. And these panels have stiff faces separated by a lightweight core. And the idea is to make it a little bit like an I-beam. So the way you have the flanges on the I-beam, the faces are like the flanges, and the porous core is like the web of the I-beam. They can also undergo large deformations at relatively low stress. And that means they can absorb a lot of energy. So if you think of the energy as the area under the stress strain curve, if there's big strains and big deformations, then there's going to be a large area. And that sort of energy absorption occurs at a fairly low stress. So typically, when you want to absorb energy, it's not just how much energy you want to absorb. You have to do it without actually breaking the thing you're trying to protect. So you don't want to generate high stresses as you go along, and foams are good at this. Foams are also good at being thermal insulators. They have a low thermal conductivity. And that's because they're largely made of gas and the gas has a lower conductivity than the solids. So that gives them a lower conductivity. And they have a large surface area. And the smaller the pore size, the bigger the surface area per unit volume. And that makes them good for things like carriers for catalysts. And that's why they're used for these catalytic converters too. OK, so here's some examples of cellular materials in medicine. So here's some examples of trabecular bone. Trabecular bone exists at the ends of your long bones. So say in your hip or in your knee. It also exists in your vertebrae, in the middle of the spine there. And it also exists in your skull. And you can see it's a porous type of bone. It looks very similar to the foams, and the sorts of mechanical models we make for foams can be applied to the bone as well. And so that's one of the things we're going to do later in the course. These are two slides showing what happens when people get osteoporosis. The left hand slide is from a 55-year-old female-- the same bone, the same slice. And the right hand one is from an 86-year-old female. This thing here, rho star over rho s, that's the relative density, the density of the bone divided by the solid that it's made from. That's the same as the volume fraction of solids. And so on the normal bone on the left it's about 17% solid. And on the osteoporotic bone on the right it's about 7%. So you can kind of see the bone density has gotten lower, partly by thinning of the struts, but partly by resorption of the struts, as well. And obviously the one on the right is going to have a lot lower strength than the one on the left. These are micro CT images of bone. And again, you can see how the structure looks different at different relative densities. The one on the left is sort of in the middle at around 11% dense. The one in the middle is the most dense, 25%. And the one on the right is 6% dense. So it's not too surprising that the one on the right would have a much lower strength than the other ones. And we'll look at how we can model that. This is just showing some deformation in bone. I have a colleague, Ralph Mueller, who's got a micro CT machine, which allows you to do compression tests in the micro CT.
So he can make these sort of images where he scans it at zero strain. He compresses it a little bit. He scans it again. He compresses it a bit more. He scans it again. And these are stills from his images, but he makes animations from this. And if you look at the top right up here, you see these struts here. They're pretty straight in this one. They're a little bent and starting to buckle here. And then if you look at that one strut there, you can see how it's buckled right over. So you can look at the deformation mechanisms by looking at the CT scans and things like that. People are starting to think about using metal foams for coatings of orthopedic implants. So one of the issues with implants is that say you have a hip implant or a knee implant, you remove the bone that's preexisting, and then you replace it with some sort of implant. Typically, the implant has a stem that fits into the hollow part of the bone and then has a sort of joint piece to it that fits into the joint. And you can get some loosening of the stem in the remaining bone. And one idea is that you use porous coatings to minimize that. And right now, typically what they do is sinter beads, metal beads onto the stem. And another idea is maybe you could use a metal foam. And these are some different types of metal foams that people are looking at. Another type of cellular material in medicine is a tissue engineering scaffold. And this just shows some different examples made by different processes. And we'll talk more about these later on in the course. This one here at the top left is a collagen based one made by a freeze drying process. And I don't know if you saw MIT's website yesterday and today, Ioannis Yannas was the one who developed this. And he's just being inducted into the National Inventors Hall of Fame. And this is what he really was inducted for is he's invented a skin-- well, he calls it a dermal regeneration template for regenerating skin, mostly in people with serious burns. Then these are some other sorts of scaffolds that are made by different processes. This is by a sort of rapid prototyping process here. The bottom two, these are kind of interesting. These are actually the extracellular matrix in the body. And they've had all the cells removed from it. So these tissue engineering scaffolds are really designed to mimic the extracellular scaffold in your body or extracellular matrix in your body. And you can see how when you remove the cells, the structure of those two things looks a lot like a closed cell foam. So that's the kind of structure you're trying to replicate. We'll look a bit at cell mechanics. This is a cell contraction of a scaffold. So here these sort of very thin transparent bits of the collagen based scaffold, and this is a fibroblast on it right here. And I had a student, Toby [? Fryman, ?] who worked with me and Ioannis Yannas on this. And you can see from the video the cell is actually contracting the scaffold, making it deform. And you can calculate what forces the cell must be imposing on the scaffold by knowing something about the geometry of the struts and how the cells attached. And then it's going a little bit more. So we'll talk more about that. And then there's a final picture down here, where you can see these two, the two points up here and down here have now been brought pretty much right together. So we'll talk about that in more detail.
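As described above, the force the cell exerts can be backed out from the strut geometry and the deflection seen in the video. One plausible back-of-envelope idealization (not necessarily the analysis used in the study mentioned) is to treat the scaffold strut as a slender end-loaded cantilever, so the force is F = 3 E I delta / L^3. All of the numbers in the sketch below are hypothetical placeholders.

```python
import math

# Hypothetical order-of-magnitude estimate of the force a contracting cell applies to a
# scaffold strut, idealizing the strut as an end-loaded cantilever beam: F = 3*E*I*delta/L^3.
# This is only a sketch of the kind of calculation described in the lecture; every value
# below is an assumed placeholder, not data from the study.

E_strut = 1.0e6        # Pa, assumed effective modulus of the collagen strut wall
L = 100e-6             # m, assumed strut length (on the order of the pore size)
d_strut = 5e-6         # m, assumed strut diameter (circular cross-section assumed)
deflection = 10e-6     # m, assumed strut tip deflection measured off the video frames

I = math.pi * d_strut**4 / 64.0            # second moment of area of a circular section
F = 3.0 * E_strut * I * deflection / L**3  # end-loaded cantilever result

print(f"estimated cell force ~ {F * 1e9:.1f} nN")
```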
We've also done studies on cell attachment and how that attachment rate or the amount of cells that attach is related to the surface area, the surface area per unit volume. So these are just some tests from that done by [? Fergal ?] O'Brien, a post-doc that worked with me. We've also done some studies on cell migration. And Brendan Harley was the student who did this. And he stained the cells with one stain and he stained the scaffold with something else. So red are the scaffold and green are the cells. And then he used a confocal microscope to track the cells. And he tracked the cells and where they moved versus time. And if he has the location at different times he can get the velocity. And one of the things he did was he changed the stiffness of the scaffold and he found that the migration speed depended on how stiff the scaffold was. So he was looking at sort of interactions between mechanical properties of the scaffold and behaviors like cell migration. And then we're going to look at materials in nature. So here is wood. So you can see the cellular structure of wood. It's a lot like the honeycombs. It has sort of a prismatic structure. That one happens to be cedar, but other woods look similar. Now, this is balsa wood. And this is showing just how the balsa deforms. I think this was loaded from top to bottom. And this is at zero load. And then this is more load, and more load, and more load. And if you look at that cell there with this little kind of tear in it here, that's the same as the cell down here, and that's the tear there. So you can see how the cell walls bend and how they deform. And you can model that using the honeycomb models. This is just another image showing actual failure of wood, buckling of the cell walls. This is cork. So these are modern scanning electron micrograph of cork. And one of the interesting things is the cork cells have these little corrugations. You see how they're not flat, they have little kind of wrinkles in them. And that gives rise to sort of an interesting property of cork. If you take cork and you load it. So here we were pulling it in the direction of these arrows, pulling it like along this direction here. And again, this is the same set of cells. That tear there is the same as that tear there. And you see all these little corrugations here, they've all straightened out when we're pulling on it. And the Poisson's ratio of the cork is zero. It's kind of like a bellows. Like if you had an old camera, or you have an accordion bellows. If you pull the bellows in and out, it doesn't get any wider this way or the other way. You're just sort of opening the bellows and closing the bellows. And the cork cells are doing kind of the same thing. And it gives them this Poisson's ratio of zero. Which it happens is one of the things that makes it easy to get the cork into your wine bottle, because as you're pushing on it, it's not pushing out in all directions. This is only for this one direction, but it's not pushing out in all directions. These are parenchyma cells in plants, in carrots and potatoes. All those little blobs in the potato, those are starch blobs. This is called a Venus flower basket sponge, and Joanna Aizenberg, at Harvard, has studied this quite a lot. This has a hierarchical structure. If you look at the overall sponge and then you look at each of the sort of struts that make up the lattice, that too has a hierarchical structure. And she's looked at the optical properties of this glass sponge. It's kind of a beautiful thing. 
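Relating to the cell-tracking measurements described above: once the cell's position is known at a series of times from the confocal images, the migration speed is just path length divided by elapsed time. A minimal sketch with made-up track data:

```python
import math

# Average migration speed from a tracked cell's (time, x, y) positions.
# The coordinates below are made-up placeholder data, not measurements from the study.
track = [          # (time in s, x in um, y in um)
    (0,    0.0,  0.0),
    (600,  12.0, 5.0),
    (1200, 20.0, 14.0),
    (1800, 31.0, 18.0),
]

path = sum(math.hypot(x2 - x1, y2 - y1)
           for (_, x1, y1), (_, x2, y2) in zip(track, track[1:]))
elapsed = track[-1][0] - track[0][0]
print(f"average migration speed ~ {path / elapsed * 60:.2f} um/min")
```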
And then there's some cellular structures in nature as well. There's sandwich structures. There's density gradients. And there's tubes with a cellular core. So here's some examples of that. Here's the iris leaf, you know the iris plant has these long kind of leaves that stand up like this. And it's just like a sandwich panel. The parenchyma are kind of like a foam in the middle here. And there's very dense fibers called sclerenchyma that run up and down the length of the leaf. And they're like fibers in a fiber composite. And here's a bulrush or a cattail leaf. And they're like little I-beams almost. It's like a whole series of little I-beams. And again that's sort of mechanically efficient. These are examples of sandwich structures in bird skulls. Some of you know I'm a birder. So I sort of sneak in bird stuff from time to time. But you can see how these bird skulls are all sandwich panels, and obviously birds want to minimize their weight for flight. And this is one of the ways that they do that. This is a horseshoe crab, sort of similar kind of thing. This is from Mark Myers in San Diego. He did a study on the crab in its shell as a sandwich. And this is the ever so handsome cuttlefish. And the cuttlefish has something called a cuttlefish bone. This is the bone here and the bone is made up of these kind of sandwich type structures. The cuttlefish is related to octopus and squid and things like that. And it's hard to see in this picture here, but these little things here are actually like little tentacles. There's several tentacles that it eats stuff with. The cuttlefish is actually a mollusk. All those things are mollusks. It's called a fish, but it's not really a fish. And here's an example of a natural material that has a radial density gradient. Have you ever noticed if you look at a palm, like you see those pictures of Hollywood in LA, and the palm trees, you know, they line the street. But they're all about the same diameter from the bottom to the top. And when you think of like an oak tree, it's not like that. It's big diameter at the bottom, skinny diameter at the top. So when wood grows, the wood has more or less the same density in the bottom and at the top. So as it's growing, the density is more or less the same. And it resists the bigger loads from getting taller by adding circumference. So it gets wider and wider as it gets older and older. But palms don't do that. They come out of the ground a certain diameter. And most palms just grow that same diameter. As they get taller and taller, you can imagine there's wind forces, and different kinds of forces are on it, the stresses get bigger and bigger. And the way they resist those is that the cell walls get thicker, and they preferentially get thicker on the outside. And if you think of moment of inertia, remember moment of inertia is increased more with the material on the outside of a beam. And that's kind of what the palm is doing. So if you look at young cell walls and old cell walls, here's some SEM pictures of young ones and it's sort of skinny. And SEM pictures of older ones, and the cell wall has gotten thicker. So we're going to talk a little bit about that. And it turns out, this is an incredibly efficient way to deal with getting taller and needing to resist bigger loads. Another material that has a radial density gradient is bamboo. So this again shows these sort of dense sclerenchyma fibers. Do you see these kind of dense parts here? And you can see there's not very many of them here.
And there's more and more as you get towards the outside there. So there's a density gradient there. So we'll talk about that. And some plant materials have a cylindrical shell with a compliant core. Plant stems are commonly like this. This is a milkweed stem. And you can see it's got these sort of fibers that are almost completely dense. And then a sort of lower density core, cellular core, here, and a void in the very middle. And you can show that that core helps prevent local buckling. So if you take a drinking straw and you bend it, you get that kinking kind of failure. You can show that having this sort of foam like core in the middle helps to resist that. Imagine you have a drinking straw and now you put foam in the middle. It's going to be harder to get it to kink like that. So that's what the plants are doing. Animal quills do the same thing. That's a porcupine and a hedgehog quill. And all of this stuff is in these two books. So it doesn't really matter to me if you go out and buy the book. I don't make very much money on these books. So this is not an income producing thing. But those are the books that all these pictures have been taken from. All right, so I think that's my sort of introduction to the class. Are there any questions about how we're organized or what we're going to be doing? Are we good? It's OK? OK. So then I think what I'm going to do for the rest of the time is start the next section of the course which is on processing of cellular materials. Now, I have another little hand out here. So I don't know if I'll remember to do this for every lecture, but I like to have a little outline, that partly makes me be organized. So it's just a little outline for the lecture. AUDIENCE: [INAUDIBLE]. LORNA GIBSON: You asked me to do what? AUDIENCE: Put the room light back up. LORNA GIBSON: Put the what up? AUDIENCE: Lights. LORNA GIBSON: Oh, the lights. I'm going to have another set of slides though. So let me get out of the intro slides. I know I have another set of slides. So I'm going to just leave the screen up. And kind of put stuff on the board and talk about the slides. AUDIENCE: That's fine. LORNA GIBSON: You're good? OK. OK. So I wanted to talk a little bit about processing of cellular solids and then, next time we'll start talking about the structures. It seemed good to talk about the processing before we got to the structure. So I'm going to talk a little bit about honeycombs and how they make honeycombs, and then foams, and then lattice materials. Yeah? AUDIENCE: The slide you're showing with the shell with the foam inside it, are there techniques for analysis of it? LORNA GIBSON: Well, I don't think we're going to get into all the gory details, but I can certainly give you references. That's something that one of my students did at one point. And in fact, I've been collaborating with Jennifer Lewis up at Harvard and she has a student who's making sort of cylindrical shells with foams out of ceramic foam. So he's 3-D printing sort of with coaxial nozzles a cylindrical shell that's pretty solid. And then a ceramic foam on the inside. And that's one of the things he's playing around with. So he's looking at ways you might make engineering versions of that. So I wanted to start with looking at honeycombs and how they make honeycombs. And I thought what I'd do is I've got some slides. And I'm going to talk about the slides. And then I'll put some notes on the board to kind of describe what we're doing, OK? So this is the first sort of slide on the honeycombs.
And the two main techniques that they're made by, especially those aluminum honeycombs and paper honeycombs that I passed around, one technique is an expansion process. So what they do is they take flat sheets of some metal, say aluminum, and they put little stripes of glue on it in different places. So these little kind of specialty things are where the glue goes. And then they stack those guys up in a sort of particular arrangement. And then what they do is they pull it all apart, kind like a paper doll thing. They pull it all apart and when they pull it apart, they get the hexagonal shape. So let me just show on the board how you do the gluing and how that works. So they would start with some sheets. Say we start with two sheets like that. They'd put some glue down, say there. And then there's a gap. And then they put glue on the opposite side over there. And then there's another gap. And then they put glue there. And then they do the same positions but the opposite sides on the next sheet. So they do that. And then if you glue those together-- well, let me do another one. Maybe do a couple more. So then if I do one like that, it's glued there, and there, and there. That guy gets there, and there and there. And when glue that-- when you push that together and then take it apart, you've got something that looks like this. So say I call that one, two, three, and four. Then this would be 2 and 3. OK, so this thing here is that. Where it's not glued, you get them doing that. And then it's glued again down here. And so you get this kind of pattern. And one of the things about these honeycombs that are made by the expansion process is these inclined walls have a thickness t. And because there's two sheets up here, the vertical walls have a thickness 2t. So that's typically what you see in the commercial honeycombs that are made by this way. And this process is used for aluminum honeycombs, for paper resin honeycombs, for Kevlar honeycombs. And I'll just say note that the inclined walls have a thickness t, and the vertical walls are 2t. So that's the expansion process. And the process that's commonly used for honeycombs is called a corrugation process. And for the corrugation process, it's just like the lower schematic here shows. You take a flat sheet. So you've got a roll of a flat sheet. And you've got some rollers that have the right shape to give you the corrugated profile that you want. You pass the sheet through the rolls and you get individual sheets out. And each sheet is kind of a half hexagon. And then you put the sheets together and that forms the whole hexagon. So you have a flat sheet that's fed through a shaped wheel to form half hexagonal sheets, which you then bond together. And it's the same kind of thing that the inclined walls have thickness t and the vertical walls have thickness 2t. And this corrugation process, you can only really use it in materials that you can deform a fairly large amount to get the corrugations. So typically, this is for metals. And aluminum is probably the most common metal that this is used for. AUDIENCE: How are the corrugated sheets attached to each other? LORNA GIBSON: I think they're just bonded with epoxy. Yeah, so obviously if you wanted to use it for high temperature performance-- you know, all of these things are bonded with some sort of epoxy or some sort of resin. So there's an issue if you wanted to use it higher temperatures. So another process that's used to make ceramic honeycombs is an extrusion process. 
And you just take a ceramic slurry and you pass it through a die. And you can make a ceramic honeycomb by doing that. And I believe that's how they make the ceramic honeycombs I passed around, the catalytic converter ones. Other techniques involve rapid prototyping. You can 3-D print honeycombs. And Jennifer Lewis has a project on 3-D printing of honeycombs up at Harvard. And one of the interesting things they're looking at is not just printing with an ink, but printing with a fiber reinforced ink. So they're making cell walls of the honeycomb that are fiber reinforced. And one of the tricks is trying to orient the fibers in the way that you want them to be oriented. So there's rapid prototyping techniques as well. You can also use selective laser sintering. So you can have a photosensitive polymer and use a laser to cure that and build up a honeycomb type structure. And you can also cast honeycomb structures. So those rubber honeycombs that I passed around, those are made by casting. You take a liquid silicone rubber and you add a hardener and you pour it into a mold. Another kind of interesting way that people have made-- well here's another example of the honeycombs that are 3-D printed. And this is an example of-- or a couple of examples of looking at a bio carbon template. So what that means is that these materials are based on the wood, but none of them are actually wood. So what they do is they take wood, like they take pine, or they take beech or something. They take some kind of wood and they carbonize it. So that they do the same processes as used for making carbon fibers. So you put the wood in an inert atmosphere. And you pyrolyze it. You heat it up to I think 800 degrees C. And all you're left with is the carbon. And it preserves the structure. And you replicate the structure. You just get the same structure. There's some shrinkage. The shrinkage is about 30%. But you get the same structure as the wood. So this material up here is actually all carbon. It's just replicating the wood that was used, the pine that was used. And then what people have been doing is using that carbon structure and then infiltrating that with gaseous silicon. And they form silicon carbide. So these structures down here are all silicon carbide replicas of wood. And they're thinking about using that for things like filters for high temperatures or for catalyst carriers. And one of the attractive features is wood has fairly small cells. The cells are around 50 microns across. And so you get a large surface area. And this is a similar thing here. These two are the carbon template. And here they've used silicon and they've actually filled in the voids. And so they've got silicon carbide where the cell walls used to be. And they've got silicon where the void used to be. So people are playing around with this as another way of making a honeycomb type of structure. And they use other kinds of plants besides wood as well. But that's the kind of general idea. So the idea is that wood has a honeycomb like structure. And the cells are fairly small. The cells are in the order of 50 microns sort of in diameter and maybe a few millimeters long. And this bio carbon template replicates the wood structure. So the wood is pyrolyzed at 800 degrees C in an inert atmosphere. So say an inert gas. And that gives you the bio carbon template. And you maintain the structure, although there's some shrinkage. And then this carbon structure can then be further processed.
So for example, you can infiltrate it with a gaseous silicon. And you end up with a silicon carbide wood replica. So possible applications are things like high temperature filters, or catalyst carriers. I think that's it on the honeycombs. Are we good with all sorts of methods? And my little talk here on processing is certainly not comprehensive. I'm sure there's other ways people have developed. These are some main ways. All right then, I want to talk about foams as well. People have developed different types of processes for different types of solids, so polymers, and metals, and ceramics. So I just go through each class of solid and talk about that. So the idea with polymer foams is that you want to introduce gas bubbles into either a liquid monomer or a hot polymer. And then you want the bubbles to grow. And then you want to stabilize them and solidify it by either cross-linking or by cooling the hot polymer. So there's a variety of ways of doing that, but let me just put that down. So there's a few ways to get the bubbles in there in the first place. One is just by mechanical stirring. So if you've ever made meringue, you know what that is, you just take a whisk and you beat egg whites, and bubbles of air will get enveloped in the egg whites. They also do that with polymers. Or you can use a blowing agent. And there's several varieties of blowing agents. So the blowing agents are divided into physical and chemical blowing agents. And the physical ones, they force the gas into solution under high pressure, and then you reduce the pressure, and the gas bubbles expand. So you can use physical blowing agents. Or you can introduce liquids that, if you're using a hot polymer, that at the temperature of the hot polymer, they form a gas. So that the liquid just turns into a gas. And that would form vapor bubbles. And then the chemical blowing agents. There's a couple of different ways that those work. You can either use chemical blowing agents where you have two parts that react together to form a gas. And so that gas then blows the foam. Or you can have a chemical blowing agent that reacts with the polymer to form a gas and that blows the foam. So either way. And you can have them decompose on heating. So the same kind of thing. They evolve the gas when they get into the hot polymer. So there's these different ways of blowing the foams. And there's many, many different types of these blowing agents. But, these are kind of the general techniques. And whether or not the foam forms an open cell or a closed cell structure depends on the rheology of the polymers, so the viscosity of it, and also the surface tension. Another way to make a foam is to make something called a syntactic foam. A syntactic foam is made by taking thin walled hollow spheres and then using, say a resin, like an epoxy resin, to bond them together. So you end up with something that's porous. And you've got the void from the hollow sphere, but you don't foam it in the same way that you blow bubbles through it in some way. One other thing about polymer foams is they sometimes have a skin on the surface. So when you blow them, say you've got a mold, there will be a skin that forms against the mold, and sometimes the process is designed in a way that the skin is thick enough that it acts like a skin in a sandwich panel. So they control the mold in a way and the blowing process so that they get a foam in the middle and thicker skins on the top and the bottom surface. And that forms a sandwich panel. Those are called structural foams.
Let's see. So I think what I'm going to do next is the next section's on metal foams. And I've got a few slides on that. So I think I'm going to run through the schematics and just talk about it. But, I'll put the notes on the board next time. And there is one thing I forgot to do at the beginning. I like to tell you a little about me. And I want to hear about you. So I wanted to leave a few minutes for that. So let me just wait until people are finished writing stuff down. And I'll go through these in a few minutes, and then we'll through it in more detail next time. And I'll write notes down. OK? Are we good? So there's a whole variety of ways of making foamed metals. And most of them have been focused on aluminum. But you could in theory do them with other types of metals. So this was one of the first processes and it just involved taking a molten aluminum, so here's the aluminum down here in a crucible. They added silicon carbide powder to it and then they just used a stirring paddle, like they just stirred it up and mixed gas in that way. And they found that they got bubbles that rose up. And then they used conveyor belts to just kind of pull the foam off. And the thing about the silicon carbide was that if you didn't have that, then the bubbles wouldn't be stable enough that you could do this. The bubbles would collapse before you got to be able to pull them up. But the silicon carbide I think makes the aluminum melt more viscous and it helps prevent sort of drainage and collapse of the bubbles. And so that's one way. And there's a type of foam called Cymat, and this is an example of the foam that's made with that process. Maybe I'll bring it next time and we can pass it around. Another method is to use a metal powder and titanium hydride powder. Then you can consolidate that. So here's-- it's hard to see the writing, but this is a aluminum powder. This would be the titanium hydride powder. You mix them together and then you compact them. You press them together. And the titanium hydride decomposes and forms the hydrogen gas at a temperature at which the aluminum is not really quite molten but it's kind of viscous-y, kind of softening. And so when the aluminum is soft like that and the titanium hydride decomposes and forms the hydrogen gas, you get a foam from that process. And I think, somewhere, yeah, this foam here I think was made by that process. Then in a similar thing, you can just put titanium hydride powder into molten aluminum, and again the titanium hydride powder evolves the hydrogen gas and you get foamed aluminum from that. And I think this foam here was made with that process. They all look kind of similar. Another method is by replicating an open cell polymer foam. So I think I passed an open cell polymer foam around. And that's made by taking an open celled-- an open cell aluminum foam-- it's made by taking an open cell polymer foam, you fill up all the voids with sand. You then burn off the polymer, but now you've got sand in all the sort of places where there were voids. And then you infiltrate molten metal into that. And then you get rid of the sand. And then you're left with an aluminum foam that replicates the polymer that you started off with. So this replication technique. There's a vapor deposition technique. And this was developed by Inco to make nickel foams. So they take again an open cell polymer foam, that's kind of this thing here is, and they infiltrate into it nickel CO4. 
The only teeny detail that's a problem with this process is that happens to be highly toxic. So they put this gas through here. And then they get nickel depositing on the polymer and they burn the polymer out and they center it. So it is possible to do this. They have done this. But it's not that practical because of the toxicity of the gas. Now another method is something called entrapped gas expansion. And here, what you do is you take a can, like a metal can. This one's titanium, a titanium alloy. And then you put a titanium powder in here. You evacuate the can. So the can has a little valve on it, so you can evacuate it. And then you backfill it with argon gas and you pressurize the argon gas. So you've got a powder with sort of pressurized gas inside of a can. And then you hot isostatically press it. So you heat it up and you press it uniformly in all three directions. And you compact it. And then, if you want you can roll it. Sometimes people roll it because they want to make sandwich panels, and they want to have a certain thickness, and they want to have faces on the panel. But then you heat that up. And as you heat it up, the gas evolves again. And the thing expands and you get a foam that way. So that's another method. Another method is by making hollow spheres and then bonding the spheres together. And in this process-- this was developed at Georgia Tech. They used the titanium hydride again. They made a slurry of the titanium hydride in an organic binder in a solvent. And then they had a little kind of needle that they injected gas. And so they had this slurry and they were blowing gas through this needle and they got hollow spheres of the titanium hydride. And then they heated it up, and again evolved the hydrogen gas off. But now they're just left with titanium spheres. And then they bonded the spheres together. And these aren't titanium. This is an iron chromium, like it's not quite a stainless steel. But this is the same thing. I can pass that around next time too. Those are the little beads that they make. And then there's also fugitive phase technique. So you can take say salt particles, put them in a mold and pour a liquid metal into that, and then, leach the salt away. And I think that's it for the metal foams. So I think I'm going to stop there for today in terms of the lecture. I'll go over that again next time. And I'll write stuff on the board. But I wanted to tell you a bit about me. So the people in 3032, they already know me because they had me in the fall. But I see some unfamiliar faces. So I thought I would tell you a little about me. So I grew up in Niagara Falls in Canada, big power station. Lots of big civil engineering works in Niagara Falls. And my father worked at an engineering company that specialized in the design of hydroelectric power stations. It was founded by the guy who designed the Niagara Falls power station. And then I went to university in Toronto. And I did a degree in civil engineering in Toronto. And then when I finished my degree in civil engineering, I wasn't really sure what I wanted to do next. And I applied for some jobs, and I applied to graduate school. I applied to MIT. And I didn't get in-- ouch. But I did OK, it turns out. And I ended up-- I had an advisor when I was in Toronto who had taken a sabbatical in Cambridge, England. And he said he thought I might enjoy Cambridge, England. And I ended up going to Cambridge, England to doing my PhD there. And I worked on the cellular solids for my PhD. 
And it was a nice combination because I was interested in material behavior and mechanics, but I had a background in me in civil engineering. And these are just like civil engineering structures, but they're on a little teeny weeny scale, not like big buildings or bridges or something, like little teeny things. And really all of this has come out of doing that PhD in Cambridge. But when I was there, I never even thought about being an academic. And I never applied for any academic jobs. I didn't think I wanted to be an academic. And I went and got a job in Calgary in the oil business. And I was working at a consulting firm that did work for the oil business. I hated it. I just hated it. It was like I had a boss. I hated having this boss. And, you know, the projects were too short term. The winter in Calgary-- if you think this is bad, you've seen nothing. Like less snow, but cold. I mean like 30, 40 below, everyday, cold. Real cold. So I stayed there one winter. And somewhere along the way there, I decided maybe the academic job thing would be good. And I just sent my CV out to a bunch of Canadian universities. And I ended up getting a job at the University of British Columbia in Vancouver. And I lived in Vancouver for two years. And I probably would have stayed there, except there was a gigantic recession, and it was all very depressing, and there was no money. And that the universities in Canada are almost all run by the provincial governments. And the government had no money. It was all, you know, frustrating. And I sort of thought, oh, I'll look around and see what else I can get. And I answered a little ad in Civil Engineering Magazine for a job at MIT. And I got the job at MIT. And I was in the civil engineering department for about 12 years. And then I moved over to the materials department. Because my work started off on sort of sandwich panels and structural things. And then it kind of became more biomedical stuff and had less and less to do with civil. And I've been in the materials department since then. And this is kind of what I do, this kind of work. So that's kind of my little five minute story.
A_Vision_of_Linear_Algebra
Part_3_Orthogonal_Vectors.txt
GILBERT STRANG: OK, ready for part three of this vision of linear algebra. So the key word in part three is orthogonal, which again means perpendicular. So we have perpendicular vectors. We can imagine those. We have something called orthogonal matrices. That's when-- I've got one here. An orthogonal matrix is when we have these columns. I'm always going to use the letter Q for an orthogonal matrix. And I look at its columns, and every column is perpendicular to every other column. So I don't just have two perpendicular vectors going like this. I have n of them because I'm in n dimensions. And you just imagine xyz axes or xyzw axes, go up to 4D for relativity, go up to 8D for string theory, 8 dimensions. We just have vectors. After all, it's just this row of numbers or a column of numbers. And we can decide when things are perpendicular by that test. Like say the test for Q1 to be perpendicular to Qn is that row times that column. When I say times, I mean dot product, multiply every pair. Q1 transpose Qn gives that 0 up there. So the columns are perpendicular. And those matrices are the best to compute with. And again, they're called Q. And one way to, a quick matrix way, because there's always a matrix way to explain something, and you'll see how quick it is here. This business of having columns that are perpendicular to each other, and actually, I'm going to make those lengths of all those column vectors all 1, just to sort of normalize it. Then all that's expressed by, if I multiply Q transpose by Q, I'm taking all those dot products, and I'm getting 1s when it's Q against itself. And I'm getting 0s when it's one 1 Q versus another Q. And again, just think of three perpendicular axes. Those directions are the Q1, Q2, Q3. OK? So we really want to compute with those. Here's an example. Well, that has just two perpendicular axes. I didn't have space for the third one. So do you see that those two columns are perpendicular? Again, what does that mean? I take the dot product. Minus 1 times 2, 2. 2 times minus 1, another minus 2. So I'm up to minus 4 at this point. And then 2 times 2 gives a plus 4. So it all washes out to 0. And why is that 1/3 there? Why is that? That's so that these vectors will have length 1. There will be unit vectors. Yeah, and how do I figure, the length of a vector, just while we're at it? I take 1 squared or minus 1 squared gives me 1. 2 squared and 2 squared, I take the dot product with itself. So minus 1 squared, 2 squared, and 2 squared, that adds up to 9. The square root of 9 is the length. I'm just doing Pythagoras here. There is one side of a triangle. Here is a second side of a triangle. It's a right triangle because that vector is perpendicular to that one. It's in 3D because they have three components. And I didn't write a third direction. And their length one vectors because just that's how when I compute the length and remember about the 1/3, which is put in there to give a length 1. So OK. So these matrices are with Q transpose times Q equal I. That again, that's the matrix shorthand for all I've just said. And those matrices are the best because they don't change the length of anything. You don't have blow up. You don't have going to 0. You can multiply together 1,000 matrices, and you'll still have another orthogonal matrix. Yes, a little family of beautiful matrices. OK, and very, very useful. OK, and there was a good example. Oh, I think the way I got that example, I just added a third row. The third column, sorry. The third column. 
So 2 squared plus 2 squared plus minus 1 squared. That adds up to 9. When I take the square root, I get 3. So that has length 3. I divided by 3. So it would have length 1. We always want to see 1s, like we do there. And if I-- here's a simple fact. But great. Then if I have two of these matrices or 50 of these matrices, I could multiply them together. And I would still have length of 1. I'd still have orthogonal matrices. 1 times 1 times 1 forever is 1. OK, so there's probably something hiding here. Oh, yeah. Oh, yeah, to understand why these matrices are important, this one, this line is telling me that, if I have a vector x, and I multiply by Q, it doesn't change the length. This is a symbol for length squared. And that's equal to the original length squared. Length it is preserved by these Qs. Everything is preserved. You're multiplying effectively by the matrix versions of 1 and minus 1. And a rotation is a very significant very valuable orthogonal matrix, which just has cosines and signs. And everybody's remembering that cosine squared plus sine squared is 1 from trig. So that's an orthogonal matrix. Oh, it's also orthogonal because the dot product between that one and that one, you're OK for the dot product. That product gives me minus sine cosine, plus sine cosine, 0. So the column 1 is orthogonal to column 2. That's good. OK. These lambdas that you see here are something called eigenvalues. That's not allowed until the next lecture. OK, all right, now, here's something. Here's a computing thing. If we have a bunch of columns, not orthogonal, not length 1, then, often, we would like to convert them to, so we call those, A1 to AN. Nothing special about those columns. We would like to convert them to orthogonal columns because they're the beautiful ones, Q1 up to Qn. And two guys called Graham and Schmidt figured out a way to do that. And a century later, we're still using their idea. Well, I don't know whose idea it was actually. I think Graham had the idea. And I'm not really sure what Schmidt, how he got into it. Well, he may have repeated the idea. So OK, so I won't go all the details. But here's what the point is the point is, if I have a bunch of columns that are independent, they go in different directions, but they're not 90 degree directions. Then I can convert it to a 90 degree one to perpendicular axes with a matrix R, happens to be triangular, that did the moving around, did take that combinations. So A equal QR is one of the fundamental steps of linear algebra and computational linear algebra. Very, very often, we're given a matrix A. We want a nice matrix Q, so we do this Graham Schmidt step to make the columns orthogonal. And oh, here's a first step of Graham Schmidt. But you'll need practice to see all the steps. Maybe not. OK, so here, what's the advantage of perpendicular vectors? Suppose I have a triangle. And one side is perpendicular to the second side. How does that help? Well, that's a right triangle then. Side A perpendicular to side B. And of course, Pythagoras, now we're really going back, Pythagoras said, a squared plus b squared is c squared. So we have beautiful formulas when things are perpendicular. If the angles are not 90 degrees when the cosine of 90 degrees is 1 or maybe the sine of 90 degrees is 1, yeah, sine of 90 degrees is 1. For those perfect angles, 0 and 90 degrees, we can do everything. And here is a place that Q fits. This is like the first big application of linear algebra. So let me just say what it is. And it uses these cubes. 
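As a quick check of that Q transpose Q = I test, here is a minimal NumPy sketch, not from the lecture itself: it builds the 3 by 3 example with columns (-1, 2, 2)/3, (2, -1, 2)/3, (2, 2, -1)/3 and confirms both the orthogonality test and the length-preserving property, using a test vector of my own choosing.

```python
import numpy as np

# The 3x3 example: every column has length 1 and the columns are mutually perpendicular.
Q = (1 / 3) * np.array([[-1.0,  2.0,  2.0],
                        [ 2.0, -1.0,  2.0],
                        [ 2.0,  2.0, -1.0]])

# Test 1: Q^T Q should be the identity (1s from each column with itself, 0s between columns).
print(np.allclose(Q.T @ Q, np.eye(3)))   # True

# Test 2: multiplying by Q preserves length, ||Qx|| = ||x||.
x = np.array([1.0, -2.0, 0.5])           # any test vector
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))   # True
```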
So what's the application that's called least squares? And you start with equations, Ax equal b. You always think of that as a matrix times the unknown vector, being known, right hand side b Ax equal b. So suppose we have too many equations. That often happens. If you take too many measurements, you want to get an exact x. So you do more and more measurements to b. You're pasting more and more conditions on x. And you're not going to find an exact x because you've got too many equations. m is bigger than n. We might have 2,000 measurements, say, from medical things or from satellites. And we might have only two unknowns, fitting a straight line with only two variables. So how am I going to solve 2,000 equations with two unknowns Well, I'm not. But I look for the best solution. How close can I come? And that's what least squares is about. You get Ax as close as possible to b. And probably, this will show how the-- yeah. Yeah, here's the right equation. When you-- here's my message. When you can't solve Ax equal b, multiply both sides by A transpose. Then you can solve this equation. That's the right equation. So I put a little hat on that x to show that it doesn't solve the original equation, Ax equal b, but it comes the closest. It's the closest solution I could find. And it's discovered by multiplying both sides by this A transpose matrix. So A transpose A is a terrifically important matrix. It's a square matrix. See, A didn't have to be square. I could have lots of measurements there, many, many equations, long, thin matrix for A. But A transpose A always comes out square and also symmetric. And it's just a great matrix for theory. And this QR business makes it work in practice. Let me see if there's more. So this is, oh, yeah. This is the geometry. So I start with a matrix A. It's only got a few columns, maybe even only two columns. So its column space is just a plane, not the whole space. But my right hand side b is somewhere else in whole space. You see this situation. I can only solve Ax equal b when b is a combination of the columns. And here, it's not. The measurements weren't perfect. I'm off somewhere. So how do you deal with that? Geometry tells you. You can't deal with b. You can't solve Ax equal b. So you drop a perpendicular. You find the closest point, the projection that in the space where you can solve. So then you solve Ax equal p. That's what least squares is all about, fitting the best straight line, the best parabola, whatever, is all linear algebra of perpendicular things and orthogonal matrices. OK, I think that's what I can say about orthogonal. Well, it'll come in again. Orthogonal matrices, perpendicular columns is so beautiful, but next is coming eigenvectors. And that's another chapter. So I'll stop here. Good. Thanks.
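Here is a small sketch of that least squares recipe, with made-up data points rather than anything from the lecture: too many equations, so solve the normal equations A transpose A x-hat = A transpose b, then check that NumPy's built-in least squares routine agrees and that the error b - p is perpendicular to the column space.

```python
import numpy as np

# Fit a line y = c0 + c1*t to five points with only two unknowns (illustrative data).
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
b = np.array([1.1, 1.9, 3.2, 3.8, 5.1])        # noisy measurements
A = np.column_stack([np.ones_like(t), t])       # tall thin matrix: 5 equations, 2 unknowns

# Ax = b has no exact solution, so solve A^T A x_hat = A^T b instead.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)

# NumPy's least-squares solver gives the same answer.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_hat, x_ls))                 # True

# p = A x_hat is the projection: the error b - p is perpendicular to every column of A.
p = A @ x_hat
print(np.allclose(A.T @ (b - p), 0))            # True
```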
A_Vision_of_Linear_Algebra
Five_Factorizations_of_a_Matrix.txt
GILBERT STRANG: So this lecture is, in a way, a whole linear algebra course created by highlighting five different ways that a matrix gets factored. By factored, I mean a factorization would be-- the first example will be a matrix A. Every matrix A factors into a matrix C and a matrix R, C times R. And I'll describe those. And later ones involve eigenvalues. They involve singular values. It's linear algebra in a nutshell of these factorizations. So this is coming from the latest and the final edition of my linear algebra book. I'm grateful to say there's a sixth edition this year, 2023. For the start, let me start right out with some of the key ideas before you see the first factorization. So one key idea is whether a set of vectors is linearly independent or dependent. And that simply means-- dependent simply means that some combination of those vectors gives the zero vector. And of course, I'm not allowing the 0 combination, which takes 0 of every vector, but some other combination like 2 of the first vector plus 3 of the second vector minus 11 of the third vector gives the zero vector. That would mean those vectors were dependent. And combinations is what you can do to vectors. You take combinations, multiply them by numbers-- x1, x2, x3. They multiply the vectors-- A1, A2, A3. And then you add those together. So every time I teach linear algebra, this is in the first lecture. How do you multiply or-- and how do you think of a matrix A times a vector x? And I think of it not as a lot of little, tiny computations, dot products, but I think of it vector-wise. I think of A times x as a combination of the columns of the matrix. And we'll come back to that. That's important. And now that third line is speaking about C times R. Those are matrices. Now we're multiplying a matrix by a matrix. Before, it was a matrix times a vector. So I'll have to show you important ideas because there are several ways to multiply a matrix by a matrix. And they're all important. It's a fantastic subject. And one example would be every matrix will get factored. This will be factorization number 1. Every matrix A will get factored into a matrix C that just comes from the columns and times a matrix R that comes from the rows, which you will see soon. Oh, I even wrote it at the top, number 1, A equals CR. But I'm going to leave the details, the all-important details, for the coming slides. I thought here I would tell you factorization number 2 because it's probably the most used factorization in numerical computing. Probably billions of dollars a year are spent in taking-- this is usually a square matrix A. And we're trying to solve n equations and n unknowns, where n could be 10,000. And the way to do it is a factorization. And it's known as LU. Everybody connects to that-- those words, LU. L is for lower, and U is for upper. And those are used because the two matrices-- L is a lower triangular matrix. Above the diagonal of the matrix is all 0. All the nonzero parts of the matrix L are below, whereas U is the opposite. It's 0 below the diagonal. And its nonzeros are all on the diagonal or above. So virtually, all matrices factor into a lower triangular times an upper triangular. Sometimes, you need a matrix P, a permutation. You'll see permutations show up here once in a while. That's just a reordering of the rows. Sometimes, if the rows start with-- if the first row starts with a 0, you're not happy with that, and you have to have to switch rows. And why is this L times U so great? 
Well, because you can solve Ax equals B, let's say, solve a set of equations, by first solving the problem with the matrix U and then-- let me-- it's-- so we're going to solve it in two steps, as the slide says. The first step is to solve-- find the solution to LC equal B. So L is lower triangular. B is the right-hand side we want. And C is the answer to that. You could say it's half the problem. It's the lower triangular half of the problem. So that gives us an answer C. That's not the x that we're finally aiming for. But it's halfway. And then put that C on the right side of the second equation, Ux equals C. Now we're solving for x with the upper triangular part. Do you see the two triangular matrices? And triangular systems are fast to solve. And that's why it's just fantastic to split the matrix into two fast cases. And then when you put them together, A times x is-- A is LU. So we have LU times x. And the Ux is C. So we have LC. And that was B. So we got Ax equal b. Every course is going to teach that. I don't have to-- I won't say more about that. But of course, it's fundamental. So I'm going to go back to this C times R, which is not so classical. But I think it's a great way to start linear algebra. An example's always the best. So you see a matrix A. You see it has three columns-- 1, 2, 1, 1, 3, 4, 3, 7, 6-- three columns. And it has three rows. So it happens to be a three-by-three square matrix. And it's small enough that we can find the C and the R. So listen up because this is what-- this is the key point. So what's the matrix C? C is for columns. So you see that I took the first column and the second column of A and put them into C-- 1, 2, 1 and 1, 3, 4. I did not take the third column in C. And why not? Because it's dependent. That column-- 3, 7, 6-- is a combination of the others. So what do I mean by a combination? I mean that 2 of the first column would be 2, 4, 2. And then if I add on 1 of the second column-- 1, 3, 4-- I think that 2, 4, 2 and 1, 3, 4 add up to 3, 7, 6. So in a way, 3, 7, 6 is already there in the first two. And that's reflected in the R, in the R matrix, the one that's more horizontal that has row, emphasizes the rows. So do you see-- here we're seeing how matrices multiply. So you see the matrix C with two columns. And I'm multiplying by the matrix R that has two rows. And let's look at the third column of R. What do we see in the third column of R? We see the 2 and the 1. That's exactly what we wanted to get the third column of A. So I'll just say again, columns 1 and 2 of A were independent. They go in different directions. So they go straight into the matrix C. Independent columns go in C. But R is aimed to catch up with the columns like the third column of A that are combinations. And again, you see why it's 2 and 1, because 2 of the first column plus 1 of the second column-- that combination of independent columns gives the dependent column. And if I jump to the bottom line, that matrix R, that row matrix, starts with the identity. Does everybody know about the identity matrix? It's the equivalent of 1 for matrices. If you multiply by the identity, you get whatever back again. So the identity is that 1, 0, 0, 1 part. And then the F part is the part that reflects the dependent guy, the third column-- 2, 1. So my point is going to be that every matrix-- I can pick out the independent columns, put them in C, and then R will tell me what combinations of those independent columns gives all the columns. So here is the mathematical truth of this thing. 
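Two small numerical sketches of what was just described, with numbers of my own where the lecture does not give them: first the solve-with-L-then-U idea, using SciPy's lu and solve_triangular on a small random system, then a check that the 3 by 3 example really is its two independent columns C times the row matrix R.

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

# Factorization 2: A = LU, then two fast triangular solves.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))                # small invertible matrix (illustrative)
b = rng.standard_normal(4)

P, L, U = lu(A)                                # SciPy returns A = P @ L @ U (P handles row swaps)
c = solve_triangular(L, P.T @ b, lower=True)   # step 1: solve Lc = P^T b
x = solve_triangular(U, c)                     # step 2: solve Ux = c
print(np.allclose(A @ x, b))                   # True

# Factorization 1: A = CR for the lecture's example.
A1 = np.array([[1, 1, 3],
               [2, 3, 7],
               [1, 4, 6]])
C = A1[:, :2]                                  # the two independent columns
R = np.array([[1, 0, 2],                       # identity on the left, then 2*(col 1) + 1*(col 2)
              [0, 1, 1]])
print(np.allclose(C @ R, A1))                  # True
```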
I'm going to use the letter little r for the count of how many independent columns. So r was 2 before. The n minus r are the other columns, the dependent ones. The factorization is that every matrix A factors into C times R, where C contains the independent columns. And R has these two or three pieces. So R has the identity. Do you see R? R is the IF with a P. So three matrices are involved with R-- the identity matrix because when C multiplies the identity matrix, it just gives those independent columns in C. The identity doesn't do anything, the I. C times F-- that's the combinations that give the dependent columns. And this matrix P I'm-- tempted to say this damn matrix P, that permutation-- just if the columns came in a crazy order or a different order instead of-- it's super nice when the independent ones all come first, and then the dependent ones all come last. That's what we saw in the example. But there's an example at the bottom where they-- can you pick out the independent columns in the matrix on the 1, 2, 3, 4, 1, 2, 4, 5 matrix? So the first column is independent. It's fine. But the second column is just 2 times the first. So that second column is dependent. And then we get an independent column-- 3, 4. That's a new direction. And then we get 4, 5 which is dependent, again. So this had two independent columns-- the 1, 1 and the 3, 4. And you see them sitting in C. And then you have the R matrix that tells you how much of each independent column do you need to get every column of A. If you want the identity matrix to come first, that's what a permutation. Does it switches the order of the columns. And so it puts that identity matrix, the 1, 0, 0, 1 matrix, first. Every matrix factors into C times R. And C is the independent columns. And R has this special form. It's famous as the row-reduced echelon form. Whatever. So onwards. Now here are the facts. I'm using little r for the count of independent columns. It turns out the matrix R has that many rows. And then A is equal to C times R, C times R. Matrix multiplication is central here. And the C has-- oh, now I've introduced a new word, the column space. So we have a matrix. What's the column space? Let me go back to a matrix and talk about the column spaces. How about the column space of that matrix A that we now understand pretty well? The column space is all combinations of the columns. So I would say the column space of that matrix is all combinations of the first two columns because the third is already a combination. So I don't have to include it in the basis-- the good word is the basis-- for the column space. If I'm wanting all the independent columns-- if I'm wanting all the columns, I really just need the independent ones because the others are combinations. So the column space C is a plane here. It's the combination of just two vectors. And the row space R is also a plane. It's the combination of two rows, combinations of two rows. So we have a column space. Space is a big word. And it means take all the combinations of your basis vectors. And so let me go forward to a great first theorem of linear algebra. So this gave me pleasure. I have to admit that. It's at that last line. The column space of a matrix-- remember, our matrix had two independent columns-- and the row space-- so that matrix has two independent rows. Mathematics is telling me that from A equals C times R, all rows of A are combinations of rows of R. 
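The same kind of check for the second example, where the independent columns are the first and the third; the pivot positions below are read off from the discussion above rather than computed (the computing comes with elimination, a couple of slides later).

```python
import numpy as np

A = np.array([[1, 2, 3, 4],
              [1, 2, 4, 5]])

C = A[:, [0, 2]]                  # independent columns: (1,1) and (3,4)
R = np.array([[1, 2, 0, 1],       # column 2 = 2*(col 1); column 4 = 1*(col 1) + 1*(col 3)
              [0, 0, 1, 1]])

print(np.allclose(C @ R, A))                          # True
print(np.linalg.matrix_rank(A) == C.shape[1] == 2)    # rank r = number of independent columns
```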
You'll have to think about these ideas and the first-- and work some problems finding C and R and seeing this wonderful fact. If you had a matrix that was 50 rows and 100 columns, then it would not be clear for-- without a big computer how many rows are independent and how many columns are independent. But the truth will be that those two numbers are the same. If it has 11 independent rows, it's got 11 independent columns. For me, that's just a wonderful fact. So that's the first great fact. And now here's the big picture of column space. So we are really moving along here. We are moving. So these boxes, these four boxes, are filled with vectors. So we call them vector spaces. And I'll describe what-- that word, space, is important. So what is the row space? That's all the rows and all combinations of the rows. That makes the row space. You get a space by starting with some vectors, taking all their combinations. It fills out the plane. It fills out three-- some three-dimensional space. It fills out some R-dimensional space. Let me just ask you, just-- let's just take it. Suppose I have two vectors in three dimensions, two independent vectors in three dimensions, like as in my example, two independent vectors in three dimensions. So I'm in 3D. I'm in three-dimensional space. And I take all combinations of those two independent vectors. And what do I get? I get a plane that goes through 0, 0 because one possible combination is 0, 0. And it's a nice, flat plane in 3D. You can visualize that, that-- a base-- it's-- the dimension of the plane is 2 because it took two vectors to give you everything you needed to know about the plane, take all combinations of those two vectors. So we had a two-dimensional plane inside three-dimensional space. What about the columns? Same idea-- you start with the columns. You take all their combinations. So that's the column space. Again, the space means take all combinations. So it's a whole-- I've drawn it as a square or rectangle or a box. It's filled in by taking all combinations. It gives me a plane. It gives me a three-dimensional subspace. Well, if I have a bigger matrix, like a 10-by-10 matrix, then the rows-- then I might have six independent columns. The column space of this matrix would be all combinations of those six independent columns. They would each-- all those were sitting inside 10-dimensional space. Just visualize 10-dimensional space, or don't visualize it. I can't. But anyway, if you can, do it. And in that space, the 10-dimensional space, is some thin, very thin, part of it, which has only six independent vectors-- so a six-dimensional column space inside the big 10-dimensional space. Our first great theorem was that the number of independent rows equals the number of independent columns for every matrix. That's just wonderful. So that means that the size of the row space and the size of the column space are the same. And then the other two guys, the null spaces, are the solutions to Ax equals 0. So Ax equals 0 says some combination of columns or of rows is giving 0. And so that-- this is accounting for the rest. So we had six-dimensional row space inside 10-dimensional space. Then the null space, which I'll come back to-- but the point is it will be four-dimensional so that 6 and 4 make 10. The whole 10-dimensional space is being split into the independent rows and the null space on one side and the independent columns and the null space of-- oh, you see this symbol, A transpose. 
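Here is a small dimension-count sketch for those four subspaces, on an illustrative matrix of my own; SciPy's null_space returns an orthonormal basis for each null space, so the counts r, n - r, and m - r can be read off directly.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 0., 3.],
              [0., 0., 1., 4.],
              [1., 2., 1., 7.]])      # third row = first + second, so the rank is 2
m, n = A.shape
r = np.linalg.matrix_rank(A)

N  = null_space(A)                    # basis for the null space of A      (dimension n - r)
NT = null_space(A.T)                  # basis for the null space of A^T    (dimension m - r)

print(r, N.shape[1], NT.shape[1])     # 2 2 1: r + (n - r) = n and r + (m - r) = m
print(np.allclose(A @ N, 0))          # A kills every null-space vector
print(np.allclose(A.T @ NT, 0))       # A^T kills every vector in its null space
```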
A transpose just means reverse the rows and-- with the columns. If I have a matrix A with 10 rows and eight columns, then A transpose will be the other way around, eight rows and 10 columns. Well, this is a whole linear algebra, of course. But we're moving. I have to tell you a little bit how you would find which columns are independent and which are not. It's an algorithm. How would a computer find the independent rows? We have to be able to answer that question because for big matrices, we need the computer. You remember, we're-- it's interesting that the computer would look first for R, the row matrix, where I introduced, first of all, C, the columns, the independent columns. Well, it finds them both at the same time, this algorithm. It just progresses column by column across the matrix. Column by column, we go across the matrix. And we ask, is this new column-- say, column K plus 1-- is it a combination of those previous K columns or is it independent? That's the key question. If it's a combination of the other ones, then it'll go in with the F part. If it's independent of the first ones, it'll be a new independent one. And it'll go-- it'll-- the I, the identity matrix, will grow by 1. So I'm not going to push you to follow the steps of this elimination. Well, actually, they-- elimination was invented in China 3,000 or 4,000 years ago. So it's fundamental idea. It's just subtract a multiple of one row from another row to get a 0 in a place where you want a 0. We can understand what's going on when we have-- when our matrix has enough well placed zeros. So elimination is just subtracting rows from other rows to produce zeros in the right position so we can see what's going on. Here, we're actually going to use elimination and see and-- the idea of null space and the idea of equations. This we can do. This we can do. So over at the top left, you see three equations. And you see four unknowns-- x1, x2, x3, x4. So we have a three-by-four matrix. And it's a full matrix-- hasn't got any zeros. And the idea of elimination is to get some zeros into that matrix. In fact, we can see where those arrows are showing where we want to go. Starting with the matrix A, three-by-four matrix with these numbers-- 1, 3, 4, 2, 7, 9, 11, 37, 48-- are those columns independent or not? Elimination will tell us. And we want to know which rows are independent. So I'm just going to look at that-- those numbers at the top for a little while. Well, I'll go halfway down just to share the nice math notation. So I wrote the equations at the top. But really, I wasted a lot of time writing in all those plus signs and equal signs, and so on. Really, it's the numbers that count. So you see the numbers 1, 2, 11, 17 in the first row. So they go in the first row of the matrix. I just skip all the plus signs and x1, x2s. I know they're there. It's the numbers that matter. And then we hour our-- we get, by elimination, to this much simpler form, R, with the matrix R. Now, let's see. Maybe we could do some of these in our head. I think that if I add equation 1 to equation 2, I get equation 3. Do you agree with me? I'm at the top left of the slide. If I add x1 to 3x1, I get 4x1. And if I add 11x3s to 377x3s, I get 48x3s. Of course, I cooked it up this way. Instead of adding, if I subtract the first equation from the third and I subtract the second equation from the third, then I have nothing left. That equation becomes 0 equals 0. 
That tells me that really, this matrix, this big matrix A, has only two independent rows that I'm-- they're sitting there in R. Do you see the-- you see R, the last matrix, the 1, 0, 3, 5? That comes from the first equation after elimination. And the 0, 1, 4, 6-- those are the coefficients of x1, x2, x3, x4 in the second equation. And I didn't even bother to write the 0 equals 0 equation because I don't need it or want it in R. R gets rid of all the 0 equals 0 stuff. So where are we? We started with a matrix A. Well, we started with four equations. And the numbers in those equations went into A. And I did elimination. I subtracted 3 times row 1 from row 2. And that knocked out the 3x1, and so forth. So I ended up with the equations on the right, which went into R, the 1, 0, 3, 5 and the 0, 1, 4, 6. You see, as predicted, we start with the identity matrix in R, the 1, 0, 0, 1. And then we have the other stuff, the dependent coming from the dependent stuff. And we started with the first two columns in C-- 1, 3, 4, 2, 7, 9. Those are our original first two columns. And we didn't have the third column or the fourth column because those are combinations of the first two. So this is a matrix of rank 2. It has two independent columns, the first two. And it's got two independent rows, which are-- I could choose the first two rows of A or, better, the first two rows of R. The whole point of this was to be able to solve Rx equals 0. And the-- when I've got-- or Ax equals 0 is the same as Rx equals 0 now. And R is this simple matrix. So do you see that that vector, minus 3, down where it's talking about the null space-- null means 0. So these two vectors there-- they solve Rx equals 0. And therefore, they solve Ax equals 0. That's what you're looking for in solutions of equations. If you took that minus 3, minus 4, 1, 0 vector-- those are x1 and x2 and x3 and x4-- they easily solve our simplified equations at the top. So altogether, we have done what professors assign students to do for centuries of linear algebra. We have solved the equations by elimination. We had some messy equations. We took combinations of those to give some nice equations. Then we solved those nice equations. And we could express the result as these two vectors at the bottom, or-- but we could express the whole elimination as C times R, the two independent columns of A times the two rows of R. This is my final slide on this first factorization. Then I'm not going to give such detail on the later factorizations because the textbook and your professor know those so well. I just want to say here that matrices-- really, using matrices in matrix algebra is wonderful. And one of the things you can do is to break them into smaller matrices. So you see that W, H, J, K? Well? That's our big matrix with four submatrices in it. And W is the part of the matrix that's coming from independent columns. So that arrow in the top line says that W moves to the identity. That was the theme of all those crazy computations we did-- was to make the top left corner into the identity. And then it makes the rest of the matrix into what you see, that-- but just to say that if you think of elimination by blocks instead of just single numbers, then you see the big picture, the big picture. But the real picture was done on that slide at the top. That's where we did the elimination. And then we got the nice pieces that we're seeing here. So that was all factorization 1, A equals CR, because you need some help with that. 
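To tie that elimination example together, here is a SymPy sketch; note that the full 3 by 4 matrix below is my reconstruction from the numbers quoted above (the first three columns, the first row ending in 17, and row 3 = row 1 + row 2), with the last column's 57 and 74 inferred from R, so it is not a direct quote from the slides.

```python
from sympy import Matrix

# Reconstructed 3x4 example (third row is the sum of the first two).
A = Matrix([[1, 2, 11, 17],
            [3, 7, 37, 57],
            [4, 9, 48, 74]])

R, pivots = A.rref()      # row-reduced echelon form and the pivot (independent) columns
print(pivots)             # (0, 1): the first two columns are the independent ones
print(R)                  # rows [1, 0, 3, 5] and [0, 1, 4, 6], plus a row of zeros

# The two special solutions of Ax = 0 read off from R in the lecture:
for x in ([-3, -4, 1, 0], [-5, -6, 0, 1]):
    print(A * Matrix(x))  # both come out as the zero vector
```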
That's brought up so many of the key ideas and tools of linear algebra-- go into A equals CR. And it's just wonderful. So now, oh, I'm seeing the word "orthogonal." So this next factorization is-- takes a matrix A. The columns of A are probably not perpendicular to each other. All the examples we had-- the columns of A-- if I looked at the angle between them, it wasn't 90 degrees. There's a simple test for perpendicular that-- it's called the dot product. And matrices usually-- vectors don't pass that test. They're not perpendicular. But it's wonderful when they do. Perpendicular vectors are super easy to compute with. "Orthogonal" is just another word for "perpendicular." And it's a shorter word and a more used word. So what's the idea of this factorization? Again, we take any matrix A. And we're going to get it into two factors. But they're not going to be L and U anymore. They're not going to be C and R. They're going to be Q times R. And Q is this new idea, this new idea of perpendicular vectors. I just want to say that perpendicular vectors are so easy to compute with. If independent vectors are good, perpendicular vectors are super independent because they're going in 90-degree directions. And look to see the nice equation that you get if you have these perpendicular vectors. So I'll use that letter q, little q. So do you see where I'm multiplying a matrix, Q transpose, times Q in the middle, that matrix multiplication? So Q, the columns of Q, are our orthogonal vectors. And I adjust their length to be 1. That's no problem. So they're perpendicular. And their lengths are 1. And Q transpose-- well-- so transpose is an operation that we use a lot. It just changes the rows-- the columns into rows. So the columns were q1 to qn in the second factor. And then the first factor-- we change Q1 to Q1 transpose, which is a row. So we have now n columns perpendicular vectors in Q. And we have n rows that are perpendicular vectors in Q transpose. That T is, for short, for transpose. And the whole point is that when I multiply those two matrices, Q transpose times Q, that's when the perpendicular stuff pays off because Q1 transpose times Q2 is 0. Perpendicular vectors-- they're-- when you multiply them, a row times a column-- if they're perpendicular, you get 0. So we actually get the identity matrix from Q transpose times Q. So that means Q is a-- well, Q is one-- is for me-- well, I'll say the queens of linear algebra. I'm not making them the kings. But the queens of linear algebra are these matrices whose columns are length 1 and perpendicular to each other. But this is an important idea-- that if I start with a bunch of independent vectors, they're going in different directions. No combination is 0. Then I can turn-- bend-- turn those vectors by a matrix R so that they are not just going in different directions, they're going in 90-degree perpendicular directions. It's just terrific to have vectors that are perpendicular. So that's actually factorization 3-- is doing it to the columns of A. And then factorizations 4 and 5 are doing it to-- you'll see. Factorizations 4 and 5 are the great achievements of linear algebra. So I promised you five factorizations. And we've got three, two to go. And they're actually together here. They'll take time in the course. And so one is about eigenvalues. Every professor wants to explain eigenvalues. Those are nice eigenvalues and eigenvectors. 
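A minimal sketch of that factorization with NumPy, on a made-up rectangular matrix: qr returns orthonormal columns Q and an upper triangular R with A = QR, and the perpendicularity shows up as Q transpose Q = I.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))          # 5x3 matrix with (almost surely) independent columns

Q, R = np.linalg.qr(A)                   # Gram-Schmidt-style factorization A = QR
print(Q.shape, R.shape)                  # (5, 3) (3, 3)
print(np.allclose(Q.T @ Q, np.eye(3)))   # orthonormal columns: Q^T Q = I
print(np.allclose(Q @ R, A))             # the two factors reproduce A
print(np.allclose(R, np.triu(R)))        # R is upper triangular
```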
And the other one is about what are called singular values and singular vectors, not so famous except for the fact that they are the best of all. So I'll try to compare them so you'll see how they're related because eigenvalues are simpler to see. But singular values are important, too. So let's suppose we have-- it's nice if the matrix is symmetric, S-- so S for a symmetric matrix. That means that if I flip it across the diagonal, it's still the same. So the number that's in the bottom left is also in the top right. Now, here's the key idea, eigenvalues and eigenvectors of that matrix S. So the big equation is S times X equals the number lambda times X, SX equal lambda X. What does that mean? That means that X is a vector. S is a known matrix, known symmetric matrix. And I'm looking for vectors X which don't change direction when you multiply them by S. They don't change direction. Previously, we might have been looking for vectors that go to 0, XS equals 0. Not now. The key equation you'll see in that top line is S times X equals a number lambda times X. So it's still in the direction of X. Its length just got changed by that factor, lambda. And that lambda is called the eigenvalue. X is the eigenvector. "Eigen" is a German prefix that is-- everybody uses here. So XS equal lambda X-- you will study that type of question. It's, how do you find the X's and the lambdas? In a way, the problem is nonlinear because lambda is multiplying X. So it's not like XS equal B, a right-hand side. SX equal lambda X takes new ideas. And you'll see them. And the beautiful thing is that when S is a symmetric matrix, then these eigenvectors, these X's-- it turns out that they're perpendicular to each other. And so if I stay on the left side of the screen, which is about S, you'll see that SX-- if I use matrix notation, S times-- I can list all the eigenvectors. There are n of them, luckily-- so same size as S. If I multiply S times those eigenvectors, I get those eigenvectors back again times that diagonal matrix of lambdas, which are just numbers. So SX equals X lambda-- that's the factorization 4. Well, I can put the X on the far right and write it differently as S equals X lambda X inverse. A symmetric matrix, X, is its eigenvector matrix, X, times its diagonal eigenvalue matrix, lambda times the inverse of X. So it's a factorization. It's got three matrices in it now. You're going to learn about those guys. This is the second half of the course. The first half of the course was elimination, spaces of vectors, Ax equals 0, linear equations. Now we are deeper with eigenvectors and with singular vectors and singular values. Now, what do those mean? The point is that every matrix-- doesn't have to be square, doesn't have to be symmetric. Every matrix A has a full set of singular values. And what are-- and vectors. Singular vectors are really more important than the value-- than the numbers. The values are those numbers, sigma or lambda, that scale things. It's the vectors that are great. We'll always take the vectors to be length 1. And then the scaling goes into the sigma or lambda. What's the deal with singular values? I'm now telling you about number 5, factorization 5, the end of a linear algebra course, frequently, or-- and the past courses didn't get as far as singular values. But you got to push to get there because it's so important. It's become more important-- oh, it's hard to say this-- more important than eigenvalues. 
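Here is a short sketch of factorization 4 for a small symmetric matrix with entries of my own choosing: eigh returns the eigenvalues and an orthonormal eigenvector matrix X, each column satisfies S x = lambda x, and S = X Lambda X transpose because X inverse equals X transpose when S is symmetric.

```python
import numpy as np

S = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])            # symmetric matrix (illustrative entries)

lam, X = np.linalg.eigh(S)              # eigenvalues (ascending) and orthonormal eigenvectors

# Each eigenvector keeps its direction: S x = lambda x.
for i in range(3):
    print(np.allclose(S @ X[:, i], lam[i] * X[:, i]))    # True three times

# Factorization 4: S = X Lambda X^T, with perpendicular eigenvectors (X^T X = I).
print(np.allclose(X @ np.diag(lam) @ X.T, S))            # True
print(np.allclose(X.T @ X, np.eye(3)))                   # True
```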
Well, it's closely-- so closely related that you-- but you just have to-- singular vectors apply even when the matrix is rectangular. And they apply to every matrix. So they are exceptional. And what is the main point of singular vectors? Listen up because this is it. Every matrix A has, you can find, a bunch of perpendicular vectors, inputs-- v1, v2, up to vr, the rank, the number of independent vectors. So orthogonal vectors going in, v's, multiply by A. Orthogonal vectors come out. For eigenvectors, they come out in the same direction. For other matrices, that's too much. You can't expect them the same direction. But the amazing, miraculous thing is that you can-- there are orthogonal vectors going in. If you pick the right orthogonal perpendicular vectors going in, then the-- and multiply by A, then perpendicular vectors come out. So these are singular vectors that a matrix takes one orthogonal bunch of, you could say-- in orthogonal-- one set of orthogonal vectors in the row space-- multiplies, produces orthogonal vectors U. So that also is a matrix factorization. And the famous one is the one halfway down the page on the far right, A equal the U matrix times this diagonal sigma matrix times the V transpose, the outputs. Sorry, the v's are the inputs because being on the right, they're going to hit a new vector-- the first. So if I multiply by x, I hit them with V, or V transpose. That rotates the space. Then I hit them with sigma, this diagonal matrix that just stretches the thing. And then I rotate again. Isn't that amazing? Every matrix is a product of a rotation, a diagonal stretching matrix, and another rotation. Well, you know what it means to rotate a plane, just rotate a plane through an angle. If you're an airline pilot, you know about rotations in 3D. What are those? Yaw is one of them. There are three rotations that an airplane can do in the same plane, either horizontal plane or vertical plane, one angle if we have just a-- in a plane, the rotation just takes one angle. In three-dimensional space, it takes three angles. In four-dimensional space, well, that's for the linear algebra course. So what are we saying? What does this singular value stuff tell me? It says, again, that for every matrix, I can find perpendicular vectors for inputs. And when I multiply the matrix, the outputs are perpendicular, too. That's what it says. That's what it says, perpendicular inputs that give perpendicular outputs. They're very special. Normally, that won't happen. But there is a set of perpendicular inputs where the outputs are perpendicular. And I'm going to show you that, or try to, in 2D, in 2D So I'm looking-- I'm thinking I have a two-by-two matrix. And my vectors, V and W, are the usual 1, 0 and 0, 1. So if I multiply them by this matrix-- so they're perpendicular. V is perpendicular to W. No problem. If I multiply by my matrix, I get A times V and A times W. Well, the odds are a million to one that they're not perpendicular. I can't expect so much luck that the-- maybe if the matrix was a diagonal matrix or some simple guy, then AV and AW would be perpendicular. But generally, they won't be. So V and W are not the pair that I'm-- the input pair that I'm looking for. Let me make one more try and fail again. I'm going to try to put in W and minus V. So I'm looking at the middle picture now. I'm looking at the middle picture. I'm multiplying W by A. So that still gives me the AW. And I'm multiplying minus V by A. 
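And a sketch of factorization 5 on a random rectangular matrix (my example, not the lecture's): svd returns A = U Sigma V transpose, and the main point shows up numerically, perpendicular inputs v_i go to perpendicular outputs A v_i = sigma_i u_i.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))                   # any rectangular matrix will do

U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
V = Vt.T

print(np.allclose(U @ np.diag(sigma) @ Vt, A))    # A = U Sigma V^T

# Perpendicular inputs give perpendicular outputs: A v_i = sigma_i u_i.
for i in range(3):
    print(np.allclose(A @ V[:, i], sigma[i] * U[:, i]))   # True three times

# The outputs A v_i are mutually perpendicular (off-diagonal dot products vanish).
AV = A @ V
print(np.allclose(AV.T @ AV, np.diag(sigma**2)))  # True
```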
So it comes out 180 degrees around from the original AV in the first picture is A minus V in the second picture. You need to think about it. This is a little bit of a proof. So what's the point? Well, this new pair, W and minus V, didn't work, either. The angle was not 90 degrees after I multiplied by A. And that's what I'm shooting for. But here's the little idea, small idea. The angle in the first case was smaller than 90 degrees when I multiply by A. The AV and the AW had some small angle, theta, between them. But now if I multiply W and minus V by A, I get a big angle, bigger than 90 degrees. And mathematics steps into the problem here, looking for an idea. And mathematics says that somewhere between the VW that I started with and the minus-- and the W minus V, the second pair, somewhere, there's a little v and a little w that make it right because it was-- AV was too close to AW in the first picture. They were too far apart in the second picture. But as I-- if I go smoothly around, I'll hit a point where they're exactly right. And that's what the third picture is telling me. So there's a proof that every two-by-two matrix-- this is a two-dimensional proof-- that every two-by-two matrix has a pair little v, little w, which are perpendicular, and then also A times v and A times w are perpendicular. So that's factorization 5. And that's my-- that's-- if you cover that in linear algebra course, you're doing well. Despite the fact that that slide is number 12 out of 12, there is another slide that doesn't have a number. If you're up for this, this is about deep learning. So if you want a job eventually-- probably, most people would like to have a job-- then a lot of jobs, we know, are coming from deep learning. So I would tell you a little bit of-- this is the last chapter in the linear algebra book. It's probably not part of the linear algebra course, officially. But why not learn about deep learning, artificial-- AI? What is deep learning? Well, deep learning starts with some training data, some inputs with known outputs. So I'll call the inputs u1, u2, u3. I maybe have 1,000 known inputs and 1,000 known outputs. I've got a lot of information from the training data. And then I want to be able to predict for a new input, u, not one of these known one, not part of the training data, but a new guy, what is the output. So I'm going to assume that the relationship, which I'm never going to know-- the relationship is something reasonable, that if I have a-- if my new input is nearer to one of the u's, then the output will be nearer to one of the w's. But it will be affected by all the other u's and w's. So it's called interpolation or something. I'm looking for a function that fits the data. And then I can use that function for new data, test data, where I don't know the output. That's the thing. And now I want to tell you the key idea of deep learning. The question is, what kind of a function are we going to choose that we're going to-- it'll have some freedom so that we can make it fit the known training data. We can create-- there are a lot of-- there are a zillion ways I could create a function that was correct on the known data that gave me the right answers, w, from the inputs, u. What I want is to be good on data that I haven't seen. And that was the important problem, important scientific problem, that has been largely solved and understood much better. But there's more to understand still by deep learning. So I'm just going to go quickly here. 
The question is, what kind of function should we have? What kind of function-- should I try one big matrix? Well, matrices are linear. And I don't know any reason why this unknown function should be linear. So matrices are going to come in there. But also, some nonlinear stuff has to come in. And it turns out-- do you remember the chain rule from calculus? It's a cool rule out of calculus that if a chain of functions-- you take a first-- you take an input, give it input in a function F1. You take that output, put it into F2. Take that output, put it into F3. There's a chain of-- chain of simple functions is a good idea. You quickly can build up quite a array of interesting functions out of very simple ones. And now comes the very last slide to tell you the particular functions that deep learning uses, that particular functions that deep learning-- so people tried linear, the function Ak times the input plus some vector Bk, like a straight line as Ax plus b. So the first attempts were-- had linear stuff. Well, how can I teach linear algebra and have to confess here that linear is a limitation, a big limitation? And it was too much. And a good linear interpolation doesn't-- there isn't one. You've got to make something nonlinear. And here's the crazy thing, the fact that it surprised everybody-- that this particular nonlinear function that everybody calls ReLU, R-E-L-U-- that's a particular function that's 0 for when the input is negative. And it gives back the input y when y is positive. So the graph of the function is two straight lines, horizontal line below 0, 45-degree line above 0. That's ReLU. So that's a nonlinear function just of a number. The input is a number. Tell me, so what is ReLU of minus 7? If I input minus 7, the ReLU of minus 7 is 0. What is ReLU of plus 7? 7. So I get 0 or 7 when I input minus 7 or 7. That's the magic function. Anyway, the idea is to use that nonlinear function in every one of these simple F's. And then you put all these F's in a chain. And then you find these matrices, A and B, that are the substantial part of the function. And it's a giant calculation, which we are not going to do here. But it's a success. This idea has created functions. It may not be the only way to do it. I would like to say-- I would like to hope and think that there could be-- from this one, we could learn the-- learn what's important and find other ways of creating a-- functions. But this combination of a chain of very simple functions-- each of those simple functions involves linear-- a matrix A and a vector b. Then this ReLU thing that's nonlinear thing that knocks out the negative part-- and then go on to the next function, on to the next function, and choose the good A and b to fit your data. And then you've got a way to predict the output for new inputs. And that's what so many problems are about. If you know some training inputs and outputs, how do you predict the output from a new and different input? Well, that's what deep learning does. And I hope this last couple of slides, which are-- which represent the final chapter of the sixth edition of Introduction to Linear Algebra-- and I could-- a lot of people email me to-- about the books or about linear algebra. And that's-- makes my life interesting, too. Thank you.
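To make that chain of simple pieces concrete, here is a tiny NumPy sketch of the kind of function described, ReLU applied to A_k x + b_k at each step; the two-layer shape, the sizes, and the random untrained weights are all placeholder choices of mine, and the final linear readout is a common convention rather than something the lecture specifies.

```python
import numpy as np

def relu(y):
    """ReLU: 0 for negative entries, y itself for positive entries."""
    return np.maximum(y, 0.0)

print(relu(np.array([-7.0, 7.0])))     # [0. 7.], the example from the lecture

# A tiny chain F(x) = A2 @ relu(A1 @ x + b1) + b2, with random untrained weights.
rng = np.random.default_rng(3)
A1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)   # layer 1: 4 inputs -> 8 units
A2, b2 = rng.standard_normal((2, 8)), rng.standard_normal(2)   # layer 2: 8 units -> 2 outputs

def F(x):
    return A2 @ relu(A1 @ x + b1) + b2

u_new = rng.standard_normal(4)         # a new input u
print(F(u_new))                        # its output, meaningless until A and b are fit to training data
```

Training means choosing the A's and b's so that F(u_i) stays close to the known outputs w_i, typically by gradient descent; that is the giant calculation the lecture leaves aside.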
A_Vision_of_Linear_Algebra
Part_2_The_Big_Picture_of_Linear_Algebra.txt
GILBERT STRANG: OK, in this second part, I'm going to start with linear equations, A times x equal b. And you see actually, the first real good starting point is A times x equals 0. So are there any solutions to the matrix, any combinations of the columns that give 0, any solutions to A times x equals 0? Now, I'm multiplying a matrix A by a vector x in a way you'll know. I take rows of x times, it's called a dot product. Rows of A times x. So I have a row of numbers. And x is a column of numbers. I multiply those numbers and add to get the dot product. And I'm wondering, can I get 0 for each? Is every row-- so having a 0 there is telling me, in geometry, that that row is perpendicular, orthogonal to that column. If a row dot product with a column gives me a 0, then in n dimensional space, that row is perpendicular, 90 degree angle to that column x. So I'm looking to see, are there any vectors x that are perpendicular to all the rows? That's what Ax equals 0 is asking for. Oh, and that's what I've just said right there. I've used the word orthogonal. That's more of a high level word than perpendicular. So I'll stay with that. It sounds a little cooler. OK. And now, we can also look at that transpose. Oh, do you know what the transpose of a matrix is? I take those rows and flip the matrix, so that those rows become the columns. And the columns of A become the rows of A transpose. So I'll look at A transpose times-- we'll call it y for the new problem. A transpose y is all 0s. And then the null space will be any vector, any solutions, any y that's perpendicular to the rows of A transpose. So I would need couple of hours of teaching to develop this properly because we're talking here about the fundamental theorem of linear algebra, which tells me that the vectors in the null space, like that, are perpendicular to the vectors. These guys are. That's the row space. Oh, but maybe I have told you. We've said that, from this equation, that tells you the geometry that that row vectors are perpendicular to the x vector, the thing in the null space. So x is there. The rows are there. And they're perpendicular. Now, if I transpose the matrix, remember that means exchanging rows and columns, so I have a new matrix, new size even. It will the same-- but it's a matrix. The same will be true for it. The rows become the columns. And the solutions to the new equation with A transpose go in that space. So then that little perpendicular sign is reminding us of the geometry. So rows perpendicular to the x's. Columns perpendicular to the y's. That's the best. I finally saw the right way to say that. So I have two pairs. And I know how big each of those four things are. Those are the four fundamental subspaces, two null spaces, two solution spaces with 0. Null means 0. So these x's are in the null space because of that 0. Those are the n's. And then this is the column space and the row space. So we've got four spaces altogether, two pairs. And now, you get to see the big picture of linear algebra, where the four fundamental subspaces do their thing. There you go. You can die happy now. The row spaces there, those are rows of the matrix, independent rows of the matrix. That's why I don't put in all the rows. There are m rows. But I only put in independent ones. So that might be a smaller number r, r the rank. And here are the solutions, the guys perpendicular to them. This is the rows of the matrix. These are the vectors perpendicular to it. These are the columns of the matrix. 
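Before completing the picture, here is a small numeric check of that orthogonality, using an assumed rank-2 matrix (row 3 is the sum of rows 1 and 2): every solution of Ax = 0 is perpendicular to the rows of A, and every solution of A transpose y = 0 is perpendicular to the columns.

```python
import numpy as np

# An assumed rank-2 example, not taken from the slides: row 3 = row 1 + row 2.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 5.0, 8.0],
              [3.0, 7.0, 11.0]])

def null_space(M, tol=1e-10):
    """Orthonormal basis for the null space of M, read off from the SVD."""
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T            # columns span the null space

X = null_space(A)        # solutions of A x = 0
Y = null_space(A.T)      # solutions of A^T y = 0

print(np.allclose(A @ X, 0))      # True: every row of A is perpendicular to x
print(np.allclose(A.T @ Y, 0))    # True: every column of A is perpendicular to y
print(A[0] @ X)                   # one row dotted with x gives ~0
```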
These are the vectors perpendicular to the columns. You see it's just a natural splitting of the whole spaces of vectors into two pieces and two pieces. And I think of the matrix A, when it multiplies stuff there, it gives stuff here. When A multiplies a vector x, you get a combination of the columns. That with the very, very first slide. A times x is a combination of the columns. And then we look at some x's, if there are any, where A times x gives 0. And there's 0 right there. OK. OK, so that's the big picture. And I'll just point to another little point that's hiding in this picture. You see that little symbol there, that little thing, and it's also here? What that means is that those guys are perpendicular to these. And these are perpendicular to these. So we have four subspaces, two pairs, two perpendicular pairs. And that's when you get the idea of knowing what they mean, knowing how to find them, at least for a small matrix, you've got the heart of linear algebra part one. This is the first half of linear algebra. OK, I'll just see what else there is. Oh, here, oh, well, this is another comment. I've hardly told you how to multiply two matrices. The usual way is rows times columns. But linear algebra being always interesting, there's another way that I happen to like, columns times rows. Now, there is a column times a row. Now, column times a row, we've seen that once for that rank one matrix. Do you remember I said that those rank one matrix, one column times is one row are the building blocks? Well, here is the building. Those are n of those blocks. A column times a row, a column times a row. And here is a reminder of the-- oh, we've only-- oh, we're coming up to A equal LU, the first one. Get on with it, Professor Strang. OK. OK, now we're solving equations. Now we're going to get L times U. So right. So there's two equations and two unknowns solved in high school and how. Do you remember how? That's the whole point. If I take twice that equation, so it's 4x plus 6y equal 14, and subtract from this one, then I get an easy equation for only y by itself. So that's what I did. That's called elimination. I eliminated this 4x. It's gone. It's 2 times that. That's why I chose to multiply it by 2. Then 2 times this gives me 4 x's. When I subtract it, it's gone and I'm left with 1y equal 1. So I know the answer y equal 1. And then I go backwards to x equal 2 because 2x plus, this is now, 3 equals 7. 2x is 4. x is 2. And the real point about linear algebra done right is that all those steps can be expressed as a break up, another way to break up the matrix A into a lower triangular matrix. You see that that matrix is triangular. It's lower triangular. And this one is upper triangular. So those are called L and U. Yeah, yeah. So what we did here is expressed by that matrix multiplication. You really want to express everything, in the end, as multiplying a couple of matrices. Then you know exactly where you are. So that's the idea of elimination. And now, we only were doing a 2 by 2 matrix. You remember our little matrix was pathetic, 2, 3, 4, 7. That was our matrix A. We can't stop there. So linear algebra goes on to matrix of any size. And this is the way to find the triangular factor L and the upper triangular factor U. That would need more time. So all I want to say is, when you're doing elimination solving equations, then in the back of your mind or in the back page, you are producing an L matrix lower and a U matrix upper. So yeah. Let me see. Yeah, here we see them. 
The L matrix is all 0s above the diagonal. The U matrix is all 0s below the diagonal. And that's what is really happening. So that's what computer systems totally focus on. OK, that's the first slide of a new part. So I'll stop here and come back to orthogonal vectors. Good.
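The 2-by-2 elimination above can be checked directly as A = LU; a minimal NumPy sketch, with the right-hand side (7, 15) inferred from the steps described:

```python
import numpy as np

# The multiplier 2 (twice row 1 subtracted from row 2) goes into L, and what is
# left after elimination goes into U.
A = np.array([[2.0, 3.0],
              [4.0, 7.0]])
b = np.array([7.0, 15.0])

L = np.array([[1.0, 0.0],
              [2.0, 1.0]])   # the multiplier 2 sits below the diagonal
U = np.array([[2.0, 3.0],
              [0.0, 1.0]])   # upper triangular: the system after elimination

print(np.allclose(L @ U, A))      # True: A = L U

# Solve by forward substitution (L c = b), then back substitution (U x = c).
c = np.linalg.solve(L, b)         # c = [7, 1]
x = np.linalg.solve(U, c)         # x = [2, 1]  ->  x = 2, y = 1
print(x)
```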
A_Vision_of_Linear_Algebra
Part_1_The_Column_Space_of_a_Matrix.txt
GILBERT STRANG: OK, here's the, well, the title slide. Since this year happened to be 2020, and that means clear vision, I thought I'd get that into the title of these slides. And then you've seen in these six pieces as a sort of look ahead, and I'm going to start on that first piece, A equals CR. That's the new way I like to start teaching linear algebra. And I'll tell you why. OK, oh, here, we have a few examples. Well, that will lead to our ideas. You see that matrix, A0. A matrix is just a square or a rectangle of numbers. But those numbers have special features. If you look closely, well, you say 1, 3, 2 as row 1. And then what do you see for row 3? 2, 6, 4. And those are two vectors in the same direction. Why is that? Because 2, 6, 4 is exactly 2 times 1, 3, 2. And in the middle there is 4 times 1, 3, 2. So I have three rows in the same direction. And actually, also, this is the magic. Can I tell you this right at the start? The columns, look at the columns. 1, 4, 2. If I multiply that by 3, I get 3, 12, 6. If I multiply it by 2, I get 2, 8, 4. So somehow, magically, the columns are in the same direction exactly when the rows are in the same direction. They're different. That's what linear algebra is about, the relations between columns and rows. OK, and well, here's another one I'll look at. There again, you see row 1 plus row 2 equal row 3. So it's not quite like this where every row was in the same direction. But here is if I add rows 1 and 2, I get row 2. So that's a matrix of rank 2, we'll say. You'll see it. OK, then here here, S is for symmetric matrices. Those are the kings of linear algebra. And here are a few small samples. And the queens of linear algebra are these matrices I call Q. Those are called orthogonal matrices. Orthogonal meaning perpendicular. So and they tend to express a rotation. So that's a rotation matrix, an orthogonal matrix. That rotates the plane. And there is a pretty general matrix that we'll see at the very end. OK, so I'm into the start of the column space. So that's a word I don't use in the videos for quite a while. But here, you see I'm using it in the first minutes. So I look at a matrix. Well, first, let's just remember how to multiply a matrix by a vector. OK, there is a matrix A. There is a vector x with three components. And the way I like to multiply them is to take the columns of A. That's what I'm focusing on, columns of A. There they are, 1, 2, and 3. I multiply them by those three numbers x1, x2, x3, and I add. And that's called a linear combination. Linear because nothing is squared or cubed or anything. And combination because I'm putting them together, adding them together. OK, so that's the idea. And now, the big idea is in that top line. I want to think of all combinations. So this is one particular combination with a particular x1 and x2 and x3. But now, I think of every x1 and x2 and x3, all the vectors that I could get. Well, of course, I could get the first column by taking 1 and 0 and 0. That would give me the first column. But it's really mixtures of the columns that this produces. And it fills out. It fills out, in this case, a whole plane in three dimensions. These vectors have three components. We're in three dimensions. And can you just imagine in your head, two lines meeting at 0, 0, 0. So they cross. But I just have two lines. And now, I fill in between those lines. Filling in between those two lines is taking the linear combinations. That's where they are. And the result is I get a plane. 
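A quick NumPy check of the multiplication rule and of that first example A0, before the column-space discussion continues: A times x really is x1 times column 1 plus x2 times column 2 plus x3 times column 3, and because every column of A0 is a multiple of the first, its column space is only a line (rank 1). The vector x below is an arbitrary choice.

```python
import numpy as np

A0 = np.array([[1.0,  3.0, 2.0],
               [4.0, 12.0, 8.0],
               [2.0,  6.0, 4.0]])   # every row (and column) is a multiple of the first
x = np.array([2.0, -1.0, 3.0])      # any vector of coefficients

combination = x[0] * A0[:, 0] + x[1] * A0[:, 1] + x[2] * A0[:, 2]
print(np.allclose(A0 @ x, combination))   # True: A x is a combination of the columns

print(np.linalg.matrix_rank(A0))          # 1: the column space is just a line
```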
I do not get the whole space because nothing is going in a third direction for this matrix. All right. So let's see more about this. So that's that word column space. And I use the capital C for that. And it's all the vectors I can get that way, all the combinations of the columns. And now I ask. Oh, well, maybe I just answered this question. Sorry. I ask, is the column space, all the combinations, is it the whole 3D space, which everybody calls R3 for real 3, or is it a plane, or is it just a line? Well, the answer is plane. That probably even gives us the answer. That's the good thing about this subject. The answer is a plane because I have two different lines that meet at the 0. And when I fill in between them, I have a flat plane. I don't go in the third direction. Good. So that's the column space for this matrix. And here's a little more saying about that. We kept column 1. And we kept column 2 because you remember those two columns, the first two, were different. They went in different directions. They go in different directions. We did not keep the third column because it was just the sum of the first two. It's on the plane, nothing new. So the real meat of the matrix A is in the column matrix C that has just the two columns. And what about R? Because this is my plan for the first few weeks, first two to three weeks of linear algebra, is to understand. So that 5, 5, 3 would be called a dependent vector because it depends on the first two. Those were independent. So those are the two that I keep in the matrix C. And then that matrix R, oh, well, now I'm multiplying two matrices. And you know how to do that. But I always have another way to look at it. So the way I look at it is by linear combinations. Do you remember those? So multiplying is a combination of these guys. First, I have 1 of the first column. That's my first column. And the next time, I have 1 of the second column. That's my second vector. And the third one is this guy, 1 of that and 1 of that. So these two are the independent ones, and that's dependent. And a full set of independent ones is called a basis, really fundamental. So I guess I think that linear algebra should just start with these key ideas, just go with them. And we learned something. It almost falls in our laps. It's a first great and not obvious fact about linear algebra. I'm just amazed to have it here. The number of independent columns in A, which it was two, is equal to the number of independent rows in R, also two. You remember that we had two rows and two columns? So two columns first in C, two rows in R. And the point is that that's telling us-- and we just checked that those two rows were-- two columns were independent. The two rows are independent. The basis, and then we learned that the column space has dimension 2. R equals 2 for this example. And the row space has the same dimension. So that column rank R equals the row rank R. It's like if you had a 50 by 80 matrix, OK, that's 4,000 numbers. You couldn't see what those these dimensions are. But linear algebra is telling you that a dimension of the row space and the column space, 50 of one and 80 in another, are equal. OK, so this is again coming early, and we'll see it again. But it's good to start linear algebra from day one. And then here is another great fact about equations because matrices lead to these two equations where x is the unknown. And this equation has 0 on the right hand side. So how could we get 0 on the right hand side? We could take 1 of that. 
And let me change that to a minus sign and that to a minus Sign. One of those minus one of those minus one of those would be 0, 0, 0. So that 1 and minus 1 and minus 1 would tell us an x. And that's the solution. In applying linear algebra in engineering, in physics, in economics, in business, you end up with equations. Things balance. And you want to know how many solutions there are. And linear algebra was created to answer that question. OK, so now, I'm just going to say a little more about this starting method of the course. Oh, I want to focus here on these interesting matrices, where every column is a multiple of the first column. Every row is a multiple of the first row. Instead of having two independent columns and rows, these matrices have only one. So then C has one column. And R has one row. And the rank is 1. These are the building blocks of linear algebra, these rank 1 matrices, column times row. The previous matrix would have one of those blocks and a second block. A big matrix from data science would have hundreds of blocks. But the great theorem in linear algebra is to break that big matrix into these simple pieces. So that's the goal for the end of the course. OK, and finally, a last thought about these. So this is C times R. I'm urging teachers to present that part at the early. So what are the good things, I've marked with a plus. First of all, the columns, we're looking at them in C. And we see them from A. We take them directly from A. R turns out to be a famous matrix. Row reduced echelon form it's called. So to see that pop up here is terrific. And then this wonderful fact that row rank equal column rank is clear from this C times R. So those are all terrifically good things. The other thing I have to say is that C and R are not great for avoiding round off or being good in large computations. This is a first factorization but not the best one for big computing. Right. So ill conditioned means they are difficult to deal with. And also, we often have a matrix with all the columns are independent. And it's a square matrix. All the columns are independent. We can solve Ax equals b all the time. But then if all the columns are independent, then our matrix C is just the same as A. We didn't get anywhere. And R would be the identity matrix, like a 1, because A equals C. So this is the starting point, picking out the independent columns, but not the end, of course. And I'll stop here and pick up on the next factorization right away. Thanks.
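Here is a small NumPy sketch of this opening example; only the dependent column (5, 5, 3) is stated explicitly above, so the first two columns below are assumed values whose sum is (5, 5, 3).

```python
import numpy as np

# A = C R for a matrix whose third column is the sum of the first two.
A = np.array([[1.0, 4.0, 5.0],
              [3.0, 2.0, 5.0],
              [2.0, 1.0, 3.0]])

C = A[:, :2]                      # the two independent columns
R = np.array([[1.0, 0.0, 1.0],    # column 3 of A = 1 * (column 1) + 1 * (column 2)
              [0.0, 1.0, 1.0]])

print(np.allclose(C @ R, A))             # True: A = C R
print(np.linalg.matrix_rank(A))          # 2 = column rank = row rank

# Column-times-row view: A is the sum of two rank-1 building blocks.
print(np.allclose(np.outer(C[:, 0], R[0]) + np.outer(C[:, 1], R[1]), A))   # True

# One vector in the null space: column 1 + column 2 - column 3 = 0.
x = np.array([1.0, 1.0, -1.0])
print(A @ x)                             # [0. 0. 0.]
```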
A_Vision_of_Linear_Algebra
Intro_A_New_Way_to_Start_Linear_Algebra.txt
GILBERT STRANG: Well, I'm Gil Strang. And I'm very happy if you know the linear algebra videos on OpenCourseWare or on YouTube. That's for the math course 18.06. And I'm even happier if you like them. And I'm here today to update them for several reasons. Well, a lots happened in linear algebra in these years. Fortunately, the whole subject has just become more and more essential, more and more important, more and more beautiful. And so I wanted to say something about those later steps. And also, when I teach it now, I have a new starting point. And I'll show you that. So I'll go a little slowly on that starting point. The slides tell you the whole course. And that's crazy to have a full course within a short video. But especially the first part is new. And I'm even writing a textbook called Linear Algebra for Everyone that will start this way. I hope that the new start brings in real linear algebra ideas right away. And let me show you where those are. So this is an outline of the whole video. And the first line, which I think about in my mind as matrix A, is a product of a C matrix, a column matrix, and a row matrix R. And you'll see what those are. So that's the new idea. That will come first. Then these are five famous essential shorthand descriptions of the key chapters of linear algebra, the key chapters. So they represent, for example, LU is those letters are famous. And computer commands would be exactly those letters LU for elimination for solving equations, the first job of linear algebra. And then QR. So Q is a very interesting important type of matrix. That's standing for an orthogonal matrix. There is the word orthogonal or perpendicular. So those are the best matrices to compute with. And that QR gets us there. And S is for a symmetric matrix. And it will involve-- oh, well, I should say, the first half, ending with there, with QR, is about solving equations. The second half, these three are about eigenvalues and eigenvectors and singular values, a different way to approach the whole subject and a very, very important way. Among my goals is to help courses around the world get singular values included because you really don't want to miss them. That's the high point of the theory. And it's expressed like all the others as breaking a matrix into two or three pieces, two or three parts. So that's my plan for this video. And I hope it's helpful. Again, it's a whole course in a short time. And please go to the real 18.06 videos for the details. Thanks
A_Vision_of_Linear_Algebra
Part_4_Eigenvalues_and_Eigenvectors.txt
GILBERT STRANG: Moving now to the second half of linear algebra. It's about eigenvalues and eigenvectors. The first half, I just had a matrix. I solved equations. The second half, you'll see the point of eigenvalues and eigenvectors as a new way to look deeper into the matrix to see what's important there. OK, so what are they? This is a big equation, S time x. So S is our matrix. And I've called it S because I'm taking it to be a symmetric matrix. What's on one side of the diagonal is also on the other side of the diagonal. So those have the beautiful properties. Those are the kings of linear algebra. Now, about eigenvectors x and eigenvalues lambda. So what does that equation, Sx equal lambda x, tell me? That says that I have a special vector x. When I multiply it by S, my matrix, I stay in the same direction as the original x. It might get multiplied by 2. Lambda could be 2. It might get multiplied by 0. Lambda there could even be 0. It might get multiplied by minus 2, whatever. But it's along the same line. So that's like taking a matrix and discovering inside it something that stays on a line. That means that it's really a sort of one dimensional problem if we're looking along that eigenvector. And that makes computations infinitely easier. The hard part of a matrix is all the connections between different rows and columns. So eigenvectors are the guys that stay in that same direction. And y is another eigenvector. It has its own eigenvalue. It got multiplied by alpha where Sx multiplied the x by some other number lambda. So there's our couple of eigenvectors. And the beautiful fact is that because S is symmetric, those two eigenvectors are perpendicular. They are orthogonal, as it says up there. So symmetric matrices are really the best because their eigenvectors are perpendicular. And we have a bunch of one dimensional problems. And here, I've included a proof. You want a proof that the eigenvectors are perpendicular? So what does perpendicular mean? It means that x transpose times y, the dot product is 0. The angle is 90 degrees. The cosine is 1. OK. How to show the cosine might be there. How to show that? Yeah, proof. This is just you can tune out for two minutes if you hate proofs. OK, I start with what I know. What I know is in that box. Sx is lambda x. That's one eigenvector. That tells me the eigenvector y. This tells me the eigenvalues are different. And that tells me the matrix is symmetric. I'm just going to juggle those four facts. And I'll end up with x transpose y equals 0. That's orthogonality. OK. So I'll just do it quickly, too quickly. So I take this first thing, and I transpose it, turn it into row vectors. And then when I transpose it, that transpose means I flip rows and columns. But for as symmetric matrix, no different. So S transpose is the same as S. And then I look at this one, and I multiply that by x transpose, both sides by x transpose. And what I end up with is recognizing that lambda times that dot product equals alpha times that dot product. But lambda is different from alpha. So the only way lambda times that number could equal alpha times that number is that number has to be 0. And that's the answer. OK, so that's the proof that used exactly every fact we knew. End of proof. Main point to remember, eigenvectors are perpendicular when the matrix is symmetric. OK. In that case, now, you always want to express these facts as from multiplying matrices. That says everything in a few symbols where I had to use all those words on the previous slide. 
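A numeric version of that statement, with an assumed symmetric matrix S: perpendicular means the dot product of the two eigenvectors is 0 (the cosine of the 90-degree angle is 0), and NumPy's routine for symmetric matrices confirms it.

```python
import numpy as np

# Check that a symmetric matrix has perpendicular eigenvectors.
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])          # an assumed symmetric example
eigvals, Q = np.linalg.eigh(S)           # columns of Q are the eigenvectors

x, y = Q[:, 0], Q[:, 1]
print(np.allclose(S @ x, eigvals[0] * x))    # True: S x = lambda x
print(x @ y)                                  # ~0: dot product is zero, angle is 90 degrees
print(np.allclose(Q.T @ Q, np.eye(3)))        # True: all the eigenvectors are orthonormal
```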
So that's the result that I'm shooting for, that a symmetric matrix-- just focus on that box. A symmetric matrix can be broken up into its eigenvectors. Those are in Q. Its eigenvalues. Those are the lambdas. Those are the numbers lambda 1 to lambda n on the diagonal of lambda. And then the transpose, so the eigenvectors are now rows in Q transpose. That's just perfect. Perfect. Every symmetric matrix is an orthogonal matrix times a diagonal matrix times the transpose of the orthogonal matrix. Yeah, that's called the spectral theorem. And you could say it's up there with the most important facts in linear algebra and in wider mathematics. Yeah, so that's the fact that controls what we do here. Oh, now I have to say what's the situation if the matrix is not symmetric. Now I am not going to get perpendicular eigenvectors. That was a symmetric thing mostly. But I'll get eigenvectors. So I'll get Ax equal lambda x. The first one won't be perpendicular to the second one. The matrix A, it has to be square, or this doesn't make sense. So eigenvalues and eigenvectors are the way to break up a square matrix and find this diagonal matrix lambda with the eigenvalues, lambda 1, lambda 2, to lambda n. That's the purpose. And eigenvectors are perpendicular when it's a symmetric matrix. Otherwise, I just have x and its inverse matrix but no symmetry. OK. So that's the quick expression, another factorization of eigenvalues in lambda. Diagonal, just numbers. And eigenvectors in the columns of x. And now I'm not going to-- oh, I was going to say I'm not going to solve all the problems of applied math. But that's what these are for. Let's just see what's special here about these eigenvectors. Suppose I multiply again by A. I Start with Ax equal lambda x. Now I'm going to multiply both sides by A. That'll tell me something about eigenvalues of A squared. Because when I multiply by A-- so let me start with A squared now times x, which means A times Ax. A times Ax. But Ax is lambda x. So I have A times lambda x. And I pull out that number lambda. And I still have a 1Ax. And that's also still lambda x. You see I'm just talking around in a little circle here, just using Ax equal lambda x a couple of times. And the result is-- do you see what that means, that result? That means that the eigenvalue for A squared, same eigenvector x. The eigenvalue is lambda squared. And if I add A cubed, the eigenvalue would come out lambda cubed. And if I have a to the-- yeah, yeah. So if I had A to the n times, n multiplies-- so when would you have A to a high power? That's a interesting matrix. Take a matrix and square it, cube it, take high powers of it. The eigenvectors don't change. That's the great thing. That's the whole point of eigenvectors. They don't change. And the eigenvalues just get taken to the high power. So for example, we could ask the question, when, if I multiply a matrix by itself over and over and over again, when do I approach 0? Well, if these numbers are below 1. So eigenvectors, eigenvalues gives you something that you just could not see by those column operations or L times U. This is looking deeper. OK. And OK, and then you'll see we have almost already seen with least squares, this combination A transpose A. So remember A is a rectangular matrix, m by n. I multiply it by its transpose. When I transpose it, I have n by m. And when I multiply them together, I get n by n. So A transpose A is, for theory, is a great matrix, A transpose times A. It's symmetric. Yeah, let's just see what we have about A. 
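As a quick aside before turning to A transpose A, here are those two facts checked numerically for the same assumed symmetric matrix: S equals Q Lambda Q transpose, and powering the matrix powers the eigenvalues while keeping the eigenvectors.

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])             # same assumed symmetric example as above
lam, Q = np.linalg.eigh(S)
print(np.allclose(Q @ np.diag(lam) @ Q.T, S))   # True: the spectral theorem S = Q Lambda Q^T

x = Q[:, 0]                                      # one eigenvector of S
print(np.allclose(S @ S @ x, lam[0]**2 * x))     # True: S^2 x = lambda^2 x, same eigenvector

# Powers of a matrix approach zero when every eigenvalue is below 1 in size.
B = S / 5.0                                      # eigenvalues 1/5, 2/5, 4/5
print(np.linalg.matrix_power(B, 100).round(6))   # essentially the zero matrix
```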
It's square for sure. Oh, yeah. This tells me that it's symmetric. And you remember why. I'm always looking for symmetric matrices because they have those orthogonal eigenvectors. They're the beautiful ones for eigenvectors. And A transpose A, automatically symmetric. You just you're multiplying something by its adjoint, its transpose, and the result is that this matrix is symmetric. And maybe there's even more about A transpose A. Yes. What is that? Here is a final-- I always say certain matrices are important, but these are the winners. They are symmetric matrices. If I want beautiful matrices, make them symmetric and make the eigenvalues positive. Or non-negative allows 0. So I can either say positive definite when the eigenvalues are positive, or I can say non-negative, which allows 0. And so I have greater than or equal to 0. I just want to say that bringing all the pieces of linear algebra come together in these matrices. And we're seeing the eigenvalue part of it. And here, I've mentioned something called the energy. So that's a physical quantity that also is greater or equal to 0. So that's A transpose A is the matrix that I'm going to use in the final part of this video to achieve the greatest factorization. Q lambda, Q transpose was fantastic. But for a non-square matrix, it's not. For a non-square matrix, they don't even have eigenvalues and eigenvectors. But data comes in non-square matrices. Data is about like we have a bunch of diseases and a bunch of patients or a bunch of medicines. And the number of medicines is not equal the number of patients or diseases. Those are different numbers. So the matrices that we see in data are rectangular. And eigenvalues don't make sense for those. And singular values take the place of eigenvalues. So singular values, and my hope is that linear algebra courses, 18.06 for sure, will always reach, after you explain eigenvalues that everybody agrees is important, get singular values into the course because they really have come on as the big things to do in data. So that would be the last part of this summary video for 2020 vision of linear algebra is to get singular values in there. OK, that's coming next.
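A minimal sketch of those A transpose A properties, with an assumed random rectangular A: the product is symmetric, its eigenvalues are nonnegative, and the energy x^T A^T A x is just the squared length of Ax.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))       # an assumed 5-by-3 rectangular matrix
G = A.T @ A                            # 3-by-3

print(np.allclose(G, G.T))             # True: A^T A is symmetric
print(np.linalg.eigvalsh(G))           # all eigenvalues >= 0 (up to roundoff)

x = rng.standard_normal(3)
print(np.isclose(x @ G @ x, np.linalg.norm(A @ x)**2))   # True: the energy is |A x|^2
```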
A_Vision_of_Linear_Algebra
Part_5_Singular_Values_and_Singular_Vectors.txt
GILBERT STRANG: OK, so I was speaking about eigenvalues and eigenvectors for a square matrix. And then I said for data for many other applications, the matrices are not square. We need something that replaces eigenvalues and eigenvectors. And what they are-- and it's perfect-- is singular values and singular vectors. So may I explain singular values and singular vectors? This slide shows a lot of them. The point is that there will be-- now I don't say eigenvectors-- two-- different left singular vectors. They will go into this matrix u. Right singular vectors will go into v. It was the other case that was so special. When the matrix was symmetric, then the left equals left eigenvector. They're the same as the right one. That's sort of sensible. But a general matrix and certainly a rectangular matrix-- well, we don't call them eigenvectors, because that would be confusing-- we call them singular vectors. And then, inbetween are not eigenvalues, but singular values. Oh, right. Oh, hiding over here is a key. A times the v's gives sigma times the u's. So that's the replacement for ax equal lambda x, which had x on both sides. OK, now we've got two. But the beauty is now we've got two of those to work with. We can make all the u's orthogonal to each other-- all the v's orthogonal to each other. We can do what only symmetric matrices could do for eigenvectors. We can do it now for all matrices, not even squares, just this is where life is, OK. And these numbers instead of the lambdas are called singular values. And we use the letter sigma for those. And here is a picture of the geometry in 2 by 2 if we had a 2 by 2 matrix. So you remember, factorization breaks up a matrix into separate small parts, each doing its own thing. So if I multiply by vector x, the first thing that's going to hit it is v transpose. V transpose is an orthogonal matrix. Remember, I said we can make these singular vectors perpendicular. That's what an orthogonal matrix-- so it's just like a rotation that you see. So the v transpose is just turns the vector to get here to get to the second one. Then I'm multiplying by the lambdas. But they're not lambdas now. They're sigma. The matrix, so that's capital sigma. So there is sigma 1 and sigma 2. What they do is stretch the circle. It's a diagonal matrix. So it doesn't turn things. But it stretches the circle to an ellipse because it gets the two different singular values in-- sigma 1 and sigma 2. And then the last guy, the u is going to hit last. It takes the ellipse and turns out again. It's again a rotation-- rotation, stretch, rotation. I'll say it again-- rotation, stretch, rotation. That's what singular values and singular vectors do, the singular value decomposition. And it's got the best of all worlds here. It's got the rotations, the orthogonal matrices. And it's got the stretches, the diagonal matrices. Compared to those two, those are the greatest. Triangular matrices were good when we were young an hour ago. Now, we're seeing the best. OK, now let me just show you where they come from. Oh, so how to find these v's. Well, the answer is, if I'm looking for orthogonal vectors, the great idea is find a symmetric matrix and with those eigenvectors. So these v's that I want for A are actually eigenvectors of this symmetric matrix A transpose times A. That's just nice. So we can find those singular vectors just as fast as we can find eigenvectors for a symmetric matrix. And we know there, because A transpose A is symmetric. 
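Here is that construction carried out in NumPy for an assumed random rectangular matrix: take the v's as eigenvectors of A transpose A, the sigmas as square roots of its eigenvalues, and each u as Av divided by sigma; the result reproduces A and agrees with NumPy's own SVD.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))                  # an assumed rectangular matrix

lam, V = np.linalg.eigh(A.T @ A)                 # eigenvalues come out ascending
order = np.argsort(lam)[::-1]                    # put the largest sigma first
lam, V = lam[order], V[:, order]
sigma = np.sqrt(np.maximum(lam, 0.0))
U = (A @ V) / sigma                              # u_i = A v_i / sigma_i

print(np.allclose(A @ V, U * sigma))             # True: A v_i = sigma_i u_i
print(np.allclose(U.T @ U, np.eye(3)))           # True: the u's came out orthonormal
print(np.allclose(U @ np.diag(sigma) @ V.T, A))  # True: A = U Sigma V^T

print(np.allclose(sigma, np.linalg.svd(A, compute_uv=False)))   # matches NumPy's SVD
```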
We know the eigenvectors are perpendicular to each other orthonormal. OK, and now what about the other ones because remember, we have two sets. The u's-- well, we just multiply by A. And we've got the u's. Well, and divide by sigmas, because these vectors u's and v's are unit vectors, length one. So we have to scale them properly. And this was a little key bit of algebra to check that, not only the v's were orthogonal, but the u's are orthogonal. Yeah, it just comes out-- comes out. So this singular value decomposition, which is maybe, well, say 100 years old, maybe a bit more. But it's really in the last 20, 30 years that singular values have become so important. This is the best factorization of them all. And that's not always reflected in linear algebra courses. So part of my goal today is to say get to singular values. If you've done symmetric matrices and their eigenvalues, then you can do singular values. And I think that's absolutely worth doing, OK, yeah, so and remembering down here that capital Sigma stands for the diagonal matrix of these positive numbers, sigma 1, sigma 2 down to sigma r there. The rank, which came way back in the first slides, tells you how many there are. Good, good. Oh, here's an example. So I took a small matrix because I'm doing this by pencil and paper and actually showing you that the singular value. So there is my matrix, 2 by 2. Here are the u's. Do you see that those are orthogonal-- 1, 3 against minus 3, 1? Take the dot product, and you get 0. The v's are orthogonal. The sigma is diagonal. And then the pieces from that add back to the matrix. So it's really, it's broken my matrix into a couple of pieces-- one for the first singular value in vector, and the other for the second singular value in vector. And that's what data science wants. Data science wants to know what's important in the matrix? Well, what's important is sigma 1, the big guy. Sigma 2, you see. Well, it was 3 times smaller-- 3/2 versus 1/2. So if I had a 100 by 100 matrix or 100 by 1,000, I'd have 100 singular values and maybe the first five I'd keep. If I'm in the financial market, those guys, those first numbers are telling me maybe what bond prices are going to do over time. And it's a mixture of a few features, but not all 1,000 features, right. So this is singular value decomposition picks out the important part of a data matrix. And you cannot ask for a more than that. Here's what you do with a matrix is just totally enormous-- too big to multiply-- too big to compute. Then you randomly sample it. Yeah, maybe the next slide even mentions that word randomized numerical linear algebra. So this, I'll go back to this. So the singular value decomposition-- this, what we just talked about with the u's and the v's and the sigmas. Sigma 1 is the biggest. Sigma r is the smallest. So in data science, you very often keep just these first ones, maybe the first k, the k largest ones. And then you've got the matrix that has rank only k, because you're only working with k vectors. And it turns out that's the closest one to the big matrix A. So this singular value is among other things is picking out, putting in order of importance the little pieces of the matrix. And then you can just pick a few pieces to work with. Yeah, yeah. And the idea of norms is how to measure the size of a matrix. Yeah, but I'll leave that for the future. And randomized linear algebra I just want to mention. Seems a little crazy that by just randomly sampling a matrix, we could learn anything about it. 
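A small sketch of that truncation idea, on an assumed random 8-by-6 matrix: keeping the k largest singular values gives a rank-k matrix, and the error that remains (measured in the 2-norm) is exactly the next singular value.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((8, 6))                  # an assumed data-like matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]      # sum of the k biggest rank-1 pieces

print(s.round(3))                                         # singular values, largest first
print(np.isclose(np.linalg.norm(A - A_k, 2), s[k]))       # True: error = next singular value
print(np.linalg.matrix_rank(A_k))                         # k
```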
But typically data is sort of organized. It's not just totally random stuff. So if we want to know like, my friend in the Broad Institute was doing the ancient history of man. So data from thousands of years ago. So he had a giant matrix-- a lot of data-- too much data. And he said, how can we find the singular value decomposition? Pick out the important thing. So you had to sample the data. Statistics is a beautiful important subject. And it leans on linear algebra. Data science leans on linear algebra. You are seeing the tool. Calculus would be functions would be continuous curves. Linear algebra is about vectors. This is just n components. And that's where you compute. And that's where you understand. OK. Oh, this is maybe the last slide to just help orient you in the courses. So at MIT 18.06 is the Linear Algebra Course. And maybe you know 18.06 and also 18.06 Scholar, SC, on OpenCourseWare. And then this is the new course with the new book, 18.065. So its numbers sort of indicating a second course in linear algebra. That's when I'm actually teaching now, Monday, Wednesday, Friday. And so that starts with linear algebra, but it's mostly about deep learning-- learning from data. So you need statistics. You need optimization, minimizing. Big functions of calculus comes into it. So that's a lot of fun to teach and to learn. And, of course, it's tremendously important in industry now. And Google and Facebook and ever so many companies need people who understand this. And, oh, and then I am repeating 18.06 because there is this new book coming, I hope. Did some more this morning. Linear Algebra for Everyone. So I have optimistically put 2021. And you're the first people that know about it. So these are the websites for the two that we have. That's the website for the linear algebra book, math.mit.edu. And this is the website for the Learning from Data book. So you see there the table of contents and all and solutions to problems-- lots of things. Thanks for listening to this is-- what-- maybe four or five pieces in this 2020 vision to update the videos that have been watched so much on OpenCourseWare. Thank you.
A_Vision_of_Linear_Algebra
Part_6_Finding_the_Nullspace_Solving_Ax_0_by_Elimination.txt
[SQUEAKING] [RUSTLING] [CLICKING] GILBERT STRANG: OK, this is about finding the null space of a matrix A-- any matrix, square or rectangular. And what does that mean? That means, well, in algebra, we're solving the equation Ax equals 0. So A is our matrix, x is a vector that we're looking for, and Ax is a combination of the columns of A. So we're looking for combinations of the columns that give the zero vector-- dependent columns, we'll be saying. So that's the goal. And the new start for linear algebra that I've suggested solves that problem for small matrices. It kind of just does it for a small matrix. But for any matrix, or a big one, we need a system. So this kind of completes the idea by giving the system, which uses the algorithm of linear algebra, which is elimination. So elimination is going to be the key to solving this problem-- to finding the null space. So that's my picture. Oh, these are the-- I'll just mention-- the three books which discuss this. The Introduction to Linear Algebra is the main textbook. Then the Learning from Data book-- actually, that's where this new idea got started. And the Linear Algebra for Everyone has got the idea more completely. So I'm sort of speaking about a section out of that third book. OK. And they all have websites. Math.mit.edu, the addresses are there to see more. Actually, I should say, so these are the key ideas of this lecture. And yesterday, I looked at my lecture from a few years ago-- quite a few years ago-- lecture seven in the OpenCourseWare series for Math 18.06. So that was on the same topic, elimination and Ax equals 0. So a lot of good stuff is there, but there's a little more to say now. And that's what this is about today. So OK, here are key ideas of the lecture. So the null space-- so now you see it in writing-- the null space is all the solutions x to Ax equals 0. Remember, x is a vector. And elimination is the key and it keeps the same null space. And the Matlab command is rref of A. That's the command that does elimination. And we'll see the identity matrix, we'll see a matrix F, and the new idea was to factor the matrix into a column matrix C times a row matrix R. So this is really putting all the ideas together. And we learn that if a matrix is short and wide, then there are certainly-- if we've got lots of columns, then some of them will be dependent and that means that there'll be solutions to Ax equals 0. So here we go. Oh OK, I'm going to start with an example. And elimination has already done its job. So what elimination did was to get to that-- you see that identity matrix in the first two columns? 1, 0 and 0, 1. So you can't ask for a simpler matrix than that. So once we've got that, we can't mess with it, so 3, 5, 4, 6, we're stuck with-- those are our other columns. So the first two columns are independent. Columns of the identity matrix are very independent. But then 3, 4 is a combination of those two columns, right? 3, 4 is 3 times the first column plus 4 times the second. So there is an x in the null space. That's one of the vectors we're after. I'll call it a special solution because it just came specially from one column. And you see it down below as s1. So if we took minus 3 of the first column, minus 4 of the second column, and then plus 1 of the third column, we would have 0. Ax equals 0-- what we're looking for. And the second special solution would come from the last column of the matrix, 5, 6. So again, that's 5 of the 1, 0 column and 6 of the 0, 1 column. 
And if we put that together into a special solution s2, we want minus 5 of column 1, minus 6 of column 2, plus nothing of column 3, plus column 4, giving 0. So in this case, elimination has produced a simple matrix R-- simple because it's got the identity there. So can I show you one more example of R before I begin to talk about how do we get to R? OK, so here's another R. A little different, though. It's different because it's got a row of 0's. Well that won't pose any problem, but we just have to think what to do with it. We always move 0 rows to the bottom. And there it is. And it does have an identity matrix as the first example did, but you see that the 0, 1 is often column 3. The identity is in columns 1 and 3 here and that makes a small change. But the idea is still the same. Those two columns are the independent ones. They're very simple. The 1, 0, 0 and 0, 1, 0, those are totally different directions. Then the 7, 0, 0 is 7 of the first column. So that spotted a special solution-- an x that I call s1. At the bottom line, you see the minus 7 of the first column plus 1 of the second column produces 0. If you just look at those columns, minus 7 of the first plus 1 of the second, everything cancels. And the other one is going to come from the 8, 9, 0 column. That'll be 8 of the first column and 9 of the third-- the two bits of the identity. So you see that when we get to this reduced row echelon form-- horrible set of words, but what it means is elimination has made it as simple as it can be. OK, I might have something more to say. Yeah, so this is summarizing what you've just seen. So we have the simplest case where the identity just sits there at the left or the more general case where the identity is mixed in with the other two columns. We can live with both. And I'm using F for the other columns-- the columns that are not part of the identity. And then it has that extra 0 row. OK. And now, I want to write it-- a key part of this lecture is to see the result this matrix R-- see it in matrix form instead of a bunch of numbers. Often, in the computing, you're just pages full of numbers and you don't see what's happening. So what we're looking for is the identity matrix and the non-zero matrix F. And now, in that R0 case, that second example, there's a P. What is that matrix P doing there? Well, it's a permutation matrix-- an exchange matrix. Because the identity in this second example is not in columns 1 and 2. It's in 1 and 3. So that P has to put it there. So that P is-- and because P is on the right, it'll move columns. And there I wrote down what that P is. That exchanges column 2 and 3 and puts the identity in where we want it. OK. So that's the linear algebra-- the matrix notation part. Oh, and this is-- here, now you're seeing the new start, this C times R. So that's what I'm suggesting that if you give me a matrix, the first thing I would do would be to find independent columns-- those would go into C. And then the matrix R would-- well, we've seen the matrix R and that would tell me the combinations of those C columns to get A. Yeah, you'll see it. Well, this box is around all the matrix formulas. So we're sorting every matrix into independent columns followed by dependent columns and then this permutation-- this exchange matrix P-- if they don't really come in that order. If we need a new order. So all this is about A equals CR, the new-- well, not new. Not completely new, I'm sure, but it's the factorization of any matrix. OK. 
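A quick NumPy check of the two reduced matrices above and their special solutions: each special solution puts a 1 in one free column and reads its other entries off from R, and R times each of them is the zero vector.

```python
import numpy as np

# First example: identity in columns 1 and 2, F = [[3, 5], [4, 6]].
R1 = np.array([[1.0, 0.0, 3.0, 5.0],
               [0.0, 1.0, 4.0, 6.0]])
s1 = np.array([-3.0, -4.0, 1.0, 0.0])
s2 = np.array([-5.0, -6.0, 0.0, 1.0])
print(R1 @ s1, R1 @ s2)            # both are the zero vector

# Second example: identity split across columns 1 and 3, plus a zero row.
R0 = np.array([[1.0, 7.0, 0.0, 8.0],
               [0.0, 0.0, 1.0, 9.0],
               [0.0, 0.0, 0.0, 0.0]])
t1 = np.array([-7.0, 1.0, 0.0, 0.0])
t2 = np.array([-8.0, 0.0, -9.0, 1.0])
print(R0 @ t1, R0 @ t2)            # both are the zero vector
```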
And now how do we actually do it if the matrix is big and we need the computer to help? Well, we do it by elimination. OK, this is what elimination is about. What are we allowed to do in elimination? We don't want to mess up the equation Ax equals 0. I'm operating on A, but I'm not creating or losing any of those solutions. So if I subtract one row from another row, or any multiple of one row from another row, that's the main operation of elimination. And obviously, the equations are still true. If I take 3 of equation one away from equation two, I still have a correct equation. And I could multiply a row by 15 or any non-zero number. No change. And I can switch any two rows. That third one of switching rows, exchanging rows, is just to move the 0 row down to the bottom. This is the example. Can you see those numbers? So I'm giving an example of a matrix that leads to that very same R that we worked out. So you see what I'm doing now. I'm backing up. I know the R I want, but I'm given some matrix A, and it's these elimination steps that will lead me to R. OK. So you see A over on the left? So it's a full matrix A. No identity there. But I'm allowed to do elimination. I'm allowed to do these three steps when I want to. So what shall I do? Probably, at this point, you've seen elimination as much as you want to, but allow me to say it once or twice more. So I would take 3 of row 1 away from row 2. That would produce-- that 3 that's in the original matrix would turn into a 0. So when you take 3 of that first row, that would be 3, 6, 33, 51. I subtract it and I get just that 0, 1, 4, 6. So I've got a much better second row. Now, I'm going to use the second row going upwards. So I'll take 2 of the second row away from the first row. That'll knock out the 2 that I don't want and it'll produce the identity that I do want. So do you see that? If I take 2 of that last row, that 0, 2, 8, 12, when I take 0, 2, 8, 12 away from the top row, I'm left with 1 and 0. And 3 and 5 are the numbers that happen to be left. OK. So really, what has elimination done? This is an important idea that I think I never caught onto when I was learning linear algebra. Elimination has found the inverse of this 1, 2, 3, 7-- of this 2 by 2 matrix. Because it's started with 1, 2, 3, 7 and it's ended with the identity. So that must have inverted that matrix. And then it had to apply the inverse to the second half of the matrix and that produced the 3, 4, 5, 6. So you see, this is-- oh, I think there's probably an example that leads to number two. Here, I must have thought ahead. So it says on the screen what I just said, that elimination has inverted the leading matrix. And then it's written out the step H equal WF that connects the original dependent columns to the final dependent columns in F, 3, 4, 5, 6. OK. I think elimination is just a matter of practice and you probably have done it. And here is an important point about elimination, that you could do things in different orders. I spoke and I did them in a sort of natural order, but you could do it other ways. But you wouldn't get a different outcome. You'd get the same matrix R. Well, because the equations are still true at every point. The x's, the null space of a matrix, is not changing by my elimination steps. Yeah. So let's just repeat this. When you finally get to R, when you've done the elimination and you get to R with its identity matrix, that identity matrix in R tells you which columns at the start were independent. 
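Here is that example carried out in NumPy, with the full matrix A reconstructed from the steps described (the leading block W = [[1, 2], [3, 7]] is stated explicitly; the remaining columns follow from the row operations): elimination inverts W and applies that inverse to the rest of the matrix, which is exactly F = W^(-1) H.

```python
import numpy as np

A = np.array([[1.0, 2.0, 11.0, 17.0],
              [3.0, 7.0, 37.0, 57.0]])    # reconstructed from the elimination steps

step1 = A.copy()
step1[1] -= 3 * step1[0]          # subtract 3 * row 1 from row 2
step1[0] -= 2 * step1[1]          # subtract 2 * (new) row 2 from row 1
print(step1)                      # [[1, 0, 3, 5], [0, 1, 4, 6]]  -- this is R

W, H = A[:, :2], A[:, 2:]
F = np.linalg.solve(W, H)         # W^{-1} H, without forming the inverse explicitly
print(F)                          # [[3, 5], [4, 6]] -- the dependent-column part of R
```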
So you need to have a way to do that and now we have a way. In the other opening talks in this series-- in this series about the new start-- the matrices were nice and small so I didn't need much of a system. I could practically wing it. But now, we need a system. And of course, if we want to use the Matlab or other code, it has to have a system. So anyway, we now have a system of row elimination steps and we know what it ends with. It ends with an identity matrix of the right size, a matrix F-- that's the other columns-- and then possibly a permutation to tell us the order. That's elimination. OK and here's the second example. So we're moving along. You've got to pay attention because this is all there is is two examples and then put the ideas together. So here is an example of a matrix A with numbers as big as 97. I don't know how I got there. OK. So there are no 0's in the original matrix A but elimination aims to get 0's. Oh, here. It got 0's very quickly because the second row of that matrix, 2, 14, 6, 70, that's just twice the first row. So when I subtract 2 times the first row from the second row, I get that 0 row. So I'm now one step along with a row of 0's. That's great. And the third row improved too because I took 2 of the first row away from the third row and that gave me a 0, 0 at the start and then a 3, 27. So are we OK? We're at the second matrix now. It's not finished. It's not in its R form, in its echelon form, but it's a lot closer. OK. So what do we still have to do? Well I look at that 3, and I'm looking to get the identity matrix, so I'll divide that-- sooner or later, I'm going to divide that last row by 3 and get 0, 0, 1, 9. But another thing I want to do is clean up the first row. So I subtract that last row from the first row. You see that? See, I'm much closer to the identity. So I'm now moved on to the third matrix. And if you've done a lot of these, you know what I'm doing. I'm working a column at a time. And notice, column 2, I can't do anything with. Column 2 is a dependent column. It's seven times column 1. I can't improve column 2. But I can improve column 3 and that's what we do. Divide that third, that 0, 0, 3, 27, by 3 and exchange rows. Then you see that the result is the R0. So it's the R0 with the 0 row. Yeah. So just to repeat, this is a matrix A that leads through the steps of elimination, which we're remembering to R0. So we're really finished with this matrix A because this shows how to get it to R0, and then we've already seen with R0, we've figured out which were the special solutions-- the vectors in the null space. So we're good. We really have done the job. And it remains to just see a little bit, what did we do and what's going on with these other two columns? The F columns that are not the identity. And you see, what we've done is-- well, we started with columns 1 and 3. Yeah, so that's our matrix C. Those are the two independent columns. Well how do I know? Where did C come from? Because that's part of this search, where are the independent columns. They're the ones that end up with the identity because the identity is the way to go to the right independent columns. So they're columns 1 and 3 of the original matrix. So you see C there? 1, 2, 2 and 3, 6, 9. Then F is the part of R from the dependent column. So that's the 7, 8, 0, 9. And then I'm seeing the dependent columns. Yeah. You have to do this a few times, but all the ideas now are-- and there I've put it together. 
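And a check of the factorization for this second example, with the rows reconstructed from the description above (row 2 is twice row 1; row 3 is twice row 1 plus (0, 0, 3, 27)): elimination picks columns 1 and 3 as the independent ones, and A factors as C times R.

```python
import numpy as np

A = np.array([[1.0,  7.0, 3.0, 35.0],
              [2.0, 14.0, 6.0, 70.0],
              [2.0, 14.0, 9.0, 97.0]])   # reconstructed rows; note the entry 97

C = A[:, [0, 2]]                          # the independent columns 1 and 3
R = np.array([[1.0, 7.0, 0.0, 8.0],
              [0.0, 0.0, 1.0, 9.0]])      # the reduced form, zero row dropped

print(np.allclose(C @ R, A))              # True: A = C R
print(np.linalg.matrix_rank(A))           # 2
```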
So the idea is, with a small matrix, like only three columns and maybe only two rows, we could find C and R almost by sight. But now, even where we're up to four columns and three rows, we need a method. And that's the point of this talk and it was also the point of lecture seven in the original 18.06-- Math 18.06 on OpenCourseWare. OK, so we've found C and R. Good. Good. I think it's just congratulating ourselves from here in. These are remembering the special solutions. Oh, and then why not-- you remember those? That second column was 7 of the first column, so that gave that special solution, that combination, to give 0. And then the other combination gave the 0 vector. So we've got two special solutions. Oh, and here is the general principle. Yeah. Yeah. It's very simple. After you've got this matrix down to identity and F, then the solutions x should have a minus FI. You remember the minus signs that we saw? So all I'm saying here on the first line is that the matrix IF times minus FI gives the 0 matrix. Of course it does. And then the second line is the little bit of special trick that you have to do if there's a P, a permutation, an exchange involved here. When the identity matrix is not sitting where you want it, you have to have a P to put it there. Then P transpose has to come into the solutions. Yeah. Yeah, because P times P transpose. So remember about permutation matrices. Linear algebra is about learning different special character of different matrices. And permutations just take the identity matrix and mess around with the rows or mess around with the columns, same thing. And then P transpose unmesses it. Brings it back to the identity. So P times P transpose is the identity. OK, so we're just about there. Oh, then this is the little point I made-- well, it's an important point. Suppose I have five columns and only three rows. That means I've got five unknowns and only three equations. So there are going to be solutions to that. There are going to be solutions other than the 0 solution. The 0 combination will surely give 0, but if I have five unknowns, five columns in there, and I have only three equations, three rows-- so five greater than three, n greater than m-- then there will be some columns in F. There'll be some non-zero solutions. And these examples show. So take that matrix M at the end there. You see there's a matrix with four columns and only two rows. So I've got four vectors in the plane. If I have four vectors in the plane, there are combinations that give 0. They can complete triangles. So in that case, rank would be 2. So we're really seeing this part of linear algebra, to simplify the matrix by elimination so that all the main facts are clear. Yeah. And oh, if you really want to see it in shorthand-- I'm not necessarily pushing this last idea, but if you want to see it in a shorthand, we could think of the matrix as having just four blocks. So it's a 2 by 2 matrix, but unfortunately, each of those guys is a block. So W is the one in the corner that we talked about earlier that gets inverted. So if we look at the last line, that W in the corner, that ended up as the identity. The second block row, the J, K, ended up as 0's. And the one remaining guy, it tells us the combinations that we are looking for-- is this W inverse H. So let me look at that equation in the box W, H, J, K, that's the matrix I'm starting with. I'm going to invert W and I know that J, K is some combination of W, H. 
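That general principle can be checked directly for the second example: with F = [[7, 8], [0, 9]] and the permutation P that puts the identity into columns 1 and 3, the matrix P transpose times [[-F], [I]] has the special solutions as its columns, and R times it is zero.

```python
import numpy as np

I2 = np.eye(2)
F = np.array([[7.0, 8.0],
              [0.0, 9.0]])
P = np.eye(4)[:, [0, 2, 1, 3]]           # column exchange: identity goes to columns 1 and 3

R = np.hstack([I2, F]) @ P
print(R)                                  # [[1, 7, 0, 8], [0, 0, 1, 9]]

N = P.T @ np.vstack([-F, np.eye(2)])      # columns of N are the special solutions
print(N)                                  # first column (-7, 1, 0, 0), second (-8, 0, -9, 1)
print(np.allclose(R @ N, 0))              # True: R N = 0, so these solve A x = 0 as well
```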
So when I invert W, I get-- the first elimination produces the first row, starting with the identity. And when I apply that to the second, the J, K row, those go to 0. So that's what elimination is doing in real shorthand. It's taking a 2 by 2 block matrix to that matrix R at the end, with a 0 block row and an identity block in the upper corner. So that is elimination and the solution of-- and finding the null space. So really, this completes the job of the first topic in linear algebra, is identifying-- understanding these four fundamental subspaces. The null space being one of them. The column space being another. We understand now which columns are independent. And the row space being another. And so we're really understanding the idea of these four fundamental subspaces that go into the big picture of linear algebra. So this completes the first major stage of a linear algebra course. And then what's to follow will come. Eigenvalues, singular values, applications of all kinds. Least squares. Good. Good. Good math. And thank you very much. So that's my summary of finding the null space. Good.
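As a minimal sketch of that block shorthand (the numbers are assumed for illustration, reusing the W and H from the earlier example): the dependent block row [J, K] is a combination of [W, H], so after elimination it becomes zero, and the top block row becomes [I, W^(-1)H].

```python
import numpy as np

W = np.array([[1.0, 2.0],
              [3.0, 7.0]])
H = np.array([[11.0, 17.0],
              [37.0, 57.0]])
M = np.array([[2.0, 1.0]])                 # an assumed combination building the extra row
J, K = M @ W, M @ H                         # the dependent block row [J, K]

A = np.block([[W, H], [J, K]])
F = np.linalg.solve(W, H)                   # W^{-1} H
print(np.hstack([np.eye(2), F]))            # the top block row of R: [I, F]
print(np.allclose(K - J @ F, 0))            # True: the [J, K] block row eliminates to zeros
```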
MIT_8821_String_Theory_and_Holographic_Duality_Fall_2014
9_Large_N_Expansion_as_a_String_Theory_Part_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HONG LIU: OK, let's start. So first let me just remind you what we did at the end of last lecture. So we saw that the large N expansion of gauge theory has essentially exactly the same mathematical structure as, say, the mathematics of n-string scattering. And so here the observable is a correlation function of gauge-invariant operators. And then these have a large N expansion as follows. And on this side you have just an n-string scattering amplitude. Just imagine you have some kind of scattering of strings, with a total number of n strings. And then this also has an expansion in terms of the string coupling, in this form. So now, if we identify-- so if we can identify the g string as 1/N. So if we identify g string with 1/N, then these two are essentially the same kind of expansion, OK? And you can also identify these external strings, string states, with what in the large N theory we called the glueball states, the single-trace operators. And then each case is corresponding to a sum over the topology. It's an expansion in terms of the topology. So here it is the topology of the string worldsheet. And here it is the topology of the Feynman diagrams. So still, at this stage, it's just like a mathematical correspondence. We're looking at two completely different things. But probably there's no-- yeah, no obvious connection between these two objects we are discussing. Yeah, we just have a precise mathematical structure. But one can actually argue that they also describe the same physical structure, once you realize what happens when you sum over all possible Feynman diagrams. So once you realize that each Feynman diagram, say, of genus h can be considered as a partition-- or, in other words, a triangulation-- of a genus-h surface, a 2D surface. OK. So if you write more explicitly this fh-- so if we write explicitly this fh, then this fh, this fnh, will be corresponding to your sum over all Feynman diagrams of genus h. Suppose G is the expression for each Feynman diagram. Say for each diagram. And then I can just rewrite this. In some sense, I sum over all possible triangulations of a genus-h surface, say with some weight G. And summing over all possible triangulations of a surface is essentially-- so this is essentially the same as the sum over all possible surfaces. So this is a discrete version. So the sum over all possible triangulations of some genus-h surface can be considered as a discrete version of the sum over all possible surfaces, OK? AUDIENCE: So you're saying it's like a sum over simplices, like a simplex? HONG LIU: Exactly. Exactly. Yeah, because, say, imagine when you sum over surfaces, you sum over all possible metrics. You can put [INAUDIBLE]. And that's the same as summing over different discretizations of that surface, once you have defined the unit for that discretization. So if we can identify-- so for now regard this Fh. So this Fh, this Fnh, is the path integral over all genus-h surfaces, weighted by some string action. So if we can, say, identify this G with some string action-- the exponential of some string action.
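To put the two expansions side by side in formulas--this is only a sketch, and the precise powers of N assume one common normalization of the single-trace operators--the gauge-theory correlator and the n-string amplitude read
\[
\langle \mathcal{O}_1\cdots\mathcal{O}_n\rangle_c=\sum_{h=0}^{\infty}N^{\,2-2h-n}\,f_{n,h}(\lambda),
\qquad
A_n=\sum_{h=0}^{\infty}g_s^{\,2h-2+n}\,F_{n,h},
\]
so identifying \(g_s\propto 1/N\) matches the two series term by term, with \(F_{n,h}\) the genus-h worldsheet path integral,
\[
F_{n,h}\sim\int_{\Sigma_h} DX\; e^{-S_{\text{string}}[X]},
\]
which is what the triangulation argument is meant to reproduce from the sum over genus-h Feynman diagrams.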
Then we would have-- then one can conclude that large N gauge theory is just a string theory, OK? That large N gauge theory is just a string theory, if you can do that. In particular, the large N limits-- so large N limit here, as we discussed before, can considered as a classical theory of glueballs. Or a classical theory of the single-trace operators. So this would be matched to the classical string theory. So as we mentioned last time, so I was mentioning before, this expression-- so just as in the as we discussed [INAUDIBLE], the [? expansion ?] in g string the same as expansion in the topology. And the expansion in the topology can also be considered as the expansion of the groups of a string. Because whenever you add a hole to the genus-- when you add the genus, and you actually add the string hole, you add the string loop diagram. So in this sense, you can [? integrate ?] all these higher order corrections, as the quantum correction to this classical string behavior. So this is just a tree-level amplitude for string. And this [? goes ?] one into the loops. Whenever you add this thing, you add the loop. OK. Is this clear? Now, remember what we discussed for the torus. If you've got a torus, then correspondingly you have a string split and joined together. And this split and join process you can also consider as a string loop, a single string going around a loop, [? just like ?] in the particle case, OK? In the standard field theory case. And so the large N limit, which is the leading order term here, would map to a leading order in the string scattering. And the leading order in the string scattering-- they only consider tree-level [? skin ?] scatterings, and then corresponding to classical string theory. And also the single-trace operator here can be mapped to the string states. Yeah, can be mapped to the string states. But this is only-- this is a very nice picture. But for many years, this was just a dream. And because this guy looks very different from this guy, but this is difficult. So this has some [? identification is ?] difficult for the following reasons. So first, so this G just-- say your Feynman diagrams, amplitude for particular Feynman diagram. So G is typically expressed as product of field theory propagators. So imagine how you evaluate the Feynman diagram. The Feynman diagram, essentially, is just a product of the [? propagators. ?] And then you integrate it [INAUDIBLE] integrated over spacetime. So they just take the Yang-Mills theory. And if you look at the expression for this diagram, of course, it looks nothing. So they look nothing like-- OK. So let me make a few comments about this thing. Because if you want to match, say if I gave you a Yang-Mills theory, so I gave you a QCD, then you can write down-- then you can go to large N. You can write down expressions for the common diagrams. But if you say, I want to write it as a string theory, the first thing you have to say, what string theory do you want to compare? So first you have to ask yourself what string action do you want to compare. So the string action, as we discussed last time, this describes the embedding of the worldsheet into some spacetime. OK, so this is worldsheet into a spacetime. So this is also sometimes called the target space. So this is a spacetime. This string moves. And the mathematical of this is just the-- this is encoded in this mapping X mu sigma tau. OK, X mu is the coordinate for M. And then sigma tau is the coordinate as you parameterize your worldsheet. 
So in order to write down action, of course, you have to choice of space manifold. You have to choose your spacetime. And also you have to-- when you fix the spacetime, you don't have a choice. And sometimes the way to write down such kind of embedding is not unique. The action for such [? finding ?] is unique, so you only need to choose what action you include. And also often, in addition to this embedding, sometimes you can have additional internal degrees freedom. living on worldsheet. For example, you can have some fermions. Say if you have a superstring, then you can have some additional fermions are living on the worldsheet, in addition to this embedding. So in other words, the choice of this guy in some sense is infinite. And without any clue-- so you need some clue to know what to compare the gauge theory to. And otherwise, even if this works, you're searching for some needle in the big ocean. And then there's another very important reason why this is difficult, is that this string theory is formulated in a continuum. It's formulated in a continuum. And these Feynman diagrams, even if they're corresponding to some kind of string theory, they correspond into a discrete version of that. So at best, it's a discrete version. So we expect such a geometric picture for G, for these Feynman diagrams, to emerge only at strong couplings. OK? Emerge only at strong couplings for the following reason. So if you look at the Feynman diagram-- so the simplest Feynman diagram we draw before, say for example just this diagram. And if you draw it on the sphere, it separated the sphere into three parts, OK? So this [? discretizes ?] a sphere into three parts. And essentially, just as the sphere just becomes three points, because each particle is wanting to-- when you're trying to [INAUDIBLE] each part, you approximate it by one point. So essentially, in this diagram, you approximate the whole sphere essentially by three points. OK. And of course, it's hard to see your [? magic ?] picture from here. And your [? magic picture ?] you expect to emerge, but your Feynman diagrams become very complicated. For example, if you have this kind of diagram, because of the four-point vertex. In principle, you can have all these diagrams. And then this [INAUDIBLE] [? wanting ?] to discretize-- yes, I suppose this is on the torus. Suppose you have a-- for example, this could be a Feynman diagram on the torus, OK? For the vacuum [? energy. ?] And now this is next some kind of proper discretization. And this will go to a continuum limit, say when the number of these box go to infinity. When the number of box go to infinity, then you need a number of propagators, and the number of vertices goes to infinity, OK? So in order for continuum, a picture to emerge, so you want those complicated diagrams-- it's not your number of vertices or large number of propagators that dominate. And for those things that dominate, then you need the strong coupling. Because with this coupling, this is the leading order diagram. And there's no geometry from here, OK? So in order to have the geometry, you want the diagram are very, very complicated, so that they really-- [INAUDIBLE] a triangulation of a surface. A weak coupled diagram with small number of lines will cause [? one ?] [? and two ?] are very close triangulization of a surface. So we expect this only appears in strong couplings, OK? Yeah. AUDIENCE: By the cases like we have to sum over all the [INAUDIBLE]. HONG LIU: Yeah, sum over the [INAUDIBLE] diagram. 
AUDIENCE: Including those simple ones. HONG LIU: Including those simple ones. So that's why you want to-- so if you're in a weak coupling, then the simple ones-- so we sum all those diagrams. And each diagram you can associate with a coupling power. So at weak coupling, then the lowest order term would just dominate. And the lowest order term have a very simple diagrams. And then that's because [? one ?] and [? two ?] are very crude triangulization over the surface. But if you have a strong coupling-- in particular, if you have an infinite coupling-- the diagrams, the infinite number of vertices will dominate. And then that's because [? one ?] and [? two ?] have very fine triangulization over the surface. And then that can go to the [INAUDIBLE]. AUDIENCE: [INAUDIBLE] interaction a coupling constant has been [? dragging ?] out from-- HONG LIU: No. That's just N dragged out. AUDIENCE: Oh, I see. HONG LIU: No, there's what we call this [INAUDIBLE] still remaining. By coupling, it's only [? N. ?] AUDIENCE: [INAUDIBLE] HONG LIU: No, no, this isn't to [? hold ?] coupling. In coupling we mean that [INAUDIBLE]. So example we talk about, [? because one ?] [? and two ?] [INAUDIBLE]. Yeah, and then we make more precise. So in the [? toy ?] example we talked about before. So previously we talked about this example, N divided by lambda, trace, say 1/2 partial phi squared, plus 1/4 phi to the power 4. And strong coupling means the lambda large. Because of the N I've already factored out, so you're coupling just lambda. AUDIENCE: Oh, I see. HONG LIU: Yes. AUDIENCE: So in these [INAUDIBLE] the propagator in that version would become the spacetime integration? HONG LIU: Hm? AUDIENCE: I was just wondering how the propagator can [? agree, ?] can match to the spacetime [INAUDIBLE]. HONG LIU: Yeah, yeah. So the propogator-- yeah, propagator you do in the standard way. You just write down your propagator, and then you try to repackage that. As the question, you said, whatever your rule, Feynman rule is we just do that Feynman rule. And you write down this expression. It's something very complicated. And then you say, can I find some geometric interpretation of that? Yeah, what I'm saying is that doing from this perspective is very hard because you don't know what thing to compare. And further, in the second, you expect that your [INAUDIBLE] would emerge only in those very complicated diagrams. And those complicated diagrams we don't know how to deal with. Because they only emerge in the strong coupling limit, but in the strong coupling limit, we don't know how to deal with that. And so that's why it's also difficult. But [? nevertheless, ?] for some very simple theories, say, if you don't consider the Yang-Mills theory, you don't consider the gauge theory. But suppose you do consider some matrix integrals. Say, for very simple systems, like a matrix integral. So this structure emphasizes-- this structure only have to do with you have a matrices, OK? And then you can have matrix-valued fields [? or ?] this structure will emerge. Or you only have a matrix integral. So there no field at all, just have a matrix integral. That same structure will also emerge. For example. I can consider theory-- have a theory like this. Something like this. And have a theory like this, OK? And M is just some [INAUDIBLE] matrices. So this is just integral. And the same structure will emerge, also, in this series when we do large N expansion. So that structure have nothing to do-- yeah, you can do it. 
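For reference, the two toy examples just mentioned can be written out in LaTeX; the kinetic and quartic terms follow the lecture, while the spacetime measure and the coupling g in the pure matrix integral are filled in here as plausible choices rather than the exact board expressions.
\[
S[\Phi]=\frac{N}{\lambda}\int d^dx\;\operatorname{tr}\Big(\tfrac12(\partial\Phi)^2+\tfrac14\Phi^4\Big),
\]
so once the overall N is factored out, the only coupling left is the 't Hooft coupling \(\lambda\), and strong coupling simply means large \(\lambda\). The matrix-integral analogue has no spacetime at all,
\[
Z=\int dM\; e^{-N\operatorname{tr}\left(\tfrac12 M^2+\tfrac{g}{4}M^4\right)},\qquad M\ \text{an }N\times N\ \text{matrix},
\]
and the same double-line counting organizes its diagrams by genus.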
So matrix integral is much simpler than [INAUDIBLE] field theory because you have much less degrees freedom. So for simple systems like, say, your matrix integral or matrix quantum mechanics, actually, you can guess the corresponding string theory. Because also the string theory in that case is also very simple. You can guess where is simple string theory. But it's not possible for field theory. It's not possible for field theory. Yes. AUDIENCE: So what do you mean by matrix quantum mechanics? Like that, OK. HONG LIU: So this is a matrix integral. And I can make it a little bit more complicated. So I make this M to depend on t, and then this become a matrix quantum mechanics. Say trace M dot squared plus M squared plus M4. Then this become a matrix quantum mechanics, because it only have time. And then I can make it more complicated. I can make M be t, x. Then this becomes one plus one dimension of field theory. AUDIENCE: So in what context is this matrix quantum mechanics [? conflicted? ?] HONG LIU: Just at some [? toy ?] model. I just say, and this is a very difficult question. You said, I don't know how to deal with field theories. Then this [? part of it's ?] a simple system. And then just try to use this philosophy, can see whether it can do it for simple system. And then you can show that this philosophy actually works if you do a matrix integral or matrix quantum mechanics. Simple enough, matrix integral and matrix quantum mechanics. OK. And if you want references, I can give you references regarding these. There's a huge, huge amount of works, thousands of papers, written on this subject in the late '80s and early '90s. So those [? toy ?] examples just to show actually this philosophy works. I just showed this philosophy works, OK? But it's not possible if we want to go to higher dimensions. Actually, there's one paper-- let me just write it here. So this one paper explains the philosophy. So here I did not gave you many details, say, how you write this G down, how you in principle can match with this thing. With [? another ?] maybe [INAUDIBLE] you can make this discussion a little bit more explicit, but I don't have time. But if you want, you can take a look at this paper. So this paper discusses the story for the matrix quantum mechanics. But in the section 2 of this paper-- so this is a paper by Klebanov. So in the section 2 of this paper, it explains this mapping of Feynman diagrams to the string action. And this discretization picture give you a nice summary of that philosophy with more details than I have given to you. So you can take a look at that. And this paper also has some other references if you want to take a look at it. OK. Any questions? Yes. AUDIENCE: Sorry, but who was the first to realize this connection between the surfaces in topology of Feynman diagrams? HONG LIU: Sorry? AUDIENCE: Who first realized this relation between topology and-- HONG LIU: So of course, already when 't Hooft invented this large N expansion, he already noticed that this is similar to string theory. So he already commented on that. And he already commented on that. And for many years people did not make progress. For many years, people did not make progress. But in the late '80s-- in the mid to late '80s, people started thinking about the question from this perspective, not from that perspective. So they started to think about the order from this perspective. Because just typical string theory are hard to solve, et cetera. 
So people think, maybe we can actually understand or generalize our understanding of string theory by discretize the worldsheets. And then they just integrate over all possible triangulization, et cetera. And then they realized that that thing actually is like something over Feynman diagrams. And then for the very simple situations, say like if you have only a matrix integral, actually you can make the connection explicit. So that was in the late '80s. So people like [? McDowell ?] or [? Kazakov ?] et cetera that were trying to explore that. Other questions? AUDIENCE: I'm having trouble seeing how the sum over all triangulations [INAUDIBLE] each surfaces. How does that correspond to the discrete version of summing over all [INAUDIBLE]? HONG LIU: Right. AUDIENCE: That's the discrete sum over all possible [? genus-h, ?] right? HONG LIU: Yeah. I think this is the example. Yeah, let's consider torus. So a torus is a box with this identified with this, and this identified with that. OK. And let me first just draw the simplest partition here. Just draw like that. Yeah. Let me just look at these two things. So suppose I give each box-- so if I specify each box, say, give a unit area. OK? And I do this one, I do that one, or I do some other ways to triangulize it. Then because [? one and two ?] give a different symmetric to the surface. And then because [? one and two ?] integrate over all possible metric on this surface. And they integrate over all possible metric on this surface, you can integrate [INAUDIBLE] all possible surfaces. AUDIENCE: In the case of the strings for example, [? we put some ?] over the torus here and the torus and the torus there. HONG LIU: No, no. You only sum over a single torus. Now, what do you mean by summing over torus here, torus there? AUDIENCE: I thought like in the path integral, in the case of the string theory-- HONG LIU: No, you're only summing over a single torus. You're only summing over a single surface, but all possible ways to write-- all possible ways to draw that surface. So what you said about summing torus here, summing torus there, because [INAUDIBLE] what we call the disconnected amplitudes. And then you don't need to consider them in physically disconnected amplitude. You can just [? exponentiate ?] what we call by connected amplitude. And you don't need to do that separately. So once you know how to do a single one, and the disconnected one just automatically obtained by [? exponentiation. ?] AUDIENCE: [INAUDIBLE] HONG LIU: Sorry? No, no. Here the metric matters, the geometry matters. It's not just the topology. AUDIENCE: [INAUDIBLE] Feynman diagram [INAUDIBLE]? HONG LIU: Yeah. Yeah, just the key is that the propagator of the Feynman diagram essentially [? encodes ?] the geometries. And in encoding a very indirect way. Yeah. Just read this part. This section only have a few pages, but contain a little bit more details on what I have here. It requires maybe one more hour to explain this in more detail. Yeah, this is just that. I just want to explain this philosophy. I don't want to go through the details of how you do this. OK, good. So now let me just mention a couple of generalizations. So the first thing you already asked before, I think maybe both you have asked. Let me just mention them quickly. And if you are interested, I can certainly give you a reference for you to read about them, or I can put it in [? your P ?] sets. And so, so far, it's all matrix-valued fields, OK? 
But if you can see the theory-- or in other words, in the mathematical language, say, it's an adjoint representation. It's an adjoint representation of the-- because our symmetries are UN, it's a UN gauge group. OK? UN gauge group. But you can also, for example, in QCD, you also have quarks. So you also have field in the fundamental representations. So it can also include field in the fundamental representation. So rather than matrix-valued, they're N vector. OK, they're N [? vectors. ?] So for quarks, of course, for the standard QCD N will be 3, so you have three quarks. You have three different colored quarks. And so then your Feynman diagrams, in addition to have those matrix [? lines, ?] which you have a double line. And now here you only have a single index, OK? And then you only have a single line. So the propagator of those quarks will just have a single line. And then also in your Feynman diagram you can have loops over the quarks, et cetera. So you can again work this out. And then you find it is a very nice large N expansion. And then you find the diagrams, the Feynman diagrams. Now you find in this case the Feynman diagrams can be classified by 2D surfaces with boundaries. So essentially, you have-- and let me just say, for example, this is the vacuum diagrams, for all the vacuum process. Then you can [INAUDIBLE] or the vacuum diagrams. And then they can all be [? collectified. ?] So previously, we have a matrix-valued field. Then all your vacuum diagrams, they are corresponding closed surfaces-- so sphere, torus, et cetera. But now if you include the quarks, then those surfaces can have boundaries. And then [INAUDIBLE] into the quark groups, et cetera. And then they [? cannot ?] be classified. And so these also have a counterpart if you try to map to the string theory. So this [INAUDIBLE] [? one and ?] [? two, ?] string theory. There's string theory with both closed and open strings. And so essentially those boundaries give rise to the open strings. So here, it's all closed strings. It's all closed surface. Well, now you can, by adding the open strings, and then you can, again, have the correspondence between the two. OK. So all the discussion is very similar to what we discussed before. We just apply all this the same philosophy to the quarks. Yes. AUDIENCE: [INAUDIBLE] do the same trick on string theory and find some sort of expression which then will map to some higher order surfaces, [INAUDIBLE]? HONG LIU: Sorry, say that again? AUDIENCE: [INAUDIBLE] Feynman diagrams we move to string theory for surfaces. Is there some [INAUDIBLE] from surfaces just they go one more [? step up? ?] HONG LIU: You mean higher dimensions, not strings. Yeah, that will become-- of course, that's a [? lateral ?] idea. So that will [INAUDIBLE] you can consider [? rather ?] strings, you can consider two-dimensional surface, a two-dimensional surface moving in spacetime. And then [INAUDIBLE] into [? so-called ?] the membrane theory. But let's say where it turns out to be-- turns out string is a nice balance. It's not too complicated or not too simple. And it give you lots of structure. But when you go to membrane, then the story become too complicated, and nobody knows how to quantize that theory. So the second remark is that here we consider UN. So here our symmetry group is UN. Because our phi-- phi there is [? commission. ?] So when you have a [? commission ?] matrix, then there's a difference between the two indices, so we put one up and one down. 
So they are propagators that lead to-- so it leads to the lines with arrows, because we need to distinguish upper and lower indices. OK? Between the two indices. But you can also consider, for example, phi is a symmetric matrix. Say it's a real symmetric matrix. It's a real symmetric, or real anti-symmetric. In those cases, then there's no difference between the two indices. And then when you draw a propagator-- so in this case the symmetry group would be, say, SON, say, or SPN, et cetera. And then the propagators, they will no longer have orientations. OK? They will no longer have orientations. Because you can no longer-- yeah. So this will give rise-- so let me write it closer. So this will give rise to unorientable surfaces. Say, for example, to classify the diagrams, you can no longer just use the orientable surfaces. You also have to include the non-orientable surfaces to classify the diagrams. And the [INAUDIBLE] this also have a precise counterpart into unorientable strings. No, non-orientable strings. Yeah, I think non-orientable, non-orientable surfaces. Also non-orientable strings. Good. So I'm emphasizing how difficult it is if, say, we want to start with QCD and then try to find the string theory description. But this still, [? none of ?] this tries-- I just try. OK, so let's just consider, just take large N generalization of QCD. So this, again, will be some UN gauge theory, UN Yang-Mills theory, say, in 3 plus 1 dimensional Minkowski spacetime. And can we say anything about its string theory description? So [INAUDIBLE]. So maybe it's difficult, but let's try to guess it. OK. So in physics, in many situations, a seemingly difficult problem, if you know how to guess it, actually you can get the answer. On, for example, quantum hole effects, fractional quantum hole effects, you can just guess the wave function. So of course, the simplest guess-- so this is some gauge theory in 3 plus 1 dimensional Minkowski spacetime. So now we say this is a string theory. So natural guess is that this maybe is a string theory, again, in the 3 plus 1 dimensional Minkowski spacetime. OK? So we just take what-- so these will, of course, run into a string, propagating in this spacetime, OK? As I said, when you write down the string theory, you first have to specify your target space, which, as the string moves, the larger question would be just, should it be the gauge theory's Minkowski spacetime. Maybe this string theory should be. OK? And then this. Then you can just try to-- then you can just write down the simplest action. So maybe say Nambu-Goto action, which we wrote last time, OK? Or the [? old ?] Polyakov action. So this Nambu-Goto action will result [INAUDIBLE] Polyakov. And let me not worry about that. For example, you can just guess, say, maybe this is a string theory also in the Minkowski spacetime. Say, consider the simplest action. Or the equivalent of this, OK? Then at least what you could try-- now you actually have an action. Now you think that you have this object. Now you think you can compare. OK, now you can essentially compare. Say, in QCD you calculated your Feynman diagrams, and now just compare. But of course, you still have the difficulty. Of course, you have to go to strong coupling to see the geometric limit, et cetera. But in principle, it's something you can do. But this actually does not work. OK? This does not work, for the following simple reason. 
Firstly, that such a string theory-- so a string theory, actually the remarkable thing about the string is that if you have a particle, you can put the particle in any spacetime. But strings are very picky. You cannot put them in any spacetime. And they can only propagate consistently, quantum mechanically consistently, in some spacetime but not in others. So for example, if you want to put the string to propagate in this 3 plus 1 dimensional Minkowski spacetime, then you actually find that the theory is mathematically inconsistent. So such a string theory is inconsistent. It's mathematically inconsistent. Except for the D equal to 26 or 10. OK? So 26 if you just purely have the theory, and 10 if you also add some fermion. So such a string theory does not exist mathematically. So you say, oh, OK. You say, I'm a smart fellow. I can go around this. Because we want the Minkowski spacetime. Because those gauge theory propagating the Minkowski spacetime, so this Minkowski [INAUDIBLE] must be somewhere. They cannot go away, because all these glueballs [INAUDIBLE] in this 3 plus 1 dimensional Minkowski spacetime. And if we want to identify the strings with those glueballs, those strings must at least [? know ?] some of this Minkowski spacetime. And then you say, oh, suppose you tell me that this string theory is only consistent in 10 dimension. But then let me take a string theory in 10 dimensions, which itself consistent. But I take this 10-dimensional spacetime to have the form of a 3 plus 1 dimensional Minkowski spacetime. And the [? time, ?] some compact manifold, OK? Some compact manifold. And in such case-- so if this is a compact manifold, then the symmetry of this spacetime, so the spacetime symmetry still only have the 3 plus 1 dimensional, [? say, ?] Poincare symmetry. Because if you want to describe the QCD in 3 plus 1 dimension, QCD has the Poincare symmetry. You can do Lorentz transformation, and then you can do rotation. Or you can do translation. The string theory should not have more symmetries or less symmetries than QCD. They should have the same symmetries because they are supposed to be the same description. But if you take the 10-dimensional Minkowski space, of course, it's not right. Because the 10-dimensional Minkowski space have 10-dimensional translation and 10-dimensional Lorentz symmetry. But what you can do is that you take this 10-dimensional space to be a form of the 3 plus 1 dimensional Minkowski spacetime and times some additional compact manifold, and then you have solved the symmetry problem. But except this still does not work because the string theory, as we know, always contain gravity. And if you put a string theory on such a compact space N, [? there would be ?] always leads to a massless spin-2 particle in this 3 plus 1 dimensional part. But from Weinberg-Witten theorem we talked in the first lecture, in the QCD you are not supposed to have a 3 plus 1 dimensional massless spin-2 particle, OK? And so this won't work. So this won't work. Because this contains-- In 3 plus 1 dimensional [? Minkowski space, ?] which does not have-- OK? Or in the large N [INAUDIBLE]. So this does not work. So what to do? Yes? AUDIENCE: So does this just mean that it's mathematically inconsistent? HONG LIU: No, no. This does not mean it is mathematically inconsistent. It just means this string theory cannot not correspond to the string theory [? describe ?] QCD. The string theory description-- the equivalent string theory for QCD cannot have this feature. 
Yeah, just say this cannot be the right answer for that string theory. This string theory is consistent. Yes. AUDIENCE: So is that you were saying if there is a massless spin-2 particle in that string theory, there has to be a [? counterpart in the ?] QCD. HONG LIU: That's right. AUDIENCE: If there is not a [INAUDIBLE], that won't work HONG LIU: Yeah. This cannot be a description of that. From Weinberg-Witten theorem, we know in QCD there's no massless spin-2 particle. Yes. AUDIENCE: I thought we have talked about maybe we can do strings to [? find ?] QCD in a different dimension [? in ?] space. HONG LIU: We will go into that. But now they are in the same dimension, because this Minkowski 4, this will have-- because this is a compact [? part, ?] it doesn't matter. So in this part, [? there are ?] massless spin-2 particles. This does not [? apply ?] in QCD. So what can you do? So most people just give up. Most people give up. So other than give up, the option is say maybe this action is too simple. Maybe you have to look at more exotic action. OK. So this is one possibility. And the second possibility is that maybe you need to look for some other target space. OK. But now, what if you go away from here? Once you go away from here, everything else is now becoming such little in the ocean, because then you don't have much clue what to do. We just say, your basic guess just could not work. So for many years, even though this is a very intriguing idea, people could not make progress. But now we have hindsight. But now we have hindsight. So we know that even this maybe cannot be described by a four-dimensional-- so even though this cannot have a-- so this cannot have a massless spin-2 particle in this 3 plus 1 dimension of Minkowski spacetime. Maybe you can still have some kind of graviton in some kind of a five-dimensional spacetime. You have some five dimensions, in a different dimension. So there were some rough hints. Maybe you can consider there's a five-dimensional string theory. So let me emphasize when we say five or four, I always mention the non-compact part. So the compact part, it doesn't count because compact part just goes for the ride. What determines the properties, say, of a massless particle, et cetera, is the uncompact of the spacetime. Yeah, because this is a 10-dimensional spacetime. This is already not [INAUDIBLE]. So maybe we [? change ?] for string theory in five-dimensional uncompact. AUDIENCE: Five, so in 4 plus 1? HONG LIU: Yeah. In 4 plus 1 uncompact spacetime. Yes. AUDIENCE: [INAUDIBLE] compactors. When you say compact, do you mean the mathematical definition of compactness? HONG LIU: Yeah, that's right. Yeah, I just say there is a finite volume. Just for our purpose here, we can do it simply. Just let's imagine-- yeah, compact always has a finite volume, for example. Yes? AUDIENCE: Why can we just ignore the compact dimensions? Is there any condition on how big they're allowed to be or something, like limit? HONG LIU: Yeah, just when you have-- so if you know a little bit about this thing called the Kaluza-Klein theory. And you know that the compact part-- the thing is that if you have a theory [? based ?] on uncompact and the compact part, and then most of the physical properties is controlled by the physics of uncompact parts. And this will determine some details like the detailed spectrum, et cetera. But the kind of thing we worry about, whether you have this massless spin-2 particle, et cetera, will not be determined by this kind of thing. 
AUDIENCE: Is there any volume limit on the compact part, like maximum? HONG LIU: No, it's fine to have a finite volume. AUDIENCE: Just finite, but can it be large? HONG LIU: No matter how large, this have infinite. It's always much smaller than this one. Yeah, but now it's just always relative. It's always relative. Yes. AUDIENCE: Tracking back a little, is there any quick explanation for 26 and 10 are special, or is it very complicated? HONG LIU: Um. [LAUGHTER] No, it's not complicated. Actually, we were going to do it in next lecture. Yeah, next lecture we will see 26, but maybe not 10. 10 is little bit more complicated. Most people voted for my option one, so that means you will be able to see the 26. Right. AUDIENCE: Who [? discovered ?] 26 and 10? I mean, they are specific for this [INAUDIBLE] action rate, so for other action would be something else. HONG LIU: Specifically for the Nambu-Goto action is 26. And for the 10, you need to add some additional fermions and make it into a so-called superstring, then become 10. And even this 26 one is not completely self-consistent. And anyway, there's still some little, tiny problems with this. Anyway, so normally we use 10. OK so now, then there's some tantalizing hints for the-- say, maybe you cannot do it with the 3 plus 1 dimensional uncompact spacetime. Maybe you can do a 4 plus 1 dimensional uncompact. So the first is the holographic principle, where you have length. Holographic principle we have learned because there we say, if you want to describe a theory with gravity, then this gravity should be able to be described by something on its boundary. And the string theory is a theory with gravity. So if the string theory should be equivalent to some kind of QCD, some kind of gauge theory without gravity, and then from holographic principle, this field theory maybe should be one lower dimension, OK? In one lower dimension. Is the logic here clear? AUDIENCE: Wait, can you say that again? HONG LIU: So here we want to equate large N QCD with some string theory. But string theory we know contains gravity. A list of all our experience contain gravity. But if you believe that the gravity should satisfy holographic principle, then the gravity should be equivalent, according to holographic principle, gravity in, say, D dimensional spacetime can be described by something on its boundary, something one dimension lower. AUDIENCE: But I thought the holographic principle was a statement about entropy. HONG LIU: No, it's a state started from a statement about entropy. But then you do a little bit of leap. So what I call it little bit of a conceptual leap is that the-- or [? little ?] leap of faith is that you promote that into the statement that said the number of degrees freedom you needed to describe the whole system. Yeah, so the holographic principle is that for any region, even the quantum gravity theory, for any region, you should be able to describe it by the degrees of freedom living on the boundary of that region. And degrees freedom living on the boundary of that region, then it's one dimensional lower. AUDIENCE: Wait, so can I ask one question about that? If I have some region, some volume in space, some closed ball or something. And I live in a universe which is, for example, a closed-- like maybe they live on some hypersphere or something like this. Then how do I know whether I'm-- how do I know that the information is encoded? How do I know whether I'm inside the sphere or outside of the sphere? 
For example, we see that the entropy that has to do with the sphere basically tells you about how much information can you contain inside the sphere. But if you live in a universe which is closed or something, then you don't know whether you're inside or outside the sphere. HONG LIU: Yeah, but that's a difficult question. Yeah, if you talk about closed universe here, we are not talking about closed universe. AUDIENCE: I see. HONG LIU: Yes. AUDIENCE: I thought the holographic principle is that the number of degrees freedom inside the region is actually bounded by the area. HONG LIU: Right, it's bounded by-- AUDIENCE: Yeah, but why is it that we use that degree of freedom living on the boundary? HONG LIU: There are several formulations of that. First is that the total number of degrees freedom in this region is bounded by the area. And then you can go to the next step, which is maybe the whole region can be just described by these degrees of freedom living on the boundary on that region. AUDIENCE: Is that because, say, the state of density on the boundary [INAUDIBLE] the state on the boundary is proportional to the area of the boundary? HONG LIU: Yeah. Exactly. That's right. AUDIENCE: So here our goal is to recover the large N theory in 3 plus 1 dimensions without gravity. So we have no gravity. You can't 3 plus 1. HONG LIU: Right. Yeah, so if that is supposed to be equivalent to the gravity theory, and the gravity [? theory ?] to find the holographic principle, and then the natural guess is that this non-gravitational field theory should live in one dimensional lower. OK? So this is one hint. And the second is actually from the consistency of string theory itself. So this is a little bit technical. Again, we will only be able to explain it a little bit later, when we talk about more details about string theory. You can [? tell, ?] even though the string theory in this space is inconsistent. But there's a simple way. This is-- it's not a simple way. So what's happening is the following. So if you consider, say, a string propagating in this spacetime, and there are some symmetries on the worldsheet. And only in the 10 and 26 dimension, those symmetries are satisfied quantum mechanically. And in other dimensions, those symmetries, somehow, even though classically it's there, but quantum mechanically it's gone. And those symmetries become-- because they are gone quantum mechanically, then it leads to inconsistencies. And it turns out that there's some other way you can make that consistent, to make that symmetry still to be valid, is by adding some new degrees of freedom. OK? It's just there's some new degrees freedom dynamically generated. And then that new degrees freedom turned out to behave like an additional dimension. OK. Yeah, this will make no sense to you. I'm just saying a consistency of string theory actually sometimes can give you one additional dimension. AUDIENCE: What is the difference between these inconsistencies, talking about anomalies and-- HONG LIU: It is anomalies. But here it's called gauge anomalies. It's gauge anomalies is at the local symmetry anomalies, which is inconsistent. AUDIENCE: So just-- maybe this is not the time to ask this-- but are the degrees of freedom that you need to save you from this inconsistency problem. So do they have to be extra dimensions of space? Or what I'm saying is that if we need to do string theory in 10 dimensions, is it really four dimensions plus six degrees of freedom? Or are they actually six bona fide spatial dimensions? 
HONG LIU: Oh, this is a very good question. So if you have-- yeah, this something we would be a little bit more clear just even in-- oh, it's very late. Even the second part of this lecture is that here you have four degrees of freedom, you have six degrees of freedom. But turns out, if you only consider this guy, then this four degrees freedom by itself is not consistent. It's [? its own ?] violation of the symmetry at the quantum level. And then you need to add more, and then one more, because of course one and two have extra dimension. Anyway, we can make it more explicit in next lecture. Here I just throw a remark here. Anyway, this guy-- this is purely hindsight. Nobody have realized this point, this first point, nobody have realized it before this holographic duality was discovered. Nobody really made this connection. And at this point, saying there should be a five-dimensional string theory describing gauge theory, that was made just before the discovery. I will mention that a little bit later. Anyway, so now let's-- let me just maybe finish this, and we have a break. So now let's consider-- suppose there is a five-dimensional spacetime, string theory in some five-dimensional spacetime, say 4 plus 1 dimensional spacetime that describes QCD. Then what should be the property of this Y? So this Y denotes some manifold Y. OK? So as I mentioned, it must have at least all the symmetries of the QCD, but not more. Should have exactly the same amount of symmetries. So that means it must have the translation and Lorentz symmetries of QCD. OK? So that means the only metric I can write down must be of this form. The only metric I can write down, the metric must be have this form. So this az just some function. And z is the extra dimension to a Minkowski spacetime. And this is some Minkowski metric for 3 plus 1 dimension. AUDIENCE: You mean it's like a prototype to four dimension, we have to get the Minkowski space. HONG LIU: Yeah. Just say whatever this space, whatever is the symmetry of this-- so the symmetry of this spacetime must have the Poincare-- must have all the symmetry of the 3 plus 1 dimensional Minkowski spacetime. Then the simplest way, you're saying that the only way to do it is just you put the Minkowski spacetime there as a subspace. And then you have one additional space, and then you can have one additional dimension. And then, because you have to maintain the symmetries and [INAUDIBLE] to be thinking then you can convince yourself that the only additional degrees freedom in the metric [INAUDIBLE] is the overall function. So the function of this z, and nothing else. OK. AUDIENCE: Can that be part of kind of a scalar in Minkowski space? HONG LIU: Yeah. Let me just say, this is most general metric, consistent with four dimensional, 3 plus 1 dimensional, Poincare symmetries. AUDIENCE: Why this additional dimension always in a space part? Can it be in a time-like part? Like a 3 plus 2? HONG LIU: Both arguments suggest it's a space part. So because this is just the boundary of some region there's a spatial dimension [? reduction ?], not time. So is this clear to you? Because you won't have a Minkowski spacetime, so you must have a Minkowski here. And then in the prefactor of the Minkowski, you can multiply by anything, any function, but this function cannot depend on the X. It can only depend on this extra dimension. Because if you have anything which depend on capital X, then you have violated the Poincare symmetry. You have violated the translation [? X. ?] 
So the only function you can put before this Minkowski spacetime is a function of this additional dimension. And then by redefining this additional dimension, I can always put this overall factor in the front. Yeah, so this tells you that this is the most general metric. OK? So if it's not clear to you, think about it a little bit afterwards. So these are the most as you can do. So that's the end. So you say, you cannot determine az, et cetera. So this is as most you can say for the QCD. But if the theory, if the field theory is scale invariant, say, conformal field theory, that normally we call CFT, OK? So conformal field theory. Then we can show this metric. So let me call this equation 1. Then 1 must be [INAUDIBLE] spacetime. AUDIENCE: [INAUDIBLE] symmetry on the boundary as well, [INAUDIBLE]? HONG LIU: Yeah, I'm going to show that. So if the field theory is scale invariant, that means that the fields theory have some additional symmetry, should be satisfied by this metric. And then I will show that this additional scaling symmetry will make this to precisely a so-called anti-de Sitter spacetime. AUDIENCE: Field theory, and then the 3 plus 1. HONG LIU: Yeah. Right. If the field theory, say the-- QCD does not have a scale. It's not scaling right, so I do not say a QCD anymore. Just say, suppose some other field theory, which have large N expansion, which is also scale invariant. And then the corresponding string theory must be in anti-de Sitter spacetime. AUDIENCE: Are we ever going to come back to QCD, or is that a-- HONG LIU: No, that's it. Maybe we'll come back to QCD, but in a somewhat indirect way. Yeah, not to your real-life, beloved QCD. AUDIENCE: So no one's solved that problem still? HONG LIU: Yeah, no one's solved that problem yet. So you still have a chance. So that remains very simple. So let me just say, then we will have a break. Then we will be done. I think I'm going very slowly today. So scale invariant theory-- is invariant under the scaling for any constant, constant lambda. So scale invariant theory should be invariant under such a scaling. And then now we want to require this metric also have this scaling. OK? So now, we require 1 also have such scaling. That's scaling symmetry. OK, so we just do a scaling X mu go to lambda X mu. And then this term will give me additional lambda squared. So we see, in order for this to be the same as before, the z should scale the same, OK? So in order for this to be-- so we need z to scale as the same, in order I can scale this lambda out. After I scale this lambda out, I also need that a lambda z should be equal to 1 over lambda az, OK? So the scaling symmetry of that equation requires these two conditions. So on the scaling of z, this a lambda z should satisfy this condition. Then the lambda will cancel. So this condition is important because we did scale them homogeneously. Otherwise, of course, lambda will not drop out. And the second condition just makes sure lambda is canceled. OK, is it clear? So now this condition just determined that az must be a simple power, must be written as R divided by z. See, R is some constant. And now we can write down the full metric. So now I've determined this function up to our overall constant. So the full metric is dS square equal to R squared divided by z squared dz squared plus eta mu, mu, dX mu, dX mu. And this is precisely AdS metric, written in certain coordinates. And then this R, then you adjust the curvature radius of AdS. 
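Collecting the scaling argument into formulas--this is just a transcription of the steps above, and "Poincare coordinates" is the standard name for the final form:
\[
ds^2=a^2(z)\big(dz^2+\eta_{\mu\nu}\,dx^\mu dx^\nu\big),\qquad
x^\mu\to\lambda x^\mu,\quad z\to\lambda z,\quad a(\lambda z)=\tfrac{1}{\lambda}\,a(z),
\]
which forces \(a(z)=R/z\) for some constant R, so that
\[
ds^2=\frac{R^2}{z^2}\big(dz^2+\eta_{\mu\nu}\,dx^\mu dx^\nu\big),
\]
the AdS metric in Poincare coordinates, with R the curvature radius; the same argument goes through for any boundary dimension d.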
So if you don't know about anti-de Sitter spacetime, it doesn't matter. So this is the metric, and the name of this metric is anti-de Sitter. And later we will explain the properties of the anti-de Sitter spacetime. So now we find, so now we reach a conclusion, is that if I have a large N conformal field theory in Minkowski D-dimensional space, time. So this can be applied to any dimensional. It's not necessary [? to be ?] 3 plus 1. In D-- so this, if it can be described by a string theory, should be string theory in AdS d plus 1. And in particular, the 1/N here is related to the g strings here, the string coupling here. So this is what we concluded. Yes? AUDIENCE: So all we've shown is that there is no obvious inconsistency with that correspondence. HONG LIU: What do you mean there's no obvious? AUDIENCE: As in, we didn't illustrate any way that they-- HONG LIU: Sure, I'm just saying this is a necessary condition. AUDIENCE: Right, so at least that is necessary. HONG LIU: Yeah, this is a necessary condition. So if you can describe a large N CFT by our string theory-- and it should be a string theory-- yeah, this proposal works. This proposal passed the minimal test. AUDIENCE: I have a question. So when Maldacena presumably actually did figure this out, you said that this resulted from the holographic principle, like it was just figured out right before he did it. Was he aware of the holographic-- HONG LIU: No, here is what I'm going to talk. So Maldacena, in 1997, Maldacena found precisely-- in 1997, Maldacena found a few examples of this, precisely realized this. And not using this mass or using some completely indirect way, which we will explain next. So he found this through some very indirect way. But in principle, one could have realized this if one kept those things in mind. So now let me tell you a little bit of the history, and then we will have a break. Then we can go home. It depends on whether you want a break or not. Maybe you don't want a break. Yeah, let me tell you a little bit of history. So yeah, just to save time, let me not write it down, just say it. So in the late '60s to early '70s, so string theory was developed to understand strong interactions. So understanding strong interactions was the problem. At the time, people were developing string theory to try to understand strong interactions. So in 1971, our friend Frank, Frank Wilczek, and other people, they discover the asymptotic freedom. And they established the Yang-Mills theory as a description of strong interaction which now have our QCD. And so that's essentially eliminated the hope of string theory to describe QCD. Because the QCD seems to be very different. You [? need ?] the help of string theory to describe strong interaction because the QCD [INAUDIBLE] gauge theory, it's very different from the string theory. So people soon abandoned the string theory. So now we go to 1974. So 1974, a big number of things were discovered in 1974. So 1974 was a golden year. So first is 't Hooft realized his large N expansion and then realized that this actually looks like string theory. And then completely independently, Scherk, Schwarz, and [? Yoneya, ?] they realized that string theory should considered a theory of gravity, rather than a theory of strong interaction. So they realized actually-- it's ironic, people started doing string theory in the '60s and '70s, et cetera. But only in 1974 people realized, ah, string theory always have a gravity and should be considered a theory of gravity. 
Anyway, so in 1974, they realized the string theory should be considered as a gravity. So that was a very, very exciting realization, because then you can have [? quantum ?] gravity. But by that time, people had given up on string theory. So nobody cared about this important observation. Nobody cared about this important observation. So, also in the same year, in 1974, Hawking discovered his Hawking radiation. And they established that black hole mechanics is really a thermodynamics. Then really established that the black hole is a thermodynamic object, And in 1974 there's also a lot of important discovery-- which is related to MIT, so that's why I'm mentioning it-- is that people first really saw quarks experimentally, is that, again, our friend, colleague Samuel Ting at Brookhaven, which they discovered a so-called charmonium, which is a bounce state of the charm quark and the anti-charm quark. And because the charm quark is very heavy, so they form a hydrogen-like structure. So in some sense, the charmonium is the first-- you first directly see the quarks. And actually, even after the 1971, after asymptotic freedom, many people do not believe QCD. They did not believe in quarks. They say, if there's quarks, why don't we see them? And then in 1974, Samuel Ting discovered this charmonium in October. And so people call it the October Revolution. [LAUGHTER] Do you know why they laugh? OK. Anyway. Yeah. Yeah, because I saw your emotions, I think you have very good composure. Anyway, in the same year, in 1974, Wilson proposed what we now call the lattice QCD, so he put the QCD on the lattice. And then he invented, and then he developed a very beautiful technique to show from this putting QCD on the lattice that, actually, the quark can be confined through the strings. So the quarks in QCD can be confined through the strings. And that essentially revived the idea maybe the QCD can be a string theory, because the quarks are confined through the strings. And this all happened in 1974. So then I mentioned the same, in the late '80s and the early '90s, people were looking at these so-called matrix models, the matrix integrals, et cetera. Then they showed they related to lower dimensional string theory. But nobody-- yeah, they showed this related to some kind of lower dimensional string theory. And then in 1993 and 1994, then 't Hooft had this crazy idea of this holographic principle. And he said maybe, things about the quantum gravity can be described by things living on the boundary. And again, it's a crazy idea. Very few people paid attention to it. But the only person who picked it up is Leonard Susskind. And then he tried to come up with some sort of experiments to show that that idea is not so crazy. Actually, Susskind wrote a very sexy name for his paper. It's called "The World As a Hologram." And so that paper received some attention, but still, still, people did not know what to make of it. And then in 1995, Polchinski discovers so-called D-branes. And then we go to 1997. So in 1997, first in June, so as I said, that QCD may be some kind of string theory. This idea is a long idea, starting from the 't Hooft and large N expansion, and also from the Wilson's picture of confining strings from the lattice QCD, etc. But it's just a very hard problem. If from QCD, how can you come up with a string theory? It's just very hard. Very few people are working on it. So in 1997, in June, Polyakov finally, he said, had a breakthrough. He said that this consistent [? of ?] 
string theory give you one extra dimension, you should consider a five-dimensional string theory rather than a four-dimensional string theory. And then he gave up some arguments, anyway. And he almost always actually write down this metric And maybe he already wrote down this metric, I don't remember. Anyway, he was very close to that. But then in November, then Maldacena came up with this idea of CFT. And then he provided [? explicit ?] examples of certain large N gauge theories, which is scale invariant and some string theory in certain anti-de Sitter spacetime. And as I said, through the understanding of these D-branes. But even Maldacena's paper, he did not-- he was still thinking from the picture of large N gauge theory corresponding to some string theory. He did not make the connection to the holographic principle. He did not make a connection to the holographic principle. But very soon, in February 1998, Witten wrote the paper, and he made the connection. He said, ah, this is precisely the holographic principle. And this example, he said, ah, this example is precisely the holographic principle Susskind and 't Hooft was talking about. So that's a brief history of how people actually reached this point. So the next stage, what we are going to do is to try to derive [INAUDIBLE]. So now we can-- as I said, we have two options. We can just start from here, assuming there is CFT [? that's ?] equivalent to some string theory. And then we can see how we can develop this further. And this is one option we can take. And our other option is to really see how this relation actually arises from string theory. And many people voted for the second option, which in my [? email ?] is option one. So you want to see how this is actually deduced from string theory. So now we will do that, OK? But I should warn you, there will be some technicality you have to tolerate. You wanted to see how this is derived, OK? So we do a lot of [? 20 ?] minutes today? Without break? Good. OK. Yeah, next time, I will remember to break. OK. So now we are going to derive this. So first just as a preparation, I need to tell you a little bit more about string theory. In particular, the spectrum of closed strings, closed and open strings. And so this is where the gravity-- and from a closed string you will see the gravity, and from the open string, you will see the gauge theory. OK. We will see gravity and gauge theory. So these are the first things we will do. So the second thing we will do-- so the second thing we will do is to understand the physics of D-branes. So D-brane is some object in string theory. And it turned out to play a very, very special role, to connect the gravity and the string theory. OK. Connect the gravity and the string theory. Because this is the connection between the gravity and the string theory. And in string theory, this [? object will ?] deeply and precisely play this role, which connects the gravity and the string theory. So that's why you can deduce such a relation. OK. Yeah, so this is the two things we will do before we can derive this. So this is, say, the rough plan we will do before we can derive this gravity. So first let's tell you a little bit more about string theory. So at beginning, just say some more general setup of string theory. So let's consider a string moving in a spacetime, which I denote by M, say, with the metric ds squared equal to g mu mu. And this can depend on X, dX mu, dX mu. OK? So you can imagine some general curved spacetime. 
Say mu and nu will go from 0, 1, up to D minus 1. So D is the total number of spacetime dimensions for this M. So the motion of the string, as we have said quite a few times now, is the embedding of the worldsheet into the spacetime. So this is in the form of X mu of sigma, tau. OK, you parameterize the worldsheet by two coordinates. So I will also write it as X mu of sigma a, with sigma 0 equal to tau and sigma 1 equal to sigma, OK? And we will use this notation. So now imagine a surface embedded in some spacetime. And this is the embedding equation. Because if you know those functions, then you know precisely how the surface is embedded, OK? And because the original spacetime has a metric, this induces a metric on the worldsheet. And this induced metric is very easy to write down. You just plug this function in here. And when you take the derivative, you only vary sigma and tau, because that means you're restricted to the surface, where your only variables are sigma and tau. And then you can plug this into there. So you get the metric, which can then be written in this form, with sigma a and sigma b. OK? So remember, sigma a and sigma b are just tau and sigma. And this hab is just equal to g mu nu of X, partial a X mu, partial b X nu. OK? So this is trivial to see. Just plug this into there, vary with respect to sigma and tau, and you just get that. OK? Is it clear? So this Nambu-Goto action is the tension-- the tension we always write as 1 over 2 pi alpha prime-- times dA. So alpha prime has the dimension of length squared. So we often also write alpha prime as ls squared. So alpha prime is just a parameter to parameterize the tension of the string. So this area, of course, you can just write as d squared sigma-- so again, we use the notation d squared sigma for just d sigma d tau-- d squared sigma, square root of minus det h. OK. So this is just the area, because this is the induced metric on the worldsheet. Then you take the determinant, and that gives you the area. So this is the standard geometric formula. So now let me call this equation 1. So I had an equation 1 before, but this is a new chapter. OK. So this is the explicit form of this Nambu-Goto action. But this action is a little bit awkward, because it involves a square root. A square root is considered to be not a good thing in physics when you write down an action, because it's non-polynomial. We typically like polynomial things. Because the only integral we can do is the Gaussian integral, and the Gaussian is polynomial. So this is inconvenient, so one can rewrite it a little bit. So let me write down the answer. So we can rewrite it in a polynomial form. And this polynomial form is corresponding-- it's called the Polyakov action, so I call it SP, even though Polyakov had nothing to do with it. And this action can be written in the following form. Let me write down the answer. Then I will show the equivalence. AUDIENCE: Wasn't it invented by Leonard Susskind? HONG LIU: No, it's not Leonard Susskind. [INTERPOSING VOICES] AUDIENCE: Why is it called Polyakov-- HONG LIU: Polyakov-- yeah, actually Polyakov had something to do with it. Polyakov used it mostly [INAUDIBLE] first. OK, so you can rewrite it as that, in this form. And the gamma ab is a new variable introduced. It's a Lagrange multiplier. OK. So let me point out a few things. So this structure here is precisely just this hab. So if you look at this structure, this structure is precisely what I called hab.
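For reference, here is a compact sketch of the formulas being described on the board, in standard string-theory notation (the overall signs and the 1/(4 pi alpha prime) normalization are the conventional choices; treat them as my assumptions, since the boards are not in the transcript):

h_{ab} = g_{\mu\nu}(X)\,\partial_a X^\mu\,\partial_b X^\nu

S_{NG} = -\frac{1}{2\pi\alpha'}\int d^2\sigma\,\sqrt{-\det h_{ab}}\,,\qquad \alpha' = l_s^2

S_P = -\frac{1}{4\pi\alpha'}\int d^2\sigma\,\sqrt{-\gamma}\;\gamma^{ab}\,\partial_a X^\mu\,\partial_b X^\nu\,g_{\mu\nu}(X)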
So now the claim is the following. In addition to the original variable, which is just X, I now introduce a new variable, gamma. And gamma is like a Lagrange multiplier, because there's no kinetic term for gamma. So if I eliminate gamma, then I will recover this. OK, so this is the claim. So now let me show that. This is very easy to see. You just do the variation of gamma, the variation of gamma ab. OK. So whenever I write this with the indices upstairs, it always means the inverse. OK, this is the standard notation for the metric. So if you look at the equation of motion [INAUDIBLE] by variation of this gamma ab-- just do the variation of that action-- then you find the equation of motion for gamma ab is given by the following. So hab, just that guy. And lambda is an arbitrary constant, or lambda is an arbitrary function. So this I'm sure you can do. You just do the variation. You find that equation. So now we can just verify this actually works. When you substitute this into here, OK, into here-- so this gamma ab, when you take the inverse, the inverse hab contracted with this hab just gives you 2. And that 2-- did I put that 2 in the right place? That gives you 2. Yeah, I'm confused about the 2 now. Oh, no, no, it's fine. Anyway, so this contracted with that-- so gamma ab contracted with hab gives you 2 divided by lambda, times 2. OK? Because you just invert this guy, and invert the lambda and the 2. And then square root of minus gamma gives me 1/2 lambda, square root of minus h. OK? So sometimes I also abbreviate-- I will not write this determinant explicitly. When I write [? minus h, ?] it means the determinant of h. And minus gamma, the determinant of gamma. OK? So you multiply these two together, so these two cancel. And this 2 multiplies this 4 pi alpha prime, and then you get back that, OK? So they're equivalent. Clear? So this gives you S NG. So now the key-- so now if you look at this form, this really has a polynomial form for X, OK? So now let me call this equation 2. So equation 2, if you look at that expression, just has the form-- so this is just like a two-dimensional field theory-- has the form of a two-dimensional scalar field theory in a curved spacetime. Of course, the curved spacetime is just our worldsheet, sigma, with metric gamma ab, OK? So this is just like-- but the key here-- so sometimes 2 is called the nonlinear sigma model. Just traditionally, a theory of the form of equation 2 is called the nonlinear sigma model. Nonlinear because typically this metric can depend on X, and the dependence on X is nonlinear. So it's called the nonlinear sigma model. But the key here is that both gamma ab and X are dynamical. They are dynamical variables. So that means when you do the path integral-- so in the path integral quantization, you need to integrate over all possible gamma ab and all possible X mu. Not only integrate over all possible X mu, but also integrate over all possible gamma ab with this action. OK. So this is a two-dimensional [? worldsheet ?] with some scalar fields. And you integrate over all possible metrics, over all possible intrinsic metrics on that [? worldsheet. ?] So this can also be considered as 2D gravity-- two-dimensional gravity-- coupled to D scalar fields. So now we see that when you rewrite everything in this polynomial form, in this Polyakov form, the problem of quantizing the string becomes the problem of quantizing two-dimensional gravity coupled to D scalar fields. OK.
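To recap the equivalence argument just given, here it is in symbols (a minimal sketch; lambda is the arbitrary local factor mentioned above, and the factors are fixed by the 1/(4 pi alpha prime) normalization assumed earlier):

\frac{\delta S_P}{\delta\gamma^{ab}} = 0 \;\Longrightarrow\; \gamma_{ab} = \lambda(\sigma,\tau)\,h_{ab}

\sqrt{-\gamma} = \lambda\sqrt{-h}\,,\qquad \gamma^{ab}h_{ab} = \frac{2}{\lambda} \;\Longrightarrow\; \sqrt{-\gamma}\,\gamma^{ab}h_{ab} = 2\sqrt{-h}

\Longrightarrow\; S_P = -\frac{1}{2\pi\alpha'}\int d^2\sigma\,\sqrt{-h} = S_{NG}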
So this may look very scary, but it turns out two-dimensional gravity is actually very simple. So it's actually not scary. So in the end, for many situations, this just reduces to, say, quantizing scalar fields, with a little bit of subtleties. So yeah, let's stop here.
MIT_8821_String_Theory_and_Holographic_Duality_Fall_2014
1_Emergence_of_Gravity.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HONG LIU: OK, let us start. Welcome, everybody. So first, let me just say a few general things regarding the logistics of this class. So you should have these three handouts. One, organization. One, outline. And then, one is for PSET 1. So let me just go over briefly this organization and the outline. So firstly, in this class there's no textbook. However, there are many reviews available. But unfortunately, no single review is suitable for the whole course. Yeah. So I have listed some of the reviews at the end of this organization. Yeah. So along the way, I will point out some specific references or some specific parts of certain reviews if they are directly relevant. And also, for this class there's no recitation. Yeah, for advanced courses like this, the department does not assign a TA. So we don't have recitation. So I have my own office hour, which is Monday 3:30 to 4:30. Just one hour before the lecture on Monday. Yeah. So if you have questions or if you have-- yeah, just feel free to come. So if you have some questions at another time, say, if this office hour is not good for you, I'm also happy to set up some other time to meet you. Or, you can stay after the class to ask questions, et cetera. Yeah. So whichever way is more convenient for you. So today, we have this-- yeah, so from now on, everything will be just on the web. You should download the future PSETs. Or if I put up any lecture notes or materials, you should find them on the web. And I assume you can all find the website of the course. And it's also listed here in the organization page. So any questions so far? Good. So in this class there's no exam. And the grades are solely based on PSETs and the final project. And the PSETs will be due every two weeks. So if you look at the calendar, there are about 5 PSETs. Then, for the last three weeks, instead of, say, handing in a PSET, you'll hand in a small paper. Just whatever topic you would like to choose about the holographic duality. Yeah, just choose one topic. Write, say, an 8 to 10-page paper. Either a review or some calculation you would like to do, et cetera. Yeah. So the 5 PSETs are 75%. And this final project will be 25%. That's pretty much the logistics regarding this class. Do you have any questions regarding the grading, regarding the project or PSETs, et cetera? Good. So let me say a few words about this outline. So this outline is way too ambitious. It gives you-- yeah, it just serves to give you a rough contour of what we will cover. But really, don't take it too literally because-- first, we are not going to cover all seven chapters. That would be too much. So last time when I taught it, I covered only three chapters. And so hopefully, this time I will do four. And I may also just deviate from some of the things here, depending on the pace or on people's interests, et cetera. So this is going to be flexible. If there are certain things you would really very much like to hear, you can also let me know. I can think about it. In particular, among chapters five, six, and seven. And I can think about it; maybe I'll discuss some of them. Yeah.
Another thing to keep in mind is that this is a class which will touch on many different subjects-- say gravity, a little bit of string theory, and also quantum field theories, et cetera. So in such a class, it's almost impossible-- I think it's just impossible to really derive everything in a self-contained way. And also, some of-- yeah, I think some of your backgrounds are also very different. So certain parts may not appear self-contained to some of you. So if there is anything which is not familiar to you-- if I just mention it, if I just quote it, and it's not familiar to you-- then make sure-- don't hesitate to ask me, either during the class or after the class, so that I can help you to make up on that. And also, keep in mind that, like other advanced graduate classes, you actually mostly learn outside the class. Inside the class, I mostly try to provide you some motivations, guidance, et cetera, and give you a rough contour. And your own reading outside the class actually should be the main route by which you learn things. So any questions? So during the class, please feel free to ask questions. Yeah. Asking questions is a great thing, because if there is something you don't understand, there's a very good chance quite a few of your fellow students also don't understand. So if you ask the question, you're not only helping yourself, you're also helping other people. And you also help me, because then I will know a certain aspect is not clear to you. Then, I will try to emphasize it, or repeat it, et cetera. So please, do ask questions. And also, if you have any feedback on the class-- whether it's too fast, too slow, or the problems are too hard or too easy, or the PSET is too short for you, just not challenging enough-- just let me know. Then, I will try to adjust. And again, this is very important, because this will help us to improve the class. Any other questions? Or any questions? OK. Then, let us start our lecture. So first, we look at the hints for the holographic duality. OK. So we start by doing a multiple choice problem. OK. So what is your answer? AUDIENCE: [INAUDIBLE] HONG LIU: Good. So why gravity? AUDIENCE: We don't have a good quantum theory of gravity. HONG LIU: Good. So we don't have a good quantum theory of gravity. Any other reason? AUDIENCE: Except string theory. HONG LIU: Yeah, string theory is still not the-- I would say not yet a [INAUDIBLE] quantum theory of gravity. AUDIENCE: [INAUDIBLE] HONG LIU: Yeah. Yeah, this is similar to that. We don't have a full theory of gravity-- it's nonrenormalizable. AUDIENCE: [INAUDIBLE] HONG LIU: Good. Anything else? Or any other opinions? Would you choose any other interaction other than gravity? AUDIENCE: Strong is the strongest one. HONG LIU: That's right. You can also choose gravity because this is the weakest one. Yeah, so gravity is the weakest one. OK. So yeah. So I think you have said most of the things. And gravity is very different from the others. And in some sense, all the others are understood. So a to c are now understood to be described by gauge theories in a fixed spacetime. In fact, mostly we use Minkowski. And for example, electromagnetism is [INAUDIBLE] by QED. And QED is a [INAUDIBLE] gauge field-- a u(1) gauge theory plus a Dirac fermion. A Dirac theory.
And similarly, if you include the weak interaction-- so there's something called the electroweak theory-- it's described by an su(2) times u(1) gauge theory. And also, the strong interaction, you know now, is described by an su(3) gauge theory. So for all those cases, the basic theoretical structure is understood. OK. In principle, we can formulate those theories from first principles, just using the path integral-- say, [INAUDIBLE] based on the renormalization group. And then you can actually formulate all the theory in the [INAUDIBLE] way. So in this sense, any calculation in those theories can be reduced to a certain algorithm. OK. Of course, this does not imply that we actually know how to calculate many quantities here. It just says that, in principle, they are calculable. We often don't have the technical tools to calculate them. For example, even in QCD, [INAUDIBLE] given the action, we don't have the technical tools to calculate certain things. But they are, in principle, calculable, and we can write down the algorithm to calculate them. But for gravity, this is different. So for gravity, it's very different. So now we understand that classical gravity is just a theory of spacetime. So this is the content of general relativity. But at the quantum level, we really don't have a precise idea of how to formulate the theory. So there are many questions still not understood-- many, say, conceptual questions still not understood. For example, spacetime should fluctuate. OK. The spacetime should fluctuate because of quantum fluctuations-- in quantum mechanics, anything fluctuates. And then, the natural question is whether spacetime is still continuous. OK. So this is still [INAUDIBLE]-- whether you need to replace it by something discrete. And it's not even clear right now whether spacetime will remain a fundamental notion when you go to quantum gravity. It could well be that spacetime will be replaced by something else. And also, there are many other questions. So for example, the quantum nature of black holes, or what should be the beginning of the universe, et cetera. So all these questions are not understood. Also, gravity is the weakest interaction. It is actually much, much weaker-- we will see, it's much, much weaker than all the other interactions. And also, people have been speculating that maybe this feature can actually be a fundamental feature underlying the special role of gravity. For many years, it seemed to people that these questions and the gravity questions were completely unrelated. They're a completely different subject. And those are well understood, but somehow for these we need completely new ideas. So it came as a great surprise-- it came as a great surprise in 1997, by [INAUDIBLE], that they actually are equivalent. So the proposal of the [INAUDIBLE] is that the-- chalk. Is that quantum gravity is actually equal to field theories on a fixed spacetime. So "on a fixed spacetime" just means that there's no gravity here on the right-hand side. Because if there's gravity, then you cannot have a fixed spacetime-- the spacetime should be dynamical. So this is the equivalence between quantum gravity and just our ordinary quantum field theories. And this is ordinary quantum mechanics. And this is a very nontrivial quantum mechanics, which we don't yet understand. OK.
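Since the board is not in the transcript, here is the schematic form of the relation being written down-- what the lecture will refer to as equation 1 (schematic only; the precise examples and the dictionary come later in the course):

\text{quantum gravity in a dynamical spacetime} \;=\; \text{quantum field theory on a fixed spacetime}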
This equality should really be understood as different descriptions of the same quantum system. So the two sides are different descriptions of the same quantum system. So it just depends on how you want to see it. So one way you see it, you see quantum gravity. Then from some other way you see it, you don't see gravity at all-- you just see ordinary quantum mechanics. OK. Yeah, so this is what this equality is supposed to mean. So this equality-- let me call this equation 1, which I will often use later. So this equation 1 is really a unification-- a unification which unifies quantum gravity with ordinary field theory in a rather unconventional way. Not in the standard sense of unification, but it brings them together. OK. So now, let me make some philosophical remarks. So last time, I checked when preparing this class-- this [INAUDIBLE] original paper has been cited more than 10,000 times in the [INAUDIBLE] database. And so a huge amount of work has been devoted to understanding this relation. But in my opinion, this subject is still in its infancy. And as we will see as the course goes on, there are still many elementary issues which are not understood. And to us, this relation is still like a magic. And we don't really have a very good idea where this relation comes from. We have some rough idea, but not a very precise idea. So the purpose of physics, of course, is to turn what looks like magic or art into some rules, so that it becomes something trivial. So at the beginning, of course, gravity-- Newtonian gravity-- was magic. And what Newton did was to make gravity not different from the forces we see in ordinary life. And similarly for the other interactions we have seen before. Anyway. So personally, I believe that when we really fully understand this relation, this will really be a huge landmark in physics-- comparable, say, maybe to Newton's understanding of gravity, or Maxwell, Boltzmann, et cetera. So the goal of this course is to help you understand our current understanding of this relation. And also, to help you to derive the relation-- yeah, so these are two very different objects. And in order to set up an equality, you also need to define a dictionary. For example, this is like Chinese, this is like English. So you need to define a dictionary between the two, for what one side says in terms of the other. So we will also work out the dictionary. And also, we will develop the tools for how to use this relation, and discuss many features of this relation and its implications. So the purpose is to help you-- really, to bring you to the forefront of this very exciting subject. So any questions? Good. No questions? OK, so now let's move on-- so after this introduction, let's talk about this. OK. So if you look at this relation-- looking from the right-hand side, you just have some kind of ordinary field theory-- an ordinary quantum mechanical system, which we, in principle, know very well conceptually, without any gravity at all. Somehow, if you view it in some different way, then the gravity and the spacetime-- the dynamical spacetime-- should emerge out of this. So in some sense, this 1 implies that quantum gravity-- so let me just say quantum gravity plus the dynamical spacetime. OK, dynamical spacetime.
So it's important that it's a dynamical spacetime-- it can really emerge from a non-gravitational system. It can emerge from a non-gravitational system. So the idea of emergence of gravity from some other degrees of freedom is not really a new idea. So [INAUDIBLE] in 1967, Sakharov, who was a Russian physicist-- Sakharov is a very common name, a very common Russian name, but this is the most famous one. This is the most famous Sakharov, who invented the hydrogen bomb for the former Soviet Union. And he was also the guy who later, I think, got the Nobel Peace Prize for some human rights stuff. But he was also an excellent physicist, and he had this idea of emergence of gravity. So what he observed is that-- he found that certain [INAUDIBLE]-- he would study, say, some materials, et cetera-- he found that certain [INAUDIBLE] systems can actually have a mathematical description which looks like a metric, a connection, et cetera. Yeah, just mathematically, it looks like the equation for the metric and the equation for the connection, et cetera. And then he was speculating that maybe general relativity can actually arise just from some kind of electron system, just from ordinary [INAUDIBLE] systems, as an effective description. And in fact, in the 1950s, even before Sakharov had this idea, people working in GR, in general relativity, had already found many striking parallels between Einstein's equations and many phenomena we associate with hydrodynamics. And we know that hydrodynamics is really just an effective description. If you just look at a river-- in fact, it's discrete water molecules. But if you describe the motion of the water molecules at a macroscopic level, then you find hydrodynamics. But hydrodynamics does not apply at more fundamental levels. So it's more like some kind of effective theory. So in the 1950s, people already found that there are many features of the Einstein equations which are reminiscent of certain features of hydrodynamics. So there was already this suspicion that maybe gravity, or spacetime, can actually emerge out of something else-- just like hydrodynamics is not a fundamental theory, but actually emerges from the molecules, from the dynamics of the molecules. Anyway. So from the field theory perspective, it's also natural to ask whether massless spin-2 particles can arise as bound states, say, of lower spin particles. Say, like spin-1-- like photons, which are spin-1, or gluons in QCD-- or electrons, or quarks, et cetera. So these are the spin [INAUDIBLE] objects. And if yes-- so a massless spin-2 particle, in some sense-- so when you learn GR, you might have learned that the massless spin-2 particle may be considered as the hallmark of gravity. Because the basic propagating mode of gravity is the gravitational wave, and that propagates as a spin-2 object. And when you quantize the gravitational waves, then they become massless spin-2 particles. So in some sense, if your theory contains massless spin-2 particles, then it must contain gravity. OK. So for example, even in the strong interaction, which is, say, the theory describing gluons and quarks-- the gluons and the quarks, they can indeed form bound states. So these are the gluons and the quarks.
In [INAUDIBLE], there are indeed spin-2 bound states. The gluons are spin-1, the quarks are spin-1/2. But when you combine them together, you can make [INAUDIBLE] spin-2 bound states, even just in the strong interaction. But of course in nature, the spin-2 bound states are all massive. They're all massive. And they are some very unstable, massive particles-- when you create them, they immediately decay. So they cannot be gravity. For gravity, you need massless spin-2 particles. But if you look at this fact, you cannot help having the feeling: say, maybe I tweak QCD a little bit. Maybe I can make those massive spin-2 particles into some massless spin-2 particles. Then I would have generated gravity from a QCD-like theory. And then, that would be a revolution. And then you will be immortal. But unfortunately, even though this was a very promising idea for many years, this hope was actually completely dashed by a powerful theorem of Weinberg and Witten. So there's a powerful theorem from Weinberg-Witten that says this can never happen. So this is-- it's not possible. So this is 1980. So the paper-- if you want, you can take a look at the paper. So this is volume 96 and page 59. So they proved two theorems in that paper. So let me just copy the theorems down. So they say: a theory that allows the construction of a Lorentz-covariant and conserved vector current-- say, J mu-- cannot contain massless particles of spin greater than 1/2 with a non-vanishing value of the charge. OK, so this is the first theorem. So the second theorem-- I think I should have enough space. So the second theorem says that a theory that allows a Lorentz-covariant and conserved stress tensor-- so let me just call it T mu nu-- cannot contain massless particles of spin j greater than 1. So the key-- so let me just repeat the theorems a little bit. So the first theorem says that if you have a Lorentz-covariant and conserved current, then in such a theory no charged particle can have spin more than 1/2. Of course, this cannot contain the graviton. And the second theorem says that if you have a conserved stress tensor-- Lorentz-covariant and conserved-- then the theory cannot contain any particles with spin greater than 1. So of course, this also cannot contain a graviton. A graviton would be spin-2. So this theorem-- yes? AUDIENCE: Is there anything like RG flow of mass? [INAUDIBLE] HONG LIU: This is the physical mass. Yeah. Just the [INAUDIBLE] mass. You don't even talk about renormalization here. Yeah, this is just a [INAUDIBLE] statement. So this theorem turned out to be actually very easy to prove. And in some sense, it's rather instructive. And just to give you a sense of the power of this theorem, we will prove it in a little bit. So before proving it, I will first make some remarks to make you appreciate what these two theorems mean. Do you have any questions before I do that? OK, good. So let me just make some remarks on this theorem. So the first remark is that the theorems really apply to any particles, to both. So normally we say a fundamental particle is a particle which you put in your theory-- yeah, let me say [INAUDIBLE]-- rather than fundamental, let me call it elementary: the particles which already appear in your Lagrangian.
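To keep the statements handy, here is a compact restatement of the two theorems just quoted (this is only the spoken statement put into symbols; see Weinberg-Witten, Phys. Lett. B 96 (1980) 59):

\textbf{Theorem 1:}\ \ \partial_\mu J^\mu = 0,\ J^\mu\ \text{Lorentz-covariant} \;\Longrightarrow\; \text{no massless particles with spin } j > \tfrac{1}{2}\ \text{and non-vanishing charge } q

\textbf{Theorem 2:}\ \ \partial_\mu T^{\mu\nu} = 0,\ T^{\mu\nu}\ \text{Lorentz-covariant} \;\Longrightarrow\; \text{no massless particles with spin } j > 1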
And composite particles-- composite particles, say, would be some kind of bound states, or particles which don't appear in your original Lagrangian. So this theorem does not distinguish them. As long as you have some particles, [INAUDIBLE]. So now, let's try to see whether this theorem is compatible with [INAUDIBLE] the things we already know. So let's first try to apply it to, say, QED. So the theorem is compatible with QED. So we know that QED is Maxwell theory plus Dirac theory. And the fermions in the Dirac theory interact with the photons in the Maxwell theory. So this theory has a massless photon of spin-1. And it has a conserved charge because, say, electrons do have a charge. This theory has a conserved current. But this theory does allow a spin-1 particle. And this is not contradictory with theorem 1. So theorem 1 says that if you have a Lorentz-[INAUDIBLE] and conserved current, then you cannot have massless particles of spin greater than 1/2 with non-vanishing charge. In this case, the photon is massless, but the photon does not carry charge. The photon is neutral under the electric charge. So the existence of the photon is compatible with that theorem. And now, let's look at the [INAUDIBLE] theory which also contains massless spin-1 particles. And the [INAUDIBLE] theory which contains spin-1 particles is Yang-Mills theory. So, Yang-Mills theory. For example, let's consider su(2) Yang-Mills theory. So the Yang-Mills theory has three gauge fields, A mu a, with a equal to 1, 2, 3, because 1 and 2 [INAUDIBLE] of the su(2) gauge symmetry. And from these three gauge fields, you can combine them into the following form. I have A mu 3. Then, I have A mu plus minus, which you can take to be 1 over square root 2 times A mu 1 plus minus A mu 2. OK. So [INAUDIBLE]. So you can easily check yourself that these A mu plus minus are massless spin-1 fields-- they can create massless spin-1 particles charged under the u(1) subgroup generated by sigma 3. So sigma 3 is the generator associated with A mu 3. And so those two are charged under this generator. So now, [INAUDIBLE] we have a contradiction, because those are massless spin-1 particles charged under this. And theorem 1 says that if you have a conserved current, then you cannot have any charged particle with spin greater than 1/2. But those are spin-1 particles. But actually, this is consistent. Because, as you may remember, in Yang-Mills theory that actually does not exist. The way out [INAUDIBLE] is that there does not exist a conserved, Lorentz-covariant current for this u(1)-- for the u(1) generated by this sigma 3 divided by 2. So this is actually compatible, because such a current does not exist. OK. So you will show this fact yourself in your PSET. But you may remember, if you have studied Quantum Field Theory 2, then you may remember this fact from the discussion of [INAUDIBLE] gauge theories. So in this course, we will from time to time use various facts of, say, [INAUDIBLE] theories. But they're not essential. If you don't know [INAUDIBLE] theories, you can still understand this duality. You can still understand the relation 1. But of course, if you know gauge theories, it will help you a lot. Any questions about this? Good. Let's continue. So this theorem does not forbid the graviton from GR.
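Before turning to GR, here is a small sketch of the su(2) example just discussed, in symbols. Note the factor of i in the charged combinations, which is the standard convention and easy to lose in a spoken transcript, so treat its placement here as my assumption:

A^{\pm}_\mu = \frac{1}{\sqrt{2}}\left(A^1_\mu \mp i\,A^2_\mu\right) \;\longrightarrow\; e^{\pm i\alpha}\,A^{\pm}_\mu \quad \text{under the } U(1) \text{ generated by } \tfrac{\sigma^3}{2}

So A plus/minus create massless spin-1 quanta of charge plus/minus 1, while A mu 3 is neutral; theorem 1 is evaded because no conserved, Lorentz-covariant (and gauge-invariant) current exists for this u(1)-- the fact to be shown in the PSET.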
So if you try to quantize Einstein's general relativity around flat space, for example, then you will find a massless spin-2 particle. So this is what we call the graviton. And this is not contradictory with theorem 2. It's because-- you may still remember, when you learn GR-- in GR, general relativity, there actually is no conserved-- so here, it's conserved, not covariantly conserved, just conserved-- Lorentz-covariant stress tensor. OK. So in GR, actually there's no conserved and Lorentz-covariant stress tensor. So actually GR [INAUDIBLE] the graviton. But this theorem 2 is nevertheless very powerful. It says [INAUDIBLE]: none of the [INAUDIBLE] QFTs in Minkowski spacetime, which is the one we normally work with, can have emergent gravity. OK. So in QCD-like theories, no matter how you try to trick the theory, the spin-2 massive bound states can never become massless. Because those theories do have a Lorentz-covariant, conserved stress tensor. But this theorem does not invalidate this equation 1 I just erased. Because this theorem, even though it's very powerful, does have a hidden assumption. This assumption is so obvious and so self-evident that even though Weinberg and Witten are extremely careful people, they did not bother to mention it. In those times, nobody in their right mind would have mentioned such a thing. So the hidden assumption is that the particles they are talking about-- the particles which they try to rule out-- live in the same spacetime as the original theory. OK. So if you write down, say, QED in Minkowski spacetime, and then you ask whether you can have some spin-2 particles coming out of QED, then of course you are asking whether you have particles coming out of QED living in the same Minkowski spacetime. Yeah. So in some sense, this assumption is self-evident. But precisely this is the assumption which is taken advantage of by this relation, first envisioned by [INAUDIBLE]: precisely in this relation, in this equation 1 which I erased, the gravity actually does not live in the same spacetime as the original theory. It does not live in the same spacetime as the original theory. So even in the early days, when people were dreaming of having such spin-2 particles come from QCD-like theories, there was already an immediate puzzle. Suppose QCD-like theories could have an emergent graviton in the same spacetime. Then firstly, your QCD theory is defined in a fixed spacetime. But now, if it contains a massless spin-2 particle, the spacetime itself will be dynamical. Then, where does the original QCD theory [INAUDIBLE]? OK. And then you will just similarly go into this kind of self-contradiction, like somehow-- how can that-- yeah. Anyway. So in this relation, in the holographic duality, gravity does not live in the same spacetime. OK. In fact, it lives in one dimension higher-- or it can be several dimensions higher, depending on the examples. So this avoids the theorem in an obvious way. And it also avoids this conundrum I mentioned earlier. Because it's in a different spacetime-- the field theory is still in a fixed spacetime, but then the gravity can be dynamical. Then, the spacetime in which gravity lives can be dynamical.
So any questions regarding this so far? Yes. AUDIENCE: [INAUDIBLE] in GR, there was no conserved Lorentz-covariant stress tensor. Because the covariant derivative of the stress tensor [INAUDIBLE]. HONG LIU: No, covariantly conserved is not conserved. Conserved means with the ordinary derivative, not the covariant derivative. A covariantly conserved stress tensor is not conserved in this sense. And if you write down a conserved stress tensor, then it's not covariant. Yeah. A similar thing happens in this case for the gauge fields. In that case, the gauge-invariant and conserved current is not Lorentz-covariant, and the Lorentz-covariant, gauge-invariant current is not conserved. Any other question? AUDIENCE: I have a question. HONG LIU: Yes. AUDIENCE: [INAUDIBLE] HONG LIU: Sorry. Say it again? AUDIENCE: [INAUDIBLE] theory, every particle is massless. HONG LIU: Yes. AUDIENCE: And the [INAUDIBLE] spin-2 particle, which carries [INAUDIBLE] HONG LIU: Yeah. For a [INAUDIBLE] field theory, indeed everything is massless. You can construct a Lorentz-covariant, conserved stress tensor. So that theory [INAUDIBLE] does not allow spin-2 excitations in the same spacetime. AUDIENCE: [INAUDIBLE] HONG LIU: There's no massless spin-2 particle. AUDIENCE: [INAUDIBLE] HONG LIU: Yeah. Yeah, but you don't have massless spin-2 particles. AUDIENCE: [INAUDIBLE] HONG LIU: No. It depends. Actually, talking about massless particles in a [INAUDIBLE] field theory is a little bit tricky. They come in all kinds of spectra-- they come in a continuous spectrum. Yeah. Yeah, but this theorem does apply to ordinary [INAUDIBLE] field theories. So in those theories, you won't have massless spin-2 particles. Yeah, when we prove the theorem, then you can see precisely what we mean by massless spin-2 particles. If you stay in the course, then you will know. Yeah, we will see it later on. But not for a while-- maybe in a couple of weeks we will see it. One or two weeks. Any other questions? OK. So now, let me give you the proof, which is pretty simple. And it uses some very elementary facts. Because you see, this theorem contains very, very little input. So if you can prove such a thing, it cannot be complicated. Because the input is so little-- it does not require any details of the theory. So if you can prove such a theorem, it must be just from kinematics, not from any dynamics. So indeed, that's how the theorem goes. So, the proof. So we will do a proof by contradiction, as you would naturally expect. So let us suppose there exists a massless spin-j particle. So let's assume the theory contains [INAUDIBLE] massless particles of spin j. And just from Lorentz symmetry, you can immediately write down-- so from what we learned in QFT 1 and 2, just by Lorentz symmetry-- it doesn't matter what the nature of those states is, because they must form representations of the, say, Lorentz group and the [INAUDIBLE] group-- so you can [INAUDIBLE] immediately write down the one-particle states of such particles. And you normally can write them as k and sigma. So k is just the ordinary spacetime momentum. So for a massless particle, of course, k0 squared should be equal to k squared.
And sigma is the helicity. The helicity is the projection of the angular momentum onto the direction of the momentum. And for a massless particle of spin j, sigma can only be plus or minus j. So this is the helicity. So if you don't remember this, you can go back to review Weinberg's book-- Weinberg's volume 1 of QFT. Volume 1, Section 2.5 discusses how you write down the general one-particle states in any quantum theory, as long as you have Lorentz symmetry. OK. And one property of this sigma is the following. Let's now consider a rotation. So let's consider the rotation operator-- [INAUDIBLE] by angle theta around the direction of k, which is the direction of the k here. So let's consider the action of this on the state, this one-particle, massless-particle state. Then by definition, this just gives you e to the i sigma theta, times k, sigma. So this essentially comes from the definition of the helicity. And the k does not change, because this is a rotation along the direction of k. The k itself does not change. But this is an eigenstate of this rotation [INAUDIBLE]. OK. Again, if you don't remember this, go back to Weinberg's book and you can easily find it. So now, if you have a conserved current-- if you have a covariant conserved current, J mu-- then you can always construct the charge, the conserved charge, which is the integral of J0. And similarly, if you have T mu nu, then from T mu nu you can construct a momentum operator, which is the integral of T 0 mu. So this is just the standard momentum operator in your field theory. So this is still just the setup. Of course, just by definition, P mu acting on k, sigma just gives back k mu, OK, because this is a momentum eigenstate. And if this particle is also charged under this conserved charge-- so if it is charged under Q, under this conserved charge-- then Q acting on this state will also give you the charge q. So these are all operators. And then, this should give you the charge q. So now we want to show what the two theorems imply-- so with this setup, we want to show two things. 1, if q is nonzero-- that means, if the particle is charged under the conserved charge-- then j must be smaller than or equal to 1/2. It cannot be greater than 1/2. And 2, if we assume there's a conserved stress tensor, then j should always be smaller than or equal to 1. OK. So that's what those theorems mean, translated into this language. OK. Any questions? So now, let's prove it. Let me just do it here. So now, let's prove it. I think we just have enough time, maybe, to prove it. So to prove it, let's first list some elementary facts. First, from Lorentz symmetry-- we do have to assume Lorentz symmetry. OK. You can show-- so this I will just write down the answer and leave it to you to show, OK? In your PSET. So if you look at the matrix [INAUDIBLE] of this conserved current between two such states of different k, in the limit in which k goes to k prime, you have q k mu divided by k0. And similarly, in this limit-- so this you will do in your PSET. So let me call these equation 2 and equation 3. So this is based on the following normalization between the two states. OK. So this relation is actually very easy to see intuitively.
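Since equations 2 and 3 are only partly audible, here is a sketch of their presumable form; the overall normalization factors (the powers of 2 pi, et cetera) depend on the state normalization used on the board, so take the constants as my assumption:

\langle k',\sigma|\,J^\mu\,|k,\sigma\rangle \;\xrightarrow{\;k'\to k\;}\; \frac{q\,k^\mu}{k^0}\times(\text{norm.})\,,\qquad
\langle k',\sigma|\,T^{\mu\nu}\,|k,\sigma\rangle \;\xrightarrow{\;k'\to k\;}\; \frac{k^\mu k^\nu}{k^0}\times(\text{norm.})

Q = \int d^3x\,J^0\,,\quad P^\mu = \int d^3x\,T^{0\mu}\,,\quad Q\,|k,\sigma\rangle = q\,|k,\sigma\rangle\,,\quad P^\mu|k,\sigma\rangle = k^\mu|k,\sigma\rangle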
Yeah, you can prove it by-- yeah, let me just give you a quick hint [INAUDIBLE] regarding 2. You can first show that J0 sandwiched between the two states, k sigma, in the limit when the k's become the same, is just given by q divided by 2 pi cubed. So this is, again, somewhat intuitive, because k is an eigenstate of Q-- because k is an eigenstate of this Q hat, and Q hat is just the integral of J0. And so you can imagine that on the right-hand side there must be a q, because these are essentially the eigenstates of the integral of J0. And then, when you take into account all these [INAUDIBLE] functions here, and the relation between the integrals, then you will naturally find the right-hand side is just q divided by this. So once you know this, then you can immediately see the first [INAUDIBLE] equation 2 there. Because first, there's a J mu there. Then the right-hand side must have an index mu. So the only thing that can carry the index mu is k mu-- there's nothing else. And also, you know that when mu is equal to 0, you should just get back the q. Then, downstairs we must have a k0 to cancel it. So this relation, in principle-- just by intuition, by dimensional analysis, [INAUDIBLE] you can just directly write it down. And similarly for the second relation. In the second relation, the analog of q-- so q is the charge associated with Q hat-- the analog of the q for the stress tensor is just k mu. So you just replace the q by k mu, and then you get the second line. Because k by k mu [INAUDIBLE]. OK. So I will leave it to you to prove this rigorously. So this is the second observation. It says, for massless particles, which have k squared equal to 0 and k prime squared equal to 0-- because these are [INAUDIBLE] particles, k squared and k prime squared should be equal to 0-- that means, for any two massless momenta like this, you see immediately, just by writing this down, that this must be smaller than 0. So this should come from your high school physics: k dot k prime should be smaller than 0. And that means that k plus k prime must be time-like. OK, must be time-like. Because its square is essentially given by this guy, because k squared and k prime squared are equal to 0. So this guy must be time-like. OK. And so that means you can always choose a frame in which the spatial part of this is 0. We can choose a frame where the spatial part of k plus k prime is equal to 0. So such a frame is just k mu equal to, say, (E, 0, 0, E)-- say we put all of these in the z-direction, in the 3-direction-- and then you can choose k prime mu equal to (E, 0, 0, minus E), OK? And this just follows because these two are two [INAUDIBLE] particles. OK, good. So now, we can almost finish this. But there's still a third elementary observation we need to put here. So the third observation, which uses this formula. So let's now take sigma just to be j. So now, under a rotation by theta around the 3-direction-- around this direction, in which we have the [INAUDIBLE] momentum-- so we chose the momentum to be in this direction. Then you should have-- so let me call that operator R theta, the rotation around the 3-direction. So this operates on k, j-- so let's take sigma equal to j. So this should be just equal to exponential of i j theta, times k, j. And if this acts on k prime, j-- so k prime has the opposite orientation to k, because this is minus E. And the opposite orientation means the helicity should also change sign. So that means here it should be exponential of minus i j theta, times k prime, j-- because this is a rotation along the positive 3-direction. OK.
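In symbols, the kinematic facts just listed (signature (-,+,+,+) is assumed here, so "time-like" means negative square):

k^2 = k'^2 = 0,\ \ k\cdot k' < 0 \;\Longrightarrow\; (k+k')^2 = 2\,k\cdot k' < 0\ \ \text{(time-like)}

k^\mu = (E,0,0,E)\,,\qquad k'^\mu = (E,0,0,-E)

R(\theta)\,|k,j\rangle = e^{\,i j\theta}\,|k,j\rangle\,,\qquad R(\theta)\,|k',j\rangle = e^{-i j\theta}\,|k',j\rangle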
OK. So now we are in business. So now, finally, putting those facts together, we will see a contradiction. Now you can see it from this object: consider k prime, j-- R inverse theta, J mu, R theta-- k, j. So now, let's consider this object. So this object I can evaluate in two ways. First, I can just act with R theta on both states. OK. So R theta on this one just gives me this, and R inverse acting on that just gives me the opposite of this. So all together, if I do it that way, then I get exponential of 2 i j theta, times k prime, j, J mu, k, j. But I can also act with this operator on the current. So this current should transform as a Lorentz vector. So if I do it this way, then I should get lambda mu nu, k prime, j, J nu, k, j. So these acting on that should just effect a Lorentz transformation on J. And so this is the Lorentz transformation matrix. So lambda mu nu is just the standard rotation matrix acting on a vector. For example, it's given by 1, 0, 0, 0, and then cosine theta, minus sine theta, and so on-- just a rotation around the 3-direction. So this is just the rotation acting on the J. So now, similarly, if you examine this quantity-- yeah, similarly, if I just replace this by T mu nu, OK? By T mu nu-- we are running out of time, so let me be a little bit faster now. Similarly, if we replace that by T mu nu, then we will get the same thing: exponential of 2 i j theta, times k prime, j, T mu nu, k, j, which is obtained by acting directly on k, j. And by the way, when R acts on T mu nu-- T mu nu transforms as a rank-2 tensor-- then we should get lambda mu rho, lambda nu lambda, k prime, j, T rho lambda, k, j. So now, here's the key. So do you already see the contradiction? So let's look at this equation. This is just like a vector-- the whole thing is like a vector, the same vector. So this is like an eigenvalue equation. So this lambda mu nu acting on this vector gets eigenvalue exponential of 2 i j theta. But if you look at this matrix, it can only have eigenvalues exponential of plus or minus i theta, and 1. So this is a thing we are familiar with. So [INAUDIBLE] minus i theta, 1. So similarly with this equation, except you have two lambdas, OK? So now, that means, if this quantity is nonzero-- which we know it is nonzero, from 2 and 3-- then this exponential of 2 i j theta can only take the values exponential of plus or minus i theta, and 1. So that means that j-- from here, let's call these equations 4 and 5. From equation 4, j must be smaller than or equal to 1/2, because there's a 2j there. Similarly, equation 5 means j must be smaller than or equal to 1. So this is what we wanted to prove. Yes? AUDIENCE: Aren't k and k prime [INAUDIBLE]? I mean, 2 and 3, the limit k goes to k prime and k plus k prime is 0. HONG LIU: Yeah. So I can take them-- if j is greater than 1/2-- for example, in this relation, if j is greater than 1/2, then the only possibility is for this to be 0. But we know, when you take k equal to k prime, at least in that limit this is nonzero. And so this cannot be 0. So this quantity cannot be identically 0. OK. So this gives you the proof. So if you find a loophole in this proof-- other than that they have to live in the same spacetime-- yeah, that would be great. AUDIENCE: What if the whole matrix element is 0? HONG LIU: No, the point is that the whole matrix element cannot be 0. AUDIENCE: Yes.
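A condensed restatement of the eigenvalue argument, with the rotation matrix written out (my own compact rewriting of the step just given):

e^{2ij\theta}\,\langle k',j|J^\mu|k,j\rangle \;=\; \Lambda^\mu{}_\nu(\theta)\,\langle k',j|J^\nu|k,j\rangle \qquad\text{(equation 4)}

e^{2ij\theta}\,\langle k',j|T^{\mu\nu}|k,j\rangle \;=\; \Lambda^\mu{}_\rho(\theta)\,\Lambda^\nu{}_\lambda(\theta)\,\langle k',j|T^{\rho\lambda}|k,j\rangle \qquad\text{(equation 5)}

\Lambda(\theta) = \mathrm{diag}\!\Big(1,\ \begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix},\ 1\Big)\ \text{has eigenvalues } 1,\ e^{\pm i\theta};\qquad \Lambda\otimes\Lambda\ \text{has } 1,\ e^{\pm i\theta},\ e^{\pm 2i\theta}

Since the matrix elements are nonzero as k prime goes to k, equation 4 forces 2j to be at most 1, i.e. j no greater than 1/2 for charged particles, and equation 5 forces 2j to be at most 2, i.e. j no greater than 1.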
HONG LIU: Because of this. AUDIENCE: Isn't the limit k goes to k prime? And there, you assume that k plus k prime is 0. So it means both k and k prime should be 0, no? HONG LIU: I'm sorry? No. k plus k prime is not-- AUDIENCE: [INAUDIBLE] HONG LIU: No, this is a spatial vector. AUDIENCE: [INAUDIBLE] HONG LIU: No, this is spatial vector. No, this is spatial vector. Yeah. Yeah. This is spatial vector. OK. Sorry, I'm a little bit late. Yeah. That's all for today.
MIT_8821_String_Theory_and_Holographic_Duality_Fall_2014
2_Classical_Black_Hole_Geometry.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK, let us start. So let me first remind you what we did in the last lecture. We proved-- we showed that the Weinberg-Witten theorem forbids the existence of massless spin-2 particles. Massless spin-2 particles are the hallmark of gravity, so that's why we look for them-- and, to emphasize, in the same spacetime, say, in which the QFT lives. So this theorem essentially says this: you will never have emergent gravity starting from a [INAUDIBLE] quantum field theory which has a well-defined stress tensor. So as we already mentioned, the loophole of this theorem is that emergent gravity can actually live in a different spacetime, as in holographic duality. In fact, in holographic duality the gravity lives in one dimension higher, but we are not ready to go there yet. So in order to describe this thing, we still need to do some preparations. So the preparations-- let me just outline the preparations we need to do. So the first thing we will do is black hole thermodynamics. So this will give a hint for something called the holographic principle, which is actually more general than the holographic duality discovered so far. And the second thing is, we will also quickly go over large N gauge theory-- the properties of large N gauge theories. So this gives hints of something called the gauge-string duality. So the behavior of large N gauge theories gives you a strong hint that actually there's a string theory description for ordinary gauge theories. OK, and then when you combine these two things together, then you get what we have currently, the holographic duality. And then we will also talk a little bit about string theory. A tiny bit. Not a lot, so don't be scared. So we also will talk a little bit about string theory. So in principle, actually, I could talk about holographic duality right now, but going over those aspects can help you to build some intuition, and also to have a broader perspective than just presenting the duality directly. So before I start today's lecture, do you have any questions regarding our last lecture and regarding these general remarks here? Yes? AUDIENCE: So moving forward, are we going to define emergent gravity as this existence of a massless spin-2 particle? PROFESSOR: No, that we will not do. AUDIENCE: So how are we defining emergent gravity? PROFESSOR: You construct Newton's law? Yeah, you will see it. You will see it when we have it. Yeah, essentially you see-- essentially it's handed over to you. You don't have to go through that step. Yeah. Any other questions? OK, good, let's start with black hole thermodynamics. So let me start by doing a little bit of dimensional analysis, just to remind you of the important scales for gravity. So the first thing is what we call the Planck scale. So in nature we have the fundamental constants H-bar, the Newton constant GN, and c, the speed of light. And immediately after Planck himself introduced this H-bar, he realized you can actually combine the three of them to come up with a mass scale, which is just the square root of H-bar c divided by GN. And if you plug in the numerical values, this is about 1.2 times 10 to the 19 GeV divided by c squared.
You can also write it in terms of grams, which is about 2.2 times 10 to the minus 5 grams. And then you can also have-- we can also construct a length scale, which is the square root of H-bar GN divided by c cubed, which is about 1.6 times 10 to the minus 33 centimeters. And tp equals lp divided by c, OK? So this was discovered-- Planck introduced them in 1899, so most of you may know the story. So he introduced H-bar in 1899, and in the same year he realized you can write down those numbers. And of course, those numbers meant nothing to him, because at that time he didn't know special relativity, he didn't know quantum mechanics-- essentially, he didn't know anything. But he famously said that these units-- so he claimed those should be the basic units of physics, and he also said these are the units that would retain their significance for all times and all cultures. He even said they will even apply to aliens. But only after about 50 years-- so after the 1950s-- did people get some sense of them. So many years after special relativity, many years after general relativity, and also quantum mechanics, et cetera, people started grasping the meaning of those scales. And so let me briefly review them. So you can-- OK, you can get a feeling for the meaning of those scales by looking at the strength of gravity. So let me first start with this example for electromagnetism, in which you have a potential, which is e squared divided by r. So if you have two charged particles of charge e, then the potential between them is e squared divided by r. And then for a particle of mass m, you also have the Compton wavelength. So if you have a particle with mass m, you also have the Compton wavelength-- it's H-bar divided by mc. So you can get a rough measure of the strength of electromagnetism-- the actual strength of electromagnetism-- by considering the following dimensionless number, which I call lambda e, which is the potential evaluated at the minimal-- in quantum mechanics, essentially this is the minimal distance you can make sense of. Because once you go to distances smaller than this, you can no longer-- then the quantum uncertainty will create an energy uncertainty bigger than m, and you can no longer talk about a single particle in a sensible way. So it's essentially the minimal length scale at which you can talk about single particles. So this is essentially some energy scale. So you can compare this to, say, the static mass energy of the particle. So this gives you a measure of the strength of electromagnetism. Of course, if you plug this in, you just get e squared divided by H-bar c. And of course, we know this is the fine structure constant, which is indeed the coupling-- indeed, it's the coupling in QED, OK? So you can do the same thing for gravity. So for gravity we have essentially GN, and say if you take two particles of mass m, then the potential between them is GN m squared divided by r. And then again, you can define an effective strength for gravity: I evaluate this potential at the Compton wavelength, divided by the static energy of the particle. And then you can just plug this in. So this is GN m squared, divided by H-bar divided by mc, then divided by mc squared. And then you find this is just equal to GN m squared divided by H-bar c. OK, so now if you compare with this equation-- OK, so this is just given by m squared divided by mp squared. OK, so for gravity this effective strength is just set by the mass of the particle, measured in units of the Planck mass.
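For reference, the quantities being assembled on the board, written out in standard form (these are the standard definitions; only the compact notation is mine, with lambda_C the Compton wavelength):

m_p = \sqrt{\frac{\hbar c}{G_N}} \approx 1.2\times10^{19}\ \mathrm{GeV}/c^2 \approx 2.2\times10^{-5}\ \mathrm{g}\,,\qquad
l_p = \sqrt{\frac{\hbar G_N}{c^3}} \approx 1.6\times10^{-33}\ \mathrm{cm}\,,\qquad t_p = \frac{l_p}{c}

\lambda_e = \frac{e^2/\lambda_C}{mc^2} = \frac{e^2}{\hbar c} = \alpha\,,\qquad
\lambda_g = \frac{G_N m^2/\lambda_C}{mc^2} = \frac{G_N m^2}{\hbar c} = \frac{m^2}{m_p^2} = \frac{l_p^2}{\lambda_C^2}\,,\qquad \lambda_C = \frac{\hbar}{mc}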
So because for gravity the mass is like some kind of effective charge, and then divided by Planck scale-- this Planck mass-- squared. OK, and you can also write it as lp squared divided by rc squared, or just as this Planck length divided by [INAUDIBLE] wavelengths of this particle. OK, so for most elementary particles-- so for typical elementary particles we know the m is always much smaller than mp. And then the lambda g would be typically much smaller than 1. So for example, say, if you can see the electron, the electron mass would be 5 to the 10 to the minus 4 gev divided by c squared. Of course this is much smaller than this Planck mass. And then you find that this ratio-- so you can work out this ratio. So let's compare it to the corresponding strength for electromagnetism. So this is about 10 to the minus 43 if you work this out. So this tells you the gravity's really weak. So for ordinary elementary particles-- so the gravity is really weak, and so we can forget all about gravity until you reach the Planck mass, or your Compton wavelengths reach the Planck length. And then the fact of the quantum gravity will be important. So from this exercise we know that the mp is the energy scale, that the effective gravity strength become of order one-- become of order one-- that is, quantum gravity fact becomes significant. And similarly-- so lp is the corresponding length scale associated with such energy. So this give you a heuristic feeling-- give you a heuristic indication that the meaning of those Planck scales. OK, so there's another important scale associated with gravity. So any questions on this? Good, there's another important scale associated with gravity. It's called Schwarzschild radius-- Schwarzschild radius. So just from dimensional analysis, the Schwarzschild radius can be argued as follows. So can see that-- again, we can just even see from Newtonian gravity. So I would say let's consider the object of mass m. Then we ask is the distance-- at what distance, maybe I should-- at what distance from it the classical gravity becomes strong. So for this purpose, let's consider, say, a probe mass-- say, m prime. So, can see the probe mass. So I define a scale which I call rs, as I require the potential energy between m and m prime. at such a scale rs, then this become of order, say again, the static energy of this probe political, or of this probe mass. So if you cancel things out then you find rs is of order GNm divided by c squared. OK, so this give you-- I'll give you a scale. You can also ask-- so this is from Newtonian gravity. Of course, when your gravity becomes strong you should replace the Newtonian gravity by Einstein, the relativity. And then when you go to relativity-- when you go to relativity, general relativity, then you find then there's a Schwarzschild radius. So there's a Schwarzschild radius which given by 2 gm divided by c squared, which corresponding to the sides of a black hole. OK, corresponding to the sides of a black hole. So this is a classical scale-- purely classical scale. Just the scale which the classical gravity becomes strong. In particular, rs can be considered as the minimal length scale one can probe an object of mass m, OK? So classically, black hole absorb everything. So once you fall into black hole you can never come back. And so the minimal distance you can approach an object of mass m, it is given by the Schwarzschild radius. 
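A quick numerical check of these orders of magnitude. This is a rough sketch, not part of the lecture: the constants below are approximate SI values, and the solar-mass line is added only to illustrate the Schwarzschild radius formula.

```python
import math

# Approximate SI values of the fundamental constants
hbar = 1.0545718e-34      # J s
c = 2.99792458e8          # m / s
G_N = 6.674e-11           # m^3 / (kg s^2)
alpha = 1.0 / 137.036     # fine structure constant

m_electron = 9.109e-31    # kg  (~5e-4 GeV/c^2, as quoted in the lecture)
m_sun = 1.989e30          # kg  (illustration only, not from the lecture)

# Planck mass and Planck length
m_P = math.sqrt(hbar * c / G_N)        # ~2.2e-8 kg = 2.2e-5 g
l_P = math.sqrt(hbar * G_N / c**3)     # ~1.6e-35 m = 1.6e-33 cm

# Effective gravitational coupling lambda_g = m^2 / m_P^2 for the electron
lambda_g = (m_electron / m_P) ** 2

# Schwarzschild radius r_s = 2 G_N m / c^2
r_s_sun = 2 * G_N * m_sun / c**2

print(f"m_P              ~ {m_P:.2e} kg")
print(f"l_P              ~ {l_P:.2e} m")
print(f"lambda_g(e-)     ~ {lambda_g:.1e}")
print(f"lambda_g / alpha ~ {lambda_g / alpha:.1e}")   # ~1e-43: gravity is extremely weak
print(f"r_s(sun)         ~ {r_s_sun / 1e3:.1f} km")   # ~3 km
```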
So when you go inside the Schwarzschild radius we just fall into the black hole, and you can never send information out. And the one interesting thing compare-- yeah, one interesting thing regarding this Schwarzschild radius. You said Schwarzschild radius increase with mass. OK, if you increase the mass the Schwarzschild radius increases. So in principal it can be very large when you can see the very large object-- when you can see the very massive object. So now-- so let us summarize. Let us summarize. So for object of mass m there are two important scales. Two important scales. So one is just the standard Compton wavelengths, and the other is the Schwarzschild radius. So one is quantum and the other is classical. The other is classical. So let's take the ratio between them. Let's take the ratio between them. So if you take the ratio between them-- so let's forget about these two, just to consider of order up to order 1. So c squared, divided by H-bar, divided by mc. And then this again give you gm m squared divided by H-bar c. And then this is again just m squared divided by mp squared. This again is m squared divided mp squared, and p is the Planck mass. So let's consider the different scenarios. So the first, let's consider-- just suppose the mass of the object is much, much greater than mp. OK, so in this case then the Schwarzschild radius are much larger than the Compton wavelengths. Much larger than the Compton wavelengths, And essentially, all of physics is controlled by the classical gravity, because you can no longer probe-- yeah, because a Compton wavelengths is much, much inside the Schwarzschild radius, which you cannot probe. So the physics is essentially classical gravity. And the quantum effect is not important, so you don't have to worry about the quantum effect. I will put quote here. Not quote, I will to put some-- yeah, some quote here. You will see what this means. And the second possibility is for mass much, much smaller than p. So in this case, then the Compton wavelengths will be much, much greater than the Schwarzschild radius. OK, so this is a quantum object, so the quantum size is much larger than the Schwarzschild radius. But we also know, precisely in this region, this lambda g is also very small. The effective strength of the gravity is also very, very small. So we also have found before that the lambda g is also very, very small. So in this case, as what we said here-- the gravity is very weak and not important. It's much, much weaker than other interactions, so you can essentially ignore them. Then we're only left with the single scale which mass is of all the mp. And then, as we said before, the quantum gravity is important. So let me just say, quantum gravity important. If this were the full story, then life would be very boring. Even though it would be very simple. Because the only scale you need to worry about quantum gravity is essentially natural zero. It's only one scale. And it will take us maybe hundred if not thousand years to reach that scale by whatever accelerator or other probes. So there's really no urgency to think about the quantum gravity. Because right now we are at this kind of scale. Right now we are this kind of scale. It's very, very far from this kind of scale. But the remarkable thing about black hole-- so this part of the physics is essentially controlled by Schwarzschild radius, because the Schwarzschild radius is the minimal classical radius you can achieve. Yeah, you can probe the system, and the quantum physics is relevant. 
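Up to order-one factors, the comparison of the two scales just described is
\[
\frac{r_s}{\lambda_C}\sim\frac{G_N m/c^2}{\hbar/(mc)}=\frac{G_N m^2}{\hbar c}=\frac{m^2}{m_P^2},
\]
so \(m\gg m_P\) gives \(r_s\gg\lambda_C\) (classical gravity dominates), \(m\ll m_P\) gives \(\lambda_C\gg r_s\) with \(\lambda_g\ll1\) (gravity negligible), and only for \(m\sim m_P\) are quantum gravity effects unavoidable.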
But the remarkable thing is that it turns out that this statement is not correct. This statement is not correct. Actually, quantum effect is important. So the remarkable thing is that the black hole can have quantum effect manifest at the microscopic level. Say, at length scale of order Schwarzschild radius, OK? So that's why it makes the black hole so interesting, and also makes why black hole is such a rich source of insight and information if you want to know about quantum gravity. And as we will see, actually, we also contain a rich source information about ordinary many body systems, due to this duality. Any questions regarding this? Yes? AUDIENCE: So when I talk about the m much larger than mp, so the m is not only an elementary particle, it can be just and object? PROFESSOR: Yeah, it can be a bound state. So it can be-- here, we always talking about quantum object. But it can have mass very large. AUDIENCE: That's still a quantum object? PROFESSOR: Yeah. What you will not see from a traditional way, for such a large mass object, you will not see its quantum uncertainty, because quantum uncertainty is tiny. The Compton wavelengths is very tiny, and so the fluctuations are very small. And so you have to probe very, very small in scale to see its quantum-- from traditional point of view, we have to probe very, very small in scale to see its quantum fluctuations. And that scale is much, much smaller than the Schwarzschild radius. AUDIENCE: OK. PROFESSOR: Any other question? Good, so let me-- before we talk more specific about black holes, let me just make one final remark. It's that in a sense, this Planck length-- this length scale defined by Planck can be considered as a minimal localization length. OK, for the following reason. So let's firstly imagine in non gravitational physics-- so in non gravitational physics if you want to probe some short distance scales then it's easy if you are rich enough. Then you just accelerate the particles to very high energies. Say e plus, e minus, with p and minus p. Then that can probe the distance scale, then this can probe length scales. Say of order h divided by p. OK, so if you make the particle energy high enough you can, in principle, probe as short as any scale as you want. Anything, as far as you can make this p as large as you want. So in principle, you can take l all the way to zero. So the scale comes all the way to 0. If you take p, go to infinity. But in gravity, this is not so. So with gravity the square of the distance [INAUDIBLE] so when your energy is much, much greater than ep-- say, the Planck mass-- then, as we discussed from there-- so this is a central mass energy, OK? Say if your central mass image becomes much, much larger than the Planck scale-- Planck mass-- then rs controlled by the image-- yeah, and [INAUDIBLE] p. Yeah, let me just forget about c. Let me just say-- OK? Then the Schwarzschild radius from now on-- so you go to y, OK? So the Schwarzschild radius will take over as the minimal length scale. OK, so what's going to happen is that if you collided these two particles at very high energies, then at a certain point, even before these two particles meet together, they already form a black holes over the Schwarzschild sites, OK? If this energy is high enough, then we will form black hole, and then you can no longer probe inside the Schwarzschild radius of that black hole. So that defines a new scale which you can probe, OK? 
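Written as a formula (c = 1, order-one factors dropped), the collision argument says that the length scale a scattering experiment of center-of-mass energy \(E\) can resolve is bounded by
\[
\ell(E)\;\gtrsim\;\max\!\Big(\frac{\hbar}{E},\;r_s(E)\sim G_N E\Big),
\]
which is minimized at \(E\sim m_P\), where both terms are of order \(\sqrt{\hbar G_N}=\ell_P\).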
So the funny thing about this Schwarzschild radius is that it's proportional to energy, rather inverse proportion to energy as the standard Compton wavelengths, OK? So after a certain point, when you go beyond the Planck mass, when you further increase in the energy, then you're actually probing the larger distance scales rather than smaller distance scales, due to the funny thing about the gravity and the funny thing about black hole. OK, so actually, this scale increases with p-- increase with you center of mass energy. OK, and the high energies those two longer length scales. OK, this essentially defines the Planck length as a minimal scale one can probe. OK, so when your center of mass energy is smaller than the Planck scale-- than the Planck mass-- then your Compton wavelength of course is larger than lp. But when this is greater than mp, then as we discussed here-- then the Schwarzschild radius, of course, [INAUDIBLE] object will be greater than the Planck side, and will break through the common wavelengths and will be greater than the Planck sides. And this give you, essentially, the minimal radius to probe. Alternatively, we can also just reach the same argument. Simply, I can just write down a couple equations. So let's consider you have uncertainty. So suppose you have a position, data x, and then the answer in the energy or momentum associated with the data x is data p. But on the other hand, the distance you can probe must be greater than the Schwarzschild radius associated with lp, data p. OK, data p. So if you combine these two equations together-- so this is greater than GN H-bar divided by lp. So now I have suppressed the c. So you can see from this equation-- you can see that data x must be greater than H-bar GN, which is lp. OK, which is lp here. This is the same argument as this one, but this is a little bit formal. So the essence is that once you're energy is big enough, then you will create the black hole, and then your physics will completely change. AUDIENCE: [INAUDIBLE] black hole evaporates [INAUDIBLE] they do not contain that information. PROFESSOR: Right, yeah so we will go into that. When the black hole evaporate, we still don't probe the short-- it's still harder to probe the short distance scale. Yeah, we will talk about that later. So yeah, here just a heuristic argument to tell you that because of this, actually the physics are very special. The physics of the gravity is very special. Any another questions? AUDIENCE: As a probe for quantum gravity, I was thinking-- what if there were some phenomenon in which gravity is weak, and it's a macroscopic scale, but there's some kind of coherence happening that will-- maybe on the scale of galaxies or something like that-- that will allow quantum effects to manifest. Like an analogy of what happens in a laser or something like that. PROFESSOR: Yeah, that is black hole. The black hole is the way which gravity can manifest at the quantum effect that matches in scales. And we don't know any other ways for gravity to manifest such quantum physics at large distance scales. OK, so let me conclude this I generally discussion. Again, by reminding you various regimes of gravity. Various regimes of the gravity or quantum gravity. So in this discussion, you should always-- in the discussion I'm going to do in the next minutes, you should always think, when I take the limit I always keep my energy fixed. I keep the energy scale I'm interested in fixed. I keep that fixed. 
So the classical gravity regime is the regime in which you take H-bar equals zero-- take Newton constant finite-- and the regime of a particle physics, we would normally be, sometimes, say, QFT in fixed space time. So this is a quantum field theory in the fixed space time, including curved-- including curved. So this is a regime in which H-bar is finite, while you take the Newton constant go to 0. And then there's, of course, the quantum gravity regime, which is the GN and the H-bar both finite. And then there's a very interesting regime, which is actually the regime most of us work with. So there's also something called the semiclassical regime for quantum gravity. So this is the regime you keep H-bar finite, and you expand this system in Newton constant, OK? So around-- so you expanded whatever quantity Newton constant around-- say G Newton equal to zero. So G Newton equal to 0 is the classical regime. It's the regime the gravity's not important, but including H-bar. And now you can take into account the quantum gravity fact, semiclassically, by expanding around the GN. OK, so this is normally what we call semiclassical regime of gravity. Yes? AUDIENCE: Do we know that G goes to 0 [INAUDIBLE] and it's not some other part? PROFESSOR: Yeah, this is a very good question. So this is indeed the question. Indeed, most of the particles regarding black hole is in this regime. So our current understanding of quantum physics or black hole is in the semiclassical regime, and you treat any matter field H-bar finite, but the gravity's weak. And so there are various indication that this limit is actually not smooth, but the only for very subtle questions. For simple questions, for typical questions, actually this is a limit. This limit is smooth, but there can be very subtle questions which this limit is not smooth. And one such question is this so-called the black hole information loss, and the subtle limit of taking this-- yeah, you [INAUDIBLE] taking this limit. Any other questions? And this is a regime, actually, we will work with most. OK, this is a regime we will work with most. So we will always-- in particular in nature. So right there I'm keeping the H-bar explicitly, but later I will also set the H-bar equal to 1. And so H-bar will equal to-- so we are always typically working with this regime. So H-bar equal to one. And then you will have-- yeah, then you take into account the fact of GN in the perturbations series. Good, so now let's move to the black hole. OK, with this-- yes? AUDIENCE: [INAUDIBLE] Also working in the other sermiclassical regime. I mean finite GN, but expand H-bar. PROFESSOR: Yeah, this is not so much. It's because it's easy-- in some sense it's easy. So in a sense, we are doing a little bit both. Yeah, so later-- right now I don't want to go into that. You will see the effective coupling constant and show the quantum gravity fact is in fact H-bar times GN. It's H-bar times GN. And so the quantum gravity fact will be important when the H-bar times-- when you do perturbation series H-bar times GN. So when I say you are doing same thing GN, essentially it's because I'm fixing H-bar. So you're actually doing perturbation series H-bar GN. So you have to do both. Any other questions? Good. Right, so now let's talk about the black hole. Let's talk about classical geometry. OK, so here I will assume you already have some background in GR-- in general relativity-- and for example, you have seen since Schwarzschild metric, et cetera. 
And if you have not seen a Schwarzschild metric, it's also OK. I think you should be able to follow what I'm going to say. So for simplicity, right now let me consider zero cosmological constant. OK, zero cosmological constant. OK, zero cosmological constant corresponding to-- we consider symptotically Minkowski space time. And so in the space time of zero cosmological constant entered in the space time metric due to an object of mass m. It can be written as-- so this is the famous Schwarzschild metric. And so of course here we are assuming this is a spherical symmetric, and a neutral, et cetera. So this object does not carry any charge. And this m and this f-- it's given by 1 minus 2m, mass divided by r, or it's equal to rs divided by r. OK, so this rs is the Schwarzschild radius. So now c is always equal to 1. So if this object-- so we consider this object is very close metric center. If this object have finite sides, then of course this metric only is varied outside this object. But if the sides of this object is smaller than the Schwarzschild radius, then this is a black hole. OK, and the black hold is distinguished by event horizon and r equal to rs. OK, and the r equal to rs. So at r equal to rs, you see that this f becomes 0. OK, so f equals become 0. So essentially at here gtt-- so the metric for the tt component is becomes 0, and the grr, the metric before the r component become infinite. And another thing is that when r becomes smaller than rs, the f switch sign. So f become active, and then in this case then the r become time coordinates and the t becomes spatial coordinates, when you go inside to the r equal to rs. OK, this is just a feature of this metric. So now let me say some simple fact about this metric. So most of them I expect you know-- I expect you know of them. But just to remind you some of those facts will be important later. OK, so these are mostly reminders. So first, this metric-- if you look at this metric itself, it's time reversal invariant. OK, because if you take t go to minus t, of course the invariant on the t goes to minus t. OK, so this-- actually, because of this, this cannot describe a real black hole. So the real life black hole arise from the gravitational collapse, and the gravitational collapse cannot be a time symmetric process, OK? So this cannot describe-- so this does not describe a real black hole, but it's a good approximation to the real life black hole after this object have stabilized. So after the gravitational collapse has finished. So this is a mathematical-- in some sense, it is a mathematical idealization of real life black hole. OK, so this is first remark. So the second remark is that despite this grr goes infinity, this metric component goes infinity. So the space time is non-singular at the horizon. OK, so you can check it by computing, say, curvature invariants of this metric. You find the number of the-- all the curvature invariant that are well-defined. So this horizon is just the coordinate-- you can show that this horizon is just a coordinate singularity. Which we will see-- actually, we will see it in maybe next lecture, or maybe at the end of today's lecture, that just this coordinate, r and the t, coordinate become singular. The coordinate itself become singular. So this t, we normally call it Schwarzschild time. So let me just introduce a name. So this t, we call it the Schwarzschild time. So Schwarzschild found this solution while fighting first World War, really in the battlefield. 
And a couple months after he finished this metric, he died from some disease. Right, this is that. So another thing, you can easily check yourself with r equal to rs-- the horizon is a null surface-- is a null hyper surface. It's a null hyper surface. The null hyper surface just say this surface contains geodesics which are null. And this is a-- the third remark is an extremely important one, which we will use many times in the future. We said the horizon is a surface of infinite red shift compared to the-- infinite red shift from the perspective of observed infinity, OK? So let me save time, now, to add to this qualifying remark from the perspective of observed infinity. So now let me just illustrate this point a little bit more explicitly. So let's consider observer-- consider an observer-- so let me call oh-- at some hyper surface r, which is close to the horizon. Yeah, say as someplace which is close to the horizon. Slightly outside the horizon. And let us consider another observer, which I call o infinity, which at i go to infinity. OK, very far away from the black hole. So let's first look at this observer. So the i equals infinity, then your metric-- so at i equals infinity, this f just becomes one, because when your r goes to infinity, this r just become 1. And then this become the standard Minkowski space time, written in the spherical coordinates. So we just have the standard Minkowski time written in spherical coordinates. And then from here you can immediately see-- so this t is what we call Schwarzschild time. t is the proper time for this observed infinity. Say, for o infinity. OK, so now let's look at someplace i equal to rh. So at i equal to rh, then the metric is given by minus f rh dt squared with the rest. OK, and to define the proper time for the observer at-- we can just directly write it as minus d tau squared. So that's the proper time observed by observer at this hyper surface, OK? So then we concludes-- so let me call it tau h. So we conclude that the problem time for oh is given by f 1/2 rh times dt. OK, so it relates to the proper time at infinity by this factor. So if I write it more explicitly-- so this is 1 minus rh, divided by rs 1/2 times dt. So we see that as rh-- suppose this observer-- the location of this observer approach the horizon, say, if rh approach to rs, then this d tau h divided by dt will go to 0. So that means compared to the time at infinity the time at r equal to rs becomes infinite now. So-- or approximate, let me say, becomes infinite now. So that means any finite interval-- any finite proper time interval for observer at oh-- for this oh-- when you view [INAUDIBLE] infinity become infinitely null, OK? Become very long time scale. OK, become very long time scale. So you can also invert this relation. OK, you can also invert the recent relation. Say, suppose some event of energy with energy eh-- with proper energy eh-- say, for this observer at oh, for this observer oh. Then because of the time relation between them, because of the time relation between them, then from the perspective of this observer at infinity the energy is given just but you invert the [INAUDIBLE] between the time, because of the energy and time conjugate. So the e infinity becomes eh times this f 1/2 rh. So for this just again says, in a slightly different way, that for any finite eh local proper energy-- so this is a local proper energy for the observer at oh-- this e infinity-- the same event viewed form the perspective of the observer at infinity goes to 0. 
S rh goes to rs. So that e is infinity red shifted-- become infinite red shift. So any process with local proper energy viewed from infinity corresponding to very, very low energy process. So this actually will play a very important role. This feature will play a very important role when later we talk about holographic duality and it's implication, say, for the field series, et cetera. Yes? AUDIENCE: So this is just a pedantic comment, but I think you need a minus sign in front of your f, just to make sure proper time isn't imaginary. PROFESSOR: Sorry, which minus sign? AUDIENCE: In the d tau stage. PROFESSOR: You mean here? You mean here or here? AUDIENCE: Below that. PROFESSOR: Below that, yeah-- oh, sorry, sorry. Thank you, I wrote it wrong. It should be rs rh. Yeah, because rs is above. Sorry, yeah-- so rs is above, so varied rh, so rh is downstairs. Thank you, so rh is-- so I always consider rh as [INAUDIBLE] equal to rs, OK? Any other questions? OK, and so some other fact. And again, I will just list them. I will just them. If you're not familiar with them, it should be very easy for you to go through them, to re-derive them, with a little bit knowledge in gr. So number four is that it takes a free fall-- free fall means we just follow geodesics-- free fall of a traveler a finite proper time to reach the horizon-- say, from the infinity-- but infinite Schwarzschild time. So also you can easily check that it actually takes infinite Schwarzschild time for object to fall through the horizon. But from the free fall observer itself, it's actually just finite proper time. And so from the perspective of the observer at infinity, it looks like this object never fall into the black hole. It's just frozen at the horizon. So another remark is that once inside the horizon-- that means when r becomes smaller than rs-- the traveler can not send signals to outside, nor can he escape. So that's why this is called even horizon. So we will see this slightly later. In next lecture we will see this explicitly. So finally, there are two important geometric quantities associated with the horizon. Two important geometric-- OK? So the first one is the area of the spatial section. So suppose-- so let's consider we are at i equal to rs, and then here you just said r equal to rs, and then this is a two-dimensional sphere, OK? So this two-dimensional sphere corresponding to a spatial section of the horizon. So first, A is the area of a spatial section. So you can just say let's look at the area of this part with the r equal to the horizon radius. So this just give you a equal to ah ah, equal to 4 pi rs squared. And this rs is 2GN, so this become 16 pi GN squared. OK, so this is one of the key quantities of a black hole horizon. It's what we call the horizon area, OK? What we call this horizon area. And the B is called the surface gravity. B is called service gravity. So the surface gravity is defined by the acceleration of a stationary observer at the horizon as measured at infinity. OK, at infinity means at spatial infinity. So you can-- if you are not familiar with this concept of surface gravity, you can find it in standard textbooks. For example, the Wald say page 158, and also section-- OK. Just try to check it there. So, say, suppose you have a black hole. Say this is the Schwarzschild radius. Of course, things want to fall into the black hole. So if you want to remain at a fixed location outside the black hole, then you really have to accelerate. 
You have to fire some engine to keep yourself to stay there. And you can calculate what is acceleration you need to be able to stay here, OK? And once you are closer and closer to the horizon, that acceleration becomes bigger and bigger-- eventually, becomes infinity when you approach the horizon. But because of this red shift effect, when this acceleration is viewed from the units for observer at affinity, then you have infinity divided by infinity, then turns out to be finite. And this is called a surface gravity. It's normally called a kappa. Normally called a kappa. And this is one of basic quantities-- basic geometric quantities of the horizon. So I will not derive it here. I don't have time. And if you want to see this, Wald's book. So you can calculate that this is just given by 1/2 the derivative of this function f evaluated at the horizon location. OK, so f is equal to 0 at the horizon, but f prime is not. OK, so you can easily calculate. So this is 1 over 2rs from here, and is equal to 1 over 4GN. OK, so this is another very important quantity for the black hole. Actually, I think right now is a good place to stop. OK, so let's stop here for today, and the next time we will describe-- then from here we will discuss the causal structure of the black hole, and then you will see explicitly some of this statement, if you have not seen them before.
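For reference, the formulas written on the board in this part of the lecture, collected in one place (c = 1):
\[
ds^2=-f(r)\,dt^2+\frac{dr^2}{f(r)}+r^2\,d\Omega_2^2,\qquad
f(r)=1-\frac{2G_N m}{r}=1-\frac{r_s}{r},\qquad r_s=2G_N m,
\]
\[
d\tau_h=f^{1/2}(r_h)\,dt\;\xrightarrow{\;r_h\to r_s\;}\;0,\qquad
E_\infty=f^{1/2}(r_h)\,E_h\;\xrightarrow{\;r_h\to r_s\;}\;0,
\]
\[
A_h=4\pi r_s^2=16\pi G_N^2 m^2,\qquad
\kappa=\tfrac{1}{2}f'(r_s)=\frac{1}{2r_s}=\frac{1}{4G_N m}.
\]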
MIT 8.821 String Theory and Holographic Duality, Fall 2014
Lecture 19: Mass-Dimension Relation
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from 100 of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HONG LIU: OK, good. So let me first remind you what we did last time. So last time we discussed that, for any operator, say some operator O in the field theory, we expect this one to one correspondence to some field, say, phi in the gravity side. And of course, the quantum numbers of those, they should match because of the four [? interpretations ?] of some symmetry groups and four [? interpretations ?] of the same symmetry groups. Of course, their quantum numbers should also match. And, in particular, on the field theory side, say if you have such an operator, it's natural to add the source term and then to deform the Lagrangian in such a form. And then we explained that, this phi may be considered as the boundary value of phi in the boundary of ADS. So the boundary value of phi, and it's mapped to the-- if phi have some boundary value, then that is corresponding to your boundary series Lagrangian have such a term. You know, Lagrangian. And then we also discussed this for the vector field, and for conserved current, say, the case which is a conserved current, then that implies that, in a particular example of this, if you have a conserved current that that's corresponding to, you must be dual to a gate field. And in particular, the stress tensor must be to dual to the magic deformation in the field series site. So any questions regarding this? Good. OK. So now let's try to develop more this relation. So this is normally called the operator field mapping. And so let's look at this relation a little bit more. So my immediate question, one can ask-- so the obvious question is regarding, say, the spin of the operator that should map to the spin of this operator, et cetera. But if you can see the conformal theory, which is the scaling variant, a very important quantum number for scaling operator is the scaling dimension. As we mentioned before, we said, in the conformal theory, everything is scaling variant. And then you can assign any operator-- any operator can be transform under appropriate representation of the conformal symmetry of the scaling dimension. That's one of the most important quantum numbers. So now let's look at how the scaling dimension of this operator are related to the physical properties here. Because here, there is no low scaling dimension in the sense this is just some gravity field and here is isometric it's not the scaling symmetry. And then we also can try to understand the worries of properties of this phi, how they are reflected in O. So that's the purpose of today's lecture. So for this purpose, now, let's look at the gravity side. So as we said last time, we always work in the regime of the semiclassical gravity. So on the gravity side-- OK. So what we will show is that, actually, the dimension of this operator can be directly related to the mass of the corresponding field in the gravity site. And so this is normally called a mass dimension relation. So now let's just consider, on the gravity side, so the action can generically be written as the following. So everything is controlled by the gravitational interaction. So now I have introduced a new rotation, which we explain. So you have Einstein gravity. And then you have some matter fields. OK? You have some matter fields. 
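One common way to write the action being described is the sketch below; the exact normalization of the matter terms and of the cosmological constant piece (which supports the AdS background) varies between references, and the identification \(2\kappa^2=16\pi G_N^{(d+1)}\) is the usual convention rather than something fixed by the lecture:
\[
S=\frac{1}{2\kappa^2}\int d^{d+1}x\,\sqrt{-g}\,\Big(\mathcal{R}-2\Lambda+\mathcal{L}_{\rm matter}\Big),\qquad
\mathcal{L}_{\rm matter}=-g^{MN}\partial_M\phi\,\partial_N\phi-m^2\phi^2-\tfrac{1}{4}F_{MN}F^{MN}+\cdots,
\]
with the ellipsis standing for nonlinear terms.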
Matter Lagrangian. So this is the generic form, say, of action in the gravity side. So I've introduced this 2 kappa squared, which is essentially whatever is the Newton constant in this d plus one dimension. So normally we put the 6 pi G Newton constant here. So d plus 1 just means the Newton constant in d plus 1 dimension. And I just now called this 2 kappa squared for convenience. And typically, the matter fields, say, suppose you have a scalar field, then you will have-- say for inform, plus, say, nonlinear terms. And if you have, say, if you have vector fields, then you will have a Maxwell actions, et cetera. So just whatever matter field you have in the gravity side, and they just have this-- so now let's consider small perturbations around the pure AdS. So pure AdS, you only have a metric, and all the matter field is essentially zero. So now let's consider the small perturbations around the AdS. Or maybe I should say around the AdS. So first thing you notice is that these 2 kappa squared is multiplied over all action. So that means that all the interactions here, essentially, are controlled by this gravitational interaction. Yeah, it's all controlled by kappa. And when you can see the small perturbations, say, when you can quantize the small perturbations, et cetera, it's convenient to use a canonically normalized action. But if you look at the matter field here, the [INAUDIBLE] is this 1 over 2 kappa squared, which is a lot convenient. Actually, with that 2, then I don't need 2 here. So let me just get rid of the 2. So it's convenient to use a canonically normalized action. So it's considered a small [INAUDIBLE] around AdS, say, for convenient to canonically. normalize. AUDIENCE: Excuse me, why-- HONG LIU: Yeah, one second. Normalize the kinetic term of small perturbations. OK? Which means it's convenient when we can see, say, for example the series for phi, when we consider scaling, they take phi-- say k, phi equal to kappa phi prime, and then you can see that in the phi prime, we'll have a canonical action. And say, for example, if you can see the magic perturbations, and then you can write it as hmn. So this is the AdS matrix. OK? Then the phi prime and hmn will be canonically normalized. So later, we'll just drop this prime. I will just call this phi. So we just do a scaling operation. Yes? AUDIENCE: Why did the 1 over 2? HONG LIU: Oh no, it's just because there is 1 over 2 here. AUDIENCE: But what path, you have r-- you say the r is in the action and r should-- HONG LIU: No, I'm talking about the matter field. AUDIENCE: But in the action, you write the r minus 2 and the plus [INAUDIBLE]. HONG LIU: Yeah. AUDIENCE: So ordinarily, should it be r plus 1 over 2 [INAUDIBLE] something else? HONG LIU: It doesn't matter. It's just how I normalized things. You can normalize-- if you worry about the 1/2, it doesn't matter. Here we're not worried about 1/2. AUDIENCE: OK. HONG LIU: Good? So now here's the important point. So here is the important point. And remember, when we have the relation between the Newton constants, so the Kappa squared is a Newton constant. And remember, this is proportional to our n squared. Remember what we discussed last time. So that means, under this scaling, so kappa is actually of all the 1 over n, kappa is the Newton constant it's a square of Newton constant. It's very small. So this is in units of literature. And we always can see that in the large N image so that the Newton constant is small. And so this is small. 
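In formulas, the rescaling just described is (a sketch; \(\phi'\) and \(h_{MN}\) are the canonically normalized fluctuations, and the prime is dropped later on):
\[
\phi=\kappa\,\phi',\qquad
g_{MN}=g^{\rm AdS}_{MN}+\kappa\,h_{MN},\qquad
\kappa^2\sim G_N^{(d+1)}\sim\frac{1}{N^2},
\]
so for \(\phi'\) and \(h_{MN}\) of order one, the fluctuations around AdS are small in the large-\(N\) limit.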
So in this regime, if we keep phi-- so for canonically normalized field, say, if [INAUDIBLE] hmn are of order 1, then their perturbation is naturally small. Then their perturbation is naturally small because they are essentially controlled by the kappa. And it also means, if you plug this into the action, so it expands also long linear terms. So it also means that nonlinear terms are order kappa or higher. So this is very easy to understand. So let's consider, say, phi cubed term come from here. So previously, we have 1 over kappa squared, phi cubed term, so after this scaling, then I just have kappa phi cubed. So for phi of all the 1, then the cubic term will be suppressed compared to the quadratic term because the quadratic term, after the scaling, will be of all the 1, will be all the 1. So the lambda near term will be suppressed or higher and suppressed. So except if we are interested in considering the contribution of those higher order terms, then to leading order, we can actually just consider, for small fluctuations, we can just consider the quadratic action. This means that to leading order? So we can consider the quadratic action. So this means we can just-- so this corresponding to quantize the free series, OK? So this corresponding to quantize the free series. Remember, we always use h bar equal to 1. So for those matter fields, h bar is always equal to one. And after you do the scaling to get rid of the kappa, we just treat them as the free field theory quantization. Good? Any questions about this? So now let's look at scalar fields, just for illustration. Let's look at what's the behavior of a massive scalar field. So now let's consider a massive scalar field. for illustration. And we consider some scalar field phi in the [? back, ?] which is due to some scalar operator O in the boundary. Just imagine. OK? So now let's consider the quadratic action for phi. So we can write down the quadratic action. So after you have scaled out this kappa, and the quadratic action can be written explicitly as the following across all the kappa terms. So this is just a free action. And of the GMA, just the AdS. So the quadratic order, this just in the background AdS. So we'll use this notation to denote the [INAUDIBLE] component of AdS. So we're most of the time using this metric. This is my background AdS metric. And this is my action. So we'll also use of the notation that the x m actually, I should write this matrix here, but it doesn't matter. The x m should be considered as z x mu. And x mu is the coordinates parallel to the boundary. So do you remember how to quantize the free field series action? So this is the time we need to remember that. So the first thing is how you solve the equation motion. So we actually don't need to go to details. I will just mention a few important features. OK, so this is just the standard [INAUDIBLE] equation for massive scalar fields in the curved spacetime. And this gMN is just given by this metric. So this is a seemingly complicated partial differential equation. But here we have lots of symmetry. So there is a translational symmetry along the directions parallel to the boundary. Then we can just write down a Fourier transform. So because of the translation symmetries in x mu directions. So we can just do a Fourier transform. We can write phi z x mu in terms of, say, DDK. So this k.x is just the standard Minkowski contraction because of the t and x part. And essentially it's slightly a Minkowski metric. 
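In the Poincaré coordinates used here (mostly-plus signature; the \((2\pi)^d\) in the Fourier measure is a convention choice), the background metric, the quadratic action, and the Fourier decomposition are
\[
ds^2=\frac{R^2}{z^2}\Big(dz^2+\eta_{\mu\nu}\,dx^\mu dx^\nu\Big),\qquad z\to0 \text{ at the boundary},
\]
\[
S_2=-\frac{1}{2}\int dz\,d^dx\,\sqrt{-g}\,\Big(g^{MN}\partial_M\phi\,\partial_N\phi+m^2\phi^2\Big),\qquad
\phi(z,x^\mu)=\int\frac{d^dk}{(2\pi)^d}\,e^{ik\cdot x}\,\phi(z,k^\mu),\quad k\cdot x\equiv\eta_{\mu\nu}k^\mu x^\nu .
\]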
So [INAUDIBLE] here, k.x is just the standard Minkowski contraction. So now it's the first transform. Then you just plug this in. Then this is very easy to write down. The only derivative now would be just in the z direction. Because then all the derivatives in the t and x can be replaced by k. And then the equation becomes very simple. So it takes you a couple of seconds to write it down. So let me just write it down. So it can be written in the following form. And then the k squared just is the standard one. Again, the k squared is just as the standard Lorentz contraction. So omega would be the k zero in the time direction. We're just do the standard Lorentz contraction. So k mu. So now, this equation can actually be solved exactly. So normally, when you quantize it, you solve the equation motion, then you find the complete basis of solutions. Then you expand in terms of that complete the basis, the co-option of the expansion would be your creation and annihilation operators. And then that's how you quantize this series. So this equation can actually be solved exactly. So you can actually easily find out the complete set of modes. But it would be actually not needed for the current purpose right now. So maybe not solve it. So we'll leave it here. But what will be important for us is consider the symptotic behavior of the phi, the phi near the boundary. Now, let's consider the behavior of a phi near the boundary, which will be very important for us. Now let's just consider behavior of phi near the boundary. So near the boundary, z goes to 0. So this term compared to this term is always much smaller because z goes to 0. m squared R squared is finite. So we can drop this term. But kinetic term, typically, cannot drop because kinetic term depends on 1. So we just throw away this term. So as z goes to 0, and maybe also expand this term more explicitly. Then what you find, just drop this term and then write in this term more explicitly. Then you find the equation can written as following. So this is only valid when z equals to 0, leading order. So now this is equation has a very nice form because this equation is homogeneous. So z squared, then the partial z squared, z partial z, and then you have this thing which there's no derivative. And so this is not in some sense, homogeneous in the derivative. Such kind of equation you can always solve it by some power of z. So this just write down answers. So assuming phi is proportional z to some power. And this power, let me call it delta. So you plug these answers into this equation, then you get [INAUDIBLE] equation. So you can easily see, from first term, you get delta, delta minus 1. From the second equation, you get 1 minus d times delta, m squared r squared. And this is a simple quadratic equation now you can immediately solve. So you found two solutions. Find two solutions. So we are writing this in the form 1/2 d plus minus nu. I will define this thing to be my nu, so this is definition. So I will introduce the following notation. I will take the plus sign. I call the plus sign delta. So from now on, delta refers to plus sign. And sometimes I use the notation called delta minus, which is the minus sign, which is also equal to d minus delta. So this is the definition. So now I use this as the definition. So from now on, my delta is always d divided by 2 plus this number. So now what we have found is that as z go to 0, what we have found is that as z goes to 0, that destabilizes. So as z goes to 0, the phi kz has the following behavior. 
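Explicitly, the steps just described give (with \(k^2=\eta_{\mu\nu}k^\mu k^\nu=-\omega^2+\vec k^2\))
\[
z^2\,\partial_z^2\phi+(1-d)\,z\,\partial_z\phi-\big(k^2z^2+m^2R^2\big)\phi=0 ,
\]
and near the boundary, dropping the \(k^2z^2\) term and substituting \(\phi\propto z^\Delta\),
\[
\Delta(\Delta-1)+(1-d)\Delta-m^2R^2=0
\quad\Longrightarrow\quad
\Delta_\pm=\frac{d}{2}\pm\nu,\qquad \nu=\sqrt{\frac{d^2}{4}+m^2R^2},
\]
with \(\Delta\equiv\Delta_+\) and \(\Delta_-=d-\Delta\).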
So you have two independent solutions. So then I can put arbitrary [INAUDIBLE], z d minus delta to exponents, and another one is Bk z to the delta. Then plus have all the corrections. OK? So this is two leading behaviors in z goes to the zero limit. And of course, I can also go to the coordinate space. Then this is just the x, z, which is Ax x z d minus OK? So first notice that this exponent, this d minus delta exponent is always smaller than delta. So this term always dominates. So this is the leading behavior. And this is the sub-leading behavior. So keep this in mind. So now we have found the fourth symptotic behavior of the solutions. And now we can discuss physics. Now we can discuss some physics. Now let me make remarks. For the first remark-- any questions so far? Good. So the first remark, you notice that this exponent are real if m squared R squared greater than d minus 4. Then you say, ah, for sure, these exponents are all real. Because normally, we only consider the positive mass squared. But [INAUDIBLE] idea is something very interesting happening. It's actually, in AdS, the [INAUDIBLE] mass squared also makes sense. In fact, you can show, as far as those exponents are real, means that, as far as, so one can show, as far as the star is satisfied, the theory is actually well defined. So in AdS, you are now-- you have an [INAUDIBLE] mass squared. But if you violated this, then you find instability. This is a very nice story, but I don't have time to discuss in detail here. But my former student [? Nabil Ackbar ?], when he was doing TA for me last time, he wrote a very nice note explaining how this works. And I put that note on the web. So this equation is called the BF bound. F is of our colleague Freedman. And the B is a very complicated name I always pronounced wrong. So maybe not try here. Anyway, so [? Nabil ?] gave a very nice explanation of where this comes from. And he shows that when you violate this, you actually get instability. And this maps into a very nice quantum mechanics problem, the quantum mechanics problem of minus 1 over x squared. Some of you may have experience with that problem. Anyway, so please take a look at those. I put it on the web. And it's a very instructive discussion. But we'll take a lecture to do that. Anyway, so now I just make the claim. It's that, as far as those exponents are real, the theory's well defined. OK? But if this exponent is violated, then there exist modes which are exponentially growing in time indicating the system in unstable. So this star is called the BF bound. The BF bound. So it's actually using instructive to compare with a standard-- actually, I forgot to use this blackboard. Anyway. Yeah. So [INAUDIBLE] to compare with Minkowski spacetime, so this compare with Minkowski case. So, in the Minkowski case, if you have a free particle, say, if you have a free massive scalar field-- so of course, we know this is our equation motion, OK? And from here, if you go to the Minkowski space, every direction is translation variant, then you just get this dispersion relation. You just get this dispersion relation. You can write phi equals to mass at omega t. Let me just write it more carefully here. You can write phi [INAUDIBLE] omega t plus ikx. Then plug it in. Then you find for omega squared equal to m squared plus k squared. 
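Collecting the boundary behavior and the bound just discussed (the "B" of the BF bound is Breitenlohner; the "F" is Freedman):
\[
\phi(z,x)\;\xrightarrow{\,z\to0\,}\;A(x)\,z^{\,d-\Delta}\big(1+\cdots\big)+B(x)\,z^{\Delta}\big(1+\cdots\big),
\qquad
m^2R^2\ge-\frac{d^2}{4}\quad\text{(BF bound: real }\nu\text{, stable)} .
\]
For comparison, in Minkowski spacetime \(\phi\propto e^{-i\omega t+i\vec k\cdot\vec x}\) gives \(\omega^2=m^2+\vec k^2\), so any \(m^2<0\) produces an imaginary \(\omega\), and hence exponentially growing modes, for small enough \(|\vec k|\); in AdS the instability sets in only below the BF bound.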
So in Minkowski spacetime we don't allow m squared to be smaller than 0 because if m squared is smaller than 0, then that means, for certain range of k, say for k, for those values of k which are smaller than the absolute value of m squared, then this will be negative. So for those things, the omega will be purely imaginary. You can be plus minus purely imaginary. And then you plug it in here. Then you find it in phi and typically have exponential omega t plus exponential minus omega t. So the phi can have these two behaviors. And then, of course, this will exponentially grow. So this tells you that the magnitude of phi will exponentially grow with time. And so that indicated this instability. So this is the same as the analog I said earlier before. In the Minkowski spacetime, if you have a mass squared, and then your scalar field one wants to slide down, and this just tells you the behavior wants to increase. So this the phi. So this is V phi. Then the phi wants to increase. So this is phi to 0. But in AdS, something nicer happens. So in AdS, the instability only sets in if the mass squared becomes sufficiently [INAUDIBLE]. So if you just have the [INAUDIBLE] mass squared, if it's greater than this value, it's still OK. It's still OK. So the reason is actually very simple. So the reason is very simple. You can try to convince yourself by looking at the equations. And you can see the physics from the equations. But the basic physical reason is following. In AdS, so you can just look at that equation. Because of the spacetime curvature-- so the constant modes are not allowed. So you see here, the most dangerous modes are the constant modes. And when you have constant modes, and then m squared becomes-- no matter how small it is, no matter how small m squared is, if you have constant modes, and constant mode means k equal to 0, then that means that omega squared will always be negative. As far as m squared is nonzero in the negative, there's always constant mode which is unstable. But in AdS, because of the spacetime curvature, the constant mode is not allowed. So you always have some kind of kinetic term in the sense that you always have some kind of kinetic energy. So a field is forced to have some kinetic energy, which can compensate this negative m squared. But this won't happen when the mass becomes too negative. And then you have instability. Then you have instability. I suggest to you-- so this is just words, but those words can be reflected by looking at the structure of those equations and try to get some feeling about these by looking at those equations a little bit more carefully. OK, so this is the first remark. This is the first remark. So the second remark, is that we know that the AdS have a boundary. And the light array which of the boundaries in the front of time. So this is the one important feature we discussed at the beginning, which is the boundary in the finite time. In fact, in the AdS, it's just pi over 2 in the finite time. So that means, actually, energy can be exchanged on the boundary. Essentially, you can send stuff in or send things out. And so we actually need to impose the-- so in order to have a sensible physical system, you need to impose appropriate boundary conditions between appropriate boundary conditions. So the most important basic criterion-- again we are not going to the detail. And again, [? Nabil's ?] notes discussed this because he has to-- in order to discuss instability, he has also to discuss the boundary condition, et cetera. 
And you can look at [? Nabil's ?] notes. So basic criterion to impose a right boundary condition is to make sure that your energy is conserved and that your energy don't dissipate into nothing, don't dissipate at the boundary so that you have a well-defined system. Good. So now. So when you do canonical quantization, then what do you do is that you expand phi in terms of a complete set of normalizable modes. Of course they also have to be no normalizable as, normally, in the quantum mechanics, normalizable modes which satisfies the appropriate boundary condition. So before talking about the explicit boundary condition, let me first mention that issue of normalizability. So to talk about normalizability, you have to devalue in the product. So here we have the standard inner product for quantum field series in the curved spacetime. So here we have the standard inner product. And let me just write it down. So this is not specific to AdS. This just works for any quantum field theory, for any free scalar field series in the curved spacetime. So, for that series, you can define an inner product that follows. So consider. You have two modes, phi1, phi2. And then the inner product can be defined as the following. You choose a constant the time slice. You have to choose a constant time slice, which I called sigma t. So this is a constant slice. By time, I really mean this time. You choose a constant time slice. And then, of course, you integrate over spatial coordinates on these constant time slice. So this is the inner product. So this is the standard inner product. And you just plug in the AdS metric that gives you the explicit form for the AdS. So if you're not familiar with this, and you can check that this makes sense, that by showing that, actually, this thing is actually independent of t, so it does not depend, actually, your time slice. Because you don't want to your definition of inner product depend on specific time you choose. Then the normalizable would be the standard condition, whether this thing is finite, or infinity, et cetera. And you plug-in some modes with itself. And if it is finite, then it's normalizable. And if it's infinite, then it's not normalizable. So now this is our symptotic behavior. And you can check yourself, you can also convince yourself that the most dangerous behavior because the metric diverges near the boundary. So the most dangerous part of this integral is near the boundary because your metric diverges there. And you can just plug this behavior and this metric behavior. And then just pay attention to the behavior near the boundary. Then you can decide which mode is normalizable, which mode is non-normalizable. Then you find, again, this takes you a few minutes to do. So you'll not do it here. So I will only talk about the real nu. From now on. Just real nu. It means that we always satisfy that bound. So we find that, for that mode, so this mode always goes to 0. So you have these two modes. This mode always goes to 0 because delta is always positive because nu is positive, and nu is non-active, and d minus t over 2 is positive. So this is always goes to 0. And you can show that this is always normalizable. OK. So there's no divergence at the boundary. But for these modes, you're going to see these modes become dangerous when the mass become very large. When the mass becomes very large, then the nu is big. When nu is big, eventually the delta can be pretty big. And then can be greater than d. And then this will be an active exponent. 
And then this will blow up. So it turns out this is actually non-normalizable indeed. So this is non-normalizable for nu to be greater than 1. You actually don't need to for, mass to be very large, because for nu to be greater than or equal to 1, this is already non-normalizable. But it's an interesting thing. You said this mode is also normalizable when nu is in this range. Good. Any questions? Yes AUDIENCE: Can we define inner product some other way? HONG LIU: No, this is the only way. This is the canonical way, just generic for quantum field theory in the curved spacetime. This is not specific AdS. Yeah, essentially, this is a unique one, yeah I would say. AUDIENCE: So I'm just trying to understand inner product. In the case that I take, make it a norm in the sense that I take inner product of a field by itself. So why, for example, should this be positive? HONG LIU: No, no this is in general not positive. In general, this is a what we know in the Klein-Gordon field, say, if you do in the Minkowski spacetime, this in general, not positive definite. AUDIENCE: Right, sure. HONG LIU: Yeah, same thing in the Klein-Gordon fields. Yeah, but the [INAUDIBLE] is, so that means this thing does not make sense to interpolate as a probability. But you can still use this as a definition of normalizability. AUDIENCE: So it still does the property that something will only have so it's not a semi-norm in the sense-- like it is actually a good inner product? HONG LIU: Yeah. AUDIENCE: Isn't it this it just the generalization of probability current of [INAUDIBLE]? HONG LIU: Yeah, no, this is the same as the Klein-Gordon equation. This is the same inner product as you would define for the Klein-Gordon field, a free massive scalar field in Minkowski spacetime. And that this is just the generalization to a general curved spacetime. Good. So now, yeah, so we see that this is always normalizable. And this sometimes is normalizable and sometimes not normalizable. So now, let me talk about the boundary conditions. So for the time reason, I will not show that the boundary condition I'm going to talk about will give you a well-defined energy in the sense that there's no energy exchange at the boundary. And for that, take a look at [? Nabil's ?] notes. I will not go into detail on that. But the boundary condition I say will satisfy that property. So let's first look at when nu is greater than or equal to n. In this case, the answer is obvious. You don't even need to check the energy conservation. Because you only have one possibility because this mode is non-normalizable. And a non-normalizable mode, you cannot keep them when you do the quantization. Quantization, you have to use normalizable modes. So the boundary condition you have to choose in order that conical quantization is equal to 0. So you should only keep this mode of a nu equal to 1. Because this is non-normalizable. So now, when nu is in this range, so both modes are normalizable, but you cannot choose both of them. You cannot allow both of them because then it leads to the non-conversation. You need to put some boundary conditions there. So then, naturally, the equal to 0 also works in this range. So this is called the standard quantization. You can still impose, even though the A modes become normalizable, you can still impose A equal is 0. So tentatively you can also impose B equal to 0. Just take one of them. Because A now is normalizable. So this is called the alternative quantization. 
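The inner product and the quantization choices described above can be summarized as follows (one standard convention for the Klein-Gordon inner product; overall sign and factor conventions differ between references):
\[
(\phi_1,\phi_2)=-i\int_{\Sigma_t}dz\,d^{d-1}x\,\sqrt{-g}\;g^{tt}\Big(\phi_1^*\,\partial_t\phi_2-\phi_2\,\partial_t\phi_1^*\Big),
\]
which one can check is independent of the choice of constant-time slice \(\Sigma_t\). With \(\phi\to A\,z^{\,d-\Delta}+B\,z^{\Delta}\) near the boundary: for \(\nu\ge1\) the \(A\)-mode is non-normalizable and one must set \(A=0\); for \(0<\nu<1\) both modes are normalizable and one chooses a boundary condition, either \(A=0\) (standard quantization) or \(B=0\) (alternative quantization).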
So sometimes, it's also possible to impose a mixed boundary condition between them—to impose a mixed boundary condition. I will not go into that. Anyway, so for example, if you look at [? Nabil's ?] notes, you can easily check for those boundary conditions that there's no energy flux through the boundary. So, from now on, I will use the following terminology. From now on, we'll use the following terminology. So from now on, when I say normalizable, I always mean the mode which is picked by the quantization, OK? So when I say normalizable, in this case, of course, this is normalizable. When I say normalizable, for nu greater than 1, there's no ambiguity—it's just the B mode. And when nu is between 0 and 1, normalizable refers to the mode which you choose to quantize. And then the other you consider to be non-normalizable. Is it clear, the terminology? So in the standard quantization, the B mode is normalizable, and the A is non-normalizable, even though, mathematically, A is normalizable. And in the alternative quantization, the B is non-normalizable, even though, mathematically, it is normalizable. So normalizable will mean the behavior specified by the quantization. I'll call the other modes non-normalizable modes. So with this terminology, I always have one normalizable mode and one non-normalizable mode. OK, let's move on a little bit more. So this is the second remark. And the third remark—let me just erase this. So the third remark: normalizable modes, by definition, are the ones we use to do canonical quantization. And they are the ones we use to build up the Hilbert space. But now, since we have an equivalence between the two theories, the Hilbert spaces of the two theories must be the same—the states of the two theories should be the same. Otherwise, we cannot talk about the equivalence. So that means that any normalizable mode—anything which falls off, say, for example, in the standard quantization, like z to the power delta near the boundary—you should consider as mapped to some state of the boundary theory. So this is, actually, a very general statement. So this will also work if you include those higher order corrections which are suppressed by kappa. So, in general, you can apply it to a complicated interacting nonlinear theory on the gravity side. And the normalizable modes are always the modes which are in your Hilbert space on the gravity side. And so that means they must correspond to some states in the boundary field theory. Good. So now, let's look at the interpretation of the non-normalizable modes. Non-normalizable modes, certainly they are not part of the Hilbert space. So if they are present, in particular—let me just finish this, then we'll say it. So, if present, they should be considered—so sometimes, they're just not present. But, if in some situations they are present, then you should view them, view such kind of non-normalizable modes, as defining the background. So by background, for example, the AdS, pure AdS, which we are expanding around, is our background. Yeah. So you can imagine, say, [INAUDIBLE] non-normalizable modes deform the AdS, then you have a new background, then you have a new background. And in that background, you're not supposed to quantize them, because these are non-normalizable modes. These are non-normalizable modes. AUDIENCE: Yes, sorry, I guess I didn't quite get the terminology normalizable for 2, I guess? You mentioned it, but could you repeat it? HONG LIU: Sorry? AUDIENCE: Normalizable, not normalizable.
HONG LIU: Yeah. Normalizable means the behavior specified by the quantization. AUDIENCE: So I don't think-- I don't quite understand-- HONG LIU: Yeah, let me just repeat. For nu greater than 1, normalizable is just normalizable, because there's only one normalizable mode. For nu between 0 and 1, mathematically, both modes become normalizable. But when we quantize them, we [INAUDIBLE] one of them. OK? As I said, in order to preserve energy conservation, we have to select one of them. So there are two possibilities. In the standard quantization, you set A equal to 0; in the alternative quantization, you set B equal to 0. So the one you set to 0 is considered to be non-normalizable in this terminology. And the one you allow—which means the one you use to expand in the canonical quantization—is called normalizable. I will elaborate on this a little bit more. Yeah. Yeah, after five minutes, then you will see. Yes? AUDIENCE: What do you mean by the other ones being present? If we don't quantize them, have they appeared? HONG LIU: Yeah, they can appear. I will, again, explain in one minute. So now, I'm going to make a connection from this statement to the statement we were making last time. So last time we said, if a gravity field has a boundary value, then this boundary value can be considered as a source multiplying the operator, which you can add to the Lagrangian. You can add it to the action. And so this is precisely like that. It's precisely like that. So let me just repeat. Let's consider the standard quantization, just to be specific—say, standard quantization. In this case, the non-normalizable mode corresponds to A not equal to 0. So if the non-normalizable modes are present, this corresponds to A not equal to 0. And we see from here, this term, the A term, always dominates the other term. So, in other words, when you go to the boundary limit, it's the A which determines the boundary value of the field. Let me just say: the A is what determines the "boundary value" of the field. I put that in quotes. AUDIENCE: [INAUDIBLE]. HONG LIU: A equal to 0 when you do the canonical quantization. AUDIENCE: Oh, OK. [INAUDIBLE] HONG LIU: But A equal to 0—you mean this is the condition on the normalizable modes. So here, we are discussing the interpretation of the non-normalizable mode. So, now, suppose A is nonzero. Suppose the non-normalizable mode is nonzero. Suppose, now, the non-normalizable mode is nonzero. We are done with the normalizable modes—that is a condition for the normalizable modes. OK? So since this term dominates, the A should be interpreted as the boundary value of the field. And if you have a nonzero A—say, if A of x is equal to some phi of x—then according to what we discussed last time, up to this kinematic factor, which you need to strip away because it just comes from the equation of motion, if A equals phi, not equal to 0, that implies that the boundary theory action should contain the term. So what this means is that the non-normalizable modes are certainly not part of the Hilbert space. If they are present, they actually determine the boundary theory itself. So the non-normalizable modes determine the boundary theory itself. So we will later see examples of this. But let me just repeat: if you look at a gravity solution, you look at its asymptotic behavior near the boundary. And if you see—let's, say, consider two solutions, two different solutions.
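In equations (a sketch, with phi(x) denoting the coefficient A(x), up to a normalization that is fixed a little later), turning on the non-normalizable mode corresponds to deforming the boundary theory by a source term:

A(x) = \phi(x) \neq 0 \quad \Longleftrightarrow \quad S_{\text{boundary}} \ \to\ S_{\text{boundary}} + \int d^d x\, \phi(x)\, O(x).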
If the two different solutions have the same non-normalizable behavior—say the non-normalizable modes are the same, and they only differ in the normalizable modes—then you would say these two solutions describe two states of the same theory. Because they only differ by normalizable modes, they describe different states in the same theory. But now, if I look at two such solutions, and these two solutions differ by the behavior of the non-normalizable modes, then we would say these two solutions correspond to two different theories, because they are dual to two different theories. Because different non-normalizable modes mean adding different terms to your boundary theory action. Have I answered your questions? AUDIENCE: So basically, you just assume any field which is a type of-- maybe a combination of those two modes, but if the-- HONG LIU: Forget about linear combinations of the modes. Just say, if you consider two solutions which only differ by normalizable modes, then they correspond to different states of the same theory. And if they differ by non-normalizable modes, that means they describe different theories. Even the theories they describe are different. And so that's why we call this the background. We call this the background because the asymptotic conditions at infinity have changed. AUDIENCE: I have a question. So, in the alternative standard upon the quantization case that-- which would be equal to 0. HONG LIU: Right. AUDIENCE: But we know that the d minus delta is always larger than delta. But in that case, it seems near the boundary. The boundary determines, not the number. HONG LIU: Right, right. Yeah. That's a very good question. So for that, you have to stretch a little bit. Yeah. Yeah. Yeah. Good? So let me finish my last equation before we break. So more precisely, because of this z term, in the standard quantization—yeah, let me just write it there, so write it here. Let me just replace this one. In the standard quantization, if you have such a term in your boundary theory Lagrangian, that corresponds to this: phi of x is equal to the limit as z goes to 0 of z to the power delta minus d times phi of z, x. So I have to multiply by a power so that it will just extract this A factor from the behavior of phi. And so this is the relation. So this is the important relation. So, last time, we only said that phi should correspond to the boundary value of this capital phi. But now, by working out the asymptotic behavior, we have refined that statement. Now it's a really precise mathematical statement. Any questions regarding this? AUDIENCE: Is it a guess or-- HONG LIU: No, this is just a [INAUDIBLE]. This limit precisely extracts this A term. This term, because that's the opposite power to this one, and this one will be smaller. This one will be smaller because the multiplier, this one, when z goes to 0, will go to 0. So you automatically extract this term. AUDIENCE: [INAUDIBLE] HONG LIU: It's just this statement. The A is related to this. OK. AUDIENCE: No, this statement is a guess from the example we discussed last time. HONG LIU: So last time, by looking at this simple example, we deduced that the boundary value of a field should be related to the source. The boundary value of a [INAUDIBLE] field should be related to the source of a boundary operator. And at that time, we didn't know what we meant, precisely, by boundary value. And now we have worked out the precise asymptotic behavior. And now you can talk about what you mean by boundary behavior.
And then that refines that statement into that precise mathematical statement. Good. So let's start. So let's see what we can deduce from this relation. So it turns out that there's a very important piece of information you can deduce from this equation. And this is my number five. So let me call this equation star. So this equation star, this relation star, actually tells you that this delta, which we defined before in this way, and which appears in your asymptotic formula like this, is actually the scaling dimension of the operator O dual to phi. So we're assuming this phi is dual to some operator O in the boundary. And this is precisely the scaling dimension of O. So now, let me explain that. So before I do this, do you have any questions? Good. OK, so let me first remind you how we define the scaling dimension of an operator in conformal field theory. So if you have a scale-[INAUDIBLE] theory, then the overall scaling is a symmetry. So such a scaling is a symmetry. And under such a symmetry transformation, we call an operator a scaling operator if it transforms under this symmetry in the following way. It transforms into a new operator, O prime, and evaluated at the new x prime it is equal to the previous operator, just differing by a scaling factor. OK, so an operator with such a property is called a scaling operator. So good operators, which are representations of the conformal symmetry, always have this behavior. And this delta defines the scaling dimension. So this delta is the scaling dimension. So this is the definition of the scaling dimension. So this is almost like a scalar field. So a scalar field means that the new scalar field, evaluated at the new point, should be the same as your old scalar field evaluated at the old point. And here the difference is just this prefactor. And this number defines the scaling dimension. Good? So now I will derive that this relation implies that this operator transforms precisely according to this delta. So let me first remind you that the boundary scaling, x mu goes to x mu prime equal to lambda x mu, is related to the bulk AdS isometry, which is x mu prime equal to lambda x mu, but, at the same time, you also scale z. So the counterpart of this scaling on the gravity side is that you scale both of them together. And the gravity metric is invariant under this. So now let's consider this transformation. So now let's consider the relation between the phi of x, z and the O of x. So let's consider such a transformation on both sides. So on the gravity side, such a transformation leads to a new field. And this should be dual to the transformed operator on the field theory side. AUDIENCE: Which [INAUDIBLE] HONG LIU: Yeah, that's right. Yeah. Yeah. That's right. Now, this is [INAUDIBLE]. This is a boundary. OK? But, since this is a symmetry, doing this transformation on both sides—on the field theory side and on the gravity side—should have no consequence. That should have no consequence. That means. That means. And this phi prime, this new phi, is the boundary value of the corresponding capital phi, OK? This should still be the same, because your physics should not be changed under this. So now let me remind you that capital phi is a scalar field. On the gravity side, this is a scalar field. So a scalar field, under such a coordinate transformation, transforms as phi prime of x prime, z prime equal to phi of x, z. So this is just the definition of a scalar field on the gravity side.
And now let's apply this to the boundary value. Then we find that the phi prime of x, which is the boundary value of capital phi prime—so this should be equal to that. So now we use this relation, that z prime is lambda times z. And this is just equal to that. So this just becomes lambda to the delta minus d times phi of x. Clear? So now you can just plug this back into here. So there's a factor of lambda to the delta minus d. And this comes with a factor of lambda to the power d, and then that tells you that O prime of x prime is equal to lambda to the minus delta times O of x. And we have shown that this delta, which appears here, or, whatever, here, is that delta. It's the dimension of the operator. AUDIENCE: But in standard quantization, you mentioned that-- HONG LIU: No, this is the standard quantization. AUDIENCE: A equals 0 in standard quantization. HONG LIU: No, no, we're talking about the source. We're talking about non-normalizable modes. We're not talking about canonical quantization. We're not talking about canonical quantization. We're talking about turning on non-normalizable modes, because then we are adding such a term to the action. AUDIENCE: But isn't that a field operator correspondence? HONG LIU: Sorry? AUDIENCE: Isn't that a field operator correspondence? HONG LIU: Yeah, this is. AUDIENCE: But these are non-normalizable. HONG LIU: Yeah, that's what we, yeah, yeah, yeah. Yeah. So we're talking about the non-normalizable modes. And the non-normalizable modes correspond to [INAUDIBLE] operator, and [INAUDIBLE] source for an operator in the boundary theory. And just this relation shows that delta is the dimension. Any other questions? AUDIENCE: So then, when we quantize everything, this operator O is dual to normalizable [INAUDIBLE], or non-normalizable? HONG LIU: No. Here we are talking about the question-- here we're talking about this relation: that if there is a boundary value of capital phi, which is this boundary value, then that corresponds to adding such a term to your Lagrangian. And from such a correspondence, we then deduce that this delta is the dimension of this operator. It's the dimension of this operator. And this is deduced from the non-normalizable perspective. And indeed, you can ask, say, suppose I create normalizable modes using this O, whether that is consistent with that. Indeed, that you can double check. Is it clear? Anyway, so this is what we have right now. So later, we will discuss the correspondence for the normalizable modes. We will worry about that later. Later, we will talk about the correspondence for normalizable modes. And then you will see it's compatible with this relation. So now we find, for a scalar in the standard quantization, that the dimension of the operator is related to the mass on the gravity side by the following relation. [INAUDIBLE] to [INAUDIBLE] by this relation. So we have found an explicit relation between the conformal dimension and the mass. So now, let me elaborate a little bit on this relation. So let's consider, suppose m equal to 0. Well, for m equal to 0, you find delta equal to d. So delta equal to d, on the field theory side, this is a marginal operator, and so we see that a massless field on the gravity side is mapped to a marginal operator. So are you familiar with the terminology of marginal operator, relevant operator, et cetera? Good. AUDIENCE: Wait, can you remind us what it is? HONG LIU: This is defined in terms of the RG flow.
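The little derivation just described can be summarized as follows (a sketch, using the asymptotics written earlier):

x^\mu \to x'^\mu = \lambda x^\mu,\ \ z \to z' = \lambda z \ \ \text{(an AdS isometry)}, \qquad \Phi'(x', z') = \Phi(x, z) \ \Rightarrow\ A'(x') = \lambda^{\Delta - d} A(x).

Invariance of \int d^d x\, A(x)\, O(x) then forces O'(x') = \lambda^{-\Delta} O(x), so \Delta is indeed the scaling dimension. The mass–dimension relation quoted at the end is, in these conventions,

m^2 R^2 = \Delta(\Delta - d), \qquad \Delta = \frac{d}{2} + \sqrt{\frac{d^2}{4} + m^2 R^2}.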
Are you taking Quantum Field Theory II? AUDIENCE: Yes. We haven't quite gotten there. [INAUDIBLE]. HONG LIU: Yeah, then you will get there, maybe, in a week or two. Yeah, certainly by the end of this semester. Yeah, so let me just say one-- yeah, I will say a few more words. And then you will see what it means. Just, I will say a few more words. Anyway, so when m squared is smaller than 0, then this is negative. Then this is d squared over 4 minus something. So that means delta is smaller than d. When the dimension of the operator is smaller than the spacetime dimension, that, from the field theory point of view, is called a relevant operator. And now, when m squared is greater than 0, it's mapped to delta greater than d, so this is mapped to an irrelevant operator. AUDIENCE: Why do they have [INAUDIBLE]? HONG LIU: Why do they have these names? Marginal, relevant, or irrelevant? Yeah, I will say a little bit about that. Just wait one second. So now, let's see what this means from the perspective of this relation. So from the field theory side, we have phi x O x mapped into-- so this phi should be identified with A. So this is identified with this term, z to the d minus delta. OK, the leading term. OK, so now let me remind you what this marginal and relevant, irrelevant means. So a marginal operator means, when you add such a term to your Lagrangian, the effect of such a term does not change with scale. So whether you're looking at very high energy or looking at very low energy, the effect does not change very much. OK, so that's why it's called a marginal operator. So in the renormalization group, there's a very simple scaling argument to show that, when delta is equal to d, then the effect of the operator does not change with the scale. In particular, when you go to the UV, this does not change, for a marginal operator. When you go to the UV—say, when you go to higher and higher energies—the effect of these operators remains the same, roughly. So now, we know from this UV connection, the UV is mapped on the gravity side to z goes to 0—it means we are approaching the boundary. And now we see, precisely for delta equal to d, this term is actually constant. So you go to smaller and smaller z, and the effect of this term does not change. It's just a constant. So again, it does not change. So now, let's look at the relevant operator. So the relevant operator—so in the field theory, when you do the RG, relevant and irrelevant refer to whether this term becomes relevant when you go to the IR. Relevant means that the term becomes more and more important when you go to low energies. So that means that, for the relevant operator, when you go to higher energies, it becomes less and less important. By definition, a relevant operator means, when you go to low energies, it becomes more and more important. And it means, when you go to higher energies, it becomes less and less important. So it means that, when you go to the UV, the relevant operator becomes less and less important. So now, let's look at this side. The relevant operator corresponds to delta smaller than d; then you have a positive power. And that means, when z goes to 0, when you go to the UV, this term actually goes to 0. So this is precisely compatible with the field theory behavior we expect when you add such a term to a Lagrangian. And the effect of such a term should be less and less important when you go to higher energies. And here, we see a precise counterpart. And similarly, now, let's go through the irrelevant operator.
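The correspondence being described, in compact form (a sketch; the source term behaves near the boundary as \phi(x)\, z^{d-\Delta}):

m^2 = 0:\ \ \Delta = d,\ \ z^{0} = \text{const} \quad \leftrightarrow \quad \text{marginal operator};
m^2 < 0:\ \ \Delta < d,\ \ z^{d-\Delta} \to 0 \text{ as } z \to 0 \text{ (UV)} \quad \leftrightarrow \quad \text{relevant operator};
m^2 > 0:\ \ \Delta > d,\ \ z^{d-\Delta} \to \infty \text{ as } z \to 0 \quad \leftrightarrow \quad \text{irrelevant operator}.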
Irrelevant means irrelevant when you go to the IR. When you go to the UV, it means more and more important. So that means-- but here, when delta is greater than d, then this has a negative power. So here, as z goes to 0, this goes to infinity, becomes bigger and bigger and, indeed, this term becomes more and more important. So we see a very nice correspondence between the behavior we expect just from the ordinary RG and the behavior of this term on the gravity side. So this is a very important self-consistency check of our matching, both of this relation and of this relation. Any questions on this? Yes. AUDIENCE: What's the topology of the AdS boundary? HONG LIU: Sorry? AUDIENCE: What's the topology of the AdS boundary? HONG LIU: It's a Minkowski space. AUDIENCE: But it's 5-- it's 5 times something. HONG LIU: It is a Minkowski space. The boundary of AdS is just Minkowski space. AUDIENCE: I have a question. [INAUDIBLE] in RG why [INAUDIBLE] HONG LIU: That takes some story to do. Yeah, that you have to do in Quantum Field Theory II. Yeah, but it's very easy to understand if you just know a little bit of free field theory. From the field theory point of view, when the dimension of this operator is smaller than the spacetime dimension, then this coupling has a positive mass dimension. And this is like a mass term. And the mass term becomes more and more important when you go to lower energies, and less important at higher energies. Yeah, just from dimensional analysis. Yeah, it takes maybe five minutes to explain very clearly. But we don't have these five minutes. OK, so let me just summarize. So we have this relation. Let me maybe just summarize here. So we have the other relation. So, summary. So we have phi dual to some O. So here, normalizable modes—different normalizable modes correspond to different states. And here, non-normalizable modes are mapped to different actions. So if you have different non-normalizable modes, then you map to different actions. You map to different theories. So in other words, you map to different theories. So when you turn on the non-normalizable modes, you are deforming the theory. So just to elaborate, more precisely: in the standard quantization, in which the A modes are non-normalizable, the A of x on the gravity side is mapped to this last term on the field theory side. If you have a nonzero A of x, A of x maps to the following term in your action. And then from the mass, we deduce the dimension—it gives the dimension delta of this operator O. So later we will explain, as part of the normalizable mode story—so this B—so A is non-normalizable but B is normalizable. So later, as we will explain, in the normalizable mode story, the B of x here is actually mapped to the expectation value of O. So B is a normalizable mode, and that maps to the expectation value of O in the corresponding state. So A is mapped to how you deform the theory, and the B is just mapped to the state, which specifies your expectation value. So this we'll see later. And in the alternative quantization, you just do it oppositely. In the alternative quantization, you just switch A and B. So now B maps to the term B x O in the action. And A of x maps to the expectation value. And the dimension now maps to d minus delta. So now, the dimension of the operator is d minus delta rather than delta. So now, because you just take the different exponent.
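The summary on the board presumably reads, in the notation above (a sketch):

Standard quantization: \ A(x) \leftrightarrow \text{source (adds } \int d^d x\, A\, O \text{ to the action)}, \quad B(x) \leftrightarrow \langle O(x) \rangle, \quad \dim O = \Delta = \tfrac{d}{2} + \nu.
Alternative quantization (0 \le \nu < 1): \ B(x) \leftrightarrow \text{source}, \quad A(x) \leftrightarrow \langle O(x) \rangle, \quad \dim O = d - \Delta = \tfrac{d}{2} - \nu.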
It's the same thing, just taking the different exponent. So now, the dimension of the operator is d minus delta. So now, let me also give you some other examples. So you can consider gauge fields. You can consider, say, a Maxwell field on the gravity side. Then that's [INAUDIBLE] dual to some conserved current, as we said earlier. And then you can check—again, you just solve the Maxwell equation, which I will not have time to do here, but I may ask you to do it in your p-set. You find that A mu, as z goes to 0, becomes-- so the part of A parallel to the boundary goes to a constant. There's no z. It's really a constant, plus b mu, some number, times z to the d minus 2. So this is the leading behavior in the case of the vector field. So using the very similar scaling argument I was doing there, you can deduce that the operator dual to A must have dimension d minus 1. This I will leave to you as an exercise. It's exactly the parallel statement to the scalar case. And with this asymptotic behavior, you can show, just using that argument, that the operator dual to A must have dimension d minus 1. And this is precisely the dimension a conserved current must have. So a conserved current, because it's conserved, its dimension is actually protected. So no matter what kind of interaction, no matter what kind of case, you always have the same dimension, d minus 1. So this precisely agrees with that. So do you understand why? AUDIENCE: Why. HONG LIU: Yeah, the reason is simple. It's just because of j0—the dimension of the j0. You can understand it as follows. The j0 is the charge density. And the charge, by definition, does not have a dimension. And then the density will always have dimension d minus 1. And then the other components just follow by the conservation. Yes, so, indeed, this is, again, a consistency check. AUDIENCE: So for this case, there is no choice of quantization? HONG LIU: Yeah, there's no choice for quantization, right. Actually, in AdS four, in some special dimensions, you can also do a different quantization. But the story is a little bit subtle. In some special dimensions, sometimes you can do it. Yeah, in AdS four, you can do a different boundary condition. So now let me also mention the stress tensor, which is the other important one. And we mentioned that the stress tensor is dual to the perturbation of the metric components. So if you write the AdS metric as some function times dz squared plus g mu nu dx mu dx nu—so this is the part of the metric parallel to the boundary—then we discussed, last time, that if g mu nu goes to the boundary as something like this, eta mu nu plus some perturbation, then that means that, on the field theory side, you're turning on a stress tensor perturbation. So that's what we argued last time. If the boundary value of your [INAUDIBLE] metric is perturbed away from the Minkowski metric, then correspondingly, from the field theory point of view, we turn on a source for the stress tensor. So again, by using this relation, you can deduce that the corresponding operator, the stress tensor, must have dimension d. Again, I leave this as an exercise for yourself. And again, that is precisely the dimension the stress tensor should have. And again, because the stress tensor is a conserved current, et cetera, no matter what kind of theory, it always has dimension d.
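The asymptotic behaviors quoted for the gauge field and the metric, written out (a sketch, same conventions; a_mu, b_mu, h_mu nu denote the coefficient functions the lecture calls the A- and B-type terms):

A_\mu(z, x) \xrightarrow{\ z \to 0\ } a_\mu(x) + b_\mu(x)\, z^{d-2} \ \ \Rightarrow\ \ \dim J^\mu = d - 1,
g_{\mu\nu}(z, x) \xrightarrow{\ z \to 0\ } \frac{R^2}{z^2}\left( \eta_{\mu\nu} + h_{\mu\nu}(x) + \dots \right) \ \ \Rightarrow\ \ \dim T^{\mu\nu} = d.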
So now, after having discussed this relation, let's discuss how to compute correlation functions. So let's discuss how to compute correlation functions. So the basic observables in a field theory, even more so in a conformal field theory, are just correlation functions of local operators. It's an important part of the [INAUDIBLE]. And in field theory, typically, when we consider correlation functions of local operators, we use something called a generating functional. I hope you're familiar with this. Say, again, the generating functional—so now let's consider the Euclidean case, which is a little bit simpler. So essentially, in Euclidean signature, you generate correlation functions by adding such a term to your field theory. And if you want to calculate correlation functions of O, then you just take derivatives with respect to phi and then set phi equal to 0. Then you get the correlation functions of O. So I hope you are familiar with this. So here, you should think of phi and O as, really, a collection of all possible operators, even though I just write a single one. You should think of them as a collection of all operators. You can put anything here: the stress tensor, a conserved current, any scalar operator, any spin operator. Yeah, you can put anything there. So in the simplest situation, you just don't turn on any operator. You just take phi equal to 0. So this is the simplest situation. Then you just have the partition function. So this correlation function is defined by the Euclidean path integral, for example. And you just insert this into the Euclidean path integral. And when phi is equal to 0, this is just the Euclidean path integral. And this just gives you the partition function of the whole system. And since we believe the duality should be true, this partition function should be exactly the same as the partition function on the gravity side. Suppose you can compute the partition function from the gravity side. Then it must be the same. Otherwise, there's no equivalence. So this is the partition function of the gravity side. So this has to be true. So now, with this relation, again, written in the heuristic form, some operator O is dual to some [INAUDIBLE] phi. And the source for the O is the boundary value of phi—again, I write this in the heuristic form. So the boundary value of phi should be understood in that limit. Because of this, that means we must have: if we turn on a [INAUDIBLE] phi, deforming the theory by this phi, that must correspond on the gravity side to the partition function with this capital phi having boundary value given by phi. Then it must be the same, because we have identified turning on the boundary value with turning on the operator, turning on this thing in the field theory. So we have now related the generating functional to the gravity partition function with some non-normalizable boundary conditions. So, in general—give me one more minute—so in general, this equation is actually empty, because we don't know how to define the gravity side, the full quantum gravity partition function. What are you talking about, if you don't know how to compute that? But we know how to compute this guy in the semiclassical limit, which is the string coupling goes to 0 and the alpha prime goes to 0 limit. So in this limit, we can just write the gravity partition function essentially as the Euclidean path integral over all gravity fields. Again, I'm using this heuristic form to write the boundary condition. But it should be understood in that form.
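The relation being written down is, schematically (a sketch, Euclidean signature; the boundary-value prescription is the limit defined earlier, and the saddle-point evaluation is what the next paragraph describes):

Z_{\text{CFT}}[\phi] \equiv \left\langle \exp\!\left( \int d^d x\, \phi(x)\, O(x) \right) \right\rangle_{\text{CFT}} = Z_{\text{gravity}}\Big[\ \phi(x) = \lim_{z\to 0} z^{\Delta - d}\, \Phi(z, x) \ \Big] \approx e^{-S_E[\Phi_c]},

with \Phi_c the classical solution obeying that boundary condition.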
And again, by integrating over phi, you should imagine integrating over all gravity fields. So now, in this limit, this S E is proportional to 1 over kappa squared, where kappa squared is essentially the Newton constant. So in the semiclassical limit, in this limit, the kappa squared goes to 0. So you can actually evaluate this path integral using the saddle point approximation. So at the leading order, this gravity partition function can actually be evaluated. So at leading order in the semiclassical limit, you can just evaluate it by the saddle point approximation; then this is just given by S E evaluated at the classical solution which satisfies the appropriate boundary conditions. So that means, actually, in this limit, we can evaluate this explicitly. And the nice thing is, as we said before, the semiclassical limit actually corresponds to the [INAUDIBLE] limit in the field theory. So we can actually compute explicitly the generating functional in the [INAUDIBLE] limit of the field theory. So let's stop here.
MIT_8821_String_Theory_and_Holographic_Duality_Fall_2014
16_Geometry_of_Dbranes_and_AdS_CFT_Conjecture.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality, educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: OK. Good. So let us start. So last time, we talked about finding the spacetime geometry produced by D3 branes. Just based on symmetry, you can write down the metric as the following. So it's some function, and then a Minkowski metric-- a Minkowski metric for the 3 plus 1 directions. And then the part for the transverse directions, in which you have rotational symmetry. So you should think of the D3 brane as a point in R6; then there's a spherical symmetry in R6 around this point. And then there are the other 3 plus 1 directions, which are this 3 plus 1 direction. OK? So when you solve the Einstein equation, you find that this f and h can be solved explicitly in the following form—both in terms of the same function H—with H given by-- and R is a constant, which is given by the following form. So R essentially has the form of some constant, the Newton constant times the tension of the D3 brane. And then you have N of them. And then this can also be-- if you write everything in terms of gs and alpha prime, it can be written as 4 pi gs N alpha prime squared. OK? So this R-- so this is just like what you normally see in the Schwarzschild metric. It's now G times the mass. Except, because of the translation symmetry in this 3 plus 1 direction, we have the tension appear here. So GN is proportional to-- so the Newton constant is proportional to gs squared. And T3 is proportional to 1 over gs. So this product is proportional to gs. OK? And there's N of them. So there's N here. And on dimensional grounds, this is dimension 4. So you have alpha prime squared here. OK? So again, you can write down everything on dimensional grounds. And the only thing is just this 4 pi. You have to be careful. OK? So this is the full D3 brane metric. So this is the full D3 brane metric. So now let's understand a little bit of the physics of this metric, which gives us a deeper understanding of the physics of D3 branes. So first, let's consider various regimes of r. OK? So let's first consider r goes to infinity. So when r goes to infinity, H of r just becomes 1. H just becomes 1. OK? And then you just recover a 10-dimensional Minkowski metric. OK? So this is very easy to understand. You are infinitely far away from the D3 brane. Of course, you don't feel anything. And so you just recover the 10-dimensional Minkowski spacetime. So here is 1. Here is 1. It's just a funny way to write down 10-dimensional Minkowski spacetime. So this is just a flat R6 metric. OK? And now-- let's now consider not r going to infinity, but just r much, much greater than R-- than this capital R. Then this function-- so again, let me call this f. Let me call this H. So for example-- did I call them H or g? H. Yeah. So for example, the f-- it's a similar thing with H. So the f will be 1 plus-- so something of order R to the power of 4 divided by r to the power of 4. Similarly with the H. So now you recognize that this 1 over r4 is precisely the long range Coulomb potential in six dimensions. OK? In R6. Because only the transverse directions matter. And in three spatial dimensions, if you have a point, then you have 1 over r. And in six dimensions-- it goes like 1 over r to the d minus 2-- you get 1 over r4.
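For reference, the D3-brane metric and the constant R being described are (the standard form, with x^mu the 3+1 worldvolume directions and r the transverse radius in R6):

ds^2 = H^{-1/2}(r)\, \eta_{\mu\nu}\, dx^\mu dx^\nu + H^{1/2}(r)\left( dr^2 + r^2\, d\Omega_5^2 \right), \qquad H(r) = 1 + \frac{R^4}{r^4}, \qquad R^4 = 4\pi\, g_s N\, \alpha'^2.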
So this behavior can simply be understood, essentially, as saying you have some kind of Coulomb potential, both due to the charge of the brane and also due to gravity. OK? So you have some long range potential. So now the interesting thing-- things become more interesting when r becomes of order R. When r becomes of order R, then this H of r starts to be significantly different from 1. OK? And then that tells you-- so this is the-- when r is of order R, the deformation-- yeah, in this range, the deformation of the spacetime metric from the D3 branes becomes significant. OK? So in other words, this capital R, this constant, essentially signals the distance from the brane where the gravitational effect becomes strong. OK? The gravitational effect of the D brane becomes strong. And in particular, you can check yourself, you can calculate yourself, that the curvature radius now becomes of order R. Or the curvature itself-- yeah, let me just say the curvature itself becomes R to the minus 2. OK? So essentially, the R gives you a measure of when the gravity becomes strong, or the scale of the curvature. So we solved for this metric in the context of supergravity-- in the context of the low energy effective supergravity. So in order for the approximation to work, as we said before, we want alpha prime times the curvature to be much, much smaller than 1. So we said this last time. So that means we want this to be much, much smaller than 1 so that the supergravity approximation can be valid. So now, if you look at the expression for R, this tells you that gs N must be much, much greater than 1. OK? Yes. gs N must be much, much greater than 1. And also, in order to have the string loop corrections small, we also need gs to be much, much smaller than 1. So we need gs N to be much, much greater than 1, but also gs to be much, much smaller than 1, in order for the supergravity approximation to work. OK? So now let's consider-- so when we set up the problem, this metric is written so that r equal to 0 is the location of the brane. OK? r equal to 0 is the location of the brane. When we solve this problem, r equal to 0 is the presumed location of the brane. So now let's see what happens when we-- yeah, that's where you put the source. OK? That's where you put the source. So now let's consider-- now let's ask the question, what happens to the metric as r goes to 0? If we try to approach the brane. OK? We try to approach the brane. So this is very easy to do. So when r becomes much smaller than this capital R, when r goes to 0, then in this expression, this term will dominate. OK? We can forget about the 1. This term will dominate. So in this case, the H will be given by R to the 4 over r to the 4. OK? Then we just plug this into your original metric. OK? So you plug this back in. So when you plug this back in-- so H to the minus 1/2 then just becomes r squared divided by R squared. Then we still have dt squared plus dx squared. Then plus R squared divided by r squared-- so that's the positive 1/2 power-- times dr squared plus r squared d omega 5 squared. OK? But now something miraculous has happened. Something miraculous has happened. So let me write this metric in a slightly more explicit way. When I write it-- so let me just separate these two terms. OK? So there are two important things that happened. There are two important things that happened. Just to give you a-- yeah, let me just do it here. Yeah. Let me do it here. Maybe it's good enough. Maybe that would be good, just to be safe.
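Written out, the near-horizon limit just described is (a sketch; this is the geometry identified a little further on as AdS5 times S5, both factors with radius R):

r \ll R:\qquad H \approx \frac{R^4}{r^4} \ \Rightarrow\ ds^2 \approx \frac{r^2}{R^2}\left( -dt^2 + d\vec{x}^2 \right) + \frac{R^2}{r^2}\, dr^2 + R^2\, d\Omega_5^2.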
So previously-- let's just draw the picture like this. So let's call this R6. OK? Just for my purpose-- yeah, let's consider this to be R6. And yeah, forget about what's outside. Just imagine the-- yeah. There's no outside. Just the whole thing is R6. OK? And when we start solving this problem, the D3 branes are a point in this R6. And then, there's some flux coming out of it. OK? There's some F5 flux coming out of it. So this is the picture. AUDIENCE: The flux is normal to the R6? PROFESSOR: Hm? AUDIENCE: The flux is normal to R6? PROFESSOR: No. The flux is normal to the S5 surrounding the point. And just like that. I'm just saying, think about the plane as the whole R6. There's nothing outside the plane. AUDIENCE: OK. PROFESSOR: OK. I want to reserve the things outside the plane for another purpose. For now, just think of R6 as the whole thing. This is your whole space. OK? So if you look at this situation, there are two things. First, there's a source at r equal to 0. You should see a delta function which is a source of the flux and a source of the mass. OK? There's some tension there. And the other thing is that if you look at the S5 around this point-- and this S5, when you go to the point, then the S5 will shrink to zero size. OK? So around this point. Yeah, just in flat space, around the-- if you write it in polar coordinates-- the size of this S5 surrounding the point goes to zero when you go to that point. OK? Yeah. But now, if you look at this metric-- now if you look at this metric, when you go to the-- when you go to the location of the brane, actually, you never reach r equal to 0. Actually, this S5 now has a constant size. OK? Also, if you look at this-- so there are two things happening. So first this is 1. This is 2. So the first thing is that the S5 direction-- let me call it S5-- now has a constant radius given by this R. So this is the first thing. And the second thing is that, because of this R squared over r squared structure-- say if you write the proper distance-- write it as a dl squared, so dl will be the proper distance-- then when you solve this, you find l is essentially log r, say, plus some constant. Then as r goes to 0, the l goes to-- when r goes to 0, the l goes to minus infinity. That means that r equal to 0-- yeah, let me just write it here. This means r equal to 0 is at infinite proper distance away. So for all practical purposes, the brane has disappeared. OK? So the geometric picture is that when you include, as we say, the back reaction of the D3 branes-- so now I have to draw something called the embedding diagram, just using this extra direction to show the curvature. OK? So when you go-- when you approach this point, what do you find? You don't find the point. Actually, you find there's an infinite throat. And so when you try to approach this point-- and the cross section of this throat, which I can now draw, is the S5 with the constant radius. And this throat is infinitely long. OK? So from here to here is an infinite distance. OK? So that's what's happening here. And yeah, the size of this sphere, of course, is R-- it has radius R. So in this picture-- so when you include, as we say, the back reaction of the D3 branes, essentially the D3 branes themselves have disappeared. The only things left are this curved geometry and the fluxes. Yeah, they're still-- the fluxes won't go away. So you will still see the F5 flux through this S5. OK? So there's still flux through this S5.
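The proper-distance statement, in formulas (a sketch, using the near-horizon metric written above):

dl = H^{1/4}\, dr \approx \frac{R}{r}\, dr \ \Rightarrow\ l \approx R \ln \frac{r}{r_0} \ \to\ -\infty \ \text{as } r \to 0,

while the S5 radius stays fixed at R — which is why r = 0 sits at infinite proper distance down a throat of constant cross section.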
OK? And in this metric-- so this is the S5, which has completely decoupled from the rest. So this has a constant radius. So this S5 has decoupled from the rest. So it's natural to just look at this part. And this is actually AdS5. So let me call it 1. So 1 is just AdS5. It's the metric of 5-dimensional anti-de Sitter spacetime, times an S5. OK? And the AdS5 has the metric-- so let me just copy it here. Essentially just this part-- r squared over R squared times minus dt squared plus dx squared, plus R squared over r squared dr squared. OK? So this is the AdS5. So something magic has happened. When you try to reach the brane, then you find the brane is no longer there. And actually it develops an infinite throat. And in this throat, the metric is given by AdS5 times S5. OK? And of course, at infinity, you still see a flat space. At infinity, you still see a flat Minkowski space. OK? So now, we have to take a little bit of a conceptual leap. So now, we have to take a little bit of a conceptual leap. Yes? AUDIENCE: So we just basically found the most general solution of GR with the given symmetries given by the brane? PROFESSOR: Yeah. With-- what do you mean by most general? We have specific D3 branes. AUDIENCE: Well, so we fixed the symmetry given by the D3 brane, and then-- PROFESSOR: No, you also fixed the D3 brane charge and the D3 brane tension. AUDIENCE: OK. PROFESSOR: That fixes everything. Yeah. Yes? AUDIENCE: So what is the nature of the source? Originally, in the D3 brane, it's like electromagnetic-- PROFESSOR: Yeah. AUDIENCE: --but now it's like a mass. PROFESSOR: No, now the mass-- even the mass is gone. You only see a curved spacetime with some fluxes. AUDIENCE: But I mean, [INAUDIBLE]. PROFESSOR: Yeah. But the mass now is completely gone. It's, in some sense, sitting at an infinite distance away. You can never reach it. And for all practical purposes, if you live in this spacetime, you will never see the source. What you will see-- what you will observe is just a curved spacetime with some fluxes. And this is also a perfect solution of the Einstein equation. Yeah, because away from the source, it's also a perfect solution of the Einstein equations. Yeah, you were first? AUDIENCE: Yeah, so I was just trying to understand. So the point is that-- so is this somehow protecting us from these issues that-- even electromagnetism, or even the quantum electromagnetic field, has issues of, like, the divergence, of kind of going towards a point charge? PROFESSOR: Right, right. AUDIENCE: So is this somehow getting around that by making it go far away? Or not really? PROFESSOR: Yeah. No, this is not getting around that. This is-- the solution presents itself. We did not dictate the solution. This is pure mathematics. And we're just trying to interpret this mathematics. AUDIENCE: But it does achieve that-- I mean, we didn't-- PROFESSOR: Yeah. Yeah, it achieved that. Right, right. Yeah, the solution just presented itself. AUDIENCE: That's crazy. OK. PROFESSOR: Yeah. Yes? AUDIENCE: So for the electromagnetic charge, this would tear the space like this, but at least [INAUDIBLE]. What about other [INAUDIBLE]? PROFESSOR: Yeah. Yeah. Yeah, something similar will happen, but the story is even more complicated. And so for the D3 brane, the metric is simplest. Yeah, for reasons that I will mention later. So the reason I chose the D3 brane is just because the metric is the simplest one. Other questions? AUDIENCE: So the long-range Coulomb potential, that's from the charge of the brane? PROFESSOR: Both. AUDIENCE: OK.
You can't really separate them, I guess. PROFESSOR: Yeah, in principle, you can separate them. It will depend on your probe. Yeah. But from the metric-- from the metric, actually, from the metric, of course, you should just interpret that as the gravitational part. Yeah, this is just the gravitational part. AUDIENCE: I have a question. So there, you draw the flux as flowing out of the cylinder or along the cylinder? I guess it's along the cylinder. PROFESSOR: No, no. Not along the cylinder. Along the cylinder is the r direction. The flux is in the S5. Just think of a hollow S2. Think of a hollow sphere. Think of a hollow S2. Then on that S2 you can have magnetic flux. AUDIENCE: Yes. And the flux is all over the radial direction. PROFESSOR: And then there's no radial direction. If you only have the S2, then there's no radial direction. You just only have the S2. AUDIENCE: Oh, OK. PROFESSOR: And then you can have a flux on that S2. AUDIENCE: Yes. PROFESSOR: Yeah. Something proportional to F theta phi, say, as in electromagnetism. Yeah, so this would be a magnetic flux in the-- yeah. AUDIENCE: Oh, yes. But you write it like in R6. If you write it in R6, it is along the radial direction. PROFESSOR: Yeah. Just forget about the-- in this picture it's along the radial direction. But once we're in this picture, then this becomes completely decoupled. AUDIENCE: Oh, oh. I see. PROFESSOR: And then you just have a separate S5. AUDIENCE: I see, yeah. PROFESSOR: Yeah, yeah. OK. So now, we have to take a little bit of a conceptual leap. It's not too big. It's that now we have two descriptions of the D3 branes. OK? So in description one, you just have a D brane. You really just have some Dirichlet boundary conditions in flat Minkowski 10-dimensional spacetime, where open strings can end. Where open strings live, say. So in this picture, you just-- you specify some plane, some surface in the Minkowski 10 dimensions. And so this is R3-- say R 1,3 if you include the time direction. So you just specify some plane in the spacetime, in the 10-dimensional spacetime, where, on this plane, you can have open string excitations, OK? And then the system, of course, can also have closed strings. Then these closed strings can interact with the brane, can interact with the open strings, et cetera. And the only-- so everything is in flat space. Everything is in flat space. And you have open strings, you have closed strings. Even on the brane, the closed string can interact with the brane-- a closed string can interact with an open string. In particular, for example, you can send in a closed string, it gets absorbed by the brane, and then decays into open strings. It's possible. So this just goes, one into, say, yeah. Or you can have two open strings collide, then become a closed string, and then jump out of the brane. Yeah, you can have all kinds of such interactions. The key point is that in this description, the only thing you did is impose Dirichlet boundary conditions on some components of your worldsheet fields for the string. OK? So this is-- but now, from here, we have also found a different, second description. In it, we have a spacetime metric-- one which is some curved spacetime, plus F5 flux on the S5. OK? So here, there are no D branes anymore. We just have some curved spacetime with some fluxes. So here, the only thing that lives is the closed string. So only closed strings. But those closed strings see a curved geometry. OK? So now I can include all dimensions.
To contrast these two, let me draw this picture again. Now let's draw a 10-dimensional picture. So this is Minkowski 10. So this is Minkowski 10. So when you now go to smaller radius r, then you develop an infinite throat-- so this becomes AdS5 times S5 down the throat. And here, only closed strings can propagate. There's no D brane, but there are closed strings at infinity and in the AdS5-- so you can have closed strings. OK? So is this clear? AUDIENCE: Why would we have closed strings? Because in the low energy case, wouldn't we only have the effective fields? PROFESSOR: But you have-- yeah. But this is a geometry. This provides the geometry. In principle, the excitations would be closed strings. Now you can do string theory in this geometry. AUDIENCE: Oh. Oh, I see. PROFESSOR: So now essentially-- AUDIENCE: [INAUDIBLE] as a target. A new target space-- PROFESSOR: Exactly. AUDIENCE: Oh, and do new string theory-- PROFESSOR: Exactly. So we started with a D brane. And then we solved for the geometry due to the D branes-- the geometry due to those D branes. And now you can quantize the string theory in that new geometry. Then that will describe how things interact with the D brane. AUDIENCE: Oh, OK. PROFESSOR: Yeah. But in this picture, there's no D brane anymore. There are only closed strings. OK? So we just have-- yeah. Good? Yes? Hm? AUDIENCE: What was F5? PROFESSOR: F5 is this five-form of the type IIB string, for which this D3 brane carries the charge. So it's this generalized charge I discussed last time. Other questions? AUDIENCE: But in the last class, we know the F5 is from the effective theory. PROFESSOR: Yeah, yeah. AUDIENCE: Yeah. And the effective theory is from the open string spectrum. PROFESSOR: No. No, no. F5 is in the closed string spectrum. AUDIENCE: Oh, it's the closed string spectrum. So why can't we apply string theory twice? The first time we get a flux. PROFESSOR: Yeah. AUDIENCE: And then we get to modify the target space with-- PROFESSOR: Yeah, right. Yeah. Yeah, you can. Yeah. In string theory, you can work with any curved spacetime. You can think about propagating a string in a generic curved spacetime. AUDIENCE: OK. Because it seems like the second application of string theory is like a higher order correction. PROFESSOR: No. No, in this picture-- in this picture, I'm just looking at-- forget about D branes. I just look at some charged object. I look at some charged object in string theory. And that object has some charge, has some mass. Then I can work out its metric. I can work out its metric. And the string must be able to live in that-- must be able to live in that metric. And if I want to work out the properties of the string, then I quantize the string in this geometry. So at this point, you just purely view the D3 brane as some charged object in the string theory. You just solve for this geometry. OK? Any other questions? So now, the key is that, by consistency, A must be equal to B. So these two descriptions must be equivalent. OK? So this is a generalization of the simple exercise we described earlier: that if you want to describe the interaction between two D branes, you can do it in two equivalent ways. One is to consider a loop of open strings, where you only need to consider flat space. Just consider a loop of open strings. Or you can consider the graviton exchange between the two D branes.
And the graviton exchange between the two D branes is to view the other D brane as having already curved your spacetime, and then this D brane lives in that curved spacetime. So this is a precise generalization of that equivalence. OK? Now we have two descriptions of the D branes which are completely different-- this one involving flat space and open and closed strings, and this one only involving closed strings, but in some curved spacetime. OK? And in principle, both descriptions can be valid for all values of alpha prime and g string. OK? So the two parameters in the string theory are alpha prime and g string-- and by all alpha prime, just consider all energies. Yeah. OK? AUDIENCE: Why? PROFESSOR: Hm? AUDIENCE: It seems like they're constrained, no? PROFESSOR: Yeah. Yeah. So first, in this picture, there's no restriction on what the alpha prime or g string is. You just impose the Dirichlet boundary condition. And when you have a g string, you just need to include the higher loop diagrams, et cetera. So when we work out this geometry-- this caveat applies more to B. In B, when we worked out this geometry, and also the AdS5, we were working using the approximation that the alpha prime should be small and the g string should be small. AUDIENCE: Yes. PROFESSOR: But once we have worked out that solution in this regime, in principle, if you're technically powerful enough, you should be able to extend this to general values of g string and alpha prime. It may not be precisely this metric, but some kind of deformation of it from alpha prime corrections and g string corrections. But you should be able to do that. Yeah. Yeah, so that's why I said in principle. AUDIENCE: Got it. PROFESSOR: OK? So we must have these two descriptions. And these can be valid for the whole range of the parameters you can be interested in. OK? And so this is a very surprising statement, this A equal to B. But there's not much to do about it, because this is a complicated thing, and this is also a complicated thing. This is even more complicated, doing string theory on some curved spacetime. In particular-- say if you consider g string is very small, et cetera, then maybe you can do something. But then it's not that interesting, because on both sides, you do some perturbation theory, et cetera. Anyway. But the remarkable thing-- which Maldacena realized in 1997-- is that you can actually take a limit of this statement. And when you take a limit of this statement, then the statement becomes very interesting. Then you can actually do a lot. OK? And the limit is the low energy limit. So if you consider the low energy limit, then you derive what is now called AdS/CFT. We will now explain how to do that. Yes? AUDIENCE: It seems like this correspondence between A and B only exists in the low energy limit. Because here, the closed string is traveling in the background spacetime. That spacetime is only the low energy correction from the D brane, not the higher corrections. Because we only solved the low energy effective theory. PROFESSOR: No, that's-- this is a good question, but that's what I was trying to say earlier. So we find this metric by working in this regime. But now if I want to go away from this regime-- if I want to go away from this regime, then I have to take into account alpha prime corrections, et cetera. So that geometry may get modified.
But, in principle, I can do. So now it turns out, if you consider lower energy limit of this A equal to B, then you can actually derive a very powerful statement. OK? AUDIENCE: Is this actually the lower energy limit? PROFESSOR: No, this itself is not low energy limit. This is a classical gravity limit. AUDIENCE: What's the difference? PROFESSOR: Hm? AUDIENCE: What's the difference? PROFESSOR: I will explain that. No, this is the classical gravity limit. AUDIENCE: OK. PROFESSOR: And at least the low energy limit does not involve this. Yeah, I will explain what this low energy limit. Yeah. Any other questions? Good? OK. So now let's look at low energy limit. In particular, here, I don't do any restriction on the g string. So g string can, in principle, be strong here. I don't impose any constraint on the string happening now. I only consider low energy. OK? So low energy limit-- because when I fix the energy, E-- some process I'm interested in, some typical energy I'm interested in. Then I want to take the alpha prime go to 0 limit. OK? Or you fix-- equivalent, you can fix alpha prime. Then you take E equal to 0. OK? Because only the dimension is product of the matters. And I can describe either way. OK? So physicality, what is more convenient-- mathematically, normally what is more convenient is that we think we fix E, and then take alpha prime goes to 0. But you can think either way. OK? So the low energy limit is essentially just the limit of alpha prime times e squared goes to 0. That's the dimension statement. OK? So this is the low energy limit. I only consider those physical process with E that would satisfy this region. OK? Good? And I'm doing no restriction on the gs. So now, let's consider A. What is the low energy limit for A? For A, we know-- so A, we have two factors. We have open string factor, which is live on the D brane. Then we also have close string factor. OK? Which live outside the brane. For the open string factor, we just recover. So we discussed before-- on the D brane, when you go to low energy, what do you get? AUDIENCE: Super Yang-Mills? PROFESSOR: Yes. You just recover your Mills series. It just recover super Yang-Mills series. And in this case-- let me give a special name for the D3 brane. It turns out you get to the n equals 4 super Yang-Mills. Just the specific super Yang-Mills theory in four dimension. And then we sketch group un. OK? Because we have D branes. And in particular, as we said before, the Yang-Mills coupling is related to the gs. OK? It's proportional to the gs. And so the D3 brane give rise to a 3 plus 1 dimensional field theory. In the 3 plus 1 dimensional field theory, the Yang-Mills company is dimensionless. So there's no other dimensional parameter here. So there's only-- so there's only numerical factor, 4 pi, which you can work out by explicit calculations. So this is key, because if there's a scale here, then the story is more complicated, because you have to take alpha goes to 0, et cetera. OK? So the key is that this is a dimensionless coupling. So they have a simple relation. So in open string, when you take low energy limit, you just get a Yang-Mills series and some finite coupling. OK? Which is determined by the string coupling constant. But in this series, there's also closed string factor. In a closed string factor, again, when you take a low energy limit, you get the massless modes. You get graviton, dilaton, et cetera. But only massless modes. OK? But now, there's a very important point. 
The coupling between the closed and open strings is mediated through gravitational effects. OK? So only massless modes are left, and the coupling between the massless closed and open strings, or among the closed strings themselves, is always controlled by G Newton. This we know, because this is a low energy limit. Graviton interactions are controlled by G Newton. They are gravitational interactions. Another key point is that G Newton, in contrast to this Yang-Mills coupling, which is dimensionless, is dimensionful. And G Newton is proportional to gs squared alpha prime to the power of 4. So that means, in the low energy limit, the relevant dimensionless parameter is G Newton times E to the power 8, because G Newton has length dimension 8. And this combination goes to 0. OK? So that actually tells you, in the low energy limit, the interaction between open and closed strings will decouple. OK? This is a familiar fact. When we talk about an electron, you don't have to talk about gravity. Because its energy is so low, its gravity is so weak. OK? And gravity only becomes strong if the energy becomes big. OK? It's just because of this. In 3 plus 1 dimensions, it would just be GN times E squared. But here we are in 10 dimensions, so it is E to the power 8; it doesn't matter. So now that means, in the E goes to 0 limit, what we find is the interacting N equals 4 super Yang-Mills theory plus free gravitons plus other massless closed string modes. But they all become free, because the interaction essentially switches off. Any interaction in this closed string factor switches off. In particular, there's no interaction between these two factors. OK? So this is what we get in the A picture. AUDIENCE: So the coupling really is actually G Newton times E to the 8. PROFESSOR: No, I'm just saying this is a dimensionless combination of them. AUDIENCE: It's the actual coupling? PROFESSOR: No, the actual coupling can be, say, the square root of this. It is controlled by G Newton. And the key thing is that it's controlled by G Newton, and G Newton is a dimensionful parameter. That's just the key. Yes? AUDIENCE: Why can't they interact through the gauge bosons? PROFESSOR: No, the gauge bosons interact with themselves. The interaction of the gauge boson with the graviton is gravitational. It has to go through G Newton. The gauge bosons will interact with themselves; that's captured by here. So anything involving gravity involves dimensionful couplings, and then they will switch off in the low energy limit. Just like we don't have to worry about the gravitational interaction of an electron when we talk about the hydrogen atom. AUDIENCE: But we know that from [INAUDIBLE], but how do we see from here that-- PROFESSOR: No, no. It's just from here. This is just the same. You can calculate it carefully. You can calculate it precisely, work out the scattering amplitude between the photon and the graviton, take the low energy limit, and you find that the scattering goes to 0. You can do that calculation explicitly and take the limit. But this argument tells you that you don't have to do the calculation. It will always work. OK. So now let's look at picture B, when we take the low energy limit. And now let's go back to this metric. Actually, it's handy. So now, let's look at picture B, the low energy limit. Let's look at this metric. So this is a curved spacetime. So when we take the low energy limit, we have to be careful.
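Before going on to picture B, the dimensional analysis behind this decoupling can be spelled out in one line (a sketch; the precise numerical factors do not matter for the argument):

\[
g_{YM}^2 \sim g_s \ \ (\text{dimensionless}), \qquad G_N \sim g_s^2\, \alpha'^4 \ \ (\text{length}^8 \text{ in 10 dimensions}),
\]

so the effective dimensionless strength of gravitational interactions at energy E is

\[
G_N E^8 \;\sim\; g_s^2\,(\alpha' E^2)^4 \;\longrightarrow\; 0 \qquad \text{as } \alpha' E^2 \to 0,
\]

while the Yang-Mills coupling g_{YM}^2 = 4 pi g_s stays finite. That is why the massless closed strings become free and decouple from the open string sector in the low energy limit.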
Because one important thing you always keep in mind when you have a curve of spacetime, the time depends on where you sit. You have to always specify a reference time. Because time is no longer the absolute thing. OK? Because at the local time, et cetera, you have to specify your time frame. OK? And so we have to ask what is this E? In picture A, translate into this picture. In Picture A, everything is in Minkowski spacetime. And that E is essentially E for the Minkowski spacetime at the infinity in the picture B. OK? So in other words, the E-- in other words-- so here, let me just say the curve of spacetime must be careful to specify what energy we are interested in, what energy we are talking about. OK? So the energy depends on now your choice of time, or it depends on your observer. OK? But in order to compare a with b, we have to translate what is E in A. This translated into B is defined with respect to the t-- this t there. OK? In other words, it's the time at i equal to infinity. OK? So that energy scale is defined to that specific time. But now, when we talk about the physics happening in this spacetime-- for example, we're talking about the string theory string g excitations of the mass of the string, et cetera, you have to refer to the local proper time at whatever place-- at whatever place you are talking about your physical process. OK? So that means-- oh, actually today we are going to 6:30. Yeah, right. So suppose you are at some location, r. Then the local proper time, which I will call tau-- the local proper time, the tau, then it's related to the time at infinity by this factor. OK? By the factor. This-- the local observed equals this, the tau squared. So they're related to this by this factor. So now you can easily convert-- then the local proper energy is related to the energy we are talking about here, by H to a power of 1/4 E. OK? OK? So now, it's very important, when we take the low energy limit, we take this E small. But due to the gravitational redshift effect, this E tau does not have to be small. OK? And so this is the thing will be important here. So when r is much, much greater than r-- so when we are sitting around here, then H is 1. OK? Then the limit E square alpha prime goes to 0 really just tells you that all massive closed string decouple. All massive closed string decouple. OK? In particular, though, the interaction among closed string themselves. By the same argument, they also become weaker and weaker when you go to low energy limit. So essentially, again, they become free gravitons. OK? But now let's look at the other regime and r around here. OK? So this is for that. But now, let's consider r smaller than R, capital R. And then, in this regime, we go to this AdS5 times S5. Then H becomes R to the power of 4, r4. And now this statement, and the E squared, alpha prime, go to 0, translates. Now you have a redshift factor. And tau squared. if I translate it in tau, the local energy scale. In particular, now, let's remember this r4 is given by that. So this can be written e tau squared r squared 4 pi gsN goes to 0. So now, important thing happened. Important thing just happened. Then you find this alpha prime actually canceled with each other. Alpha prime here and alpha prime there, they cancel each other. And here, there's nothing in alpha prime left. OK? So when you take this limit-- when you take this limit and it's-- then this quantity goes to 0. And now, E tau can actually be anything. As far as r sufficiently small. OK? 
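Writing out the redshift argument explicitly, using the D3-brane metric above: since g_tt = -H^{-1/2}, the proper time at radius r is d tau = H^{-1/4}(r) dt, so an excitation with local proper energy E_tau is seen from infinity with energy E = H^{-1/4}(r) E_tau. Deep in the throat H is approximately R^4/r^4, and the low energy condition becomes

\[
\alpha' E^2 \;=\; \alpha'\, H^{-1/2}\, E_\tau^2 \;\approx\; \alpha'\,\frac{r^2}{R^2}\, E_\tau^2
\;=\; \frac{E_\tau^2\, r^2}{\sqrt{4\pi g_s N}} \;\longrightarrow\; 0,
\]

where R^4 = 4 pi g_s N alpha'^2 was used and the factors of alpha prime cancel. So for any fixed proper energy E_tau, even a massive string mode, the condition is satisfied by taking r small enough, that is, by sitting far enough down the throat.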
So that means for any E tau, you can even have huge energy-- local proper energy. You can satisfy the low energy limit if that mode live at r very close to 0. Means that they live on these throat deep enough. So this is a very simple. Physically, this is very simple. Because of this curved space effect, because of H, this curved space effect, there's a huge amount of red shift between here and there-- between the local proper time here and there. Very big energy here. Viewed from here, can become very, very small. In particular, you go deeper into this throat, the redshift factor is bigger. So normally, the higher the energy, if you go deep enough, will always appear within the low energy limit here. OK? OK? So that means, when you sit deep enough, that actually anything is allowed, including a massive stream mode. OK? Including massive string mode. So now, we remember, when r goes to 0, essentially you get this AdS5 times S5. So in r equal to 0, we just get this geometry, which is AdS5 times S5. So now what we find is that in a low energy limit, in the B, we get free gravitons at r equal to infinity. And then plus full string theory AdS5 times S5. OK? Full string theory AdS5 times S5. And again, the two factors decouple, because when you take the low energy limit, then those strings in mode have to live very far down in the throat. And then they have low overlap with the graviton and live at infinity. OK? And they have low overlap with graviton and live at infinity. Yes? AUDIENCE: So I'm confused about the way that you wrote that. So you say three-- so there's three gravitons that are at infinity. But also full string theory. But doesn't full string theory include gravitons? PROFESSOR: No, no, no. Yeah. This is-- there's some different graviton ideas, 5 times S5. Full string theory in AdS5 times S5. Those are Minkowski space graviton. This is Minkowski space graviton. So this is a Minkowski space graviton. I think we should leave here. The natures are very different. OK? Yes? AUDIENCE: Is full string theory only for closed strings? PROFESSOR: Only closed string. There's no open string in this picture. They're only closed strings. OK? AUDIENCE: [INAUDIBLE]? PROFESSOR: Sorry? AUDIENCE: So [INAUDIBLE]? PROFESSOR: No, no. There's no artificial separation. That's what I just explained. No free graviton live here. And the string theory live infinite down the throat. Throat, in the low energy limit. There are infinite-- there are essentially as large proper distance as you want to separate them. Depends how low energy you want to go. AUDIENCE: But there is something living at finite r, right? There is always [INAUDIBLE]. PROFESSOR: Yeah. I'm just saying, when you take the low energy limit, then the separation then becomes infinite. AUDIENCE: I just don't understand why [INAUDIBLE]. PROFESSOR: No, no. They have nothing to do with each other. AUDIENCE: Oh. PROFESSOR: Geographically, one is in this side of the galaxy. The other is living on the other side of galaxy. So their wave functions have low overlap with each other. AUDIENCE: But with the [INAUDIBLE] clearly something between there. PROFESSOR: Yeah. I'm just saying when you take the low energy limit, then the only mode which can survive, other than the massless mode, which lives around here-- just in the finite energy mode that cannot live in this place. Yeah? AUDIENCE: Because the string theory, or [INAUDIBLE], flux as well? PROFESSOR: Yeah. Yeah, yeah. Yeah, with the flux, yeah. Yeah. OK? Then these two factors should decouple. 
So now we have derived-- starting from this A equal to B, we take the low energy limit. So here, we have N equals 4 super Yang-Mills theory plus free gravitons. And here, you have type IIB string theory on AdS5 times S5 plus free gravitons. OK? And the free graviton part is the same on both sides-- they're just free and don't interact with anything. So if the two sides are equal, you can cancel the free parts and equate the interacting parts. You find that N equals 4 super Yang-Mills theory with gauge group SU(N) is equal to type IIB string theory on AdS5 times S5. So this A equal to B was already a surprising statement, but A and B are both, say, complicated objects. Both are complicated objects. But now, when we take the low energy limit, you reach something very surprising, because on this side, there's no gravity, but the other side is a full string theory. So now you equate some field theory with a full string theory. OK? Yes? AUDIENCE: So the super Yang-Mills is living just on the D3 brane. PROFESSOR: Yeah. AUDIENCE: OK. PROFESSOR: Any questions regarding this? AUDIENCE: So for example, [INAUDIBLE]. PROFESSOR: Yeah. Yeah. But that's always a finite proper distance away. So the key is that when you talk about a generic mode, anything you talk about is a finite distance away. But if we are taking this limit, then this throat is going to infinity, and you are living infinitely far down the throat. AUDIENCE: So anything finite is free. PROFESSOR: Yeah. When you talk about finite energy, the story is always complicated, and then all this geometry will matter. The thing only becomes simple when you really take the E goes to 0 limit. It's the same thing here. Same thing in this picture: when we say you go to the N equals 4 super Yang-Mills theory, you always have to take that limit-- the E goes to 0 limit. Then you completely decouple the other things. Because for any tiny, tiny non-zero E, there's always some gravity left. So you really have to take that energy goes to 0 limit, and then you rescale the energy to be finite. You understand what I mean? You always have to take that singular limit. It's just like when you take the [INAUDIBLE]-- you have to take the infinite volume limit, and then things become simple. And here, it's the same thing. Yeah. Whenever you have some finite energy, there's always some coupling between them. Good? So do you want to have a break? AUDIENCE: Yeah. PROFESSOR: This is like a physicist's guess that, somehow, these two must be the same. OK? These two must be the same. So this is a surprising statement. Because on the one side, it's a field theory. And on the other side, it's a full gravity theory. AUDIENCE: I have a question. So you said that, in principle, you can generalize the solution for any metric in the full string theory for any regime. PROFESSOR: Yeah. AUDIENCE: But in that case, maybe you will probably not get a geometry the same as AdS5. PROFESSOR: Yeah. That's a very good question. That's a very good question. It turns out, indeed, logically, you may not. But actually, AdS5 times S5 is a highly-- I said I will explain. It's a very highly symmetric space. It turns out it is actually an exact string theory geometry. At least we believe it's an exact string theory geometry. And then when you include all those corrections, this will not change. Yes? AUDIENCE: So with this in mind, that means that people don't-- they only know this heuristically. PROFESSOR: Yeah.
AUDIENCE: So there's no actual proof of this, besides the fact that everyone believes it works. PROFESSOR: Yeah. It depends on what-- the label. What do you mean-- AUDIENCE: Both. PROFESSOR: So the physicist, the way it works is that 1 works, 2 works-- yeah, n equal to 1 works, n equal to 2 works, n equal to 3 works, n equal to 4 works, then we say it works. And under this, I'm way past that. So maybe this works to n to the 10,000, something like that. But it's not a proof. AUDIENCE: Sure. PROFESSOR: Yeah. But it worked to the n to the 10,000. Something like that. AUDIENCE: OK. PROFESSOR: OK. Good? So now let's talk about this duality. So this is normally called AdS/CFT duality. OK? So to understand this duality, let's first understand the object in this relation. OK? So let me just say a little bit about the object in this relation. First, let me say a little bit about anti-de Sitter space, since that may not be something you have worked before. OK? So one way to write the metric is this metric. So this is AdS5. So this is AdS5. So this is AdS5 metric. So you can actually write it as AdS-- yeah, let me just copy it down. So you can generalize to any number of dimensions. You can generalize any number of dimensions. If we make-- if you make this part into a 3-dimensional Minkowski spacetime, OK? And then this would be the metric of AdS-- say D plus 1 dimension. So this is a D plus 1 dimension, Minkowski. Than you have another r direction. So let we call this my-- so now let me relabel this to be 1. OK? So r can be considered as a curvature radius of this spacetime. So this r is a constant. And this radius, r, goes from 0 to infinity. OK? r goes to 0 from infinity. So roughly, a geometric picture here is that-- so you can see that you have half space. OK? You have a half space, r from 0 to infinity. And that each value of r-- at each value of r, there's a Minkowski spacetime. There's a d dimension Minkowski spacetime. So if you fix r, then you just get some d dimension Minkowski spacetime. And then with some pre-factor to define the scale of that Minkowski spacetime. OK? So just like this, you have a half line-- r from 0 to infinity. Then at each point of this half line, it's 48 D dimensional Minkowski spacetime. So that's essentially what this does. So the key thing is that the Minkowski time is what we normally called warped. It's that the scale factor of this Minkowski spacetime depends on where you sit in the radial direction. OK? So this r, we normally call radial direction. It depends on where you sit in the radial direction. So when your r goes to infinity, the scale size becomes infinite. So this, we'd normally call it to be the boundary. So this we call to be the boundary. So r equals infinity, we call to be the boundary of the spacetime. So this is similar to, say, when we say-- when we write-- a Euclidean metric in this form, when r goes to infinity, the sphere size become infinite, and then this is essentially the spacetime boundary, roughly. And so this can be similarly understandable. OK? So this idea is spacetime-- idea is spacetime of constant curvature-- constant negative curvature. OK? In fact, this is the most symmetric spacetime of [INAUDIBLE] constant curvature. So you can easily check. You can easily check this satisfy the Einstein equation with the [INAUDIBLE] constant. So you can easily check if you satisfied the Einstein equation. So now, because I'm using this r to denote this radius, this curvature radius. 
So now I will quote my curvature [INAUDIBLE] always using the script now. So also, MN now refers to all directions in this AdS. So you can easily check if you satisfy this Einstein question with some cross-module constant. And you can just compute the-- it's easy to compute the curvature of this guy. So let me just write down the answer. So yeah, at first you can-- yeah, you can compute the curvature of this guy. You can just find the answer. You find the curvature, the rich scalar is given by minus D times D plus 1 R squared. OK? So this is a rich scalar. This R is the same as this R. And then similarity, you find the cross-module constant. Then you can find here. It's given by minus 1/2 D minus 1 divided by R squared. Sorry, here it should be minus 2. OK? And I said that this is the most symmetric homogeneous negative curvature spacetime. So this is reflected in that actually you can write the Riemann tensor-- the Riemann tensor of this spacetime purely in terms of this metric itself. Without using derivative-- OK? So this is essentially the definition of the most symmetric [INAUDIBLE] active curvature spacetime, is that the Riemann tensor can be written solely in terms of metric itself. OK? So these are just some facts of this space. So often, we use a lot of coordinates. Sometimes it's often convenient to use-- instead of using this r, we will introduce another coordinate. We introduce a z. You go to R squared divided by-- capital R squared divided by small r. Essentially, it's just the 1 over R. And because the capital R goes from 0 to infinity, then it's more z also going from 0 to infinity. But now, the boundary-- but now it's equal to 0 is the boundary, rather than equal to infinity. And then the metric can be written in the slightly different way in terms of this z coordinate. We can in this form. OK? You can see there is a d. So again, here, you just change the label r by z. And now it's equal to 0, is the boundary. And they're going to interior, because one increases D. So D equals 1 into a different direction. OK? AUDIENCE: [INAUDIBLE]? PROFESSOR: Yeah. That's a very good question. I'm waiting for some of you to ask. So this is the physicist's metric which we described. Yeah, this is the physicist's metric. So yeah, let me just say it this way. So you have an infinite throat. So going to low energy limit, we should just go very much down in the throat. OK? Because once you decouple those decreased freedom, then you can just rescale your energy scale to whatever you want. You go to finite energy, et cetera. And that's why you go into AdS5. Yeah. Once you decouple those things, then you are living AdS. Then you are living in the AdS, and then you can talk about the energy scale, or any radius as you want. It just becomes relative to itself. You see what I mean? AUDIENCE: That's kind of normalization? PROFESSOR: No. This is not normalization. This is just-- normally, when we take a low energy limit, the reason we take a low energy limit is we want to decouple those massive modes. As once we decouple those massive modes, then we consider the finite energy. And we include the finite energy. Because we consider finite energy relative to low-- we still consider a finite range of energy. It's the same thing as that. Is this clear to everybody? This is a very important point. Yeah. Good. Yeah? AUDIENCE: What was the question? PROFESSOR: The question is that previously, in order to decouple those high energy modes, when we take a low energy limit, we have to take r equal to 0. 
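Since the blackboard expressions are only described verbally here, it may help to collect them. In the coordinates being used, the Poincare patch metric of AdS with d plus 1 dimensions and curvature radius R is

\[
ds^2 = \frac{r^2}{R^2}\,\eta_{\mu\nu}\,dx^\mu dx^\nu + \frac{R^2}{r^2}\,dr^2
\;=\; \frac{R^2}{z^2}\left(dz^2 + \eta_{\mu\nu}\,dx^\mu dx^\nu\right),
\qquad z = \frac{R^2}{r},
\]

and the curvature statements quoted above are

\[
\mathcal{R} = -\frac{d(d+1)}{R^2},\qquad
\Lambda = -\frac{d(d-1)}{2R^2},\qquad
\mathcal{R}_{MNPQ} = -\frac{1}{R^2}\left(g_{MP}\,g_{NQ} - g_{MQ}\,g_{NP}\right).
\]

The following short script is one way to do the "easy check" mentioned here for the case d equal to 4, that is AdS5. It is only a sketch using sympy: it computes the Christoffel symbols and Ricci tensor by brute force and verifies R_MN = -(d/R^2) g_MN and the Ricci scalar above.

import sympy as sp

# Check that AdS5 (d = 4) in Poincare coordinates solves R_MN = -(d/R^2) g_MN.
d = 4
R = sp.symbols('R', positive=True)
z, t, x1, x2, x3 = sp.symbols('z t x1 x2 x3')
coords = [z, t, x1, x2, x3]
n = len(coords)

# ds^2 = (R/z)^2 (dz^2 - dt^2 + dx1^2 + dx2^2 + dx3^2)
g = (R**2 / z**2) * sp.diag(1, -1, 1, 1, 1)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{as} (d_c g_{sb} + d_b g_{sc} - d_s g_{bc})
Gamma = [[[sp.simplify(sum(ginv[a, s] * (sp.diff(g[s, b], coords[c])
                                         + sp.diff(g[s, c], coords[b])
                                         - sp.diff(g[b, c], coords[s])) / 2
                           for s in range(n)))
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_b Gamma^a_{ac}
#                      + Gamma^a_{ae} Gamma^e_{bc} - Gamma^a_{be} Gamma^e_{ac}
def ricci(b, c):
    expr = 0
    for a in range(n):
        expr += sp.diff(Gamma[a][b][c], coords[a]) - sp.diff(Gamma[a][a][c], coords[b])
        for e in range(n):
            expr += Gamma[a][a][e] * Gamma[e][b][c] - Gamma[a][b][e] * Gamma[e][a][c]
    return sp.simplify(expr)

Ric = sp.Matrix(n, n, ricci)
Rscalar = sp.simplify(sum(ginv[a, b] * Ric[a, b] for a in range(n) for b in range(n)))

print(Rscalar)                                        # -20/R**2, i.e. -d(d+1)/R^2
print(sp.simplify(Ric + (sp.Integer(d) / R**2) * g))  # zero matrix: R_MN = -(d/R^2) g_MN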
And why here we worry about r goes to infinity-- r equals infinity here? OK? Why are we worried about r equals infinity. It seems like this is going outside our previous regime. But the answer is that, once we decouple those high energy modes, then you can just extend to whatever energy range you want. Because you have served the purpose of discover-- decouple those-- yeah, because those things already went to infinity. You don't have to worry about them. Is it clear? Just like in the Yang-Mills theory, we said we take the low energy limit. We decouple those massive modes. But eventually, in the Yang-Mills theory, we would consider very high energy processing in the Yang-Mills theory. OK? But those things we decouple have even higher energy scale. OK? Good, good, good. So we call this-- so this is 1 and this is 2. So in fact, actually 1 and 2, they're not covering the full anti-de Sitter spacetime. So actually, they only cover a part of anti-de Sitter spacetime. So they only cover, say, what is so-called Poincare patch, a patch of de-Sitter spacetime, anti-de Sitter spacetime. So now let me just mention a little bit what is normally called a global AdS. OK? So the AdS, or global AdS-- so let me first write down this sentence. So 1, 2-- 1 over 2 only covers a part of full AdS. OK? So the global AdS, the so-called global AdS, can be defined-- can be described as a hyperboloid in the Lorentz spacetime. OK? Hyperboloid in Minkowski-- d plus 2 Minkowski spacetime of signature 2, d. OK? So to make this precise, say, for example, if you go to the [INAUDIBLE]-- for example, the book by Hawking and Ellis called The Large Scale Structure of Spacetime, the way they define the anti-de Sitter as follows. So you consider d + 2 dimension of Minkowski spacetime with signature 2, d. So you consider a Minkowski spacetime like this. OK? So now you have d, positive signature, and the two negative signature. So x starts from minus 1 index to 0, which gives you the negative signature. And then you have-- so this is Minkowski spacetime of the 2, d signature. So now you can see that the following hyper surface in that spacetime. OK? So this is a hyperboloid in that spacetime. And this defines your AdS. OK? So this surface defines AdS. This surface defines AdS. So, of course, this is a very different way to parametrize AdS. Then either this metric or that metric. And you can actually parametrize AdS in many, many different ways. So now let me just mention a few ways to-- a few choices of different coordinates to parametrize this hyperboloid. OK? So the first is-- one is called the Poincare coordinates. It's this one. So first is what we call the Poincare coordinates. So in this Poincare coordinates, you introduce r. So r there is given by x minus 1 plus xd. So you take the 1 over the negative signature and the 1 over the positive signature. You take x minus 1xd, add them together. So this is like some kind of [INAUDIBLE]. And then you write x mu. So there, we always write x mu equal to tx. OK? So this is a three-dimensional Minkowski coordinate. You add x mu equal to r times capital mu divided by r. OK? So in this hyperboloid, you make this definition. And then you restrict to the region of r greater than 0. Then you recover. So 1 corresponds to r greater than 0 part of this metric, of this spacetime. And you can easily check yourself. Just make this definition-- this is the exercise you should do, which I don't have time to do here. Just plug this in. Plug this into that metric. 
Plug this parametrization into that metric, and then you find you will recover 1. Of course, 1 is only restricted to r equal to 0 to infinity. So you only restrict to r greater than 0 part of this. And if you just introduce this in [INAUDIBLE], of course r can be smaller than 0. OK? So these are Poincare coordinates. So once you go to that metric, you then go to this metric. It's the same. Now also let me describe something called the global coordinates. So this is-- if I notice the following structure. So this has the form of the difference of two squares. So I can write-- so let's write xi to 1 to d xi squared to be some function, r squared and r squared-- times r squared. So this r have nothing to do with that r now. We are doing a different set of coordinates. I'm just using the same notation for convenience. So let's introduce a new coordinate, r, which is just the sum of all these squared. And then I introduce-- then from this equation, the x1 minus x0 squared should be equal to r squared plus 1 times r squared. OK? So now I can-- so this is a circle. So for fixed r, this is a circle. For fixed r, this is a d minus 1 dimensional sphere. OK? So I can introduce the d minus 1 spherical coordinates on this and then introduce a circular coordinate on this. So if I do that-- so let me just call the circular coordinate on here tau, the polar coordinate in the x minus y. Polar coordinate in the x minus 1 and the x0 plane. So let me call this tau. And let me call [INAUDIBLE] coordinate here omega d minus 1. OK? Then you plug this into here. Then you find the following metric. So you find the following metric. You find ds squared equal to r squared minus 1 plus r squared d tau squared plus the r squared plus 1 plus r squared plus r squared d omega d minus 1 squared. OK? So remember, tau is the polar coordinate in this x minus 1 and x0 plane. And the omega d minus 1 is the polar coordinate in the x1 to xd OK? And then you get this. Again, it's an exercise you should go through yourself. Yes? AUDIENCE: So I have a question. So I'm just trying to visualize it. So there's these-- at least the way it works in [INAUDIBLE]. So you have these two sheets which are disjoint from each other. So how is it that this enables us to go smoothly between them? It seems that there's-- so maybe I'm confused about what you did. PROFESSOR: What do you mean? AUDIENCE: So we're trying to describe this object, which is two disjoint components. PROFESSOR: We only look at one component. AUDIENCE: So what's the point of global coordinates? Is it just a different-- PROFESSOR: No, no, no. No, that only covers part of the one component. AUDIENCE: Oh, I see. OK. Yes. PROFESSOR: Yeah. OK, good. So you look at here-- yeah. Again, r goes from 0 to infinity. But this r is very different from this r now. OK? So in this definition, tau is a polar coordinate. So in this definition, tau will go from 0 to 2 pi. OK? But once you write it in this form, nothing depends on tau. So you can extend tau to any range you want. So once we write this form, we take tau goes to minus infinity to plus infinity. So this is [INAUDIBLE], rather than from 0 to 2 pi, we take tau from minus infinity to plus infinity. So sometimes, this is also called extended AdS. Sometimes people don't just bother to do that. You just extend it automatically. OK? So strictly speaking, this, as a definition of AdS, only corresponding to the part of tau from 0 to 2 pi. OK? 
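Both of the exercises just mentioned can be sketched quickly; conventions vary slightly, and the following is one standard choice consistent with the definitions above. For the Poincare coordinates, take

\[
X^{-1}+X^{d}=r,\qquad
X^{-1}-X^{d}=\frac{R^2}{r}+\frac{r}{R^2}\,\eta_{\mu\nu}x^\mu x^\nu,\qquad
X^\mu=\frac{r}{R}\,x^\mu .
\]

The hyperboloid condition -(X^{-1})^2-(X^0)^2+\sum_i (X^i)^2=-R^2 is then satisfied identically, and the induced metric works out to

\[
ds^2 = \frac{R^2}{r^2}\,dr^2+\frac{r^2}{R^2}\,\eta_{\mu\nu}\,dx^\mu dx^\nu ,
\]

exactly the Poincare patch metric, covering the region X^{-1}+X^{d}=r>0. For the global coordinates, set

\[
\sum_{i=1}^{d}(X^i)^2=R^2 r^2,\qquad
(X^{-1})^2+(X^0)^2=R^2(1+r^2),\qquad
X^{-1}+iX^0 = R\sqrt{1+r^2}\;e^{i\tau},
\]

and the induced metric becomes

\[
ds^2 = R^2\left[-(1+r^2)\,d\tau^2+\frac{dr^2}{1+r^2}+r^2\,d\Omega_{d-1}^2\right].
\]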
And when you extend tau from minus infinity to plus infinity, you actually have an infinite copy of this. OK. So now, with one more thing, we can-- so this coordinate is nice. Now it covers everything. Covers everything, but even actually a little bit more. Because now you extend the tau to minus infinity. We can visualize this metrically. It might be better by doing the following. Allow the transformation. So now let's take-- yeah, before I do that, let me say a few words about-- let me just do this first. Let's take tau again. Take r equal to tangent rho. Rho is some other coordinate. And r from 0 to infinity. Then rho would be from 0 to 2 pi, pi over 2. OK? 0 to pi over 2. And you park this into here with a little bit tiny-- so after two seconds, you find the following metric. OK? So this is easy to see. Because 1 plus [INAUDIBLE] rho squared is just 1 over cosine squared, et cetera. So the reason this metric is nice, because other than this conformal factor, this thing we can easily rationalize. So this is just a cylinder. So this is just a cylinder. Let me draw it here. So this is a cylinder. OK? So this direction can be considered-- the radial direction is the rho direction. And this direction is the tau direction. And this direction is the omega d minus 1, or the polar coordinate direction. I'm only drawing one of them. OK? So essentially, this I have a structure of a cylinder. And at the center here, I have rho equal to 0. And here, I have rho equal to pi over 2. OK? Took rho from 0 to pi over 2. OK? And when you go to pi over 2, its quotient factor goes to 0. So the overall scale blows up. But here is your boundary. So pi over 2-- yeah, so pi over 2 is the boundary. OK? Is this clear? So pi over 2 is the boundary. Now the boundary map. So the boundary of the cylinder-- so the boundary now is sitting at rho equal to pi over 2, has the topology of r times s d minus 1. OK? So the boundary itself, it's-- so the boundary itself, it's a cylinder. The whole space is a solid cylinder. It's a field cylinder. OK? But the scale factor blows up when you approach the boundary, because of this prefactor. And the r is the time and the sd minus 1 is the-- now, to visualize this space a little bit better, let's look at the cross section. So imagine you put a plane in this space. OK? So now let me call this circular direction, it's called theta. And then now let's consider the cross section across this solid cylinder. So then we just have a plane. OK? A plane. I should not have-- let's consider this cross section. OK? In the center of this cylinder. Cut half. Cylinder cut in half. So for example, in this notation, so this will be rho equal to 0. So here is rho equal to pi over 2. And I introduced this angular coordinate. So that equals 1 into theta equal to 0. And here will be, again, rho equal to pi over 2 of theta equal to pi. OK? And again, this is the tau direction. Is it clear what I'm doing? I cut the cylinder in half, and that's that cross section. So now, because of the structure of this times a conformal factor overall, a scale factor. So the propagation of a light ray does not depend on this overall factor. The propagation of that ray can be seen from here. OK? So now, let's consider radial light ray, which only goes into the rho direction. There's no in this angular direction. Let's consider radial light ray. Then from that picture, it's very clear-- let's say this is tau equal to 0. Then it actually takes only pi over 2 for light ray to reach the boundary. OK? 
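To fill in the two-second computation with r = tan rho: since 1 + tan^2 rho = 1/cos^2 rho and dr = d rho / cos^2 rho,

\[
-(1+r^2)\,d\tau^2 \to -\frac{d\tau^2}{\cos^2\rho},\qquad
\frac{dr^2}{1+r^2} \to \frac{d\rho^2}{\cos^2\rho},\qquad
r^2\, d\Omega_{d-1}^2 \to \frac{\sin^2\rho}{\cos^2\rho}\, d\Omega_{d-1}^2 ,
\]

so that

\[
ds^2 = \frac{R^2}{\cos^2\rho}\left(-d\tau^2 + d\rho^2 + \sin^2\rho\; d\Omega_{d-1}^2\right),
\qquad 0 \le \rho < \frac{\pi}{2}.
\]

Up to the overall conformal factor R^2/cos^2 rho, the (tau, rho) part is flat, which is why radial light rays travel at 45 degrees in the cylinder picture.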
Because rho only goes from pi over 2, and light ray always travel in 45 degrees. So this is just like-- if you look at the [INAUDIBLE] plus the rho squared, this is Minkowski. And so light ray travel 45 degree. And then you reach the boundary. And then take log of pi over 2. Yeah, so this is pi over 2. So this is tau equal to 0. And then this would be tau equal to pi over 2. Then suppose that you put some reflecting wall here. And by the time tau equal to pi, it has bounced back. OK? And now, if you go another round, then at tau equal to 2 pi, then precisely go back to the origin. Then go through a period. OK? And that actually precisely is tau equal to 2 pi, which would define the angular coordinates. If you follow that, then that equals 1 into precisely one period of the light ray traveling. OK? But the key thing is that the light ray will travel pi over 2 to the boundary. And now you can check. If you look at-- not the light ray. So light ray travels along the light magnitude axis. But if you look at the time magnitude axis, which is travelled by massive particle, then you can show that the massive particle never reach the boundary. We'll go into a motion like this. OK? So this is a massive particle. So the massive particle will never reach the boundary. It will always be reflected back intuitively, from the gravity potential. OK? So you will show this fact yourself. And your P-set show that actually the massive particle will never reach the boundary. So in a sense, so if you look at this metric, at rho equal to 0, at rho equal to 0, this is essentially like a flat metric. Because at rho equal to 0, this is just a sphere. So this becomes 1. This is a sphere. Yeah. Yeah. Rho equal to 0, this is just like Euclidian space. Because the sine of rho squared become rho squared. OK? So this just becomes like a Euclidian space in polar coordinates. And then you have a time. And then this scale factor becomes 1. So at rho equal to 0, just like you have ordinary Minkowski spacetime. And then the curvature comes in when you go to finite rho, and then those factors become important. And then you can show-- yeah, just because of that, you can show that the massive particle will never actually travel to the boundary. OK? Always reflected back. In some sense, you can think that the massive particle is always pulled back by the gravitational potential. OK? Pulled by gravitational potential. So from a different point of view, it's that another way to say it is a familiar statement, which people often make and often can give you great intuition, physical intuition in many physical problems, is that AdS is like a confining box. Just like a box. So things cannot escape from it. OK? OK? And in particular, the typical curvature radius is r. So you can roughly imagine this-- because in the center, it's like a flat space. So it's more like a box of size r. OK? You can heuristically think of it as a box of size r. AUDIENCE: Yeah. Does it have finite or infinite volume? PROFESSOR: Hm? AUDIENCE: Does it have finite or infinite volume? PROFESSOR: Yeah, it has infinite volume. AUDIENCE: It's infinite volume. OK. PROFESSOR: Yeah. But because it's confining, most of the particles will only see the sides r. AUDIENCE: So one other question. So is it correct to say that it's like an antiharmonic trap? So if I'm in the space, and I throw a ball that it's going to come and hit me back in the head or something like that? It seems like-- PROFESSOR: Yeah. Right, right. Yeah. AUDIENCE: It'll bounce back? PROFESSOR: Yeah. 
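The two statements about light rays and massive particles can be made quantitative; this is a sketch, and the full timelike case is the problem set exercise. For a radial light ray, ds^2 = 0 gives d tau = plus or minus d rho, so a ray leaving the center rho = 0 at tau = 0 reaches the boundary rho = pi/2 at tau = pi/2. For a radial massive particle, it is easiest to use the global metric in r with R set to 1: the conserved energy is E = (1+r^2) d tau / d lambda, with lambda the proper time, and the normalization of the velocity gives

\[
\left(\frac{dr}{d\lambda}\right)^2 = E^2 - (1+r^2),
\]

so the particle turns around at r_max = sqrt(E^2 - 1) and oscillates; no finite E ever reaches r equal to infinity. In this sense AdS really does act as a confining box of size of order R.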
Yeah, actually, our own friend, Frank Wilczek, in the '80s, have tried to use these ideas, this feature, to say, maybe this actually describes confinement. Because if you put [INAUDIBLE] there, they're confined. Anyway. Yeah, but keep this heuristic picture in mind. Because many physical problems actually, if you think in terms of this picture, become very intuitive. OK? OK. And often-- so you say this is an exception for the light ray. But often, we impose the reflecting boundary coming on the light ray. And from the light ray, this is also like a box. And you go to the boundary, and you bounce back, et cetera. OK? So now let me again just say it. But you should go back to check yourself. Now, by looking at the relation between here-- so here is a specific relation between r and x. And here is a specific relation between this x and this tau and this r. You can find the relation between this Poincare coordinate and this global coordinate. OK? Use a little bit effort. And then you can show yourself-- let me just draw the picture. Again, using the same picture. You can show that the Poincare patch cover a wedge-- say in this cross section. The Poincare patch cover a cross section of this global AdS. OK? So here is tau equal to minus pi. And here is tau equal to plus pi. And here, say tau equal to 0. And in this Poincare patch-- and then this 9 become r equal to 0 and t equal to minus infinity. And this line become r equal to 0 and t plus infinity. OK? Plus infinity. So the time essentially goes around like this. Time goes around like this. And this line equals t equal to 0. OK? So t equal to 0 and tau equal to 0 coincides, and then goes to minus infinity rotating into that. And the constant r surface-- so this is a constant r surface in the Poincare coordaintes-- goes like this. So the time goes up. OK? And you can check yourself just by working out the relation between those coordinates through the x. AUDIENCE: You mean this is wrapped on the cylinder? PROFESSOR: No, no, no. No, I'm just saying-- AUDIENCE: This picture? PROFESSOR: Just within this cross section, I can look at this cross section, I see how this translates into Poincare coordinates. And that's what's happening. A Poincare coordinate covers this part. OK? It cover this part. I can also draw a more fancy picture here. Then roughly, what you can show is that in this general picture, the Poincare patch covers something like this. OK? And the region between these two lines. So this is a spacial infinity of the-- in this Poincare patch, the boundary is Minkowski spacetime. So you see-- so the Poincare patch cover like that. And then this is a spatial infinity over the Minkowski spacetime. This is times infinity and times a path in the Minkowski spacetime. OK? So I urge use to do it yourself. Because this kind of thing, you really have to work through yourself to have intuitive feeling for it. But this the picture. Once you work hard yourself, you compare with my picture, and then you will understand. OK? So I will stop here today. So in some sense-- yeah, let me just say one more sentence. This extended ideas contain infinite copies of this Poincare patch. OK?
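In terms of the embedding coordinates above, the region covered by the Poincare patch can be stated compactly: it is the part of the hyperboloid with

\[
X^{-1}+X^{d} = r > 0 .
\]

The surface r = 0, equivalently z going to infinity, where the Poincare time t runs off to plus or minus infinity, is a null surface of the global geometry, usually called the Poincare horizon; it corresponds to the 45-degree lines bounding the wedge in the figure just described.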
MIT 8.821 String Theory and Holographic Duality, Fall 2014. Lecture 18: General Aspects of the Duality.
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HONG LIU: OK. So last time, we talked about this IR-UV connection between the bulk and the boundary. So the key thing is-- let me write down the metric here, the Ads metric in this Poincare coordinate. So this is the boundaries are equal to zero. And when you increase z, you go to interior. And then for each constant, z, you have a Minkowski space time. You have a Minkowski d. So this is Ads d plus 1. So each slice is a d dimension Minkowski space. And due to this redshift factor, the more you go to the interior of the space time, then corresponding to the lower energy process when viewed form the field theory. So here, the same process is happening here compared to happening here, and here corresponding to the IR process, and the [INAUDIBLE] boundary corresponding to the UV process. So this is a key relation between the bulk and the boundary theories. And also, this gives you an intuitive understanding where does this actual dimension come from from the field theory point of view. Then from the field theory perspective, this actual dimension can be considered as a geometrization of the energy scale. And we know that the physics change with the energy scale from the field theory point of view. It's called the normalization group flow, how physics evolves when you change the energy or length scale. In the field theory, it's called normalization group flow. So you can also view that the evolution in the gravity side, say from the boundary to the interior, and that this flow in the z direction can be considered, again, to geometrize the normalization group flow of the field theory. So any questions regarding this? Good. So now let's talk about some further aspects of the duality. So the duality is that once you realize there's such relation, since the two sides are completely different objects, so the game is that you really have to do lots of guess work. Essentially, you have two sides if you want to relate things on two sides. And you have to do guess work. How does this quantity translate into that quantity, et cetera? And then check the consistency. Just like you don't know two languages, and then you have to guess between the two languages and then build up the dictionary. And we will be doing the same. We will be doing the same. Good. So this is more like a review of what we already sad. So let me be more explicit. We have N equals 4 super Yang-Mills theory. We have N equals 4 super Yang-Mills theory. And then here, you have type IIB string in Ads5 times ds5. So let me just write down Ads5 just for simplicity. So here on this side, there is a conformal symmetry which we explained before because this is a four dimensional theory. So the conformal symmetry of a d dimensional series SO d, 2. And on this side, there's precisely the same group, which is isometry of Ads5, which is precisely also SO 4, 2, which we reviewed before. And you can write down the transformation on both sides. In your pset, you have checked some of them, that they actually one to one correspond to each other. For example, this special conformal transformation. And the others, like translation or rotation, et cetera, it's clear. AUDIENCE: But being the right hand side, we still have some SO 5, S5. 
HONG LIU: I will talk about that. So this is more like a space time symmetry. So this is a space time symmetry. From field theory point of view, this is a space time symmetry. So in N equals 4 super Yang-Mills theory, there's also global symmetry we discussed before. This is also global symmetry. But this is global symmetry associated with space time. And there's also global internal symmetry, SO6 internal symmetry. In N equals 4 super Yang-Mills theory, we discussed last time there are six scalar fields. You can rotate them each other. And this can be considered as coming from the D3 brane. You can rotate six transverse directions the symmetry in rotating the six transverse directions. And on this side, there's exactly the same symmetry. So now this is isometry of S5. So the isometry of S5 precisely gives you SO6. So you have exactly the same SO6. And we will not be explicit. You can also actually map the supersymmetry between them. You can also map the supersymmetry between them. So N equals 4 supersymmetry. So there's a 4 supersymmetry, which just comes from N equal to 4. And because we have conformal symmetry, then the conformal symmetry does not actually commute with this 4 supersymmetry, then generate another 4 supersymmetry. And anyway, so essentially, you have eight supercharge of the [INAUDIBLE]. So this all together is 32 real superchargers. So when we say N equal to 4, so this is N equal to 4 in terms of four dimensional [INAUDIBLE]. So your supersymmetry has four [INAUDIBLE] as the supercharge. But the conformal symmetry generates another 4. So all together, you have eight [INAUDIBLE], four dimensional [INAUDIBLE], as the supercharge. For each [INAUDIBLE] in four dimensions, there are how many components? How many components are there for [INAUDIBLE] in four dimensions? AUDIENCE: Four. HONG LIU: Four. Because the two components, but each component is complex. And so there are four real components. So all together, you have 32 real charges. And similarly, you find-- I will not do this side. You find the same amount of supersymmetry. But in this case, for example, the low energy limit. Let's just talk about the low energy limit of this theory. So the low energy limit, as we said before, just has to be super gravity. So it has to be super gravity. Then you find you actually have exactly the same amount of supersymmetry in this geometry. But the interesting thing is that by definition, the supersymmetry on the gravity side is actually local. So you actually have the same amount of local supersymmetries. So if you look at this correspondence between each other, then you actually see a pattern. So now let me make some remarks. So if you look at the structure, this math in here, you see a pattern. On this side, all these symmetries are global symmetries. But on this side, all these symmetries are local symmetries. So this, I just said that in super gravity, the supersymmetry is local. And the space time isometry is just subset of a space time coordinate transformation. And space time coordinate transformation are local symmetries. So the isometry is a subgroup of diffeomorphism. So diffeomorphism just means the coordinate transformations. And these are local symmetries. Now we find the mapping is on the field theory side, the global symmetries is mapped on the gravity side into the local symmetries. So for each global symmetry in the field theory side, there's a corresponding local symmetry on the gravity side. 
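To keep the counting straight, the matching described so far can be collected as follows, with the numbers as quoted in the lecture:

\[
\begin{aligned}
\text{conformal group of } \mathbb{R}^{1,3}:\ SO(4,2) \quad &\longleftrightarrow\quad \text{isometry of } AdS_5:\ SO(4,2),\\
\text{R-symmetry}:\ SO(6) \quad &\longleftrightarrow\quad \text{isometry of } S^5:\ SO(6),\\
16\ Q + 16\ S = 32 \ \text{global supercharges} \quad &\longleftrightarrow\quad 32 \ \text{local supersymmetries of } AdS_5\times S^5 .
\end{aligned}
\]

Here the 32 on the field theory side comes from the 4 Weyl spinors of Poincare supercharges Q, each with 4 real components, giving 16, plus the 16 superconformal charges S generated by commuting the Q with special conformal transformations.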
But you may immediately ask the question, on the gravity side, if you talk about diffeomorphism, then this is a huge group-- certainly much, much larger than what we are talking about the isometry here. So why we are only talking about isometry? Why we don't talk about other parts of the diffeomorphism, only talk about isometry? So what's special about the isometry? So the second remark, why isometry? Why only isometry? Why only look at the isometry? But let me just save time. So the isometry is important for the following reason. Because even though this is a subgroup-- so as I mentioned last time, when we talk about quantum gravity, when we talk about string theory and Anti-de Sitter times S5 in terms of AdS5 times S5, you should keep in mind that this AdS5 times S5 refers to the asymptotic geometry. Because as a quantum theory, the theory can fluctuate in the middle. And the isometry can be considered as a subgroup. But the level is even though the space time fluctuates, but the AdS5 times S5 specifies the asymptotic geometry of the space time. And the isometry is precisely the subgroup which leaves the asymptotic form of the metric invariant. So these are not ordinary diffeomorphisms. So the ordinary diffeomorphisms-- so ordinary gauge transformations, let me just say more generally. By gauge transformations, also means the diffeomorphism or just general gauge transformations. So when we talk about gauge transformations, say for example, in QED or in QCD, et cetera, we always assume-- so the classical gauge transformation, we can see that we always take them to be fall off sufficiently fast at infinity. When we talk about gauge transformations, when we talk about the typical diffeomorphisms, these are the kind of transformations we assume goes to 1, say, at infinity. And this isometry, which leaves the asymptotic over the metric invariant, so these are essentially the large gauge transformations. So-called large transformations is that they don't go to the Identity at infinity. And of course, this is precisely the large gauge transformations which leaves the asymptotic invariant. So in a sense, these large gauge transformations can be considered as the global part of the diffeomorphism. So any questions on this? Yes? AUDIENCE: What does it mean that those isometries are local? What do you do with isometry [INAUDIBLE]? What do you mean when you say that isometries-- HONG LIU: Yeah. These are just coordinate transformation. A coordinate transformation is always defined point by point, right? Just these are specific coordinate transformations. AUDIENCE: And what happens to gauge symmetries [INAUDIBLE]? HONG LIU: Sorry? AUDIENCE: What happens to gauge transformations in the Yang-Mills part? HONG LIU: No. The gauge transformation in the Yang-Mills part, you don't see it. Gauge freedom is just redundant freedom. You never see it on the other side. AUDIENCE: Does this list exhaust all symmetries of the Yang-Mills particle? HONG LIU: Yeah. Exhausts all global symmetries in the Yang-Mills theory part. AUDIENCE: Not local? HONG LIU: All the global theories on the Yang-Mills side, and not local symmetries. AUDIENCE: [INAUDIBLE] discuss them, [INAUDIBLE]. HONG LIU: There's a u(n) gauge group there, but they don't have correspondence in the gravity side. The reason we don't consider the gauge symmetry, because they correspond to redundancies and they don't have to be present on the other side. You only need to do the physical [INAUDIBLE] to be identical. 
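As a concrete illustration of this distinction, a sketch in the Poincare coordinates used last time: the scaling transformation

\[
z \to \lambda z, \qquad x^\mu \to \lambda x^\mu
\]

leaves the metric ds^2 = (R^2/z^2)(dz^2 + eta dx dx) exactly invariant, so it is an isometry, but it does not die off as z goes to 0; instead it acts on the boundary as the dilatation x to lambda x of the CFT. A diffeomorphism whose parameter vanishes sufficiently fast near the boundary is, by contrast, pure redundancy and has no counterpart on the field theory side. Roughly the same logic applies to a bulk gauge field: gauge transformations A_M to A_M + partial_M Lambda with Lambda going to 0 at the boundary are redundancies, while those approaching a nonzero constant at the boundary act as the corresponding global symmetry of the boundary theory.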
And this also is the reason I make this remark here. Even though all these global symmetries, they find the corresponding local symmetries on the gravity side. But precisely due to this remark, those things which really map to the global symmetries in the field theory can be considered as a global part of the local symmetries on the gravity side. So even on the gravity side, in some sense, you should think of them as global symmetries. And they are large gauge transformations. Because the ordinary gauge transformation is just corresponding to redundancy of degrees of freedom. And that should not be reflected on the other side. AUDIENCE: What do you mean by large gauge transformation? HONG LIU: Large gauge transformation means the gauge transformations which don't vanish at infinity. So the ordinary gauge transformation, just like in QED, in quantum electrodynamics, when we talk about gauge transformation. The gauge transformations we consider all the gauge transformations which go to zero at infinity. And if you consider those gauge transformations which don't go to zero at infinity, and that's what we call large gauge transformations. And those large gauge transformations essentially is like the global part of the u(1) gauge symmetry, in some sense can be considered as. AUDIENCE: But that is the gauge symmetry being gravity side. HONG LIU: That's right. So this is the part of the gauge symmetry on the gravity side. But the part corresponding to the boundary is the part associated with the infinity. AUDIENCE: Here in isometry of AdS5, we were showing, like in p-set, it's point to point local symmetry. It's like just a space time point. But how about this conform in this Yang-Mills theory? What is the symmetry of the object? What kind of object does this symmetry correspond to because it seems-- HONG LIU: It's conformal symmetry. AUDIENCE: Conformal symmetry of the component of the field? HONG LIU: No, no, no. It's the conformal symmetry of the Minkowski space. So on this side, you have isometry of AdS. On the other side, you have conformal symmetry of the Minkowski space. So you're asking what are you corresponding to on the field theory side, right? So that's mapped to the conformal symmetry of the Minkowski space. AUDIENCE: So the isometry is actually large gauge transformations. HONG LIU: Right. So the story actually is more general. So the story actually works more general. This is the statement for the N equal to 4 super Yang-Mills theory. So the story is more general. So in the case which you have, say, more general, say duality between a CFT in d dimensions in Minkowski d with the AdS d plus 1 dimensional gravity. In any such duality, then you always have the conformal symmetry, which is SO(d, 2) on this here mapped to AdS isometry. And any internal gauge symmetry, say if you have some u(s) global symmetry here, then this will be mapped into a u(1) gauge symmetry. Again, in the sense that this u(s) global symmetry can be considered as the global part of this u(1) gauge symmetry, the part which does not vanish at infinity. And it's always the case that global supersymmetry here would be corresponding to the local supersymmetry here. And also in exactly the sense that even for the local supersymmetric transformation in the gravity side, there's also a part which does not vanish at infinity. And that's the part corresponding to the global supersymmetry on the field theory side. Yes? AUDIENCE: What are the fermions supersymmetry on the AdS side? HONG LIU: Sorry? 
AUDIENCE: In the low energy limit on the AdS part, what are the fermion supersymmetry? HONG LIU: It's the same thing. What do you mean, what about fermions? So there will be some fermions. AUDIENCE: [INAUDIBLE] fermions? So there is more than gravitons on the AdS side involved in low energy limits? HONG LIU: Yeah. There are gravitons, and there are also some fermions. There's something called a [INAUDIBLE]. There are some other fermions, et cetera. AUDIENCE: Is it they who carry the u(1) gauge charge? HONG LIU: No. Here is more general. In this case, indeed-- so this isometry of S5, once you deduce to AdS5, that just again becomes a gauge symmetry, SO(6) gauge symmetry. And some fermions indeed are charged on this SO(6) gauge symmetry. And here, it's just more general. So whatever things, anytime if you have some global symmetry here should be mapped to some gauge symmetry here. And we will say this a little bit more a little bit later. Do you have any other questions? So now let's move to the matching of parameters. Again, this is more like a review of what we discussed before. And again, first N equals to 4 super Yang-Mills theory. And then this is type IIB on AdS5 times S5. So previously we said, from the relation of the d-brane, so the G Yang-Mills square here is related to the 4 pi GS here, string coupling. And also, from the d3-brane solution, we find is that the G Yang-Mills square-- we find, for example, here the curvature radius R has the following form, 4 pi GS and alpha prime squared. So the N is the same N on this side. So the N is the gauge group N. I should say the flux N. Anyway, let me add something. So the N is the number of d-branes on this side which translate into the flux. So the flux N. And then here is corresponding to you have SU(N). So this R, this curvature radius is related to the alpha prime squared and the gs in this way. And so on this side, the dimension parameter is given by R squared divided by alpha prime. So if you look at this relation, if you look at alpha prime 4 to alpha prime squared, then you find this is just relating the alpha. From this relation, 4 pi GS is equal just to G Yang-Mills square. And the N is N. This is the relation between the parameters. Any questions regarding this? So on the gravity side, we said these are the two parameters. And of course, you also have this N. But these are the two basic parameters. And we can also, instead of using GS, as we said before, you can also use the Newton constant. So the 10 dimensional Newton constant is length dimension eight. Then the dimensionless parameter would be GN divided by R to the power 8. And the GN is related to the GS and alpha prime by this formula. So now you can just use the relation between the GS to exactly translate this into Yang-Mills coupling. You just can use this relation and then use that to translate this into the parameter in the Yang-Mills theory side. Then you find, once you plug all those relations in, how the GS and alpha prime, et cetera related to N, then you find here actually, G Yang-Mills have disappeared in this relation. What you find is that the Newton constant essentially just related to 1 over N squared. Up to some parameter, just related to 1 over N squared. So if you're expanding G Newton, just like expanding 1 over N squared. So as we said before, we often do dimensional reduction on S5. Let me get a five dimensional Newton constant. So five dimensional Newton constant is equal to the 10 dimensional Newton constant. And the difference is the volume of S5. 
We wrote this down before. And the S5 is equal to pi cubed R to the power fifth. And then from here, you can just work out. So G5 has dimension 3. Then G5 divided by R cubed, again only related to N given by pi divided by 2N squared. So these relations are often useful in the future. So now let's look at [INAUDIBLE] limits of this relation. So let's first look at the classical gravity limit on the gravity side. As we discussed at the beginning of this class, by classic, we always use h bar equal to 1. So our h bar is always equal to 1. But quantum gravity is captured by this parameter, h bar times GN. So even though h bar equal to 1, in the limit when GN goes to zero, then you are in the regime in which you can ignore the quantum gravity effect. So that's what we mean by the classical gravity limit is that this should go to zero, and then alpha prime should go to zero. So this means the string effect is not important. And this is all in the unit of R. And here, when I write these relations, I have all set h bar equal to 1. So when I say the classical gravity limit, which is this limit in which h bar equal to 1, but the effect of the Newton constant goes to zero. And the reason I emphasize it is that if you have some matter field in this geometry, then those matter fields should be treated as full quantum. Just don't treat the gravity as quantum, but those matter fields should be treated as quantum. So the classical gravity limit is the same as QFT, Quantum Field Theory in curved space time. So gravity does not fluctuate. So you have rigid curved space time. But your matter field can fluctuate, h bar equal to 1. So essentially, we are dealing with quantum field theory in curved space time. So in the type IIB super gravity, there are many, many such kind of matter fields, and they all should be treated quantum mechanically. It's just that you should consider this small. So let's consider what this means. So GN small as a dimensionless parameter, this translates into field theory side if we use this relation. So that means N goes to infinity. So this is the large N limit of the Yang-Mills theory. This is the large N limit of the Yang-Mills theory. And then alpha prime goes to zero. From the relation between the alpha prime and here, when alpha prime goes to zero, so this is in the downstairs. Then that means this should go to infinity. So this is the t Hooft coupling we defined before. So this means the t Hooft coupling goes to infinity. So now we see a remarkable relation. So this is what we expect. If you still remember what we did before in the large N gauge theory, in the large N gauge theory in the large N limit, the fluctuations become very small, et cetera. So this is consistent that on the gravity side, the fluctuation in the geometry is very small. And now, we see that the decoupling of the string effect requires on the field theory side the strong coupling. This is also something roughly we said before. Remember, when we talked about large N gauge theory, large N gauge theory, you have planar diagram, non-planar diagram, et cetera. And at each level, say at the planar diagram, you need to sum over infinite number of [INAUDIBLE] diagrams which are all planar topology. And if you look at just those [INAUDIBLE] diagrams, of course you don't see a space time interpretation because they are just [INAUDIBLE] diagrams. You don't see a continuous surface. And I already alluded before that the continuous surface can emerge if that diagram becomes sufficiently complicated. 
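Before continuing with that diagrammatic point, it may help to collect the parameter relations and limits just quoted in one place. These are written in the standard conventions; the overall numerical factors (for instance in G_N^(10) ~ g_s^2 alpha'^4) are the commonly quoted ones and should be treated as assumptions rather than a verbatim copy of the board.
\[
g_{YM}^2 = 4\pi g_s,
\qquad
\frac{R^4}{\alpha'^2} = 4\pi g_s N = g_{YM}^2 N \equiv \lambda,
\]
\[
G_N^{(10)} \sim g_s^2\,\alpha'^4
\;\;\Longrightarrow\;\;
\frac{G_N^{(10)}}{R^8} \sim \frac{1}{N^2},
\qquad
G_N^{(5)} = \frac{G_N^{(10)}}{V_{S^5}} = \frac{G_N^{(10)}}{\pi^3 R^5},
\qquad
\frac{G_N^{(5)}}{R^3} = \frac{\pi}{2N^2}.
\]
Classical (semiclassical) gravity limit:
\[
\frac{G_N^{(10)}}{R^8} \to 0 \;\Longleftrightarrow\; N \to \infty,
\qquad
\frac{\alpha'}{R^2} \to 0 \;\Longleftrightarrow\; \lambda = g_{YM}^2 N \to \infty.
\]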
And the way to make that diagram sufficiently complicated is to make it to be a strong coupling. The strong coupling, then the diagram with many, many vertices will dominate. And then the most dominated diagrams are those diagrams with not a lot of vertices. And they essentially are going to continue to limits. And so this relation [INAUDIBLE] realizes that kind of intuition. If you don't remember, go back to your loads regarding the large N gauge theory. So this relation is also remarkable for the following reason. Because this limit on the gravity side is simple, we can just deal with quantum field theory in the curved space time, which we know how to do. But on this side, it's highly non-trivial because this is an infinite coupling limit. So this will tell you that the strong coupling limit is described by classical gravity. So that means that we can actually use classical gravity to, in principle, solve problems which are strongly coupled. So also, of course, there are corrections beyond this. So quantum gravity corrections on this side, so this is a classical gravity limit if you take those parameters to go to zero. But suppose those parameters are not zero, just small. Then you can just expand in those parameters. For example, can you expand in GN divided by 8. And that expansion essentially gives you quantum gravity corrections [INAUDIBLE]. And now, from this relation, the expansion of GN then is translated. So this is expansion in G Newton. So from that relation, we see this has become the expansion 1 over N squared in the field theory side. Just expansion 1 over N squared. So on this side, you can also take into account that the alpha prime is non-zero. Then the alpha prime corrections in the expansion in alpha prime. So from this relation, translates in the expansion 1 over square root lambda. So in principle, the corrections beyond this limit can again be studied on the gravity side. And then the 1 over square root lambda corrections then corresponding to the string G corrections. And the 1 over N squared corrections corresponding to the quantum gravity corrections. So this is the classical limit. You also can see the classical string limit we considered before, we discussed before. In the classical string limit, still you can see the N go to infinity, which corresponds to GN. R8 goes to 0. But here, the alpha prime can be arbitrary. And here the [INAUDIBLE] coupling can be arbitrary. So it can be finite. So let me just say alpha prime finite no longer zero. And then this is just corresponding to lambda finite, which is no longer infinite. So this is just a standard t Hooft limit, which we take N go to infinity but keep lambda to be finite. Standard t Hooft limit, which we talked about before in the large N gauge theory. And then again, the corrections in GN will be corrections in 1 over N squared. So any questions? Yes? AUDIENCE: Doesn't small alpha over R squared mean large GS? HONG LIU: No. That has nothing to do with GS. These are two independent parameters. AUDIENCE: But expression parentheses, so there's also N. HONG LIU: This is-- AUDIENCE: So in the classical [INAUDIBLE], it means that since lambda is finite, so this means G Yang-Mills is 0. HONG LIU: Sorry. AUDIENCE: I mean lambda is finite, so G Yang-Mills is 0 and it means weak coupling. HONG LIU: No. That's what we discussed before in the large N gauge theory. The effective coupling is lambda. Lambda is your effective coupling in the large N limit. AUDIENCE: When you draw the diagram, [INAUDIBLE]. HONG LIU: Yeah. 
So the lambda is your coupling. The G Yang-Mills indeed, in the t Hooft limit, the G Yang-Mills will go to 0 when N goes to infinity. But G Yang-Mills is not the right parameter to look at things. Any other questions? OK, good. So now we can move on further. We can talk about the matching of the spectrum on two sides. So from now on, I will just restrict to essentially the semi classical. Here I should call semi classical gravity limit because we still treat the matter fields essentially as quantum. So we call it semi classical gravity limit. So from now on, so we will mostly just consider the semi classical gravity regime in the gravity side. Because essentially, we know very little about the string theory in this geometry. And also, I will often use the phrase which applies to the general correspondence, and not necessarily just N equal to 4 super Yang-Mills theory and the type IIB string theory. I just use the language assuming there's a general correspondence between some conformal field theory and some AdS gravity theory. So if the two theories have to be the same, two sides have to be the same, then you should be able to map their spectrum. So you should be able to map their [INAUDIBLE] space, for example. So again, [INAUDIBLE] now use the general language of the boundary and the [INAUDIBLE]. So the boundary theory is a conformal theory with this symmetry. So for the moment, I don't have [INAUDIBLE] symmetry. So that means that [INAUDIBLE] space should be organized as the representations of the conformal group, say of this SO(d, 2). And similarly here, because here, the SO(d, 2) is the isometry group which lives in infinite invariants. And again, you should be able to organize your [INAUDIBLE] space using the representations of the SO(d, 2). And those representations, of course, should be the same. So if there's one representation here, then there must be a [INAUDIBLE] representation here. And if there's two [INAUDIBLE] representations here, there would be two identical representations here. The representations must match. And on the boundary side, the local operators should also transform on the representations of the conformal symmetry. And again, they conform local operators can also be mapped to, say, the field on the gravity side. So on the gravity side, the fields should also transform under this SO(d, 2) isometry. That means that it should be a one to one correspondence between the local operators here and the bulk fields on the gravity side. There should be a one to one correspondence on this side. For example, in the boundary, if there is some scalar operator, there must be a corresponding scalar field on the gravity side. Similarly, if there is some vector operator in the boundary theory, there must be a corresponding bulk vector field in the gravity side. And similarly, if you have some symmetric tensor, then this must also be related to some symmetric tensor. So now I will often use the notation that I've used. Mu mu refers to the boundary indices. And the capital M, capital N refers to the bulk indices. Because they are on different dimensions. So then they are not quite the same. AUDIENCE: I have a question. HONG LIU: One second. Let me just finish. So here, I'm just talking about the conformal symmetries. And if the theory has some other symmetries, say some global symmetries or supersymmetries, then again, all those fields and the state, they should transform on the representations of those symmetries. And again, they should all match together. 
They should all match together. Yes? AUDIENCE: So we proved that the super Yang-Mills theory really lives on the boundary of the AdS? HONG LIU: No. We did not prove that. This is just a postulate. First, let me just repeat again. Yang-Mills theory lives on Minkowski space. And it's just an observation that the Minkowski space is the boundary of AdS. And then you say you can imagine that this is the boundary, this relation is related to the bulk and the boundary. And this is a postulate based on that fact. Yes? AUDIENCE: I thought one of the motivations for thinking about the holographic duality was to try to escape [INAUDIBLE] theorem. And all of a sudden, it strikes me, so we're trying to get on the boundary spin to massless particles. Then they will also exist in the bulk. HONG LIU: Sorry. In the field theory side, there's no massless spin to particles. They map to some field in the gravity side in the five dimensions. So this is some four dimensional operator. Then this maps to some five dimensional fields. AUDIENCE: I need to think about my question better. HONG LIU: Yes? AUDIENCE: So if it's a postulate that-- I mean, it's not required that the theory live on the boundary. It seems that that's just sort of a convenient way of thinking about it. But what implications would it have if it actually lived on the boundary? Would it change anything? HONG LIU: That's what I said before. So this discussion does not depend on that. This discussion does not depend on that. You can say I don't need to worry about that whether this is on the boundary or bulk, et cetera. These are just some different theories. I want them to be the same. AUDIENCE: Right. OK. HONG LIU: But, as I said last time, if you believe this bulk and boundary relation, then this is powerful because then, you can immediately deduce that the Yang-Mills theory on the S3 times time then is related to the gravity theory in the global AdS. And that is a nontrivial prediction from thinking the boundary and the bulk relation. Now you can generalize it. Because if I just look at this relation, I have no reason to suspect why. You can argue from the symmetry point of view also, but that language is more direct. In principle, after establishing the relation between the Yang-Mills theory in Minkowski space time and this gravity in the Poincare patch, I may also, just based on the group theory aspect, to say that should generalize to the Yang-Mills theory on the S sphere and the global AdS. You may also be able to do it that way. But thinking from the holographic point of view will give you a direct way to argue that, give you an alternative way to argue that. Also, in your p-set, you have checked this holographic bound. And so that's a confirmation of this. Yes? AUDIENCE: So what does the massless [INAUDIBLE] field on the right map to if it's the same representation of SO(d, 2)? HONG LIU: Sorry. Say it again. AUDIENCE: So in the representation line, so if you have the massless [INAUDIBLE] field on the right, what does it map to on the left? HONG LIU: Yeah. We are going to talk about it. We are going to talk about it. So if there are other symmetries, then everything should match. So now, as an immediate check, you can now just open your old papers. And if you load them-- because in the '80s, people have worked out the supergravity spectrum precisely on this space. So even though this relation was discovered in '97, actually in the '80s, people already worked a lot to consider this type to be supergravity on this space. 
Because this is a maximally supersymmetric space, et cetera. Anyway, so people have already spent lots of trouble to actually work out that spectrum. And now you can just open their paper. Then there's a big table about the different fields and their representations, et cetera. And then you can immediately see they actually map to certain representation of operators on the N equal to 4 super Yang-Mills theory. Then you can immediately check them. Just based on the group theory, check them. So I won't go through those details. But let me just mention the most important such kind of mapping for these two theories. And actually does have consequences for the general story. So the most important mapping. So as we said, on the string theory side, you always have this dilaton. So these is some scalar field. And this dilaton will appear in the scalar field, in AdS5, say, for example. And it turns out this dilaton field is mapped to the Lagrangian of the N equal to 4 super Yang-Mills theory. So the Lagrangian is a Lagrangian of the N equals to 4 super Yang-Mills theory, say, trace f squared plus phi squared, et cetera. So that's a local operator. And it turns out that operator is mapped to this dilaton. And on the N equal to 4 super Yang-Mills theory side, we have this SO6 gauge symmetry. Did I just erase them? We have this SO6 gauge symmetry. Then we have the SO6 conserved current. And it turns out this, on the gravity side, just maps to sO6 gauge field. AUDIENCE: You said it was an SO6 gauge symmetry? HONG LIU: No. In the N equal to 4, this is a global symmetry. But on this side, so this appears as isometry on the S5. But when you dimensionally reduce on S5, then in AdS5, then there will be a pure gauge field corresponding to each symmetry generated [INAUDIBLE] S5. And that naturally maps to this conserved current on the field theory side. And then another universal operator on the field theory side is the stress tensor. So no matter what theory you consider, you have stress tensor. And this is mapped to-- turns out, to the metric perturbations. It's a deviation from the AdS metric. Again, so at the representation level, this is very natural. So this is a symmetric tensor. This is a symmetric tensor. But physically, this is also natural. Physically, also natural. I will elaborate this a little bit further from a different perspective in a few minutes. So do you have any questions regarding this? So now, given this mapping, any operator is due to a bulk field. Then you can ask some immediate questions. For example, the quantum numbers of these operators will map to the quantum numbers of the bulk fields. And that's something I said you can check their symmetries. For example, not only the representations under this conformal symmetry should map under that representation. And also, the representation under that SO6 we also match, et cetera. So for local operator on the field theory side, so once we have this mapping, we can immediately ask questions related to operators on this side, and try to ask what's the counterpart on the other side. And ask the story about the field on the gravity side, and ask what's happened on this side. We can start developing the relations. So now let's start discussing them. So the first thing you can do, so immediate question you can ask is that given the operator, say operator O in the field theory. So there are some natural questions one can raise about this operator. For example, what is correlation functions, et cetera? And that we will discuss a bit later. 
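Collecting the three universal entries of the operator/field dictionary just listed (written schematically; the boundary operators stand for their full supersymmetric completions):
\[
\Phi\ (\text{dilaton on } AdS_5)
\quad\longleftrightarrow\quad
\mathcal{O}_\Phi \sim \mathcal{L}_{\mathcal{N}=4} \sim \mathrm{Tr}\,F^2 + \cdots,
\]
\[
A_M^{a}\ \big(SO(6)\ \text{gauge fields in } AdS_5 \text{ from the } S^5 \text{ isometry}\big)
\quad\longleftrightarrow\quad
J^{\mu\,a}\ \big(SO(6)\ \text{global symmetry currents}\big),
\]
\[
\delta g_{MN}\ (\text{metric perturbations around } AdS)
\quad\longleftrightarrow\quad
T^{\mu\nu}\ (\text{boundary stress tensor}).
\]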
But now let me discuss another natural question. And the natural thing to do is to deform your original theory by adding this operator to the Lagrangian. So I have an operator. I can put the source. I can put the coefficients. And these coefficients may depend or not depend on the space time coordinates. And I can always add such a term to a Lagrangian. And then this deforms my theory away from the original theory. So this is a natural thing to do. And this phi 0 is often called the source. So this phi 0 is called the source. So immediate question you can ask-- and when phi 0 is equal to constant, then this corresponding to a change in the coupling for this operator. So if your Lagrangian previously already included this operator, and then if you add such a term, so phi 0 equal to constant, then you are just changing the coupling for that operator. And if this operator is not there, then you just add the new coupling. But in general, you can make it space time dependent. AUDIENCE: Why is it coupling since there's no other operators coupling without if we say phi 0 is a constant? HONG LIU: Sorry? AUDIENCE: If phi 0 is a constant? HONG LIU: No. O is the operator. It's the local operator. AUDIENCE: Yes, but there is no other operator this operator O coupling with. HONG LIU: No, no, no. What is the meaning of operator? The meaning of operator is that O is a sum of the product of fields. And so there's already-- so this O may be phi cubed, or may be phi 4, et cetera. So example is O in the scalar field theory. If a scalar field theory has a gauge field, then I can write O as trace F squared. Then this corresponding to changing the coupling for trace F squared. So immediate question is, what does this operation-- so in the field theory point of view, you can always do this operation. So the immediate question, what does this corresponding to in the bulk? What does this operation correspond to in the bulk? So now I'll try to answer this question. As I said before, in establishing such dictionary, you often have to do lots of guesswork, and then you check. And the guesswork is based on some very small clues. So people who are able to make good guesses is that they are able to grasp important clues sometimes from very simple facts. And so that is what's good physicists do is that they can see non-ordinary things from ordinary things. So here, I will try to deduce the answer to this question by starting from this relation. So let me start with that relation. Related to the GS. Let's forget about p for pi. And now, we've talked about before GS string coupling can be considered as the expectation value of the dilaton field. Let me actually call it capital Phi. So we mentioned before when we talked about string theory that this string coupling can be considered as expectation value of the dilaton field. Of course in general, again, the dilaton can fluctuate, et cetera. So normally, in flat space, when we say the dilaton has expectation value, what we normally say is that the value of the dilaton at infinity because the boundary by infinity, that's how you fix. And similarly, for space time like AdS with a boundary, and the expectation value of this dilaton can be identified with the value of the dilaton at the boundary of AdS. So here, I write partial AdS means the boundary of AdS. So this is the value of AdS, value of phi at the boundary. So expectation value essentially can be associated with the boundary value of the field. AUDIENCE: You mean if it is a constant? 
You mean if expectation value of the phi is constant? Or if it's, function-wise, equal to the boundary limit? HONG LIU: No, no, no. Think about the following thing. As I said, phi in principle can fluctuate. And its expectation may also be able to fluctuate in the space time. But the only sensible way to talk about expectation value is to talk about its value at the boundary because we assume the boundary conditions don't fluctuate. It's the same thing for flat space. And we always specify the boundary condition at the spatial infinity, for example. And in the AdS, which is also space time with a boundary, than we can associate the constant parts of the expectation value as the value at the boundary. Now you have to use a little bit stretch of imagination. The following fact, you don't need to. So now, with this relation, we have established a connection between the Yang-Mills theory coupling with the value of phi at the boundary of AdS. And now remember, this Yang-Mills coupling, so I will not be careful about 1 over Yang-Mills squared. This is related. This is the coupling or source for the Lagrangian of N equals to 4 super Yang-Mills theory. So now we can deduce here two things. In particular, if we deform the Lagrangian, say, change this coupling, deform the boundary by changing the coupling, which corresponds to you add some delta G, say the Lagrangian of N equal to 4 super Yang-Mills, deform the boundary set by changing the coupling. And this corresponding to you essentially change the boundary value of dilaton because these are related. So now, this example gives us the answer to this question. This example gives us this question. From here, we can deduce first that the operator corresponding to the dilaton must be the N equal to 4 super Yang-Mills Lagrangian, as you can already deduce from some other methods, say using group theory, et cetera. And the second is that phi 0 O, in the boundary theory, adding a phi 0 O in the boundary theory must be related-- now I'm generalizing this story. The bulk field phi-- so now I'm no longer not necessarily using the dilaton. The bulk field phi due to O has a boundary value phi 0. Any questions about this? Yes? AUDIENCE: Your choice of operator O should be consistent with all symmetries, right? HONG LIU: Hm? AUDIENCE: You're not pre-choosing O? HONG LIU: No. O can be anything. AUDIENCE: Consistent with original symmetries of the-- HONG LIU: No, it doesn't have to be. You can deform the theory without-- you can break the symmetry. That deformation can break the symmetry. If you want to preserve the symmetry of the original Lagrangian, then you can choose a certain O. But in principle, you can choose any O. You can break the symmetry if you want. AUDIENCE: Do you have to always then break some symmetry on the AdS part? HONG LIU: Yeah. Impose this kind of boundary conditions. Then you may break AdS symmetry too. So now we have answered this question. And then of course, this just provides the answer. Then you would check it. And we will check it later using other methods. Now I'm saying because I already know it's true. But in real life, what you will do from this example, you will say, ah, this must be the case. Then you will start trying to find examples to check it. Start to find other ways to check it. And we will describe it later. So in the end, what we will describe is a self consistent story. I will not contradict myself in my later discussion. So if we assume this, I can also use this to argue. 
I can also use this identification to argue this too. So let me call this star and star star. I can use this identification to make star and star star natural for any duality, not only N equals 4 super Yang-Mills theory and this type IIB gravity. It's that any conserved curve in the boundary theory must be equal to some gauge field in the gravity side, and the stress tensor should always be due to the metric. So now I'm going to use this argument to make that a little bit more natural. So let's first consider this. So suppose I have a conserved current, J, J. I have a J mu. For simplicity, let me just take it to be u(1). Then I can deform the boundary theory by adding a source for this J mu by adding a boundary Lagrangian term like this. And a mu is the source. And then according to this identification, the A mu must be A mu-- we should be able to identify it as the boundary value of some bulk field, A mu evaluated at z equal to 0. So this is a bulk vector field. So if this is true, there must exist a vector field due to this vector field. And this must correspond to the boundary value of that. But now we can argue why this should be a gauge field. So now I want to argue this is a gauge field. And to see this is very easy because since J mu is conserved, then this coupling-- so let me call this star star star. Let me call it star star star. So since this is conserved, and this star star star is invariant under the following transformation, A mu goes to A mu plus partial mu lambda x for any lambda x. So that means the A mu, this A is the boundary value of A. So that means this capital A, that means the dynamics of this A mu should also be invariant. If I change the boundary value of A by this transformation, the dynamics of A mu should also not change. Let me call it AM so the bulk and general will be different. But this is like a gauge transformation. So this is like a gauge transformation. So we deduce that somehow, this must be some subset of the bulk gauge transformation. Now call it star to the power 4. Then star to the power 4 also must be a subset of some bulk gauge transformation. So this does not prove it, but makes it more natural. This argument makes it more natural. The fact that the boundary value of A has such kind of gauge symmetry. So in some sense, this must generalize to some gauge symmetry in the full gravity side. So similarly, you can do this for the metric for the stress tensor. Similarly, you can do this for the stress tensor. So for the stress tensor, so for the star star, again, we can add h mu mu x T mu mu to the boundary Lagrangian. And this is the source to the stress tensor. And now, from your knowledge of quantum field theory, when I add such a term to the field theory, it's the same. So this is equivalent to I deform the boundary space time metric from just eta mu mu to eta mu mu plus h mu mu. So let me call this quantity g mu mu b. So adding such a term source for the stress tensor is like you deform your space time geometry a little bit by this source. This is when h is very small. When we consider the information, we always consider the source is small. But now, we can argue this thing, mass corresponding to the boundary value of the metric in the gravity side for the following reason. So if you look at this expression, so if you look at even the pure AdS metric before you do any deformation, so this is just due to the original theory. So from the AdS metric, so the original AdS metric, the g mu mu component. 
So let me write this AdS metric in the form, say, R squared, z squared, separate the z squared with the rest. So this is essentially the component of the bulk metric along the boundary direction. The boundary value of this metric component evaluates z equal to 0 is equal to R squared divided by z squared eta mu mu. So that just comes from the AdS metric, just this g mu mu. And when you go the boundary, just given by this. And this is eta mu mu. We recognize it just as the boundary metric. So these correspond to deform. Since adding this term corresponds to deform the boundary metric, so we expect, when we add this term, so this corresponding to deform the g mu mu now to the 0 become. So we expect this becomes equal to g mu mu b. So this tells you that the stress tensor must be due to the metric perturbations in the gravity side. The stress tensor must be corresponding to the metric perturbation on the gravity side because it's corresponding to perturb the boundary conditions of the bulk metric-- corresponding to preserve the boundary condition for the bulk metric. So this is a very general statement valid for any correspondence between the gravity and the field theory. And that also tells you that if you have a theory which due into a higher dimensional theory, and then that theory, if the field theory has a stress tensor, then this bulk theory must have gravity because this bulk theory must have a dynamical metric. And if you have a dynamical metric, then you have gravity. So you can say, if any field theory is due to a theory of one higher dimension, that theory of one higher dimension must involve gravity-- nothing about quantum gravity. Let's stop here.
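To close, here is the deformation/boundary-condition dictionary argued in this lecture, written schematically; in particular, the precise power of z with which a bulk field approaches its source is not fixed at this stage and is left implicit.
\[
\delta S_{\text{bdry}} = \int d^d x\; \phi_0(x)\, \mathcal{O}(x)
\quad\longleftrightarrow\quad
\phi(z,x)\big|_{z\to 0} \;\to\; \phi_0(x),
\]
\[
\int d^d x\; a_\mu(x)\, J^\mu(x)
\quad\longleftrightarrow\quad
A_\mu(z,x)\big|_{z\to 0} \;\to\; a_\mu(x),
\qquad
a_\mu \to a_\mu + \partial_\mu \lambda(x)\ \subset\ \text{bulk gauge transformations},
\]
\[
\int d^d x\; h_{\mu\nu}(x)\, T^{\mu\nu}(x)
\quad\longleftrightarrow\quad
g_{\mu\nu}(z,x)\big|_{z\to 0} \;\to\; \frac{R^2}{z^2}\,\big(\eta_{\mu\nu} + h_{\mu\nu}(x)\big).
\]
Here a_mu (lower case) denotes the boundary source, to distinguish it from the bulk gauge field A_M. Conservation of J^mu is what makes the coupling invariant under a_mu to a_mu plus the gradient of lambda, and hence what forces the dual bulk field to be a gauge field; the analogous statement for T^{mu nu} is why the dual field must be the dynamical metric itself.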
MIT 8.821 String Theory and Holographic Duality, Fall 2014. Lecture 12: String Spectrum and Graviton.
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Start by reminding you of what we did in last lecture. And, in particular, there was a little bit of confusion. It seems like, maybe some part, I went too fast. So we'll try to-- yeah. [AUDIO OUT] So again, we start, say, by choosing the worldsheet metric to be the Minkowski. Then the worldsheet action just reduce to that of a bunch of free scalar fields. Just reduced to a bunch of free scalar fields. And those free scalar field is a bit unusual, because, in particular, there's a 0 component, which we have a long sine kinetic term, OK? But, of course, this is not our full story. You still have to impose the so-called Virosoro constraint, because [INAUDIBLE], the stress tensor of this series should be 0, which is the consequence of the equation of motion for gamma. But nevertheless, you can write down the most general solution to this problem, so let me write it here. So the most general of a closed string, or the most general conclusion can be written as x mu plus p mu tau. So I've now written, what I called previous by Xl and the Xr, I have written them explicitly in terms of the Fourier modes, OK? And I have written them, explicitly, in terms of Fourier modes. And in this form, so you can compare with what we discovered before, previous, here, because v mu, then we discussed last time, that this v mu should be considered as related to the center of mass, momentum, over the whole string. And, for example, for the closed string, that's the relation. OK, so similarly, for the open string, if you use the Neumann boundary condition, here, use the Neumann boundary condition, then you find, OK, now we'll substitute the explicit expression for the Xl and the Xr. So now, it become 2 alpha prime p mu tau. So, again, this is our previous v mu. So this 2 is related to the open string. Sigma only go from 0 to pi, rather than 2 pi. And then, you can write the oscillator, this Xl Xr in terms of the explicit Fourier transform. So here, you only have one set of modes. And of course, n sigma just come from you have Xl plus Xr, and Xl, which is equal to Xr. And yeah, when these two become the same, when these two become the same, then these two combine. Because the sigma have opposite sign that combine to cause n sigma. OK? Yeah, it cause a m sigma. So this is the most general classical solution, and in the light-cone gauge, we say x plus can be setting to 0. And also everything related to alpha plus, and for tilde plus, all the oscillation modes also said to be 0. We raise it to the plus, set it to 0, so there's only [? tau ?] [INAUDIBLE] left in the light-cone gauge. And Virasoro constraint, they become-- I have tau x minus. So this v plus is the same as that, related to the p plus that way. only single v plus. OK, so you know the equation, so the i should be considered as a sum, OK? A sum of all directions, or transpose directions. So, from the Virasoro constraint, you can deduce X minus. You can deduce X minus. Yeah, again, let me just write here, our convention's always X mu is equal to X plus, X minus, then Xi. And i goes from 2 to D minus 1, and X plus minus is, say, 1 over square root 2, X0 plus minus X1, OK? So, from here, you can deduce the X minus. 
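For reference, the expansions and the light-cone constraints being described can be written as follows. The oscillator normalizations are the standard ones; the exact factors of alpha prime were on the board and are assumed here rather than transcribed.
Closed string (sigma identified with sigma + 2 pi):
\[
X^\mu(\tau,\sigma) = x^\mu + \alpha' p^\mu \tau
+ i\sqrt{\tfrac{\alpha'}{2}} \sum_{m\neq 0} \frac{1}{m}
\Big( \alpha_m^\mu\, e^{-im(\tau-\sigma)} + \tilde\alpha_m^\mu\, e^{-im(\tau+\sigma)} \Big).
\]
Open string with Neumann boundary conditions (sigma in [0, pi]):
\[
X^\mu(\tau,\sigma) = x^\mu + 2\alpha' p^\mu \tau
+ i\sqrt{2\alpha'} \sum_{m\neq 0} \frac{1}{m}\, \alpha_m^\mu\, e^{-im\tau} \cos m\sigma .
\]
In light-cone gauge, X^+ = v^+ tau, the Virasoro constraints become
\[
\partial_\tau X^- = \frac{1}{2 v^+}\Big[ (\partial_\tau X^i)^2 + (\partial_\sigma X^i)^2 \Big],
\qquad
\partial_\sigma X^- = \frac{1}{v^+}\, \partial_\tau X^i\, \partial_\sigma X^i,
\]
with i summed over the D minus 2 transverse directions.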
And also, it means that independent variables-- so this, we have determined the X minus up to a constant, because this determines the tau and the sigma derivative up to a constant. So, the independent variables. Xi, and then, also, this p plus, or v plus which appears in the plus, and then the X minus. The 0 modes for the X minus, which is not determined by those things. And these two, this is just a constant. And, again, these two are constant. These are two-dimensional fields, OK? Any questions so far? Good. So actually, [INAUDIBLE] that the zero mode part of this equation particularly important. Yeah, let me call this equation 1, this equation 2. So the 0 part of those equations are particularly important. And, for example, say, from equation 1, so the zero modes part, for the first equation-- so let's do it for closed string-- then zero mode part, you just-- alpha prime p minus, OK, you should take the root of alpha tau. So you just get the alpha prime p minus, and the right-hand side. So, right-hand side, let me also rewrite the v plus in terms of a prime, so this is 2 alpha prime p plus. And then, the zero mode means that you integrate over, you integrate over the string. OK. So this is the first equation. So the zero mode equation become like that. So, now, let me just make a brief comment, which I, at the beginning, I forgot to mention, last time, which apparently causes some confusion later. It's that, if you look at this expression, this is actually precisely-- so if you look at that two-dimensional field theory, this is actually precisely the Hamiltonian, the classical Hamiltonian for that field theory, OK? For the Xi part. This is just a free scalar field theory. Yeah, so, in particular-- so let me write that more precise. Let's take out this p plus, and then this become 4 pi alpha prime, and then this just become exactly the Hamiltonian of that theory, because 1 and 2 for the Xi, OK? So H Xi is the Hamiltonian for two-dimensional quantum fields. Quantum field theory of Xi, for the transverse directions, OK? So you can also write these explicitly in terms of those modes. In terms of those modes, then, for example, you write p minus. Yeah, if you can also write modes, then become p minus. You could, too, 2p plus, pi square, plus 1 over alpha prime. So if you just substitute those expansion into here, then you can also write it explicitly in terms of modes. OK. And then you can combine-- so this is p minus, then you can multiply this to the other side, combine all the p together. You can write it as M squared, which is defined to be p mu. P mu minus P mu P mu, which is then 2P plus P minus minus Pi squared. And then, this then become equal to 1 over alpha prime sum m mu equal to 0 alpha minus m i alpha m i plus alpha tilde minus m i alpha tilde m i, OK? Then, you see that this constraint for p prime, for p minus, now can be written, can be rewritten in terms of the relation of the mass of the whole string in terms of its oscillation modes, OK? In terms of its oscillation modes. And, similarity, for the open string-- so, this is for the closed string-- for the open, you could act the same thing applies, just you only have, now, one side of modes. So remember, in those expressions, sum over i is always assumed. And whenever I wrote m not equal to 0, it means you always sum for minus infinity plus infinity, and except where m equal to 0, OK? Yes? Any questions? So these are the consequence of the zero modes for 1. 
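To collect the zero-mode relations of this first constraint before moving on (classical expressions; the closed-string coefficient is the one stated above, and the open-string one is the standard counterpart with a single set of modes):
\[
M^2 \equiv -\,p^\mu p_\mu = 2 p^+ p^- - p^i p^i,
\]
closed string:
\[
M^2 = \frac{1}{\alpha'} \sum_{m\neq 0}
\Big( \alpha_{-m}^i \alpha_m^i + \tilde\alpha_{-m}^i \tilde\alpha_m^i \Big)
= \frac{2}{\alpha'} \sum_{m=1}^{\infty}
\Big( \alpha_{-m}^i \alpha_m^i + \tilde\alpha_{-m}^i \tilde\alpha_m^i \Big),
\]
open string:
\[
M^2 = \frac{1}{2\alpha'} \sum_{m\neq 0} \alpha_{-m}^i \alpha_m^i
= \frac{1}{\alpha'} \sum_{m=1}^{\infty} \alpha_{-m}^i \alpha_m^i,
\]
with the sum over i = 2, ..., D minus 1 implicit.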
And the consequence for the zero modes for 2, again, you can just integrate over all direction of the string. Then the left-hand side just get 0, because the x minus is a periodic function, so this is a total derivative, and so this is give you exactly 0. And then, for the constraint, on the right-hand side, on this expression, which can be written explicitly. So you can also, then, plus into the explicit mode, and then they become sum m not equal to 0, alpha minus mi alpha mi equal to [INAUDIBLE]. OK, so this is sometimes called a level matching condition. So this tells you that the oscillation from the left-moving part-- this is for the closed string. For the open string, this equation just does not give you anything. For the closed string, this gives you a non-trivial constraint. Oh, I got a ring. So, for a closed string, this just tells you that the left-moving part and right-moving part have to be balanced. And this is related to that string. It's periodic. Along the string, it's periodic, so there's no special point on the string. And then, there's no special, and then you can actually not distinguish between the left-moving and the right-moving part, OK? Good. So this is a the whole classical story. This is, in some sense, the complete classical story, OK? And then, quantum mechanically, we just need to quantize those guys. Quantize those guys. And those are just ordinary quantum mechanic degrees of freedom we don't need to worry about. And this just become a quantization of a free scalar field theory. OK. STUDENT: [INAUDIBLE] all these modes are massive? Or is there any conditions it can be massless? PROFESSOR: No, this is a massless field theory. We are quantizing this theory, right? Yeah, so this is massless scalar field theory in two-dimension. STUDENT: But so that mass is-- PROFESSOR: No, this mass is the mass of the string. No, this is the mass of the center of-- this is the total mass of the string. When I say the massless, this is a massless scalar field theory in the worldsheet. This is a spacetime description. So it's important to separate two things. Things happening on the worldsheet and things happening in spacetime. So this is the mass of the string viewed as object moving in the spacetime. And when I say quantizing a massless scalar field theory, it's to think of those, the mode. So Xi is saying here, describe the motion of the string. Think of them as a field theory on the worldsheet. And that's a massless field theory. STUDENT: OK. PROFESSOR: Right. Yes? STUDENT: [INAUDIBLE] definition of [INAUDIBLE]? PROFESSOR: It's the conserve the loss of charge for the string. Cause 1 into the translation. It's to conserve the loss of charge, cause 1 into the translation. Yeah, so that's what we derived last time. So be sure that this v-- but to be original, write v, here-- are related to the center of mass momentum in this particular way. Yes. STUDENT: Could you use the fact that the right [INAUDIBLE] looks like [INAUDIBLE]. PROFESSOR: Sorry, which one? STUDENT: The fact that this alpha [INAUDIBLE] P minus is actually the Hamiltonian. Can you use it anyway? Or it's just the-- PROFESSOR: No, this is a remark. And this will make them more lateral. For us, this is not the essential remark, but it will be more lateral, when we work out the zero-point energy. And when we work out zero-point energy, then that's a more lateral thing to consider, because that will actually give you a way of the computing zero-point energy. Yeah. Yes. STUDENT: So how do you-- [INAUDIBLE]. 
If you plug in this extension, does it really direct me [INAUDIBLE]? PROFESSOR: Yeah. [LAUGHTER] STUDENT: [INAUDIBLE] n times-- PROFESSOR: Sorry? STUDENT: Are you supposed to have n time [INAUDIBLE] times the other number of alpha? PROFESSOR: Sorry, what are you saying? STUDENT: Here, if you plug in this [INAUDIBLE]. And in the end, you've got m times alpha m PROFESSOR: Yeah. STUDENT: But you directly use this equation. PROFESSOR: Yeah. STUDENT: And you-- well, [INAUDIBLE]-- PROFESSOR: Oh. This is trivial to see. We can see it immediately here. First, if you take the derivative, we just get Pi, and then you just get that term. And when you take derivative here, then you cancel the m. And then, in order to get the zero modes, then the m and the minus m have to cancel. So the only structure that can happen is this one. Then, the only thing left remaining is to check the quotient, here, is 1. And that, we have to do the calculation. And the rest, you don't need to do the calculation. Yeah, the only thing to do the calculation is for this one. Yeah. STUDENT: So just kind of to bore you, so all we're doing is free fields in a two-dimensional worldsheet with the diffeomorphism and variance as a gauge symmetry, and that gauge symmetry gives us some complicated constraints, which we then get rid of. PROFESSOR: Solve the constraints, yeah. STUDENT: That's all there is to it. PROFESSOR: Yeah, that's right. So, as I said before, solving the constraint help you two things. First, you solve this constraint. And second, you get rid of this X plus and X minus. Because those are the dangerous field theory, and now we get rid of. And what's remaining, Xi, are good. Will be, hey, they're good boys. They're good, well-behaved field theories. Any other questions? STUDENT: Yeah, one question. So isn't knowing how to do, basically, string theory-- well, is it knowing how to deal with the Polyakov action, and not in the light-cone gauge? PROFESSOR: Oh, sure. But that take much longer time. Even in the light-cone gauge, I'm still trying to telling a long story short. STUDENT: OK. And so I'm trying to give you the essence. But make sure-- I'm trying to tell you the essence, but at the same time, I want you to understand. So make sure you ask, whatever, it's not clear. But, for example, in the next semesters of undergraduate, of string theory for undergraduate, you will reach this point, essentially, after maybe 15 lectures. And we've reached here maybe only two or three lectures. And so, I'm trying to just give you the essence, and to try not to give you too many technical details. Say, cross checks, et cetera. There are many cross checks you can do, et cetera. I'm not going those things. Just give you the essence. Any other questions? Good. OK. So this is the review of the classical story. So now, let's do the quantum story. Again, this is a quick review of what we did last time. So, quantum mechanically, all those become operators. All those become operators. And the alpha also become operators. OK, so this commutator, as we said last time, implies that 1 over square root of alpha m i should be considered as standard [INAUDIBLE] operators, for m greater than zero. And the mode will be [INAUDIBLE], should be considered as the creation operators, OK? Creation operators. And then, this is the standard of quantization, or field theory. You reduce to an infinite number of harmonic oscillators. And in particular, this product just reduced to m times this Ami dagger, Ami just become m Nmi. OK? 
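The statement that alpha_m^i over square root of m annihilate and alpha_{-m}^i over square root of m create follows from the oscillator algebra, which in the normalization used here reads (standard form, quoted from memory rather than from the board):
\[
[x^i, p^j] = i\,\delta^{ij},
\qquad
[\alpha_m^i, \alpha_n^j] = m\,\delta_{m+n,0}\,\delta^{ij},
\qquad
[\tilde\alpha_m^i, \tilde\alpha_n^j] = m\,\delta_{m+n,0}\,\delta^{ij},
\]
so for m > 0,
\[
a_m^i \equiv \frac{\alpha_m^i}{\sqrt{m}},
\qquad
a_m^{i\dagger} \equiv \frac{\alpha_{-m}^i}{\sqrt{m}},
\qquad
[a_m^i, a_n^{j\dagger}] = \delta_{mn}\,\delta^{ij},
\]
and the normal-ordered combination entering the mass formula is
\[
\alpha_{-m}^i \alpha_m^i = m\, a_m^{i\dagger} a_m^i = m\, N_m^i
\qquad (\text{no sum on } m, i).
\]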
So Nmi is the oscillator number, occupation number, say, for each harmonic oscillator. Then, the typical state of the system just obtained by acting those things on the vacuum. For example, for the open string, we only have one set of modes. So then, we would have this form. m1 m1 alpha minus mq, iq, mq on the vacuum. And the vacuum also have a quantum number, p mu, because those p, those p plus and the pi, here, are quantum operators. And so, we take the vacuum to be eigenstate of them. And they're independent of those alphas. So this vacuum is the only vacuum for the oscillators, but they are still labeled by a spacetime moment, OK? So labeled by the spacetime momentum of the string. So for the open string. And for the closed string, similarly, you just have two sets of modes. Minus i1 n2 alpha minus m2 i2 and 2. And then, also, you have the other. let me call it k1 j1 l1 alpha minus k2 j2 l2, et cetera. In the vacuum. So this is a typical state, for the closed string. But the allowed state should still satisfy this constraint. OK, should still satisfy this constraint. So this constrain can be written as, reach the following constraint. It said, the oscillator number on the left-hand side, for the left-moving mode, or tilde. And the oscillator number for the right-moving mode, they have to be the same. So let me rewrite this equation, in terms of oscillator numbers. So this means i sum. So let me, now, write them from m to infinity. So m Nm i. OK, so this just form that equation, OK? Yeah, so this is called the level matching condition. So, for closed string, you cannot just act arbitrarily alpha and alpha tilde. The number of modes, the total thing you act from the left-hand side, from the alpha and those from the tilde, they have to be balanced by this equation, OK? Which is a consequence that there's no spatial point on the string. So now, we can also rewrite those equations at the quantum level. OK, so those equations, the quantum level, those equation just tells you that, for those states, the P mu are not arbitrary. P mu are not arbitrary. P mu must satisfy this kind of constraint. P mu must satisfy this kind of constraint. And that's can, in turn, be integrated as the determine the mass of the string, OK? Determine the mass of the string. So the mass of the string, so then for the open, mass of string can be written 1 over alpha prime sum over i sum over m infinity m Nm i. Then plus some zero-point energy. So this is just the frequency of each mode. So this is just the frequency of each mode, a frequency of each mode. And then, this is the occupation number. So this is a standard harmonic oscillator result. And then, at the quantum level, of course, there's ordinary issue, because the alpha m and m don't commute. Because [INAUDIBLE] creation, [INAUDIBLE]. So the ordering here matters. So the ordering here matters. So, in principle, yeah. It matters. Immediately, you can write it. So that will give rise to this zero-point energy, or ordering number. So, similarity, for the closed string, then you just have two sets of modes. Similarly. that's a0. So a0 can be-- so this is the place this remark is useful, because this just come from-- essentially, this part, just come from, essentially, it's just the Hamiltonian of the Xi. Just the Xi viewed as a field theory on the worldsheet. And the field theory of Xi, essentially, is a bunch of harmonic oscillators. And for each harmonic oscillator, we do know what is the ordinary number. It's just 1/2. Zero-point energy, just 1/2. 
And then you just add all of them together, OK? Just add all of them together. So, for example, for the open string, this a0 just becomes: D minus 2-- D minus 2 because there are D minus 2 transverse directions-- times the sum over m from 1 to infinity of 1/2 omega. OK, 1/2 omega, and omega is m. Then, I told you this beautiful trick last time, that this should be equal to D minus 2 divided by 24, times 1 over alpha prime, because this guy, the sum over m from 1 to infinity of m, gives you minus 1/12. OK, so this is for the open. So, similarly, for the closed string, you can do the similar thing. It just differs by some factors of 2, et cetera. So this gives you D minus 2 over 24, times 4 divided by alpha prime. Closed. OK? So this vacuum energy-- so this can also be interpreted as a vacuum energy on the circle. Vacuum energy of this quantum field theory on the circle. And, in quantum field theory, this is sometimes called the Casimir energy. And you can check yourself that those answers agree with the standard expression for the Casimir energy on the circle. Yeah, if you choose here, [INAUDIBLE] property, because we have chosen the size to be 2 pi. Good. So this summarizes what we did so far. Any questions on this? Good, no more questions? Everything is crystal clear? I should immediately have a quiz. Yes? STUDENT: So the 26 dimensions comes from making this 0? PROFESSOR: No. No, if you put the 26 here, this is not 0. STUDENT: I'm sorry. I'm thinking 1 over-- I apologize. Not 0. So, you said, the 26 dimensions comes from the fact that, somehow, 26 minus 2 over 24 is 1. It's like it's numerology, but it's like-- PROFESSOR: Yeah. Yeah, that's a very good observation. That's exactly the reason I write it in this way. Right. So let me just make a comment. Maybe I'll make a comment later. Anyway, so from here, from these two expressions, from this expression and this expression, you see this picture, which I said at the beginning. So each of these describes a state of a string. And such a string oscillates in this particular way-- say, it has those oscillation modes-- and then it moves in spacetime with some center of mass momentum. So for such an object, if you look at it from afar, it's just like a particle, OK? It's just like a particle. So we have established it. So now, let's look at the spectrum. So each state of a string can be considered as mapping to a spacetime particle. So let's now work it out. And the mass of the particle can be worked out by those formulas. OK, so let's now just work out what are the lightest particles, because we are interested in the lightest particles, typically, OK? So now, let's start from the beginning. So now, let's start with the open string. So the lowest mode, of course, is just the vacuum, 0, p mu. OK, there are no oscillators. So, for such a mode, Nm i is just equal to 0 for all m and i, OK? So this should be just a spacetime scalar. So this should describe a spacetime scalar, because there's no other quantum number, other than the momentum. So it should be a spacetime scalar. It should describe a scalar particle. And the mass of the particle, we can just read from here. So the M square is equal to minus-- so this is for the open string, so we use this formula. That's the only thing, because this here is 0. So the only thing comes from the a0 term. So you're just given by minus D minus 2 divided by 24, times 1 over alpha prime. So one thing you should notice immediately is that this guy is smaller than 0 for D greater than 2.
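For reference, here are the zero-point energies just computed, using the zeta-regularized sum over m of m going to zeta(-1) = -1/12 quoted last time:
\[
a_0^{\text{open}} = \frac{D-2}{2}\,\frac{1}{\alpha'} \sum_{m=1}^{\infty} m
\;\longrightarrow\;
\frac{D-2}{2}\,\frac{1}{\alpha'}\,\Big(-\frac{1}{12}\Big)
= -\,\frac{D-2}{24}\,\frac{1}{\alpha'},
\qquad
a_0^{\text{closed}} = -\,\frac{D-2}{24}\,\frac{4}{\alpha'} = -\,\frac{D-2}{6\,\alpha'},
\]
the closed-string value being four times the open-string one, consistent with having both left- and right-movers together with the extra factor of 2 in the closed-string mass formula. In particular, the open-string ground state has
\[
M^2 = a_0^{\text{open}} = -\,\frac{D-2}{24\,\alpha'} \;<\; 0
\qquad \text{for } D > 2 .
\]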
Say, for any spacetime dimension greater than 2, you actually find the mass square, [INAUDIBLE] mass square. So people actually gave a fancy name for such kind of particle. They call it tachyon. And, in the '60s, actually, people designed an experiment to look for such particles, particles of negative mass, negative mass square. Anyway, we are not going to here. Let me just say, for the following, in the theory, if you see excitations, if it's a negative mass square, typically, it tells you that the system have instability, that you're not in the lowest energy state. That you are not in a low-energy state. So what this tells you is that this open string propagate in the flat Minkowski spacetime may not be the lowest configuration of the string, but that's is OK. If you're not in the lowest configuration, it's not a big deal, and it just means you have not found the correct ground state. It does not mean the theory is inconsistent, OK? And so, even though this is unpleasant, this thing is tolerable, OK? This thing is tolerable. Any problem with this? STUDENT: [INAUDIBLE]. You already set [INAUDIBLE] m equal to 0, so that is the ground state. PROFESSOR: No, this is a ground state on the worldsheet. But in the spacetime, this goes one into a particle. This goes one into excitation in the spacetime. And so this goes one into excitation in the spacetime, with an actual mass square. And, typically, if you have something with a actual mass square-- let me just say one more words, here-- then that means you are sitting on the top of a hill, and that's where you have a actual mass square. And so, it means you are in some kind of unstable state. But, of course, you are allowed to sit on the top of a hill. It's not a big deal. STUDENT: Why is this unstable, like a [INAUDIBLE]. PROFESSOR: On the top of the hill, is it stable? STUDENT: No, why [INAUDIBLE] means [? uncomplicated. ?] PROFESSOR: Oh, if you write a scalar field theory. Yeah, so this goes one into a scalar field in spacetime, now, OK? So now, if I write a scalar field theory in spacetime, say, let me call phi, with a actual mass squared. So that means, the potential for this mass square is like this. Then, that means that the phi wants to increase. STUDENT: Is that the same thing as example 4, with spontaneous-- PROFESSOR: Yeah, that's right. Similar to 4, [INAUDIBLE]. Except, here, we don't know what is the bottom. We are just sitting on the top. Anyway, so later, we will be able to find the way to get rid of this. So it's OK. So you don't need to worry about this, here. So the second mode, the second lowest mode, you just add alpha minus 1 on this worldsheet ground state, OK? So now, this thing is interesting. First, this index i, this index i is a spacetime index. It's a spacetime index. And so, this actually means, this transform means this state transform as a vector under So D minus 2 the rotation of the Xi directions. And, remember, in the light-cone gauge, the [INAUDIBLE] symmetry's broken, and this is the only symmetry which is manifest. And so, that suggests this state should be a spacetime vector, OK? Should be a spacetime vector. So now, let's work out its mass. So now, m equal to 1. So now, m equal to 1, and so, you just have 1, here. And so, then you just put the 1 here. So this is the alpha prime 1 minus D minus 2 divided by 24. So this is 26 minus D divided by alpha prime. OK, so now you see the 26 divided by 24 alpha prime. So now, you see this magic number, 26. 
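Writing out the arithmetic for this level-one state alpha_{-1}^i acting on the vacuum explicitly:
\[
M^2 = \frac{1}{\alpha'}\cdot 1 \;+\; a_0^{\text{open}}
= \frac{1}{\alpha'}\Big( 1 - \frac{D-2}{24} \Big)
= \frac{26 - D}{24\,\alpha'} ,
\]
which vanishes if and only if D = 26.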
So now, we emphasized before, in the light-cone gauge, even though only this So D minus 2 is manifest, because you break Lorentz symmetry. The gauge break Lorentz symmetry. But your theory is still, secretly, Lorentz symmetric. It should still be Lorentz [INAUDIBLE], because the string is propagate in the flat Minkowski spacetime. That means, all your particle spectrum, they must fall into [? representations ?] of the four Lorentz group, OK? They must fall into the [? representations ?] of the four Lorentz group, even though the Lorentz symmetry is not manifest, here. And now, let us recall an important fact. A Lorentz vector, a vector field. Yeah, just a vector particle. In D Minkowski spacetime, D dimension of Minkowski spacetime, if this particle is massive, then have D minus 1 independent components, so independent modes. And if it's massless, then I have D minus 2 independent modes, OK? So the situation we have for many of these is the D equal for four, four-dimensional spacetime. So in four-dimensional spacetime, a massless vector is a photon. Photons have two polarizations, have two independent modes. But if you have a massive vector, then you actually have three polarizations, rather than two, OK? But now, we see a problem. Here, we see a vector, but this factor only have D minus 2 components. Because i-- because these are the only independent modes. Here, we have a vector which has only D minus 2 components. Independent components. OK, so if you compare with this list-- because there's nothing else. Because these are the only independent modes, here. In the last [INAUDIBLE], there's nothing else. So by compare with our knowledge of the Lorentz symmetry, we conclude the only way this particle can be mathematically consistent, we said, it has to be a massless particle. So that means M square have to be 0, OK? So M square has to be 0, means that D must be equal to 26. And we actually find massless particles. We actually find the photon. So we actually find the photon in the string excitations. Yeah, one second. Let me just finish this. For D not equal to 26, Lorentz symmetry is lost. Lorentz symmetry is lost. It's because that means this particle, where M square is not 0, no matter what, these states cannot fall into a [? representation ?] of a Lorentz group. And so, being said, Lorentz symmetry is not maintained, even the Lorentz symmetry is a symmetry of the classical action, but it's not maintained at quantum level. Somehow, in the quantization procedure, a symmetry which is in your classical theory, it's lost, OK? And this tells you that the quantization is not consistent. It's inconsistent. Because it means, whatever it is, if you have something propagate in Minkowski spacetime, is has two [INAUDIBLE] [? representations ?] of the Lorentz group. That means that, yeah, this just cannot be the right-- you have to go back, to redo your thing. This is not propagating in the Minkowski spacetime. So, alternatively-- so this is a conclusion that the D must be equal to 26, OK? So you can reach the same conclusion the following way. So, right now, so the way we did this, we said we fudge this 0. Yeah, we did not fudge it, but we did something to an infinite sum, and find a valid answer. Yeah, we have to do an infinite sum of positive numbers, and then find the active number. And then we find, somehow, there's something. D minus 2 and D minus 2 somehow missing 1. Anyway. Anyway, but this is actually a deep story. It's not, say, just missing 1, or something like that. 
So you can reach the same conclusion by doing the following. Say, you put here an a0 as an undetermined coefficient. And it turns out the same conclusion follows-- yeah, so you can check. So here, it tells you that Lorentz symmetry is lost. So you can double check this conclusion as follows. So, remember, I said that the classical action is invariant under the Lorentz symmetry, and then there are conserved charges corresponding to the Lorentz transformations. And those charges become generators of the Lorentz symmetry at the quantum level, just as in quantum field theory. And then, by consistency, those conserved Lorentz charges, as quantum operators, must satisfy the Lorentz algebra, OK? Then, you can check with a general D and with a general a0. And that Lorentz algebra is only satisfied in D equal to 26 dimensions, and with this a0 given by these formulas. OK, so that would be a rigorous way to derive it. A rigorous way to derive it, but we are not doing it here, because that would take a little bit of time. STUDENT: Does that also involve the [INAUDIBLE] of the small-- PROFESSOR: Hm? STUDENT: That [INAUDIBLE] of the [INAUDIBLE], alpha mode? PROFESSOR: Yeah. Yes. Yeah. STUDENT: [INAUDIBLE]. PROFESSOR: Sorry? STUDENT: [INAUDIBLE]. PROFESSOR: No, no, no. You're just assuming some general a0 here. You determine it by requiring that the Lorentz algebra is satisfied. STUDENT: [INAUDIBLE], in terms of this [INAUDIBLE]. PROFESSOR: Yeah, in terms of alpha. That's right. STUDENT: Then, [INAUDIBLE]. PROFESSOR: Oh, those commutators are fine. Those commutators are just from standard quantization. STUDENT: Oh, there's [INAUDIBLE]. PROFESSOR: Sorry, which one over 24? No, forget about the 1/24. There's no 1/24. There's no 1/24. You just write an a0 here. It's an undetermined constant. And you check it by consistency. It's determined by consistency. STUDENT: [INAUDIBLE]. PROFESSOR: Yeah, no. Yes? STUDENT: [INAUDIBLE] we'll have to work [INAUDIBLE] anyway. So why do we worry so much about [INAUDIBLE]? PROFESSOR: Sorry? STUDENT: I mean, we'll have to work with D minus 4 dimensions anyway. So then, should we just begin by saying, like, Minkowski space in D dimensions is just not correct? PROFESSOR: Right, yeah. So we will find, actually, that this conclusion does not depend on the details. It does not depend on the details. Yeah, so you can generalize this to more general cases-- say, curved spacetime, et cetera. And the [INAUDIBLE] spacetime [INAUDIBLE]-- then you'll find the same conclusion will happen. The same conclusion will happen. And then, if you reduce to four dimensions, you will find in four dimensions a massive vector which only has two polarizations. Yes? STUDENT: Maybe related to this question. We have no evidence that Lorentz symmetry holds in the compactified dimensions. So why do we want to keep it there? PROFESSOR: We just say, it doesn't matter. It's only a question as far as you have some uncompact directions. As far as you have some Lorentz directions, then this will apply. Yes. STUDENT: Kind of going back a little bit to the [INAUDIBLE] thing. How do we know for a fact that the string tension is positive? Because, for example, I think, for QCD strings, isn't the string tension negative? PROFESSOR: Not really. How do you define a negative string tension? STUDENT: A negative tension? I don't know. I just read that the QCD string, they have negative string tension. PROFESSOR: No, I think, here-- so alpha prime is a scale. It's a physical scale. The tension is-- it's defined to be positive. Just by definition, it's positive. Yeah. Other questions?
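For completeness, here is the shape of that Lorentz-algebra consistency check, as it is usually quoted (a sketch; the detailed coefficients depend on conventions). In light-cone gauge the dangerous commutator is the one between the minus-components of the Lorentz generators, and one finds something of the form

\[
[M^{i-},\,M^{j-}] \;=\; -\frac{1}{(p^+)^2}\sum_{n\ge1}\Delta_n\,
\big(\alpha^i_{-n}\alpha^j_{n}-\alpha^j_{-n}\alpha^i_{n}\big),
\qquad
\Delta_n \;=\; n\,\frac{26-D}{12}\;+\;\frac{1}{n}\Big(\frac{D-26}{12}+2(1-a)\Big),
\]

where a is the undetermined normal-ordering constant (the a0 above, in the normalization where the mass-shell condition reads alpha prime M squared = N - a). Lorentz invariance requires this commutator to vanish for every n, which forces D = 26 and a = 1, reproducing the value obtained from the regularized sum.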
OK, so let me just say a little bit more regarding this. So now we have found a tachyon and a massless vector, and, also, we have fixed the spacetime dimension to be 26. We have fixed the spacetime dimension to 26. So now, if you fix D equal to 26, then the higher excitations are all massive. OK, so for the photon, essentially, it's this guy: this constant term is negative, and this guy cancels this guy. So, when you go to higher excitations, then this guy will dominate, and the m square will be all positive. And the scale is controlled by this 1 over alpha prime, OK? So they will all be massive, with spacing given by, say, 1 over alpha prime. So, we said, the spacing in m squared is given by 1 over alpha prime. For example, the next level would be alpha minus 1 i alpha minus 1 j, or alpha minus 2 i, acting on 0, P, OK? So those things acting on-- to avoid confusion, let me write it more clearly. So this one acting on 0, P, and alpha minus 2 i acting on 0, P, OK? So this one would be like a tensor, because it has two indices. And this one, again, is like a vector. Again, like a vector. So those will have mass square 1 over alpha prime. And you can check that they actually fall into full representations of the Lorentz group, OK? Full representations of the Lorentz group. Yes? STUDENT: Where is the one with only one index [INAUDIBLE]? PROFESSOR: Sorry, say it again? STUDENT: So the one with one index, since it's massive, it's supposed to have D minus 1 degrees of freedom, right? PROFESSOR: They have-- oh. What's happening-- that's a very good question-- so, what's happening is that this one alone is not enough to give a tensor representation. These two add together to form the full tensor representation. Yeah, because i only goes from 2 to D minus 1, so you need to add them together. Good. So just to summarize the story for the open string: we find the tachyon. We find the massless vector, which can be interpreted, maybe, as a photon. And then, you find lots of massive particles, an infinite number of massive particles, OK? So any other questions, or do you want to have a break? We are a little bit out of time. Yeah, maybe let me give you a three minute break. Yeah, let's have a break now. So now let's look at the closed string. Again, the lowest state is just the 0, P. And then, again, all the N are 0. All the N are 0. So we just read the answer from here. Then M square for the closed string is just equal to a0 for the closed string. So here it is: m squared equal to minus 4 divided by alpha prime times D minus 2 divided by 24. So, again, this is tachyonic for D greater than 2, OK? Tachyonic for D greater than 2, and this is a scalar. And this is a scalar because there's no other quantum number. Yeah, so now we are familiar with these tachyons, so we don't need to worry about it. So let's look at the next level. So, next, naively, you may say, let's do the same as in the open string case, but this is not allowed. This is not allowed. Why? STUDENT: [INAUDIBLE]. PROFESSOR: That's right. It does not satisfy this condition. Because, for this one, you only have the left-moving excitations, but you do not have the right-moving excitations. You are not balanced. So you also need to add the right-moving one-- so the next one will be this guy, OK? So, now-- oh, here, it has a j. Now it has a j. So this will have m squared equal to 26 minus D divided by 6 alpha prime. Again, now, look for what representations of the Lorentz group will give you this. OK, you'll find none, unless you're in D equal to 26, OK?
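Written out, the closed-string counting being used here is, in the same conventions as the open-string sketch above:

\[
\frac{\alpha'}{4} M^2 \;=\; N - \frac{D-2}{24} \;=\; \tilde N - \frac{D-2}{24},
\qquad N = \tilde N \ \ (\text{level matching}),
\]

\[
N=\tilde N=0:\ \ M^2 = -\,\frac{D-2}{6\,\alpha'},
\qquad
N=\tilde N=1:\ \ \alpha^i_{-1}\tilde\alpha^j_{-1}|0;p\rangle:\ \ M^2 = \frac{26-D}{6\,\alpha'},
\]

so the first excited closed-string states are massless only at D = 26, exactly as in the open-string case.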
The same story happens again: only for D equal to 26 do these states fall into representations of the Lorentz group. And we read off the m squared-- again, it's massless. m squared is zero. So, it turns out, actually, this does not transform as an irreducible representation of the Lorentz group. It's actually reducible, so it can be separated into several subsets. So this can be further decomposed. You can take the trace part: take these two indices to be the same and sum over all the directions. This does not transform under the rotation of the i's, so this is a scalar. But it's a massless scalar. And then, since each index runs over D minus 2 values, there are D minus 2 times D minus 2 of these states altogether. D minus 2 times D minus 2 of them. One combination, as we said, can be decomposed into a scalar-- the trace part. And I can also take the linear superposition of them which is symmetric and traceless: a traceless e i j, OK, because the trace part is already covered by this one; I don't want to repeat it. And this is precisely the generalization to general dimension of what we normally call the spin-2 representation. OK, so we have found a massless spin-2 particle. So this is a massless spin-2 particle. And then, you can also, of course, take it to be antisymmetric. So that's the only remaining possibility: b i j, antisymmetric b i j. So these are called antisymmetric, and they will give rise to an antisymmetric tensor in spacetime. So this is an object with two indices, and the two indices are antisymmetric, OK? So, similarly, the higher modes are all massive. For example, at the next level, the next mass level, m square is equal to 4 divided by alpha prime. Any questions on this? STUDENT: This antisymmetric tensor, what is that? STUDENT: It's like a [INAUDIBLE]. PROFESSOR: It's an antisymmetric tensor. STUDENT: But what's the spin? PROFESSOR: Yeah, normally, in four dimensions, it will be what we normally call the (1, 1) representation of the Lorentz group. Yeah, it's not (0, 2). It's what we normally call (1, 1), yeah. Yes. STUDENT: It's a form of b i j [INAUDIBLE]? PROFESSOR: No, this is just arbitrary. Yeah, so this is your state space at this level, right? And so, the general state would be a superposition of them. And those states transform separately under the Lorentz symmetry, so we separate them. And so, for example, the symmetric traceless part transforms separately under Lorentz transformations from these other guys. So each of these should correspond to a different spacetime field. So each of those things should correspond to a spacetime field, OK? So, now, let me just summarize what we have found. So we have found-- so let's collect the massless excitations we've found. Because, as we will see, the massless excitations are the most important ones. Let me also mention, let me just emphasize: in physics, it's always the massless particles that give you something interesting, OK? For example, here, even for D not equal to 26, those massive particles-- as I said, maybe I did not emphasize this-- even for D not equal to 26, these massive particles do fall into representations of the Lorentz symmetry in any dimension. The problem arises only for these massless particles, OK? It almost seems funny. And so, of course, we also know the massless particles are the ones that give rise to long-range forces, et cetera. So now, let's collect the massless particles.
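Before collecting them, note that the counting behind this decomposition is simple arithmetic. Each index runs over the D - 2 transverse directions, so

\[
(D-2)^2 \;=\;
\underbrace{\Big[\tfrac{(D-2)(D-1)}{2}-1\Big]}_{\text{symmetric traceless }e_{ij}}
\;+\;
\underbrace{\tfrac{(D-2)(D-3)}{2}}_{\text{antisymmetric }b_{ij}}
\;+\;
\underbrace{1}_{\text{trace (scalar)}}.
\]

For D = 26 this reads 576 = 299 + 276 + 1.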
Massless excitations. So we will write them in terms of spacetime fields. So, for the open string, essentially, we find a massless vector field. So this is our photon. At the moment, I put it as a, quote, "photon", because we have only found a massless particle. We don't know whether this is our own beloved photon yet. And, for the closed string, we find a symmetric tensor. So this is what we normally call the graviton-- again, in quotes. We find the massless spin-2 particle which, if you write it in terms of a field, would be a symmetric tensor h mu nu. Then B mu nu, which is an antisymmetric tensor. And these mu, nu indices run over all the spacetime dimensions, OK? And then we have a phi-- you have a scalar field. So this B is just called the antisymmetric tensor, and this phi is called the dilaton. Phi, which is that scalar field, is often called the dilaton. So, so far, even though we call this one the photon and this h mu nu the graviton, to call them the photon and the graviton is actually a little bit of cheating, because we don't know whether they really behave like a photon or like a graviton, OK? We have just found a massless spin-1 particle and a massless spin-2 particle. But, actually, there is something very general one can say, just from general principles. Just from general principles, one can show-- based on Lorentz covariance and, say, the absence of negative-norm physical states, et cetera-- just based on those general principles, one can argue that, at low energies, the dynamics of any massless vector field must be Maxwell, and that of a massless spin-2 particle must be Einstein gravity, OK? So that's why, say, tomorrow, supposing you invent some theory yourself, and suppose it's a consistent quantum theory, and that theory just happens to have a massless spin-2 particle, then you don't have to do calculations. You can say: if my theory is consistent, this spin-2 particle must behave like a graviton, OK? STUDENT: Are you saying that there's no other Lorentz invariant [INAUDIBLE]? PROFESSOR: Yeah, essentially, you can show, at low energies, it's always just based on gauge symmetry, et cetera. The only thing you can have is [INAUDIBLE], yeah, is Einstein gravity. Good. And this can actually be checked explicitly. So now, let me erase those things; we don't need them. So this can actually be checked explicitly. In string theory, not only can you find the spectrum, you can also compute the scattering amplitudes among those particles. OK, so as I said before, you essentially perform path integrals with some initial string states going to some final string states, et cetera, OK? Of course, this would take us too far, so we will not go into that. Let me just tell you the answer. So you can confirm this. This expectation is confirmed-- confirmed by explicit string theory calculations of the scattering of these particles, OK? For example, let's consider-- so we have this massless spin-2 particle, which I called h. So let's consider: you start with an initial state with two h's, then scatter to go to some final state which, again, is two h's. So this is, say, graviton-graviton scattering. Start with two gravitons, scatter them. So, in string theory, at the lowest order, we will have a diagram like this. So, at the lowest order. I'm not drawing it very well. Anyway, I hope this is clear. So you start with two initial string states. You scatter into some final states, OK? So this is an obvious string theory scattering diagram.
You can, in principle, compute this using path integral, which I outlined earlier. Of course, we will not compute this path integral. And so, you see, there are two vertex here. One is proportionate to string. You have two string merging to one string, and then you have a string that's split into two. So they're two. Remember, each of [INAUDIBLE] equals 1 and 2 [INAUDIBLE] string, OK? So this will be the process, in string theory. So this will be the process in string theory. So now, If you go to low energies, low energies means that, if you can see that the energy of the initial and the final particles to be much, much smaller than 1 over alpha prime. So, remember, 1 over alpha prime, it's the scale which go from massless to massive particles. So, if you can see the very low-energy process, which E is much, much smaller than 1 over alpha prime, then the contribution of the mass-- so, in some sense, in the string, in this intermediate channel, when you go from 2 initial state to 2 final state, this intermediate channel, the infinite number of string states can participate in this intermediate process. But, in the process, if your energy's sufficiently low, then, from your common sense, we can do a calculation. And then, the contribution of the massive state becomes, actually, not important. So, essentially, what is important is those massless particles propagating between them. And then, you can show that it's actually just precisely reduced to the Einstein gravity. Precisely reduced to Einstein gravity. So more explicitly-- so yeah, so when you go to low energies, then only massless modes exchange, dominate. Terminates. And then, you find that the answer is precisely agreed in the [INAUDIBLE] limit. Agree with that from Einstein gravity. Because you not only have graviton, you also have this B and phi, they are also massless modes. So this is a slight generalization of Einstein gravity. It's Einstein gravity coupled to such B and the phi, OK? So, in fact, you can write down the so-called low-energy effective action. So-called low-energy effective action, and we call LEE, here. let me see, like this. So phi is this phi, here, and R is the standard reach scalar for the Einstein gravity. H is this reindexed tensor formed out of B. So H square just a kinetic term for B. So, more precisely, you can show that the scattering amplitude you obtained from string theory, then you take a low-energy limit, that answer precisely the same as the scattering amplitude you calculated from this theory, say, expanded around Minkowski spacetime, OK? So this is Einstein gravity coupled to [? home ?] scalar field, OK? Yes? STUDENT: So what about higher loops or higher energies [INAUDIBLE]? PROFESSOR: Of course, then, it will not be the same. This is low-energies, OK? So let me make one more remark. Make one more remark. In Einstein gravity, say, like this. So Einstein gravity coupled to the matter field. So, when I say Einstein gravity, I always imply Einstein gravity plus some other matter field which you can add. So, in Einstein gravity, start your scattering process. It's that lowest order, we all know, is proportional to G Newton . OK, so this is the same G Newton, the G newton observed. Yeah, let me call this, say, scattering for this is A4. And then, the scattering for the Einstein gravity is proportionate to G Newton. So one graviton is changed, and it's proportionate to Newton constant. This is an attractive force between two objects. 
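For reference, the low-energy effective action just described is usually quoted, in the string frame, in the following form (a standard-conventions sketch, not a transcription of what is on the board; H = dB is the field strength of B):

\[
S_{\rm LEE} \;=\; \frac{1}{2\kappa_0^2}\int d^{D}x\,\sqrt{-g}\;e^{-2\phi}
\Big(R \;+\; 4\,\partial_\mu\phi\,\partial^\mu\phi \;-\;\tfrac{1}{12}\,H_{\mu\nu\rho}H^{\mu\nu\rho}\Big).
\]

The overall prefactor, together with the factor e^{-2 phi}, sets the effective Newton constant, which connects to the relation between G Newton and g_s discussed next.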
And, if you look at this string diagram, this string diagram is proportional to gs squared. So we conclude that the relation between the Newton constant and the string coupling must be G Newton proportional to gs squared, up to, say, dimensionful factors or numerical factors. So this is a very important relation which you should always keep in mind. So now, here's the important point. [INAUDIBLE] just asked, what happens at loop level? So far we have compared the tree-level processes and, actually, found that they agree very well. So we ask, what happens at loop level? So now, let me call this equation 1. So, at loop level, this theory 1 is notoriously divergent. If you calculate some scattering amplitude at loop level, you find the results are divergent. In particular, they are more and more divergent when you go to higher and higher loops, OK? More divergent at higher orders in the perturbative series, say, at higher loops. So that's what we normally mean when we say a theory is non-renormalizable. So this tells you-- so, if you take this gravity theory-- and this is just, essentially, our Einstein gravity coupled to some fields-- if you take Einstein gravity, expand around flat space, and quantize the spin-2 excitations, then that will fail. Because, at a certain point, you don't know what you are doing, because you get all these divergences which you cannot renormalize. OK, you cannot renormalize them. So, of course, what this tells you is that this equation itself likely does not describe the right UV physics. That's why you see all these divergences-- because maybe you are missing some more important physics. But now, this is only supposed to agree with the string theory at low energies, at which the massive modes are not important. But, in string theory, there is this infinite number of massive modes, et cetera. So you find, in string theory, if you do the similar loop calculations, the string loop diagrams are, magically, all UV finite. UV finite, so there are no such divergences. There are no such divergences. So this is the first hint. This was the first hint of string theory as a consistent theory of gravity. Because, at least at the perturbative level, you can really quantize a massless spin-2 particle and calculate its physics in a self-consistent way, OK? Any questions on this? Yes. STUDENT: In that perturbation, you'd have to use all those upper-- PROFESSOR: Yeah, that's crucial. So that's why this kind of thing is not good enough, because it does not have enough degrees of freedom. So, in string theory, you have all these additional degrees of freedom that make your UV structure completely different. Yes. STUDENT: Then if you take that [INAUDIBLE] and you just [INAUDIBLE] from above on [INAUDIBLE], will it make [INAUDIBLE]? PROFESSOR: Yeah, so this will work in the way a low-energy effective theory normally works-- a low-energy [INAUDIBLE]-- when you consistently integrate out the massive modes. And so, even at loop level, at low energies, this can capture some of the string theory answer. But you have to renormalize properly, et cetera, yeah. Yes. STUDENT: Does this imply that, for large objects, not things on these very small scales, we should reproduce the Einstein field equations? PROFESSOR: Yeah. STUDENT: This is sufficient to show that? PROFESSOR: Yeah, that's what it tells you. Yeah, for example, if you measure the gravity between you and me, you won't see the difference.
Yeah, actually, you will see a difference. So this theory is a little bit different, because of this massive scalar field. So, in ordinary gravity, the attractive force between you and me just come from graviton. But in this theory, because this scalar is massless and, actually, have additional attractive force. And so, this theory is, actually, not the same. So this theory, even though it's very similar generalization of Einstein gravity, but actually give you a different gravity force. So that's why, with the string theory, each going to describe the real life, somehow the scalar field has to become massive. Some other mechanism has to make the scalar field massive. STUDENT: I see. PROFESSOR: Yeah, of course, we also don't observe B mu m and this also, obviously, become massive. STUDENT: And will we find out that's the mechanism to make [INAUDIBLE]? PROFESSOR: Yeah. So this is one of the very important questions, since early days of string theory. People have been trying to look for all kinds of mechanisms to do it, et cetera, yeah. STUDENT: So there is an agreed-upon way of doing this? Or is it still sort of [INAUDIBLE]? PROFESSOR: It depend, yeah. This goes to a Lex point. Yeah, wait for my Lex point. Yes. STUDENT: Can I ask, how do we know that this effective theory couples all the fields as Einstein's theory does, though quandrant derivative? PROFESSOR: So what do you mean? You can add-- the coupling between them, this and gravity is through the-- Yeah, here, of course, you should use quadrant derivative. Yeah. Yeah, here, I did not-- I'm not very careful in defining this, but here, you use quadrant derivative, et cetera. STUDENT: What about coupling to the open strings, or to photons, or to matter? PROFESSOR: Would be the same thing. STUDENT: As? PROFESSOR: As what you would expect, that [INAUDIBLE]. Just saying, it just governed by general covariance. General covariance have to arise at low energies. STUDENT: So that general principle is effective including matter, as well? PROFESSOR: Yeah, then you can check. They can check. It it's a string theory, anything you can check is consistent with that principle. Good. So now, it's another point. So let me just make a side remark on the physical consequence of this scales field. So we see that this scalar field is important, because it, actually, can mediate, say, attractive force. But, actually, there's another very important role of this scalar field task. It's that, if you look at this low-energy effective action-- so let me now write this G Newton as 1 over g string square. Then this have the structure proportional to 1 over g string square times exponential minus 2 phi, OK. So now, there's a very important thing. So now you see, if phi behave non-trivially-- this is, actually, modify this guy-- it's actually factively-- yeah, because this is multiplied by the Einstein scalar-- so this, effectively, modifies your Newton constant, OK? In fact, they modify the Newton constant. In fact, you can actually integrate this gs as the expectation value of these phi, OK? That's the expectation value of this phi. Yeah, because if you can change your phi, and then change effective gs, then the gs can reintegrate as expectation value of phi. So this is something very important and very deep, because, remember, gs is, essentially, The only parameter in string theory. The only dimension is parameter in string theory, which characterize the strings of the string. And now, we see this constant is not arbitrary. 
It's actually determined by some dynamic field, OK? So that actually means that, string theory, there's no free parameter. There's no free dimensionless parameter. Everything, in some sense, determined by dynamics. So this is a very remarkable feature. So this makes people think, in the early days of string theory, that you actually may be able to derive the mass of the electron. Because there's no free parameter in string theory, so you should be able to derive everything from first principle. Anyway, but this also create a problem for the issue I just mentioned. Again, this is a side remark, but it's a fun remark. But string theory, we mentioned before, is a summation of a topology. And the topology's weighted by g string. So, if g string is small, they you only need to look at the lowest topology, because the higher topology are suppressed by higher power of g string, OK? And, particularly, if g string become [INAUDIBLE] 1, then, to calculate such a scattering, then you need to sum if all plausible topologies, and then that would be unmanageable problem, which we don't know how to do. And so, want g string to be very small, so that, actually, we can actually control this theory, OK? But now, we have a difficulty. But we also said we want the scalar field to to develop a mass. We want this scalar field to develop a mass. And, turns out, this is actually not easy, to arrange this scalar field to develop a mass, and, at the same time, to make this expectation to be very small. K. And, actually, that turns out to be a non-trivial problem. Yeah, so it's actually not that easy. OK, so my last comment. So, earlier, we said, the tachyon. So what we described so far, these are called the bosonic string, because we only have bosons. We only have bosons. It's called the bosonic string. So this bosonic string can be generalized to what's called a superstring theory. Of course, superstring. So what superstring does is the following. After you fix this gauge, again, the superstring can be written as some covariant worldsheet theory with some intrinsic metric. And, in a superstring, after you fix this gauge, the worldsheet metric to be Minkowski, then the superstring action can be written as the following. You just, essentially, have the previously free scalar action. But then, you add some fermions. You add some fermions. So these are just some two-dimensional fermions living on the worldsheet. Yeah, so these are two-dimensional spinner fields living on the worldsheet. So the reason you can add such a thing, because those things don't have obvious geometric interpretation. So you can consider them as describe some internal degrees of freedom of the string. And so, these guys provide the spacetime in the interpretation of moving the spacetime. And those are just some additional internal degrees of freedom on the worldsheet. It turns out, by adding these fermions, actually, things change a lot. Actually, they do not change very much. It turns out, things does not change very much, because this is a free fermion series, also very easy to quantize. And everything we did before just carry over, except you need to add those fermions You need to also quantize those free fermions. Turns Out, when you do that, there are actually two different quantization scheme, quantization procedure, quantization process. Two different quantizations exist. So, when you add these fermions with no tachyon. So, you actually can get rid of tachyon by including these fermions, OK? 
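Schematically, the gauge-fixed worldsheet action just described-- the free scalar action supplemented by free worldsheet fermions-- looks like the following. This is a sketch up to normalization conventions, which differ between references, and is not the precise form used in the lecture:

\[
S \;\sim\; \frac{1}{4\pi\alpha'}\int d^2\sigma\;\partial_a X^\mu\,\partial^a X_\mu
\;+\;\frac{i}{2\pi}\int d^2\sigma\;\bar\psi^\mu\,\rho^a\partial_a\,\psi_\mu ,
\]

with the psi's two-dimensional Majorana spinors on the worldsheet and the rho's the two-dimensional Dirac matrices. Both pieces are free two-dimensional fields, which is why the quantization goes through essentially as before.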
So then, the lowest mode is just the massless mode, OK? And the reason that you can have more than one quantization is that, when you have fermions-- and these are fermions defined on a circle-- a fermion on a circle can be defined to be periodic or antiperiodic. So now, you have some choices. And, depending on whether you choose the fermions to be periodic or antiperiodic, et cetera, the story becomes different, OK? And, sorry, we're not going into that here. But, in principle, if you just spent enough time, you could now do it yourself, because it's just quantizing a free field theory. Actually, you still cannot quite do it yourself-- there's still a little bit more subtlety than that, but the principle is very similar. So, in this case, you get rid of the tachyon. So these two procedures give what are called the type IIA and type IIB strings. So they give you two types of string theory. One is called type IIA, and one is called type IIB. Also, another important difference: instead of D equal to 26, one now only requires D equal to 10, OK? So now, let me just write down the massless spectrum. Because you now have worldsheet fermions, the spacetime particles can actually also include fermions. Previously, they were all bosonic particles. Now, by adding these worldsheet fermions, it turns out that they can also generate spacetime fermions. They can generate spacetime fermions. So now, the massless spectrum. These are the lowest particles-- there's no tachyon. So these now all become 10-dimensional fields. So, for type IIA, again, you have this graviton, this B mu nu, then you have this dilaton phi. And then you have a lot of additional fields coming from the closed string. And so, this is all for the closed string. Now, you actually have a gauge field-- also from the closed string, you now have a gauge field and a three-form tensor field. So it has three indices, totally antisymmetric. STUDENT: Is the gauge field just coming from the fermion bilinear? PROFESSOR: Yeah, yeah, yeah. Right. That's right. They are related, actually, to fermion bilinears, yeah. So those first fields are exactly the same as before, and then you get some additional fields. These are normally called the Ramond-Ramond fields. And then, plus fermions. So let me write down the fermions. It turns out that these theories actually have a magic: each of those fields has a fermionic superpartner. So it's actually a supersymmetric theory. So then, you also have type IIB. And, again, as in the bosonic string, you have h mu nu, B mu nu, phi. And then, you have another scalar field, chi, then another antisymmetric tensor C mu nu, and then a four-form field. OK, so these, again, are the so-called Ramond-Ramond fields. Then plus fermions. And then, the low-energy effective theory of these becomes the so-called type IIA and type IIB supergravity. So these are supersymmetric theories, and they are the corresponding supersymmetric generalizations of gravity-- the supergravities. So we do not need to go into that here. Yes. STUDENT: So, if you were going to do string theory, for example, for the generalization to membranes, as you had mentioned before-- if you were doing this on two-manifolds, would you also have to include anyon contributions? PROFESSOR: You may. I don't know. Nobody has succeeded in doing this. Yeah, you may. OK, actually, today I happened to be particularly slow, because I had a much more grand plan for today. That also means that there's definitely-- you can only do the first two problems in your pset. There are only four problems in your pset, but you can only do the first two.
So you can either start on the first two now, or you can wait until next week and do it all together. STUDENT: If we do it all together next week, then how about the next homework? [INAUDIBLE] one week later? PROFESSOR: Yeah. OK, then let's just defer it by one week. Yeah, next Friday. So this, in sum, concludes our basic discussion of string theory. You have seen most of the magic of string theory, even though it was very fast. So, next time, we will talk about D-branes, which is another piece of magic from string theory. Yeah.
MIT_8821_String_Theory_and_Holographic_Duality_Fall_2014
7_Structure_of_Large_N_Expansion.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HONG LIU: OK, let us start. So first, let me remind you what we did at the end of the last lecture. So we can see there's a large N matrix field theory. And then we saw that when you write down your Feynman diagrams to calculate things, then there's a difference now between the order of contractions. Which is now refracted whether you have a Planar diagram or non-planar diagram, et cetera. And that in turn, also affects your end counting. So at the end, we discussed two observations. One observation we said, is for the example to be considered is the non-planar diagram, even though it cannot be drawn on the plane without crossing lines, can actually be drawn on the torus without crossing lines. So the diagram can actually be straightened out on the torus. And another observation, is that the power of N is related to the number of faces you have in your diagram after you have straightened it out. So now, I'm going to generalize these two observations. So first, I will tell you a fact. Many of you may already know this. So I'll first tell you a fact that any orientable two dimensional surface is classified topologically by integer, which I will call h. So this h is called a genus. So heuristically, this genus is equal to the number of holes a surface has. So for example, if you have a plane, the plane topologically is actually equivalent to a sphere, so when I draw a sphere, that means a plane, because if you identify the point on the plane of the infinity, then it becomes a sphere. So topologically, a plane is no different from a sphere. So this genus 0. So this is plane. So h is equal to 0. And then for torus, then there's one hole, and this is just genus 1. So this is torus. You can also draw surfaces with as many holes as you want. So this is surface with two holes. A surface with two holes, and so this is genius 2. So the remarkable thing is that this actually classifies the topology of all two-dimensional surfaces. And this so-called topological invariant, there's a topological number a so-called Euler number, which is defined to be Chi is equal to 2 minus 2h. So if h labels the topology, and so the Chi is related to h in this way. So any two surfaces with the same topology will have the same Chi, because if they have the same h, they have the same Chi. So Chi is what we call the topological invariants. So this is just a mathematical fact. So now I'm going to make two claims. So these two claims are in some sense, still evident after you have studied a little bit. I'm not going to prove it here, will just make the claim, and then I will leave the task to familiarize these two claims to yourself. But as I said, these two claims are actually self-evident, if you just do a little bit of studying. So claim one-- so this is regarding the structural Feynman diagrams. So remember, the example theory we conceded last time was something like this, which phi is [INAUDIBLE] matrix. So if you can look at the Feynman diagrams of the theories, the claim is the following. for any non-planar diagram, there exists integer h so that the diagram can be straightened out. Straightened out just means non-crossing on a genus h surface, but not on a surface with a smaller genus. 
So actually, you can classify all the non-planar diagrams by drawing them on genus h surfaces, on two-dimensional surfaces with non-trivial topology. And then you should be able to find the number h, which is the lowest genus you need to make this diagram non-crossing instead of crossing. Last time, we saw an example which you can straighten out on the torus, but if it were a more complicated diagram, the torus would not be enough. You would need, say, a genus 2 surface or a higher genus surface. So you can easily convince yourself that this is doable just by some practice. Also, this is very reasonable. Do you have any questions regarding this claim? Good. So this was claim one. Then claim two-- claim one is the generalization of the first observation of last time, and claim two will be the generalization of the second observation. For any diagram, the power of N coming from contracting propagators is given by the number of faces on such a genus h surface. As we explained last time, the number of faces just means the number of these disconnected regions in the diagram. Once you have straightened it out, there's an unambiguous way to count the number of disconnected regions in the diagram. And that number is the power of N coming from contracting the propagators. So this claim is also self-evident, because each power of N comes from a single connected line. And essentially, a single connected line circles some face, and so the number of such faces is just the power of N. Any questions regarding the second claim? Good. So based on these two claims, we can now immediately write down the N dependence. So from this, let's look at a vacuum diagram. Again, we have been only talking about the vacuum diagrams. Vacuum diagram means the diagram has no external legs. So from here, we immediately conclude that the vacuum diagram has the following g squared and N dependence. So g squared is just the coupling constant. So remember, each propagator is proportional to g squared, and each vertex is one over g squared. Just from there, let's take an arbitrary vacuum diagram. Let me call this amplitude A. And A should be proportional to g squared to the power E-- E is the number of propagators, because each propagator gives you a factor of g squared-- and also minus V, where V is the number of vertices. Each vertex gives you one over g squared, so that gives you E minus V. And then we just multiply by N to the power of F, where F is the number of faces. So without doing any calculation, this has essentially characterized the N and g dependence of any diagram. Now, if you look at this expression, you say we are doomed, because this kind of expression does not have a sensible [INAUDIBLE]. So remember our goal, the original goal of 't Hooft, is that you treat this N as a parameter, and then you want to take N to be large and then do an expansion in one over N. Doing an expansion in one over N means you are expanding around N equal to infinity. You want to expand around N equal to infinity. But if you look at this expression and then take N to infinity, it's not a well-defined limit, because I can draw Feynman diagrams in which F is as large as I like. So if you just have sufficiently many vertices and sufficiently many propagators, F is unbounded.
So then, there's no well-defined N goes to infinity limit. So this expression does not have a sensible N equal to infinity limit. If you don't have a sensible N equal to infinity limit, then you cannot talk about doing a 1 over N expansion, and you cannot even define the limit we just mentioned. Yes? AUDIENCE: I remember people do partial sums to-- can they do the same thing here? Put the large N in the denominator from a series of bubbles and-- HONG LIU: Yeah. AUDIENCE: Like a random phase approximation-- HONG LIU: Yes. AUDIENCE: Somehow they can do an infinite sum over an infinite number of diagrams, and put the large N into something like 1 minus N something in the denominator-- HONG LIU: Yeah. So people certainly have been talking about such expansions. And certainly, if things just failed here, we would not be talking about this. I'm just setting up a target which I'm going to shoot down. Right. Yeah, I would just say, if you try to do the large N expansion, this probably would be the place which would turn you back. You would say, ah, there's no large N limit, and move on to another problem. But, happily, 't Hooft was no ordinary person. And so remember, there are several nice mathematical tricks you have to go through. First, you have to invent the double line notation so that you can count the N very easily. Yeah, first you have to come up with this large N idea-- I think maybe not due to him, maybe other people had already considered similar things. But first you have to think about the double line notation to make your computation easier, and then, even after you reach this stage, you need to know this kind of topology, et cetera. After you reach this stage, still, there is a roadblock here. But 't Hooft found a very simple way to go around it. Because when I say this limit is not well-defined, I made the assumption that when I take the N goes to infinity limit, the g is kept fixed. So that's the reason that this will blow up. But then 't Hooft came up with a different image. He said, let's consider N goes to infinity, but, at the same time, let g squared go to zero. At the same time, g squared goes to zero. Because the point is that, if you have a diagram with lots of faces and you multiply it by something that goes to zero-- if you take g to 0, then you have infinity multiplying something which potentially goes to 0, and maybe you will get something finite. So you say, let's consider this limit, so that the product of them, g squared N, is kept finite. Let's consider this limit. So in this limit, the N counting will be different. So this A will be g squared N to the power E minus V, because we want to keep this combination fixed. So we put N factors here, and then we take the N factors out: N to the power F plus V minus E. And then let me just write this slightly differently. So this g squared N is lambda. If I use this notation, lambda is now finite. So lambda is raised to the power E minus V. So let me just write it as lambda to the power L minus 1 times N to the power chi. Now, let me explain the notation. First, L, equal to E minus V plus 1, is the number of loops in the diagram. So do you guys remember this formula, why this is true? The reason this is true is very easy to understand. If you look at the Feynman diagram, the number of loops is exactly the same as the number of undetermined momenta. Each propagator carries a momentum, and at each vertex you have momentum conservation. But the overall momentum conservation is automatically guaranteed, so the number of independent momenta is E minus V and then plus 1.
Yeah, it's just E minus V minus 1, because there's only V minus 1 independent momentum constraints. So this is the number of independent momentums, and so this is the number of loops. And the Chi is just defined to be the number of faces plus the number of vertices minus the number of edge. Yes? AUDIENCE: Is there any-- in general, is there a relation between E, V, and F, like are they completely independent to each other? HONG LIU: E, V, and F? AUDIENCE: Yeah. HONG LIU: Yeah, that I'm going to explain. Any other questions? AUDIENCE: So the reason that you can take the F to infinity limits without taking it to 0, is that why we can't apply this to QCD to get a-- HONG LIU: No, because we apply this. AUDIENCE: But if g is not small? HONG LIU: It doesn't matter. So this is an expansion scheme. And you can apply it to QCD, then you can ask whether this particular expansion scheme is a good approximation or not. That's a separate question. And the focus is pretty good, it's not that bad. Yeah, focus of each squaring is finite. AUDIENCE: But it's [INAUDIBLE]. HONG LIU: No, here it does not tell you if lambda has to be small. Lambda can be very big here. It just tells you the lambda in this limit has to be finite. Lambda can be convening and it can be very big. And actually, in the future, we will take lambda to go to infinity. So now, in order to have a well-defined limit still depends on this chi. So now, you ask why this works because we still have this chi, but now again, we need another piece of mathematics. First, when you draw, let me remind you some diagram we had last time, so this is the simplest diagram. And there's a lot of non-planar diagrams which you can draw on the torus, which is the non-planar version of this, et cetera. And you can also have more complicated diagrams. So suppose you are on the torus, I can consider more complicated diagrams, like that. For example, such diagrams. So if you think about such diagrams, then in a sense, each Feynman diagram can be considered as a partition of the surface. So if you draw a diagram of the surface into polygons. Yeah, it's very clear here. I drew this now with wavy lines, just make them straight. And this one topologically, I can just draw like this, and then I have one, one, and the other part, similarly here. So each Feynman diagram can be concealed if you just partition whatever surface you draw the diagram. Is this clear to you? So this is a very important point. Yes? AUDIENCE: Can you repeat that? HONG LIU: Yes. Look at this diagram. So this is a Feynman diagram drawn on the torus. Does this look like a partition of torus into polygons? Yes, so that's what I'm talking about. And the statement that this would apply for any Feynman diagrams. Yes, so this is like a partition of a sphere. You separate a sphere into three regions, one, two, three. Any questions about this thing? Yes, do you have a question? AUDIENCE: Yeah, in that case the third one isn't abandoned, is it? The third-- HONG LIU: So this one I am going to show you, you should view the plane as a sphere. AUDIENCE: Oh, OK. HONG LIU: Topologically, it's the same as a sphere. So now, once you recognize this, now you can use a famous theorem due to Euler. Some of you could have learned it in junior in high school, because the-- so this theorem, given a surface composed of polygons-- so if you're not familiar with this, you can go to Wikipedia-- with F faces, E edges, and V vertices. 
Suppose you have a surface like this. Then this particular combination chi, which is defined to be F plus V minus E, is precisely equal to 2 minus 2h. So this is why this is called the Euler number, and it only depends on the topology. So this combination only depends on the topology of the surface and nothing else. So we can imagine 't Hooft was a very good high school student, so he already knew this. So now, we can rewrite this thing as A equal to some coefficient, script A, times lambda to the power L minus 1 times N to the power 2 minus 2h. So remember, h is greater than or equal to zero. So now this expression has a well-defined limit. So now this has a well-defined large N limit. And in particular, this N dependence only depends on the topology of the diagram. It only depends on the topology of the diagram. For example, to leading order in large N, the leading order will be given by the planar diagrams, because those are the diagrams with h equal to zero, because h is not negative. So the leading term is given by h equal to zero. So all of the planar diagrams in this limit, in this so-called 't Hooft limit, will have N dependence which is N squared. N squared. So now, if you go back to your notes, to the four diagrams we started with last time, to the two planar diagrams we started with last time, you can immediately tell that indeed it is N squared in this limit. Then you still have an expansion in terms of lambda. So you can do Feynman diagrams, et cetera. The power of lambda just depends on the number of loops. So if you have one loop, then you start with lambda to the zero, you start with some constant. If it's two loops, then lambda; three loops, lambda squared; et cetera. So the sum of all planar diagrams has this structure. So you can imagine you can sum all of this, if you're powerful enough. And then you can write it as N squared times some function of lambda. So the planar diagrams would just give N squared times some function of lambda. So if you are powerful enough to compute this f0 of lambda exactly, then you can say you have solved the planar limit. You have solved the large N limit of this theory. Unfortunately, for the gauge theories we are interested in, we cannot do that. We don't know how to compute this. We can only compute it perturbatively, which actually does not work for QCD. Yes? AUDIENCE: But N is going to be infinite. It just seems like-- why does this overcome the same problem that we had before? HONG LIU: Because this is a specific power. There, there's no specific power; that F can be as large as you want. AUDIENCE: But wasn't the other problem also that N was going to infinity? HONG LIU: No, that's another problem. Because here, there is a specific limit when you take N going to infinity. There, there's no specific limit. There's no limit when you take N to go to infinity. Here, the limit is N squared. I can say-- this is, essentially, your vacuum energy, right? I can say E0 divided by N squared has a well-defined limit. So the key is that this is a specific dependence on N. But there it's unbounded. AUDIENCE: I guess another question is, we didn't assume that lambda is a small parameter. HONG LIU: No, we did not assume lambda is small. AUDIENCE: So isn't this kind of perturbative analysis, saying that that series converges to some function, is that OK? HONG LIU: It's a very good question, and it is OK. But the reasoning is more complicated. The reasoning is more complicated. The key is the following. What I'm writing here you can understand using two perspectives. One is heuristic.
Just one say, suppose you can only do perturbation series, and that would be the thing you compute. And then say, if you are powerful enough to solve the theory with perturbity, then you're guaranteed just to find some general functions. Yeah but whether this series actually converges is an important mathematical question, of course, one has to answer. It turns out, actually this is convergent. You can mathematically prove this is convergent, for very simple reasons. Yeah this is a side remark, but let me mention it because this is a cool fact. So if you do just lambda of g 5, 4 series, just do the standard Feynman diagram calculation, input the basic series, the series is divergent. The series is divergent, no matter how g is small, just as the radius of convergence is 0 for any non-zero value. Yeah the radius of convergence is 0 in terms of g, no matter how small g is. The reason that it's divergent is because the number of diagrams. So when you go to Nth order, it increases exponentially in N. So the coefficient of g-- say to the power N-- the coefficient can become huge. Because of the number of diagrams to increase exponentially in N. So you think, I have g to the power N times N factorial, something like that. And that series is not convergent. But the thing that is convergent here, because can see the planar diagram, the number of planar diagram is very, very small. It only increases with N as a power of N, rather than as a factorial of N. And so you can make lambda small enough, then this can be convergent. Yeah. AUDIENCE: Didn't we already see at the end of the last lecture that only planar diagrams keep the biggest contribution, so why is that equal to-- HONG LIU: Sorry? AUDIENCE: So at the end of the last lecture, we saw that-- HONG LIU: No not necessarily. That is only for the two diagrams. If you look at this thing, I can draw arbitrary complicated non-planar diagrams, with a very large F. They only depend on F. It does not depend on anything else, if you do this. OK so in general, then of course if you include the higher non-planar diagram, et cetera-- so in general the vacuum energy, which you would normally call log z. OK log z is, essentially, the sum of all vacuum diagrams. So this is the partition functions, so it's a path integral. So log z, then we'll have the expansion from h equal to 0 to infinity, N two the power 2 minus 2h, F h lambda. OK so at each genus level, you will have some function of lambda. Yeah the leading, order we just showed is f0. And then if you add the Taurus now-- we'll add to the torus now our on non-planar diagram. It's order one. And then, for the two genus, it's f over n squared, et cetera. So let me just write down z explicitly. Z is the partition function, i S phi. So if you compute this, with the right boundary condition-- if you compute this path integral, with the proper boundary conditions, then that gives you the vacuum diagrams. The log z is the sum of all connected vacuum diagrams. I should say, the sum of all connected vacuum diagrams. Any questions regarding this? Good. Now there is a heuristic way we can understand this expansion. So it's actually a heuristic way to understand this one-way expansion. So let's just look at this path integral. So let's look at the Lagrangian. So the Lagrangian I wrote there. In this 't Hooft limit, I can write it as minus N divided by lambda. So I want to write things in terms of lambda. So I multiply the pre-factor N downstairs, and then upstairs. So g squared N give me lambda. 
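As a quick consistency check of this expansion against the counting rules above (assuming, purely for illustration, a quartic-type single-trace vertex as in the example theory; the specific vertex is not essential):

\[
\text{vertex }\propto \frac{N}{\lambda},\qquad
\text{propagator }\propto \frac{\lambda}{N},\qquad
\text{face }\propto N
\;\;\Longrightarrow\;\;
A \;\propto\; \Big(\frac{N}{\lambda}\Big)^{V}\Big(\frac{\lambda}{N}\Big)^{E} N^{F}
\;=\; \lambda^{\,L-1}\,N^{\,\chi}.
\]

For instance, with a single quartic-type vertex and two propagators (V = 1, E = 2): the planar contraction has F = 3, giving lambda N squared (h = 0), while the non-planar contraction has F = 1, giving lambda N to the zero (h = 1, the torus diagram of last lecture) -- exactly the first two terms in the expansion of log Z written above.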
And then I have a trace, et cetera. So now it's easy to see that you're leading order, this things should give you order N squared. Because there is already a factor of N here, and the trace is the sum of N things. So generically, this should be of the order N squared. So now we-- a little bit leap of faith-- say supposing the large N limit, since the leading order is order N squared, you can argue that actually all the N squared is the expansion parameter, if you do scatter point approximation. And then naturally, you will see the power will be given by 1 over N squared. any questions regarding this? Yes? AUDIENCE: What does N over lambda factor to? HONG LIU: It's-- AUDIENCE: OK. HONG LIU: So clearly this discussion actually does not-- so when I say clearly here, it requires a little bit of practice. But clearly, our discussion only depend on the matrix nature of the Lagrangian and the fields. So I'm going to make a claim. It says for any Lagrangian, of matrix valued fields of the form L, which is N divided by some coupling constant, times a trace of something. Doesn't matter what you put inside the trace here, as far as you have a single trace. Such a series will always have the expansion like this. It will always have an expansion like this. So let me just summarize. In the 't Hooft limit-- by 't Hooft limit, I always mean this form-- the coupling constant is defined such that you have some coupling concept here, and then you have over N factor. And then of course, you can also have some coupling constant inside here. It doesn't matter. As far as those coupling constants are independent of N. And as far as things inside the trace are independent of N. So for such a kind of series, the 1 over N expansion is equal to the topological expansion. It's the expansion in terms of topology of Feynman diagrams. So this is a very, very beautiful-- and as I said, it's [INAUDIBLE], because in principle, it puts a very simple structure into something that's, in principle, very complicated. Yeah, any questions regarding this? Yes? AUDIENCE: Maybe it's not can I just understand it in such a way that he kind of treats the Feynman diagram as a triangulation of different spaces. HONG LIU: Yeah for example, you can think of from that point. Yeah, so we use that to derive, to use this formula. Yeah and that picture will be very useful in a little bit from now on. Keep that picture in mind. The Feynman diagram is like the partition of some surfaces. And that will be very useful later. AUDIENCE: Does there ever arise a situation in which you care not about two surfaces, but Feynman diagrams on three surfaces or something like that? Because this asks the question, you don't necessarily have to consider the topology of two surfaces. Are there any situations in which it's more complicated? HONG LIU: Yeah but we always draw Feynman diagrams on the paper, which is two dimensional. Yeah it's enough. Two dimensions is enough. You don't need to go to three dimensions. Yeah and this structure only comes when you go to two dimensions, because if you go to three dimension, of course, they don't cross. In three dimension, you can no longer distinguish planar or non-planar diagram. AUDIENCE: Well if you did-- OK. HONG LIU: Good any other questions? AUDIENCE: Why is it always orientable surfaces? HONG LIU: That's a good question. It's because those lines are orientable because when we draw the double roation, so you have this two, double rotation. So essentially those lines are orientable. They have a direction. 
And essentially, these are, of course, the two index lines. Yeah, so those Feynman diagrams actually have a direction, have a sense of orientation. So I'm going to mention this in passing, but let me just mention it now. If it's not a Hermitian matrix, say if it's a real symmetric matrix, then there's no difference between the two indices. And then there's no orientation. And that would be related to non-orientable surfaces. And then you need to slightly generalize this. Any other questions? AUDIENCE: Now you mentioned that it has something to do with string theory, but does it have anything to do with scattering particles-- HONG LIU: Yeah, so we're going to talk about that. We're going to talk about that. Good? So now let me talk about general observables. I think we're a little bit short on time, if we want to reach the punchline today. So right now we have only looked at the vacuum diagrams. So now let's look at the general observables. Before talking about general observables, let me just again make a side remark, which is gauge versus global symmetries. So in the example we talked about here-- let me call this a, equation a. Then let me write down another equation b, which is a Yang-Mills theory. So the difference between a and b-- a is this guy-- is that a, as we discussed last time, is invariant under a global symmetry, a U(N) global symmetry: the Lagrangian is invariant when phi is acted on by a unitary matrix. But this U must be constant. Only for a constant U is this a symmetry. But if you have studied a little bit of gauge theory, or if you have not studied gauge theory, just take my word for it, b is invariant under a local symmetry, a local U(N) symmetry. So A mu-- A mu is what makes up the F-- transforms as U(x) A mu(x) U dagger(x) minus i partial mu U(x). It doesn't matter. The only thing I want to say is that this U(x) is arbitrary. It can have arbitrary space time dependence. Just like the generalization of QED, it's a gauge symmetry. So the key difference between the two, the key difference between this local and the global symmetry, is manifested in what kind of observables we can consider. For example, for a-- AUDIENCE: I think that should be U dagger, partial mu, and then U dagger. HONG LIU: So this difference, you can say, what's the big deal? In one case, this is constant, and in the other case this depends on space time. So the key difference between them is that in the case of a, an operator like phi squared-- phi is a matrix, so phi squared is a matrix-- phi squared at x is an allowed operator. This operator is not invariant under the global U(N) symmetry. But it doesn't matter, because this is a global symmetry. So this is an allowed operator. But if you have gauge symmetry, all operators must be gauge invariant. That means all operators must be invariant under this kind of transformation. So the analog of this is not an allowed operator. So observables in a gauge theory are much more limited. So we will be interested in gauge theories. That means we are always interested in observables which are invariant under the symmetry, in gauge invariant operators. The specific kind of Lagrangian does not matter. You can have the gauge fields. You can also have some other field, say some matrix phi, et cetera. As far as the Lagrangian is of this form, it's OK. We always only consider the Lagrangian of that form.
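To keep the claim just made concrete, here is a schematic rendering in equations; the particular terms inside the trace are placeholders, and overall signs and normalizations are not being tracked:
\[
\mathcal{L} \;=\; \frac{N}{\lambda}\,\mathrm{Tr}\big[\,\cdots\,\big],
\qquad \lambda \equiv g^{2} N \ \ \text{held fixed as } N \to \infty,
\]
where the dots stand for any N-independent combination of the matrix-valued fields (kinetic terms, \(F_{\mu\nu}F^{\mu\nu}\), potential terms, and so on). For any such single-trace Lagrangian the 1/N expansion organizes itself by topology,
\[
\log Z \;=\; \sum_{h=0}^{\infty} N^{\,2-2h}\, f_{h}(\lambda).
\]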
But it doesn't have an arbitrary number of fields, and with arbitrary kind of interactions. So you start your theory. So let's for simplicity, this can see the local operators. In this kind of theory, then allowed, say local operators, must always have some kind of trace in it. Say you must have some form trace, F mu U, F mu U, a trace phi squared, et cetera. A trace phi to some power F N, phi to some power k, et cetera. You can also have operators with more than one trace, say trace phi squared, trace F mu U, F mu U. So we are going to make a distinction because the operator with only a single trace, we call them single trace operator. And the operator with more than one trace, we call them multiple trace operators. OK so the reason for this distinction will be clear soon. So multiple trace operators-- it's self-evident again-- that multiple trace operators can always be written as products of single trace operators. AUDIENCE: I have a question. Is it a possible case to have a local gauge invariant operator, the F mu U times F mu U? HONG LIU: Yeah I always can see the local gauge invariant operators. We can see the gauge theories. AUDIENCE: So this combination, F mu U times F mu U, it's only-- HONG LIU: No this is gauge invariant. AUDIENCE: Is that the only gauge invariant component? HONG LIU: Sorry? AUDIENCE: Is this is the only gauge invariant component that we can use to construct the gauge invariant local operators? HONG LIU: Sorry, I don't quite understand what you mean. No this is just one specific operator. No, you can take F to the power N, an arbitrary number of-- as far as they're inside the trace, it's always gauge invariant. AUDIENCE: I see. HONG LIU: I'm just writing down a particular example. So just for notational simplicity, I will just write-- so from now on I will just denote the single trace operators collectively just as O with some script n, which denotes the different operators. so n denotes different operators. And then for the multiple trace operator, then you have O1, O2. That would be a double trace operator. And O1, O2, O3, say times O3 would be a triple trace operator. n just labels different operators. I'm just using abstract notation. And the reason for this-- a distinction will be clear soon. Then for such gauge theories, the general observables, in the quantum field theory is just correlation functions of gauge invariant operators. So by gauge invariant operators, you can have local operators, non-local operators, et cetera. So for simplicity, I will restrict my discussion only on the local operators. But local operators means that the fields evaluated at a single point. So the typical correlation functions, then the typical observables will have this form, will be just a product of some correlation functions, a product of some operators, and I say their correlation functions. By c I mean the connected correlation functions. So you can see immediately, these multi trace operators are just the product of a single trace operator. And the correlation function of a multiple trace operator can be obtained from those of a single trace one. You just identify some of the acts. Then that will be enough. So we only need to talk about the correlation function of single trace operators. So now the question-- let me call this equation one. So now the question follows what we discussed before, is how do we decide the N dependence of the guy? Previously we determined the N dependence of this vacuum diagrams of this petition function. 
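As a concrete illustration of the distinction just drawn, in the notation used here (these particular operators are only examples):
\[
\text{single trace:}\;\; \mathrm{Tr}\,F_{\mu\nu}F^{\mu\nu},\;\; \mathrm{Tr}\,\Phi^{2},\;\; \mathrm{Tr}\,\Phi^{k},\;\dots
\qquad
\text{double trace:}\;\; \mathrm{Tr}\,\Phi^{2}\;\mathrm{Tr}\,F_{\mu\nu}F^{\mu\nu},
\]
and the general observables are the connected correlation functions of the single-trace operators,
\[
G_{c}(x_{1},\dots,x_{n}) \;=\; \big\langle\, \mathcal{O}_{1}(x_{1})\,\mathcal{O}_{2}(x_{2})\cdots\mathcal{O}_{n}(x_{n})\,\big\rangle_{c}.
\]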
But now want should be the N dependence of general correlation functions? One way to do it, you just start immediately calculating. And then you can find some root, et cetera. But actually there's a very simple trick. There's a very simple trick to determine the N dependence of this. So now I will explain. Now I will tell you. So I will not give you any examples because this trick is so nice, and it just works very easy. And so the question, what is N dependence of one? So this is the question we want to address. so here is a very simple, beautiful, trick. So let's consider the following generating functional. So in quantum field theory, when we talk about correlation functions, it's always convenient to talk about the generating functional. So whatever is your field, you do the path integral of all fields. And then you look at the action. So you have your regional action, which I call S0. And then that's add those operators. Ji x, O i x. Yes so this is a standard story. When you take the derivative over Ji, then you will bring down a factor of O i. Then that essentially give you the correlation function. You have, for example, a correlation function, the connected correlation function. O n, the connected correlation function would be just you take the derivative of log z. And then delta J1 x delta Jn xn. And then you set J equal to zero. You set all the J equal to zero. And that gives you the end point function. I should write i here, so i to the power n. So now here is the beautiful trick. You can determine this in a single shot, the N dependence of this guy. And this simple trick is just to add N here. You add N here. In order to get the correlation function, we need to divide it by N to the power N. Now you wanted to get O i, you need to divide-- take the root of N times Ji, so you also need to divide it by 1 over N. But why does this help? Why does this help? It helps for the following reason. Let me call this whole thing iS effective. iS effective, so the key thing is that this O i, single trace operators, then this S effective then has the form N times the trace something. Because you already know S0, which is our starting point, has this form. And now the term you added in, precisely, also has this form, is the N factor times something single trace. So that means the whole thing still has this form. Then now we can immediately conclude this log z J1, Jn, must have this expansion. h from 0 to infinity, and to the 2 minus 2h, f h lambda J1, et cetera. So adding that N is a powerful, powerful trick. So now you can immediately, just from here, we can immediately find out that for endpoint function, connected endpoint function, the leading order is 2 minus n, because the leading order is n squared. Yeah it's 2 minus n. And then suppressed by 1 over N squared, et cetera. Good? So for example, if you look at a 0 point function, which is essentially the partition function, so this is order N squared. So this is what we found before, to leading order. And if you look at the one point function, some operator, then, will be order N. If you look at the two point function, connected two point function of some operator, so it would be order one. And the three point function of some operator will be order 1 over N, as the leading order. And then all higher order just down by 1 over N squared, compared to the leading order. And again, the leading order is given by the planar diagrams. Because of the leading order contribution to here, in terms of this S effective is planar diagram. 
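Here is the same trick written out; factors of i and the overall normalization of the sources are not being tracked carefully, only the N counting:
\[
Z[J] \;=\; \int D\phi\;\exp\!\Big(\, iS_{0} \;+\; iN\sum_{i}\int d^{d}x\, J_{i}(x)\,\mathcal{O}_{i}(x)\Big)
\;\;\Longrightarrow\;\;
\log Z[J] \;=\; \sum_{h\ge 0} N^{\,2-2h}\, f_{h}(\lambda; J),
\]
\[
\big\langle \mathcal{O}_{1}(x_{1})\cdots \mathcal{O}_{n}(x_{n})\big\rangle_{c}
\;=\; \frac{1}{(iN)^{n}}\,
\frac{\delta^{n}\log Z}{\delta J_{1}(x_{1})\cdots\delta J_{n}(x_{n})}\bigg|_{J=0}
\;\sim\; N^{\,2-n}\,\big(1 + O(1/N^{2})\big),
\]
with the leading piece at each order coming from planar diagrams.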
And then they must be, under those things, just obtained by through [INAUDIBLE], so they must be planar diagrams. so again this comes from planar diagrams. Good? So now let's talk about the physical implications of this. So what does this mean? So we have found out, this is our N dependence for our gauge invariant operators. So what does this mean? Now let me talk about physical implications. AUDIENCE: What did one of those [INAUDIBLE] have with-- HONG LIU: Yeah, I defined them without chi. AUDIENCE: Then something like-- HONG LIU: You define something. So when you write down your theory, you define this 't Hooft limit. Then everything is already in terms of lambda or some other order one number. So that can depend on those numbers in an arbitrary way. It doesn't matter can depend on coupling constants in an arbitrary, but it cannot have N dependence defined inside the operator. Once we introduce this 't Hooft limit, then the operator can depend on the coupling in the 't Hooft limit in an arbitrary way. Because they're all just all the one constant. Let's talk about physical implications of this. It turns out, these simple and [INAUDIBLE] behavior actually has a very simple physical picture behind it. So first he said, in the large N limit-- so let's just look at leading order behavior. So in a large N limit, if we consider this state of O i x acting on the vacuum. So some single trace operator acting on the vacuum. So i, again, just labels different state, different operators. These can be interpreted as creating a "single particle" state. I'm first describing the conclusion. Then I will explain why. So you can see that the state obtained by adding a single trace operator on the vacuum, then this can be interpreted as a single particle state. So if you add on the double trace operator on the vacuum, then this can be interpreted as two particle state. Similarly, say O n acting on the vacuum would be N particle states. So why is this so? Why this is so. So I will support this statement using three arguments. First remember O i O j, the connecting Green function of any two operators of order one. So we can actually just diagonalize them. If you can just diagonalize them, so that O i O j are proportional to delta i j. So in some sense, a two point function, those operators can be considered as independent. And now the second statement. So if you want to call this single particle, this two particle, multi particle, then they should be that they don't overlap. Because a single particle cannot overlap with two particle, et cetera. And then to see the overlap between a single particle with a multi particle, you look at these correlation functions. So if you look at the overlap of a single particle state, with a two particle state, say some double trace operator. Let me just, to avoid confusion, let me just use the inside-- use this notation to see this as a single operator. So you can see that the overlap we saw of the single trace operator with this double trace operator. And they start off-- this whole thing is like a three point function. Just put these two over the same point. So from our discussion here, is how you connect the Green functions of order 1 over N. So this goes to 0, compared to this overlap with itself. So that means in the N goes to infinity limit, there is no mixing between what we called single particle state, single trace, and the multiple trace states. So the third thing is that now let's look at the two point function of two double trace operator. 
So let's look at O1 O2 at x, and O1 O2 at y-- double trace operators. So let's look at the two point function of double trace operators. There are different contributions to this, so let's include all diagrams. The leading order is the disconnected piece, which is essentially O1(x) O1(y) times O2(x) O2(y), because I have diagonalized them. And then plus the connected Green function, which is down by order 1 over N squared. So this leading order contribution is just like two independent particles propagating-- just the product of two independent propagators. So it's sensible to interpret this as the propagation of two particles. So it's sensible to interpret this double trace operator as just creating some two particle state. Yeah, and again the connected piece goes to zero in the large N limit. So I should emphasize, when we call this a single particle state, it's not necessary that there really exists an on-shell particle corresponding to this state. We're just saying that the behavior of these states can be interpreted as that of some kind of single particles. It's not really necessary that they exist as stable on-shell particles. In certain cases, there might be-- there might be actual stable, on-shell particles associated with these kinds of states. But for this interpretation to be true, there does not have to be. So in QCD, actually, such states do exist-- we call them "glueball" states. For example, in QCD, the analog of this kind of operator can create some states which are short-lived. They are not long-lived; they quickly decay. And they're typically called glueball states. So from now on we just call them glueballs. So this is the first indication: a single trace operator can be interpreted as creating single particle states, and a multiple trace operator can be interpreted as creating multi-particle states. The second statement is that the fluctuations of these glueball operators are suppressed. So let me explain what this means. Suppose some single trace operator O has a non-zero expectation value in some state. And then let's look at the variance of this operator, the variance around the expectation value, which is given by the expectation value of O squared minus the square of the expectation value of O. And this is, by definition, just the connected Green function of O with itself: the first term is the full O squared, the second is the disconnected piece, so the difference is just the connected part. And the N dependence of this, we know, is order one, order N to the power 0. But the expectation value of O itself is of order N, so its square is of order N squared. So the variance, compared to the square of the expectation value, is suppressed in the large N limit. So essentially, the fluctuation divided by the expectation value of the operator itself is of order 1 over N, so it goes to zero in the large N limit. Similarly, if you have a two point function of O1 and O2, and each of O1 and O2 has a non-zero expectation value, then it factorizes: it is the expectation value of O1 times the expectation value of O2, plus the connected piece. The connected piece is again of order one, but the disconnected part is of order N squared. So essentially the correlation function always factorizes into the disconnected part. The connected part is always small compared to the disconnected part.
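Collecting the scalings just described in one place (all statements are leading order in the 't Hooft limit, with the single-trace operators normalized as above):
\[
\langle \mathcal{O}_{i}\,\mathcal{O}_{j}\rangle_{c} \;\propto\; \delta_{ij}\;O(1),
\qquad
\big\langle \mathcal{O}_{i}\,(\mathcal{O}_{j}\mathcal{O}_{k})\big\rangle_{c} \;\sim\; \frac{1}{N},
\]
\[
\langle \mathcal{O}\rangle \;\sim\; N,\qquad
\langle \mathcal{O}^{2}\rangle - \langle \mathcal{O}\rangle^{2} \;=\; \langle \mathcal{O}^{2}\rangle_{c} \;\sim\; O(1)
\;\;\Longrightarrow\;\;
\frac{\sqrt{\langle \mathcal{O}^{2}\rangle_{c}}}{\langle \mathcal{O}\rangle} \;\sim\; \frac{1}{N},
\]
\[
\langle \mathcal{O}_{1}\mathcal{O}_{2}\rangle \;=\; \langle \mathcal{O}_{1}\rangle\,\langle \mathcal{O}_{2}\rangle\,\big(1 + O(1/N^{2})\big).
\]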
So this is like a classical theory. AUDIENCE: We have just five minutes. We can proceed before it's 5:00 HONG LIU: Yeah but it may go to 10 minutes. It's just hard to say. It's just hard to say. Yeah, let me do it next time.
MIT_8821_String_Theory_and_Holographic_Duality_Fall_2014
14_Physics_of_Dbranes_Part_II.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HONG LIU: So last time, we talked about-- we introduced the concept of D-branes. And then we quantized open strings on the D-branes. And we see the massive spectrum on the D-branes includes, say, massless gauge field, and also some massless scalar fields. And then I described that one can interpolate the dynamics of the scalar fields actually as the motion of the D-branes. So in other words, at the beginning, even when we quantize the open string, we started with a rigid boundary condition, so we started with a rigid ring. But now, after you quantize it, then you get the fluctuations. And because of those fluctuations, the D-branes become dynamical. Those fluctuations of the D-branes make it into a dynamic object in principle to make it move, or et cetera. So now let's say a little bit regarding the math of a D-brane. In the gravitational theory, anything gravitates. D-brane will have energy. It will have a mass, et cetera. so let's talk about what should be the mass of a D-brane if it's a dynamic object. So here, there's a very simple and intuitive answer. So on the D-brane, there are many open strings. In principle, there are an infinite number of open string excitations live on the D-brane. And each of them can be considered, say, as a space time field living on the D-brane, et cetera. So actual definition for the mass of a D-brane is that this should be the energy of a D-brane, which essentially should be the ground state, the energy of the ground state of the D-brane. The energy of the ground state of a D-brane could be corresponding to the energy of the D-brane none of those strings are excited. That should correspond to the vacuum energy of open strings living on it. So this is very intuitive definition and obviously makes sense. So we can write the mass of a D-brane, DP-brane, as the tension, which is the mass per unit volume times the total volume. And this should be equal to the vacuum energy of all the open strings. So say each open string excitation corresponding to a field. You have a tachyon. You have gauge field. You have massive scalar field, and also infinite number of massive fields. All those fields, they have vacuum energy. So you need to sum all of them together. The sum of those vacuum energies would be the mass of the D-brane. So this can be achieved just by doing the vacuum diagram of open strings. So we describe in the closed stream case, if you want to find the vacuum energy in the closed string, you just sum of all possible. So this will be just the vacuum diagram. In other words, so the difference between the open string is that open strings have boundary and closed strings are closed. So that means the sum of all two dimensional surfaces with this one boundaries but no external open strings. This is the natural definition of the vacuum diagram, as we would do. And we will do in the Euclidean path integral. So you can do this in the Euclidean path integral. You sum over all surfaces. In the case when you need the sum of all surfaces, in some sense, the only way we know how to define such a sum is to do the Euclidean path integral. 
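Written out, the definition being used here is, schematically (overall normalizations suppressed; the Euler number formula for surfaces with boundaries is spelled out just below):
\[
M_{Dp} \;=\; T_{p}\,V_{p} \;=\; E^{\rm open}_{\rm vac}
\;=\; \sum_{\substack{\text{connected surfaces}\\ b\,\ge\, 1\ \text{boundaries}}} g_{s}^{-\chi},
\qquad \chi \;=\; 2 - 2h - b .
\]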
And this sum-- so previously, we talked about the vacuum energy of the closed stream, the sum of all possible closed surfaces, say of different topology. So here, again, you sum of all possible surfaces with this one boundaries. So the simplest surface with one boundary is a disk. The difference with closed string case, now you have to sum over surfaces with boundaries. So the simplest one would be a disk. And the next would be annulus. Now you have two boundaries rather than one boundary. And exactly you can consider more and more complicated diagrams. You can consider more and more complicated diagrams. And the way to weight those surfaces exactly the same as before is that you have this g string than to the power minus chi, and the chi is the Euler number, just exactly as we described before. And Euler number now to apply surfaces with boundaries would be-- so previously, Euler number is 2 minus 2h. h is number of genuses, or number of holes. But now you also need to include the number of boundaries, which are called b. So when you include the boundaries, then that will change your Euler number, and also change the weight for each diagram. You can also add handles here. You can also add handles here. You can also add genuses to the disk. You can also add h here. So according to this counting, then this would count as gs minus 1. So this one has zero holes and one boundaries. So this is 2 minus 1. So this is 1. So this is gs to the power negative 1. And this one has low hole but two boundaries. 2 minus 2 is 0. This is 1 to the power gs to the 0. And then you have higher diagrams. You have higher surfaces with all positive powers, gs. Yes? AUDIENCE: What about like a Mobius strip that has one boundary? HONG LIU: Right. So the Mobius strip is a very good question. A Mobius strip is unoriented surface. So here, we can see the oriented string. You can consider unoriented string. But most of what we said applies to that case. It's just we have to worry a little bit about orientation, so we don't go there. AUDIENCE: Will that really contribute to the vector [INAUDIBLE]? HONG LIU: Hm? AUDIENCE: Will the Mobius strip contribute to the-- HONG LIU: Yeah, yeah, it will. In the case when you have unoriented string. AUDIENCE: But you make a restriction and say, on this D-brane, we have or not have unoriented-- HONG LIU: Here, we only consider oriented surfaces. We only consider oriented strings. We have not talked about unoriented strings. AUDIENCE: But no restriction in principle. HONG LIU: You can. It's actually a technical complication I don't want to go into right now. Yes? AUDIENCE: So we think of vertical axis as time. So the disk would be a string kind of nucleating and propagating-- HONG LIU: No. This is a Euclidean and you can think of time as whatever you want. AUDIENCE: Right. But still, so the disk would be like a nucleating open string that propagates and then disappears, right? HONG LIU: Yeah. For example, this open string you can consider. Heuristically, you may be able to think of some kind of a single string, just rotate, for example. AUDIENCE: Like forever? HONG LIU: Yeah. For example, I just said a single. I'm just saying it's hard to interpret as a time now. But the time in this direction would be periodic time if you think from that point of view. But the good thing is that when you go to Euclidean, what you call time and the spatial direction then becomes obscured. It depends on your convenience. AUDIENCE: In the center, is that a genus? HONG LIU: No. 
The center is completely smooth. AUDIENCE: It's not like a torus? HONG LIU: No. A disk is a disk. A disk is not a torus. AUDIENCE: But there's a-- HONG LIU: You're talking about this one? AUDIENCE: Yeah. HONG LIU: Oh. This one is annulus. This guy is annulus. This is a flat surface. AUDIENCE: So the inside and outside are the-- HONG LIU: They are different. If you identify this and that, then they become a torus. When you identify this one and that one, then they become a torus. Then you get rid of the boundaries. When you identify them, then there's no boundary anymore because they become a circle. Good? So that means if I have weak coupling, that means when gs is much smaller than 1, which is the cases we can only consider because if you have gs more than 1, then you have some infinite number of diagrams. And we don't know how to deal with this. So weak coupling, when gs more than 1, then the brane tension, then the D-brane will be always scaled with string coupling at 1 over gs because of that, because this term will dominate. This term will dominate. And then the energy should be 1 over gs. This is a very important result. It's a very important result. The mass of the D-brane is actually 1 over gs. So on dimensional ground, you can just essentially write down what's the tension of the D-brane because the only dimensional parameter is alpha prime. So the dimension of the D-brane should be-- so this is mass per unit volume. So you have a p dimension of volume, then that would be p plus 1. So the mass dimension of the tension would be p plus 1. So just on dimension ground, I can write gs because it's 1 over s, alpha prime 1/2 p plus 1. So that gives you the right dimension. And then you can have some numerical constant which you need to determine. You have some numerical constant which you can determine string theory by doing that path integral. Any questions about this? Yes? AUDIENCE: Do these open string vacuum diagrams have any interpretations like half of a closed string diagram? So if you put two disks together, you have a sphere. HONG LIU: Right. We will talk about this in a minute. Yes? AUDIENCE: With the logical decomposition of the powers of g, why is the disk g minus 1. HONG LIU: This comes from this formula. As we discussed before, the weight of different topology is always weighted by some constant to the power of Euler number. Now, the Euler number, if you have surfaces with boundaries, then the Euler number depends on the number of boundaries. And then you can just work it out. So this is based on simple topology. Any other questions? Yes? AUDIENCE: I'm sorry. I'm just a little bit confused about the vacuum energy here as the one-- remember when you calculate the mass of the string. You know, we have a naught term there. There is no excitation. HONG LIU: Sorry. Say it again. AUDIENCE: So when we calculate the mass of the open string and there is a0 term, which is completely different. HONG LIU: That's completely different. So that a0, we considered before, it's the zero point energy for the oscillation modes on the screen. So that a0 is that we are considering this string, and the zero point energy for the oscillation mode on this string. But here, we are considering the zero point energy not of the string. We are considering the zero point energy of the D-brane. And the zero point energy of the D-brane would be to write down the vacuum energy of all the fields living on the D-brane. 
And all the fields living on the D-brane corresponding to all the-- now, each string excitation becomes a field on the D-brane. And so that's corresponding to sum of that. And that, then, in turn corresponding to sum of these kind of surfaces. Any other questions? So there's an alternative way to think about how to compute the D-brane mass or energy as follows, which is actually extremely instructive. There is an alternative way of doing this. So let's consider just D-brane. So consider the interaction between the two D-branes. So let's consider two D-branes separated by some distance. And then they have a mass. Then they will interact gravitationally. In particular, in a weak coupling limit they're pretty massive. They're very massive because it's 1 over the g string. So when g string is small, which is the only regime we're working with, so the D-brane is very heavy. And so you can ask, what is the gravitational attraction between the two? What is the interaction between the two? And we know that at low energies, say if the two D-branes are not excited, if their distance are very far apart, then the leading interaction between them just comes from the massless mode because only massless mode mediates normal interactions. And so interaction between them just comes from graviton or this [INAUDIBLE], essentially just corresponding to small number of massless closed string modes. Only those massless modes will contribute because the massive mode only contributes short range interactions. AUDIENCE: Why not the vector mode in open string? HONG LIU: No. Vector mode of open string only lives on each brane. AUDIENCE: But we can have an open string like-- HONG LIU: I will talk about that separately. Just wait a little bit. So if I think purely from the alternative gravity point of view, not from string theory point of view, I have two massive objects. I want to look at the interaction between them. And then interaction will be proportional to gn, say their mass. So if I factor out the volume factor, it would be just GN TP squared. So this essentially is the gravitational interaction between the two. And from the string theory point of view, such a diagram corresponding to you exchange your closed string. So this diagram corresponding says, suppose you have brane one, brane two. So this picture that brane one will emit graviton absorbed by the other brane. And then that's how we measure, say, the newtons force between them. And it translates to the string theory picture. This corresponding to one D-brane emits a closed string, and then absorbed by the other D-brane. And when you go to [INAUDIBLE], which only massless mode matters, and then becomes this picture. So this is the string theory version of that diagram. So now essentially, what you need to do to calculate this thing in the string theory is to calculate this thing in the diagram. So form the string theory point of view, now what you need to consider is to do path integral on the topology of a cylinder with one boundary on the brane one and the other boundary on brane two. So this corresponds to exchange of a closed string in this direction. AUDIENCE: Question. HONG LIU: Yes? AUDIENCE: What's the mechanism for the D-brane emitting a closed string? Or equivalently, on the other picture, why can it emit a graviton? HONG LIU: It's coupled to graviton. AUDIENCE: So how did we introduce the coupling? HONG LIU: Hm? AUDIENCE: How did we introduce the coupling? I mean, we introduced them as boundary conditions for open strings. HONG LIU: Yeah. 
AUDIENCE: So does that naturally introduce coupling? HONG LIU: No, no, no. This is what I'm writing here. And this diagram, you emit from a closed string corresponding to look at the cylinder. One boundary of the cylinder on the location with one D-brane, and the other boundary of the cylinder on the location of the other D-brane. And then you just integrate over this surface. And then that will give you the graviton exchange. That will give you the closed string exchange between the two. AUDIENCE: So the coupling of the closed string to the brane kind of naturally arises? HONG LIU: In the boundary condition imposed here. So you impose this closed string to initiate it from-- so you impose the boundary condition here so that this closed string starts from brane one and then ends on brane two. And then you integrate over all surfaces this cylinder topology. AUDIENCE: We know the interaction constant for closed string is g closed, but is it the same here? HONG LIU: No. That's what I'm going to talk about. Is this clear? OK. So now I'm going to mention two things. First, as I said before, whenever we do some calculations, we often do analytic integration to the Euclidean signature. So now it will be the same. When we do this calculation, we have a closed string start at location of the brane one, and move forward in time to end on the brane two. So this is the simplest diagram. You can also add some holes here. You can also add some holes here. And then that corresponds to higher order diagram structure. So now don't worry about that. Just the simplest diagram. Yes? AUDIENCE: Was it time dimension inside brane to define time dimension as one of the dimensions that lived inside brane? HONG LIU: Yeah. But this is not a space time [INAUDIBLE]. This is virtual time. This is virtual time associated with this graviton. So essentially, I do create the closed string here on this brane and then propagate then absorbed by the other D-brane. AUDIENCE: Can you also think about this as like an open string-- HONG LIU: Yeah. One second. I'm going to explain. So now there are two remarkable things about this diagram. The two remarkable things about this diagram. First, this is a cylinder diagram. And this is a diagram with two boundaries because we have to emit a closed string from here. And so you have one boundary. You have an initial closed string, then you have a final closed string. So this is a surface with two boundaries with no holes. And if you calculate the chi, so this would be zero. Then that means this diagram is gs to power of 0. And then from here, we know that then this from string theory point of view will be gs to the power of 0. And so this is another way to see that the TP should be 1 over g string. Because we said before that the G Newton-- so we explained before G Newton would be order of gs squared. G Newton is the g string squared. So do you remember G Newton is g string squared? Good. But something remarkable about this diagram is that you can also view this diagram. So right now, we see it from this direction. Now we can also view it from the other direction. Viewed from this direction, so now try to think about this direction as the time and this direction as the sigma. So right now, we are seeing this as virtual time and this as sigma. This is a closed string. So now think about this direction as sigma and this direction as time. And then this is an open string with one end ending on brane two and the other end ending on brane one, and then going in the loop. 
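To record the tension estimate that comes out of this diagram (this is just the power counting in g_s quoted a moment ago, with numerical factors dropped):
\[
\mathcal{A}_{\rm cylinder} \;\sim\; g_{s}^{-\chi} \;=\; g_{s}^{0}
\quad (\chi = 2 - 2h - b = 0),
\qquad
\mathcal{A}_{\rm cylinder} \;\sim\; G_{N}\,T_{p}^{2} \;\sim\; g_{s}^{2}\,T_{p}^{2}
\;\;\Longrightarrow\;\;
T_{p} \;\sim\; \frac{1}{g_{s}},
\]
and on dimensional grounds, with \(\alpha'\) the only scale,
\[
T_{p} \;\sim\; \frac{1}{g_{s}\,(\alpha')^{\frac{p+1}{2}}}
\]
up to a numerical constant.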
So this is the one loop open string. So even though this is tree-level exchange in closed string. So here is the tree-level exchange in closed string. So here is one loop in open string. So this tells you the same process. You can really view it from two perspectives. From one perspective, it's the standard point of view, is that we exchange some closed strings. So we exchange some gravitons, some massless particles, some particles between these two. But there's another way to think about it is we say, because we have two D-branes here and because D-branes correspond in two places, open string can end, then I have open string connect between them. And this one loop open string is essentially corresponding to the vacuum diagram of those open strings connect between them. So when you add the vacuum energy of all those open strings ending between them, then you're effectively calculating the interaction between the two D-branes. So we have two completely different perspectives to look at the same process. And this is very, very deep and profound. Deep is profound. Because that means the process that you can think from closed string perspective can be fully understood in a different way from the open string perspective, in a completely equivalent way. And this is normally called the channel duality. So it's a very simple geometric fact about two dimensional surfaces. But physical significance is very profound. AUDIENCE: I have a question. HONG LIU: One second. Let me finish. And this is precisely the string theory origin of the holographic duality or the idea, say, of t we are going to see in a couple of lectures. Just because of this simple geometric picture. This side is gravity, and the D-brane is about gauge theory. And then we see gravity to be equivalent to gauge theory. That's something we are going to see later. Yeah? AUDIENCE: So you said the dynamics of closed string can be fully understood by open string. Is that why you say it's a closed surface formed by a closed string? How can you interpret? HONG LIU: Yeah. I'm talking about this particular diagram. I'm just saying this gives you a hint of something very profound. AUDIENCE: And one more thing. Why is the gs 0? HONG LIU: No. This is the surface of two boundaries. AUDIENCE: But that is for open string. HONG LIU: No. Chi is for everything. Chi is everything. Doesn't matter open or closed string. This is the universal formula. This is open. The open string just means we imposed the boundary condition on the open string. The topology is the same. We understand the topology is the same. Yes? AUDIENCE: Why do you call it one loop string? Where is the loop? HONG LIU: Because this is the open string. So this is the open. Think from this point of view. This is the open string on brane one and brane two. And then you go around once, go around in circle. So this is one loop. AUDIENCE: What's the free momentum? HONG LIU: What is this? What is this? This is one loop if you have a particle. And so you have a string. Then you go around the circle. This is one loop. And indeed, when you sum over such surfaces, you will lead to sum of all possible momentums, et cetera. So the field theory momentum is one of the modes you have to sum over when you do past integral over surface of such topology. Yes? AUDIENCE: Once you go to strings connecting different branes, your quantization conditions change. HONG LIU: We will talk about that. Quantization condition almost does not change. We will talk about that. We'll talk about that in a little bit. 
But right now, it's just intuitively clear you have open string connect between them and they just go around. You have open string connect between them. You can just go around the circle. AUDIENCE: But this correspondence doesn't count the tree-level around the open string. HONG LIU: Count what? AUDIENCE: If we generalize the tree-level open string. HONG LIU: No, no, no. This doesn't have to account for the tree-level open string. AUDIENCE: Tree-level open string contributes to the interaction? HONG LIU: No. Here, I'm only talking about this diagram and just say this hints that there are certain closed string processes can be completely described by the open string. So what want to extrapolate is that open string is a more fundamental description because the tree-level diagram in closed string can be described by the open string. And now if you can generalize that maybe everything closed string can be described by open string. But you don't want to do in the opposite way. Open string is open string. Good? Any other questions? AUDIENCE: Once you go higher dimensions, when you leave tree-level closed string can be distinct? HONG LIU: Yes. Things become more complicated. Things become more complicated. But the similar picture will exist. But nobody has made it work. Nobody has made it work at full string theory level to construct the whole closed string theory from the open string theory. Nobody has made it work. But there are many such kind of indications from the geometry of the surface point of view. So now let's talk about relaxing the strength of open string interactions. Actually, before I do that, now is a good place to go back to examine what we discussed at the end of last lecture. Now is a good time to go back to talk about what we did at the end of the last lecture. So in the last lecture, at the end, we described that one can work out the low energy effective action of the massless modes on the D-branes. So the massless modes on the D-branes on the gauge fields along the D-brane, so A alpha from 0, 1, to p. And then the scalar field and a label all the perpendicular directions. So you can write down effective action for them. I mentioned if you work out things carefully, then you find the prefactor is actually just the brane tension. And if last thing is excited, p plus 1. If last thing is excited, then you just have the vacuum energy, so you will have a one. So this is just the brane mass, the total brane mass. This is et. So if last thing is excited, then you just have the zero point energy, which is just tp times the volume. But now, if you also have gauge field, then based on general argument, you must have the Maxwell. And if the scalar field is excited, then you also have the action for massless scalar field. And then we mentioned that for example, you can consider special case. Suppose A alpha is not excited but the brane, rather than a scalar field that moves in a coherent way the same at all points on the D-brane, just phi a, is a function of t rather than x. So phi, in general. Suppose the brane coordinates our x0 and the p. So in general, A alpha is the function of x0, xp. And phi a is x0 and xp. So they describe you can have arbitrary profile on the world-volume. But suppose, say, let me consider the uniform situation which I only consider every point has the same behavior for phi. And then this s just becomes 1/2, dt just becomes dt. And D-brane plus 1/2 and D-brane phi dot squared. So this is precisely the motion of just a massive object. OK And m is just this guy. 
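For orientation, here is the low-energy effective action being recalled, written schematically-- the constant c and the relative signs depend on conventions that are not being fixed here-- together with the resummed Dirac-Born-Infeld form that is quoted next:
\[
S \;\simeq\; -T_{p}\!\int\! d^{\,p+1}x\;\Big[\, 1 \;-\; \tfrac{1}{2}\,\partial_{\alpha}\phi^{a}\,\partial^{\alpha}\phi^{a}
\;+\; c\,(\alpha')^{2} F_{\alpha\beta}F^{\alpha\beta} \;+\;\cdots \Big],
\]
which for a spatially uniform \(\phi^{a}(t)\) reduces to
\[
S \;\to\; \int dt\;\Big[ -m \;+\; \tfrac{1}{2}\, m\,\dot\phi^{a}\dot\phi^{a} \Big],
\qquad m \;=\; T_{p}V_{p},
\]
and whose resummation for slowly varying fields is
\[
S_{\rm DBI} \;=\; -T_{p}\!\int\! d^{\,p+1}x\;\sqrt{-\det\big(g_{\alpha\beta} + 2\pi\alpha'\,F_{\alpha\beta}\big)} .
\]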
So if last thing depends on the spatial coordinate, then the integration of the spatial coordinate becomes the volume. Combine the volume with that, becomes the mass, and then just becomes that. I think this is minus sign. This is plus sign. So as I mentioned, this is another way to see that the D-brane becomes dynamical, and that in particular, this phi describes the motion of the D-brane. So in fact, this result can be much, much strengthened. But I will only quote the result. I will only quote the result. It turns out for D-brane with constant. So as opposed to the D-brane move with constant velocity, so now you can also have a motion in the spatial direction. You have a constant of this and F alpha beta. You can also excite the gauge field, but the field strength is constant. Or this quantity is small. They don't have to be strictly constant, but at least their derivatives are small. In such a situation, one can actually sum all the higher order terms from string theory corrections. So this is just a low energy, just like field theory. And in such a situation, you can actually sum over infinite number of higher order terms. And what do you find? You find so-called wave sum of all infinite number higher terms. You find so-called Dirac-Born-Infeld action. You find that this effective action becomes like this. This is very [INAUDIBLE] result. So I just want to mention it. You can actually sum into this form. So you can sum into this form. And this g alpha beta is the [INAUDIBLE]. So let me just explain a little bit the physics of this formula. So let's consider the case of the phi is not excited at all, just phi for the constant, say, for example. Then this term vanishes. Then this g alpha beta just becomes eta alpha beta. And now you just have a square root, say, your Minkowski metric plus F alpha beta. Forget about this 2 pi alpha prime. This is just some dimensional factor. And then you just have eta alpha beta plus F alpha beta. And then you write the determinant. And suppose when F alpha beta is small, when alpha prime times F alpha beta is small, then you can expand it in powers of F alpha beta. It's a simple exercise but instructive exercise. You see that precisely reproduces the Maxwell term. But this will give rise to higher nonlinear terms. There will be higher order nonlinear terms. So this can be considered as a nonlinear generalization of the Maxwell theory. It turns out actually, this theory was considered in the '30s by this guy, Born and Infeld. Actually, maybe '30s or '40s. Anyway, prehistory. They invented as a way to avoid-- they want to avoid the similarity of the Maxwell theory. So in the Maxwell theory, if you have a charged particle, and then the location of the charged particle, then the field due to that charged particle is singular and the location of that particle. And so they want to avoid that singular behavior, so they invented this Born-Infeld action. And for many years, this action does not have any applications. But if you invent something nice, it will find its use. Just like in this movie, Jurassic Park, life finds a way. Life always finds a way. So that's Born-Infeld. Now let's set F equal to 0. Let's just look at g alpha beta. Now let's look at g alpha beta. So g alpha beta, we can write it in a slightly more transparent form. We can write a form which makes it a bit more transparent. I can write it as the following. So even mu, remember, is the Minkowski metric of the full space time. And I can write this as following. 
So g alpha beta equals eta mu nu times partial alpha x mu, partial beta x nu, with x alpha equal to x alpha-- the coordinates along the brane directions-- and x a equal to phi a. So if you look at this formula, you can see this is the induced metric for some brane embedded in the full Minkowski space time. And the x's describe such an embedding. This is a generalization of the induced metric formula we encountered before for the string. But right now, the only difference is that alpha, beta run over all the world-volume directions of the D-brane. And then this becomes the induced metric on the D-brane when it's embedded in the space time. And this x alpha equal to x alpha just means that when we embed it, we choose the world-volume coordinates to be the same as the space time coordinates along the brane directions. And in the perpendicular directions, the embedding is just phi a. And if you look at this, this is exactly that. It's exactly that because x alpha equals x alpha, and then you just get the eta alpha beta. And then for the other directions, you get this term. Is this clear? So when F is equal to 0, the integrand just becomes the square root of minus the determinant of g alpha beta. So this is precisely the volume element of the Dp-brane. Because this is the induced metric, this is just the total volume element of the Dp-brane. And we see this is precisely the relativistic generalization. So this is just a generalization of the Nambu-Goto action we wrote earlier, which is for a string. There it would be a two dimensional area, and here you just integrate over the volume element of the whole D-brane. So we see that this Born-Infeld action really describes the relativistic motion of a p dimensional object. And this Dirac-Born-Infeld, when you combine these two together, magically combines these two things into a single expression. Yes? AUDIENCE: So I recall you saying earlier that people have played with this idea of thinking about branes-- generically higher dimensional objects-- instead of just strings, but no one really understood the theory of these things because the topology and geometry were too complicated. So it seems to me, wouldn't you run into that same problem right here if this is indeed some generalization of the Nambu-Goto action? HONG LIU: Yeah, but we don't try to quantize it. At least we don't try to quantize this action. And we know how to quantize this action. And this is just our ordinary field theory. AUDIENCE: I have a question. Here, we must impose the big x as the coordinates in the target space. HONG LIU: That's right. AUDIENCE: So phi a must be kind of a constant? I mean, why should there be a constant part-- HONG LIU: No. If the derivatives of those things are not small, then there will be many other terms. This will not be the only action. AUDIENCE: Given a constant. HONG LIU: Sorry? AUDIENCE: You said with a constant. HONG LIU: The partial alpha phi a equal to constant. AUDIENCE: Oh, equals a constant. So that means-- HONG LIU: No. What I'm saying is that if these are constants, then this is our exact string theory action. And when these are not constants, then this is a leading approximation, and there will be higher order terms which depend on their derivatives. Yes? AUDIENCE: Sorry. One thing I just don't understand-- why is it that we don't want to try to quantize anything? Shouldn't it be quantized in principle? These are the sort of classical analogs of things you want to quantize. HONG LIU: Yeah.
AUDIENCE: So this is to say that this object that we don't really know how to quantize is-- we just don't do it because we don't know how. HONG LIU: Yeah. AUDIENCE: I see. OK. It's not because-- fair enough. HONG LIU: Yeah. You should try anything. And only those people who have succeeded in the history books. Only a few have won the battle in the history books. And if you just fail the battle, you're not in the history book. So people have tried this but failed it, but that won't be written in the books. AUDIENCE: Sure. AUDIENCE: Sorry. So you did the square root by summing over all the massive terms in the-- HONG LIU: Sorry? AUDIENCE: You get the square root term by summing the series, including the massive fields? HONG LIU: No. This is still the gauge field and the phi. AUDIENCE: Right. So how do you get it? What's the series that you're summing? HONG LIU: Hm? AUDIENCE: What's the series that you're summing? HONG LIU: Oh. I'm just saying in the string theory, typically you don't start by f squared. You have f cubed, f four, et cetera. You can sum all of them together. Even for the massless mode, these are just leading terms. And these terms would be the smallest number of derivatives, and so they dominate at low energies. But in general, even just for the effective serial massless mode, you can have many, many other terms. AUDIENCE: I have a question. If we assume that partial alpha phi a is a constant, then we can solve out the phi a is proportional to x alpha. But how can you assume they're just the coordinate in target space? HONG LIU: No, no. This is a function of alpha. AUDIENCE: Yes. But since it's a function of x alpha, why it can be regarded as the coordinate in target space? HONG LIU: Sorry. I don't understand. AUDIENCE: All the coordinates should be independent in target space. HONG LIU: Sorry. I don't understand. They are independent. These are the virtual coordinates. These are the volume coordinates of D-brane. And these are the target space coordinates. I'm just choosing the function of the target space coordinates. I'm choosing here just to be identical to the world-volume coordinate. And this one I choose to be some function of the world-volume coordinate. Of course I can do that. Any other questions? So again, this highlights that D-brane is really a dynamical object. In fact, at low energies, they move like [INAUDIBLE] motion. And they actually move relativistically if you give enough velocity, et cetera. Because of the fluctuations of the D-brane, they become a really full dynamical object. They have a mass. They can move around. And now you can deform their shape. If you have enough energy, you can bend them, et cetera. You can do whatever you want. So let me mention one last thing. So you may ask, why somehow those fields which describe the motion of the D-brane, they're corresponding to the massless modes on the world-volume of the D-branes? Whether this is a coincidence, or why it somehow happens to be the massless mode on the D-brane which describes the motion of the D-brane. So this is not an accident. This is not an accident. The reason is that-- so why modes describing motions of D-branes appear as massless modes? So this is not accident. So underlying reason, it's because the underlying Minkowski space is translation invariant. So that means that no matter where you put the D-brane, it should be a well-defined configuration. Should be a well-defined configuration no matter where you put on the D-brane. 
Then that means that the [INAUDIBLE] action for the phi cannot contain a potential term. They cannot be potential term. They should not be, say, somewhere is the minimum, somewhere is the maximum, cannot happen. Everywhere must be the same. So it means the dependence on y can only be derivative. Can only depend on derivatives. And of course, at low energies, if you have derivatives, then can only be the massless particle. So translation invariant. So this means that any phi a equal to constant should be allowed configurations. That means cannot have potential. So max term is like a potential for phi. To say in the fancy words of Quantum Field Theory II or Quantum Field Theory III, that the phi a, in other words, phi a are the Goldstone bosons for breaking translation symmetries. So previously, Minkowski space is translation invariant. And now if you put a D-brane there, then you break that translation symmetry. The location of the D-brane breaks that translation symmetry. If I even put the D-brane anywhere, then that means the modes, the dynamics control the location of the D-brane must not have any potential, can only have derivative terms. So in other words, when you put the D-brane in, you spontaneously break the translation symmetry on the line in Minkowski space. So let me mention one thing. Then we can have a break. I'll mention one quick thing. So let me say a few words on the strength of the open string interactions. So previously, we described that the closed strings, they interact by such joining and splitting procedure. And the strength here is capped by gs. So the closed string coupling is essentially the gs, which describes such a process. So if you have an open string, of course you have a similar process, just string ends joined together. You can join string ends together. So here, you really just have open string. So now these lines are the boundaries of open string, or the endpoints of open string. So you have two open strings joined together from another one. So let's call this interaction go describing the interaction of the open string. So the question is, how is this go related to gs? And there's a single way we can figure it out. So let's consider the simplest situation. Just have open string propagate in time. Again, this is two boundaries open string. We just propagate in time. Now let's consider a more complicated process. So the open string propagates in time. So this is just a simple surface with one initial open string and one final open string. And you can see the complicated process because we have a hole in the middle. So now the string worksheet is like this, just this part. We have a hole in the middle. And this is another configuration to have some initial open string propagate to some final open string. So now we know, by counting we did before described here. So here, we're adding one boundary. We are adding one boundary. So this adds a boundary. So that means we must add a factor of gs. Because from this formula, this is g minus chi. And chi is minus b. So if we increase one boundary, then you increase a factor of gs. But we can also view this diagram as a single open string comes in. The opposite of this splits into two open strings and then they close together. So one split operation, and the one join operation. So that should correspond to go squared. So then this means that we conclude that gs must be proportional to go squared. The open string coupling strings is the square root of the closed string interaction. Yes? 
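The counting just used can be put in one line (purely topological bookkeeping, with numerical factors dropped):
\[
\text{add one boundary:}\;\; \chi \to \chi - 1 \;\Rightarrow\; \text{extra factor of } g_{s},
\qquad
\text{one splitting} + \text{one joining} \;\Rightarrow\; g_{o}^{2}
\;\;\Longrightarrow\;\; g_{s} \;\sim\; g_{o}^{2}\,.
\]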
AUDIENCE: What's the strength of the process when closed string becomes open string and vice versa? HONG LIU: Sorry? AUDIENCE: What's the strength of the process when closed string becomes and open string? HONG LIU: Right. Yeah, you can consider such process. Again, you can just do it by counting the topology of the surface. Such process can exist. OK. Then let's have a short break. So what time is it? It's 38. When should we start again? 41? OK, 41. Let's start at 41. So we have talked about D-branes, et cetera. And we have already seen some remarkable aspects of the D-brane, including this channel duality between the closed string exchange can be considered open string loop. And now we are going to see a lot of magic of the D-brane. And this comes when you put several D-branes together. So normally, our conventional intuition says if you find some particle, say in this case, you find the D-brane. So if you put two particles together, nothing much really changes. It's two particles. Put three particles together. Not much changes. It's three particles. But when you put multiple D-branes together, things change a lot in a very profound way. So now let's consider just two D-branes. So let's consider two D-branes. Let's consider example. So let me first just tell you the naive intuition. Suppose you have D-brane one, D-brane two. So for this one, we have a u1 gauge field. For this one, you have a u1 gauge field. Because each one, we have a gauge field, a Maxwell. When you put together, from conventional wisdom, you say, maybe I just have two Maxwell. From conventional wisdom, you two Maxwell. Naively, if I put them together, I just have two Maxwells. 1 plus 1 equal to 2. But in string theory, 1 plus 1 equal to 4. It's actually equal to 2. It's also equal to 2, depending on how you think about it. Anyway, one way to think about it is 1 plus 1 becomes 4. So to see 1 plus 1 become 4 is very easy. So let's consider these two branes on top of each other. But in order to distinguish these two branes, I just separate them a little bit. But you should really think of them on top of each other. And so now you have four types of strings. You can have string going to 1, 1, going to 2, 2, then going to 1, 2, going to 2, 1. So 2, 1 and 1, 2 are different because the oriented string. So I put arrow there. So this is from 1 to 2. This is from 2 to 1. So we have four types of strings-- 1, 1, 1, 2, 2, 1, 2, 2. AUDIENCE: Why is oriented-- HONG LIU: Hm? AUDIENCE: Why the 1 to 2, 2 to 1 are different? HONG LIU: It's because for this string, sigma 0 here. For this one, sigma pi there. For this string, sigma 0 there. And for this one, sigma pi there. Let me just elaborate on this point. So suppose I have a string like this. Then this string is sigma 0 point ending on 1, sigma equal to pi ending on 2. But if I have a string like that, then there's a sigma 0 ending here and sigma pi ending there. So you have four types of open strings. And now if you think about how we quantize those strings, and the four types of open strings, they actually have identical spectrum. Because for all of them, the boundary conditions are exactly the same. Because the boundary conditions only know the location of the D-branes. So all four types of open strings have identical spectrum. So in other words, each string excitation-- say this is the state on the worksheet-- each state becomes four states because I can label IJ. Now suppose I use I and J to label 1 and the 2. I and J can be 1 and 2. 
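In other words (with I and J labeling the branes on which the sigma equal 0 and sigma equal pi endpoints sit):
\[
|\psi\rangle \;\longrightarrow\; |\psi;\, IJ\rangle,
\qquad I,J \in \{1,2\}
\quad\Longrightarrow\quad 4 \ \text{states for each open-string mode}.
\]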
So depending on whether this is 1, 1 string, or 1, 2 string, or 2, 1 string, or 2, 2 string. So this is what I said 1 plus 1 equal to 4. Because naively, you would say I have two massless modes. But now I have four. For example, the massless modes become four copies of them. In other words, you can think each open string excitation a 2 by 2 matrix. I can use 2 index to label them. In particular, for example, the corresponding fields-- so each string excitation corresponding to some field. For example, the gauge field associated with this now has two index, I J. And similar with phi a I J. Of course, this generalizes immediately to if you have n branes, then just becomes n times n matrices. So 1 plus 1 plus 1 to n becomes n squared. So now let me give some remarks. So this basic structure turns out to be, again, very, very profound. Now let me give some remarks. So there's a reason I call-- so this is something with the 2 index. So of course, you naturally call it a matrix. But there's another reason to think about this really as a matrix. It's because the strings, as we were doing there, the open strings, they interact by joining their ends. So this naturally leads to-- when those strings interact with each other, and those parts naturally just emerges as a matrix product, I, J indices. So it's easy to see. So let me just draw that. Let me just do it here to save some time. Suppose this is I. This is J. So this is sigma equal to 0, sigma equal to pi. And the sigma equal to pi joins with stigma 0 to end of the other one. But of course, if you want to join them together, their J's have to be the same. This is K. Then you go to I, K. And when they join together, then you sum of all possible J's. Then this is like a matrix product. So if I draw it in the diagram not very well. Now let me separate I, J, K to be three things. But they don't have to be separated. I, J, K can also be the same. But in order to emphasize this picture is that you have a string to go from I to J. Suppose this is I, this is J, this is K. Go from I to J, then from J to K. So sigma 0, sigma pi. And the pi end joins with the sigma 0 end. And then here, you get the string. So that diagram roughly can also be think of a diagram like this. Two strings join into one string with index I and K. And the I, K, K can all be the same. I just make them different to make it clear. And of course, when you join J together, you have the sum of them because they can be in principle all possible J's. So it naturally appears as a matrix product. Just follows by the nature of string interaction. And now there's another remarkable thing is that if you can see the phi a, so the same thing applies for a alpha applies to any field. It's that 1q-- so this corresponds to a string with sigma equal to 0 at 1 and sigma pi. So this corresponds to a 1, 2 string. And then we can also think about the 2, 1 string. So it turns out that these two can be considered as complex conjugates of each other for the following reason. Again, now let me just again separate this 1 and 2 to make it clear. So this is a 1, 2 string. So this is a 2, 1 string. So I claim string interactions defined by this way have the following symmetry. So string interactions described by joining the ends or splitting the ends have the following symmetry. It's that I can associate each brane by a phase factor. So I explained to you i theta I. So I labeled to the brane and the theta can be some different-- can be a phase factor. 
And now the rule is that if the sigma is equal to 0 and on that brane, then I multiply it by the exponential of i theta I. And the sigma equal to pi ends on that brane, ends on I. Let me write it more explicit. So if sigma 0 ending on I, then I multiply by a phase factor, exponential i theta I. And if the sigma equal to pi factor ending on I, then I multiply by exponential minus i theta I. So let's consider this operation. And I claim this operation is the symmetry of the string interaction. So let's first consider if you just have a single brane. So if you have a single brane like this, then just nothing changes because you multiply one end by i theta and the other by exponential minus i theta, does not change. But now, if you have such kind of interactions, because the sigma 0 and sigma pi ends join together, and they can only join if they are ending on the same brane, then those factors always cancel each other. And so this would be a symmetry. Is this clear? So under this operation, phi a ending on the same brane is invariant. And phi a I J then transforms by a phase factor theta I minus theta J of phi a I J. And the phi a J I transforms as a factor minus I theta I minus theta J. So we can actually think of them as complex conjugates. So they're transforming opposite way under this phase change. Yes? AUDIENCE: And this is because we're considering them to be u1 branes? HONG LIU: Sorry? AUDIENCE: Is this because we're considering them to be u1? HONG LIU: No. This is a good point. I will talk about this more. Right now, let's think about each brane separately. AUDIENCE: I have a question. In principle, can we write the interaction of open string with the same note, J, J by random number? HONG LIU: Sorry. What do you mean by random number? AUDIENCE: Say it's I, J, K, L. HONG LIU: No, no, no. The strings can only join together if they're ending on the same brane. AUDIENCE: Oh. OK. HONG LIU: So the J's have to be the same. So this guarantees that this will symmetry. Good? So in this sense, they're complex conjugates. And now we can build on this a little bit further. So this actually works. Doesn't matter whether the branes are coincident or not coincident. This is a generally true. Now let's consider all the branes are coincident with each other. So for coincidental branes, since branes are indistinguishable from each other, so they are higher dimensional generalization of what we call identical particles. So if they're indistinguishable from each, we can shuffle their indices. We should have the symmetry to reshuffle their symmetries. So whether we call this 1, 1 or this 1, 2 or this 1, 1 should not matter. So if I combine these two facts, two observations together, then when we have n coincidental branes, then there's in fact u(n) symmetry. If that whole string interaction is invariant, say if I have psi, I, J goes to say U, I, K, U, J, L, star, psi, K, L. So back here on U just corresponding to I reshuffle all the indices. So I have to do the same to U. So I reshuffle the two indices in the same way. But I have a star here. It's because of this reason. Because in some sense, the sigma 0 and sigma pi, they're only symmetries if I multiply opposite phase factor. I can rewrite this as a matrix notation. If you think about each side as a matrix, then this is the symmetry corresponding to psi U psi dagger. And the U can be arbitrary unitary matrices. So in fact, when you have n coincidental branes together, there's u(n) symmetries for the string interactions. 
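The joining rule and the symmetry just described can be summarized compactly. Here \Psi_{IJ} and \Phi_{JK} are schematic labels for two open-string states (or the corresponding fields) with the endpoint indices shown; they are placeholders for this illustration rather than notation used elsewhere in the lecture.
\[
(\Phi\,\Psi)_{IK} \;=\; \sum_{J=1}^{N} \Phi_{IJ}\,\Psi_{JK},
\]
i.e. gluing the \sigma = \pi end of the (I,J) string to the \sigma = 0 end of the (J,K) string and summing over the shared label J is exactly matrix multiplication. The phase assignment acts as
\[
\psi_{IJ} \;\to\; e^{\,i(\theta_I - \theta_J)}\,\psi_{IJ},
\]
which leaves every joining product \sum_J \psi_{IJ}\,\psi_{JK} invariant because the phases at the shared endpoint cancel. For N coincident, indistinguishable branes this extends to
\[
\psi_{IJ} \;\to\; U_{IK}\,U^{*}_{JL}\,\psi_{KL}
\quad\Longleftrightarrow\quad
\psi \;\to\; U\,\psi\,U^{\dagger}, \qquad U \in U(N).
\]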
So to say it in a more fancy mathematical language is that each open string excitation transforms under the adjunct representation of this u(n). So this is like a join representation. If you have u(n) symmetry, then this is like a join representation. So on the string worksheet, this is really a global symmetry. So it's just a phase factor associated with each n. There's nothing. It's a global symmetry. But the remarkable thing is that in the space time, this becomes a gauge symmetry. So this tells you, because of the presence of this u(n) gauge symmetry and because of each mode transforms under a join representation of some u(n), that as a space-time field, they also must transform under a join representation of some u(n). And interpreting the space-time, then this u(n) must be a gauge symmetry. On the worksheet, it's a global symmetry. But in space-time-- so let me just write it down. In space-time-- or in other words, in the world volume of D-branes, this u(n) must be a gauge symmetry for the following reason. Because the only way we know-- for example, this gauge field becomes transformed under a join representation of some u(n) symmetry. And all the excitations will have this symmetry. And then the only way we can make sense the gauge field under such symmetry is that this is a gauge symmetry and this is the gauge boson for that symmetry. Is it clear? So let me just say it in words. It must be a gauge symmetry. And in particular, a alpha I J must be the corresponding gauge bosons. Because this is the only way we know how to make sense, because this is the only way we know can happen at low energies. Some gauge fields transformed as a matrix interact with each other. And this can only be Yang-Mills theory. And if this is Yang-Mills theory, then this must be a gauge symmetry. So this is the basic argument. And at low energies, we must have Yang-Mills theory. So something remarkable happens. So each D-brane, when we separate them. is a Maxwell theory. When you put them together, they become Yang-Mills theory. And somehow, they become non-Rubinian. And everything comes from, in the very trivial way in string theory, you just count the indicies. But the physical implication is profound. And this can be confirmed, again, by just starting the explicit string theory scattering amplitude. You can calculate the scattering of 3a in string theory. Then you find it's precisely-- at low energy, precisely given, but the same vertex as the Yang-Mills theory, et cetera. So you can do that. Then you find the low energy effective action can be written as the following. Some Yang-Mills coupling trace. Let me just write down the Yang-Mills theory. You find the low energy effective action can be written in this form. So this is standard Yang-Mills field strings because now everything is a matrix. Everything is a matrix. And both A alpha and phi a now are embedded in matrices. So this g Yang-Mills is the Yang-Mills coupling which describes how the gauge fields interact with each other. And obviously, this should be related to the open string interaction because-- I think I have already erased it. This kind of interaction, joining the string. This corresponding to two strings joined into one string, and this proportional to g0. And this, from the field theory point of view, it just controls the interaction of the three A's. So that must be the Yang-Mills coupling. So this must be the Yang-Mills coupling. And then this should be related to gs to the power 1/2, which we just derived slightly earlier. 
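For reference, a standard way of writing the bosonic part of the low-energy action just described is the following. The overall normalization and the placement of g_{YM} differ between conventions, so treat this as a schematic form rather than the precise action derived in the lecture.
\[
S \;=\; -\frac{1}{g_{YM}^2}\int d^{\,p+1}x\;\mathrm{Tr}\Big(
\tfrac{1}{4}F_{\alpha\beta}F^{\alpha\beta}
+\tfrac{1}{2}D_\alpha\phi^a D^\alpha\phi^a
-\tfrac{1}{4}[\phi^a,\phi^b][\phi^a,\phi^b]\Big),
\]
\[
F_{\alpha\beta} = \partial_\alpha A_\beta - \partial_\beta A_\alpha - i[A_\alpha, A_\beta],
\qquad
D_\alpha\phi^a = \partial_\alpha\phi^a - i[A_\alpha, \phi^a].
\]
Since [\phi^a,\phi^b] is anti-Hermitian, the commutator term gives a potential V \propto -\mathrm{Tr}\,[\phi^a,\phi^b]^2 \ge 0, a point that becomes important below.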
So now, on dimensional ground, the g Yang-Mills square can be written as-- let me write it here. This is an important formula. On dimensional grounds, the g Yang-Mills square, I should be able to write it as gs. So on the Yang-Mills coupling, we will use the standard convention that A has the dimension of mass. And then this is a dimension 4 object. And this is a dimension p plus 1. And so the Yang-Mills coupling, you can deduce the dimension of the Yang-Mills coupling there. And it just turns out to be p minus 3 dense-- again, because alpha prime in string theory is only length scale. So the alpha prime must come from here. And then times some constant. So again, dp is just some numerical constant. And you can see explicitly that when p equal to 3, then you have d3 brane. Then the world volume theory is four dimensional. Then we recall that in four dimensions, Yang-Mills theory is dimensionless. QCD [INAUDIBLE]. Good? Any questions? Yes? AUDIENCE: What to do with phi h? HONG LIU: Hm? AUDIENCE: What to do with phi h? HONG LIU: What to do with phi h? What do you mean what to do with phi h? AUDIENCE: Scalar color particles? HONG LIU: Yeah. I forgot to mention here is the standard derivative, covariant derivative. So the alpha phi a is just the standard particle phi a minus i a alpha phi a. So it's the standard gauge covariant derivative. AUDIENCE: Shouldn't phi be more fundamental presentation of-- HONG LIU: No. Everything is to the join because everything has two ends. AUDIENCE: So how to interpret phi, then, in our-- HONG LIU: Hm? AUDIENCE: From the present zoology of particles, where to put phi, the observed particles? HONG LIU: Sorry? AUDIENCE: If a is just the gauge boson [INAUDIBLE] or something, what is phi? HONG LIU: Phi is some scalar field transformed on the join representation of the gauge group. It's a matter field describing the motion of the brane. AUDIENCE: I have a question. There you say that we can reshuffle their indexes. So that symmetry should be permutation symmetry? HONG LIU: No. AUDIENCE: Why [INAUDIBLE]? HONG LIU: But I reshuffle because I say it in the more heuristic way. But normally, if the indices is not-- these are the states. And you can just swivel-post them. A different i, they should be the same thing. They're just corresponding to relabeling, I'm just saying. The lateral action of anything on the state, of course, is the unitary transformation. AUDIENCE: Is there anything to do with that? HONG LIU: No. That thing is related to this star, why we put this star here. This is the reason why we put the star there, because the two endpoints, they should transform in opposite way. Yes? AUDIENCE: Since the branes become dynamical, can't they fluctuate in different ways and so no longer coincide? HONG LIU: Sorry? AUDIENCE: Since the branes are dynamical, you put them all in one place at first. But can't they now start fluctuating and separate? HONG LIU: They can certainly. AUDIENCE: They're no longer symmetric. They become distinguishable. HONG LIU: They can certainly start moving apart if you give them some initial motion. But the fluctuation, the system is isotropic. There's no place for them to-- they will fluctuate, but they will still at that point on average. There's no preferred direction for them to go unless you give them a direction. You say, I want them to go in that direction. Then you push them. AUDIENCE: But you can still think of them fluctuating kind of separately, separating in their fluctuations? HONG LIU: Sure. This a are their fluctuations. 
Phi are their fluctuations. AUDIENCE: Is this commutative term belonging to the effective action an interaction between the D-branes or is it-- HONG LIU: Yeah. Good point. Let me just comment on this term. So if you think about it, as we said before, a D-brane, no matter what dimension of the D-brane, the story when we quantize them, they're almost the same. It's the same spectrum. If you have a space-feeling brane that everything is A alpha, and then if you have some lower dimensional brane, some of them become scalar field, et cetera. So essentially, they all have the same dynamics. So this interaction can be considered just come from here. So you can start with a space time feeling brane, and then you go to lower dimensions. And then this just can be considered to come from there. It's part of the gauge theory. AUDIENCE: So the whole low energy actions are the action of supergravity plus this D-brane interaction, you can think of the whole low energy action? HONG LIU: You mean if you include the closed string? AUDIENCE: Yeah. HONG LIU: Yeah. That's right, if you have D-branes. That's right. This is effective action on the D-brane. This is the effective action on the D-brane. AUDIENCE: I mean if you extend D-brane [INAUDIBLE], then where does this action come from? HONG LIU: No, that's what you said. AUDIENCE: It's just two actions crossed together? HONG LIU: Yeah. Good? AUDIENCE: Excuse me. How come it has the term in action that's proportional to phi without derivative? HONG LIU: No. They have derivative. This is covariant derivatives. AUDIENCE: The commutator. HONG LIU: No, but phi have covariant derivatives. AUDIENCE: But the next time. HONG LIU: Yeah? AUDIENCE: That thing appears to be proportional to phi without derivative. HONG LIU: Yes? AUDIENCE: But this is the Minkowski space or something. HONG LIU: Yeah. That's a very good point. I'm going to mention that point. But the key is that this particular potential-- this is a very good point. I'm going to mention that. I will mention that in a few minutes. I do because I do want you to do your p-set. So now let's consider separating the branes. Now we will consider separating the branes. Again, let's consider the situation we just have two of them. So at the beginning, they're coincidental. And then now let's separate them in some direction. Let's call that direction x. So let's say they separate by some distance, d. This is 1, this is 2. So of course, this 1, 1 and 2, 2 string, nothing changes because they're just still ending on the same brane. But those strings are now different. So now 1, 2 and 2, 1 strings become different. So 1, 1, 2, 2, exactly the same as before. And the 1, 2 string. For example, let's consider the 1, 2 string. Then the boundary condition changes in the way I have sigma equal to 0, say, in the x direction. Sigma equal to 0. Tau, say, is at some location. Let's call this x0. And then x sigma equal to pi tau then becomes x0 plus d. So now it means when you contact this string, you have to do a slightly modified boundary condition. So you have start with x equal to 0. Now you must include a term depend on sigma. And again, as before, we take the xL to be minus xR and periodic, et cetera. So for sigma equal to 0, then this boundary condition is satisfied. But in order to satisfy the boundary condition at pi, then you now need this w equal to d divided by pi. You need to develop d divided by pi. So now, you have a sigma term. So that will change your [INAUDIBLE] condition. 
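Explicitly, for a (1,2) string stretched along the separation direction x, the boundary conditions and the resulting sigma-linear term are as follows (a sketch in the conventions above, with x_0 the position of the first brane and d the separation):
\[
x(\sigma = 0, \tau) = x_0, \qquad x(\sigma = \pi, \tau) = x_0 + d
\]
\[
\Longrightarrow\quad x(\sigma,\tau) \;=\; x_0 + \frac{d}{\pi}\,\sigma + (\text{oscillator terms}),
\]
so the mode expansion acquires a term linear in \sigma with slope w = d/\pi, and it is this extra term that feeds into the mass-shell condition discussed next.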
So remember the [INAUDIBLE] condition we had before is something like this, q alpha prime p minus, say, is p plus 4 pi alpha prime. Say for the open string. I'm writing the open string version, which is, up to some numerical factor, the same as closed string we wrote down before. So now, because of this term, then there's additional contribution on the right hand side. It makes sense because now string has to be stretched over some distance. That costs energy. So take one minute to do it yourselves. Just plug this into here. You only need to look at the behavior of this term. Just plug in there. Take you, say, five seconds. You find that the massless condition now has one more term plus the rest, as before. And that means now all the previously massless particle, this here with the corresponding a alpha and phi a, are no longer massless. Because previously, they were massless because those terms are 0. But now you have one more term. And they have a mass given by M divided by d divided by 2 alpha prime. Just take the square root of that. And because this is precisely the d times the string tension, because 1 over 2 pi alpha prime is the string tension. So this is exactly the energy we expect, just from a classical picture. You have a string stretched between the two. Then this is the tension times the length of the string. So now you can also easily understand what's going on from field theory point of view. And now you only have two sets of massless modes now rather than four. So what's happening is that now, the gauge symmetry is broken from u(2) to u(1) times u(1). So the separation of the branes essentially corresponding to the Higgs mechanism in the following sense. When you separate the brane, I said before phi should be interpreted as the [INAUDIBLE] location of the brane. But you separate the brane. That means one of the five fields corresponding to the x-- the x is in the transverse direction for the brane. One of the five fields corresponding to the x must develop expectation value. And that expectation value then gives rise to this mass. So this is precisely a Higgs mechanism. So now let's go back to the question what this potential term means, whether we can actually, as we said before, because of the translation symmetry, in principle, we can pull the brane anywhere. And now, I'm out of time. So let me just say, if you look at this potential, this becomes 0 precisely when phi a and phi b all become commutes. So that means we can diagonalize the phi corresponding to all the transverse directions. You can diagonalize phi corresponding to all the transverse directions. And then they correspond to the location you put all your branes. Go do your p-set, and you will see a more explicit discussion of this. And then one final remark. At the beginning, you started with n branes together. And then because of this, we can separate them. So we can find a solution which, say they commute. We separate them into different stacks, n1, n2, n3, et cetera. So this corresponds to the configuration of phi. So let me just write all phi as a vector. Say there's a1, n1 of them at location a1, and n2 of them at location a2, and ak. If I separate them into n stacks, then will be like this. So n1 of them at location a1, and n2 of them at location a2. You can check. So this is n1 times n1 identity matrix. This is n2 times n2. And you can check that such a configuration does satisfy this condition so that you actually can separate the brane into such configurations. 
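Two formulas summarize this discussion: the mass of a stretched string, and the block-diagonal scalar configuration that separates the branes into stacks (the a_i here denote the stack positions, as in the lecture).
\[
M \;=\; \frac{d}{2\pi\alpha'} \;=\; T_{\text{string}}\, d, \qquad T_{\text{string}} = \frac{1}{2\pi\alpha'},
\]
\[
\langle\phi\rangle \;=\; \mathrm{diag}\big(a_1\,\mathbf{1}_{n_1},\; a_2\,\mathbf{1}_{n_2},\;\dots,\; a_k\,\mathbf{1}_{n_k}\big),
\qquad n_1 + n_2 + \cdots + n_k = n,
\]
which manifestly satisfies [\phi^a,\phi^b] = 0 and so costs no potential energy.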
And in this case, the gauge symmetry is broken to U(n1) times U(n2) times, all the way up to, U(nk), because only the block-diagonal parts survive as massless modes. And all the other strings, the ones stretched between different stacks, become massive. OK. Let's stop here.
MIT 8.821 String Theory and Holographic Duality, Fall 2014
Lecture 15: Physics of D-branes, Part III
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HONG LIU: OK let's start. So let me just mention brief-- first let me just mention briefly, this story regarding the D-brane and the Higgs mechanism, which we discussed at the end of last lecture. So there was a P-set problem which was related to this. Yes, I expected maybe the P-set problem was a little bit tough because we did not go to many details. We did not go to many details. Yeah, so in grading we will be very flexible because we-- So let me just say a few more things about that, just to make sure-- So here is the action. If you have n-- say if you supposedly have N D-branes together-- and then in such case-- so this is a low energy defractive action for the massless degree of freedom on the D-brane for which you have a gauge field, which is a matrix, and then you also have a scalar field, a number of scalar fields, which are also n-by-n matrices. And the number of scalar fields is the same as the number of transverse dimensions. And then we also discussed that the scalar fields can be considered as describing the transverse dynamics of the brain. Say, let's consider you have-- let's consider you have a single brane and there's only one transverse direction. Say let's call it the phi direction. There's only one direction, say phi 1 direction. Then essentially the location of the brane then essentially the expectation value of phi 1 can be considered-- say suppose this is the direction of x1, and the corresponding scalar field is phi 1. And then the expectation value of phi 1 can be considered as the location of the brane and this phi is the string which ending on this brain and come back to itself. So now let's consider if you have two branes-- now consider you have n-branes-- again, b is only one transverse direction. I now have n-branes with one transverse direction, which again is x1. And now you can just separate them. So now let's consider the process which you separate these n-branes into-- say completely separate them in different locations. So this is x1, x2, to xn. So this situation can be considered as a situation. So in this case, the phi is the-- phi is the m-by-n matrices. And then this can be considered as phi having an expectation value which is only in the diagonal entry, and each that diagonal entry describes the location of each D-brane. And it's natural to think that why this is the diagonal entry because the diagonal entry describes the string, which ending on the brane itself. So this is a one-one string, and this is a two-two string, and n-string. And so the diagonal string is the excitation on the brane itself. And the off-diagonal degree of freedom corresponding to, essentially corresponding to the excitation between the string between the branes. And so it's natural that the location of the branes, and then just corresponding to the diagonal entries. And so now we can generalize to the general dimensions. Now suppose there are two transverse dimensions-- let me call them x1 and x2-- and then now you have two scalar fields. So now the story is a little bit more complicated because when you have two transverse directions, say if you have more than two transverse directions, now you have nontrivial potential term. 
And under the law of the configuration, say suppose it's five, have low space time-dependent on the law of configuration, say the minimum with the potential, the mass was corresponding to the configuration, which the commutative phi a and phi b to be zero. Right now, say we have two transverse directions, so we only have two, but you can generalize to any number of transverse directions. So that means that the phi, or the phi equals 1, or the scalar field, they have to converge with each other. That means they can be simultaneously diagonalized. That means they simultaneously diagonalized. That means we can choose a basis, say if I write the phi, the collection of all the D-branes, then I can simultaneously diagonalize them, and then have the following form. So each x1 goes one into-- so each vector goes one into-- for example here, is x1-- I should use a different notation, so let me call the spacetime y1, y2-- and say this is x2, et cetera. So this defines the configuration, which you have one D-brane here and one D-brane at this point, and at x3, and et cetera. And again, it's only the diagonal entry that describes the location of the D-branes. The diagonal entry describe the location of the D-branes. And these, because they are all diagonal and the commutator is zero, and these are the law of the configurations. So this is a very important lesson. It tells you that even you have a potential, still the D-brane, you can put it anywhere you want. And you have the freedom to put the D-brane-- you don't break translations image. Any questions on this? Of course when you say, when the scalar field develops expectation value and then your original un gauge symmetry's broken, and then broken into the just in the standard story to whatever [INAUDIBLE] group allowed by this-- so if all the x1-- if all of them are different, then the remaining just be 1 to the power n. Because now you just have each brane separated, then you just have 1 to the power n. Good? So not let's move to new stuff. So now let's say I little bit about the D-branes in super string theory. So what we said so far actually applies to D-branes in bosonic string and in superstring. Now let's say a little bit more above the D-branes. So as we discussed before, when we quantized the open string, the D-branes in bosonic string always have a tachyon-- always have open string tachyon, no matter what D-brane you have because as we discussed before, the quantization, the zero-point energy on the string is the same no matter which dimension of the brane is et cetera. And so you always have open string tachyon. So that means, as we said before, tachyon means that you are seeking-- if you have a scalar field which have a negative mass squared, that means you're essentially sitting at the top of the potential-- in the top of some potential. And so that's why you get the negative mass squared. And so that means you have-- this is an unstable configuration. Because if you, say, give the tachyon a little bit back, then the tachyon want to roll down, and then you move away from this configuration. You move away from this configuration. So when you go to superstring-- so we also discussed before, for the bosonic string, the close string factor also has a tachyon, and then you can get rid of those tachyons by going to the superstring. So similarly, going into the superstring, the D-branes of certain dimensions-- not for all dimensions, but only for some dimensions-- do not have tachyons. So they are stable. 
So their lowest modes are just massless modes. They don't have negative mass square modes. In particular-- so this I will-- this is a long story. This is a somewhat long story which I will not explain here, but I will just make the statement. In particular you can show that for those stable D-branes in superstring, they always carry a conserved charge. So I will explain where does this conserved charge come from, and they always carry some conserved charged. And which is also a reason why they are stable. And the second feature is that the worldvolume theory is supersymmetric. So in addition to this phi-- so in addition to this say A alpha phi we are looking at here, A alpha and phi a, which we already see in the bosonic string, they are also massless fermions. So the open string excitations also include massless fermions. They also include massless fermions. And in particular, the low energy theories, all of them together, is the supersymmetric version of this. It's a super-Yang-Mills theory, supersymmetric version of this. And it's some kind of super-Yang-Mills theory. So now let me elaborate a little bit on the first point. They carry a conserved charged. Let me elaborate a little bit on the first point. So maybe go to the one, elaborate more on one. So let me just remind you again, the bosonic part of the of massless closed superstring spectrum, which we briefly mentioned before-- so this is so-called type II string, so-called type II string. When I say the superstring I always mean type II string. So if you quantize the superstring, then you find that they are again, you find the metric, you find the graviton, then you find that there are this metric tensor, and you find that there are scalar fields just as what we did in the bosonic string. But they have some additional massless modes. Depending on your quantization or how you treat the fermions in the superstring, then there can be, say IIA or IIB, there are two types of superstring. And for IIA you have one additional 1-form field and additional 3-form field. And for IIB, you have additional scalar fields, and additional 2-form fields, and additional 4-form fields. In particular, this is so-called self-dual. I will explain that in a little bit. In the IIB, contain a self-dual 4-form. And this additional field are called the Ramond-Ramond field, typically. Just a name, it's called the Ramond-Ramond fields. And so this called the Ramond-Ramond fields. AUDIENCE: C1 and C3? HONG LIU: C1 and C3, that's right. They're all called Ramond-Ramond fields. Yeah. So the story is essentially the same, very similar as to what we did for the bosonic string. For the bosonic string you only have x on one sheet, when you quantize x then you find the graviton, you find those B, you find the phi, and in the superstring you pull some additional fermions on the world-sheets, and those fermions give you additional modes, and then you will have those additional-- they will also give rise to additional massless modes in the spacetime, and those are this Ramond-Ramond fields. They are also massless fermions, which I will not write here. Yes? AUDIENCE: So why is it that type IIA and type IIB string theory, they appear to be very, very different, which seems bad. It just seems like there's this arbitrary choice about how you go about doing the mathematics and you get this spectrum of particles or this spectrum. So who do you get around this problem? HONG LIU: This is not the problem. This is a fact of life. This is just what you find. 
This is just what you find. AUDIENCE: So which one is right? HONG LIU: They both can be right, and they both can be wrong. And our goal is just to find all possible theories are there. And this just allow the quantizations. A lot consists in the quantizations. And both of them give you consistent quantum gravity. And if I have time, if this is a string theory class, if this were a string theory class, then I would explain that actually secretly they are equivalent, secretly they are equivalent. So in some sense, they are not that different. Anyway, for your purpose, at the perturbative level, when we can see that the string coupling to be small, then they appear to be different, and yeah, we have two types of strings here. And so let me mention a little bit about those forms. All of these are anti-symmetric potentials, so these anti-symmetric potentials are generalizations of the Maxwell fields, of the Maxwell field A mu. So mathematically, you can write-- say for example, mathematically you can write the gauge field A mu as the so-called 1-form, so-called 1-form, and then A mu, it just coefficients of this 1-form. And then the field strings is just so the derivative, it's just exterior derivative of this 1-form. And then the Lagrangian will be just minus one quarter F mu mu, F mu mu construct from this procedure. So you can just general-- mathematically straightforward to generalize this you have dimension of forms. So for example, we can consider n-form, which is the n-component tensor, but all indices are fully anti-symmetric with each other. So all the indices are fully anti-symmetric with each other. So these are fully anti-symmetric. If they depend on the conventions, sometimes we also put the one over n-factorial here. So you can define such objects, such as the generalization of A, and the corresponding field strings for this C, we also called F. So the field strings for C is just a n plus 1-form, which is exterior derivative of dC. And then this is n plus 1-form. Again, this is a fully anti-symmetric tensor now with n plus 1 indices. And then you can write down your Lagrangian similarly by generalizing these, you can write s1 over-- let me write it here-- you can write the Lagrangian s minus 1 over 1/2 times n factorial-- 2 times n factorial, not 2n factorial-- 2 times n factorial F mu1, mu n plus 1, F mu1 mu n plus 1. So F is n plus 1-form, so it's a tensor of n plus 1 indices, fully anti-symmetric fully anti-symmetric. So those fields essentially have this kind of structure, essentially have this kind of structure. And say the 1-form is very similar to our Maxwell. So this is a 3-form and this is a scalar, then just a massless scalar, then this is a 2-form whose field string will be a 3-form, and then this is a 4-form factor. And then just by definition, just like here, there's a gauge symmetry because F equal to dA, here, because this is F equal to dC. So there's a gauge symmetry because I can add C n, the total derivative, of any n plus 1, n minus 1-form. So I can make a gauge symmetry, so lambda n minus 1 can be any n minus 1-form. And because of this square equal to zero, now F is invariant. So F n plus 1 is invariant. So if you put this into here, and because this is already a total derivative, and so F is invariant. So there's a great symmetry, then there will be a conserved charge associated with those n-forms. You can define conserved charge associated with those n-forms. So so far any questions? Yes. 
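For reference, the p-form generalization of Maxwell theory described here can be written compactly. The factorial in the kinetic term below is fixed by requiring that the n = 1 case reduce to the usual -\tfrac14 F_{\mu\nu}F^{\mu\nu}; conventions for these numerical factors vary between texts, so take them as schematic.
\[
C_n = \frac{1}{n!}\,C_{\mu_1\cdots\mu_n}\,dx^{\mu_1}\wedge\cdots\wedge dx^{\mu_n},
\qquad
F_{n+1} = dC_n,
\]
\[
\mathcal{L} \;=\; -\frac{1}{2\,(n+1)!}\,F_{\mu_1\cdots\mu_{n+1}}F^{\mu_1\cdots\mu_{n+1}},
\qquad
C_n \;\to\; C_n + d\Lambda_{n-1},
\]
with \Lambda_{n-1} an arbitrary (n-1)-form; the gauge transformation leaves F_{n+1} invariant because d^2 = 0.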
AUDIENCE: And maybe, so there's a C field that they denoted fermions of bosons. HONG LIU: Bosons, it's all bosons. AUDIENCE: But it looks-- so this anti-symmetric has nothing to do with fermions? HONG LIU: No, this is just the indices. This just indices which are anti-symmetric. This is not the location, spacetime location. This is just indices. It's the same thing, I think, F mu mu. F mu mu, the indices are anti-symmetric. It's the same thing here. Are the people comfortable with this differential form notation? Yes. AUDIENCE: [INAUDIBLE]? HONG LIU: Bosons, they're all bosons. AUDIENCE: And where are the partners of those? HONG LIU: Sorry? AUDIENCE: Hb and phi, where are their partners? HONG LIU: So far I did not write them down. So they are also fermionics. They are also massless fermions. We will not need to worry about them. So they are massless fermions in this. AUDIENCE: [INAUDIBLE]? HONG LIU: Right. AUDIENCE: With superstrings. HONG LIU: Yeah those just come from, because when you go to the superstring, you have additional fermions on the world-sheet. And then they can give you additional massless modes essentially come from them. The story, the real story is a little bit more complicated because actually in the superstring, even those come from the fermions, world-sheet fermions. Yeah but anyway vehicle you have at the formula for the world-sheet, then you have more possibilities then that give rise to those modes. That's the long story. AUDIENCE: Those modes are also for bosons, not for fermions? HONG LIU: No, these are all of the spacetime bosonic fields, which are low energy excitations of the strings. And how they arise on the world-sheet, whether they come from fermions or from bosons on the world-sheet, it's a separate issue. And, in fact, all of them actually arise from world-sheet fermions, but they are spacetime bosons. Yes. AUDIENCE: [INAUDIBLE] of these Ramond-Ramond fields? HONG LIU: Yeah, you just workout their repetitions under their say, Lorentz symmetry. They're all integer spin-- they're all integer spin-- they're all integer spin repetitions of the Lorentz group. And not like fermions, then will be half integer. They are generating repetitions of the rotational group. AUDIENCE: So they are like spin 2, spin-- HONG LIU: No, normally we don't call them spin 2 or spin three. When we call spin 2, we mean a fully symmetric indices, symmetric and traceless. AUDIENCE: But anti-symmetric, we would never call them-- HONG LIU: Yeah, we don't call them spin. Just the repetitions of the Lorentz group, it's-- it's just some repetitions of it's integer. Good? And in the case of the Maxwell, we know that for the 1-form, for the case of the Maxwell, its source, where for A, its source is a point particle. So we can couple the-- so the point particle interact say with this extender vector potential in the following way. So suppose a particle following some trajectory described by x mu tau, and then this particle, we are intact with this vector potential in the following way. This is familiar from E and M. So you just integrate essentially with A along the trajectory of the particle. And you can write this in the more compact mathematical form. You just say you integrate along the trajectory and A is a 1-form, and you just integrate this 1-form along the trajectory. And so this is so-called pull back of this 1-form A to C. So this is the mathematical language. Similarly, we can generalize to these higher dimensional forms. 
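Written out, the point-particle coupling just described is the familiar worldline integral (with q the particle's charge and \mathcal{C} its trajectory):
\[
S_{\text{int}} \;=\; q \int d\tau\; A_\mu\big(x(\tau)\big)\,\frac{dx^\mu}{d\tau}
\;=\; q \int_{\mathcal{C}} A ,
\]
and it is this expression that gets generalized to higher-dimensional objects and higher forms in the next paragraph.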
So 1-form, naturally coupled to a point particle-- which can be considered as a zero-dimensional object. So a p plus 1-form, then naturally-- I should say is fermion. Then a p-dimensional object that naturally couples-- just as a generalization of this-- to a p plus 1-form. As follows, you can just-- the worldvolume of a p-dimensional object is a surface of p plus 1 dimension. So the worldvolume of p-dimensional objects-- p plus 1 dimensions because there's also time. You also move in the-- it's a p-dimensional object moving the time, then in the p plus one dimension. So let me call that worldvolume sigma. And then the naturalization of this will be this, the coupling between a p-dimensional object to a p plus 1-form will be like this. Just you integrate this p plus 1-form on the worldvolume of this p-dimensional object. And again, this is the pull back of C to sigma, C plus 1 to sigma. And let me just write it in a more explicit form. So here I use tau to parametrize the trajectory along the [INAUDIBLE]. So here, let me use psi to parametrize the worldvolume coordinate of this brane. Then this pull back means mu 1, mu p plus 1, partial x mu 1, partial psi 1, partial x mu p plus 1, partial psi p plus 1. And x mu psi one, psi zero to psi p describe the embedding of sigma in spacetime. Any questions on this? So it tells you that-- so this just tells you that the AUDIENCE: There is [INAUDIBLE] p plus 1 back there could say zero because of p because of notation. HONG LIU: Let me just do zero. AUDIENCE: Yes, so here the p-dimensional object in the simple case is just like the point particle is a zero-dimensional object. So that's what the object is. HONG LIU: That's right. The zero-dimensional object naturally couple to 1-form in this way. And then a p-dimensional object naturally couple to a p plus 1 dimensional this way. So let me just make, in the Maxwell case, the gate symmetry here implies an electric charge is conserved, OK, the electrical charge is conserved. So for a p-dimensional object coupled to a p plus 1 dimensional form, and because of this gate symmetry and then this charge is also conserved. In particular, the object with minimal charge must be a stable object because there's nothing to decay to. So again, after generalization of this case, so in the standard story, in the standard electromagnetism, we can have an electrically-charged particle, we can also have a magnetically-charged particle. We can have a magnetically-charged particle. In particular, you can talk about the magnetic monopole in quantum mechanics and et cetera, even though we have not observed them. So similarly one can generalize that concept to higher dimensions. So let me first just remind you of how we define the magmatic-- the mathematical way of how we define the magnetically-charged object EM. So EM-- so because F is a 2-form, we can dualize F-- so this is Hodge dual. We can define another form, which I call F tilde, which is related to original F by Hodge dual. So this is a 2-form, the Hodge dual of a 2-form in the four dimension is another 2-form. And then I can write it as the total derivative with another 1-form potential. And then the object-- so you know that the under this dual, so F 0,1 is mapped to F tilde 2, 3. So the electric field is mapped to the magnetic field, and similarly, the magnetic field here is mapped to the electric field here, so under this Hodge dual, under this Hodge dual. I should not just write this. 
So after some minus sign we'll not keep track of, and under this Hodge dual, you essentially say the electric field now I call magnetic field. So now object coupled to a tilde, then it's magnetically charged. Because in terms of A tilde, such objects they generate electric fields, and from our original F point there is a magnetic fields. So there will be a magnetically-charged object. So essentially that's how we think about the magnetic moment pole, EM. So this can also be generalized to this n-form case. So one can also dualize an n-form. Is that we introduce use a lot of C tilde, which is the dual of dC. So dC is the field strength for C. So this is n-form. And the dC is the field strength of C, and then I do a Hodge dual, and then I define to be dC tilde. I define to be dC tilde. And the dC tilde-- so this is n-form. When you do a field strength become n plus 1-form. And when you do the Hodge dual, when you take the Hodg-- dual, the Hodge dual of n-form is D minus n, OK? So this is D min n minus 2. So this should be a D minus n minus 2-form because when you take the D, then that makes into D minus n minus 1-form, and this Hodge dual take this n plus 1-form into D minus n minus 1 form. So you map your n-form to a D minus n minus 2-form. So essentially, it's the integration of the electric-magnetic dual of E and M. So now you can couple a D minus n minus 3-dimensional object to C tilde. And in terms of C, in terms of regional C-- so this is a magnetic object, OK? Just the same as we define magnetic object or magnetic monopole for E and M. So is the clear? Yes. AUDIENCE: In order to write F tilde and A tilde, do we have to assume that there are no electric charges? HONG LIU: That's a good point. No, you don't have to-- so you do this, you have to assume that the equation of motion is satisfied. You have to assume the equation of motion is satisfied, then you can write this procedure. AUDIENCE: So if dF is equal to J, then that is non-zero. So if there are-- HONG LIU: So you have the exact the same situation with E and M. It's just identical situation with E and M. It's just whatever you do in E and M you do it here. AUDIENCE: So I thought that we could only write F equals to dA because there were no magnetic monopole? HONG LIU: No, so you can do that, now you have to introduce some similarity, so that's why the magnetic monopole have a Dirac string. That's why the magnetic monopole have a Dirac string, then you have this Dirac quantization condition, et cetera. So as you would then, say either in E and M, or in the quantum mechanics, say in the graduate quantum mechanics, or E and M. So are people familiar with the concept of magnetic monopole and the Dirac-- so-called Dirac string and Dirac quantization condition? Yeah, I saw some nodding heads. So you're not familiar with it, then look at, for example, in one chapter of Jackson have a very detailed discussion of magnetic monopoles. But the best, actually, was Dirac's original paper. It was really beautiful, very, very beautiful. But Jackson has a rather pedagogical discussion. Jackson's in actual dynamics, that's a rather-- AUDIENCE: Does he discuss the differential forms, though, or no? HONG LIU: I don't remember. Oh, you mean Jackson? AUDIENCE: Yeah. HONG LIU: I don't remember. Yeah, it's the fact that you can allow the magnetic monopole then if you allow the certain singularities in the vector potential. And that's the so-called Dirac string. So you don't have a thing to do. 
So the idea is that you-- what you were saying is right, the fact that you could write F as dA means there's no magmatic source, but you can introduce a magnet source if you allow certain singularities, and that's what Dirac realized. Good. So now, this other gauge fields, in type IIA and type IIB string, and they can give rise to extended objects which couple to them. And it turns out the extended object cover to those Ramond-Ramond fields, are precisely D-branes. And they are precisely D-branes, and because they couple to those gauge fields, and they are stable objects. At least the minimally-charged ones are stable objects. So this is related to the statement that actually in superstring you actually find some stable branes. And the other dimensions-- so let me just list them just to-- So with this preparation, then in IIA, so we can have the following electric and fully magnetic object. See in the IIA we have a C mu1. We have a 1-form. So that means there's a natural object coupled to it-- electric object coupled to it. Turns out this is a D0-brane. A zero-dimension object coupled to a 1-form, this is D0-brane. And as a 1-form in 10 dimension-- so in superstring we have 10 dimension-- is dual to-- so D is 10 minus 1 is 9, minus 3 is 6, and a then the magnetic object will be a D6-brane. So similarly we have a 3-form, and then this gave an electric D2-brane, and a magmatic D4-brane. So if you just follow this rule. So for the type IIB, so let me now first start with this C2. So this gives you a D-string one-dimensional object coupled to it for the D-string. And then the mathematical object will be a five-dimensional object. It's called a D5-brane. So this string is a D1-brane. So now, let's look at this 4-form. So the so-called self-dual-- so sometimes we put a plus here to indicate this is a self-dual. So let me now explain what this self-dual means. So self-dual means that from C4, as we construct an F5-- and construct an F5 from this C4-- to be a self-dual form means that this forms satisfies the condition that F5 equal to F5 dual. So this is a self-dual. Just means this is a constraint that this C has to satisfy. So this is called a self-dual 4-form. If you look at repetitions of the Lorentz symmetry, you'll find that this is actually a repetition. And so for this C, you have to satisfy this self-duality condition. Then now, for object which coupled to the C, would be a naturally D3-brane because the three-dimensional object coupled to a 4-form, so it would be a D3-brane. And this D3-brane, because of this self-duality condition, we have to be both electrically and magnetically-charged because this is a self-dual form. So this must source both electric flux and the magnetic flux to be consistent with self-duality. And then finally, you can also have a scalar field in the type IIB. And the scalar field, now eventually you will couple to so-called the D minus 1 dimensional brane. Because scalar field, by itself, is zero dimension. It's already zero dimensional form. So following this convention we will couple to a D minus 1 dimensional brane. And that obviously not make much sense, but actually can make sense when you go to Euclidean signature, and then it turns out this scalar fields couple to something called the instanton, which I will not go into there. Something called D-instanton. So the bottom line is that the object, they are charged under [INAUDIBLE] fields, they are all D-branes. They are all D-branes, and they come into those dimensions. 
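To keep track of the list that follows from this, it helps to apply the counting from the dualization just discussed: in D spacetime dimensions an n-form potential C_n couples electrically to an (n-1)-dimensional object and magnetically to a (D-n-3)-dimensional one, since
\[
\tilde{F}_{D-n-1} = \star F_{n+1}, \qquad \tilde{F} = d\tilde{C}_{D-n-2}.
\]
As a check, D = 4 and n = 1 gives a 0-dimensional magnetic object, the ordinary monopole. With D = 10 for the Ramond-Ramond potentials:
\[
C_1:\; \text{electric D0},\; \text{magnetic D}(10-1-3)=\text{D6};
\qquad
C_3:\; \text{electric D2},\; \text{magnetic D4};
\]
\[
C_2:\; \text{electric D1},\; \text{magnetic D5};
\qquad
C_4^{+}:\; \text{electric and magnetic D3 (self-dual)};
\qquad
C_0:\; \text{electric D}(-1)\;(\text{D-instanton}).
\]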
And any D-brane-- of course you can define D-branes with other dimensions. So you can consider in IIA a D3-brane. But in IIA, a D3-brane will not be a stable object because there's not conserved charge for you to couple to. So even though in IIA you can consider a D3-brane, but it's not a stable object, and actually, when you quantized the spectrum on the D3-brane in IIA, then you find there's a tachyon. Again there's a tachyon in the worldvolume that indicates that's not a stable object. But those branes don't have a tachyon. And in particular, on those branes, these are all supersymmetric field series. For example, the most important one is this D3-brane because then you have four dimensions of worldvolume, then you actually have four-dimension theory on this D3-brane, and that's what gives you so-called super-Yang-Mills theory. So on these branes, on these stable branes, all of the theories-- super-Yang-Mills theory, in particular for D3, it's given by a so-called n equal to four super-Yang-Mills theory in four dimensions. AUDIENCE: So what you're saying is for any sign table, does it mean dissolving-- HONG LIU: Yeah, it can decay. It can decay. AUDIENCE: But you lower dimension of brane? HONG LIU: It's possible for it to decay into lower dimensional brane, and it can also for it just to decay into closed string modes, radiated closed string modes. AUDIENCE: But, I mean the dimensions of these two objects are different. It sounds like if branes decay to a string, it will become infinite string. HONG LIU: Yeah, that's right, ud type density of strings. It's actually a beautiful subject to discuss those branes, how they decay. Yeah, it's a nice subject, but it's way out of our discussion here. Any questions? Yes. AUDIENCE: As to four, if we consider stacks of such branes, do we get higher-ranked age groups? HONG LIU: Yeah, that's right. Exactly. It always give you n. Good, so that concludes our discussion of the D-branes from the point of view as the rich boundary conditions. So now, I want to take a slightly different perspective on the D-branes. From this perspective, from the perspective the object, which are charged under those generalized gauge fields. And now I want to view those D-branes. I have some objects, some solitons, if you want some solitons, which are charged under those generalized gauge fields. I want elaborate from this perspective. I talk about the different perspective. So D-brane has math, has tension. Also D-brane carry-- so now we only consider those stable D-branes. So they have a mass, they also carry those charges, conserved charges. So that means there will be flux coming out-- electrical or magnetic flux coming out of those branes. So that will deform the spacetime because the gravity will deform spacetime. And so now, we'd like to find out what are the spacetime around those D-branes. How do they deform spacetime? So for example, say it consider charge as a total example of this. Say, let's consider just again in the Maxwell theory, in Einstein-Maxwell theory, consider, for example, a charged particle sitting at the origin of a four-dimensional spacetime, Minkowski spacetime, for example. So if you include both GR and E and M, then the dynamics of this series controlled by so-called Einstein-Maxwell-- so you will have a Einstein theory and the Maxwell. So the dynamics of such particles should be captured by them. And so this captures the gravity due to the particle. This captures E and M due to the particle. 
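Written out, the system just described is the following (a sketch in standard conventions; the matter action S_particle for the static point mass is left implicit):
\[
S \;=\; \int d^4x\,\sqrt{-g}\,\Big(\frac{R}{16\pi G_N} - \frac{1}{4}F_{\mu\nu}F^{\mu\nu}\Big) \;+\; S_{\text{particle}} .
\]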
So from here, you can work out how does this particle deform spacetime from the Einstein equation? So the equation motion of this system would be, you have a standard Maxwell-- so I will be careful about the minus sign-- and also the Einstein equation, the stress tensor. So j is just the-- if I have a charged particle sitting there, there's no current, there's only a charged density. And the stress tensor contains two parts. It's the stress tensor due to the particle and also the stress due to the electromagnetic fields. Because this excited the electrogmagnetic fields. And the particle part only have a zero component, so you only have a mass. There's no momentum. So this is just a-- maybe I should write r. I think I'm using the notation of r. So the particle part of the stress tensor would be just the data function. The only non-zero component are zero-zero component. You only have energy density and just given by the data function of the mass. So from here, by solving those equations we see those forces then you can work out how a charged particle deforms your spacetime. So let me just say I'm a bit further, for example you can easily solve the Einstein equation, we all know the solutions, so you just have a coulomb potential, and in other words you have electric flux surrounding the s2 around the point particle, you go to q. Again, this is the dual of F. You have electric field, when you dualize that hyperspatial component, then that gives you the-- so this a Gauss node, OK? This is a Gauss node. And you can also work out the metric surrounding this charged particles. So because of the spacetime symmetric, spherical symmetric, then you can actually write down the spherical symmetric ansatz for the metric around the particle. So from the Einstein equation, you can find out what's-- you can work out what is f(r) and h(r). Some of you may have already done this exercise before. So if your q is zero, what answer would you get? That's right. So if the q is zero, then you just get the Schwarzschild metric. And it's the exactly the metric say produced by a song, et cetera, far away. So here we have a charge, then what you get is something called the Reissner-Nordstrom metric. And when you have a charge having electromagnetic fields. So you will get so-called Reissner-Nordstrom metric. AUDIENCE: So that's if q equals 0? If q equals 0? HONG LIU: No, q larger equal to zero is the Reissner-Nordstrom. And when q equal to zero, then you just get the Schwarzschild That's how you calculate the precession. That's what you use to calculate the precession of, say, of the mercury. Similarly, you can also do this for magnetically-charged particle. So the only difference for magnetically-charged particle, the difference is that this equation now becomes just directly magnetic flux equal to G. Say for example, if the magnetic charge is G, and then you will get this. Then instead you have this, and then you have yeah-- We have a magnetic flux. And again, you could plug into any equation you solve, you get the same Reissner-Nordstrom metric. It's also a Dirac quantization telling you that the qg should be quantized in integers of 2 pi n-- so n equal one, two, et cetera. So this is the familiar story in the GR, so if you want to look at how the gravity and the E and M surrounding the charged objects, and so you can generalize this story immediately to those branes, which are charged-- which have their own mass and charged under some generalized gauge fields. So you can generalize that immediately. 
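Collecting the equations and the solution described in this paragraph: the coefficient of the charge term in f(r) depends on how the charge is normalized, so take the precise factors as schematic.
\[
\nabla_\mu F^{\mu\nu} = J^\nu,
\qquad
G_{\mu\nu} = 8\pi G_N\big(T^{\text{particle}}_{\mu\nu} + T^{\text{EM}}_{\mu\nu}\big),
\qquad
\int_{S^2} \star F = q,
\]
\[
ds^2 = -f(r)\,dt^2 + f(r)^{-1}\,dr^2 + r^2\,d\Omega_2^2,
\qquad
f(r) = 1 - \frac{2 G_N M}{r} + \frac{G_N Q^2}{r^2},
\]
so h(r) = f(r)^{-1} in the ansatz above, Q = 0 gives back Schwarzschild, and Q \neq 0 the Reissner-Nordstrom metric. For a magnetic charge g one instead imposes \int_{S^2} F = g, with the same form of the metric, and Dirac quantization requires q\,g = 2\pi n.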
So you can generalize procedure immediately to higher-dimension objects. So you can just, as a single exercise in GR, you can just carry out this procedure, just replace this to some higher-dimensional form, under the source is some higher-dimensional object, then you can work out the solutions. And that was a simple exercise which actually Horowitz and Strominger-Dietz, in the early '90s, in 1991, it's a trivial generalization you can do, why not do it? But turns out that's a very, very important step to do for the reason which I will explain. So for simplicity, let me just talk a little bit about doing this for this D3-brane, also for definite entities, you could do it for any of this. And let me just say it for D3-brane. So now consider the D3-brane-- so D3-brane is part of the type IIB theory, so of course, we don't know to really do this in the full type IIB string, so we will do the low energy limit of the type IIB sting. So we will do this for the D3-brane. In the low energy limit of type IIB stream, which is so-called type IIB supergravity. So it doesn't matter that you don't learn anything about type IIB. Supergravity, and essentially, that's just some generalization of this equation. Type IIB supergravity is some generalization of this equation. But nevertheless let me just introduce some notations just to be-- So type IIB supergravity-- we will do this in the law of limit of the IIB string. So this is the law energy effective theory for massless modes of IIB string. So it has the form as what we had before. You have 1 over 6 pi, T pi GN. But not this is a 10-dimensional Newton constant, so this is a 10-dimensional theory equal to 10. And you have the Einstein term, and then you have forms associated with all of this flux, generalized gauge fields, et cetera, fermions, et cetera. And as we said before, the relation with string theory, the Newton constant should be proportional to gs squared. So you can actually work out the prefactor precisely. So let me just write down the prefactor. So the relation with the string is-- just on the dimensional ground, so this is the Newton constant in 10 dimensions, so this would have dimension eight in terms of the lens have dimension eight. So the right-hand side I must be upper prime to the power 4 because this is only lens scale in string theory. And then the prefactor, it turned out to be-- so you can work out the prefactor, so turns out 16 pi GN is equal to 2 pi to the power 7. Say, when you work out the law energy effective theory, you can work out this precise factor. And the regime of the validity of this type IIB supergravity is that the gs is much, much smaller than 1. The string [INAUDIBLE] should be much, much smaller than 1. So that just tells-- this is a statement the string loop correction should be small because the when the string become big, the loop correction become more and more important. The loop correction would be more and more important from the point of view of gravity this quantum gravitational corrections. And so when G string become big, the quantum gravitational corrections-- so spacetime will fluctuate more. And then of course you cannot trust the classical gravity in that region. So if you want to trust the classical gravity, you want to have small coupling, or translate into this language, the Newton constant has to be small. OK? The Newton constant has to be small. That means the gravity's weak. But in real life, gravity's weak, so it's OK. 
So small coupling means the quantum corrections are small, and you can trust classical gravity. So you can trust classical gravity. And then we also want low energy, so we want the energy squared to be much smaller than one over alpha prime. Because when the energy squared becomes of order one over alpha prime, then the massive string modes become important, because all of the massive string modes have mass squared of order one over alpha prime. And if we only want to concentrate on the massless modes, then you don't want to excite them. And also you want the curvature of the spacetime-- here we're considering curved spacetime-- so you want the curvature to be much smaller than one over alpha prime. So in this gravity description, essentially, you are treating the string in its point particle limit. So this can be seen as follows: alpha prime essentially gives us the rough length scale of a string. And then when the curvature is much, much smaller than one over alpha prime, that means that the size of the string is tiny compared to the curvature radius, the typical size of your system. Then you can roughly treat them as point particles, and that's what the gravity theory does. In the supergravity theory it's all point particles. And so this means we decouple the massive modes, so you can concentrate on the massless modes. So whenever you take a limit, you always have to take a dimensionless number small. So the dimensionless numbers here are alpha prime times the curvature, or alpha prime times the energy squared. But in reality, when we work with a theory, we often just keep our curvature, or our energy scale, as what we want. And then in talking about this limit, we just say we take the alpha prime goes to zero limit. We say we take the alpha prime goes to zero limit. So that means a low energy limit. Even though, strictly speaking, this is not the right thing to say, because alpha prime is a dimensionful parameter. And so the regime where supergravity works can be formulated, say, as alpha prime goes to zero and the g string goes to zero. So in this regime you can work with supergravity, and now you can generalize this story to the branes. Now generalize this story for branes. So now let me say a few things about the D-branes-- the D3-branes. So the D3-brane is charged under the 4-form potential C4, and we said the corresponding field strength is required to be self-dual. That means any source of C4 must carry both electric and magnetic charge. So that means that for the D3-brane-- say if we define an electric charge to be-- So now, the D3-brane is a three-dimensional object and the transverse directions are six-dimensional, because this is 3 plus 1 including time, so the transverse directions are six-dimensional. So we can similarly just think about the D3-brane. So if we ignore the spatial directions along the brane, then the transverse directions of the D3-brane can be just considered a point in R6, OK? So a way to think about the D3-brane, the surroundings of the D3-brane-- it's the analog of a point in R6, in the six-dimensional space. Is this clear? This is going to be very important. And then surrounding this point, the sphere will be S5. Just like in the electric case, just like in the E and M case, where this is the S2 surrounding the point particle, this S5 is the sphere which is surrounding the whole D3-brane. And then this D3 can have electric and magnetic flux through this S5.
And so the electric charge would be just the flux of the dual of the 5-form through it. And the magnetic charge would be just the flux of the 5-form itself-- the same relation between the two as in the E and M case. And now, because of the self-duality condition, they must be the same. So that means for the D3-brane, the electric and magnetic charge must be the same. And also for higher-dimensional objects-- AUDIENCE: Excuse me, why should they be the same? Why not-- HONG LIU: It's because F5 is equal to star F5. It's because of that. Because of that self-duality condition. So the electric and magnetic charge of the D3-brane must be the same. It must carry both electric and magnetic charge. And now, this Dirac quantization condition also works for higher-dimensional objects. And again, it's a very simple exercise to do it-- a very simple exercise to generalize Dirac's argument to higher dimensions. Again, it's just related to how you treat those higher-dimensional forms, et cetera. Again, it's a very simple step, but it's a very important step. So that tells you that for the D3-brane, which is self-dual, which carries both electric and magnetic charge, you have to satisfy this quantization condition. Then that means for the minimal single D3-brane, the minimally-charged object, the charge must be the square root of 2 pi. And for n of them, you have g3 equal to q3 equal to the square root of 2 pi times n. So now we have specified the analog of those E and M conditions. Now we also have to specify the mass of the D3-brane. So as we discussed before, the tension of the D3-brane-- the tension of a D-brane-- should be proportional to one over gs. That you also did in your p-sets. And again, by doing a precise calculation, you can work out the precise prefactors. It turns out actually to be the following form. It's q3 divided by the square root of 16 pi GN. And GN is proportional to gs squared, so if you translate this-- so for n D3-branes, that would be n divided by 2 pi cubed, gs, alpha prime squared. So this gs and alpha prime squared we know on general grounds. The one over gs we know on general grounds, and the alpha prime squared just from dimensional analysis. Because this is a three-dimensional object, the tension will have mass dimension 4, so that's why you have one over alpha prime squared here. And then the 2 pi cubed comes from the precise calculation. And you can also write it in this form in terms of the Newton constant. And the reason I write this form is because this is a very special case. And you see that the tension is actually precisely the charge divided by the square root of the Newton constant. And it turns out that only special objects have this kind of property. And these are so-called BPS objects. I will not go there. It's related to those branes being supersymmetric. Anyway, so this is essentially the mass of the brane, and this is the charge of the brane. When you specify the charge, then those fluxes are uniquely determined, and then the C4 potential is uniquely determined, up to gauge transformations. And then now you can plug into the generalization of the Einstein equation to work out the geometry. So again, the symmetry here-- I won't have time for the bottom line, so that's OK. So the symmetry preserved by the D3-brane, as we said before, similar to all D-branes, is that you have a Poincare symmetry along the brane directions, and then you have rotational symmetry in the directions transverse to the brane. So in the transverse directions we have six dimensions.
So there's an SO(6) you can rotate in the transverse directions, and then you have Poincare symmetry in the (1,3) directions. So based on these symmetries you can write down what the metric looks like. You can write down an ansatz for the metric. Just like over there, you can write down an ansatz for the metric of a point particle based on symmetry. So here, you can write down an ansatz for the metric based on symmetry. So first, you should have a part which preserves Poincare symmetry. That means this part can only be of this form: along the brane directions, it can only be this form times some other factor which depends on r, where r is the transverse radius from the brane. And then the other dimensions must be spherically symmetric, so you can have another factor, and then the r squared d omega 5 squared. So this is the S5 surrounding the brane, and then this is the radial direction. So this is the only form of the metric. And then essentially, again, as in the charged particle case, you need to determine these two functions, which you can plug into the Einstein equation to determine. So let me just write down the form of those functions, then we will discuss what they mean next time. Somehow, this year I'm consistently much, much slower than what I did before. Maybe I'm explaining the physics better. Anyway, so now it's just a mechanical exercise. You plug this ansatz into the Einstein equation, then you find the two equations for f and h, and then you can solve them. In the old days, when Horowitz and Strominger did it-- maybe they did not have Mathematica yet, but nowadays you can do it. Using Mathematica, if you have the right program, you can do it in five minutes. But in the old days-- they were friends with Wolfram, maybe they already had the program at that time. Anyway, so it was a [INAUDIBLE] exercise to do at the time. So you find actually the answer is very simple. It turns out both f and h can be written in the following form. H is just a harmonic function. And R to the power 4 is given by some constant: GN, Newton's constant, times the D3-brane tension, times n. And it can also be written as 4 pi gs N alpha prime squared. So it turns out that those functions are very simple. They are just the square root of a harmonic function. So this is a harmonic function for the transverse R6. So if you solve a Coulomb problem there, that will be the solution. And this is the harmonic-- so this is the generalization of the 1 over r potential. So this is just a generalization of the 1 over r potential. And then f and h are very simply related to this by a square root. And this constant R to the power 4-- so this is not a curvature, this is just some constant. I'm using the standard notation. And then this is just related to the mass of the brane. The mass of the brane, of course, is proportional to n. Yeah. AUDIENCE: In this case, is the mass, the charge is it massless? HONG LIU: Sorry? AUDIENCE: The charge is massless? HONG LIU: Sorry? AUDIENCE: Is the charge considered on the D3-brane massless or massive? HONG LIU: No. Charge is just charge. What do you mean by the charge is massless? This is just the charge carried by this D3-brane. Anyway, so this R4 is just like GN times the tension, times n-- we have n objects. So this is like what you're familiar with from the Schwarzschild black hole. The Schwarzschild metric just has Newton's constant times the mass, so this is just a generalization of that: the Newton constant times the tension, and since you have n branes, you just multiply by n.
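As a reference, here is a sketch of the solution just described, in standard conventions. The explicit f and h were written on the blackboard, so expressing them as powers of the harmonic function H below is my reconstruction rather than a quote:

\[
ds^2 = H^{-1/2}(r)\,\eta_{\mu\nu}\,dx^\mu dx^\nu
      + H^{1/2}(r)\left(dr^2 + r^2\,d\Omega_5^2\right),
\qquad
H(r) = 1 + \frac{R^4}{r^4},
\]
\[
R^4 = 4\pi\, g_s\, N\, \alpha'^2 \quad (\text{proportional to } G_N\, T_3\, N).
\]

So the warp factors f and h of the ansatz are just H to the minus one half and H to the plus one half, and H is the six-dimensional analog of the 1 over r Coulomb potential.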
And if you write everything in terms of gs: this is proportional to gs squared, this is proportional to one over gs, and so all together the thing is proportional to gs. OK. OK, we'll stop here.
MIT_8821_String_Theory_and_Holographic_Duality_Fall_2014
11_String_Theory_in_the_Lightcone_Gauge.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: No, no minus sign. I think maybe-- yeah, it should be a plus sign. Yeah. Should be a plus sign. Yeah. Yeah, you should correct that minus sign to a plus sign. Yeah, both should be plus signs. Yeah-- what I wrote in my notes was correct, but then on this board, somehow-- [INAUDIBLE] maybe I had a minus sign and put it there. But I didn't realize. Yeah. So let me first remind you what we did in the last lecture. So first, we have some gauge symmetries on the worldsheet. And then we can use them: you can reparametrize the worldsheet coordinates, and also do a Weyl scaling of the metric itself. Using those freedoms, we can set the metric just to the flat Minkowski metric on the worldsheet. Now after you do that, the worldsheet action becomes just like a free scalar field. Then of course, the equation of motion is just given by the standard wave equation. Then you can solve it immediately. And essentially, you just have some left-moving wave, some right-moving wave, plus some zero modes. And for a closed string, these are independent functions. So for the closed string, you can have independent left-moving modes and also independent right-moving modes. But for the open string, because you have boundary conditions, you only have one set of modes-- yeah, because the boundary conditions give you a constraint. So it's just like you have a standing wave. And then X l should be equal to X r. But quantizing string theory, even in this gauge, is not just quantizing a scalar field theory, because we still need to solve the equations of motion that come from doing the variation of gamma a b themselves. And doing the variation of gamma a b themselves is equivalent to setting the stress tensor of this scalar field theory to zero. Yeah, to setting the stress tensor to zero. And so there are two independent equations. One is that the diagonal component should be zero. And the other is that the off-diagonal component should be zero. So you have two sets of equations. And those equations are in general hard to solve. They are non-linear quadratic equations, which I did not write here. So they're non-linear constraint equations, which are, in general, hard to solve. Then the important trick we discussed is that you can go to the light-cone gauge. We said even after this fixing, there's still some remaining gauge freedom, so you can actually make one more choice to set X plus to be the same as tau, up to a constant. And for X plus, we always use this definition. So you consider all the directions, X zero, X one, X two, et cetera, and then we combine X zero and X one together to form X plus and X minus. And then the rest we call X i. So these are also sometimes called the transverse directions. So also called transverse directions. Then X plus becomes V plus times tau, some constant times tau. And in this light-cone gauge, those equations can be written in a very simple way.
In particular, the dependence on X minus becomes linear. Then you can actually use these two equations to solve for X minus exactly. And so this tells you that actually the independent degrees of freedom are just the X i's. OK? The independent degrees of freedom are just the X i's. Any questions about this? So let me just put the equation number, because I'm going to use this equation number later, which is inherited from the last lecture. AUDIENCE: So the-- [? confidence ?] on the [? energy ?] [INAUDIBLE] because they're [? close together ?] [INAUDIBLE] PROFESSOR: No. No. This comes from the equation of motion for gamma a b. So remember, previously, gamma a b is also a dynamical variable. Even when you fix the gauge, you still impose its equation of motion. And so that's essentially this. Yes? AUDIENCE: Aren't the remaining degrees of freedom for X minus just because of the boundary condition, or something like that? Is there a freedom in specifying that? PROFESSOR: What-- there's some constant, a possible overall constant. Yeah, because this determines all the derivative dependence, and there might be an overall constant. And that you can always-- yeah, it's not important. Yeah, yeah. Any other questions? Yes? AUDIENCE: So going back to what you said earlier. So for the boundary conditions for the open string, the derivative's also positive? So there's not a minus sign? PROFESSOR: Yeah. Let me also add here-- so for the open string, for the moment, we consider the so-called Neumann boundary condition. So this is equal to zero. So we'll keep that. Yeah, so one correction: in the last lecture, I wrote a minus sign there. That should be corrected to a plus sign. Both equations should be corrected to plus signs. Yeah. Good? So with this setup, we are ready for quantization. Because now, we only need to quantize the X i's, because the X i's are independent-- only the X i's are independent variables. And so we only need to quantize the X i's. And the X i's are just free fields. And that's very simple. OK? But before doing that, let me just emphasize one point. Emphasize one point. So the light-cone gauge-- by making this choice, you make those equations very simple, but you also sacrifice something. Because of the light-cone gauge-- because the gauge choice itself-- it breaks Lorentz symmetry. OK? Because it picks out two special directions. So X zero and X one-- you form X plus from them, and you impose some condition on X plus. So this breaks Lorentz symmetry. OK? So when we say "break," this does not really break the Lorentz symmetry. It just says this gauge choice itself does not respect Lorentz symmetry. OK? So the theory is still Lorentz symmetric. The theory should still have Lorentz symmetry. It's just that the Lorentz symmetry is not manifest. Say, yeah, just not manifest. Yeah, maybe, using proper English, I should say: in the light-cone gauge, Lorentz symmetry is not manifest. I think this is a better way to say it. OK? It's not manifest. So this equation breaks Lorentz symmetry, and then the Lorentz symmetry is not manifest. But there's a subgroup of the Lorentz symmetry that is still manifest. That's the group which rotates the X i's. So this remaining SO(D minus 2), which rotates the X i's-- this is manifest. OK. The remaining rotation of the X i's is manifest. OK? So keep this in mind. OK? This is a very important point. OK. So now we can proceed to quantize it.
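To fix notation, here is a minimal sketch of the light-cone setup being used; the factor of the square root of 2 is one common convention and is my assumption, since the exact definition was on the board:

\[
X^{\pm} = \frac{X^0 \pm X^1}{\sqrt{2}}, \qquad X^i,\; i = 2,\dots,D-1,
\qquad \text{light-cone gauge: } X^+ = v^+ \tau .
\]

Only the rotations SO(D-2) acting on the transverse X^i remain manifest after this choice.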
So before doing the quantization, let me develop things a little bit further to make the task of the quantization easy. So, as we do in free field theory, we continue by expanding those arbitrary functions in terms of Fourier modes. OK? And they are periodic functions with period 2 pi, so we can easily expand them in Fourier modes. For example, X l mu of sigma plus tau can be written as i times the square root of alpha prime over 2, times the sum. OK? So this alpha prime-- this prefactor is purely convention. OK? Just for later convenience. And this one over n is also purely convention. And you can choose what you want. So the key is that you are expanding this periodic function in Fourier modes. Because it has to be periodic in sigma, the coefficient here must be an integer n. And we exclude the n equal to zero mode, because for n equal to zero this is just a constant, and that is already included here. And that is already included here. So we exclude n equal to zero. And also this is a real function. Then this means that alpha n mu bar should be equal to alpha minus n mu. So with this convention of n here, and with this i here, it would be alpha n mu bar equal to alpha minus n mu. OK? And similarly, you can write down the expansion for X r. Similarly, you can write down the expansion for X r, with the same prefactor. And this is a sum over n from minus infinity to plus infinity, but with n not equal to zero, and we call the modes alpha n tilde, divided by n, and instead of tau plus sigma it is a function of tau minus sigma. OK? And here, again, alpha n tilde mu bar should be equal to alpha minus n tilde mu, in order for this to be real. OK. For the open string, you only have one set of modes. For open strings. So this is for the closed string. For the open string, you just have alpha n mu equal to alpha n mu tilde. So you only have one set of modes, OK, because they are equal. OK. So you can plug those expressions into here. Then you have the most general form of X mu in terms of these Fourier modes. OK? So to save time, let me not do that step. You should just do this trivial step yourself-- write those things into a single equation. And then that's the most general form of X mu in terms of these modes. So now let me make some remarks. You might wonder, what's the implication of those zero modes here? OK? So those just describe the center of mass motion of the string. The center of mass motion of the string. It's given by-- say, you just average the motion of the string. So that gives you the center of mass motion of the string. And all those are periodic functions. So all of those are periodic functions. So all the X l and X r are periodic functions. So when you integrate them, they give you zero. So the only thing remaining are those zero modes. And so you just get the x mu plus V mu tau. OK? So if you think of the center of mass of the string, that's a particle. It is a point, it's a point particle. So this essentially gives you the trajectory of the particle in terms of the proper time. So you can think of this tau as the proper time for the center of mass of the string. And then this just-- yeah, V mu is the velocity of the center of mass in terms of the proper time along the string. OK? And so all these different alpha n, alpha n tilde-- those just parametrize the different oscillation modes. So they just parametrize the different oscillations of the string. Yeah, of a string. Oscillations of a string.
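Putting those pieces into a single expression, the closed string expansion being described has the following schematic form; the assignment of the tilde modes to tau minus sigma and the overall prefactor follow one common convention and are my reconstruction of the blackboard formula:

\[
X^\mu(\tau,\sigma) = x^\mu + v^\mu \tau
+ i\sqrt{\frac{\alpha'}{2}} \sum_{n\neq 0} \frac{1}{n}
\left( \alpha_n^\mu\, e^{-i n (\tau+\sigma)} + \tilde\alpha_n^\mu\, e^{-i n (\tau-\sigma)} \right),
\qquad
(\alpha_n^\mu)^* = \alpha_{-n}^\mu,\quad (\tilde\alpha_n^\mu)^* = \tilde\alpha_{-n}^\mu .
\]

For the open string one simply sets alpha n mu equal to alpha n mu tilde, leaving a single set of standing-wave modes.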
So if any particular alpha n is non-zero, that means that particular harmonic is present in the string motion. Et cetera. OK? And for the closed string, as I said before, we have both left- and right-moving, because you have left- plus right-moving waves. But for the open string, you essentially only have a standing wave, because of the boundary condition. OK? So you only have one set of oscillation modes, rather than two sets. So any questions on this? Good? So in the light-cone gauge, we can go further. In the light-cone gauge, we can further solve for X minus. OK? So let's try to do it classically. In the light-cone gauge-- OK. So X minus can be solved for by inserting the mode expansion into these equations 14 and 15. OK? Then you're just equating different Fourier modes, the coefficients of different Fourier modes. OK? And then you can solve for V minus, alpha n minus, alpha n tilde minus, et cetera. OK? So now let's look at the zero modes. OK? So let's look at the equation satisfied by V minus. So let's look at the zero modes. So let's first look at equation 14. So when we say zero mode, that means we look at the n equal to zero modes. And for the n equal to zero mode here, partial tau X minus, you just have V minus. So the left-hand side-- so from 14-- so let's first do the open string. So on the left-hand side you have 2 V plus V minus. So the right-hand side: again, you look at the mode which has n equal to zero. So there's one obvious term that comes from just taking the derivative of this term, and so we have a V i squared term. OK? And then you will have a contribution from the oscillators, which I will not go into in detail here. You can easily calculate it yourself. So you find, for the open string, that's the answer. And so there's only one set of modes. And similarly, you can do it for the closed string. Again, on the left-hand side, just 2 V plus V minus, and on the right-hand side you have V i squared. But now you have two sets of modes. And also there's some difference in the coefficients. So other than this prefactor-- the 2 and the 1-- here, you can actually write down those expressions just by closing your eyes. Just write them down. Because for the zero mode, these two have to add up to zero. So that's the only possibility. So other than this factor of 2 and the 1, which you need to check yourself, you can essentially write down this expression immediately, just from the structure of that equation. OK? AUDIENCE: What's the definition of V i? PROFESSOR: V i is there. V mu in the i direction. And V minus, V plus, it's all that. So those equations are of great importance. Let me box them. Which is also the reason I wrote them down. But now, I did not leave space to give them numbers. Yeah, anyway, these are the two equations; let me add the numbers later. From equation 15-- so this was the consequence of 14, the zero modes of 14. From 15, you get nothing. You get zero equal to zero for the open string. So this is obvious, because the left-hand side is zero, because the zero mode does not have any dependence on sigma. So this is zero. And it requires a little bit of checking yourself that the right-hand side is also zero, essentially because of this structure X l equal to X r. Yeah, X l equal to X r. But for the closed string, you actually find a non-trivial equation. You find the left-hand side, again, is zero.
And the right-hand side-- the right-hand side, if you want to find the zero mode, essentially you just integrate over the whole thing. So let me just write it explicitly. So you can write it also in terms of modes. OK. So this equation is very interesting. So this tells you that the overall amplitude of the left-moving modes, when you add them together in this combination, should be the same as the overall amplitude, in this combination, of the right-moving modes. OK? Yeah, they have to be balanced. So if we look at the source of this equation, the reason the left-hand side is equal to zero is essentially due to the periodic boundary condition. Because for the closed string, because of the periodic boundary condition, you cannot have any term linear in sigma. And the periodic boundary condition, in other words, means that there is no special point on the string. OK? Any point is the same as any other. So your choice of origin is actually arbitrary. And this equation essentially reflects that. So the fact that along the string there's no special point gives you a global constraint on the oscillations of the string, OK, between the left-moving part and the right-moving part. And then, if you look at the nth modes, then you can find alpha n minus, alpha n tilde minus, et cetera. OK? In the p-set, you will have a problem where you do this. Any questions so far? So now let's look at the physical meaning of those equations. OK? I boxed them. I said they are very important. And now let's look at the physical meaning of those equations. So this one can be considered a consequence of there being no special point on the string. And now let's look at the physical content of those modes, of those equations. So this is the fourth comment. The third remark-- fourth remark. So let me remind you that previously, the action has global symmetries. So these come from translations, say by some constant a mu, or Lorentz transformations. OK? So those global symmetries, as we know, correspond to conserved currents-- conserved currents on the worldsheet. OK? On the worldsheet. So for the moment, let us consider the translations. So it's actually-- it's a couple of lines, but I will leave you to do it yourself. I think you will also do it in your p-set-- to derive the conserved current for this one. So let me just write down the answer. So the conserved current for the translation-- for the translation, you can derive the conserved current because this is a symmetry. For each mu, there's a symmetry. So the current is labeled by mu. But there's also a worldsheet index a, since this is a current on the worldsheet. OK? So this can be written as 1 over 2 pi alpha prime times the derivative of X mu. So I'm just writing down the answer, but you should-- you will check yourself in the p-set that this is the right answer. OK? So you can immediately see that this current is conserved, because if you act with partial a on here, you just get the equation of motion, which is zero. The equation of motion is just partial squared acting on X mu equal to zero. OK? So you can immediately see this is indeed conserved. This is conserved. And then this will lead us to a conserved charge. OK? Say, if I integrate it all along the string-- so let's do the-- for example, for the closed string, if I integrate along the string this zero component--
So the zero component, the time component of this current-- then this is a conserved charge on the worldsheet. OK? So do you have any guess as to what this conserved charge corresponds to? Yes? AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah. It should be, it must be, the space-time momentum. OK? In your earlier courses, you should have learned that-- say, if you look at classical mechanics, for a particle moving with translation symmetry, the momentum should be conserved. OK? So here, we have a string with translation symmetry-- moving in the space-time there is a translation symmetry-- and then the corresponding momentum must be conserved. And this conserved charge must be that momentum. OK? So if I consider this p mu-- so let me write it as p mu-- if I consider this p mu, which is this integration from zero to 2 pi. So this is for the closed string. Similarly, for the open string, you integrate over pi. So this must be-- so this is the space-time-- so this-- oh, I'm sorry. This is the space-time momentum of the string. OK? Yes? AUDIENCE: What's the definition of d partial a in the definition of the current? PROFESSOR: So this is the conserved current corresponding to this global translation. AUDIENCE: But what's d sub a? PROFESSOR: Oh. Just the derivative with respect to sigma a. The derivative with respect to sigma a. So I have always used the notation here that partial a is partial with respect to sigma a. So, as I said-- this corresponds to the space-time momentum of the string. OK? So for the closed string, you integrate from zero to 2 pi. For the open string, you integrate only from zero to pi. So now if you plug that mode expansion in here, again, all the oscillatory modes should not contribute. OK? And because this is a-- you can check they do not contribute. So the only thing that contributes is the zero modes. So you find that this p mu is equal to v mu divided by alpha prime. So this is for the closed string. This is obvious from here, because this gives you v mu times 2 pi, and then the 2 pi cancels and you are left with v mu over alpha prime. But for the open string, you only integrate over pi. So you get v mu over 2 alpha prime. So this is for the open. OK? Or in other words-- or in other words, in terms of the space-time momentum-- so this is a more physical quantity than this v mu-- so in terms of the space-time momentum, the v mu can be written as alpha prime p mu, which is for the closed, and 2 alpha prime p mu for the open. And then this itself, of course, should be interpreted as the space-time momentum density along the string. OK? So now we have found that the v mu secretly is just the space-time momentum. OK? Essentially just the space-time momentum, up to some overall constant. So now we can use this in those equations. Now we can use it in those equations. Then we can rewrite them. So for example, for the first equation-- so now, let me give you a number. Yeah, actually, in my notes here, it's equation 18. Let me just write the equation number 18 here. And this is equation 19 for the closed. So I have more equation numbers in my notes than here, because I did not copy all the equations. Anyway, so that's-- for example, equation 18. Now I can rewrite it in terms of the momentum. OK? So let me put all the v's on a single side. So I can write it as 2 p plus p minus, minus p i squared, equal to-- so each v is related to p by 2 alpha prime, and with that 2 alpha prime here, that's 1 over 2 alpha prime, sum over m not equal to zero, alpha minus m i alpha m i. OK? OK? So now what is this? Do you recognize what this is? AUDIENCE: p squared.
PROFESSOR: This is just p squared. Or, depending on your notation, it's minus p squared. OK? Minus p mu p mu. Maybe-- let me write it this way. Minus p mu p mu. So what is minus p mu p mu? AUDIENCE: Mass squared. PROFESSOR: Yeah, it's the mass squared. So this equation should be interpreted as a mass equation. So it tells you the mass of the string can be written in this form-- OK? So now we have obtained the relation-- so this is for the open string-- we have obtained the relation for the mass of the string. So if you think about the center-of-mass motion of a string-- think of it like a particle, and this particle can have a mass-- then this mass of the string, the mass squared of the string, can be expressed in terms of the oscillation modes in this way. OK? And similarly, from this equation 19 for the closed string, you find that m squared equals-- again, it just differs by a prefactor-- alpha minus m i alpha m i, plus alpha tilde minus m i alpha m i. So this is the mass equation for the closed string. So these are typically called the mass-shell conditions. OK? So these are for the closed. Any questions on this? AUDIENCE: [INAUDIBLE]. PROFESSOR: Hm? AUDIENCE: [INAUDIBLE] tilde? PROFESSOR: Sorry? AUDIENCE: Would there be another tilde? PROFESSOR: Oh, that's right. Yeah. Thanks. Good. AUDIENCE: I have a question: here, before, it's an expression that we have to interpret to find the mass of the string. So is this a kind of definition of the mass of the str-- PROFESSOR: No, no. The definition of the mass of the string is determined by the center-of-mass motion, right? How you tell how the particle moves is from the center-of-mass motion. And that determines its mass. p is a conserved quantity, and the dispersion relation determines your mass, right? Yes, so this is the mass in the usual sense. Yeah. Any other questions? Good. So now, finally, we can do the quantization, with all those preparations. OK? So now we can finally do the quantization. So similarly-- I will not go into it here-- let me just also mention that by looking at this transformation, you can write down the conserved charge, the conserved current, corresponding to the Lorentz transformations. Then that will give rise to the angular momentum, and also give rise to the corresponding charges associated with the boosts, et cetera. OK? And we will not go into that; they actually play a very important role in understanding various aspects of the string also. OK. So now let's-- now we can quantize it. And as we said before, we only need to quantize the independent degrees of freedom, so those X i's. So we only need to quantize the X i, because these are the independent degrees of freedom. And from here, you can immediately write down the canonical momentum. Say, the canonical worldsheet momentum for the string-- for the X i. So the canonical worldsheet momentum. So I should distinguish this from that guy. That one corresponds to the current for the space-time momentum. And this one is obtained by just treating the X i as a two-dimensional field: we can write down its worldsheet canonical momentum. We just take the derivative of the action with respect to the tau derivative of X i, and then you find it's given by 1 over 2 pi alpha prime times partial tau X i. OK? So this is the standard canonical momentum for a two-dimensional field theory. So this happens to agree.
So this happens to be the same as that momentum. But you should keep in mind that the physical interpretations are a bit different. There, it corresponds to the density of the space-time momentum, and here, this just corresponds to the canonical momentum for the quantization. OK? So now that you've found the canonical momentum, we can just impose the quantization conditions. You can just impose the quantization conditions. So now you promote all of these to operators-- all the classical fields are promoted to operators-- and then you impose the canonical quantization conditions. So X i and X j, at different sigma and sigma prime but evaluated at the same tau, should commute. They commute. Same thing if you take the pi's at different sigma and sigma prime at the same tau-- they should commute. But X i of sigma, tau and pi j of sigma prime, tau should give you i delta i j times the delta function of sigma minus sigma prime. OK? So this is just the-- this is a free field, and you just impose the standard canonical quantization conditions for a two-dimensional field. And now all of these modes, all these different modes-- oh, I just erased half of it-- all these different modes-- so x i, p i-- and the v i, which is just given by the p i-- and then alpha n i, alpha n tilde i, they are all operators. So classically, they correspond to integration constants for the equations of motion, and quantum mechanically, they become integration constants for your operator equations. So they're just constant operators quantum mechanically. OK? And then when you plug those mode expansions in here, just as usual in your free field theory quantization, you just find the commutation relations. For example, you just find x i with p j equal to i delta i j. Then you also find alpha m i with alpha n j, equal to alpha m tilde i with alpha n tilde j, equal to m delta i j delta. So I'm just writing down the answer for you. OK? So this is all straightforward, other than the unconventional normalizations. AUDIENCE: So that's things like delta m minus n, or something? PROFESSOR: Yeah, that's right. Yeah. So now if you look at this, OK, if you look at this, so this is m. So this m is related to the normalization we were choosing here. Here we are choosing n, and then there appears some m here. OK? And from this you can see-- all the other commutation relations, all other commutators, vanish. OK. So these are the only non-vanishing ones. So if you look at this equation-- so let's see, for m greater than zero, then this is m, and this is only non-vanishing for n equal to minus m. So this alpha m with alpha minus m, you get m-- something, it's a positive number. So that means, that tells you, that in terms of the standard notation, alpha m i divided by the square root of m should be interpreted as, say, the standard annihilation operator a, and alpha minus m i divided by the square root of m should be interpreted as a dagger. So this is all for m greater than zero. OK? Similarly for the tildes. Similarly for the tildes. So that's what the equation means. OK? Is it clear to you? Then we essentially-- then we have essentially finished the quantization of the string. OK? We have solved the Heisenberg equations. And so, this mode expansion we plugged in is now interpreted as the solution to the Heisenberg equations. And then the integration constants in these equations satisfy those commutation relations. They satisfy those commutation relations.
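In symbols, the non-vanishing commutators just quoted are the following; this is a standard-notation summary, with the oscillator normalization spelled out as an illustration:

\[
[x^i, p^j] = i\,\delta^{ij},
\qquad
[\alpha_m^i, \alpha_n^j] = [\tilde\alpha_m^i, \tilde\alpha_n^j] = m\,\delta^{ij}\,\delta_{m+n,0},
\]
\[
a_m^i \equiv \frac{\alpha_m^i}{\sqrt{m}}, \qquad a_m^{i\dagger} \equiv \frac{\alpha_{-m}^i}{\sqrt{m}}
\quad (m > 0),
\qquad
[a_m^i, a_n^{j\dagger}] = \delta_{mn}\,\delta^{ij} ,
\]

and similarly for the tilde oscillators; all other commutators vanish.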
And also, the commutation relation between x i and the p is indeed what you would expect for a point particle. So these correspond to the center-of-mass location and the center-of-mass momentum, and this is indeed what you would expect between the position and the momentum. OK? Any questions? Yes? AUDIENCE: Are we dropping the index m on the a? Or is that a subscript? PROFESSOR: Yeah. But we will always use the notation of alpha. I will not use a. Yeah, but just keep in mind that their relation is like that. And then a and a dagger will have the standard commutation relation equal to 1. OK? So now, we can work out the spectrum. So now we can work out the spectrum. Let me see. So we can work out the spectrum. So the lowest state is the so-called oscillator vacuum. So when we quantize it, we define our vacuum. So the vacuum corresponds, of course, to the state which is annihilated by all the annihilation operators-- annihilated by all the annihilation operators, OK, for m greater than zero and all i. OK. So we first define-- so in order to-- so we are quantizing-- so this is like a free field theory, and so we define our vacuum, which is annihilated by all the annihilation operators. But the difference from the standard quantum field theory, the two-dimensional quantum field theory, is that now the vacuum here is labeled by p. OK? This is the space-time, center-of-mass momentum of the string. So we are taking the vacuum to be, say, a momentum eigenstate. OK? In terms of space-time. And so this zero p, which corresponds to having no oscillator excitations on the string, can still have a momentum, a space-time momentum, so it is labeled by the center-of-mass momentum. OK? And then you can also build up other states. Then you can-- so this is the lowest state-- so this is, in some sense, the vacuum state of the string. And then you can also build up the excited states. And so you just act with an arbitrary number of alphas and alpha tildes on this vacuum state. OK? So yeah, let me also label the equations. So let me call this equation 20. OK? So for the open string-- yeah, so this is for the closed string. For the open string, you only have one set. You just have one set of, say, alpha n i. And here you still have two sets. OK? It's also convenient to define the oscillator number. OK? Which we define in terms of the standard number operator. And then alpha minus m i alpha m i would be equal to m N m i. OK? So this is the oscillator number for the m-th mode in the i direction. OK? So in this equation there's no summation over i. Also there's no summation over m, just a-- OK? So I hope this is clear. Good. Any questions? So now let's look at those equations. Now let's look at those mass-shell conditions, as quantum operators. OK? AUDIENCE: Excuse me. What do the operators x and p do to those vacuums? PROFESSOR: Hmm? AUDIENCE: The operators x and p. What do they do to those right there? PROFESSOR: No no no. So we take-- so back here, we take the vacuum to be an eigenstate of p. And then, yeah, that's it. Yeah. AUDIENCE: [INAUDIBLE] p i [INAUDIBLE] right here, what do they do? PROFESSOR: So this is an eigenstate of p. And the action of x on it is just as in ordinary quantum mechanics. Yeah. Yes? AUDIENCE: Maybe kind of a dumb thing, but-- so the p mu, does that only include the i components, or all? PROFESSOR: All components. All components. But then they must satisfy-- AUDIENCE: Right, right.
So they have to satisfy-- PROFESSOR: Then they have to satisfy this kind of constraint. AUDIENCE: I see. PROFESSOR: Other questions? Good. Good. So now let's write down the mass-shell condition in terms of-- so this is a classical equation. OK? These are the classical equations. So now we can write down the quantum version of it. So we can write down the quantum version of it. So each state, the typical state here, will carry some p-- will carry some p. So the p i are independent, and the p plus-- for example, for x plus, this will also give you a p plus. So the p i and p plus are independent. But then the p minus is determined from those equations, determined from these mass-shell conditions. OK? So now let's write down those mass-shell conditions in quantum form. So now let's, again, look at the mass-shell conditions. So let's first, again, do the open string. So we just take these. So when you go to the quantum theory, then you have to worry about the ordering. OK? Then you have to worry about the ordering, because now they don't commute. OK? So now you have-- about the ordering. So let me-- for the moment, don't-- then there's an ordering constant, et cetera. And it's easy to understand what the ordering constant should be, because this is a string, and each oscillation mode behaves like-- it's just a harmonic oscillator. And then you just add up all the zero-point energies of the harmonic oscillators, and that will give you the ordering constant. OK? It's just the same as the harmonic oscillator problem. So to translate this into a quantum operator expression, I just do it as for the harmonic oscillator. OK? Because I essentially have many harmonic oscillators-- all these different harmonic oscillators. Then this is easy. So I just copy this 1 over 2 alpha prime. So the sum is over m from minus infinity to plus infinity. So let me rewrite it as a sum from m equal to 1 to plus infinity. Then I have a factor of 2. Then the overall factor becomes 1 over alpha prime. OK? And now, let me also make the sum over i explicit, so from 2 to D minus 1, all the transverse directions, and then I sum over m equal to 1 to infinity. And this, I have written down before. This is essentially just this m times the oscillator number, m N m i. OK? So now I have to write down the zero-point energy. So let me call it a zero. So for the zero-point energy, you just do it as for a harmonic oscillator. So this is for the oscillator number. So for a harmonic oscillator, we have N plus 1/2, OK? So for each oscillator, you have 1/2. So a zero must be equal to 1 over alpha prime, sum over i, sum over m, of 1/2 times m. OK? So is this clear to you? Yes? AUDIENCE: So in here, we're not actually summing over i, are we? Is there an implicit sum over i? PROFESSOR: Oh yeah, oh, sorry. Here. Here, a repeated index always means we sum. AUDIENCE: Even though they're both up indices? PROFESSOR: Yeah, yeah. Yeah, because we are working with the Minkowski metric. AUDIENCE: I see. OK. PROFESSOR: OK? So essentially, you have a harmonic oscillator with frequency m. OK? Because that's what each-- yeah, I should have emphasized one step. Let me see. Yeah, I did not write it down explicitly here. But if you remind yourself, when you quantize a free quantum field theory-- now I have erased my harmonic expansion-- in the harmonic expansion, the nth mode has frequency n. OK? Yeah, say-- let me just add it here. So when we write down the mode with, say,
tau minus sigma-- yeah, it'll be some prefactor. So this n is just the frequency of each mode, OK, as a harmonic oscillator. So that's why, here, for the zero-point energy, each mode contributes 1/2 m. OK? So is this clear to you? AUDIENCE: And you can also get that from the commutation relation? PROFESSOR: Yeah. AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah, let me just say, I want to write down the quantum version of this equation. The way I do it is that I write down the quantum version of the harmonic oscillator. Treat each mode as a harmonic oscillator, and then this will be just a sum of an infinite number of harmonic oscillators. And each harmonic oscillator has a frequency n. So each harmonic oscillator has frequency n. OK? So each harmonic oscillator has a frequency n. Then this is the oscillator number times the frequency. And the zero-point piece is just 1/2 times the frequency. Yeah. Clear? Good. And similarly, we can write down the equation for the closed string. Again, a sum over i, a sum over m not equal to zero, or a sum over m equal to one to infinity, of m N m i plus m N m i tilde. OK? And then, again, plus a zero. So this a zero here, it's essentially the same. So now you have this equation. So this is 2 over alpha prime, sum over i, sum over m from 1 to infinity, of 1/2 m plus 1/2 m. So 1/2 m plus 1/2 m because there are two sets of modes here. OK? Yeah, maybe I should write m times 1/2 plus 1/2. Yeah? AUDIENCE: I'm still confused about something. So where exactly did the a zero come from? Where is it defined before? PROFESSOR: No no no. The a zero comes from here-- it comes from rewriting this equation as a quantum equation. So if you look at it here, each one is a dagger and a. So there's an ordering issue here. And there's an ordering issue here. So I'm just giving you a simple trick to find out how to resolve the ordering issue, because each term is like a harmonic oscillator. And then this will be just like, exactly like, a harmonic oscillator. AUDIENCE: OK. So you're just taking all those contributions and packaging it? PROFESSOR: Yeah, that's right, that's right. Yeah, just like you're writing down the energy of a harmonic oscillator. AUDIENCE: Gotcha. OK. PROFESSOR: Yes? AUDIENCE: I'm a bit surprised that these terms contribute to m squared, not to m. They look like [INAUDIBLE] of a-- PROFESSOR: Which one? AUDIENCE: I mean [INAUDIBLE] so the a zero, all the zero-point energies? Intuitively I would have thought that they would add up to-- they would contribute to the mass of the string, not to the mass squared. PROFESSOR: It depends on where your intuition comes from. AUDIENCE: I don't know, it-- is it-- PROFESSOR: Yeah, this is a-- this is in the-- yeah. I think, again, so, so here, I'm assuming the-- yeah, let me just-- I understand where you've-- yeah, let me just explain one more thing. Let me just explain one more thing. So that equation, which I write in that form, m squared equal to that thing. OK? And this should be understood as 2 p plus p minus, minus p i squared-- yeah, minus p i squared-- equal to that thing. OK? And then this can also-- so this essentially gives you p minus, given all the rest. And the p minus is the energy in the light-cone frame. And so, you are just computing the energy, but in the light-cone frame. AUDIENCE: Why does that give p minus, sorry? PROFESSOR: Yeah. I'm just saying this equation itself-- think about how we derived this equation.
It comes from here. It comes from here. Yeah, I'm doing it a little bit fast today. So it comes from here. So this equation should be considered as a constraint equation which you solve for p minus. And p minus is precisely the energy in the light-cone frame. Yeah, this is precisely the energy in the light-cone frame. Yeah. That's why. Yeah. So is it clear? AUDIENCE: I need to think about it a little bit. PROFESSOR: Yeah. And so, that's, yeah. Yeah. This is a good question. I should have emphasized this point a little bit earlier, in the jump from here to there. Yeah. You should think of this-- so this is an equation which you can imagine solving for p minus. And p minus is the energy in the light-cone frame. And then all these contributions on the right-hand side can be considered as contributions to the light-cone energy. And then we're just repackaging it in terms of the mass term. Yeah. So now we have a problem. Because if you look at the zero-point energy-- so any other questions? AUDIENCE: So the mass squared, the mass of the string [INAUDIBLE]. PROFESSOR: The mass of the-- no no no no. That's not the way you should think about-- sorry, say it again? AUDIENCE: Because you said p minus is the energy, and then. PROFESSOR: p minus is the light-cone energy. AUDIENCE: Light-cone, yeah. PROFESSOR: Yeah. So the light-cone energy-- yeah, the light-cone energy. Yeah, this is the light-cone energy. Yes? AUDIENCE: But then is-- what he was asking is why the contribution from the zero-point energy is-- you are contributing to the m squared, right? So. PROFESSOR: Yeah. My answer is that the zero-point energy is contributing to the light-cone energy. It's contributing to the light-cone energy. Yeah, we just repackage it in terms of the mass-squared term. AUDIENCE: One more comment. That [INAUDIBLE] in the [INAUDIBLE] case and that you will find the [INAUDIBLE] p minus. That is [INAUDIBLE] interpretation. PROFESSOR: Yeah. Just say, because we are going to the light cone, and in the light cone, the p minus is the energy. And then we repackage it into the mass-squared term. Is it clear? Yeah, so what he was asking is-- so his intuition is right, because, at least in the light-cone gauge, you can interpret it-- so this zero-point energy is contributing to something like an energy, rather than something like a mass squared, even though this equation looks like a mass-squared equation. Yeah. That was my explanation to him. Yeah? AUDIENCE: So one other question. If indeed there is an infinite constant [INAUDIBLE], like there is in normal quantum physics, how does string theory get around this issue of-- so it seems that you're still looking at this issue of infinite energy in a vacuum. PROFESSOR: We're not going into that. Yeah, we're not going into that. So this is a standard issue. So now we see that you have-- so now you see you have a sum over an infinite number of modes, and each mode contributes 1/2 m. So each mode contributes 1/2 m. So this is apparently an infinite answer. OK? But now we have a trick. But now we have a trick. The trick is this sum is actually equal to-- AUDIENCE: Oh my gosh. [GIGGLING] PROFESSOR: So we have a trick. And the trick is that this-- we equate it to minus 1/12. OK? So there are many ways to justify this thing. But I will not do it here. I will only do it in one way. One way which is the quickest way, but maybe it's the most unsatisfying way physically. But it is, mathematically, the quickest way. OK? And this is the so-called zeta function regularization.
So typically-- so a trick we often use in physics is that when you get an infinite quantity, what you do is you slightly change the form of your quantity so that it becomes finite, and then you manage to take the limit to go back to the original quantity. And yeah, let me first just write down the answer for you, and then I'll talk about the philosophy. [LAUGHTER] PROFESSOR: So the key is the following. Let's define a function called the zeta function. So this is the so-called Riemann zeta function. And this is the sum over n from 1 to infinity of 1 over n to the power s. OK? So as you all know, this sum is convergent only for s greater than 1. And it's a famous fact that when s is equal to 1, this is logarithmically divergent. OK? So this function has a pole there. So what you can do is you can do the sum for s greater than 1, and then you can regard it as an analytic function of s-- use this as a definition of the function. And this function is well defined for s greater than 1. And then you find this function has a pole at s equal to 1, because at s equal to 1 this is the logarithmically divergent sum 1 plus 1/2 plus 1/3, et cetera. But this function-- or, I can do it here-- but this function has an analytic continuation. This analytic function can be analytically continued beyond s equal to 1 to smaller values of s. In particular, you find it can be analytically continued to minus 1. So minus 1 is the situation we are looking at here: with s equal to minus 1, this is equal to that. And this function, when you continue it to minus 1, you find it's actually finite, and given by minus 1/12. OK? So this is the rationale for the manipulation. So this is very similar-- this is very similar to the dimensional regularization you used in quantum field theory. You see, certain integrals are divergent-- say, certain integrals are divergent in four dimensions. And then the trick we do in quantum field theory is that we promote the dimension to a variable. Rather than equal to 4, we call it a general d. And then, for that general d, there exists some range of d for which the integral is finite. And then you can evaluate it as an analytic function of d, and analytically continue to d equal to 4. And sometimes that just gives you a finite answer. And sometimes it still gives you a divergent answer, and then you need to do a regularization. But sometimes when you do that trick, you just get a finite answer, and then there's even no need to do a regularization. The trick just works. So this is exactly the same as that trick. This trick is exactly in the same spirit as that. But of course, this is only a mathematical trick. And to justify it physically, you have to think about the-- to justify it physically, it's the same as the philosophy for regularization in general quantum field theories. It's that this system has many, many high-energy modes. So the divergence comes from very large m-- from the very, very high-energy modes. And the justification is that the low-energy physics, which is what the zero-point energy is related to, should be insensitive to the physics of those high-energy modes. Then you can subtract the infinity, et cetera. So that philosophy is the same as what you normally do when regularizing quantum field theories. OK. Any other questions here?
AUDIENCE: So can we say that the true physical theory doesn't have any divergences, but it's just this effective theory that we work with that has divergences, which we then, kind of, get rid of by these tricks? But the real physical theory shouldn't have any divergences? PROFESSOR: So you just say the physics-- say this particular quantity does not depend on the details of your UV physics. So this divergence itself comes from a particular assumption about the UV physics. But you can modify your UV physics in a certain way, and this particular answer should be independent of those details of the UV physics. Yeah. So this is the standard philosophy behind regularization. And so you can justify this answer in many different ways. You can also justify it-- so you can also just put some constant in there, and then impose some other self-consistency conditions, and you can determine this answer. Anyway, there are many ways to derive this answer. And this is one way-- the quickest way of doing it-- so we just do it here. So any other questions? Good. OK. So now, from here, we can find that for the open string, the a zero is just equal to minus d minus 2 divided by 24, times 1 over alpha prime, for the open. And then for the closed string, it is equal to minus d minus 2 divided by 24, times 4 over alpha prime. Closed. OK?
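Putting the regularization together with the zero-point sums written earlier, the computation being summarized is, schematically (the prefactors follow the open and closed string expressions quoted above):

\[
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}\quad (\mathrm{Re}\,s > 1),
\qquad
\zeta(-1) = -\frac{1}{12},
\qquad\text{so}\qquad
\sum_{m=1}^{\infty} m \;\longrightarrow\; -\frac{1}{12},
\]
\[
a_0^{\rm open} = \frac{1}{\alpha'}\,\frac{D-2}{2}\sum_{m} m = -\,\frac{D-2}{24\,\alpha'},
\qquad
a_0^{\rm closed} = \frac{2}{\alpha'}\,(D-2)\sum_{m} m = -\,\frac{D-2}{24}\,\frac{4}{\alpha'} .
\]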
MIT_8821_String_Theory_and_Holographic_Duality_Fall_2014
5_Black_Hole_Thermodynamics.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HONG LIU: OK, so let me first summarize what we did at the end of last lecture. So we saw that Rindler space-- Minkowski space written in Rindler coordinates-- can be separated into four different patches. In particular, there's a left patch and a right patch, and this is a constant-rho slice in the Rindler coordinates. What we showed is that the Minkowski vacuum, the standard vacuum we define in Minkowski quantum field theory, can be written as an entangled state when we express it in terms of the Hilbert spaces of the left and the right patch. So this is a sum over all possible eigenstates n, and the E_n and n are the eigenvalues and eigenvectors of the Rindler Hamiltonian, which we called H_R. And similarly for the L label: an R means the eigenvector in the right patch, and an L means the eigenvector for the left patch. And now when you trace out, for example, the left patch-- suppose you're only interested in the physics in the right patch-- when you trace out the left patch, then you find a thermal density matrix for the right patch. And this thermal density matrix is the exponential of minus 2 pi H_R divided by the partition function, and so you conclude that the temperature is 1 over 2 pi. So that's what we did at the end of last lecture. So there are several key elements here. The first key element is that the Minkowski ground state turned out to be a particular kind of entangled state between the left and the right patch. And then you get a thermal density matrix when you trace over the left patch. So these are the two basic elements. So any questions on this? Yes. AUDIENCE: [INAUDIBLE] left patch. Is there any physical meaning to the left patch? HONG LIU: It has the same physical meaning as the right patch. For the observer in the right patch, of course, you don't see the left patch, and so that's why you get the thermal density matrix-- yeah, that's why you have to integrate it out. Yeah, it plays a very important physical role. AUDIENCE: [INAUDIBLE] observable [INAUDIBLE] HONG LIU: This is just pure Minkowski space. Yeah, this is just pure Minkowski space. This is Minkowski space. So this kind of observer can only observe the right patch, so for this kind of observer you must integrate out all the physics in the left patch. So that's why they see thermal physics. Yeah, so this is just a physical explanation of why the physics is thermal. AUDIENCE: Why do we need to introduce a harmonic oscillator during [INAUDIBLE] H_R can be anything? I forgot the reason why we introduced the harmonic oscillator. HONG LIU: Oh, I just gave you a simple example to explain this kind of physics, in case you're not familiar with it-- a simple example to build up your intuition. The reason we considered the harmonic oscillator is not an important one. If you quantize a free scalar field theory-- any free quantum field theory on this space-- it just reduces to harmonic oscillators. So the harmonic oscillator in fact has the exact same physics as the general quantum field theories we consider. Any other questions?
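To see the "trace out the left patch and get a thermal density matrix at T = 1/(2 pi)" statement in the smallest possible setting, here is a numerical sketch for a single Rindler mode of frequency omega, modeled as a truncated harmonic oscillator. The entangled state of the form sum over n of exp(-pi n omega) |n>_L |n>_R is taken as given from the lecture; the particular omega and truncation level are arbitrary choices made here for illustration.

```python
# Sketch: the entangled state for one Rindler mode, |0> ~ sum_n e^{-pi n omega} |n>_L |n>_R,
# traced over the left factor, gives a thermal density matrix rho_R ~ e^{-2 pi H_R}
# with H_R = omega * n (zero-point energy dropped), i.e. temperature 1/(2 pi).
import numpy as np

omega, nmax = 0.7, 60                      # mode frequency; truncation level
n = np.arange(nmax)

c = np.exp(-np.pi * n * omega)             # entangled-state coefficients on |n>_L |n>_R
c /= np.linalg.norm(c)
psi = np.diag(c)                           # psi[nL, nR] amplitude matrix

# Reduced density matrix of the right patch: rho_R = Tr_L |psi><psi|
rho_R = psi.conj().T @ psi

# Thermal density matrix e^{-beta H_R} / Z at beta = 2*pi
beta = 2 * np.pi
rho_th = np.diag(np.exp(-beta * omega * n))
rho_th /= np.trace(rho_th)

print(np.max(np.abs(rho_R - rho_th)))      # ~0 (machine precision): the two agree
```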
AUDIENCE: May I repeat my question? The right patch, the [INAUDIBLE] patch, corresponds to the exterior of the black hole, right? HONG LIU: In the black hole problem, the counterpart of the right patch is exterior over the black hole. That's right. AUDIENCE: So the upper patch is the interior? HONG LIU: That's right. AUDIENCE: So what are the kind of [INAUDIBLE]? HONG LIU: The left patch is [INAUDIBLE] asymptotical region over the black hole, which is also outside the horizon. So the extended black hole has two asymptotical regions. Yeah. AUDIENCE: So is there any kind of [INAUDIBLE] way with the left patch [INAUDIBLE]? HONG LIU: Yeah, as I said. Yeah, I'm going to talk about this a little bit. I'm going to talk about black hole in a little bit. Any other questions? Good? So let me make some further remarks. The first is that this Minkowski vacuum-- this entangled state between the left and the right-- is invariant under the [? axim-- ?] Or maybe I should say-- let me just say it. It's invariant under HR R minus HR left. By invariant under this I mean-- so this state is [INAUDIBLE] by this, and this invariant under any translation generated by this combination. I'm not using very precise English, but I think you understand what I mean. Anyway, so this you can see immediately from there. I just [INAUDIBLE] this harmonic oscillator example, and if you act this on that-- so this R means Rindler, and this R means the right patch and left patch. So act this on that state, then they just get an E because of the minus sign. They just canceled. And so this minus sign is related. What I said last time is that you can think of the left patch half the time running over the direction. By time I mean this [INAUDIBLE], OK? So what this operator gener-- it does is the generator flow-- there's a generate flow in eta. So the eta goes running to so this goes running to [? comes ?] in the low surface. And the flow in the right quadrant is going up, but in the left quadrant it should go down so the time should be moving [? normally ?] direction, and that's what this minus sign means. And then this transformation, we'll leave this state invariant. And you [? graph ?] the same thing like we did before for the harmonic oscillator. Any questions on this? So you can immediately see from that equation this operator [INAUDIBLE] that state. So this can also be-- so do you have any problems saying that the HR generated translation along this? Is this clear to you? AUDIENCE: So the [INAUDIBLE], the time [INAUDIBLE] are opposite. Does it mean that any other physical is the opposite time and [? changing weight ?]? HONG LIU: It doesn't matter what you mean. It doesn't matter how you interplay, physically. Right now, I'm talking about the mathematical statement. I want to first understand the mathematical statement. So this mathematical statement has two layers. Let me write explicitly. This means i HR R minus HL, HR left acting on 0N with reach any eta this is invariant. This is invariant. This, you can just see directly from that the fact this [INAUDIBLE] that state. And then that means that this thing leaves this thing invariant. Under the action of this guy, this operator, from the point of view of the red patch because one thing, you generate the translation in the eta because HR is the [INAUDIBLE] for the eta, so this just generates the translation in eta, which leaves rho invariant. 
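The claim that the combination H_R(right) minus H_R(left) annihilates this entangled state can be checked in the same truncated single-mode model. This is a sketch under the same assumptions as above (one mode of frequency omega, number-operator Hamiltonian), not a statement about the full field theory.

```python
# Sketch: the boost-like generator K = H_right - H_left annihilates the state
# sum_n e^{-pi n omega} |n>_L |n>_R, because both factors carry the same n.
import numpy as np

omega, nmax = 0.7, 40
n = np.arange(nmax)

c = np.exp(-np.pi * n * omega)
c /= np.linalg.norm(c)

psi = np.zeros(nmax * nmax)          # state on H_L (x) H_R, index = nL*nmax + nR
psi[n * nmax + n] = c                # only the diagonal |n>_L |n>_R components

H = np.diag(omega * n)               # single-mode Hamiltonian (zero point dropped)
I = np.eye(nmax)
K = np.kron(I, H) - np.kron(H, I)    # H acting on the right factor minus the left

print(np.linalg.norm(K @ psi))       # 0.0 -- the state is invariant under the boost
```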
And this slice I'm showing here is the constant, the rho slice, so that means this generates a translation along that arrow direction, moving to positive time. And this minus sign means that, in the left patch-- because [INAUDIBLE] transformation we are moving [? in the only ?] direction. And then that operation, we will leave this state invariant. Yes. AUDIENCE: So one question. So sometimes people have this interpretation-- I'm not sure if this picture is correct-- but in the black hole picture, we interpret the bottom thing as being some sort of white hole, which sort of spews out things. So if we reverse time, then does it sort of become a black hole again because now it's-- HONG LIU: No, no. We are talking about completely different things. Here, I'm just talking about-- in the left and in the right you can choose whatever time direction you want. I'm just making a statement that said, if I make this kind of operation-- do a time translation in the positive time direction, but in the opposite time direction in the left-- that particular operation leaves the this state invariant. For physical applications you can choose whatever time direction you want. You can choose whatever time direction you want. And this is a mathematical statement saying this particular operator leaves this state invariant, and this particular operator half of the interpretation of generate opposite time translation in the left and the right patch. So this is an algebraic statement, but this can also be seen geometrically. This can also be seen geometrically. So if you work out the relation between the Minkowski's coordinates and the Rindler coordinates, you can actually immediately just see is that, geometrically, eta translation is a boost. in x, T. So what that does, it just generates a boost. So if this is not immediately clear to you, just go back and try to look at a translation between the two coordinates. Then you will see it immediately. So in other words, this HR actually generates a boost. This Hamiltonian essentially generates a boost. This Rindler Hamiltonian generates a boost. One second. Let me finish. So, clearly, by definition, the Minkowski vacuum is invariant on the boost. So this statement is essentially the statement that this thing is invariant on the boost. And then you can see this negative sign. Then you can see the negative sign from the fact that if you make a Lorenz boost-- and this is the trajectory of the Lorenz boost, and it acts off the direction in the left and in the right. So that's why there's a negative sign here. So this negative sign goes [? bonds, ?] so the geometric statement that when you make a Lorentz boost in the Minkowski plane, and the action on the left and on the right is in the opposite direction, so the same boost will take a point here to there, but we'll take a point here to here. And you can check yourself. And this translates into an algebraic statement-- just become this statement. And this statement is the same-- that this is invariant on the boost. So this is the first remark. And the second remark is that, if we expand the field phi R in the right patch in terms of a complete set of modes, just ask, what do you normally do when you do canonical [INAUDIBLE]? In the right patch, say-- so this defines a [INAUDIBLE] and the creation operators for the theory in the right patch. And, similarly, you can do it in the left patch. And then you can show, just based on that relation-- just based on that relation, you can show-- so let's consider the freescale [? 
of ?] [? field ?] [? series ?] so you can show that this-- just as the harmonic oscillator example we discussed before-- and this Minkowski state can be written as a [? squeezed ?] state in terms of [? their ?] vacuum. And now you have to take the product of all possible modes, and only the j is the frequency for each mode. So this is a precise analog of the harmonic oscillator example, just because each set of modes gives you a harmonic oscillator. So you just take the [INAUDIBLE] product of all these harmonic oscillators, and then you have this relation. Also, very similarly, the usual Minkowski creation and the relation operators are related. So this ajr, ajl by Bogoliubov transformations, just as the harmonic oscillator example. So we are allowed to write this transformation explicitly. But he's saying here, just direct generalize of the harmonic oscillator example because the field series is just a bunch of harmonic oscillators. A field series is just a bunch of harmonic oscillators. Any questions regarding this point? Yes. AUDIENCE: So, just to check-- so the right only affects the right patch and its identity everywhere else. HONG LIU: Sure. AUDIENCE: So that the combination leaves the bottom and the top portion just fixed. So the combination leaves the bottom and the top quarter fixed? HONG LIU: You don't get back to the-- yeah, when we-- yeah, so this is-- so this operation itself does not direct-- so this operation itself does not direct access to the top and bottom portion. And this is a [? trajectoral ?] for them. So if you have a point there, just take them to the [? hyperbolic ?] trajectory. So we're not taking to there. AUDIENCE: Right. And the top and the bottom are just-- it's [INAUDIBLE]. So it keeps them fixed? HONG LIU: No. When you define the Hilbert space, you only define for the left and the right because they don't define Hilbert space. The Hilbert space defines the given time slice, so that only includes the left and the right. And those particles run into future evolution, and that evolution is not controlled by them because they only take you along the hyperbolic trajectory. Any other questions? Good? So the third remark is that all of the discussions generalizes in complete parallel to Schwarzschild space time In particular, we said before, the Schwarzschild time have the falling space time causal structure. You have a whole-- so this is your regional region outside the horizon, but then you can extend this part of the space time into four regions. In particular, you have two-- again, you have R and L-- two asymptotical regions. So this way the R goes to infinity, and similarly this way, also, R goes infinity. So, again, in this [INAUDIBLE] vacuum state, which it can be defined from going to create the [? signature ?] and the [? compact ?] [INAUDIBLE] phi this tau. Again, corresponding to an entangled state between the left and the right. And if you ignore the left, again, you'll get a similar state from the right. So the story's completely in parallel. It's what we discussed before. The only different thing is the technicalities that, of course, in the specific metric are different. The specific metric are different. And in particular, this [INAUDIBLE] vacuum can be obtained by doing Euclidean paths integral. Again, will be the half space, so this is the tau direction. Again, you do the half space, and the times S2. It's the same thing exactly as we did for the Rindler. You do the half space and when you're interpreting in terms of this tau [? 
foliation ?], which is angle, then you get this entangled state. You get this entangled state. So any questions on this? Yes. AUDIENCE: Just to clarify. So this entangled state is really nothing more than a mathematical trick to help us? Or should I think about it physically? HONG LIU: No, this is a fact. This is not a mathematical trick. If we're grabbing that case, the Minkowski vacuum is entangled state. If you write it in terms of the Hilbert space over the left and the right patch. This is a mathematical fact. This is not a mathematical trick. AUDIENCE: Well in the sense that I'm perfectly also allowed, I can create something which, in my patch, gives me the same description-- if I just think of a perfectly thermal state, and I don't even have to think about that being en entangled with anything. Is that also OK? HONG LIU: Oh, sure. Yeah, but I'm just telling you-- if you are observer in the right patch, of course you will never see anything on the left patch. I'm just giving you a physical explanation. Where does that physically-- where does that thermal [? nature ?] come from? AUDIENCE: Sure, OK. Yeah. I just wanted to clarify that. AUDIENCE: Is this Minkowski [INAUDIBLE] defined in the upper patch? HONG LIU: Sorry? Yeah. So you're talking about this upper patch? AUDIENCE: Yeah. HONG LIU: So the stories are [? falling. ?] In the standard [? QFT, ?] in the Minkowski spacetime, you define whatever your states at [INAUDIBLE] equal to 0, then you move this time. Then, of course, that will include this part. And then the standard Minkowski time evolution, in terms of capital T, will include this part. But if you do the Rindler time evolution, then you will not involve that part. Yeah. Is that what you are asking? AUDIENCE: So it's not defined in the upper patch? HONG LIU: Hmm? AUDIENCE: So it's not defined in upper patch? HONG LIU: No. It's not-- it just does not access those informations, because the time translation is always like this. The time translation will always-- will never take you there. You have to ask, what is your time translation? So in quantum mechanics, you define an initial state and then you have a [? tone ?] intake you wove into future time. And in-- then, depend on which time you use, then cover different regions of the Minkowski spacetime. If you use the standard Minkowski time, capital T, then that will cover everything. If you only use the Rindler time, then that only covers this region or this region. Any other questions? Good. The fourth remark. So now, let me call this-- so this is r and l and then this f. OK? So let me call this region f. So this story, experienced perfectly in this Schwarzschild spacetime, why an observer in infinity will see, say, thermal radiation. But actually, this does not apply to the real-life black holes, because the black hole formed by gravitational collapse only have the right and the future region. You don't have the left region. You don't have the left region. OK? So this discussion does not apply. OK? This discussion does not apply. But this is only one of the ways to derive that the black hole have a finite temperature. In fact, the Hawkings original derivation, just by considering scalar field, just by considering quantizing scalar field alone, in the right patch, and he already did deduce the thermal nature. So even though this particular discussion does not apply, all our conclusion, all our conclusion does apply. OK? Our conclusion does apply, including the finite temperature, et cetera. OK? 
So now, to interpret, where does this temperature come from? And later we will say, actually, the black hole not only have a temperature, it can satisfy all the thermodynamics. So in this case, to interpret where this temperature come from is physically more intricate. OK? So I will not try to do it now. But later, when we talk about the duality, and then that will be a better place to go back to this question. And then we can ask the precise difference between these two cases. Between the case which you have all patches, and the case which you only have two pat-- only have two regions. And that they are, actually, physically fundamentally different. Physically fundamentally different. But the reason those conclusion applies is because the temperature, in fact, is a state that can be made by the local observer. For local observer, outside the horizon, he's not going to tell the difference between this metric and the exac-- and the almost identical metric in outside horizon. Outside the horizon, Schwarzschild metric is a perfect one. So locally, he will not tell any difference. So that's why the local observer should always see the temperature. If you see it in one case, you will see it in the other case. OK? But the underlying physics will turn out to be very different. Yes? AUDIENCE: How are we defining temperature here? Are we defining it by means of energy, like we do [INAUDIBLE]? HONG LIU: Yeah. Yeah. Yeah. Yeah, here, we define it in terms of the density matrix. AUDIENCE: Oh, it's defined by that equation? HONG LIU: Right. Yeah. So the density matrix-- so if you have a density matrix like this, which z is traced explain [? to ?] [? matters ?] beta h, then you say the temperature is one over beta. So that's how we define temperature in quantum statistical physics. Yeah. Yeah. Yeah. So this defines a canonical example for you. And this beta is the temperature, the [INAUDIBLE] temperature. OK. So any questions on this? I hope not, because I want to discuss this later, not now. AUDIENCE: Sir? HONG LIU: Yes? AUDIENCE: If we hold up a thermometer like this, should we get some different temperature than if we let it fall? It's accelerated right now. Can this has been measured? HONG LIU: No. Because it's too low, the temperature. Right. Yeah. Yeah, because our temperature would be much bigger than this temperature you are able to-- yeah. Just the fluctuation of air in this room will create fluctuations in temperature which much higher than that kind of temperature. AUDIENCE: In space, can be a vacuum. HONG LIU: In the space, you also have to do very precise measurement. It's-- yeah, you have to calculate it. h bar is very small. AUDIENCE: Divide by the mass. HONG LIU: Yes? AUDIENCE: To-- constructing on that same question. So the same thermometer that is being held in position. So if I observe it while sitting down here, so I'm accelerating with it, I'm like the Rindler observer, and I would see it at a certain temperature, right? Although it's very small, so we haven't been able to measure everything. But if I am, instead, free-falling while the thermometer is being held in place, would I then not see-- would I then not measure a temperature? Would I measure temperature? HONG LIU: Yeah. Yeah. You wouldn't be able to see a temperature. Yeah. Free-falling. Although won't see-- AUDIENCE: So the thermometer is being held-- so the thermometer is accelerating? HONG LIU: No. Thermometer is also free-falling. No, if you free-fall, the thermometer also free-fall. 
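To put numbers on "it's too low, the temperature": restoring the factors of c and k_B that are set to 1 in the lecture, the temperature seen by a uniformly accelerated observer is T = hbar a / (2 pi c k_B). Here is a quick sketch; the accelerations chosen are just examples.

```python
# Sketch: Unruh temperature T = hbar * a / (2 * pi * c * k_B) for a given
# proper acceleration a, with c and k_B restored.
import math

hbar = 1.054571817e-34      # J s
c    = 2.99792458e8         # m / s
k_B  = 1.380649e-23         # J / K

def unruh_temperature(a):
    """Temperature (K) seen by an observer with proper acceleration a (m/s^2)."""
    return hbar * a / (2 * math.pi * c * k_B)

print(unruh_temperature(9.8))      # ~4e-20 K -- hopelessly small to measure
print(unruh_temperature(1e20))     # ~0.4 K -- enormous accelerations are needed
```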
AUDIENCE: No, no, I mean-- he holds the thermometer, and he is sitting in his chair, but I'm the one who measures the reading on the thermometer, and I'm free-falling. HONG LIU: Yeah, that's an intricate experiment. Then we have to analyze that. So the photon from his thermometer then will somehow go into your eyes, and you will analyze it. Then whatever is the reading on his thermometer, you will see it. Yeah. No, you're not doing any measurement. You just see the reading on his thermometer. If his thermometer has a temperature, then you will see a temperature. AUDIENCE: Right. HONG LIU: Yeah, you are not doing a measurement yourself. AUDIENCE: So, OK. So now, bringing it back to non-gravitational physics, flat spacetime. I'm a [? neutral ?] observer, and I see a thermometer accelerating by me. I would therefore see it as reading a certain temperature? The thermometer itself isn't a measurement. You need to have something to measure yourself. HONG LIU: Yeah, it's not [INAUDIBLE]. AUDIENCE: Sorry? HONG LIU: No. No, the thermometer-- whatever thermometer is doing, what you are doing-- you just read the thermometer. And it has nothing to do whether the thermometer has a temperature. Thermometer maybe have a temperature due to some other reason. Whether you have-- what you could do is just read that thing. Yeah. HONG LIU: OK. So let me continue. OK. So now, we found a black hole has a temperature. So then, you only need to take a very small leap of faith. Saying, if this guy has a temperature, then it should satisfy thermodynamics. OK? Then we say, this must obey thermodynamics. OK? Also obeys thermodynamics. Black hole. And then, you immediately deduce there should be entropy, because now you can just apply the first law. For example, we have the thermodynamical relation dSdE should be 1 over t. OK? Say, if we think of t as a function of e, then by integrating this equation, we should deduce what is the entropy of the black hole. So remember the t of the black hole is 1 over-- remember, the t of the black hole, the t of the black hole. Yeah, let me do it here. t b h of the black hole is h bar kappa divided by 2 pi, which is the h bar divided by 8 pi g n times m. So this would be just 8 pi g n times m h bar. Of course, you identify the mass of the black hole with its [? image. ?] So you identify them. So now, you can just integrate this. You can just now become a trivial exercise. Then you find s e is equal to 4 pi g n e square divided by h bar. And the plus integration constant. OK? And this integration constant we can say to be zero, because if the black hole have zero mass, of course there's nothing there. And so then, we just have this formula. So this formula can be written a little bit more geometrically. So also remember, the black hole-- the Schwarzschild radius is 2 g n m. OK? So this can be written a little bit in the geometric way. So this can be written in terms of s 4 pi r s square divided by 4 h bar g n. OK? Now g n goes to the downstairs, because the r s contain two powers of g n. So this is given by the horizon area of the black hole, divided by 4 h bar g n. OK? So now we conclude-- we conclude just a bit-- let's connect these two formulas together. The temperature of the black hole is h bar times the surface gravity divided by 2 pi. And the entropy of a black hole is the area of the black hole, area of the horizon of the black hole, divided by 4 h bar g m. So as I said before, for the black hole, there are two very important geometric quantities. 
One is kappa, surface gravity. The other is the horizon area. And then they enter in a very-- in a nice and simple way, into the temperature and the horizon-- the entropy of a black hole. And let me call this equation one. This is an important equation. So let me just note one thing. This temperature is rather funny, if you look at that formula, because the mass is in the downstairs. Say, if you increase the mass, then the temperature actually decrease. This is actually opposite to your everyday experience. OK? Because when we increase the mass, the black hole temperature decreases. OK? If you calculate the, say, specific heat, the specific heat is smaller than zero. OK. So we will later see this is actually an artifact of a black hole in asymptotically flat spacetime. So here, we're considering black holes in asymptotically flat spacetime. If you think about black holes, say, in the spacetime like [INAUDIBLE] space, and then actually, the temperature will go up with the mass. As in the ordinary story. OK? So this is just the third remark. So another third remark is that this equation, these equations tend not to be universal. So we derive it to the simplest Schwarzschild black hole, but actually, those relations apply to all black holes have been discovered. Just apply to every black hole. OK? So now let me talk about general black holes. So we are mostly just making some statements. Because most of the statement I make here, they are highly nontrivial. And each statement may take one lecture to prove, or even more, so I will not really prove them. I just [? coat ?] them. And I can also not prove it on the spot. So first is something called the no-hair theorem. So no-hair theorem says that stationary-- stationary's a key word-- and asymptotically flat-- this is also a key word-- black hole is fully characterized by the first, mass. Second, angular momentum. Third, conserved gauge charges. OK? So the Schwarzschild black hole we talked about corresponding to a special case, which angular momentum is zero, and the conserved charge-- yeah, conserved charge-- for example, the electric charge. For example, electric charge. OK? And so-- yeah, so let me just give some-- so typically we denote mass by m, and angular momentum by j, and the electric charge by q. So the Schwarzschild black hole goes one into the j equal to zero, and the q equal to zero, but more general black holes you can have both j and the q. So for our-- in string theory, there can be many, many different gauge fields. So in string theory, actually there are many, many different charges. So black holes, in string theory, can have many, many more charges than, say, just in the standard model. Just in the standard model. Yeah. Just for all those black holes, this equation one still holds. OK? Yes? AUDIENCE: So in proving this theorem, we're going to kind of start off with a certain definition of what is a black hole and what isn't a black hole. So what is the key feature that defines it? Because there's lots of metrics. And some of them are characterized by only these three things, and some are not, and-- HONG LIU: You must have an event horizon. AUDIENCE: Just the presence of some horizon? HONG LIU: Yeah. So event horizon. Yeah. AUDIENCE: OK. But in Rindler we have an event horizon. HONG LIU: Hmm? AUDIENCE: In Rindler we have an event horizon that's not a black hole [INAUDIBLE]. HONG LIU: That's true. You should at least have object. You should have some mass. You should have some quantum number. 
AUDIENCE: I could be accelerating next to the [INAUDIBLE]. HONG LIU: So, no, no, no, no, no. Rindler is called observer horizon. It's observer-dependent horizon. And in the black hole, it's not. AUDIENCE: So you cannot go to any frame where there is no [INAUDIBLE]. What if you are free-falling? HONG LIU: Hmm? AUDIENCE: What if you are free-falling? HONG LIU: No no no. Yeah, what I'm just saying that the-- here, here the horizon, you can-- different observers have different horizons. AUDIENCE: So you can [INAUDIBLE] the horizon out of the picture? HONG LIU: So here, the horizon is arbitrary. It depends on your observer. Even though I draw here, but-- this does not have to cross the origin. This can be anywhere. It's anywhere. Just here, there's no [? generating ?] a horizon. AUDIENCE: But if I'm free-falling into a black hole, I also don't have a horizon, right? HONG LIU: No. AUDIENCE: Because I'm causally connected [INAUDIBLE]. HONG LIU: That's true, but independent of that, there's the spacetime structure, there's a horizon there. In the spacetime structure of the Minkowski space, there's no horizon. In order to talk about [INAUDIBLE] horizon, you have to talk about specific observer to have specific motion. Yeah, I write down the Schwarzschild metric. And the different [INAUDIBLE] in the way. There is already an event horizon. AUDIENCE: How do you write down a metric in a [INAUDIBLE] invariant way? HONG LIU: No, I'm just saying the motion of the horizon is a [INAUDIBLE] variant. AUDIENCE: OK. Yeah. But if I'm in a frame that's free-falling, then I don't see a horizon, right? Isn't that [INAUDIBLE] which transported me to a frame where there is no horizon? HONG LIU: Maybe let me say this. If this can make you a little bit happier. To a [? symptotic ?] observer, there's a horizon. There's a different [INAUDIBLE] invariant horizon. AUDIENCE: OK. HONG LIU: Yeah. Yeah, yeah. Yeah, if you don't want to fall into the black hole, for those people, there's a horizon. [LAUGHTER] And the difference from here, if you fall through the horizon, nothing happens. So this no-hair theorem is remarkable, because it says if you have a star which collapsed to form a black hole, in this process, all the features of the star were lost. Because black hole only are characterized by those numbers. OK? But star, you can characterize in many, many other different ways. But the black hole, essentially, don't have any features. So this is called no-hair theorem. All features. Yes? AUDIENCE: What if I add something like [INAUDIBLE] scalar field or other field into that [INAUDIBLE]. [INAUDIBLE] theorem? HONG LIU: Yeah. Yeah. The story is a little bit more complicated. Let's try to not go into that. There's something called a secondary [INAUDIBLE], et cetera. Yeah. It can. It can, but this is refers to Einstein. But if you go to the frame of Einstein plus-- Einstein plus some meta-field, and then this as a statement is true. Any other questions? OK. So now-- so historically, of course, people did not discover the temperature first. So historically, people first have the no-hair theorem. So it seems like black hole is completely featureless. OK? It's a very boring object that don't have any feature. And then, people discovered the so-called four laws of black hole mechanics. For general black holes-- for general stationary black holes, again. So the zeroth law-- again, we just [? coat ?] them. [INAUDIBLE] is gravity. Kappa is constant over the horizon. 
The first law is that if you change the mass of the black hole a little bit-- OK, imagine you throw something into a black hole and change the mass of the black hole a little bit-- then you find such a relation: dM equal to kappa over 8 pi G_N times dA, plus Omega dJ, plus Phi dQ. Here A is the area of the horizon, J is the angular momentum, and Omega is the angular velocity of the horizon-- so if the black hole has angular momentum, it will be rotating, and Omega is that angular frequency. And Phi is the electric potential. So if the black hole has a charge, it carries an electric field, and this is the electric potential at the horizon. And in this old notation, you always normalize the electric potential at infinity to be 0. So this is the first law. It just says that if you change the mass of a black hole, and change its angular momentum and its charge a little, then to first order they satisfy this relation. So this is purely mechanics-- classical GR. This is pure, classical GR. And then there's a second law, which is also classical GR: the horizon area never decreases. And the third law says the surface gravity-- let me just call it kappa-- this surface gravity kappa of the black hole cannot be reduced to 0 in a finite number of steps. AUDIENCE: What do you mean by number of steps? HONG LIU: A finite number of procedures. AUDIENCE: Like? HONG LIU: Say each time you throw a particle into the black hole, that's called a step. AUDIENCE: Sure. OK. Thermometer, like [INAUDIBLE]. HONG LIU: That's right. Yeah. This is called a step. Yeah. Right. OK. So all of these are classical statements. And so this illustrates the second law: for example, if you throw something into a black hole, the black hole area will increase. And if you collide two black holes, they will merge into a bigger black hole, and the area of this bigger black hole will be larger than the sum of the areas of the two black holes. Yeah. So of course these four laws then immediately become just like the four laws of thermodynamics once you make the identification in equation one. With that identification, this just becomes the four laws of thermodynamics. In particular, in the first law, if you substitute the kappa and A by the temperature and the entropy, then this just becomes the standard first law: dM equal to T dS plus Omega dJ plus Phi dQ. So this is really the first law of thermodynamics. So historically, these four laws of mechanics were actually discovered before Hawking radiation. First they discovered this no-hair theorem. And then they discovered these four laws of black hole mechanics. Then they said, ah, this really looks like thermodynamics. And they even patterned these four laws precisely after the four laws of thermodynamics. But they could not imagine the black hole was a thermodynamic object. They could not imagine the black hole was a thermodynamic object. If you look at the old papers-- there was a very famous paper by Bardeen, Carter, and Hawking, which discussed these four laws of black hole mechanics-- they said these four laws of black hole mechanics should actually transcend the standard thermodynamics. The black hole actually should transcend all of this, our traditional physics. But in 1971 or 1972, a young graduate student called Bekenstein-- he was a graduate student at Princeton.
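The identification between the first law of black hole mechanics just stated and the thermodynamic first law can be checked explicitly for the Schwarzschild case, where J = Q = 0. Here is a symbolic sketch using sympy, in the same conventions as the lecture (c = k_B = 1, G standing for G_N); it also confirms the negative specific heat mentioned earlier.

```python
# Sketch: Schwarzschild black hole with r_s = 2*G*M, A = 4*pi*r_s^2, kappa = 1/(4*G*M).
# Check that T*dS/dM equals kappa/(8*pi*G)*dA/dM (thermodynamic vs mechanical first
# law), that both equal 1 (i.e. dM = T dS), and that the specific heat is negative.
import sympy as sp

M, G, hbar = sp.symbols('M G hbar', positive=True)

r_s   = 2 * G * M
A     = 4 * sp.pi * r_s**2
kappa = 1 / (4 * G * M)

T = hbar * kappa / (2 * sp.pi)                    # Hawking temperature
S = A / (4 * hbar * G)                            # Bekenstein-Hawking entropy

print(sp.simplify(T * sp.diff(S, M) - kappa / (8 * sp.pi * G) * sp.diff(A, M)))  # 0
print(sp.simplify(T * sp.diff(S, M)))             # 1  ->  dM = T dS
print(sp.simplify(1 / sp.diff(T, M)))             # -8*pi*G*M**2/hbar < 0 (specific heat)
```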
So he was studying under a guy called Wheeler, studied under Wheeler. And so he was very uncomfortable with the fact that if you throw something into a black hole, than that thing is gone. So he was a very uncomfortable with that. Because if you throw something to a black hole, it's gone, then he concluded that's really the second law of thermodynamics. Because if you throw to a black hole, that thing is gone, then the entropy associated with that thing is gone. And the black hole is just black hole, have this no-hair theorem. And then you violate the second law of thermodynamics. People like Wheeler or Hawking, they say, ah, this is great. Black hole transcend the thermodynamics. But Bekenstein was uncomfortable. He thinks thermodynamics should transcend black hole. And then based on the second law of the black hole, then he said, so maybe-- so he wrote a series of papers, a few papers. I don't remember. He say, if we think black hole has entropy proportional to the area, then the second law of thermodynamics can be saved. Because this area of level decrease, and if you throw something to a black hole, even thought that, the entropy associated to that guy's lost. But that area also increased. The area also increased. And then you can say it's the second law of thermodynamics. Now, he actually proposed to generalize the second law of thermodynamics. So he proposed a generalized second law. He said, you take the total, if you take the total entropy of the system to be that of the black hole, and then matter field outside the black hole, then this ds total must be non-decreasing. Of course, now if we accept, if we take this leap of faith to really think black hole as a sum or object, then of course this generalized second law has to be true because the thermodymaics-- some object. But when Bekenstein proposed it, it was really bizarre to say in a nice way because it just sounded crazy, just outright crazy. Because how can black hole have entropy? Black hole absorb everything. Just how can it have entropy? And it just completely was disregarded by people. And it was discarded by people. And anyway, but then now that he was right, then of course after Hawking's discovery of a Hawking radiation, they become very natural for the black hole to have entropy. And in particular, this formula, after you determine that the black hold have this temperature, after you fix this pre-factor, then you can also, just from the first law, just from here, just from here, to fix that pre-factor of a black hole. So Bekenstein could not decide what's the proportional constant. But once you get the temperature, then this proportional constant just uniquely fixed from this equation, so without using that. Just using the first law of mechanics, then you can immediately integrate that as entropy. Anyway-- AUDIENCE: How did Bekenstein find-- HONG LIU: It's a postulate. It's a gas. Yeah. He just postulate, if we imagine-- AUDIENCE: He didn't derive it? HONG LIU: No there's no way to derive it. He was just saying, if you imagine black hole has entropy proportion to the area, than the second law of thermodynamics can be saved. And he wants to save the second law of thermodynamics. Yes. AUDIENCE: So one question. So this makes sense classically. If I have a system of entropy, I throw them to a black hole. Entropy is like ignorance. If I throw ignorance into the black hole, maybe there's somehow the black hole also becomes more ignorant or something. 
But what does this mean in terms of quantum mechanics when I have a pure state? From quantum statistical mechanics, substance the entropy, you just don't know something about your state. And it's your fault. And it's not like the black hole should car if it's your fault or not or something. So why does this makes sense quantum mechanically? Maybe it doesn't. HONG LIU: You mean, why does black law have entropy make sense quantum mechanically? AUDIENCE: Yes. HONG LIU: It's the same thing as-- this room has entropy. This room we use quantum statistical physics. If you believe black hole is an ordinary object, then-- AUDIENCE: We only quantum statistical physics because we're ignorant, the full state of-- HONG LIU: Yeah. For a black hole, we are also ignorant. Yeah. This is actually something I'm going to talk about now. Any other questions? AUDIENCE: I have something [INAUDIBLE]. So basically, I think it can be [? a healing ?] experiment. So where we hear things of [INAUDIBLE]. There is one unit of entropy, and it enters black hole. It increased the energy of black hole, which increased the mass of black hole, which increased the [INAUDIBLE] of the black hole. So in this way you can actually derive the semi-qualitatively derived out of proportionality. HONG LIU: Yeah. To derive with these precise [? cohortions-- ?] no, I don't think they derive the [? cohortion ?], but I need to check. I don't think there's any way to derive the-- you can, say, put some bond on the [? cohortions ?]. But you cannot derive the [? cohortions ?]. I think that argument would not enable you to derive the [? cohortions. ?] Anyway, so let me just mention a few more things about black hole. This is just pure-- actually, I'm running out of time. So let me just mention some paradox or paradoxes for the black hole. So we have shown that the black hole is some object. So Jordan just asked. But we know the ordinary thermodynamics has statistical physics behind it. So the immediate question is actually, does the black hole entropy, for example, have a statistical interpretation? So this is one question. And another question is that, does black hole actually respect quantum mechanics? Does black hole respect quantum mechanics? So if black hole entropy have a statistical interpretation, then this give you a very-- so it black hole have a statistical interpretation that means-- so A, if affirmative, that implies that each black hole, even though black hole at a macroscopic level only is characterized by these three things, But macroscopically, must have internal states. Or, maybe I should call it macrostate of order to the entropy, which is the A of the black hole area 4h bar e. So hidden behind this no-hair theorem is actually a huge number of macrostates. Just like the air in this room, even though we describe the unit temperature, pressure, and the energy stature, but given that macroscopic data, they can be huge number of macro states. And then a similar thing should happen to the black hole. And in order to see that the black hole does have, a statistical interpretation, then you have to find so many states for a black hole and in order to answer the question A. So this question has to be answered in the affirmative for many different type of black holes. Say in string theory and also [INAUDIBLE] spacetime, and they see the spacetime. Using string theory method or using this holographic duality, we will see examples later. We will see examples later. 
So these really confirms that the black hole is really a quantum statistical system. So regarding this question B, then this long time paradox-- so this A has also been a paradox for many years, and was only resolved-- the basic calculations were able to do only in 1996, when the [INAUDIBLE], they did some very special [? Schwarzschild ?] black hole, which they counted this exact number of states. They counted the exact, this number of states, and about for a very specific type of black hole. So B is rated to the so-called Hawkins information paradox, information loss paradox. So I will not have time to go into detail here. Let me just give you a very rough version of it. So you can see the pure state, could see the star, a big star in the pure state. So we know from gr that a sufficiently massive star will eventually always collapse to form a black hole. So if we imagine you have a pure star in the pure state collapse to form a black hole, and if quantum mechanics is preserved throughout the process, then this black hole should also be a pure state. So that means the black hole should be just one of all those max number of possible states. There should be a pure state, but only one of those all possible states. But then Hawkins had an argument saying this is impossible. Because if black hole is a pure state, then when black hole evaporates-- so the black hole evaporates, eventually the black hole will be gone. So the one funny thing about black hole is that the temperature is inverse proportionate to the mass. So when the black hole is big, then the temperature is low, then the radiation is small. But when you start radiate, then the black hole mass will decrease. And then the temperature will be higher than we radiate mole. So it will be acceleration process. And eventually-- presumably black hole will be gone. So this kind of semi-classical argument that we are given applies only for a massive black hole much greater than the planck mass, only much larger than a planck mass. So below planck mass, what happens? Nobody knows. But at least this radiation statement should be robust for the mass much, much larger than the planck mass. Now Hawking then say this is a paradox because according to his calculation, the radiation is simple. And we know that the sum of radiation does not contain any information. It cannot contain information but it's pure state, because sum or radiation is information free. And so the sum of radiation which come out, come out until you reach say the mass of all the planck mass. And here we reach the planck mass. And then before you reach that mass, because the radiation is simple, there can be no information can come out. And when you reach that planck mass, it just becoming possible for such a huge amount of internal state to be encoded in the planck mass object. So he concluded that the information must be lost, and the black hole must violate mechanics. So this is a very heuristic argument. But I highly suggest, if you are interested, to go read his original paper, which is very beautiful. Because he was really trying to think of black hole as a ordinary quantum mechanical object. And the way he was thinking about it is really very nice. And actually, it's not very different from we are thinking about black hole right now, from the holographic duality. He was really thinking that's it's a quantum mechanical object. But then he reached this paradox. Anyway, so this paradox had bothered the people for more than 30 years. 
So he discovered the Hawking radiation in 1974. I think he proposed this paradox in 1976. So for more than 30 years, people argue with each other what is going to happen. It's typically divided into two camps. So the gr people, they think black hole is everything, quantum mechanics nothing. And the black hole must be able to violate quantum mechanics and will bring us to a new frontier we never see before. And the particle, these people are saying, a black hole-- oh, we can even creating the accelerator-- must obey quantum mechanics. So people would just argue with each other and without really setting the question in a very convincing way to either camp. But this holographic duality, in the context of holographic duality, then the black hole [INAUDIBLE] spacetime, then you can actually rephrase this question about the black hole information laws. And the holographic duality strongly suggests-- I think it's maybe not really completely proved-- strongly suggests at least that the black hole is just a ordinary quantum mechanical object. We are not transcend the quantum mechanics. We are not transcend the quantum mechanics. Yeah. I am really out of time. Yes? AUDIENCE: So you said the star is in a pure state. But how can that be, because it has a temperature and it's also thermal system? So how can you put it in a pure state? HONG LIU: No. Black hole does not have to be in the thermal state. No. I can certainly imagine a star, which is in the pure state. In real life, maybe it's hard to construct them. But on the paper, I can do it. [LAUGHTER] In principle, so how many atoms in the star? I don't know. Maybe 10 to 100? Now let's imagine there's 10 to the 100 atoms. AUDIENCE: But it's radiating all the time though. Doesn't it entangle with things and make it not pure? HONG LIU: Don't worry. Don't worry. I can certainly write down a wave function for 10 to the 100 particles, which is in the pure state. And this will be a big object. And according to the rule of gr, this thing will collapse to form a black hole. AUDIENCE: What about the light that it's emitting, which it has-- HONG LIU: No. There's no light. No. We work with zero temperature, just pure state. There's nothing. There's nothing. AUDIENCE: No temperature? Zero temperature? HONG LIU: Yeah. In principle, I can do that. AUDIENCE: In that case, will the black hole have the temperature, Hawking temperature? HONG LIU: Yeah. The black hole will have Hawking temperature. AUDIENCE: But it's still in it's first state? HONG LIU: The black hole will have a Hawking temperature, will have-- similar, have entropy, but will be a pure state. So this is the essence of the information paradox. So this is the essence of the information paradox. And then we will be able to explain it, why this is so, using the holographic duality. AUDIENCE: So as the star collapses, it gains a non-zero temperature. It starts at zero temperature, and then collapses, and becomes [INAUDIBLE]. HONG LIU: Yeah. It can. Yeah. It seemingly have a long zero temperature. Yeah. Yeah. Maybe let's stop here.
MIT_8821_String_Theory_and_Holographic_Duality_Fall_2014
4_Physical_Interpretation_of_Black_Hole_Temperature.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at [email protected]. PROFESSOR: OK, let us start. So maybe start by reminding you what we did in the last lecture. So you take the black hole metric so we explain that you can actually extend-- so this is the other black hole horizon-- so you can actually extend the region outside the horizon to actually four total regions for the four black hole spacetime. And then this [? tag, ?] this is a singularity. And of course, this is only the rt plane, and then you also have [? s2. ?] So now [INAUDIBLE], let me show that in order for the metric to be regular at the horizon into this when you go to Euclidean signature then you want to identify Tau to be periodic with this period. So in essence, when you go to Euclidean signature, then this rt plane becomes, essentially, a disc. And this Tau is in the angular direction, and then the rho, this r is in the radial direction. And then the region of this disc is the horizon. The region of the disc is the horizon. So when you say, yeah, so this Tau has to be periodic. So that means if you fit any theories in this spacetime, then that theory must be at a finite temperature with inverse of this, which gives you this [? H ?] kappa divided by beta. And the kappa is the surface gravity. So you can do a similar thing with the [INAUDIBLE]. And the [INAUDIBLE] is just 1/4 of a Minkowski, which [? forniated ?] in this way. This is a constant of the rho surface, and this is a constant of the eta surface. And, again, [INAUDIBLE] in eta to Euclidean. And in order for the Euclidean need [? congestion ?] to be regular, the region, then the theta have to be periodical in 2pi. And then this theta essentially, again, becomes an angular direction. And then this implies that the local observer in the window of spacetime must observe a temperature given by this formula, OK? Any questions regarding that? Good, so today we will talk about the physical interpretation of this temperature. OK? So let me just write down what this temperature means in words, so I will use black hole as an example. We can stay exactly parallel [INAUDIBLE]. It says if you consider a quantum fields theory in the black hole spacetime, then the vacuum state-- so when you put the quantum fields theory, say, in the spacetime, always ask, what is the vacuum? Then the vacuum state for this curve t obtained via the analytic continuation procedure where the analytic continuation procedure from Euclidean signature is a thermal equilibrium state with the stated temperature, OK? So emphasize when you want to talk about temperature, you first have to specify what is the time you use. So this temperature refers to this particular choice of time, and so this is the temperature corresponding to an observer while using this time. And then this is the time, as we said, corresponding to observer leaving at infinity. And so this would be the temperature observed by the infinity. So similarly, this temperature is the temperature for given [? with ?] the observer at some location of rho. So this is for local observer. If you just talk about the eta, then the temperature is just h divided by 2pi because theta is periodical in 2pi. And if you ask what is the temperature associated with eta, it's just h pi divided by 2pi. 
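As a small symbolic companion to this summary (a sketch, not the lecture's own derivation): for a static metric with g_tt = -f(r) and g_rr = 1/f(r), the surface gravity is kappa = f'(r_h)/2, and smoothness of the Euclidean section fixes the period of tau to 2 pi / kappa, hence T = hbar kappa / (2 pi). For Schwarzschild, with f = 1 - 2 G M / r:

```python
# Sketch: surface gravity and Euclidean time period for Schwarzschild.
# kappa = f'(r_h)/2 for f(r) = 1 - 2*G*M/r; regularity at the horizon requires
# tau ~ tau + 2*pi/kappa (with hbar = 1), i.e. T = hbar*kappa/(2*pi).
import sympy as sp

r, G, M, hbar = sp.symbols('r G M hbar', positive=True)

f = 1 - 2 * G * M / r
r_h = sp.solve(sp.Eq(f, 0), r)[0]                 # horizon radius: 2*G*M
kappa = sp.simplify(sp.diff(f, r).subs(r, r_h) / 2)
print(kappa)                                      # 1/(4*G*M)

T    = hbar * kappa / (2 * sp.pi)                 # hbar/(8*pi*G*M)
beta = 2 * sp.pi / kappa                          # Euclidean period (hbar = 1): 8*pi*G*M
print(sp.simplify(T), sp.simplify(beta))
```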
So any questions regarding this statement? So now let me [INAUDIBLE] is going to be abstract, so let me elaborate a little bit, and let me make some remarks. So the first is that the choice of vacuum for QFT in the curved spacetime is not unique. So this is a standard feature. This is a standard feature, so if you want to define a quantum fields theory in the curved space time because, in general, in the curved space time, there's no prefer the choice of time. So in order to quantize here, you have to choose a time. So depending on your choice of time, then you can quantize your theory in a different way. And then you have a different possibility of what is your vacuum. So in other words, in the curved spacetime, in general, the vacuum is observer dependent. So of course, the black hole is the curved spacetime, so the particular continuation procedure we described corresponds to a particular choice, to a special choice, of the vacuum. In the case of a black hole, this has been called to be the Hartle-Hawking vacuum because the Hartle-Hawking first defined it from this Euclidean procedure. And for the Rindler spacetime, time so we see a little bit later in today's class that this is actually-- the choice of vacuum actually just goes one into the standard of the Minkowski way vacuum. reduced to the Rindler patch. So of course, you can actually choose the vacuum in some other way. For example, say if for a black hole, instead of periodically identify Tau with this period, we can choose not to identify Tau at all. Suppose for a black hole, in Euclidean signature we take Tau to be uncompact. So we don't identify. So this, of course, runs into a different Euclidean manifold. Then that corresponds to a different Euclidean manifold. And, again, you can really continue the Euclidean theory on this manifold back to [INAUDIBLE] to define our vacuum. And that gives you a different vacuum from the one in which you identified Tau. So this gives you a so-called Schwarzschild vacuum. Or sometimes people call it Boulware vacuum because Boulware had worked on it in the early days before other people. And, in fact, this Schwarzschild vacuum is the most natural vacuum. In fact, this is the vacuum that one would get by doing canonical connotation in the black hole in terms of this Schwarzschild time t. So let me just emphasize, so this is a curbed spacetime. You can just put a quantum fields theory in this spacetime, and then there's a time here which, actually, the spacetime is time dependent. So you have a time-independent Hamiltonian [? so ?] respect to this time. And so, in principal, you can just do a straightforward canonical connotation. And then that connotation will give you a vacuum. And that vacuum would corresponding to the same as you do analytic continuation to the Euclidean. And don't compact Phi Tau because there's nothing special. Just like when you do a 0 temperature fields theory in the standard Minkowski spacetime. But when you compactify Tau, then you get a different vacuum. It is what would be now called a Hartle-Hawking vacuum, and it's what you get a finite temperature. Is this clear? Yes? AUDIENCE: So in what sense are these different vacuums physical or not? PROFESSOR: Yeah, I will answer this question in a little bit. Right now, I want you to be clear. Right now we have two different vacuum here corresponding to two different ways of doing analytic continuation. And one way you compactify Tau, and one way you don't compactify Tau. 
And the way you don't compactify Tau is actually the most straightforward way. If you just do a straightforward canonical quantization using this time, then that will be the vacuum you get. So that's why this is called the Schwarzschild vacuum, OK? AUDIENCE: So do you have a similar thing in the Rindler case? PROFESSOR: I will talk about that. So similarly for the Rindler case, again, if you do straightforward canonical quantization in this spacetime with eta as your time, then when you go to Euclidean signature, you don't compactify the Euclidean version of eta. Then you won't compactify theta. So if you take theta to be uncompact, then you get the so-called Rindler vacuum. So it can, again, be obtained this way. So then you ask, why do we bother to define those other vacuums? Why do we bother to make this identification, given that this is the most straightforward thing to do when we consider a quantum field theory in curved spacetime? So here is the key, and this is where the geometric consideration becomes important. Again, let me first use the black hole as the example. In the Schwarzschild vacuum, the corresponding Euclidean manifold is singular at the horizon. As we explained last time, when we do this continuation, the spacetime is regular only for this periodic identification. For any other choice, you will have a conical singularity at the horizon. So this implies the Euclidean manifold is singular at the horizon. Say, if you compute correlation functions there, then you can have singular behavior at the horizon. In particular, when you analytically continue back to Lorentzian signature, that means physical observables are often singular at the horizon. For example-- which actually I should maybe have given as a problem but then forgot earlier-- you can check yourself: just take a free scalar field theory. You compute the stress tensor of this free scalar field theory, and you find the stress tensor of this theory blows up at the horizon in the Schwarzschild vacuum. And this is not the case for the vacuum we obtained by doing the analytic continuation this way-- this is not the case for the Hartle-Hawking vacuum. So Hartle-Hawking vacuum is just the name for the vacuum we obtained by this analytic continuation procedure, for which all observables are regular, essentially by definition, by construction. Essentially by construction, because the Euclidean manifold is completely smooth, and if you want to compute any correlation function, you can compute it in the Euclidean signature and then analytically continue back to the Lorentzian signature. Because everything is smooth at the horizon on the Euclidean side, the continuation back also gives you a regular function at the horizon there. So similar remarks apply to the Rindler space, and the Rindler vacuum will be singular at the Rindler horizon. It will be singular at the Rindler horizon. Now, from general relativity, the horizon is a smooth place. The curvature there can be very small if you have a big black hole. So physically, we believe that these Hartle-Hawking vacuums are more physical than, for example, the Schwarzschild vacuum. But the Schwarzschild vacuum is still often used for certain purposes. But if you want to consider these physical observables, then the Hartle-Hawking vacuum should be the right one to use if you don't want to encounter singularities at the horizon.
Any questions? Yes? AUDIENCE: [INAUDIBLE]? PROFESSOR: No. AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah. Yeah, because, for example, the Euclidean tau is non-compact. AUDIENCE: So the singularity of the physical observables is the reason to go from the Schwarzschild to the Hartle-Hawking vacuum? PROFESSOR: Yeah, that's right. AUDIENCE: [INAUDIBLE]? PROFESSOR: I would say this is the physical reason. If I think about it, I might find other, not-so-essential reasons, but at the moment I cannot think of any others. Yes? AUDIENCE: So if I understand correctly, when you quantize a theory in curved spacetime, you have to choose a space-like foliation of your spacetime, and then you quantize it on that foliation, basically. So the problem is that I still don't understand the different ground states that you get from different possible quantizations based on different foliations. To what degree are they compatible versus incompatible with each other? Are they at odds with each other? Is one more real than another one, or is it just an artifact of your coordinates, and there are things we can do which are independent of which one we end up with? PROFESSOR: Yeah, so first, in general, they are physically inequivalent to each other. For example, in this case, the Schwarzschild vacuum is physically inequivalent to the Hartle-Hawking vacuum. And actually, with a little bit of work, one can write down the explicit relation between them. From the perspective of the Schwarzschild vacuum, the Hartle-Hawking vacuum is a highly excited state, and vice versa. And for local observables, it depends—sometimes a certain vacuum is more convenient for a certain observer, and another vacuum is more convenient for another observer. Sometimes it depends on your physical [INAUDIBLE], et cetera. Any other questions? AUDIENCE: So can we not relate these two vacua by some transformation? PROFESSOR: Yeah, there is some transformation between them. AUDIENCE: So are they just the same? PROFESSOR: No, they are different states—completely different states. You can relate them in a certain way, but they're different states. AUDIENCE: If they are different, then why don't we just say one is wrong? Because, for example, if it's singular, it's not physical—why don't we just discard it? PROFESSOR: I'm sorry? AUDIENCE: Why don't we-- PROFESSOR: Yeah, physically we do discard it. Physically we don't think the Schwarzschild vacuum is the right one—but there is an assumption here. The assumption is that we believe the physics is non-singular at the horizon. This is a basic assumption, and if that's the assumption, then you should abandon the Schwarzschild vacuum as the physical choice of vacuum. Then, if we really do experiments in the black hole geometry, what you will discover are the properties of the Hartle-Hawking vacuum. But if our physical assumption is wrong—if a black hole horizon actually is singular—then maybe the Schwarzschild vacuum will turn out to be right. In this case we actually don't know; we cannot really do experiments with a black hole, so we cannot really check it. AUDIENCE: But we can measure the black hole temperature from infinitely far away, and in the Schwarzschild vacuum it would be 0. PROFESSOR: No, but you cannot measure it. We have not been able to measure it. AUDIENCE: And what about the [INAUDIBLE]? PROFESSOR: Yeah, first produce the black hole first.
[LAUGHTER] Yeah, we will worry about it after we have produced the black hole. AUDIENCE: But if they didn't radiate, they would not decay at all. PROFESSOR: Yeah, then you may also not see it. Anyway, this can be considered a physical interpretation of the temperature: insisting on regularity at the horizon forces us to be in this Hartle-Hawking vacuum, and the Hartle-Hawking vacuum is like a thermal state. OK, so now let me explain why it behaves like a thermal state. So let me go to the second thing I will explain today: the physical origin of the temperature. Again, I could use either the black hole or the Rindler example—the mathematics is almost identical—but I will use the Rindler example because the mathematics is slightly simpler. So, using the Rindler example, I will explain (A) that this choice of periodically identifying theta with theta plus 2 pi corresponds to the choice of the Minkowski vacuum. Then the second thing, (B), is that I will derive this temperature using a different method. That's what I will do today. Here, so far, we just read the temperature directly from the period of tau; now I will really derive it as a thermal density matrix, and then you will see that this temperature is indeed the temperature which appears in the density matrix. Any questions? So these two together also amount to the following statement—this is an important statement, so let me just write it down. A and B are equivalent to the statement that the standard Minkowski vacuum, where you do your quantum field theory, appears to be a thermal state—with temperature 1 divided by 2 pi, or T equal to hbar divided by 2 pi, in terms of eta; it depends on which time you use—to a Rindler observer of constant acceleration. So A says that with this choice of theta periodic in 2 pi, we are actually choosing the Minkowski vacuum. And the second statement says that the standard Minkowski vacuum, which appears to be at zero temperature to an ordinary Minkowski observer, appears to be a thermal state to a Rindler observer. OK, so that's the physical content of the things we will show. Any questions on this? Yes? AUDIENCE: [INAUDIBLE]? PROFESSOR: It's actually a tricky question. In some sense, they don't really belong to the same Hilbert space. When you talk about quantum field theory, it's a little bit tricky when you have an infinite number of degrees of freedom. Yeah, but one can write down the relation between them. AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah, one can write down a relation between them, and then if you take the norm of that vacuum, you'll find that it's actually infinite, because you have an infinite number of modes—it's not possible to normalize that state. OK, so that's what we are going to show. Hopefully, we will reach it by the end of this hour. But before that, we need a little bit of preparation, to remind you of a few things. Once we go through the preparation, the final derivation takes less than 10 minutes. So first: there are actually two descriptions of a thermal state. Let me remind you of them. We will use the harmonic oscillator as an example.
So now let's consider a single harmonic oscillator at finite temperature. The standard way of doing it is that if you want to compute the expectation value of some operator, of some observable, at finite temperature, you just do the standard canonical average: 1/Z times the trace of X e^{-beta H}, where H is the Hamiltonian of this single harmonic oscillator and Z is the partition function. This can also be written as the trace of X rho, with rho the thermal density matrix: the thermal density matrix is 1/Z e^{-beta H}, and Z is just the sum over all states of e^{-beta E_n}. So this is the standard way you do finite-temperature physics, and it of course applies to any quantum system—quantum field theories, et cetera. But there is an alternative way to do thermal physics, which was realized by Umezawa in the 1960s. He said: instead of considering this thermal density matrix, let me consider two copies of the same system. Let's consider two copies of the harmonic oscillator—let's double the system. Then the full Hilbert space for these two copies is H1 tensor H2, the Hilbert space of one system tensored with the Hilbert space of the other, and you have Hamiltonians H1 and H2 associated with each of them—but, of course, the two H's are the same. Then a typical state of the doubled system would be of the form: sum over m, n of a_mn times |m>_1 tensor |n>_2. So a general state of this doubled system looks like this; it's two copies of the same system with no interaction with each other. Now he says: in order to do thermal physics, let's consider a special state, defined as 1 over the square root of Z times the sum over n of e^{-beta E_n / 2} |n>_1 tensor |n>_2. So this is an entangled state between the two systems. The key observation is the following. Take any observable—say this X_1—which acts only on one of the systems, say the first. If you take the expectation value of X between the psi's, then you essentially square the coefficients: you get 1/Z, the two factors of e^{-beta E_n / 2} combine into e^{-beta E_n}, and this just becomes 1/Z times the sum over n of e^{-beta E_n} <n|X|n>. This is exactly the thermal average. Another way to understand why: if we are interested only in system 1, then we can just trace out system 2. Suppose we trace out system 2 of this state—then what do you find? You find exactly the thermal density matrix of system 1. So in other words, the thermal density matrix of one system can be obtained from an entangled pure state of a doubled system. Because we know nothing about the other system, once we trace out the other system we get a density matrix, and this density matrix comes from our insufficient knowledge of the other system. So this is another way to think about thermal behavior. Do you have any questions on this? So the temperature arises due to our ignorance of system 2: if you have full knowledge of both copies, then this psi is just a pure state—a very special pure state. But if you don't know anything about system 2, and you care only about system 1, then when you trace out system 2 you get the thermal density matrix for 1.
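As a quick numerical illustration of this doubling trick (not from the lecture—a small sketch using a truncated harmonic oscillator, assuming the standard thermofield-double state written above, |psi> = Z^{-1/2} sum_n e^{-beta E_n/2} |n>_1 |n>_2), one can check that tracing out copy 2 reproduces the thermal density matrix of copy 1:

import numpy as np

# Truncated harmonic oscillator: E_n = omega*(n + 1/2), n = 0..N-1  (hbar = k_B = 1)
N, omega, beta = 40, 1.0, 0.7
E = omega * (np.arange(N) + 0.5)

# Thermal density matrix of a single copy: rho = e^{-beta H} / Z
w = np.exp(-beta * E)
Z = w.sum()
rho_thermal = np.diag(w / Z)

# Thermofield-double state on the doubled system: coefficients a_{mn} = delta_{mn} sqrt(w_n / Z)
psi = np.zeros((N, N))
np.fill_diagonal(psi, np.sqrt(w / Z))

# Reduced density matrix of copy 1: trace out the copy-2 index
rho_reduced = np.einsum('mn,kn->mk', psi, psi)

print("psi normalized:", np.isclose((psi ** 2).sum(), 1.0))
print("tracing out copy 2 gives the thermal state:", np.allclose(rho_reduced, rho_thermal))

# Expectation value of an operator acting only on copy 1, e.g. the number operator
n_op = np.diag(np.arange(N))
print("<psi| N_1 |psi> =", np.trace(rho_reduced @ n_op),
      "  thermal average =", np.trace(rho_thermal @ n_op))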
So let me make some additional remarks on this. First, even though I'm using the harmonic oscillator, this of course applies to any quantum system, including QFTs. The second remark is that this psi is actually a very special state: it is invariant under H1 minus H2. You can see it very clearly from here: if you act with H1 minus H2 on this state, then, because both n's have the same energy, the two terms just cancel. More precisely, psi is annihilated by H1 minus H2, and it is invariant under any translation generated by H1 minus H2. AUDIENCE: [INAUDIBLE]? PROFESSOR: Yeah, just double the system. It's two copies of the same system. The third remark now relies on the harmonic oscillator. For the harmonic oscillator, we can write this psi in the following form—this you can check yourself; I only write down the answer, and you can easily convince yourself it is true; some of you might be able to see it immediately just from the blackboard. You can write psi as an exponential of a1-dagger a2-dagger, with an appropriate coefficient, acting on |0>_1 |0>_2, where a1-dagger and a2-dagger are the creation operators, respectively, for the two systems, and |0>_1 and |0>_2 are the vacua of the two harmonic oscillator systems. You can easily see it yourself: when you expand this exponential, the nth power acting on |0>|0> gives you |n>_1 |n>_2 with the right coefficient. So this is what is normally called a squeezed state. It tells you that psi is related to the vacuum of the doubled system by a kind of squeezing—it is a squeezed state built on the vacuum. This form is useful for the following reason: based on this third remark, one can show that it is possible to construct two oscillators, b1 and b2, which annihilate psi. The b1 and b2 are constructed by mixing a1, a2 with a1-dagger, a2-dagger—again, you should check this yourself; it's easy with a little bit of algebra—with cosh squared theta equal to 1 over 1 minus e^{-beta omega}, where omega is the frequency of this harmonic oscillator system. So b1 and b2 are related to a1 and a2 by a linear transformation. What this shows is that, while |0>_1 |0>_2 is the vacuum for a1 and a2, this psi is the vacuum for b1 and b2. So then maybe you can see that this corresponds to a different choice of vacuum. As we will see later, the relation between the so-called Hartle-Hawking vacuum and the Schwarzschild vacuum is precisely like this: they just correspond to a different choice of oscillators. In particular, their relation is precisely of this form, because if you consider a free field theory, the free field theory essentially just reduces to harmonic oscillators. OK, so this is the first preparation, and these things are very easy to check yourself because it's just a single harmonic oscillator. Any questions on this? Oh, by the way, this kind of transformation is often called a Bogoliubov transformation. The nontrivial feature of the transformation is that an a-dagger appears in the expressions for b1 and b2. If only a's appeared, then b1, b2 and a1, a2 would have the same vacuum, because they would annihilate the same states. The nontrivial thing is that, because of the a-dagger appearing here, the states they annihilate are completely different, and they are related by this kind of squeezed-state relation.
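Since the explicit formulas were only written on the blackboard, here is a reconstruction of the standard expressions being referred to (a sketch; the coefficient e^{-beta omega/2} and the signs in the Bogoliubov transformation are the usual textbook conventions, assumed here rather than read off the board):

\[
|\psi\rangle \;=\; \frac{1}{\sqrt{Z}}\sum_n e^{-\beta E_n/2}\,|n\rangle_1 |n\rangle_2
\;\propto\; \exp\!\left(e^{-\beta\omega/2}\, a_1^\dagger a_2^\dagger\right)|0\rangle_1 |0\rangle_2 ,
\]
\[
b_1 = \cosh\theta\, a_1 - \sinh\theta\, a_2^\dagger , \qquad
b_2 = \cosh\theta\, a_2 - \sinh\theta\, a_1^\dagger , \qquad
\tanh\theta = e^{-\beta\omega/2} ,
\]
so that \(\cosh^2\theta = \dfrac{1}{1 - e^{-\beta\omega}}\), and one can check directly that \(b_1|\psi\rangle = b_2|\psi\rangle = 0\).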
Yes? AUDIENCE: So is it related to H1 minus H2? PROFESSOR: I'm sorry? AUDIENCE: The operators b1, b2—are they somehow similar to H1 minus H2? PROFESSOR: Not really. You can write down H1 and H2 in terms of b1 and b2—you can certainly do that—but H1 and H2 are also written very simply in terms of a1 and a2. AUDIENCE: If a [INAUDIBLE] transformation, we get-- never mind. PROFESSOR: Yeah, you can find b1-dagger and b2-dagger—just take the conjugate. Good. So this was the first preparation, which is just something about the harmonic oscillator, and it actually generalizes to general quantum systems. The second preparation is that I need to remind you a little bit about the Schrodinger representation of QFTs. Normally, when we talk about quantum field theory, we always use the Heisenberg picture—we don't talk about wave functions. But there is an equivalent formulation of quantum field theory in which you talk about wave functionals, in the Schrodinger picture. For example, consider a scalar field theory. Then the configuration space of this system is the space of phi(x), evaluated at some given time—say at t equal to 0. So let me just write it as phi(x): the configuration space is the space of all possible configurations of phi defined on a spatial slice. That is your configuration space for the quantum field theory, and the Hilbert space of the system is given by all possible wave functionals of this configuration-space variable. If you feel this is a little too abstract, just imagine that space is discrete; then this is just some discrete set of variables, and it becomes identical to an ordinary quantum system. So, two more things to remind you of. First, just as in quantum mechanics, the amplitude to go from the eigenstate of phi with eigenvalue phi_1 at time t1 to the eigenstate with eigenvalue phi_2 at time t2 is given by a path integral: you integrate D phi with the boundary conditions phi(t1) = phi_1 and phi(t2) = phi_2, weighted by the action of this theory. In particular, by taking a limit of this formula, one can write down a path integral representation of the vacuum wave functional. The vacuum wave functional, in this case, is a functional: if you have a vacuum state, you just consider its overlap with the configuration-space eigenstate, which I will denote Psi_0[phi(x)]—this is the vacuum. And this has a path integral representation in which you rotate your time: this is your real time, and this is your imaginary time, which I call tE. You go to imaginary time and do the path integral over negative imaginary time. So this can be written as a path integral over phi(tE, x): you go to Euclidean signature and integrate over all tE smaller than 0, with the boundary condition phi(tE = 0, x) = phi(x), weighted by the Euclidean action. OK, I hope you're familiar with this—this is how we obtain the vacuum wave functional. If it is not familiar, ask yourself: how do you get the vacuum wave function of a harmonic oscillator from the path integral? Yes? AUDIENCE: [INAUDIBLE]? PROFESSOR: You will see it in a few minutes, because I'm going to use those formulas. Yes?
AUDIENCE: So one question: what is the analog of the wave equation for the wave functional? What is the Schrodinger equation for the wave functional? PROFESSOR: The same thing: i partial-t Psi equals H Psi. AUDIENCE: OK. PROFESSOR: Yeah, it just has more degrees of freedom. The quantum mechanics works the same. AUDIENCE: OK. PROFESSOR: So is this familiar to you? If not, I urge you to think about the case of the harmonic oscillator. For the harmonic oscillator, the way you obtain the ground-state wave function from the path integral is that you first continue the system to Euclidean time, and then you integrate the path integral all the way from Euclidean time equal to minus infinity to Euclidean time equal to 0. Let me just write it down: in standard quantum mechanics, if you want to write down the ground-state wave function, that's what you do—you go to the path integral, go to Euclidean signature, and integrate over all tE smaller than 0, with the boundary condition that x evaluated at tE equal to 0 is equal to x. That gives you the ground-state wave function. And the formula above is the generalization of that to quantum field theory—you just replace that x by phi. Good. So now, with this preparation—which is just some ordinary quantum mechanics—let's go back and prove the statement we claimed we were going to prove. We've finished our preparation; now let's come back to Rindler space. So consider a scalar field theory, for example. Let me just remind you again, and write down some key formulas. The Rindler patch is the right quadrant of Minkowski spacetime, bounded by the light cone. Suppose this is X and this is T. Then the Minkowski metric is minus dT squared plus dX squared, and the Rindler patch metric is minus rho squared d-eta squared plus d-rho squared. So this is a rho equal to constant curve, and this is an eta equal to constant line; here is eta equal to minus infinity, and here is eta equal to plus infinity. So the spacetime foliates like this, OK? And when we go to Euclidean signature: for the Minkowski time, of course, it's just T goes to minus i TE—I call this the Euclidean time—and for the Rindler time I take eta to minus i theta; this was my notation before. So in Euclidean signature, the standard Minkowski metric just becomes dTE squared plus dX squared, and the Rindler metric becomes rho squared d-theta squared plus d-rho squared—in particular, with theta identified with period 2 pi. The Euclidean continuations of Minkowski and of Rindler are actually identical: both of them are the full two-dimensional Euclidean space, R^2_E. The standard Minkowski one is just the Cartesian coordinates (TE, X), and the Rindler one is just the polar coordinates (rho, theta). So they just correspond to different foliations—one is the polar coordinates and the other is the Cartesian coordinates. So the remarkable thing is that, even though in Lorentzian signature the Rindler patch is only part of the Minkowski spacetime, once you go to Euclidean signature—if you do this identification of theta with theta plus 2 pi—they have exactly the same Euclidean manifold; they are just identical. And this immediately means one thing: all the Euclidean observables, up to the trivial coordinate transformation from Cartesian to polar coordinates, are identical in the two theories.
AUDIENCE: What is that down there, "are identical"? What's R sub E squared? PROFESSOR: Yeah, this just means the two-dimensional Euclidean spacetime. AUDIENCE: Oh, I see. PROFESSOR: Yeah, let me just write it: R with subscript E, superscript 2—the two-dimensional Euclidean space. So from here we can immediately reach the conclusion we stated earlier. For Minkowski, if you compute the Euclidean correlation functions and then analytically continue to Lorentzian signature, what you get are correlation functions in the standard Minkowski vacuum. So for Minkowski, going back to Lorentzian, the typical observables—the Euclidean correlation functions—just go over to correlation functions in the standard Minkowski vacuum, OK? Yeah, this is just trivial QFT from your high school. [LAUGHTER] But with the statement at the top we reach a very nontrivial conclusion for Rindler: when you do the analytic continuation back to Lorentzian signature the Rindler way, what do you get? You get correlation functions in the standard Minkowski vacuum, but for observables restricted to the Rindler patch—because the Euclidean quantities are exactly the same; we just do the continuation a little bit differently, and doing it the Rindler way you get observables restricted to the Rindler patch. So that tells you that the Lorentzian Rindler correlation functions are the same as the correlation functions in the standard Minkowski vacuum, just restricted to the Rindler patch. OK, now comes our final step. Let's talk a bit more about the structure of the Hilbert space. We want to derive the temperature—but somehow in Minkowski there is no temperature; it's T equal to 0, from your high school—and here we must see a temperature. So where does this temperature come from? Let's look a little bit at the structure of the Hilbert space. Using that picture—let me write it here—the Hilbert space of the Rindler theory is essentially all square-integrable wave functionals of phi_R, the field defined on the right patch. When we define this wave functional, we evaluate the field at a single time slice—the slice eta equal to 0, which is just here. So phi_R is essentially just phi for x greater than 0 at T equal to 0—the right half of the real axis. So this is the Hilbert space of the Rindler theory. And, of course, we can also write down the Hamiltonian of the Rindler theory with respect to eta—this is called the Rindler Hamiltonian. As I said before, you can quantize the theory with this Hamiltonian and construct all the states; let me just label them by n, a complete set of eigenstates of H_R with eigenvalues E_n. In particular, the ground state you get when you do that is what we call the Rindler vacuum. We said before that if you just do straightforward canonical quantization you get the Rindler vacuum, which is different from the vacuum obtained by identifying theta by 2 pi. This one is obtained by straightforward quantization with respect to eta.
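For reference, here is the explicit coordinate change behind the identification used above (the standard Rindler relations, written out since they were only indicated on the board):

\[
T_E = \rho\sin\theta , \qquad X = \rho\cos\theta
\;\;\Longrightarrow\;\;
dT_E^2 + dX^2 = d\rho^2 + \rho^2\, d\theta^2 ,
\]
so with \(\theta \sim \theta + 2\pi\) the Euclidean Rindler space is literally the Euclidean plane \(\mathbb{R}^2_E\) in polar coordinates, and Euclidean correlators computed in the two descriptions are the same functions, just written in different coordinates. (In Lorentzian signature the corresponding relations are \(T = \rho\sinh\eta\), \(X = \rho\cosh\eta\), which cover only the right wedge \(X > |T|\).)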
So now let's go back. That is the structure of the Hilbert space for the Rindler theory; now let's look at the Hilbert space for the Minkowski theory. By Minkowski, I mean the theory defined on the whole Minkowski spacetime, quantized with respect to this capital T. So this is just the standard wave functional Psi[phi(x)], where now x runs over everything, and we also have the Minkowski Hamiltonian defined with respect to capital T—the standard Minkowski Hamiltonian. The vacuum, which I denote |0>_M, is the Minkowski vacuum, and the vacuum wave functional is Psi_0[phi(x)] = <phi(x)|0_M>. Now, the key observation is that this phi(x)—where x runs over the full real axis, the full horizontal axis—contains both phi_L(x) and phi_R(x): the values of phi restricted to the left patch and to the right patch. So the space of configurations phi(x) is the combination of the space of phi_L(x) for the left part and the space of phi_R(x) for the right part. In particular, this tells us that the Minkowski Hilbert space, defined as functionals of the full phi(x), should be the tensor product of the Rindler Hilbert space of the right with the Rindler Hilbert space of the left. Notice that we discussed the right Rindler wedge, but there is a similar Rindler wedge to the left, and the structure of the Minkowski Hilbert space is the tensor product of the two. In particular, the ground-state wave functional can be written as Psi_0[phi_L(x), phi_R(x)]—it should also be understood as a functional of phi_L and phi_R. So now here is my last key point. Remember, the ground-state wave functional is obtained by doing the path integral over the Euclidean half plane, TE smaller than 0, OK? Let me just draw the Euclidean space again: X and TE. The wave functional can be written as the path integral over phi(tE, x), weighted by e to the minus the Euclidean action, over the lower half plane of the Euclidean space, with the boundary condition fixed here at TE = 0. Remember, from the path integral, that's how you obtain it: you path-integrate over all of this region with the boundary condition fixed here, and you get the Minkowski vacuum wave functional. Now here is the key. For this half space, written in this form, I treated TE as the time. But now let's consider a different foliation—I don't have colored chalk here—a foliation in terms of theta. For each value of theta you have a ray in rho, and the lower half plane corresponds to theta running from minus pi to 0. If you think about it in this foliation, then this path integral can be written as follows: integrate D phi(theta, rho), where you fix phi at theta equal to minus pi—where the ray becomes the left half axis—to be phi_L(x), and you fix phi at theta equal to 0—where the ray is the right half axis—to be phi_R(x), weighted by e to the minus S_E. These two path integrals must be the same; I just chose a different foliation to do the path integral. But from this point of view, this is precisely of the transition-amplitude form we wrote before, because theta equal to 0 and theta equal to minus pi are just two different times in terms of the Euclidean Rindler time. So, one more step: this can be written as <phi_2| e^{-i H (t2 - t1)} |phi_1>.
So this last step can now be written, thinking in terms of the Rindler time, as <phi_R| exp(-i times (-i pi) times H_R) |phi_L>, because the separation is pi in Euclidean Rindler time and H_R is the Rindler Hamiltonian—that is, <phi_R| e^{-pi H_R} |phi_L>. So this tells me that the Minkowski vacuum wave functional can be written as Psi_0[phi_L, phi_R] = <phi_R| e^{-pi H_R} |phi_L>. Now let me expand this in the complete set of eigenstates of H_R. If you expand in a complete set of eigenstates of H_R, this becomes a sum over n of e^{-pi E_n} times the corresponding wave functionals, with chi_n[phi] = <phi|n>. Remember, these n's are defined by the Rindler Hamiltonian. Now let me erase this and rewrite it slightly, because I don't like the complex conjugate: Psi_0[phi(x)] equals the sum over n of e^{-pi E_n} chi_n[phi_R] chi-tilde_n[phi_L], where chi-tilde_n is defined to be chi_n-star of phi_L. This chi-tilde_n can be considered—just as in ordinary quantum mechanics—as belonging to a slightly different Rindler Hilbert space whose time direction is opposite: when you switch the direction of time, you put a complex conjugate. So the complex conjugate can be thought of as a wave functional in the Hilbert space with the opposite time direction—opposite to the right-patch Rindler theory we started with. Now this form is exactly the form we have seen before, in the doubled-system way of thinking about a thermal state. It is exactly that, because it tells you that the Minkowski vacuum is the sum over n of e^{-pi E_n} times |n> of one Rindler patch tensored with |n-tilde> of the other. And the other Rindler factor should be considered as the Rindler theory of the left, except that we quantize the theory with the opposite time direction—which gives you this tilde Rindler-left Hilbert space. So this is precisely the structure we saw before. In particular, if you are ignorant about the left Rindler wedge, you can just trace it out—trace out the left part of this ground-state wave functional. Then, of course, you get rho_R, the thermal density matrix in the Rindler wedge, and this is equal to 1 over Z times e^{-2 pi H_R}. So we see that beta is equal to 2 pi—but this is the beta associated with eta. As we said, for an observer at a different rho you have to go to the local proper time; but this is precisely what we found before. So now we understand that the thermal nature seen by the Rindler observer arises precisely because this observer cannot access the physics of the left patch, while the vacuum state of Minkowski spacetime is an entangled state between the left and the right. When you trace out the left part, you get a thermal state on the right part—that is how the temperature arises. And the same thing can be said about the black hole, about which we'll say a little bit more next lecture. And let's stop here.
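As a concrete check of the result just derived (an editorial sketch, not part of the lecture): from beta = 2 pi in units of eta, the local proper temperature seen by an observer at fixed rho follows by rescaling to proper time, T_local = 1/(2 pi rho) = a/(2 pi), i.e. the Unruh temperature; restoring units, T = hbar a / (2 pi c k_B).

import math

hbar = 1.054571817e-34   # J*s
c    = 2.99792458e8      # m/s
kB   = 1.380649e-23      # J/K

def unruh_temperature(a):
    """Unruh temperature (kelvin) for proper acceleration a (m/s^2)."""
    return hbar * a / (2 * math.pi * c * kB)

# In Rindler coordinates an observer at fixed rho has proper acceleration a = c^2/rho,
# so T_local = hbar c / (2 pi kB rho): smaller rho (closer to the horizon) means hotter.
print(unruh_temperature(9.8))    # ~4e-20 K for Earth-gravity acceleration -- utterly negligible
print(unruh_temperature(1e20))   # ~0.4 K, the sort of acceleration needed to notice the effect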
MIT_8821_String_Theory_and_Holographic_Duality_Fall_2014
23_Duality_at_a_Finite_Temperature_and_Finite_Chemical_Potential.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: So last time we talked about the following question: if the boundary CFT is at finite temperature, what is its gravity description? What we described was a thermal theory—say a thermal CFT on flat space—and we showed that this is related to a black hole with a planar horizon. The key point is that the horizon should have the same topology as the boundary space. In terms of the picture, this is your boundary, z equal to 0, and the horizon is at some value z0, with the same topology as the boundary space. And then you can work out the thermodynamics, et cetera. Any questions about that? Good. Previously we also briefly discussed that AdS allows you to put the field theory on a sphere: if you put the field theory on the sphere, that is dual to global AdS. Now you can ask the same question: what happens if we put the boundary theory on the sphere at finite temperature? The story here is much richer, for a very simple reason. If you have a CFT just on R^{d-1}, this Euclidean flat space does not have a scale, and the CFT does not have a scale. So if you put it at finite temperature, the temperature is the only scale, and it essentially provides the unit—an energy unit. This means that for a CFT at finite temperature on flat space, the physics at all temperatures is the same; different temperatures are just related by a rescaling of units. So in some sense the physics is very simple: whether you are at 0.0001 degrees or 10,000 degrees doesn't matter—only the relative scale matters. The physics at all temperatures is the same; there is no difference between low temperature and high temperature, and the temperature just provides the unit. But if you put the CFT on a sphere, the story is different, because the sphere itself has a size. So let's take the sphere to have radius R. Again, when you put the CFT on the sphere, the size of the sphere by itself does not matter—it just provides the unit, since the theory is scale invariant. So let's just take the size to be R—and R can be chosen to be the same as the curvature radius in the bulk, just to make the formulas simple; you could choose any radius. But now, when you put the theory at finite temperature, you have a dimensionless number: the physics is controlled by the dimensionless combination R times T. So now the temperature makes a difference. Different temperatures are genuinely different, because now you have a relative scale: if RT is small you are at low temperature, and if RT is big you are at high temperature.
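To state the scaling argument above as a formula (an editorial sketch—standard dimensional analysis, not something written on the board):

\[
\text{on } \mathbb{R}^{d-1}: \quad \frac{F}{V} = -\,c\,T^{d} \ \ \text{for some fixed constant } c ,
\qquad
\text{on } S^{d-1}_R: \quad F = \frac{1}{R}\, g(RT) ,
\]
so on flat space the temperature dependence is fixed up to a single number, while on the sphere the free energy involves a nontrivial function \(g\) of the dimensionless combination \(RT\).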
Now there is a dimensionless number that characterizes whether you are at low temperature or high temperature. So the physics becomes much richer, because you have a whole parameter the physics can depend on, and indeed the story becomes rather intricate when we consider the finite-temperature theory on the sphere. So let me point out some important features—I will walk you through the physics on the sphere not too slowly, but also not too quickly; we cannot afford to do it very slowly at this point, but if I do it too quickly, of course, you won't learn anything. First: when we discussed the finite-temperature theory on flat space last week, we said there are, in principle, two possible bulk descriptions of the finite-temperature theory. One is that you can have a thermal gas in AdS. But we argued that for the theory on the plane that is not allowed, because you encounter a singularity—the metric becomes singular. The first big difference for the theory on the sphere is that this is now actually allowed: the thermal gas in AdS is now allowed. So let's go through the argument. First, let me write down the global AdS metric. Of course you can write it in many different coordinates; the most convenient coordinates for our purpose are the following. This is just pure AdS, with r running from 0 to infinity, and it is somewhat the analog of spherical coordinates in flat space—but, of course, now you have nontrivial factors which tell you that you are in a curved spacetime. Now, to go to thermal AdS, we do the trick you normally do in thermal field theory: you go to Euclidean signature and make the Euclidean time periodic with period the inverse temperature. Let me also write next to it the analogous metric for the theory on the plane: that is the metric we considered before, for the theory on flat space, while this global one is the metric for the theory on the sphere. We discussed last time that, in the planar case, if you make tau periodic, then as z goes to infinity the circle shrinks to zero size, because the prefactor goes to 0: the tau circle has coordinate size beta, but the proper size is controlled by the prefactor, and as z goes to infinity the proper length of the circle goes to 0. Whenever you have a circle that shrinks to zero you get singular behavior—the metric is singular there—and that is why that configuration is not allowed. But in the global case, when you do this identification, the metric is perfectly well defined, because the local proper size of the tau circle is now given by the prefactor: it is the square root of 1 plus r squared over R squared, times beta. Since that factor is greater than or equal to one, the proper size is greater than or equal to beta—it is bounded from below. The circle never becomes too small; the minimal size of the circle is just beta itself. So this Euclidean metric is perfectly well defined. Contrast this with the planar case, where the Euclidean metric becomes singular as z goes to infinity: for the theory on flat space the thermal gas is not allowed, but in this case it is allowed.
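For completeness, here is the explicit form of the statement above (the standard Euclidean global-AdS metric, written out since it was only put on the board; the AdS radius is denoted R as in the rest of the lecture):

\[
ds_E^2 = \left(1 + \frac{r^2}{R^2}\right) d\tau^2 + \frac{dr^2}{1 + \frac{r^2}{R^2}} + r^2\, d\Omega_{d-1}^2 ,
\qquad \tau \sim \tau + \beta ,
\]
so the proper length of the thermal circle at radius \(r\) is
\[
L(r) = \beta\sqrt{1 + \frac{r^2}{R^2}} \;\ge\; \beta ,
\]
and the circle never shrinks to zero size anywhere—in contrast with the planar (Poincare-patch) case \(ds_E^2 = \frac{R^2}{z^2}\left(d\tau^2 + dz^2 + d\vec{x}^{\,2}\right)\), where \(L(z) = \beta R/z \to 0\) as \(z \to \infty\).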
So for the theory on the sphere the thermal gas is allowed—but there is still also the analog of the black hole solution. You can also find a black hole: you just write down the standard spherically symmetric ansatz for the black hole metric and solve the Einstein equations. Let me just write down the answer: you find that this space also allows a black hole solution, with a function f that is a slight generalization of the previous factor, where the parameter mu is related to the black hole mass, and the horizon is at r = r0, the place where this factor f goes to 0. You can also find the temperature associated with this black hole. For the thermal gas, beta could be anything—there is no constraint on beta—but here you use the standard technique to find the Hawking temperature of this black hole. Let me just write down the answer: beta is 4 pi divided by f prime evaluated at the horizon—the standard formula—so you find the zero of f, take the derivative, and express everything in terms of r0; then you find the inverse Hawking temperature as a function of r0 of the following form. I'm not doing the calculation here; you can check it yourself. Good, any questions so far? So now it seems we have a problem: we have a black hole, and we also have the standard thermal AdS. Which one is the right description of the field theory at finite temperature? It seems like we have a choice. But if you look at this function beta of r0, the story is even trickier than that. Here is the rough behavior of this function: this axis is the inverse temperature, and this axis is the size of the horizon. The size of the horizon essentially determines your entropy—it is really a physical quantity, because the area of the horizon is the entropy—so r0 can be thought of as a stand-in for the entropy. Now, to get some intuition for how beta depends on r0, the horizon size: suppose the horizon size is very small, r0 goes to 0—then the denominator is finite and the numerator goes to 0, so beta goes to 0. When r0 is very large, the denominator is proportional to r0 squared but the numerator is only proportional to r0, so beta again goes to 0, like 1 over r0. And this is a smooth function, so there must be a maximum somewhere—you can easily find the maximum just by taking the derivative of the function. Anyway, that's how beta depends on r0. But now this plot is highly peculiar. Why? Can you tell me why? There's a maximum in beta, which means? That's right—there is a beta max, which tells you there is a minimum temperature. That means the black hole solution only exists above a certain minimal temperature, and you can easily find that minimal temperature by setting the derivative of this function to 0. Is there anything else peculiar about this plot? AUDIENCE: When r0-- PROFESSOR: Hmm? AUDIENCE: When r0 [INAUDIBLE] black hole. PROFESSOR: Well, as r0 goes to 0 the black hole becomes smaller and smaller—it means there's no black hole. AUDIENCE: But the temperature is infinity. PROFESSOR: Hmm? AUDIENCE: But the temperature there is infinity. That's weird. PROFESSOR: No, that's not weird.
No, that's standard—the flat-space black hole behaves that way. Just look at a Schwarzschild black hole in flat space: when the horizon size becomes smaller and smaller, the temperature becomes higher and higher. That's a standard feature of flat space. AUDIENCE: It must be the other way. If you have a huge black hole, then why does beta go down? That seems bizarre. Maybe that can't also be standard, right? [LAUGHTER] PROFESSOR: Both things you said are correct and important, but there is something more elementary that you're not pointing out. You're pointing out the higher-order important things, but there is one lower-order important thing which has not yet been pointed out. AUDIENCE: There are two solutions at the same temperature. PROFESSOR: Exactly. As a function of the temperature, this is double-valued: at a given temperature there are actually two solutions. For any given temperature T greater than T min—that is, for beta smaller than beta max—there are two solutions. We have to label them, so let's very imaginatively call this one the big black hole and this one the small black hole, because one has a larger size and the other a smaller size; they correspond to the two values of r0. So you have two black hole solutions, one with a larger radius than the other—a big black hole and a small black hole. And now, the important thing you mentioned: for the small black hole, when you decrease the size, beta decreases, which means the temperature increases. As I said, r0 is like an entropy, so if you lower the entropy, the temperature increases—when you increase the temperature, you actually lower the entropy of the small black hole. On the small-black-hole branch, reducing the size reduces beta, which means increasing the temperature. But this is actually the standard Schwarzschild black hole behavior in flat space, and it just tells you that the small black hole has negative specific heat. So this is a situation we understand—it's the situation we are familiar with from the flat-space Schwarzschild black hole. And it makes sense why it happens here, because the location of the maximum is roughly at r0 of order R—R is the only scale, so the maximum is controlled by R. So this region corresponds to the size of the black hole being much, much smaller than the AdS curvature radius. And when something is much smaller than the curvature radius, what do you see? Hmm? AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, flat space. It's just like how, even though we live in a curved universe, we actually see flat space, because our scale is much smaller than the curvature radius of the universe. A similar thing happens here: even though the black hole has a size much smaller than the curvature radius of AdS, it essentially behaves like a flat-space black hole, and that's exactly what you see for the flat-space black hole. The more interesting one is the big black hole. For the big black hole, when r0 increases, the temperature increases. So this is the right way around—this is the standard, positive specific heat; you can check that it has positive specific heat. So this is indeed what you would expect from a thermal system.
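To make the two-branch structure concrete, here is a small numerical sketch (not from the lecture). It assumes the standard AdS-Schwarzschild form f(r) = 1 + r^2/R^2 - mu/r^(d-2)—this is what "a slight generalization of the previous factor" refers to—and with beta = 4 pi / f'(r0) one gets beta(r0) = 4 pi r0 R^2 / (d r0^2 + (d-2) R^2), maximized at r0 = R sqrt((d-2)/d):

import numpy as np

d, R = 4, 1.0   # boundary dimension and AdS radius (example values)

def beta(r0):
    # beta = 4*pi / f'(r0), with f(r) = 1 + r^2/R^2 - mu/r^(d-2) and f(r0) = 0
    return 4 * np.pi * r0 * R**2 / (d * r0**2 + (d - 2) * R**2)

r0 = np.linspace(1e-3, 5 * R, 100000)
b = beta(r0)
i_max = np.argmax(b)

print("r0 at beta_max:", r0[i_max], "  analytic R*sqrt((d-2)/d) =", R * np.sqrt((d - 2) / d))
print("T_min         :", 1 / b[i_max], "  analytic sqrt(d*(d-2))/(2*pi*R) =",
      np.sqrt(d * (d - 2)) / (2 * np.pi * R))

# Two solutions at any temperature above T_min: a small and a big black hole
T = 1.2 / b[i_max]   # some temperature above the minimum
# beta(r0) = 1/T  <=>  d*T*r0^2 - 4*pi*R^2*r0 + (d-2)*R^2*T = 0
roots = np.roots([d * T, -4 * np.pi * R**2, (d - 2) * R**2 * T])
print("small / big black hole radii at this T:", sorted(roots.real))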
So now we are encountering an even weirder situation: we not only have thermal AdS, we actually have two black holes. So which one actually describes the field theory at finite temperature? What does this really mean? Number two was the black hole solutions; now let's look at number three: the implications of these solutions. First, you can just guess: what do you think these three solutions should mean, if you believe this duality has to be true? The game we play here is that we believe the duality is always true, and whenever we see some phenomenon on the gravity side, we try to interpret it, to understand it on the field theory side; if it makes sense, we carry the guess further and check it, et cetera. So what would three gravity solutions mean? For example, at a temperature here we can have three solutions, all at the same temperature. What would that mean? The simplest guess is that it tells you that for the CFT on S^{d-1} at this temperature there are three possible phases—just like water, ice, or vapor: three possible phases. With that interpretation, the answer is clear: to decide which is the right solution, we should find—what? That's right, we should find the phase with the lowest free energy. So to decide which is the right one, we just need to find the one with the lowest free energy. Now recall that the partition function on the field theory side should always be identified with the partition function on the gravity side. On the field theory side, the partition function by definition gives log Z equal to minus beta times the free energy. On the gravity side, we compute this in the saddle-point approximation: we integrate over all possible fields on the gravity side weighted by the Euclidean action, and at leading order in the saddle-point approximation you just evaluate it on the classical solutions. So now you can just equate them: minus beta F must be equal to S_E evaluated on the classical solution. This is just the same principle we discussed before; now we just apply it. So if we believe the duality, and we believe thermodynamics, then we should calculate the Euclidean action for all three solutions, find the corresponding free energy for each of them, and the one with the largest S_E will correspond to the one with the lowest free energy. That will be the stable phase, and the other phases will presumably be unstable or metastable, just like in the standard story. So this conclusion was drawn using thermodynamics itself: we identified this with the free energy, and then, according to thermodynamics, we should choose the one with the lowest free energy. But it would be nicer if we could actually derive this conclusion from the gravity side without using thermodynamics—and you can, for a very simple reason. It also follows from the standard rules of the saddle-point approximation, as follows. Oh, I forgot to write this on the other board.
So when you evaluate this path integral in the saddle-point approximation, normally you are instructed to add the contributions of all saddle points. The fact that you have three different solutions tells you that you have three separate saddle points, and then, just from the standard rules of the saddle-point approximation, you add all of them together: you add e to the S_E evaluated on the thermal AdS, plus e to the S_E evaluated on the big black hole, plus e to the S_E evaluated on the small black hole, et cetera. Now, each action sits in an exponential, and, as I said, all these actions are weighted by N squared, because the action is always proportional to 1 over G Newton, and 1 over G Newton is proportional to N squared. So there is a big parameter in the exponential, and the saddle with the largest S_E dominates in the large-N limit: you are adding three exponentials, and whichever is biggest will dominate, because each of them is proportional to N squared and N goes to infinity. Even if one exponent is greater than another by only a tiny bit, the prefactor of N squared makes it completely dominant. So thermodynamics tells us we should find the smallest free energy, and that statement can essentially be derived even if I didn't know it: I can derive that rule—finding the smallest free energy—just by using the standard saddle-point approximation on the gravity side. So this is a nice consistency check of the duality, which we believe to be true. Yes? AUDIENCE: So the one thing I don't totally understand is this: say you give me this Euclidean integral over metrics on the gravity side. The idea that there are really only three contributions—because we found three solutions at a particular temperature—how do we know that there aren't any other solutions, other terms we haven't accounted for? PROFESSOR: Oh yeah—this is the rule of the saddle-point approximation: you find the solutions of the equations of motion with the right boundary conditions, and these are the only ones. And then, of course, each of them includes its fluctuations. This is just the standard saddle-point approximation. Any other questions? So now the task just boils down to—let me just emphasize here—finding which solution has the lowest free energy, OK? So now it's point 3—no, point 4. Let's try to decide which one is more stable. First, let me emphasize that S_E is proportional to 1 over G Newton, and therefore always proportional to N squared—because 1 over G Newton is proportional to N squared if you translate into the field theory language. Now let's evaluate the free energy for each of them. For the thermal gas in AdS—let me just call it TAdS, thermal AdS, for short—this is actually 0: S_E is 0 times N squared, and then fluctuations can contribute at order N to the power 0. The zero at order N squared is simple: it is because, when we go from pure AdS to thermal AdS, we are only changing the global structure.
We just make the Euclidean time circle compact: for pure AdS, when you go to Euclidean space, tau has infinite size, and when you go to thermal AdS we just make tau a compact circle. So you have not changed the solution—the solution is identical to pure AdS; only the global structure is different, in terms of the periodicity of the Euclidean time circle. So if your reference point is pure AdS with classical action 0, then the classical action of thermal AdS will also be 0—the classical action is the same as for pure AdS, OK? So it's 0 at order N squared, but then you can have fluctuations which contribute, and the fluctuations can be interpreted as, say, a gas of gravitons and the other excitations. So the thermal AdS side essentially has a thermal gas of gravitons, and its free energy is independent of N, because these are the fluctuations—and, just as usual, the fluctuation contribution is independent of the coupling constant. Is this clear? AUDIENCE: Why is the first term 0? PROFESSOR: Hm? AUDIENCE: Why is the first term 0? PROFESSOR: The first term is 0 because this has the same classical solution as pure AdS. AUDIENCE: The pure AdS, then, has a nonzero [INAUDIBLE]. PROFESSOR: Yeah. Yeah, that's a very good question. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, wait a few minutes—I will talk about this more precisely. It's a good question. AUDIENCE: Oh, OK. PROFESSOR: Let me just tell you the answer first, and then I will say a few things. Any other questions? Now, if you calculate the Euclidean action for the big black hole and the small black hole, you find that the action of the big black hole is always greater than the action of the small black hole, by some nonzero number times N squared. This is always the case. I will tell you a way to calculate it in a few minutes, but I will not do the explicit calculation here because it takes some time. There is a simple, physical way to understand why this is true, because this is a free energy: it tells you that the free energy of the big black hole is always smaller than the free energy of the small black hole. This is not a precise argument, but a heuristic way to understand it: the two solutions have the same temperature, but the big black hole has a larger size, and since the entropy is proportional to the area of the horizon, the big black hole has much higher entropy than the small black hole. So you would expect it to have the smaller free energy. Of course, you have to do the precise calculation to really check it, because their energies are also not the same. Anyway, you find that the Euclidean action of the big black hole is always greater than that of the small black hole. And that is good: it tells you the small black hole never dominates, because you can always find something bigger than it. Moreover, the small black hole has negative specific heat.
That's not something you want, because our field theory definitely has positive specific heat. So that tells you that the small black hole can never be the dominant phase. Now, if you look at the numbers explicitly—let me just first tell you the results, OK? You find there exists a temperature Tc, which is greater than the minimal temperature. So let me say it this way. When T is smaller than T minimum, of course only the thermal AdS exists, and that is then the only phase which can describe the field theory. But there exists another temperature Tc such that, for T greater than T minimum and smaller than Tc, you find that S_E of the big black hole is smaller than 0, while for the thermal AdS it is essentially 0 by definition—so the thermal AdS has the lower free energy, and in this range the thermal AdS dominates. And when T is greater than Tc, you find that the big black hole now has a positive action, and it dominates. So if I draw this—here is Tc and here is T minimum on the temperature axis—below T minimum, of course, you only have the thermal AdS; within this range, the thermal AdS dominates over the big black hole and the small black hole; and when you are above Tc, the big black hole dominates. So you actually have a transition at Tc: below it, it's just thermal AdS, and then you have a transition from thermal AdS to the big black hole at Tc. A quick side remark: how would you find this result? This is the result you get if you just calculate the Euclidean actions and compare them. Any questions about that? Yes? AUDIENCE: Do you ever get the small black hole phase at all, somehow, artificially? PROFESSOR: Yeah, you can always get it artificially. Yeah, and let me not go into that. So now let me just make a few remarks on finding S_E. When you compute the Euclidean action, you find it is always divergent—the same thing we encountered before: if you calculate the full action, you have to integrate over, essentially, the volume of AdS, and the volume of AdS goes to infinity near the boundary, so you always get infinity. In order to get a finite answer, you need renormalization. When you do renormalization, essentially you put in a cutoff and then you subtract covariant local counterterms at the cutoff; then you get a finite answer, and then you take the cutoff to the boundary. We will not go through this procedure, but it is something you can do. There is an alternative shortcut that avoids it: you can just calculate the Euclidean action with the cutoff and then subtract the value for pure AdS. If you calculate the pure AdS action, you will find the answer is also divergent, and you just subtract it.
And then when you do the subtraction, you find the difference is finite. And this is a slightly simpler way to do it than that, but we will also not do this, because you still have to calculate the Euclidean action and so on. So the shortcut is to use thermodynamics. That's what we did last time. Because you can find the entropy very easily. So the entropy-- did I erase my-- so the entropy is just, essentially, the area of the horizon. And right now it's actually the total entropy, not the entropy density, because now we are on the sphere. So the entropy would be-- the size of the horizon sphere is r0 to the power d minus 1 times omega d minus 1, where omega d minus 1 is the volume of a unit sphere, and then you divide by 4 GN. So this is the entropy of the system. And you can express this in terms of the temperature, because r0 is a function of temperature. If you invert the relation, you can imagine r0 as a function of temperature. So essentially you have the entropy as a function of temperature. And-- Hm? AUDIENCE: d minus 1, what's that? PROFESSOR: Omega d minus 1 is the volume of the unit sphere. So you know the entropy. You can just use the formula S equal to minus partial F partial T, which you can also write as minus partial F partial r0 times partial r0 partial T, et cetera. And then you can integrate this equation to find F as a function of r0, because we know how r0 depends on T from here. So you just need to integrate this equation. So this is a simple exercise, because now you just need to do an integral of some function. And then you find omega d minus 1 divided by 16 pi GN, times r0 to the d minus 2 minus r0 to the d divided by R squared. You get actually a rather simple answer if you do that integration. So we have chosen the integration constant so that when r0 equals 0, this is 0, because if r0 equals 0, the black hole has zero size-- essentially, there's no black hole. And then we have chosen the free energy to be 0 for no black hole. So this free energy, again, should be interpreted as the difference with the pure AdS. So this is the black hole free energy, because, by definition, this way, for the pure AdS you just have 0. Any questions about this? So now you see here, clearly, this expression. So this is a free energy now. This is not the Euclidean action. They differ by a sign in my convention. So now you see that there exists a critical r0. The free energy of the black hole is greater than 0 for r0 smaller than R. So you can write it as follows: pull the overall factor of r0 to the d minus 2 out, and the bracket just becomes 1 minus r0 squared over R squared. So you find that the free energy is greater than 0 for r0 smaller than R-- that is, greater than the value for the thermal AdS-- and the black hole free energy is smaller than 0 when r0 becomes greater than R. So the critical r0 is somewhere here. This is the critical r0, equal to R, somewhere here. So it's always a big black hole. And you can find out what this critical temperature is. So the critical temperature is just given by the beta evaluated at r0 equal to R.
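For reference, a minimal worked version of this last step, assuming the standard AdS-Schwarzschild temperature-horizon relation for AdS in d+1 dimensions (quoted earlier in the lecture but restated here as an assumption of the sketch):

\[
T(r_0) \;=\; \frac{d\, r_0^2 + (d-2)\,R^2}{4\pi R^2\, r_0},
\qquad
T_{\min} \;=\; T\!\left(r_0 = R\sqrt{\tfrac{d-2}{d}}\right) \;=\; \frac{\sqrt{d(d-2)}}{2\pi R},
\qquad
\beta_c \;=\; \frac{1}{T(r_0 = R)} \;=\; \frac{2\pi R}{d-1}.
\]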
And then you can find out that this is 2 pi R divided by d minus 1. Just using this formula you can find out that it's that. So now I emphasize that this is a phase transition. And this is not only a phase transition, this is actually a first order phase transition, for the following reason. So in our units, below here, the free energy is 0 times order N squared. It's 0 times N squared. But-- well, actually, at Tc, the free energy is exactly equal to 0. So the critical r0 is equal to R at Tc, and there the free energy is exactly 0. So at Tc, the big black hole and the thermal AdS have the same free energy. So they can exist together. And above here, the free energy of the big black hole is some negative number times N squared. So your free energy has a huge change. So essentially, the first derivative of the free energy is discontinuous across the phase transition. So this is actually a first order phase transition. So-- yeah, let me just emphasize that. So this is a first order phase transition, with the free energy equal to order N to the power 0 when T is smaller than Tc, and some nonzero number times N squared when T is greater than Tc. Any questions? AUDIENCE: When r0 is equal to R, then F is equal to 0. PROFESSOR: Yeah. AUDIENCE: Then, why is the-- oh. PROFESSOR: No, the derivative of F should be discontinuous. No, F is continuous. No, this is the definition-- for a phase transition, the free energy is always continuous. And then the first derivative here will be proportional to N squared. And you can check-- Yeah? AUDIENCE: So the point is-- OK, so just see if I understand. So the point is that it's discontinuous in the limit of large N or something? I mean-- PROFESSOR: Yeah, that's right. That's right. Exactly. So let me emphasize. So this is the N goes to infinity limit. Are there any other questions? So we see there's a lot of rich story when you go to the sphere. And, actually, there's a phase transition. And the phase transition happens roughly when the black hole size is the AdS radius-- exactly when the black hole size is the AdS radius. And this is, more or less, what you expected, because that's where the physics becomes nontrivial: when the horizon is much, much smaller than the curvature radius, as I said, it should just reduce to the flat space black hole, et cetera. Yeah, because the curvature radius is the only scale here. So now, let me make another remark. Now, I erase this. I think it's OK. Yeah. So you should actually check yourself: below Tc, F prime is always 0. When you take the derivative, it's always 0 at order N squared. Let me, again, write this answer, 0 times N squared. So you should check that the first derivative of F is discontinuous. AUDIENCE: So F itself is not discontinuous? PROFESSOR: No, F is not, because F is 0 precisely at T equal to Tc. So F is equal to 0. And the-- Yeah. So the reason the first derivative is discontinuous is very clear, because below Tc, this is 0. So the first derivative is 0 times N squared. And above Tc, the derivative is the derivative of the free energy of the black hole at r0 equal to R. And, clearly, that derivative is nonzero. At r0 equal to R, the derivative is nonzero. OK? And so you see, they're discontinuous. So since the physics only depends on the dimensionless number R times T-- so large R at fixed T essentially is the same as, say, large T at fixed R. It can only depend on the combination of them. OK?
It depends on the product of them. So you can either think of-- you can either think you fix the temperature, you increase the size of the sphere. Or you fix the size of the sphere, you increase the temperature. [? In ?] fact, the same is to increase this guy. And this is essentially the limit going to the flat space. It's going to the R to the T. If you take R go to infinity, you go to a [? theory ?] on R d minus 1. So we see that the theory on R d minus 1 is mapped to the high temperature limits of the theory on the sphere. So essentially, just corresponding to the 1 point for the [? theory ?] on the sphere and this infinite temperature of the [? theory ?] on the sphere. So that's why, as we said before, because there's no scale here, the [INAUDIBLE] temperature from here is essentially the same in the flat space. And in particular, this is described by a big black hole. And this is exactly what we see before because when the black hole become very big, then you can approximate it to a plane because, locally, it's like a plane [? anymore. ?] And the spherical horizon is just like a plane. And then you go to the black plane-- you go to the black hole with the flat horizon. AUDIENCE: Why are the phases only dependent on R times T? PROFESSOR: Yeah, because this a CFT and this is only dimensions number. It can only depend on them through this dimensions number. There is no other scale. Good? Other questions? Yes? AUDIENCE: So on the side of the CFT, what are the different phases? I didn't- PROFESSOR: Sorry? AUDIENCE: What are the different phases on the CFT side? PROFESSOR: Yeah, that's what we are going to explain. So now, we have to describe the black hole story. Or now, we have described the gravity story. Just by looking at the gravity size, we saw there are three possible phases. At low temperature, you get this thermal AdS phase. At high temperature, you get big black hole phase, and then there is a phase transition between them. And then, also, you have unstable small black hole phase, which never appears as a stable phase at any temperature. So, now, let's try to see whether we can understand this from the Field Theory side. What does this mean from the Field Theory side? Why somehow-- if I, for example, put [INAUDIBLE] on a sphere, do I expect such kind of phase transition? Does this make sense? Oh, by the way, I should say this is the-- so this is called a Hawking-Page transition. So this is called a Hawking-Page transition. So remarkably, this was discovered in 1980-- I think '81 or '82, almost 20 years before this AdS/CFT conjecture. But they already figured out that there is a phase transition. They were-- so they were looking at the black holes [? and ?] AdS, and they said, oh, there's two black holes. But there's also some AdS. And then, somehow, there's some kind of phase transition between them. But because they don't know the Field Theory, they don't know this should correspond into some kind of Field Theory system. But they figured out this gravity story essentially in 1981. Now, let's explain what should be the Field Theory interpretation of this-- The Field Theory explanation of this. Physical reasons for Hawking-Page transition. So now, I will consider a toy example. So I won't consider-- so [INAUDIBLE] on the sphere, that's a little bit too complicated. But I'm going to consider toy example. And you will see, from this toy example, that the physics behind this is very simple and actually, also, [INAUDIBLE]. 
So let's consider you have N squared harmonic oscillators-- free harmonic oscillators. So you could imagine taking them to have different frequencies, but let's just, for simplicity, take the same frequency. Say, omega-- I have some omega, which I just take to be 1. Doesn't matter. So just consider N squared free harmonic oscillators. So I claim that when N goes to the infinite limit, this system has exactly the same phase transition described there. So, now, I will explain. So first, let's just look at-- this is a system we know how to solve exactly, so let's look at the spectrum. So this is the total energy spectrum of these N squared harmonic oscillators. So let's call 0 the ground state. So let me normalize the energy so that the ground state has energy 0. And then you can excite one harmonic oscillator, and then you have a state of energy 1. And you can have two harmonic oscillators, et cetera. And then you'll have many harmonic oscillators-- then you're going to have almost every harmonic oscillator excited, and then you have energy of order N squared, et cetera. So now you have to imagine a slightly nontrivial condition-- this is the only thing that goes beyond the free harmonic oscillators. Yeah. Actually, I just realized I slightly oversimplified the story a little bit. Yeah, let me say it this way. Let's consider 2N squared harmonic oscillators. So let me arrange them into two matrices, A and B. And they are all free. It doesn't matter-- for this purpose, it doesn't matter how many harmonic oscillators-- just imagine I have arranged them into two matrices, so I have two N squared harmonic oscillators. And then I can excite them. But now I have a condition. So I have a condition which is the analog of the condition of gauge invariance. Namely, I want all the states to be SU(N) singlets created by A and B. You have A, B, et cetera. So A has N squared creation and annihilation operators, and B has N squared creation and annihilation operators. And then you can act with them to build up the spectrum. But I also need to add a gauge invariance condition: the states created by A and B have to be SU(N) singlets. And the SU(N) acts by conjugation-- just imagine A and B transform under the SU(N) adjoint representation. And the states have to be SU(N) singlets. An equivalent statement is that the states have to be created by operators inside traces-- single traces or multiple traces of A and B, et cetera. So is this clear? AUDIENCE: So one thing. So A and B are operators? Or they are-- PROFESSOR: They're operators, harmonic oscillators. AUDIENCE: Oh, I see. PROFESSOR: Yeah. Imagine you have two matrices-- harmonic oscillators that form two matrices. And each of them has N squared harmonic oscillators. And if I just say they are free, then you don't even have to think about the matrix structure. AUDIENCE: So the only thing, I'm confused about the matrices. So are the matrices acting on some [INAUDIBLE] So in other words-- PROFESSOR: Yeah. Let me be a little bit more explicit. Think about the following system. The Lagrangian is 1/2 Tr A dot squared plus 1/2 Tr B dot squared minus 1/2 Tr A squared minus 1/2 Tr B squared. So this is a free harmonic oscillator. So these are two N squared free harmonic oscillators with frequency 1.
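Written out, the model just described is the following; the creation-operator form of the Hamiltonian and of the singlet states is a standard way to phrase what was said, and taking A and B to be Hermitian N by N matrices is an assumption consistent with having N squared oscillators in each:

\[
L \;=\; \tfrac{1}{2}\,{\rm Tr}\,\dot A^2 + \tfrac{1}{2}\,{\rm Tr}\,\dot B^2 - \tfrac{1}{2}\,{\rm Tr}\,A^2 - \tfrac{1}{2}\,{\rm Tr}\,B^2,
\qquad
H \;=\; {\rm Tr}\big(a^\dagger a\big) + {\rm Tr}\big(b^\dagger b\big) + {\rm const},
\]

with the physical (singlet) states built from traces of the matrix creation operators acting on the ground state,

\[
|\psi\rangle \;=\; {\rm Tr}\big(a^\dagger b^\dagger a^\dagger \cdots\big)\,{\rm Tr}\big(\cdots\big)\cdots |0\rangle .
\]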
So if I don't impose this singlet condition, this is just a purely free harmonic oscillator. But now I require all the states-- under SU(N) transformations, the states should be SU(N) singlets. So they can be created by A and B, et cetera, but they should be SU(N) singlets. Good? So now one thing you can convince yourself of is that, because of the trace condition-- so if you think about the states of energy 1 or 2, et cetera-- as far as the energy is order N to the power 0, as far as the energy does not scale with N, you can check yourself that the degeneracy of those states will always be of order N to the power 0. So this is a fact. You can convince yourself. And when you go to energy of order N squared, then you can check that the density of states becomes exponential of order N squared. So the reason for this is very simple, heuristically. If you have a state of energy of order 1, which does not scale with N, that means only order 1 oscillators are excited. And then, of course, the density of states will be independent of N. And if you have energy of order N squared-- because each oscillator has frequency 1, energy of N squared means order N squared oscillators are excited. And if you have order N squared oscillators excited, then the number of ways you can choose them to construct a state of energy of order N squared is exponential of order N squared. So imagine for each excitation you have order N squared possible oscillators to act with, say each of them acting a couple of times; then you have something like 2 to the order N squared possibilities of doing that. So that's where this exponential of order N squared comes out. So this is a fact. You should try to convince yourself if it's not obvious to you now. So the reason I add this singlet condition is for here. If I don't add the singlet condition, then there's some kind of N dependence here, which I don't want-- there can be large N dependence. So this makes the story a little bit more convenient. Oh, so now-- yeah. We're running out of time unfortunately. So now let's consider the free energy, which, roughly, you can obtain by integrating over all possible states with this Boltzmann weight factor times the density of states. Because the partition function is a sum over all states of e to the minus beta H, and the sum translates into an integral with a density of states. And then that translates into that. So naively-- so we always consider-- our beta does not scale with N. That's order 1. It's always order 1. So if you look at this naively, only states whose energy does not scale with N will contribute significantly, because of this suppression. Because of the thermal suppression-- say, if you have a state of energy of order N squared, then this is a huge suppression. Then they should not contribute. Then they should not make a visible contribution to your free energy. Except when those states have a huge density of states. So except when those states have a huge density of states. And here, we see those states actually have an exponential of order N squared density of states. So that means, in that integral, you have e to the minus beta times something of order N squared, and then times e to the something positive of order N squared.
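In formulas, the competition just described is the following; here S(E) denotes the log of the density of states, and the scaling form with a function s of order one is shorthand notation for the counting above, not something written on the board:

\[
Z(\beta) \;=\; \int dE\; D(E)\, e^{-\beta E} \;=\; \int dE\; e^{\,S(E) - \beta E},
\qquad
S(E) \;\sim\;
\begin{cases}
O(N^0), & E = O(N^0),\\[4pt]
N^2\, s\!\left(E/N^2\right), & E = O(N^2),
\end{cases}
\]

so for E = \epsilon N^2 the exponent is N^2\,[\,s(\epsilon) - \beta\,\epsilon\,], which flips sign at a value of beta of order one.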
When you consider the contribution of the states of energy of order N squared: when this factor dominates over that factor, then the entropy will overwhelm the thermal suppression, and then such states will dominate. And that precisely happens when beta becomes small enough. Because when beta becomes small enough-- when you go to high temperature, beta decreases-- eventually this one will overwhelm that one. And that's precisely the Hawking-Page transition we saw. So at big beta, or small temperature, the thermal suppression dominates, and then the free energy will be of order N to the power 0, because only states of energy of order zero contribute. But when beta is sufficiently small, then log D(E) minus beta E can become greater than 0, and then the order N squared states will dominate. And then you find the partition function will be exponential of order N squared. And that's precisely what we see in the Hawking-Page transition. So I emphasize that the physics here has nothing to do with the details of the system; it is only related to the large N limit and to the density of states. So if you have interactions, if you have a more complicated system, it doesn't matter. It doesn't matter. As long as you have this structure of the density of states, this phenomenon will happen. This phenomenon will happen. And so this is the essence of the Hawking-Page transition. We will stop here.
MIT_8821_String_Theory_and_Holographic_Duality_Fall_2014
8_Large_N_Expansion_as_a_String_Theory_Part_I.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality, educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Good. So let us start. So let me first quickly remind you what we did in the last lecture. So we first talked about general observables in gauge theories. So we're interested in correlation functions of gauge invariant operators. In particular, we will be focused on local operators. So, for example, the local operators can be separated into, say, the single trace operators-- yeah, the single trace operators, where n just labels different operators-- or, say, the double trace operators. Sometimes, in order to avoid confusion, we put some notation that means these two operators should be considered grouped together, and then you evaluate it at some spacetime point. Say this would be a double trace operator. Or maybe the higher ones. OK? And also, we found that a typical correlation function-- say if I look at an n-point connected correlation function of single trace operators-- then it scales with N as follows: you sum over all h, which labels the topology of your diagrams, of N to the power 2 minus n minus 2h-- n is the number of operators in the n-point function-- times some function F sub n, which can depend on the coupling constants and also will depend on x1, x2, up to xn, et cetera. OK? So this will start at leading order with N to the 2 minus n, which comes from the planar diagrams. And then the next order is just N to the minus n-- so these will be the torus diagrams, the diagrams you can put on the torus-- or genus 2 diagrams, OK, et cetera. So we also discussed that this behavior has important physical implications. We talked about two implications. So that kind of N dependence means that, in the large N limit-- say in the N goes to infinity limit-- we can essentially treat the action of a single trace operator on the vacuum as generating a single particle state. We often call these glueball states. Glueballs, OK? So these single particle states don't have to really appear as, say, asymptotic states in your scattering, et cetera. It just says that such states behave like single particles. And then, if you act with the double trace operator on the vacuum, then that gives you the two particle state, et cetera. And the triple trace operator will give you the three particle state, et cetera. OK? So this is one aspect. And the second aspect is that if, say, there is some single trace operator whose expectation value is non-zero, then the fluctuations are suppressed. OK? So for example, if you look at the ratio of the variance-- so O squared, the connected part of the O squared, is essentially the variance of this guy-- divided by the expectation value itself, it scales as 1 over N because of that behavior, and this goes to 0 in the limit, as N goes to infinity. So that means the fluctuations are suppressed. So any questions regarding this? Good? OK. So now let's talk about the third aspect of it, the third implication. So now, suppose we interpret this n-point function, this connected n-point function-- say we picture it symbolically as, say, some kind of blob. Then you have external lines at the x's. Say 1, 2, 3, up to n minus 1, n. OK?
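In formulas, the scaling being recalled here is the following; writing F with subscripts n and h for the genus-h coefficient function is a notational choice for this sketch:

\[
\big\langle O_1(x_1)\,O_2(x_2)\cdots O_n(x_n)\big\rangle_{\rm connected}
\;=\;\sum_{h=0}^{\infty} N^{\,2-n-2h}\, F_{n,h}\big(\lambda;\,x_1,\dots,x_n\big)
\;=\; N^{2-n}\,F_{n,0} \;+\; N^{-n}\,F_{n,1} \;+\; \cdots,
\]

with the h = 0 (planar) term leading and each extra handle costing a factor of 1/N^2.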
So each line means an external operator insertion. So if we interpret this as some kind of scattering amplitude-- as generating some kind of scattering amplitude of the n glueballs-- then you can show, because of this N scaling, that, again, to leading order in the N goes to infinity limit, the scatterings are classical. Classical means it only involves tree-level scatterings. OK? So now I'm going to show this. This is the conclusion. So now I'm going to justify this statement. So before I start justifying this statement, do you have any questions? Good. So let me just make a few remarks. So first, let's look at the simplest case, which is the three point function. So the three point function can be considered just as an effective vertex. It can be considered as an effective vertex of three particles interacting with each other-- say one glueball splitting into two glueballs, et cetera. OK? So this, from that definition, scales as 1 over N. OK? This scales as 1 over N. So suppose we treat it as a basic vertex with coupling g. So g is 1 over N. OK. Then it is easy to see that tree-level amplitudes for n-particle scattering scale precisely as g to the n minus 2, which is N to the power minus (n minus 2). And this is precisely the leading scaling we see here. OK? So is this fact obvious to you, that the tree-level scattering for n particles with basic vertex given by g will scale like this? AUDIENCE: Sorry. Why is it proportional to g? Because each-- they start in a double line notation. Each line contributes like a g squared. So where does that g come from-- PROFESSOR: Now, which g are you talking about? Oh, no, no, no. Maybe let me call it-- it's a different g. It's a different-- yeah, let me call it kappa. Or let me call it g tilde. Yeah, it's not the same. It's not the same g as before. This is just something I introduce there. I just call it some kind of effective coupling. Yeah? Yeah. Just imagine you have a process whose basic interaction vertex is g tilde, and g tilde is order 1 over N. And then the tree-level scattering for n particles will depend on this coupling as g tilde to the n minus 2. OK? Is this clear to you? Yeah. You can just draw a tree diagram. You can see it immediately from the tree. OK? So this is consistent. So this behavior is consistent with interpreting that as a tree. But you don't actually have to assume only the basic three-point vertex. You can also include higher order vertices. But the vertices which you include should be consistent with this scaling. OK? Suppose you want to include four-particle vertices. Then this should scale as g tilde squared, which is N to the minus 2, because the four-point connected function should scale as N to the minus 2. And similarly, if you have a five-point vertex, then it should scale as g tilde to the cube, which is N to the minus 3. The five-point function should scale as N to the minus 3. OK? So you can also add such higher order vertices as far as you are consistent with this large N scaling. So then you can easily convince yourself that even including such higher order vertices does not change this conclusion, that the n-point tree-level amplitude, again for n particles, will scale like this. OK? It will not change that conclusion. So this is the second statement.
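A quick way to see the counting just described, spelled out here rather than taken from the board (a standard tree-diagram argument):

\[
\text{connected tree, } n \text{ external legs, } V \text{ cubic vertices:}\quad
3V = n + 2I,\;\; I = V-1 \;\Rightarrow\; V = n-2,
\]
\[
\mathcal{A}_{\rm tree} \;\sim\; \tilde g^{\,n-2} \;\sim\; N^{-(n-2)} \;=\; N^{\,2-n},
\qquad
\tilde g_4 \sim \tilde g^{\,2} \sim N^{-2},\quad \tilde g_5 \sim \tilde g^{\,3} \sim N^{-3},
\]

which matches the leading N to the 2 minus n behavior of the connected n-point functions.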
These two statements just give you a heuristic way to see that this kind of scaling can indeed be generated by tree-level scatterings. So now I'm going to argue that in the large N limit, to leading order in N, in any such correlation function there are only one-particle intermediate states. OK? So I'm going to show this. By one particle, I always mean one particle in this sense. OK? So one particle in this sense. So I will go through it a little bit heuristically. I think the argument is rigorous, but I will not write the precise, rigorous formulas. So let's consider, just as an example, the three-point function. Say I have three operators, and look at this connected correlation function. OK? So now, in those places, we can, in principle, insert a complete set of states. OK? And those can be considered as intermediate states contributing to this process. So if I consider a scattering process, and I can insert a complete set of states here, then those states which give a nontrivial contribution can be considered as contributions from some intermediate states. OK? So for example, let me insert a complete set of states here-- say between this O1 and O2, O3. So I insert a 1 here. In this 1, we include the sum over all possible single particle states and also over possible two-particle states, et cetera. OK? And look at the resulting correlation function. OK? Yeah, just heuristically, when I insert the complete set of states, I can insert something like that. So now let's look at the scaling. So this, according to our general behavior, actually scales as 1 over N. OK? So this guy indeed scales as 1-- the two-point function is still order 1, N to the power of 0. And then this guy scales as N to the power minus 1. So this term is OK. So the insertion of the single particle states is OK. But now, this one will give you a higher order contribution, because this scales as N to the minus 1, and this scales as N to the minus 2. So all together, this contributes at N to the minus 3. OK? So the contribution of the two particle states will be suppressed compared to the contribution of the single particle states. And similarly with even more particle states. OK? So if I draw a diagram, this process is like the following. It says I can start with 1. And then this 1 can, say, turn into i through a two-point function. And then this i can go to 2 and 3. OK? So this is a tree-level process. And this one is like the following. So this is a two-particle state. So this, if I draw a diagram, would be like: I start with 1, and then this 1 can split into ij, and then this ij combines into 2, 3. OK? So this is like a one-loop process. OK? And this, we see, is N to the minus 3, while this one is just N to the minus 1. So this is suppressed compared to that. So this is a direct way to say that, to leading order, you indeed only have tree-level processes. You can never have these kinds of loops. Because whenever you have a loop, then of course you must have multi-particle intermediate states, and then they are suppressed. Is it clear? So this tells you that all loops are suppressed.
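Schematically, the counting in this argument is the following, with |i> single-particle and |ij> two-particle glueball states (the individual scalings are the general ones for two-, three-, and four-point functions):

\[
\langle O_1 O_2 O_3\rangle_c \;\supset\; \sum_i \langle O_1|i\rangle\,\langle i|O_2 O_3\rangle
\;\sim\; O(N^0)\cdot O(N^{-1}) \;=\; O(N^{-1}),
\]
\[
\sum_{i,j} \langle O_1|ij\rangle\,\langle ij|O_2 O_3\rangle
\;\sim\; O(N^{-1})\cdot O(N^{-2}) \;=\; O(N^{-3}),
\]

so each extra particle in the intermediate state, i.e. each loop, costs a factor of 1/N^2.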
And so a, b, c together-- these together then justify the statement that, to leading order, this scattering amplitude involves only tree-level processes. When you have only tree-level scattering, then this is like a classical story. As you learned in quantum field theory, the loops can be considered as quantum fluctuations, and if you only have trees, it's essentially just a classical process. So when you solve the classical equations of motion, what do you get? The tree-level processes. OK? Yes? AUDIENCE: What is the number of operators [INAUDIBLE]? PROFESSOR: Hm? AUDIENCE: What is the number of operators in the sum? PROFESSOR: Right. That's a very good question. And essentially, by definition, the number of such so-called single trace operators does not scale with N-- yeah, the number of ways you can make such single trace operators is not proportional to N. Questions? AUDIENCE: What is-- each of these terms is individually small, but when you take the summation over them-- PROFESSOR: Yeah. Yeah, that's the question that was just asked. So those operators, if you count them, the number of single trace operators you can construct is independent of N. And so when they are suppressed by 1 over N, there is nothing to compensate them. In principle, yeah, there's nothing that can compensate them. Yeah? AUDIENCE: So how many operators Oi, Oj can you construct? What is the magnitude? PROFESSOR: Oh, it's an infinite number of them. It's an infinite number of them. But the key is how that infinity scales with N. AUDIENCE: It's a factor of infinite. We can just-- PROFESSOR: Yeah, it's infinite. It's infinite. But compared to the large N limit, it's still order 1. [LAUGHS] It's infinite, but it's order N to the power 0. AUDIENCE: Oh, my gosh. AUDIENCE: All right. PROFESSOR: It's N to the power of 0. And they will also be suppressed by the physical process. So for example, say this operator has some dimension-- suppose this particle 1 has some energy, and that particle has some energy, and then this process generates this, too. And when their energy becomes very large, then it's suppressed. So you have this kind of physical suppression. Of course, if you include all energies, there are an infinite number of them. But in fact, if you consider, yeah, the energy [INAUDIBLE] this kind of thing, then essentially there's a finite number that contribute. AUDIENCE: Finite. OK. PROFESSOR: Yeah. But the key is that this is order N to the power of 0. AUDIENCE: Maybe one more thing-- PROFESSOR: Yeah. AUDIENCE: What if it includes the loop, closed by itself? PROFESSOR: Yeah? AUDIENCE: Like a closed fermion loop [INAUDIBLE] I mean, it doesn't need to involve [INAUDIBLE] state. It can just [INAUDIBLE]. PROFESSOR: No, no, no. A loop always involves multi-particle states. So any loop, when you cut it, there are always two states. AUDIENCE: Don't they need to cross it at the end? I mean, like a [INAUDIBLE]. PROFESSOR: Sorry, I don't quite understand. Any loop, by definition, when you cut it, there are always two lines. Then that means this is more than one particle. AUDIENCE: Just something like this, don't cross any-- PROFESSOR: No, no, no, no, no. You cut here. You still have two particles. AUDIENCE: Right. AUDIENCE: Oh, OK. PROFESSOR: Yeah, you can cut at any time. There are still two particles. OK. Let's summarize. So now let's combine 1, 2, 3.
So this tells us-- so this 1, 2, 3 tells us that at leading order in the large N expansion, we have a theory-- a classical theory. We essentially just have a classical theory of glueballs. You just have a bunch of particles, and they interact classically. There are no quantum fluctuations. And the interactions among the glueballs are given by some effective coupling which I call g tilde, which scales as 1 over N. OK? Scales as 1 over N. So let me just elaborate on this statement a little bit more. So this can be considered as, say, when we take a gauge theory-- say, for [INAUDIBLE] theory. Say QCD. You take the N goes to infinity limit, and you take h bar finite. So by definition, when we talk about this theory, h bar is always finite. But in the N equals infinity limit, this is equivalent to a glueball theory where h bar tilde goes to 0. OK. So you get a classical theory with an effective h bar tilde, which is controlled, essentially, by 1 over N going to 0. OK? So here, you can do a perturbative expansion in 1 over N. So this is just the leading order; here, you can do a perturbative expansion in 1 over N. And on this side, it's like some kind of semi-classical expansion in this h bar tilde, which, of course, should scale as 1 over N. OK? So on this side, when h bar tilde goes to 0, you have a classical theory, but then you can do an expansion in h bar tilde. So this is what we normally call a semi-classical expansion. OK? So the large N expansion has been translated into a semi-classical expansion when you translate into this glueball language. And we're now-- so is this clear? Yeah, this is a little bit of a small corner. Is it readable? Yeah. This is a very important statement. Maybe I should rewrite it. Yeah, let me just rewrite it. So the perturbative expansion in 1 over N is equal to a semi-classical expansion in h bar tilde, which scales as 1 over N. OK. And we will now show that these guys, this glueball theory and this semi-classical expansion, is actually a string theory. OK? No, we will not show that. I should say I will motivate that. I say that's a string theory. So before I go to that, do you have any questions? Yes? AUDIENCE: Just to clarify, doesn't the sort of second statement-- the fact that the perturbation in 1 over N corresponds with a semi-classical expansion in h bar tilde-- isn't that just a generalization of the thing you just wrote? It's the same statement. PROFESSOR: Yeah, it's the same statement. No, no, no. These two statements, they are-- AUDIENCE: Basically the same. PROFESSOR: They are not equivalent. This is talking about this particular limit. AUDIENCE: OK. PROFESSOR: This is talking about this particular limit. And then this tells you what happens when we relax this limit by including corrections in the 1 over N expansion. And to make this identification-- so this by itself does not imply that. AUDIENCE: The other way does, right? PROFESSOR: Yeah. Of course, the other way does. So this is a stronger statement. AUDIENCE: Right. OK. PROFESSOR: And this statement is supported by just looking at the leading order behavior. AUDIENCE: Right. PROFESSOR: But if you want to look at this behavior, then you need to look at the subleading terms in 1 over N. Say, for example, here. Then you see the fluctuation is controlled by 1 over N. It's like your fluctuation is controlled by h bar. And similarly, here, when you include 1 over N corrections, then you will see a single particle.
And the two particle, multi-particle particle, they can mix together due to fluctuations. And again, that amplitude controlled by 1 over N, et cetera. Yeah, just say this statement is stronger. You need to look at sub-reading corrections carefully, and you'll see that fits into an h bar pattern. AUDIENCE: Great. PROFESSOR: Yeah. Yes? AUDIENCE: So instead of this [INAUDIBLE], we consider regular creation-- [INAUDIBLE] Do we still come to the conclusion that all the loop diagrams are trace? PROFESSOR: Yeah. That's a good point. So in essence, you can imagine this Oi, each Oi defines-- this point, 1, is precisely a statement. So you can imagine each Oi defines a creation-- defines an independent creation under [INAUDIBLE] operator. Yeah. AUDIENCE: [INAUDIBLE] theory [INAUDIBLE] expression of both diagrams. PROFESSOR: Sorry? No, in the original theory, we are not. Only in the large N limit. AUDIENCE: So in the original theory, in the large N limit, we do have suppression? PROFESSOR: No. You don't have a suppression of loop diagram in terms of the standard loops. You only have the suppression of the loop diagram in terms of those loops-- in terms of loops of glueballs. AUDIENCE: But here we don't specify-- don't put any restrictions on what are the operators O. So why can't we say just Oi is just psi i? PROFESSOR: No, no, no. O is [INAUDIBLE] operator, O is composite operator. O is some composite operator. So for example, in the [INAUDIBLE] series, so one example is O can be TrF squared. So if you look at the two-point function of O, then that's corresponding to two gluon propagators. And you have loops here. But this is not the loop in terms of the glueballs here when we talk about loops-- yeah, so let me emphasize. All loops of glueballs. Because here-- so the key of this statement is that in a large N limit-- and if you can see-- if our thing is gauge invariant operators, then that's a [INAUDIBLE] that you should consider. And if you're not worried about the original stuff. And those that have loops, et cetera, [INAUDIBLE] the h bar, et cetera. But in terms of those glueballs, then you have a classical theory. There's no loops. You only have tree-level scattering in the large N limit. Yes? AUDIENCE: So are we getting a general gauge of varying operators or explicitly single trace operators? PROFESSOR: No, no. I have talked about everything. AUDIENCE: So I mean like the single-- a single O operator can be several traces inside? PROFESSOR: No, no, no. Single O is a single trace. It's a single trace that's defined there. AUDIENCE: What about non-trace gauge invariance operators? PROFESSOR: We don't consider them. Oh, you mean non-trace gauge invariant operator? No, every gauging variance operator always involves trace. AUDIENCE: Always? PROFESSOR: Yeah, sure. You will be a famous mathematician if you can construct something invariant which does not involve a trace. Yeah, construct something about the matrix. Construct something using matrices which does not involve traces. Yeah. Even if you do determinant, and determinant can still be written as traces. Yeah. Yeah, just everything can be written as traces. AUDIENCE: So determinant would just have-- if you have a determinant operator, that can just be written as a product of many Os. PROFESSOR: That's right. That's right. That's right. That's right. Good. Any other questions? Yes? AUDIENCE: Just to clarify, we've been talking about the-- [INAUDIBLE] [INAUDIBLE]. PROFESSOR: Yeah. 
Here, we are talking only about the case where every field is a matrix valued field. Just every field is a matrix valued field. Yes? AUDIENCE: I don't know, I just-- I thought there were more invariants that you can construct [INAUDIBLE]. Like I was asking you the other day, basically it's all of the-- each one of the coefficients of the characteristic polynomial of a matrix. PROFESSOR: Yeah, they can be written as traces. Yeah, they can be written as traces. AUDIENCE: Oh, I didn't know that. PROFESSOR: Yeah. AUDIENCE: Usually we don't discuss quarks here. PROFESSOR: Yeah. Yeah, here we have no quarks. And we're going to mention quarks very briefly at the end. Yeah. Any other questions? Good. Good. OK. So now let's talk a little about string theory. Maybe I will leave it there. Let me see whether to leave that. Yeah, let me leave that. OK. So first we'll talk a little bit about strings. So first, let me remind you: QFT-- normally when we say a QFT, we say a QFT is a theory of particles. OK? So the standard approach to QFT is, say, you write down a field theory Lagrangian, and then you quantize the fields. So this is normally called second quantized-- the standard approach is called the second quantized approach. OK? But there also exists a first quantized approach. There also exists a first quantized approach. Say you directly quantize the quantum motion of, say, a particle or particles. Say we consider one particle-- say a particle in spacetime. OK? So for example, if you consider the particle propagating in spacetime along some trajectory-- so let's parametrize this trajectory using a parameter tau, which labels a point along the trajectory-- then this will give you a mapping. The trajectory will map out a path in spacetime, which can be written as the spacetime coordinates as functions of tau. OK? So this parametrizes the worldline of the particle. So this parametrizes the worldline of the particle. And to quantize the particle, the simplest way is to imagine just summing over all possible paths of the particle. So essentially, we're doing a path integral. Essentially we're doing a path integral. So you just integrate over all possible paths of the particle, and then this captures the quantum motion. Just quantum mechanically, it can fluctuate, anyway. And then this action for the particle is essentially just the length. So suppose the particle has mass m; then the action is just m times the length of the particle's trajectory, the integral of dl along the trajectory. OK? And if you include interactions, then you have to include the trajectories in which the particle can split. Say a line can split into two lines, et cetera. So this gives you interactions, et cetera. OK? And this we need to add by hand, because the particle in principle can-- you can draw infinitely many ways a particle can split into more particles, et cetera. So depending on your specific theory, you need to specify, say, the interactions. So this you need to add by hand. OK. So in principle, you can forget about the standard formulation of quantum field theory and just work with this particle theory. So string theory is a generalization of this first quantized approach. So string theory, as it is currently formulated, is formulated in this first quantized approach. So you just quantize motions of a string in spacetime. OK?
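In formulas, the first-quantized setup just described is, schematically (the explicit parametrized form and the mostly-plus signature are conventions chosen for this sketch):

\[
Z_{\rm particle} \;\sim\; \int \mathcal{D}X\; e^{\,i S[X]},
\qquad
S[X] \;=\; m \int dl \;=\; m \int d\tau \,\sqrt{-\,\eta_{\mu\nu}\,\dot X^{\mu}\dot X^{\nu}}\,,
\]

which is invariant under reparametrizations of the worldline parameter tau.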
So for example, in the particle case, you have a point particle; in the string case, you have a closed loop. So imagine we have a closed string. OK? You have a closed string. A string is a one-dimensional object, so let's consider a closed string. Then the string can be parametrized-- the string itself can be parametrized, say, by a parameter sigma along the string. But then this string can move in the spacetime, and it will trace out some trajectory. Trace out some trajectory. So this gives rise to a worldsheet. So in the particle case, you have a worldline. In the string case, you have a worldsheet, which we normally call Sigma. And then the worldsheet can, again, be written in terms of the spacetime coordinates as functions of this tau and the sigma. Now you have two coordinates to parametrize this worldsheet. So this is essentially a two-dimensional-- essentially, you have a two-dimensional surface embedded in the spacetime. And string theory just quantizes-- you just consider the motion of all such kinds of surfaces. Yes? AUDIENCE: So in the case of a single particle, we have a point. Does this action actually-- what does it reproduce exactly? Is it capable of reproducing all of quantum field theory using this [INAUDIBLE]? PROFESSOR: Yeah, in principle. But it's not convenient. Yeah, for example, if you have a scalar particle, this can certainly easily reproduce your free scalar field theory. And if you want to reproduce phi-cubed theory, then you need to add this kind of split-- when you do your path integral over the trajectories, you need to include these kinds of trajectories. Yeah, in principle, you can do it. It's not very convenient. AUDIENCE: I see. OK. Interesting. PROFESSOR: Yes? AUDIENCE: How do you add interactions by hand in that particle? PROFESSOR: Oh, well, you just prescribe it. In the path integral, here, you integrate over all such kinds of paths, right? And then you have a prescription-- you say what kind of paths to include. And for example, if I have a phi-cubed kind of interaction, then you say we should include this kind of path. AUDIENCE: But in case it cannot be parametrized as a top, [INAUDIBLE]? PROFESSOR: Yeah. Yeah. Then you have to-- then there are technical complications in doing these extensions. AUDIENCE: So one other question. Is string theory formulated in terms of-- is it possible to perform a second quantization? PROFESSOR: Not right now. So people have written down classical string field theory. And in principle, you can quantize it. But that's a very complicated thing. But I think nobody has successfully quantized it. Even to write down the theory is very non-trivial. AUDIENCE: OK. PROFESSOR: So similarly, to do the quantization for the string, you just look at the path integral: in principle, to quantize the string, you can again just imagine summing over all possible worldsheet configurations-- all possible embeddings of this two-dimensional surface in spacetime-- and that essentially gives rise to all kinds of string motions. OK? So it is weighted by some string action. So the simplest way to write down the string action is a straightforward generalization of the particle case: I just integrate over its area. So there, it was just the length of the trajectory; here, it's the area. But here, we also need to include-- so there it was the particle mass, and the generalization of the particle mass is the so-called string tension T, which is normally written as 1 over 2 pi alpha prime. So you introduce a parameter.
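Written out, the action being described is the Nambu-Goto action; the determinant form of the induced area element is the standard way to write what was described in words above:

\[
S_{\rm NG} \;=\; T \int d^2\sigma\,\sqrt{-\det\gamma_{ab}}\,,
\qquad
\gamma_{ab} \;=\; \partial_a X^{\mu}\,\partial_b X^{\nu}\,\eta_{\mu\nu},
\qquad
T \;=\; \frac{1}{2\pi\alpha'},
\]

that is, the string tension times the area of the worldsheet, just as the particle action was the mass times the length of the worldline.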
You introduce a dimensional parameter alpha prime to parametrize this t. And so this is string tension. So this is the mass per unit. String tension is just mass per unit length. OK? And this is normally called a Nambu-Goto action, so NG. So this is Nambu-Goto action, who first wrote it down. OK? So the A is essentially just the area of the surface in the spacetime. OK? it's just straight generalization of that length of the trajectory. And yeah, depend on the worries-- depend on what kind of questions you are interested, then you try to track those kind of physical answers from this-- say this path integral. OK? Any questions so far? Yes? AUDIENCE: Do we allow paths to contract the string to a point? PROFESSOR: Yeah. Yeah. I will show you. Yeah? AUDIENCE: Is that 2 times variable? PROFESSOR: No, single. No, sigma parametrized the string itself, and the tau, it parametrized the motion of the string along the motion of the string. AUDIENCE: But there is a time variable in sigma? Or it's just a-- PROFESSOR: No, no. Tau is the time. Tau is the internal time for the string. And then, there's a time in here-- in the x0, in the spacetime. So this can be considered as some kind of proper time for the string-- internal time for the string. Yeah, just like you parametrized the trajectory of the particle, you have some tau to parametrize the trajectory of the particle. This is the same thing. AUDIENCE: That means so any proper time-- isn't a variance-- itself is four-dimensional spacetime. Why there is only like one parameter of tau which does not have these four components? PROFESSOR: No, I don't understand what you-- no, there's only a single time. You have a particle, you have only a single time. Right? Now do you agree here there's only tau here? Now here, I just have a line. I've parametrized this line. I call it tau. AUDIENCE: OK, yeah. PROFESSOR: Here, I have a two dimensional-- here, I have a string. And then each point on the string moves like a particle. And then my parametrize by tau. It's the same thing. AUDIENCE: [INAUDIBLE]. Yeah. Yeah. PROFESSOR: Yes? AUDIENCE: You go ahead. AUDIENCE: What's the metric on the worldsheet? Does that make sense to ask? Because it's like minus plus or-- PROFESSOR: Of course, it makes sense. Of course, on the worldsheet, it should be a Lorentz in-- whatever is the metric on worldsheet will be induced from the spacetime. AUDIENCE: OK. So it would be a minus plus. PROFESSOR: Yeah, for example. Yeah. Yeah. It's [INAUDIBLE]. Yeah. Yeah, you can also-- we can talk about-- yes? AUDIENCE: So one other question. So if that should be interpreted as proper time, if I have a string which is moving in some weird, non-uniform way, how does it make sense to talk about the proper time of the string when you could imagine the different points on the string? PROFESSOR: Yeah. Yeah. Good point. So often, you cannot talk about that. And this just gives you a heuristic picture. Because in principle, string can do a highly non-classical motion. And then that's what's encoded, say integrating over all such kind of surfaces. And some of them, you cannot interpret classically or semi-classically into this kind of picture. You visualize it. So I will draw some pictures later. You will see. Yes? AUDIENCE: This might be a question without a good answer, but what is the-- why does the action take that form? What is that accomplishing physically? Or I guess maybe an easier example would be up there in the first quantization approach. How does that reproduce [INAUDIBLE]? 
PROFESSOR: You say, why does that reproduce gravity? Yeah. How that reproduces gravity, of course, is a technical question which you have to calculate explicitly. But the question, you said, is why do we choose that? The reason we choose that is based on various principles. So first, that thing, whatever it is, should not depend on how you choose this tau, because you can parametrize your trajectory any way you want-- so whatever it is, this action should not depend on the parametrization. And also, this action should be Lorentz invariant-- invariant under Lorentz boosts and translations, et cetera. And with those conditions, it is essentially uniquely determined to be this form. It's essentially unique. Yes? AUDIENCE: So I guess if you were doing a quantization of the bosonic string, like if you were deriving the field equations, would you use Nambu-Goto or would you use Polyakov? PROFESSOR: Yeah. Yeah. That's right. If you really quantize it, you actually don't use Nambu-Goto. Nambu-Goto, you use for the classical description. Yeah. I did not mention the other forms. Just for our current discussion, it's not essential. Yeah. Indeed, when you quantize it, you actually use a different form of this action. Any other questions? Good. So now, it depends on what kind of questions you are interested in, and you try to extract them, say, from this kind of path integral. And of course, these are highly non-trivial questions, and they require, say, some kind of trial and error, et cetera. So let me just tell you-- say I want to calculate the vacuum process. Say I want to calculate the vacuum energy-- OK? So let me call this Z string. So remember, when we talk about large N gauge theories, if you want to calculate the vacuum energy, then you sum over all possible vacuum diagrams, et cetera. OK? You sum over all possible vacuum diagrams. But suppose we want to calculate the vacuum energy in the string theory; then you do the similar thing. What do you do for the string? You say you sum over all closed 2D surfaces. OK? So the "closed" here is essential. So this is based on the intuition that in the quantum field theory, when you sum over Feynman diagrams, the Feynman diagrams heuristically can be considered as particle trajectories, et cetera. And for the vacuum process, there are no external legs. So everything comes out of the vacuum and then goes back to the vacuum. So that's why, if we want to calculate the vacuum energy in the string theory, it again corresponds to the virtual string motion. And again, the virtual string motion should correspond to the sum over all such kinds of surfaces without any external legs. So it should be closed surfaces. OK? It should be closed surfaces. So exactly as in the quantum field theory case, it's easier to do this by going to the Euclidean signature. So essentially, instead of considering a surface in the Lorentzian spacetime, you analytically continue your spacetime, the full spacetime, to the Euclidean signature. And then this just becomes some surface in some Euclidean space. And then this is just a much easier mathematical object to deal with. OK? So that's what you get. And if you have a string, then you have this. OK? OK.
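Schematically, the object just defined is the following; the split over topologies is the one discussed next, and the notation for a genus-h surface is introduced here only for convenience:

\[
Z_{\rm string} \;\sim\; \sum_{\text{closed 2d surfaces}} e^{-S_{\rm NG}[X]}
\;=\; \sum_{h=0}^{\infty}\; \int_{\text{genus } h} \mathcal{D}X\; e^{-S_{\rm NG}[X]},
\]

a Euclidean path integral over all embeddings of closed two-dimensional surfaces, organized by their topology, the genus h.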
So now, when you sum over all surfaces, you have a choice. First, you can sum over the topology. So as we talked about before, for two-dimensional surfaces the topologies are classified by the genus. And then you can sum over surfaces of a given topology-- so you can separate the sum into a sum over topology and then a sum over surfaces of a given topology, given h. OK? So this h is the genus. So essentially, we are just summing over all possible two-dimensional closed surfaces, OK? Is this point clear? This is an absolutely key point. And weighted by this, e to the minus SNG. OK? So now there's a very important mathematical trick. And this mathematical trick can be justified rigorously, but I will not justify it here-- I will just add it here. It says, now, just in physics, whenever you see a discrete sum like this, what do you do? Or in mathematics, whenever you see a discrete sum like this, what do we do? Hm? AUDIENCE: [INAUDIBLE]. PROFESSOR: Sorry? [INTERPOSING VOICES] [LAUGHTER] PROFESSOR: Exactly. You add a weight. So here, I will add a weight. So here, I am summing over topology, but now I will add by hand a weight to weigh the different topologies. So lambda is just some parameter-- lambda is just a parameter I introduced by hand myself. OK? So this is just the weight for the different topologies. So chi is the Euler number-- chi is the Euler number, 2 minus 2h. And the lambda can be considered as some kind of chemical potential, say, for topologies. OK? Yeah. Just like in a system where you have a conserved charge, when you add particles-- different particle number-- it's convenient to add a chemical potential. So lambda is like that. So even though I added it by hand here, in string theory this actually arises completely naturally from the string theory. There's a rigorous way one can show how that arises, but I will not do it here. It's not important for our purpose. And I will also introduce a notation: say gs is equal to exponential of lambda. OK? So now, let's look at the summation. So at h equal to 0, you sum over all surfaces with the topology of a sphere, and the weight is given by gs to the minus 2, which is exponential of minus 2 lambda. OK? Because when h equals 0, chi equals 2, so this is just minus 2 lambda, and so this is one over gs squared. OK? And then the torus is h equal to 1, so this is just gs to the power 0-- so order 1. If you have genus 2, so this would be h equal to 2, it would be gs squared. OK? So here is a remarkable fact. Here is a remarkable fact. It says: summing over topology automatically includes interactions of strings. OK? In fact, this fully specifies the string interactions. OK? So let me just elaborate on this statement. So we can see this statement from these sums here. OK? So now let's look at the physical meaning of those diagrams. So let's first look at the sphere diagram. So suppose we draw a sphere. OK? So what I'm describing to you is heuristic, from a physical perspective. So if you think about the sphere embedded in the space-- so this is [INAUDIBLE]. So imagine your time going up. OK? Imagine your time going up. So this says you nucleate an infinitesimal string at the tip of the sphere, and then this string propagates. So then at each time, you have a string. And then, at some future time, it goes back to the vacuum. OK?
And so you just have a string come in and come out-- a string come out from the vacuum, and then go back. And the sphere just described this kind of virtual process. So you look at string, and then disappear. OK? Say nucleate string-- string nucleation, and it then disappears. OK? Yes? AUDIENCE: Why did we only talk about close strings? PROFESSOR: Because it's enough for my purpose now. [LAUGHTER] Yeah. I'm not trying to give you the full string theory at the moment. I'm just trying to give you the bit of string theory that's enough to make that point. Say large N expansion is like string theory. Yeah. Yeah, because otherwise, take the whole semester to reach the point. So now, let's look at this torus diagram. Because if you want the sum of all surfaces, then you have to sum over different topologies, then you have to sum over the torus. Then let's look at what torus diagram means. So let me show the torus in the slightly different way. OK? So the torus, again, if you try to view the time going up, then from here, again, you do create the single string. But now, at this time, then you actually have two strings. And then, here, you have one string back again. OK? And then the torus actually describes such a virtual process, is that you nucleate a string from the vacuum. And then this string splits into two strings. And then these two strings come back to join, again, into a single string and then go to the vacuum. OK? So this is the nucleation out of vacuum. And then here, it splits. So here, a single string splits into two string. And then here, they join together, join back into a single string. And then here, disappear into the vacuum. OK? So we find, by including the torus diagram you automatically include this kind of interaction, that the string can split into two strings, and two strings can join into single string. And essentially, you don't have choice. This is just determined by the topology. OK? You don't have choice. Just once you write down this path integral, then this is fully determined. OK? It's fully determined. And similarly, if you look at the [INAUDIBLE] process-- so you look at the string split, join, splits and join again. So the [INAUDIBLE] string again can see that it has such in the acting process. So now the key-- so now if we compare the gs dependence of these diagrams. So this one-- so each diagram, compared to the earlier one, you increase by gs to the power squared. But whenever you include a genus, essentially [INAUDIBLE] include such a splitting under the joining. So that means you can actually associate with each process to be a factor of g string. OK? So for the sphere, you have-- say from here. Say if you normalize a sphere to be gs minus 2, then now if you include-- so essentially, this weight which we added here is to wait for such a splitting and the joining process. So each splitting and joining essentially give you a weight of g string. OK? Weight g string. So in some sense, we concluded that-- so the basic string interaction vertex is just a single string can join into two string, or two string-- or single string can split into two string, or two string can join into a single string. And the weight for each process is given by g string. OK? So that's effectively what this mathematical trick does. So effectively what this mathematical trick does is to assign a weight for each such splitting process. Yes? AUDIENCE: So if you start the-- so you drew it like this for the two thing. 
But if you start the nucleation from the side rather than the bottom, then wouldn't it split into three and then back into one? PROFESSOR: That's right. Yeah, it's-- because of the topology, you can split in an arbitrary way. And this just tells you, using this basic vertex, then for fixed parameterization-- yeah, this vertex already enough. And you can-- indeed, it's a very good question. You can try to slice the diagram in arbitrary way. Then that may give you some arbitrary other things. Then that's fine. And then you can assign weight for some other vertices. But they will be all consistent with these fundamental vertices. So essentially this gs, which I called to be-- exponential lambda, it's essentially the coupling for the string. OK? It essentially determines the strength of the string interactions. Yeah? AUDIENCE: Our choice of weight is not unique, right? [INAUDIBLE] something ugly like [INAUDIBLE] chi squared and then including that. PROFESSOR: Yeah, that's a good point. Indeed. Indeed. Yeah. So here is the simplest choice. And it's a choice which you can-- yeah. So this is the simplest choice you can put, and then this is a choice which will also arise if you property quantize string theory. And indeed, you can write down some other string theory with a different power. It might be possible. Yeah. I mean just as a mathematical-- say at the mathematical level, it doesn't prevent you to add some arbitrary function of chi. And this is just the simplest way to do it. And this is the way which arises out of string theory. Yeah. So this process is not arbitrary. It's something which you can derive from string theory. Just here, we will not go through the whole thing. Any other questions? AUDIENCE: Do we have an extra minus sign in the exponent? PROFESSOR: No. AUDIENCE: Oh. Because chi goes-- it's more negative as h increases? PROFESSOR: Yeah. AUDIENCE: So the exponent becomes larger and larger? PROFESSOR: Yeah. Yeah. Yeah. Because I can take g to be small. Yeah. Yeah, the only thing is I want to be consistent with this power. So now, here is the key. So now let's look at the external strings. So here, at the beginning, I was looking at the-- I was looking at the vacuum process. There was nothing. Just everything come out of the vacuum. But you also consider, say, some process. Say you started with two strings. Then you scatter them together, you get two string back. So you can also consider such kind of strings. So you started with actually two initial strings. And then through some complicated interaction, then you have two strings back. OK? So for example, this would be such kind of process. Anything can happen between. I have two initial strings, but I have two final strings come out. OK? So this would be some kind of surfaces. So this would-- because when do you have your sum of all possible surfaces which four external string-- say, two coming from minus infinity, and two come from t [INAUDIBLE], two come out at t plus infinity. OK? And again, you need to sum of all possible topology in between. Say you sum over sphere in between. Yeah, let me just save time, draw it quickly. So you can sum over all spheres in between. Some spheres. Or you sum over torus, et cetera. OK? But now, if you do this weight, according to this rule-- OK? Now if you do the weight the according to this rule, something changed. Because again, we want you sum of all possible surfaces with the same rule as you do for the vacuum. But now, there's something important that changes. 
So these are the [INAUDIBLE] with the boundaries. So you want to start with two initial strings and two final strings. So each initial string could use a boundary. So now you have four different boundaries. So you have four boundaries. And so when you introduce boundaries into two-dimensional surfaces, then the Euler number changes. So sum of you already may know this from high school. 2 minus h, and the minus n, and n is the number of boundaries. OK? Number of boundaries. And which is the same as the number of external strings. Number of external strings. OK? So now, if you use this rule-- so now for the n-string scattering-- for n-string scattering-- so let me call this n, An. Then you will, again, sum over all genuses. So now this weight will translate into gs, become n minus 2 plus 2h. OK? So now, because of this n, so I have additional power of gs to the power n. Say I can write as Fn h. OK? So if you look at the sum, so this is gs to the power n minus 2 come from the sphere. The sphere, surface, [INAUDIBLE] surfaces, which can be considered as a tree level from the string point of view, because the string just come out and then nucleate. And then the next order is gs to the-- gs n. Then this is the torus topology. So this may be considered as some kind of one-loop process, and gs n plus 2. So this is genus 2, et cetera. OK? So now, let's compare with this. So this also included in the vacuum. For the vacuum, it just says n equal to 0-- [INAUDIBLE] n equal to 0. So now, we see exactly parallel with what we see in the large N gauge series. In both cases, you have summing over topology. And in both cases, we'll have summing-- you have this expansion. So identical mathematical structure with large N expansion. In particular, g string now corresponds into just 1 over N. And these external strings-- say if you have some-- just goes 1 into the glueballs, what we call the glueballs. So these kind of single trace operators. And the sum of string-- over string worksheet of, say, topology of genus h is mapped to, say, sum over Feynman diagrams of genus h. Yeah. So you see an exactly parallel mathematical structure between the two. So the question, is this an accident? Or this is something deep? OK? I think I'm running out of time. Yeah. OK. Maybe I will stop here. Oh, you don't want to stop? AUDIENCE: There's only 1 and 1/2 pages left. PROFESSOR: OK. Yeah. Let me just say a few words. Let me say a few words. If you look at these two sums-- this sum and this sum. Oh, yeah, let me call this fn. So that's a distinction. Let me call this f. So that's f. So if you look at these two sums-- so the question is, can you really physically identify these two? OK? Let me just say a couple words. Say fn h, which will appear in the correlation functions, essentially just sum over Feynman diagrams of genus h. And this is just the Feynman diagrams-- the expansion for the Feynman diagrams. And this one, this fh-- fn h-- So this is sum over-- this is path integral, say minus SNG of genus h surfaces. So now remember one thing we said before. We said each Feynman diagram can be considered as a partition of a 2D surface. OK? So now, if you sum over all possible Feynman diagrams-- so you can also think of it geometrically as sum of all possible triangulations of a surface. So this kind of partition essentially just a triangulation of the surfaces divided into different parts, and with some amplitude-- with some weight. 
So summing over all possible triangulations of the surface, of course, is precisely just a discrete form of summing over all surfaces. So this is just a discrete version of summing over all surfaces. And so this really identifies these two as essentially the same object. So when you're summing over diagrams, you're actually summing [INAUDIBLE] actually summing over all possible embeddings of some surfaces in spacetime. And the precise nature of the surface, of course, depends on the diagram itself. So this really tells you that the large N expansion is really just a string theory. OK? It's really just a string theory. But from here, you cannot immediately tell what kind of string theory this is. Because from the Feynman diagrams themselves-- so this Nambu-Goto action, which I wrote earlier, corresponds to a map from some surface-- an embedding of some surface into some spacetime. And that spacetime can be some arbitrary spacetime, et cetera. For a different spacetime, of course, you get a different action. So the question is, what does this correspond to? Say I give you a field theory, and you have those Feynman diagrams. And for each Feynman diagram, you can write down an expression for it. The question is whether this really corresponds to, say, some geometric action-- whether the Feynman diagrams [INAUDIBLE] really correspond to some geometric action describing the motion of some surface in some spacetime. Yeah. So let me just stop here. [APPLAUSE]
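To collect the formulas from this discussion in one schematic summary (here F_n^(h) denotes the genus-h world-sheet path integral with n external closed strings, f_n^(h) the sum of genus-h Feynman diagrams with n single-trace insertions, and the identifications are the ones argued above):

```latex
% Euler number with h handles and n boundaries, and the resulting expansion
\chi = 2 - 2h - n, \qquad g_s = e^{\lambda}, \qquad
A_n \;=\; \sum_{h=0}^{\infty} g_s^{\,2h-2+n}\, F_n^{(h)} \quad (\text{vacuum: } n=0).
% Dictionary with the large N expansion:
g_s \;\longleftrightarrow\; \frac{1}{N}, \qquad
\text{single-trace (glueball) operators} \;\longleftrightarrow\; \text{external closed strings}, \qquad
f_n^{(h)} \;\longleftrightarrow\; F_n^{(h)} .
```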
MIT_8821_String_Theory_and_Holographic_Duality_Fall_2014
24_Holographic_Entanglement_Entropy.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HONG LIU: So first, do you have any questions regarding the Hawking-Page transition we talked about last time? Because we were running out of time, it was a little bit hurried. Do you have any questions about that? AUDIENCE: So what's the original thermal AdS state? Did you put some random [INAUDIBLE] onto that metric, and it will generate some [INAUDIBLE]? HONG LIU: Sorry? Say it again? AUDIENCE: So what's the original thermal AdS state? HONG LIU: The thermal AdS state-- you couple AdS to a thermal bath. And then whatever excitations can be generated by that thermal bath will be generated. So mostly it's the graviton gas. It's the gas of the massless particles inside AdS. So that's essentially it. The field theory procedure is to go to Euclidean space and to make the Euclidean time periodic. And that physically should be interpreted as you just couple the system to your thermal bath. AUDIENCE: I have a question not really related to this. But I'm just wondering why do we want to consider [INAUDIBLE] on a sphere, because previously we discussed about [INAUDIBLE] 4, which is reality. What is this? HONG LIU: So normally when we look at a theory, you want to look at the theory from as many angles as possible. Some of them may not be directly realizable in experiment. But still, it's useful from a theoretical perspective because it gives you additional insights. So we see that the physics on the sphere is actually quite rich. So that actually gives you some insight into the dynamics and also into the duality itself. Yeah. AUDIENCE: So there's no condensed matter system essentially like that. HONG LIU: Oh, if you're talking about condensed matter applications, you may even imagine in some systems, you may be able to put it on a sphere. Certainly, you can put it on a two-dimensional sphere. And I can imagine you can manipulate, say, and put some spins on the two-dimensional sphere. Yeah, you might be able to do that. Any other questions? Yes? AUDIENCE: [INAUDIBLE] you said there are two kinds of [INAUDIBLE] connected on the same [INAUDIBLE]. HONG LIU: Yes. AUDIENCE: So it's like two CFTs? HONG LIU: No, it's a single CFT. It's just that a different sector of the CFT contributes. AUDIENCE: So it's like a quantum critical point [INAUDIBLE]? HONG LIU: No, this is a first-order phase transition. So it's not a quantum critical point. So the picture is that you have a temperature-- some TC. Below TC, you have a phase which we call the thermal AdS phase. And above it you have a big black hole phase. So translated to the field theory side-- this phase corresponds to the states whose energy scales as N to the power 0 contributing. And here, it's dominated by states of energy O(N squared). Of course, in the thermal ensemble, every state in principle contributes. But in this phase, it's dominated by the contribution of the low-energy states. And here, it's dominated by the high-energy states. And the high-energy states also have a much higher entropy. And then they dominate your thermal ensemble. Does this answer your question? AUDIENCE: Yeah, this looks like in any transition, you have [INAUDIBLE] carry energy and also [INAUDIBLE].
HONG LIU: Yeah, it is a little bit similar to that. There's also an entropy thing there. Yeah, it's a balance between two things. But the details is very different. The details are different. There's some qualitative similarity. AUDIENCE: So I read on the paper [INAUDIBLE]. He said that the Hawking-Page transition on the field theory side is due to some kind of deconfinement and confinement of QCD. HONG LIU: Yeah, so that's a heuristic way to say about it. So the key thing is that here, it's not really a confinement or not confined because in the [INAUDIBLE] series, there's no confinement. It's just scaled in [? Warren's ?] theory. And so in the sense he says this is confined-- so he called this a confined phase. And this is a deconfined phase-- it's based on the following. It just refers to these two behaviors. It just refers to these two behaviors. So in QCD, which will generating confinement or deconfinement-- so in QCD, we have [INAUDIBLE] 3. But if you scale into infinity, then you will find in QCD in the confined phase, the free energy will scale as N with N power 0. But in the deconfined phase, which is scale-- obviously the N square. So that aspect of scaling is the same as a serious risk confinement. So that aspect is similar. So he essentially refers to this aspect. But on the sphere, every state has to be a singlet. So in some sense, this notion there is no genuinely notion of confinement as we say in the QCD in the flat space case. If you read his paper, he actually described this very clearly. Yeah, he just called this a deconfinement transition the heuristic way. Yes? AUDIENCE: I have a question about the sum by which we calculate partition function. The previous class on Monday, you wrote it as sum over energies. The degeneracy of that [INAUDIBLE] factor. And you noticed that both of these grew at exponential to something times N squared. One was positive, and one was negative. I was wondering if it's possible that sum diverges? HONG LIU: No, that sum does not diverge. That sum is always proportional to n square. Yeah, depending on what you mean the sum diverges. So that's just from the definition of your partition function. So it's just the sum of all possible states. And then you can just reach this-- yeah, when you have a large number of states, you can just roughly write it as a continuous integral and then with the density of states. And so the key is that what happens to the density of states-- so you're asking maybe whether this integral will diverge when it equals infinity. So when it equals infinity, you will see that say supposing E scale as N cubed. Then you will see that the density of state actually does not grow that fast. And actually, it's only for the E of O(N squared), then they scale in the same way. They scale in the same way. So that's why you have this balance. And if you are not in the scaling with [INAUDIBLE]. If you have N cubed, and then this will dominate. So that factor will be suppressed. Any other questions? OK, very good. So now let's go to entanglement. So first, let me say a few words about entanglement entropy itself. So this is very elementary stuff, even though maybe not everybody knows. So let's consider a quantum system, just a general quantum system. So let's separate the degrees of freedom into two parts, say A plus B. So AB together is the whole system. We just separated the degrees of freedom. So essentially by definition, the Hilbert space of the system-- then we'll have a tensor structure. 
So the full Hilbert space will be a tensor product of the Hilbert space for the A part with the Hilbert space for the B part. OK? Of course, in general we have an infinite dimensional system. But in the case you have, say, a [INAUDIBLE] Hilbert space-- so if this is M dimensional and this is N dimensional, then the total Hilbert space will be M times N dimensional. And the typical wave function will have the form of some sum-- say some wave function written in some basis in A times some wave function written in some basis in B. That's what it means for the Hilbert space to be a tensor product-- your wave function can typically have this form. So we say that in the state psi, A and B are entangled if psi cannot be written as a single product-- rather than a sum of products, just a single product. So for a simple state, it might be easy to see. But if I write a very complicated state-- in principle, I can write the state in some basis where it looks like a complicated sum, but there may be some other basis in which it is a simple product. So in general, it's actually hard to tell, even though the definition is easy to state. So the entanglement entropy-- let me just call it EE to save space. EE essentially provides a measure to quantify the entanglement between A and B-- because even in the case where you know the state is an entangled state, you may still want to ask how much they are entangled. Entanglement entropy provides a way to quantify it. So the definitions are very simple. First we look at the density matrix for the total system. So if the system is in the state psi, then the density matrix for the total system is just psi times its own conjugate-- the projector onto psi itself. Then in this density matrix, we just trace out all the degrees of freedom in B. Since the Hilbert space has a tensor product structure, you can always do this. Then what you get is a density matrix which only depends on the degrees of freedom in A. And then we just calculate the Von Neumann entropy corresponding to this rho A. So the entanglement entropy is just defined to be the Von Neumann entropy associated with this density matrix rho A. So this rho A is often called the reduced density matrix. And the Von Neumann entropy associated with the reduced density matrix is defined as the entanglement entropy. OK? So this provides a very good measure, because from our knowledge of the Von Neumann entropy, you can immediately deduce that SA is equal to 0 if and only if rho A, this density matrix, is a pure state. It's a pure state. And then from here, you can also deduce that rho A-- this reduced density matrix coming from this procedure-- is a pure state only if psi can be written as a simple product. So whenever SA is equal to 0, you can be sure the state is unentangled. And when SA is not equal to 0, you know for sure A and B must be entangled in this state. So that's why this is a good measure. OK? And then the value of SA will tell you how entangled the system is. And also, you may immediately ask the question: what happens if instead of defining rho A-- of course, you can also do the same thing. You trace out A to define rho B, and then define the entropy for rho B. OK? Define the entropy for rho B.
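Writing out the definitions just given (standard notation; the base of the logarithm is a convention):

```latex
\mathcal{H} = \mathcal{H}_A \otimes \mathcal{H}_B, \qquad
\rho = |\psi\rangle\langle\psi|, \qquad
\rho_A = \operatorname{Tr}_B \rho, \qquad
S_A = -\operatorname{Tr}\big(\rho_A \log \rho_A\big),
% and similarly rho_B = Tr_A rho, with S_B defined the same way;
% S_A = 0 if and only if rho_A is pure, i.e. |psi> = |psi_A> (x) |psi_B>.
```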
But you can easily show for any pure state psi SA always equals to SB. So it doesn't matter which are symmetric. So it doesn't matter which one you're looking at. So this is very easy to prove, essentially just following from something called a Schmidt decomposition. And essentially right to this state in terms of Schmidt decomposition between the degrees of freedom rho A and B. And then you can show that rho A and rho B essentially have the same eigenvalues. And if rho A and rho B have the same eigenvalues, of course, then the entropy will be the same, because the entropy only depends on the eigenvalues. OK? Any questions about this? Good. So here is what a mixed state-- about a pure state, let me just say a side remark. So if AB, the total system is in a mixed state-- so far always is in the pure state. But suppose it's in the mixed state. Suppose the system itself is described by a density matrix. So the total system itself is spread. In general, SA is not equal to SB. OK? Not only in general, it's not equal to-- SA is not equal to SB. And in such a case, the entanglement entropy also contains classical statistical correlations of the mixed state-- of the mixed state-- so in addition to quantum correlations. It's almost trivial to see because the people suppose here is not a pure state. So here you replace it by the density matrix. And when you trace all the B in this density matrix, then, of course, there's still some original uncertainty in the previous density matrix remaining in A. And then they will come into this entropy-- so as defined, will depend on the classical uncertainties-- classical statistical uncertainties of your original density matrix. So for a mixed state-- so that's why for a mixed state, the internal entropy is not a very good measure of quantum entanglements, because it's contaminated by classical statistical information. But for today, we all need to talk about the pure-- mostly talk about the pure. Yeah, we actually talk about in general. But I want you to keep this in mind. So any questions so far? So just to give you an a little bit more intuition, let's look at this very simple example to calculate entangled entropy. So let's consider a two-spin system. So let's consider you have two spins, OK? So this is my A and B. So this has a two-dimensional Hilbert space. This has a two-dimensional Hilbert space. Altogether, you have a four-dimensional Hilbert space. So for example, so let me consider such a state. So this looks like a complicated state. But actually, this can be written as a single product, because you it can be written at OK, I hope this notation is for me. It's OK with you. You just A and B-- A spin and B spin. So even though in this space, this is written as say a sum of state. It looks the entangleds. But in fact, it's not because you can write it in terms of a simple product. So this is another entangled state. Of course, for the simple system, it's very easy to tell. But to give you a complicated system with many, many, degrees of freedom, then it becomes very hard. And then entangled entropy becomes useful. So now let me give you an example to calculated entangled entropy. So let's consider a state like this. So let's see some parameter. So clearly this state is entangled because you cannot write it as a simple product. So now, let's check it. Now, let's check it. So you can look at the full density matrix of the system. You just look at this-- the bra and the ket itself. And then just do the product. 
And you get the quotient theta-- under the [INAUDIBLE] terms. So you just take the product with itself. And then let's try to find what's rho A. So you trace out degrees of freedom of B. So this is our B, the second spin. And now let's trace out degrees of freedom B-- trace out the second spin-- so rho A. Goes one into we trace out the B in here. So when you trace out the B, you just take the end product between these two. So these are the same. So this will remain. And so you have this. And similarly, this one you have that. These two are the same. So when you take the trace, this is nonzero. But this too will give you 0, because this one is orthogonal to this one. And this spin is orthogonal to that spin. So that's what you get. And then now you can usually just write down the entropy. So this is the entangled entropy for these two states-- for this two spin system as a general function of this parameter theta. So now let's plot these as a function of S theta. So clearly, this is a period function. We only need to go to pi over 2. And then you have something like this. And you can easily plot that function. You will see something like this. So when theta is equal to 0. This is equal to 0 because this term is 0. This term is also 0 because of the quotient theta square is equal to 1. When theta equals to pi over 2, this term becomes 0 and this term also becomes, because it's [INAUDIBLE] 0. But this maximum in the pi over 2 or pi over 4. So I had a pi over 4. Of course, you can also go to the pi equals to minus 4. OK, you can go do plus minus 4. I didn't go to the [INAUDIBLE]. Yeah, anyway, so these are called the maximum entangled states. This is a maximum entangled. So this is a state in which you have the highest entropy. Any questions so far? AUDIENCE: You switched it. It should be up down. It's down. HONG LIU: Right, that's right. OK, good. So let me also say a few things about the properties of the entanglement entropy. So there are many properties you can derive from here. Let me only say a few important ones. OK, so one property of the entangled entropy is called subadditivity condition. So if you can see that the two systems A and B, then you can show that S(AB), the entropy for the total system, is smaller than the sum of the separate system. So when I write AB, I mean the combined system. OK? And this is greater than the difference between them. So this is so-called the subadditive condition. And then intuitively, you can understand. So if you have entropy of A and entropy of B, you add them together, then it's larger than the entropy of AB because there is some redundancy. There might be some redundancy here. Yeah, because when you add AB together, there may be some common correlation between them. And so this is greater than that. OK, intuitively that's what this inequality means. And also there are some property for the strong subadditivity condition. AUDIENCE: I don't quite understand-- HONG LIU: Yes? AUDIENCE: What S of A and S of B means? HONG LIU: So this is the entropy for the A. This is the entropy for the B. AUDIENCE: Don't we need this to partition system A into two subsystems? HONG LIU: No, you don't-- let me explain my notation. So S(A) means the entropy equals 1 if you integrate out everything else except A. And S(B) means you integrate everything else except B. And S(AB) be means to integrate everything else except A and B. And this is S(AB). AUDIENCE: But in our case, A and B is all that there is. HONG LIU: No, no, no. Now I'm just doing generally. 
Once I have this definition, so this can be-- even AB is a total-- yeah, so this can apply both to the case I said earlier-- say if you divide-- here I just-- so this AB does not have to be the same as that AB. Here I'm just talking about only two subsystems. AUDIENCE: So it's for example, S(A) is like-- A and the supplement as two parts of the system. And they could have some entanglement. S(A) is just an entanglement field. HONG LIU: A, S(A) is the entanglement of A with the rest. And S(B) is the entanglement of B between B and the rest. And the S(AB) is entangled between the AB together with the rest. Yes. AUDIENCE: So to clarify one thing, when you say S of A, we have this whole system. It means that trace everything which is not A? HONG LIU: Yeah, that's right. Any more questions about this? So here when I write to those expressions, I assume A and B don't have interceptions, OK? I assume A and B don't have interceptions. You can also have the strong subbadditivity condition. So this is pretty easy to prove. If we have time, it takes five minutes to prove. But the strong subadditivity which I'm going to write down. So strong additivity is involving three systems. Now, you have to add the three systems. And then greater than ABC-- OK? So, again, intuitively, the meaning of that inequality is clear. So the first inequality just says that I have two systems here. And the sum of these two systems-- the entropy of these two systems-- is greater than the sum between the combined system and the intersection between the two systems, OK, because C in the section between the two. And ABC is the whole thing combined. And similarly here, he said they have S(A) and S(B). I have two A and B. So I trace out the system. Outside A, I get entropy for A. I trace the system outside B, anyway for B. This inequality says if now you attach A C to A and C to B, a common system to both A and B. And then the resulting system will be larger than the B form. So if you add something, we increase the entropy. That, of course, is intuitively clear, because if you have entropy essentially parameterized the unknown part of the system. And if you add the third system, and then this just increase your unknown and then increase your entropy. So this strong subadditivity condition is actually very hard to prove. It's very hard to prove. It can get rather mathematical and requires some effort to do it. But this one is pretty easy to do. Yes? AUDIENCE: Can [INAUDIBLE] about the first of the two equations? For strong subadditivity, what's the first equation? HONG LIU: Yeah, so this means-- so look at this as a single system. This is a single system. That means that the sum of these two-- the entropy of this two system is greater than the sum of the combined system under the intersection. Any questions about this? Good. So this concludes the very simple introduction to the entangled entropy. AUDIENCE: Does the first one have any implications of topology? HONG LIU: Sorry, what do you mean by topology? AUDIENCE: Because it's like the intersection of the combined thing. HONG LIU: Yeah, so as classical entropy, this is a very simple thing. You can easily convince yourself if ABC are classical systems, and just classical distributions, describe classical probabilities, statistical distributions. And then with those things, you can understand this inequality just by drawing this kind of standard diagram corresponding to the different sets. The quantum mechanics proving those things are actually not trivial are not trivial. 
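For reference, the two inequalities under discussion are usually written, for non-intersecting A, B, C, as S(A) + S(B) >= S(A ∪ B) >= |S(A) - S(B)| (subadditivity and its companion) and S(A ∪ C) + S(B ∪ C) >= S(A ∪ B ∪ C) + S(C), S(A ∪ C) + S(B ∪ C) >= S(A) + S(B) (strong subadditivity). Below is a minimal numerical sketch, not from the lecture, of the definitions and of the two-spin example worked out above, plus a spot check of subadditivity on a random pure state; the function names, the basis labels, and the choice of natural logarithm are ours.

```python
import numpy as np

def partial_trace(rho, keep, dims):
    """Return the reduced density matrix on the subsystems listed in `keep`,
    tracing out all the others. `dims` are the dimensions of all subsystems."""
    n = len(dims)
    rho = rho.reshape(list(dims) + list(dims))
    for count, i in enumerate(sorted(set(range(n)) - set(keep))):
        k = i - count                 # current position of this row index
        half = rho.ndim // 2          # number of subsystems still present
        rho = np.trace(rho, axis1=k, axis2=k + half)
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

def entropy(rho):
    """Von Neumann entropy S = -Tr(rho log rho), natural log."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))

# Two-spin example: |psi> = cos(theta)|up,down> + sin(theta)|down,up>
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for theta in (0.0, np.pi / 8, np.pi / 4):
    psi = np.cos(theta) * np.kron(up, down) + np.sin(theta) * np.kron(down, up)
    rho = np.outer(psi, psi.conj())
    S_A = entropy(partial_trace(rho, keep=[0], dims=[2, 2]))
    print(f"theta = {theta:.3f}   S_A = {S_A:.4f}")  # 0 at theta = 0, log 2 at pi/4

# Spot check of subadditivity on a random three-qubit pure state,
# with A = qubit 0 and B = qubit 1 (so AB is entangled with qubit 2):
rng = np.random.default_rng(0)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())
dims = [2, 2, 2]
S_A = entropy(partial_trace(rho, [0], dims))
S_B = entropy(partial_trace(rho, [1], dims))
S_AB = entropy(partial_trace(rho, [0, 1], dims))
assert S_AB <= S_A + S_B + 1e-9
assert S_AB >= abs(S_A - S_B) - 1e-9
print("subadditivity holds:", S_A, S_B, S_AB)
```

For the two-spin state this gives S_A = 0 at theta = 0 and S_A = log 2 at theta = pi/4, matching the plot described above.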
Quantum mechanically, they no longer come in a very intuitive way. Yes? AUDIENCE: I was noticing something curious about the last line classically. If A and B are the same, I think the equation will still hold. But quantumly-- HONG LIU: It still holds. AUDIENCE: Doesn't it not hold quantumly? HONG LIU: No, they are supposed to hold quantum mechanically. AUDIENCE: But we are under the assumption that A, B, and C-- they don't overlap, right? I'm saying if A and B are totally overlapping. HONG LIU: No, but this inequality I'm writing in a way which they are not intersecting. You can write them in a way which they intersect. And I'm just writing in this way. So the condition for this particular form ABC is not supposed to intersect. AUDIENCE: Right, I'm saying if we set A equals to B. HONG LIU: No, no, then you cannot use this equation. Then you have to write the equation in somewhat different way, which apply to the intersection case. You can do them just choosing to write in the way which did only intersect. AUDIENCE: OK, but the thing I want to say is that we can still keep the bottom equation classically, even if we took-- HONG LIU: No, both are applied classically. Both are trivial classically. You can understand very easily classically. It's just quantum mechanically, it's no longer trivial. Quantum mechanics is no longer trivial. And there are various ways you can write those equations. I'm just choosing to write the way which ABC don't intersect. And you can also write the way, just rename them so that they intersect. AUDIENCE: So [INAUDIBLE]. So this is true for any entanglement state we choose? HONG LIU: Any state, yeah, density matrix. Whether it's a pure state or density matrix, it doesn't matter-- a general statement. OK, good. So this concludes the short introduction to the entangled entropy itself. And you may know that the entanglement entropy actually plays a very important role. Entanglement and entanglement entropy itself plays a very important role in quantum information and quantum computing because of quantum entanglement is the kind of quantum correlations which you don't have classically. So this EPR paradox and the [INAUDIBLE] inequality, and the teleportation-- all those things that take advantage of. But that's not our main point here. So what I'm going to talk about next is actually entangled entropy is also starting playing a very important role in our understanding of many-body physics and the space time. So now, let's talk about the entanglement entropy in many-body systems in quantum many-body systems. So I hope you're familiar with this word many-body systems. It's just a system of a large number of particles or a large number of constituents. So quantum field theory issue is a many-bodied system. Any quantum field theory is a many-body system. But this also includes many other lattice systems, which condensed matter people use, which are not necessary quantum fields-- can be written as a quantum field theory. OK? So now let's again consider just a simplified case. Let's consider a system which is composed of A and B-- so A plus B. And then I will say some trivial statements. So if H is HA plus HB. So now we talk about Hamiltonian. So far, I'm not using actually many-body systems. What I'm talking about are pretty generic systems. So even the Hamiltonian is actually just a direct sum of the Hamiltonian for A and Hamiltonian for B. If they don't couple, then of course, the [INAUDIBLE] state or, in particular, the ground state is unentangled. 
So you just find the ground state of each system. And then that's the ground state of the total system. And also, in general, if you start with the initial stage, which is unentangled, with the initial state, which is unentangled, it will remain unentangled. So if you started with an unentangled state, you wove it using this Hamiltonian, of course, it will just act on the specific part. And then you will remain unentangled. OK? And now let's consider we have H equals to HA plus HB. But now they also have interactions between the A and the B. So A and B are coupled. Then in general, we can just repeat what he said here. Then the ground state is now entangled. OK, and now if you start with the initial state, even if you start with the initial unentangled state, it becomes entangled on Hamiltonian evolution. So in general, we will expect-- so just based on this general expectation-- so you would expect the ground state of a typical many-body system will be actually rather entangled. It will be rather entangled unless the Hamiltonian factor rides into all three particles. The factor rides into the Hilbert space at each point-- for each degree of freedom. But now let's talk about the many-body systems we're interested in. So they're either typical condensed matter systems or quantum field series. So in typical condensed matter systems, we face this problem. So in typical generic condensed matter systems of quantum field theory, in general, no matter how you divide A and B-- so generic H AB is non-zero. But not only is H AB is non-zero, it's in general, H AB-- so the whole Hamiltonian, including H AB is local. This is a very important concept. It's local. But local will mean the following. For example, let me give you an example. For example, one of the very important condensed matter systems is, of course, the Heisenberg model. So essentially, you have the Hamiltonian. So you can see the lattice in whatever dimension you're interested in whatever kind of lattice you are. At each lattice, there is a spin. So consider you have a lattice of spins. And then you'll have nearest neighbor interactions between the neighboring spins, so some of the nearest neighbors. So you can imagine you have a lattice system. And each point on the lattice, you have a spin. And each spin interacts only with the spins nearby. So this is a local interaction. So by local interaction, means that the only direct couple to the spin falls distance away from it. So this is an example of a local system. And all our QFTs are local systems. All our QFTs are local systems. For example, let me just write down the phi 4 theory, for example, phi 4 theory. Then phi at each point-- so now you have to go back to your first day of your quantum field theory where you emphasize locality. So the phi evaluated at each point now is independent of degrees of freedom. Under the quantum field theory, it's a Hilbert space of a tensor product of the degrees of freedom. OK? Under the form of the Lagrangian say if you discretize it-- and this is only involving coupling between the neighboring point, because the derivative is only dependent on the second derivative. And this only depends on the value of phi at the single point. So the typical quantum field theory as far as you have found a number of derivatives, it's all local. It's all OK intact. It's all local. OK? So in other words, in these kind of systems, let me go back here. So this is very important. 
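Before going on, here is a rough sketch in formulas of the two local systems just mentioned (the couplings J, m, and lambda are generic, the sum over angle brackets runs over nearest-neighbor pairs, and the signs and normalizations are conventions not fixed by the discussion):

```latex
% Heisenberg model: each spin couples only to its nearest neighbors
H \;=\; J \sum_{\langle i j \rangle} \vec{S}_i \cdot \vec{S}_j ,
% phi^4 theory: the Lagrangian density involves phi and its derivatives at a single point
\mathcal{L} \;=\; \tfrac{1}{2}\,(\partial_\mu \phi)(\partial^\mu \phi)
             \;-\; \tfrac{1}{2}\, m^2 \phi^2 \;-\; \lambda\, \phi^4 .
```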
In other words, in these kind of systems, so let me just imagine-- so let me just say this box is the whole thing. It's the whole whatever space I live in. And I divide it into A B. So this tells you that for typical condensed matter or quantum field series, the H AB is only supported. So let me call this epsilon. So that tells that H AB is only supported near the boundary between A and B, because there is only local interactions. So the part of the Hamiltonian which directly covers A and B is only within this part. And this epsilon is added on the lattice spacing is order of lattice spacing, which in the case of a lattice system or in a quantum field theory, it would be your short distance cutoff. So if you try to discretize your field theory, then this will be a short distance cutoff. So similarly, I can consider another shape of A. So suppose I can see the circular A. And then, again, the part of which H AB is only supported in the region between these two dashed lines. And those dashed would be considered to be very close to the A. I'm just going to pick. But this should be the short distance cutoff. OK? And again, the H AB is only supported in here. So this is an important feature of which you have a local Hamiltonian. Let me just write one more word. So H AB only involves degrees of freedom here-- the boundary of A. So this means the boundary of A. So this turns out to have very, very important implications-- the fact that the Hamiltonian is local. And that they only direct and mediate coupling between the degrees of freedom near the boundary of A. And they have very important consequences. So now I'm just telling you the result which is a result of having accumulated for many, many years since the early '90s. So then when I find in the ground state of a local Hamiltonian in general you may construct some kind of evil counterexamples. But in general, in the ground state of a local Hamiltonian-- in the ground state of a local Hamiltonian-- any just choose any region A-- you integrate out the degrees of freedom outside of it. So let me emphasize an important point. Let me just pause to emphasize an important, which is only increasing what I'm saying here. In this definition of the entangled matrix which I just erased, you just need to have a partition of a degree of freedom between A and B. The way you partition it does not matter. How you partition it does not matter. You can partition it in whatever way you want. But for typical this kind of lattice system or for quantum field theory, there's a very large partition. And the partition is just based on the locality. It's just based on the degrees of freedom at each point. You just factorize your Hilbert space in terms of degrees of freedom at each point. So of course in the lattice system, it's obvious. You have a spin at each lattice. And for the quantum field theory, you just factorize your degrees of freedom. At each point, you factorize them. And so this locality provides a natural partition of your degrees of freedom. And when we talk about A and the B, we always talk about in terms of the space location. And it's natural to talk about the partition in terms of just your metric locations. So is this point clear? I just want to emphasize this point. 
So then you find that in general for the ground state of a local Hamiltonian satisfy so-called area law is that the leading term of the entanglement entropy is given by the area of the boundary of A-- this applies no matter which dimension you are in generic dimensions-- and divide it by a short distance cutoff of the lattice spacing. It's some number. So A is a dimensionless number. So area-- we can see the d-dimensional theory-- so d-dimensional system. Then the spatial dimension will be d minus 1. And then you go to the boundary of some region. The boundary of region A. Then that will be a d minus 2 dimensional surface. And then this would be the area of that surface. So this may cover a dimensionless number. And then you can have some pre-factor here which depend on your systems. So this formula tells you a very important physics. So this formula tells you a very important physics. This tells you that generically if you-- in such kind of systems, AB are entangled. But the entanglement between AB between A is complementary, which I just call A and B, are dominated by short-range entanglements near the boundary of A where this H AB is supported. OK? So this is also very intuitive once you load the answer-- once you load the answer, because we emphasized that H AB only coupled to degrees of freedom nearby. And then we look at the ground state you find most of the degrees of freedom in here don't entangle with B. Only the degrees of freedom them near the boundary of A are entangled with B because of this interaction. This local interaction directly leads to the entanglement between the degrees of freedom nearby. And those degrees of freedom, if they are far away, they might be entangled. But they are not dominant. OK? So I have put many dots here. So included in those dots are possible long-range entanglements. Because when you find the ground state, you really have to extremize the energy of the total system. You don't only look at the small part. So there's always some kind of long-range entanglement beyond this H AB. But this formula tells you that this short distance entanglement dominates. Yes? AUDIENCE: One question-- in QFT in general, like [INAUDIBLE], there's really all the interactions are perfectly local, I believe. But in string theory, are there any non-local interactions that occur. Are there any interactions with non-local? HONG LIU: Yeah, it will depend on the scale. In string theory when we say non-local, it's non-local at the alpha prime scale. And that alpha prime can perfectly be our short distance cutoff here. When I talk about local and non-local, I'm talking about infinite versus finite. For example, here, when I talk about whether this is a local system, I don't have to have a nearest neighbor. As far as this matches this point only directly coupled to the finite distance neighbor. And this can see the local Hamiltonian. But from the quantum field theory point of view, this might be considered non-local. You see what I mean? Depending on your scale. AUDIENCE: So is there anything that appears in string theory which is non-local at any scale? Or is the non-locality just some-- HONG LIU: Yeah, so the string theory itself does not give you that non-locality, but the black hole may. AUDIENCE: Oh, interesting. HONG LIU: The black hole seems to give you some kind of non-locality which we don't fully understand. So that's why the black hole is so puzzling. And the [INAUDIBLE] of string theory, you're non-local in the scale of alpha prime. 
Yeah, if it's alpha prime sufficient small, it's like a local theory. And even if alpha prime is big, if you look at very much, it is still like local. AUDIENCE: I see. HONG LIU: Yeah, but black hole can make things very long distance. So that's the tricky thing about black holes. OK, so this is something which are is interesting and was first discovered in the early '90s. But in hindsight, it's very intuitive. In hindsight, it's very intuitive. So even though this formula, even though this term is universal, but this coefficient is actually highly non-universal. Normally, by saying universal, we mean something which is common to different systems. So this coefficient actually depends on the details of individual systems. Say if you calculate for the Heisenberg model, or if you calculate for the free scale of field theory, or free fermion theory, and then this coefficient in general is different, are depending on how you define your cutoff. So even though this formula is universal, but those pre-factors actually are highly non-universal. But the partial excitement-- there are many things people are excited about in terms of entropy. This is one thing, this area law, because the area law is nice. Then this can be used as a distinguishing property of a ground state. If you go to a highly excited state, then you find the generic distribution inside and A and B will be entangled. And [INAUDIBLE] highly excited. But only in the ground state, somehow only near the boundary they are entangled. This is also makes sense, because if it were high energy then you can excite it, because from the very far away from H AB, then of course, everything would be entangled. AUDIENCE: So just one other question. So why are you dividing by epsilon? HONG LIU: No, this estimation is a number. Making the dimension is numbered. This is only a short-distance information here. So that is spatial short-distance cut off. AUDIENCE: Right, the only reason I ask is that intuitively it seems that the entropy should be proportional to this thin volume around the boundary. But that doesn't-- OK, I don't know. HONG LIU: Yeah, you have to make it into a dimensionless number. You have to be proportional to the area. So there's a very simple way to understand that thing. So the area divided by lattice spacing-- so what is this guy? This is wonderfully a lattice volume. That is the lattice area on the surface. So essentially this tells you how many degrees of freedom are lying on that boundary. AUDIENCE: OK, interesting. HONG LIU: Yeah, OK? So is this clear? OK, I think I'm spending too much time on this. Anyway, so excited in the last decade-- the main excitement about entangled entropy in the last decade. The one thing is about this behavior, then you can use it as a characterization of the ground state. But a lot of the excitement about this thing is on those dot, dot, dots. So people actually find that the subleading term-- some excitement. Yeah, I'm just saying-- from last decade people have discover that the subleading terms in entangled entropy other than this area term which come in from long-range entanglements. So this captures the short-range entanglement from the long-range entanglement can actually provide important characterizations of a system. So let me just mention two examples. So it will be little bit fast because we are a little bit-- do you want a break for the last day? AUDIENCE: Well, when you say it like that-- HONG LIU: OK, good. OK, thanks. 
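To summarize the area law discussed above in a formula (with d the spacetime dimension, epsilon the lattice spacing or short-distance cutoff, and a a non-universal dimensionless constant; the dots stand for the subleading terms, including possible long-range contributions):

```latex
S_A \;=\; a\,\frac{\operatorname{Area}(\partial A)}{\epsilon^{\,d-2}} \;+\; \cdots
```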
So let me just quickly talk about two examples, because I want to get to the holographic case. So the first thing is something called the topological entanglement entropy, which can be used to characterize so-called topological order in 2 plus 1 dimensions. So since the mid '80s and the '90s, people discovered the following. If you look at a typical gapped system-- when we say a gapped system, we mean the system has a finite energy gap between the ground state and the first excited state. So our general understanding of a gapped system is that in the ground state, because you have a finite gap, at sufficiently low energies you simply cannot excite anything. You have low energy because you have a finite gap to the first excited state. You cannot excite anything. So essentially, when you look at the correlation functions-- we can look at all the observables-- they all only have short-range correlations. They cannot have long-range correlations, because there's nothing to propagate you to long distance. Long-range correlations require massless degrees of freedom, and because you have a gap, you don't have such massless degrees of freedom. So the conventional idea about a gapped system is that all correlations should be short-ranged. That's the conventional picture. But in the late '80s and early '90s, our colleague, Xiao-Gang Wen, introduced this notion of a topologically ordered system, based on the fractional quantum Hall effect. He reasoned that there can be gapped systems which, even though they have a finite gap, actually contain nontrivial long-range correlations in the ground state. And those long-range correlations cannot be seen using the standard observables. So if you are using the standard observables-- say correlation functions of local operators-- you don't see them. You only see the short-range correlations. And those long-range correlations are topological in nature. You have to see them in some subtle ways. And in the late '80s, early '90s, he was trying to argue in some [INAUDIBLE] way that there are such kinds of subtle correlations. But then around 2005 or so, he realized that actually you can see it very easily, directly from the ground state wave function. You just calculate the entanglement entropy. You just calculate the entanglement entropy. So in the entanglement entropy, you always have this leading term-- this is a 2 plus 1 dimensional system, so the boundary of A is just a line. So essentially, the leading term is a number times the length of the boundary divided by your cutoff. But then it turns out that for these kinds of topologically ordered systems, there is a finite constant as a subleading term. And this finite constant is independent of the shape of A, independent of the size of A, independent of whatever A is-- it's purely topological in nature. And if you have a trivial gapped system, then this gamma will be 0. So if you look at a free massive scalar field, then this gamma will be 0. But for those topologically ordered systems, this gamma will be non-zero. And it tells you there actually are long-range correlations which are encoded in the entanglement entropy, because this cannot come from anything local-- it does not depend on the shape of A, does not depend on the size of A, does not depend on anything about A. So this is really topological in nature. It tells you this can only be some long-range, subtle correlation, which is only captured by the entanglement entropy rather than by the standard correlation functions.
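In formulas, the statement for a gapped 2 plus 1 dimensional system is conventionally written as follows (alpha is non-universal, L is the length of the boundary of A, and gamma is the topological entanglement entropy, independent of the shape and size of A; the sign convention for gamma is the standard one):

```latex
S_A \;=\; \alpha\,\frac{L_{\partial A}}{\epsilon} \;-\; \gamma \;+\; \cdots ,
\qquad \gamma = 0 \ \text{for a trivially gapped system.}
```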
OK. So this is one discovery which has generated a lot of interest in entanglement entropy, because now this can be used to define phases. You can actually define non-trivial phases using this number. So this is the first thing, and this is about condensed matter systems. In quantum field theories, this can be used to characterize the number of degrees of freedom of a QFT. Again, this is a long story, but I don't have time, so let me just tell you a short version. First let me look at 1 plus 1 dimensions, a CFT. So for a 1 plus 1 dimensional CFT, an important quantity is the so-called central charge. Have you all heard of the central charge, or not really? It just says: imagine you have a 1 plus 1 dimensional CFT. For each CFT, you can define a single number, which is called the central charge. And this central charge is very important because it controls the asymptotic density of states of the system. So if you look at the system at very high energies, the density of states is essentially controlled by this number c. So essentially c can be used to characterize the number of degrees of freedom. And c will appear in many other places, but I don't have time to explain. Anyway, there's something called the central charge, which can be used to characterize the number of degrees of freedom of the system. Now, look at the entanglement entropy for a 1 plus 1 dimensional system, for a 1 plus 1 dimensional CFT. In 1 plus 1 dimensions, you only have a line in the spatial direction. So then you take A just to be a segment of length l. Then you find, when you integrate out everything else, that the entanglement entropy for A is given by c divided by 3 times log of l divided by epsilon. Again, epsilon is a short distance cutoff. And c is your central charge. So first you notice that the area-law formula becomes degenerate when you go to 1 plus 1 dimensions. In 1 plus 1 dimensions it is replaced by something like this-- you have a log of epsilon. And the key thing is that now, because of the log, the prefactor is actually universal. And it's actually controlled by the central charge. So that says that from the entanglement entropy you can actually read off the central charge, and actually read off the number of degrees of freedom of the system. So the fact that the central charge in a 1 plus 1 dimensional CFT captures the number of degrees of freedom was known before people thought about entanglement entropy. But it turned out-- people spent many years trying hard to generalize this concept to higher dimensional theories. And it turned out to be very hard to generalize. People didn't really know how to do it. But now we know, because you can just generalize the entanglement entropy to higher dimensions. And then you find the same kind of thing appears. You find that quantities which can be defined as analogs of the central charge in 1 plus 1 dimensions again appear in the entanglement entropy in higher dimensions. And they can be used to characterize the number of degrees of freedom. So in some sense, the entanglement entropy really provides a unified way to think about the central charge and to characterize the number of degrees of freedom across all dimensions. And so this is a key formula. It was actually first realized by our old friend Frank Wilczek and collaborators when they studied free field theory. And people later generalized it to show this actually works for any CFT. Good.
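The formula just quoted, for a single interval of length l in the vacuum of a 1 plus 1 dimensional CFT with central charge c (the additive constant is non-universal):

```latex
S_A \;=\; \frac{c}{3}\,\log\frac{l}{\epsilon} \;+\; \text{const.}
```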
So now, finally, we can talk about holographic entanglement entropy, with all that preparation. This just gives you some taste of why we are interested in entanglement entropy, and why it's not only people in quantum computing or quantum information who are interested in it-- people doing condensed matter and people doing quantum field theory are also interested in it. And now people doing string theory are also interested in it, because of this holographic entanglement entropy. So suppose we have a CFT-- a d-dimensional CFT with a gravity dual-- dual to some theory in d plus 1 dimensional anti-de Sitter spacetime. Let me give you a region A. How do I find the counterpart of its entanglement entropy on the gravity side? That's the question. So let me just draw this figure. Suppose this box again represents your total system, and this carves out a region A. Then the question is: what does this translate into on the gravity side? So on the boundary we have some region A. In some sense this question is difficult, because it's not like the other quantities we've talked about before-- say the partition function, where you can just say the partition functions on the two sides should be the same; those identifications appear natural. Here there is no natural thing you can think of, because you cannot define entanglement entropy from a partition function alone. Entanglement entropy involves tracing out degrees of freedom and then taking a logarithm-- a highly non-local and complicated procedure-- and it's not obvious how you would define it, or even guess it, on the gravity side. So it's quite remarkable that Ryu and Takayanagi just made a guess, and it worked. The proposal is: find the minimal area surface-- let me call it gamma A-- which extends into the bulk and ends on the boundary of A. You first find this surface, and then you say the entanglement entropy of A is just the area of this gamma A divided by 4 G Newton. This is the Ryu-Takayanagi proposal. So the idea is that you find a surface of minimal area going into the bulk but ending on the boundary of A-- this is your gamma A-- and you divide its area by 4 G Newton. Let me just note one simple thing about dimensions. The boundary of A is d minus 2 dimensional, and A and gamma A are both d minus 1 dimensional: A is a region in your spatial section, so A is d minus 1 dimensional, and its boundary is d minus 2 dimensional; gamma A is a surface ending on the boundary of A, so it is also d minus 1 dimensional. So this area has length dimension d minus 1, and that's precisely the dimension of G Newton in the d plus 1 dimensional anti-de Sitter spacetime, so the ratio is dimensionless, as an entropy should be. Now, you can say this is a very difficult guess, and you can also say this is a very simple guess, because clearly this formula is motivated by the black hole entropy formula. You may say, ah, black hole entropy is something divided by 4 G Newton; entanglement entropy is also an entropy, so maybe it's also some surface area divided by 4 G Newton. And then the special surface would be the minimal surface.
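Written out, the proposal just described is the following; the phrasing of the minimization is mine, and the requirement that gamma A can be smoothly deformed onto A is the condition that comes up again later in the finite-temperature discussion:

```latex
% Ryu-Takayanagi proposal: A is a spatial region of the boundary theory,
% gamma_A a bulk surface on the same constant-time slice with
% \partial\gamma_A = \partial A (and deformable onto A).
S_A \;=\; \min_{\partial \gamma_A = \partial A}\,
          \frac{\mathrm{Area}(\gamma_A)}{4\,G_N}
% Area(\gamma_A) has length dimension d-1, matching G_N in d+1 dimensions,
% so S_A is dimensionless.
```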
So in some sense it's a very naive guess, and you could say it's a very simple guess. But as I said, it's also a very difficult guess, because essentially, other than that, you don't have another starting point. Yes? AUDIENCE: So we have A in the CFT. How do we know exactly where to place A in AdS? Because it's not-- HONG LIU: This is a good question. This is what I'm going to say next. But let me just emphasize: when we talk about A, we're always talking about a constant time slice, because you have to specify some time. When we consider the ground state, or any typical state that does not depend on time, it actually does not matter which slice we choose. You just choose a time slice and then specify the region A, because to talk about degrees of freedom, you only talk about the spatial section. And if you have a ground state, or any state which does not depend on time, then the gravity side also does not depend on time-- it's a time-independent geometry. So the time slicing in the field theory naturally extends into the gravity side, and the surface gamma A lives on that constant time slice extended into the bulk. AUDIENCE: Professor, one more thing. With the CFT and AdS-- I know we like to think of one as existing on the boundary of the other. But that's sort of an abstraction, right? HONG LIU: No, you should really consider that as a real thing-- real stuff. If you just think of it as an abstraction, then you miss a lot of intuition. Treating it as a genuine boundary gives you a lot of intuition, and many things become very natural. Yes? AUDIENCE: So comparing to the other formula that also has an area over the boundary-- all this tells us is that it helps us count the number of degrees of freedom in the CFT? So all it tells us is basically the epsilon in the denominator? HONG LIU: No, no, they have completely different dimensions-- this one is one dimension higher. AUDIENCE: So that's another strange-- HONG LIU: No, this is not strange at all, because they're not supposed to be the same. This is the way it should be. This one is d minus 1 dimensional, and that one is d minus 2 dimensional-- just completely different. AUDIENCE: Well, right, but there is something the same about them. HONG LIU: Sure, there is something similar about them, but these two have completely different physics. This one has a close analog with black hole physics. I don't think they were motivated by that other formula-- I think they were motivated by the black hole formula. People have also tried to connect that formula to the black hole formula, but that's another story. Anyway, it doesn't matter. OK, good. So this is a guess. Yes? AUDIENCE: So is this result for the ground state of the CFT? HONG LIU: This is valid for any time-independent state-- any state that does not evolve with time. AUDIENCE: The thing I don't understand is, on the CFT side there is a state that you need as an input to calculate the entanglement entropy. What's the corresponding thing on the AdS side? HONG LIU: On the AdS side it's the corresponding geometry. As we said before, each state in the field theory should correspond to some geometry with normalizable modes. So it's the corresponding geometry. For the ground state, this will be just pure AdS, and if you look at a finite temperature state, then this will be the black hole.
And then if you are able to construct some other geometry corresponding to some other state, you can use that. AUDIENCE: Is this entropy independent of which state we choose for the CFT? HONG LIU: No, of course it depends. This formula does not change, but the geometry does. This is a minimal surface in whatever the corresponding geometry is. AUDIENCE: So after we choose the region, we can choose any state-- HONG LIU: Yes, yes. AUDIENCE: And is the minimal surface independent of which state we choose? HONG LIU: Of course it depends on which state you choose, because each state corresponds to a different geometry, and the minimal surface depends on your geometry. Remember, previously, a state on the CFT side is mapped to a geometry with normalizable modes. If you have a different state, you have a different geometry. But by definition, all these geometries are asymptotically AdS-- at the boundary they should all be asymptotically AdS. That's what it means to have only normalizable modes excited. OK? Yes? AUDIENCE: So this entropy formula takes into account the entropy of gravitational excitations in AdS, if you can put it that way. But black hole entropy-- HONG LIU: No, let's try not to jump too fast-- I think you are trying to extrapolate too fast. This formula is supposed to calculate the entanglement entropy on the field theory side, in the large N and large lambda, strong coupling limit. We can talk about the gravitational interpretation later. But this formula is about the field theory entanglement entropy. OK? Good. Any other questions? OK, good. So there is a lot of support for it. This is the rule of the game: you make a guess, then you do a check. If the check works, that gives you confidence, and then you do another check-- and maybe at some point you find it fails. But if you do a sufficient number of checks, then people will believe you, and everybody starts checking it, and sooner or later we will see whether it fails or works. Anyway, this proposal is nice because it's simple to compute: finding a minimal surface on the gravity side is, conceptually, a straightforward mathematical problem, even though technically it may not be simple. So you can use it to calculate many quantities. Let me just describe the support for it in words, to save time. The first thing you should check is that this satisfies the strong subadditivity conditions which we wrote down earlier, because those are very non-trivial conditions that must be satisfied by any entanglement entropy-- if you violate them, the proposal is gone. Then you can try to reproduce the known behaviors of entanglement entropy-- for example, the central charge behavior and the area law-- and build up confidence by checking all those known results. And then you can try to derive new results and see whether they make physical sense. So essentially, that's how it works. This is also nice at the technical level, because entanglement entropy is something very hard to compute. You guys may not have experience calculating entanglement entropy-- but ask Frank Wilczek.
He was one of the first few people to do it, in some free field theory. Even for free field theory, people could mostly do it in 1 plus 1 dimensions. Going beyond that-- say a free scalar field theory in 2 plus 1 or 3 plus 1 dimensions, for some simple region like a circle or a sphere-- you have to do a numerical calculation: you have to discretize the field theory and do a fairly massive numerical computation. It's not easy to compute. But this formula is actually easy to compute with, in comparison. So at the technical level it provides a huge convenience for calculating many things. That's a side remark. So now let's do some calculations. First, let me show that this proposal actually satisfies strong subadditivity. Remember, the formula we had before is S(AC) plus S(BC) greater than or equal to S(ABC) plus S(C)-- unfortunately I erased it. So this is one of the inequalities. We'll just draw it in 1 plus 1 dimensions because it's easy to draw, but a similar argument generalizes easily to higher dimensions. So let's look at a line in 1 plus 1 dimensions. Call this region A, this region C, this region B. You could also make them separated-- it doesn't matter, it's easy to work out-- I'm just giving you one case. Suppose the minimal surface for the region AC is like this, and the minimal surface for the region BC is like this. Let me call this curve gamma AC and this curve gamma BC. I could not find the colored chalk today-- let me see whether this one works. Yes. Now let me define two other curves out of the pieces: call this one gamma 1 and this one gamma 2. In principle I should use two different chalks for that. Anyway, we have: the length of gamma AC plus the length of gamma BC equals the length of gamma 1 plus the length of gamma 2, because they are made of the same pieces. Then, just from the definition of the minimal surface, the length of gamma 1 must be greater than the length of gamma C, because gamma 1 ends on the boundary of C and the minimal surface associated with C must have the minimal length-- it must be smaller than gamma 1. And the length of gamma 2 must be greater than the length of gamma ABC, because gamma 2 ends on the boundary of ABC and gamma ABC is supposed to be the minimal surface for ABC. So we conclude that the length of gamma AC plus the length of gamma BC is greater than the length of gamma ABC plus the length of gamma C, and this translates into that formula. A very simple and elegant proof. Now you can prove the other inequality, which is S(AC) plus S(BC) greater than or equal to S(A) plus S(B). So now I will define two other surfaces. Let me just redraw it-- A, C, B-- with gamma AC like this and gamma BC like this. Now let me call this one gamma 1 tilde and this one gamma 2 tilde. This is annoying. And again, gamma AC plus gamma BC equals gamma 1 tilde plus gamma 2 tilde in length. Gamma 1 tilde ends on the boundary of A, and gamma 2 tilde ends on the boundary of B. That means gamma 1 tilde must be longer than gamma A, and gamma 2 tilde must be longer than gamma B. So you have shown that the length of gamma AC plus the length of gamma BC is greater than the length of gamma A plus the length of gamma B. Very simple and elegant. If you really want to be impressed by this, I urge you to look at the proof of strong subadditivity itself, say in some textbook or in the original papers.
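Collecting the two cut-and-recombine arguments just given into formulas:

```latex
% First inequality: recombine \gamma_{AC}\cup\gamma_{BC} into \gamma_1 (ending on \partial C)
% and \gamma_2 (ending on \partial(ABC)); minimality of \gamma_C and \gamma_{ABC} gives
\mathrm{Area}(\gamma_{AC}) + \mathrm{Area}(\gamma_{BC})
   = \mathrm{Area}(\gamma_1) + \mathrm{Area}(\gamma_2)
   \;\ge\; \mathrm{Area}(\gamma_C) + \mathrm{Area}(\gamma_{ABC})
\;\Longrightarrow\; S(AC) + S(BC) \ge S(ABC) + S(C).

% Second inequality: recombine instead into \tilde\gamma_1 (ending on \partial A)
% and \tilde\gamma_2 (ending on \partial B):
\mathrm{Area}(\gamma_{AC}) + \mathrm{Area}(\gamma_{BC})
   = \mathrm{Area}(\tilde\gamma_1) + \mathrm{Area}(\tilde\gamma_2)
   \;\ge\; \mathrm{Area}(\gamma_A) + \mathrm{Area}(\gamma_B)
\;\Longrightarrow\; S(AC) + S(BC) \ge S(A) + S(B).
```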
It's actually highly non-trivial. It relies on some subtle concavity properties-- it's quite involved. So this is a great confidence boost, that the proposal satisfies the strong subadditivity conditions. So now let's look at the last item: let's try to reproduce this c over 3 log formula with a simple calculation. This formula is supposed to be true for any CFT, so it should also apply to a holographic CFT, and we can check whether it works. Normally you would like to choose an example which gives something new, but this example is the simplest-- that's the reason I choose it, even though it just reproduces a known result. So let's look at the entanglement entropy of a 1 plus 1 dimensional CFT. A 1 plus 1 dimensional CFT should be dual to some theory in AdS3. Since we only look at the vacuum, we just work with pure AdS3. So let me write down the AdS3 metric: ds squared equals R squared over z squared times minus dt squared plus dx squared plus dz squared, where t and x are the boundary directions. As I said, each CFT is characterized by a central charge, and you can obtain the central charge in various different ways. From holography, you can express the central charge in terms of gravity quantities. There are many ways to derive it; we will not go through that calculation, so let me just write down the result. For holographic CFTs dual to gravity in AdS3, the central charge is given by 3R divided by 2 G Newton, where R is the AdS radius and G Newton is the Newton constant. This is a result I just quote-- you can derive it in many different ways. The Newton constant in three dimensions has dimension of length, so this ratio is a dimensionless number. So now let's calculate the entanglement entropy using that formula. The calculation is very similar to our Wilson loop calculation. Let's call this the x direction and this the z direction. This point is L divided by 2, this is minus L divided by 2, and this region is A. We need to find a curve-- in 1 plus 1 dimensions the minimal surface is one dimensional-- which ends on the endpoints of this segment A. Is this clear? So this is the x direction, and this is x equal to 0. What we need to find is the minimal length curve connecting these two points. As we said before, we should look at the constant time slice, because this is the vacuum, which is time independent. On the constant time slice the metric becomes R squared over z squared times dz squared plus dx squared, because the time does not change. Now parametrize the curve by x as a function of z. Then the line element becomes dl squared equal to R squared over z squared times 1 plus x prime squared, dz squared. So the length of the curve is parametrized by this. And we have to satisfy the boundary condition-- by symmetry, we only need to consider the right half, so we impose the boundary condition that x at z equal to 0 is L divided by 2, and consider the right half. So now you can write down the entanglement entropy: S(A) will be 1 over 4 G Newton times the length, and I integrate over half of the curve.
Let me call the turning point of the curve, where x equals 0, z0. So I just integrate from 0 to z0; z0 we will find once we find the minimal length curve. So the integrand will be R over z, dz, times the square root of 1 plus x prime squared-- this is the length-- and then times 2, because we are only considering half of it. Now you extremize it, find the minimal length curve, plug the solution back in, and you find the action. Actually, we don't need to extremize it, because this is a well-known problem: this is the Poincare metric, the prototype of two-dimensional hyperbolic space, and finding the minimal length curve in hyperbolic space is a centuries-old problem. So the answer is well known, but you can also easily work it out yourself. The result is that the minimal curve is exactly a half circle of radius L divided by 2: x squared plus z squared equals L squared over 4, so x equals the square root of L squared over 4 minus z squared. In particular, z0, corresponding to the point where x equals 0, is just L divided by 2. So now you just plug this in-- let me write it a little bit fast, because we are a little bit short on time. When you plug this in, you can rescale z so that L divided by 2 factors out-- say rescale z by L over 2. Then the expression becomes 2R over 4 G Newton times an integral whose upper limit becomes 1. For the lower limit, you see that when you plug this in, the integral is divergent, because there's a 1 over z here. As always, we need to put in a cutoff, and this divergence is expected, because the entanglement entropy depends on the short-distance cutoff. So let me put a cutoff at epsilon, which becomes 2 epsilon divided by L after the scaling. And the integrand is dz over z times the square root of 1 minus z squared. So the integral reduces to something like this, and you can evaluate it trivially. Taking the epsilon goes to 0 limit, one term survives: 2R divided by 4 G Newton times log of L divided by epsilon. Now let's rewrite this in terms of c: this is 1/3 times 3R divided by 2 G Newton times log of L divided by epsilon, and so this is precisely that formula, c divided by 3 log of L over epsilon. And all of that just comes from the fact that the minimal curve in hyperbolic space is a circle. It takes a lot of effort to calculate this thing in field theory, even for a free field theory, but in gravity you can do it very easily. So now let me just quickly mention what happens at finite temperature. We'll do it a little bit fast since we're short on time. Let me just say finite T, and then I will connect to the black hole entropy-- I will show that this formula actually reduces to the black hole entropy in some limit. For this problem it's actually easier to consider the CFT on a circle. When the CFT is on a circle, the boundary will be a circle, because this is 1 plus 1 dimensions-- remember, in global AdS the boundary circle is sitting out there at infinity.
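Before going on with the finite-temperature case, here is a quick numerical sanity check of the vacuum-interval computation just done. The values chosen for L, epsilon, R, and G Newton are arbitrary illustration numbers, and parametrizing the half circle by an angle is just a convenient way to avoid the turning point of the curve.

```python
import numpy as np
from scipy.integrate import quad

# Check: on the slice ds^2 = (R/z)^2 (dz^2 + dx^2), the minimal curve ending on
# an interval of length L is the half circle x^2 + z^2 = (L/2)^2.  Writing
# z = (L/2) sin(theta), its regulated length is R * Integral d(theta)/sin(theta)
# from theta_eps to pi/2 (one symmetric half), and
# S_A = length / (4 G_N) should equal (R / 2 G_N) log(L/eps) = (c/3) log(L/eps).

def holographic_interval_entropy(L, eps, R=1.0, GN=1.0):
    theta_eps = np.arcsin(2 * eps / L)            # image of the cutoff z = eps
    half_length, _ = quad(lambda t: R / np.sin(t), theta_eps, np.pi / 2)
    return 2 * half_length / (4 * GN)             # two symmetric halves of the geodesic

L, eps, R, GN = 10.0, 1e-4, 1.0, 1.0
print(holographic_interval_entropy(L, eps, R, GN))   # ~ 5.756
print((R / (2 * GN)) * np.log(L / eps))              # same, up to O(eps^2 / L^2)
```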
And in the 1 plus 1 dimensional case the boundary is just really a circle, and this is the bulk. At finite temperature, you put a black hole there-- let me just put the black hole here. So now you can ask the following question. Consider some region A, and first let A be very small. You can work out the minimal surface-- it works the same as in our Wilson loop story. If A is sufficiently small, then the geometry the surface probes is still essentially AdS, and you just get a minimal surface near the boundary-- a small deformation of the vacuum behavior. Now make A larger and larger. This z0 depends on L: if we make L larger and larger, z0 goes deeper into the bulk. So if you make A larger, the surface goes further into the bulk, gets deformed, and starts to see the black hole geometry. But now, what happens if I make A larger than half the size of the circle? The minimal surface should still be smoothly deformable back onto A. It turns out the minimal surface then wants to go around the horizon, doing something like this. AUDIENCE: Why don't you do it the other way around? HONG LIU: Because A is like this, and only a surface like this can be deformed back onto A smoothly, without crossing the black hole. AUDIENCE: Yes, or it could be like a phase jump or something at some point, when one surface becomes longer than the other. HONG LIU: No, in this situation it's not that-- it's something like this. Now if you make A even longer-- let's take A even longer-- then it would be something like this, because the surface always wants to hug the horizon. And eventually, when the complement of A becomes tiny, it becomes something like this: a piece that closes on the horizon plus a small piece like this. Is that clear? It doesn't matter. [LAUGHTER] Anyway, let me just draw this again-- it actually does matter, sorry. So take A to be very big, so that the remaining part is very small. Then the surface goes like this, hugging the black hole. And eventually, when these two endpoints shrink together-- take A to be the total space-- what do you get? You see that the minimal surface is essentially just a surface hugging the horizon. And then this gives precisely the black hole entropy formula. And this is the situation where you expect the black hole formula, because if you take A to be the whole space, then rho A is just equal to the rho of the whole system, and S(A) by definition is just the entropy of the whole system-- the thermal entropy-- and that is given by the black hole formula. So you find that in this special case you recover the black hole formula. I have some small additional things we could talk about, but I think we'll skip them.
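For reference, the limiting statement just described can be written as follows; the finite-interval expression in the second line is the standard thermal CFT/BTZ result, quoted here as a consistency check rather than derived in the lecture:

```latex
% A -> whole space at finite temperature: the minimal surface wraps the horizon,
A \to \text{whole space}: \qquad
S_A \;\to\; \frac{\mathrm{Area}(\text{horizon})}{4\,G_N} \;=\; S_{\text{thermal}} .

% Known result for an interval of length l at inverse temperature \beta
% (decompactified boundary), interpolating between the two regimes:
S(l) \;=\; \frac{c}{3}\,\log\!\Big[\frac{\beta}{\pi\epsilon}\,
            \sinh\frac{\pi l}{\beta}\Big]
\;\xrightarrow{\;l \gg \beta\;}\; \frac{\pi c}{3\beta}\, l + \cdots ,
% i.e. an extensive thermal entropy density, consistent with the
% horizon-hugging picture above.
```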
You can also show that for general dimensions you always recover the area law, without needing the details of the geometry. No matter what dimension you look at, a typical state corresponds to a normalizable geometry, and when you approach the boundary it always looks like AdS. And then you find that in any dimension the same feature holds: when the minimal surface gets close to the boundary, it becomes perpendicular to the boundary. Here it was a half circle, which is of course perpendicular to the boundary, but this feature generalizes to any dimension-- near the boundary, the minimal surface just goes straight down, perpendicular to the boundary, everywhere. And from that behavior you can show that you always get the area law. I'll leave this as an exercise for you. So let me just finish with a couple of philosophical remarks. This RT formula is really remarkable because, as I said, it technically provides a very simple way to calculate entanglement entropy. But conceptually it's even more profound, because it tells you that spacetime is related to entanglement. It encodes something which is very subtle on the field theory side-- this S(A), as we said, requires a complicated procedure of tracing out degrees of freedom and taking a logarithm-- and that turns out to be related to the geometry in a very simple way. So essentially you see that entanglement is captured by the spacetime; in particular, you can say the geometry is essentially equal to the quantum information. This is also very exciting from the gravity perspective, because if you really understand this duality, then you can use the quantum information of the field theory to understand subtle things about the geometry. It tells you that geometry is actually equal to quantum information. So previously we had people doing quantum information, people doing condensed matter-- quantum many-body systems-- and QCD, which is also a kind of quantum many-body problem, and then we had people doing black holes, string theory, and gravity. These were all very different fields, but now they are all connected by this holographic duality-- it essentially connects all of them. So now we see a unified picture, a unified paradigm, for understanding all quantum systems, with or without gravity. So I will stop here. [APPLAUSE]
MIT 8.821 String Theory and Holographic Duality, Fall 2014
Lecture 20: Euclidean Correlation Functions -- Two-point Functions
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HONG LIU: OK, let us start. So let me first remind you what we did at the end of the last lecture. We started talking about how to calculate Euclidean correlation functions, or how to relate the Euclidean correlation functions of the boundary field theory to bulk quantities. The quantity we are always interested in in the field theory is the generating functional. This is defined, say in the Euclidean theory, by inserting the source term into your Lagrangian and then computing the expectation value of this exponential. Even though I write down a single source phi and a single operator, you should really understand this as standing for a collection of all possible operators and all the corresponding sources you could put in. And one important thing is that you should imagine this exponential as a power series. For the purpose of the generating functional, this exponential should be understood as a power series in phi: we expand it in phi, and then the term proportional to phi gives you the one-point function, the term proportional to phi squared gives you the two-point function of O, the cubic term gives the three-point function of O, et cetera. So this exponential does not need to make sense non-perturbatively; you only need to make sense of it as a power series in phi-- perturbatively-- with coefficients given by the correlation functions of O. Is this clear? And then we argued last time that, given that the operator O is dual to some bulk field capital Phi, and that the non-normalizable mode-- the boundary value of capital Phi-- should be related to the source coupled to O, you can in some sense immediately write down what this generating functional should correspond to on the gravity side. The left-hand side is just the partition function with this deformation, and this should be the same as the partition function on the gravity side: we expect Z CFT of phi to equal the gravity partition function with the corresponding boundary condition. So this is just a heuristic way of writing the more precise relation. For example, for a scalar, the precise meaning is that capital Phi of x and z should go to phi of x times z to the power d minus delta as z goes to 0. That is the precise mathematical statement: the boundary value of capital Phi is given by the small phi. And of course, depending on the field, this power may be different. Last time we discussed that for a conserved current, which is dual to a vector field, this power is 0, and for the metric perturbation it is yet another power, et cetera. So any questions regarding this? Good.
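In symbols, the relation just described reads as follows for the scalar case; this is my transcription of the board formulas:

```latex
% Generating functional of the boundary theory ...
Z_{\rm CFT}[\phi] \;=\;
  \Big\langle \exp\!\Big(\int d^d x\; \phi(x)\,\mathcal{O}(x)\Big)\Big\rangle_{\rm CFT}
% ... equated with the bulk partition function with prescribed boundary behavior:
\;=\; Z_{\rm bulk}\Big[\,\Phi(z,x) \xrightarrow{\,z\to 0\,} \phi(x)\, z^{\,d-\Delta}\Big].
% The power z^a depends on the operator: a = d - \Delta for a scalar of dimension \Delta,
% a = 0 for a conserved current (massless vector), a = -2 for the metric
% perturbation sourcing T_{\mu\nu}.
```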
So now, the thing is that in general we cannot really verify this relation, because the right-hand side is very hard to compute. Actually, we don't even know how to define the right-hand side in the full quantum gravity regime-- it's just a formal expression: if quantum gravity exists, then maybe there should exist such a partition function. But there is a regime where we do know how to calculate this side. This is the so-called classical gravity regime, also called the semiclassical regime. In the semiclassical regime-- the limit in which alpha prime goes to 0 and G Newton goes to 0, or equivalently the string coupling gs goes to 0-- one can formally write this Z bulk just as a Euclidean path integral over all the bulk fields. Here I use a convention for the Euclidean path integral in which I don't write the minus sign explicitly. So you integrate over all the gravity fields-- again, this should be understood as a collection of all the fields on the gravity side: the scalar fields, gravity, vector fields, everything-- with appropriate boundary conditions. And this S E is just the standard Euclidean action: say, the Einstein term plus the matter terms. Then, to leading order in this limit-- alpha prime goes to 0, kappa goes to 0, where kappa is essentially G Newton-- you can just perform this integral by the saddle point approximation. I assume you all know what the saddle point approximation is: you just solve the equations of motion following from this action. So Z bulk is given by the action evaluated at the solution of the classical equations of motion, where Phi c is the classical solution satisfying the correct boundary condition. So in this limit we can really calculate this quantity. And this is very good news, because this limit is-- what did we say before?-- the N equal to infinity and lambda goes to infinity limit of the boundary theory, of the super Yang-Mills theory. So given this, we essentially know the generating functional, and then we can actually calculate correlation functions in the strongly coupled field theory just by solving the classical gravity equations. So we conclude-- this is an important equation, so let me write it here. Let me also make a small remark. To leading order, this correlation function is essentially controlled by this 1 over 2 kappa squared prefactor, which is essentially 1 over G Newton. Remember that previously, when we talked about the relation between G Newton and field theory quantities, 1 over G Newton is proportional to N squared-- N squared of the boundary theory. And this is indeed consistent: in the saddle point approximation S E is controlled by 1 over 2 kappa squared, which is proportional to N squared, and this is consistent with our expectation from large N counting that, to leading order, the generating functional should be proportional to N squared, up to normalization.
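Spelled out with the common convention in which the Euclidean weight is e to the minus S E-- the lecture's own sign bookkeeping may differ only by convention-- the semiclassical statement is:

```latex
Z_{\rm bulk}\big[\Phi \to z^{\,d-\Delta}\phi\big]
  \;=\; \int \mathcal{D}\Phi \; e^{-S_E[\Phi]}
  \;\;\xrightarrow[\text{saddle point}]{\;\alpha'\to 0,\ \kappa\to 0\;}\;\;
  e^{-S_E[\Phi_c]},
\qquad
\ln Z_{\rm CFT}[\phi] \;\simeq\; -\,S_E[\Phi_c],
% \Phi_c: classical solution with the prescribed boundary behavior;
% S_E \propto 1/(2\kappa^2) \propto N^2, matching large-N counting.
```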
OK, so now we have obtained something very powerful. It says that in the N equal to infinity and lambda goes to infinity limit, the logarithm of the CFT generating functional is related to the on-shell bulk action, evaluated with the boundary condition that Phi c goes to z to some power a times phi of x as z goes to 0. This is the non-normalizable mode of the bulk field Phi. And this exponent a depends on, say, the dimension of the operator, the spin, and possibly other quantum numbers of the operator. For a scalar we just have the power we wrote. And let me remind you: for a conserved current, which is dual to a gauge field-- a massless vector field-- this gives you a equal to 0, so the leading non-normalizable term, the boundary value of the vector field, is just the source for the current. And for the metric perturbation, which we discussed last time is dual to the stress tensor, this corresponds to a equal to minus 2. Where this comes from is that, remember, last time we said that if you have a gauge field, say a massless one, then when you go to the boundary it behaves as a mu plus b mu times z to the d minus 2-- there is no power of z multiplying the leading term, so a equals 0 in that case. Similarly, the components of the metric go to 1 over z squared times their boundary values, and the boundary value of delta g mu nu is the source for T mu nu, so there a is equal to minus 2. Any questions on this? Now let me make some remarks. The first remark is what I said a little earlier: the phi of x in this generating functional should be considered as infinitesimal. The reason is what we said before: we consider the generating functional simply as a power series, because we don't want the perturbation to actually deform and change your original theory-- we just use it as a generating function. Similarly, the same thing applies on the gravity side: this phi corresponds to the non-normalizable mode, so the non-normalizable mode should also be infinitesimal, so that it doesn't disturb your asymptotically AdS geometry. Any questions? So that was the first remark. The second remark-- let me give this equation a name and call it star-- is that both sides of star are actually divergent. The left-hand side for a very simple reason: we are considering the generating functional of composite operators in a quantum field theory, and there are always short-distance divergences when two operators come together. So this is just the usual UV divergence of a QFT. On the right-hand side, we will soon see in the example that it is also divergent, and the reason is one we have also seen before: remember the AdS metric is like this-- I write all the boundary directions together as dx squared-- and when you go to the boundary, z goes to 0.
And then the whole metric blows up. So essentially you have infinite volume-- roughly, you can say this is a volume divergence near the boundary of AdS. The relation between the two divergences, of course, is precisely what we called the IR-UV connection: the volume divergence associated with the spacetime having infinite volume is related to the UV divergences on the field theory side. So in order to obtain a finite answer, we need to renormalize. From the field theory point of view, the idea is that you add some counterterms to cancel the divergences, and we follow the same logic on the gravity side. So in order to make sense of this expression, we introduce a renormalized action, defined from your original on-shell action. But that is divergent, so we first need to regularize it: instead of evaluating all the way to z equal to 0, which would be divergent, we cut it off at some z equal to epsilon. So this is your full spacetime, and z equal to 0 is again your boundary; if you integrate all the way to z equal to 0 you get a divergence, so we just cut it off at z equal to epsilon. And then we add a counterterm, which can be expressed in terms of phi c evaluated at this epsilon, and this counterterm should cancel the divergence here. Now, this S ct should satisfy some simple criteria. In principle it could be arbitrary, but it should satisfy the following. First, it must be a local functional of phi c. The reason it must be local is that it comes from short-distance physics, and short-distance physics is not something we can really control, even in renormalizable theories-- even there, we just look at quantities which are insensitive to the short-distance physics. But one important feature of short-distance physics is that it all comes from a single point, which means the counterterm you add must be local. And of course it must be covariant, et cetera, with all the right symmetries. Essentially, these are the only conditions the counterterms should satisfy, and then you just choose them to cancel the divergent terms here. But now the key point is that this is different from just throwing away the divergent terms, because the counterterm also contains finite terms. So, as in ordinary field theory renormalization, it may change the finite terms here. And those finite terms are constrained by locality, covariance, and the symmetries, and there is actually no ambiguity in them-- even though in some sense there is ambiguity in the divergent terms, in the finite terms there is no ambiguity. We will see an example at the end of this class. So we need to renormalize, and this is the criterion for how to renormalize. Once we have renormalized, we can obtain the n-point functions. The general connected n-point function-- say, O1 of x1 up to On of xn-- is just the standard story.
You just take n derivatives of the generating functional-- now the renormalized one. You take a derivative with respect to phi 1, corresponding to O1, then phi 2, corresponding to O2, et cetera, up to phi n corresponding to On. And at the end of the calculation, after you have done the derivatives, you set all the phi to 0. Then you have extracted the n-point function. This is the same as expanding that exponential in a power series and extracting only the term of power n in phi-- same thing. And because this is equal to that, we can write it in terms of gravity quantities: the connected n-point function is given by n derivatives of the renormalized S E, viewed as a functional of the boundary values, with respect to phi 1 of x1 through phi n of xn, and at the end you set all the phi to 0. So in principle, by calculating the gravity action with the appropriate boundary conditions, you obtain any n-point function of the field theory. Any questions about this? Yes. AUDIENCE: Can we check that relation on the lattice? HONG LIU: Can we check it? Yeah, in principle you can, it's just not easy to do. AUDIENCE: Have people tried? HONG LIU: People have tried-- not for the super Yang-Mills theory, but there are versions of this for certain matrix quantum mechanics. There, the left-hand side you can actually calculate numerically-- maybe not lattice, it's just a matrix integral-- you can calculate it using Monte Carlo and then compare with gravity, and they turn out to agree with very high accuracy. But for super Yang-Mills theory, the left-hand side is not yet doable with lattice calculations. Now, one particularly important case is the one-point function, so let me mention it separately. The one-point function is important because it is just an expectation value, which of course we are often interested in: if you have symmetry breaking, whether you develop an expectation value, or if you want to extract the expectation value of the stress tensor, et cetera. So the one-point function is of special importance in many cases. Also, if you know the one-point function in the presence of the source, then essentially you know the full n-point functions, because you can just keep taking derivatives: if you know the one-point function as a functional of the source, one more derivative gives you the two-point function, et cetera. So the one-point function we can write as follows. You just take one derivative, delta over delta phi of x, corresponding to this operator. Let's do the scalar case, but something similar applies to the other cases. You can just trade this phi of x for the boundary value of capital Phi: you can write it as the derivative of S E with respect to Phi c at z and x, multiplied by the appropriate power of z, as z goes to 0-- because Phi c approaches phi times this power of z, you essentially just replace one by the other. And the reason we want to write it in this form is that it has a very nice geometric interpretation: delta S E over delta Phi c, evaluated at z and x, is equal to pi c of z and x, where pi is the canonical momentum conjugate to Phi, treating the z direction as time.
And this pi c just means that it is evaluated on the classical solution Phi c. So-- is this expression obvious to you? AUDIENCE: Not obviously. HONG LIU: Hm? AUDIENCE: It's just canonical conjugation, essentially, right? HONG LIU: Yeah-- actually, I forget the name of this equation; normally it is taught as part of so-called Hamilton-Jacobi theory. But it is a well-known fact in classical mechanics. Let me just tell you the statement. Consider a classical mechanical system, moving from t0 to t1, and suppose you have already found your classical solution-- a classical trajectory. Now evaluate the classical action: the integral from t0 to t1 of dt of the Lagrangian, evaluated on Xc, Xc dot, et cetera, where Xc is the classical solution. And suppose Xc at t0 is equal to X0, and Xc at t1 is equal to X1-- the initial and final values of your trajectory. Clearly, because of these boundary conditions, this S depends on X0 and X1. Then the important statement in classical mechanics is this: the variation of S with respect to the final position gives you the momentum at that point, P1; and the variation with respect to X0 gives you minus the initial momentum at that point. If you're not familiar with this fact, try to derive it yourself when you go home-- take two minutes; if you find the right way to do it, it's actually very simple. It's also in chapter 1 of Landau and Lifshitz, essentially in the first few pages-- you can take a look at that. Now, the situation we have here is very similar. We are evaluating the classical action all the way to some value of z, and z is essentially playing the role of time here, and phi c is the value of the field at that z, which is the analog of X1. So this is just the functional version of that statement: you take the derivative with respect to phi c, just as you take the derivative with respect to X1-- there you get P1, here you get the canonical momentum conjugate to Phi. Any questions on this? So now we have a very elegant expression: the one-point function, even in the presence of the source, is just the z goes to 0 limit of z to the power d minus delta times pi c of z, x. And of course this should be the renormalized momentum-- since we renormalized the action here, the momentum should also be renormalized. So this is a nice expression: it tells you that the one-point function is essentially the boundary limit of the canonical momentum conjugate to that field. Now, with a little bit of effort, we will show the following. Remember that Phi of x, z, when z goes to 0, has two asymptotic behaviors, as we said last time: A of x times one power of z plus B of x times another power. This A is related to the source, to phi, as we discussed last time. And you can show, using this expression, that the one-point function is actually exactly equal to 2 nu times B.
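As a brief aside before the notation for nu is introduced: here is a symbolic check of the classical-mechanics fact just quoted, using a harmonic oscillator as the example-- my choice; the lecture only states the general fact.

```python
import sympy as sp

# Check dS_cl/dx1 = p(t1) and dS_cl/dx0 = -p(t0) for a harmonic oscillator.
m, w, T, t = sp.symbols('m w T t', positive=True)
x0, x1 = sp.symbols('x0 x1', real=True)

# Classical trajectory with x(0) = x0 and x(T) = x1.
x = (x0 * sp.sin(w * (T - t)) + x1 * sp.sin(w * t)) / sp.sin(w * T)
L = sp.Rational(1, 2) * m * sp.diff(x, t)**2 - sp.Rational(1, 2) * m * w**2 * x**2

# On-shell action and the boundary momenta.
S = sp.integrate(L, (t, 0, T))
p0 = (m * sp.diff(x, t)).subs(t, 0)
p1 = (m * sp.diff(x, t)).subs(t, T)

print(sp.simplify(sp.diff(S, x1) - p1))   # -> 0
print(sp.simplify(sp.diff(S, x0) + p0))   # -> 0
```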
And the nu-- let me just introduce my notation, to remind you of what we used last time. Last time we derived that delta equals d divided by 2 plus nu, and nu is equal to the square root of d squared over 4 plus m squared R squared, for scalar fields. So you can show-- we will do that; we will reach this point at the end of this class-- that the one-point function is precisely given by 2 nu times B. So this is very nice: you have two asymptotic behaviors, and one of them is related to the source while the other is related to the expectation value. And I urge you to do a self-consistency check yourself, as we did last time: from the scaling you can determine the dimension of this operator, which is precisely delta, and from this relation you can also check that everything is compatible with O having dimension delta, with delta given by that formula. Again, you can check this just by seeing how the two sides scale. AUDIENCE: So usually in QFT there's no relation between the conjugate momentum and the one-point function? There's no kind of-- HONG LIU: No, no-- this is the conjugate momentum on the gravity side. AUDIENCE: So within QFT there isn't any-- HONG LIU: No, this is the gravity canonical momentum related to the field theory one-point function. It has nothing to do with the canonical momentum on the field theory side. It is the boundary value of the canonical momentum on the gravity side, with the radial direction treated as time, which gives you the one-point function in the field theory. Good. OK. So now let's look at this more explicitly. Let me give you an example: compute the two-point function of a scalar operator, and along the way we'll also derive this relation. Before we do that, do you have any questions? Yes. AUDIENCE: I guess usually in QFT, when you have Gaussian integrals, the one-point function is 0. In what cases does it happen here? HONG LIU: In many cases. In QFT, if you have any state with non-zero energy-- any state above the ground state-- you have a non-zero stress tensor, and then the stress tensor has a one-point function. For example, if you have symmetry breaking in a scalar field, you also have an expectation value; and if you have a charge density, then the current has an expectation value. Any other questions? So actually, before we do the example, let me also make a brief remark on why we consider the Euclidean correlation functions first. Can anyone tell me the reason we want to do the Euclidean correlation functions first? Hm? AUDIENCE: It's easier? HONG LIU: It's true. And why is it easier? AUDIENCE: Usually Euclidean takes less time. HONG LIU: True-- maybe to save time, and we actually don't have much time. There are several reasons. First, Euclidean correlators can be more easily defined using the Euclidean path integral-- there are no ordering issues, et cetera; you just have the Euclidean correlation functions. But for Lorentzian correlation functions there are many of them, depending on the orderings, et cetera, and a typical Lorentzian correlation function cannot be defined straightforwardly using the path integral.
Except for the Feynman correlation functions-- the time-ordered ones. If you want to look at correlation functions with other orderings, they cannot be defined straightforwardly using the path integral, and so it's much more complicated. Actually, for many applications-- in particular for linear response in many-body systems, say in condensed matter or in QCD-- we are mostly interested in the retarded Green function, and that is not easy to define using the path integral; you need to develop some other tricks. So that's why we are doing the Euclidean case first. So now let's look at the example: the two-point function of a scalar, which is essentially the simplest example. For a two-point function you only need the terms quadratic in phi, and that means you only need to include the terms quadratic in capital Phi: it's enough to consider the action to quadratic level. So if we are interested in the two-point function to leading order, we can just look at the quadratic action. And for a scalar operator O, the dual is just a massive scalar field, so we can write down the Euclidean action; you don't have to worry about the higher order corrections. And the metric is the Euclidean version of the AdS metric: dx squared is now just the standard Euclidean metric, delta mu nu dx mu dx nu. Essentially, where previously we had minus dt squared plus dx vector squared, we continue the dt squared to Euclidean signature, and it becomes purely Euclidean. All right, I hope this is clear. Our goal is just to find the renormalized on-shell action for phi c, the classical solution of this action which satisfies the right boundary condition. That's our goal: if we find it, then you take derivatives and you get the two-point function. First, you can write down the canonical momentum, with z as the time direction-- so this is the canonical momentum-- and the equation of motion. Again, because of translation symmetry in the boundary directions x, we can Fourier transform: write Phi of x, z in terms of Phi of k, z times e to the i k x, just as we did last time when we solved the free theory. Plugging this in, with the explicit metric, you get: z to the d plus 1, partial z of z to the 1 minus d partial z Phi, minus k squared z squared Phi, minus m squared R squared Phi, equal to 0. This is very similar to what we did last time when we talked about normalizable and non-normalizable modes-- it is the same equation; the only difference is that now we are in Euclidean signature. This equation can be solved exactly in terms of Bessel functions, but actually we will not need the explicit solution until the end, so for now we will not try to solve it explicitly. Now let's first look at the on-shell action. There is a simple trick: because the action is quadratic, you can do an integration by parts. So let's plug the classical solution of this equation into the action, and do an integration by parts in the kinetic term.
Then when you do the integration by parts, you get two terms: a boundary term coming from the integration by parts, and a bulk term. The bulk term can be written in the following way: let me first copy the m squared phi c squared term, and then the term obtained by integrating the kinetic term by parts-- phi c times the derivatives acting back on phi c-- plus a total derivative term, the boundary term. It's just a simple manipulation; I did an integration by parts here. Now you recognize that this combination is precisely the one which appeared in the equation of motion-- this is always the case when you have a quadratic action-- so on shell this is just 0, and we don't need to worry about it. So we only have the boundary term. It's a total derivative, and we don't need to worry about boundary terms in the directions along the boundary; only the boundary term in the z direction matters, and that translates into contributions at z equal to 0 and z equal to infinity. So this becomes minus 1/2 integral d d x of square root g times g zz times partial z phi c, times phi c, evaluated between z equal to 0 and infinity. Now you recognize that this factor is precisely our pi, up to a minus sign, so this can also be written as 1/2 integral d d x of pi c times phi c, evaluated between 0 and infinity. Sometimes we also write it in momentum space, which is convenient because we can solve the equation in momentum space: it can be written as the integral of d d k over 2 pi to the d of phi c of k, z times pi c of minus k, z, evaluated at 0 and infinity. Now, we will see later that at infinity, pi c times phi c at z equal to infinity is just 0-- we will see explicitly that for the right solution both pi c and phi c go to 0 there. But there is also a simple way to argue, without looking at the explicit solution, that this is generically 0, for the following reason. At infinity we want the full solution to be well defined, so we should require phi c and partial z phi c to be finite-- we don't want them to diverge, or you would have a singular solution. Now look at the pi: when you work out the factors, pi as z goes to infinity is proportional to z to the 1 minus d times partial z phi, because the square root of g-- the determinant-- gives 1 over z to the power d plus 1, and the inverse g zz gives z squared, so you get this power. So this generically goes to 0 as z goes to infinity when d is greater than 1. For any dimension greater than 1, if we impose this kind of regularity condition, this term is always 0. And in fact, for this case, if you solve the equations explicitly, you find that even for d equal to 1 it goes to 0. So this is generically 0, and we don't need to worry about it. So now let's just focus on the z equal to 0 end. We have the condition for phi which we derived last time: phi c should behave as some A of x times z to the power d minus delta, plus some B of x times z to the power delta-- these are the leading terms, and there are subleading terms as well.
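For reference, here are the formulas just described collected in one place. The overall signs depend on conventions-- this is a reconstruction assuming the Euclidean weight e to the minus S E-- but the structure is what matters for the argument:

```latex
% Quadratic Euclidean action for the dual massive scalar in Euclidean AdS_{d+1}:
S_E = \tfrac{1}{2}\int dz\, d^d x\, \sqrt{g}\,
      \big( g^{MN}\partial_M\Phi\,\partial_N\Phi + m^2 \Phi^2 \big),
\qquad
ds^2 = \frac{R^2}{z^2}\big( dz^2 + \delta_{\mu\nu}\, dx^\mu dx^\nu \big).

% Canonical momentum with z as "time", and the radial equation in momentum space:
\pi = -\sqrt{g}\, g^{zz}\, \partial_z \Phi,
\qquad
z^{d+1}\,\partial_z\!\big( z^{1-d}\,\partial_z\Phi \big)
  - k^2 z^2\,\Phi - m^2 R^2\,\Phi = 0.

% On shell the action reduces to a boundary term; the z -> infinity piece
% vanishes for regular solutions, leaving (up to the convention-dependent sign)
S_E[\Phi_c] = \tfrac{1}{2}\int\!\frac{d^d k}{(2\pi)^d}\;
     \Phi_c(k,z)\,\pi_c(-k,z)\,\Big|_{z=\epsilon},
\qquad
\Phi_c \xrightarrow{\,z\to 0\,} A(k)\, z^{\,d-\Delta} + B(k)\, z^{\,\Delta},
\quad \Delta = \tfrac{d}{2} + \nu.
```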
And so from that expression, you can also obtain the pi c from that expression-- pi c. And then you can easily work out. And that's proportional to, say, minus A d minus delta, z to the minus delta minus delta B z delta minus d, OK? So you just take the derivative, and you find that. So just intuitively, you have the same thing, this z minus d factor multiplying the derivative of phi. So intuitively, now we go to z equal to 0. Then this factor actually is bad. Because this factor goes to infinity, OK? This factor goes to infinity. So you see that the pi c indeed is generically divergent because of this term, OK? Generically divergent because of this term. So now let's look at the product between them because this involves the product between them. So the leading term product between them, phi c pi c, is proportional to z to the power of d minus 2 delta, OK? So this is all the expression for the delta. And nu is always greater than or equal to 0. So we see that this term was always divergent. It's always divergent for nu greater than 0. And for nu equal to 0, it actually requires a little bit special treatment. Because for nu equal to 0, then delta equal to d divided by 2. Then these two terms have the same power, then become degenerated in this logarithm term. And again, it will be divergence. But I will not go into that, OK? So the story is that this is always divergent. This is always divergent. So this will confirm what we said earlier. So this tells you that SE phi c is divergent. So we need to renormalize it. We need to renormalize it. So we need to add the counterterm to it, OK? We need to add the counterterm to it. So the choice of the counterterm, as we said before, should be local on the covariant. So here, it's very simple. We only work out the quadratic order. We only need to write down things quadratic in phi. So the counterterm should also be quadratic in phi. So we can write counterterm as the following. So it's easier to write in the momentum space, say, evaluated at this epsilon. So we need to introduce our cut-off equal to epsilon and then introduce a counterterm. So the counterterm introduced should be local. So the local term-- so at a quadratic level, all possible local terms can be written as follows. We can write it in momentum space as the following. Say, as something which is a function of k squared-- phi c, k, say, z phi c minus k, z, OK? So the criterion is said fk. So the locality means that fk must be analytic in k square. So let me just give you some intuition. So this is a compact expression using momentum space. But in coordinate space, it's easy to understand. So this is analytic in k squared. So you expand it in k square. So k square is this Lorentz covariant k square-- using this Lorenz covariant k squared. Yeah, the same k square is here, which is a Lorentz covariant k square. So you can expand this in k square. So the 0-th order term just a constant. And the constant, when you go back to the coordinate space, this is just phi square, OK? This is simple. It's a local term. You can write in terms of two phis. And the next order term will be k square. k square, when you go back to coordinate space, this will be just partial phi square and et cetera. And indeed, these are the only local terms you can write down. And so essentially, you can summarize in the momentum space just as something which is analytic in k square, OK? So now, we want to choose the k square to cancel the divergence, OK? 
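[Editor's note: the divergence count and the form of the counterterm just described, as a sketch in the same conventions.]

\[
\pi_c \;\propto\; -(d-\Delta)\,A\,z^{-\Delta} \;-\; \Delta\,B\,z^{\,\Delta-d},
\qquad
\phi_c\,\pi_c \;\sim\; z^{\,d-2\Delta} = z^{-2\nu} \;\longrightarrow\; \infty
\quad\text{as } z\to 0 \ \ (\nu>0),
\]
so the on-shell action needs a counterterm at the cutoff z=\epsilon. The most general local quadratic counterterm can be parameterized in momentum space as
\[
S_{ct} = \frac{1}{2}\int\frac{d^dk}{(2\pi)^d}\; f(k^2)\;\phi_c(k,\epsilon)\,\phi_c(-k,\epsilon),
\]
with f analytic in k^2: a constant corresponds to \phi^2, a k^2 term to (\partial\phi)^2, and so on.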
So want to chose the case-- I chose this to cancel the divergence, OK? So now, in order to do a-- so now I need to introduce a little bit of notation. Because if we want to worry about the locality-- and it's not enough just to look at this leading term, OK? Look at leading term-- so remember, those things-- so those behavior obtained solving this equation by setting this to be 0. So when z goes to 0 limit to leading order, the kz just goes to 0 compared to this term. So that expression, which we did last time, are obtained by solving this equation without this kz squared term. So we include this kz squared term so that we include some higher order corrections, OK? We include some higher order corrections. But now since we worry about whether this fk squared is analytic-- so we have to worry about those k dependence. OK, so now let me introduce some notation, OK? So let's consider the basis of solution-- the basis of solutions to this equation, phi 1, which I call phi 1, and phi 2. So phi 1 and phi 2 precisely are defined by these two asymptotic behavior. So phi 1 has the behavior that goes to d minus delta. And the phi 2 then have the behavior when you go to z equal to 0, it goes to z to the power delta as z goes to 0, OK? So this is the leading term for phi. But for small z, this can be written as a power series. And then the leading term will be just like that. Then you will have, say, some a1 k squared z square term come from treating the next order list term and also higher order terms-- say, k4, z4, et cetera. So similarly, phi 2 z, k would be start with z to the power of delta. Then will be also have higher order terms. OK? So the key is that you can work out those coefficients, OK? Because this is local expansion of infinity, and you can work out those coefficients explicitly. And all the dependents on k are local in the sense that they all are dependant only on k square. It's because the k square is what k square appears in the original equations, OK? And similarly, you can work out the pi. So the corresponding pi 1 and the pi 2 corresponding canonical momentum for them equal pi 1, pi 2, OK? Then the pi 1, pi 2, each of them will have these two behaviors, OK? Each of them will have these two behaviors and along with these higher order terms, OK? Good? So now-- so now let's look at this on-shell action. So this on-shell action, before we regularize it, evaluate it at z equal to epsilon. So this would be-- oh right, there's one more step. Sorry. So using this notation, we can just write our phi c equal to A phi 1 plus Bk same momentum case-- phi 2. So now these are exact expression because the phi 1, phi 2 are two bases and Ak and AB are just linear coefficients of that basis, OK? And the pi c would be similarly Ak, where pi 1 plus Bk pi 2, OK? So under the SE phi c would be, if you plug this in, just have A square pi 1 and phi 1 plus B square pi 2 phi 2 plus AB pi 1 phi 2 plus phi 1 pi 2, OK? And this is the one that's divergent. So this is the one which is divergent coming from the expression we said before. OK, this is divergent. And so this is phi 1, this is pi 1, OK? So this is divergent. And the other term you can all check are finite, OK? So this is divergent. So we need to cancel this term. But you cannot just abstract this term. You cannot just abstract this term because this term is not covariant. Because A-- because this term is not covariant. This term does not have the form of the phi square et cetera, does not have this covariant form, OK? 
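[Editor's note: the basis of solutions and the structure of the divergence, in formulas. The coefficients a_1, b_1 below are illustrative names for the series coefficients mentioned in the lecture, A^2, B^2, AB stand for A(k)A(-k) and so on, and the overall sign from the boundary evaluation is suppressed.]

\[
\phi_1(z,k) = z^{\,d-\Delta}\big(1 + a_1 k^2 z^2 + \cdots\big),
\qquad
\phi_2(z,k) = z^{\,\Delta}\big(1 + b_1 k^2 z^2 + \cdots\big),
\]
with all coefficients fixed order by order by the radial equation and depending on k only through k^2. Writing
\[
\phi_c = A(k)\,\phi_1 + B(k)\,\phi_2,
\qquad
\pi_c = A(k)\,\pi_1 + B(k)\,\pi_2,
\]
the regularized on-shell action at z=\epsilon is proportional to
\[
\Big[A^2\,\pi_1\phi_1 + B^2\,\pi_2\phi_2 + AB\,\big(\pi_1\phi_2 + \phi_1\pi_2\big)\Big]_{z=\epsilon},
\]
and only the A^2\,\pi_1\phi_1 piece diverges as \epsilon\to 0.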
So this term by itself is not covariant. So you cannot just subtract this term. But now, you can easily-- so now, we have phi square here. So now you can easily pick a phi just to cancel this term, OK? Just by comparing these two, we can just pick an f to cancel that term. So it turns out that f can be written as pi 1 phi 1 times phi square, OK? So essentially, this is our f. It's the ratio between pi 1 and phi 1. So now if you expand it, since I'm out of time-- so let me not do it here. If you expand it, you can easily see this term cancels that term, OK? But there are other terms from the covariant. So there are also finite terms brought by this term. But there's also finite terms brought by this term. And this is local because this is a ratio of pi 1 and phi 1. And both pi 1 and phi 1 are obtained by solving this equation locally at the infinity just by power series and dependence on k squared. It's always analytic, OK? So this is guaranteed. So this is analytic function of k squared. OK, so you take a little bit of exercise. You work out what is a here, a1 here, et cetera. Then you can write this term more explicitly in terms of standard term like phi square, partial phi square, et cetera, OK? So now you add these two together. So now you add these two together. You get a renormalized counterterm, OK? Then you find the renormalized expression. Then you find the renormalized expression is SR phi c is given by 1/2, again in momentum space, 2 nu A, minus k, Bk. So you find everything depends on phi 1, pi. They all disappear in the end. You just get something like this. You get a finite term like this. So now this is ambiguous. It's all finite and ambiguous. So this is your on-shell action. So normally, what we do is that we solve this equation. Then we can find the explicit A and B. So we solve this equation with the following boundary conditions. It said we first impose the boundary condition. You have Ak equal to phi k because A is the source. A is identified with the source. But we also need to impose regularity at z equal to infinity. So we wanted the solution to be well-defined at z equal to infinity. So it turns out these are the two conditions which uniquely determine the solution. So that means these two conditions together which will determine B in terms of A. OK, so this will determine B in terms of A. So in particular, this is a linear solution. So B will have the form which-- some number chi times A. So chi is the thing which you determine by solving the equation explicitly, OK? This is a form-- chi you obtain from solving the equation. Just short a couple minutes. It's just-- so now you plug this into that. You plug this into that. Plug this into that. Unfortunately, I erased my equation. OK, so you plug this into that. So chi you assume is some known function which you obtain after you are solving the equation explicitly. Then what you get-- so without that page. So what you get is you will get S renormalized action phi c equal to 1/2 the integration. So 2 nu Ak-- 2 nu lambda-- or we here call chi-- Ak and A, minus k, OK? So you remember this is just all phi k. This is just all phi k. So now you can obtain the one-point function. Just take the derivative over phi-- say, minus k. You get the Ok. And this is precisely just 2 nu chi times Ak. And this is precisely 2 nu B, OK? And you can write it in the coordinate space also, Ox equal to 2 nu B. So this we derive in the linear order by looking at the quadratic equation motions. 
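[Editor's note: the counterterm choice and the renormalized result being described, collected as a sketch; relative signs depend on the conventions above.]

\[
f(k^2) \;\propto\; \frac{\pi_1(\epsilon,k)}{\phi_1(\epsilon,k)},
\]
which is analytic in k^2 because \phi_1 and \pi_1 are built from a local power series near the boundary. Adding S_{ct} and removing the cutoff,
\[
S_R[\phi_c] = \lim_{\epsilon\to 0}\big(S_E + S_{ct}\big)
= \frac{1}{2}\int\frac{d^dk}{(2\pi)^d}\; 2\nu\, A(-k)\,B(k).
\]
Identifying A(k)=\phi(k) with the source and imposing regularity at z\to\infty fixes B(k)=\chi(k)\,A(k), so
\[
S_R = \frac{1}{2}\int\frac{d^dk}{(2\pi)^d}\; 2\nu\,\chi(k)\,A(k)\,A(-k),
\qquad
\langle O(k)\rangle = \frac{\delta S_R}{\delta\phi(-k)} = 2\nu\,\chi(k)\,A(k) = 2\nu\,B(k).
\]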
But actually, this turns out you can prove it actually to all order. You work with nonlinear equations, et cetera, you can show this is always true that the one-point function is always given by this B, OK? One-point function is always given by this B. So let me say one thing. So now the two-point function-- so the two-point function, you just take two derivatives. You take two derivatives. So that just gives you 2 nu chi, OK? Just the 2 nu chi. So this can also be written as 2 nu B divided by A because the chi is equal to-- and this makes perfect sense physically because this is our Euclidean correlation function, OK? And the 2 nu divided by A can also be written using what we write there. So this is the expectation value divided by phi k divided by the source, so A because it's phi and 2 nu B is expectation value. And then we see this expression is precisely a so-called linear response. The expectation value in the presence of the source is the correlation function times the source-- the correlation times the source. So again, this equation is completely general. So for any two-point function case, we'll always find this B divided by A times 2 nu. And you can also alternatively written it as the ratio O divided by phi, OK? So just the final minute. Now you can solve the equation explicitly. So now you can solve the equation explicitly. You can solve the equation explicitly. And you can solve your [? eom ?] using the Bessel function. And then you find that actually, the function which is no [INAUDIBLE] the infinity at z equal to infinity is given by this form. k-- so this Bessel k function with index nu. Nu is the same as this nu. And then from here, you just expand in small z. Then you can find the B divided by A. Then you find B divided by A as gamma, minus nu, gamma nu. So from here, you just expand in small z. And then you extract different power. You obtain the ratio of B divided by A, OK? So as you get gamma nu-- minus nu, gamma nu k divided by 2 to the power 2 nu. So you find the two-point function. So since the two-point function is just the 2 nu B, A-- so you find the Euclidean two-point function in momentum space. It's just 2 nu gamma, minus nu, gamma nu, k divided by 2 to the 2 nu. So in some sense, this is precisely what you expect because we are working with the conformal field theory. And the two-point function can only depend on k as a power because there's no other scale. There's no other scale. And the power is determined by the dimension of the operator. And for example, if you go to the coordinate space-- so you Fourier transform, go to coordinate space. So that just gives you the X to the power 2 delta. Sorry-- I will stop here.
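[Editor's note: the final formulas of this computation, plus a small numerical sanity check. The check uses Python with mpmath, which is my choice and not part of the lecture; it only verifies the small-argument Bessel expansion that produces the ratio B/A, and it assumes a non-integer 0 < ν < 1 so that the two leading powers are exactly z^{d-Δ} and z^{Δ}.]

\[
\phi(z,k)\;\propto\; z^{d/2}\,K_\nu(kz)
\;\;\Longrightarrow\;\;
\frac{B}{A} = \frac{\Gamma(-\nu)}{\Gamma(\nu)}\Big(\frac{k}{2}\Big)^{2\nu},
\qquad
G_E(k) = 2\nu\,\frac{B}{A} = 2\nu\,\frac{\Gamma(-\nu)}{\Gamma(\nu)}\Big(\frac{k}{2}\Big)^{2\nu},
\qquad
\langle O(x)\,O(0)\rangle \;\propto\; \frac{1}{|x|^{2\Delta}} .
\]

import mpmath as mp

# Small-x expansion used above, for non-integer 0 < nu < 1:
#   K_nu(x) ~ (1/2) [ Gamma(nu) (x/2)^(-nu) + Gamma(-nu) (x/2)^nu ] + ...
# Matching powers of z then gives B/A = Gamma(-nu)/Gamma(nu) * (k/2)^(2 nu).
nu = mp.mpf("0.6")   # arbitrary non-integer value in (0, 1)
for x in [mp.mpf("1e-2"), mp.mpf("1e-3"), mp.mpf("1e-4")]:
    exact = mp.besselk(nu, x)
    approx = (mp.gamma(nu) * (x / 2) ** (-nu) + mp.gamma(-nu) * (x / 2) ** nu) / 2
    print(x, (exact - approx) / exact)   # relative error shrinks roughly like x^2

This is just a consistency check on the expansion quoted above; it is not meant as a derivation of the correlator itself.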
MIT_8821_String_Theory_and_Holographic_Duality_Fall_2014
10_Basics_of_String_Theory_and_Lightcone_Gauge.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HONG LIU: So let me first remind you what we did at the end of last lecture. Say, suppose you can see the string moving in some spacetime M. So it's the following metric, given by g mu nu and the coordinate, which I've denoted by X nu. So nu typically will go from, say, mu nu, go from 0, 1 to D minus 1. And capital D will be the [? number of ?] spacetime dimension. And then the worldsheets will be characterized by this X mu as a function of worldsheet coordinate sigma tau. Then this describe the [? mapping ?] of a two-dimensional surface embedded in the full spacetime. And then we also use the notation of X mu equal to sigma a, and sigma a will be sigma 0, sigma 1, and equal to tau and sigma, OK? And then the simplest action, which have a clear geometric meaning, is essentially just the action that is given by the area of the surface. Given by the area of the surface. And this delta h is the induced metric. So if you have embedding in this spacetime, then this embedding will induce a longitudinal metric on the worldsheets. So this h is the induced metric on the worldsheet. And then if you take the determinant of it, then it essentially gives you the area. [? Adamant ?] on the worldsheets. But we also showed that classically, this is action is equivalent to this action. You can rewrite it by introducing something like a Lagrangian multiplier like gamma ab. And then by eliminate gamma ab, then you cover this. OK? And the equation of motion for the gamma ab-- yeah, if you solve the equation for gamma ab, the thing you will get something like this, lambda hab. And lambda is arbitrary constant. Arbitrary function. Yeah, say if you solve the equation of motion for gamma ab, and then that's what you get. So for this final form, so this is the action we will use to quantize it, is because this action have a nice polynomial form. Have a nice polynomial form. Not like this action, which is a square root. It's always awkward. And so this have a nice polynomial form. And then when you quantize the action, say if you do path integral quantization, then you need to integrate both dynamic variables. So exponential i Sp gamma on X. OK. So both are dynamic variables. When you quantize, you have to do path integral over all of them. And so let me just say a little bit more about the geometric, which we mentioned at the end in last lecture, the geometric meaning of this action. So if you just look at this action itself, this just like a scalar field in a two-dimensional curved spacetime. And this two-dimensional curved spacetime have a longitudinal metric, which is given by gamma. And the scalar field is just this X, OK? So this gamma can be considered as some kind of intrinsic. And we can see that as an intrinsic metric on the worldsheet. And then this X still describes this embedding of this worldsheet in spacetime. But X, from the point of view of this two-dimensional worldsheet, it just behave like a scalar field. It just behave like a scalar field. So when we quantize this action, so when we do the path integral, if you just do the path integral over DX, then this is just like a standard two-dimensional quantum field theory, OK? Just like a standard two-dimensional quantum field theory. 
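[Editor's note: for reference, the two actions being recalled here, written out. The 1/2πα′ normalization is my convention and may differ from the board; h_{ab} is the induced metric and γ_{ab} the independent worldsheet metric.]

\[
S_{NG}[X] = -\frac{1}{2\pi\alpha'}\int d^2\sigma\,\sqrt{-\det h_{ab}},
\qquad
h_{ab} = \partial_a X^\mu\,\partial_b X^\nu\, g_{\mu\nu}(X),
\]
\[
S_P[X,\gamma] = -\frac{1}{4\pi\alpha'}\int d^2\sigma\,\sqrt{-\gamma}\,\gamma^{ab}\,\partial_a X^\mu\,\partial_b X^\nu\, g_{\mu\nu}(X),
\]
and the \gamma_{ab} equation of motion gives \gamma_{ab} = \lambda(\sigma,\tau)\,h_{ab} with \lambda an arbitrary function, which is why eliminating \gamma_{ab} takes S_P back to S_{NG}.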
But what's unusual here, when you quantize a string, because the gamma is also a dynamic variable, you also have to integrate over gamma. And this make it more nontrivial, and then this is no longer just like quantizing the scalar fields. And because you are integrating over all possible intrinsic metrics on the worldsheet. So this is like quantizing 2D gravity. When you integrate over all possible metric, then this is like quantizing gravity. But this is only the gravity on the worldsheet, coupled to D scalar fields. OK? Yes. AUDIENCE: So in what sense is it 2D gravity? Because the metric is like a D by D object. HONG LIU: No, no. Gamma is a two-dimensional metric. It's intrinsic metric on the worldsheet. Yeah, so it's two by two. [? ab's ?] only goes two by two. ab is only going from 0 to 1. Yeah, so ab only runs over the worldsheet at 0 to 1. Yeah. So once you write that form, then you introduce an intrinsic metric on the worldsheet. They introduce an intrinsic curved spacetime on the worldsheet. And when you integrate over all possible metrics, then that's corresponding to take into account the fluctuations in the metric, from this two-dimensional point of view. So that's why we say this is like quantizing 2D gravity. By gravity we just mean the spacetime. OK, is this clear? So now we see the question of quantizing the string theory, essentially [INAUDIBLE] to quantize this two-dimensional gravity on worldsheets coupled to these scalar fields. And these scalar fields describe the embedding of your worldsheet. OK? So we will consider a simple example. We will consider the example which M is just Minkowski spacetime. Three-dimensional Minkowski spacetime, OK? So in this case, g mu nu [? come ?] [? equal ?] to eta [INAUDIBLE] nu. And then this Polyakov action can also be simplified. So this SP now just become 1 over [? 4 ?] alpha prime, d square sigma, gamma ab, partial a X mu, partial b X mu. So now when I write this X mu, X mu should be understood that they are contracted by the standard Minkowski metric. So now, this is really, if you just look at the X, this is like free scalar field series, because of the dependence on x, purely quadratic. So this is like a free scalar field in curved spacetime, in curved worldsheet, 2D worldsheet. So this still slightly unusual thing about this scalar field theory, from our standard one. Actually, I might be missing a minus sign somewhere. I might be missing overall minus sign. I think I'm missing overall minus sign. Just let me put it here. I think I'm missing overall minus signs. For our current purpose, it does not matter. But I think for the later purpose we need it. Because if you can see the standard field theory using this minus [? 1 ?] metric, then I need the minus sign. OK? So this is just like a free scalar field theory in this 2D curved spacetime. But the slightly [? thing ?] unusual about this one, so there are two unusual things. One is that we also need to integrate gamma. But there's a lot of unusual things, even from the perspective of scalar field theory. And so if you look at the 0-th component of X, because eta 0,0-- we are using the eta equal to minus 1, 1. Because eta 0,0 is minus 1. So the 0-th component of X, we have a opposite kinetic term to the other component, OK? So you might have learned in your quantum field theory class that actually you cannot change the sign of your kinetic term. Otherwise you get instability, because then your Hamiltonian will be unbounded from below, et cetera. 
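[Editor's note: the flat-target form of the action being discussed, with the sign convention made explicit; the 2π in the prefactor is my convention.]

\[
S_P = -\frac{1}{4\pi\alpha'}\int d^2\sigma\,\sqrt{-\gamma}\,\gamma^{ab}\,\partial_a X^\mu\,\partial_b X^\nu\,\eta_{\mu\nu},
\qquad
\eta_{\mu\nu} = \mathrm{diag}(-1,+1,\ldots,+1),
\]
so X^0 enters with the opposite-sign kinetic term relative to the spatial X^i — the "wrong sign" referred to above, which the gauge symmetries discussed next will take care of.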
So this action, so our story here have two unusual features. One, you have to integrate over gamma. And the second is that [? the 1 over ?] the scalar field have the wrong kinetic term. OK. The sign with kinetic term is wrong. And we will see that these two features actually solve for each other. And this is actually a consistent theory. So this is actually consistency. Any questions at this point? OK, so before you do any theory, what you should do? So before you work on any theory, what should you do first? AUDIENCE: [INAUDIBLE]. HONG LIU: No, no, that's too far away. Just the first thing you should do. After you write down the action [? Lagrange, ?] what should do? AUDIENCE: Solve the equation of motion. HONG LIU: That's still too far away. AUDIENCE: Check units. HONG LIU: That's something what you should do. Yeah, but that is done in kindergarten, so it's not included here. AUDIENCE: Constants of motion. HONG LIU: That's close. What is responsible for constant of motion? [INTERPOSING VOICES] AUDIENCE: Symmetry. HONG LIU: Yeah, that's right, symmetry. You first have to analyze the symmetries. OK, so that's what we will do. OK, let me call this equation three, just continue from my notation of last week. Yeah. So first we should analyze the symmetry of this action. So first, so this is a mapping to a Minkowski spacetime. The Minkowski [? spacetime ?] itself have a Poincare symmetry. And that Poincare symmetry will be reflected here. First we have the Poincare symmetry of X mu. In other words-- and this action is invariant-- if you take X mu to go to X mu plus a constant. OK, so a mu is a constant. So it's obvious this is invariant because everything on [? X ?] we said derivative. So if we shift by X mu with a constant, then I'll call that a symmetry. And also, this is invariant under Lorentz symmetry. OK. So lambda is the constant Lorentz transformation. OK? So this is also self-evident, because of this contraction, if you make a constant Lorentzian transformation, you don't see the derivatives. And then this is contracted as Lorentz scalar, then this is invariant. OK? So this is the first set of symmetry. The second set of symmetry essentially by construction, because we are writing this as a scalar field in the curved spacetime. Then this is invariant in the two-dimensional coordinate transformation. And in other words, this is invariant on the reparameterization of the worldsheet. OK. So this is essentially a less [? desirable ?] condition, which should satisfy by any string action, because as we said before, sigma tau, you just parameterize your worldsheet. And you actually should not depend on how you parameterize it, and that parameterization should be arbitrary. You should be able to change your parameterization. Because if you think about the surface in spacetime, no matter, you can say parameterization cause [INAUDIBLE] agreed on the surface. Whether you do it this way, or you do it in some other way, if you're not changing the surface, the property of the surface itself. So that means the action itself should not be invariant on the reparameterization of your worldsheet. And the parameterization of the worldsheet translates into just arbitrary coordinate transformations. So sigma and tau should be able to, say, if you make arbitrary coordinate transformation, and the action should be invariant. OK. And this is by construction. And this is action by construction in [? Lorentz. ?] And this number goes to action. 
And Polyakov action we obtained by rewriting this number called action. It's also about construction. Because its [? manner fits ?] because this is in [INAUDIBLE] in the curved space actions for scalar fields. So this automatically invariant under the coordinate transformation. And under this coordinate transformation, this X mu just transformed [? my ?] scalar field. So the scalar field transforms. Say as the following is going to X mu prime. And X mu prime, we've evaluated here that the new coordinate tau prime and sigma prime should be the same as your original X evaluated at the original location. So this is essentially the definition of a scalar field. You see that when you make a coordinate transformation, if you follow the point, the value of X does not change. Because when you go to the new location, the X prime evaluated at new location should be the same as your original value. OK. Yes. AUDIENCE: So basically we have a few more [INAUDIBLE] variants in real space, and the worldsheet. That's like-- HONG LIU: No, no, this is not if your [? morphism ?] invariant. No, this is not [? different ?] [? morphism ?] invariant. AUDIENCE: What about in a special case, though? HONG LIU: No, this is not [? different ?] [? morphism. ?] No, this is not different [? morphism. ?] This is global symmetry. AUDIENCE: Oh, yeah, OK. AUDIENCE: And also, did we ever consider curved real space? HONG LIU: Yeah, you can consider-- well, curved space. But right now we are at the high school level of string theory class, and if you go to curved space, there should be college in that level of string theory. It's beyond what we are doing right now. AUDIENCE: That's [INAUDIBLE] curve of space, we don't have Poincare. Will we have Poincare? HONG LIU: Yeah, then you don't have Poincare symmetry. So this is specific to this action. But this is [INAUDIBLE] general. So, of course, under this coordinate transformation, this gamma ab should transform as a tensor, as in standard [? in GR. ?] This evaluated gamma prime you evaluate at a new point, should be [? related ?] as a tensor. OK. So you can check yourself. This essentially by constructing, but you can check yourself that the action is indeed invariant under those transformations. OK? AUDIENCE: Is there also gauge symmetry for this, [INAUDIBLE] add a [? phase ?] to X and derivative with a it cancels. HONG LIU: Sorry? AUDIENCE: Is there [? any gauge's ?] theory in-- HONG LIU: Gauge in what? Gauge in which one? AUDIENCE: X, like in equation three, [INAUDIBLE] either global [? phase ?] to X and derivative with a and-- HONG LIU: No, no, no. No, X is real here. This is our full action. AUDIENCE: I see. OK. HONG LIU: Yeah, these are full action. X are real. So this is invariant on that. We will not show this is [? invariant. ?] This just essentially by construction. You should check it yourself, if you are not convinced. So the third symmetry is called a Weyl scaling of gamma ab. So the [INAUDIBLE] are slightly surprising. So under this symmetry, X mu does not change. But the metric we multiply it by overall prefactor. And this omega is arbitrary for any function. OK. So this is a very important symmetry, so let's check that. So when you multiply the gamma by our overall prefactor, when you go to gamma ab, then that's giving you the same prefactor, but with a minus sign here. And now, in the determinant for gamma, then that give you the square of these. And then take the square root, then give you a positive power. Then they cancel. Is it clear? 
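[Editor's note: the three symmetries (a)–(c) being listed, collected in formulas; the notation e^{2ω} for the Weyl factor follows the lecture.]

(a) Poincaré:
\[
X^\mu \to \Lambda^\mu{}_{\nu}\,X^\nu + a^\mu, \qquad \gamma_{ab}\ \text{unchanged},
\]
with a^\mu and \Lambda constant.

(b) Worldsheet diffeomorphisms \sigma^a \to \sigma'^a(\sigma):
\[
X'^\mu(\sigma') = X^\mu(\sigma),
\qquad
\gamma'_{ab}(\sigma') = \frac{\partial\sigma^c}{\partial\sigma'^a}\,\frac{\partial\sigma^d}{\partial\sigma'^b}\,\gamma_{cd}(\sigma).
\]

(c) Weyl scaling:
\[
\gamma_{ab}(\sigma,\tau) \to e^{2\omega(\sigma,\tau)}\,\gamma_{ab}(\sigma,\tau),
\qquad X^\mu\ \text{unchanged},
\]
a symmetry because in two dimensions \sqrt{-\gamma}\to e^{2\omega}\sqrt{-\gamma} while \gamma^{ab}\to e^{-2\omega}\gamma^{ab}, so the combination \sqrt{-\gamma}\,\gamma^{ab} is invariant.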
So the [INAUDIBLE] power here cancels with the positive power here. OK? So this thing is invariant. On [INAUDIBLE] symmetry. So the way to see that-- another way to understand that somehow this action-- so the same thing with this action. This is not specific to we are working with Minkowski spacetime. The same feature here only depend on these two, OK? So the same thing happens here. So you can also see a feature why-- and you can also get a feeling why this is such symmetry, just by looking at the relation between the Nambu-Goto action with this action. So in this Nambu-Goto action, there's no gamma. OK, so in some sense you can see. You can say this is automatically invariant under this kind of symmetry, because there's no gamma. And now we see that the equivalence-- we write down equation of motion to go from here to Nambu-Goto action. The equivalence between them, they can be related by arbitrary function lambda. And really by aperture function lambda, yeah, this is another way to see this symmetry. OK? It's another way to see this symmetry. So in some sense, you can say that this symmetry is also required for the equivalence to the Nambu-Goto action, OK? So we can actually use-- we can also turn things around. We can also use as a symmetry principle. OK. I see major points that we can ask. Suppose we have action. Suppose we require an action to satisfy the a, b, c, to be invariant under a, b, c. What is the most general action? OK? And then, you can essentially uniquely determine the three. OK. We use actually symmetry principles to uniquely-- actually, it's almost uniquely determined. OK, so I will explain this "almost." So now I use this clear. Is the logic clear? So we do two ways. We first start with Nambu-Goto and then we go to this action. But then we observe this action having the symmetries. And now we ask ourselves, what is the most general action besides symmetries? OK. And that then leads us back to this term, to this action. Except there's one more term you can add. OK. There's one other term you can add. So that's why this is almost. So I will just immediately write it down. This is called an Euler action. So this action actually have a long history. So this is called Euler action. So this can be written as following. And this lambda is just a constant, some arbitrary constant. And R is the Ricci scalar for gamma ab. OK, gamma ab is symmetric. It's a two-dimensional metric. We will construct the corresponding Ricci scalar. So you can clearly check this action-- so let me call this equation four. So you can check easily that a four is invariant under a plus b under a. Because this does not depend on X. OK, of course, this is invariant under a. And this is invariant under b, because this is a covariant action, so this invariant on b. To see this is invariant under c requires a little bit of effort. Okay, requires a little bit of effort. But if I write down the formula, then it will become clear. So under c, if you make such a transformation, say under gamma prime equal to [? exponent ?] 2 omega gamma. [? 2w ?] gamma. And then you find that minus gamma prime and R prime-- R prime means the Ricci scalar for the gamma prime-- is equal to square root minus gamma, R minus 2. The 2-- yeah, this is a formula which you can check yourself. Maybe just directly write down. And then you see you get one more term, but this term is a total derivative. OK, so this term is a total derivative. 
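[Editor's note: the Euler term and its Weyl transformation, written out; the 1/4π normalization is my choice, and R here is the worldsheet Ricci scalar built from γ_{ab}.]

\[
S_\chi = \frac{\lambda}{4\pi}\int d^2\sigma\,\sqrt{-\gamma}\,R,
\qquad
\sqrt{-\gamma'}\,R' = \sqrt{-\gamma}\,\big(R - 2\nabla^2\omega\big)
\quad\text{for } \gamma'_{ab}=e^{2\omega}\gamma_{ab},
\]
so the extra piece is a total derivative and S_\chi is Weyl invariant up to boundary terms. As discussed next, on a compact Euclidean worldsheet the integral is topological,
\[
\frac{1}{4\pi}\int d^2\sigma\,\sqrt{\gamma}\,R = \chi = 2-2h,
\]
so in the Euclidean path integral the term contributes a relative weight e^{-\lambda\chi} for each genus h.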
And if we impose the right boundary conditions, or if you can see the compact worldsheets, then this term will automatically vanishes, and then this is invariant. OK? So we have shown this is invariant under everything, a to c. So essentially, that's it. So these two terms are only-- three and four are the only terms are invariant under a to c, OK? There's no other terms. So total action. Then. And then this is our total action. So actually some of you who knows two-dimensional gravity. AUDIENCE: Is there a minus sign? HONG LIU: Yeah. Maybe let me put the minus sign here. Good. So some of you who know the two-dimensional gravity will immediately point out, you actually don't need to add this term. Because this is actually a total derivative. So this is a property of-- I only make a claim here. So this thing is a total derivative. So you can check yourself. Go back, go to your favorite GR book, go to your favorite GR book, and we'll call it the scalar, Ricci scalar in 2D. And then you find actually the square root of gamma times i is actually a total derivative. So it will not affect the equation motion, et cetera. But it is important. Otherwise I would not have talked about it. Because when you go to the Euclidean signature, that's the standard way we evaluate the path integral. So when you go to Euclidean signature, this is only-- saying that this is a total derivative is only a local description. It's actually a local description. In fact, this is actually a topological term. So locally, you can write it as a total derivative. But if you put on the topological [? nontrivial ?] manifold, then you cannot do it globally. And so, in fact, in the Euclidean signature, it's a-- actually I don't know whether this is due to Euler. This Euler action is actually equal to lambda times chi. And the chi is equal to [? pi-- ?] yeah, which is defined to be-- so now it's R. It's actually 2 minus 2h, OK? And the h is the genus of the surface. AUDIENCE: [INAUDIBLE]. HONG LIU: No, this is only in 2D. AUDIENCE: This is only true in 2D. HONG LIU: Yeah, in 2D. So in 2D, this-- even though this is locally a total derivative, but it's not the global total-- But you cannot write a total derivative globally. And so we integrated over a nontrivial manifold. But manifold with a nontrivial topology-- actually, this is [INAUDIBLE]. It actually gives you this Euler number. So example, if you integrated on a sphere, then you find that this number is 2, because [? when ?] h equal to 0. And if you evaluate it on the torus, then you find that this is 0, et cetera. OK. So one second. So in the Euclidean path integral, so Euclidean path integral, this S minus-- so this exponential S Euler term-- just give you exponential [? s ?] lambda chi, OK? So now you may ring a bell, because this is precisely the [INAUDIBLE] term I added before when I say, when we sum over different genus. You have the freedom to add [? mechanical ?] potential, lambda times the Euler number. And actually this is dictated by the symmetry. And if you include the most general terms are consistent with those symmetries, then you are required to include this Euler term. And then you'll have automatically weight, that you would automatically have a weight for different genus, OK? But for this theory, the lambda is actually constant. So the choice of lambda is still arbitrary, but this does give you a nontrivial weight. So this is weight-- different genus. Yeah. Any questions about this, yes? AUDIENCE: So if you have [? a nontrivial ?] 
topology, wouldn't the boundary term from the symmetry also be nontrivial? So wouldn't the symmetry break? HONG LIU: Sorry? AUDIENCE: If you have a nontrivial topology, what is the total derivative boundary term, total derivative of the symmetry transformation also? HONG LIU: No. No, they are different kind of-- no, there's no global issue here. This is just a ordinary function. You have to write down this explicit to know it. Yes. AUDIENCE: So the claim was that [? efficient ?] work, no matter what value [INAUDIBLE] input. So this is why you were able to put in that factor at the sum. We could make lambda equal 0, in which case, we don't have it. HONG LIU: Yeah. Lambda is not dictated by symmetry, so lambda is just a free parameter here. But in principle, if I determine it exists, yeah. Just lambda equal to 0 is a special case. Yes. AUDIENCE: [INAUDIBLE]. HONG LIU: Yeah, in the Minkowski case it's a little bit tricky to talk about topology, [? et cetera. ?] So that's why we always, when we do the path integral, we always go to Euclidean signature. And for example, later when we quantized the action, which we work with at Lorentzian signature. And we only care, say when we solve the equation of motion, but we only care about the local structure. Then we can just ignore this term because this does not contribute [? to ?] local structure. Good. So to summarize, from the two-dimensional-- so again, we can signal this as a two-dimensional field theory coupled to in the curved spacetime. From that two-dimensional perspective, this is a global symmetry. OK, so a is a global symmetry. Because the symmetry does not depend on the sigma and the gamma, and the sigma and tau. So this should leave to conserve charge, conserve current, conserve charge and conserve current. OK? So we're going to see later this indeed plays a very important role in physics. And the b and the c are what we call the gauge symmetries, or local symmetries, because they depend on sigma and tau. They have [? arbitrary ?] dependence on sigma and tau. There and here can also depend on sigma and tau. So this is what we call the gauge symmetries. So for those of you who have started Quantum Field Theory II or even Quantum Field Theory I, for those of you who have quantized [? QED, ?] you know that the gauge symmetries are not [? general ?] symmetries. They just tells you they are redundant degrees of freedom. OK? And they just redundantly [? with the ?] freedom. And then when you quantize the theory, then you need to rid of those redundant degrees of freedom first. Because otherwise you will have a consistency problem. And the process of getting rid of redundancy problems-- those redundancies are called gauge fixing. OK, so that's what we'll get to a little bit later. So in principle, you can just proceed to do the path integral quantization as we did before. And the path integral quantization is conceptually simple and geometrically elegant. But it does not give you, say, string theory spectrum. And because it's a path integral, it's [INAUDIBLE]. If you really want to find what are the spectrums of the string theory, you actually have to do the standard canonical quantization. So that's what we are going to do next, OK? Now we are going to quantize the string in the canonical way. There are many ways of doing it. I will use the simplest way, so [INAUDIBLE] the simplest way, also conceptually simplest way. It's called the light-cone quantization. And the meaning of this word will be clear. 
So the basic picture will emerge from this quantization is that each oscillation mode-- so if you have a string, then you can oscillate, right? Because that's what the string does. It can oscillate. So each oscillation mode, so the bottom line is that we will show that each oscillation mode of the string gives rise to a spacetime particle. OK. From the string point of view, it looks like some kind of oscillation of string. But from the spacetime point of view, it's like a spacetime particle. In particular, if you have a closed string, then that will give rise, say, graviton. Say one particular mode will give you graviton and then many other particles. Actually, we have an infinite number of particles from string because there's infinite number of vibration modes with a string. And if you have an open string, say the string which does not close, and then actually then you find the gauge field-- a rise in one of the vibration modes. Say this can either be photon or gluon, et cetera. OK. Yeah, so this is the thing we like to show. Yes. AUDIENCE: [INAUDIBLE]. HONG LIU: Oh, there are an infinite number of particles. So because the string can oscillate, just any string oscillate in principle in an infinite number of ways. And if each oscillation mode [? cause ?] one into a particle, then you can have an infinite number of particles. AUDIENCE: [INAUDIBLE]. HONG LIU: Oh yeah, oh yeah. They're completely different from gravitons and protons. They're just completely different particles. AUDIENCE: So [INAUDIBLE] particles [INAUDIBLE]. What other particles in the same class with gravitons? HONG LIU: Depend on what you mean by class. AUDIENCE: Well, can you give a second [INAUDIBLE]? In the same line as graviton, [? what else can you do? ?] HONG LIU: Oh, for example, mass will spin two particles. Mass will spin three particles. AUDIENCE: Everything will spin more than one? HONG LIU: It can also have a scalar particle, [? massive ?] scalar particles, but they're all massive. AUDIENCE: That'd be boson? HONG LIU: They can be fermions. In this theory they can only be bosons. But we can play tricks to get fermions. AUDIENCE: So does this predict that-- so if you were to have a string theory model which would reproduce the standard model, this basically says that at higher and higher energies that we expect to always see new types of particles or something? HONG LIU: Yeah, that's right. AUDIENCE: Interesting. HONG LIU: But we will see. You need to go to a specific time-- we already go to a specific energy scale to see. AUDIENCE: OK. We can see how they're spaced. HONG LIU: Yeah, yeah. That's right. Yeah, but that space is determined by some energy scale, and that scale can be very high. OK, so now we will quantize this theory using canonical quantization, OK? So by canonical quantization-- so let me remind you, this is the procedure. So this is a procedure we typically explain, say, in the second day of free quantum field theory. And so we are doing the same thing here. So the quantization procedure is the following. The first step is you write down equation of motion. OK. And in the case which you have gauge symmetries, then we fix the gauge symmetry. And the third is find the complete set of classical solutions. OK. And the four is you promote the classical field on the worldsheet, worldsheet field, say the X or gamma to quantum operators. OK. So the quantization procedure is that you promote them to quantum operators, which satisfy the so-called canonical [? computation ?] relation. 
And the previous equation of motion then becomes operator equations for those operators. OK? In particular, then the classical solution you find in three becomes the solutions to operator equations. And in particular, the parameters we use to parameterize your classical solutions-- so those become, say, the creation and annihilation operators. Just as what we do in the free field theory quantization. And then the last step, once you have the creation and annihilation operators, then you can just find the spectrum. By acting the creation and annihilation operators on your vacuum, then you can find the spectrum. So find the spectrum. So this is the standard procedure. Yeah, so this is almost exactly the same as how we quantize our free scalar field theory. In Quantum Field Theory I, the only tricky thing is now, this is a system with gauge symmetries. That's the only tricky thing here. Any questions here? So now let me just again say some boundary conditions we are going to use. AUDIENCE: [INAUDIBLE]. HONG LIU: Sorry? AUDIENCE: What are parameters in classical solutions permutations. [INAUDIBLE]. HONG LIU: What other-- AUDIENCE: Parameters in classical solutions in case of [INAUDIBLE]. HONG LIU: It's different modes for your classical waves. The amplitude for your different classical waves. AUDIENCE: [INAUDIBLE]? HONG LIU: Yeah, it is like that, yeah. For scalar field theory, yeah, just for harmonic oscillator, you can write a cosine. a explains your i omega X plus a star, [INAUDIBLE] omega X. So a is the parameter, and a and a star become creation and annihilation operator. It's the same thing here. So we will consider both closed and open strings. So suppose closed and open strings. So if you have a closed string, then, just by convention we always take sigma to go from 0 to 2 pi. So the closed string means it's closed. Then we take this from to 0 to 2 pi. That means whatever function should be a periodic function of 2 pi. OK, should be a periodic function of 2 pi, because it's closed, OK? So the same thing with gamma. Same thing with gamma, OK? But if you have open string, by convention we take sigma goes to 0 to pi. So one end of the string is at sigma equal to 0. The other end is at sigma equal to pi. So for open strings, so here is sigma equal to 0. Here is sigma equal to pi. So the sigma is directly on the string. So we need to-- for open strings, then, because you have boundaries, then we need to supply boundary conditions at the end points. Good. So now with this set-up, we can now just follow that five steps, OK? We can just follow that five steps and quantize the string. So do you want to have a break? OK, let us start. So we just follow these steps. First let's write down equation of motion. OK, first we just do a variation of gamma ab. So gamma is the worldsheet metric from the point of view of two-dimensional field theory. The gamma is the worldsheet metric. When you do evaluation over gamma, essentially what you get is the stress tensor for this worldsheet theory, OK? So when you do the [? evaluation over ?] gamma, essentially, it's just the statement that the stress tensor had to be 0. OK. And then you can work this out explicitly. And the stress tensor, which I call Tab. OK. So I just [? did the evaluation ?] of that action with respect to gamma ab. This is what you get You find this thing has to be equal to 0. And I just call this thing Tab. And this Tab is essentially the stress tensor of this. 
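[Editor's note: the explicit form of the constraint being referred to, as a sketch; the lecture defines T_{ab} only up to an overall constant.]

\[
T_{ab} \;\equiv\; \partial_a X^\mu\,\partial_b X_\mu \;-\; \frac{1}{2}\,\gamma_{ab}\,\gamma^{cd}\,\partial_c X^\mu\,\partial_d X_\mu \;=\; 0,
\]
obtained by varying S_P with respect to \gamma^{ab}; setting the worldsheet stress tensor to zero is the content of the \gamma equation of motion.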
If you [INAUDIBLE] the scalar field theory, so this Tab is essentially the stress tensor of this scalar field theory. OK? Yeah. AUDIENCE: If we add the topological term and said, there's a Ricci scalar there, there must be Einstein's tensor equal to Tab. HONG LIU: No, no, no. Because there's a total derivative term in two dimensions-- total derivative term never contribute to equation of motion. Yeah. AUDIENCE: So but if we calculated the Einstein tensor for that R [INAUDIBLE] get 0. HONG LIU: You will get 0, yeah. AUDIENCE: Sorry, so you just said that from the equation of motion, that gamma ab can get gamma ab equals 1/2 lambda hab. Is that it? HONG LIU: Yeah. It's the same thing. It's the same thing. But this is a very good question. Let me do that here. Say if I write gamma ab equal to 1/2 lambda hab, I can write this lambda explicitly. Then I can modify this by hab on both sides and this means hab gamma ab equal to lambda. OK, that means lambda is equal to gamma ab times hab. And so this equation, I can also write it as gamma ab is equal to 1/2 hab gamma ab hab. Oh, gamma cd. Yeah, which you can show is equivalent to that equation. OK. Or maybe I should do gamma ab. No, if you do that one, it doesn't matter. I can do-- to be exactly the same as that equation, let me just do both sides. I can track with gamma ab. Let me see, maybe I can [? track ?] both sides. Let me think. So that is [? HCD ?] so that is h gamma ab, times that thing, is equal to-- Yeah, you can-- just take this equation. You get this equation. Then you can show that equation is equivalent to this equation, OK? [INAUDIBLE] right now, they're not reading the identical way, but they can show the equivalence. So yeah, actually that's your homework. So now if you look at equation of motion for X mu, look at the equation of motion for X mu, then this is just [INAUDIBLE] your standard scalar field. So this just give you-- OK, and we call this equation six. So these are all of your equation of motion. If you have open string, then when you do the evaluation of X, you have to do integration by parts. Then in the standard [? story, ?] you have to have a boundary term. OK? But you actually have to be careful about that boundary term. For the open string, you will get a boundary term, which is given by delta mu gamma sigma b partial b X mu, evaluated at sigma equal to 0 and pi, should be equal to 0. OK? So this just come from where you get this second-order equation, you always have to integration by parts when you do the evaluation. That just comes from the boundary term of that [? variation, ?] evaluated at the end point of the string you have in the sigma direction. OK. Is this clear? Good. Again, I will not go through all the algebraic details. You should check those details yourself. You should check them, and if you find mistakes, it's greatly encouraged. Yeah, I will add to your points. Yeah, if you find mistakes, then I will add your points. If you have found enough mistakes, then you may not need to do the [? PSAT ?] anymore. So this tells us we can actually impose two kinds of boundary conditions. Say delta X mu have to be 0. So evaluated at sigma equal to 0, pi, equal to 0. Sigma 0 or pi-- it does not have to be the same. Or this thing is 0, gamma sigma b. The reason the sigma here, because we are interested only in the boundary term in the sigma direction. The boundary term in the time direction we [INAUDIBLE] care about. 
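[Editor's note: the X equation of motion and the open-string boundary term just described, as a sketch with my overall normalization.]

\[
\partial_a\!\big(\sqrt{-\gamma}\,\gamma^{ab}\,\partial_b X^\mu\big) = 0,
\]
and for the open string the variation also produces the boundary term
\[
\delta S_P \;\supset\; -\frac{1}{2\pi\alpha'}\int d\tau\,\Big[\sqrt{-\gamma}\,\gamma^{\sigma b}\,\partial_b X_\mu\,\delta X^\mu\Big]_{\sigma=0}^{\sigma=\pi},
\]
which must vanish at each end; the two ways of making it vanish are the boundary conditions discussed next.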
Because the boundary term in the time direction, we always assume in the far past and the far future, nothing happens. We always assume that. But you cannot assume a boundary condition at the spatial condition. You have-- yeah, in the spatial boundary, you have to be careful, because we have a finite-- [? a special ?] extension. OK, so that's why this sigma. So a lot of the possible boundary conditions, gamma sigma b, times partial b X mu, so evaluated at sigma is equal to 0 or pi, equal to 0. So this is normally called the Dirichlet boundary condition. This is normally call the Neumann boundary condition. The reason this is called Dirichlet boundary condition because if that X is fixed at the end point, it means that the value of X are not allowed to vary. So this is normally called the Dirichlet, OK? And for the Neumann boundary condition, you only constrain the derivative. And the value of X can be anything. And you only constrain the derivative. So this is called the Neumann boundary condition. So for simplicity, later we will treat both of them. But for now, let me just look at this one, because this one is slightly simpler. So for the moment, let me just consider the Neumann boundary condition, OK. So now the second step is fixing the gauge. Second step is fixing the gauge. So we have two gauge freedom. One is to change the coordinates. And the one is to do a scaling. OK. So before we fix the gauge, let's do a little bit of counting. So here I can change coordinates by arbitrary two functions, OK? So here I essentially have two arbitrary functions with freedom of the gauge freedom. And here, I have one arbitrary functions for both gauge freedom, because I can choose omega [INAUDIBLE] arbitrarily. And if you look at gamma ab, the worldsheet metric-- so ab is from 0 to 1. So this also has three freedoms, three degrees of freedom. So that means actually, in [INAUDIBLE] theory, at least you can expect by taking account those degrees of freedom, taking account of these three gauge degrees freedom, you can set this gamma ab to some fixed metric. Because there's only three freedom here, and we have three gauge freedom. And you can completely fix them. So [? not ?] let me erase it now. Now let me do the second step. Gauge fixing using this different morphism of b, so these have two parameters. You can show that you can just make a coordinate transformation. Take any gamma ab, you can show that by using a coordinate transformation, you can transform the gamma ab to the following form. OK. So I will not prove it here, but I'll let you go back to try to convince yourself of this fact, OK? Just by using the coordinate transformation starting with generic gamma ab, you can make a coordinate transformation so that in the new coordinates, the gamma ab is given by a prefactor times the Minkowski metric. And now we can use the c, that guy, because we have the freedom to change the gamma ab by arbitrary function. And then we can just choose that function to be [INAUDIBLE] to this guy. Then we can set gamma ab just equal to eta ab. OK. So we can use these two freedom to set the gamma ab with eta ab. Yes? AUDIENCE: So you're saying that we can [INAUDIBLE] gauge [? toward ?] locally on the worldsheet. [INAUDIBLE] gauge transform the metric [? globally ?] [INAUDIBLE]. HONG LIU: That's right. Yeah, we can transform it locally into [? flat ?] metric. And then, indeed, there is a global issue of which I will not worry about now. Yeah, because the question which I'm interested in, say, the spectrum only [? 
consists ?] of the local questions. For example, if you do the path integral, then you don't have to worry about the global issues. AUDIENCE: Why? Why is it that when you [INAUDIBLE]? HONG LIU: Yeah, because you need to integrate over all possible configurations. And you don't want to do the overcounting, or count mass. Yeah. AUDIENCE: So we don't have to work this out. So why does that mean we don't have to? HONG LIU: Yeah. So let me give you an example. For example, if you do the genus g surfaces, and then say let's do Euclidean again. It's easier. Do Euclidean. Indeed, you can locally transform to eta ab the actual set of discrete parameters. And then you have to be careful about those discrete parameters, et cetera. Yeah, just those discrete parameters have to do with global issues, et cetera. AUDIENCE: I see what you're saying. HONG LIU: Yes? AUDIENCE: [INAUDIBLE]. HONG LIU: Yeah, just imagine you have some two-dimensional surface. Right now let's not consider about the topology. Right now just consider. So in this case, for the closed string, the simplest thing-- you can see this [? sitting ?] there. So this is sigma direction. This is the tau direction. Yeah. Yeah, for the closed string it's [INAUDIBLE]. But the open string, roughly just like this. And right now we don't consider complicated topology. We just consider local question. Then we can consider the simplest topology. AUDIENCE: Yeah, sorry, I didn't quite get the path integral being-- HONG LIU: Yeah, don't worry about it. AUDIENCE: Don't you have to integrate around both sides of every [? pole that ?] you have in the-- HONG LIU: Yeah, you have to integrate over everything. You have to be careful about everything. AUDIENCE: So global [? effect is a factor. ?] HONG LIU: Yeah. AUDIENCE: It does matter. OK. HONG LIU: Of course. AUDIENCE: I thought you were saying the other. HONG LIU: No, no, no. I'm saying the global effects are very important when you do the path integral, and if you want to do the path integral correctly. But here we are interested in the spectrum. And this is a local question. OK, so now with this gauge fixing, we can see how the equation of motion simplifies. First we have fixed all gauge freedom, OK? So let's look at the six first. So six just become a free scalar equation. Because when gamma ab become a Minkowski metric, this is just like a scalar field, OK? This is like a scalar field. So this we can write explicitly. So this is a free scalar field. And let me call this somehow-- oh, yeah, let me call this seven. This one seven, this one eight, just to be consistent with my notes. Then I call this nine. But if we still have to worry about the five, OK, so the five can now also be written explicitly. So first thing you can check is that T0,0 is equal to T1,1, so the diagonal elements are the same. It's given by 1/2 partial tau X mu partial tau X mu, plus partial sigma X mu partial sigma X mu equal to 0. And T0,1 component will give you partial tau X mu [? times ?] partial sigma X mu equal to 0. So I'm just writing everything explicitly. So these are the three components of that equation, when I set gamma ab equal to eta ab. You can easily check yourself. Just plug that in. OK? So the reason that T0,0 is equal to T1,1 is because we actually have a scaling symmetry. So in the case when this action-- yeah, this is a free massless scalar field. It's a scaling symmetry. And you know that when you have a scaling symmetry under the stress tensor, the [? trace ?] 
of the stress tensor always vanishes, so that's why T0,0 and T1,1. So for the open string, we also have to worry about the boundary conditions. And so this just become the Minkowski metric. And the sigma sigma, only the sigma sigma is non-zero. So you just get the partial sigma X mu equal to 0. OK, so this should be evaluated at sigma equal to 0 and pi. So right now, we only consider Neumann boundary condition. And let me call it 12. So this is for Neumann. Good? Any questions on this? Yeah. AUDIENCE: For this Dirichlet boundary condition, it's a [? valuation. ?] So you already may say the [? valuation ?] is arbitrary by [INAUDIBLE] can be arbitrary. So it seems like only Neumann boundary condition is valid. HONG LIU: Yeah, this is your choice. This is your choice. So normally in the field theory, what we do-- indeed, normally in the field theory, yeah, this is your choice. In the normal field theory, where you consider, say, an infinite space, we just assume everything goes to 0 at infinity. And we don't even need to worry about the boundary condition. Then the boundary term automatically vanishes. And so that's why, normally, in the field theory in the infinite space, we never worry about the boundary conditions because we made that assumption. So here, because the string has a finite [? lens, ?] it's a finite boundary. And then you actually have a freedom to introduce both. Yeah. You do have the freedom to introduce both. Good? So let me emphasize again. So now in this gauge, we just have a free scalar field. So this is a standard wave equation in one plus one dimension. But we are not quantizing a free scalar field theory, because we still have to solve those things. We have to solve these two equations. And these are constraints. Because in this equation, in principle, you completely solve the equation. We know that this is a wave equation. You already solved X completely. But these two equation, coming from the gamma equation of motion, they impose a [? longitudinal ?] constraint we have to satisfy. And those constraints actually last [INAUDIBLE] because they are non-linear. This is a linear equation, but these are non-linear constraints. And this is the 10, 11-- I'll normally call them Virasoro constraints. Virasoro was a person. So this is the Virasoro constraints. OK? And so this allows me to constrain the equation [INAUDIBLE]. So now let's first solve the easy equation. So let's go to the third step, trying to solve the equation of motion. And now we have fixed the gauge. And now we try to solve the equation of motion, OK? So for people at your level, I can write down a solution of line immediately. So I can have a constant, of course, satisfy this equation. I can have a linear term in tau, which of course will satisfy this equation. For closed string, in principle, I can also have a linear term in sigma, but that won't satisfy the periodic boundary condition. So that's not allowed. And then I can add the traveling wave solutions. So this is a left-moving wave, a right-moving wave, and this is a left-moving wave. OK? So this is a full set of solutions to-- so x mu are arbitrary constants. x mu and v mu are arbitrary constants. And XL and XR are arbitrary for closed strings. So XL and XR just are independent, should be independent periodic functions of period 2 pi. OK? Because they have to be. So XL and XR are function of single variable. And they have to be period in sigma means they have to be periodic. So these are the periodic function. OK, is this clear to you? 
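[Editor's note: the gauge-fixed system assembled over the last few paragraphs, collected in one place. The conformal factor e^{2ρ} is my notation for the intermediate step; everything else follows the lecture.]

Gauge fixing: two diffeomorphism functions plus one Weyl function match the three independent components of \gamma_{ab}, so locally
\[
\gamma_{ab} \;\xrightarrow{\ \text{diffeos}\ }\; e^{2\rho(\sigma,\tau)}\,\eta_{ab}
\;\xrightarrow{\ \text{Weyl}\ }\; \eta_{ab}.
\]
In this gauge the equations become a free wave equation plus the Virasoro constraints,
\[
\big(\partial_\tau^2 - \partial_\sigma^2\big)X^\mu = 0,
\qquad
T_{00}=T_{11}=\tfrac{1}{2}\big(\partial_\tau X^\mu\,\partial_\tau X_\mu + \partial_\sigma X^\mu\,\partial_\sigma X_\mu\big)=0,
\qquad
T_{01}=\partial_\tau X^\mu\,\partial_\sigma X_\mu = 0,
\]
with \partial_\sigma X^\mu|_{\sigma=0,\pi}=0 for the open string with Neumann conditions. The general solution of the wave equation is
\[
X^\mu(\tau,\sigma) = x^\mu + v^\mu\,\tau + X_L^\mu(\tau+\sigma) + X_R^\mu(\tau-\sigma),
\]
with x^\mu, v^\mu constants; for the closed string X_L and X_R must separately be periodic with period 2\pi, and a term linear in \sigma is excluded by that periodicity.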
Do I need to explain this equation? Good. Yes? AUDIENCE: Why a function of tau minus the sigma? HONG LIU: It's because you have a traveling wave. Because first we can immediately see this satisfies this equation. And then you need to convince yourself these are the only solutions, in the sense that-- yeah, because these are the two arbitrary functions. There's no other choices anymore. Any more questions on this? AUDIENCE: [INAUDIBLE]. HONG LIU: Also not, for other reasons. Not for the Neumann boundary condition. Not for this boundary connection. And now we are come to open string. So for open string, you can then be-- again, we have these. We can check that if you have a linear sigma term [? it will be ?] incompatible with this boundary condition, so we don't worry about that. So we don't worry about that. And then, from this equation, then plug that into this equation, we conclude that the XL prime tau should be equal to XR prime tau at sigma equal to 0. And XL prime, tau minus pi, should be equal to XR prime, tau plus pi, at sigma equal to pi. OK, so you just plug this into here. So the first two terms does not matter, and then you just have the last two terms. So the last two terms, then if you evaluate it at sigma equal to 0, then you just get this equal to that. And if you evaluate it at sigma equal to pi, then find this, OK? Yeah. AUDIENCE: Why is it that you don't package those original two terms into the right and left functions? So why is that you write x mu plus v mu tau? HONG LIU: Because those equation don't have this form. They're not in the form of tau plus sigma or tau minus sigma. No, this is just tau. This is not tau plus sigma. Now, I can [? pack it ?] in that form. It doesn't matter. I'm just saying here I'm not doing this. Yeah. What I'm saying-- these are trivially periodic functions. And yeah, I [? just separate ?] that. OK? So now, from here, you can find that-- so this equation tells you that XL is essentially equal to XR, OK? So up to a constant we can absorb into X mu, for example. Then these two functions could be the same. And if we take these two function to be the same, then this function tells us this is a function-- the second line tells us this is periodic in 2 pi. OK? Yeah, I actually think I missed a minus sign here. So if I take the sigma derivative, yeah, I can put a minus sign here. But these two conclusions still are right. So now for both open and closed string, you have solved nine completely. Solved nine completely. Any questions about this? OK, but we still have to solve 10 and 11. So what 10 and 11 does is to impose some nontrivial constraints between XL and XR and x mu and v, et cetera. I'd just say plug in this most general solution into 10 and 11. And we'll impose some constraints on those functions. So I urge you to try it a little bit yourself. And those equations are not nice equations. There are lots of nice people. Yeah, there are lots of nice equations. And so at this stage, we face a decision. So when you see this 10 and 11 are hard to solve, then you have a decision to make. Yeah, whenever you have something that you don't know how to do immediately, you have a decision to make. We have some decision to make. Or you just give up. And first-- so one option is that we could just quantize this theory. We know how to quantize this free scalar field theory. We just quantized this free scalar field theory first. And then we can construct the Hilbert space, et cetera. And then we can impose those constraints at the quantum level. 
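[Editor's aside: to record the open-string conditions derived above, with the left/right labeling of the block after (12), and keeping track of the relative minus sign from the sigma derivative of X_R(tau - sigma) that the lecturer mentions:]

\[ X_L^{\mu\,\prime}(\tau) = X_R^{\mu\,\prime}(\tau) \quad (\sigma = 0), \qquad X_L^{\mu\,\prime}(\tau+\pi) = X_R^{\mu\,\prime}(\tau-\pi) \quad (\sigma = \pi), \]
so, absorbing a constant into x^mu, one can set X_L^mu = X_R^mu, and the second condition then says that this single function has a derivative that is periodic with period 2 pi.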
We can impose those constraints at the quantum level. Yeah, so this is one option. The second option is that you find some ingenious way to solve those equations so that you can actually find independent variables. OK, because those are constraint equations, means some degrees of freedom allow the independence from others. So the alternative is, I guess, you just solve it. And finally, independently, with freedom, and then quantize only [? those ?] independently with freedom. OK. So the first option, this stage is easy, because you can quantize the first [? gauge ?] field theory. But the process will impose the constraint at a quantum level. It's actually rather tricky. So I will not go there. So I will do the second method, [? rather ?] we try to solve this constraint, find some trick to solve this constraint, and then quantize independent degrees freedom. Is it clear? So I will take the second approach. So this is the idea of the light-cone quantization, OK? So this is the idea of going to the light-cone gauge. So this is an ingenious way to solve those constraints. OK? And it was first developed by four people, including our colleague here, Goldstone, Jeffrey Goldstone, who was among the first people to quantize this theory, using the light-cone gauge. Actually they quantized the Nambu-Goto action. They did not quantize this theory. They quantized the Nambu-Goto action using the light-cone gauge. So the key idea is the following. He said after fixing this gauge, after fixing gamma ab, you go to eta ab. There are actually still some residue gauge freedom. We actually have not fixed everything. OK. By residue gauge degrees freedom-- so if you fix the gauge completely, then that means that any operation which previously-- what we did b and c-- when you act on gamma, we will take this away from this configuration, OK? So by residue degrees freedom, it means I can still find the combination of coordinate transformation and the Weyl scaling so that after those operations, I'm still going back to this metric. OK, so is it clear what we mean by the residue gauge freedom? Good. So to see this I have to introduce so-called the light-cone coordinate on the worldsheet, introduce sigma plus/minus equal to square root 2 tau plus/minus sigma. And then the worldsheet metric can be written as minus d tau square plus d sigma square-- because we take the Minkowski metric. So this can be written as minus 2 d sigma plus, d sigma minus. So now here is the key observation is that this metric is preserved by the following coordinate transformation. I take sigma plus goes to sigma tilde plus, which is only a function of sigma plus. And sigma minus goes to some function sigma tilde minus, some new coordinate sigma tilde minus, which is only a function of sigma minus. So if you've plugged this in here, then you'll find under this coordinate transformation, we get this ds tilde square. If you just plug this in here, this is, say, 2d sigma tilde plus, 2 sigma tilde minus. You go to new coordinates. And [? these ?] new coordinates expressed in terms of original coordinates, you get 2f prime sigma plus, g prime sigma minus, and d sigma d minus. So now the key is that under this coordinate transformation, you only transform your metric by an overall prefactor. And then you can get rid [? of it ?] by Weyl scaling. It is preserved by this, followed by Weyl scaling. Because this coordinate transformation only changed your metric by an overall prefactor. And any overall prefactor you can get rid of by Weyl scaling. 
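[Editor's aside: the residual gauge freedom just described, written out; a reconstruction of the board, with the factors of sqrt 2 a convention.]

\[ \sigma^\pm = \tfrac{1}{\sqrt{2}}(\tau \pm \sigma), \qquad ds^2 = -d\tau^2 + d\sigma^2 = -2\, d\sigma^+ d\sigma^-, \]
\[ \sigma^+ \to \tilde\sigma^+ = f(\sigma^+), \quad \sigma^- \to \tilde\sigma^- = g(\sigma^-) \;\Longrightarrow\; -2\, d\tilde\sigma^+ d\tilde\sigma^- = -2\, f'(\sigma^+)\, g'(\sigma^-)\, d\sigma^+ d\sigma^-, \]
so the flat worldsheet metric changes only by an overall prefactor, which a Weyl rescaling removes.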
OK, is this clear? So we actually still have some residue of gauge freedom left. OK? So now we can use this freedom to do something good for us. So these new coordinates, say, in this new tilde coordinate, tau tilde, then have the form of square root sigma plus tilde plus sigma tilde minus. OK. So this have the form 1 over 2 f sigma plus tau, and g sigma minus tau, say tau minus sigma. So I have this form. So in this new coordinate, yes, so you can change the coordinates. So you're allowed to make this kind of coordinate change. So you are allowed to make this kind of coordinate change. Yes, so the residue degrees freedom means we are allowed to make this kind of coordinate change, OK? And that means-- and you see, this is precisely the combination of this form, a left-moving wave plus a right-moving wave. So that means we can use this freedom. We can choose. So this means we can choose f and g so that tau is given by any combinations of this X-- the solutions of X, OK? Because solutions of X precisely have this form. So what we will do is we will-- maybe should I erase this? Yeah, let me erase this because I think you all know this. So I did a level of the equation of motion. You have the freedom. [? You're going to ?] choose tau tilde to be any combinations of those X, OK? Because those X precisely have that freedom. And the smart choice is what we call the light-cone gauge-- we can choose so that tau tilde is proportional to so-called X plus. And X plus is defined by square root of 2, say, the 0-th component of X plus, say, one of the spatial directions. OK, say, take one. So to fix this gauge, to fix to fix this residue gauge freedom, I have the freedom to choose tau tilde to be some combination of X. And this is the combination that we will choose. So this is what we call the light-cone gauge. So I will suppress this tilde. So in the light-cone gauge means we go to a coordinate so that tau become equal to X plus, and up to some constant which I called V plus. OK. So V plus is some constant. Is this clear to you? And this is so-called light-cone coordinate in the target space. So this [INAUDIBLE] identify the worldsheet's time with the light-cone time in the target space. Or in other words, X plus equal to V plus, tau. This is what we call light-cone gauge. So why do we want to do that? And the ingenious thing is the following. It's that if you write X mu, so we can write X mu as X plus, X minus, and Xi. And the i come from 2 to D minus 1. So X plus and X minus is X0 and X1. So that's right in this form. OK, then you can easily check yourself, or you maybe already know it, is that this particular contraction is given by 2X plus, X minus, plus Xi squared. In particular [INAUDIBLE], just as here, which you have this off-diagonal structure. This kind of contraction has this off-diagonal structure in X plus and X minus. And this turns out to be key. Yes. AUDIENCE: Do you mean for that to be 0 and 1, and then that to be plus 1? HONG LIU: Sorry? AUDIENCE: Do you mean for that to be 0 and 1? HONG LIU: So these two are equivalent to X0 and X1. Let me include the rest, starting from i equal to 2 to D minus 1. So X mu have X0, X1, then what I would call Xi, which i start from 2 to D minus 1. I just renamed these two to be X plus and X minus. Just change this two to X plus and X minus. AUDIENCE: Oh, I see. OK. HONG LIU: Yes? Yes. AUDIENCE: So when we do another one, [? I thought ?] the Weyl scaling's already fixed when you're trying to fix [INAUDIBLE]. HONG LIU: No, no. 
I'm just saying I do these two operation, but in sequence. So I do that operation, and then I choose the Weyl scaling so that precisely cancels this. Of course, if you just do a single Weyl scaling, this will be violated. But if I do that two operation together, then this will be preserved. Yeah, I have to do that two together. Yeah, if you do a single one, this will be invalidated. OK, so this is the key point. So we will see that this structure will play a key role. So this is the smart thing which our friend Jeffrey did. This is in fact the ingenious thing of our friend Jeffrey and his friends [? did. ?] So now let's look at these two equations, OK? Now let's look at these two equations. So the first thing-- so now let's look at 10 plus 11. So first let's note, because of this, partial tau X plus is equal to V plus, and partial sigma X plus is equal to 0. OK? Because there's no sigma dependence there, only tau dependence, which isn't in here. And then let me just plug it in. So let's look at here. Let's forget about the 1/2. You can also think about the 1/2, doesn't matter. So look at plus and minus structure here. Use this here. Then you get minus 2 V plus, partial tau X minus, then plus partial tau Xi squared. This is for the first term. In the second term, the X plus and the X minus part is 0 because of these. Because of this and because of the off-diagonal structure here, OK? Do you see it? So the first term here would be partial tau minus 2, partial sigma X plus, partial sigma X minus plus partial sigma Xi squared. So this would be 0. But our friend here is 0, because of that is 0. So now we can erase it. So now let's change this to the other side. OK. Let's change this to the other side, and then change this sign. So the equation I get is the following. OK, so this is equation 10. So now let's look at equation 11. In 11, so we have partial tau X plus, then partial sigma X minus. Then you go to partial tau Xi and partial sigma Xi. And then this one just give us V plus. So that two equation become these two equation. And let me call this 14. This is 15. So is it clear to you, these equations? So if it's not immediately clear to you, just check it yourself. OK. Urge you to do the check, because then you will see the magic. Then you will see the magic and the beauty of this trick. AUDIENCE: [INAUDIBLE]. HONG LIU: Sorry? AUDIENCE: Can you tell X minus [INAUDIBLE] sigma X plus? HONG LIU: Yeah, that's 0 because partial 0 X plus is 0. So that's why there's only one here, not two. Because the other term is 0. OK, so now the beauty of these two equation is that they express the X minus solely in terms of Xi. So 14 and 15 tells you, is that X minus can be fully solved in terms of Xi. Because this equation give you the tau derivative of X minus, this equation give you the sigma derivative of X minus, and all expressed only in terms of Xi. So the X minus is completely solved, completely expressed, in terms of Xi. And then we have solved those constraints [? similarly ?] [INAUDIBLE] constraint explicitly, OK? So right now we are doing it a little bit fast. I urge you to go through the algebra yourself to appreciate the beauty of this step. So we conclude that independent degrees freedoms are only Xi. Because X plus we already know by fixing the gauge. And X minus now is completely solving in terms of Xi, so the independent freedom are only Xi's. And this will make our life very easy. Because Xi are just free scalar fields. Xi-- just free scalar field with the right kinetic term. 
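[Editor's aside: collecting the light-cone-gauge formulas from this stretch of the lecture, as a reconstruction of the board in the conventions above.]

\[ X^\pm = \tfrac{1}{\sqrt{2}}\,(X^0 \pm X^1), \qquad X^\mu X_\mu = -2 X^+ X^- + \sum_{i=2}^{D-1} X^i X^i, \qquad X^+ = v^+ \tau, \]
and the constraints (10), (11) then become
\[ \partial_\tau X^- = \frac{1}{2 v^+}\Big[ (\partial_\tau X^i)^2 + (\partial_\sigma X^i)^2 \Big] \qquad (14) \]
\[ \partial_\sigma X^- = \frac{1}{v^+}\, \partial_\tau X^i\, \partial_\sigma X^i \qquad (15) \]
so X^- is determined by the X^i up to an integration constant.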
You have also gotten rid of X0, because, as we said before, X0 has a wrong-sign kinetic term. But now the only independent degrees of freedom are the Xi's, and they have the right number of kinetic terms. They have the right sign for the kinetic term. And they are just free scalar fields. So we can quantize them as what you did in your first day of Quantum Field Theory class. And then you have quantized the string theory. OK, yeah. I think I'm going very slow here. Maybe I'm a little bit too emotional. Anyway, so maybe let's stop here. The next time, then we will just quantize those things. And those equations still will give us some very important information, and then we will quantize those things. And then we will be able to see that the string theory contains gravity. It contains gravitons, OK?
HONG LIU: So let me first just remind you briefly what we did in last lecture-- AUDIENCE: [INAUDIBLE]. HONG LIU: --which was ages ago. [LAUGHTER] So we, say, quote unquote, derived the AdS/CFT duality. And then we also described a little bit of the geometry of the Anti-de Sitter spacetime. So there are two things I want you to remember about Anti-de Sitter spacetime. As I said, there are two ways to think about it. One is the so-called Poincare patch. Say we have a coordinate z, or r, depending on your choice, and the metric comes with an overall prefactor R squared over z squared. So z goes from 0 to infinity. And at z equal to 0, the overall prefactor blows up, so the spacetime becomes bigger and bigger. So this is what we normally call the AdS boundary. And then when z becomes bigger, you go to the interior. And at each constant z, the spacetime is just Minkowski space, say d-dimensional Minkowski space. So this gives you AdS d plus 1. So this is the so-called Poincare patch. And another way to think about AdS is the so-called global AdS. In the global AdS, AdS is essentially a solid cylinder. There is a time direction, which we call tau. And then there's a radial direction we call rho. And then there are some angular directions. And the metric in this global case can be written as R squared over cosine squared rho, times minus d tau squared plus d rho squared plus sine squared rho d Omega d minus 1 squared. OK? And this rho is the radial direction, and the angular directions are on this Omega d minus 1. OK? And then when rho goes to pi over 2, then, again, the overall prefactor blows up. So at rho equal to pi over 2, this is the boundary. So the boundary here is S d minus 1 times R, S d minus 1 times the time. So it's just really a cylinder. And here, in the Poincare patch, the boundary is R 1, d minus 1, just the Minkowski spacetime, OK? So these are the two ways I want you to think about the Anti-de Sitter spacetime. And also, yeah-- so before we continue, do you have any questions? Yes? AUDIENCE: Just a short one. So you draw a cylinder, but the spatial part of that metric is not precisely a cylinder. HONG LIU: Hm? AUDIENCE: I mean, the spatial part of the metric is not precisely [INAUDIBLE]. HONG LIU: You mean the boundary? AUDIENCE: No, I mean the sine squared rho term. That's a cylinder. HONG LIU: Oh, I'm just saying the topology. I'm saying the topology is a solid cylinder. Yeah, the topology is a solid cylinder, yeah. Any other questions? OK, good. So also, let me remind you of the original definition of the Anti-de Sitter spacetime. So Anti-de Sitter spacetime can also be defined as a hyperboloid, a hyperboloid in a d plus 2 dimensional Minkowski spacetime. OK, so AdS d plus 1 can also be defined as a hyperboloid in a d plus 2 dimensional Minkowski spacetime with 2 times. So it's a (2,d) signature with 2 time directions. OK? So now, let's talk a little bit more about the-- last time we talked about the geometry of AdS.
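[Editor's aside: for the record, the two metrics and the embedding just described read as follows; these are the standard forms, and the labeling of the two timelike embedding coordinates is mine.]

Poincare patch of AdS_{d+1}:
\[ ds^2 = \frac{R^2}{z^2}\left( -dt^2 + d\vec{x}^{\,2} + dz^2 \right), \qquad z \in (0,\infty), \]
with the boundary at z to 0, of topology R^{1,d-1}. Global AdS_{d+1}:
\[ ds^2 = \frac{R^2}{\cos^2\rho}\left( -d\tau^2 + d\rho^2 + \sin^2\rho\, d\Omega_{d-1}^2 \right), \qquad \rho \in [0, \tfrac{\pi}{2}), \]
with the boundary at rho to pi/2, of topology S^{d-1} x R, the cylinder. And the defining hyperboloid in R^{2,d}:
\[ -(X^0)^2 - (X^{d+1})^2 + \sum_{i=1}^{d} (X^i)^2 = -R^2 . \]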
Now, we talk about little bit the symmetries of AdS. So let's just further that discussion, so d plus 1 dimension of AdS, Anti-de Sitter spacetime. So I assume all of you are familiar of is the concept called isometry. If not, it's very easy to understand. So isometry refers to the class of coordinate transformations, which leaves the metric invariant. OK? And it's, say, if it's a Minkowski spacetime, then the isometry will the Lorentz transformation plus the translations, OK? So that's the isometries. And so for this AdS, by definition, the isometry would be SO(d,2), because this is hyperbole for AdS, because the AdS is defined as a hyperboloid in this d plus 2 dimensional Minkowski spacetime with signature (2,d). And this hyperboloid preserves the Lorentz transformation. So the AdS, the symmetry will be SO(d,2). Just the Lorentz symmetry of this embedded Minkowski spacetime. There's no translation because this equation breaks the translation. OK? Because this equation breaks the translation. So yeah, so you can see immediately just from there. But what we will be useful later is to understand how this is refracted in the so-called Poincare patch. OK? So let's talk about how this symmetry is realized in the Poincare patch, which I will use these z coordinates, which I will use these z coordinates. So first, they are translations, which we can translate, say, x mu to some constant. So by x mu, I always refer to using these Poincare coordinates. The x mu always refer to t and the vector x. OK? So this is the-- so first, obvious, there's a translation in the t and x direction, because that's independent here [? in the ?] x. And then this also Lorentz transformation in the x direction, because of the dependence on the t and x is just the Minkowski metric, because each is [INAUDIBLE] that Minkowski metric. OK? So this has d generators. So this have 1/2 d times d minus 1, because this is the Lorentz transformation in the d dimension. And then, also, there's scaling. So that metric, it's invariant under such a scaling. So just scale x mu and z together. And clearly, the metric does not change. So if you scale z and x together, then all the scaling cancels. No, of course, this does not change the metric. OK? So there's also scaling, so this have one generator. So the last isometry in the-- it's called special conformal transformation. The last image of this space, which is called the special conformal transformation, it's a little bit more complicated. Let me just first write it down. Then I will try to give you some intuition about it. So the [? free ?] parameter, so the parameter for this transformation of [? actor ?] b mu. OK? So I will introduce. So the transformation is the following. z go to z prime equal to-- and the x mu goes to x mu prime. OK? So b mu is just an arbitrary constant vector, just constant vector. And so b mu squared, so b squared is just your standard b mu b mu. Under the A square, it's defined to be z square plus x mu x mu. OK? So you can check yourself, requires a little bit of effort. You plug this transformation into this expression. And then you can show that this is invariant. OK? So this have d parameters, because b. So the transformation parameters, they are the d parameters. So they are d parameters here. OK? Because of the b mu. So this transformation can be understood in a slightly easier way, as follows. It's as you will also do this in your p set, is that you can check. So to understand this Special Conformal Transformation, so I called SCT. 
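[Editor's aside: for reference, the special conformal transformation being described, written out explicitly together with the generator count, and the inversion that the lecturer introduces next as a way to understand it. The sign of b^mu is a convention; some references use minus b^mu.]

\[ A \equiv z^2 + x^\mu x_\mu, \qquad D \equiv 1 + 2\, b\cdot x + b^2 A, \]
\[ z \to \frac{z}{D}, \qquad x^\mu \to \frac{x^\mu + b^\mu A}{D}. \]
Counting the parameters: d (translations) + d(d-1)/2 (Lorentz) + 1 (scaling) + d (special conformal) = (d+1)(d+2)/2, which is exactly the dimension of SO(d,2). The inversion and the decomposition discussed next are
\[ I:\ z \to \frac{z}{z^2 + x^2}, \quad x^\mu \to \frac{x^\mu}{z^2 + x^2}, \qquad I^2 = 1, \qquad \mathrm{SCT}_b = I \circ T_b \circ I, \]
where T_b is the translation x^mu to x^mu + b^mu.

Since "you can check yourself" comes up a couple of times here, a minimal symbolic check that the inversion is an isometry of the Poincare-patch metric; this is a sketch using sympy, with d = 4 and the coordinate names and signature convention chosen by me:

```python
# Check that the inversion  z -> z/A, x^mu -> x^mu/A  with  A = z^2 + x.x
# (mostly-plus signature) preserves  ds^2 = R^2/z^2 (-dt^2 + dx^2 + dz^2).
import sympy as sp

t, x1, x2, x3 = sp.symbols('t x1 x2 x3', real=True)
z, R = sp.symbols('z R', positive=True)
coords = [t, x1, x2, x3, z]

eta = sp.diag(-1, 1, 1, 1, 1)                 # flat metric on (t, x, z)
g = (R**2 / z**2) * eta                       # Poincare-patch AdS_5 metric

A = -t**2 + x1**2 + x2**2 + x3**2 + z**2      # x.x + z^2
new = [c / A for c in coords]                 # image point under the inversion

J = sp.Matrix([[sp.diff(nc, c) for c in coords] for nc in new])  # dx'^a/dx^b
g_at_new = (R**2 / new[4]**2) * eta           # metric evaluated at the image point
pullback = sp.simplify(J.T * g_at_new * J)    # pulled-back metric (J^T g J)

print(sp.simplify(pullback - g))              # expect the 5x5 zero matrix
```

The same pullback computation, with `new` built from the special conformal transformation above instead, checks that transformation as well, just more slowly.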
First, you can check the following is isometry, something we call inversion. It said if z goes to z divided by A, so A is this quantity. Oh, A, I think I'm having some notation error, myself. So I should call it A, rather, A squared. Sorry. It doesn't matter. I can also call it A squared, but let me just call it A. I think that one would be [? A2. ?] Yeah, that is [? A2. ?] So, yeah, I could just call everything A squared. But to be consistent with my notation on the [? notes, ?] just let me call it A. So you can also check that the following discrete transformation is also isometry, so when z equal to z divided by A, and x mu goes to x mu divided by A. OK? You can check this is isometry. So if you plug this into this metric, you find this [? leaves ?] the metric invariant. And this thing is much easier to check. But as you will do in your p set, and you can check yourself, this discrete transformation actually is not part of the SO(d,2). It's, in fact, part of the O(d,2). So if you calculate its determinant, say the Jacobean, actually find the minus 1, rather than 1. OK? So this is not a proper Lorentz transformation. It's not a proper transformation, what we normally call the proper transformation. But however, if you do this twice, the minus 1 times minus 1 equal to 1, then you get the proper transformation. But, of course, if you do this twice, you just go back to itself. You can check. This is inverse of itself. So this is like a [? z2 ?] transformation. But you can do [? a slightly ?] [? trick ?]. Then you can show that this special conformal transformation is given by an inversion. You first do an inversion, and then you follow the [? bad ?] translation in b mu. And then invert it back. OK? So even though you can check that I squared is equal to 1, well, now in the middle, you have added the translation in b mu, a constant. And this is a symmetry. And, of course, the whole thing will be a symmetry. But now, this is a proper transformation, because I have act I twice, OK? So you can now check this transformation is precisely-- if you do this, it's precisely just that transformation. OK? So if you add all of them together, d will have d minus 1 d. Then you can show actually they form this symmetry group. They form this symmetry group. So altogether, you have d plus 1/2 d times d minus 1 plus 1 plus d. Then that give you 1/2 d. You can check yourself, like 1/2 d plus 1 and d plus 2, which is precisely the number generated for that one. OK? Yes? AUDIENCE: So the total here that dimension, the d plus 1-- HONG LIU: Yeah. AUDIENCE: Why like even translation, it only has a d dimensional invariant. HONG LIU: No, because this depend on z, so the translation [? in ?] z is not invariant. AUDIENCE: Oh, OK. Yeah, so that would be-- HONG LOU: Because this depend on z. AUDIENCE: Yeah. HONG LIU: So if you translate z, this is not the isometry [? under ?] the metric changes. So only the translation t and x is a symmetry. So that's why we only do the translation x mu. AUDIENCE: I have some questions. HONG LIU: Yes? AUDIENCE: So conversion is isometry. HONG LIU: Yeah. AUDIENCE: Yes? OK, so but inversion, there's a [? release ?] not in the SO(d,2)? HONG LIU: Yeah, right. AUDIENCE: So that means the symmetry, isometry [? work ?] [? with ?] [INAUDIBLE] [? SO? ?] HONG LIU: Yeah, yeah. It's the standard story. You have O. You can have an O symmetry. There's a discrete part, which is O/ When you're transformation, you're changing the Jacobean, whether your Jacobean is 1 or minus 1. You just actually [? 
exactly ?] the standards of the transformation in the Minkowski spacetime. In the Minkowski spacetime, you can also can see the [? site ?] transmissions. AUDIENCE: OK. HONG LIU: Yeah, just the same thing, because this is just the transformation in the Minkowski spacetime of this embedded space. AUDIENCE: So in the [INAUDIBLE] metric, we also have a, like a-- determine [INAUDIBLE] transformation. HONG LOU: Exactly the same as the standard Minkowski space. AUDIENCE: OK. AUDIENCE: But discrete part just like a [INAUDIBLE]? HONG LIU: Oh, no, it depends. Yeah, in the old dimension, yeah, in the 3 plus 1 dimension, just the [? parity. ?] That's right. In the 2 plus 1 dimension, if we invert all directions, then it's not. Right? Yes? AUDIENCE: Is there an easy way to see the special conformal symmetry of the system? HONG LIU: Hm? AUDIENCE: Is there an easy way to see a symmetry from the metric? HONG LIU: Yeah, you just plug this in. [LAUGHTER] AUDIENCE: Well, I mean, how do you know that there is such symmetry, [? complicated ?] symmetries? HONG LIU: No, just say another way to understand is to do this one. So this inversion is a much simpler transformation, which you can easily check [? its ?] isometry. And then this procedure will guarantee this is isometry. [? because ?] each step is isometry. Good? OK, so after talking about the-- so that concludes our very quick review about the geometry and properties of Anti-de Sitter spacetime. AUDIENCE: But maybe one more quick question. So since you said the isometry [? bigger, ?] and so if you why you only answer on the [? SO? ?] HONG LIU: It's the same thing. In Minkowski space, we can also separate the discrete transformations and consider the continuous part. AUDIENCE: Oh. HONG LIU: Yeah, so it's just exactly the same thing. AUDIENCE: OK. HONG LIU: Yeah. Certainly here, we can consider the inversion too. We just here, I'm considering the part, which connect to the [? identity ?]. AUDIENCE: OK. HONG LIU: Any other question? AUDIENCE: So is it true that in some dimensions the inversions [INAUDIBLE] isometry? HONG LIU: I think-- no, I think actually more dimensions. This is not. In more dimension, this is [INAUDIBLE]. Yeah, it-- AUDIENCE: [INAUDIBLE]. HONG LIU: Inversion is always isometry in any dimension. AUDIENCE: But it's just that the determinant [INAUDIBLE]. HONG LIU: No, it's always minus 1, any dimension. AUDIENCE: Oh, OK. HONG LIU: Yeah. Other questions? Good. So after talk about Anti-de Sitter spacetime, now we can talk a little bit about the string theory in Anti-de Sitter spacetime. OK? We can talk about string theory. Now, let's talk a little bit more about a string [INAUDIBLE] the distance [INAUDIBLE]. This discussion is very easy and essentially trivial because we know very little about this string theory in Anti-de Sitter spacetime. [LAUGHTER] And so there's not much to talk about. But there are a few general statements we can say. So there are a few general statements we can say. First is that both AdS 5-- so we have said that AdS 5 is the maximally symmetric space of negative curvature. And we all know from your really kindergarten years, that S5 is the maximal symmetric space of positive curvature. So this combined together is really a maximally symmetric space OK? So this AdS 5 times S5, so AdS 5 times S5 is a homogeneous space, homogeneous spacetime, maximally symmetric homogeneous spacetime. 
So whatever string theory in this space, it's-- so the only scale in this space is the curvature, curvature radius, and which is the same everywhere. OK? It's homogeneous. It's the same everywhere. So essentially, R, so this R will be just a single parameter to characterize the curvature of this spacetime, so single number, which characterize the curvature of the spacetime. So even the space is not homogeneous, and you have to specify which place, which curvature, et cetera. But this is the maximally symmetric space, homogeneous. So a single number captures the curvature everywhere. So that means that in string theory on this spacetime, it's really just characterized by the following dimensionless parameter, so alpha prime divided by R squared. I mean, alpha prime is a dimensional parameter. But in the end, it's given by the following dimensionless parameter. And only for any physical process can only this dimensionless parameter goes in, not separate of them, because there's no other scales. i is the only scale, and alpha prime is the only [? other ?] scale. And so for any physics, the only dimension parameter can come in is this one. And, of course, there's another dimensionless parameter is the gs, is what we call the string coupling. So essentially, the series is specified by these two parameters. OK? And depend on what you want, you can also construct the Newton constant. The Newton constant can be expressed in terms of gs and 1/2 prime. So in type IIB string, the relation, so we are talking about type IIB string, the relation between the Newton constant and the, say, alpha prime and the gs is given by the following. So instead of a considering these two parameters, you can exchange gs by Newton constant. So you can alternatively consider, say, gn to the R to the power 8, or alpha prime to the R squared, or just characterized by these two dimensionless numbers. After your choice, sometimes this is more convenient, and sometimes, this is more convenient. So if you talk about string theory, work with the string theory [? parse ?] [? integral, ?] et cetera, then this is more convenient. But if you think about the gravity, then the Newton constant appears naturally. And then this become more [? natural. ?] OK? So the fact of the theory is controlled by these two dimensionless parameters. AUDIENCE: The R is a constant R? HONG LIU: Yeah, same R. It's just the curvature radius over the spacetime. The whole spacetime, it's just controlled by the single R. AUDIENCE: Yeah, yeah. HONG LIU: Yeah, yeah, also, implicit in the discussion here, which is in the metric where I wrote last time, is that the curvature radius here is exactly the same as the curvature radius here. And so the single curvature radius characterizes the whole thing. OK? OK, good. So this means that whatever quantum gravitational theory or string theory in this spacetime, you can consider q limit, based on this 2 dimensionless parameter. So first is so-called the classical gravity limits. So this is the limit, which the string coupling goes to 0. So that means-- remember, the string coupling controls the [? loop ?] [? corrections ?] over the strings. And also, essentially, it's the fluctuation over the spacetime. And so the string coupling is 0. And also, the alpha prime divided by R squared goes to 0. OK? So this alpha prime divided by R squared goes to 0. So this is can be considered as a point particle limit. Because alpha prime essentially characterize so the [? sides ?] of a string. 
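[Editor's aside: the type IIB relation quoted above, "given by the following", is, in the standard convention (the precise numerical factor is quoted from memory, so treat it as an assumption):]

\[ 16\pi G_N = (2\pi)^7\, g_s^2\, \alpha'^4, \qquad \text{so} \qquad \frac{G_N}{R^8} \sim g_s^2 \left( \frac{\alpha'}{R^2} \right)^4, \]
and one may use either the pair (g_s, alpha'/R^2) or the pair (G_N/R^8, alpha'/R^2) as the two dimensionless parameters.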
And when the side of the string is much smaller than the curvature radius, and then essentially, it's a point particle limit. So this is the standard classical gravity limit. So in this region, we have classical gravity. And you cannot tell that those particles are strings. They're just like ordinary particles. OK? So similarly, so g string [? to ?] [? 0 ?] can also be considered as the limit. The Newton constant goes to 0. It's the same thing. OK? It's same thing. And so in this region, [? you ?] get classical gravity. And more specifically, in our case, we get to have to be super gravity, which we briefly mentioned before. So there's a lot of the region is called the classical string limit. In this case, the alpha prime divided by R squared can be arbitrary. Does not have to be small. So in this region, than the string [? G ?] [? fact ?] will be important. So you should be able-- then that means that the spacetime curvature radius is comparable to the string sides. Then in this case, you can no longer treat the string as a point particle. But we still take g string goes to 0. So that means that the quantum fluctuation is small. The spacetime quantum fluctuation is small. So in this case, you still have a classical theory. But it's a classical string theory, rather than a classical gravity theory. OK? Any questions regarding this? So there's another-- something else? AUDIENCE: No. HONG LIU: So there's one feature in this spacetime. So this is altogether 10 dimensional spacetime. There's one feature of this spacetime. It said S5 is a compact space with a finite volume. And AdS 5 is uncompact OK? So normally, when you have a compact space, so it's convenient-- so S5 is compact. So it's actually convenient inside your situation to express whatever, say, 10 dimensional field in terms of-- yeah, it may be to not express, to expand, but in 10 dimensional fields, in terms of harmonics, [? inverse ?] 5. OK? Because there's a discrete set of harmonics on S5, you can always expand some 10 dimensional field on it. So for example, if you have a scalar field, so supposing if you have a 10 dimensional scalar field, say we have x mu. We have z. So this is AdS5 part. And them let me call it omega 5, which is S5 part. Then you can always, say, so you can expand these scalar fields in terms of spherical harmonics on S5. OK? Then what you get is you get a tau field in AdS 5. And then this are spherical harmonics in S5. OK? And in particular, when you do this expansion, than the higher harmonics, they will develop a mass, because of the curvature of the S5. So the lowest mode will be independent, say, of the coordinate on the S5, where the higher mode will depend on the coordinate of S5. And because of those dependence, because of the curvature of [? base ?] 5, they will develop a mass. So the lowest mode would be massless, then they will be followed by tau of massive modes, controlled by the [? size ?] of AdS 5. OK? So you can do this for any field, any particular [? the ?] metric. So when you do this, essentially, only the massless graviton in 5 dimension will [? mediate ?] no [? range, ?] say, gravitation [? interactions ?] in AdS 5. So essentially, so the gravity, so at a long distance, the gravity is essentially 5 dimensional. OK? It's the same in 5 dimen-- because you can always reduce on S5. So there's a slightly tricky thing here. But it's only-- it's that the size of S5, the curvature radius of S5 is actually the same as the curvature radius of AdS 5. 
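[Editor's aside: schematically, the expansion just described is the following, with Y_l the spherical harmonics on S^5; the explicit mass formula is the standard scalar-Laplacian eigenvalue on a sphere of radius R and is my addition.]

\[ \phi(x^\mu, z, \Omega_5) = \sum_{\ell} \phi_\ell(x^\mu, z)\, Y_\ell(\Omega_5), \qquad m_\ell^2 \sim \frac{\ell(\ell+4)}{R^2}, \]
so the l = 0 mode is massless in five dimensions and the higher harmonics acquire masses of order 1/R.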
So there's not really a hierarchy here in terms of the curvature radius. But there's still an important difference. AdS 5 has infinite volume; S5 only has a finite volume. So S5 is compact, and AdS 5 is noncompact. OK? AUDIENCE: But why the gravity [? doesn't-- ?] HONG LIU: Hm? AUDIENCE: But why doesn't the gravity see this S5? HONG LIU: No, it does-- it has the same curvature radius as the S5. But just as I said, this is a compact part. You can always do a reduction. And then the higher graviton modes will develop a mass. AUDIENCE: Oh. HONG LIU: Yeah, they develop a mass, yeah, even though that mass is not very big. Yes? AUDIENCE: So if we applied this to the real world, then what would the estimate for R be? HONG LIU: Sorry? AUDIENCE: What's the estimate of this radius, this curvature, if it should be comparable to our world? HONG LIU: [INAUDIBLE] the size of the universe. AUDIENCE: But then doesn't it mean that we only should worry about the mass of this compact mode? HONG LIU: Yeah. AUDIENCE: Only on distances much longer than ours? HONG LIU: Yeah, yeah. I mean, I'm just saying, at very large distance, it's essentially 5-dimensional. But it's still-- you can talk about the distance scale, et cetera. But here, R is the only scale. It doesn't matter how large R is. Everything is measured against R. R provides your units. AUDIENCE: Yes, but when we talk about distances shorter than R then-- HONG LIU: Yeah, if you talk about a distance shorter than R, it doesn't matter. It's like a 10-dimensional-- it's like a 10-dimensional [INAUDIBLE]. But AdS is noncompact, you can go as much distance as you want. I'm just saying, the story is slightly tricky because they have the same curvature radius. But I'm trying to describe it in that spirit. And also, mathematically, because S5 is compact, you can always do dimensional reduction on it. And this provides a convenient way to organize fields, in terms of 5-dimensional fields. AUDIENCE: OK. HONG LIU: OK. So one thing we will often use is, indeed, just to consider the dimensional reduction of the gravity to 5 dimensions. So suppose this is the Einstein-Hilbert action in 10d. So this is the AdS 5 part. This is the S5 part. So you have some 10-dimensional curvature-- so R here is the Ricci scalar in 10d. OK. Now, suppose we consider the lowest mode for the metric, which has no S5 dependence. Then you can just reduce this to a 5-dimensional theory of gravity, and the volume of the S5 part will just factorize. Then you have a 5-dimensional Ricci scalar. OK? And this V5 is just the volume of S5. OK? And now, we can absorb this V5 into the downstairs, and then define your effective 5-dimensional Newton constant, which we call G5, which is defined to be GN divided by the volume of the 5-sphere. OK? And the volume of the 5-sphere is, essentially, just pi cubed. OK? And then, of course, there's R to the power of 5. So this is the quantity we will often use. OK? So after the dimensional reduction, in the end, you get an action of the form 1 over 16 pi G5, so the 5-dimensional Newton constant, times the integral d5x over just the AdS part. And then you have the gravity part of the action, and then plus many, many matter fields. So this, essentially, will be the structure of your action when you do the dimensional reduction. OK? AUDIENCE: Shall we have matter fields on the S5? HONG LIU: No, you reduce everything on S5.
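[Editor's aside: writing the reduction out, with V_{S^5} the volume factor just mentioned.]

\[ \frac{1}{16\pi G_N} \int d^{10}x\, \sqrt{-g}\; \mathcal{R}^{(10)} \;\longrightarrow\; \frac{V_{S^5}}{16\pi G_N} \int d^{5}x\, \sqrt{-g_5}\; \big( \mathcal{R}^{(5)} + \cdots \big), \]
\[ G_5 \equiv \frac{G_N}{V_{S^5}}, \qquad V_{S^5} = \pi^3 R^5, \]
so the 5-dimensional action takes the form (1 over 16 pi G_5) times the integral of the 5-dimensional curvature plus matter terms.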
And once you have done that, then S5 will disappear. AUDIENCE: But when we do, say, the R is [? always ?] a constant, so we can do the integral easily in S5. But if we don't have some mass [INAUDIBLE]. HONG LIU: Yeah, so no, it doesn't matter. So you can take-- this is a standard story for [INAUDIBLE] reduct-- a standard story. When you expand around in this, so this will go into here. Then the only dependence on the S5 part will depend on those things. AUDIENCE: Uh-huh. HONG LIU: Then you just can't integrate those things. AUDIENCE: Oh. HONG LIU: You can always integrate them. AUDIENCE: Oh, OK. HONG LIU: And they just give you some integrals, some numbers. It would just give you some numbers. And in the end, you can always write the action in terms of 5 dimensional action. AUDIENCE: But that solution is a classical solution for the [INAUDIBLE] equation. HONG LIU: No, no, no. This is not a classical solution. I'm just doing an expansion. It's just like doing a Fornier transform. AUDIENCE: Oh, OK. HONG LIU: This is expand in terms-- in some basis. No, have not done anything. This just a mathematical rewriting, AUDIENCE: Mm-hm, OK. HONG LIU: Yes? AUDIENCE: But I think in terms of expression for G Newton into the affected-- don't we have that this G5 is like G squared alpha for the [INAUDIBLE] [? divided by ?] [? R to the 5th y ?] can put [? a use it ?] to this alpha R squared combination? HONG LIU: Sorry-- AUDIENCE: Express G5 in terms of GS and alpha prime. HONG LIU: Yeah? AUDIENCE: And [INAUDIBLE]. Why don't it look like only combinations of R prime over R squared and GS? HONG LIU: Sorry, I don't understand what you're asking. No, G5 is a dimensional parameter. 5 dimension Newton constant had dimensions 3. So whatever dimension here, some [? other ?] dimension there would be compensated by R5. But you always have 3 dimension left. So this is a dimension 3 number, yeah. Any other questions? Good. OK, so then this concludes our discussion about string theory in AdS 5 times S5. As I said, there's not much to talk about it. And so now, let's look at the other object, which is the N equal to 4 super-Yang-Mills theory, which is the other side of the equation, of the duality equation. OK? So in this case, we have a lot to talk about it. We, in principle, have a lot to talk about it, but we won't have time. So we will also not talk much about it. So we are only mentioning a few essential things, OK? So the field content, say, of a N equal to 4 super-Yang-Mills theory, which we already learned, which we already learned from the [INAUDIBLE] theory on the D3-brane is that you should have a gauge field. And then you have a 6 scalar field, because 1 into 6 transverse direction over the D3-brane. OK, you have 6 scalar fields. And then when you include the super symmetric center, in the super string, then they actually also the massless of fermions. So this is 3 plus 1 dimension. OK? So this is 3 plus 1 dimension. So in 3 plus 1 dimension, you can represent the fermions conveniently by two components, for example, two component Weyl fermions. They it turns out there are 4 such Weyl fermions. So A will go to 1 to 4. So alpha is a spinor indices. And A just neighbor different spinors. So altogether, you have four spinors. OK? And so this is, essentially, the field content of the N equal to 4 super-Yang-Mills theory. So we would not worry about the fermions. So but I'm just mention here for completeness. 
So if you have a U(N) gauge theory, say, for the N D3-branes, then all these fields are in the adjoint of U(N). In other words, each of them is an N by N Hermitian matrix. OK? So each of them can be represented by Hermitian matrices. So altogether, yeah, yeah-- so let me not write it. Let me just say it in words. So altogether, you have 8 bosonic degrees of freedom. Because for the photon, you have 2 physical degrees of freedom. And here, with the 6 scalar fields, altogether you have 8 bosonic degrees of freedom. And for the fermions, you can also count that the 4 Weyl spinors actually have 8 physical degrees of freedom. So altogether, you have 8 N squared. Each of them is an N by N matrix. So you have 8 N squared bosonic degrees of freedom, and 8 N squared fermionic degrees of freedom. OK, for the-- AUDIENCE: Why would we have 8 fermions? HONG LIU: Hm? AUDIENCE: We have 8 fermions? HONG LIU: No, we have 4 fermions, but each fermion has some spinor components. AUDIENCE: Oh, OK, OK. HONG LIU: Yeah. AUDIENCE: Where did the number come from? HONG LIU: Sorry? AUDIENCE: Where did the 4 fermions come from? HONG LIU: That comes from a calculation. This I cannot explain here. This, I can only quote as a fact. It's just that if you work out the theory on the D3-brane, then there are 4 Weyl fermions, yeah, which we did not go through, because we did not do the superstring. Yeah. AUDIENCE: Should this come from the fact that it's-- [INAUDIBLE] symmetry? HONG LIU: Yeah, it comes from the supersymmetry. Yeah, it comes from the string theory, the superstring theory. So in our discussion, we only discussed the bosonic part. We never had time to discuss the fermionic part, et cetera. So let me now mention one important point, which I think is-- I hope is self-evident to you. It's that in this U(N)-- actually, there's a U(1) which decouples. Actually, let me just write down the Yang-Mills theory first. So let me write down the Yang-Mills theory first. So you can write down the Yang-Mills theory, which we actually wrote down before. Let me only write down the bosonic part. So essentially, it's just given by 1 over the Yang-Mills coupling squared times a trace. And this is a covariant derivative acting on the scalars, standard covariant derivatives. And the i and the j are all summed. OK? So i and j are all summed. So this is the Lagrangian, then plus the fermionic part. So this is the bosonic part, then plus the fermionic part. OK? Yes? AUDIENCE: Are the fermions coupled to the phi fields? HONG LIU: Yeah, they're all coupled. They're all coupled. AUDIENCE: How do they couple? HONG LIU: Standard Yukawa coupling. Yeah, so the key is, they couple through a particular coupling. The whole thing can be shown to have a single coupling. In principle, if you have so many fields, you can have many, many different possible couplings. You can have one coupling between them, one coupling between those, et cetera. You can have different components and self-couplings, et cetera. And the key point of N equal to 4 super-Yang-Mills theory is that all these couplings are equal up to constant factors, which is precisely what gives you the supersymmetry. Anyway, so I think it's obvious to you that if you have a U(N) Yang-Mills theory, the U(1) part decouples. OK? OK, the U(1) part decouples. So U(N), you can always decompose it into SU(N) times U(1).
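[Editor's aside: the bosonic Lagrangian being written on the board is, up to normalization conventions (for instance, whether the coupling sits out front, as here, or inside the covariant derivative):]

\[ \mathcal{L} = \frac{1}{g_{YM}^2}\, \mathrm{Tr}\Big( -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu} - \tfrac{1}{2} D_\mu \phi^i D^\mu \phi^i + \tfrac{1}{4} \sum_{i,j} [\phi^i, \phi^j]^2 \Big) + \text{fermions}, \qquad D_\mu \phi^i = \partial_\mu \phi^i - i\,[A_\mu, \phi^i], \]
and the counting quoted above is (2 + 6) N^2 = 8 N^2 bosonic and 4 x 2 x N^2 = 8 N^2 fermionic physical degrees of freedom, since every field is an N by N Hermitian matrix.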
Under the U(1) part, [INAUDIBLE]. So each A, each field is N by N matrix. And the U(1) part is the part which is proportional to the identity matrix. And the SU(N) part is the part which given by N by N trace this matrix. And so the U(1) part is proportional to the identity matrix. So but you can immediately see from here, anything which is proportional to the identity matrix all the commentators will vanish. OK? So essentially, U(1) part is a free theory. So U(1) part is a free theory. Yeah, if this is not clear to you, you can easily convince yourself afterwards. And this is also where-- this is also physically clear from the point of view, thinking that this is coming from the D-branes. Coming from the D-brane, you have, essentially, you have N D-brane together. And this Yang-Mills theory essentially describes the low energy dynamics of D-branes, brains, so how all these N D-branes interact with each other. But no matter know how they [? co-communicate ?] [? interact ?] with each other, there's always a center of mass motion. Does not depend on their internal structure. And that central mass motion essentially is just this U(1). And the central mass motion always decouple. OK? So U(1) decouples. And you can do the check from here. A U(1) decouples. OK? AUDIENCE: Yeah. HONG LIU: Good. So now let me-- so the interacting part is just SU(N). OK? The interacting part is just SU(N). So now, let me say a few words regarding the properties of the theory. So first, it's that this theory has N equal to 4 supersymmetry. OK? So this is another important remark. This is just explain the name, just explains the name. So in supersymmetric theories, so you have a transformation between boson and the fermion. OK? You have some-- the theory is invariant on the transformation between boson and fermion. So on the [? side of ?] transformation, the [? conserved ?] charge, so for [INAUDIBLE] transformation, then you have loss of charge. [? For ?] [? such ?] [? as ?] transformation and loss of charge can be described by a spinor. OK? So normally, we say N equal to 1 supersymmetry if the super charge is given by a single Weyl spinor in 4 dimension. So N equal to 4 means that the total number of such [? conserved ?] charge is actually given by the 4 Weyl spinors in the 4 dimension. So it doesn't matter, so these have a number of supersymmetries. OK? And also, if you know a little bit of supersymmetry, then this is actually the maximally allowed, the supersymmetry in 4 dimensions, for [INAUDIBLE] field theory. So this is actually maximum allowed supersymmetry. But this will not be important for us. OK? But this will be. But the next point will be very important for us. It's that because of this theory is so symmetric, so G Yang-Mills coupling, its dimensionless, classically. OK? Because this is a 4 dimension, the Yang-Mills coupling is dimensionless, classically. And the same thing with our own QCD, our own QCD coupling this dimensionless, classically. But then quantum mechanically, then the coupling actually changes with the scale, OK, because over the quantum corrections, et cetera. So generic coupling changes with scale. But N equal to 4 super-Yang-Mills theory is special. But this actually, this Yang-Mills coupling does not change with the scale, even at the quantum level. So the quantum level, this remains, so dimensionless coupling. 
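[Editor's aside, two small counting remarks, both my additions: a 4-dimensional Weyl spinor has four real components, so N = 4 means 4 x 4 = 16 real supercharges, the maximal amount for a non-gravitational field theory; and the classical dimensionlessness just stated follows from simple dimension counting in d = 3 + 1:]

\[ [A_\mu] = [\phi^i] = 1, \quad [F_{\mu\nu}] = 2 \;\Longrightarrow\; \Big[ \tfrac{1}{g_{YM}^2}\, \mathrm{Tr}\, F_{\mu\nu} F^{\mu\nu} \Big] = 4 \ \text{ with } \ [g_{YM}] = 0 . \]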
So using the technical term, for those of you who have studied the QC Yang-Mills theory, is that the beta function at the quantum mechanical level, the beta function for G Yang-Mills is actually 0, which means you can really treat this G Yang-Mills as a parameter, as a dimensionless parameter, even quantum mechanically. OK? So this is a genuine parameter. In contrasting our QCD, there's no such parameter. OK? In QCD, we only have a scale. There's no parameter. Good? So any questions regarding these? AUDIENCE: But you said you wouldn't know the coupling [INAUDIBLE] coupling because of [INAUDIBLE]? HONG LIU: Yeah. AUDIENCE: That means this is not [INAUDIBLE]? HONG LIU: No, it's not. No, in the-- AUDIENCE: It's not [INAUDIBLE]. HONG LIU: In QCD, the Yang-Mills coupling is not the parameter. It's not the dimensionless parameter. AUDIENCE: OK. HONG LIU: It translates into a scale. AUDIENCE: Yes. HONG LIU: It translates into a scale. AUDIENCE: So-- HONG LIU: It translates into a mass scale. AUDIENCE: Yes, but can we find a massless parameter in [INAUDIBLE] like this? HONG LIU: No, because the beta function is nonzero there. AUDIENCE: Ah. HONG LIU: Yeah, yeah because the beta function is nonzero there. OK? And also because of this, the beta function is 0. This theory is actually conformally invariant, theory is actually conformally invariant. OK? So which we'll normally call, say, it's a conformal field theory, because it's a CFT. So one way to understand this is because of the beta function is 0, essentially, the theory does not have a scale. So theory classical scale, invariant quantum mechanically remains scale invariant. And then you can show that there actually is a little bit more generous conformally invariant. So let me say a little bit about the-- are people familiar with the concept of the conformally invariant, or conformal? AUDIENCE: No. HONG LIU: No? OK. Yeah, so let me say a few words about the conformal. So conformally invariant means that the theory were under the conformal transformations. OK? So let me say a few words on the conformal transformations. So conformal transformations are the following. Say, suppose your spacetime metric is given by some G mu mu. So the conformal transformations are those coordinate transformations. OK? You go from x to x prime. So that under such a transformation, your metric, the transformed metric is related to the original one only by an overall scale factor. OK? So if lambda is equal to 1, then this is what we called earlier isometry, which leave the metric invariant, will be isometry. And if you leave the metric invariant after your overall scale factor, then this is called the conformal transformation. OK? And for the Minkowski spacetime, for Minkowski spacetime, actually, I can just even use this thing here. Yeah, maybe let me-- yeah, let me erase here. So for Minkowski spacetime, say if the G mu mu is equal to [INAUDIBLE] mu mu-- suppose this is a d dimensional Minkowski spacetime-- OK? Then you can show-- you can just work how to solve this equation, work out all possible such transformations. OK? And you can show that conformal transformations are the following. First, they are the standard isometry, which is the translation, because the translation does not give you lambda 1. And then you have a Lorentz transformation, which also give you lambda 1. OK? And they you can have a scaling. Of course, if you scale your coordinate with some factor, then, of course, the metric will only change by overall factor. So this is a translation. 
So this is s Lorentz symmetry. And this is scaling. And then we also have something what we call a special conformal transformation. So this is for-- OK? And the b, again, is some constant parameter. OK? And also for the discrete transformation, also, you can have a discrete. Also you can have a discrete transformations inversion. You [? said ?] x prime mu equal to x mu divided by x squared. OK? So these are all the transformations. In general dimension, which leave you metric invariant, leave the Minkowski metric invariant number up to overall factor. And the conformal invariant theory is a theory which [? involves ?] transformations. OK? And so obviously, a Yang-Mills theory are always invariant on the translation on the Lorentz transformation. But when the beta function is 0, the system does not have a scale. Then it's invariant on the scaling. But typically, in general, 4 dimensional theory, any theory invariant on the scaling is also invariant on this special conformal transformations. So actually, the N equal to 4 super-Yang-Mills theory is actually conformal theory, OK, conformally invariant theory. So now, as part of your p set, you can actually see that this conformal transformations in fact [? have ?] 1 to 1 correspondence with these isometries of the AdS. OK? And this we will see later. This is a part, important part of the relation between AdS. This is an important part of this duality relation. It's that the symmetry have to be the same. So you compare the two. You can see that, indeed, they 1 to 1 correspond to each other. So the conformal transformation add together give you SO(d,2). OK? So this is the conformal group, so-called the conformal group in d dimension. Then you get to the [? SO(d,2). ?] OK? And but in this theory, there's actually also global symmetry, because the N equal to 4 super-Yang-Mills theory come from the D3-brane, which is a transverse space, is the [? rotation of ?] invariant. So this actually our SO(6) symmetry rotates different phi i's. So actually, there's also global SO(6) symmetry, which rotates phi i, OK, and the fermions. OK? And then when you include in the supersymmetry then you get a huge symmetry group. It's normally called a super conformal symmetry. Anyway, let me just write some notation down. It doesn't matter. So if you include the symmetry, the full symmetry group is what we normally call it. So this is a super group, so [? SU(2,2|4) ?] Doesn't matter. So the bottom line is that the N equal to 4 super-Yang-Mills theory is the most symmetric 4 dimensional theory. OK? And almost-- no theory has more symmetries than N equal to 4 super-Yang-Mills theory. Any questions? AUDIENCE: [INAUDIBLE]. HONG LIU: Yes? AUDIENCE: So How do you know that the [? flows are ?] [INAUDIBLE]? [? HONG LIU: Though ?] you can classify it by solving this equation. You can classify it by solving this equation. AUDIENCE: But what's the q [INAUDIBLE]? HONG LIU: No, this is just a notation. AUDIENCE: A notation? HONG LIU: Yeah. Any other questions? So I can say a few words about the conformal field theories. But it's getting a little bit late, and I want to talk about other things. So let me just say it in words. And then you can try to read it in other places. Because even if I just write a couple formulas here, it still won't change you very much. [LAUGHTER] I won't teach you too much. 
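[Editor's aside: collecting the flat-space conformal transformations just listed; the special conformal transformation is the formula the lecturer gestures at, and the sign of b^mu is again a convention.]

\[ x'^\mu = x^\mu + a^\mu, \qquad x'^\mu = \Lambda^\mu{}_\nu\, x^\nu, \qquad x'^\mu = \lambda\, x^\mu, \]
\[ x'^\mu = \frac{x^\mu + b^\mu x^2}{1 + 2\, b\cdot x + b^2 x^2}, \qquad \text{inversion: } x'^\mu = \frac{x^\mu}{x^2}, \]
which generate the conformal group SO(d,2), of dimension (d+1)(d+2)/2, matching one to one the AdS_{d+1} isometries listed earlier. Including the global SO(6) that rotates the phi^i, the bosonic symmetry of N = 4 super-Yang-Mills is SO(4,2) x SO(6), which sits inside the supergroup SU(2,2|4).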
So important thing about this conformal field theory is that each operator, say each local operator-- yeah, you can classify local operators by how they transform, say, under these conformal symmetries. And then you can associate, say, a dimension to each operator, related to how they transform in these conformal transformations. And then you can also show that the symmetry actually dictates the 2-point function and the 3-point function of such operators. And essentially, the structure of the 2-point function and 3-point function are completely fixed. But not higher point function, but only 2 and 3-point functions. Yeah, so that's essentially it. Not much known, many things are known about conformal field theory in 1 plus 1 dimension. And but higher dimensions, yeah, other than what I said, not much. Not too much more is known, yeah. So any questions regarding this? OK, so let me just summarize, then we can have a break. So we can summarize. So let me just summarize what we have done so far, and also, with an important refinement. So what we said is we started with two picture. One is D-brane with some open strings, and then some closed strings can interact with it. And then when we go to the low energy limit, then we get N equal to 4 super-Yang-Mills theory with, now, I say SU(N) plus a decoupled U(1), and then plus what we discussed last time, a decoupled the gravitons in the low energy limit. So let me just write E equal to 0 limit. On the other side, we have this [? pure, ?] [? geometric ?] picture of the curved spacetime produced by those D-branes. So those curved spacetime at infinity behaves like just 10 dimensional Minkowski spacetime. And then when you go to the close to the brane, then the space deforms into the AdS 5 times S5. OK, form into AdS 5 times S 5. Under the low energy limit, because 1 into-- we can see the string theory in the [? very ?] [? much ?] [? tongue ?] this [? throat. ?] OK? So the low energy limit is the string theory. The one here when you're take E equal to 0, we get string theory in AdS 5 times S5, then plus decoupled graviton. Here, also, here, when we look at the geometry of the brane, essentially, we have fixed the location of the brane. So essentially, we have fixed to the center of mass motion. But in principle, you can allow, also, the brane to move anywhere. So here, essentially, there's also a decoupled center of mass motion. OK? So now, let's get rid of this decoupled graviton. Let's get rid of this decoupled U(1), or decoupled center of mass motion. Then what we get is the fully interacting part. OK? So the fully interacting part is so small refinement than what we said last time. We said N equal to 4 super-Yang-Mill theory with [? stage ?] group SU(N) should be the same as the type IIB string in AdS 5 times S5. OK? So, now, other side, now, this is SU(N). And more precisely, this is AdS prime S5 in the Poincare patch. OK? Yes? AUDIENCE: The N just to recall, is the that N comes from the amount of flux on the S5. Is that correct? HONG LIU: Yeah, that's right. That's right. So the N here related to the flux on the AdS 5, it's also related, as we will see, related the Newton Constant, et cetera. Yeah, we will see that, yeah. OK? So now, we can look at this geometric picture. OK? We can look at this geometric picture. Then we observer something interesting. So this SU(N) group, let me say, [INAUDIBLE] R 1,3. OK? On the 3, approximate dimensional-- yeah, let me write it better-- this SU(N) on the space, on the Minkowski space, R 1,3. 
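Coming back for a moment to the remark above about 2- and 3-point functions: for scalar operators O_i of dimension Delta_i, conformal symmetry fixes them to the standard form below (quoted without derivation; normalizations are convention-dependent).

```latex
% Two-point function: nonvanishing only between operators of equal dimension,
% and fixed up to an overall constant:
\[
  \langle \mathcal{O}_{i}(x)\, \mathcal{O}_{j}(y) \rangle
    = \frac{C_{i}\,\delta_{\Delta_{i}\Delta_{j}}}{|x-y|^{2\Delta_{i}}}
\]
% Three-point function: fixed up to a single constant C_{123}
% (with x_{ij} = x_i - x_j):
\[
  \langle \mathcal{O}_{1}(x_{1})\, \mathcal{O}_{2}(x_{2})\, \mathcal{O}_{3}(x_{3}) \rangle
    = \frac{C_{123}}
           {|x_{12}|^{\Delta_{1}+\Delta_{2}-\Delta_{3}}\,
            |x_{23}|^{\Delta_{2}+\Delta_{3}-\Delta_{1}}\,
            |x_{13}|^{\Delta_{3}+\Delta_{1}-\Delta_{2}}}
\]
% Four-point functions and higher are not fixed by conformal symmetry alone.
```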
So now, let's note. Make a simple observation based on the geometry of AdS is that this R 1,3 is actually the boundary of AdS 5. OK? It's essentially the boundary manifold of AdS 5. And the right hand side, no matter whether you do field theory or string theory, you can always do dimensional reduction on S5. You can always decompose everything on S5 in terms harmonics. So essentially, the right hand side, because S5 is a compact space, the right hand side is a 5 dimensional gravity theory. OK? So we actually now here see a 5 dimensional gravity theory. Now, it's equivalent that your 4 dimensional theory [? lives ?] on its boundary. OK? So this really can be considered [? actually ?] realization of holographic principle. So actually, there are two ways to think about it. One way is to think from this-- so previously, we motivated. Long time ago, we motivated the duality using two perspec-- [INAUDIBLE]. One is the holographic principle. Some gravity theory should be [INAUDIBLE] to the [? theory ?] living on this boundary. And also, motivated that the Yang-Mills theory in the gauge group SU(N) or U(N), when you do the [? large N ?] expansion, then behaves like a string theory. OK? So this could also be considered as an explicit example of that relation. That this is a Yang-Mill male theory, and then turned out to be [INAUDIBLE] to a string theory. OK? So now, if you look at this statement, you say, is this a coincidence? Because I don't really see somehow in this picture not so much somehow how why this Yang-Mills theory is actually should be considered as living on the boundary of AdS 5 times S5, because the boundary of AdS 5 times S5, somewhere around here. Yeah, I don't see some Yang-Mills theory living on there, OK? So is this really just some coincidence that just happened to be that? But actually, there's a very important prediction one can make if you take this point of view. Now, I can erase here. Maybe I will do it here. Yeah, but if you take this point of view, then there's a very important prediction you can make. Then there's a very important prediction you can make, and then check whether that has any chance of working. So if you believe this picture, then we can have a [? non-trivial ?] prediction, because AdS also have [? a lot ?] of description in terms of the global AdS. Then let's say, what happens if we put the string theory in this global AdS, which certainly, we should be able to do, because these two only differ by some global structure. OK? And this is one part of that. They only differ by some global structure. So if this is really true, if this is not a coincidence, then we can make a non-trivial prediction is that N equal to 4 is a type IIB string in global AdS 5 times S5. So now, the AdS 5 we take to be the global, to take to be the cylinder. That should be equal to the N equals to 4 super-Yang-Mills theory, which now, should live on the boundary of that cylinder, which is our S3 times R. OK? So this realization will give you a powerful prediction, which in principle, you can check. OK? And, of course, if this is a coincidence, then there's no reason why this to be true. Because from the brane point of view, we cannot really do the sphere. OK? AUDIENCE: Why is [INAUDIBLE]? Oh. HONG LIU: So any questions about this? AUDIENCE: One question [? on that. ?] so when we derive the right hand picture, we impose that there's some charge on the D-brane. But on the left hand side, it seems that we-- [INAUDIBLE] a the charge [INAUDIBLE]? 
HONG LIU: No, the charge is just reflected in open string dynamics. AUDIENCE: OK. So by your right hand side, we will have the charge. HONG LIU: No, you essentially have a charge. But in that description, you don't need to introduce a charge. Whatever charge-- AUDIENCE: If we impose charge if the left hand side will never be changed? HONG LIU: No, no. Just there from the side, you just talk about strings. AUDIENCE: Uh-huh. HONG LIU: You don't talk about the whatever charge. The charge is a gravity description. AUDIENCE: Uh-huh. HONG LIU: The charge don't even arise in this picture. I just have some strings. I have some open strings. I have some closed strings. They interact with each other. AUDIENCE: OK. And did this picture go to low energy means it will appear as a charge of the super-Yang-Mills theory. HONG LIU: No, no, no, no, no. No, no. This goes [? in ?] [? order. ?] You just super-Yang-Mills theory. Yeah. Good? Any other questions? Yes? AUDIENCE: Is this true? HONG LIU: Yeah, this is true. AUDIENCE: OK. [LAUGHTER] HONG LIU: And the yeah. So that means this is not a fantasy. Yeah, this tells this is a fantasy. This is actually a powerful realization. This is a powerful realization, yeah. So let me before conclude-- Let me just quickly mention something just in terms of semantics. So I always say, type IIB string in AdS 5 times S5. So you may ask, does this make sense to say such things? Because if we think this is really as a quantum gravitational theory, so if you have a finite string coupling, then everything fluctuates. And then why we specify a rigid spacetime here? OK? Why do we specify a rigid spacetime here? So the way you should understand the statement is the following, is that indeed, if you, say, consider the finite G Newton, the finite G string, then the spacetime can fluctuate a lot, will typical be a deviate from AdS 5 times S5. But this AdS 5 times S5 specifies it's asymptotic geometry of AdS. It's asymptotic geometry. It's the geometry of AdS very far away near the boundary. And essentially, this can be considered as specifying the boundary condition for gravity, the boundary condition for the quantum gravity. So this, essentially, specify the boundary condition. Good, so let's have a couple minutes break. Then we will talk more about the duality. OK, good. So let's start again. [SIDE CONVERSATION] So now, let's move to a new chapter. So now, have we established the duality. So let's try to understand the duality. So let me call it the duality toolbox. And then we can try to use it. So first, let me say some general aspect on the duality. Oh, I erased. Oh, I should keep that figure there. Anyway, so the first important thing is so-called IR/UV connection. So here, we see the equivalents, say, from the AdS 5 gravity. So we have some AdS 5 gravity to go to N equal to 4 super-Yang-Mills theory in d equal to 4. So from this direction, we may consider as a realization of the holographic principle. OK? We can consider this direction as a holographic principle. And then you may ask, from the field perspective, if we think from this angle, why somehow, this description of the field theory have one more dimension. So this is a 4 dimensional theory. This is a 5 dimensional theory. OK? What does this actual dimension does? What does this x dimension do, OK? How do we understand from the field theory perspective somehow this increase of one dimension? OK? So the answer turns out-- the answer is actually very simple. And it's already nice in a way. 
The answer is already in the way which we take the low energy limit. OK? So remember, when we take the low energy limit-- so let me draw this figure again-- in the gravity side, and this is going to R goes to 0. OK? So by studying the red shift due to the curved spacetime, we show that if you want to go to low energies, then you want to go to smaller and smaller R. OK? So you want to go to smaller and smaller R if you want to go to [? lower ?] and [? lower ?] emerges. OK? So smaller r, and then give you smaller energy. So we use this argument to decouple this, the other factor. We say, if you want to go to low energy, you take r go to 0 limit. Then there are infinite distance between them. But the nice thing about infinities is that after you go to infinity, there's still infinite left. OK? So the same argument also applies in AdS, in the AdS part, after you decouple the other stuff. And the AdS part of this argument still applies. Still, the same thing happens. The same red shift argument applies. So if you want to go to low energies, you want to go to smaller r. OK? And so this immediately tells us that this actual dimension, this r direction, it's precisely the direction, which external to the N equal to 4 super-Yang-Mills theory, transverse to the N equal to 4 super-Yang-Mills theory. So this [? immediate ?] [? here is ?] this actual dimension, this r, or in this coordinate z-- remember, in this z, z related to that thing just by R squared divided by r, so that r related to that z. So that the r goes to 0 limit, it tells you that that r dimension, r can be considered, this actual dimension can be considered as representing the energy scale of the Yang-Mills theory, OK, or the energy scale of the boundary theory. So if you want to go to low energy, you go down [? farthest ?] [? route. ?] OK? This is a very, very important point. And this nice into many phenomenon between AdS and the CFT and the disconnection. Very important, and if you have a good understanding, it will give you lots of good intuitions in you understand the relation. So let me just to elaborate the argument again, now using this metric. OK? OK, so now, we'll go over this red shift argument again, and to use this metric. OK? So again, so the key thing is that the x mu, which you t go to x appear in this metric, is defining the boundary unit. OK? So this defines-- so this defined in the boundary unit. So this is the unit we use in the Yang-Mills theory. OK? So those coordinates are defined in the boundary units. So now, if we want to talk about energies in some bulk, say, the local proper time, and the length, and the total proper length at some, say, at some z, OK, so at some location z in the bulk is related by the standard relation, which we can read from there. It say d tau should be equal to R, so this local proper time. Local proper time should be equal to R divided by z dt. And the dl, so local length scale, should be a proper length, should be related R z times dx. OK? OK, just because of this. You can just read it from here. Locally, you can always write this as a Minkowski metric in d plus 1 dimension. And then for the local observer, and this is the local time, local proper time and local proper length. OK? And then this tells you that you can easily invert this relation. This tells you the energy we observe in the Yang-Mills theory, was just energy we measure in terms of t should be related to the energy, which we measure locally, so all these relation, local energy. 
And length scale, now d in Yang-Mills theory, is related to the local length scale by this relation. OK? So this is an inverse of that because of the tau and the [? energy. ?] This is just the same as that. OK? Good? So now, let's consider the same bulk process. OK? So let's consider the same bulk process. Say, for example, this process we are sitting in this classroom, say, supposing we [? leaving ?] AdS. Let's imagine that happens at different values of z, OK say, for the same bulk process, at different z. OK. Then, of course, the E local, the local energy scale and the local length scale would be the same. OK? So if we consider the different process at different z, the same physical process at different z, in terms of a local observers, it's the same. So E local and the d local does not change. OK? But if you translate into the Yang-Mills scale, then you find now that the corresponding Yang-Mills energy scale now proportional to 1 over z, OK, because now, we have fixed E local. OK? We have fixed E local. We are changing z. So now, you find at a different location, now corresponding to a different energy scale in the Yang-Mills theory. Say, the d Yang-Mills we will [? portion it ?] to z. OK? And in particular, for the same process as z equal to 0, if we move this process close to the boundary-- z equal to 0 means close to the boundary-- then the Yang-Mills energy, the corresponding Yang-Mills energy, actually goes to infinity. Under the corresponding Yang-Mills scale, actually goes-- distance scale goes to 0. So that means this is a mapped to some UV process in the Yang-Mills theory. OK? And but if you take the z goes to infinity, which means you go to interior, far away from the boundary-- so you go to infinity, far away from the boundary, go to interior-- then this E Yang-Mills, then the corresponding Yang-Mills energy scale will go to 0. And the corresponding Yang-Mills length scale will go to infinity. OK? So this would corresponding to the IR process. So this will be a low-energy process, and [? much ?] distance in the Yang-Mills theory. So there, of course, [? those run ?] into the IR process in the Yang-Mills. OK? And [? this-- ?] So normally, if you think about the perspective from AdS point of view, when you go to the boundary, means you've got a long distance. So from AdS point of view, this is going to the IR. OK? And then when you go to the infinity, z go to infinity. You are going to the interior in the bulk. So roughly, it's not precisely-- [? if you ?] [? risk, ?] you can think of this actually going to the short distance in the-- not really the short distance, just-- yeah, this name is not very proper, but let's just call it the UV. Just go into the interior of the bulk. So now, since you see an [? opposite ?] process going on, in the AdS will go to IR. Then from the Yang-Mills theory, corresponding to going to the UV, but in the AdS, you go to interior. And then from the Yang-Mills theory, corresponding to go to the IR low energy. OK? So here, we have something like IR-UV connection. OK? In particular, I already said in particular once. [LAUGHS] So but I don't have other words. So in particular-- [LAUGHTER] --if you really consider the typical gravity process, OK, so if you consider the typical gravity process, so not consider string theory, just consider gravity. Typical gravity process, a classical gravity processes, then essentially, the curvature radius defines your scale. OK? 
So that means that the typical gravity processes, the E local is of order 1 over the curvature scale. And the typical length scale in the bulk, you can also see that it's also the curvature radius. So in these cases, if you plug this into this relation, then you can see that for the typical bulk process, you can really identify the E Yang-Mills, essentially, just with the radial direction, OK, because this product will be 1. This product will be of order 1. And then you can really just identify the z, the inverse z with the Yang-Mills energy. OK? So let me just say this in the little bit slide today, in the picture. Maybe I'll do it here, maybe just do it here. So think about this in the picture. So suppose this is a boundary. So this is the picture. This is the picture, which on the back page of the class it said, if you consider some process here, which we draw a cow, which I don't know how to draw a cow here. [LAUGHTER] And at some distance, say, here, some value of z here, and similar process as some other value of z, then the corresponding boundary image of them, so this will corresponding to a bigger guy. And this guy will corresponding slightly small guy. OK? Yeah, I hope you understand the picture. Yeah, [? so ?] the further away, the same process in the interior, they corresponding to a process with a larger distance and a lower energy from the field theory perspective. OK? So now, let me make some remarks. Yeah, I did not show it very well. Is this clear what I mean by this picture? OK, good. Yeah, a more fancy picture is just in the web page of the class. So now, let me make some remarks. The first remarks is that the AdS, if you, say, calculate the volume of AdS, so calculate the determinant of this guy, square root of the determinant of the metric, and then integrate over the [? O ?] space. Then you find it's divergent. And particularly, it's divergent because of the infinite distance in going to the boundary. OK? So often, we will, say, put some cut-off, IR cut-off, near the boundary, say, rather than let z go all the way to z equal to 0. We, say, we stop the space at z equal to some [? epsilon. ?] OK? That's the procedure we often do, just as a mathematical convenience. Because if you go all the way to z equal to 0, then this factor blows up. And then many things become tricky to do. And then we often, in order to get the finite answer, we always put the z-- put the boundary at z equal to some epsilon. And epsilon's some small parameter. OK? So and for example, this is what you will do in your p set, in one of the p set problem. OK, when we try to-- when you will check this holographic bound. And that's the trick [? you're ?] [? ready ?] to use. So putting the IR cut-off in AdS, at some z equal to epsilon, then from this IR-UV picture, then translate in the boundary. In the boundary, we introduce a short distance cut-off, or UV cut-off, say, at some short distance scale, say, [? data ?] x say or order epsilon, or energy cut-off, or UV energy cut-off at, say, energy, say, of order 1 over epsilon. OK? So this just a [? natural-- ?] the reason this goes 1 into a UV cut-off because of this relation, because of this relation. When you cut-off the space at some z, then you can no longer go to infinite energy. You can no longer go to infinite distance scale, OK? And essentially, equivalently providing a short distance cut-off from that relation. OK? Is it clear? And you will use this in your p set. OK? 
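Writing the red-shift argument above out in formulas, using the Poincare-patch metric in the coordinate z introduced earlier (this is just a restatement of what was said in words):

```latex
% Poincare-patch metric of AdS_{d+1} with curvature radius R:
\[
  ds^{2} = \frac{R^{2}}{z^{2}}\left(-dt^{2} + d\vec{x}^{2} + dz^{2}\right)
\]
% Local proper time and proper length at fixed z, in boundary coordinates (t, x):
\[
  d\tau = \frac{R}{z}\,dt , \qquad dl = \frac{R}{z}\,dx
\]
% So a bulk process with local energy E_loc and local size d_loc appears
% in Yang-Mills (boundary) units as
\[
  E_{\mathrm{YM}} = \frac{R}{z}\,E_{\mathrm{loc}} , \qquad
  d_{\mathrm{YM}} = \frac{z}{R}\,d_{\mathrm{loc}} ,
\]
% i.e. the same local process placed at smaller z maps to higher boundary energy (UV),
% and at larger z to lower boundary energy (IR).
% For a typical gravity process, E_loc ~ 1/R and d_loc ~ R, hence
\[
  E_{\mathrm{YM}} \sim \frac{1}{z} , \qquad d_{\mathrm{YM}} \sim z ,
\]
% and an IR cutoff of the bulk at z = \epsilon acts as a UV cutoff of the boundary
% theory: \delta x \sim \epsilon, i.e. energies up to about 1/\epsilon.
```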
So the second remark is that here, we are considering, so N equal to 4 super-Yang-Mills theory, as we said, it's a conformal theory. Or it's a scale invariant theory. A feature for scale invariant theory is that there's no scale. There's no scale means you can have arbitrarily low energy excitations. OK? So for conformal theory, say, in R(1,3), there exists, because of the scale invariants, there exist arbitrarily low energy excitations. OK? So from this IR-UV, then this corresponding to-- then this just map to the z equal to infinity region in the bulk. OK? So this map to the z equal to infinity region in the bulk. So it's a very good thing that this z equal to infinity region. OK? It's a very good thing, this z equal to infinity region, because otherwise, those modes have nothing to map to. So on the other hand, even the theory have a gap, on the other hand, if, say, if the corresponding bulk spacetime, say, "ends" at the finite proper distance, OK, so important thing is the following. So let's take any point here. Take any point here. So this is some reference scale, say. Choose it as a reference scale in the field theory, because of this [? UV ?] [? R ?] connection. So now, we want to consider arbitrary low energy scale compared to this scale. Then what you want to do, then you just you go to z equal to infinity. And this, what is actually hidden here when you do this argument, is actually the proper distance going to z equal to infinity is infinite. And because when you do this proper distance, yeah, essentially, there's infinite proper distance [? in ?] go to z equal to infinity. And then you can go to infinite low energy scales. OK? But now, if you look at the global AdS, then take some interior point. Then actually, you can easily check. When you go to the most interior point, it's just ro equal to 0. OK? And that is finite proper distance away. So in some sense, the spacetime ends at the finite proper distance in the radial direction. OK? In this case, if you apply this, this will tell you that the theory cannot have arbitrary low energy excitations. OK? Is this clear? Let me just say it again. In this theory, in the Poincare patch, in the bulk, we can go to z equal to infinity. And that's infinite proper distance away. And using this UV IR argument, that translates to infinite low energy process in the field theory. But now, let's consider what's happening in the global AdS. Just from the geometry, we see something dramatic happens, because now, this is a cylinder. And look at the metric. You take finite distance from some point to go to the most center point, which is rho 0. So you no longer have infinite distance to go to. So if this argument is consistent, then this must be due to a theory without arbitrary low energy excitation. OK? But this is, indeed, the case, because according to what I erased, just before we break, we say for the gravity theory in the global AdS, that should be due to a Yang-Mills theory, and the boundary over here, which is S times R. For any field theory [? under ?] S-- again, this is a compact manifold-- then there's a mass gap from the vacuum. And so this theory actually have a mass gap. OK? So that means if the corresponding bulk space that ends this on finite proper distance, the boundary theory must have a mass gap, and vice versa. If the fields theory have a mass gap, which you cannot go to arbitrary low energies, and then the bulk spacetime have to be end somewhere. Otherwise, these statements will not apply. OK? 
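The hidden step in this argument is the proper-distance computation. Spelling it out (using the standard global coordinates in which the center of global AdS is rho = 0, as quoted earlier in the course):

```latex
% Poincare patch: radial proper distance from z = z_0 out to z -> infinity
\[
  s = \int_{z_{0}}^{\infty} \frac{R}{z}\,dz = \infty ,
\]
% so there is infinite room in the radial direction, matching the fact that the
% boundary theory on R^{1,3} has arbitrarily low-energy excitations (no gap).
%
% Global AdS in the standard coordinates where the center is rho = 0:
\[
  ds^{2} = R^{2}\left(-\cosh^{2}\!\rho\,dt^{2} + d\rho^{2}
           + \sinh^{2}\!\rho\,d\Omega_{3}^{2}\right),
  \qquad
  s = \int_{0}^{\rho_{0}} R\,d\rho = R\,\rho_{0} < \infty ,
\]
% so the radial direction "ends" at finite proper distance from any interior point,
% matching the mass gap of the boundary theory on S^3 x R.
```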
So consistency check of that statement is that this has to be true. OK? So in the p set, you should think through these for the process over the field theory on the cylinder. OK? OK, I think I will stop here. Do you have any questions regarding this? Yes? AUDIENCE: I was somewhat under the impression that the global AdS and the Poincare patch were just different coordinates for presentation with the same thing, sort of? So how -- would that be correct? HONG LIU: Hm? AUDIENCE: I was under the impression that the global AdS and the Poincare patch are just different coordinate charts on the same sort of structure or [INAUDIBLE]. HONG LIU: Yeah. AUDIENCE: So why do we have infinite distance in one and finite distance in another? Is that? HONG LIU: Figured out. Figured it out. And this is geometry. You can just look at them. It depend on how you slice the spacetime. Yeah, it's very easy to see. Just see how you map one point to the other point, et cetera. Yeah. AUDIENCE: OK. HONG LIU: But the important thing, the important thing is that the-- yeah, I'll give you a hint. So this z equal to infinity can be considered as a coordinate singularity of that [? thing. ?] so even though here it's completely smooth, but in here is like a coordinate singularity because of this. AUDIENCE: OK. HONG LIU: Yeah, yeah, yeah. Yeah, so that's why, actually, to talk about theory on R3 and the theory on S3 is heading [? on 2 ?] because the physics are very different. Because here, yeah, because here, you always have a mass gap. And here, you don't. And so that refract the geometry very different. The way you view the geometry are very different. AUDIENCE: So the [INAUDIBLE] 5 [INAUDIBLE]? HONG LIU: Hm? AUDIENCE: The [INAUDIBLE]? HONG LIU: Yeah, yeah. [INAUDIBLE] AUDIENCE: But if you change this [INAUDIBLE], then it's still N equal 4 super-Yang-Mills [INAUDIBLE]. HONG LIU: Yeah, it would be something else. AUDIENCE: Then yeah. HONG LIU: Yeah, yeah. [APPLAUSE]
NCLEX_Review_Lectures
Next_Generation_NCLEX_News_A_CHANGE_Youll_LOVE_Lab_Values_for_NCLEX_Changes.txt
hey everyone it's Sarah with registered nurse rn.com and in this video I'm going to be talking about a new change that you're probably going to love that's going to be on the Next Generation NCLEX exam so in my previous video I talked about some big changes you could expect to see with the Next Generation NCLEX exam that's set to go live around April 2023. however the change that I'm talking about in this video is going to make your life so much easier and it has to do with reference ranges for lab values so as a nurse you have to be familiar with lab values and unfortunately there's a lot of variation with what is considered a normal range for a particular lab value for instance it can really vary depending on the facility you were the textbook you're looking at and the laboratory and if that's not frustrating enough Labs can be reported in different measurement units for example potassium can be reported as Milli equivalents per liter or millimoles per liter now the frustrating thing for you as a nursing student is that you spend a lot of time trying to memorize those normal ranges for their those lab values for your exams and for NCLEX and then when you get to practice like when you go to clinicals you'll see that whenever you're looking at a patient's lab results you see right there that normal reference range and it tells you whether it's high low or normal so if this is something that you've been stressing out about I have really good news for you because the next Generation NCLEX exam is trying to emulate real world nursing and now they've decided to provide you with those normal ranges for those lab values for those actual case studies and those knowledge based questions so guess what you don't have to memorize those normal ranges that takes a lot off of you or you can put that time into studying something else and personally I think that was a great move however I do want to caution you on a few things first the Next Generation NCLEX exam is not going to be released until April of 2023. so if you're taking the NCLEX before then you might still have some questions in addition some of your exams in nursing school could still decide to test you on those values use and then even though you don't have to memorize those actual reference ranges for the lab values there's still some things you want to know about Labs as a nursing student and nurse so what you should focus on is that you'll want to know what labs mean what they test for what's the causes of high and low results and the specific nursing interventions and treatments you can expect to be ordered by The Physician to treat these conditions so in my videos I actually concentrate on those things I have a whole comprehensive video over fluid and electrolytes where we go over those causes those signs and symptoms and treatments to help repair you for these exams okay so that is the big change that I wanted to tell you about and I hope you're just as excited as I am about that change because that's really going to make things a little bit easier so thank you so much for watching
NCLEX_Review_Lectures
ROME_Method_ABGs_Arterial_Blood_Gases_Interpretation_Compensated_vs_Uncompensated_Nursing.txt
hey everyone it's sarah thread sterner sorry and calm and in this video we're going to review the ROE method to help you solve those abd problems for your exams and as always whenever you get done watching this YouTube video you can access the free quiz which will give you some more ABG practice so let's get started when you're solving arterial blood gas problems for your lecture exams there are different methods you can use to help you do that in a previous review we covered the tic-tac-toe method however in this video we're going to concentrate on the roam method personally I like them both because they're very easy to use but it's really whatever your personal preference is so what are we looking for whenever we're solving these ABG problems well we're looking for a potential acid-base imbalance and for exams they're gonna give you three things and you're gonna have to look at the value of those things and determine what is going on with the patient so they're gonna give you the blood pH level along with the carbon dioxide level which is represented as co2 and the BI card level which is represented as hco3 and you want to remember that co2 carbon dioxide represents the respiratory system and bicarb hco3 represents the metabolic system and what's really cool about our body is when this blood pH decreases too much where we have acidosis or it increases too much where we have alkalosis these two systems the rest for your metabolic system will try to balance that blood pH to get it back to normal so whenever you were solving these arterial blood gas problems there are three things that you want to ask yourself along with applying the method that you are using to help you solve these problems so the first question you want to ask yourself is this a respiratory or metabolic problem second you want to ask yourself do we have acidosis or do we have alkalosis and then third you want to ask yourself do we have compensation you're either going to have no compensation where would be uncompensated or you're going to have partial compensation or you're gonna have full compensation and I'm show you how to solve those problems with all three different scenarios so before you even try to solve an arterial blood gas problem you have to have this table committed to memory because you're going to pull from your memory bank these values and apply it to whatever method you're using to solve the ABG problem so let's quickly go over this table pH a normal blood pH is 7.35 to 7.45 anything less than seven point three five is considered acidotic anything greater than seven point four five is considered alkalotic then carbon dioxide co2 a normal level is thirty five to forty five anything greater than 45 is acidotic and anything less than 35 is alcoholic then we have hco3 a normal level a bicarb is 22 to 26 and anything less than 22 is acidotic and anything greater than 26 is alkalotic now let's look at the acronym roam our stands for respiratory oh four opposite m for metabolic and E for equal and I like to keep the are in the Oh together and the M and the e together to help me keep my information separated so one the world does respiratory opposite mean well what value represented respiratory that was our carbon dioxide level or a co2 so whenever your carbon dioxide level is high and your blood pH is low hence their opposite its respiratory acidosis when your co2 level is low but your blood pH is high and can their opposite its respiratory alkalosis now what does metabolic an equal mean well metabolic was 
represented with bicarb hco3 so whenever your hco3 is low and your blood pH is low hence they're equal because they're both low its metabolic acidosis when your bicarb is high and your blood pH is high they're equal because they're both high its metabolic alkalosis so now let's take this method and work some problems so her problem tells us that we have a blood pH of seven point two eight a carbon dioxide level of 50 and a bicarb level of twenty four and I've went ahead and set up a problem and I've wrote out Rome ro in II remember that's respiratory opposite metabolic equal and then I have our blood pH over there so our blood pH a normal level is 7.35 to 7.45 we're at seven point two eight so we're on the low side and it's abnormal and it's considered acidotic so I'm gonna put a down arrow over here blood pH and just write acid to let myself remember that then we're gonna look at the respiratory system which is represented with co2 or carbon dioxide level and it's 50 a normal carbon dioxide level is about 35 to 45 and we're on the high side of this so we're gonna put it's elevated and anything greater than 45 is considered acidotic so we're gonna write a sit over there then we're gonna later by carb level and that's represented with the M of our acronym metabolic and it's 24 a normal level is 22 to 26 so we're actually normal with our metabolic level now let's apply Rome we have opposite going on our respiratory systems high pH is low so according to row we have respiratory acidosis so we're gonna write that out that's the story acidosis so we've answered our first two questions we figured out that we have a respiratory problem and we figured out that we have acidosis going on but our third thing we need to figure out is do we have compensation and this is where you have to look further at your problem I mean you know whatever method you're using you have to dive a little bit deeper with it so first of all ask yourself do we have compensation going on at all well whenever you have compensation going on full compensation that means the body has fully compensated its corrected itself our blood pH should be normal is not normal so whatever you're solving these problems and you see a normal blood pH level you should be thinking full compensation but we don't have that so we can rule that out now do we have partial compensation going on maybe or are we completely uncompensated for so partial compensation would be another system that is trying to balance it out for instance we have a respiratory problem we've determined that well if we had partial compensation our metabolic system should be abnormal because it should be trying to throw itself until I can alkalotic state so we can bring this will actually increase this blood pH back because remember they're trying to balance each other out like how I talked about at the beginning we don't even have that our metabolic system is still normal it's just hanging out it's like hey what's going on nothing's really going on so I'm not going to be doing anything it doesn't really know to compensate yet so we have no compensation going on so this is respiratory acidosis uncompensated our next problem says that our blood pH is 7.30 our carbon dioxide level is 40 and our bicarb level is 18 so let's analyze blood pH normal again what was at 7.35 to 7.45 we're at 7.30 so we're on the low side specifically acid side our respiratory system which is represented with carbon dioxide so we're gonna put it up here is 40 normal level is 35 to 45 so we're actually 
normal here and our bicarb is 18 again in normal is 22 to 26 so we're on the low side for our by carbon that was representing the metabolic system so we're low for that and we're on the acidotic side so with using Rome we see we have an equal metabolic is low pH is low so we have metabolic acidosis so we've answered our first two questions now the third question do we have any compensation going on is there blood pH normal no so we're not fully compensated but are we partially compensated so our system that should be helping balance this out because we already have a metabolic problem should be a respiratory system an arrest or a system right now is normal so it's not trying to make itself out normal to help balance this acidotic blood pH out so we don't have any compensation going on so we have metabolic acidosis and compensate it our next problem says that our blood pH is 7.4 to our co2 or carbon dioxide levels 26 and our bicarb is 18 so let's look at our blood pH normal levels 7.35 to 7.45 we're at 7.4 to so we're normal so right now if you're thinking about compensation you should be thinking oh I bet we have full compensation I bet you're right but we've got it this Herman is this arrest for a problem or metabolic problem so whatever we're looking at this blood pH we're normal but what side of normal are we on our wheel in the acid oxide normal or the alcoholic side normal and to help you do that remember that the absolute normal blood pH is 7 point 4 0 so anything greater than that would be on the alcoholic side of normal and anything less than that would be on the Asus acidotic side so we're at 7 point 4 2 so we're on the alkalotic side so we're just gonna put its elevated and just fit alkalotic to help us remember that now let's look at respiratory those represented in carbon dioxide and we are at 26 a normal level is 35 to 45 so we are on the low side so we're gonna put low and it's alkalotic so and then our metabolic is 18 normal is bicarb is 22 to 26 so we are on the low side so we're going lo and we are on the acidotic side because it's less than 22 so using the RO method we look at our blood pH which is elevated we're looking at a rest or a metabolic system we have opposites going on because metabolic isn't what isn't low and the pH isn't low so they're not equal so we definitely have a respiratory problem going on specifically we have respiratory alkalosis and we already know our third answer to our third question we have full compensation going on because our blood pH is normal our body has thrown the metabolic system out of normal levels to help balance that blood pH and help get it back to normal so we have respiratory alkalosis fully compensated the next problem says our blood pH is seven point three seven our carbon dioxide level is thirty two and our bicarb is 17 so let's look at our blood pH normal level is 7.35 to 7.45 we're at seven point three seven so we're normal and again if you're thinking about compensation that's the herd question oh we have full compensation going on but we got to figure out the other two questions is this metabolic or respiratory is it's alkalosis or acidosis so with our blood pH it's normal but what side of normal is it on absolute normal is seven point four zero so it's seven point three seven so it's lower than that so we're on the Asus acidotic side of normal so we're low and we're just going to put acid to help us remember okay respiratory normal carbon dioxide is thirty five to forty five we're at thirty two so we are low and we're on 
the alkalotic asad so we're going to put alcoholic here our bicarb is seventeen normals 22 to 26 we are low so we are going to put low on the metabolic part of our acronym and what side of low are we on we're on acidotic side so we're just going to write a sit here now using the row method when we look at our pH which is on the low side and we look at our metabolic it's also low so they're equal so this is where we're at we have metabolic acidosis and we are fully compensated our blood pH is back to normal but our respiratory system because remember these two systems balance each other up also when I'm normal on the alcoholic side to help balance those acidotic conditions we are having so we are now fully compensated our next problem says that the blood pH is 7.5 one carbon dioxide is 47 and our bicarb is 32 so let's analyze the blood pH normal blood pH is 7.35 to 7.45 so we are elevated because we're at 7 point 5 1 so it's increased and it's alcoholic hey our respiratory system which is represented with carbon dioxide is 47 normal carbon dioxide level is about 35 to 45 so we are on the high end so we are elevated and it's acidotic and our bicarb which represents the metabolic is at 32 normal bicarb is 22 to 26 so we are elevated and we are elevated on the side of alkalosis okay and using the row method we see that we have something that's equal we have metabolics elevated pH is elevated according to a row method that would make it metabolic alkalosis so we're gonna write that now we have to answer that third question do we have compensation and if so are we uncompensated or we partially compensate our fully well let's look at the blood pH it is abnormal so we know we don't have full compensation so we can write that off now do we have partial compensated cassation or are we uncompensated well let's look pH is I'm normal but we have a metabolic problem so is a respiratory system trying to help out did it go abnormal to try to correct our blood pH and it did but it hasn't achieved the results that we need yet because our blood pH isn't normal yet so it's trying to compensate so we have partial compensation now it would be uncompensated if this respiratory system its value the carbon dioxide is still normal because it's not really trying to compensate to help us correct this metabolic alkalosis okay so that wraps up this review over the roam method and don't forget to access the free quizzes which will give you more ABG practice
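if it helps to see the ROME logic written out as a step-by-step procedure, here is a minimal Python sketch of the same decision process using the reference ranges from the table above (the function name and structure are purely illustrative, and this sketch does not handle mixed acid-base disorders or replace clinical judgment):

```python
def interpret_abg(ph, co2, hco3):
    """Classify an ABG with the ROME method.

    Normal ranges used (same as the table above):
      pH 7.35-7.45, CO2 35-45, HCO3 22-26. Absolute normal pH = 7.40.
    """
    # Step 1: which side of 7.40 is the pH on (acidotic vs. alkalotic side)?
    acid_side = ph < 7.40

    # Step 2: Respiratory Opposite (CO2 moves opposite to pH),
    #         Metabolic Equal (HCO3 moves with pH).
    resp_abnormal = co2 > 45 or co2 < 35
    meta_abnormal = hco3 < 22 or hco3 > 26
    resp_matches = (co2 > 45 and acid_side) or (co2 < 35 and not acid_side)
    meta_matches = (hco3 < 22 and acid_side) or (hco3 > 26 and not acid_side)

    if resp_matches:
        problem, other_abnormal = "respiratory", meta_abnormal
    elif meta_matches:
        problem, other_abnormal = "metabolic", resp_abnormal
    else:
        return "unable to classify with this simple sketch"

    state = "acidosis" if acid_side else "alkalosis"

    # Step 3: compensation. Normal pH means fully compensated; otherwise,
    # check whether the *other* system has shifted out of range to help.
    if 7.35 <= ph <= 7.45:
        comp = "fully compensated"
    elif other_abnormal:
        comp = "partially compensated"
    else:
        comp = "uncompensated"

    return f"{problem} {state}, {comp}"


# The worked examples from this review:
print(interpret_abg(7.28, 50, 24))  # respiratory acidosis, uncompensated
print(interpret_abg(7.30, 40, 18))  # metabolic acidosis, uncompensated
print(interpret_abg(7.42, 26, 18))  # respiratory alkalosis, fully compensated
print(interpret_abg(7.37, 32, 17))  # metabolic acidosis, fully compensated
print(interpret_abg(7.51, 47, 32))  # metabolic alkalosis, partially compensated
```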
NCLEX_Review_Lectures
2Point_Gait_Crutches_Walking_Pattern_Demonstration_Nursing_Skill_NCLEX.txt
hey everyone it's sarah with registered nurse rn.com and today we're going to demonstrate how to do the two-point gait using crutches so with the two-point gait we're gonna have two points on the ground at a time whether it's a crutch or a foot so what does it look like well this is where the patient will move the crutch on the injured side so we're going to say it's the right side so they move the right crutch and they move the left foot together then they will move the left crutch on the non-injured side and the right foot together so you have two points thank you so much for watching and don't forget to subscribe to our channel for more videos
NCLEX_Review_Lectures
Lab_Values_Nursing_QUIZ_Which_one_is_ABNORMAL_shorts.txt
you're assessing your patient's morning lab work which lab value is abnormal and requires attention is it a potassium of 3.8 a calcium of 9 a sodium of 115 or a magnesium of two the answer is the sodium of 115. all the other lab values fall within normal range so here we're dealing with hyponatremia a normal sodium level is anywhere between 135 to 145 Milli equivalents per liter so whenever you see that your patient has abnormal Labs you want to be asking yourself as a nurse what is causing this now in terms of sodium you want to ask yourself is the patient losing sodium like are they losing it through their vomit through diarrhea GI suction or are they taking something that's wasting it like a thiazide diuretic or are they overloaded on fluids like in cases of heart failure and then even do they have an aldosterone insufficiency where their kidneys are wasting too much sodium in their urine and then whenever you evaluate their signs and symptoms you want to remember my mnemonic salt loss
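as a small illustration of that same habit of scanning the morning labs against their normal ranges, here is a short Python sketch using the quiz values above (the sodium range comes from this review; the other reference ranges are commonly quoted ones and will vary a little by laboratory and textbook):

```python
# Commonly quoted adult reference ranges (exact ranges vary by lab/textbook):
NORMAL_RANGES = {
    "potassium": (3.5, 5.0),   # mEq/L
    "calcium":   (8.6, 10.2),  # mg/dL
    "sodium":    (135, 145),   # mEq/L
    "magnesium": (1.5, 2.5),   # mg/dL
}

morning_labs = {"potassium": 3.8, "calcium": 9, "sodium": 115, "magnesium": 2}

for lab, value in morning_labs.items():
    low, high = NORMAL_RANGES[lab]
    if not (low <= value <= high):
        direction = "low" if value < low else "high"
        print(f"{lab} {value} is {direction} (normal {low}-{high})")
# Output: sodium 115 is low (normal 135-145)  -> hyponatremia
```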
NCLEX_Review_Lectures
Stages_of_Cancer_Tumor_Staging_and_Grading_TNM_System_Nursing_NCLEX_Review.txt
hey everyone it's sarah with registerednessrn.com and in this video i'm going to be talking about tumor grading and staging for cancer and as always whenever you get done watching this youtube video you can access the free quiz that will test you on this content so let's get started cancer can be graded and staged however these two terms each have different meanings and as a nurse you want to be familiar with the lingo used to describe each of these things so first let's talk about tumor grading tumor grading assesses tumor cells underneath a microscope so how they do this is that a doctor goes in removes part of the tumor of the tumor takes it sends it to a pathologist who will take some of those cells and look at it underneath a microscope and the pathologist is looking at some things it's looking at the cell's size shape color and arrangement and seeing how much those cells deviate from how healthy normal cells from that area should look so you may see some different terms used to grade tumors one term is well differentiated that means that these cells are very similar to how healthy normal cells should look in their appearance and arrangement and these grades are termed low grade and they tend to spread fairly slowly and grow slowly on the flip side we have poorly differentiated and these cells that tells us that they are very abnormal they don't look much like normal cells at all and they are usually high grade and this tells us that this type of cancer is going to grow quickly and spread quickly now with this low to high grade rating we can take this and give it a numerical rating to reflect what type of grade it is however it's important to know that not all cancers use the same type of grading system it really depends on the cancer for example breast cancer uses the nottingham score system so with that said let's look at the basic numerical system that grades it one through three or one through four so here you're going to see an illustration of a tumor found in a bladder and here on the right is what the normal cell should look like in the bladder and then below we have the different grading systems so grade one this is well differentiated it's low grade and notice how the cells in this picture pretty much mirror the normal healthy cells and how they look and how they're arranged but then we look over at grade two this is moderately differentiated intermediate grade and these cells have some differentiation they look a little bit different than those normal cells and then beside of it we have grade three this is poorly differentiated and is considered high grade and notice we have various sizes and arrangement and it's very abnormal compared to those normal cells and then some scales will go into grade four and this is termed undifferentiated it's considered high grade and the cells are extremely abnormal in this type now let's look at tumor staging so this tells us about that main tumor like its location and size and if it's spread to any other parts of the body such as the lymph nodes and organs now how do we determine a patient's stage of cancer well we can look at a few things number one testing we can look at the results of let's say an mri a ct scan x-ray ultrasound or we can look at lab results or any physical findings that we find in that patient to help us determine that stage and it's useful to stage a patient's cancer because it can help us develop a treatment plan because earlier stages of cancer are going to be treated different than those later stages plus it helps 
us determine if a patient may be a candidate to participate in a clinical trial based on what stage of cancer they have and it provides some insight on how aggressive and treatable this cancer may be now it's important to note that once a cancer stage is designated at diagnosis it does not change so let's say that the cancer gets worse it spreads throughout the body well that original stage stays but that information is added on to that stage in addition let's say for some reason this cancer has to be restaged well the restage doesn't replace the original stage but is included with it and staging can occur at different times for example a cancer can be staged before treatment and it's based on these testing findings lab results or physical findings and we call that clinical staging or it can be staged after surgery whenever they remove the tumor they can see what's really going on what's really the extent of this cancer and that's called pathological staging and then we have post therapy staging and this is where the patient has received some treatment before the staging like radiation now here in a second that's going to make sense with the classifications so what are these staging classification systems well there are different types of staging systems depending on the cancer and the one you want to be familiar with as a nurse especially for exams is called the tnm staging system this is a system that's really used for solid tumor cancers like colon cancer and it's not used for blood cancers such as leukemia or brain and spinal cord cancers so tnm is an acronym that is made up of categories that stands for tumor nodal involvement and metastasis tumor is the first category and we're talking about that primary main tumor and this category describes the location and how much that tumor is growing into other tissues is it just hanging out all by itself in its primary site or is it growing inside other structures and layers and the higher the number with this tea the more it has grown into other layers or structures so some ways you'll see this t displayed on reports you could see a tx this stands for tumor can't be measured or a t0 that means no tumor is found then a t with a lowercase i and s means tumor is in situ inside two actually means in its original place so whenever it says this it means that the tumor is found in its original place and has not spread from its original location so at this time it's not cancerous but in the future it could turn cancerous and spread then there's t1 t2 t3 and t4 and this describes the size slash amount the tumor has grown and affected other areas so higher the number the larger the size slash amount it has grown into other areas so a t1 is smaller than a t4 and depending on the cancer type sometimes letters can be added after the number to further describe the tumor's growth for example breast cancer tumors can be described such as like t1a and this means tumor is greater than one millimeter but less than five millimeters or it could be like t4a and this means the tumor has spread to the chest wall then the next category is in for nodes and this is the regional lymph nodes and what happens is that parts of the tumor can break off and collect in the lymphatic system hence a nearby lymph node so this category refers to the spread of cancer in nearby lymph nodes closest to that primary main tumor and lymph nodes are small clustered structures that help us fight infection and they are found around many important organs so an nx can be given and this means 
cancer in regional lymph node can't be measured in zero means no cancer present in regional lymph node then n1n2 and n3 represents the number and location of the lymph nodes that have cancer so an n3 means there are more lymph nodes that contain cancer than an n1 so just like with the category t tumor for some types of cancers letters can be added after the number to provide further detail about the spread to the lymph nodes for example like an n2a or an n2b and then the last category we have is m for metastasis and this details if the cancer that primary tumor has spread to other parts of the body and if this is the case how much and the location of it an m0 can be given and that means no cancer found in other parts and then an m1 and this means that cancer has spread to organs and tissues again depending on the cancer letters can be added after the number to further describe the metastasis of the cancer to the other body parts now let me talk about some other add-ons that can be included with this staging system that helps provide further information about this patient's stage of cancer because as a nurse whenever you're looking at these reports you want to be familiar with what these other lowercase letters may mean so you may see either a lowercase c or a lowercase p for example like ct1 or pt2 what does that mean so first let's talk about that lowercase c the lowercase c stands for clinical staging and that means that this cancer was staged before treatment and was based on those test results like the mri ct scan whatever the patient had done and those physical findings the lowercase p stands for pathological staging and this is staging that is done after surgery so whenever the patient has that tumor removed they can see the extent of the cancer how far has it spread and it takes into account what the testing results lab results in the physical findings were as well so it gives us a more detailed picture of that cancer however some patients can't have surgery to have their tumor removed and if that is the case a pathological stage can't be determined at that time furthermore a lowercase y or r can be in front of that tnm category for example yct3 or rpt3 so let's talk about the lowercase y first this represents post-therapy staging either clinical or pathological and this is given after therapy was administered for example like chemo and how much of the tumor remains before surgery and a lower case r is to show reoccurrence of the cancer now the tnm system is used to calculate a number system like stage 0 to stage 4. and this is a lot of times how patients refer to their cancer like they have colon cancer stage 3 or colon cancer stage 2. 
you don't hear patients refer to their cancer as i have t1an0m1 instead they refer to this numerical system so stage 0 is cancer in situ and this means the cancer is still in its original place hasn't invaded surrounding tissues stage one is the cancer is localized and not spread into other tissues or lymph nodes stage two is the cancer has spread into surrounding tissues and close by lymph nodes stage three is the cancer has spread into even deeper tissues than stage two and further away lymph nodes but it has not spread to other distant structures in the body like organs and then the last stage stage four which is the most advanced called metastatic cancer and this means the cancer has spread to other parts of the body beyond where the cancer started okay so that wraps up this review over tumor staging and grading if you'd like to test your knowledge on this content don't forget to access the free quiz in the youtube link below
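since the TNM notation described above is really a small structured code, here is a minimal Python sketch that pulls apart strings like cT1N0M0 or ycT3N2M1 into their pieces (the pattern and wording are simplified and purely illustrative; real cancer-specific staging rules add in-situ values, extra sub-letters, and other details):

```python
import re

# Simplified TNM pattern: optional prefixes (c, p, y, r), then T, N, M categories.
TNM_PATTERN = re.compile(
    r"^(?P<prefix>[ypcr]*)"
    r"T(?P<t>is|x|\d[a-d]?)"
    r"N(?P<n>x|\d[a-c]?)"
    r"M(?P<m>x|\d[a-c]?)$",
    re.IGNORECASE,
)

PREFIX_MEANING = {
    "c": "clinical staging (before treatment)",
    "p": "pathological staging (after surgery)",
    "y": "post-therapy staging",
    "r": "recurrence",
}

def describe_tnm(code: str) -> str:
    """Break a TNM code into prefix, T, N, and M pieces with plain-English labels."""
    match = TNM_PATTERN.match(code.strip())
    if not match:
        return f"{code}: not a recognizable TNM code (in this simplified sketch)"
    prefixes = [PREFIX_MEANING[p] for p in match.group("prefix").lower()]
    parts = [
        f"T{match.group('t')} = primary tumor size/extent",
        f"N{match.group('n')} = regional lymph node involvement",
        f"M{match.group('m')} = distant metastasis",
    ]
    label = "; ".join(prefixes) if prefixes else "no prefix given"
    return f"{code}: {label} | " + ", ".join(parts)

# Examples in the style mentioned in this review:
print(describe_tnm("cT1N0M0"))
print(describe_tnm("pT2N1M0"))
print(describe_tnm("ycT3N2M1"))
```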
NCLEX_Review_Lectures
Dosage_Calculations_Nursing_Practice_Problems_Comprehensive_NCLEX_Review.txt
hey everyone it's sarah with registerednessaurian.com and in this video i'm going to be going over a comprehensive dosage calculations review and what i'm going to be covering are all of these different types of calculations that you want to know as a nurse now whenever you're done watching this video you can access the free quiz that will give you a comprehensive review over these calculations so let's get started first we're going to work some basic conversion problems and this really lays the foundation for us being able to solve dosage calculations so what we'll do is we'll build upon this and as we go on these problems will become more complex now before you even start working dosage calculations you have to commit to memory that metric table and here's an example of the metric table so you want to take this information and you want to put it in your brain because you're going to be pulling from this table to help you solve these problems now let's work our first problem so it says 9 ounces equals how many milliliters so we're trying to figure out how many milliliters are in nine ounces and i'm going to be using dimensional analysis to work our problems so we're gonna go ahead and set that up so we know that we have nine ounces and we need to figure out how many milliliters this is our destination this is where we're getting so we're going to pull from the metric table we know that in one ounce there is 30 milliliters oh so we're there we've arrived to our destination so we're going to cancel out our ounces because that cancels out and we're going to multiply everything at the top everything at the bottom and then divide and we have our answer so 9 times 30 is 270 and then 1 times one is one when you divide that out you get 270. so 9 ounces equals 270 milliliters now we're going to work our next problem it says 30 milligrams equals how many micrograms so we're trying to figure that out again we'll set up our problem using dimensional analysis so we say 30 milligrams and we've got to ask ourselves using the metric table how many milligrams equals micrograms so we know one milligram equals a thousand micrograms and that is where we're supposed to get that's our destination so milligrams cancels out we're going to multiply everything at the top everything at the bottom and then divide so 30 times a thousand is 30 000 and 1 times 1 is 1 divide that out you get 30 000. so 30 milligrams equals 30 000 micrograms our next problem says 10 teaspoons equals how many milliliters so let's set our problem up again with dimensional analysis so ten teaspoons and pulling from the metric table we know that one teaspoon equals five milliliters so that cancels out our teaspoons and we've arrived where we need to get so multiply everything at the top and bottom and divide so 10 times 5 is 50 and then 1 times 1 of course is 1 and so our answer is 50. 
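to show that same dimensional-analysis bookkeeping in code form, here is a small Python sketch that chains conversion factors exactly the way the fractions are set up above (the factors come from the metric table; the table and function names are just one way to organize it, not from any particular textbook):

```python
# Each entry maps (unit_from, unit_to) to its multiplicative factor,
# read straight off the metric table (e.g. 30 mL per 1 ounce).
CONVERSIONS = {
    ("oz", "mL"):  30 / 1,
    ("tsp", "mL"): 5 / 1,
    ("g", "mg"):   1000 / 1,
    ("mg", "mcg"): 1000 / 1,
    ("lb", "kg"):  1 / 2.2,
}

def convert(amount, path):
    """Multiply through a chain of conversion factors, e.g. g -> mg -> mcg."""
    for step in zip(path, path[1:]):
        amount *= CONVERSIONS[step]
    return amount

print(convert(9,   ["oz", "mL"]))            # 270.0 mL
print(convert(30,  ["mg", "mcg"]))           # 30000.0 mcg
print(convert(10,  ["tsp", "mL"]))           # 50.0 mL
print(convert(0.5, ["g", "mg", "mcg"]))      # 500000.0 mcg
print(round(convert(170, ["lb", "kg"]), 1))  # 77.3 kg
```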
10 teaspoons equals 50 milliliters then over here our next problem says 0.5 grams equals how many micrograms so let's go ahead and set our problem up using dimensional analysis so 0.5 grams and we have to think back to our metric table we know that there is one gram in a thousand milligrams and grams cancels out but we're not done because we have to get to micrograms so we need to do one more step so we know that in one milligram there is a thousand micrograms so we have arrived that's where we need to get was micrograms so milligrams cancels out over here and so what we're going to do is we're going to multiply everything at the top and everything at the bottom so 0.5 times a thousand times a thousand is 500 000 and then all of this multiplied together is one so our answer is 500 000 micrograms now some of you had some questions in my previous video where we did basic conversions and you're like well can i do it this way like maybe go with like for instance this problem we went from ounces and we went straight to milliliters but if you pulled from the metric table this is what's really cool about dimensional analysis you could have went from ounces and then went to tablespoons and then eventually you could have got to milliliters so it would sort of look something like this like a little three step like that and our last conversion problem says 170 pounds equals how many kilograms so we're going to set up with dimensional analysis so 170 pounds and we know that from the metric table there's 2.2 pounds in one kilogram and that's where we need to get was the kilograms so pounds cancels out we're going to multiply and divide so 170 times 1 is 170 and 1 times 2.2 is 2.2 now we're going to divide 170 divided by 2.2 is 77.2727 repeating and i'm gonna round to the nearest tenth and you always wanna make sure that you follow your program's rounding rules for how they want you to round these problems now we're gonna work in oral liquid medication problem and this problem says that the doctor writes an order for a medication that is in an oral suspension the order reads to administer 50 milligrams by mouth every four hours as needed for pain your dispense with a bottle that reads 25 milligrams per two mls how many teaspoons will you administer per dose so this is the important information we need to pull from this problem in order to solve so again we are solving for teaspoons per dose so let's set up using dimensional analysis so the doctor orders 50 milligrams every four hours so 50 milligrams is how much the patient needs to receive in this oral suspension we have a bottle of this medication that says for every two milliliters that you're pouring out there's 25 milligrams of this medication so we're going to put 25 milligrams down here and we have 25 milligrams equals 2 ml of medicine this cancels out our milligrams now we have to get to teaspoons so we're not done so we're looking at the metric table for this so we know that in five milliliters there's one teaspoon this cancels out milliliters and we are where we need to be so multiply and divide so 50 times 2 is a hundred and 25 times five is 125 we divide that out and we get 0.8 it will be teaspoons per dose is our answer now let's work a capsule and tablet problem so our problem says the doctor writes an order for medication the order reads administer 0.5 milligrams by mouth daily you're dispensed with a bottle that says 100 micrograms per tablet how many tablets will you administer per dose so we've pulled important stuff out that we need 
to know about this problem in order to solve so we need to figure out how many tablets we're going to give this patient based on what the doctor ordered the doctor ordered 0.5 milligrams and we have a bottle of tablets that says every tablet is a hundred micrograms so let's set up our problem using dimensional analysis so doctor ordered .5 milligrams and we're supplied with some tablets that are in micrograms so we have to get to micrograms right now so from our metric table we know that in one milligram there are a thousand micrograms that cancels out the milligrams and now we can plug in our amount that we're supplied with so we have we know that there are a hundred micrograms in one tablet micrograms cancels out and that's where we need to get was tablets so we're ready to solve multiply and divide so 0.5 times a thousand is 500 all this down here 1 times 1 times 100 is 100 and then we divide that out and our answer is 5. so we're going to give the patient five tablets now we're going to solve an iv bolus problem and this problem says the doctor writes an order for an iv medication that reads administer one milligram iv now you're dispensed with a vial that contains 0.4 milligrams per ml how many mls will you draw up to administer so here's the important information we need to know to solve the problem and we're trying to figure out how many milliliters we're going to withdraw from this vial to equal out a 1 milligram dose because our vial says for every milliliter that is in this vial it equals 0.4 milligrams so let's set up our problem so the doctor ordered one milligram and we need to get to milliliters so we know that from our vial that it contains 0.4 milligrams in every milliliter that cancels out milligrams and we are ready to solve so 1 times 1 is 1 1 times 0.4 is 0.4 and we're going to divide out and we get 2.5 and that's how many milliliters we would withdraw now we're going to solve some different types of iv flow rate problem so the first one we're going to look at we're going to try to solve for drops per minute so this problem says the doctor writes an order to infuse a solution the order reads infuse two liters of d5 half normal saline with 50 ml equivalents of potassium chloride over 48 hours the drip factor is 15 drops per ml how many drops per minute will be administered so we've pulled the most important information from our problem and if you notice in our problem it gave us some distractors it told us that there was 50 milli equivalents of potassium chloride in this bag of fluids and don't let that confuse you where you're thinking oh no where do i plug this 50 in at because it's just a distractor to throw you off all you need to know is that you're giving two liters of fluid it does have 50 ml equivalents of potassium chloride in it but you're going to give that two liters over 48 hours now this problem also tells us a drip factor drip factors come from your iv tubing and here in this bag it told us that our drip factor is 10 drops per milliliter for this specific iv tubing but for our problem it's telling us that for every milliliter that's going to be drop it's also called a drop factor there's going to be 15 drops so we're trying to solve for drops per minute that's where we're trying to get so what i'm going to do is i'm going to go ahead and convert our hours into minutes because that's where we're heading so let's go ahead and do that with our dimensional analysis so we know that in one hour there are 60 minutes and our problem tells us that we're going to 
give this solution over 48 hours so we're going to go ahead put 48 here and we're going to give 2 liters this cancels our hours and we have minutes so we're almost there to drops per minute so we need to plug our drip factor in but notice our drip factor is in milliliters so we have to get there because right now we're in liters so we're going to pull from our metric table and we know that in one liter there are a thousand milliliters so that cancels liter liters out so we're in milliliters but we got to get here so now we're going to plug in rest of our information about our drip factor so we know that one milliliter is going to equal us 15 drops so that cancels that out and we are where we need to be we have drops per minute we're ready to multiply and divide so multiply everything at the top we get thirty thousand multiply everything at the bottom we get two thousand eight hundred and eighty and when we divide that out we get ten point four repeating and i'm going to round to the nearest whole number so our answer is 10 drops per minute our next iv flow rate problem is going to deal with solving for the hourly rate such as milliliters per hour so this problem says the doctor writes an order to infuse a solution the order reads infuse two liters of d5 half normal saline over 48 hours what is the hourly rate milliliters per hour so here's our important information we need to know we're trying to get to milliliters per hour that is our destination so we know that the doctor has ordered two liters to infuse in over 48 hours so we need to get to milliliters per hour so we're already here in hours but we need to get this to milliliters so we know from our metric table that there is one liter in a thousand milliliters that cancels out our liters and look we're here milliliters per hour so we are ready to solve so multiply everything and divide so two times a thousand is two thousand forty eight times 1 is 48 and when you divide 2000 by 48 you get one i mean you get 41.66 repeating and i'm going to round to the nearest whole number and again follow your programs rules for rounding and our answer is 42 milliliters per hour now let's solve an iv flow rate problem that wants us to solve for the infusion time so when will this infusion actually be completed so our problem says the doctor writes an order to infuse a solution the order reads infuse two liters of normal saline at 150 milliliters per hour you start the infusion at eight o'clock and this is eight in the morning this is written in military time at what time will the infusion be complete so the important information we got from this problem is that the physician wants us to infuse two liters at an hourly rate of 150 milliliters per hour so every hour that patient is going to be receiving 150 milliliters of fluid but they only need two liters of fluid to go in and we started the infusion at 8 am so when will this order be complete that's what we're looking for so let's set up dimensional analysis patients ordered two leaders and the physician wants to go that to go in at a hourly rate of 150 milliliters per hour so notice we're here milliliters and we're over here we're in liters so we got to use our metric table to get over there to milliliters so we know from the metric table that in one liter there's a thousand milliliters so that cancels liters out and we're going to go ahead and plug in our hourly rate because we're ready to and the position wants it to go in at 150 milliliters in one hour so that cancels our milliliters out and we're 
in hours so we're ready to solve because we need a time so this is going to tell us the hours so two times a thousand times one is two thousand and everything multiplied at the bottom is one fifty so two 000 divided by 150 gives us 13.33 repeating hours okay so we have that but the problem is is we have leftover hours so we need to know minutes because we need to get a complete time for when this infusion was done so we know it's going to take 13 hours but let's convert our left over hours in two minutes so we get a complete time so we'll go ahead and write 13 right here so 0.33 hours because that's what we're left with over here and we know that in one hour there's 60 minutes so here we're at minutes and that's where we need it to be so .33 times 60 gives us 19.8 and then 1 times 1 is 1 and then we're going to round we're going to divide of course that gives you 19.8 and then we're going to round to the nearest whole number so that gives us 20. so it's going to take 13 hours and 20 minutes to infuse but it wants to know the time so 13 hours and 20 minutes from 8am in the morning would give us 9 20 pm but we need to write that in military time because that's how we give time in the medical field so military time for that would be 2120 and that is equal to 9 20 pm and that is our answer now we're going to solve a weight based calculation so this problem says the doctor orders an iv weight based medication to infuse at two micrograms per kilogram per minute the patient weighs 130 pounds you are supplied with a bag of the iv medication that reads 250 milligrams per 250 mls how many milliliters per hour will you administer so we've pulled the most important information from our problem to solve and we're trying to figure out how much milliliters per hour we're going to give this patient from this iv bag of medication and this medication is based on the patient's weight so the doctor wants us to give two micrograms per minute for every kilogram the patient weighs we're told the patient weighs 130 pounds so right there you know you're going to have to get from pounds to kilograms and we have a bag that is in milligrams the order is in micrograms so we know we're going to be doing some converting there and our bag is a 250 milliliter bag and in that bag we have 250 milligrams so let's set up and solve so first thing we're going to do right off the bat is we're going to deal with this pounds and we're going to get to kilograms so we can start plugging in our doctor's order so our patient weighs 130 pounds and we know from the metric table that there's 2.2 pounds in one kilogram so there we go we have dealt with that now let's plug in what the physician has ordered so for every kilogram the patient weighs we're going to give two micrograms per minute so that cancels out our kilograms now we can plug in what we are supplied with because we can start converting there so we're actually in micrograms right now we have to get to milligrams so we're going to plug in that metric table once again and we know that in a thousand micrograms there's one milligram cancels out micrograms so we're done with micrograms we can plug in what we have so we have 250 milligrams in a 250 ml bag that cancels out milligrams and we have to get milliliters to hours right now we're in milliliters per minute but we're going to go ahead and solve everything and then we'll convert our hours to minutes so we're going to multiply 130 times 1 times 2 times 1 times 250 and that gives us 65 000. 
then we're going to do 1 times 2.2 times 1 times 1 000 times 250 and that gives us 550 000. then we're going to divide that out and that's going to give us eight 0.11818 eight repeating so currently we're in milliliters per minute that's not what our problem wanted and one in milliliters per hour because that's our rate we're going to set our pump so we're going to convert that we know that there are 60 minutes in one hour so this is currently in minutes so one minute is equal to this many milliliters so one minute is equal to 0.11818 and that was repeating and their minutes cancels out so we are now ready to solve so 60 times 0.11818 gives us 7.0908 [Music] and then of course 1 times 1 is 1 and then when we mult divide that out that gives us we're going to round to the nearest whole number that gives us seven so our answer is seven milliliters per hour now we're going to solve another weight based calculation but we're going to apply that to the medication heparin because i want you to be able to see how to work a problem like that so our problem says the doctor orders your patient to start an iv heparin drip at five units per kilogram per hour and to administer a loading bolus dose of 10 units per kilogram iv before initiation of the drip you're supplied with a heparin bag that reads 12 500 units per 250 ml the patient weighs 110 pounds what is the flow rate you will set the iv pump rate so here is the important information we need to know we're going to give five units per hour for every kilogram the patient weighs they weigh 110 pounds and we have a bag that is a 250 ml bag and it contains 12 500 units now notice that this problem told you a loading dose to give before you start the drip this can be ordered with a heparin drip if the physician orders it but don't let that throw you off because that's just a distractor once again so let's figure out the milliliters per hour how we're going to set our pump to infuse so first thing what we're going to do is we're going to deal with this pounds we need to get to kilograms because that's how this drug is ordered it's weight based so our patient weighs 110 pounds we know from our metric table that there's 2.2 pounds and 1 kilogram so we're done with pounds we're now in kilograms and the order is for every kilogram the patient weighs we're going to give them five units per hour so for every kilogram patient weighs we're giving them five units per hour kilograms cancels out and now we are ready to put in what we're supplied with so we have we're told that in this bag there's 12 500 units in this 250 ml bag so units cancels out and we're left with milliliters per hour so we're ready to solve so when you multiply everything at the top you get 137 500 and multiply everything at the bottom you get 27 [Music] 500 and then we're gonna divide that out and whenever you divide that out you get five so our answer is five milliliters per hour now let's solve a pediatric safe dosage problem where we're going to figure out the safe range for this pediatric patient based on the recommendation and their weight so the problem says the doctor orders an iv drip for a child that weighs 78 pounds the safe dosage range for the ordered medication is 5 to 10 micrograms per kilogram per minute what is the safe dose range for this child so here is important information when you know the patient's weight and that recommended safe dose range for the patient so we're going to be working like a two-step problem because we have to get two different numbers and we're trying 
to get to micrograms per minute. so we know the patient weighs 78 pounds, so let's go ahead and deal with the weight and get it to kilograms because that's what our medication range is in. so the patient weighs 78 pounds and we know from the metric table that there are 2.2 pounds in one kilogram, that cancels out our pounds, and our dosage says that this child can have, for every kilogram they weigh, 5 micrograms per minute. so we're just dealing with this five because that's the lowest part of the range and the 10 is the highest part, so we'll deal with the lowest part first, and we're ready to solve because we are in micrograms per minute, our kilograms canceled out. so when you multiply everything at the top you get 390, everything at the bottom 2.2, and when you divide that out you get 177.27 repeating, and i'm going to round to the nearest tenth so 177.3. that's the lowest part of this range, but now we have to get the high part, so now we're just going to plug in the 10 where we did our five and set up the problem. so our patient weighs 78 pounds, we know from the metric table there's 2.2 pounds in 1 kilogram, that cancels out pounds, and we know from our order that for every kilogram the patient weighs, on the high end they can receive 10 micrograms per minute. we're where we need to be, so our kilograms cancels out, and we're going to multiply and divide. so when we multiply everything at the top we get 780, everything at the bottom 2.2, and when we divide that out we get 354.54 repeating, and rounding to the nearest tenth that gives us 354.5, and this is the safe range for this child based on the recommended dose and their weight. now we're going to work a drug reconstitution problem, and this problem says the doctor orders 10 milligrams of a by-mouth medication that needs to be reconstituted. instructions on the 0.25 gram container say to reconstitute with water to make a concentration of 0.5 grams per 3 ml. you would administer blank milliliters per dose. so here's the important information we pulled from our problem, so we need to give 10 milligrams per dose and we have a 0.25 gram container, and once it's reconstituted we're going to have a dose of every three mls equals 0.5 grams. so we're trying to get to milliliters per dose, how much are we going to give this patient. so what we're going to do is we're going to start with the amount ordered, so the doctor ordered 10 milligrams per dose, and we have to get to grams because we're in milligrams and grams is how the concentration is written once it's reconstituted. so we know from the metric table that in 1 000 milligrams there's one gram, so milligrams cancels out, and we are told that we have this amount, so 0.5 grams is equal to 3 milliliters, and this is what its value is once it's been reconstituted. so grams cancels out and we're left with milliliters per dose, and that's what we need to know, how much we need to give. so now whenever you multiply everything at the top you're going to get 30 and on the bottom that would be 500, and 30 divided by 500 is 0.06, so we are going to give 0.06 ml per dose. now let's work a tube feeding problem. so our problem says the patient is ordered jevity 1.2 cal. you're supplied with a can of jevity that contains 237 milliliters of feeding formula. the doctor orders the tube feeding to be administered at 3/4 strength with a rate of 50 ml per hour. how many milliliters of water will you add to dilute the tube feeding formula as prescribed. to solve this tube feeding problem you have to memorize the following equation and it's going to help us
solve for how many milliliters of water that we're going to add to this feeding formula to dilute it based on what the doctor ordered. the doctor ordered a 3/4 strength and the doctor also ordered for it to go in at a rate of 50 mls per hour, but we really don't need that information to help us solve this problem. so we know that we have a can, and the can has a volume of 237 mls, but we have to dilute that to a 3/4 strength, so this formula is going to help us do it. so what we're going to do is we're going to plug in our information, so the milliliters in the can, 237, so 237 mls, and we are solving for x, x is the total volume that is going to be administered, so we don't know what that is yet, so we're just going to put that there, and then the strength ordered is 3/4, so we're going to put three fourths. now we're going to solve. you're going to multiply everything across from each other, so 237 times 4 is 948 and then 3 times x is 3x. we're solving for x, so what we're going to do is we're going to divide everything over here and everything over there by 3, that's going to cancel this 3 out, so 3 divided by 3 cancels it out, and then 948 divided by 3 gives us 316. okay so 316 mls is our total volume, that's what x told us, but we need to know how much water we're giving, because this is how much we're totally giving, so how much of that is water. so to figure that out you just take 316, which is the total volume, and subtract the original can size, which is 237, and you get 79 mls, so we will give 79 mls of water. now we're going to wrap up this comprehensive review and we're going to solve a body surface area calculation. so this problem says a pediatric patient is ordered a medication dose of 10 milligrams per square meter per day by mouth for 7 days. the patient weighs 46 pounds and is 3 feet nine inches. what is the daily dose of medication the patient will receive. so we've pulled the most important information from our problem to help us solve, and we are trying to get to milligrams per day. now the special thing about this medication is that it is ordered by the patient's body surface area, so it's telling us that for every square meter the patient needs 10 milligrams per day, and this is going to be based on their height and their weight, that's how we're going to figure out the body surface area, and to do that there's a couple of equations you can use and we're going to use this one, and what we're solving for is the square meters. so we're going to take pounds, multiply that by inches, divide that by 3131, and then we're going to find the square root of that, and that's going to be our patient's square meters, their body surface area. now to do this you're going to need a calculator that can solve the square root, so keep that in mind. so we're going to go ahead and figure that out and then we're going to solve for the milligrams per day using some dimensional analysis and we'll get our answer. so let's go ahead and convert this, so we have 3 feet 9 inches and we need it all in inches, so let's go ahead and just figure that out. so we know that there are 12 inches in a foot, so 12 times 3 gives us 36, but they're 3 foot 9.
so we're going to go ahead and add 9 because that's how many left over inches we have so 36 plus 9 is 45 so our patient is 45 inches so we have 45 inches 46 pounds we're good to go ahead and plug into this formula so let's just erase this and put it in so 46 times 45 so what is that that gives us 2070 ahead and carry our square root down and then we're going to divide that by 3131 and when we do that we are going to get um 0.6 etc and what we're going to do is figure the square root out of that so you plug that into your calculator and once you do that you are going to get 0.81 that's where we're going to round and that gives us our square meter so we have that information now let's solve using dimensional analysis so that's important for us to know so the doctor has ordered 10 milligrams for every square meter the patient weighs per day so the patient's body surface area is .81 square meter and for every square meter the patient is they need 10 milligrams per day square meter cancels out multiply everything at the top and bottom and divide and once we do that we get 8.1 milligrams per day and that is our answer okay so that wraps up this comprehensive review over dosage calculations now if you would like a more in-depth look at each of these different categories i have in the description below individual videos that will explain more in depth on how to work these problems and don't forget to access the free quiz that will test you on this material
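for anyone who wants to double-check the worked answers above, here is a minimal python sketch of the main formulas from this review (hourly rate, drops per minute, weight-based mL/hr, and body surface area). the function names are illustrative only, and the numbers in the comments are the same ones worked in the problems:

from math import sqrt

def ml_per_hour(volume_ml, hours):
    # hourly infusion rate: total volume divided by total time
    return volume_ml / hours

def drops_per_minute(volume_ml, hours, drip_factor):
    # drip factor is in drops (gtt) per mL, and the time is converted to minutes
    return volume_ml * drip_factor / (hours * 60)

def weight_based_ml_per_hr(weight_lb, mcg_per_kg_per_min, bag_mg, bag_ml):
    # convert pounds to kg, the ordered mcg/min to mg/min, then to mL/min, then to mL/hr
    kg = weight_lb / 2.2
    mg_per_min = kg * mcg_per_kg_per_min / 1000
    return mg_per_min * bag_ml / bag_mg * 60

def bsa_m2(weight_lb, height_in):
    # body surface area formula used in the review: sqrt(lb x inches / 3131)
    return sqrt(weight_lb * height_in / 3131)

print(round(ml_per_hour(2000, 48)))                     # 42 mL/hr (2 L over 48 hours)
print(round(drops_per_minute(2000, 48, 15)))            # 10 gtt/min
print(round(weight_based_ml_per_hr(130, 2, 250, 250)))  # 7 mL/hr
print(round(10 * bsa_m2(46, 45), 1))                    # 8.1 mg/day at 10 mg per square meter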
NCLEX_Review_Lectures
ACE_Inhibitors_vs_ARBs_Mechanism_of_Action_RAAS_Nursing_NCLEX_Pharmacology.txt
hey everyone it's sarah with registerednursern.com and today we're going to compare some medications. we're going to look at ACE inhibitors and ARBs, and as always whenever you get done watching this youtube video you can access the free quizzes that will test you on these two medications. so let's get started. in our previous lectures we went in depth over ACE inhibitors and ARBs, but now let's do a quick review so we can see the similarities and the differences, so when you take an exam you can easily differentiate between these two medications. so we have ACE inhibitors, ACE stands for angiotensin converting enzyme, and ARB stands for angiotensin 2 receptor blocker. now these medications affect RAAS, they do it in different ways, however they both achieve the same results. now what is RAAS, well RAAS is the renin-angiotensin-aldosterone system, but what does this system do, well it manages our blood pressure, specifically whenever it drops, and the whole goal of RAAS is to get angiotensin 2 on board because it is a major active vasoconstrictor, and if we can vasoconstrict, among other things, we can increase our blood pressure, and if we increase blood pressure we maintain tissue perfusion. so what happens with RAAS is that whenever the blood pressure drops, some cells in the kidneys will sense this and they will release renin. when renin goes into the circulation it is going to stimulate a substance that is in the liver called angiotensinogen, and angiotensinogen is going to turn into angiotensin 1. well we've got to get to angiotensin 2, so to do that ACE will help us out, that is angiotensin converting enzyme, it's going to convert angiotensin 1 into angiotensin 2. so we're there, we have this major vasoconstrictor present, but it has to bind to some receptors before we can get things going. so one type of receptor that this angiotensin 2 is going to bind to is called an angiotensin 2 receptor site type 1, and whenever angiotensin 2 binds with those type 1 receptors it is going to lead to the results we need. it's going to cause vasoconstriction of the smooth muscle of our vessels, so when we constrict vessels that is going to clamp down and that is going to help increase blood pressure and increase our systemic vascular resistance. in addition, we are going to trigger the release of a substance called aldosterone by the adrenal cortex, and the whole reason for this is to help increase blood volume, because if we can increase blood volume and constrict our vessels we are definitely going to increase blood pressure and maintain tissue perfusion, and to do this, aldosterone will cause the kidneys to keep sodium and water but excrete potassium. now let's look at how these two medications affect RAAS, because remember they do it in different ways but they achieve the same results, and really what they're targeting is this angiotensin 2. so let's look at ACE inhibitors. what ACE inhibitors are going to do is inhibit ACE, hence their name, so they inhibit this part of RAAS, they prevent angiotensin 1 from turning into angiotensin 2, so we don't have angiotensin 2 being able to do its job. now ARBs, what they're going to do is inactivate the angiotensin 2 receptor sites type 1, so we will still have the conversion of angiotensin 1 to angiotensin 2, ACE is still doing its job, but what we're going to prevent is angiotensin 2 being able to bind with these type 1 receptors, so you don't get the effects of angiotensin 2. so as you can see they're both affecting angiotensin 2, ACE inhibitors are just preventing this ACE step, while ARBs
are preventing the activation of these receptor sites. either way they are affecting how angiotensin 2 works, therefore with both of these medications what they're going to achieve is that they're going to cause vasodilation instead of vasoconstriction, so that's going to decrease systemic vascular resistance and decrease the blood pressure, and it's going to make it a little bit easier on the heart to pump, and here in a moment when we talk about what these drugs are used for you'll see why they're beneficial in some patients. in addition, they're both going to decrease that secretion of aldosterone, which again was to help increase our blood volume, so instead of keeping sodium and water we're going to excrete it, but they will cause the kidneys to keep potassium, so we really have to watch out for hyperkalemia in these patients who take these drugs. now some other things you want to remember to help you differentiate between them is how their generic names end. with ACE inhibitors the generics are going to end with pril, like lisinopril. with ARBs the generics will end with sartan, like losartan. so when you're looking at the meds and you're trying to determine if it's an ACE inhibitor or an ARB, look at the ending of those generic names, it'll really help you. and another thing i want to point out is with ACE inhibitors some patients, not all, can develop this persistent nagging dry cough, and the reason for that is because of how this ACE inhibitor is influencing this ACE enzyme that is converting angiotensin 1 into angiotensin 2. over here with ARBs a dry persistent nagging cough is not likely, and a lot of times if a patient does develop this dry persistent cough with an ACE inhibitor the physician may put them on an ARB because that will help clear that up. so the reason for that is because ACE normally will inactivate a substance called bradykinin. bradykinin is an inflammatory substance, and what ACE will do is break it down and inactivate it, but if we're blocking this by throwing on an ACE inhibitor we're not going to be inactivating this bradykinin, so it can increase and it can cause this coughing. another thing with ACE inhibitors that you want to watch out for as the nurse is something that can happen called angioedema. this is where you have swelling of the deep tissues, and it can present as swelling of the face, the tongue, the lips, and it can cause difficulty breathing, and if that happens that's a medical emergency. it's less likely to happen with an ARB, so there's a low chance of it happening, but you always just want to monitor for that as well. now let's wrap up this review and let's talk about what these medications are used for and look at our nursing interventions and our patient education together. so what are ACE inhibitors and ARBs used to treat, well we've already learned that they help lower the blood pressure, so they're great in helping patients who have hypertension, managing that blood pressure, keeping it low. in addition, patients who have heart failure, this is where the heart muscle is damaged and it can't really pump, so it's not really maintaining cardiac output, plus blood can back flow into the lungs and lead to pulmonary edema, and how ACE inhibitors and ARBs will work is that they can help decrease the afterload and the preload on the heart, making it easier to pump and get blood out to maintain cardiac output. in addition, ACE inhibitors and ARBs can be used after a patient has a myocardial infarction, again just helping the heart pump easier after it's been damaged, and these medications can help decrease the
progression of diabetic nephropathy in those patients who have type 2 diabetes. now what is diabetic nephropathy, well this is kidney disease caused by diabetes, and whenever a patient has kidney disease those little nephrons in the kidneys are affected, because that's the functional unit of the kidney, and when it doesn't work they really lose the ability to filter the blood very well, so protein will start to leak into the urine. well if you have high blood pressure you're increasing the amount of protein that's going into the urine, so if we throw an ACE inhibitor or an ARB on, that can help lower the blood pressure, which will decrease the amount of protein that's going into the urine, hence slowing down our kidney disease, so these medications have like that renal protective mechanism. now what are some patient education points and things you want to watch out for as the nurse, well we've learned that these medications can increase the potassium level with the way they affect aldosterone, because aldosterone is not really going to be released, so now the kidneys are going to start keeping potassium, so there's a risk for hyperkalemia. therefore you want to monitor the potassium levels, but you also want to tell the patient to avoid consuming a diet really high in potassium, so watch those salt substitutes that have potassium and those foods that are high in potassium like spinach, avocados, bananas, etc., because they can increase our potassium levels. in addition, you want to talk to the patient about how to prevent a condition called rebound hypertension. this is where the blood pressure will get so high it'll be hard to actually bring the blood pressure down, and this tends to happen when a patient just abruptly quits taking their ACE inhibitor or their ARB. so educate the patient about the importance of never just quitting the medication, because sometimes a patient, let's say they're started on an ACE inhibitor, they develop that dry nagging persistent cough, it's driving them crazy and it's driving everyone else around them crazy because they keep coughing, so they may just quit taking the medication. but instead of doing that they need to talk to their doctor, and their doctor can switch them to something else where they won't have that dry cough, so just let them know that rebound hypertension can happen if they just abruptly quit taking it. in addition, they need to make sure they're monitoring their blood pressure at home, they need to get a device and they need to write down the recordings of what their blood pressure is daily. doing this is best because we want to make sure that these medications are in fact managing their blood pressure, or are they still hypertensive, or is it too much and they're hypotensive, so definitely communicate that to them, along with lifestyle changes that they need to do if they're taking these medications to help lower the blood pressure. let them know that antihypertensive medications are not a cure for high blood pressure, they need to manage their diet by eating healthy, exercising, or quitting smoking if they're smoking to help with that as well. so we want to monitor the potassium level as we discussed over here because of hyperkalemia, but we also want to look at the liver enzymes, making sure the liver is not being affected, and renal function, because some patients are dependent on the RAAS, let's say they have severe heart failure, and with severe heart failure their cardiac output isn't that great, so they depend on this RAAS to maintain cardiac output for them, so if we give them an ACE inhibitor or an ARB, which affects RAAS, that's really going to
cause some kidney issues. so we want to be making sure we monitor the BUN and the creatinine, what's their renal output, that their urinary output is at least 30 cc's an hour, and are they having any abnormal swelling going on in their body where they're retaining fluid. and lastly you want to talk to the patient about this cough that can happen with these ACE inhibitors and if they can't tolerate it what should they do, and remember they can be switched to ARBs, which don't have that dry persistent cough. now one thing you really want to watch out for if you're working with patients who have heart failure and they're on an ACE inhibitor, let's say that all of a sudden they get this cough, well you want to further investigate this cough, you don't want to just write it off as oh it's that cough that you get with ACE inhibitors, because with heart failure their heart will be weak, the blood will back up into the lungs and they'll get pulmonary edema, so they'll start getting a cough, but this cough will be like a wet cough, you'll hear crackles if you listen to their lungs, they'll have difficulty breathing with just any movement, like moving from the bedside chair to the bed they get really winded. so you want to make sure, is this just that cough that you get with ACE inhibitors, which is dry, it's not going to have crackles or be wet, or is this a heart failure exacerbation, so make sure you look at that. in addition, educate the patient about this angioedema that's more likely to happen with the ACE inhibitors but can happen with ARBs, it's less likely, it's rare too, but educate them about the swelling of the face, the mouth, the lips, difficulty breathing, and let them know that's a medical emergency and that they should seek attention immediately. okay so that wraps up this review over ACE inhibitors versus ARBs. thank you so much for watching, don't forget to take the free quiz and to subscribe to our channel for more videos
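the generic-name rule from this review (ACE inhibitors end in -pril, ARBs end in -sartan) can also be written as a tiny helper for self-quizzing. this is just a sketch of that suffix rule, not a complete drug reference, and the function name is made up for illustration:

def classify_by_suffix(generic_name):
    # suffix rule from the review: -pril means ACE inhibitor, -sartan means ARB
    name = generic_name.strip().lower()
    if name.endswith("pril"):
        return "ACE inhibitor"
    if name.endswith("sartan"):
        return "ARB"
    return "neither by suffix"

print(classify_by_suffix("lisinopril"))   # ACE inhibitor
print(classify_by_suffix("losartan"))     # ARB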
NCLEX_Review_Lectures
Insulin_Drip_Calculations_mLhr_Infusion_Nursing_Practice_Problems_Dosage_Calculations_NCLEX.txt
hey everyone it's sarah with registerednessrn.com and in this video i'm going to be solving insulin drip calculations and as always whenever you get done watching this video you can access the free quiz that will give you more practice problems so let's get started the doctor has given us an order to start an insulin drip of regular insulin and the doctor wants the patient to receive two units per hour of this regular insulin well pharmacy sends us a bag of regular insulin mixed and normal saline and on the bag it says there's a hundred units of regular insulin per 100 ml now we have an iv pump and we need to set the iv pump to a milliliters per hour an infusion rate so the patient will get two units per hour so how are we going to do that well i'm going to set up a problem using dimensional analysis the order says 2 units per hour we have a bag that has a hundred units in it units cancels out and 100 ml and we're where we need to be milliliters per hour so we're going to multiply everything at the top everything at the bottom and then divide so 2 times 100 is 200 and then 1 times 100 is 100 and then when you divide that out you get 2. so the answer is two milliliters per hour is what we're going to set our pump at so our patient receives two units per hour now let's look at our next problem for this problem the doctor has ordered 12 units per hour of regular insulin to infuse and pharmacy sends us a bag that says there's 125 units of regular insulin per 250 mls so what we need to solve for is the infusion rate so we're solving for milliliters per hour but we're also going to solve for the infusion time how many hours it's going to take for this bag to infuse based on what is prescribed for this patient so first we need to solve for the infusion rate the milliliters per hour and again i'm going to set up using dimensional analysis so we're ordered 12 units per hour and we have a bag that has 125 units in it per 250 mls that cancels out units and we're where we need to be milliliters per hour so we're going to multiply everything at the top and everything at the bottom so you get 3 000 over 125 and then when you divide that out you get 24 milliliters per hour so that is our infusion rate but now we need to solve for our infusion time so what we're going to do is we are going to look how big is our bag we have a 250 ml bag so 250 ml is our bag and we are going to infuse at 24 mils per hour so 24 milliliters is going to go in every hour to equal this 12 units per hour so milliliters cancels out and we're where we need to be in hours so you get 250 over 24 and when we divide that out we get 10.4166 repeating so that's an hour so we know we have 10 hours but we have leftover hours that .41666 so we need to convert that hours into minutes so we know exactly how many minutes so we already know we got 10 hours so i'll go ahead and put 10 hours right here okay so we're going to convert rest of that to minutes so we have 0.4166 is our hours and we know that there in one hour is 60 minutes so hours cancels out and we're where we need to be minutes so multiply everything at the top bottom and then divide so we get when we multiply that out you get 24.96 996 and then that would be one and then that would be your answer so we're going to round to 25 so we have 25 minutes so our infusion time is 10 hours and 25 minutes okay so that wraps up this review on how to solve insulin drip calculations and if you would like more practice on these problems you can access the link in the youtube description below
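the two insulin drip answers above can be checked with a short python sketch; the function names here are made up for illustration, but the arithmetic is the same dimensional analysis worked in the problems:

def insulin_rate_ml_per_hr(units_per_hr, bag_units, bag_ml):
    # ordered units per hour times the bag's mL-per-unit concentration
    return units_per_hr * bag_ml / bag_units

def infusion_time(bag_ml, rate_ml_per_hr):
    # whole hours plus the leftover fraction of an hour converted to minutes
    hours = bag_ml / rate_ml_per_hr
    whole = int(hours)
    minutes = round((hours - whole) * 60)
    return whole, minutes

print(insulin_rate_ml_per_hr(2, 100, 100))    # 2.0 mL/hr for the first order
rate = insulin_rate_ml_per_hr(12, 125, 250)   # 24.0 mL/hr for the second order
print(rate)
print(infusion_time(250, rate))               # (10, 25) -> 10 hours and 25 minutes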
NCLEX_Review_Lectures
Car_Seat_Safety_Teaching_Nursing_Care_Discharge_Pediatric_Maternity_Nurse_NCLEX_Review.txt
hey everyone it's sarah with registerednessrn.com and in this video i'm going to talk about car seat safety for your pediatric nursing classes and as always whenever you get done watching this youtube video you can access the free quiz that will test you on this content so let's get started as a nurse you play an important role in teaching the parents about car seat safety and this education actually starts at birth before the child even goes home from the hospital in their first car ride so in this lecture we're going to concentrate on the main concepts that you need to know as a nurse and for exams first let's talk about the four types of car safety restraints that you can use in a motor vehicle the first type is called a rear-facing restraint these restraints sit your child backwards in the vehicle with a five point harness the next type is a forward-facing restraint and this restraint sets your child forward in the vehicle with a five-point harness next is the booster seat restraint and this sits your child forward in the vehicle using the car's seat belt system and both the shoulder and the lap belt should be used together when using this type of restraint and lastly is the seat belt and this sits the child forward in the vehicle using the car's seat belt and again both the shoulder and the lap belt should be used together when using this restraint now how do you decide what type of car safety restraint should be used on a child well that depends on the following factors the first factor deals with the child's height and weight and their development like do they have any special needs or disabilities you would want to take all of that into account next would be the height and weight limits set by that car seat manufacturer therefore you really want to stress to the parent to become familiar with the manual that is included with the car seat safety restraint because this manual will explain the height and weight limits expiration date of the equipment how to install and secure the child in the device now when selecting a car seat safety restraint you want to educate the parent that they don't want to solely go by the child's age especially whenever it's time to upgrade the restraint instead they want to look at these other factors and they want to see how well their child's actually fitting in that restraint because there's some signs that you can tell if your child actually fits in that restraint which we're going to talk about here in a moment plus they want to make sure that they're within those limits set by that car seat manufacturer so now let's talk about some general guidelines to remember about car seat safety first it's important to remember that the back seat of the car is actually the safest place for a child 12 and under next it's important whenever the child is in the car that the parent has the parental locks on the door on so the child can't open the door whenever the car is on and that the window locks for rolling up and down the window or on as well and then when it comes to installing the rear facing or the forward-facing car seat you want to tell the parent to try to move the car seat and they don't want it to move more than one inch from side to side or front to back and if it does move they want to try to retighten the car seat and another thing is about how the child is dressed whenever they're in the restraint so you want to make sure that you're not dressing the child in bulky clothes because it could affect how the device works to secure the child and lastly 
it's important to remember that laws actually vary among states for car seat safety and these laws tend to enforce the bare minimum for when using a car seat safety restraint therefore it's really best to follow the american academy of pediatrics latest guidelines for car seat safety because they have the latest research and evidence that will help keep kids safe in a vehicle now let's dive into these different types of car seat safety restraints and talk about those main concepts that you need to know so first let's talk about the rear-facing restraint so with this restraint again the child is going to be backwards facing in the car so here's our driver and in the back seat they're going to be facing this way so you start using this type of device at birth and it can extend all the way up until that child is like a toddler or preschooler but it really varies depending on how that child is growing so how long do they stay into in this type of device well they stay until they actually outgrow the rear-facing device so whatever those height and weight limit set by that manufacturer of that device and then they're switched to a forward-facing device and this typically occurs around four years but again this varies among children now you may be wondering if you've been keeping up with car seat safety what happened to that two year parameter like when at two the child is typically switched to forward-facing well in 2018 the american academy of pediatrics actually removed that age criteria and just said whenever the child outgrows their rear-facing device is whenever they need to be switched now there are different types of rear-facing devices that you want the parent to be familiar with because they have different height and weight cutoffs so this is where it's very important that you stress again to that parent to become familiar with that manual that came with that car seat safety device so for instance here we have an infant rear-facing restraint and this one has a base that actually stays in the car and it's designed to be mobile where you can carry the seat part around and this particular car seat has a weight limit of 5 to 22 pounds and a height limit of 19 to 29 inches and then here we have a convertible rear-facing to forward-facing restraint so it stays in the car it does not have a base and it's not mobile so you can't carry the child around in it like with the other one and in the rear-facing position it's for a child up to 50 pounds and for a child who's going to be sitting forward-facing it will fit them up to 65 pounds so as the nurse or the parent who has their kids secured in this rear-facing device it's important that you look for signs that this restraint fits and it's secured properly so these signs include that the child's head is one inch below the top of the restraint the straps of the harness at the shoulder should be at or below the child's shoulders you should not be able to detect any slack in the harness so whenever you go to pinch it you shouldn't be able to pinch the harness together also the chest clip when it's buckled should rest at the level of the armpits on the child and that it's normal as a child grows to not have as much leg room in the back so you may start to notice that the legs start to come into contact with the back seat and many car seat manufacturers are actually adding an extension panel to help with this since children are staying in this rear-facing position longer and i know as a parent especially with my first child i was concerned about this 
but they really recommend just using that extension panel and not switching your child to the forward-facing position just because of this occurring next is the forward facing car seat restraint now whenever a child uses this they are going to be sitting this way in the car so normally how you would set in the car but they're going to be in this special car seat that has a 5 point restraint so they will use this once they outgrow that rear-facing car seat device and they typically start using this whenever their preschool age and it extends up to school age now when will a child switch from forward facing to a booster seat well whenever they outgrow that forward-facing device and this typically is around about six years of age but again this varies depending on the child so again this is where you really want to know those limits those height and weight limits of that specific device and to know the signs to look for that your child actually fits in this forward-facing device and that they're secured properly and this includes that the helix of the ears are below the head of the restraint so in other words those tops of the ears are below the top of the restraint and that the straps of the harness at the shoulders should be at or above the child's shoulders also you shouldn't be able to detect slack in the harness like when you attempt to pinch it like with the rear facing device and the chest clip when it's buckled should rest again at the level of the armpits next is the booster seat so with the booster seat the child is going to be setting forward in the car just like normal but they're going to be setting in this special seat that is going to use the car's seat belt system so they need to use both the shoulder belt and the lap belt in order for the booster seat to work correctly so this is used when the child outgrows that forward-facing device and they typically start using this whenever they're school age and they'll continue to use the booster seat until they're ready to use just the seat belt of the car and this again varies for children but typically when the child is the height of four nine and anywhere between eight to twelve years of age but what are those signs that you need to look for that tells you that your child is ready for this booster seat and that the booster seat actually fits them securely enough to provide them protection well there's some things you want to look for first you want to look at their development how do they obey instructions can they obey your instructions to set quietly in the car they're not going to try to unbuckle themselves throughout the car ride and also you want to look at the shoulder and lap belt position on the child's body to see how it's fitting them so the shoulder belt should sit across the middle of the collarbone and the chest it should not be up around the face or the neck and the lat belt should be positioned on the lower part of the hips not on the abdomen just where the upper part of the legs are and it's important to know and educate the parent that whenever they put them in that booster seat that that lap belt should go under the armrest on both sides of the booster seat and not above it so the child can be as secure as possible and then lastly we have the seat belt restraint so with this the child is going to be setting forward in the car but they're not going to be in any like device that's going to keep them secure instead they're going to be sitting in the seat of the car using the shoulder and the lap belt together and 
this is used once that child outgrows the booster seat so this typically occurs when the child is preteen so it could maybe occur a little bit before or after again that really varies on the child and it will extend throughout their lifetime anytime they ride in a car they should wear a seatbelt so we want to look for signs that our child is ready for the seat belt and actually fits in the seat belt therefore they should be secured in a position that's going to look similar to what it looked like in the booster seat but without the booster seat so that shoulder and lap belt should be similar in the way it looks and they should be comfortable proportional and they should be developmentally ready so here you're going to see a picture of a child who is not ready for a seat belt so notice that the shoulder strap does not set across the shoulder in the middle of the collarbone at the chest instead it's up at the face and that lap belt does not set across the lower part of the hips at the upper thighs instead it's on the abdomen and this is not where we want it plus the child's legs do not dangle down very well and they do not bend over that seat and in addition you would want to make sure that the child's back is able to comfortably set up against the back of the seat so based on what we just seen here this child would need to stay in a booster seat and they're not quite ready for a seat belt restraint so now let's test your knowledge about the content we just covered okay so we have a child and you're helping the parent install a forward-facing car seat after installing the car seat you place the child in the car seat and buckle it which two findings cause concern and require action a the top of the child's ears are above the top of the restraint b the shoulder straps of the harness are found above the child's shoulders c the buccal chest clip is found one inch below the armpit level or d you're unable to pinch the sides of the harness together so to help us answer this question we have to think about those signs that i talked about to let us know that this child actually fits in this car seat so our child we're told is in a forward-facing seat so we want to look at our options and we want to make sure we don't get forward-facing and rear-facing confused because they have little subtle differences that you want to look for so for this specific question the answer is a and c and these findings demonstrate the child doesn't fit this restraint because that's what we're looking for and this restraint is not secured properly because the top of the child's ears should be below the top of the restraint and the buckle chest clip should be at armpit level not one inch below it okay so if you would like more free quiz questions you can access the link below and don't forget to access the other videos in this pediatric nursing series and thank you so much for watching
NCLEX_Review_Lectures
Hypochloremia_and_Hyperchloremia_Nursing_NCLEX_Review_Fluid_Electrolytes.txt
hey everyone it's sarah with registerednessrn.com and in this video i'm going to cover hypo and hyperchloremia and whenever you get done watching this youtube video you can access the free quiz that will test you on this content so let's get started chloride is an electrolyte that actually has an important relationship with other electrolytes in the body such as sodium and chloride as we know is a negatively charged ion why sodium is positively charged and they both like to congregate outside of the cell therefore because of this usually if there is a loss of sodium there's also going to be a loss of chloride and on the flip side if there's an increase of sodium there's going to be an increase of chloride therefore you're going to see a lot of overlapping with their causes their signs and symptoms and the interventions so chloride is very important for helping maintain our acid-base balance because of its relationship with bicarb they actually have an opposite relationship where when the chloride level is low the bicarb level will be high and vice versa plus whenever you have an imbalance of chloride that's occurring either due to metabolic acidosis or alkalosis the potassium level will be altered as well so that is why whenever you're taking care of a patient with a chloride imbalance as a nurse you want to be looking at three other lab levels you want to be looking at that sodium level because of its relationship with sodium also you want to look at the bicarb level and the potassium level also it plays a role in digestion because we need it in order to make hydrochloric acid and it plays a role with balancing the fluids in our body with the help of sodium a normal level is 95 to 105 milli equivalents per liter now chloride levels are maintained with the help of our kidneys they tweak our blood and decide okay how much chloride we need if we don't need a lot we're going to excrete it also it's excreted through the sweat and the gi juices so if you have some issue with the kidneys or sweating too much or gi juices chances are you can imbalance your chloride levels so let's look at hypochloremia so this is where we have low blood levels of chloride and some main causes are typically gi related where the patient is losing a lot of chloride through vomiting or their gastric juices like suction or if they have an ileostomy an ileostomy also can cause hyponatremia because this is where a surgical procedure has been created to bring the small bowel on top of the skin so the patient is having effluent which is stool coming through that now this is really rich in sodium also chloride so if they have where they're putting out a lot of effluent that can cause these levels to drop also diuretics can cause it thiazides that was very similar to hyponatremia burns and cystic fibrosis with cystic fibrosis these patients lose a lot of chloride especially through their sweat and patients who have fluid volume overload like heart failure siadh that's going to dilute the chloride and metabolic alkalosis can do this as well this is where we have a high level of bicarb and this is going to drop our chloride level and the reason it does this is because bicarbon chloride have this like opposite relationship especially in how they shift in and out of the red blood cell to help with proper gas exchange now the signs and symptoms of hypochloremia don't have their own specific ones compared to these other fluid and electrolytes they typically are going to be associated with whatever is causing this problem and if 
you can remember the signs and symptoms of hyponatremia, you can remember what hypochloremia is going to look like, because they really overlap. So you may see signs and symptoms of dehydration, with an increased heart rate along with a decreased blood pressure, fever, vomiting, diarrhea, or lethargy.

Now let's look at the nurse's role and treatment for a patient with hypochloremia, and to help us remember that information we're going to remember the word LOSS, because we have a loss of chloride in the blood.

L is for look at the sodium level and assess for signs and symptoms of hyponatremia, because remember, these two electrolytes like to copy each other. Just to recap from our previous review on sodium imbalances, a normal sodium level is about 135 to 145 milliequivalents per liter, so anything less than 135 is hyponatremia. Some nursing interventions you want to remember: monitor that patient's neuro status, because whenever the sodium level drops too low, patients can become very confused, and a confused patient is at risk for injuring themselves because they're not as aware of their surroundings as they normally are. This can be due to swelling in the brain, because whenever the electrolyte levels outside of the cell drop, especially sodium, water will start to rush into the cell, causing the cells to swell, and we get swelling of the brain. In addition, put that patient on seizure precautions, because they're at risk for seizures, and they can experience respiratory distress, so watch their respiratory status. We also want to watch how much they're taking in and how much they're putting out, so monitor their I's and O's, their vital signs, and those daily weights.

O is for other labs to monitor. You want to be looking at not only that sodium level but also the bicarb level and the potassium level. A patient with a low chloride level could be presenting with a high bicarb and a low potassium, especially if the cause of the hypochloremia is metabolic alkalosis, because remember, bicarb and potassium are also related to the balance of chloride; they all work together to balance that acid-base system and the fluid in our body.

S is for saline, and we're talking about normal saline administration. We can give the patient normal saline through their IV, and what this is going to do is help add chloride along with sodium directly into the blood and help increase those levels.

And our last S is for sources of chloride-rich foods. If your patient can take things by mouth, you want to encourage them to consume foods that are high in chloride, and anything that's really salty or has a lot of sodium in it is also going to have chloride with it. So table salt is good, along with tomatoes (like tomato juice), olives, seafood, processed meats, and canned foods; they're all rich in chloride.

Now let's look at hyperchloremia. What can drive that chloride level up? Well, it's going to be similar to the causes of hypernatremia, because again, sodium and chloride really go hand in hand. Consuming too much sodium can drive the chloride level up, like giving the patient too many hypertonic solutions. Also, the patient not drinking enough or losing too much water can dehydrate them, raising that sodium level up along with the chloride. A decreased bicarb level can do it too: whenever the bicarb drops, that can increase the chloride because of their opposite relationship, so losing too much bicarb, maybe from having too much
diarrhea. Also, Conn's syndrome, where there is increased aldosterone, so the patient is going to be retaining a lot of sodium but excreting potassium, and that can elevate our chloride level. Medications like corticosteroids can do it, and then metabolic acidosis can do this as well, maybe from a medication leading to this condition or some type of renal problem.

Now, the signs and symptoms of hyperchloremia are similar to hypernatremia and acidosis, and to help you remember those signs and symptoms you can remember the word FRIED, which is the same mnemonic we used to remember hypernatremia. F is for fatigue. R is for restlessness; they can be really agitated and can become confused because they're having central nervous system changes. I is for increased reflexes and respirations, and this can progress to seizures and coma. E is for extreme thirst; this is a big sign. And D is for decreased urinary output and dry mouth/skin.

Now let's look at the nurse's role and treatment for a patient who has hyperchloremia, and to help us remember that information we're going to remember HIGH CL, for high chloride.

H is for hold sodium chloride infusions and sodium chloride-rich foods. The patient will need to follow a low-sodium diet, because we don't want to administer more sodium chloride into their blood; this will actually increase their levels even more and raise that chloride level, and that's one of the side effects of giving too many sodium chloride infusions to a patient.

I is for instead use lactated Ringer's. LR can be used to help decrease that chloride level, but why, and how does it do that? Well, whenever we administer LR IV, the lactate, once it enters the body, is actually converted into bicarb, and this is going to help increase the bicarb level, which in turn will bring down the chloride level, which is what we want in hyperchloremia, because remember, bicarb and chloride have an opposite relationship. This is really good if your patient is experiencing acidosis, which is one of the causes of hyperchloremia, because it will help increase the pH of the blood and make it more alkaline. Also, the administration of bicarb and certain diuretics can help decrease chloride levels as well.

C is for collect I's and O's, so how much they're taking in and how much they're putting out. That's going to be very helpful in allowing you to monitor their fluid balance status, along with their daily weights and their vital signs.

Then L is for labs to monitor. As a nurse, you want to check out that chloride level, making sure it's not trending too high and that it's actually coming down with the treatment we're doing. You also want to check out that sodium level and the bicarb level, especially if we're giving fluids to help increase the bicarb; we want to make sure we're not making the patient too alkaline and increasing that blood pH too much. You also want to look at potassium, because they could have hyperkalemia, especially when acidosis is present, because whenever acidosis is happening in the body, potassium leaves the cell and moves into the extracellular area, hence the blood, in exchange for hydrogen ions, so we can have an elevated potassium level.

Now let's test your knowledge with this quiz question: which type of IV fluid below can be prescribed to treat a patient with a chloride level of 69 milliequivalents per liter? A, sodium bicarbonate; B, normal saline; or C, lactated Ringer's? The answer is B, normal saline. A chloride level of 69 is too low, so we're actually dealing with hypochloremia, and with this we would want to administer normal saline, because that would
help replace our sodium and our chloride and help increase that level so we don't have hypochloremia anymore. Now, if you'd like more free quiz questions, you can access the link below, and thank you so much for watching.
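For readers who like to see the decision rule from the quiz written out explicitly, here is a minimal sketch in Python of the threshold logic described above. It assumes the 95 to 105 mEq/L reference range stated earlier in this review; the function names and the fluid mapping are illustrative assumptions drawn only from this transcript, not a validated clinical protocol.

    # Minimal sketch (not clinical guidance): classify a serum chloride value
    # against the 95-105 mEq/L reference range described in the review, and map
    # the result to the IV fluid the review names (normal saline for a low level,
    # lactated Ringer's for a high level). Names and thresholds are illustrative.

    def classify_chloride(level_meq_per_l: float) -> str:
        """Return 'hypochloremia', 'normal', or 'hyperchloremia' for a chloride level."""
        if level_meq_per_l < 95:
            return "hypochloremia"
        if level_meq_per_l > 105:
            return "hyperchloremia"
        return "normal"

    def suggested_iv_fluid(classification: str) -> str:
        """Map the classification to the fluid named in the review (illustrative only)."""
        return {
            "hypochloremia": "normal saline (adds sodium and chloride)",
            "hyperchloremia": "lactated Ringer's (lactate converts to bicarb)",
            "normal": "no replacement indicated by this rule",
        }[classification]

    if __name__ == "__main__":
        # Quiz example from the transcript: a chloride level of 69 mEq/L.
        level = 69
        label = classify_chloride(level)  # -> 'hypochloremia'
        print(level, label, "->", suggested_iv_fluid(label))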