video_id | text | start_second | end_second | url | title | thumbnail
---|---|---|---|---|---|---
qHLLMg0Teg4 | everything happening here so we did in filtering we know what those quantities are going to be then how about these here what can we reorganize here to make the summations kind of move in well it's going to be similar to what we have happening here x0 is the furthest out in the chain and so it gets on the most inner side similarly it will be true for x4 that's the furthest | 1,000 | 1,026 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1000s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | away so we're actually going to be able to first look at x4 and say we're just going to sum over x4 let's let's forget about x3 for now we'll need to squeeze it in there but we'll sum over x4 so we're left with summation over x4 and we can actually just do this part there's no x4 anywhere else the result of that will be something that does involve x3 because as you sum out | 1,026 | 1,053 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1026s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | x4 the z's are constants but there's still an x3 in here so we need to put on the outside still a summation over x3 it's still in there and multiply in every appearance of x3 this one this one and so we have this thing over here now let's look at the quantities we have after we group things this way so let's start again over here what do we get if you look at this quantity over here we | 1,053 | 1,088 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1053s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | looked at it in filtering this is computing P x1 comma z0 comma z1 simple recursive calculation then we look at this one over here this quantity here we have not seen it in filtering but we can interpret it what is it z4 given x4 times x4 given x3 this is really then z4 comma x4 given x3 it's what we have over here then we sum over x4 so we sum out x4 and what we end up with here | 1,088 | 1,130 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1088s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | is P z4 given x3 with x4 being summed out all right now as we keep processing this and this we'll in the future call a backward message b of x3 x3 is the variable then once we multiply in z3 given x3 what do we what do we get we get I should say we're multiplying so multiplying in this one z3 given x3 which would give us P z3 comma z4 given x3 if we go up to here then we multiply in x3 | 1,130 | 1,179 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1130s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | given x2 which when we multiply it in brings x3 to the front P x3 z3 z4 given x2 then we sum out x3 and we're left with P z3 z4 given x2 so from one side of the chain we get the probability of the evidence that comes after x2 given x2 that's living here over here we have the joint between x1 z0 z1 but actually we can multiply this thing and sum out | 1,179 | 1,225 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1179s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | over x1 so we'll get P x2 comma z0 z1 so you have the joint of x2 with the past evidence then we have bringing in the current evidence at time 2 and we know the conditional distribution for the later evidence given x2 so if you multiply all three of these together we get exactly this quantity over here which is the joint between x2 and all the evidence and so the thing to observe here is that | 1,225 | 1,259 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1225s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | evidence that comes from the past is just a standard forward filter being run that's shown over here evidence coming from the future is some kind of backwards filter running that does these updates here that work from the back of the chain back to x2 and give you a conditional of all future evidence given x2 and of course you need the evidence at the actual time to also incorporate | 1,259 | 1,282 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1259s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | it to get the full evidence alright so in terms of math that we did here ultimately all it is is writing out the full joint distribution and moving around the summations and discovering the structure of how we can do calculations from the front and from the back to bring things in to the time step where we're at in a way that is not exponential in the number of variables we're considering | 1,282 | 1,308 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1282s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | it's linear every calculation is simple we do one simple calculation per time slice to work our way to time 2 all right any questions about this then let me project in typeset LaTeX the equations we just derived by example on the board but we're going to look at how these appear on the slides this is what we did we did this the whole smoothing thing these are the full filter equations | 1,308 | 1,352 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1308s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | at the bottom I'm going to magnify them there's a backward and a forward we can combine them to get the local marginals so here's the full thing we can run a filter forward and we'll call those things a messages in some sense indexed by time so very simple set of update equations incorporate the dynamics model and the next observation and repeat then the backward pass does something very | 1,352 | 1,378 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1352s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | similar but works its way from the back you initialize with just uniform because you have nothing really to go on there's no prior at the end you have just nothing to start from and then you start bringing in evidence and bringing in the dynamics that got you there so the dynamics into the time step you were working from but otherwise it looks exactly the same (a code sketch of these forward and backward messages appears after the table) | 1,378 | 1,398 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1378s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | and once you have those you can combine them to get the distribution for the variable XT jointly with all evidence at all times now one thing that might come up in practice is that as you run this the way it's shown here even though mathematically it's the simplest way you might run into numerical problems because you actually compute a joint over more and more variables the actual | 1,398 | 1,426 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1398s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | probability value will keep going down and down and you might get underflow where you get numbers that are below the numbers you can represent in floating point so in practice even though the math is kind of simple and cleanest when working with the joint often you would renormalize as you work along so just say okay maybe currently this thing is really a joint with all past | 1,426 | 1,448 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1426s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | evidence but I can also just renormalize it and forget about it being a joint and just say hey I'm just going to renormalize and know that it's now a conditional instead of a joint what do you lose you lose that probability you don't know anymore the probability of all the evidence you just have a conditional now for x2 or whatever time slice it is given everything else if you do care | 1,448 | 1,469 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1448s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | about the actual value because you want to say was this a likely or unlikely run that I just saw happen then you can keep track of these in log space you can just keep track of the log probabilities instead of the actual probabilities and that way avoid the underflow alright so last thing to do if you do it this way is just a normalization but again as I said for numerical reasons often you'll | 1,469 | 1,495 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1469s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | be doing the renormalization as you work along to make sure things stay in the range that your floating point computation is happy with now we can do other things the same ideas we can use to find pairwise posteriors for example the posterior between XT and XT plus 1 jointly with all the evidence from what we derived on the board it should be clear what's | 1,495 | 1,518 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1495s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | going to happen here XT and XT plus 1 sitting next to each other we're going to work our way from the front and the back towards them and we're gonna stop right before each one of them coming from each side and then multiply in the middle the conditional XT plus 1 given XT and the evidences for that time slice in fact one way you could mathematically think of it is | 1,518 | 1,538 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1518s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | the following make XT and XT plus 1 one variable as if it's one variable we ignore that it's really two variables then exactly the same calculation can be done and then when you unpack this one variable you'll see that you'll have to introduce an XT plus 1 given XT into it because you're unpacking the details otherwise it's just the same thing to compute these forward and backward | 1,538 | 1,558 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1538s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | messages and then when you are hitting the middle point you essentially just have the backward coming into XT plus 1 the forward coming into XT you're multiplying in the conditional and the one observation conditional that you hadn't incorporated yet now you might wonder why would we care about pairwise posteriors it will become clear later in this lecture (a sketch for the pairwise posterior also appears after the table) for now you might say well | 1,558 | 1,582 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1558s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | why would we ever care but we'll clarify that alright so these are just the slides applying the same things we did on the board oh you might take this to the next level a little harder to do you can do as an exercise can I find the joint between XT and XT plus K which is not a neighboring variable with all the evidence you can imagine that there will definitely again be | 1,582 | 1,607 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1582s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | messages coming from left and right but you'll have to do a little bit of thinking about what you do with the stuff in the middle you still have to sum over those variables in a way but not lose XT as you work your way to XT plus K otherwise you don't recover the joint you need to keep somehow XT around so you'll essentially do something like we did but rather than summing out over XT you just | 1,607 | 1,627 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1607s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | keep it around and you skip the summing out over XT and keep working your way forward all the way till XT plus K and the XT will just still be in there because you never summed it out what is the Kalman smoother the Kalman smoother is exactly what we just covered applied to the situation where these probability distributions are conditional gaussians where the conditional | 1,627 | 1,652 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1627s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | of XT plus 1 given XT is a linear Gaussian and the conditional of ZT given XT is a linear Gaussian and then they're concrete they're not just like these abstract distributions but in that specific case you get the Kalman smoother so you find that the math for the Kalman smoother is very similar to the Kalman filter which we already covered there will be very similar equations happening that | 1,652 | 1,675 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1652s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | are really just matrix updates you don't need to do any explicit integration any kind of weird integrals that are hard to compute no these are closed forms you just manipulate matrices and you'll find updates for your covariance matrix and for your mean and in this case they will come from both sides and then they'll come together and give you the smoothed estimate based on evidence from both | 1,675 | 1,696 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1675s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | sides so well you can do it as an exercise see if you can work through that if you want to check if you really understood the derivations that were done in the previous lecture for the Kalman filter you could see if you can find the derivations for the backward pass the forward pass will stay the same the backward pass will be the new thing can you find what that looks like and if | 1,696 | 1,720 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1696s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | you can do that then it means you really understood how this works (a sketch of one standard Kalman smoother formulation appears after the table) now we can also look at the results imagine we run a Kalman filter or a Kalman smoother how well does it work so a natural comparison would be something along the lines of let's say I have some dynamical system and I don't get to observe the state directly but I get to see some observations so I would run a | 1,720 | 1,740 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1720s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | Kalman filter but since I'm running an experiment I could say well okay let me actually give myself access to the state see what it is and see how precise my filter is how well does it track the state and that might be good for debugging and just understanding how well a Kalman filter could work but then you could also do the same for the smoother you could say oh let me also run the | 1,740 | 1,759 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1740s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | smoother and what would you hope for you'd hope for the smoother's estimate to have a mean that is closer to the real state than the filter it doesn't always have to be closer but in expectation it should be closer because it brings in more information by bringing in more information it should be able to do better where might this be most pronounced at time 0 because at | 1,759 | 1,783 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1759s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | time 0 the filter will have no information yet but the smoother will have incorporated everything from the future to estimate the state at time 0 it will not be pronounced at all at the very end at the very end of your time sequence the smoother and the filter use the exact same information and they should have the same estimate otherwise something funny is going on I mean maybe | 1,783 | 1,803 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1783s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | some numerical things going on but overall they should have the same estimate because they use the exact same information to get the estimate at the last time slice in between you can think of it as the smoother having roughly twice as much information it's not necessarily exact it depends on the exact conditional probabilities you observe and it can depend on a lot of things but | 1,803 | 1,821 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1803s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | in general you can think about it as having twice as much information especially roughly in the middle and so you'd expect the variance to be about half meaning that the average squared deviation from the real state for the smoother should be about half compared to the filter well let's take a look here's | 1,821 | 1,841 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1821s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | some MATLAB code I wrote a while back and ran this experiment and so what we have here is a plot we just did 20 time steps we see in solid line the state there are two state variables one state variable shown in blue one state variable shown in green a two-dimensional state space and we see the state the green variable starts at the top there blue starts at the bottom here | 1,841 | 1,865 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1841s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | then we can look at the smoother in dotted and the filter in dashed we look at the estimates for example early on here we see that well the filter really has no clue when it's just starting out and it's not really close to the state but the smoother is very close because it has seen all the future to understand what the state might be now then at the very end we see that they're very close | 1,865 | 1,889 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1865s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | together because that's just the way it is you might say why is it never perfectly on the state why does it not at the end know it perfectly maybe you've seen some things where it says a Kalman filter will converge to the correct state that's only true if there's no noise in the system if there's no noise then over time it'll nail the state but because there is noise in this | 1,889 | 1,907 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1889s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | simulation because there's noise on the observations noise in the dynamics you can never perfectly know the state because you never get access to it all you get is noisy measurements but we will see that over time the Kalman filter will converge to a kind of fixed variance a fixed expected squared error around the state you'll get that kind of convergence but we won't converge to the | 1,907 | 1,928 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1907s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | actual state per se any questions about this yes Suroosh actually can you use the mic so you talked about the normalization at the very end is that trivial in every instance or are there certain instances in which you can't do it analytically or it's computationally intractable yeah so I would say there's even a more general question as we look at these update | 1,928 | 1,964 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1928s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | equations for filter and smoother are they tractable in general and in general they are not and we'll actually see approximations later this lecture it's not tractable because there are integrals and the integrals you need to do numerically and in high dimensions you can't do it precisely because you need to populate the high dimensional space to get a reasonable approximation | 1,964 | 1,986 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1964s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | of your integral you're not gonna be able to do it or in a discrete space if your state space is very large imagine I don't know imagine a state variable is a vector so X is a vector let's say x0 is a vector and each entry in that x0 vector can take on I don't know 100 values and maybe there are 100 entries now you have 100 to the 100 possible values for your state and you can't enumerate that in | 1,986 | 2,014 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=1986s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | this summation because 100 to the 100 will be far too much to work with and so things we'll see later this lecture is how to deal with this one I would say is the equivalent of iterative LQR when it's a nonlinear system but maybe locally it's close to linear and then locally you can approximate with linear gaussians and that's the extended Kalman filter we already covered that that's the | 2,014 | 2,034 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2014s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | extended Kalman filter and the Kalman filter we covered last lecture and what we'll cover next lecture is particle filters which will essentially do sample based approximations to this entire calculation so they'll say I can't cover everything let me just run a particle filter which is much like the sampling based approaches to value iteration that we saw it's the counterpart you just | 2,034 | 2,055 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2034s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | sample a bunch of states do the value iteration update particle filters are the equivalent you sample a bunch of possible states you don't know which one is correct you propagate them all re-weight them based on whether the evidence is compatible with them or not and that way get an approximate estimate of the distribution so yes absolutely in general these filter calculations are | 2,055 | 2,074 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2055s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | not possible to do exactly but in special cases discrete small number of values a state can take on yes very feasible and linear Gaussian distributions for the next state given current state and observation given current state again we can do it in closed form those are the only ones that are tractable for the other ones you'll do approximations yes let me use this what's an example | 2,074 | 2,104 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2074s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | of a smoother being useful in that you want to know the posterior given future evidence okay yeah so the question is about the smoother being useful I'm gonna defer the answer to that till the second half of the lecture because we're kind of building up to where we're going to use it and so let's see if you still have that question after the lecture but it's a very valid question just a little bit of | 2,104 | 2,123 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2104s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | patience so what we've covered so far is filtering and smoothing which return a distribution for the marginal what is the distribution for the state at time T given all observations or given all past observations but sometimes we care about something a little different so top is filtering middle is smoothing bottom you see in red all the states are marked we want to know what is the most likely | 2,123 | 2,152 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2123s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | joint across all of them now typically a joint distribution over many variables is not easy to represent so typically what you would do instead of trying to find the full joint over all of them given all the evidence you'd say let's find the single most likely state combination over all times so what is the single most likely path in state space that was followed based on the | 2,152 | 2,173 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2152s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | observations I had that's maximum a-posteriori estimation it's about finding the max instead of the distribution now we won't work through the math on the board for this one but it is a good exercise to try it on your own and the results are on the slides but effectively what you'll see happen in these slides is that instead of looking at a summation over the variables we | 2,173 | 2,197 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2173s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | look at a max over the variables and a max will interact with this whole set of equations essentially the same way as a summation would and we'll play the exact same trick we'll see which factors have a dependence on the variable we're maxing over bring them out into a smaller group together and so we'll recursively be able to calculate the max while running along the sequence so | 2,197 | 2,222 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2197s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | there'll be a max that starts at x0 with observation z0 what is the x0 that's most likely given the observation so far but actually we'll do a little more because if all we do is find an x0 that's most likely based on the observation it's not necessarily compatible with everything that's following so rather than just computing the most likely x0 we'll say for every x0 we're going to | 2,222 | 2,242 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2222s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | calculate how likely it is given the evidence we've seen so far from there we'll then say once we have that we can combine that with the model for x1 given x0 and observation for z1 given x1 to find how likely each x1 is if we match it with the best x0 for that x1 so essentially saying for each x1 how likely is it if I get to match it with the best the most compatible x0 that's | 2,242 | 2,271 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2242s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | what lives in m1 of x1 do the same thing we'll find how likely each x2 is assuming I get to match it with the best possible choice for x0 and x1 and so it's exactly the same thing instead of saying what's the probability for some x2 value summed over x0 and x1 we're just saying if we got to pick the best x0 and x1 so we're replacing that sum with a max otherwise the same thing is | 2,271 | 2,298 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2271s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | happening same for x3 and so forth now generally this would be the update equation just as simple as the ones we saw for filtering but now the summation becomes a max that's the only difference because we're not saying what's the probability combined over all possible values of the other variables it's if I got to choose the best choice of value for the other variables now one thing | 2,298 | 2,326 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2298s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | that happens when you run this at the end of the day what you have is for the last variable x capital H at the very end you'll say for each value it can take on how likely is it assuming the other ones take on the best matching values but that's all you have so you actually have to keep some pointers around whenever you do this max here you have to keep track of for each value of XT | 2,326 | 2,348 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2326s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | which value of XT minus 1 is the one that was chosen as the max so you can work your way back along the chain along those pointers to find the full sequence so details are shown here but essentially very simple it's just like the filtering operations except that now for all XT we have to store the arg max to remember the most compatible value | 2,348 | 2,377 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2348s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | from the previous time slice so when at the very end we're done at capital T we can see okay for all values of x capital T which one is the most likely if it gets to be completed in its optimal way you pick that one follow from that value to what the previous value should be and the previous value before that all the way back to the front (a Viterbi-style sketch appears after the table) so it's a very efficient algorithm and you can do this for | 2,377 | 2,401 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2377s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | example in a tabular case you can do it in general as long as the computation is tractable as long as you can do that maximization sometimes a maximization is easier to do than an integral so sometimes this thing is more tractable to run than doing the actual filtering because well for maxing you can run gradient descent and maybe at least find a local maximum whereas if you need to do an | 2,401 | 2,423 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2401s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | integration you kind of have to sum over everything in the space and that can be less tractable very often now one special case is the Kalman filter or the linear Gaussian setting so summations become integrals sure we can't enumerate over all assignments but we can find solutions efficiently because we have multivariate gaussians everywhere the crazy thing | 2,423 | 2,452 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2423s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | is in some sense that for the Kalman filter if you think about it if you run your Kalman filter and you find the mean everywhere that sequence of means is actually also the most likely sequence so there is no difference in a Kalman filter between the maximum a posteriori and the means that you find at every step from these from the smoother not the filter from the smoother because you | 2,452 | 2,479 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2452s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | want the most likely full sequence so it has to account for everything why is that well think about it what if you do an exact calculation do an exact calculation forget about any kind of algorithms you say I'm gonna compute the full joint over all x's given the evidence that's gonna be a Gaussian the Gaussian for all x's given the evidence well if we have a Gaussian for all x's given | 2,479 | 2,503 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2479s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | the evidence what's the thing that's most likely it's the means all the means and if you wonder what's the most likely for this single time slice given all the evidence it's also the mean for that single time slice so it's a very special case where the means and the full correlated maximum a-posteriori are actually the same it's because the Gaussian is essentially a very | 2,503 | 2,526 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2503s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | simple distribution compared to most distributions and it happens to simplify that way an alternative you can do in situations like this often and in this case in particular is to solve an optimization problem because you're essentially trying to find the set of variables that maximizes the objective namely the log probability the sum of the log probabilities of the evidences which is just a | 2,526 | 2,547 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2526s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | convex optimization problem you can also find it that way all right so so far we looked at estimation we are given a model for dynamics and a model for measurements and from that we estimate a distribution over state or the most likely sequence over states in the second half of the lecture we'll actually start looking at how we can estimate the parameters in these distributions we're assuming they're | 2,547 | 2,574 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2547s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | given to us we assume we're given the dynamics model we assume we're given the observation model in practice you might not be given them you'd have to come up with them by hand which might be hard more convenient might be to collect data and estimate them and so we'll look at that when we restart in two minutes let me mute it for a moment alright let's restart so let's look at | 2,574 | 2,835 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2574s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | estimating some parameters oh wrong one so simple example let's say we have a thumbtack and you want to build a probability distribution when you throw it up and it lands on the table or on the ground will the pin be pointing up or will it be lying on its side with the kind of needle thing pointing diagonally down well what do you think is the probability of up or down | 2,835 | 2,878 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2835s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | probably you know in principle you could think about it from first principles say well the air flow around this thing what might happen and so forth not going to be easy to come up with a very precise number so how do we get this parameter then to know the probability of up versus down well we can run an experiment imagine we do it 10 times and with this as the results we get we see it's up | 2,878 | 2,899 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2878s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | eight times and down twice if that's the outcome of our experiments then we might just say well probability of up is 0.8 and we might just work with that now I might say well that's too small an experiment we need to run this for longer sure so somebody did this and so any thoughts on what the probability is gonna be zero point eight you think zero point eight now maybe I don't know yet well no | 2,899 | 2,931 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2899s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | on the next slide any other guesses two-thirds it's hard to know I mean it's very empirical so it turns out total up seventy-seven total down 23 so they tossed up ten of them every time and then looked at how many were up versus down so yeah seventy-seven percent chance according to this experiment that you land with the pointy thing up okay so that might be our best model we can make for | 2,931 | 2,960 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2931s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | this short of somebody collecting even more data and getting a more precise estimate of this thing but then I mean this is kind of a somewhat specifically designed scenario where it's very hard to just do some first principles but even when you do have first principles available for the dynamical system you're trying to model often a lot of details you won't know very precisely | 2,960 | 2,982 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2960s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | and so very often you'll still want to run experiments to get a more precise estimate of the dynamics model or the sensor model than you can get from just first principles so let's take a more general look at how this math works out and how we can generalize this to other things so the first thing is that we said okay 77 up 23 down 77% chance that that seems | 2,982 | 3,020 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=2982s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | pretty reasonable but what if your distribution is more complex what are you gonna do I mean maybe there's no way to just do counting so what are you going to do then well the general principle ideally we'd find a general principle that always applies and in the case of the thumbtack experiments still simplifies and gives us the same solution we already know so how can we | 3,020 | 3,041 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3020s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | generalize this notion that we were just counting to get our best estimate well there's something called likelihood so imagine we observe eight up two down and let's say the probability of up we call theta then we're going to say what's the probability of the sequence that we observe maybe we have up down down up up up up up until the end what's the probability of that well we can | 3,041 | 3,071 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3041s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | write it out if we say the probability of up is theta then the probability of up down down all up would be theta times one minus theta times one minus theta times theta and so on theta would appear eight times and one minus theta two times so in total we'd have theta to the eighth times one minus theta squared as the probability of that particular sequence happening we'll call | 3,071 | 3,100 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3071s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | this the likelihood of what we saw happen if we choose a parameter vector theta then we could say well how should we choose theta again we chose it by doing counts but we're hoping to find a more general principle that will reduce to the counts in this case but in other cases still be possible to apply so you could say well a more general principle would be to say I want to find the parameter | 3,100 | 3,126 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3100s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | theta that maximizes this score because whichever theta maximizes this score is the theta that makes what I saw happen in the world more likely to happen than any other theta would have made it so it's the best explanation of how the world works at least the part of the world that I observed so you can say okay well what does this thing look like you can plot it it looks something like this theta will | 3,126 | 3,153 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3126s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | live between zero and one of course this is a probability and then this function theta to the eighth times one minus theta squared what does that look like there's 0.5 over here it'll look like this and it turns out the peak will be at 0.8 and so that's nice because that means that the principle we intuitively thought was pretty good which is just counting corresponds to something more general | 3,153 | 3,194 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3153s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | which is looking at the likelihood of the outcome of the experiment under the parameter and then finding the parameter that maximizes the likelihood now in general plotting will not really be an option you'll need to somehow find this thing without having to plot it but we've covered optimization already in the class we can look at derivatives gradients and find the optimum of this | 3,194 | 3,213 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3194s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | thing for this very simple objective we can just say the derivative of this thing with respect to theta is equal to well what is it it's something like 8 theta to the power 7 times 1 minus theta squared plus theta to the 8 times 2 times 1 minus theta and then there's a minus here so I have another negative 1 appearing here ok and then if the function really looks like this I | 3,213 | 3,248 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3213s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | mean the derivative is actually 0 over here and over here so hopefully we can find where this is equal to 0 easily and hopefully we find it at 0.8 so let's set this thing equal to 0 well there's a theta to the 7 here a theta to the 8th over there so we can rewrite this I want this equal to 0 but this is equal to theta to the 7 times I think I plotted it wrong it's | 3,248 | 3,285 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3248s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | gonna go like this here theta to the 7 times 1 minus theta we can bring up front and then what's left is 8 times 1 minus theta plus 2 times theta equal to zero so we see that this thing will be equal to zero when theta is equal to 0 or theta equal to 1 those are actually minima rather than maxima they're bad places to be but they have a derivative that's 0 and then the other thing is | 3,285 | 3,319 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3285s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | whenever this thing is equal to 0 which is 8 minus 8 theta plus 2 theta equal to 0 is that working out for us hopefully it's working out oh there's a minus sign lost somewhere minus 2 here minus 2 theta so I have a minus over here and so then we have 8 equals 10 theta so theta equals 0.8 so we've got the three places where the derivative equals 0 0 1 and 0.8 and this | 3,319 | 3,355 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3319s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | is of course the one we want we can verify this by plotting or we can verify it by taking the second derivative at that spot and seeing that it's a negative second derivative which gives us that shape now this math was kind of ok we can do it but actually in practice people often prefer to do the math slightly differently they'll say ok in general we have a likelihood maybe of the type L | 3,355 | 3,381 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3355s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | of theta equals theta to the power n1 where n1 is how often we saw outcome 1 times 1 minus theta to the power n0 where n0 is how often we saw outcome 0 and we can work through the same kind of math we saw over there but instead we could actually also look at the log of L theta the log likelihood which will be log of theta to the n1 times 1 minus theta to the n0 which is equal to n1 log theta plus n0 log 1 minus theta why | 3,381 | 3,421 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3381s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | is it okay to look at the log instead of the original thing when you're trying to maximize or minimize by taking the log at every point on the function you are doing a monotonic transformation whatever was the highest point will still be the highest point for that function and the lowest will still be the lowest the ordering stays the same so it's okay to take the log then the | 3,421 | 3,444 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3421s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | derivative becomes simple because we don't have this product of stuff anymore we have a sum of things because the log of the product is sum of the logs and we take derivatives which is the sum of derivatives which is simpler than this thing where if we have many complicated terms multiplied together they'd all like stay together in complicated ways and be a lot more hairy to work with and | 3,444 | 3,463 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3444s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | we can do this thing the derivative with respect to theta equal to 0 is what we want let's look at the derivative so n1 times 1 over theta plus n0 times 1 over 1 minus theta times a negative 1 that equal to 0 then we multiply by theta and 1 minus theta so we have n1 times 1 minus theta minus n0 theta equals 0 so now we need to reorganize this a little bit we end up | 3,463 | 3,499 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3463s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | with n1 minus n1 theta minus n0 theta equals 0 so theta equals n1 over n0 plus n1 which is what we were hoping for because it's the intuitive result we thought should be the right one but now we're recovering it in a principled way that does not depend at all on a distribution of this format (a small numeric check of this appears after the table) you can have any kind of distribution and apply the same principle you could say I have a | 3,499 | 3,530 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3499s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | distribution with a complicated functional form a very hairy form but I can still say this is the likelihood score under that form let me find the parameters that make this maximally likely now this plot over here is the reason people like the logs it simplifies the math that you do by hand and it also simplifies the math you do numerically numerically once you | 3,530 | 3,550 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3530s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | take the log this plot on log scale on this axis right on the original scale will actually look more like this so it's a nice concave shape with a single optimum there's not this weird curvature happening which tends to be difficult to optimize with you don't have that show up it's much nicer behaved and so by taking the log you get something numerically easier to | 3,550 | 3,578 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3550s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | work with as well as often analytically easier to work with we've already talked about convex problems which are problems shaped like this and those are easy to minimize well these are concave problems in this case they're easy to maximize the same thing the same algorithms can be applied guaranteed to find the one maximum that exists for this thing there was a question | 3,578 | 3,601 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3578s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | there so intuitively yeah so it's always I mean if you take a class in convex optimization you'll see that you know half of the class is dedicated to building intuition of how you eyeball whether something is convex or not and same thing with concave I mean it's the same kind of thing it's hard to say how intuitively you would do that short of like working through all those | 3,601 | 3,632 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3601s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | principles and starting to recognize all the patterns in this case we can just plot it so I made an actual precise plot of what it looks like and we'll see that it looks beautifully concave but yeah in practice you would look at second derivatives eigenvalues of the Hessian if they're all negative then you have a nice concave shape there's no magic recipe short of all those | 3,632 | 3,659 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3632s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | tricks that they teach in the convex optimization classes so we covered this we covered this we've covered the log is a monotonic transformation we can just work with the log instead of the original thing and then these are the two plots and again I just generated those plots and so we know in this case it's true because these are the precise plots for that objective but generally | 3,659 | 3,684 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3659s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | it's going to be true that taking the log will help maximum likelihood problems often will become better conditioned when you take the log compared to keeping the original and we said here remember convex versus concave convex is what we've covered before any line between two points on the function should be above the function concave is | 3,684 | 3,708 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3684s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | the other way around and that means you have a unique optimum unique maximum when you have that now effectively you can apply this principle to any kind of distribution we saw just a Bernoulli distribution up or down outcome for the thumbtack well how about multinomial where it can take on different values one two three and so forth up to capital K well we received some samples x1 | 3,708 | 3,734 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3708s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | through xM we can just see what's the log likelihood of these samples well it's the log of the product of theta 1 to the power of how often we have outcome 1 times theta 2 to the power of how often we have outcome 2 and so forth and this is what we end up with and then we can do the math and find that again it comes down to counting how about an HMM imagine we sample from an HMM and we see | 3,734 | 3,761 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3734s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | both the state X and the observation Z at all times if we have that we could estimate the model the dynamics model and the observation model again by doing counts but more precisely to derive this in a principled way we can look at it like this these are the models we want to estimate let's look at the likelihood of this sequence of states and sensor observations | 3,761 | 3,784 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3761s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | write out the likelihood under the joint distribution then we can run the optimization which in this case can be done in closed form and we'll find that indeed we'll get the counts for the conditional of state at time T given state at time t minus 1 and the counts for the conditional of observation given state (a counting sketch appears after the table) it doesn't need to be count based distributions over discretized regions here is a continuous | 3,784 | 3,812 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3784s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | distribution an exponential distribution and an exponential is of the form lambda e to the negative lambda x x can only take on positive values here and lambda is a choice that determines how quickly this thing decays versus maybe having a heavier tail we get some samples from the distribution three point one eight point two one point seven you can just say well what's the probability | 3,812 | 3,837 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3812s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | of each of these samples under this density multiply all of them together or take the log of the product of all of them and then see what maximizes it and in this case lambda is three over 13 and that might not have been as easy to read off by just looking at these numbers you might not have said oh it's 3 over 13 you have to do a little bit more math to derive what it is and you find that you | 3,837 | 3,859 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3837s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
qHLLMg0Teg4 | know the equation comes down to what you see here which is the summation of all the values you've got in the denominator and the number of samples on top so this is the general version your lambda will be in some sense one over the average of the x-values that you observe (a sketch after the table checks this on the 3 over 13 example) do the same thing for other distributions how about a uniform distribution what do you think can be | 3,859 | 3,889 | https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3859s | Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley | |
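
Below is a minimal sketch of the forward-backward smoothing recursion described in the transcript above, for a discrete-state HMM. It is not the lecturer's code; the names `pi`, `T`, `O`, and `z` (initial-state prior, transition matrix, observation matrix, observed symbol indices) are assumptions for illustration. It also folds in the per-step renormalization the lecture discusses, keeping the evidence in log space to avoid underflow.

```python
import numpy as np

def forward_backward(pi, T, O, z):
    """Forward-backward smoothing for a discrete HMM.

    pi : (S,) prior over the initial state
    T  : (S, S) transition matrix, T[i, j] = P(x_{t+1}=j | x_t=i)
    O  : (S, K) observation matrix, O[i, k] = P(z_t=k | x_t=i)
    z  : length-H sequence of observed symbol indices
    """
    H, S = len(z), len(pi)
    a = np.zeros((H, S))      # forward messages (rescaled each step)
    b = np.ones((H, S))       # backward messages, uniform init at the end
    c = np.zeros(H)           # per-step normalizers (avoid underflow)

    # forward pass: incorporate the dynamics model and the next observation, repeat
    a[0] = pi * O[:, z[0]]
    c[0] = a[0].sum(); a[0] /= c[0]
    for t in range(1, H):
        a[t] = (a[t - 1] @ T) * O[:, z[t]]
        c[t] = a[t].sum(); a[t] /= c[t]

    # backward pass: same kind of recursion, run from the back of the chain
    for t in range(H - 2, -1, -1):
        b[t] = T @ (O[:, z[t + 1]] * b[t + 1])
        b[t] /= b[t].sum()

    smoothed = a * b
    smoothed /= smoothed.sum(axis=1, keepdims=True)   # P(x_t | all evidence)
    log_evidence = np.sum(np.log(c))                  # kept in log space
    return a, b, smoothed, log_evidence
```

With the messages rescaled this way, `a[t] * b[t]` is proportional to the smoothed marginal at time t, and `sum(log c)` recovers the log probability of all the evidence that renormalization would otherwise throw away.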
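A hedged sketch of the pairwise posterior discussed above, reusing the rescaled forward and backward messages `a` and `b` from the previous sketch together with the same assumed model quantities `T`, `O`, `z`; the product is only proportional to the posterior, so it is normalized at the end.

```python
import numpy as np

def pairwise_posterior(a, b, T, O, z, t):
    """P(x_t, x_{t+1} | all evidence): forward message into x_t, backward
    message into x_{t+1}, the transition conditional, and the one
    observation z[t+1] not yet incorporated, then normalize."""
    M = np.outer(a[t], O[:, z[t + 1]] * b[t + 1]) * T
    return M / M.sum()
```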
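A sketch of the maximum a posteriori (Viterbi-style) recursion described above, with the max replacing the sum and arg-max backpointers stored so the full most likely sequence can be recovered by walking back along the chain; the same assumed discrete-HMM quantities as before, and computed in log space.

```python
import numpy as np

def viterbi(pi, T, O, z):
    """Most likely state sequence for a discrete HMM, with backpointers."""
    H, S = len(z), len(pi)
    logT, logO = np.log(T), np.log(O)
    m = np.log(pi) + logO[:, z[0]]          # m_0(x_0)
    back = np.zeros((H, S), dtype=int)      # arg-max pointers per time step
    for t in range(1, H):
        scores = m[:, None] + logT          # scores[i, j]: best path ending in i, then i -> j
        back[t] = np.argmax(scores, axis=0)
        m = np.max(scores, axis=0) + logO[:, z[t]]
    # pick the best final value and follow the pointers back to the front
    x = np.zeros(H, dtype=int)
    x[-1] = int(np.argmax(m))
    for t in range(H - 1, 0, -1):
        x[t - 1] = back[t, x[t]]
    return x, float(np.max(m))              # MAP path and its log probability
```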
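The lecture leaves the Kalman smoother backward pass as an exercise and describes it as a forward filter combined with a backward filter; the sketch below instead uses the equivalent Rauch-Tung-Striebel formulation (forward Kalman filter plus a backward correction), which is one standard way to write it, not necessarily the derivation the lecturer has in mind. The model matrices `A`, `C`, `Q`, `R` and the prior `mu0`, `S0` are assumed names for illustration.

```python
import numpy as np

def kalman_filter_smoother(A, C, Q, R, mu0, S0, zs):
    """Linear-Gaussian model: x_{t+1} = A x_t + w, w ~ N(0, Q);
    z_t = C x_t + v, v ~ N(0, R). Returns filtered and smoothed estimates."""
    H = len(zs)
    mu_f, S_f, mu_p, S_p = [], [], [], []   # filtered and one-step-predicted
    mu, S = mu0, S0
    for t in range(H):
        if t > 0:                           # predict through the dynamics
            mu, S = A @ mu, A @ S @ A.T + Q
        mu_p.append(mu); S_p.append(S)
        K = S @ C.T @ np.linalg.inv(C @ S @ C.T + R)   # measurement update
        mu = mu + K @ (zs[t] - C @ mu)
        S = S - K @ C @ S
        mu_f.append(mu); S_f.append(S)

    mu_s, S_s = mu_f[:], S_f[:]             # backward (RTS) correction
    for t in range(H - 2, -1, -1):
        G = S_f[t] @ A.T @ np.linalg.inv(S_p[t + 1])
        mu_s[t] = mu_f[t] + G @ (mu_s[t + 1] - mu_p[t + 1])
        S_s[t] = S_f[t] + G @ (S_s[t + 1] - S_p[t + 1]) @ G.T
    return mu_f, S_f, mu_s, S_s
```

As the lecture notes, the filtered and smoothed estimates coincide at the final time step, and the smoothed covariances should generally be smaller earlier in the sequence.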
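A small numeric check of the thumbtack maximum likelihood result derived above: the counting estimate n1/(n1+n0) should coincide with the maximizer of the log likelihood n1 log(theta) + n0 log(1-theta). A coarse grid search stands in for the closed-form derivation.

```python
import numpy as np

# Thumbtack example from the lecture: 8 "up", 2 "down".
n1, n0 = 8, 2
theta_counts = n1 / (n1 + n0)                           # 0.8 by counting

thetas = np.linspace(0.001, 0.999, 9999)
loglik = n1 * np.log(thetas) + n0 * np.log(1 - thetas)  # log L(theta)
theta_numeric = thetas[np.argmax(loglik)]

print(theta_counts, round(float(theta_numeric), 3))     # both ~0.8
```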
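A sketch of estimating HMM parameters by counting from fully observed state/observation sequences, as described above. It assumes integer-coded states and observations and that every state actually appears in the data (otherwise the row normalization divides by zero); it is an illustration, not the lecture's code.

```python
import numpy as np

def estimate_hmm_by_counting(xs, zs, S, K):
    """Maximum likelihood dynamics and observation models from fully
    observed sequences, via normalized counts.

    xs, zs : lists of equal-length integer sequences
             (states in 0..S-1, observations in 0..K-1)."""
    T_counts = np.zeros((S, S))
    O_counts = np.zeros((S, K))
    for x, z in zip(xs, zs):
        for t in range(len(x)):
            O_counts[x[t], z[t]] += 1          # count observation given state
            if t + 1 < len(x):
                T_counts[x[t], x[t + 1]] += 1  # count next state given state
    T = T_counts / T_counts.sum(axis=1, keepdims=True)
    O = O_counts / O_counts.sum(axis=1, keepdims=True)
    return T, O
```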
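And the exponential-distribution example: maximizing the log likelihood, the sum over samples of log(lambda) - lambda * x_i, gives lambda = n / sum(x_i), i.e. one over the sample mean, which on the samples 3.1, 8.2, 1.7 is 3/13.

```python
import numpy as np

x = np.array([3.1, 8.2, 1.7])     # samples from the lecture example
lam = len(x) / x.sum()            # maximum likelihood rate parameter
print(lam)                        # 3 / 13, about 0.2308
```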