video_id | text | start_second | end_second | url | title | thumbnail
---|---|---|---|---|---|---|
pPyOlGvWoXA | entropy and so we have 1 plus entropy so the expected length is entropy plus 1 so this is pretty good we here have discovered not only that the best you can do with entropy coding is the entropy in terms of the number of expected bits but also that you can directly use log 2 of 1 / P(x i) as the designated code lengths and if you do that you're only one away on average from the optimal | 3,381 | 3,410 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3381s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | encoding now the one thing we haven't covered yet in this whole scheme is how do you find that encoding we now know that we could do entropy encoding we know that this will be close to optimal but running a massive search over a combinatorial space might not be that practical it turns out Huffman codes can achieve the same optimality and we'll show that now so by induction on the | 3,410 | 3,437 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3410s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | number of symbols in our code book so the number of symbols n by induction meaning that in the proof we'll assume that if we have to encode only n minus 1 symbols and we use the Huffman encoding scheme we would end up with an optimal prefix code for those n minus 1 symbols and now I'm going to show that under that assumption it's also true for n and of course with only two symbols or one | 3,437 | 3,466 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3437s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | symbol wherever you want to start the induction it's clear Huffman codes are optimal so we're good to go okay this is actually a little intricate but it's not too long Huffman coding always looks at the lowest probability symbols so we'll start there we look at the two lowest probability symbols x and y there are always going to be two lowest probability symbols maybe there is a tie but that's fine | 3,466 | 3,491 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3466s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | you arbitrarily break ties so let x and y be the two lowest probability symbols in your original code book optimal prefix codes will have two leaves in the lowest level branch why is that you have a prefix code with a symbol here maybe a symbol here some more symbols here in the lowest level branch which is this one over here there are two leaves in higher levels that's not always true here that's not | 3,491 | 3,521 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3491s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | true here that's not true but at the lowest level it's always true why is this always true imagine you didn't have two symbols left anymore you only had one symbol say you didn't have this one here what do you do you would actually get rid of this whole split here you put C up here and now it's gonna be true again if it's not two symbols at the bottom you only have one | 3,521 | 3,543 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3521s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | you could save a bit more okay so at the bottom there are always gonna be two leaves at that lowest branch then without loss of generality we can assume that symbols x and y have the same parent why does that have to be the case imagine your tree looks like this it could be that x is here y is here and there's a z here and a w here it could be that they don't have the same parent but because they're the | 3,543 | 3,574 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3543s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | lowest probability symbols they will always sit at the lowest level and at the lowest level you can interchange where they live so you can always make x and y appear together and put w over here and so they now have the same parent it's effectively the same code you just move things around at the bottom so x and y have the same parent then every optimal prefix tree will | 3,574 | 3,601 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3574s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | have x and y together at the lowest level with the same parent so that's where we are now the steps we've made allow us to conclude this line over here no matter the tree structure the additional cost of having x and y rather than just a single parent symbol z so going from sending with n minus 1 symbols to n symbols including x and y the extra cost will be P of x plus P of y why is that the | 3,601 | 3,632 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3601s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | number of times you have to go down that extra level in the tree to reach x and y is P of x plus P of y if you only had to go to the parent of x and y you wouldn't have to go that extra level and whenever you have to go that extra level it costs you one extra bit and that happens a P of x plus P of y fraction of the time now the n-symbol Huffman code tree adds this minimal cost to the | 3,632 | 3,658 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3632s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | optimal n minus 1 symbol Huffman code tree which is optimal by induction so this here is the final part of the proof it's saying no matter what tree you build you'll always pay a price of P of x plus P of y when you need to split on x and y you can't just get away with a parent z that's unavoidable the Huffman code tree will have them appear that way together so the Huffman code tree is incurring the | 3,658 | 3,699 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3658s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | minimum possible cost for being an n symbol tree versus an n minus 1 symbol tree it's adding that minimal cost to what it has built so far which is an n minus 1 symbol tree which we know is optimal by induction and we're good to go alright so quick recap of everything we covered entropy is the expected encoding length when encoding each symbol with this length and so there's the equation for | 3,699 | 3,733 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3699s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | entropy that's Shannon 1948 assuming that your data source P of X is an order 0 Markov model which means there are no dependencies between symbols that you account for or are able to account for then a compression scheme that independently codes each symbol in your sequence must use at least entropy bits per symbol on average Huffman coding is able to do that with an overhead of at | 3,733 | 3,766 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3733s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | most one how do we know that because entropy coding gives an overhead of at most one and we proved that Huffman codes are optimal so given entropy coding has an overhead of at most one Huffman codes provide a constructive way of achieving something that also has an overhead of at most one beyond the entropy cost any questions I have a question so for the competition you mentioned in the | 3,766 | 3,805 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3766s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | beginning the 500,000 euro competition if we just take the file and compute the entropy of that file that's provided that will give like the minimum number of bits right can we just compute that to see if it'll be like 116 megabytes like would that give a lower bound on what can be achieved so yeah that's a really good question | 3,805 | 3,837 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3805s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | and what you're getting at is in some sense exactly this thing over here so far we've assumed the order 0 Markov model so what that assumes is that let's say there is a sequence of symbols let's say there's only 26 letters and nothing else in that file of course there's other symbols too but you could just look at the frequencies of each of those letters you | 3,837 | 3,858 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3837s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | could then look at the entropy encoding or look at the entropy and say okay this is the entropy and now if I want to compress this by giving each of my 26 letters a bit sequence as its encoding what's the best I can possibly do I can actually find that number and you'll find that it is gonna be more than that 116 megabytes because otherwise somebody would have long done it the | 3,858 | 3,886 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3858s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | reason there is hope to be able to do better and we'll get into how to do this soon is that in reality these letters in that file are not independent when you see the first three letters you might have an easy time predicting the fourth letter because there's only so many reasonable completions of the word you already started once you saw the first three letters and so then the | 3,886 | 3,908 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3886s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | calculus becomes a little different we'll get to that in a moment and that's where things get complicated then all of a sudden it's not as simple as counting frequencies of each of the symbols you really need effectively a generative model that can understand how to predict the next symbol from previous symbols and start measuring the entropy that way and then the question is how good a | 3,908 | 3,928 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3908s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | model did you build and yes if you can build the world's best generative model to predict the next character in that sequence and you look at the entropy of that then you might have a lower bound roughly speaking I mean I'd have to think through a few details to be sure it's exactly true but it would give you a pretty good estimate of what the optimal encoding might be and we'll look at a | 3,928 | 3,954 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3928s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | few examples soon like three slides from now we'll get to a few more things that touch upon exactly what you're asking about really good question other questions okay let's move on then so a couple of coding considerations we want to look at here what happens when your frequency counts or maybe some more complicated estimate of the distribution over symbols is not precise you have an estimate P | 3,954 | 3,999 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3954s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | hat but really the distribution is P what's gonna happen to the performance of your compression scheme higher order models where we predict the next symbol from previous symbols how can that help you and what about that plus one is it as innocent as it seems or is it actually very bad sometimes and what can we do about it so the expected code length when using p hat to construct the code is | 3,999 | 4,027 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=3999s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | going to be the expected code length but in reality our expectation is with respect to p the way we encounter symbols is governed by p so the probability of symbol i is p of i but the code length we assign is based on p hat of i so then this is our expected code length if we simply don't round up to natural numbers for the encoding so it's a simple calculation then we add and subtract the same quantity now this | 4,027 | 4,058 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4027s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | quantity over here in the front we recognize as the KL divergence the thing in the back we recognize as entropy so we see the expected code length when we use a distribution estimate P hat is going to be the entropy plus we know it's always going to be more any encoding is gonna cost you at least entropy maybe more it's gonna cost you an additional KL divergence between | 4,058 | 4,085 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4058s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | P and P hat so the price you pay is a KL divergence we know that the log likelihood objective when we learn a generative model actually comes down to minimizing the KL divergence between the data distribution and the model that you learn so effectively when we're maximizing log likelihood we're minimizing this KL divergence effectively trying to find a distribution that will incur minimal overhead | 4,085 | 4,110 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4085s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | if we use it for an encoding to encode our data note there are two ways you can prove the KL is positive we can prove it because we know every encoding has to cost at least the entropy which means that this thing is positive because we know that already or you could prove it from first principles using Jensen's inequality which is shown at the bottom here but so | 4,110 | 4,130 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4110s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | we also will pay a price corresponding to the KL divergence so the better our generative model is the better our encoding scheme can be and so when we think about encoding with generative models there's really two things going on you want to somehow figure out a good encoding scheme but the other part is you want to do really well on this part over here which is maximum likelihood estimation | 4,130 | 4,155 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4130s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | because that's going to help ensure your encoding scheme is actually good on the data now what if P of X is high entropy if P of X is high entropy that would give a very long code length which you might not like you might be able to decrease the entropy by considering conditional entropies if you condition X on the context let's say what has come before you may be able to reduce the entropy | 4,155 | 4,183 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4155s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | in fact it's easy to prove that the conditional entropy of X given context C is never larger than the unconditional entropy H of X in fact autoregressive models do exactly this in an autoregressive model you predict the next symbol based on everything you've seen so far and often the next symbol or the next pixel is gonna be a lot easier to predict so a lot lower | 4,183 | 4,208 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4183s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | entropy than just independently predicting each pixel and so going back to the price we were talking about effectively this is saying that if you don't encode each symbol independently but you train a conditional distribution well you should not do worse and likely you should do better than when you would do it with each | 4,208 | 4,228 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4208s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | independent symbol being encoded separately all right how about the plus 1 it might seem pretty innocent entropy is optimal entropy plus 1 why not pay that price of 1 let's look at an example where it might actually be pretty bad and it's not going to be uncommon if we have a good predictive model which makes H of X very low then it could be a very high overhead for | 4,228 | 4,267 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4228s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | example our distribution over three symbols is very peaked mostly on the first symbol because we can predict the next letter maybe very easily the next pixel very easily a very peaked distribution 90% of the mass then 5% 5% the entropy of this thing is roughly 0.5 but we will pay a penalty of plus 1 and in fact every symbol we send is gonna cost us at least one bit because | 4,267 | 4,294 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4267s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | sending a bit sending anything across will be at least one bit so we actually pay a price that's pretty high so here's the optimal code for this we could use just a 0 for the 0.9 symbol and then 1 0 and 1 1 the expected code length will be 1.1 so we're going to pay a price here that's actually pretty big that's almost twice the length of the code compared to what entropy predicts as the | 4,294 | 4,323 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4294s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | lower bound so this quickly gets expensive if you send a long sequence of symbols you essentially send twice the sequence length compared to what in principle you wish you could be getting how can we get around this let's take a moment to think about that anybody any suggestions could you use larger chunks can you use larger chunks exactly why would you care about larger chunks the reason this | 4,323 | 4,365 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4323s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | price is expensive the plus one is expensive is because when you only have three symbols and you send one symbol you still need to use at least one bit but one symbol doesn't have much information in it in this case very little information if it is the first symbol if we send multiple symbols in one go let's say we turn this into a distribution over triples we have three symbols | 4,365 | 4,390 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4365s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | a b c so the possible triples could be a a a there could be a a b there could be a a c and so forth now we have 3 to the 3 is 27 possible combined symbols that we're trying to send and you'll find this will work out a lot more nicely and the overhead will become a lot less than when we try to send just one symbol at a time so let's take a look at this in | 4,390 | 4,422 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4390s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | action one way people did this was in actually sending faxes well I don't know if any of you have used faxes but essentially before there was email there was something called faxes where you could send documents over a phone line and the way it was encoded was essentially pixel by pixel is each pixel white or black and as you step through the entire page naively you have to send | 4,422 | 4,447 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4422s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | one bit per pixel white or black very expensive because usually it's going to be a lot of whites in a row or a lot of blacks in a row so you can instead encode it as the number of whites then the number of blacks then the number of whites it's called run-length coding and that's what they came up with so what are your symbols now your symbols are a run of let's say one white a | 4,447 | 4,476 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4447s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | run of two whites a run of three whites a run of four whites same for black you list out all possible run lengths that you might care about encoding and then you can look at the probabilities of each of those run lengths and then build a Huffman code and then you get the encoding that you're going to be using and you get a very compressed representation of that page that you're | 4,476 | 4,497 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4476s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | trying to send across we can then also ask a question about the English language how much entropy is there in the English language and people have done this experiment so the question here is what is the conditional entropy of X at let's say position n given X 0 through X n minus 1 how predictable is the next character so Shannon ran this experiment and he concluded that the English | 4,497 | 4,524 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4497s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | language is only one bit per character so if you train a conditional model that predicts the next character given everything before you can get an entropy of 1 bit how do you even figure that out the way he did it is you actually ask people to do completions so you would say okay here's the beginning of some text now predict for me the next character and then the person predicts a | 4,524 | 4,549 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4524s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | character and then Shannon would say right or wrong if it's right you're done if it's wrong you get to guess again 79% of the time people get it correct on the first guess 8% of the time it takes two guesses 3% of the time it takes three guesses and so forth whenever you get communicated back whether your guess was right or wrong effectively one bit of information was communicated about | 4,549 | 4,582 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4549s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | what the underlying character is and so you can just take the weighted sum here and you'll see that it lands at roughly 1 which means that you need one bit of information per character on average people have not gotten to that by the way I mean at least fully automatic text compression schemes have not gotten to that level yet but things are getting closer and closer over time | 4,582 | 4,612 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4582s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | so looking at practical schemes if you use just a 7-bit fixed encoding well then you have seven bits per character if you use entropy encoding of individual characters of your 2 to the 7 that is 128 characters you'd only need 4.5 bits per character that's of course the bound you get if you look at the entropy but | 4,612 | 4,641 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4612s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | you can't perfectly achieve that because you have to round to a finite number of bits so a Huffman code which is optimal achieves four point seven now if you look at the entropy of groups of eight symbols and then look at the average entropy per character you end up at 2.4 and it's thought that asymptotically this goes to 1.3 so what you want to do instead of encoding one character at a time is | 4,641 | 4,670 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4641s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | maybe encode eight characters at a time and a Huffman code will give something probably slightly above 2.4 for that ok I propose we take a five-minute break here we'll restart at 6:25 and we'll start looking at how some of these ideas can tie into the generative models we've been studying alright let's restart any questions before we go to the next topic alright | 4,670 | 5,021 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=4670s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | then let's take a look at how we can combine autoregressive models which we covered in one of the first weeks of this class with coding the key motivation here is that we want a flexible system to group multiple symbols to avoid the potential plus 1 overhead on every symbol without needing to decide ahead of time how long this thing is going to be and we want to be able to | 5,021 | 5,053 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5021s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | encode that on the fly so a question we might have is how many symbols and which symbols to group in a naive system that's what you'd have to do yes okay how many symbols am I going to group groups of 3 or 10 or whatever and then make some decisions about how to group them that's how it's actually done in naive schemes but we don't want to need to decide on how many symbols or which symbols when we're doing this instead | 5,053 | 5,086 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5053s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | we're going to encode every possible symbol sequence by mapping it into an interval I'll show an example very soon and this works for streaming data and is extremely compatible with autoregressive models so let's take a look at an example we have an alphabet with two symbols a and b the probability of a is 0.8 the probability of b is 0.2 if we individually encode this | 5,086 | 5,120 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5086s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | we'll have to maybe send a 0 for a and a 1 for b and that's gonna be a lot of overhead because a costs us just as much as b even though it's way more likely really there should be a way to make it cheaper the second most naive thing would be what we talked about earlier you ahead of time decide what three a's is going to be what three b's two a's and a b and so forth we're | 5,120 | 5,141 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5120s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | gonna do something quite different what we're going to do is we're going to say okay we get something coming in a a b a let's say first encode the first symbol here a we have a distribution available to us that models the probability of these symbols and I'm gonna say okay 80% chance it's a 20% chance it's b so we're gonna actually map the fact that I have an a to this interval | 5,141 | 5,170 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5141s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | over here it means we landed there you can think of all possible random events that could happen in the world and the ones that lie in the 0 to 0.8 interval that's the event that has happened then when the next a comes in we're going to take that interval that we're working with now 0 to 0.8 and say okay it's again an a so it must have fallen within the first 80% of that new | 5,170 | 5,197 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5170s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | interval then it's a b which means we fall in the last 20% of that interval there and then it's an a which again means the first 80% so what we end up with let me take a different color that's more visible there's a lot of green already we end up with the notion that this string a a b a gets mapped to a very specific interval within the 0 to 1 interval and the way we do this it | 5,197 | 5,235 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5197s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | should be clear that this is unique every string will have a unique interval it ends up in and so for a different sequence we end up with a different interval the idea behind arithmetic coding is that what we're going to communicate is the interval so a way to communicate this thing over here now we still have to decide how we're going to communicate it but that's the idea and you don't need | 5,235 | 5,262 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5235s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | to know ahead of time how long your bit string or your symbol string is going to be because this interval maps one-to-one to whatever symbol sequence you receive so we just need to encode this and we're good to go if there were more symbols coming in if there was another b after this we would have split this thing again and gotten a smaller interval if there was another a after this we | 5,262 | 5,285 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5262s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | would have split this up a bit more and gotten a smaller interval and so forth so a one-to-one mapping between symbol sequences of arbitrary length and intervals okay how do we code an interval let's start with a naive attempt at encoding an interval okay represent each interval by selecting the number within the interval which uses the fewest bits in binary fractional notation and use that as the | 5,285 | 5,316 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5285s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | code so for example if we had these intervals we could represent those with point zero one for the first interval point one for the second one and point one one for the third one because those are binary numbers that fall into each of those respective intervals it's not too hard to show that for interval size s so the width of the interval s you need at most negative log 2 of s | 5,316 | 5,345 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5316s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | bits rounded up to represent such a number which is great because the width s here is really the probability of the symbol sequence so that's achieving entropy coding up to the rounding the problem here is that these codes are not a set of prefix codes for example we have point one here that we would send for the second symbol but after we receive a one we wouldn't know | 5,345 | 5,371 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5345s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | did they send us the second symbol or was it the third symbol was it the second symbol sent twice or the third symbol sent once there's no disambiguation and so this scheme while it might seem reasonable at first and it's efficient it actually doesn't allow you to decompress correctly so what else can we do we have each binary number correspond to the interval of all possible completions so for the | 5,371 | 5,398 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5371s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | above example when we say point zero zero it means the interval from zero to 0.25 when we say point one zero zero it means the interval from 0.5 to 0.625 when we say point 1 1 that means the interval from point 7 5 to 1 so we're gonna want it to be the case that remember on the previous page any symbol sequence that I want to send will result in an interval we want to send we're | 5,398 | 5,426 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5398s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | gonna find a bit sequence such that when you look at the corresponding interval that we map to it which is given here by example that entire interval should fall inside the interval we're trying to encode leaving no ambiguity about which interval it belongs to and it will not be a prefix of anything else if you work out the details of this it turns out you get an overhead of 2 possibly instead of 1 | 5,426 | 5,455 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5426s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | but that's actually pretty good because when we do this kind of arithmetic coding we can code arbitrarily many symbols so the overhead of plus two is only incurred once for the entire sequence instead of incurred for every symbol we send across so it's a one-time overhead for the entire sequence that we encode this way obviously we'd like to avoid the plus | 5,455 | 5,481 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5455s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | two but it's not that bad any remaining challenges well sometimes when you follow this scheme what'll happen is that the interval that you're finding as you go through your a b a b a and so forth sequence and you start from that interval from zero to one you might find that you know at some point you have this interval like this but zero point five is here I'm just marking this the next | 5,481 | 5,510 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5481s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | thing you realize oh you're actually here the next thing you realize maybe you're here and the way it works out is that you always end up with an interval that's centered around 0.5 in that case you're never able to send that first bit till your entire sequence is complete and so the solution to that is that even though in principle to minimize the number of bits you need to send you | 5,510 | 5,536 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5510s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | need to go to the end of your symbol sequence and code the whole thing and then send all your bits if you want to minimize latency and not wait till the end of the whole thing before you can send anything at all you'll split into smaller blocks such that if it keeps straddling 0.5 at some point you say okay I'm done with this block I'm sending it across another thing is that | 5,536 | 5,556 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5536s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | this scheme as I described it assumes infinite precision it assumes that you can actually compute these intervals precisely and this interval always becomes smaller and smaller over time and so you could imagine that you start to underflow if you just do standard floating-point calculations to compute those intervals and then of course you would start losing information because | 5,556 | 5,577 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5556s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | the floating point system couldn't encode the information you need to encode there is a solution to that you can actually convert this all into a scheme where you only compute with integers and the compression survey that I linked on one of the very first slides explains how you can turn this into an integer implementation rather than relying on | 5,577 | 5,599 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5577s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | real numbers now that we know how to encode let's think about how autoregressive models can play well with this so far we said we have P of a and P of b but actually there's no need in this entire scheme that we described for the distribution used for P of x 1 to be the same distribution as we use for P of x 2 or x 3 and x 4 we can instead use conditionals that are more precise and | 5,599 | 5,626 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5599s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | more predictive of the next symbol and have lower entropy and a more effective encoding scheme and so this arithmetic coding scheme is perfectly compatible with autoregressive models you can just work your way let's say pixel after pixel get the distribution for the next pixel then the next pixel and encode with arithmetic coding accordingly working your way through an image the | 5,626 | 5,650 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5626s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | better the log probability the better the compression will be so a better likelihood of your autoregressive model will mean better compression of that data and these two schemes couldn't be more compatible perfectly lined up predict one symbol at a time and encode it one at a time and keep going so let me pause here to see if there are questions about arithmetic coding and then we'll switch to a very | 5,650 | 5,679 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5650s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | different kind of coding scheme okay now I'll switch to thinking about how we can use a variational autoencoder something called bits back coding and asymmetric numeral systems to encode information this at least to me was one of the most mind-boggling things how is this even possible it's confusing at first but I hope that the way we laid it out in the | 5,679 | 5,721 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5679s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | slides will make clear how this exactly works but there's this notion that somehow you get bits back so you send bits but it's actually not as expensive as you thought it was because you got bits back and we'll make it more precise soon the references for this part of the lecture are listed here with the initial bits back paper by Frey and Hinton from 97 actually that's the same one the | 5,721 | 5,746 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5721s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | first one is here which was on using it in the context of minimum description length for the weights of a neural network but then people started looking at source coding then there was this paper here the bits back ANS paper so a lot of people refer to it that way bits back ANS let me restart the slide for a moment so the first thing that happened with bits back | 5,746 | 5,782 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5746s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | is the thing at the bottom here this was in the context of machine learning it was not in the context of coding the next thing that happened was in the context of actually making this idea practical as a coding scheme but this used arithmetic coding it turns out that the scheme we're going to look at is not very compatible with arithmetic coding unlike autoregressive models which are | 5,782 | 5,807 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5782s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | almost designed to do arithmetic coding when you have a VAE it's not very compatible with that not in the same way so this results in lots of overhead lots of extra bits that need to be communicated chunking has to happen a lot when you usually don't want to chunk because you lose efficiency this was in 97 then in 2019 this beautiful paper came out by Townsend, Bird and | 5,807 | 5,833 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5807s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | Barber who showed you can do this with ANS rather than arithmetic coding so the underlying information theoretic scheme used in their approach is ANS rather than arithmetic coding we haven't covered ANS yet but the higher-level thing is that arithmetic coding looks at your data as a stream you go linearly through it ANS doesn't its encoding acts more like a stack pushing things and popping things | 5,833 | 5,859 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5833s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | from a stack is the way things get encoded and that matches much better with the ideas we're going to step through here and ANS is actually practical in fact ANS is used in many places but specifically here it's very well matched with VAE type coding schemes then in our work at Berkeley Jonathan Ho led a lot of this work together with Friso Kingma and myself we looked at essentially this paper here | 5,859 | 5,886 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5859s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | and made it more efficient by looking at hierarchical latent variable models rather than just single latent variable autoencoders all of this builds on this ANS scheme invented by Jarek Duda in 2007 which is used in many coding schemes and it's interesting that a lot of the information theory here was invented in the 1940s and 1950s right Shannon's theorem 1948 Huffman codes 1952 | 5,886 | 5,916 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5886s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | and ANS this other coding scheme invented in 2007 at a time when nobody thought you could still invent really groundbreaking new things that would be that widely used in compression and sure enough he did so quick refresher we covered entropy coding entropy coding assigns a log 2 of 1 over P of x i length for the encoding of a symbol entropy is a lower bound we know and no matter how | 5,916 | 5,944 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5916s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | hard you try the Shannon theorem says that we can't do better than entropy Huffman says that with the Huffman scheme you can get to entropy plus one arithmetic coding allows encoding arbitrarily many symbols in one go and pays a plus two but it's not a plus two per symbol it's a plus two for the entire symbol sequence so it's actually more efficient than doing Huffman for each | 5,944 | 5,968 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5944s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | separate symbol so that's what we've covered so far there were some key assumptions it is assumed that we have a model P of X for which we can do the following tractably enumerate all x for Huffman otherwise we can't build that tree we need to enumerate everything well if you want to enumerate all possible images you might possibly want to encode no you can't build a Huffman tree for that arithmetic | 5,968 | 5,994 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5968s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | coding gets around that because you only need to be able to assign probabilities to the next symbol in your sequence if you can do that you can use arithmetic coding but even that tends to require that there's a relatively small number of symbol values if your symbol can take on infinitely many values it's not really clear how you'd do arithmetic coding so one issue is when x is | 5,994 | 6,020 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=5994s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | continuous but that's actually quite fixable we'll look at that on the next slide and the other is when x is high dimensional and that's the main challenge we'll be looking at the observation here is that some high dimensional distributions still allow for convenient coding and we'll see some examples and what we will want to do is leverage that to efficiently code mixture models of these | 6,020 | 6,043 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6020s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | easy high dimensional distributions the key thing that we'll get from this part of the lecture is that as long as the single non-mixture model can be encoded efficiently we'll see a scheme that from there allows us to encode data coming from the mixture model also very efficiently and of course mixture models are often a lot more expressive than their individual components which means | 6,043 | 6,066 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6043s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | that we can now have a coding scheme that is designed around a much more expressive distribution class that you can fit to your data oh that slide seems to be out of order okay well so a real number x has infinite information so we cannot really expect to send a real number across a line in a finite amount of time because the digits keep going forever with new information in every bit so | 6,066 | 6,098 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6066s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | what we're gonna do is we're gonna assume if we have continuous variables x that we can discretize and that we're happy with the discretization so we discretize up to some precision t you can discretize in two ways imagine you have a Gaussian x lives on the horizontal axis and there is a Gaussian distribution you can discretize on the x axis or alternatively and often more | 6,098 | 6,126 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6098s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | convenient you can discretize in the cumulative distribution so as a function of x the cumulative distribution will run something like this it goes from 0 to 1 and you can discretize there first of all this lets you deal with the issue that if you just discretize on x what are you gonna do with the tails you'd probably make one big interval that goes to infinity but that's still somewhat | 6,126 | 6,158 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6126s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | inconvenient maybe also if you discretize on x well this piece has a lot of probability mass and this one doesn't have much probability mass if instead you discretize based on the cumulative this is just saying every interval has the same probability mass that's how I'm going to discretize you'd be located here then here from there then here for this interval you go here for this | 6,158 | 6,182 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6158s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | interval then you go well it's not perfectly drawn but you get the idea you get an interval here an interval here an interval there and so forth so that's a way you can discretize continuous variables with equal probability intervals or you can do it directly on the x axis we can look at something called the discretized variable x discretized with interval width t and then this | 6,182 | 6,212 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6182s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | is the discretized version of it okay look at the entropy of that variable which will be the probability of being in interval i which is t the width times the height P of x i and so this is an approximation of an integral with the value of the function here okay now when we work this out the log of the product is the sum of the logs and then we see here that this looks | 6,212 | 6,243 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6212s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | like an approximation of an integral so we say okay it's almost the same as the integral and what we get here is what is called the differential entropy and then this is some factor that ties into the discretization level so it seems we can actually use the differential entropy if we have a functional representation of our distribution and we can compute the integral for it we can | 6,243 | 6,272 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6243s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | understand what the differential entropy is and then the log of our discretization level will determine the overall entropy that would go into representing it as a discrete variable okay so that's a bit of background on how to deal with entropy of continuous variables part of it will be determined by our discretization now let's go to the actual challenge that we wanted to solve and | 6,272 | 6,297 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6272s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | we'll mostly think about discrete variables now but it also works for continuous ones so the key assumption is we have some high-dimensional P of X that allows for easy coding even though x is high dimensional it can sometimes still be easy to encode for example when X is Gaussian you can decompose it into independent random variables along each axis and each individual variable you can code efficiently as we've said on the previous | 6,297 | 6,343 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6297s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | slide or maybe for each x we can use an autoregressive model and we know how to do autoregressive encoding with arithmetic schemes and so forth these are examples of high dimensional situations where we can encode things efficiently there might be more but for now let's mostly think about this one over here mixture models allow us to cover a wider range of situations than their | 6,343 | 6,370 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6343s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | components for example a mixture of Gaussians is much richer than a single Gaussian for example a single Gaussian all it can do is look like this but a mixture model could have many of these bumps mixed together and then the overall thing would look something like that which is a much more complex representation that you can capture with this five component mixture than with a single Gaussian the | 6,370 | 6,400 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6370s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | key question we want to answer is if P of X is a mixture model of easily encodable distributions does that mean we can also efficiently encode P of X here we'll look at 1-d illustrations to get the point across in a way that's easy to draw on slides but keep in mind that we're covering a method that generalizes to higher dimensions if all you ever want to do is encoding | 6,400 | 6,430 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6400s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | of 1-d variables you can use many many methods it's not about 1-d that's just a way to draw things on the slide also we will not allow ourselves to rely on the domain of x being small because if the domain of x were small we could rely on that and do many other things so imagine a higher dimensional x that takes on many many values but somehow we can efficiently encode a single component | 6,430 | 6,457 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6430s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | of the mixture that we're using to represent P of X ok let's see what we can do now our running example is going to be a mixture model P of X is a weighted sum this is choosing the mode there are different modes indexed by i and there is a distribution of x given i so the way to think about it is that when we sample X we first sample a mode and once you sample the mode if we sampled | 6,457 | 6,486 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6457s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | mode one this is the distribution if we sample mode two this is the distribution if we sample mode three maybe this is the distribution if we sample mode four maybe this is the distribution and so forth the assumption is that each of these modes is itself easy to encode easy to encode means that we have a scheme that will give us close to this because that's what a good encoding would do it would | 6,486 | 6,516 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6486s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
pPyOlGvWoXA | cost you a number of bits equal to log 1 over P of x given i that is once we know it's mode i our distribution is P of x given i ok the first scheme we might consider is max mode encoding so max mode encoding what do we do we say ok we have a mixture distribution and in this method to code x well we don't know how to code directly from P of X but we know how to | 6,516 | 6,547 | https://www.youtube.com/watch?v=pPyOlGvWoXA&t=6516s | L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning | 
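
The transcript above walks through the Huffman construction (merge the two lowest-probability symbols, repeat) and a peaked three-symbol example with probabilities 0.9 / 0.05 / 0.05 whose expected code length is 1.1 bits. The sketch below is a minimal illustration of that construction, not the lecture's own code; the function name `huffman_code` and the symbol labels are ours.

```python
import heapq

def huffman_code(probs):
    """Build a prefix code by repeatedly merging the two least-probable subtrees."""
    # heap entries: (probability, tie-breaker, {symbol: partial codeword})
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # lowest-probability subtree
        p2, _, c2 = heapq.heappop(heap)   # second lowest
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        counter += 1
        heapq.heappush(heap, (p1 + p2, counter, merged))
    return heap[0][2]

probs = {"a": 0.9, "b": 0.05, "c": 0.05}          # the peaked example from the lecture
code = huffman_code(probs)
avg_len = sum(p * len(code[s]) for s, p in probs.items())
print(code, avg_len)   # a gets a 1-bit codeword, b and c get 2 bits; average length 1.1 bits
```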
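The transcript also states that coding with an estimate p-hat instead of the true p costs H(p) + KL(p || p-hat) bits per symbol before rounding. A quick numerical check of that identity, with made-up illustrative distributions (not values from the lecture):

```python
import math

def bits(q):
    """Ideal code length assigned to an event of probability q."""
    return math.log2(1.0 / q)

p     = [0.8, 0.1, 0.1]    # assumed true source distribution (illustrative)
p_hat = [0.5, 0.25, 0.25]  # model used to build the code (illustrative)

entropy       = sum(pi * bits(pi) for pi in p)
cross_entropy = sum(pi * bits(qi) for pi, qi in zip(p, p_hat))   # expected code length under p_hat
kl            = sum(pi * math.log2(pi / qi) for pi, qi in zip(p, p_hat))

assert abs(cross_entropy - (entropy + kl)) < 1e-12   # H(p) + KL(p||p_hat)
print(entropy, kl, cross_entropy)
```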
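The arithmetic-coding rows describe mapping a symbol string such as "a a b a" (with P(a)=0.8, P(b)=0.2) to a nested sub-interval of [0, 1). Below is a minimal sketch of just that interval-narrowing step, assuming infinite-precision floats (the transcript notes a practical implementation works with integers instead); the helper name `interval_for` is ours.

```python
def interval_for(symbols, probs, seq):
    """Map a symbol sequence to its sub-interval of [0, 1); width = product of symbol probabilities."""
    low, width = 0.0, 1.0
    for s in seq:
        # cumulative probability of all symbols ordered before s
        cum = sum(probs[t] for t in symbols[:symbols.index(s)])
        low += width * cum
        width *= probs[s]
    return low, low + width

symbols = ["a", "b"]
probs = {"a": 0.8, "b": 0.2}
lo, hi = interval_for(symbols, probs, "aaba")
print(lo, hi)   # width 0.1024 = P(aaba); identifying this interval needs about -log2(0.1024) bits
```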
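Finally, the discretization rows describe splitting a continuous (Gaussian) variable into intervals of equal probability mass by cutting the cumulative distribution at evenly spaced levels. A small sketch of that idea using only the standard library, with the helper name `equal_mass_bins` as our own assumption:

```python
from statistics import NormalDist

def equal_mass_bins(n_bins, mu=0.0, sigma=1.0):
    """Split a Gaussian into n_bins intervals that each hold probability mass 1/n_bins,
    by inverting the CDF at evenly spaced quantiles (outer edges extend to +/- infinity)."""
    d = NormalDist(mu, sigma)
    edges = [float("-inf")] + [d.inv_cdf(k / n_bins) for k in range(1, n_bins)] + [float("inf")]
    return list(zip(edges[:-1], edges[1:]))

bins = equal_mass_bins(8)
print(bins)   # each bin has probability 1/8, so coding the bin index costs log2(8) = 3 bits
```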