L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
Transcript of the lecture video: https://www.youtube.com/watch?v=pPyOlGvWoXA
improve upon the past state of the art in data compression. The specific point here is that it named itself after the Silicon Valley show, where the company is called Pied Piper; this one is called "Pact Pie", and it's actually a real thing that was presented at TechCrunch in 2015. Now, the first question you might ask is: can we have universal data compression? This is the kind of fundamental question you'll see a lot in this lecture; we ask questions that are very fundamental, where we can actually give very, very strong theoretical answers, sometimes negative answers. So, can we come up with universal data compression? What would that mean? It would mean: can we come up with something that, no matter what file you give it, can make it smaller and later decompress it
back out to the original? Well, is that possible? Okay, let's see. Imagine you want to compress every possible bitstream you could ever encounter. It turns out that's not possible; there is no way we can do this. What's the intuition? It should be simple: we'll do a proof by contradiction. Suppose you have a universal data compression algorithm that can compress every bitstream: no matter what you feed it, it will turn it into fewer bits, and you can later decompress it back out to the original. Okay. Now, given a bit string B0, you can compress it to get a smaller bit string B1, and it has to have strictly fewer bits, otherwise it's not a universal compressor. Now B1 you can feed into it again, and it'll turn that into B2, which is yet smaller. You keep doing this, and if you do this sufficiently many times, at some point you'll have a bit string of size 0. At that point it's obvious you cannot recover what the original was, because it could have been anything, and if everything eventually gets turned into a string of length 0, you cannot get back out what went in. So what this shows is that if somebody tells you "I have a universal data compressor, it can compress
everything, no problem," here's a proof that this is actually not possible. There is also another way to prove it, which is to do it by counting. You can say: okay, suppose your algorithm can compress all thousand-bit strings. How many thousand-bit strings are there? There are 2^1000 possible thousand-bit strings. Now, if we can compress all of them, that means we can pick every one of them and turn it into something smaller, and distinct, because otherwise we cannot get the original back out. But if we look at what's available among all possible shorter bit strings, there are actually fewer than 2^1000 of them, so you cannot encode all 2^1000 possible thousand-bit strings with shorter strings. Since we can't, it means we cannot compress all of them.
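To make the counting argument concrete, here is a minimal sketch I'm adding (not from the lecture slides) that counts the distinct nonempty bit strings strictly shorter than length n: there are 2 + 4 + ... + 2^(n-1) = 2^n - 2 of them, which is fewer than the 2^n strings of length n, so no lossless map can make every n-bit string strictly shorter.

    # Pigeonhole check: strings shorter than n cannot hold all strings of length n.
    def count_shorter_bitstrings(n: int) -> int:
        # number of nonempty bit strings of length 1 .. n-1  (equals 2**n - 2)
        return sum(2 ** k for k in range(1, n))

    for n in [4, 10, 20]:
        shorter = count_shorter_bitstrings(n)
        exact_length_n = 2 ** n
        print(n, shorter, exact_length_n, shorter < exact_length_n)  # always True: not enough room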
So we have two different proofs here showing that universal data compression is just not possible. Why is compression possible in practice, though? Even if you cannot universally compress everything, there are statistical patterns that you can exploit. For example, here's a piece of text, and I'll give you all a minute to read it. As you're reading this text you'll notice, well, likely you'll notice, that there's something funny about it: the words are mostly misspelled. But despite the words being misspelled, it's actually still very feasible to read it. Effectively, what it says is that most people have no problem reading a piece of text if, for every word, you keep the first two letters and the last two letters but randomly permute everything in between. That means the ordering of the letters in between doesn't carry much real information; it could be any ordering, so you don't need to stick to the original order. Even though the text here is scrambled this way, we can still understand it. So it means there's some redundancy; it means that certain sequences are just not very likely, and when you read this it's close to a sequence that you're familiar with, so you can easily map it onto that and still understand the words that were there originally.
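As a small illustration of the transformation being described, here is a sketch I'm adding (the sentence below is a placeholder, not the slide's text) that keeps the first and last couple of letters of each word and shuffles the middle:

    import random

    def scramble_word(word: str, keep: int = 2) -> str:
        # keep the first `keep` and last `keep` letters, shuffle everything in between
        if len(word) <= 2 * keep + 1:
            return word
        middle = list(word[keep:-keep])
        random.shuffle(middle)
        return word[:keep] + "".join(middle) + word[-keep:]

    sentence = "according to research the ordering of interior letters barely matters"
    print(" ".join(scramble_word(w) for w in sentence.split()))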
There's another example from images. On the left we see a bunch of real-world images, of flowers in this case; on the right we see random data. If your dataset looks like the data on the left, it's very compressible, because there are a lot of regularities. At an intuitive level, for example, neighboring pixels often have roughly the same value, whereas in the images on the right, which are completely random, there is no correlation between neighboring pixels that you can exploit to compress the representation. So these are two very different distributions: for the completely random distribution it's not clear how to compress, while for the real-world type of data you can already intuitively see that there are opportunities to compress. For example, you could just keep every other pixel; it wouldn't be perfectly lossless, but we could probably reconstruct most of the image from that.
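A tiny sketch of that "keep every other sample" idea (my own illustration, on a 1D signal rather than a real image): a smooth signal can be subsampled and then roughly reconstructed by interpolation, while random data cannot.

    import random

    def reconstruct_from_every_other(x):
        # keep x[0], x[2], x[4], ... and fill the gaps by averaging the neighbors
        kept = x[::2]
        out = []
        for i, v in enumerate(kept):
            out.append(v)
            if i + 1 < len(kept):
                out.append((v + kept[i + 1]) / 2)  # linear interpolation
        return out

    smooth = [i * 0.1 for i in range(20)]            # lots of local structure
    noise = [random.random() for _ in range(20)]     # no structure to exploit
    err = lambda a, b: sum(abs(u - v) for u, v in zip(a, b)) / len(b)
    print("smooth error:", err(smooth, reconstruct_from_every_other(smooth)))
    print("noise  error:", err(noise, reconstruct_from_every_other(noise)))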
Alright, so what we've covered so far is what compression is, what the goal in compression is, and why we might care, both from a practical point of view and from an AI point of view. We also looked at the fact that a universal lossless compressor is just not possible, and we looked at some intuition for there being redundancy in most of the data that we encounter in the real world. Because there is redundancy, intuitively speaking, there should be a way to exploit it, because it's only the data that really occurs in the real world that you need good compression for; for the data that doesn't really occur in the real world, even though it can in principle also be represented as bit strings, you might not care much about how well that kind of non-real-world data gets compressed. So for the remainder of this lecture we'll look at a couple of things. The first thing we want to look at is coding of symbols, so we'll start looking at what it means to actually have a compression system. This will lead to a method called Huffman coding, which is actually used in many, many of today's systems, and it's a quite intuitive, very simple way to understand how compression can effectively work. Then I'm going to look at some theoretical limits, and from there we'll look at some additional considerations for coding that will help us a bit more than what we get from the simplest version we cover first. From there we'll tie this into things that we've covered in this class: we'll look at autoregressive models, we'll look at VAEs, we'll look at flow models, and try to understand how these models can be
leveraged to do better compression. Alright, let's get this started. Here's one way of coding information, and just to be clear, there's no compression in this way of coding. ASCII is a system that assigns seven bits to everything that's on your keyboard, so every character you can type can be represented with seven bits, which means 2^7 possible characters can be represented this way. What's nice about this is that it's very easy to encode and decode: there's a very simple one-to-one mapping, always going to the seven bits for that character and back out to the character. But if you encode this way you're not exploiting any statistical patterns, so it's not compressing your information.
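For concreteness, a quick sketch of this fixed-length 7-bit encoding (my own illustration, not the lecture's):

    def ascii_7bit_encode(text: str) -> str:
        # every character maps to exactly 7 bits, regardless of how frequent it is
        return "".join(format(ord(c), "07b") for c in text)

    print(ascii_7bit_encode("hi"))   # 14 bits for 2 characters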
Maybe some keystrokes are far less likely than others, so maybe the ones that are less likely you should allow to use more bits, and the ones that are very likely you should try to represent with a very small number of bits, and overall you might have a win. That's the intuition behind a lot of compression schemes, but obviously here, with everything at seven bits, that's not going to happen; still, it's at least a reference as a starting point. So we'll need variable-length codes: codes that assign different lengths depending on how likely a symbol is. How do we avoid ambiguity? When your code is fixed-length it's very easy: the first seven bits are one character, the next seven bits are the next character, and so on. But if it's variable length, how do you know a character has been fully transmitted and the next one is starting? One way to do this is to ensure that no codeword is a prefix of another codeword. As you see bits come across the line, at some point you'll have seen all the bits for some letter, say, and because no codeword is a prefix of another one, at that point you know there's nothing else that can continue from it: this is the complete codeword sent across, and the corresponding character can be decoded.
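Here is a small sketch (my own) of that prefix-free property: a check that no codeword in a code table is a prefix of another.

    def is_prefix_free(code: dict) -> bool:
        words = list(code.values())
        for i, w in enumerate(words):
            for j, v in enumerate(words):
                if i != j and v.startswith(w):
                    return False   # w is a prefix of v: ambiguous while decoding
        return True

    print(is_prefix_free({"A": "0", "B": "10", "C": "110", "D": "111"}))  # True
    print(is_prefix_free({"A": "0", "B": "01"}))                          # False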
Another thing you can do, though it might consume more bandwidth or space, is to append a stop marker to each codeword; Morse code does this, but it can be a little wasteful. Instead we can have a general prefix-free code, and we'll look at that very soon. So let's look at Morse first. Morse code is a very old coding scheme from back when, to send information, you effectively just had a communication line over which all you could send was, say, a voltage going up and back down. You can make it go up briefly, or go up for longer, three times as long: a dot is a brief spike up in your voltage, say, and a dash is three times as long. The quiet spaces in between the dots and dashes are also part of the encoding: within a character there is a unit of quiet time between dots and dashes, between characters there is a total of three units, and between words there are seven units of quiet time. That way you can encode every character of the alphabet and all the numbers, and all you need to be able to do is send dots and dashes, short and longer signals, and pauses, to get everything across. People used this before telephones, in the days of the telegraph, to send information. Some things you can already see here: the letter A has a relatively short encoding, same for E, same for I, and that's because those are frequently used letters, while things that are less frequent, like maybe a Z or a J, have a longer encoding. More or less, the letters you get a lot of points for in Scrabble, like Q over here and X over here, have the long encodings, and the letters that don't give you many points in Scrabble have the shorter encodings, because there are many more words
that use them. Okay, that's a very specific scheme. The more general thing that people tend to use is so-called prefix-free codes, which can be represented as binary tries, or binary trees. So what is a binary trie or tree? It's a tree where, whenever a node splits, you hard-code ahead of time that the left branch is a zero and the right branch is a one. So you can build a tree, and you don't even have to put the zeros and ones on it, the way I'm putting them on here, because you always know the left side is a zero and the right side is a one. A trie is a specific type of data structure; the reason it's spelled with "ie" is because it comes from "retrieval", it's a data structure for easy retrieval of certain information, and that's also why it's often pronounced "tree". At the same time there are also trees spelled the usual way, which are also a data structure, so it can be a little confusing that they're pronounced the same way; some people will pronounce this as "try" to distinguish it from trees. So that's a binary trie. The way we're going to use it is that the symbols will always live in the leaves, and a codeword is a path from the root to a leaf.
So let's look at an example. Here we have a codeword table with one, two, three, four, five, six characters, and each character has an encoding as a sequence of bits, sometimes only one bit. You can see that there's a corresponding representation as a binary trie where all the characters are sitting in the leaves of the tree. Because every character sits in a leaf, here is what it means to get a message across. Say I'm receiving this message over here: I receive a zero, so what do I do? I go down this path and say, oh, I hit a leaf, that means I'm at the end, nothing left to go, I decode an A. Then I get this one over here; we just restart at the root: a one means go this way, I get another one, go this way, get another one, go this way, get another one, go this way, and I hit a leaf. I know I'm ready to decode, and it's a B. Because all the symbols live in the leaves, I always know, when I hit a leaf, which symbol I need to decode, and then I come back to the top to start decoding the rest of the message that's coming in.
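A minimal sketch (my own) of that decoding loop: accumulate bits and emit a symbol whenever a complete codeword has been seen; because the code is prefix-free, the codeword table itself is enough, we don't even need to build the tree explicitly. The six-symbol table below is hypothetical, in the spirit of the slide's example, not the slide's actual table.

    def decode_prefix_code(bits: str, code: dict) -> str:
        # code maps symbol -> codeword; invert it for decoding
        inverse = {w: s for s, w in code.items()}
        out, buffer = [], ""
        for b in bits:
            buffer += b
            if buffer in inverse:          # hit a leaf: a full codeword was seen
                out.append(inverse[buffer])
                buffer = ""                # go back to the root for the next symbol
        assert buffer == "", "bitstream ended mid-codeword"
        return "".join(out)

    # hypothetical prefix-free codeword table with six symbols
    code = {"A": "0", "B": "1111", "C": "110", "D": "100", "E": "1110", "F": "101"}
    print(decode_prefix_code("01111100", code))   # -> "ABD"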
Now you can of course ask yourself the question: for a given set of symbols that you want to send, are there multiple binary trees? And in fact there are; there are many, many trees you could put forward to come up with a code for these six symbols. Here is another example where the tree is set up a little differently, and we see the same string being compressed twice: on the left it requires 30 bits, on the right it requires 29 bits. So the name of the game here is: can we find a binary tree such that, as I try to encode my message, as I try to turn my original symbol sequence into a bitstream, that bitstream is as short as possible? Naively, you could search over all possible binary trees, but there would be many, many of them, and then decide which one is most efficient. We'll see better schemes than that, but the very naive way to put it is: just try all possible binary trees that have the symbols at the leaves, see for each one of them how long the bitstream is, and take the best one. We'll see an efficient method that gets us, in fact, to the optimal one. Okay, so the efficient method to find the optimal one, without needing to do that exhaustive
search I just described, is something called Huffman codes. Right now we'll cover how Huffman codes work procedurally, and later, once we've seen a bit more foundation in information theory, we will also prove that they are optimal. For now we're not yet going to prove optimality; we're just going to look at the procedure. Okay, so how does it work? The Huffman algorithm is very simple. Consider the probability p_i of each symbol i in your input; for example, if you have a long text file and you're encoding characters, you would do a count for each character and see what the probability is for each character to appear. Once you've done that, you start with one node corresponding to each symbol, so for each of these symbols you have a node. It starts as a disconnected forest, just a bunch of separate leaves really, not connected up to anything yet, and with each node you associate a weight p_i, which is the probability of that symbol. From there, you repeat the same process over and over until it's all connected together into a single tree. What is this process? You select the two trees with minimum probabilities p_k and p_l; initially, when each symbol is its own tree, that means you find the two symbols with the lowest probability, and later on, once you've done some merges, it'll be the trees whose roots have the lowest probability. Then you merge those two into a single tree, with associated probability the sum of the original probabilities. And that's it, that's all you need to do.
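Here is a minimal sketch of that procedure in Python (my own illustration, using heapq for the repeated "pick the two lowest-probability trees" step), run on the same six-symbol probabilities as the worked example that follows:

    import heapq
    from itertools import count

    def huffman_code(probs: dict) -> dict:
        """Build a prefix code from symbol probabilities via Huffman's algorithm."""
        tiebreak = count()  # avoids comparing trees when probabilities are equal
        # each heap entry: (probability, tiebreaker, tree); a tree is a symbol or a (left, right) pair
        heap = [(p, next(tiebreak), sym) for sym, p in probs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            p1, _, t1 = heapq.heappop(heap)   # the two trees with the smallest probabilities
            p2, _, t2 = heapq.heappop(heap)
            heapq.heappush(heap, (p1 + p2, next(tiebreak), (t1, t2)))
        _, _, tree = heap[0]

        code = {}
        def assign(node, prefix):
            if isinstance(node, tuple):       # internal node: left = 0, right = 1
                assign(node[0], prefix + "0")
                assign(node[1], prefix + "1")
            else:                             # leaf: a symbol
                code[node] = prefix or "0"
        assign(tree, "")
        return code

    probs = {"A": 0.2, "B": 0.1, "C": 0.05, "D": 0.21, "E": 0.36, "F": 0.08}
    print(huffman_code(probs))

Depending on tie-breaking and which merged tree goes left or right, the exact 0/1 patterns can differ from the slide's (for instance A and D may swap codewords), but the codeword lengths, and hence the average code length, come out the same.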
So let's take a look at an example of how this works on some example data. Here we have six symbols, each with its own probability associated with it, so let's step through what Huffman coding does. We have A with probability 0.2, B with probability 0.1, C with probability 0.05, D with probability 0.21, E with probability 0.36, and F with probability 0.08. Let's follow the algorithm. Which two have the lowest probability? It's C and F, so what do we do? We connect them up: C and F get connected, and together they get the sum of the probabilities, which is 0.13. What's lowest in probability now? It's the 0.1 here and the 0.13 over here, so we connect those up, and the node at the top now has probability 0.23. What's lowest now? We have a 0.2 and a 0.21. The 0.21 is somewhat inconveniently located, so I'm going to move it over here, putting D with its 0.21 off to the side, and D and A connect together for a 0.41. What are the two lowest now? It's the 0.23 over here and the 0.36 over here; E with its 0.36 is inconveniently located, so I'm going to relocate E over here. Alright, then we connect these, and together they have 0.59. The two lowest ones are the only two left, the 0.41 and the 0.59, and here is our Huffman encoding. Then what we do is label each left split with 0 and each right split with 1, and there we go, now we have an encoding. What is D? D is 00. What is A? A is 01. What is B? B is 100. What is C? C is 1010. E is 11, and F is 1011. This is a uniquely decodable code: for every symbol, once the bits have been sent across and you hit a leaf of the decoding tree, you know you've got an entire symbol, and then you start again at the top of the tree to decode the next symbol.
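As a quick sanity check on the worked example (my own computation, reading the codeword lengths off the code just derived: D, A, E get 2 bits, B gets 3, C and F get 4), we can compare the average code length with the entropy of the distribution:

    from math import log2

    probs   = {"A": 0.2, "B": 0.1, "C": 0.05, "D": 0.21, "E": 0.36, "F": 0.08}
    lengths = {"A": 2, "B": 3, "C": 4, "D": 2, "E": 2, "F": 4}   # from the tree above

    avg_len = sum(probs[s] * lengths[s] for s in probs)          # about 2.36 bits/symbol
    entropy = sum(p * log2(1 / p) for p in probs.values())       # about 2.31 bits/symbol
    print(avg_len, entropy)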
So we haven't covered why this is optimal, but hopefully the procedure is clear: it's a relatively simple procedure you can run for any symbol table that you have, and it relies on these probabilities. You might already see the foreshadowing here: of course, this is where generative models might come in handy. Very good generative models might allow us to build good probability estimates that we can then use to find a really good encoding, because of course, if these probabilities are wrong, then this tree will not be a very good tree for encoding the data. Here's another example that you can work through yourself, and in fact, in many of the applications that are used on the internet, Huffman codes are used to
compress their data. Alright, so maybe let me pause here for a moment to see if there are any questions. If you have any questions, feel free to type them into the chat window or to just speak up or raise your hand. Oh, hi. [Student] Yeah, I had a question. Something I noticed about Huffman codes is that the number of symbols or number of values that you have is fixed, but what if you're trying to encode a more complex data structure? Maybe images have fixed dimensions, but audio, for example, can have varying dimensions. So is there a way other than discretizing, or is the notion just to make chunks and then compress fixed-size chunks that take discrete values? [Instructor] Yeah, very good question. So chunking is definitely an option: you just send things over in chunks, which, by the way, can often be desirable for other reasons too, even if you had a fixed-size thing. Let's say you had a video you wanted to watch at home: if somebody first has to encode the entire video, send it across as one file, and only then you can decode and play it, that's not great; you want to be able to stream it across. So there are reasons to chunk where you give up a little optimality of compression but reduce the latency of getting things across. We will look at some other codes a little later, and it's a really good question that actually fits very well with what we'll be describing: the coding systems we'll look at later, arithmetic coding and asymmetric numeral systems, are able to encode streams in an effectively continuous way, such that if the stream is longer they can keep encoding; they just continue to encode on the fly as you go along. In practice people will often still chunk and stop at some point, because otherwise you might have to wait too long before you can decode, but in principle these can work with arbitrary lengths, without knowing the length ahead of time. So we'll cover that, but you're absolutely right: Huffman codes do make the strong assumption that you have an alphabet of symbols and you build an encoding for that alphabet, and you don't encode anything beyond those specific symbols. [Student] Makes sense, thank you. Will compression usually be done in terms of bits? Like, will
the output of the encoder be something like a lookup table, or won't it? [Instructor] Yes: the way we think of compression, and the way it's ultimately done on computers, is that what comes out is a sequence of bits. You can think of a single bit as, in a sense, the minimal unit of information: a single bit can be either 0 or 1, and it is the minimum that you can send across in terms of information, just a single 0 or a 1, because there are two options. You can only send information across if you have more than one option; if there is only one option, there's nothing you can do, no information can be transmitted. In fact, as we'll see, the amount of information in a piece of data is measured in bits: the minimal number of bits required to represent the original is the amount of information in that piece of data. One quick thing to add, maybe: when you actually transmit over certain lines that are not, let's say, a computer storing zeros and ones, there are transmission schemes where you send maybe two bits in one go, using something closer to a continuous channel that you then discretize on the other side to get out several bits at once. So that can also happen under the hood, but in terms of the information-theoretic properties, we tend to think of it as turning everything into a sequence of bits. Alright, great questions, thank you. Let's move to the next part, which is theoretical limits. What we're going to cover here is, to me, some of the
most beautiful math that any discipline has to offer. Somehow what we're going to cover, we can cover in just a few slides, quite comprehensively, and still get very deep insights and guarantees across, so I'm very excited about getting to talk about this today. One thing you might have heard of in the context of information theory is this thing called entropy, due to Shannon, a sort of measure of information. So what is entropy? By definition, and it's just a mathematical definition, we're not talking about properties yet: the entropy of X, where X is a random variable, so X really comes with some distribution p(x). We measure the entropy of the random variable, not of a specific realization of that variable, but the entropy of the distribution, or of the random variable as a whole. The definition is: you sum over all possible values the random variable can take on, and you take a weighted sum, weighting by the probability of taking on that value, of log2 of 1 over p(x_i). Okay, so this might look like it comes a little bit out of nowhere, but let's get a little bit of intuition for why this might be a meaningful way to measure entropy, which is the amount of uncertainty you have in a distribution.
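Written out, the definition being described is (in LaTeX):

    H(X) = \sum_{i} p(x_i) \log_2 \frac{1}{p(x_i)} = -\sum_{i} p(x_i) \log_2 p(x_i)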
The idea is that when there's a lot of uncertainty about the random variable, you need more bits on average to send across, to tell the other person what the outcome was. So I have a random variable, I run an experiment, I see the outcome of that random variable, and I want to communicate the outcome to you: how many bits do I need to send on average? In fact, this doesn't just talk about that, it also kind of hints at an encoding scheme: it says the number of bits you would use for an outcome x_i is going to be log2 of 1 over p(x_i). So let's look at an example distribution. Here's a distribution where the random variable can take on five values, with probabilities 1/4, 1/4, 1/4, 1/8, 1/8, and we can compute the entropy: it's 2.25. Then let's look at another distribution, much more peaked: the first value has probability three quarters and then 1/16 for everything else; compute the entropy and it's about 1.3. So 1.3 versus 2.25: the entropy is a lot larger on the left than on the right. Why is that? Because if I run an experiment on the random variable on the left and then want to communicate the outcome, there are actually many possible outcomes that are pretty likely, so it's not like you can come up with a very efficient encoding scheme, because pretty much everything has some reasonable probability and you're going to have to be able to send it across. Whereas here on the right, what happens is that the first outcome is extremely likely, so if you encode that first outcome with a very small number of bits, then most of the time you have to send almost nothing, and yes, sometimes you have to send more bits to get the other outcomes across, but most of the time it's very cheap. That's effectively what's going on in this equation; for now we're building intuition, and we'll make this a lot more formal very soon.
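A quick check of those two numbers (my own computation, matching the values quoted in the lecture):

    from math import log2

    def entropy(ps):
        return sum(p * log2(1 / p) for p in ps if p > 0)

    print(entropy([1/4, 1/4, 1/4, 1/8, 1/8]))          # 2.25
    print(entropy([3/4, 1/16, 1/16, 1/16, 1/16]))      # about 1.31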
So let's take a look at another example. Think back to our binary trees for encoding some set of symbols. We have symbols A, B, C, and D. If the probabilities are 1/2, 1/4, 1/8, 1/8, then this tree over here is an optimal way of encoding them: half the time you send just one bit, for A; the other half of the time you have to cover the rest, and to say that you're covering the rest you send a one. Then, within that other half, half the time you send what says it's B, and the other half of the time you signal with a one that it's one of the other two, and at the end you decide which one it is when you're down there. Even though we haven't proven this, intuitively it should make sense that this is a very good scheme for encoding this kind of distribution over symbols, because you can't send anything less than one bit for A, otherwise you haven't communicated anything; A is the most frequent symbol, and all you have to do is send one bit. For B, well, you first signal that it's not A, and then you send one more bit to communicate that it's B, and similarly for C and D. This encoding scheme uses a length that is the base-2 log of 1 over p(x_i). So you could imagine a world where every probability associated with a symbol, so you have some symbol x_i and its probability p(x_i), can be expressed as 2 to the power minus l_i; then you can encode that symbol x_i with a bitstream of length l_i, using the same scheme, in a tree built up the way the tree was built up over here. We haven't proven this, but that's the rough intuition, and we'll of course see things that generalize this to symbols where p(x) is not necessarily one over two to the power of something; it could be any probability, not just a power of 1/2.
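A quick sketch checking that intuition on the A, B, C, D example (my own code; the codewords are the natural ones read off the tree described above):

    from math import log2

    probs = {"A": 1/2, "B": 1/4, "C": 1/8, "D": 1/8}
    code  = {"A": "0", "B": "10", "C": "110", "D": "111"}   # lengths 1, 2, 3, 3

    for s, p in probs.items():
        assert len(code[s]) == log2(1 / p)                  # length matches log2(1/p)

    avg_len = sum(p * len(code[s]) for s, p in probs.items())
    entropy = sum(p * log2(1 / p) for p in probs.values())
    print(avg_len, entropy)    # both 1.75: this code exactly achieves the entropy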
Okay, so that's some high-level intuition. Let's now take a look at some of the theory that we can put down. The first main theorem is the Kraft-McMillan inequality. What it says is that for any uniquely decodable code C, and this is the setting where somebody tells you "I have a code and it's uniquely decodable", and note that if it's not uniquely decodable you can't really use it to do lossless compression, so codes do need to be uniquely decodable or we're not going to consider them for lossless compression. So somebody comes up with a uniquely decodable code C. What does that mean? It means a mapping from symbols to bit strings, a bit string corresponding to each symbol. If it's indeed uniquely decodable, then a certain property holds: for each symbol and its corresponding encoding, the bit word if you will, we can look at the length of the encoding, and there's a property satisfied by these lengths. So somebody hands you a table of symbols and bit strings: A maps to some bit string, B maps to some other bit string, and so forth. If the code is uniquely decodable, then the lengths you encounter there will satisfy this property: the sum has to be smaller than or equal to one. What does that mean? These are negative powers of two, so it's effectively saying that the lengths have to be large enough; they always have to be at least a certain length, otherwise the inequality would not be satisfied. Summing it up, this is saying: if someone hands me a uniquely decodable code, I can guarantee that the codewords have to be relatively long; they cannot be shorter than a certain amount, because otherwise they would not satisfy this property.
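The inequality being referred to, written out in LaTeX, is

    \sum_{i} 2^{-\ell_i} \le 1

where \ell_i is the length of the codeword for symbol i.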
What's maybe more surprising is that the statement actually also holds in the opposite direction. The opposite direction says: if you have a set of lengths l_i that satisfy that same inequality, then there is a code you can build, in fact a prefix code, which is very convenient to deal with and is uniquely decodable, with exactly those lengths. So it's a back-and-forth kind of mapping: if something is uniquely decodable, the inequality is satisfied, and if a set of lengths satisfies the inequality, you can build a uniquely decodable code, in fact a prefix code, a tree that allows you to encode symbols with these codeword lengths. What does this mean? It means that since this property holds true for any uniquely decodable code, whenever someone gives you a uniquely decodable code this property will be true, and when this property is true there is also a prefix code with the same lengths. So we never need to resort to anything but prefix codes. If somebody says "I have a very clever scheme to make the bitstream uniquely decodable", one where you might have to look ahead and look at many places to decode, you can say: no need, I can use the same encoding lengths and build a prefix code that will have the same efficiency as your other uniquely decodable code, which would be more troublesome to decode. So we're going to restrict attention to prefix codes. Alright, so what's under the hood here? Let me give you a quick
proof sketch. One direction: for any prefix code C, and that's a subset of what's on the previous slide since prefix codes are uniquely decodable, the inequality is satisfied; that's what I stated. What's the sketch? Order all the lengths of your codewords; we have a prefix code, and for a prefix code we can build a tree. If we look at the tree, it would initially end over here at those red dots, because that's where the codewords are, but we expand the tree to be of equal depth everywhere: even though maybe your symbol A would be encoded here, you keep expanding, because you want to make it all equal depth. After you've done that, you can do a simple count. Take a codeword, for example this one over here: how many leaves of the expanded tree are covered by it? Well, the whole tree is of depth 4, and this codeword is at depth 2, so what sits under it is 2 to the (4 minus 2) leaf nodes; under this other one we have 2 to the (4 minus 1), so 8, leaf nodes; and so forth. Every codeword covers some leaves of the expanded tree, and since it's a prefix code there is no overlap, it's a clean tree, so the total number of leaves covered is at most the total number of leaves, which in this case is 2 to the 4, or in general 2 to the L_n, with L_n the maximum codeword length you're considering. So you get the inequality that the number of leaves covered is smaller than the total number of leaves you could have in the tree; you just divide both sides by 2 to the L_n and you get this thing over here.
So it's not too hard to prove; the details of the proof don't matter too much, but it can be done in one slide. That's the first part. How about the second part? The second part says: for any set of lengths, if the inequality is satisfied, then we can build a prefix code tree with those lengths. How is this done? You consider a full tree of depth L_n, which is the longest length, and for each i you pick any node at depth l_i that is still available: you go into the tree at depth l_i, ask whether anything is still available, okay, I pick this one. Once you pick it, you consider everything below it used up, so from that point on nothing below it is available anymore. This consumes 2 to the (L_n minus l_i) leaves of the expanded tree, and if you count up how many leaves you're going to cover in this process, it's going to be the sum on the left here. We're told that the inequality at the top holds true, which means this sum is smaller than 2 to the L_n, which means that we can fit all of this inside the tree, so we are able to fit all the codewords inside a tree. So those are two quick proofs; you don't need to know how to prove them going forward, but I wanted to get across that these are actually relatively simple to prove.
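Here is a small sketch (my own) of that constructive direction: given lengths whose Kraft sum is at most one, assign codewords greedily from shortest to longest, which is exactly the "pick a still-available node at each depth" idea (this is the standard canonical-code construction).

    def prefix_code_from_lengths(lengths):
        """Given codeword lengths with sum(2**-l) <= 1, return a prefix code."""
        assert sum(2 ** -l for l in lengths) <= 1, "lengths violate Kraft-McMillan"
        codewords, next_value, prev_len = [], 0, 0
        for l in sorted(lengths):
            next_value <<= (l - prev_len)          # descend to depth l in the tree
            codewords.append(format(next_value, f"0{l}b"))
            next_value += 1                        # move to the next free node at this depth
            prev_len = l
        return codewords

    print(prefix_code_from_lengths([1, 2, 3, 3]))      # ['0', '10', '110', '111']
    print(prefix_code_from_lengths([2, 2, 2, 3, 3]))   # Kraft sum is exactly 1, still fits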
A consequence of this is probably something you've heard many, many times, and that we will now be able to prove very easily: for any message distribution p(x), some distribution over symbols, and any associated uniquely decodable code C, the average encoding length, so the expected length of your code when you encode a symbol, will always be at least the entropy of the distribution. This is due to Shannon in 1948: entropy is a lower bound on how many bits you need to encode symbols coming from a certain distribution. So let's step through the key steps to get there. This is what we're starting from: the difference between the entropy and the expected code length. Entropy is this thing here; the expected code length is: look at all possible symbols, look at the length of each, and take the weighted sum. We have p(x_i) here and p(x_i) over here, so we can bring these together and we get this expression over here. Then, to bring the terms even closer together, we're going to say, well, l_i equals log2 of 2 to the l_i. We now have a difference of two logs, so we can bring the things inside the logs together, multiplied together, or divided by each other where there's a negative sign, so that term appears in the denominator here. Then what is this step over here? What are we doing? Let me expand it: we're essentially replacing the expected value of the log of something with the log of the expected value. That's Jensen's inequality; we've seen it for variational autoencoders, and you see it in many, many places in machine learning. We just apply Jensen's inequality: the expected value of the log is smaller than the log of the expected value, and as you can see, the expected value of the log is over here whereas the log of the expected value is above, so we have Jensen's inequality applied here.
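Reconstructing the chain of steps being described on the slide (in LaTeX):

    H(X) - \mathbb{E}[\ell]
      = \sum_i p(x_i) \log_2 \frac{1}{p(x_i)} - \sum_i p(x_i)\, \ell_i
      = \sum_i p(x_i) \log_2 \frac{2^{-\ell_i}}{p(x_i)}
      \le \log_2 \sum_i p(x_i) \cdot \frac{2^{-\ell_i}}{p(x_i)}     % Jensen's inequality
      = \log_2 \sum_i 2^{-\ell_i} \le \log_2 1 = 0                  % Kraft-McMillan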
How about the next step? This is going to be the Kraft-McMillan inequality: it says that if we have a uniquely decodable code, then this sum over here has to be smaller than or equal to one, and then log of one is zero and we're done. So to prove Shannon's theorem, all we needed was Jensen's inequality and then the Kraft-McMillan inequality, and we're good to go; we have the full proof. Let me maybe pause here, since this is a pretty big result, and see if there are any questions. Alright. So at this point we've proven that for any uniquely decodable code anybody can come up with, with certain lengths for the codewords, you can use a prefix code with those same lengths if you want to, which makes things very convenient, and also that the expected encoding length is never going to be better than the entropy.
The question you might have next is, well, how close can we get to the entropy? Can we find a code that achieves H(X), or close to it? Because if we can, then we know we're doing something optimal. Okay, so here's one way to think of it: the expected code length would equal the entropy if we take the lengths to be exactly the quantity inside the expectation, namely l_i = log2 of 1 over p(x_i), and we'd be good to go. Now, in practice that might not be a natural number, so you might have to round it up to the nearest natural number to actually make it a bit sequence. This is essentially known as Shannon coding. So how about we propose this: we're going to try to encode with these lengths. The first question you should have is: is that even possible? Is this a valid set of lengths, or would these be lengths that don't actually correspond to a code? Well, Kraft-McMillan allows us to check: for a given set of lengths, is there a code that corresponds to it? So let's check whether we can find a code that matches up with this. This quantity over here is what we have on the left-hand side of the Kraft-McMillan inequality, and we want to hopefully prove that it's smaller than or equal to one. To prove that, we have to take a few steps; if we can prove it, then it means such a code exists and we're good to go, we can actually do this entropy coding. So this sum is equal to the expression with the code lengths filled in, given by the quantity over here.
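Written out, the chain of steps the next few sentences walk through is (in LaTeX), with \ell_i = \lceil \log_2 (1/p(x_i)) \rceil:

    \sum_i 2^{-\ell_i}
      = \sum_i 2^{-\lceil \log_2 (1/p(x_i)) \rceil}
      \le \sum_i 2^{-\log_2 (1/p(x_i))}
      = \sum_i p(x_i) = 1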
Then the next expression is greater than or equal to this one, because we're getting rid of the rounding up: the rounding happens in a negative exponent, so by getting rid of it we end up with something bigger. Then this thing is easy to simplify: two to the log two of something is just that something, which is what we have here. Now, the sum of the probabilities is equal to one, and we are good to go: we have that the sum over i of 2 to the minus l_i is smaller than or equal to 1, which, as we know from Kraft-McMillan, implies that there exists a prefix code that works with these lengths. So we now know that we can do entropy coding. This would be an alternative scheme to Huffman coding: the way we would build the encoding, the tree here, would be to look at the probabilities of all your symbols, assign the lengths, and then still find codewords that match up with them; but assuming you can run some search or some other procedure to find those codewords, you know they exist, so you just need to find them and then you're good to go. How good is this? Well, there's a little derivation we can do showing that this is very close to achieving the entropy. Look at this over here: what's the expected length? It's the weighted sum of the lengths; fill in the lengths. Then what is this step over here? Well, this length involves rounding up, so it could go up by one relative to the real number inside; that's the "one plus" over here. Once you have that and simplify it, the one comes out front, because it's summed over all p(x_i), and then in the remaining term we have the entropy.
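To tie the pieces together, here is a small numerical sketch (my own, reusing the six-symbol example from earlier) of Shannon coding: assign lengths l_i = ceil(log2(1/p_i)), check the Kraft sum, and compare the expected length with H(X) and with the H(X) + 1 bound being derived here.

    from math import ceil, log2

    probs = {"A": 0.2, "B": 0.1, "C": 0.05, "D": 0.21, "E": 0.36, "F": 0.08}

    lengths = {s: ceil(log2(1 / p)) for s, p in probs.items()}   # Shannon code lengths
    kraft = sum(2 ** -l for l in lengths.values())               # <= 1, so a prefix code exists
    avg_len = sum(probs[s] * lengths[s] for s in probs)
    entropy = sum(p * log2(1 / p) for p in probs.values())

    print(lengths)
    print(kraft <= 1, entropy, avg_len, entropy + 1)             # H(X) <= E[length] <= H(X) + 1

Note that on this example the Shannon code is within one bit of the entropy, as the bound promises, but the Huffman code built earlier (about 2.36 bits per symbol) does better.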