video_id: string, length 11
text: string, length 361–490
start_second: int64, 0–11.3k
end_second: int64, 18–11.3k
url: string, length 48–52
title: string, length 0–100
thumbnail: string, length 0–52
pPyOlGvWoXA
Right, so in our work what we did was look at latent variable models that are not just one layer. We know that the more powerful the model, the better the log likelihoods we get out of it, so we should get better compression. Here we're looking at a setting where the model has a Markov chain structure over the latent variables, so there are latent variables Z_L,
9,799
9,833
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9799s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
Z_{L-1}, and so on down to Z_1, and then X. So this is the graphical model of the sampling path, and the inference path, the Q's, goes the other way, and they're both Markov chains. So this is a particular type of model that we're looking at, and if you have this particular model with this sort of chain structure, there are two ways to view it. You can just view it as a VAE
9,833
9,866
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9833s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
with this block of latent variables treated as just one latent variable, and then you can run bits-back ANS (BB-ANS) and that works perfectly fine. But another way to view it is as a latent variable model with just one latent; let me just draw the layers again, so here's X, Z_1, Z_2, Z_3. You can view Z_1 as the one and only latent variable, but then you see that its prior is a VAE with the
9,866
9,901
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9866s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
same structure: its prior is P(Z_1), which is itself a VAE whose prior is P(Z_2), which is another VAE, and so on. So these are two equivalent ways of looking at the same model. In terms of log likelihood they're the same, because if you just write down the variational bounds they're equal, but they suggest slightly different compression algorithms with different practical
9,901
9,929
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9901s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
consequences. The idea is that instead of treating these Z's as one single block, one large latent variable, you can recursively invoke bits-back coding into the prior. You can just code the first variable. Here is the algorithm as usual: basically decode Z, then encode X, then encode under the prior; so this is P(X given Z)
9,929
9,964
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9929s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
and here Q(Z given X). What you can do instead is code just the first layer and then recursively invoke bits-back coding into the subsequent layers. The consequence of doing this — I won't go through the exact steps — is that you no longer have to decode the entire block of latent variables at the very first step; rather
9,964
10,007
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=9964s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
you just need to decode one of them, then you can add more to the bit stream, then decode more, and so on. What that means is that you need fewer auxiliary bits to start bits-back coding. Remember, for BB-ANS to make sense you need a bit stream with some bits on it to even sample Z in the first place, and those bits must be sent across,
10,007
10,035
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10007s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
and if there are no bits there then you end up wasting them. So if you're able to avoid decoding too many latent variables in the first go, you can save on transmitting those auxiliary bits. You can see that in these experiments, especially for deep latent variable models, we're able to get better code lengths compared to just decoding
10,035
10,064
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10035s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
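To make the auxiliary-bit argument concrete, here is a minimal sketch in Python of the idealized bit accounting. It is not the paper's implementation: it just tallies -log2 probabilities instead of driving a real ANS coder, and the toy probabilities at the bottom are made up. The point it illustrates is that the interleaved, layer-by-layer scheme gives the same net code length but needs far fewer bits on the stream up front.

```python
import numpy as np

def bits(p):
    # Idealized cost in bits of coding a symbol of probability p with a perfect coder.
    return -np.log2(p)

def bitsback_accounting(q_probs, p_cond, p_top, interleaved=True):
    """Idealized bit accounting for bits-back coding over a chain of L latents.
    q_probs[i] : q(.) evaluated at the i-th latent we decode
    p_cond[i]  : p(x|z_1) for i = 0, then p(z_1|z_2), ..., p(z_{L-1}|z_L)
    p_top      : p(z_L), the top prior
    interleaved=True  -> recursive scheme: decode one latent, encode, repeat
    interleaved=False -> plain BB-ANS: decode the whole block of latents first
    Returns (net code length in bits, peak auxiliary bits needed)."""
    decode_costs = [bits(q) for q in q_probs]
    encode_costs = [bits(p) for p in p_cond] + [bits(p_top)]
    net = sum(encode_costs) - sum(decode_costs)          # the variational bound, in bits
    running, peak = 0.0, 0.0
    if interleaved:
        for d, e in zip(decode_costs, encode_costs):     # decode z_i, then encode
            running += d
            peak = max(peak, running)
            running -= e
    else:
        peak = sum(decode_costs)                         # must pop all latents up front
    return net, peak

# Toy numbers: the net code length is identical, but the interleaved scheme
# needs far fewer auxiliary bits to get started.
q = [2**-8] * 4                      # each q(z_i|.) costs ~8 bits to decode
p = [2**-20, 2**-6, 2**-6, 2**-6]    # p(x|z_1), then the conditional priors
print(bitsback_accounting(q, p, p_top=2**-6, interleaved=True))   # (12.0, 8.0)
print(bitsback_accounting(q, p, p_top=2**-6, interleaved=False))  # (12.0, 32.0)
```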
pPyOlGvWoXA
the entire block of latent variables at once. Right, so that was VAEs; let's move on to how to turn flow models into compression algorithms. In this class we went through a series of likelihood-based models, like autoregressive models and flows, and we saw in this lecture that really any likelihood-based model is a compression algorithm. So
10,064
10,094
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10064s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
what about flow models? They should also be compression algorithms, and what's particularly appealing about them is that we can write down the exact log likelihood of a flow model; this is not a bound, this is the real thing, so hopefully we should be able to get some really good compression with it. So let's think about what that actually means. It turns out that it
10,094
10,119
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10094s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
doesn't really make sense to ask for a compression algorithm that achieves this code length, which is just the flow log likelihood formula. The reason is that flows are density models, and it doesn't make sense to code continuous data because you'd need infinite precision to do that. So rather, what we're going to say is that we'll code data discretized to high
10,119
10,142
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10119s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
precision. So you have your space of data, let's say the space of all possible images, and then we tile it with this very fine grid, and we're going to discretize every possible data point. Instead of coding a data point exactly, we'll just code the bin that it lies in, like that. So g(x) is the cube, the bin, that some data
10,142
10,168
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10142s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
point lies in. The point of doing this is that if you define a probability mass function by integrating the density given by the flow over these cubes, then you get a negative log likelihood that looks like this: it's just negative log of the density times delta. So this is now a probability mass function, and now it
10,168
10,194
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10168s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
makes sense to say that we can compress up to this code length. So the code length we're going to aim for when we compress with flow models is this: negative log of the flow density times delta. It's really just the same thing plus this additional term here, which is just the number of bits of discretization; it can actually be a lot of bits, but we can recover
10,194
10,218
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10194s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
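A small worked example of this code length, with made-up numbers (the log density and the bin width delta below are hypothetical, not from the lecture): coding x discretized to bins of width delta under a flow density p(x) costs -log2(p(x) * delta^d) = -log2 p(x) + d * log2(1/delta) bits.

```python
import numpy as np

d = 3 * 32 * 32          # dimensions of, say, a CIFAR-sized image
log2_p_x = -4096.0       # hypothetical log2 density of x under the flow
delta = 2.0 ** -16       # discretization width per dimension

disc_bits = d * np.log2(1.0 / delta)          # the "number of bits of discretization"
total_bits = -log2_p_x + disc_bits
print(total_bits, "bits total;", disc_bits, "of them are just discretization")
```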
pPyOlGvWoXA
them later. Right, so now we have a probability mass function corresponding to the flow. Can we just run Huffman coding? The answer is no, because to do that we'd need to build a tree as big as the number of possible data points, and we're working with large images here, so that's exponential in the dimension and not tractable. So we need to harness
10,218
10,248
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10218s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
the model structure; we actually have to make use of the fact that this is a flow model. One naive attempt, maybe the most intuitive thing, is to take the latent that you get out of the flow: say we want to code X, so why not just compute Z by passing X through the flow and code Z using the prior? So
10,248
10,274
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10248s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
that's very simple, but unfortunately it doesn't work. You can just write down the code length that you get: it's negative log P(Z) times, let's say, delta. But if the flow model is trained well, the distribution of Z's will match the prior; say the prior is Gaussian, then you end up coding Gaussian noise using a Gaussian prior, so that's no
10,274
10,301
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10274s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
compression at all. If you compare this expression with the expression up here, you see that this is the missing term: somehow this naive approach does not take into account the Jacobian, the fact that the flow changes volume, so we have to deal with that somehow. Okay, so how do we do this? Well, the claim is that we can
10,301
10,333
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10301s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
turn any flow model into a VAE; actually, we can locally approximate it using a VAE. We have this flow model, here's f, here's the flow, and here's a point X which gets mapped to f(X), which is just what a flow does. And we can define this distribution here, the one on the left, this ellipsoid, and we're going to define it
10,333
10,364
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10333s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
to be a Gaussian whose mean is just f(X), the latent, but we give it this covariance matrix which is sigma squared — just a small number like 0.0001 or so, a hyperparameter of the algorithm — times the Jacobian times the Jacobian transpose of the flow model. So this is what we call the encoder, and this is the decoder. So we
10,364
10,397
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10364s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
define this encoder and decoder on top of the flow model, and the decoder is just the inverse of the flow with a small identity covariance. That's what the ellipsoid on the left and the small circle on the right are. So why did we define this? Well, the point is that a flow model represents a differentiable function, so if you
10,397
10,424
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10397s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
have some data point X and add a very small amount of noise to it, and then you map that to the latent space, that small amount of Gaussian noise that you added at the beginning will also be Gaussian; it'll just have this transformed covariance, and that's given by how the flow behaves linearly, which is just the Jacobian. We know that if you take a multivariate
10,424
10,452
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10424s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
Gaussian and multiply it by a matrix you also get a multivariate Gaussian. So locally the flow behaves like a linear transformation, and that matrix is the Jacobian; that's where this comes from. The point is that if you then run bits-back coding using these two distributions — this is Q(Z given X) here and this is P(X given Z) — so if you run
10,452
10,482
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10452s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
the coder using these two distributions, the code length that you get from bits-back coding will be exactly what we wanted plus this little second-order error term. So this is a way of turning a flow model into a compression algorithm: locally approximate it with a VAE defined like this, and then the
10,482
10,509
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10482s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
code length that you get from bits-back coding on that will match what we wanted, plus a very small error term. What's nice about this is that it turns an intractable algorithm into a more tractable one. If you were to directly implement this algorithm, it turns out you do have to compute the Jacobian of the flow model and factorize it in a certain way,
10,509
10,535
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10509s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
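Here is a rough sketch of the local construction just described, on a toy 2-D flow (the flow and all numbers are invented for illustration, and the Jacobian is taken numerically rather than analytically): the encoder is q(z|x) = N(f(x), sigma^2 J J^T) and the decoder is p(x|z) = N(f^{-1}(z), sigma^2 I), where J is the Jacobian of f at x. The sanity check at the end verifies the "flows are locally linear" point: noise added around x and pushed through f has covariance close to sigma^2 J J^T.

```python
import numpy as np

def f(x):
    # toy invertible map: elementwise monotone nonlinearity followed by a rotation
    theta = 0.3
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ (x + 0.1 * np.tanh(x))

def jacobian(fn, x, eps=1e-5):
    # central-difference Jacobian, fine for a 2-D illustration
    d = x.shape[0]
    J = np.zeros((d, d))
    for i in range(d):
        e = np.zeros(d); e[i] = eps
        J[:, i] = (fn(x + e) - fn(x - e)) / (2 * eps)
    return J

x = np.array([0.7, -1.2])
sigma = 1e-2
J = jacobian(f, x)

enc_mean = f(x)                   # mean of q(z|x)
enc_cov  = sigma**2 * J @ J.T     # covariance of q(z|x): the "tilted" ellipsoid
dec_cov  = sigma**2 * np.eye(2)   # covariance of p(x|z): the small sphere

# Check: Gaussian noise around x, pushed through f, is (to first order) Gaussian
# with covariance sigma^2 J J^T.
noise = x + sigma * np.random.randn(20000, 2)
z = np.array([f(v) for v in noise])
print(np.cov(z.T))    # close to enc_cov, up to higher-order error
print(enc_cov)
```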
pPyOlGvWoXA
and that's polynomial time. It's better than exponential time, but it's still not good enough for high-dimensional data. The solution is that we can specialize this algorithm even further: for autoregressive flows, for example, it turns out we can just code one dimension at a time without ever
10,535
10,555
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10535s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
constructing that Jacobian, so that works in linear time. If we have a composition of flows, like we do in RealNVP, then you can code one layer at a time and recursively invoke this coding into the next layer, just like we can with hierarchical VAEs. So altogether, for RealNVP-type flows, if you implement it correctly you don't need to
10,555
10,578
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10555s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
compute the Jacobian ever, and you actually get a linear-time compression algorithm. So that's nice, and we achieve this code length here, which is negative log density times delta. But if you look at this, it suffers by a term of negative log delta_x, which can actually be quite bad, like 32 bits or something like that. This is because
10,578
10,606
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10578s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
we had to discretize the data very finely so that we could easily approximate the integral that defines the probability mass function. That seems like a huge waste of bits, especially if we want to transmit, say, integer data like images from CIFAR, which are specified as integers, and we don't want to transmit lots of bits after the decimal point. So the solution
10,606
10,633
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10606s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
to this is to use those extra bits for bits back again. If you want to do that, it turns out there is an optimal way of doing it, and the sort of encoder you use for it is a dequantizer, which I think we talked about. So if you plug bits-back coding into the dequantizer to get those extra bits, then altogether the code length you get is the
10,633
10,662
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10633s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
variational dequantization bound, which is what you explicitly train to be small on the data set, so it ends up being reasonable. With all this in place we tried it for one of the models that we trained, and we found that we were able to get code lengths that are very close to what is predicted by the variational dequantization bound, and this sort of
10,662
10,693
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10662s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
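For reference, the variational dequantization bound mentioned here can be written as follows (stated from memory in its standard form for integer-valued x dequantized with noise u in [0,1)^d, not copied from the slides): the expected code length from running bits back through the dequantizer q(u|x) is

```latex
\mathbb{E}_{u \sim q(u \mid x)}\bigl[-\log p(x+u) + \log q(u \mid x)\bigr]
\;\ge\; -\log \int_{[0,1)^d} p(x+u)\,du \;=\; -\log P(x),
```

i.e. an upper bound, via Jensen's inequality, on the negative log of the probability mass the flow implicitly assigns to the integer data point.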
pPyOlGvWoXA
holds across all these data sets. There is a caveat, which is that this algorithm does need lots of auxiliary bits, actually many more than VAE-type methods, and that shows up in the fact that we need something like 50 bits per dimension just to send one image. That means this algorithm really does not make sense if
10,693
10,720
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10693s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
you just want to send one data point. But say you wanted to use this algorithm for each frame in a long video, a movie or something like that; then the initial overhead can be amortized across all the different frames. So that is a caveat of this algorithm. Right, so finally let's talk about some other things which are not exactly about bits back. All these algorithms
10,720
10,753
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10720s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
that we talked about so far basically fall into the framework of pre-training a generative model on some training set, which you assume is drawn from the same distribution as the test set you want to compress, and then devising a coding algorithm that matches the negative log likelihood of that model. But there are
10,753
10,777
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10753s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
actually other types of algorithms, which are quite successful in text compression and which we all use, like in gzip and zip and so on, that learn online. You don't really pre-train them on a certain data set; you just give them a file and they learn how to compress it online. And it turns out that, at least theoretically, these types of algorithms,
10,777
10,800
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10777s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
if you give them lots of resources, can actually learn to compress any distribution, so we call them universal codes. There's one algorithm, Lempel-Ziv (LZ), which works like this; I'll just try to describe it very quickly. Here's a long string that you're trying to compress, and the way it works is that
10,800
10,831
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10800s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
when you're compressing you're at some position in the file — let's say we're at this position of the file and we want to code the future. What you do is basically try to find a string starting at this position which has already occurred in the past. Here we have this string AAC, and we see that in the past AAC occurred, so let's just store the index into the past:
10,831
10,865
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10831s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
it occurred one, two, three time steps into the past, so let's just store this number three, and then we also append the next character, which is B. That's basically how this works. At this point C we see, oh, there's a string CAE in the future, but that string occurred in the past, so let's just store the number three, which indicates that you just need to jump
10,865
10,891
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10865s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
three into the past and copy that string from the past over. So this is roughly how Lempel-Ziv works: you just look for matches between what you're trying to compress and the past, and copy them over. So why is this a good idea? Very roughly: if the source of symbols is independent, then whatever symbol you're at right now will
10,891
10,929
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10891s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
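Here is a minimal LZ77-style sketch of the "find a match in the past, store (offset, length, next char)" idea just described. Real implementations (gzip's DEFLATE and friends) add sliding windows, hash chains, and entropy coding on top; this toy version just makes the match-and-copy mechanics concrete.

```python
def lz77_encode(s, window=1 << 12):
    out, i = [], 0
    while i < len(s):
        best_len, best_off = 0, 0
        start = max(0, i - window)
        for j in range(start, i):                 # scan the past for the longest match
            k = 0
            while i + k < len(s) and j + k < i and s[j + k] == s[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        nxt = s[i + best_len] if i + best_len < len(s) else ""
        out.append((best_off, best_len, nxt))     # (how far back, how long, next char)
        i += best_len + 1
    return out

def lz77_decode(tokens):
    s = []
    for off, length, nxt in tokens:
        for _ in range(length):
            s.append(s[-off])                     # copy the matched string from the past
        if nxt:
            s.append(nxt)
    return "".join(s)

msg = "aacaacabcabaaac"
tokens = lz77_encode(msg)
assert lz77_decode(tokens) == msg
print(tokens)
```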
pPyOlGvWoXA
actually reoccur if you wait long enough, and the recurrence time has a geometric distribution, so the average recurrence time is just one over the probability of the symbol. The Lempel-Ziv algorithm says to just write down the time that you have to look back to find the same symbol again, so
10,929
10,957
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10929s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
that's going to take about log T bits, where T is the time, and on average that's about log 1 over P(X) bits, so you can see that this approaches the entropy of the source. So this is an interesting algorithm; it's basically nearest neighbors, and it's saying that if you just memorize tons of data over time and run nearest neighbors, then this is like the best
10,957
10,981
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10957s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
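A quick numerical check of the claim above, on a toy i.i.d. stream (the alphabet and probabilities are made up): the average time until a symbol reoccurs is 1/p, so storing the gap costs roughly log2(1/p) bits, which is the symbol's information content.

```python
import numpy as np

rng = np.random.default_rng(0)
probs = {"a": 0.5, "b": 0.25, "c": 0.25}
stream = rng.choice(list(probs), size=200000, p=list(probs.values()))

for sym, p in probs.items():
    positions = np.flatnonzero(stream == sym)
    gaps = np.diff(positions)                  # recurrence times for this symbol
    print(sym, "mean recurrence:", gaps.mean(), "vs 1/p =", 1 / p,
          "-> about", np.log2(1 / p), "bits per occurrence")
```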
pPyOlGvWoXA
learning algorithm you can run — or at least, this learning algorithm does work, it just might take a very long time to learn. And you can see that it does take a very long time to learn, because template matching does not generalize. Okay, so that was Lempel-Ziv. I'll conclude by giving you a taste of some very recent research on deep learning and compression;
10,981
11,013
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=10981s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
by no means is this comprehensive, it's just to give you an idea of what might be out there. The authors of BB-ANS released some new work at ICLR this year where they show that you can train a fully convolutional deep latent variable model on small images, and just because it's fully convolutional you can run it on large images, and they show that this
11,013
11,039
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11013s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
works very well. These are, I think, some of the best numbers on full-resolution ImageNet, just by using this fully convolutional property. These authors here describe a very intriguing alternative to bits-back coding: they describe what they call minimal random code learning, which is a coding scheme for latent variable models that achieves the bits-back code length
11,039
11,066
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11039s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
without needing bits back. The way that works is that the encoder samples a lot of latents — the number of latents it samples is 2 to the KL divergence between the encoder and the prior — and then picks one of them at random, and the decoder can do the same thing if they share the same random number generator. It turns out that this is a way
11,066
11,095
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11066s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
to basically get a sort of low-bias sample from Q by sampling a lot of these latents and picking one of them, and the number of bits you need to encode the index of the one you picked is just log K, which is the KL divergence. So this achieves the bits-back code length without needing bits back; the trade-off is
11,095
11,123
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11095s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
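A rough sketch of the shared-randomness idea behind this scheme. Assumptions not stated in the transcript: Gaussian prior and posterior, and an importance-weighted selection rule, which is one way to make "pick one of the prior samples so that it looks like a sample from q" precise; the actual paper's selection procedure may differ in detail. The encoder transmits only the index, costing roughly log2 K bits, i.e. about the KL divergence.

```python
import numpy as np

def mrc_encode(mu_q, sigma_q, seed, kl_bits):
    rng = np.random.default_rng(seed)
    K = int(2 ** np.ceil(kl_bits))
    z = rng.standard_normal((K, mu_q.size))             # K candidates from the prior N(0, I)
    log_q = -0.5 * np.sum(((z - mu_q) / sigma_q) ** 2, axis=1) - np.sum(np.log(sigma_q))
    log_p = -0.5 * np.sum(z ** 2, axis=1)
    w = np.exp(log_q - log_p - (log_q - log_p).max())    # importance weights q/p
    idx = rng.choice(K, p=w / w.sum())                   # approximately a sample from q
    return idx, np.log2(K)                               # send idx: ~log2 K ≈ KL bits

def mrc_decode(idx, dim, seed, kl_bits):
    rng = np.random.default_rng(seed)
    K = int(2 ** np.ceil(kl_bits))
    z = rng.standard_normal((K, dim))                    # same candidates, same seed
    return z[idx]

mu, sigma = np.full(4, 0.5), np.full(4, 0.7)
kl = np.sum(0.5 * (sigma**2 + mu**2 - 1) - np.log(sigma)) / np.log(2)  # KL(q||p) in bits
idx, cost = mrc_encode(mu, sigma, seed=42, kl_bits=kl)
print(mrc_decode(idx, 4, seed=42, kl_bits=kl), cost, "bits")
```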
pPyOlGvWoXA
computational complexity, because the encoder has to collect a lot of samples. And finally there's this other paper which has a very different flavor from the ones we were talking about: this is a paper about lossy compression, where they come up with a recurrent encoder and decoder architecture for lossy compression on sequential data, like videos, and the
11,123
11,146
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11123s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
way it works is quite interesting. The very high-level idea is that the encoder simulates the decoder. Normally you would think that the encoder and decoder just operate independently, and the encoder doesn't worry about what the decoder is doing, but if there's this time structure then the encoder can simulate what the decoder is doing, sort
11,146
11,167
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11146s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
of one time step behind, and based on that it can send extra information, or just the right information, that will help the decoder reconstruct the data in the right way. They show how to write down a neural network architecture that captures this idea and optimizes the resulting code length end to end. So that's quite a cool idea.
11,167
11,197
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11167s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
Yeah, so that's all I have to say; hopefully that was helpful. — That was great, Jonathan. We're a bit over time, but I'm thinking maybe we can spend a couple more minutes if people have questions they want to ask as we wrap up here. While you were lecturing I also answered a bunch of questions in the chat, to do that in parallel to you making progress
11,197
11,226
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11197s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
on the lecture. — I had a question about ANS: I still don't see the connection; it seems like ANS was just a little add-on to this lecture. What's the connection? I don't really see why we need ANS, why you can't just use another coder. — Yeah, there are ways of combining bits back with arithmetic coding; it's just that the popular
11,226
11,256
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11226s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
recent thing to do is to combine bits back with ANS, and the reason we do it is because you get a very clean algorithm that works very well; that was the motivation. — Sorry, can't you use bits back with any encoding scheme? — Yeah, you definitely can; it's just particularly convenient because of the stack structure of ANS, and also because
11,256
11,280
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11256s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
ANS does work well in practice, so those are the practical reasons for using it. — Yeah, maybe also jumping in here: if you look at the Brendan Frey and Geoff Hinton paper, it managed to do compression with a VAE and arithmetic coding, but it incurred a bunch of overhead, because I think arithmetic coding acts like a queue while bits back acts like a stack, and so
11,280
11,315
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11280s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
there's an overhead incurred. Then if you look at the Townsend et al. paper you can see how to make it all compatible by using ANS, and get much better compression efficiency than the previous paper that uses arithmetic coding. Another thing is that ANS was invented pretty late compared to arithmetic coding; it actually turns out the ideas might be somewhat complex, the
11,315
11,341
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=11315s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
S27pHKBEp30
All right, cool, well, thanks everybody. So I'm going to give the second talk tonight, which I'm not crazy about, and I don't want this pattern to repeat, but Andrew and I wanted to kick this series off and felt like me talking twice was better than not. We're going to get more diversity of folks; if any of you want to give a talk yourselves, you know,
0
29
https://www.youtube.com/watch?v=S27pHKBEp30&t=0s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
or know somebody who you think might, that'd be awesome. But a topic that I feel is important for practitioners to understand is a real sea change in natural language processing that's all of about 12 months old but is one of these things I think is incredibly significant in the field, and that is the advance of Transformers. So the outline for this talk is to start out
29
55
https://www.youtube.com/watch?v=S27pHKBEp30&t=29s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
with some background on natural language processing and sequence modeling, then talk about the LSTM, why it's awesome and amazing but still not good enough, and then go into Transformers and talk about how they work and why they're amazing. So for background on natural language processing, NLP: I'm going to be talking just about a subset of NLP, which is the supervised learning part of it, so not
55
83
https://www.youtube.com/watch?v=S27pHKBEp30&t=55s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
structured prediction or sequence prediction, but where you're taking the document as input and trying to predict some fairly straightforward output about it, like: is this document spam? What this means is that you need to somehow take your document and represent it as a fixed-size vector, because I'm not aware of any linear algebra that works on vectors of variable dimensionality, and
83
113
https://www.youtube.com/watch?v=S27pHKBEp30&t=83s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
the challenge is that documents are of variable length, so you have to come up with some way of meaningfully encoding that document into a fixed-size vector. The classic way of doing this is the bag of words, where you have one dimension per unique word in your vocabulary. English has, I don't know, about a hundred thousand words in its vocabulary,
113
136
https://www.youtube.com/watch?v=S27pHKBEp30&t=113s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
so you have a hundred-thousand-dimensional vector, most entries of which are zero because most words are not present in your document, and the ones that are present have some value, maybe a count or a tf-idf score or something like that, and that is your vector. This naturally leads to sparse data where, again, it's mostly zeros, so you don't store the zeros because that's
136
158
https://www.youtube.com/watch?v=S27pHKBEp30&t=136s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
computationally inefficient; you store lists of position-value tuples, or maybe just a list of positions, and this makes the computation much cheaper. This works reasonably well. A key limitation is that when you're looking at an actual document, order matters: these two documents mean completely different things, but a bag-of-words model will score them
158
182
https://www.youtube.com/watch?v=S27pHKBEp30&t=158s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
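A tiny illustration of the sparse bag-of-words representation just described: only the nonzero (dimension, count) entries are stored, not the full vocabulary-sized vector. The vocabulary and documents here are made up, and the example also shows the order problem the speaker mentions — both documents produce the same sparse vector.

```python
from collections import Counter

def bag_of_words(doc, vocab_index):
    counts = Counter(w for w in doc.lower().split() if w in vocab_index)
    # sparse vector stored as a sorted list of (dimension, value) tuples
    return sorted((vocab_index[w], c) for w, c in counts.items())

vocab_index = {w: i for i, w in enumerate(["work", "to", "live", "spam", "free"])}
print(bag_of_words("Work to live", vocab_index))
print(bag_of_words("Live to work", vocab_index))   # identical sparse vector: order is lost
```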
S27pHKBEp30
identically every single time, because they have the exact same vectors for which words are present. The solution to that in this context is n-grams: you can have bigrams, one for every possible pair of words, or trigrams, one for every combination of three words, which would easily distinguish between those two, but now you're up to, what is that, a quadrillion-dimensional vector, and you
182
206
https://www.youtube.com/watch?v=S27pHKBEp30&t=182s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
can do it, but you start running into all sorts of problems when you walk down that path. In neural network land, the natural way to solve this problem is the RNN, which is the recurrent neural network — not the recursive neural network, I've made that mistake. RNNs are an approach to this which asks the question: how do you calculate a function on a
206
232
https://www.youtube.com/watch?v=S27pHKBEp30&t=206s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
variable-length input? They answer it using a for loop in math, where they recursively define the output at any stage as a function of the input at that stage and the previous output, and then for the purposes of supervised learning the final output is just the final hidden state. Visually this looks like this activation, which takes an input from the raw
232
261
https://www.youtube.com/watch?v=S27pHKBEp30&t=232s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
document X and also its own output at the previous time step. You can unroll this and visualize it as a very deep neural network, where the final answer, the number you're looking at at the end, is this, and it's this deep neural network that processes every one of the inputs along the way. All right, and the problem with this classic vanilla, plain recurrent neural network is
261
284
https://www.youtube.com/watch?v=S27pHKBEp30&t=261s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
vanishing and exploding gradients. You take this recursive definition of the hidden state and you imagine what happens just three steps in: you're applying this function, this transformation, over and over and over again on your data, and classically, in the vanilla case, this is just some matrix multiplication, some learned matrix W times your input X. And so when
284
310
https://www.youtube.com/watch?v=S27pHKBEp30&t=284s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
you go out, say, a hundred words in, you're taking that W matrix and multiplying by it a hundred times. In simple real-number math we know that if you take any number less than one and raise it to a very high exponent, you get some incredibly small number, and if your number is slightly larger than one then it blows
310
334
https://www.youtube.com/watch?v=S27pHKBEp30&t=310s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
up to something big, and if your exponent is even higher, if you have longer documents, this gets even worse. In linear algebra it's about the same, except you need to think about the eigenvalues of the matrix. The eigenvalues say how much the matrix is going to grow or shrink vectors when the transformation is applied, and if your eigenvalues are less
334
355
https://www.youtube.com/watch?v=S27pHKBEp30&t=334s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
than one in this transformation, you're going to get gradients that go to zero as you apply this matrix over and over again; if they're greater than one, your gradients are going to explode. And so this made vanilla RNNs extremely difficult to work with; they basically just didn't work on anything but fairly short sequences. All right, so LSTM to the rescue. So I
355
375
https://www.youtube.com/watch?v=S27pHKBEp30&t=355s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
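A quick numerical illustration of the eigenvalue argument, with toy matrices rather than a trained RNN: repeatedly applying the same matrix shrinks or blows up a vector depending on whether the magnitude of its eigenvalues is below or above one.

```python
import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(8)

for scale in (0.9, 1.1):
    # scaled orthogonal matrix: every eigenvalue has magnitude exactly `scale`
    W = scale * np.linalg.qr(rng.standard_normal((8, 8)))[0]
    x = v.copy()
    for _ in range(100):        # "a hundred words in"
        x = W @ x
    print(f"eigenvalue magnitude {scale}: |h_100| = {np.linalg.norm(x):.3e}")
# the norm collapses toward zero for 0.9 and explodes for 1.1
```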
S27pHKBEp30
wrote this document a few years ago called The Rise and Fall and Rise and Fall of LSTM. So LSTM came around in the dark ages, then it went into the AI winter, it came back again for a while, but I think it's on its way out again now with Transformers. So LSTM, to be clear, is a kind of recurrent neural network; it just has a more sophisticated cell inside, and it
375
402
https://www.youtube.com/watch?v=S27pHKBEp30&t=375s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
was invented originally in the dark ages, on a stone tablet that has been recovered into a PDF that you can still access — I kid, but Sepp Hochreiter and Jürgen Schmidhuber, and I enjoy them both quite a bit, did a bunch of amazing work in the 90s that was really well ahead of its time and often gets neglected and forgotten as time goes on. That's totally
402
432
https://www.youtube.com/watch?v=S27pHKBEp30&t=402s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
not fair, because they did amazing research. So the LSTM cell looks like this: it actually has two hidden states, with the input coming along the bottom and the output out the top, and these two hidden states. I'm not going to go into it in detail — you should totally look at Christopher Olah's blog post if you want to dive into it — but the key point is that these
432
452
https://www.youtube.com/watch?v=S27pHKBEp30&t=432s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
transformations, these matrix multiplies, are not applied recursively on the main hidden vector; all you're doing is adding in — well, there's the forget gate, which you don't strictly need — but you're adding in some new values. So the LSTM is actually a lot like a ResNet, a lot like a CNN ResNet, in that you're adding new values onto the activation
452
477
https://www.youtube.com/watch?v=S27pHKBEp30&t=452s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
as you go through the layers. So this solves the exploding and vanishing gradients problems. However, LSTM is still pretty difficult to train, because you still have these very long gradient paths: even with those residual connections, you're still propagating gradients from the end all the way back through this transformation cell to the beginning, and for a
477
500
https://www.youtube.com/watch?v=S27pHKBEp30&t=477s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
long document this means very, very deep networks that are notoriously difficult to train. More importantly, transfer learning never really worked on these LSTM models. One of the great things about ImageNet and CNNs is that you can train a convolutional net on millions of images in ImageNet and take that neural network and fine-tune it for some new problem that
500
526
https://www.youtube.com/watch?v=S27pHKBEp30&t=500s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
you have, and the starting state of the ImageNet CNN gives you a great place to start from when you're building a new neural network, and it makes training on your own problem possible with much less data. That never really worked with LSTM; sometimes it did, but it just wasn't very reliable, which means that any time you're using an LSTM you need a new labeled data set
526
550
https://www.youtube.com/watch?v=S27pHKBEp30&t=526s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
that's specific to your task, and that's expensive. Okay, so this changed dramatically just about a year ago when the BERT model was released. You'll hear people talk about Transformers and Muppets together, and the reason for this is that the original paper on this technique, the one that describes the network architecture, called it the Transformer network, and then the
550
574
https://www.youtube.com/watch?v=S27pHKBEp30&t=550s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
BERT paper used a Muppet name, following the ELMo paper, and researchers just ran with the joke. So this is just context so you understand what people are talking about if they say they'll use a Muppet network. This, I think, was the natural progression of the sequence of document models, and the Transformer model was first described about two and a half years ago in this
574
595
https://www.youtube.com/watch?v=S27pHKBEp30&t=574s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
paper, Attention Is All You Need. This paper was addressing machine translation — think about taking a document in English and converting it into French — and the classic way to do this with a neural network is an encoder/decoder. Here's the full structure; there's a lot going on here, so we're just going to focus on the encoder part because that's all you need for these supervised
595
618
https://www.youtube.com/watch?v=S27pHKBEp30&t=595s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
learning problems, and the decoder is similar anyway. Zooming in on the encoder part, there's still quite a bit going on, but basically there are three parts: first we're going to talk about this attention part, then we'll talk about the part at the bottom, the positional encoding; the top part is just not that hard, it's just a simple fully connected layer.
618
637
https://www.youtube.com/watch?v=S27pHKBEp30&t=618s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
So the attention mechanism in the middle is the key to making this thing work on documents of variable length, and the way it does that is by having an all-to-all comparison: at every layer of the neural network, every output of the next layer considers every possible input from the previous layer in this N-squared way, and it takes a weighted sum of the
637
661
https://www.youtube.com/watch?v=S27pHKBEp30&t=637s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
previous ones, where the weighting is a learned function, and then it applies a fully connected layer after it. This is great for a number of reasons. One is that you can look at this thing and visually see what it's doing. So here is the translation problem of converting the English sentence "The agreement on the European
661
683
https://www.youtube.com/watch?v=S27pHKBEp30&t=661s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
Economic Area was signed in August 1992" into French — my apologies for the accent — "L'accord sur la zone économique européenne a été signé en août 1992." And you can see the attention: as it's generating each token in the output, it's generating these output tokens one at a time, and it
683
710
https://www.youtube.com/watch?v=S27pHKBEp30&t=683s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
says, okay, first you've got to translate "the"; that translates into "l'", and all it's looking at is "the". The next output token is "accord", and all it's looking at is "agreement". Then "sur" is "on", "la" is "the", and now, interestingly, "European Economic Area" translates into "zone économique européenne", so the order is reversed, and you can see the attention mechanism is reversed also. So you can
710
732
https://www.youtube.com/watch?v=S27pHKBEp30&t=710s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
see very clearly what this thing is doing as it's running along. The way attention works in the transformer model, the way they describe it, is with query and key vectors: for every output position you generate a query, and for every input you're considering you generate a key, and then the relevance score is just the dot product of those two. And to
732
756
https://www.youtube.com/watch?v=S27pHKBEp30&t=732s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
visualize that: first you combine the query and the key values, and that gives you the relevance scores; you use a softmax to normalize them, and then you do a weighted average of the values — the third vector computed for each token — to get your output. Now, to explain this in a little more detail, I'm going to go through it in pseudocode. This looks like Python; it wouldn't actually
756
780
https://www.youtube.com/watch?v=S27pHKBEp30&t=756s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
run, but I think it's close enough to help people understand what's going on. You've got this attention function, and it takes as input a list of tensors, one per token of the input. The first thing it does is go through everything in the sequence and compute the query, the key, and the value by multiplying the
780
805
https://www.youtube.com/watch?v=S27pHKBEp30&t=780s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
appropriate input vector by Q, K, and V, which are learned matrices. So it learns the transformation from the previous layer to whatever should be the query, the key, and the value at the next layer. Then it goes through this double nested loop: for every output token it figures out, okay, this is the query I'm working with, and then it goes through
805
829
https://www.youtube.com/watch?v=S27pHKBEp30&t=805s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
everything in the input, multiplies that query with each possible key, and computes a whole bunch of relevance scores. Then it normalizes these relevance scores using a softmax, which makes sure they all add up to one, so you can sensibly use them to compute a weighted sum of all of the values. So you just go through, for each output,
829
854
https://www.youtube.com/watch?v=S27pHKBEp30&t=829s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
each of the input tokens, take the value vector calculated for it, multiply it by the relevance — just a floating point number from 0 to 1 — and you get a weighted average, which is the output, and you return that. So this is what's going on in the attention mechanism, which can be pretty confusing when you just look at the diagram
854
877
https://www.youtube.com/watch?v=S27pHKBEp30&t=854s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
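Here is a runnable version of the pseudocode walked through above: single-head dot-product attention in numpy. The learned matrices are random here just so the example executes, and the 1/sqrt(d_k) scaling from the paper is included even though the speaker's pseudocode doesn't mention it.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention(tokens, Wq, Wk, Wv):
    """tokens: list of input vectors, one per token (the previous layer's outputs)."""
    queries = [Wq @ t for t in tokens]
    keys    = [Wk @ t for t in tokens]
    values  = [Wv @ t for t in tokens]
    d_k = keys[0].shape[0]
    outputs = []
    for q in queries:                                     # one output per position
        scores = np.array([q @ k for k in keys]) / np.sqrt(d_k)
        weights = softmax(scores)                         # relevance scores, sum to 1
        outputs.append(sum(w * v for w, v in zip(weights, values)))
    return outputs

rng = np.random.default_rng(0)
d_model, d_k = 16, 8
Wq, Wk, Wv = (rng.standard_normal((d_k, d_model)) * 0.1 for _ in range(3))
tokens = [rng.standard_normal(d_model) for _ in range(5)]   # a 5-token "document"
out = attention(tokens, Wq, Wk, Wv)
print(len(out), out[0].shape)        # 5 outputs, each of dimension d_k
```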
S27pHKBEp30
like that, but I hope this explains it a little bit; I'm sure we'll get some questions on this. So relevance scores are interpretable, as I say, and this is super helpful. Now, an innovation I think was novel in the transformer paper is multi-headed attention, and this is one of these really clever and important innovations that's not actually all
877
905
https://www.youtube.com/watch?v=S27pHKBEp30&t=877s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
that complicated at all: you just do that same attention mechanism eight times, or whatever value of eight you want to use, and that lets the network learn eight different things to pay attention to. In the translation case it can learn an attention mechanism for grammar, one for vocabulary, one for gender, one for tense, whatever it is — whatever the thing
905
927
https://www.youtube.com/watch?v=S27pHKBEp30&t=905s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
needs, it can look at different parts of the input document for different purposes, and do this at each layer. So you can kind of intuitively see how this would be a really flexible mechanism for processing a document or any sequence. Okay, so that is one of the key things that enables the transformer model, the multi-headed attention part of it. Now let's look down
927
948
https://www.youtube.com/watch?v=S27pHKBEp30&t=927s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
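Continuing from the single-head sketch above (this snippet reuses its `attention`, `tokens`, `rng`, `d_k`, and `d_model`), here is a minimal sketch of multi-headed attention: run the same mechanism H times with separate learned matrices and concatenate the results per position. The output projection Wo is included for completeness, as in the paper.

```python
def multi_head_attention(tokens, heads, Wo):
    """heads: list of (Wq, Wk, Wv) triples, one per head."""
    per_head = [attention(tokens, Wq, Wk, Wv) for Wq, Wk, Wv in heads]
    # concatenate each position's outputs across heads, then project back to d_model
    return [Wo @ np.concatenate([h[i] for h in per_head]) for i in range(len(tokens))]

H = 8
heads = [tuple(rng.standard_normal((d_k, d_model)) * 0.1 for _ in range(3)) for _ in range(H)]
Wo = rng.standard_normal((d_model, H * d_k)) * 0.1
print(multi_head_attention(tokens, heads, Wo)[0].shape)   # back to d_model per token
```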
S27pHKBEp30
here at the positional encoding, which is a critical and novel innovation that I think is incredibly clever. Without this positional encoding, attention mechanisms are just bags of words: there's nothing seeing the difference between "work to live" and "live to work"; all positions are equivalent, you're
948
972
https://www.youtube.com/watch?v=S27pHKBEp30&t=948s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
just going to compute some score for each of them. So what they did is take a lesson from Fourier theory and add in a bunch of sines and cosines — not as extra dimensions, but onto the word embeddings. Going back: what they do is take the inputs, use word2vec to calculate some vector for each input token, and then
972
997
https://www.youtube.com/watch?v=S27pHKBEp30&t=972s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
onto that embedding they add a bunch of sines and cosines of different frequencies, starting at just pi and then stretching out longer and longer and longer. If you look at the whole thing, it looks like this, and what this does is let the model reason about the relative position of any tokens. You can kind of imagine that the model can say: if the orange
997
1,023
https://www.youtube.com/watch?v=S27pHKBEp30&t=997s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
dimension is slightly higher than the blue dimension on one word versus another, then you can see how it knows that one token is to the left or right of the other, and because it has this at all these different wavelengths it can look across the entire document at arbitrary scales to see whether one idea is before or after another. The key thing is that this is how the
1,023
1,046
https://www.youtube.com/watch?v=S27pHKBEp30&t=1023s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
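A short sketch of the sinusoidal positional encoding being described, following the formula from the Attention Is All You Need paper (sines and cosines whose wavelengths stretch geometrically), added onto stand-in word vectors:

```python
import numpy as np

def positional_encoding(num_positions, d_model):
    pos = np.arange(num_positions)[:, None]               # (T, 1)
    i = np.arange(d_model // 2)[None, :]                   # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)       # wavelengths grow geometrically
    pe = np.zeros((num_positions, d_model))
    pe[:, 0::2] = np.sin(angles)       # even dimensions: sines
    pe[:, 1::2] = np.cos(angles)       # odd dimensions: cosines
    return pe

embeddings = np.random.randn(10, 16) * 0.1    # stand-in word vectors for 10 tokens
inputs = embeddings + positional_encoding(10, 16)
print(inputs.shape)
```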
S27pHKBEp30
system understands position and isn't just a bag of words when doing the attention. Okay, so Transformers: those are the two key innovations, positional encoding and multi-headed attention. Transformers are awesome: even though they are N squared in the length of the document, these all-to-all comparisons can be done almost for free on a modern GPU. GPUs
1,046
1,069
https://www.youtube.com/watch?v=S27pHKBEp30&t=1046s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
changed all sorts of things: you can do a thousand-by-thousand matrix multiply as fast as you can do a ten-by-two in a lot of cases, because they have so much parallelism and so much bandwidth, but a fixed latency for every operation, so you can do these massive multiplies almost for free in a lot of cases. So doing things in N squared is not actually
1,069
1,090
https://www.youtube.com/watch?v=S27pHKBEp30&t=1069s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
necessarily much more expensive, whereas in an RNN like an LSTM you can't do anything with token 11 until you're completely done processing token 10. So this is a key advantage of Transformers: they're much more computationally efficient. Also, you don't need to use any of these sigmoid or tanh activation functions, which are built into the LSTM model; these things
1,090
1,115
https://www.youtube.com/watch?v=S27pHKBEp30&t=1090s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg