video_id: string (length 11)
text: string (361 to 490 chars)
start_second: int64 (0 to 11.3k)
end_second: int64 (18 to 11.3k)
url: string (48 to 52 chars)
title: string (0 to 100 chars)
thumbnail: string (0 to 52 chars)
qHLLMg0Teg4
the outcome there we can plug in the math, but actually for uniform it's intuitive, it's much simpler: for uniform you want to maximize the probability of the things you saw, but it has to be uniform, so essentially you look at the farthest-out samples and that's the last spot where you assign any probability, and everything in between has the same probability, so the uniform distribution that maximizes it would be
3,889
3,912
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3889s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
this thing where your highest sample is b, your lowest sample is a, with all the mass between a and b, and within this uniform it has to be equal. How about Gaussians? You can do the same thing, the math is again the same, we're not going to work through the details, but essentially you say that's my density function, and when I have a sample or multiple samples I maximize the product of the
3,912
3,933
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3912s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
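As a concrete illustration of the uniform MLE just described, here is a minimal numpy sketch (the sample values are made up for illustration):

```python
import numpy as np

# Hypothetical samples assumed drawn from an unknown Uniform(a, b)
samples = np.array([2.3, 4.1, 3.7, 5.0, 2.9])

# MLE for a uniform: the tightest interval that still covers every sample,
# i.e. a_hat = min(samples), b_hat = max(samples), with density 1/(b_hat - a_hat)
a_hat, b_hat = samples.min(), samples.max()
density = 1.0 / (b_hat - a_hat)
print(a_hat, b_hat, density)
```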
qHLLMg0Teg4
probabilities of these samples, or the sum of the log probabilities of these samples. Log is convenient here because the Gaussian has an exponential in it and the exponential cancels with the log, and when you work through the math, what do you see? Well, the mean of the Gaussian will be the mean of your samples, that's the maximum likelihood estimate, and the variance parameter of your Gaussian will
3,933
3,953
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3933s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
actually be the empirical variance of your samples. Not too surprising, but formally derived, that is actually the right thing to do to do maximum likelihood estimation for a Gaussian. How about a conditional Gaussian? In this situation you have a distribution where Y is effectively a linear regression of X, but it could be higher dimensional of course: y equals a0
3,953
3,974
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3953s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
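A minimal sketch of the Gaussian MLE result mentioned here, with simulated data (the true mean and scale below are arbitrary):

```python
import numpy as np

samples = np.random.default_rng(0).normal(loc=1.5, scale=2.0, size=1000)

# MLE for a Gaussian: mean = sample mean, variance = empirical (1/N) variance
mu_hat = samples.mean()
var_hat = ((samples - mu_hat) ** 2).mean()   # same as samples.var(ddof=0)
print(mu_hat, var_hat)
```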
qHLLMg0Teg4
plus a1x plus noise that's a linear Gaussian from one D to one D you can get a bunch of samples work through it and find the maximum likelihood estimate what is it going to be well you have to do some math you'll have a bunch of y's and x's and this will be the probability of their combination and then you'll have to look at okay what maximizes the product of those probabilities you do a
3,974
3,998
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3974s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
bunch of math, and what comes out? Well, you see that effectively you get a least squares solution that you have to do to find the parameters of this linear Gaussian, and you will find that the variance is essentially the empirical variance left over when estimating Y from X based on your best estimate of that linear fit. You can do this for multivariate Gaussians, again the math is going to be hairier to do,
3,998
4,027
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=3998s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
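To make the 1D linear Gaussian MLE concrete, here is a small numpy sketch on assumed toy data: the least-squares fit gives the coefficients and the mean squared residual gives the noise variance, matching the discussion above:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=200)
y = 0.5 + 2.0 * x + rng.normal(scale=0.7, size=200)   # y = a0 + a1*x + noise

# MLE of (a0, a1) is the least-squares fit; MLE of the noise variance is the
# mean squared residual of that fit
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ coef
var_hat = (residuals ** 2).mean()
print(coef, var_hat)
```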
qHLLMg0Teg4
this is higher dimensional and so forth, but it's just a very linear path, there's no trickery happening: you're just saying this is my density, there's my data, just plug away at it, and out come these kinds of solutions in nice closed form. And again, for a conditional multivariate Gaussian y equals C times x, out will come something that looks like a least squares solution for the C matrix
4,027
4,051
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4027s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
and then for the covariance matrix we'll get the empirical covariance on the samples. If you actually want to work through this and get this result, here are some key matrix identities that are useful, otherwise you probably won't get to this result, and so these are just kind of things that you at some point might have derived in a previous class or might have never derived; it might be a
4,051
4,071
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4051s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
surprise right now, but these are true identities that can come in very handy when doing these multidimensional derivations with Gaussians, you'll see these tricks will help you out. Probably one of the more intriguing ones is that the gradient of the log of the determinant of a matrix with respect to the entries in that matrix is just the inverse of that
4,071
4,093
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4071s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
matrix. Why does it matter? Well, remember a multivariate Gaussian has that determinant of the covariance matrix up front, so you'll need to find the derivative with respect to the entries in the covariance matrix when maximizing the likelihood, and the covariance matrix is a parameter: to find the right setting of that whole matrix you'll have to take derivatives of
4,093
4,111
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4093s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
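Here is a quick numerical sanity check of the log-determinant identity mentioned above (for a symmetric positive-definite matrix such as a covariance, the inverse and its transpose coincide); the matrix is random and purely illustrative:

```python
import numpy as np

# Numerical check of the identity d/dA log det A = (A^{-1})^T
rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
A = B @ B.T + 4 * np.eye(4)          # symmetric positive definite

analytic = np.linalg.inv(A).T
eps = 1e-6
numeric = np.zeros_like(A)
for i in range(4):
    for j in range(4):
        E = np.zeros_like(A)
        E[i, j] = eps
        numeric[i, j] = (np.linalg.slogdet(A + E)[1] - np.linalg.slogdet(A - E)[1]) / (2 * eps)

print(np.max(np.abs(numeric - analytic)))   # should be tiny
```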
qHLLMg0Teg4
that thing, and it turns out there's a nice closed form for this. If you don't know about this you might say, oh well, no closed form, impossible, I'm going to have to do this numerically, but it turns out you can do it in closed form. Alright, so how about a fully observed linear Gaussian Bayes filter setting: you have x_{t+1} = A x_t + B u_t + w_t and z_{t+1} = C x_t + D + v_t,
4,111
4,140
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4111s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
there's a standard Kalman filter type setting. If everything is observed you can actually apply maximum likelihood to find A, B, C, D and the covariance matrix Q for w and the covariance matrix R for v, and then you have a model of your system. Now one thing you might want to be wary of is that sometimes you don't want to just do the maximum likelihood estimate, you might
4,140
4,168
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4140s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
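A minimal sketch of what maximum likelihood for the fully observed state equation could look like on simulated data (dimensions, true matrices, and noise scale below are made up; the observation model z = C x + D + v would be handled the same way):

```python
import numpy as np

# With states x_t and controls u_t fully observed, the ML estimates of A and B
# come from least squares on x_{t+1} ~ A x_t + B u_t, and Q is the empirical
# covariance of the residuals.
rng = np.random.default_rng(0)
n_x, n_u, T = 2, 1, 500
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [0.5]])
xs = np.zeros((T + 1, n_x))
us = rng.normal(size=(T, n_u))
for t in range(T):
    xs[t + 1] = A_true @ xs[t] + B_true @ us[t] + rng.normal(scale=0.1, size=n_x)

Z = np.hstack([xs[:-1], us])                       # regressors [x_t, u_t]
theta, *_ = np.linalg.lstsq(Z, xs[1:], rcond=None)
A_hat, B_hat = theta[:n_x].T, theta[n_x:].T
resid = xs[1:] - Z @ theta
Q_hat = resid.T @ resid / T
print(A_hat, B_hat, Q_hat)
```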
qHLLMg0Teg4
want to pay attention to something else so think about thumbtack example let's say I had five ups would you say that the probability of down is zero probably you would not because you might say well you never know it could be down sometimes and so what that means is that you have some prior information that is not present yet or reflected yet in this data data is too small you have
4,168
4,197
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4168s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
knowledge about the world that you've condensed in this notion that actually sometimes it could fall the other way, it just hasn't happened yet in this experiment. So what you can do is introduce a prior explicitly to account for that: you can say, well, my prior is something that puts some probability on theta and some on 1 minus theta, and I raise them to the same power here, theta times 1 minus
4,197
4,216
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4197s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
theta. It's as if I've seen theta come out one time, which is up, and 1 minus theta one time, which is down; I assume that already happened ahead of time. It hasn't happened yet, but I know it could happen, so let's assume it already happened and then multiply it in with everything else. Those kinds of priors are particularly convenient, and you can come up with a lot of priors: if you take
4,216
4,237
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4216s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
priors that look like as if you already ran an experiment, say, assume I already ran the experiment and I already saw a few of this and a few of that, then it'll be in the same form as the likelihood, and if your derivation for maximum likelihood came out nicely in closed form, then this will come out naturally in closed form too, because it'll be the same derivation, you just
4,237
4,256
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4237s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
introduce some fake experiments in the mix, but otherwise everything's the same. For this kind of Bernoulli experiment this is what it could look like: you have theta to the power alpha minus 1 times 1 minus theta to the power beta minus 1, and then in the closed form on the right there you'll see that you effectively add pseudo-counts, alpha minus 1 and beta minus 1 pseudo-counts, as if this happened but hasn't
4,256
4,279
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4256s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
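A minimal sketch of the pseudo-count view of the Beta prior described here; the choice alpha = beta = 2 (one fake up and one fake down) is just an example:

```python
# With a Beta(alpha, beta) prior on theta, the MAP estimate for a thumbtack /
# Bernoulli parameter just adds alpha-1 and beta-1 fake counts to the observed
# up/down counts.
def map_bernoulli(n_up, n_down, alpha=2.0, beta=2.0):
    return (n_up + alpha - 1) / (n_up + n_down + alpha + beta - 2)

print(map_bernoulli(5, 0))   # 5 ups, 0 downs -> ~0.857 rather than the MLE of 1.0
```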
qHLLMg0Teg4
happened. For some of these choices, by the way, like simple ones you might think about, like the one here with alpha and beta equal to 2, it's as if each side has already happened once, but then in the extreme where alpha and beta are smaller than one it's as if you have a negative version of it happening, it's like you think it's not likely both have already
4,279
4,302
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4279s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
happened, it's actually more likely that only one could have happened, and your prior comes out the opposite way, where you see that it puts a lot of weight on either one or zero and not a lot of weight in the middle. That's possible too, you don't have to make your prior uniform, it's whatever you think might be likely, and so if you think, I think it's always gonna be the same, I just don't know which
4,302
4,323
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4302s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
side it's going to be, but it's gonna be the same, then you have this alpha and beta equal to 0.5 as a reasonable prior. There's also the Dirichlet distribution, which generalizes this to multinomial variables, but at the high level, I mean there's a lot of symbols here, it's the same thing: you're just saying I pretend I already saw a few experiments, I have pseudo-counts for those pretend
4,323
4,347
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4323s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
experiments, and I multiply the probability of those into the likelihood of the actual experiments. You can do the same thing for a Gaussian: to make the math work out you want the prior for the mean of your Gaussian to also be a Gaussian, because then you have a Gaussian multiplied with a Gaussian and we know that's again a Gaussian and the math will be easy. If you said, well,
4,347
4,367
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4347s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
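To illustrate the Gaussian-prior-on-the-mean case, here is a small sketch assuming a known observation variance; the numbers are made up, and the MAP estimate is the familiar precision-weighted combination of prior mean and sample mean:

```python
import numpy as np

# Gaussian likelihood with known variance sigma2, Gaussian prior N(mu0, tau2)
# on the mean. Prior times likelihood is again a Gaussian, so the MAP estimate
# is a precision-weighted average of the prior mean and the sample mean.
samples = np.array([1.2, 0.8, 1.5, 1.1])
sigma2 = 0.25          # assumed known observation variance
mu0, tau2 = 0.0, 1.0   # prior mean and prior variance
n = len(samples)

mu_map = (mu0 / tau2 + samples.sum() / sigma2) / (1 / tau2 + n / sigma2)
print(mu_map)          # pulled slightly from the sample mean toward mu0
```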
qHLLMg0Teg4
actually I think the prior for the mean is that I know it's guaranteed to be positive, it can never be negative, so a Gaussian is not the right fit, because the Gaussian, even if it's mostly positive, will still have some mass on negative values, well, then it's not going to work out as nicely with your math and you'll have to make a trade-off. You might say, you know what, it's fine, I take
4,367
4,384
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4367s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
a Gaussian far enough positive with a small enough variance that there's very low probability mass on the negatives, and the math will work out cleanly, and that's what I'm going to use; or you might say, no, I'm going to use some other kind of prior, and now I have to do some numerical optimization to find the maximum likelihood or maximum a posteriori estimate because it's not closed form
4,384
4,402
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4384s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
anymore. So typically we'll make a trade-off between convenience and precision of the prior that you are imposing on your problem. You can do the same thing for conditional linear Gaussians: you can have priors there, say priors over the linear coefficient a, or more generally priors over the matrix that goes from x to y or from x_t to x_{t+1}, that matrix A. Here are some examples of
4,402
4,427
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4402s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
this worked out, so the slides just work through what it looks like when you have a prior. So there are some points shown in blue, the true relation is shown in green, so that's what we were hoping to recover, but the data is noisy, so the maximum likelihood estimate with a small amount of data, shown in red, is actually pretty far off from green. But if you had a prior that in this case thinks the
4,427
4,451
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4427s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
coefficients are more likely to be small rather than be large it'll kind of regularize that and you'll find the black line which is running closer to horizontal compared to the red line now one thing you also want to do and don't want to forget about is cross-validation so whenever you have some data and you just fit to the data it's possible that you're memorizing data over fitting it
4,451
4,479
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4451s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
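A minimal sketch of the regularization effect described here: a zero-mean Gaussian prior on the coefficients turns the MAP estimate into ridge regression, which shrinks the fitted slope (the black-line behaviour) relative to plain least squares (the red line). The data and the lambda value are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=6)                       # only a few noisy points
y = 0.3 * x + rng.normal(scale=0.5, size=6)
X = np.column_stack([np.ones_like(x), x])

lam = 1.0                                            # strength of the prior
w_mle = np.linalg.lstsq(X, y, rcond=None)[0]
w_map = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ y)
print(w_mle, w_map)                                  # MAP coefficients are shrunk toward zero
```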
qHLLMg0Teg4
rather than paying attention to the real pattern. We saw this in evaluation, sample-based evaluation: you don't want to just overfit the few samples, you want to make sure that you fit your neural net in a way that it generalizes to other data. So what people do is split the data into train and validation data, and then for a range of priors you can compute the maximum a posteriori estimate and then see on the
4,479
4,505
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4479s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
validation data which estimate of your maximum a-posteriori parameter gives the best performance on the validation data and that tells you that the prior you use to estimate that was the better prior that's the same thing in standard neural net learning you'd say I put some like coefficient in front of weights square because I want to keep the weights small the coefficient in front
4,505
4,525
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4505s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
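A small sketch of the cross-validation procedure described here, reusing the ridge/MAP estimate from before on simulated data; the candidate lambda values are arbitrary:

```python
import numpy as np

# Fit the MAP (ridge) estimate on a train split for several prior strengths and
# keep the one with the lowest validation error.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=40)
y = 0.3 * x + rng.normal(scale=0.5, size=40)
X = np.column_stack([np.ones_like(x), x])
Xtr, ytr, Xva, yva = X[:30], y[:30], X[30:], y[30:]

best = None
for lam in [0.01, 0.1, 1.0, 10.0]:
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(2), Xtr.T @ ytr)
    err = np.mean((Xva @ w - yva) ** 2)
    if best is None or err < best[0]:
        best = (err, lam, w)
print("best lambda:", best[1])
```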
qHLLMg0Teg4
of that is a choice, it's a hyperparameter, it's a prior over your weights, effectively a Gaussian prior over your weights that you're putting in. The same thing would be happening here: you put in a prior and then in cross-validation find out which prior yielded the best results. Now, what we covered so far assumed, in all of the maximum likelihood and maximum a posteriori estimation, that we observe
4,525
4,548
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4525s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
qHLLMg0Teg4
all the data, we can write out the density or the probability of all that data, and there are no unobserved variables. What we're going to cover next week Tuesday, because on Thursday we'll do particle filters, then on Tuesday we'll wrap up this part, is what to do when there are variables we did not observe and we still want to do maximum likelihood estimation. That's it for
4,548
4,570
https://www.youtube.com/watch?v=qHLLMg0Teg4&t=4548s
Lecture 13 Kalman Smoother, MAP, ML, EM -- CS287-FA19 Advanced Robotics at UC Berkeley
https://i.ytimg.com/vi/q…g4/hqdefault.jpg
2lkUNDZld-4
hi there, today we'll look at Big Self-Supervised Models are Strong Semi-Supervised Learners by Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi and Geoffrey Hinton of Google Brain. So this paper, on a high level, it's also known as SimCLRv2, demonstrates that if you want to do semi-supervised learning you're very well served by starting out with self-supervised
0
28
https://www.youtube.com/watch?v=2lkUNDZld-4&t=0s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
learning and then doing fine-tuning, much like NLP models do, rather than the kind of semi-supervised approach that image tasks had so far, and they present this SimCLRv2, which is an improvement over the SimCLR approach to self-supervised pre-training, and they demonstrate it outperforms a lot of the baselines. Alright, so if you like content like this don't forget to
28
56
https://www.youtube.com/watch?v=2lkUNDZld-4&t=28s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
share it out and leave a like and tell me what you think in the comments. So this paper, um, it sort of is kind of a clubbed-together thing of different things: they present this new method, SimCLRv2, which is a modification of SimCLR, and we'll go over that, but they also try to make a scientific claim, namely that somehow bigger
56
87
https://www.youtube.com/watch?v=2lkUNDZld-4&t=56s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
models are better for this pathway of learning, and we'll try to untangle all of these things. So first of all we're in the semi-supervised learning regime right here: semi-supervised basically means that you have a data set and you only have labels for a part of that data set, so this could be like here at the bottom 10% or so, because labels might be expensive to get, and so you only have a
87
117
https://www.youtube.com/watch?v=2lkUNDZld-4&t=87s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
few of them but you have much more data that's unlabeled now sometimes this problem is formulated as this here is your data set and then this here is like a different data set but one that's close enough such that you can learn from it and that's usually in in NLP you'll have your data set is like a sentiment classification task but you have all of Wikipedia that is not
117
141
https://www.youtube.com/watch?v=2lkUNDZld-4&t=117s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
labeled but it's just text so you can sort of pre train on it in this case we'll be in a situation where will artificially construct a small data set so this entire thing here is going to be the image net data set and this right here is going to be our labelled portion like we have labels now usually one has labels for image net as well but we artificially restrict ourselves to
141
170
https://www.youtube.com/watch?v=2lkUNDZld-4&t=141s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
simulate a situation where we have lots of data and we only have a fixed budget, because to obtain labels oftentimes you have to ask humans, right, to label images, and let's say we're a company and we've collected this big data set but we only have like maybe 500 bucks on Amazon Mechanical Turk and we only managed to get a very small subset labeled. Now we're in the regime
170
200
https://www.youtube.com/watch?v=2lkUNDZld-4&t=170s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
of semi-supervised learning ok this is slightly different from what NLP does and as I said in NLP usually assume you have different data sets the large one being the different distribution and in this semi-supervised regime you often assume that it is actually the same data distribution but you only have labels for some of them but there should be a fair bit of overlap between the two
200
224
https://www.youtube.com/watch?v=2lkUNDZld-4&t=200s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
things. So, I've recently made a video about OpenAI's Image GPT that kind of goes into the same direction as this work right here, that basically says pre-training on unlabeled data, like this whole data set without the labels, can be a very good preconditioner for fine-tuning later, and this paper says the same thing. So basically, in the good old days, what you would do is you
224
256
https://www.youtube.com/watch?v=2lkUNDZld-4&t=224s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
would devise a method that takes in a mini-batch, and in the mini-batch you have your data samples, and then some of them would be labeled, right, here you'd have a y and here you'd have a y, but most of them would not be labeled, and you'd have like some sort of loss function that would put special weight on the
256
281
https://www.youtube.com/watch?v=2lkUNDZld-4&t=256s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
ones that are labeled or somehow handle the ones that are unlabeled in a way, you might be doing like some sort of a consistency loss such that if they are near neighbors to these in the feature space they should have similar labels, or things like this. So these semi-supervised methods, they basically try to solve the problem at once while taking data that is
281
306
https://www.youtube.com/watch?v=2lkUNDZld-4&t=281s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
labeled and not labeled this paper goes into a different direction this paper says first we should it's actually three stages right here and they have a diagram so I don't need to draw they have a three stage approach three stages the one on the left is unsupervised pre training so they say let's forget about the labels right now even like your unlabeled data so even the data where we
306
334
https://www.youtube.com/watch?v=2lkUNDZld-4&t=306s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
have the labels, let's forget about the labels and let's just do unsupervised pre-training. Now unsupervised pre-training in this kind of setting is also known as self-supervised pre-training, and this first stage is done using a contrastive loss, and that's very similar to SimCLR, to this contrastive loss. So what you'll do, and they describe it very very well here, so what you'll do is,
334
361
https://www.youtube.com/watch?v=2lkUNDZld-4&t=334s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
given a randomly sampled mini-batch of images, each image is augmented twice using random crop, color distortion and Gaussian blur, creating two views of the same example. Okay, so you have an image in your mini-batch, each image you take and you make two versions of it, and each version you crop, a random crop somewhere, so version one could be random-cropped here, version two could be
361
386
https://www.youtube.com/watch?v=2lkUNDZld-4&t=361s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
random-cropped here, and then you put some Gaussian blur on it and so on, so, as you can see, random crop, color distortion, Gaussian blur. So what you want is two different versions of the same image, each of these versions has been augmented in a different way, cropped in a different way, blurred in a different way, such that it's two slightly different versions of
386
413
https://www.youtube.com/watch?v=2lkUNDZld-4&t=386s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
the same image. And now you want to put this through your network, so ultimately, as you can see on the right side here, what you want to end up with is a network, and then, okay, we'll forget about this right now, what you want to train is this network right here, actually including these projection layers, we'll get to them later, this is the network that you want to train, so
413
441
https://www.youtube.com/watch?v=2lkUNDZld-4&t=413s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
you take your unlabeled data, you take an image, you make two versions of it, and you put those through the network right until the end right here, so you'll get z1 and z2, these are the outputs of the network for the two images, and then what you want to do is take another image, that's not this image, and also put it through the network, maybe also augment it first,
441
470
https://www.youtube.com/watch?v=2lkUNDZld-4&t=441s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
and then you have z3. So now you have the outputs of two things that are supposed to come from the same image and one thing that's supposed to come from a different image, and now your loss is simply going to be: make those two things close together and push those other things apart, or those three actually. So the loss, and this is the contrastive loss of self-supervised learning, as you know,
470
500
https://www.youtube.com/watch?v=2lkUNDZld-4&t=470s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
you don't need any labels right here you simply say the things that come from the same image should be close together and the things that come from different images should be far apart and this relies heavily on these data augmentations that you do right here they also employ some other tricks like the momentum encoder from moco from momentum contrast and so on but this is
500
523
https://www.youtube.com/watch?v=2lkUNDZld-4&t=500s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
the main part. So you can pull a lot of strings here to get like another percent of performance, but ultimately they want the similarity of z_i and z_j, which are the outputs of the same image, to be close together, and then this down here they want to be far apart, z_i with z_k where k ranges over all the other images. Okay, you can do this in a mini-batch fashion, so this is self-
523
557
https://www.youtube.com/watch?v=2lkUNDZld-4&t=523s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
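A minimal numpy sketch of a contrastive (NT-Xent-style) loss of the kind described here; this is not the paper's exact implementation, just the idea that each z_i should be close to its augmented partner z_j and far from every other z_k in the batch. Embedding sizes and the temperature are illustrative:

```python
import numpy as np

def contrastive_loss(z1, z2, temperature=0.5):
    z = np.concatenate([z1, z2], axis=0)                    # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)        # cosine similarities
    sim = z @ z.T / temperature                             # (2N, 2N)
    np.fill_diagonal(sim, -np.inf)                          # never contrast with itself
    n = z1.shape[0]
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])  # index of each positive
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 32))                # embeddings of view 1 of 8 images
z2 = z1 + 0.05 * rng.normal(size=(8, 32))    # embeddings of view 2, close to view 1
print(contrastive_loss(z1, z2))
```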
2lkUNDZld-4
supervised learning, and the reason why you do this is that you don't need labels and, we know, it tends to give very very good representations. So, past that, what this network here will learn will be a very good representation; why these self-supervised losses, the contrastive loss for example, give such good performance, there have been papers recently that modify the loss and so on,
557
592
https://www.youtube.com/watch?v=2lkUNDZld-4&t=557s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
but it's not super well understood yet but if you do it like this there the network here will give you already very very good representation and we know this because we can take a network like this and then simply train a linear classifier on top of that on a data set and achieve very very good performance and mind you you have trained it with unlabeled data right so
592
617
https://www.youtube.com/watch?v=2lkUNDZld-4&t=592s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
the the network has never been trained to solve like image net classification it has simply been trained to look at the pictures and determine if you know two versions of a picture come from the same picture or from different pictures and now if you simply train a linear classifier on top of these representations you're doing extremely well already so we know these
617
638
https://www.youtube.com/watch?v=2lkUNDZld-4&t=617s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
representations, they actually learn something about these images. So that's the first part. Then stage 2, let's cancel all of that, stage 2 is you want to do supervised fine-tuning. Now you already see that the arrow here is not coming out of this task-agnostic big CNN, the arrow is actually coming out of those yellow boxes, and the yellow boxes are these projection heads. So in the
638
668
https://www.youtube.com/watch?v=2lkUNDZld-4&t=638s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
original SimCLR paper, what they did was, originally they wanted to train this network right here, this is like a ResNet-50, it's pretty standard in these kinds of self-supervised approaches, and these few-label approaches, to train a standard network, and this is like a ResNet-50. So in the original SimCLR paper they said we want to make ResNet-50 as strong
668
696
https://www.youtube.com/watch?v=2lkUNDZld-4&t=668s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
as possible but in order to do this loss right here we are going to attach this projection head just to because the dimensionality here I think is like 2048 and we want to do this inner product in a lower dimension of like maybe 256 or so so this these are just multi-layer perceptrons these are just fully connected layers that compress the representation down to that and once
696
726
https://www.youtube.com/watch?v=2lkUNDZld-4&t=696s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
we're done with the unsupervised pre-training we're going to throw those away, right, and this ResNet is the thing that we really care about. Now here they claim, okay, it actually works better, and they have experiments to prove this or to show this, if you actually leave one of these layers here, so in the end, I guess, they converge on three projection head layers and then
726
752
https://www.youtube.com/watch?v=2lkUNDZld-4&t=726s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
they only throw away the top two, and they make this big deal out of the fact where, you know, I can just call this part right here now the encoder, and I don't exactly, like, I don't see the giant deal here, like you've just made your network one layer bigger and now you consider that to be your encoder and the projection head is now two layers and
752
780
https://www.youtube.com/watch?v=2lkUNDZld-4&t=752s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
that will be much easier than saying the projection head is three layers but we leave one layer and train from the middle layer. In any case, they have this additional layer right here compared to the old SimCLR, and then the representation of that goes into supervised fine-tuning. Now this is pretty easy, this is exactly what it sounds like: so now you use only
780
802
https://www.youtube.com/watch?v=2lkUNDZld-4&t=780s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
the dataset that has labels so the part of the data set that has labels and you do the fine tuning and fine tuning is simply supervised learning you train this network in a supervised fashion on that small fraction of data that has class labels and that already performs pretty well and they show this in experiments but then you can go a step further and do what's known as
802
828
https://www.youtube.com/watch?v=2lkUNDZld-4&t=802s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
distillation or self-training and what's distillation or self-training it's so distillation is when you have a network that you call the teacher Network and that network has been trained to do some classification maybe into three classes pretty pretty well okay but now this is very large and you want maybe a smaller model so you just want like this tiny model because you want to ship it on a
828
858
https://www.youtube.com/watch?v=2lkUNDZld-4&t=828s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
mobile device right but it's also supposed to do this and you know that if you just directly train this which is called the student model it doesn't perform as well as the teacher model there is a better way if you have the teacher model you can sort of transfer the knowledge to the student model you can distill the knowledge and how do you do that you do that by so what would you
858
883
https://www.youtube.com/watch?v=2lkUNDZld-4&t=858s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
do in supervised training in supervised training you would take an image put it in and then put the label that comes along with the image you put it up here and you compare the output to the label and that gives you the loss function right now you do that right here if you distill you put the image into both now the teacher is already trained so its output will be a distribution over
883
910
https://www.youtube.com/watch?v=2lkUNDZld-4&t=883s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
classes it won't be a single label it will be like okay 90% class 1 10% class 2 0 % class 3 something like this and now you take this as like a pseudo label this entire distribution and you put it here and you compare the output of the student to that of the teacher and that's your loss function so this kind of the teacher might have learned to put some nuance into the classification
910
936
https://www.youtube.com/watch?v=2lkUNDZld-4&t=910s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
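A minimal sketch of the distillation loss described here: the teacher's softmax output serves as a soft pseudo-label and the student is trained with cross-entropy against it. The logits below are invented for illustration:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits):
    p_teacher = softmax(teacher_logits)                  # e.g. roughly [0.9, 0.1, 0.0]
    log_p_student = np.log(softmax(student_logits) + 1e-12)
    return -(p_teacher * log_p_student).sum(axis=-1).mean()

teacher_logits = np.array([[4.0, 1.8, -2.0]])            # teacher fairly sure it's class 1
student_logits = np.array([[1.0, 0.5, 0.2]])             # untrained student
print(distillation_loss(student_logits, teacher_logits))
```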
2lkUNDZld-4
well I'm pretty sure this is class one but I'm not a hundred percent sure and it can transfer that knowledge to the student and that makes the student better than had you just trained it from the beginning from from with just the labels right so this is distillation and you can do this even what they call self distillation here or self-training so apparently this even helps if the
936
964
https://www.youtube.com/watch?v=2lkUNDZld-4&t=936s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
teacher is if the student model is the same as the teacher model now why does it help in this case and I think it is not exactly the case in this case because they always say their teacher model has this extra projection layer right and then the student model doesn't have that even if they do self-training but why does it help in this case I mean it's it's kind of shocking and I'm
964
988
https://www.youtube.com/watch?v=2lkUNDZld-4&t=964s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
pretty sure it helps in any case but in this particular case it helps because now you're using the unlabeled data again so you have a teacher model and the teacher model is trained first using unsupervised like this is the teacher model right here using unsupervised training then the teacher model is further fine-tuned on the small data right so it is now already pretty good
988
1,016
https://www.youtube.com/watch?v=2lkUNDZld-4&t=988s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
at the task but how can you get a student model that's even better than the teacher model it's by using again this unlabeled that you have this giant amount of data so what you'll do is you take an image from the unlabeled data and you ask the teacher model teacher model what do you think about that image right and the teacher model will give you a prediction like let's say again
1,016
1,040
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1016s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
this 90 percent 10% 0% and then you take the student model you input that image and you compare its output to what the teacher said so this combines the teacher model you freeze the teacher model right the teacher model is only trained until here you take it from here the student model is now able to take basically the teacher it takes everything that the teacher model knows
1,040
1,070
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1040s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
not only about this data but about all the data so it kind of gets to ask the teacher model what do you think about this what do you think about this what do you think about this and it it can incorporate all that knowledge about all of this unlabeled data and that's why the student model here in the end if it's the same size will probably end up even better than the teacher model right
1,070
1,094
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1070s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
so distillation I think also is still kind of a mystery of why you get a better model or I mean to to make it smaller if you make it a lot smaller usually you don't up end up with a better model but you end up with a pretty good model that you couldn't have gotten by just training the small small model but so that's already pretty cool but why you get a better model with when
1,094
1,119
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1094s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
they're the same size that's I don't think that's well understood yet so that's the three-stage approach so recap first use all of the data without labels to do unsupervised or self supervised contrastive pre-training second use only the data that has labels to do fine tuning third either distill the learnt classifier to a smaller model or distill it to a model of the same size again
1,119
1,152
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1119s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
in both cases you would again use all of the unlabeled data. Okay, and that's the three-step approach, that's SimCLRv2 in all of its form. Alright, so they go into fine-tuning right here, and, yeah, so they say we elaborate with a three-layer projection head, so that's the three-layer projection head, this here is the output of ResNet-50, where sigma is a
1,152
1,187
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1152s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
ReLU non-linearity, and we ignore the bias term for brevity, blah blah blah. So they contrast this: here for fine-tuning SimCLR uses this right here, which is basically just a classifier on top of the output of the ResNet-50, okay, yada yada yada, this is fine-tuning from the input layer of the projection head; to fine-tune from the first layer of the
1,187
1,216
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1187s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
projection head, we have a new encoder function as this, which is the ResNet followed by fully connected layers, and you see they take the ResNet-50 output and they ship it through the first projection layer and then there is a task-specific classifier. Now, again, I don't even see why they make like this ginormous deal out of it, especially since the last layer of the
1,216
1,241
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1216s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
ResNet-50, I'm, okay, here I'm not entirely sure, but are they taking the logits? No, they're probably not taking the logits, okay, but, yeah, it's just weird, like is there even a non-linearity at the end right here, or is this really just like two matrix multiplications in a row, which, I'm going to guess there's a big chance that that's the case, that the last layer of this encoder is actually
1,241
1,269
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1241s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
not even followed by non-linearity and therefore you'll just kind of make the dimension different and I don't see why you can't just just incorporate this into the model and have to like say it over and over again that this is a new special thing right again this is equivalent of tuning from a middle layer of the projection head instead of the output layer like ok you just make your
1,269
1,290
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1269s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
model a bit bigger. Yeah, so the third step is self-training or knowledge distillation, and they give two variants right here. This variant, as you can see here, this is just the cross-entropy, but instead of having labels right here, y, you have the term for what the teacher model thinks y is given x, okay, that's cross-entropy, but not with the true labels but with
1,290
1,320
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1290s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
the output of the teacher model and you can even mix that so you can as you can see right here you can mix this with an actual supervised loss so this would be the supervised loss whatever yeah I guess that I was wrong that wasn't I guess P of Y is always in that case but they don't use this particular kind I think except in one of the ablations so how does this work it
1,320
1,349
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1320s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
works pretty well, and so in one of their experiments, as you see up here, it works pretty well in that if you have 1% of the labels, only 1% of the ImageNet labels, which they say is smaller or equal to 13 images per class, so there are a thousand classes and you only have 13 labels per class or less, and they differentiate: if the encoder that you train is a ResNet-50 then you get, and
1,349
1,388
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1349s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
you can see the dashed line here is a supervised baseline you almost get to the supervised baseline with one percent of the labels and if you actually have a larger ResNet then you get to the supervised performance without without 99 percent of the labels and if you have excuse me ten percent of the labels you pass the supervised baseline so the supervised baseline is on 100% of the
1,388
1,417
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1388s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
labels, mind you, and you only have ten percent, and this outperforms the supervised baseline. Now of course you could have another graphic here where you show, oh, what if we do the whole procedure with 100 percent of the labels, so first we don't use the labels, we do unsupervised self-supervision, then we fine-tune on a hundred percent of the
1,417
1,438
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1417s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
data, and then we do this distillation again, you would of course be even better, and I think they have this somewhere in a table, but this is already pretty impressive. And another claim they make right here is about the model sizes, and this figure's description now relates to the title: they say bigger models yield larger gains when fine-tuning with
1,438
1,468
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1438s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
fewer labeled examples. So there are three comparative words in one sentence, let's unpack this: bigger models yield larger gains, so the bigger the model the better, let's say, when fine-tuning with fewer labeled examples. Let's just look at the graph, it's really clear: so here we have the number of parameters going over, so these are the different
1,468
1,498
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1468s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
models they look at, how many parameters they have to do this whole procedure, and here is the relative improvement in percent of the ImageNet top-1 accuracy. So if you do this whole thing with a hundred percent of the labels, right, I'm gonna guess this here is where they start out, and you can see as you grow your models you grow the performance, and this is just by
1,498
1,530
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1498s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
increasing the model size right you have the same data set you have the same amount of labels you have the same number of steps that you train for and so on just by the fact that you make your model bigger you gain in performance okay now you can see that these curves here are above one another and these curves refer to getting small less and less labels okay so if you only
1,530
1,559
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1530s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
have 10% of the labels your relative gains are larger. This doesn't mean that you perform better with 10% of the labels than with a hundred percent of the labels, that would be like ridiculous, well, I guess in this day and age nothing is ridiculous, but for now we're still performing better by having more labels if we do the same procedure, right, it's not like here, so here this
1,559
1,586
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1559s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
baseline, the supervised baseline, only does supervised training, right, so that's why we can outperform it with fewer labels, but here we do the same procedure, this is relative improvement, right. So this right here, the starting point, would be if you had 10 percent of labels and a 25-million-parameter model, and this right here for example is if you have the same amount
1,586
1,614
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1586s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
of labels but a 200 million parameter model and this is relative improvement okay but what the graph says is that the relative improvement is larger that the relative improvement is higher the more parameters you have which is the more you go to the right and that effect in itself is higher the fewer labels you have which is the different graphs and you can see that right here so if you
1,614
1,648
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1614s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
have fewer and fewer labels it becomes more and more important that you have bigger models and that's really counterintuitive right because you would expect that the bigger models they can over fit much more easily to the fewer labels but that doesn't seem the case so this self supervision it really seems to be sort of a counter to this notion of overfitting and if you have larger and
1,648
1,674
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1648s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
larger models that's what they argue in the paper you might be able to learn more and more features that might be useful for classification so if you have a larger model you might you're gonna learn more kinds of features and then you're going to outperform because you have more chance that these features are going to be useful for classification and I don't think they really make a
1,674
1,697
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1674s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
statement as to why that happens more when you have fewer labels. So let's think about this: if I have very few labels, very very few labels, why does it help me even more if I have a big model? Well, with the same argumentation we could say, and maybe they actually say this already so I might be copying them involuntarily, maybe with fewer and fewer labels, like, let's say we have all
1,697
1,727
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1697s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
the labels, that's probably too many, right, if we can learn a task with some accuracy we probably had too many labels, okay; likewise, if we can't learn a task we know we have too few; somewhere there is a border where we have just enough, but that's like one number and everything else is too many, technically speaking, like learning-theoretically speaking, so
1,727
1,753
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1727s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
usually we have too many labels, and what does that mean? That probably means that there are multiple ways, like if we have too many labels there are multiple different features we can pick up on, and there are multiple different paths to learn our goals. So if we have ImageNet, and say there's this weird task to recognize a three and we get lots and lots and lots of examples
1,753
1,775
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1753s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
of threes, right, we can decide on a feature, we can say, oh, all the threes that I see have this bow down here, or all the threes that I see have this bend here, and so on. But if I only have very few labels there might only be like a single feature that is even theoretically possible to learn from the labels I'm given, and therefore if I have a bigger model in self-supervised pre-training,
1,775
1,801
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1775s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
because the pre-training happens with the same amount of data, right, if I have a bigger model that does the self-supervised pre-training, it is going to learn more features, and then there's a higher chance that that one feature that these very few labels allow me to learn something from is going to be in these features. So that's kind of how I make sense of it, in combination
1,801
1,828
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1801s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
with what they're saying right here. Okay, so those were the main points. They do a lot of empirical studies showing the effects of these sizes, they stress that it's important to have both deep and wide networks, and they also do this additional attention mechanism over the convolution filters, I don't want to go into that particularly, but they also do linear evaluation compared to
1,828
1,861
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1828s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
supervised compared to to fine tuning on with 100% of the labels so they do a very thorough empirical investigation and yeah I I do appreciate that and they kind of show the same things and here they show the number of layers in the projection head so as you increase the number of layers in the projection head and train from the optimal layer in the middle your performance goes up as you
1,861
1,893
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1861s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
can see but it also this effect is stronger when you have fewer labels right you can see the differences here are greater than the differences here or even here when you have a hundred percent of the labels so the fewer labels the fewer the labels the more benefit you have from the architecture right here and here they show that it's not always optimal to train from the
1,893
1,917
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1893s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg
2lkUNDZld-4
last projection layer but from the first one, so I guess they converge on three projection layers and you always want to keep the first one around after self-supervised training, as we mentioned before. Okay, they investigate different distillation losses and show that it is actually important that you do the distillation loss on labeled and unlabeled sets, you can see here, if you
1,917
1,944
https://www.youtube.com/watch?v=2lkUNDZld-4&t=1917s
Big Self-Supervised Models are Strong Semi-Supervised Learners (Paper Explained)
https://i.ytimg.com/vi/2…axresdefault.jpg