Dataset schema (one row per transcript segment):
video_id: string (length 11)
text: string (length 361 to 490)
start_second: int64 (0 to 11.3k)
end_second: int64 (18 to 11.3k)
url: string (length 48 to 52)
title: string (length 0 to 100)
thumbnail: string (length 0 to 52)
S27pHKBEp30
scale your activations to 0–1. Why are these things problematic? These were bread-and-butter in the old days of neural networks; people would use these between layers all the time, and they make sense there, they're kind of biologically inspired: you take any activation and you scale it from 0 to 1, or minus 1 to 1. But they're actually really problematic.
1,115
1,140
https://www.youtube.com/watch?v=S27pHKBEp30&t=1115s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
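A quick numeric illustration of the saturation described above (a minimal sketch, not from the talk, assuming NumPy): the sigmoid's gradient is essentially zero once the pre-activation is large, so the trainer cannot tell a "large" value from a "huge" one.

```python
# Sketch (not from the talk): sigmoid gradients vanish for large inputs.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)

for x in [0.0, 2.0, 5.0, 20.0]:
    # At x=20 the gradient is ~2e-9: gradient descent "can't tell the difference"
    # between x=20 and x=100, which is exactly the saturation problem described.
    print(f"x={x:5.1f}  sigmoid={sigmoid(x):.6f}  grad={sigmoid_grad(x):.2e}")
```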
S27pHKBEp30
Because if you get a neuron which has a very high activation value, then you've got this number up here which is 1, and you take the derivative of that and it's 0, or some very, very small number. So your gradient descent can't tell the difference between an activation up here and one way over on the other side, and it's very easy for the trainer to get confused if your activations don't stay
1,140
1,163
https://www.youtube.com/watch?v=S27pHKBEp30&t=1140s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
near this middle part, all right, and that's problematic. Compare that to ReLU, which is the standard these days. Yes, ReLU does have this very large dead space, but if you're not in the dead space then there's nothing stopping it from getting bigger and bigger and scaling off to infinity. And one of the intuitions behind why this works
1,163
1,185
https://www.youtube.com/watch?v=S27pHKBEp30&t=1163s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
better, as Geoffrey Hinton puts it, is that this allows each neuron to express a stronger opinion. In a sigmoid there is really no difference between the activation being three or eight or twenty or a hundred, the output is the same; all it can say is kind of yes, no, maybe. But with a ReLU it can say the activation is five or a hundred or a
1,185
1,213
https://www.youtube.com/watch?v=S27pHKBEp30&t=1185s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
thousand, and these are all meaningfully different values that can be used for different purposes down the line, so each neuron can express more information. Also the gradient doesn't saturate, we talked about that. And very critically, and I think this is really underappreciated, they're really insensitive to random initialization. If you're working with a bunch of sigmoid
1,213
1,235
https://www.youtube.com/watch?v=S27pHKBEp30&t=1213s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
layers, you need to pick those random values at the beginning of your training to make sure that your activation values are in that middle part where you're going to get reasonable gradients. People used to worry a lot about what initialization to use for your neural network; you don't hear people worrying about that much at all anymore, and ReLUs are really the key reason why. ReLU
1,235
1,256
https://www.youtube.com/watch?v=S27pHKBEp30&t=1235s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
also runs great on low-precision hardware. Those smooth activation functions need 32-bit float; maybe you can get them to work in 16-bit float sometimes, but you're not going to be running them in 8-bit int without a ton of careful work, and that is the kind of thing that's really easy to do with a ReLU-based network. A lot of hardware is going
1,256
1,277
https://www.youtube.com/watch?v=S27pHKBEp30&t=1256s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
in that direction, because it takes vastly fewer transistors and a lot less power to do 8-bit integer math versus 32-bit float. It's also stupidly easy to compute the gradient: it's one or zero, you just take that top bit and you're done, so the derivative is ridiculously easy. ReLU does have some downsides: it does have those dead neurons on the left side, you can fix
1,277
1,299
https://www.youtube.com/watch?v=S27pHKBEp30&t=1277s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
that with a leaky ReLU; there's this discontinuity in the gradient at the origin, you can fix that with GELU, which BERT uses. And so this brings me to a little aside about general deep learning wisdom: if you're designing a new network, for whatever reason, don't bother messing with different kinds of activations, don't bother trying sigmoid or tanh, they're probably not going to work out very well.
1,299
1,322
https://www.youtube.com/watch?v=S27pHKBEp30&t=1299s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
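A minimal sketch (assuming NumPy; the GELU here is the common tanh approximation, not taken from the talk) of the three activations just mentioned, showing how they differ on the negative side and stay unbounded on the positive side:

```python
# Sketch: ReLU, leaky ReLU, and (approximate) GELU side by side.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, slope=0.01):
    # a small slope on the left removes ReLU's "dead" region
    return np.where(x > 0, x, slope * x)

def gelu(x):
    # tanh approximation of GELU (the activation BERT uses); smooth at the origin
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

xs = np.array([-3.0, -0.5, 0.0, 0.5, 3.0, 100.0])
print(relu(xs))        # unbounded on the right, exactly zero on the left
print(leaky_relu(xs))  # small negative values instead of a dead zone
print(gelu(xs))        # smooth near 0, roughly the identity for large positive x
```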
S27pHKBEp30
But different optimizers do matter. Adam is a great place to start: it's super fast, it tends to give pretty good results, it has a bit of a tendency to overfit. If you really are trying to squeeze the juice out of your system and you want the best results, SGD is likely to get you a better result, but it's going to take quite a bit more time to
1,322
1,342
https://www.youtube.com/watch?v=S27pHKBEp30&t=1322s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
converge. Sometimes RMSProp beats the pants off both of them; it's worth playing around with these things. I told you about why I think SWA is great. There's this system called AdaTune my old team at Amazon released where you don't even need to pick a learning rate; it dynamically calculates the ideal learning rate schedule at every point during training for you, it's kind of magical.
1,342
1,362
https://www.youtube.com/watch?v=S27pHKBEp30&t=1342s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
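The advice above in code form, as a hedged sketch (assumes PyTorch; `model` and the toy data are placeholders, not from the talk): keep the model fixed and swap the optimizer.

```python
# Sketch: trying different optimizers on the same model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

optimizers = {
    "adam": torch.optim.Adam(model.parameters(), lr=1e-3),               # fast, good default
    "sgd": torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9),   # slower, often best final result
    "rmsprop": torch.optim.RMSprop(model.parameters(), lr=1e-3),         # occasionally wins
}

def train_step(optimizer, x, y):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
for name, opt in optimizers.items():
    print(name, train_step(opt, x, y))
```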
S27pHKBEp30
So it's worth playing around with different optimizers, but don't mess with the activation functions. Okay, let's pop out. There's a bunch of theory, a bunch of math and ideas in there; how do we actually apply this stuff in code? If you want to use a transformer, I strongly recommend hopping over to the fine folks at Hugging Face and using their transformers
1,362
1,386
https://www.youtube.com/watch?v=S27pHKBEp30&t=1362s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
package. They have both PyTorch and TensorFlow implementations, pre-trained models ready to fine-tune, and I'll show you how easy it is. Here's how to fine-tune a BERT model in just 12 lines of code: you pick what kind of BERT you want, the base model that pays attention to upper and lower case; you get the tokenizer to convert your string into tokens; you download the pre-trained
1,386
1,410
https://www.youtube.com/watch?v=S27pHKBEp30&t=1386s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
model in one line of code; pick your dataset for your own problem; process the dataset with the tokenizer to get training and validation splits, shuffle, and batch, four more lines of code; another four lines of code to instantiate your optimizer, define your loss function, pick a metric, it's TensorFlow so you've got to compile it, and then you call fit. And that's it.
1,410
1,435
https://www.youtube.com/watch?v=S27pHKBEp30&t=1410s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
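A hedged sketch of the workflow just described, assuming the current Hugging Face `transformers` API and TensorFlow (not the exact 12 lines shown on the speaker's slide; the toy texts and labels stand in for your own dataset):

```python
# Sketch: fine-tuning a cased BERT-base classifier with transformers + Keras.
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# pick the kind of BERT (base, case-sensitive) and its matching tokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

# toy stand-in for "your own dataset"
texts, labels = ["great movie", "terrible movie"], [1, 0]
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="tf")

# it's TensorFlow, so compile and fit
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(dict(enc), tf.constant(labels), epochs=1, batch_size=2)
```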
S27pHKBEp30
That's all you need to do to fine-tune a state-of-the-art language model on your specific problem, and the fact that you can do this on some pre-trained model that's seen tons and tons of data, that easily, is really amazing. And there are even bigger models out there. Nvidia made this model called Megatron with eight billion parameters; they ran hundreds
1,435
1,456
https://www.youtube.com/watch?v=S27pHKBEp30&t=1435s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
of GPUs for over a week, spent vast quantities of cash, well, I mean, they own the stuff, so not really, but they put a ton of energy into training this. I've heard a lot of people complaining about how much greenhouse gas comes from training a model like Megatron. I think that's totally the wrong way of looking at this, because they only need to do this once in the history of
1,456
1,480
https://www.youtube.com/watch?v=S27pHKBEp30&t=1456s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
the world, and everybody in this room can use it without having to burn those GPUs again; these things are reusable and fine-tunable. I don't think they've actually released this yet, but they might, and somebody else will. So you don't need to do that expensive work over and over again; this thing learns a base model really well. The folks at Facebook trained this
1,480
1,503
https://www.youtube.com/watch?v=S27pHKBEp30&t=1480s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
RoBERTa model on two and a half terabytes of data across over a hundred languages, and this thing understands low-resource languages like Swahili and Urdu in ways that are just vastly better than what's been done before. And again, these are reusable: if you need a model that understands all the world's languages, this is accessible to you by leveraging other people's work, and
1,503
1,527
https://www.youtube.com/watch?v=S27pHKBEp30&t=1503s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
before BERT and transformers and the Muppets, this just was not possible; now you can leverage other people's work in this way, and I think that's really amazing. So to sum up the key advantages of these transformer networks: yes, they're easier to train, they're more efficient, all that yada yada yada, but more importantly, transfer learning actually works with them. You can
1,527
1,549
https://www.youtube.com/watch?v=S27pHKBEp30&t=1527s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
take a pre-trained model and fine-tune it for your task with your own specific dataset. And another really critical point, which I didn't get a chance to go into, is that these things are originally trained on large quantities of unsupervised text; you can just take all of the world's text data and use it as training data. The way it works, very quickly, is kind of comparable to how
1,549
1,570
https://www.youtube.com/watch?v=S27pHKBEp30&t=1549s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
word2vec works, where the language model tries to predict some missing words from a document, and that's enough for it to learn a model from vast quantities of text without any effort to label it. LSTM still has its place: in particular, if the sequence length is very long or infinite, you can't do N-squared attention, and that happens if you're doing real-time control.
1,570
1,598
https://www.youtube.com/watch?v=S27pHKBEp30&t=1570s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
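A tiny sketch of the "predict the missing word" objective just mentioned, exercised through a pre-trained model (assumes the Hugging Face `transformers` library; the example sentence is made up):

```python
# Sketch: masked-word prediction, the pretraining signal described above.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill("The weather today is really [MASK].")[:3]:
    # each candidate is a dict with the predicted token and its score
    print(candidate["token_str"], round(candidate["score"], 3))
```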
S27pHKBEp30
Like for a robot or a thermostat or something like that, where you can't have the entire sequence. And if for some reason you can't pre-train on some large corpus, LSTM seems to outperform transformers when your dataset size is relatively small and fixed. And with that, I will take questions. Yes? Yeah, how do word CNNs compare with transformers? So when
1,598
1,636
https://www.youtube.com/watch?v=S27pHKBEp30&t=1598s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
I wrote this paper, the rise and fall and rise and fall of LSTM, I predicted at that time that word CNNs were going to be the thing that replaced LSTM; I did not see this transformer thing coming. So a word CNN has a lot of the same advantages in terms of parallelism and the ability to use ReLU, and the key difference is that it only looks at a fixed-size window, a fixed-
1,636
1,661
https://www.youtube.com/watch?v=S27pHKBEp30&t=1636s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
size part of the document, instead of looking at the entire document at once. So it's got a fair amount fundamentally in common with transformers. Word CNNs have an easier time identifying bigrams, trigrams, things like that, because they've got those direct comparisons; they don't need this positional-encoding trick to try to infer with Fourier waves where
1,661
1,688
https://www.youtube.com/watch?v=S27pHKBEp30&t=1661s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
S27pHKBEp30
things are relative to each other, so it's got that advantage for understanding closely related tokens. But it can't see across the entire document at once, so it's got a much harder time reasoning; a word CNN can't easily answer a question like, does this concept exist anywhere in this document, whereas a transformer can very easily answer that just by having some
1,688
1,713
https://www.youtube.com/watch?v=S27pHKBEp30&t=1688s
LSTM is dead. Long Live Transformers!
https://i.ytimg.com/vi/S…axresdefault.jpg
PXOhi6m09bA
hi, welcome to lecture nine of CS 294-158, Deep Unsupervised Learning, spring 2020. I hope everyone had a good spring break despite the rather unusual circumstances. Today we will be covering two main topics: semi-supervised learning and unsupervised distribution alignment. Before diving into that, a couple of logistics: the current situation, and a quick mid-semester update. Well, first we
0
38
https://www.youtube.com/watch?v=PXOhi6m09bA&t=0s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
hope everyone and their families are able to keep healthy during these pretty unusual times; please prioritize your health and well-being accordingly, and please don't hesitate to let us know if this class would interfere with that and we'll be happy to figure something out. Now for the quick mid-semester update: since we're just past the middle of the semester and there has been some re-planning compared to the
38
60
https://www.youtube.com/watch?v=PXOhi6m09bA&t=38s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
current situation, here's a quick overview of what's still ahead in this class. Today we have lecture 9, which will cover semi-supervised learning and unsupervised distribution alignment. Next week we'll have lecture 10 on compression, which will be a live Zoom lecture. Then at the end of next week your final project three-page milestone reports are due; this is not graded, but
60
95
https://www.youtube.com/watch?v=PXOhi6m09bA&t=60s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
it's a great way to get feedback and make sure you're on track for a good final project, and we'll try to give you feedback in the Google Doc that you share with us on a pretty short turnaround. Then we'll have lecture 11 on language models with a guest instructor, Alec Radford from OpenAI. Then we'll have our midterm, which we'll adjust to the current circumstances; we'll see how we do it, but
95
122
https://www.youtube.com/watch?v=PXOhi6m09bA&t=95s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
the high-level format will remain similar: we want to cover the main derivations that we've seen this semester and have you be able to re-derive those. Then we'll have our final regular lecture, lecture 12, on representation learning in reinforcement learning; that will also be a live Zoom lecture with a recording. Then there's RRR week, which hopefully gives you time to
122
148
https://www.youtube.com/watch?v=PXOhi6m09bA&t=122s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
catch up on a lot of things, hopefully including making the extra push on the final project for this class. And then during finals week, on Wednesday the 13th, there are final project presentations; we'll see how to do that with the new situation, and final project reports will be due at that time also. With all the logistics covered, let's dive into the technical content for
148
176
https://www.youtube.com/watch?v=PXOhi6m09bA&t=148s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
today. So first we'll cover semi-supervised learning, which Aravind will cover, and then we'll cover unsupervised distribution alignment, which will be covered by Peter Chen, and actually we'll use the lecture he gave last year for this year as well. Welcome to lecture 9a of Deep Unsupervised Learning; in this part of the lecture we'll be covering semi-supervised learning. So, first, to
176
205
https://www.youtube.com/watch?v=PXOhi6m09bA&t=176s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
understand what semi-supervised learning is, let's look at supervised learning. In supervised learning you have a joint distribution of data points and labels, and you sample from the joint distribution; in expectation over the samples from the joint distribution, your goal is to maximize the log probability of the classifier, log p(y | x). We all know how to do this, at least
205
233
https://www.youtube.com/watch?v=PXOhi6m09bA&t=205s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
procedurally, how it's done: you basically sample an image and label, or a sequence and a particular label, or any pair of x and y from your dataset, assuming they're all independent and identically distributed. You don't know what the analytical form of the distribution is; you assume that it is some complicated distribution and you just keep sampling multiple points repeatedly, and with
233
263
https://www.youtube.com/watch?v=PXOhi6m09bA&t=233s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
stochastic gradient descent you optimize this objective. Now what is semi-supervised learning? Assume that you have an unlabeled dataset D_U where x is sampled from p(x), which is the marginal corresponding to the joint distribution that the labeled dataset is sampled from. So you have D_U, the unlabeled dataset, and D_L, the labeled dataset, and your goal is to perform the same thing as earlier.
263
292
https://www.youtube.com/watch?v=PXOhi6m09bA&t=263s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
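Written out, a short summary of the setup just described (notation assumed from the slides, not transcribed from them):

```latex
% Supervised learning: maximize the expected log-likelihood over the joint distribution,
\max_\theta \; \mathbb{E}_{(x,y)\sim p(x,y)} \left[ \log p_\theta(y \mid x) \right]
% approximated with i.i.d. samples D_L = \{(x_i, y_i)\}_{i=1}^{n} and optimized by SGD.

% Semi-supervised learning: the same objective on the labeled set D_L, but with an extra
% unlabeled set D_U = \{x_j\}_{j=1}^{m}, \; x_j \sim p(x), the marginal of the same joint.
```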
PXOhi6m09bA
That is, supervised learning on the labeled dataset, but with access to this extra unlabeled data that's coming from the same marginal. So that's the assumption you make in semi-supervised learning: that the unlabeled data is coming from the marginal corresponding to the same joint distribution that the supervised data is coming from. That's the mathematical assumption; in practice you
292
316
https://www.youtube.com/watch?v=PXOhi6m09bA&t=292s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
can't really ensure that, but your goal is to make sure that you can use this extra unlabeled data to perform even better on your supervised learning objective on the labeled data. So, non-mathematically, here is the summary of what we just described. Take a task like classification: the fully supervised scenario is where every single data point is given to you in the form of an image with a label, and you try
316
347
https://www.youtube.com/watch?v=PXOhi6m09bA&t=316s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
to predict a label for new images; that's your task for supervised learning. Now the semi-supervised scenario is: you're going to be given a few labeled samples, but you're also going to be given a lot of unlabeled samples. Labeling is a time-consuming process, potentially very expensive, and actually pretty hard in certain domains like
347
374
https://www.youtube.com/watch?v=PXOhi6m09bA&t=347s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
medicine, or detecting rare events in self-driving for that matter. So you have a lot of unlabeled data, and your training dataset can be re-parametrized now as having some pairs of labeled data points, image and label, and also a lot of other data points where you just have the image. Your goal is to take these extra data points where you don't have labels
374
403
https://www.youtube.com/watch?v=PXOhi6m09bA&t=374s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
and try to improve a classifier that otherwise just works with the labeled data. How you do it is totally up to you, and that basically decides what kind of semi-supervised algorithm you're going to come up with and use. In this lecture we're going to look at how these algorithms can be designed, what the mathematical or intuitive aspects of these algorithms are, and how they
403
426
https://www.youtube.com/watch?v=PXOhi6m09bA&t=403s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
compare to each other, and how they can scale to larger datasets like ImageNet and beyond. So as to why we're even interested in this problem: semi-supervised learning is really important because, even though you can collect labeled data very easily these days with a lot of data annotation startups, it's still expensive in terms of hiring people, writing annotation manuals for the actual data,
426
458
https://www.youtube.com/watch?v=PXOhi6m09bA&t=426s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
hiring annotators, and preparing graphical user interfaces so that all this is done really fast, making sure it's stored on the cloud efficiently and syncing from the browser to the cloud; there are lots of engineering challenges involved in setting up a good data annotation tool. Now, that's not to say we're never going to do it, we are still going to do it, but the goal is to make sure that we don't do it
458
483
https://www.youtube.com/watch?v=PXOhi6m09bA&t=458s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
as much as we're doing it right now, because we also have access to a lot of unlabeled data that we can potentially exploit and maybe even improve the performance of our labeled-data systems. This is similar in spirit to our goals for self-supervised learning; semi-supervised learning is a different take on this. Self-supervised learning can work with just unlabeled data, whereas semi-supervised learning needs some
483
507
https://www.youtube.com/watch?v=PXOhi6m09bA&t=483s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
labeled data plus a lot of unlabeled data; that's the key difference. So here is a slide from Thang Luong, who took this particular picture from Vincent Vanhoucke's blog post called "The Quiet Semi-Supervised Revolution", where the belief of many practitioners, at least until recently, was that semi-supervised learning will be really useful in a low-data regime, where it's really
507
538
https://www.youtube.com/watch?v=PXOhi6m09bA&t=507s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
going to be better than normal supervised learning when you hardly have any labeled data; however, once you collect a sufficient amount of labels, supervised learning will catch up and eventually be much better. This is why a lot of startups don't care about semi-supervised learning: the amount of effort needed to do the research and engineering to get semi-supervised learning working, especially
538
561
https://www.youtube.com/watch?v=PXOhi6m09bA&t=538s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
given that it's a relatively new field, is larger than the effort needed to collect the extra labeled data points, and you're guaranteed better performance that way anyway. So that's the rationale as far as the left plot goes. But look at the plot on the right: the dream of many semi-supervised learning researchers is that it not only is going to be super useful in a low-data regime, but it's going to be extremely useful even in
561
589
https://www.youtube.com/watch?v=PXOhi6m09bA&t=561s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
the high-data regime, because it's still going to give you that final few percentage points of extra performance thanks to access to a lot of unlabeled data, and learning much richer or more fine-grained classifiers because of that. And that's basically what's happened recently, as we'll see when we look at the history of the field in recent times. So, the core concepts needed to understand semi-supervised learning are
589
616
https://www.youtube.com/watch?v=PXOhi6m09bA&t=589s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
very few, and we're just going to look at them at a very high level; it's really intuitive and not hard to understand. So the first principle we look at is the confidence of classifiers versus minimal entropy on unlabeled data; it's a very nice coupling that this principle shows, and it's been used to very good effect in recent work, and mathematically there
616
651
https://www.youtube.com/watch?v=PXOhi6m09bA&t=616s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
have been papers on these two ideas. The first idea is entropy minimization, which we'll look at: just the idea of taking unlabeled data and making sure that the classifier trained on labeled data has minimal entropy on the unlabeled data; that way you're making sure that the classifier is confident even on unlabeled data, and it's a useful regularizer for the classifier. The second idea
651
675
https://www.youtube.com/watch?v=PXOhi6m09bA&t=651s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
is pseudo-labeling, where you take your classifier, ask it to predict what the labels are for the unlabeled data, take the confident predictions and convert them to extra data as if they were the ground truth, and train the model on those data points. This idea is also referred to in the literature as self-training: the idea of training the model on its own predictions, if the model is
675
703
https://www.youtube.com/watch?v=PXOhi6m09bA&t=675s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
confident enough, thereby expanding your dataset and regularizing your model further. This is a little tricky because it has this self-reinforcing effect of the model using its own predictions, so it needs to be done very carefully; that's the caveat there. And the other way to add noise to a model, to regularize the model, is virtual adversarial training, which we'll really
703
727
https://www.youtube.com/watch?v=PXOhi6m09bA&t=703s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
look at in detail, but the idea is similar to how adversarial training is performed for, you know, adversarial examples in images, where you're trying to fool the classifier about which label this image corresponds to, and you're trying to add a particular noise in the direction of the gradient of the output with respect to the input
727
751
https://www.youtube.com/watch?v=PXOhi6m09bA&t=727s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
so that the model starts producing some other label. Similarly here, you want to make sure that the semi-supervised learning model is regularized in the directions around the unlabeled data: you want to find directions in which the classifier is likely to be confused and make sure that the model is not confused in those directions. So that's the idea
751
774
https://www.youtube.com/watch?v=PXOhi6m09bA&t=751s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
behind virtual adversarial training. There are also ideas like label consistency, which is making sure that the augmentations of a sample have the same class. So you have an image; we know that we use a lot of data augmentation in regular supervised learning, but in semi-supervised learning you have a lot of unlabeled data and you can't apply the augmentation to them if
774
799
https://www.youtube.com/watch?v=PXOhi6m09bA&t=774s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
you're not passing them to the classifier. But instead what you can do is take an unlabeled sample, create two different augmentations of it, and tell the classifier that its predictions on these two different augmentations of the unlabeled data should be roughly similar, because even though you don't have a label, you tell the classifier that whatever it's
799
822
https://www.youtube.com/watch?v=PXOhi6m09bA&t=799s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
predicting should be similar. This way the classifier gets a lot of structure in the loss function, and a lot of constraints are imposed on the parameters being learned from the unlabeled data, and therefore it's going to be much more regularized than just training on the labeled data. So this is a very neat idea, and it's also similar in spirit to things you've seen in
822
846
https://www.youtube.com/watch?v=PXOhi6m09bA&t=822s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
self-supervised learning, which is the idea of taking two different views of an image and trying to make sure that they attract each other relative to another image, so basically consistency constraints embedded into your encoder. Various different ideas in the past have attempted to do this, and we're going to look at them: the Pi model, temporal ensembling, and mean teacher. Finally, we're
846
877
https://www.youtube.com/watch?v=PXOhi6m09bA&t=846s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
going to look at regularization, which is the idea of taking a model and making sure that it generalizes well to a new unlabeled dataset or a new validation set. Typically people use weight decay, dropout, and data augmentation for making sure that classifiers generalize well, and those are also pretty important in semi-supervised learning; methods that heavily use these are unsupervised
877
906
https://www.youtube.com/watch?v=PXOhi6m09bA&t=877s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
data augmentation (UDA) and MixMatch, which we'll look at in detail, but there are also other papers that we can't really cover in the scope of this lecture; you should check out other papers that are related, mentioned as related work in these papers. Finally, we'll look at the idea of co-training, or self-training, or pseudo-labeling; all these are ideas that
906
927
https://www.youtube.com/watch?v=PXOhi6m09bA&t=906s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
have already been mentioned in this list of bullet points, but there is a particular paper, Noisy Student, which has taken these ideas to a whole new level in terms of performance, and so we'll look at that in a little more detail. So, entropy minimization: it's a very simple idea. You have a lot of unlabeled data and you have your labeled data; you're training a classifier on the labeled
927
953
https://www.youtube.com/watch?v=PXOhi6m09bA&t=927s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
data, but you want to make sure that the unlabeled data is also influencing the classifier in some way. So one simple idea is: you take your classifier, ask it to predict on the unlabeled data, and you want to make sure that the classifier is pretty confident on the unlabeled data, or rather that the entropy of the class probabilities it outputs on the unlabeled data is small enough;
953
983
https://www.youtube.com/watch?v=PXOhi6m09bA&t=953s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
this way you ensure that the classifier is understanding the structure of the unlabeled data while trying to be confident about it, so it's trying to find a solution for the labeled data in such a manner that it will be pretty confident on the unlabeled data as well. So this is one way to do semi-supervised learning, a very old idea.
983
1,007
https://www.youtube.com/watch?v=PXOhi6m09bA&t=983s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
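A minimal sketch of entropy minimization as just described (assumes PyTorch; the function name, the weight value, and the combined-loss form are illustrative, not taken from the lecture):

```python
# Sketch: entropy minimization as an extra loss term on unlabeled data,
# added to the usual cross-entropy on labeled data.
import torch
import torch.nn.functional as F

def entropy_min_loss(model, x_labeled, y_labeled, x_unlabeled, weight=0.1):
    # standard supervised term on the labeled batch
    sup = F.cross_entropy(model(x_labeled), y_labeled)
    # entropy of the predicted class distribution on the unlabeled batch
    probs = F.softmax(model(x_unlabeled), dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    # pushing entropy down pushes the classifier to be confident on unlabeled data
    return sup + weight * entropy
```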
PXOhi6m09bA
And pseudo-labeling: this is a very similar idea, and we'll see how it's actually similar, but the goal here is to take your classifier that's being trained on labeled data, ask it to predict on unlabeled data, pick the most confident predictions, and turn them into extra labeled data as if they were the ground truth, and you train the classifier on its own predictions. So the classifier is
1,007
1,034
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1007s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
making a bunch of predictions, and those are being converted into ground-truth proxy labeled data for itself, and it's going to train again on new datasets created from the unlabeled data based on itself. This principle is also referred to as self-training, and there is a connection to entropy minimization. Here is the connection: consider an image x and classes y1, y2, y3.
1,034
1,059
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1034s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
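A minimal sketch of pseudo-labeling / self-training as just described (assumes PyTorch; the confidence threshold and weight are illustrative assumptions):

```python
# Sketch: keep only confident predictions on unlabeled data and treat them
# as extra ground truth for the classifier.
import torch
import torch.nn.functional as F

def pseudo_label_loss(model, x_unlabeled, threshold=0.95, weight=1.0):
    with torch.no_grad():
        probs = F.softmax(model(x_unlabeled), dim=1)
        confidence, pseudo_y = probs.max(dim=1)
        keep = confidence > threshold          # only trust confident predictions
    if keep.sum() == 0:
        return torch.tensor(0.0)
    # train on the model's own confident predictions as if they were labels
    return weight * F.cross_entropy(model(x_unlabeled[keep]), pseudo_y[keep])
```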
PXOhi6m09bA
Let's say you're doing a classification problem, and let's say there is a classifier A with probabilities for the output classes being 0.1, 0.8, 0.1, and a classifier B with the probabilities being 0.1, 0.6, and 0.3. Classifier A clearly has lower entropy, and you can say it's more confident; it's more confident that the
1,059
1,081
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1059s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
true ground truth is y2: its score for y2 is much higher, and the scores for the other two classes are lower, compared to classifier B. So there is clearly a connection to be made between a classifier being more confident and it having lower entropy in the output probability distribution over the classes, and therefore minimizing the entropy of your classifier on unlabeled
1,081
1,111
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1081s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
data is akin to taking a classifier's outputs on unlabeled data and, if they are confident enough, training on its own predictions, so it has a similar effect; mathematically it's shown in these older papers that are linked here, so you can go and check them out. The next thing we're going to see is data augmentation for label consistency. So you take an image, let's
1,111
1,139
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1111s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
say, whether it's unlabeled or labeled, you're given an image and now you create different augmentations of it. This is the same picture we used for SimCLR and MoCo; I'm just using it so that you can relate this concept to the earlier lecture where we talked, in self-supervised learning, about data augmentation consistency using contrastive losses. Similar ideas
1,139
1,163
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1139s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
have been used in semi-supervised learning as well. Like I said, you're already using data augmentation for labeled data, so it doesn't matter much if you enforce consistency there, but for unlabeled data, if you take two different views and make sure that the logits are close enough for the classifier that's being trained on labeled data, that enforces a lot of structure on the learner. So you just make
1,163
1,187
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1163s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
sure that the predictions are roughly similar, and if you do this for a lot of unlabeled data with a lot of different data augmentations, then your classifier gets very regularized and generalizable even though it's training on very little labeled data. So that's the idea of label consistency constraints using data augmentation, and the more data augmentation you use, the better it is.
1,187
1,210
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1187s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
So that's it for the foundational material. Next we'll actually look at different semi-supervised learning algorithms like the Pi model, temporal ensembling, virtual adversarial training, and so on, but we'll also look at how the algorithms compare to each other. This particular paper from Google Brain, "Realistic Evaluation of Deep Semi-Supervised Learning Algorithms", compares
1,210
1,243
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1210s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
various different semi-supervised learning techniques on the CIFAR and SVHN datasets, which are reasonably small so you can run a lot of prototyping experiments with them. The four algorithms we're going to be looking at are the Pi model, temporal ensembling, mean teacher, and virtual adversarial training. So let's look at the Pi model: the idea is pretty much whatever we talked
1,243
1,276
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1243s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
about for label consistency: you take your image, you create different views using stochastic data augmentation, it could be a random crop, or a sequence of data augmentations whose order is randomized, or you apply random grayscale and so on, and now you pass each view to the model, and the model itself could be stochastic, it could have dropout, so
1,276
1,300
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1276s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
every forward pass could give you a different output even for the same image, and you get two different latent variables for these different views of the input run through the model. So every time you make a forward pass on labeled data you can enforce the regular supervised cross-entropy loss, and if you throw in unlabeled data you can enforce the squared
1,300
1,328
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1300s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
difference between the outputs of the model for the two different views. For labeled data you can enforce both of these losses, while for unlabeled data you just enforce this label consistency loss, which is: you just take your output before the softmax, or even after the softmax, it depends on how you want to implement it, but you take a particular layer at the end and you make sure that
1,328
1,351
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1328s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
that layer is similar for the two different views. And you weight both of these losses together, so one is going to be an unsupervised consistency loss and the other is the supervised loss, and you can actually control which loss dominates the training at the beginning and at the end. For instance, one reasonable choice is to make sure that the supervised loss dominates in the beginning so that
1,351
1,379
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1351s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
the model first learns how to classify images, and then you can ramp up the weight for the semi-supervised or unsupervised loss so that it's learning the structure of the unlabeled data in a similar fashion. So this idea is called the Pi model, and this is the pseudocode for the Pi model: x_i are your training inputs, y_i your labels, w(t) is your ramped-up weight for the unsupervised loss, f_theta(x) is your
1,379
1,408
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1379s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
neural net that does the classification task, and it could have some dropout in it so it will be stochastic; g(x) is your data augmentation, which is also stochastic. You basically perform two different augmentations of your mini-batch, get two different outputs z_i and z~_i, and you make sure that z_i and z~_i are close to each other using a squared-error loss or some
1,408
1,432
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1408s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
other distance metric, and you also make sure that the predictions of the classifier match the true ground truth whenever you have labels. So that's really it, that's as simple as the Pi model gets; basically it's using the label consistency principle.
1,432
1,462
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1432s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
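A hedged sketch of one Pi-model training step as just described (assumes PyTorch; `augment` is a placeholder for the stochastic augmentation g(x), and `labeled_mask` marks which items in the batch have labels; this is not the slide's exact pseudocode):

```python
# Sketch: Pi model step = supervised cross-entropy + ramped-up consistency loss.
import torch
import torch.nn.functional as F

def pi_model_step(model, augment, x, y, labeled_mask, w_t):
    z1 = model(augment(x))                 # first stochastic view (augmentation + dropout)
    z2 = model(augment(x))                 # second stochastic view of the same batch
    consistency = F.mse_loss(z1, z2)       # squared difference between the two outputs
    # supervised cross-entropy only where labels exist
    supervised = F.cross_entropy(z1[labeled_mask], y[labeled_mask])
    # w_t is the ramp-up weight: small early so the supervised loss dominates first
    return supervised + w_t * consistency
```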
PXOhi6m09bA
Temporal ensembling does something slightly different: it says, hey, I don't want to do forward passes on two different views all the time, because it's expensive; why not just keep a moving average of these sample embeddings for every single sample and make sure that the consistency is enforced across time. So I would still do stochastic data augmentation and a stochastic network, I would get an embedding every time in the forward pass, but I would say that those embeddings should be close to
1,462
1,486
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1462s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
some historical version of the same sample's embedding from the past, and that way amounts to enforcing some kind of data augmentation consistency constraint, because you would have done a different augmentation in the past, but you're going to keep an estimate of it for every single sample separately. This is very similar to those ideas we talked about in the GAN lectures, in Improved
1,486
1,512
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1486s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
GANs, where there was a constraint on the parameters to be close to historical versions; this is that at a sample level. Other than that it's pretty much the same as the Pi model: there's a cross-entropy loss, there's a ramp-up function for the unsupervised objective, and both objectives are jointly optimized. So this is the pseudocode for temporal ensembling.
1,512
1,537
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1512s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
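A hedged sketch of the idea (assumes PyTorch; `Z` is the running per-sample target and `idx` the indices of the current batch; this is a simplified per-batch variant, whereas the paper accumulates and updates once per epoch):

```python
# Sketch: temporal ensembling keeps one EMA target per sample instead of
# doing two forward passes per step.
import torch
import torch.nn.functional as F

def temporal_ensembling_step(model, augment, x, y, labeled_mask, idx, Z, epoch,
                             alpha=0.6, w_t=1.0):
    z = model(augment(x))                            # single stochastic forward pass
    target = Z[idx] / (1.0 - alpha ** (epoch + 1))   # bias-corrected historical target
    consistency = F.mse_loss(z, target.detach())
    supervised = F.cross_entropy(z[labeled_mask], y[labeled_mask])
    # update the stored moving average for exactly these samples
    Z[idx] = alpha * Z[idx] + (1.0 - alpha) * z.detach()
    return supervised + w_t * consistency
```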
PXOhi6m09bA
It was proposed in the same paper where the Pi model was proposed. One negative thing about temporal ensembling is that it's not going to scale with the size of the dataset: you're not going to be able to maintain a separate moving-average embedding for every sample if the dataset is big enough, like a million or a billion images. So mean teacher basically amortizes that and says, hey, if you want to keep an
1,537
1,563
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1537s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
exponential moving average for embeddings, why not just keep an exponential moving average of the parameters? So you still take two different views, but make sure that the embeddings match those of the moving-average version of the model, rather than keeping separate moving-average embeddings for every sample. So you take your model theta, there's also an EMA version of theta, and
1,563
1,586
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1563s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
you make sure that the embedding you get for one view matches the embedding you get for the other view, but with the different encoders, basically. So that's the idea of this mean teacher approach, where the teacher can be considered as the EMA version; you think about it as a teacher because it's giving you these consistency targets, and you also perform the classification loss in parallel at the same time.
1,586
1,611
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1586s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
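A hedged sketch of mean teacher as just described (assumes PyTorch; the EMA decay, the use of MSE for the consistency term, and the function names are illustrative assumptions):

```python
# Sketch: mean teacher keeps an EMA of the *parameters* (the teacher)
# and asks the student to match the teacher's predictions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_teacher(teacher, student, alpha=0.99):
    # exponential moving average of the student's weights
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1.0 - alpha)

def mean_teacher_step(student, teacher, augment, x, y, labeled_mask, w_t):
    student_out = student(augment(x))              # one view through the student
    with torch.no_grad():
        teacher_out = teacher(augment(x))          # another view through the EMA teacher
    consistency = F.mse_loss(student_out, teacher_out)
    supervised = F.cross_entropy(student_out[labeled_mask], y[labeled_mask])
    return supervised + w_t * consistency
```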
PXOhi6m09bA
Finally, let's look at virtual adversarial training. In adversarial training you create perturbations using the fast gradient sign method, where you basically calculate the gradient with respect to your input image, basically the gradient of your output with respect to your input image; this is a high-dimensional vector or matrix, depending on what your input is,
1,611
1,640
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1611s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
and you move your input in that direction: you take this gradient, you take its sign, and you move your input by a small epsilon in that direction, and that lets you fool your classifier. You want to make sure that your classifier is not fooled by these perturbations, and so you would perform adversarial training to make sure that the
1,640
1,666
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1640s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
classifier is not fooled at these data points in these different perturbation directions. Now, in semi-supervised learning you don't have the labels for the unlabeled data, so how would you do adversarial training there? The idea is to do virtual adversarial training, where you look at the distribution over your classes instead of a particular class, because you don't want to take a
1,666
1,691
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1666s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
particular class and find the perturbation direction for that class; you take the distribution over classes and you take the KL divergence, a distance metric, between your unperturbed data point's predictions and your perturbed data point's predictions, and try to figure out the direction that maximizes this KL. It turns out that if you linearize the KL term, you can actually solve for this direction using power
1,691
1,716
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1691s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
iteration, and once you get the direction, you can make sure that you enforce structure on your classifier around the unlabeled data: you're trying to make sure that on these perturbations of the unlabeled data the classifier is still not fooled, even though you don't have access to a true label. So that's why it's referred to as virtual adversarial training; it's not actually adversarial
1,716
1,736
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1716s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
training, but it shares principles with adversarial training, done cleverly with some mathematical tricks. This is basically the pseudocode for the power iteration method which I mentioned; it works because you linearize the KL term. So that's it for the techniques: Pi model, temporal ensembling, mean teacher, and virtual adversarial training; those are the four techniques.
1,736
1,768
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1736s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
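A hedged sketch of virtual adversarial training as just described (assumes PyTorch and image-shaped inputs of shape (B, C, H, W); the hyperparameters xi, eps, and the single power-iteration step are illustrative assumptions, not the lecture's exact pseudocode):

```python
# Sketch: VAT loss -- find the perturbation direction that most changes the
# predicted distribution (one step of power iteration on the linearized KL),
# then penalize the change in that direction.
import torch
import torch.nn.functional as F

def vat_loss(model, x_unlabeled, xi=1e-6, eps=2.0, n_power=1):
    with torch.no_grad():
        p = F.softmax(model(x_unlabeled), dim=1)          # unperturbed predictions
    # random unit direction to start power iteration
    d = torch.randn_like(x_unlabeled)
    d = d / (d.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)
    for _ in range(n_power):
        d.requires_grad_(True)
        p_hat = F.log_softmax(model(x_unlabeled + xi * d), dim=1)
        kl = F.kl_div(p_hat, p, reduction="batchmean")
        grad = torch.autograd.grad(kl, d)[0]
        d = grad / (grad.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)
        d = d.detach()
    # final adversarial perturbation and the consistency penalty around it
    p_adv = F.log_softmax(model(x_unlabeled + eps * d), dim=1)
    return F.kl_div(p_adv, p, reduction="batchmean")
```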
PXOhi6m09bA
These are the techniques considered in this comparison paper, and they make sure that they use the same architecture for all of them, because prior work did not do that. They use a Wide ResNet; the idea in a Wide ResNet is that your normal ResNet goes through a bottleneck and the Wide ResNet doesn't do that, it just uses three-by-three convs and doesn't do one-by-one convs for downsampling, it's as wide
1,768
1,793
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1768s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
as possible. So given all these constraints, with the same architecture and similar hyperparameters for the various different semi-supervised learning algorithms, it turns out that virtual adversarial training performs the best. If you look at CIFAR-10 with four thousand labels: CIFAR-10 originally has 50,000 images, so you basically just use 46,000
1,793
1,825
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1793s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
unlabeled data points and four thousand labeled data points, which is like four hundred labels per class; that's really tiny compared to what the original dataset is. Virtual adversarial training gets an error rate of 13.86 percent, which is the best among all the methods, and virtual adversarial training plus entropy minimization together gets an even lower error rate of
1,825
1,847
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1825s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
13.13 percent, and the trend is similar for the SVHN dataset, where virtual adversarial training plus entropy minimization outperforms all the other methods. The authors also say that they report much better baselines than prior work; for instance, in prior work the reported baselines had much higher error rates than what these authors report, so
1,847
1,878
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1847s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
this paper actually took the effort to make sure that all the ablations are done carefully. One negative finding about semi-supervised learning on CIFAR is that if you use something like a pre-trained ImageNet model, you take a model pre-trained on ImageNet labels and then you fine-tune it on CIFAR, you actually get better numbers than using the unlabeled data on CIFAR itself,
1,878
1,905
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1878s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
even though the unlabeled data on CIFAR is coming from the same underlying distribution, and ImageNet is a completely different distribution with completely different image sizes; so that's slightly bad. And there's a significant difference, at least a one-plus percentage point difference, and even if you address the class overlaps and remove the classes
1,905
1,932
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1905s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
overlapping with CIFAR, you still get a lower error rate than just using semi-supervised learning on CIFAR. The authors also analyze things like: hey, if your unlabeled data, say CIFAR with its ten classes, doesn't have the same uniform class distribution as the labeled data, you can play around with how much the distribution of unlabeled data points
1,932
1,965
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1932s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
overlaps class-wise with the labeled data. It is clear that as the distribution mismatch increases, virtual adversarial training is the most resistant compared to all the other approaches. Similarly, if you vary the number of labeled data points, obviously the test error is going to be lower as the number of labeled data points increases, because that's slowly getting
1,965
1,992
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1965s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
to the supervised learning regime, and all the methods perform roughly similarly. But in the extreme scenario where you have very few labels, down to around 50 labeled data points and so on, virtual adversarial training is significantly the best on SVHN, and it is also the best on CIFAR, although other methods are competitive as well. The lessons from this paper are, when you
1,992
2,020
https://www.youtube.com/watch?v=PXOhi6m09bA&t=1992s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg