video_id: string (length 11)
text: string (length 361 to 490)
start_second: int64 (0 to 11.3k)
end_second: int64 (18 to 11.3k)
url: string (length 48 to 52)
title: string (length 0 to 100)
thumbnail: string (length 0 to 52)
PXOhi6m09bA
use the standard — when you compare different algorithms in semi-supervised learning, you should make sure that you use a standard architecture and an equal training budget, which means you should spend an equal amount of time tuning hyperparameters for all of them, and if your unlabeled data is coming from a distribution that doesn't necessarily overlap with your labeled data points then the benefits of
2,020
2,041
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2020s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
semi-supervised learning will not be there. Thirdly, most methods don't work well in a very, very low data regime — this is not true right now, but it was true when that paper was published, and we'll see how it changed over time — and transfer learning from ImageNet can produce better error rates, but again that's not true right now, so this is an old paper, but the main reason for going
2,041
2,070
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2041s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
through that is to introduce all these different techniques like the Pi model and virtual adversarial training and temporal ensembling and the mean teacher. So the agenda for the rest of the lecture is to cover three very recent papers in semi-supervised learning that have actually taken semi-supervised learning to a whole new level: unsupervised data augmentation, Mix
2,070
2,097
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2070s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
Match, and noisy student. So before that let's actually take a break. Okay, resuming. So first let's look into unsupervised data augmentation for consistency training in semi-supervised learning. This is a paper from Google Brain, from Quoc Le's group, and these slides are from Thang Luong, who was one of the authors of this paper. So we've already seen how important data
2,097
2,142
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2097s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
augmentation is, and it's been significantly useful in supervised learning in the high data regime, but if you just do supervised learning and you don't have a lot of labels, just data augmentation isn't gonna get you very far, even if you use extremely aggressive data augmentation like AutoAugment, which is shown here, where you basically can rotate, shear, and add color to a
2,142
2,167
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2142s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
scene and create a lot of different views of the same image. Similarly, in language you can create different versions of the same phrase or sentence using a technique called back-translation. What it does is basically you take a sentence in a particular language, translate it to another language, and then translate it back from the other language to the
2,167
2,195
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2167s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
original source language. So you go from a source language to a target language and you come back from the target language to the source language, and you hope that the entropy in the decoder and the encoder will result in a new version of the same sentence. And examples are here: the source sentence here is "given the low budget and production limitations,
2,195
2,224
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2195s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
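To make the back-translation recipe concrete, here is a minimal sketch; the two `translate_*` arguments stand in for whatever sampling-based translation models you have available (hypothetical helpers, not anything from the lecture or the UDA paper):

```python
def back_translate(sentence, translate_src_to_tgt, translate_tgt_to_src, temperature=0.9):
    """Paraphrase `sentence` by round-tripping it through another language.

    Both translate_* callables are assumed to decode by sampling at the given
    temperature, so repeated calls yield different but meaning-preserving outputs.
    """
    intermediate = translate_src_to_tgt(sentence, temperature=temperature)
    return translate_tgt_to_src(intermediate, temperature=temperature)

# Usage sketch: several diverse paraphrases of one unlabeled sentence.
# paraphrases = [back_translate(s, en_to_fr, fr_to_en) for _ in range(3)]
```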
PXOhi6m09bA
the movie is very good", and if you look at the three different back-translations: "since it was highly limited in terms of the budget and the production restrictions, the film was cheerful", "there are a few budget items with production limitations to make this film a really good one", and a third along the lines of the movie being very beautiful despite the small budget. So
2,224
2,248
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2224s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
the first and third versions are particularly good, while the second conveys a slightly different meaning, but it's more or less there. So this is giving a lot of diversity, and based on which language you move to you're gonna get very different outputs, and even in the same language you're gonna get different outputs for different decodings. So the key idea in UDA, or unsupervised data
2,248
2,273
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2248s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
augmentation, is: apply state-of-the-art data augmentation techniques on unlabeled data for consistency training in semi-supervised learning. You've already seen that the Pi model was basically doing consistency training, but it was a pretty old paper and data augmentation through these neural architectures was not as developed back then, so you can think of UDA as
2,273
2,299
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2273s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
doing the Pi model right, by using a lot of data augmentations on the right architectures. So here's a nice way to understand UDA: think about labeled data, you have x and your ground truth y*, and you're training a classifier p_theta(y | x), and you have your standard supervised cross-entropy loss that makes sure that the logits for the true class are really maximized, and you also have
2,299
2,325
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2299s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
unlabeled data. So this is the situation, and models like virtual adversarial training add noise to regularize the model's predictions, where the noise is in the virtual adversarial training direction, calculated via gradients in an approximate fashion. Next you have this thing called the unsupervised consistency loss, which is: you take the noised version of the model and
2,325
2,353
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2325s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
your original model and you make sure that the logits are really similar on unlabeled data — so this is something you already know — and the final loss is a combination of the supervised and the unsupervised consistency loss, and you can see that virtual adversarial training actually works pretty well. So this is a nice illustration of virtual adversarial training, even though it's being presented in the
2,353
2,376
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2353s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
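As a rough sketch of the loss being described here — supervised cross-entropy on the few labeled examples plus a consistency term between the prediction on an unlabeled example and the prediction on its augmented version — something like the following PyTorch-style snippet; this is only an illustration, not the authors' code, and `augment` stands in for whatever strong augmentation (RandAugment, back-translation) is assumed:

```python
import torch
import torch.nn.functional as F

def uda_loss(model, x_labeled, y_labeled, x_unlabeled, augment, lam=1.0):
    # Supervised term: standard cross-entropy on the labeled batch.
    sup_loss = F.cross_entropy(model(x_labeled), y_labeled)

    # Unsupervised consistency term: the prediction on the original unlabeled
    # example is treated as a fixed target, and the prediction on a strongly
    # augmented version of the same example is pulled toward it.
    with torch.no_grad():
        target = F.softmax(model(x_unlabeled), dim=-1)
    pred_log = F.log_softmax(model(augment(x_unlabeled)), dim=-1)
    consistency = F.kl_div(pred_log, target, reduction="batchmean")

    return sup_loss + lam * consistency
```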
PXOhi6m09bA
UDA slides. So the uncolored data points are unlabeled data, while the green and pink data points have a label; here you only have roughly eight data points which are labeled, but after performing virtual adversarial training, after imposing the consistency between the model and the noised version of the model, where the noise comes from VAT, you can see how the labels propagated and covered the
2,376
2,404
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2376s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
whole data manifold. So that's really cool, and that's the goal of semi-supervised learning: to do this really well in high dimensions when you have a lot of data and a lot more parameters. So you can think of UDA as creating this noise at the input level using various different data augmentations, and depending on the modality, for instance, you use AutoAugment for images, you would use TF-IDF
2,404
2,432
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2404s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
word replacement or back-translation for NLP, and based on that you create different augmentations of the same image or the same sample and enforce a consistency loss — the unsupervised consistency loss part — and you also have the supervised cross-entropy loss, and you optimize both together. So that's basically UDA, and the augmentations provide diverse and valid
2,432
2,462
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2432s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
perturbations for your input. So like I said, back-translation produced three different versions that look very different from each other but convey roughly the same meaning as the original source sentence. In this case they actually went from English to French and back to English, but you can also think of doing it through other languages, and you can increase the diversity by playing
2,462
2,487
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2462s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
around with the temperature and various different sampling techniques like beam search or nucleus sampling, and you know, whenever you use a softmax you can always use a temperature there: you're gonna get more diverse samples if you use a high enough temperature, and if you use a low enough temperature you're going to get the most confident softmax output samples, with less diversity but high
2,487
2,509
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2487s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
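The temperature knob mentioned here just rescales the logits before the softmax; a tiny NumPy sketch (illustrative only) of how higher temperature gives more diverse samples and lower temperature gives more confident ones:

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    # Higher temperature flattens the distribution (more diverse samples);
    # lower temperature sharpens it (more confident, less diverse samples).
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)
```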
PXOhi6m09bA
quality, so you can control for that. Similarly, in images you can use AutoAugment, and depending on the type of augmentation you can control the strength of the augmentation and get sufficient distortions based on what level of distortion you care about. So let's look at the experiments carried out in this paper. In language they experimented with document classification or sentiment
2,509
2,537
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2509s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
classification or review sentiment analysis, and if you look at the sizes of the datasets, they have like twenty-five thousand or 560 thousand or 650 thousand samples and so on, and the error rates before BERT and after BERT are listed in different lines, and you can see how BERT has significantly improved the error rates. So what they did in the UDA paper is,
2,537
2,565
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2537s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
for various different initializations — random, BERT base, BERT large, and BERT fine-tuned — they have numbers for whether you use UDA or whether you don't use UDA, but you notice how the number of labeled data points is reduced by three or four orders of magnitude. So if you look at IMDB, you earlier had twenty-five thousand labeled data points, but the authors ran this experiment training on just twenty
2,565
2,592
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2565s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
labeled data points, which is three orders of magnitude lower; similarly for Yelp it's 2.5k versus 650k, and so on, and you can see how the performance, especially when you use BERT fine-tuned, is on par with BERT large that's trained on all the labeled data points. So the performance you get from taking BERT large and fine-tuning on the fully supervised baseline is actually on par
2,592
2,620
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2592s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
with what you get from UDA plus BERT fine-tuned but with like 1000x fewer labels, which is incredible. So it means that the consistency loss using back-translation is actually working really well. Secondly, they have this idea called training signal annealing, which is: you want to prevent over-training on the labeled data — you have so little labeled data that
2,620
2,648
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2620s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
you want to make sure that your classifier doesn't over-train on it, and for that they actually have a thresholding procedure where they take the classifier, and if it's sufficiently confident, they don't train the classifier on those data points, and this threshold is actually varied over time. So you have an indicator variable for whether the
2,648
2,668
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2648s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
classifier's output probability for the true class is less than the threshold, and only if it is are you gonna have the model train on that data point, or else you're just not gonna backprop those gradients. And for this threshold you can play around with different schedules: at the end you want to make sure that the threshold is actually high enough
2,668
2,695
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2668s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
so that the model is no longer being held back from training, and in the beginning you don't want too high a threshold because your model could still be erroneous, so they play around with linear, exponential, and log schedules in the paper. Finally, this is a really cool plot that shows the dream of semi-supervised learning, which is that the benefits hold even in the high data regime: even when you have twenty-five thousand labeled examples, the
2,695
2,726
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2695s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
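A minimal sketch of the training signal annealing idea as described: the confidence threshold grows from chance level toward 1 over training, and a labeled example only contributes gradients while the model's probability for its true class is still below that threshold. The schedule shapes here are my paraphrase of the linear, exponential, and log variants mentioned, not the paper's exact constants:

```python
import numpy as np

def tsa_threshold(step, total_steps, num_classes, schedule="linear"):
    # alpha grows from 0 to 1 over training; the threshold grows from 1/K to 1.
    t = step / total_steps
    if schedule == "linear":
        alpha = t
    elif schedule == "exp":
        alpha = np.exp((t - 1.0) * 5.0)   # stays low early, rises late
    else:  # "log"
        alpha = 1.0 - np.exp(-t * 5.0)    # rises quickly, then saturates
    return alpha * (1.0 - 1.0 / num_classes) + 1.0 / num_classes

def tsa_mask(true_class_probs, step, total_steps, num_classes, schedule="linear"):
    # Keep (i.e. back-propagate through) only the labeled examples
    # the model is not yet confident about.
    thr = tsa_threshold(step, total_steps, num_classes, schedule)
    return true_class_probs < thr
```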
PXOhi6m09bA
performance that you get from semi-supervised learning is better than supervised BERT fine-tuning, so it's actually able to take advantage of all the unlabeled data it has. So next let's look at the computer vision experiments carried out in the UDA paper. Basically they use the standard benchmarks, CIFAR-10 and SVHN, for semi-supervised learning, which you saw in the prior work on
2,726
2,751
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2726s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
realistic evaluation of semi-supervised learning algorithms. So a lot of these baselines are from that paper, where there's a Wide ResNet with 28 layers, and you can see how the parameters are controlled for all the different algorithms, and the numbers are reported for CIFAR-10 with 4,000 labels and SVHN with 1,000 labels, and UDA is actually the best algorithm in this setting, with an error
2,751
2,781
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2751s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
rate as low as 5% on CIFAR-10 and 2.5% on SVHN, and with architectural changes like Shake-Shake, ShakeDrop, and PyramidNet they get all the way down to 2.7 percent, which is significantly lower. So with just four thousand samples they're at around 97 percent or better accuracy on CIFAR-10, which is the kind of accuracy you usually get by using all the labeled data points. So that really
2,781
2,811
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2781s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
means that the data augmentation based label consistency is really, really helping them, and the model scaling works with larger networks: when you move away from the 1.5 million parameters that were typically used in semi-supervised learning on CIFAR earlier, to a model that's big enough, at 26 million parameters, the error rates are getting
2,811
2,836
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2811s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
significantly lower, so this shows that this technique scales with the number of parameters used. Next is how you can actually match the fully supervised baselines while using an order of magnitude fewer labeled data points: the fully supervised baseline uses 50,000 labels and this one uses 4,000 labels, so that's more than 10x fewer, and if
2,836
2,867
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2836s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
you look at the numbers, you basically get a 5.4 percent error rate with supervised and 5.3 percent with UDA for the Wide ResNet-28 architecture, and for the other models, even though UDA is slightly higher, for Shake-Shake and ShakeDrop it actually matches the supervised baselines; though it's not as good as the AutoAugment version, it's still very, very close. Finally, they also ablated how much
2,867
2,897
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2867s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
data augmentation matters, and it seems to really be the biggest deal, as we would expect, because we've seen that in self-supervised learning as well, and that's what they observe here too. So the summary of UDA is that it's a data augmentation technique that's applied on unlabeled data to improve the performance of a classifier that's trained on very few labeled data points,
2,897
2,920
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2897s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
and data augmentation is a critical part of this pipeline and an effective perturbation technique for semi-supervised learning — it's even more effective than perturbing the model — and UDA significantly improves results for both language and vision, with 10x or 100x or 1000x fewer labeled data requirements, and it combines very well with transfer learning
2,920
2,946
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2920s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
like BERT, and it scales with model parameters and model sizes. So they also experimented with ImageNet, where they take unlabeled ImageNet of about 1 million examples as unlabeled data points, and they use 10% labeled data, which is roughly 100,000 labels or around 100 labels per class, and pure supervised gets 55% whereas their model, UDA, gets 68.9, close to 69 percent accuracy, which is at
2,946
2,980
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2946s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
least 30 percent better in terms of relative error, and this shows the benefits of using more unlabeled data points. So another thing that they tried is having a dataset even larger than ImageNet, which is the JFT dataset from Google — it's an internal dataset, on the scale of Google Photos — and they used 1.3 million images from JFT just to see how much the domain mismatch matters.
2,980
3,011
https://www.youtube.com/watch?v=PXOhi6m09bA&t=2980s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
While obtaining extra in-domain unlabeled data helps, the out-of-domain unlabeled data hurt performance, so you can see that it's not actually working as well — using filtered data actually works better for them. They also have ablations on different schedules for the thresholding, and they find that the exponential schedule is better for language but the linear schedule
3,011
3,039
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3011s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
works better for images; that's something very empirical. Diversity constraints — how do you control the diversity constraints to make sure that you have effective data augmentation? They use a lot of hacks, like minimizing the entropy of your decoding, using softmax temperature control, and confidence-based masking for the ImageNet case, where they
3,039
3,064
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3039s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
found that the out-of-distribution unlabeled dataset, like JFT, hurt the performance; so they would take the most confident predictions from the model trained on labeled data and use that to filter the unlabeled data, and then they would find gains. So domain-relevance-based data filtering is a critical aspect of using unlabeled data for improving performance on labeled data. And in
3,064
3,087
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3064s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
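A rough sketch of the confidence-based filtering being described: score the out-of-domain unlabeled images with the classifier trained on labeled data, and keep only the most confident ones as in-domain enough to use. Again this is just an illustration of the idea, with an assumed keep fraction:

```python
import numpy as np

def filter_unlabeled_by_confidence(probs, keep_fraction=0.1):
    """probs: array of shape (N, K), predicted class probabilities for N
    unlabeled examples. Returns indices of the most confident examples."""
    confidence = probs.max(axis=1)
    n_keep = max(1, int(len(probs) * keep_fraction))
    return np.argsort(-confidence)[:n_keep]
```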
PXOhi6m09bA
semi-supervised learning — we already saw this — most of the mathematical or intuitive foundations of semi-supervised learning make the assumption that the unlabeled data is coming from the same data distribution as the marginal corresponding to the joint distribution that you use for the labeled data points, but that's often not the case in practical scenarios, because the
3,087
3,109
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3087s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
unlabeled data is usually coming from some other dataset or some other data source, and you want to make sure you can still transfer knowledge from it; therefore that's a suspect assumption to make, but in practice there's a workaround using the various kinds of filtering techniques that this paper proposes. So next let's look at MixMatch, which is another very
3,109
3,138
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3109s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
interesting paper in a similar spirit. The key idea in MixMatch is: you take unlabeled data, you perform a lot of different augmentations, you run all of these different augmentations through the same classifier and you get the predictions, and you average the predictions across these augmentations, and you end up with a bunch of class probabilities; you can
3,138
3,164
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3138s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
sharpen those class probabilities with softmax temperature control, and once you get a sharpened distribution you have an idea of what the classifier would have guessed for the unlabeled data point, and now what you're going to do is take this and use it as a guessed label, as part of your labeled data, for every update when you do semi-supervised learning. So it's that simple; the only
3,164
3,188
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3164s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
thing is, you're making sure that your guess for unlabeled data comes from averaging over multiple augmentations and sharpening the distribution so that it's confident enough. It's called MixMatch because it uses the mixup trick. First, if you're not familiar with mixup, the idea is: you take your input x and your output y, and let's say you're training a model to
3,188
3,219
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3188s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
predict y from x; you basically create convex combinations of two pairs (x1, y1) and (x2, y2) and create a new data point. So for images it would be something like: for every pixel you take a weighted combination of the pixel from the first image and the pixel from the second image — you basically average those two pixels — and create a new image, and similarly you would
3,219
3,245
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3219s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
average the corresponding ground truth labels to create the target ground truth for your cross-entropy-with-logits loss, and this technique is called mixup, and it's a data augmentation technique. So MixMatch does basically the following: it takes a batch of labeled data and a batch of unlabeled data and produces a new batch of processed labeled examples with this guessing technique
3,245
3,275
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3245s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
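A small sketch of the mixup operation just described, mixing the inputs and the (one-hot) labels with the same convex weight; sampling the weight from a Beta distribution is the common choice, assumed here for illustration:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.75, rng=None):
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing weight in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2       # e.g. pixel-wise average of two images
    y = lam * y1 + (1.0 - lam) * y2       # same convex combination of one-hot labels
    return x, y
```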
PXOhi6m09bA
and mixing up. So let's look at the English part of the pseudocode. Apply data augmentation to x_b: so basically you have labeled data points (x_b, p_b) and unlabeled data points u_b, you apply the data augmentation to x_b, and you apply K rounds of data augmentation to u_b, which is the unlabeled data point, and you compute the average predictions across all the augmentations of u_b — in
3,275
3,304
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3275s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
practice they just use K equal to 2, but it can be really large if you want it to be — and you apply temperature sharpening, which is a softmax temperature, to make sure that the averaged predictions are peaky enough, and after the peaks are obtained you can take the argmax and get the corresponding proxy label for the classifier. So you augment
3,304
3,332
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3304s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
the unlabeled examples with these guessed labels, and using these guesses and the original labels you create an augmented mini-batch, and you shuffle and combine these mini-batch data points using the mixup trick, and once you apply mixup to the labeled and unlabeled data you can just train a regular semi-supervised learning model that has the consistency losses and the supervised
3,332
3,358
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3332s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
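A sketch of the label-guessing step of this pseudocode: run K augmentations of the unlabeled example through the classifier, average the class probabilities, then sharpen with a temperature. This covers only the guessing part; the full MixMatch recipe then mixes these guessed examples with the labeled batch via mixup, as described above:

```python
import numpy as np

def guess_label(predict_probs, u, augment, K=2, T=0.5):
    # Average class probabilities over K random augmentations of u.
    avg = np.mean([predict_probs(augment(u)) for _ in range(K)], axis=0)
    # Temperature sharpening: raise to the power 1/T and renormalize,
    # so the guessed distribution becomes peaky (confident).
    sharpened = avg ** (1.0 / T)
    return sharpened / sharpened.sum()
```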
PXOhi6m09bA
cross-entropy loss, treating this new batch as the MixMatch output. So MixMatch is producing this processed labeled-plus-unlabeled data batch from two different data batches that come in independently, and the loss function for MixMatch is going to be the regular cross-entropy loss for the classifier plus some kind of consistency loss for the unlabeled data points that you normally use in
3,358
3,385
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3358s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
semi-supervised learning, with some weighting constant, and in practice it works really well. Earlier you saw in the realistic evaluation of semi-supervised learning algorithms that all these techniques were not really working well; this was concurrent work with UDA, and you can clearly see how it really improves the performance on CIFAR-10 and SVHN. So here are the numbers, and you can see that
3,385
3,411
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3385s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
it's not only working in the regime where you have 4,000 labeled examples, but it works well all the way down to when you have 250 labels: MixMatch is able to get all the way down to a roughly 10% error rate on CIFAR-10, which is very impressive. So 250 labels just means 25 images per class, and you're still able to get a classifier that gets
3,411
3,437
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3411s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
around 90% accuracy. So that's it for MixMatch; as I mentioned, UDA was like concurrent work, and they all raise similar ideas with different kinds of implementations — UDA is probably broader in terms of applications to NLP as well, while MixMatch was mainly ablated on CIFAR and SVHN. So let's look at the final paper on this agenda, self-training with noisy student. This is the
3,437
3,469
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3437s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
largest-scale semi-supervised learning experiment conducted in machine learning so far. These slides are also from Thang Luong, who was one of the authors of this paper. So as I said, as background, the dream of semi-supervised learning is to make sure that unlabeled data improves the performance of supervised learning on labeled data even when you have a lot of labels, and
3,469
3,492
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3469s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
that's what this paper tries to achieve. So you already remember how unfiltered JFT was not able to give sufficient gains on ImageNet in the UDA paper, and they actually used clever filtering techniques; noisy student is actually a larger-scale version of that. So the way it works is as follows: you train a teacher model on labeled data to get a really, really good
3,492
3,521
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3492s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
classifier, and you use that classifier to predict on unlabeled data and infer the pseudo-labels on the unlabeled data, and you train a student model with the combined data — the original labeled data that you used for the teacher as well as the guessed labels on the unlabeled data — and you add noise to this process through data augmentation, dropout, and stochastic
3,521
3,546
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3521s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
depth, which is a version of skip connections with stochasticity, and you create, you know, a noisy student — that's why it's called noisy student: you're gonna have a lot of data augmentation in this process. And once you do that, your student model is pretty good: it's highly regularized and also trained on a lot more data, and therefore it can't over-train on anything, but still it trains on
3,546
3,571
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3546s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
a lot of proxy labeled data you generated from unlabeled data, which has already been filtered because it's only taking the confident predictions. So now you can treat that student as a new teacher and repeat this process multiple times: you take the new student as a teacher, which has basically been trained on the labeled data, and you infer the pseudo-
3,571
3,594
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3571s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
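A high-level sketch of the iterative loop being described: the teacher pseudo-labels the unlabeled data, an equally big or bigger noised student trains on labeled plus pseudo-labeled data, and the student then becomes the next teacher. All function names here are placeholders, not the paper's code:

```python
def noisy_student(labeled, unlabeled, train, predict_soft, make_model, iterations=3):
    # 1. Train the initial teacher on labeled data only.
    teacher = train(model=make_model(size=0), data=list(labeled), noise=False)
    for i in range(1, iterations + 1):
        # 2. Teacher infers soft pseudo-labels on the unlabeled data
        #    (confidence-based filtering would also happen here).
        pseudo = [(x, predict_soft(teacher, x)) for x in unlabeled]
        # 3. Train an equal-or-larger student on labeled + pseudo-labeled data,
        #    with input noise (RandAugment) and model noise (dropout,
        #    stochastic depth) applied to the student.
        student = train(model=make_model(size=i), data=list(labeled) + pseudo, noise=True)
        # 4. The student becomes the teacher for the next round.
        teacher = student
    return teacher
```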
PXOhi6m09bA
labels on the unlabeled data again with this new teacher, and you create a new student and so on, and you repeat this process multiple times and you get a really good model. So here are the experiment settings: the architecture they use is EfficientNet, the model noise they use is dropout and stochastic depth, the input noise uses RandAugment, which is a version of AutoAugment that's more
3,594
3,619
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3594s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
efficient, and for pseudo-labels they use soft labels, the continuous values — they don't actually use the one-hot encodings — and the labeled dataset they use is ImageNet, which is 1.3 million images, and the unlabeled dataset they use is JFT, which is 300 million images, and they basically do iterative training where they take the biggest EfficientNet model possible and actually make it
3,619
3,648
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3619s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
wider for the next iterations. So the original teacher could be B7, which is the widest and deepest EfficientNet that exists, and the student model it trains next could be another, bigger version of that model, which is the L2 model, as they call it. So in terms of results, they actually get the state-of-the-art numbers for ImageNet here: eighty-eight
3,648
3,672
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3648s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
point four percent top-1 accuracy, which is significantly better than any other model, and the previous best was eighty-six point four percent, which was actually trained on 3.5 billion labeled images from Instagram. So with just 1.3 million labeled images and 300 million unlabeled images, you're actually able to surpass those numbers by two percentage points, which is significant,
3,672
3,696
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3672s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
especially in those regimes, and they do that with one order of magnitude fewer labels, and they actually do that with a model that's twice as small in terms of the number of parameters, because they use EfficientNets, which are much more efficient in terms of parameters and FLOPs than ResNet or ResNeXt. The improvements also exist without iterative training, so it's not that they
3,696
3,725
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3696s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
actually need iterative training: even without iterative training they get a significant improvement — with just one iteration they can get a minimum of one percent improvement for all the different model regimes, B0, B2, B5, B7; an improvement of 1% is pretty standard — which is pretty nice because this means that the filtering mechanism actually works. Finally, they also show really good
3,725
3,750
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3725s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
robustness results on ImageNet: because of training on a lot of different data augmentations and model noise and a lot of unlabeled data in addition to labeled data, you would expect the resulting classifier to actually be good on these robustness benchmarks, which happens to be the case. So they actually beat the state of the art on robustness benchmarks like ImageNet-A,
3,750
3,772
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3750s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
which is a harder version of ImageNet where the usual classifiers fail, and their model actually gets around 83.7 top-1, which is unprecedented, and they also do really well on ImageNet-C and ImageNet-P, where they get very, very competitive, very good numbers on the top-1 accuracy there, which is significantly better than the
3,772
3,801
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3772s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
models that did not use this noisy student process. So here are examples where the model they trained ended up making the right predictions on these harder versions of ImageNet, shown in black, while the baseline models ended up making the wrong predictions. The baseline models are focusing on the wrong aspects of the image, like a car in the scene, when there's actually an
3,801
3,826
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3801s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
other object of interest. Or for instance, the model is able to capture the basketball in the photo on the bottom row, where there's a man holding a basketball, whereas the baseline models are not able to do that, so that shows this model is very fine-grained in terms of its recognition abilities. And here's another example: there's a dragonfly at
3,826
3,857
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3826s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
the right, but the baseline model was fooled into thinking it's a bullfrog, and similarly a parking meter was mistaken for a vacuum, a swing was mistaken for a mosquito net — so this model is actually very, very good at details. They also have ablations for how much the noise matters in this process, and it seems to matter significantly: when you use all the different kinds of noise, like
3,857
3,888
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3857s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
dropout, stochastic depth, and data augmentation, you get the best possible numbers. So in summary, we looked at semi-supervised learning, and it's a practically important problem in industry for two different scenarios: one is when you have a lot of labeled data and a lot more unlabeled data, like for instance ImageNet and JFT, and you're trying to improve the performance
3,888
3,911
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3888s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
of ImageNet; the other is when you have very little labeled data and you have plenty of unlabeled data, which is usually the case in medicine or finance or something like that. So the promise of semi-supervised learning has always existed for the second scenario, but there have been very good results in the last few months, or the last year or so, in both these scenarios, with the
3,911
3,947
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3911s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
noisy student model really helping in the scenario where you have a lot of labeled data but a lot more unlabeled data, while unsupervised data augmentation or MixMatch are really very good in the low data regime where you have unlabeled data but very little labeled data, and you're able to do really well. So this means that when you have the scenario of using unlabeled data to
3,947
3,970
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3947s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
improve the performance of supervised learning systems, self-supervised learning is not necessarily the only option; semi-supervised learning is as lucrative, or probably even better, because of its ability to improve the performance even in the high data regimes and make it possible to build ImageNet classifiers that have unprecedented top-1 accuracies, like
3,970
3,993
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3970s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
noisy student. That's it for the lecture, thank you very much. All right, so let's get started. Welcome back to lecture nine — I'm actually not sure, okay, deep unsupervised learning something, lecture something — and so we will have a two-part lecture today: in the first part we will look at something called unsupervised distribution alignment, which also goes by a lot of other names, and
3,993
4,028
https://www.youtube.com/watch?v=PXOhi6m09bA&t=3993s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
then the second part will be a guest lecture by Professor Alyosha, to talk about, I guess, some of the works from his lab. So any logistics questions before we dive into the lecture? Milestone — can we have late days for milestones? All right, there you go, which no one is working on. All right, so let's get started. In this lecture we will look at unsupervised distribution alignment, so
4,028
4,114
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4028s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
what does that even mean? Let's remove the unsupervised part and just first look at a distribution alignment problem. A lot of problems in image-to-image translation take this form: let's say I want to go from semantic masks to RGB images; this is a distribution alignment problem because we can think of it as having a distribution over masks here, and then we also have a
4,114
4,146
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4114s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
distribution of, like, just regular images here, and then they would co-occur with a certain joint probability distribution, right — mostly, for one image there is only one correct semantic mask, but for one mask there could be many corresponding images — and the goal would be, how can you align these two in such a way that when I give you an image on the right you can generate
4,146
4,173
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4146s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
the mask, or the other way around if you want to generate more training data. And there are more image problems that take this form: let's say I have an image, what does it look like in the daytime and what does it look like at nighttime? Then again we can think of it as having a distribution of images in the daytime and then also a distribution of images in
4,173
4,195
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4173s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
the nighttime, and then you want to align them in a certain way. So you might ask why this is helpful: one way that this could be helpful is, say, if we want to train autonomous vehicles to drive safely at night, but it's harder to collect data at night, so is there a way for us to collect corresponding images during the daytime and then find a way to find their nighttime counterparts? That would
4,195
4,220
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4195s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
be useful if we can do this. And then there are a lot of other problems that also fall under this formulation, like black-and-white images to color images, and basically everything that we have seen so far is relatively tractable, because I can totally just take a color image and then convert it into black and white, and that gives me a lot of pairs that I can train
4,220
4,243
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4220s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
on, and similarly for the semantic masks and Street View RGB images, as well as daytime and nighttime — for a lot of those you can actually find natural pairs. So these are some of the distribution alignment problems in image space, and this kind of distribution alignment problem also happens in text, the text analog of that, and the most straightforward example is really just
4,243
4,274
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4243s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
machine translation: how do you translate a sentence or a paragraph from one language to another? That again you can think of as a distribution alignment problem: you can think of there being a distribution of English text and then a distribution of Chinese text, and the question here is how do you align these two things together. So when this kind of distribution alignment
4,274
4,299
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4274s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
problem is supervised, then it's relatively easy. In the case where an image goes to a semantic mask, it's basically just a semantic segmentation problem; when it's other image-to-image translation, there's the pix2pix work that was done here at Berkeley; and then for text-to-text domain alignment, when you have the supervised pairs, it's just machine
4,299
4,328
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4299s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
translation; and when you want to go from image to text, again, when you have supervised pairs it's just like a captioning task, and a lot of things. So in the end it really just boils down to fitting a certain conditional distribution — like, given image b, what is the correct mask a — and you have this luxury when you have the kind of a and b pairs that co-occur in the real world,
4,328
4,354
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4328s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
either through your annotation effort or by taking an image at daytime and taking the same image at nighttime. As long as you can gather these kinds of pairs, it's somewhat trivial, at least from a formulation perspective: it's really just fitting a conditional distribution, and we have talked about all sorts of ways to fit distributions in this class, be it an autoregressive model or whatever.
4,354
4,380
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4354s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
But the question becomes interesting when this kind of paired data is expensive to obtain or just doesn't exist; then we are basically going out of the range of this supervised distribution alignment problem — you have one distribution, you have another, but you don't have any paired data — then can you still do this, or do it at all? So I'm taking some examples from
4,380
4,413
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4380s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
a paper called CycleGAN: what if you want to turn a painting into a photograph or turn a photograph into a painting? The second one might be more tractable, because you could possibly say, I take a picture and then I hire someone to paint it for me, but if I want to do it in a very specific style by a specific artist, you really couldn't do that, so in a sense the
4,413
4,434
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4413s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
natural pairs don't even exist in the real world. Similarly, if for whatever reason we want to turn a zebra into a horse or turn a horse into a zebra, it would be very difficult to force a zebra and a horse to take up exactly the same pose and take a picture of them having the exact correspondence, so these are the kinds of paired data that would not exist in the
4,434
4,465
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4434s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
real world. And there are a lot of other applications: let's think back to machine translation. If I want to translate between Chinese and English, or English and German, that's relatively easy, because there's large demand for those language pairs and it makes it economical to annotate a lot of data, basically supervised sentence pairs, but then it's not
4,465
4,494
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4465s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
economical to do it for the other, probably hundreds of languages that exist in the world; it just doesn't make sense to annotate that much data. And if we can make this kind of distribution alignment work without any supervision, then it could be used a lot more: it can be used as a way to augment labeled examples in a kind of semi-supervised way, and it can also be used
4,494
4,521
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4494s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
to do style transfer, like some of the things that we have seen that basically had no ground truth in the real world. Yes? Yeah, well — okay, so if I have a good translation model between two languages, then the value of that model is kind of proportional to the usage that you can get from it. Let's say to train any pair of languages you need the same amount of
4,521
4,552
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4521s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
investment — let's call it fifty million dollars, probably on the lower side — then if I throw in this fifty million dollars for Chinese to English, you probably get a ton of usage and you get ads or whatever other revenue; but if I have English to some language that probably only a hundred thousand people speak, then you get drastically less usage of the model, which
4,552
4,577
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4552s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
means for the same investment you get a lot less out of it. So it's not that they are more expensive to label, it's just that the value doesn't justify it. So okay, let's look at this problem again. It would of course be a nice thing to be able to achieve — give me two distributions and then find a way to align them — but is this even a feasible problem? Right, so if
4,577
4,600
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4577s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
we look at the problem statement, we are basically told to do two things. One thing is, we have two random variables a and b, two distributions, and we get access to samples from them: we get a bunch of samples in one domain, and we also get a bunch of samples from the other domain. That's all great, but what we crucially don't have is, we don't have
4,600
4,623
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4600s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
any samples from the pairs, yet we need to estimate how they are related to each other. So this problem, from a high level, seems pretty hopeless, because you are really given too little information to tackle it; so what we will look at next addresses exactly this. Basically the crucial problem now is, where do we even get any training data — if I don't have any supervised pairs,
4,623
4,666
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4623s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
what do I even train the model on? So the way that people have been doing this is they try to rely on certain invariants that are true for kind of any pair of distributions, and then somehow you can get some meaningful learning signal out of it. So the first kind of invariance that we can rely on is something called marginal matching; there's some math here, but the
4,666
4,697
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4666s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
brief idea is really: if I want to translate from one distribution to another, after the translation the distributions should still look like each other. More precisely, what it means is that there is some fundamentally unknown coupling, some fundamentally unknown relationship between these two random variables a and b that we are trying to
4,697
4,724
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4697s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
approximate, but we don't have access to it. So let's call our approximation q: given b, what a is most likely — well, this should be the other way around, but — so basically we are trying to learn two mappings, or two conditional distributions, and when you specify this kind of conditional distribution q(a | b), you implicitly induce a marginal
4,724
4,755
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4724s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
distribution. So when I specify q(b | a), I'm implicitly specifying a marginal distribution on b, and the way that you compute it is: if I sample a from my true distribution of a and then essentially average out this conditional distribution, that would be the marginal distribution of b, and ideally I want my q to be close to the ground truth conditional distribution
4,755
4,786
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4755s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
PXOhi6m09bA
of b given a, and that means if I sample a lot of a and then I map it through my conditional distribution, the outcome of that should map back to the original ground-truth distribution; and similarly I can do that for a: I sample from b, and then from these b samples I would compute my approximate conditional distribution, and after those transformations they should
4,786
4,820
https://www.youtube.com/watch?v=PXOhi6m09bA&t=4786s
L9 Semi-Supervised Learning and Unsupervised Distribution Alignment -- CS294-158-SP20 UC Berkeley
https://i.ytimg.com/vi/P…axresdefault.jpg
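In symbols, the marginal matching condition being set up here is roughly the following (my transcription of the argument): pushing the true marginal of one variable through the learned conditional q should reproduce the true marginal of the other variable,

```latex
q(b) = \mathbb{E}_{a \sim p(a)}\left[ q(b \mid a) \right] = \int p(a)\, q(b \mid a)\, da \approx p(b),
\qquad
q(a) = \mathbb{E}_{b \sim p(b)}\left[ q(a \mid b) \right] \approx p(a).
```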