L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
[01:59:01] ...supervised learning might be picking up on these spurious correlations or artifacts. When you evaluate on a new subset, defined by which answers a model that only looks at the sentence-level evidence can get right, accuracy drops quickly, by about 16 percentage points, from 88 to 72 percent. And this shows up across the board: there are now probably a dozen papers in this space, if not more, showing that these systems, which nominally were supposed to have human-level accuracy, are actually not consistent, not robust, and not systematically generalizing. Here is another one, from Glockner et al., which very carefully constructs probe sets, permuting objects in the sentences or swapping in synonyms and antonyms, and on these probes accuracy again drops quite a lot. A final point here is distributional robustness. This is a paper from DeepMind called "Learning and Evaluating General Linguistic Intelligence".
[01:59:56] What they showed is that you take these neural question-answering models (on SQuAD you take a Wikipedia passage, and you take BERT, which we've already talked about in terms of how much it improved scores), so you take that question-answering model trained on SQuAD and you just run it on different datasets. It's still question answering, except maybe now it's trivia-style factoid questions sourced from Google search results, or maybe it's run in a more conversational framework, with two people asking questions back and forth, and we see that accuracies can just crater (F1 is actually the metric here). It's the same task, and we know that when you ask people questions in one of these formats versus another they do about the same; maybe one task is a little harder than the other, but they don't suddenly lose half their accuracy. This is, again, just a bit outside the training distribution, and there is a lot more of this brittleness showing up. And this is still some of the best stuff we've got: it combines supervised learning and unsupervised learning. But there are hints, as we'll see, that these self-supervised methods and unsupervised pre-training really are helping with robustness. We're not there yet, but we're making progress, and a lot of that is being driven by moving away from a purely supervised framework toward these hybrid pre-training plus fine-tuning methods.
[02:01:09] So, as I mentioned, there are a lot of things that could be going on: current techniques are brittle, they memorize rather than generalize, they exploit spurious correlations, and they also stop learning once they've memorized the training set, because the gradients vanish as the training loss goes to zero, which just feels incorrect. There are a lot of different routes we could go down to make progress. We could build better models and architectures, we could use more data, or we could go down different paths altogether. Obviously, since I'm talking about unsupervised learning in an unsupervised learning class, I'm going to argue that that's a very exciting one. But we could always keep working in the supervised paradigm and just say, well, we'll have better models and more data and push through these problems the same way. And that is what I'd say a lot of early deep learning was really highlighting.
[02:01:57] We were working on supervised learning datasets, and we were seeing that new architectures exploiting the priors and inductive biases of the data domain helped a ton. On images, this is the grand story of convnets: they're a great fit for the domain, they cleverly encode equivariances like translation, shared weights, and all this structure, and that helps enormously with accuracy; then we just use a large supervised dataset and let SGD figure the rest out for us. I think this led to a mindset of heavily emphasizing architecture engineering. There's a very large design space here (somewhat cynically, it allows for a lot of different papers to be written), and you can really combine and contrast all these building blocks; we like playing with these blocks. A lot of very good work has been done that does empirically push the state of the art by exploiting properties of domains. An example is the diagram on the left. Does anyone want to guess the name of this model? It's a bit of a rhetorical question: it's called a "simple" model. It has six different kinds of embeddings, plus GRUs and character models and bi-attention and MLPs, and it starts to get quite complex when all you really have to work with is inductive biases and the standard supervised datasets. It's a heroic effort, but you end up exploiting more and more details and getting more and more complex to make progress, because you've locked in the other constraints like the dataset size and the training paradigm. On the right is another one that almost looks like a pentagram. They're very cool-looking architectures, and it's quite fun to look through all the work that's gone into creating these systems.
[02:03:48] As we said, these are all different ways of adding inductive biases, and they can really help. They're important and very impactful, and please don't take this as criticizing the standard approach of iterating and hill-climbing on supervised learning with better and better architectures. But when you treat a dataset in isolation, compare it with how people learn and experience the world: it's so varied, so diverse, there's so much experience, information, and knowledge you're leveraging before you ever saw this dataset. Machine learning models trained in isolation on a single supervised dataset are in a different situation: that supervised dataset is a peak in a very big space, and it's a small peak. We can add more and more data and make that peak taller and wider, and that might help with robustness and generalization, but at the end of the day it's a little bit futile, I think. The real way to solve these tasks, or at least the way people do it, is not to sit down and memorize a million different examples; people somehow learn a much more general set of task behaviors and transfer knowledge and information, instead of just becoming a master of one very specific, isolated domain. We're amazing because of our generality (well, we're amazing for both, because we can also do incredible things in specific domains), but machine learning has mostly been starting from very targeted supervised datasets and building models that do a bit better on them.
[02:05:08] And then there are papers, even on architecture engineering, showing somewhat critically that some of these fancy new architectures don't improve things as much as you'd think, or with more careful ablations don't show much benefit. One of the famous examples: they took a baseline LSTM and gave it some love (a common story in language modeling) and showed it was roughly matching a lot of recent state-of-the-art models, once you do careful comparisons and careful tuning. So maybe we need to back off and rethink beyond pure supervised learning on task-specific datasets. One way to frame this: the largest supervised dataset in the world that I'm aware of publicly is JFT-300M (actually there's a Facebook one, their Instagram pre-training, that I haven't talked about, but this was true until fairly recently). So that's 300 million images.
[02:06:02] It has 18,000 classes, and if you do a very simple, loose bound on how much information content you get, you have log2(18,000) bits per image and 300 million images, which ends up at roughly 530 megabytes of constraint on the function you can learn. So this is the world's biggest supervised dataset, and in terms of the correct function we're trying to approximate, from this admittedly naive and toyish view we can only pump about 530 megabytes of information into the system from the supervision. But, connecting this back to everything we've been talking about today, there are terabytes and petabytes of actual raw natural language on the internet, so if we figure out how to exploit all that information in some reasonable way, there's a hell of a lot more there that we should hopefully be able to use.
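For concreteness, the back-of-the-envelope bound described above works out as follows. This is only an illustrative sketch: it treats every label as independent and maximally informative, which is exactly the loose upper bound being described.

    import math

    num_images = 300_000_000                 # JFT-300M scale
    num_classes = 18_000                     # label vocabulary
    bits_per_label = math.log2(num_classes)  # ~14.1 bits if labels were uniform and independent

    total_bits = num_images * bits_per_label
    total_megabytes = total_bits / 8 / 1e6
    print(f"{bits_per_label:.1f} bits per label, ~{total_megabytes:.0f} MB of label information in total")
    # -> roughly 530 MB of constraint on the function being learned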
[02:07:01] Granted, we'll be a lot less efficient: gold-labeled supervised data is probably worth far more per bit for specifying and learning a task, but we only have a little bit of it. It's like Yann LeCun's cake analogy: the supervision is the cherry on top of everything else we need to be able to do. I actually tried to take the supervised learning approach for language. I spent most of 2015 building what I hoped would be an ImageNet for text: a very large weakly supervised dataset where we basically did classification over Reddit communities. We built it up to around 50 million training examples over a thousand communities, and we trained RNNs on it, hoping they would learn useful features and representations, kind of skip-thought style (it was roughly concurrent work, except we went the supervised route instead of the unsupervised route). And the sad thing was that the unsupervised model beat us. Skip-thought vectors, trained with just an unsupervised objective, were beating this system we had built with a huge, nominally weakly supervised dataset, where we had told ourselves these were the gold labels, the right things to predict, semantically aligned with classification. That left me quite confused and skeptical about what was going on in this space, or at least about supervised learning, and got me much more excited about generative modeling and unsupervised methods, because we just weren't seeing supervised learning pull through here. I think it's a little too weak a supervision source and a little too specific.
[02:08:20] So, again, I think the big question, in terms of novel research frontiers, is: how do we go from isolated peaks of competency, where you can very quickly fall off the peak if you change the problem just a little and task mastery quickly collapses, to systems that perform much more generally and robustly? Maybe they're not nearly as good on any given specific task, but how do we get them to perform much more broadly across the board? Here is an example of the classic architecture-engineered approach, one of the incredibly well done versions that exploits a lot of side information via inductive biases: it uses WordNet, that great hand-curated resource. Because it can exploit all this side information (these words are synonyms or antonyms, this word is more or less abstract, a child or a parent of another word in the semantic hierarchy), you can see how it brings in information that should help with generalization, and it actually does do better on those kinds of systematic evals. That's one way of widening the peak. Somewhat excitingly, though, if we just slot GPT-1 in as well, it performs just as well in the more robust transfer setting, and there we didn't have to manually curate the relations between words or build WordNet; we let a language model figure it out. I think this is one of the proof points that unsupervised pre-training is figuring out the same relations, connecting the same concepts, and helping with robustness and generalization. And there's new work from Dan Hendrycks this week, which I've put in these slides, showing that BERT-style models are much more robust out of distribution than classic, purely supervised models like LSTMs or CNNs. So I think this is starting to be much better founded empirically than me spouting off one or two numbers from the models I know.
[02:10:07] The high-level takeaway, and this is a bit of a rah-rah message for everyone taking this course, is that I think one of the most promising ways forward, in terms of really solving tasks and building robust systems that actually do the things we want them to, is to move away from standard supervised learning. Instead of manually specifying what to predict through the creation of large supervised datasets, we need to figure out how to learn from, and predict, everything out there. One way to think of it: every time we build a dataset, we set the importance of everything in that dataset to one, and everything else in the world, all the other useful information that might be out there, to zero. When you start a model from scratch, you should really get inside that supervised learner's head and realize it's an almost hopeless task: it knows so little, and we've hidden so much from it by only giving it this one canonical gold-standard dataset. Of course it's going to cheat however it can, because these models are great at optimizing the objectives we give them, and if they don't have the foundations to truly build on, all they can do is exploit clever spurious correlations. So I think all of this comes together, with the work we've been chatting about, into a potential recipe.
[02:11:25] I think this recipe is getting proven out with T5 and the follow-on work: how to combine a bunch of pieces together. We need high-capacity, flexible model classes so they can handle many different tasks. We need algorithms for extracting information and learning the structure across many different domains; this is basically a lot of what we've talked about, and it turned out language modeling just works really well as one of these. It's an incredibly old idea, but that method worked quite well. There are plenty of clever ways to specify proxy tasks, but this simple one has gone quite far. Unfortunately, because these are still dumb models that have nowhere near the robustness or generality of humans, you're also going to need a lot of data covering everything, but at least it can be unsupervised data, so we have it available. And you're unfortunately going to need, at least to grind the state of the art a bit further, a fair amount of compute with which to learn these models. But, again, that may produce a model that's quite small and efficient to run at test time, and I think that's one of the hopeful directions: you train these big models (Google or Facebook or OpenAI burns the GPU-years to get that model), but then you can distill it and prune it and release it, and it can still run on your own laptop or on a single GPU. That means the downstream tasks you may want to investigate, or build models on, become much more efficient, because you've amortized all the compute that went into pre-training and you reuse it during fine-tuning. So it may actually be that, although it took a ton of compute to train BERT and RoBERTa, they have reduced the overall volume of compute needed to achieve a given level of result, and widened the set of useful tasks that can be tackled in the field, because they transfer and are beneficial to everything downstream.
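To make the "train big, then shrink for deployment" idea concrete, here is a minimal knowledge-distillation loss of the kind commonly used to compress a large pre-trained teacher into a small student. This is an illustrative PyTorch sketch, not a recipe from the talk; the teacher, student, and temperature are placeholders.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        """Soft-target loss: make the student match the teacher's softened distribution."""
        soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
        log_probs = F.log_softmax(student_logits / temperature, dim=-1)
        # KL divergence, scaled by T^2 as in the standard distillation recipe
        return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2

    # Sketch of a training step: the big pre-trained teacher is frozen,
    # only the small student is updated.
    # with torch.no_grad():
    #     teacher_logits = teacher(batch)
    # loss = distillation_loss(student(batch), teacher_logits)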
[02:13:09] I think it's very reasonable that some people in the field look at all of this coming together and say, well, I don't find that satisfying, and that's a valid view. So maybe you back up and work toward more grounded learning, and there's a lot of really interesting work in that space now, moving toward reinforcement learning and grounded learning with multimodal agents and the things that connect to what feels more like true learning about the world, instead of just seeing abstract bits of text. I think that's a very valid approach; it's just that, right now, the text-based approach has been driving a good chunk of the empirical progress over the last few years. There's also a whole other set of methods here, multitask learning, which I think has been showing a lot of promise lately. When I made this slide last year I was a bit more pessimistic about it, and there has since been a lot of good work, like MT-DNN and others, making progress on this family of methods. But it still relies on us building datasets: in multitask learning you train on a bunch of different tasks together and hope you get transfer between them naturally, but often they're all supervised tasks. T5 is a good paper that really talks through the nuances of multitask learning versus generative pre-training, and one of the surprising things they share is that when you do it well and exactly emulate the pre-train-then-fine-tune framework (multitask pre-training followed by supervised fine-tuning), you still need the unsupervised objective, like masked language modeling, but you can reach very similar performance in many cases compared to doing the giant pre-training on, say, the full internet. So there's still room left there, and these methods are actually improving.
[02:14:51] The final thing to chat about is the follow-up work we did at OpenAI, GPT-2. A lot of what I've been talking through here was the motivation for that project. We collected more data than for GPT-1, and much more diverse and heterogeneous data, hoping we'd get models that generalize better and see a much broader set of tasks: 40 gigabytes of text, about 10 billion tokens, from 8 million webpages. We scaled up the models, partly because we saw those trend lines, and I think there are a lot of reasonable arguments for why you simply need bigger models to handle complex tasks. And it's just a language model that predicts everything; it's still a left-to-right autoregressive model, so it has some drawbacks compared to things like BERT, but it's just a language model. What we focused on in this case was purely how well this model could do across many different tasks in the zero-shot setting. We never fine-tuned it, because supervised learning is tricky and models learn to exploit spurious correlations and dependencies. So we're only ever saying: you did all your pre-training work, we had you predict a bunch of words, now how well can you handle this new data distribution you've never seen before? Admittedly, we trained on a lot of data, so it has seen a bit of a lot of data distributions, but we're not letting it tune on a specific task with specific labels; we're just saying, run it and see what it can do.
[02:16:08] And we showed that it actually begins to do something, particularly as you scale the model, across a wide range of canonical NLP tasks. It's purely unsupervised; there's no direct human labeling or supervision going on. But you can feed this model a paragraph, then ask it a question, and it can give the right answer sometimes. Often it's just matching old baselines, and there's still a huge gap to human performance, but I feel this is a much better measure of what the underlying capability of these systems might be, because we're not doing supervised training here. Unsurprisingly, our models are still worse than people, and that shows up here, but there's also a promising trend line. In some cases there are very domain-specific algorithms, for example for unsupervised translation (admittedly it's been a year, so that bar should be higher now; there's been great follow-up work pushing unsupervised machine translation farther), but this is just a language model with no real customization, and it begins to do translation between English and French. You can tack a "TL;DR" onto the end of a document and get something like summarization. It's pretty bad on the official metrics, barely matching a baseline that picks three random sentences from the article, but quantitatively and qualitatively, if you ask people which output they prefer, it looks a lot better than those numbers show.
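A minimal sketch of what "zero-shot" means in practice, using the publicly released small GPT-2 checkpoint via the Hugging Face transformers library. The prompts below are illustrative only; the exact prompt formats and decoding settings used in the paper differ, and API details vary by library version.

    from transformers import pipeline

    # Small public GPT-2 checkpoint; no fine-tuning, no labels, just next-word prediction.
    generator = pipeline("text-generation", model="gpt2")

    article = "Some long article text goes here."

    # Summarization: append "TL;DR:" and let the model continue.
    summary = generator(article + "\nTL;DR:", max_new_tokens=60, do_sample=False)

    # Translation: prime with a pattern the model has seen naturally in its training data.
    prompt = "The house is wonderful. = La maison est magnifique.\nI like cats. ="
    translation = generator(prompt, max_new_tokens=20, do_sample=False)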
[02:17:26] That's partly because these are quite coarse evaluation metrics. The final task here is question answering, which probes domain knowledge, world knowledge, and potentially a lot of factoid information, and on this one we see a really strong scaling curve with model capacity. So how is this working? How does this unsupervised system, which is just a language model, begin to do translation, question answering, and reading comprehension? Well, if we go through and inspect the dataset, it turns out there are natural occurrences of these tasks, and we're training the model to predict the next word on all of them. There's an English sentence, and then it just happens that in the middle of the article someone wrote what amounts to a training example of English-to-French translation. It's a much more natural way of learning, and when you have very large datasets you actually begin to have a non-trivial amount of this data. For translation and summarization, if we just grep through the dataset: how many times does "TL;DR" appear? There are a lot of these "training examples," in quotes, in there. And how many times does someone ask a who, what, where, when, why, or how question? There are six million of those. So systems that don't make task-specific assumptions, that just try to predict everything, really do begin to make progress. And one of the areas where we saw this the most is open-domain question answering.
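The kind of corpus inspection being described is easy to sketch: count how often task-like patterns such as "TL;DR" or wh-questions occur naturally in the raw text. The shard directory and regular expressions here are placeholders, not the actual WebText pipeline.

    import re
    from pathlib import Path

    tl_dr = re.compile(r"\bTL;?DR\b", re.IGNORECASE)
    wh_question = re.compile(r"\b(who|what|where|when|why|how)\b[^.?!]*\?", re.IGNORECASE)

    tl_dr_count = 0
    question_count = 0
    for path in Path("webtext_shards/").glob("*.txt"):   # hypothetical shard directory
        text = path.read_text(errors="ignore")
        tl_dr_count += len(tl_dr.findall(text))
        question_count += len(wh_question.findall(text))

    print(tl_dr_count, "TL;DR occurrences,", question_count, "wh-questions")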
[02:18:46] In the open-domain setting you're just asking things like what the capital of France is, or in what year Star Wars was released, and I think this gives a very clear picture of why supervised learning with the standard approach is never really going to solve this kind of task. On the x-axis we have the number of training examples seen, on a log scale. If you start with a randomly initialized model, there's no way it can do open-domain question answering: it cannot have the information for what the capital of France is until it has seen that training example, and there's very little generalization there. You just need an enormous amount of data if you approach this with naive supervised learning. Whereas when we have bigger models with more capacity, in the limit they very quickly begin to do non-trivially well on these datasets, and then as they fine-tune they learn how to better extract the information that is somehow already contained in their weights, to various degrees. The red baseline here is completely randomly initialized, and it stays at basically random-guessing performance the whole way through, even though that dataset is 20,000 labeled examples. But as we try bigger and bigger language models, they really begin to make progress, and T5 has continued pushing this quite a lot farther, to the point where they are sometimes competitive using only a neural model that never looks at documents containing the actual factoids; purely from its parameters it is able to answer quite competitively on some of these tasks.
[02:20:04] We're pretty much into the conversational period at this point, but here are some takeaways from really pushing on language models for a few years. Performance is usually not limited by something a single paper fixes. This is a very long history (I think we covered something like 25 papers over the trajectory of research here), and usually it's someone chipping away on one specific axis. Diminishing returns basically mean there's always some other bottleneck: if you scale the compute but not the data, you'll hit a wall there; if you scale the parameter count, you'll just need more compute; if you scale the model class but don't grow the dataset, it'll just overfit; or you can try to scale via human intuition and fancier models, but maybe those are just more difficult to train. So, particularly if you have a bit more of an engineering mindset, the pragmatic approach of pushing on all of these axes together may allow a larger effect size to show up than pushing on any one in isolation. This is an unfortunate tension in research and science, where you often want to microscopically measure effect sizes and run controlled ablations and experiments in isolation, but if you change a few things together you may actually see an outsized effect.
[02:21:22] That's one of the things we typically do: get more data, get a bigger model, put it all together, and see whether that pushes toward qualitatively different behavior. (I can transition into the question period at any point now; there's just a little more advice at the end.) You don't have to work on large-scale models. As things like ELECTRA show, you can work on smaller models and see the same effects showing up. They won't have the same accuracy curves, but we know from scaling laws and all those trend lines that if you see an effect that's robust at small scale, then, fingers crossed, it will probably also hold at larger scale, so you can do much more rapid development. This works quite well: try ten times as many models that are each ten times smaller, and that way you can run ten times as many experiments in parallel. This is still a large research field, so there are a lot of things left to try, and basically all the behaviors in a paper like GPT-2, which I feel gets pointed to as the canonical big-compute, big-data result, still show up on models you can train on a single desktop. Admittedly it takes about a week to start seeing hints of them, but GPT-2 small can be trained quite well in about a week on a four-GPU setup. Then, once you have proofs of concept for your algorithm or idea, you can scale up.
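One cheap way to act on that advice is to sweep a few small model sizes, fit a power-law trend on a log-log scale, and only then decide whether a big run is worth it. This is a minimal sketch with made-up numbers, not results from the talk.

    import numpy as np

    # Hypothetical results from small runs: (parameter count, validation loss)
    params = np.array([1e6, 3e6, 1e7, 3e7])
    val_loss = np.array([5.1, 4.6, 4.2, 3.9])

    # Fit loss ~ c * N^(-alpha) via a linear fit in log-log space.
    slope, intercept = np.polyfit(np.log(params), np.log(val_loss), 1)
    alpha, c = -slope, np.exp(intercept)

    # Extrapolate (cautiously!) to a much larger model before committing the compute.
    predicted = c * (3e8) ** (-alpha)
    print(f"alpha={alpha:.3f}, predicted loss at 300M params ~ {predicted:.2f}")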
[02:22:46] Scale up, that is, if you have the compute resources or can get enough time allocated on a cluster. It's the same strategy we used back in the day with the sentiment neuron work: the initial proofs of concept were 512-dimensional LSTMs that took a day or two on standard hardware, and then for the final version we kicked off a big run with a model that took 16 times the compute. And how do you not go insane waiting a month for a model to train? Well, we like to do this thing at OpenAI where you kick off your big model right before you go on vacation: start it before winter break and just let it train over the break, so that, fortunately, you're away from the machine the whole time. Don't stare at that graph every day; you won't make nearly as much progress if you're just staring at that number, and models often surprise you when you give them more time to learn. So when you're really trying to push a result at the end, it's a good idea to try that if it's an available option. One of the other surprising things about this field has been how far we've gotten when the developers of one paper or model architecture just push on log probability, the type 1 evals, and then someone else comes along in another paper and shows that the same thing is actually great on the type 2 evals. I think that's really reassuring: you can work on one or the other in isolation, and you often see things that robustly scale, or contribute, on both sides.
[02:24:04] There are some gotchas, as always, with scaling: at some point things break, you can't extrapolate too far, and things just change, so you have to watch out for that. For a model like GPT-2, we were originally trying to train these deeper, bigger models and they just weren't working better; we had to fix the initialization (Rewon, one of my collaborators, came up with that), and it helped the scaling continue. So when you see scaling not happening the way you'd expect, or the way the trend lines suggest, it's often a sign that something is wrong and you need to tweak it, fine-tune it, or actually do the clever work (I don't do much of that myself) to fix it up and keep making progress. The other thing is just writing efficient, smart code. Luckily, hardware keeps improving at the same price point, and things like FP16 half-precision compute help a lot if you switch over to them. An example is GPT-1: the original version took 25 days in FP32 on one-generation-older hardware. A lot of people did a great job optimizing this, Scott Gray in particular, an amazing GPU engineer and researcher at OpenAI that we work with (the blocksparse library is basically his work), and he was able to bring this down by almost an order of magnitude on just the next generation's hardware, through a lot of great improvements across the stack.
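For the FP16 point, here is the standard mixed-precision training pattern in PyTorch (torch.cuda.amp), shown on a toy model rather than anything GPT-specific. The model, optimizer, and data are placeholders just to make the pattern concrete.

    import torch
    from torch import nn

    model = nn.Linear(512, 10).cuda()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scaler = torch.cuda.amp.GradScaler()

    for step in range(100):
        x = torch.randn(32, 512, device="cuda")
        y = torch.randint(0, 10, (32,), device="cuda")
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():            # forward pass runs in FP16 where safe
            loss = nn.functional.cross_entropy(model(x), y)
        scaler.scale(loss).backward()              # loss scaling avoids FP16 gradient underflow
        scaler.step(optimizer)
        scaler.update()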
[02:25:32] So often, if you write efficient code and use the right tricks for accelerating your models, you can wring a lot out of the same level of hardware. We have a library called blocksparse that can help with that and provides a lot of these ops, and honestly other libraries are also doing a great job of merging these in and providing their own ops, more integrated into the standard frameworks, so I think that's exciting for the field overall. In terms of sweet spots for compute: 2080 Ti desktops can still do a lot in this space, they just cost a fair amount of money, and your standard V100 box on a cloud provider is a medium-scale compute platform. Papers like ELECTRA can do a lot with a single V100, and a 2080 Ti is basically a cheap V100 for four or five times less. That's about it, honestly. I think we have about 15 minutes left for questions, and I have a few more random slides.
[02:26:35] [Instructor] Everyone, this has been really great. Alec, thank you so much. Let's see if people have questions.

[Q] Hey Alec, quick question: I was wondering if you could give your views on zero-shot language modeling. Is it something that could reach production-quality performance over time, or do you think it will always be worse than collecting supervised data and fine-tuning some big pre-trained model? I'm just trying to understand the space between GPT-like and BERT-like models.

[A] Right now it is absolutely garbage from a production perspective. Well, okay, there are hints of life: for reading comprehension, GPT-2 is matching some of the original neural supervised baselines, so I'd say there are hints of life there. But we're still talking about needing a lot more research, and if you look at the scaling laws for what GPT-2 looked like and draw them out, there are still quite a few orders of magnitude left to go. So from a pragmatic or practical perspective it's not really there right now, and the scary answer may be that, to actually do these tasks correctly in something like the zero-shot setting, you just need much more compute. I don't want to overweight that view, but it may be the case. I see the zero-shot setting as a bit like training with weights on, like resistance training, and I think it's a fascinating research area to push on.
[02:28:25] It has some exciting qualities: it's maybe a much more difficult, and hopefully much truer, measure of what test performance really is. But it still has a long way to go, so it's a fascinating research direction with a lot of pushing left to be done.

[Q] Thank you.

[A] And from a pragmatic perspective, like you said, you really should fine-tune on some supervised data, and as I mentioned, BERT-style models are still showing quite good out-of-distribution robustness there. I don't think there has been any good work comparing pure zero-shot learning of a task against supervised fine-tuning of a pre-trained model, but I think we're talking about something that's a few years out at least.

[Q] Thank you. I saw a question here earlier: you motivated language models by comparing the probabilities of pairs of strings that encode knowledge, like "the cat sat" versus "the cat sats". Has this intuition of comparing sentences been used for training generative models of text, or is something like that ubiquitous?
[02:29:21] [A] So maybe this is about comparative or contrastive methods for training generative models, where you compare sentences and know that one should have higher probability than the other. From the representation-learning perspective, which isn't quite the generative modeling side, there's CPC, and that whole family of contrastive methods is dominating unsupervised learning for image representations. So it's somewhat of a contrast, where in NLP we haven't really seen it take off yet. I think it's a very exciting research direction: the original CPC paper actually had some promising results on natural language, but, like the original CPC results in general, they were exciting yet nowhere near state of the art, and a lot of the refinements over the last year or two on the image side have pushed that quite far. I think you may have had a lecture just on that, or are about to, so it would be very cool to see someone do something similar for natural language.
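For reference, the contrastive objective being referred to (the InfoNCE loss at the heart of CPC and its descendants) is only a few lines. This is a generic sketch, not the exact formulation of any particular paper.

    import torch
    import torch.nn.functional as F

    def info_nce(queries, keys, temperature=0.1):
        """queries, keys: (batch, dim); positives are matched by index, all other pairs are negatives."""
        q = F.normalize(queries, dim=-1)
        k = F.normalize(keys, dim=-1)
        logits = q @ k.t() / temperature                    # (batch, batch) similarity matrix
        labels = torch.arange(q.size(0), device=q.device)   # diagonal entries are the positives
        return F.cross_entropy(logits, labels)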
[02:30:18] But if the question was about exploiting more structured knowledge about differences between sentences and encoding that into the generative model, there is some pretty interesting work on this, particularly from the more linguistics-heavy folks in the field, on hybrid systems that combine neural models with something like grammar constraints. I'd say it's focused mostly on the settings where you might expect encoding that inductive bias to help, which is smaller datasets. But personally, at least from a pragmatic perspective, I find a lot of current language modeling benchmarks quite artificial, because they work with such small amounts of data, which just doesn't make sense given what's out there: it's so easy to write a scraper or download a shard of Common Crawl, and that's already more data than you're basically ever going to be able to process. So, at least pragmatically, we should really be figuring out how to use the large volumes of data we have. I think it's a valid alternative approach to push on data efficiency in isolation, asking how far we can get with a limited amount of data, but it's probably taken a bit too far toward the extreme when you have only a million words of training data, and things like that.
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
entire internet right another way to learn language is the way I think most people learn language which is absolutely you kind of I don't know how many words or how large the data set would be that somebody encounters by the time maybe they're six years old and they can speak pretty well maybe at that point they have any notion of kind of how much data is required in that context compared to how much data is required here oh it's it's awful at least for you know in for neural models I think it's um yeah for like a six-year-old child I
02:31:54
02:32:28
9114
9148
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9114s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
think it's maybe you know I just bashed on 1 million words being unrealistic but I think it's about one to ten million so you know compared to GPT to being ten billion tokens there's orders there's three orders of magnitude at least of headroom there potentially and i think that again understandably motivates why a lot of people do work on that city but my guess would be that to really make progress in that setting a lot of that is because of transfer between modalities and you know actually you know interacting with very
02:32:28
02:32:56
9148
9176
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9148s
https://i.ytimg.com/vi/B…axresdefault.jpg
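A back-of-the-envelope version of that comparison (both figures are rough estimates quoted in the answer above, not precise measurements):

\[
\frac{\sim 10^{10}\ \text{tokens (GPT-2 training data)}}{\sim 10^{6}\text{--}10^{7}\ \text{words (a child by age six)}} \approx 10^{3}\text{--}10^{4},
\]

i.e. at least three orders of magnitude of potential headroom in data efficiency.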
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
high-quality sources of supervision like other people, and being a grounded agent that interacts with video and audio. I think that research is very interesting longer-term, and we're probably going to saturate what we can do with these ungrounded giant systems in the next few years, or maybe it's even already starting in the last year, so I think that's a very exciting next round of work, and clearly the numbers show there's a huge amount of room to go. Got it, thank you, makes sense. Do the approaches
02:32:56
02:33:31
9176
9211
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9176s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
that work well for language apply to other modalities, like video or other kinds of data? Okay, yeah, so genetic data is actually a great example there. There's really interesting work by, I think, Joshua Meier and collaborators, between an NYU team and FAIR, and I think Rob Fergus is now working a lot on this. They took BERT and applied it to protein sequences, or sorry, I think amino acid sequences, and, I don't have much of a bio background, but they were showing that
02:33:31
02:34:06
9211
9246
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9211s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
the same methods were learning a lot of the structure in those different domains. So, like the sentiment unit analysis, the example I gave for pure language, there was also another paper from, I believe, the Church lab at Harvard, where they took literally my code and ran it over amino acid sequences, and they were showing that instead of a sentiment unit there was something like a beta-sheet unit, so it was finding secondary or tertiary structure of proteins. The
02:34:06
02:34:37
9246
9277
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9246s
https://i.ytimg.com/vi/B…axresdefault.jpg
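As a rough illustration of the kind of unit-level probing described above, here is a minimal Python sketch: run a trained residue-level language model over sequences, record one hidden unit's activation per position, and check how well it correlates with a per-residue structural label (e.g. beta-sheet membership). The model loader, dataset, and unit index are hypothetical placeholders, not the actual setup from those papers.

import numpy as np
import torch

# Hypothetical pretrained residue-level LM: given a tensor of token ids, it
# returns per-position hidden states of shape (seq_len, hidden_dim).
# model = load_pretrained_protein_lm()   # placeholder, not a real API

def probe_unit(model, sequences, labels, unit_idx):
    """Pearson correlation between one hidden unit's activation and a
    per-residue binary label (1 if the residue is in a beta sheet, else 0)."""
    acts, targets = [], []
    with torch.no_grad():
        for tokens, label in zip(sequences, labels):
            hidden = model(tokens)                      # (seq_len, hidden_dim)
            acts.append(hidden[:, unit_idx].cpu().numpy())
            targets.append(np.asarray(label, dtype=np.float32))
    acts = np.concatenate(acts)
    targets = np.concatenate(targets)
    return np.corrcoef(acts, targets)[0, 1]

# Sweeping unit_idx over all hidden units and ranking by |correlation| is one
# simple way to surface a candidate "beta-sheet unit", analogous to the
# sentiment-unit analysis mentioned for text.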
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
models were developing units that seemed to understand the domain: even though these are very abstract models that just have a bunch of parameters factorizing a probability distribution, they're somehow learning the structure of the domain, or at least hints of it. So I think that's very exciting, and it's another line of work. Given how exciting this stuff has been for NLP and how much of an impact it's made over the last few years, whether it could work in other domains would be
02:34:37
02:35:04
9277
9304
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9277s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
quite interesting. There are definitely differences. For video, I think it just needs so much compute that it's still maybe quite a few years off, simply because of the volume of data and the amount of compute that might be necessary, though maybe I'm just being cynical there. Whereas for images there's a weird contrast, which is, like I mentioned, the contrastive methods are doing quite well, and if you just run a generative model, well, actually, that's not quite right: there's one paper from DeepMind
02:35:04
02:35:31
9304
9331
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9304s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
called BigBiGAN, where they took, admittedly, a pretty different kind of generative model and showed it was starting to learn quite good representations of images, at least by the standards of unsupervised learning, still being beaten by the latest MoCo or SimCLR, but quite promising, and showing a kind of foothold for the generative-model approach in other domains. And maybe one more piece of context to shine on there: I think there's one nicety to
02:35:31
02:36:01
9331
9361
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9331s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
language, because it's produced by people, it's kind of naturally designed to be very clean and very high-level, and it removes a lot of the noise. So when we try to train the same generative models or approaches in domains like images or video, it may just be that when you're dealing with raw natural signals, audio or otherwise, there's so much noise that, particularly for a likelihood-based generative model, it ends up spending so much effort and capacity trying to predict all that
02:36:01
02:36:30
9361
9390
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9361s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
noise, and the signal-to-noise ratio is just a lot worse, which makes it a much more difficult task right now. I think it's a very interesting research question. So, Alec, we're about out of time here, do you have any closing thoughts? Oh yeah, let's wrap it up, we're mostly there. I guess one thing again: one of the things I really enjoyed about having the opportunity to give this talk was going through and showing the full history here and
02:36:30
02:37:22
9390
9442
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9390s
https://i.ytimg.com/vi/B…axresdefault.jpg
BnpB3GrpsfM
L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20
I think it's a great example of how so many pieces built on top of each other, and how many different authors and institutions really contributed to this. And even within OpenAI there have been a lot of collaborators who have pushed on this stuff over the last few years. You really see it evolve from so many different pieces of research, with all the different things being brought to bear: new models, new datasets,
02:37:22
02:37:50
9442
9470
https://www.youtube.com/watch?v=BnpB3GrpsfM&t=9442s
https://i.ytimg.com/vi/B…axresdefault.jpg
1sJuWg5dULg
L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20
Hello, welcome to lecture 8 of Deep Unsupervised Learning. Today we are going to talk about the strengths and weaknesses of the various generative models and representation learning methods that we've seen so far. The brain has 10^14 synapses and we only live for about 10^9 seconds, so we have a lot more parameters than data we ingest. This motivates doing a lot of unsupervised learning, because in order to provide sufficient fodder for the number of parameters we have in our brain, we should be able
00:00:00
00:00:41
0
41
https://www.youtube.com/watch?v=1sJuWg5dULg&t=0s
https://i.ytimg.com/vi/1…axresdefault.jpg
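Written out, the arithmetic behind that remark is simply:

\[
\frac{10^{14}\ \text{synapses}}{10^{9}\ \text{seconds of life}} = 10^{5}\ \text{parameters per second},
\]

so sparse labels alone cannot plausibly constrain that many parameters, which is the argument for learning by predicting rich unlabeled sensory input instead.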
1sJuWg5dULg
L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20
to predict a lot more bits from the data we ingest, which is five orders of magnitude smaller. This was a statement made by Geoff Hinton in 2014 in a Reddit AMA. First, a summary of the course so far: we've looked at autoregressive models (PixelRNN, PixelCNN and its variants, PixelSNAIL), we looked at flow models, the RealNVP family of models, and also the connection between autoregressive flows and inverse autoregressive flows. Next we covered latent variable models, models with approximate
00:00:41
00:01:23
41
83
https://www.youtube.com/watch?v=1sJuWg5dULg&t=41s
https://i.ytimg.com/vi/1…axresdefault.jpg
1sJuWg5dULg
L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20
density estimation using the variational lower bound and various instantiations of that, like the VAE, the importance weighted autoencoder, VQ-VAE, PixelVAE, and so forth. We then jumped into a different class of generative models that don't work with the likelihood principle: implicit density models such as GANs, energy based models, and the moment matching principle. And finally we questioned whether we even need to learn generative models if all we care about is extracting useful features from unlabeled data, and that got us into
00:01:23
00:01:59
83
119
https://www.youtube.com/watch?v=1sJuWg5dULg&t=83s
https://i.ytimg.com/vi/1…axresdefault.jpg
1sJuWg5dULg
L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20
this topic of how self-supervised learning provides representations, and we saw that with the right kind of simple contrastive principles and a lot of data and compute, we can learn really useful representations of unlabeled images that are competitive with supervised representations. So let's look at autoregressive models. In 2015 the main paper was MADE, which introduced the idea of a masked autoencoder for density estimation, and it was able to produce these MNIST digits, which were reasonable looking but very jittery, and
00:01:59
00:02:42
119
162
https://www.youtube.com/watch?v=1sJuWg5dULg&t=119s
https://i.ytimg.com/vi/1…axresdefault.jpg
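A minimal Python sketch of the masking idea behind MADE, assuming a single hidden layer and a fixed left-to-right ordering over D binary inputs; this is a simplified illustration of the connectivity masks, not the full model from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMADE(nn.Module):
    """Single-hidden-layer MADE sketch: binary masks on the weights ensure
    output d depends only on inputs with index < d (natural ordering)."""
    def __init__(self, D, H):
        super().__init__()
        self.fc1 = nn.Linear(D, H)
        self.fc2 = nn.Linear(H, D)
        # Each hidden unit gets a connectivity degree m in {1, ..., D-1}.
        m = torch.randint(1, D, (H,))
        # Hidden unit h may see input d only if (d + 1) <= m[h].
        mask1 = (torch.arange(D)[None, :] + 1 <= m[:, None]).float()   # (H, D)
        # Output d may see hidden unit h only if m[h] < (d + 1).
        mask2 = (m[None, :] < torch.arange(D)[:, None] + 1).float()    # (D, H)
        self.register_buffer("mask1", mask1)
        self.register_buffer("mask2", mask2)

    def forward(self, x):
        h = torch.relu(F.linear(x, self.fc1.weight * self.mask1, self.fc1.bias))
        logits = F.linear(h, self.fc2.weight * self.mask2, self.fc2.bias)
        return logits   # logits[:, d] parameterizes p(x_d | x_<d)

# A single forward pass therefore yields all D autoregressive conditionals at once,
# which is what lets MADE be trained like an ordinary autoencoder.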
1sJuWg5dULg
L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20
this idea was extended to stronger, more expressive architectures well-suited to image modeling, like the masked convolutions introduced in the PixelRNN and PixelCNN family of models, and you started seeing generative models working for higher-dimensional and much more diverse datasets such as ImageNet. These are samples from ImageNet 64 by 64: you can see that the structure across roughly 4,000 pixels is pretty coherent, but the color is not that good, and therefore you're not actually able to
00:02:42
00:03:18
162
198
https://www.youtube.com/watch?v=1sJuWg5dULg&t=162s
https://i.ytimg.com/vi/1…axresdefault.jpg
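A minimal sketch of the masked convolution that makes a CNN autoregressive over pixels in raster-scan order (mask type "A" for the first layer excludes the current pixel, type "B" for later layers includes it). This is a simplified single-channel version for illustration, not the full PixelCNN.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    """2-D convolution whose kernel is masked so each output position only
    sees pixels above it and to its left."""
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        assert mask_type in ("A", "B")
        kH, kW = self.kernel_size
        mask = torch.ones(kH, kW)
        mask[kH // 2, kW // 2 + (mask_type == "B"):] = 0   # zero at/right of center
        mask[kH // 2 + 1:, :] = 0                           # zero all rows below
        self.register_buffer("mask", mask[None, None])      # broadcast over channels

    def forward(self, x):
        return F.conv2d(x, self.weight * self.mask, self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

# Example: the first layer of a toy single-channel PixelCNN.
layer = MaskedConv2d("A", in_channels=1, out_channels=16, kernel_size=7, padding=3)
out = layer(torch.randn(2, 1, 28, 28))   # (2, 16, 28, 28)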
1sJuWg5dULg
L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20
identify any visible class from ImageNet, but this was a big jump from the quality you saw in MADE. This idea of masked convolutions has also been applied to one-dimensional data like audio, and in order to model long-range dependencies in audio samples, the idea of using dilated convolutions was introduced. This was also applied in a text-to-speech system, where you convert linguistic and text features to raw audio, which can be used in a digital assistant like the Google Assistant, and this was the WaveNet
00:03:18
00:04:01
198
241
https://www.youtube.com/watch?v=1sJuWg5dULg&t=198s
https://i.ytimg.com/vi/1…axresdefault.jpg
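A small sketch of how stacked dilated causal convolutions grow the receptive field exponentially, as in WaveNet; this shows only the dilation/causality mechanism, not the gated residual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalDilatedConv1d(nn.Module):
    """1-D convolution that is causal (no peeking at future samples) and dilated."""
    def __init__(self, channels, kernel_size=2, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation              # left-pad only
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                                    # x: (batch, channels, time)
        x = F.pad(x, (self.pad, 0))                          # pad on the left
        return self.conv(x)

# Stacking dilations 1, 2, 4, ..., 512 gives a receptive field of about 1024
# samples with only 10 layers, which is how WaveNet covers long-range audio context.
layers = nn.Sequential(*[CausalDilatedConv1d(32, dilation=2 ** i) for i in range(10)])
audio = torch.randn(1, 32, 16000)
out = layers(audio)                                          # same length as the input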
1sJuWg5dULg
L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20
architecture that was commercially deployed after about a year. The same idea of using masked convolutions with autoregressive pixel-level modeling has also been applied to video prediction, where you look at the past frames, encode them with a convolutional LSTM, and then take the embedded representation as conditioning information for a PixelCNN decoder that generates the next frame pixel by pixel, and it's able to produce coherent video, for example of a robot arm moving around. So over time
00:04:01
00:04:41
241
281
https://www.youtube.com/watch?v=1sJuWg5dULg&t=241s
https://i.ytimg.com/vi/1…axresdefault.jpg
1sJuWg5dULg
L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20
the autoregressive modeling community has pushed further and further in terms of the level of engineering and architectural innovation. On the left you can see the subscale pixel networks, which have very coherent samples because of the clever conditioning scheme they use; on the right you see hierarchical autoregressive image models with auxiliary decoders, where the idea of latent-space autoregressive models was introduced by quantizing the representations of encoders and using a PixelCNN to model
00:04:41
00:05:14
281
314
https://www.youtube.com/watch?v=1sJuWg5dULg&t=281s
https://i.ytimg.com/vi/1…axresdefault.jpg
1sJuWg5dULg
L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20
the latent space, which is also similar to the VQ-VAE idea that you've seen in the VAE lecture. Apart from images, audio, and video, autoregressive models have had immense success in language. These are samples from GPT-2, where it actually produces a coherent story about unicorns, including how the unicorns speak their own language, and also talks about a scientist who is able to observe all this phenomenon, and this shows that language modeling at the level of a paragraph or even multiple
00:05:14
00:05:55
314
355
https://www.youtube.com/watch?v=1sJuWg5dULg&t=314s
https://i.ytimg.com/vi/1…axresdefault.jpg
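A minimal sketch of the quantization step described above, in the spirit of VQ-VAE: each encoder vector is replaced by its nearest codebook entry, and the resulting grid of discrete codes is what a latent-space PixelCNN would then model. Straight-through gradients and the commitment loss are omitted here; this is only the nearest-neighbor lookup.

import torch

def quantize(z_e, codebook):
    """z_e: (batch, H, W, D) encoder outputs; codebook: (K, D) learned embeddings.
    Returns discrete code indices and the quantized vectors."""
    flat = z_e.reshape(-1, z_e.shape[-1])                    # (B*H*W, D)
    # Squared distance from every encoder vector to every codebook entry.
    d = (flat.pow(2).sum(1, keepdim=True)
         - 2 * flat @ codebook.t()
         + codebook.pow(2).sum(1))                           # (B*H*W, K)
    codes = d.argmin(dim=1)                                  # nearest entry per vector
    z_q = codebook[codes].reshape(z_e.shape)                 # quantized latents
    return codes.reshape(z_e.shape[:-1]), z_q

# The (B, H, W) grid of integer codes is a small "image" of discrete symbols,
# which an autoregressive model such as a PixelCNN can model in latent space.
z_e = torch.randn(4, 8, 8, 64)
codebook = torch.randn(512, 64)
codes, z_q = quantize(z_e, codebook)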
1sJuWg5dULg
L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20
paragraphs is possible by just training large models that use autoregressive structure. This slide shows the evolution of language models over time: first you see Shannon's trigram models, which are reasonably good but not super coherent across a full sentence; then Ilya Sutskever's model using an RNN is able to produce a couple of sentences, though not completely making sense; and then over time, using bigger LSTMs and bigger transformers, you end up with the quality that GPT-2 exhibits right now. All these huge
00:05:55
00:06:38
355
398
https://www.youtube.com/watch?v=1sJuWg5dULg&t=355s
https://i.ytimg.com/vi/1…axresdefault.jpg
1sJuWg5dULg
L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20
advances have been possible for multiple reasons, and let's go through them quickly. The first is just being able to train with larger batch sizes because more compute is available; training with larger batch sizes stabilizes the training of these models and optimizes these losses much better. Then there's making the models wider, making the models deeper, and figuring out clever ways to do conditioning: whether you're building a class-conditional, audio-conditional, or text-conditional model, figuring out
00:06:38
00:07:10
398
430
https://www.youtube.com/watch?v=1sJuWg5dULg&t=398s
https://i.ytimg.com/vi/1…axresdefault.jpg
1sJuWg5dULg
L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20
ways to get the conditioning information in cleverly is very useful. Pre-processing also matters: in WaveNet, mu-law pre-processing is used to quantize continuous audio into discrete values; for pixels you're actually using categorical distributions for modeling rather than Gaussians; and in language you use byte pair encoding, which is pre-trained on a huge corpus, so you're modeling neither at the character level nor at the word level but at the subword level, and
00:07:10
00:07:47
430
467
https://www.youtube.com/watch?v=1sJuWg5dULg&t=430s
https://i.ytimg.com/vi/1…axresdefault.jpg
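A small sketch of mu-law companding as used to turn continuous audio in [-1, 1] into 256 discrete classes, i.e. the standard mu = 255 transform; the inverse is included for decoding. This is an illustration of the pre-processing step, not WaveNet's full input pipeline.

import numpy as np

def mu_law_encode(x, mu=255):
    """Map audio in [-1, 1] to integer classes {0, ..., mu}."""
    x = np.clip(x, -1.0, 1.0)
    compressed = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return ((compressed + 1) / 2 * mu + 0.5).astype(np.int64)   # quantize to mu+1 bins

def mu_law_decode(codes, mu=255):
    """Invert the encoding back to approximate audio in [-1, 1]."""
    y = 2 * (codes.astype(np.float64) / mu) - 1
    return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu

audio = np.sin(np.linspace(0, 100, 16000))    # toy waveform
codes = mu_law_encode(audio)                  # 256-way categorical targets for a WaveNet-style model
recon = mu_law_decode(codes)                  # approximate reconstruction for listening/decoding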
1sJuWg5dULg
L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20
that's much more useful for generalization and also for building more efficient models. Then there's compute power: over the last two to three years we've had access to a lot more compute, like TPU pods and big GPU rigs with lots of GPUs connected by a really fast interconnect, so data-parallel training works much better, and training runs of several weeks usually produce much better results. And also making fewer assumptions about the whole problem: like
00:07:47
00:08:30
467
510
https://www.youtube.com/watch?v=1sJuWg5dULg&t=467s
https://i.ytimg.com/vi/1…axresdefault.jpg
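A toy sketch of the byte pair encoding idea: repeatedly merge the most frequent adjacent symbol pair in the corpus so the vocabulary ends up containing subword units. Real implementations (such as the byte-level BPE used for GPT-2) operate on bytes and are far more optimized; this is only the core merge loop.

from collections import Counter

def learn_bpe(words, num_merges):
    """words: list of strings; returns the list of learned merges in order."""
    vocab = Counter(tuple(w) for w in words)   # each word as a tuple of symbols (chars)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)       # most frequent adjacent pair
        merges.append(best)
        merged_vocab = Counter()
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])   # merge the pair
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            merged_vocab[tuple(out)] += freq
        vocab = merged_vocab
    return merges

print(learn_bpe(["low", "lower", "lowest", "newest", "widest"], num_merges=5))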
1sJuWg5dULg
L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20
before the idea of predicting categorical distributions for every pixel, why would you want to assume that pixels definitely have to be modeled with Gaussians instead of categorical distributions? It really doesn't make much sense a priori, but practically it's better for a neural network to work with cross-entropy losses. There have also been architectural advances that made all of this work much better: masked convolutions were applied in the original PixelCNN, but as transformers and dilated convolutions came into
00:08:30
00:09:06
510
546
https://www.youtube.com/watch?v=1sJuWg5dULg&t=510s
https://i.ytimg.com/vi/1…axresdefault.jpg
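A sketch of the "pixels as 256-way categoricals" choice: the model emits 256 logits per pixel and is trained with the ordinary cross-entropy loss that supervised classifiers use. The shapes below are illustrative, and the random logits stand in for what a PixelCNN-style model would produce.

import torch
import torch.nn.functional as F

batch, height, width = 8, 32, 32
# A model (e.g. a PixelCNN) would produce 256 logits for every pixel position.
logits = torch.randn(batch, 256, height, width)
# Targets are the raw 8-bit pixel intensities, treated as class labels 0..255.
targets = torch.randint(0, 256, (batch, height, width))

# Cross-entropy over the 256 intensity classes, averaged over all pixels (in nats).
loss = F.cross_entropy(logits, targets)
# Dividing by ln(2) converts to bits per dimension, the usual likelihood metric.
bits_per_dim = loss / torch.log(torch.tensor(2.0))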
1sJuWg5dULg
L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20
existence, the samples just got much better, with more coherent structure across long-range dependencies. And making the whole modeling problem look more like supervised learning helps a lot: relying heavily on the well-behaved cross-entropy loss, and on optimizers that have been much better tuned for this loss, ensures that generative modeling can also benefit from all these engineering advancements. So now, what's the future for autoregressive models? We're only scratching the surface of what's
00:09:06
00:09:43
546
583
https://www.youtube.com/watch?v=1sJuWg5dULg&t=546s
https://i.ytimg.com/vi/1…axresdefault.jpg
1sJuWg5dULg
L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20
possible, and once we have model-parallel training we'll be able to realize a lot more, for instance training trillion-parameter models on all of the internet's text; that way we could compress all the internet's text into a giant neural network that can be like a know-it-all language model. Secondly, we can figure out ways to train one single model for multiple modalities, an even bigger generative model that could work at the video level on YouTube, the image level on Instagram, or the text level on Wikipedia, so that it's able to
00:09:43
00:10:24
583
624
https://www.youtube.com/watch?v=1sJuWg5dULg&t=583s
https://i.ytimg.com/vi/1…axresdefault.jpg
1sJuWg5dULg
L8 Round-up of Strengths and Weaknesses of Unsupervised Learning Methods -- UC Berkeley SP20
probably correlate information across multiple entities and modalities. All this kind of modeling requires hardware and software advances for model-parallel training. It's also possible to make autoregressive models more useful by figuring out faster ways to sample, with better low-level primitives at the CUDA level, for instance fast kernels; for example, WaveRNN uses these kinds of mechanisms for production deployment and doesn't need to be distilled into something like a
00:10:24
00:11:02
624
662
https://www.youtube.com/watch?v=1sJuWg5dULg&t=624s
https://i.ytimg.com/vi/1…axresdefault.jpg