video_id | title | text | start_timestamp | end_timestamp | start_second | end_second | url | thumbnail
---|---|---|---|---|---|---|---|---|
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | things about this is if we go and poke into that model and say, well, you have this hidden state that summarizes everything you've seen, and we do probes over that, we found that actually there was a single unit within this language model which very vividly and directly just computes a running estimate of what is the sentiment of the characters I've seen so far in the review. So you can see that, you know, as it turns on, "this is one of Michael Crichton's best books", and so we have green colored as positive and red colored as negative, so again there's | 01:11:52 | 01:12:19 | 4312 | 4339 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4312s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | no supervised learning going on here, this is all just unsupervised prediction of a byte stream. It just sees a stream of bytes, 40 billion in a row, and they're all just, you know, numbers 0 to 256, and it somehow figures out, in order to better predict this text, you know, it recovers this useful feature, which is, well, is this review gonna be excited or, you know, dismissive. And, you know, it can handle complexity where, you know, it can switch from a great start to something where it's like, you know, here in the middle, "seriously the | 01:12:19 | 01:12:48 | 4339 | 4368 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4339s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | screenplay and the directing were horrendous", and then it suddenly drops off, its running sentiment estimate, and starts going negative: "I can't fault the actors, I know good novels especially are hard, but this may be the absolute worst disparity in quality between a novel and screen adaptation ever". So it really does it, and it turns out that if we just threshold on this unit, so we're not even fitting parameters, we're fitting one parameter, it actually was matching these old word2vec or n-gram baselines and even things | 01:12:48 | 01:13:17 | 4368 | 4397 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4368s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | like skip-thought vectors, and it's just a single unit in the model and we're just running it over the document and, you know, thresholding the value at zero. And so this is a histogram, for positive reviews and negative reviews, of what this system does. So this is kind of showing, I think in a very clean and pure way, how you can really do some unsupervised representation learning here and start to learn something that really helps potentially with downstream tasks. It's very hand-engineered, it was very targeted, we knew that, like, you know, | 01:13:17 | 01:13:44 | 4397 | 4424 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4397s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | for product reviews, sentiment is a very important feature, so we were kind of really hoping that something like this would happen and it would learn a really good representation, but it still, you know, kind of shows a proof point that with limited scale but lots of data you can get something done here. A follow-up work we did with Scott Gray was pushing on kind of model size again, so we said maybe hidden state size is the bottleneck. So again, these standard LSTMs and RNNs summarize the entire past context as a fixed-length | 01:13:44 | 01:14:12 | 4424 | 4452 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4424s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | feature vector, and that might be, for a standard model or a big model, like 4096 units, or the Jozefowicz et al. model was 8K units, and, you know, if we had like three-hundred-dimensional word vectors, if you naively just concatenated them into that state representation you could only handle like 30 in a row; with, you know, an 8K or 9K state size that's only about a sentence or two. So we thought that, you know, maybe it just turned out that models were really limited by their state size, and so we pushed on these kind of block-sparse | 01:14:12 | 01:14:40 | 4452 | 4480 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4452s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | methods that kind of allowed us to train with much larger state sizes, where we would factorize the weight matrices such that they would be represented kind of as this two-layered system of having a dense sub-block and a lot of sparse blocks that are pruned away. And we saw that these were slightly more efficient in terms of parameters, and they also worked better on things like sentiment analysis when evaluated by these linear models, which is like a standard probe for how good of a feature representation have I | 01:14:40 | 01:15:08 | 4480 | 4508 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4480s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | learned. That's partially because when your model is just, like, lots of features and that's where all their expressiveness comes from, you know, linear separability is easier in high-dimensional spaces. And yeah, this was kind of like explaining some of the history of how I was pushing on trying to get these things to work and figure out how do I, like, really, you know, push their performance potentially. And so this is like showing again the sentiment analysis performance of these units learned by these models, so this is how | 01:15:08 | 01:15:32 | 4508 | 4532 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4508s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | that kind of representation evolves and we show kind of data efficiency on the x-axis here so in the limit we know there's that zero shot performance of fitting a threshold to zero examples and that actually turned out to be about about here on this graph if you use all the data to probe and find it but if you just fit kind of naively as you saw more and more data you you know could start with like in the limit only needing 10 labeled examples to beat some of the original supervised learning baselines which just train systems from scratch | 01:15:32 | 01:16:00 | 4532 | 4560 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4532s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | there's this Recursive Neural Tensor Network paper from Socher et al., very early deep learning work here with a, you know, really cool complex model, and we were able to match it with just ten labeled examples, whereas it was trained on all 8,000 in this case. And then as we kind of keep adding more and more data we see that the representations learned by these language models can be quite powerful, and you're kind of able to, like, quickly sweep through; kind of, in the limit, you know, if you don't have any pre-training you start getting into these | 01:16:00 | 01:16:25 | 4560 | 4585 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4560s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | increasingly complex and desperate (desperate is maybe a judgy word) ensembles of 30 different models to hit SOTA, and then we're able to just use this model that exploits this unsupervised learning on a lot more data to push significantly higher, and then that follow-up improvement with block sparsity had another large jump above that. And so this is kind of one of the precursors that kind of heralds what's about to happen on every task over the next few years; this is 2017, as like this field really starts taking off. So we mentioned this | 01:16:25 | 01:16:56 | 4585 | 4616 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4585s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | kind of cool and interesting thing of learning a single feature within one of these networks that kind of really shows some representation learning going on. So there's another really great paper I love here from Roy Schwartz and collaborators in 2017 that I think again starts to speak to: hey, these language models that are, you know, recurrent networks or more expressive neural networks are really actually learning something interesting and beginning to be useful for downstream tasks that might be difficult. So this is a dataset | 01:16:56 | 01:17:26 | 4616 | 4646 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4616s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | called the Story Cloze task. So what you do is you have a paragraph of context, in this case Karen was assigned a roommate for her first year in college, yeah, they go to a music show together and it goes really well, and then you're trying to train a system to predict which is the right ending and which is the wrong ending. And so this fits very cleanly, or this is what Roy was quite clever about, realizing that this fits very cleanly into the generative modeling framework: you could say, well, what is the probability of the right | 01:17:26 | 01:17:52 | 4646 | 4672 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4646s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | ending versus what is the probability of the wrong ending, and again, as we get better language models they should start to learn to exploit context and assign, like, you know, the correct probabilities to these different strings. And so very early work kind of took the classic supervised learning approach of just throwing, you know, a model, maybe even with pre-trained word vectors, at the system and treating it as like a binary classification task, but in this case, the Story Cloze task, it's difficult to | 01:17:52 | 01:18:16 | 4672 | 4696 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4672s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | generate this data; they only had 2,000 labeled examples, so a purely supervised discriminative system really couldn't get that far, and they actually were basically not performing much better than random. And so what Roy was able to show is that, well, if you exploit tons more additional data which was available, of, like, training on small short stories, and then you use this model to score the endings, so it just produces a single scalar which is like the ratio of the probabilities, the same trick that we talked about before but computed with a | 01:18:16 | 01:18:45 | 4696 | 4725 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4696s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | language model, where you say, well, what is the probability of the ending given the story, and you normalize by the probability of the ending in isolation. And this trick just helps a bit compared to just computing only the probability of the ending given the story; that actually still works quite well, but you get a fair amount more. And so they were able to significantly improve the performance on this dataset; again, in the limit just using that single feature, the RNN LM feature here, they got an almost 10% jump in performance just by | 01:18:45 | 01:19:11 | 4725 | 4751 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4725s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | using the generative model off the shelf. There's no discriminative training, it's not exploiting, you know, spurious statistical correlations here, because it doesn't see any labels; it's just fitting a threshold on what it already thinks is the right ending versus wrong ending. Another quick inner loop of scaling, so these are all kind of happening nestled together, and I think this gives kind of a sense of how, you know, research fields often evolve, where you see these different authors and different people pushing down different | 01:19:11 | 01:19:36 | 4751 | 4776 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4751s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | lines of work and then kind of things come together in exciting ways. So this is work from Noam Shazeer here, really just pushing on: maybe parameter count is the bottleneck, you know, maybe that's what's been holding back language models. And so they really went crazy here and they train models that have these what they call sparsely-gated mixture-of-experts layers. So you have your standard LSTMs, in pink, on top and bottom of this model, and then in the middle you sandwich in what's called a mixture-of-experts layer, and | 01:19:36 | 01:20:01 | 4776 | 4801 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4776s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | what this has is a gating network that decides to pick basically a two-layer fully connected network, and it says which one gets slotted in for this given word. So you think that, you know, maybe you want to memorize a lot of information, and when you see, you know, 'they went to the city ...' or something, the mixture network, the gating network, will say, oh, I should load up, like, you know, the expert that handles, you know, where cities are in the world; this is kind of just a hand-wavy high-level intuition. And particularly | 01:20:01 | 01:20:28 | 4801 | 4828 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4801s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | when you train this thing at large scale, because it's sparse, only one expert is being evaluated for any given location at a time, so you can group these and you can have many of these experts being trained in parallel. And so they're able to push to, like, you know, an eye-popping 137 billion parameters in this language model; it's all in this very specific sub-module, but it ends up being more compute-efficient, and it has, like, a lot of clever and very impressive systems engineering work to handle how do you run this thing at scale and, you know, | 01:20:28 | 01:20:55 | 4828 | 4855 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4828s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | have it be efficient when handling so many parameters there. So now we come back to type-two evals and kind of the standard 'slot it in and see how it does', and this is like really the paper that kind of set this field off. It's called ELMo, from Peters et al., this is AI2 work again, and ELMo is the name of the model, but it's really about deep contextualized word representations, and this is kind of where there's the clean mark between the word vector era and the language model era. And so the way they do this is | 01:20:55 | 01:21:30 | 4855 | 4890 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4855s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | they kind of cleverly say: well, what do word vectors do? They slot in kind of as inputs and they replace, you know, this discrete identity or ID token, like, you know, the word 'cab' being ID 256, with a distributed representation, as we discussed before. Things like context are missing in this case, so this paper talks about how to use a language model to do the same thing, they're substituting the input representation, but instead what they're using is a deep bidirectional language model. So this is kind of the schematic | 01:21:30 | 01:22:03 | 4890 | 4923 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4890s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | here, where they have a forward LSTM that will first take in its own learned word representations and run over the sentence in a left-to-right fashion, and then they want to, you know, have context not just from words that happened in the past but words that might be about to happen, so they also run a backwards LSTM in the other direction, from the right to the left. And then they have this hierarchical, or sorry, excuse me, a deep model with multiple layers, so they run two layers of LSTM, and then what they do is | 01:22:03 | 01:22:30 | 4923 | 4950 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4923s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | they learn weighted averages over the layers, so maybe for some low-level tasks you only want those input representations, but maybe for some tasks you really want that kind of long-range context and so you might want to use the higher-level layers. And so then, instead of feeding in that kind of one-to-one look-up-in-a-table word vector, they have this RNN language model that processes the sentence or a piece of text in both directions, and it learns to, you know, reuse its hidden state representation as | 01:22:30 | 01:22:56 | 4950 | 4976 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4950s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | the input to the model instead of the word vector representation. So, kind of seeing all those early results like skip-thought vectors showing that, well, you could learn a distributed representation of the sentence, this one does it, but it does it at a word level, and it just cleanly slots in where word vectors used to go. And what is quite nice about this is it allows you to have very direct comparisons with prior work, and across the board they basically show that, like, simple baseline models which were substituted to use these representations | 01:22:56 | 01:23:22 | 4976 | 5002 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4976s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | instead of word vector representations were outperforming very well-engineered, very tuned state-of-the-art systems that were, like, squeezing as much performance as they could out of word vectors, and they're getting, you know, quite large numbers here, where you see, you know, 10-20% relative, or sorry, yeah, relative error improvements. And importantly they kind of have that clean sweep of very many different tasks, like question answering, entailment, coreference, NER, so even classical tasks like, you know, part-of-speech tagging, like, and you know, | 01:23:22 | 01:23:50 | 5002 | 5030 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5002s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | this kind of really just swept everything, and it was very clean, it kind of like made clear that, okay, you know, word vectors were great, but it's time to, you know, here comes the new thing. And, you know, the other very important and fascinating thing I find about this is that the language model they used for this system is that language model that Jozefowicz et al. developed in 2016 at Google, along with co-authors like Oriol Vinyals; they really were just pushing on | 01:23:50 | 01:24:19 | 5030 | 5059 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5030s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | perplexities, they're just pushing on how well can we get a generative model of this text, and then, you know, two years later someone just was like, wait a second, this thing is learning amazing representations. And, you know, those two works are separated by two years and completely different research labs, and they just discovered that, you know, these language models are really doing something here. Yeah, so that's kind of like really where things turned, and you see, you know, again looking at data | 01:24:19 | 01:24:46 | 5059 | 5086 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5059s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | efficiency that when you're at very low amounts of data you get huge improvements like 10 plus percent absolute improvements so that really feels like you know as you get more and more supervised data you can begin to overcome the limitations of you know training from scratch but in the limit you know you want to use as little data as possible you want to learn as quickly as possible so this is like very exciting and it's kind of like really got everyone to start stirring and paying attention to this field yeah so | 01:24:46 | 01:25:13 | 5086 | 5113 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5086s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | final one before the break is, kind of, you could think of it as pretty much in the same vein as ELMo, and what we did instead is we took a better language model again. So transformers came out and we were really excited by their ability to handle longer-range dependencies, and they were also very compute-efficient, so you could train them quite well and quite fast. So we swap out the recurrent network, or the LSTM, in the language model for a transformer-based language model, and if we want we could talk a bit about self-attention and | 01:25:13 | 01:25:44 | 5113 | 5144 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5113s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | transformer-based architectures in a bit, but for now just think of it as, like, we subbed in a different, better architecture and it's slightly larger. We use a similar dataset of books, it's the same dataset that skip-thought vectors introduced slash trained on, and we just fine-tune it the same way that Dai et al. did. And the exciting thing here is we saw that we no longer needed these task-specific architectures for each task. So, you know, a lot of the cleanliness of ELMo was that because it was just substituting | 01:25:44 | 01:26:14 | 5144 | 5174 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5144s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | the input representation, you could reuse all those engineered architectures, and often they would, you know, compensate for the issues of handling long-term dependencies in an RNN with, like, an attention layer or the like. But they still require, you know, that engineering of each of these tasks, for each of these different architectures, which means that you're still leaving performance on the table; you know, it's not like where you're initializing the middle-layer features of a CNN instead of, like, the edge detectors of the lower | 01:26:14 | 01:26:37 | 5174 | 5197 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5174s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | layers, but then we still are sticking new layers on top. So we were trying to, like, really kind of move towards a general-purpose framework that uses the same architecture everywhere and not have to have as much of this task-specific engineering, which requires a lot of effort and time and grad-student hours to, like, push those systems. So we have this transformer-based language model, and we kind of showed that for a fair variety of tasks, primarily classification, we kind of take the same | 01:26:37 | 01:27:03 | 5197 | 5223 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5197s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | model, and without having to modify it or introduce new layers, we could just fine-tune it with only a linear classifier, you know, tacked on top, and we could across the board do quite well, and in many cases we were outperforming ensembles the same way that ELMo was doing before, and using basically the same unified architecture to perform quite a lot of different tasks. And the GLUE benchmark had recently come out as, like, kind of a standard multi-task benchmark, and this is kind of one of the first major ones to bump up accuracy | 01:27:03 | 01:27:31 | 5223 | 5251 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5223s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | there and reduce the complexity of it. And, you know, there are two particular things that I'd like to focus on for discussing some of the results from that paper, which is: if we ablate the number of features transferred (this is a 12-block transformer, 12 self-attention blocks in the model) we really see that you need all those layers, and the random initialization of higher layers was not working well at the time. Maybe, you know, as always, you figure out better initialization methods and you can close that gap, but you see kind of | 01:27:31 | 01:27:58 | 5251 | 5278 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5251s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | cleanly that we're transferring a deep, you know, a deep distributed representation, and the deeper it was, the better it was generalizing, and that seemed to hold true across multiple datasets and was a very clean kind of performance increase as you just transfer more and more of those blocks. So ELMo is a 2-layer model and now we're going to, like, a 12-layer model. And then this rightmost graph is really the one that I want to focus on, and this kind of links together some of the hints and pieces we've been seeing | 01:27:58 | 01:28:25 | 5278 | 5305 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5278s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | so far through, like, many of the different papers, which is kind of this interesting behavior: sometimes the language model is learning a supervised task, or a task we kind of thought needed supervision to be classically trained in the machine learning framework, without any direct explicit labeling or supervision of it. So what we did here is we took this transformer language model and we kind of designed these heuristic ways of having it compute probabilities, the same way that Roy Schwartz was doing, and we kind of started to extend that beyond | 01:28:25 | 01:28:51 | 5305 | 5331 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5305s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | just, you know, the very specific thing like which of these two sentences is most likely. So, like, for instance, we could take a language model and do exactly that example at the beginning and ask it: well, you just saw a movie review sentence, do you think the word 'very positive' or 'very negative' is more likely after seeing this sentence? So this would be this probe here, which is sentiment analysis in blue, and so we show, over the course of training this language model, we evaluate this kind of zero-shot performance probe, and we call | 01:28:51 | 01:29:17 | 5331 | 5357 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5331s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | it zero shot because and this is a broader you know we didn't invent zero shot by any means but it just means evaluating on a task or data set or a class that we've never seen before and we haven't done standard supervised learning to update the representations or to train the model to do this and so we see that kind of as you train you steadily improve performance we've normalized test performance so that zero is random guessing and one was the overall state of the art do you still see across the board that these models | 01:29:17 | 01:29:42 | 5357 | 5382 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5357s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | are, you know, nowhere near SOTA, and often they're less than 50% of the way between SOTA and random guessing, but they're showing clear and steady improvements, and they're showing that even on tasks like question answering you could actually, you know, take a paragraph of, like, a question-answering task and ask it, well, which of these answers do you think is more likely? And, you know, there's no supervised training here, it was trying to predict books, and then you ask it, like, a 5th-grade science question and it starts to, sorry, I shouldn't | 01:29:42 | 01:30:06 | 5382 | 5406 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5382s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | anthropomorphize it so much, but you just probe it, you know, you can compute some conditional probabilities from it, and you start to see progress being made on, you know, some potentially quite far afield tasks. The final point to make here, too, is self-attention and transformers really seem to help a lot here, where we did the exact same model, or, you know, equivalent size and similar compute, with an LSTM, and we were seeing that, especially on the zero-shot tasks, sometimes it could do relatively well, but on some of them, especially ones that | 01:30:06 | 01:30:33 | 5406 | 5433 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5406s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | handle long-range dependencies, you really need these self-attention layers to handle long-term dependencies. Cool, so I think we're at about half time for the lecture and I think that's probably a good time for a break then. Fantastic, Alec, thank you. Let's take a break till 6:50, specific time, well, about eight minutes, does that sound good? Yeah, okay, great, and I'll pause the recording for a moment. So, a question: if you have certain, like, limitations on how large your model can actually be, like everything you need to run in, like, a | 01:30:33 | 01:31:12 | 5433 | 5472 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5433s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | particular device, and you can't, like, train a large model on it, are there any strategies? Yeah, I mean, so I admittedly have been emphasizing the need for scale, but it's kind of a continuous-spectrum thing, and there's some work we'll be talking about later that kind of focuses on efficiency and kind of how far you can push models of a given capacity and size. You know, probably the answer here, I think, from a pragmatic perspective, is to kind of use whatever is the largest thing you can fit into the given device | 01:31:12 | 01:31:41 | 5472 | 5501 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5472s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | framework, or kind of, you know, resource specifications you have, but then kind of really pushing on how far you can take that thing, and some of the methods and techniques that have been developed, especially in the last year or two, have kind of increased efficiencies by factors of maybe five to ten. So there's, I think, a lot of promise there from, you know, really pushing even with a fixed size, and many of those still fit on single GPUs. Yeah, thanks. Cool, yeah, so yeah, I guess given it seems like the class has gone over transformers a few | 01:31:41 | 01:32:14 | 5501 | 5534 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5501s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | times, I won't do the super detailed version here, so yeah, I guess we'll just kind of look through that real quickly. So we've kind of talked so far about mostly standard language models and kind of using different architectures, you know, character-level RNNs and LSTMs, or bidirectional LSTMs, and transformer-based language models, and they're always kind of trained with the standard autoregressive left-right, or, in the case of ELMo, adding a backwards right-left language model. And, you know, that's nice, | 01:32:14 | 01:32:46 | 5534 | 5566 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5534s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | it's a clear framework, it allows you to compute probabilities easily, it allows you to sample kind of in just an iterated fashion; it's not the fastest, but it's quite simple to do: you just sample from the distribution over the next word, and then you feed that as a new input and condition on it and then resample. And so it's a very clean and, like, general framework, but it may actually not be all that optimal. So it's cool and exciting to see some of the things that these language models are doing, and some of the work, as I was | 01:32:46 | 01:33:14 | 5566 | 5594 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5566s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | just mentioning, has really pushed farther by walking away from that very explicit, like, left-right autoregressive language modeling strategy. So this is a common leaderboard, it's called the GLUE benchmark, and it combines a set of, like, nine tasks together, and this was pretty important for the field, to kind of standardize the set of tasks people reported on. As you can imagine, especially early on when the research is kind of scattered and not all that standardized, you kind of see, you know, people picking their favorite benchmarks; | 01:33:14 | 01:33:46 | 5594 | 5626 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5594s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | I was totally guilty of this myself, I really cared about sentiment classification just because I happened to, you know, find that to be an interesting task and worked on it a lot, and so, you know, my favorite to report is sentiment classification, someone else is going to report on, you know, entailment, and someone else reports on question answering; you've got this lack of commonality and comparison points. So the GLUE benchmark came in and said: we're going to standardize, we're gonna focus on sentence-level comparison tasks primarily, and we're going to kind | 01:33:46 | 01:34:10 | 5626 | 5650 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5626s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | of have a suite of them, and we're basically gonna say, hey, you should report on all of them so you can't hide your bad results on one. And this helped drive a lot of progress too. So this is a screenshot of kind of where this leaderboard has gone, showing all these new improvements and methods. So there's the BiLSTM ELMo baseline at the bottom that we mentioned, and GPT-1 would have slotted in slightly above that, but then there's BERT, now ranked 20, from Jacob Devlin and crew, and then there's Facebook AI's RoBERTa as another big jump, and so we | 01:34:10 | 01:34:42 | 5650 | 5682 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5650s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | saw, like, on the average metric here, which kind of just averages the performance across all these different tasks, it went from 70 to 80 between the BiLSTM ELMo baseline and BERT, so that was a big jump there, and then an almost equally sized jump happened from BERT to RoBERTa, which we'll talk about in a bit, and then there's newer things like ELECTRA and T5. So this kind of whole area has really, you know, really exploded in the last year, two years, in terms of the amount of teams, and basically every | 01:34:42 | 01:35:11 | 5682 | 5711 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5682s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | major research lab, Microsoft, like, Stanford and NYU, like, pretty much, you know, AI2, you see a huge amount of people, you know, everyone everywhere has been kind of pushing what they can do on this kind of benchmark and really seeing a lot of progress. So we're going to go through kind of some of these improvements that are highlighted, select a few, there's many others. (Sorry if I dropped out here briefly, is the recording back on?) So SST-2 is, like, sentiment analysis like we mentioned before, so it's kind of a | 01:35:11 | 01:36:15 | 5711 | 5775 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5711s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | you know, again, a diverse suite of tasks here. So how do we, what are these big improvements we're seeing beyond the standard left-right language models? And there's one more point to make, which is there is a human baseline here, and it's slotted in actually in the middle, it's in 12th place now. So what does it mean, like, are these models actually better than people? And, you know, the answer really is no, and it's complicated and confusing, and we'll chat about this a bit more later, and supervised learning is always playing tricks on you, but you | 01:36:15 | 01:36:42 | 5775 | 5802 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5775s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | know, now these models have, like, really, you know, in the last two or three years, because of leveraging unsupervised pre-training and kind of scalable methods, really made quite a lot of progress in this space very quickly. So this is BERT. So what BERT does is it basically finds a very great way to hybridize a language model objective with kind of the importance of, like, bidirectionality. So again, you know, by default we have this, like, left-right autoregressive factorization where we say: given the | 01:36:42 | 01:37:12 | 5802 | 5832 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5802s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | previous words, predict the next word in the language model; that's, like, what GPT-1 does. And so what we see with that is you're not able to exploit context from the right; you're not able to see, you know, by masking, the model, you have to prevent it from being able to just look at the next word and say, well, I see in my sequence that it's 'cat', so I'll just learn to copy it over and predict 'cat'. So that has a major limitation, and when we released GPT-1 we actually, like, weren't able to do well on, or, you know, we found that some of the | 01:37:12 | 01:37:43 | 5832 | 5863 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5832s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | question-answering datasets we just couldn't do as well on, because we weren't able to exploit bidirectional context, whereas ELMo was, you know, a bidirectional language model with an LSTM, and, you know, because they trained a forward one and a backward one and averaged the representations, that works quite well for shallow models and that gets you that bidirectional context, and that can help a ton, and, you know, they outperformed us, I think still outperform us, on some datasets because of that. And then BERT basically figures out how to have | 01:37:43 | 01:38:06 | 5863 | 5886 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5863s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | bidirectional context within a self-attention model, and the way they do this is they change the objective. So they're no longer doing this, like, standard, you know, maximum likelihood training on, like, just the data distribution; they use this kind of proxy task called masked language modeling. So again, you know, at the bottom here we could see left-right LM is, like, 'the cat sat on the', and then you blank out a word and it's supposed to predict 'mat'; right-left language modeling would be we'd go the other way around and, same thing, have | 01:38:06 | 01:38:32 | 5886 | 5912 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5886s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | 'cat' masked and predict, you know, there. And so what masked LM does is it just takes your input sequence and it corrupts a few locations, 15% in the case of BERT, and it trains the model to predict what's at those masked locations. So, you know, in this case there's no, like, left-to-right requirement, it just randomly selects 15%, and this allows you to have bidirectional, like, representations; you can't leak the word because it's masked in the inputs, whereas for a standard left-right or right-left LM you kind of | 01:38:32 | 01:39:02 | 5912 | 5942 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5912s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | hide that. This is probably one detail of self-attention models: you have the self-attention matrix, and that kind of defines the connectivity pattern between different locations in your sequence of inputs that you're processing, and so you use a masked self-attention matrix for standard left-right language modeling, or right-left language modeling, where you mask the upper triangle, and that prevents that future leaking. And so, you know, when we say bidirectional context, that corresponds to training the same self- | 01:39:02 | 01:39:33 | 5942 | 5973 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5942s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | attention transformer, basically, except you no longer have this masking to prevent future locations, like, you know, j after i, from being visible; you have i look at and attend to position j after i. So that's kind of the architectural detail that corresponds to this change, and it kind of makes sense that having that ability to look at both sides of context helps with disambiguation, it helps with information processing and, you know, information flow through the model, because, you know, the model can, like, query back, and, you know, for things like | 01:39:33 | 01:40:01 | 5973 | 6001 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=5973s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | question answering, for instance, if you have the question after the context, you can't update the representations of the context, you know, in a left-right autoregressive model after you've seen the question, because they're masked, they're hidden from it, so the model isn't doing any, you know, right-context-dependent processing. But in BERT it can actually, you know, bidirectionally attend and quickly pass information forward and backward, and this is just what you see, if anyone actually does, like, a | 01:40:01 | 01:40:24 | 6001 | 6024 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6001s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | self-attention architecture from scratch on a supervised task, they always, or almost always, use bidirectional, that is, unmasked self-attention matrices. And so this turns out to give a huge boost on GLUE; so that bump, I believe, between GPT-1 and BERT was, like, they went from, I think GPT-1 had, like, an average of 78 or something, or sorry, excuse me, I think this got reworked, and sorry, we excluded WNLI, it was, like, a bump of five-plus percent, so it basically got about double the headroom on GPT-1, and they show with very careful | 01:40:24 | 01:41:00 | 6024 | 6060 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6024s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | controls that for the exact same model in the exact same setting, you know, it does look quite a bit better. So bidirectionality makes sense, also for sentence comparison tasks like entailment, where you have two sentences you're comparing, you really want the model to be able to compare them and attend back and forth between them and, you know, look at one and then the other; that just seems like correct behavior to do, whereas GPT-1 would just go left to right and then you'd be done. So yeah, BERT ended up being kind of the thing after ELMo: ELMo | 01:41:00 | 01:41:28 | 6060 | 6088 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6060s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | kicked it off, especially on the research side, and got a lot of people to start investing in this space; BERT is kind of the thing that moved this to the point where suddenly it was, like, you know, ready for, like, more commercialization, or, you know, production-ready, basically. And so this is now deployed in Google search and it's really, like, kind of showing up everywhere; you know, if you go to basically any leaderboard, BERT variants are often very near the top now on pretty much most NLP tasks. And just like GPT-1, they use the same architecture | 01:41:28 | 01:41:57 | 6088 | 6117 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6088s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | everywhere, they remove the need for kind of having these task-specific modules on top, and so this was another, like, incredibly strong step. So, you know, that was BERT. I guess there's one more point to make, which is: because it's predicting these masked tokens, it's only predicting, like, you know, you have to set that mask percentage, and by default it's often set to, like, 15 percent, so you should understand that, like, your left-right model actually predicts a lot more words, because it'll predict the full sequence within a single forward pass, | 01:41:57 | 01:42:25 | 6117 | 6145 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6117s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | whereas by default you'd have to run a BERT, you know, model, like, six times to, on average, see it predict every token. So it turns out that they often learn a bit slower early, but then they just keep training and they begin to learn how to use the bidirectional representations to their benefit, and then they continue to outperform left-right language models. Now the problem is you can't sample from it, and it's no longer quite as clear, it's, like, you know, you can't compute a correct, a correctly normalized probability over | 01:42:25 | 01:42:52 | 6145 | 6172 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6145s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | the sequence without a lot of work; there's some research into figuring out how to do this with clever methods, but it kind of removes some of the elegance of, like, sampling and having easy density or probability estimates, in exchange for this representation capability. So RoBERTa, if we go back to this leaderboard, is the next big jump up, from 80.5 to 88.1, you know, kind of, you know, like, as a benchmark or, you know, important event, it kind of, you know, solidly is above the supposed human average baselines here. So | 01:42:52 | 01:43:24 | 6172 | 6204 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6172s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | what is RoBERTa? RoBERTa is a very well-executed engineering refinement of BERT. It's a good example of how so often in this field, kind of, you know, the second pass at an approach, with maybe the same or a very similar model, architecture, and algorithm, can, just by careful engineering and fine-tuning and tweaking, still have tons of extra headroom to it. So they better tune the hyperparameters, they remove a few hacks that the original BERT had; so for instance the original, for computational reasons, performed most of its training on a | 01:43:24 | 01:43:54 | 6204 | 6234 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6204s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | relatively short context length, I believe of 128 tokens, and then right at the end of training doubled that twice, up to 512 tokens, for prediction. And so they just train at 512 the whole way through; it's the same model capacity, it has the same runtime per sequence length, but they just, you know, spend the pre-training compute to buy that. And when you're thinking about deploying the system, you know, one of the important criteria to realize is, especially when you're talking about a system that might get | 01:43:54 | 01:44:23 | 6234 | 6263 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6234s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | deployed broadly and used across the you know across the world and many different companies once it's released most of the compute is going into inference time it's not actually going into training time and so that means that if you have a method of getting further performance improvements by spending more flops at pre training time often it can be quite worth it from like kind of a full ecosystem view of where is the compute being spent this is one of the counterintuitive things I think about how you think about these systems so | 01:44:23 | 01:44:48 | 6263 | 6288 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6263s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | they also do better data generation: it turned out that the original BERT, kind of from a simplicity perspective, cached the masking, so they only actually masked the sequences once and they always trained on the same mask locations, and so you can simply change that to an online setting where you keep re-sampling the mask, and that helps with overfitting. And they also use a more flexible vocab scheme, kind of a full BPE scheme that can do kind of full UTF-8 byte sequences, so you can handle any string, at least with the standard byte sequence representation. And then | 01:44:48 | 01:45:17 | 6288 | 6317 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6288s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | they just train longer with more compute. So as we mentioned before, BERT is only predicting, like, one of six tokens on average, so that just means it's under-trained for an equivalent amount of time, and you can actually just keep training it longer with more GPUs and continue to see higher and higher performance. And so, I mentioned BERT was on the leaderboards everywhere; well, now, about, you know, eight months later, it's RoBERTa everywhere on the leaderboards, and that's still true today, largely, except for a few, like, targeted things. I | 01:45:17 | 01:45:42 | 6317 | 6342 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6317s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | think if you go to, like, a general NLP leaderboard, you're gonna find that the model in first place is some variant of, like, a RoBERTa or something. So that's, like, an example again of where, you know, it's not, you know, like, there's no super clever new algorithm or approach, and even for BERT it's a pretty, you know, it's a pretty precise refinement of previous work like GPT-1, but it can have a huge impact when it's just well executed, and, you know, it is, I think, somewhat, you know, exciting from one view, where it's | 01:45:42 | 01:46:13 | 6342 | 6373 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6342s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | like, okay, we're kind of really finding that there's a lot of fertile ground here, and with, like, kind of the right tweaks and, you know, clever insights we can continue to make further progress. So this is where ELECTRA comes in, and this is, like, I think one of the ones that first shows kind of another new interesting algorithmic potential improvement, and somewhat excitingly shows that it's much more efficient. So we mentioned the masking for BERT; so there's actually this kind of gap here, which is: the problem is, when you | 01:46:13 | 01:46:39 | 6373 | 6399 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6373s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | are training, you're masking all these input sequences, and, you know, you randomly sample the masks, so you're kind of corrupting your inputs, but then when you want to run at test time, or when you want to transfer to some downstream task, it doesn't make sense to corrupt the inputs, right? Because if you were doing sentiment analysis and you masked a token, you know, 'this was a [MASK] movie', you don't know if it's going to be a great movie or a terrible movie in that masked location. So BERT just kind of has a few tricks to minimize this impact, | 01:46:39 | 01:47:07 | 6399 | 6427 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6399s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | but at the end of the day it's kind of this train-test gap, where you trained it with one distribution, with masked inputs, and then you want to test it and predict with it on a different one, and it turns out that gap actually looks to contribute to some performance issues. The other gap is, again, it's only predicting 15% of tokens, so it may also be learning slower than it could, because you would have to forward-prop six times to see the same predicted segment of data. So what ELECTRA does is a very clever hybrid system: so they have | 01:47:07 | 01:47:36 | 6427 | 6456 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6427s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | a BERT, or basically a mini-BERT, inside of it, so it's the standard masked language modeling technique, and then you sample from it and you say, well, you know, for that first word that we've masked, what do you think is the right word? So sample from its distribution over tokens, and then you're going to feed it into this discriminator, which is the actual ELECTRA model, and what its job is to do is to predict whether or not the token at any given location is the original token or a replaced token. So if the generator gets it wrong, again, it | 01:47:36 | 01:48:04 | 6456 | 6484 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6456s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | might sample something kind of reasonable and almost correct, like "ate" instead of "cooked". The job of the discriminator — the ELECTRA system — is just to estimate: is this the correct token or not? So it's just a binary classification task, but it's done at every location; it's basically asking, was this input corrupted? And that buys you two things: one, the input has a natural distribution, and because this masked language model can be quite good, it's a lot closer to the real input distribution, so you don't have this | 01:48:04 | 01:48:29 | 6484 | 6509 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6484s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | shock when you transfer it to your downstream tasks. And two, you also speed things up, because you're taking a loss and propagating a gradient at every location — you're always estimating whether each token is the correct one or a replaced one — and that can still be a difficult task at every location, rather than the kind of degenerate thing for like 80 percent of tokens in BERT, which is essentially just copying the input with no prediction to make. | 01:48:29 | 01:48:53 | 6509 | 6533 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6509s | 
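To make the replaced-token-detection objective concrete, here is a minimal sketch (my own illustration, not the ELECTRA code) of how a training example is built: a small masked-LM generator proposes tokens at the masked positions, and the discriminator gets a binary original-vs-replaced label at every position. The generator is stubbed out with random sampling here; in ELECTRA both networks are transformer encoders trained jointly.

```python
import random

rng = random.Random(0)
VOCAB = ["the", "chef", "cooked", "ate", "a", "meal", "tasty"]  # toy vocabulary

def generator_sample(tokens, position):
    """Stand-in for the small masked-LM generator; in ELECTRA this is a
    transformer predicting a distribution over tokens for the masked slot."""
    return rng.choice(VOCAB)

def make_electra_example(tokens, mask_prob=0.15):
    corrupted, is_replaced = list(tokens), []
    for i, original in enumerate(tokens):
        if rng.random() < mask_prob:
            corrupted[i] = generator_sample(tokens, i)   # plausible replacement
        # the discriminator is trained on EVERY position, masked or not;
        # if the generator happens to sample the original, the label stays 0
        is_replaced.append(int(corrupted[i] != original))
    return corrupted, is_replaced

corrupted, labels = make_electra_example(["the", "chef", "cooked", "the", "meal"],
                                         mask_prob=0.4)
print(corrupted)   # e.g. ['the', 'chef', 'ate', 'the', 'meal']
print(labels)      # 1 wherever the token differs from the original
```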
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | And so when we look across the board, comparing this model against the standard models, you get GloVe or ELMo all kind of smashed up right here near zero compute, then GPT-1 starts to move over, then RoBERTa scales with more and more compute, and as the graph keeps going you can see that ELECTRA, across the board, can be quite a lot more efficient, often by factors of 5 for equivalent performance on a dataset. So that's quite exciting, and in the limit they show, for instance, that ELECTRA-Small — a small ELECTRA model, so quite a lot | 01:48:53 | 01:49:18 | 6533 | 6558 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6533s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | smaller than even GPT-1 — by exploiting bidirectionality and this dense training objective, can actually outperform GPT-1 after two days on a single V100, whereas GPT-1 took 25 days on 8 P6000s. Partially this is because of FP16 versus FP32, but it really shows how — and I think unfortunately, and it kind of makes sense because I've talked about the importance of scale and whatnot — some people have kind of written this whole subfield off as, whoever has the most GPUs is going to win, and, oh, | 01:49:18 | 01:49:48 | 6558 | 6588 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6558s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | it's all just training bigger models, and maybe as a new grad student or as a hobbyist you don't have access to the resources to do interesting work in this space. But a paper like ELECTRA is really exciting because it shows that a single commercial GPU can actually still produce very interesting results in this space. Nominally they still run the full version of the model on a TPU pod, but here you already have last year's model being beaten in a day or two on a single GPU the next year, | 01:49:48 | 01:50:16 | 6588 | 6616 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6588s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | so I think that's a very exciting point. And this is from Clark et al., who built the system at Google — Kevin, I think, is his first name — so it's really exciting work. Then there's this final one, which is kind of the deluxe result coming out of this space, from Colin Raffel and collaborators at Google, and this comes after the first crazy year of, well, there's BERT and now there's RoBERTa and others, all these things coming out one after the other every few months, bumping | 01:50:16 | 01:50:51 | 6616 | 6651 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6616s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | up the leaderboard. This is the paper that took a step back and more systematically studied the space, analyzed it — it used a lot of compute to do so — but it really brought a lot of things together and curated it very carefully. It's a treasure trove of information for this space: it's 50 pages long, there are pages and pages of tables with hundreds of numbers in them, so it can take a while to work through, but I really recommend it as one of the ways to get up to speed on this whole area and all the | 01:50:51 | 01:51:17 | 6651 | 6677 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6651s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | techniques and all the different ways. So they systematically study this: there are standard language modeling objectives, there's BERT-style masking, and there are their own things like span-based extensions of BERT. Then they also look at differences in the architecture: there's your standard left-to-right language model, and there are encoder-decoders, which could have a bidirectional encoder that processes, say, the previous sentence, kind of skip-thought style, and then an autoregressive decoder. | 01:51:17 | 01:51:45 | 6677 | 6705 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6677s | 
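As a rough sketch of the span-based corruption objectives mentioned above — the denoising style T5 studies — contiguous spans are dropped from the input, each replaced by a sentinel marker, and the target reconstructs only the dropped spans. This is my own simplified illustration; the sentinel naming and the hand-picked spans below are not T5's exact tokenizer-level implementation, which samples span locations and lengths randomly.

```python
def span_corrupt(words, spans):
    """Replace each (start, end) span with a sentinel in the input; the target
    lists each sentinel followed by the words it replaced."""
    inp, tgt, cursor = [], [], 0
    for k, (start, end) in enumerate(spans):
        sentinel = f"<X{k}>"
        inp += words[cursor:start] + [sentinel]
        tgt += [sentinel] + words[start:end]
        cursor = end
    inp += words[cursor:]
    return " ".join(inp), " ".join(tgt)

words = "thank you for inviting me to your party last week".split()
print(span_corrupt(words, spans=[(2, 4), (8, 9)]))
# ('thank you <X0> me to your party <X1> week', '<X0> for inviting <X1> last')
```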
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | And then there's a hybrid called a prefix LM, which is a single model — well, you could have untied weights, but you can think of it as partially relaxing the masking in the self-attention matrix — where you allow some part of the sequence, the past context, to attend bidirectionally, and then at some point you switch over to doing autoregressive language modeling. So you can potentially get the benefits of bidirectional representations over the past context. | 01:51:45 | 01:52:11 | 6705 | 6731 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6705s | 
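Here is a small sketch of the attention mask being described, under my own toy assumptions: positions in the prefix attend to each other bidirectionally, while the remaining positions attend causally, so a single stack acts like a bidirectional encoder over the prefix and an autoregressive decoder over the rest.

```python
import numpy as np

def prefix_lm_mask(seq_len, prefix_len):
    """mask[i, j] = 1 means position i may attend to position j."""
    mask = np.tril(np.ones((seq_len, seq_len), dtype=int))  # causal (autoregressive) part
    mask[:prefix_len, :prefix_len] = 1   # full bidirectional attention within the prefix
    return mask

print(prefix_lm_mask(seq_len=6, prefix_len=3))
# Rows 0-2 (the prefix) can see all of positions 0-2; rows 3-5 see the whole
# prefix plus only earlier target positions, like a left-to-right LM.
```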
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | Or, in the limit, if your downstream task is always going to be bidirectional, you can just run it in purely bidirectional mode, so it's kind of training a hybrid system, and I think that was also a quite clever improvement they had. The other thing I really like about this paper is that it goes even farther in terms of the elegance of this shared framework for doing all tasks and all predictions. One of the trends has been moving away from custom architectures to shared pre-trained models that are a little more monolithic and can be used across a wide range of tasks with | 01:52:11 | 01:52:37 | 6731 | 6757 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6731s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | high performance. And so T5 — typically, with GPT-1 and BERT, the only difference we would make is that we still slotted a linear classifier in at the end to predict which of the classes is correct. What T5 says instead — and this is actually something that Bryan McCann and collaborators at Salesforce introduced about two years ago — is, we're going to phrase everything as pure natural language, pure question answering or something, so we're going to give the model a | 01:52:37 | 01:53:03 | 6757 | 6783 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6757s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | command or a prompt as the prefix, like "translate English to German:", then give it the English "That is good", and T5 just responds in natural language with "Das ist gut" or something. And for all of these tasks it basically does this: for CoLA sentences it'll predict the literal phrase "not acceptable", and for STS-B — here's the almost silly version, since it's a continuous-valued sentence similarity prediction task — they just have it output the discrete token "3.8". | 01:53:03 | 01:53:33 | 6783 | 6813 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6783s | 
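To make the text-to-text framing concrete, here are a few illustrative (input, target) string pairs in the spirit of the examples just described; the exact task prefixes vary in the T5 paper, so treat these strings as approximations rather than the precise ones used there.

```python
# Every task becomes "text in -> text out": one model, one training objective,
# one decoding procedure.
text_to_text_examples = [
    ("translate English to German: That is good.", "Das ist gut."),
    # CoLA grammatical acceptability, answered with a literal phrase
    ("cola sentence: The books was on the table.", "not acceptable"),
    # STS-B, a regression task, answered as a number rendered as text
    ("stsb sentence1: A man is playing a guitar. "
     "sentence2: A person plays an instrument.", "3.8"),
    ("summarize: state authorities dispatched emergency crews on tuesday ...",
     "officials respond to emergency ..."),
]

for source, target in text_to_text_examples:
    print(f"{source!r} -> {target!r}")
```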
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | That works because the pre-trained model has learned kind of the continuum of numbers and the similarities between them, but it's kind of funny for me to see a regression task reframed as discrete token prediction. And again it's quite general — you can do summarization and everything. We saw a little bit of this when we were probing with, say, Schwartz's work just using natural language probabilities from a language model, or some of those zero-shot transfer results from GPT-1, and so | 01:53:33 | 01:53:57 | 6813 | 6837 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6813s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | T5 really goes through and shows, yeah, you can actually exploit the natural language that it's learned, and that helps with the transfer and potentially helps with the fine-tuning tasks. So yeah, T5 is a really good overview of all the work in this space, and then they also just trained a big model at the end too, and that gets you another bump on those leaderboards we were talking about — it's in fourth place now, though, since others have done some more things on top of it. | 01:53:57 | 01:54:22 | 6837 | 6862 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6837s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | So that's, I think, the core set of literature and ideas I wanted to cover here. At this point we've gone through the history of language models and how they've been adapted and used — the winding history of how NLP really took off with these unsupervised and self-supervised methods and figured out how to use them, and all these different papers that found pieces of the puzzle and proposed different methods that did or didn't work and combined well | 01:54:22 | 01:54:53 | 6862 | 6893 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6862s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | with other modeling improvements and everything. I think it's a really cool story, and I'm excited that I was able to chat through it with y'all today. The last bit here — we still have about fifteen minutes left, but we should maybe leave a little for questions at the end too — is a bit of more high-level thinking: this is an unsupervised learning course, so why do we need it, and what's wrong with the current paradigm of supervised learning? I'm sure you've seen motivation, and there's been great discussion | 01:54:53 | 01:55:18 | 6893 | 6918 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6893s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | already on this topic here, but I'd like to share a bit of my own thoughts and opinions. A motivating question — again, we've had this thread running through a lot of the discussion in this talk — has been: how well does supervised learning work, and what should we expect of it? Concurrent with some of this stuff taking off in the last few years, there was a lot of work that started critically evaluating deep learning for supervised NLP, and so you | 01:55:18 | 01:55:48 | 6918 | 6948 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6918s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | know, for natural language inference, for instance — this is a three-way classification task — even before pre-training really took off, ESIM, using just word vectors and a very well-designed architecture, nominally got to average human accuracy, measured against I believe a single Turk worker, or it may have been a small ensemble of them. So it's like, whoa, is this done, did we already hit human accuracy? And I think everyone kind of knew, well, no, because clearly these models are still making weird mistakes, and | 01:55:48 | 01:56:17 | 6948 | 6977 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6948s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | this is where, in the last few years, there's been a lot of great work starting to really quantify how robust our models are and how well they work out of distribution — pressuring and challenging the standard supervised learning paradigm of training on an IID training set and evaluating on another IID split of held-out data, and basically showing that that's no longer sufficient, and that something's going wrong somewhere in supervised learning that makes these evaluations too favorable to the algorithms and | 01:56:17 | 01:56:43 | 6977 | 7003 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=6977s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | not really a fair comparison to humans. And so this is a great paper, from Gururangan et al. I believe, called "Annotation Artifacts in Natural Language Inference Data", and when you hear people talking about models exploiting statistical artifacts and biases of the training distribution, this is a paper that really nailed that down and showed it quite conclusively. They start from a high level: well, how were these supervised datasets created? You know, admittedly they're | 01:56:43 | 01:57:10 | 7003 | 7030 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7003s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | kind of artificial — you're paying people to label these tasks, they're not natural instances of the task, it's kind of what people can come up with off the top of their head. And they can have very good experimental methodology — datasets like SNLI and MultiNLI are some of the best we've got in terms of careful setups curated by people who really know what they're doing — but you still run into the issue that you've got to have a human generate an example, and maybe they're less creative than you think, | 01:57:10 | 01:57:36 | 7030 | 7056 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7030s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | so the data is actually drawn from a much narrower distribution than it should be. This paper went through critically and showed a lot of these artifacts actually showing up. So a worker would be told to produce a negative or contradictory label, and they would just be like, oh, I'll just slap a "not" on top of a copy of the sentence — and it's not quite this bad, but it gives you the idea of what's going on — they'll copy the premise sentence as the hypothesis and just put a "not" in it, or they'll | 01:57:36 | 01:58:03 | 7056 | 7083 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7056s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | restate the sentence in a more generic or abstract way to get entailment — so you might go from "a dog is playing" to "an animal is playing" or "a pet is playing" or stuff like that — or they'll add some kind of superfluous information like "tall" or "sad" or "popular" to hint at the neutral class, which is like, well, it might be true or it might not, but it's not clear either way. And what they showed, somewhat disturbingly, is that if you only train a model on the hypothesis sentence, so the second | 01:58:03 | 01:58:34 | 7083 | 7114 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7083s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | sentence — and again, semantically this task is defined as the logical relation between two sentences — but if you train a model only on the second sentence to predict which of the classes it should be, it actually gets a large fraction of them right: it went from 33 to 66 percent or so, a large bump. And by definition we know that model can't be doing the true task, because it's predicting given only the second sentence. So this is a great example of where you can see that standard | 01:58:34 | 01:59:01 | 7114 | 7141 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=7114s | 
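As a concrete version of the hypothesis-only experiment described above (my own sketch, not the paper's code), you can train any off-the-shelf classifier on the second sentence alone and check how far above the 33% chance level it lands; the toy sentences below just stand in for the real SNLI hypotheses and gold labels you would load in practice.

```python
# requires scikit-learn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for SNLI hypotheses; note the premise is never shown.
hypotheses = [
    "An animal is playing.",           # generic restatement -> entailment
    "A dog is not playing.",           # inserted negation   -> contradiction
    "A tall dog is playing outside.",  # extra, unverifiable detail -> neutral
    "A pet is outdoors.",
    "Nobody is playing.",
    "A popular dog is playing.",
]
labels = ["entailment", "contradiction", "neutral",
          "entailment", "contradiction", "neutral"]

X = CountVectorizer().fit_transform(hypotheses)
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.score(X, labels))  # training accuracy on the toy data; on real SNLI a
                             # hypothesis-only model lands far above 1/3 chance
```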