video_id | title | text | start_timestamp | end_timestamp | start_second | end_second | url | thumbnail |
---|---|---|---|---|---|---|---|---|
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | you can also get very interesting relations like the concept of like kind of you know like the CEO to a company might all be expressed in kind of the same same subspace or the same direction in the embedding space or connecting a zip code to its city and you can see kind of how you know how could this be happening well co-occurrence is kind of get you this don't they you know Honolulu if you want to predict this random number you've got to eventually figure out that oh these you know do occur together because they are | 00:24:17 | 00:24:52 | 1457 | 1492 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1457s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | the code of one to the other and so yeah you you get a lot of structure even though again all we were doing or you know one of the views of all these algorithms are doing is they're just processing very local relations in a very simple fashion and it's just scaleable and simple but it can work quite well as a starting point now you know these aren't the end of obviously it's only thirty minutes in two or three hour lecture so there's a long way to go so kind of these are all cool and whatnot and they really did | 00:24:52 | 00:25:24 | 1492 | 1524 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1492s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | drive the first few years of modern deep learning NLP and helping to move these models to much higher performance but kind of what might be the issues with them so you know obviously language is a lot more than just the counts of words it has a ton of structure on top of and in addition to words and furthermore context is very important and these kind of fixed static representations of words that we're learning are just insufficient in many cases so you might have for instance three different sentences I went to the riverbank I made | 00:25:24 | 00:25:53 | 1524 | 1553 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1524s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | a withdrawal from the bank or I wouldn't bank on it and all of them have the word bank being used in a very different context and you know basically representing a different thing it's a noun and a verb or you know it or just a phrase of expression and so you really need to learn how to do more complex things but if you're just counting whether two words happen to occur in the sentence or you know in word2vec kind of looking at a very short window and just using an averaging operator you can't really model all that much | 00:25:53 | 00:26:22 | 1553 | 1582 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1553s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | complexity so we need to do more and you know there's also just kind of the design space right now you have this million by 300 dimensional matrix of like word vectors and then the question is just what do we do with that and you know obviously we figured out quite a lot of ways to use them but there's a lot that's still up to the practitioner and this often involves a lot of task-specific models slapped on top of it and that's where a lot of the first few years of research in NLP for deep learning went was kind of designing | 00:26:22 | 00:26:52 | 1582 | 1612 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1582s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | all these task-specific models a model for doing question answering a model for doing summarization a model for doing sentiment analysis and they would all kind of take this common input of the word vectors and slap them in but then there was a huge amount of design on top of that and these models got progressively more and more complex with more and more details and so you can kind of think of this as like well we only really did the first step sure learning word vectors is great but they're really kind of like learning | 00:26:52 | 00:27:16 | 1612 | 1636 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1612s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | just the edge detectors in computer vision and they get us something but we know like in you know deep learning for computer vision there's a lot more that goes into a convnet than just some edge detectors at the beginning of the system and that's true for NLP as well so there's a lot more going on in language beyond just these input representations so kind of how do we get there well we're going to take a little bit of a detour into the history of language models and kind of walk through how this kind of method and kind of set of generative models | 00:27:16 | 00:27:45 | 1636 | 1665 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1636s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | kind of ended up providing one of the methods for moving beyond just word vectors and kind of introducing the second wave of modern NLP methods that use unsupervised or self supervised methods so fun overview real quickly is kind of seventy years of history here on one slide where we kind of are looking at a language model what is a language model well it models language and it's a generative model so hopefully depending on how nicely it's set up we can draw samples from it to understand kind of what distribution it's actually learned | 00:27:45 | 00:28:16 | 1665 | 1696 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1665s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | and how well it's actually approximated the real distribution of language so without getting in the details of how you sample you can kind of see this kind of list here so very early there's this thing called a three gram model from Claude Shannon himself in the 1950s and this kind of still makes basically gibberish they also point to ninety-nine point six billion dollars from two hundred four six three percent of interest rate stores as Mexico and Brazil in market conditions well that's basically gibberish but notice that | 00:28:16 | 00:28:45 | 1696 | 1725 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1696s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | there's still a bit of like local correlation and structure it says a lot of numbers and then it mentions interest rates after six point three percent or six three percent and that's like all kind of right and you can see how there's the tiniest bit of structure in there beyond just like what it would look like if you just drew words independently according to their frequencies and then there's been a lot of investment in this kind of field and area over the last few years so Ilya Sutskever in 2011 kind of | 00:28:45 | 00:29:12 | 1725 | 1752 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1725s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | introduced a character RNN for the task and so here anytime there's a prompt it's highlighted in yellow which means it's a manually specified kind of prefix and then you condition on that and you sample from that so the meaning of life is the tradition of ancient human reproduction that's almost a sentence it is less favorable to the good boy for when to remove her bigger so it quickly fell apart in the second part but it almost got something there and it's still gibberish but it at least shows another hint of structure and then | 00:29:12 | 00:29:42 | 1752 | 1782 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1752s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | there's a Rafal Jozefowicz paper from 2016 which is basically a much bigger word level version of that RNN from 2011 and just kind of used scale and a lot more data and here's a sample drawn from it with even more new technologies coming onto the market quickly during the past three years the increasing number of companies was now Ted called the ever-changing and ever changing environmental challenges online so that's basically a sentence at this point there's a weird thing where it repeats itself with ever-changing and | 00:29:42 | 00:30:10 | 1782 | 1810 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1782s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | ever changing but we've now got a phrase you know multiple phrases or clauses and kind of longer term structure there so that's a big amount of progress and again as we talked about with word vectors a lot of their failure is that they don't exploit contexts and they're kind of these isolated representations of only single words so the fact that these language models were starting to learn context as you looked at and inspected their samples is kind of a clue that they're going in the right direction towards some of the | 00:30:10 | 00:30:35 | 1810 | 1835 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1810s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | functionality behaviors we might want in natural language processing so then the next major step came in 2017 2018 with the introduction of the transformer based architecture we'll talk a little bit about that later if that's appropriate but it handles long term dependencies much better through self-attention mechanisms and then you start to see potentially multiple sentences that kind of flow together and then the final one here is GPT-2 which can kind of take potentially a pretty low probability or difficult to understand | 00:30:35 | 00:31:05 | 1835 | 1865 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1835s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | prompt that you know probably isn't in the training data like scientists discovering a herd of unicorns living in a you know remote previously unexplored valley in the Andes Mountains and they're able to speak English and then it can write something that looks like a news article on top of that this was cherry-picked I sat there for like 20 times till I got a good one but it's progress and most of these are cherry-picked so it's cherry-picks again cherry-picks all the way down and yeah at that point you basically have something that | 00:31:05 | 00:31:33 | 1865 | 1893 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1865s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | just reads like a full news article and it keeps characters and names persistent and you know pulls information from the source sentence over you know multiple paragraphs and this is all a lot of progress being driven in the last few years so kind of now that we have just like a look at the cool samples let's like get into the details here so this is going to be about statistical or probabilistic language modeling and kind of the way we formulate this or is we interpret language as a high dimensional discrete data distribution that we want | 00:31:33 | 00:32:02 | 1893 | 1922 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1893s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | to model and kind of the set up since this is statistical method is we're going to observe a bunch of strings of language and the framing here with a probabilistic language model is we want to learn a function that can just compute the probability or density of new sentences so we want to be able to compute Oh what is the probability of the sentence is it going to rain today and we're just going to give it a bunch of strings and somehow we're going to design a model that can compute this quantity so what does it mean to compute | 00:32:02 | 00:32:27 | 1922 | 1947 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1922s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | the probability of a string you know what should the probability of that sentence the cat sat on the mat be well you know there's some people who kind of think that this might not be the most well defined concept or there's a lot of reason for skepticism potentially Noam Chomsky in 1969 has a very famous quote but it must be recognized that the notion of the probability of a sentence is an entirely useless one under any known interpretation of this term so some people were quite skeptical to be fair this is well before | 00:32:27 | 00:32:55 | 1947 | 1975 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1947s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | that kind of any of the modern renaissance and he goes on to kind of explain a bit more that there's quite likely that statistical methods can work but it's a good example of kind of where we're coming from and some of the contrast in the field so let's instead kind of like talk about why this concept might be useful like why do we want to know what the probability of this is and this is where I think it begins to see the connection between oh what does the generative and how like we end up using it or why might actually learn useful | 00:32:55 | 00:33:23 | 1975 | 2003 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=1975s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | functionality for downstream tasks or for transfer and so you know we could compare for instance the probability of the sentence the cat sat on the mat so the probability of the sentence the cat sets on the mat and you know we would expect that let's say we somehow have the true function here we don't know how to learn it yet but we just assume we have like the ground truth of the probabilities of these two sentences well it should assign more probability to the grammatically correct one and that gives you something like grammar | 00:33:23 | 00:33:49 | 2003 | 2029 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2003s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | and that's you know an important part of language is understanding its structure and what are the valid sentences or not but you know should the probability of the sentence the cat sets on the mat be zero well no because sometimes people fudge their keyboard or mistype it should be much lower but it shouldn't be all the way to zero for instance and then you can kind of get to more interesting sentences that you could query you could say you know what's the probability of the sentence the hyena sat on the mat and compare that to the | 00:33:49 | 00:34:15 | 2029 | 2055 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2029s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | sentence the cat sat on the mat and you know we would say well as a human being asked this you would say well hyenas you know are wild animals they don't often sit on mats unless they're at the zoo or something so this kind of shows how to do this to compute this probability correctly you would need to start to have world knowledge what is a common operator you know what is a common environment for a hyena you know what is even sitting on a mat mean and then you can ask other questions again you could start to get two conditional | 00:34:15 | 00:34:43 | 2055 | 2083 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2055s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | probabilities too depending on how you set up this generative model is you might be able to query you know given that the prefix is the substring two plus two equals you know what should the probability of the completion four be it probably shouldn't be one because people sometimes joke that two plus two equals five but maybe if you had a bit more context you would be able to disambiguate which of those two you might predict and then finally kind of coming back to some of the data sets or tasks we've already mentioned if you | 00:34:43 | 00:35:09 | 2083 | 2109 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2083s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | have the prefix that movie was terrible I'd rate it one star out of five you you really should know that that is a likely completion and so to do that completion and to generate that sentence and to know that string is likely you basically have to have that language model somehow I've learned what sentiment analysis is and what is a little you know a likely relation between the concept of like one star or five stars the kind of you know reception of the movie or the description of the movie before that and so with that one it kind | 00:35:09 | 00:35:39 | 2109 | 2139 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2109s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | of becomes clear that in the limit these functions that these language models learn and compute should be quite useful traditionally we approach that as a supervised learning problem right we were gonna like oh let's go build a data set let's go collect you know a bunch of crowd labelers and have them assign ratings to a bunch of different movie reviews that's what the Stanford Sentiment Treebank is but in the limit this kind of unsupervised scalable method of just like fit a probability distribution to strings of language | 00:35:39 | 00:36:03 | 2139 | 2163 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2139s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | should eventually maybe be able to handle a task like this without any of the classic supervised learning framework being used and yeah so that kind of extends much more broadly those are kind of some you know canonical examples or some toy examples but this actually can be quite useful and this is kind of where language models got their start in many cases 30 or 40 years ago or 20 years ago in kind of machine learning so they're often used for speech recognition and machine translation which again are traditionally approached as supervised | 00:36:03 | 00:36:35 | 2163 | 2195 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2163s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | tasks with pairs of transcripts that are somehow aligned and you know a major promise is that we can somehow leverage these language models to you know really help with these problems and for speech recognition for instance you could prune the space of possible transcriptions from an acoustic model there's a famous example from Geoffrey Hinton of you know how to tell the difference between the sentence wreck a nice beach and recognize speech you know they're very similar from a you know a raw audio perspective but if you have context you | 00:36:35 | 00:37:05 | 2195 | 2225 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2195s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | know that these can be quite different things and wrecking a nice beach is just also a much less likely string than to recognize speech and for translation for instance you could rerank possible translations based on a monolingual language model so if you have an English to French translation system and you have some proposal of the French translation you could say well hey language model that I've trained already how likely do you think the sentence is in French and there's a lot of work on integrating this directly into decoders | 00:37:05 | 00:37:30 | 2225 | 2250 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2225s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | and using them as rescoring mechanisms so statistical language models really got their start often in these tasks so let's move towards actually having a computational model of language so first maybe we'll do some pre-processing like lowercasing so we'll take some maybe messed up text and turn it into just all lowercase to simplify it well then you know maybe set a vocabulary size to just like make the distribution easier to handle to set it to like you know a million tokens or something so we might substitute a rare | 00:37:30 | 00:38:03 | 2250 | 2283 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2250s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | word like countertop with like an unknown token just so we kind of don't have to deal with this like potentially open-ended probability of observing a new novel word we've never seen before and then finally we'll use something like a tokenizer which will take an input string and return a sequence of tokens so it'll chunk it into a sequence somehow with kind of some rules or logic and you know this is another example of classic NLP work on designing tokenizers so we might take you know the cat sat on the mat and chunk it | 00:38:03 | 00:38:33 | 2283 | 2313 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2283s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | into just the words and you know throw that punctuation on the end there and then you know because this is machine learning we're basically representing these words as you know unique identifiers or indices and that's again a way to get a window into how a machine learning model really sees natural language you know we come in as humans with so much understanding and context and from like lived experience but you know if you try to train a naive supervised learning model and you started from random initialization it's a | 00:38:33 | 00:38:59 | 2313 | 2339 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2313s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | lot harder to understand what 223 in 1924 742 followed by 101 23 etc is and I think this helps you get into the mindset of when people talk about machine learning models being spurious pattern matchers or just learning weird correlations that aren't true if you've looked at a bunch of these and like tried to do natural language processing tasks as a human where your inputs are represented in this format you'd probably be a lot worse than current machine learning models already are and it'd be understandable if you made kind | 00:38:59 | 00:39:27 | 2339 | 2367 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2339s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | of mistakes as an algorithm trying to figure out how to make sense of any of this especially once you get to some more complicated task like do these two sentences logically reason or follow each other you could even just do a simpler thing like split it on spaces so there's a huge design space here and I'm just providing a few examples right now okay so there's character level there's byte level which would be kind of working on you know if you just work on characters how do you deal with non-ASCII text or text in you know non | 00:39:27 | 00:39:56 | 2367 | 2396 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2367s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | Roman or sorry non-standard like lettering systems so you could work on like a standard encoding scheme like a utf-8 byte stream you could also work on unicode symbols or code points and then there's kind of these middle grounds between word level and character level which would be something like byte pair encoding and this one actually turns out to be super important so I'm kind of just covering it as part of general NLP methods and it's used by quite a lot of methods in the space now so what this does is it starts with the | 00:39:56 | 00:40:25 | 2396 | 2425 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2396s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | character level vocabulary and it kind of just merges the two most common pairs of characters at a time so you might have T and H be the most common pair of words or characters and then you'll combine them into a new token called th and you'll merge that and you'll resubstitute it in all of your words and then you'll run this loop again and so if you run this and kind of just keep merging and merging and merging it learns basically a tree of merges that quickly pop out full words like the and you know common endings like ing | 00:40:25 | 00:40:53 | 2425 | 2453 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2425s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | and he and to and this learns something that kind of lets us handle potentially the full distribution of language while also having maybe the efficiency of representing semantic chunks like words instead of operating on these characters which might result in strings that are five or ten times or five times longer and require like much more compute and have much longer term dependencies that are difficult to handle than standard word models so byte pair encoding from Rico Sennrich is all over the place and is a very common | 00:40:53 | 00:41:24 | 2453 | 2484 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2453s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | middle ground to back off to character level if you see something rare you don't know or to handle like all these different languages while still having some sort of like kind of sensible handling of common words and frequencies hey Alec is there a common kind of number of byte pairs that you want to end up with cuz it sounds like you start with byte level which is just 256 possibilities and then you could imagine that you can have many many byte pairs and sometimes it goes beyond pairs I think right where you recombine an | 00:41:24 | 00:41:59 | 2484 | 2519 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2484s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | existing pair with an initial just a byte like the in and the g yeah yeah when do you stop so yeah that's a good question usually you have a heuristic for merging only across or only within words so you won't merge across like word boundaries with like whitespace or things like that and that just helps with efficiency because otherwise you'll start wasting merges on things like you know common like pairs of you know maybe filler words or stop words and the other thing is you could just in the limit run | 00:41:59 | 00:42:29 | 2519 | 2549 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2519s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | this all the way out to a full vocab but we often work on this kind of middle ground where you know to get good coverage of natural language you often need a hundred thousand plus words and you know in the limit if you want to start having you know common names and places you need really like million sized vocabularies and that can just be incredibly computationally expensive so you'll often stick this in a middle ground of like 32K BPEs and you're absolutely right that it'll merge all the way up to a full word like you will | 00:42:29 | 00:42:55 | 2549 | 2575 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2549s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | get things like you know neurobiology in the limit that would be merged all the way with BPE just by doing merges over and over again yep thank you cool so how do we compute the probability of a string well the dumbest model is we can just assume a uniform like prior over tokens and assume they're all independent we just product their independent probabilities together to compute it for any arbitrary sequence dumbest model but we'll start somewhere all right so let's get rid of some of these dumb assumptions so we could | 00:42:55 | 00:43:25 | 2575 | 2605 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2575s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | suddenly like say well we know some words are more common than others and that kind of word co-occurrence matrix has that diagonal term which is just the frequencies or counts of words so we could use that instead and you know that would just allow us to say well the word the is really common so we're going to send more probability mass to it and you know the word supercalifragilisticexpialidocious is just pretty rare so this would be called a unigram model where all we do is we just product proportional to the | 00:43:25 | 00:43:48 | 2605 | 2628 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2605s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | probabilities of the tokens from like the empirical distribution and again we can estimate that just by counting again we can then go a bit farther and start to exploit context so again we've talked before about how important context might be and this is where you can start to see a language model begin to handle it potentially so you can say instead that instead of estimating just like the you know diagonal of that matrix we can use that full matrix basically and say well yeah given that you know we just saw the word the | 00:43:48 | 00:44:16 | 2628 | 2656 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2628s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | how often is the word cat after the word the and so we can kind of condition on that previous token and you know use a modified version of that like look at our count table and start to handle a little bit of context so that's a bigram model a bigram language model but there's a problem of generalization here and this is where kind of counting methods eventually like hit their limit and yeah we can brute force them with all the data on the internet but at the end of the day they're not flexible enough so let's say you've | 00:44:16 | 00:44:41 | 2656 | 2681 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2656s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | never seen a word before like self attention you can't assign zero probability to that if you're trying to optimize for like log loss or something because you just get an infinite loss and you know if we just start going to longer and longer strings this count method explodes and the observances of every substring get rare and rare and this just kind of hits a wall so in the like 80s and 90s the way we kind of handle this is we kind of accepted that we couldn't handle the longer term dependencies here and we kind of use | 00:44:41 | 00:45:12 | 2681 | 2712 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2681s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | clever smoothing methods of mixture models where you might put a lot of your probability on you know the bigram or trigram model which is more expressive but you'll smear probability backing off if you don't see a word for instance or don't have a match to that unigram model or uniform model in the limit and so this was kind of what you saw language models in the 80s and 90s spend a lot of their time on is they kind of were these very they were still basically count tables and statistical count tables but they | 00:45:12 | 00:45:39 | 2712 | 2739 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2712s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | optimized for kind of achieving something looking more like generalization of a simple form of kind of combining these mixture models and so this is a good review paper if you ever want to kind of go back through the history of this and all the different methods develop there you start to get things that look more like representation learning and even multi-layer models so they'll start doing things like clustering over parts of speech or substituting for that so it's a very hand engineered way of potentially adding expressiveness but | 00:45:39 | 00:46:04 | 2739 | 2764 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2739s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | it's a good history of kind of where these methods came from so you know since we're talking about NLP and language models as one of the core workhorses here kind of how do you evaluate and interpret a language model well probabilities are often within rounding error of 0 since language is a huge discrete space and a sentence might you know or a document might just be very long and so the most common way of evaluating these models and saying how well does it do is we use a quantity that's not dependent on the length so we | 00:46:04 | 00:46:33 | 2764 | 2793 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2764s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | use like the average negative log probability per token and you know this token definition again might be arbitrary character level or might be word level and so if we're using character level we might convert from you know base e to base two and report bits per character or bits per byte you see a lot of common language modeling benchmarks work in this setting and word level language models often exponentiate that quantity and report what they call the perplexity instead so yeah it's just giving you bigger numbers | 00:46:33 | 00:47:01 | 2793 | 2821 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2793s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | and better improvements because you're working on an exponentiated log scale so how do we ground these numbers they're kind of abstract or random quantities you know what is the difference between one point two three bits per character and one point two bits per character especially if you just spent pretty much your life working on a paper and that's the number you got out so you know it's important to understand these quantities are data set dependent it's really easy to guess all zeroes it's really hard to guess the | 00:47:01 | 00:47:23 | 2821 | 2843 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2821s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | arXiv and you know you can start calibrating the scales by saying well random guessing would get you you know log two of you know one over 256 so eight bits per character and human estimates from not the best studies but the only ones we've got have kind of tried to peg people in the range of like zero point six to one point three bits per character and the best of the models now are often a little bit lower than one bit per character so that range probably is lower for humans and we're somewhere you know getting okay but not | 00:47:23 | 00:47:53 | 2843 | 2873 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2843s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | matching humans on these kind of quantities and you know a way of grounding perplexities is to use the same random baseline so it ends up just matching the vocabulary size for like a standard model so random guessing would be a perplexity of 50k and one way of thinking about perplexity is as like a branching factor of language so perplexity to the N is like the space of possible generations of length N how many your model might assign so if you have a perplexity of 10 for a language model and you generate you know a two like | 00:47:53 | 00:48:22 | 2873 | 2902 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2873s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | word sequence there might be a hundred kind of high probability events in that space and human level estimates again are between five and ten and an example though again is this is always data set dependent always problem dependent if you have a lot of well constrained context like in translation these numbers can be a lot lower and best models are often like three perplexity on translation so you're picking between maybe three possible likely words and you know that kind of agrees with like maybe there's a few | 00:48:22 | 00:48:48 | 2902 | 2928 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2902s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | different ways for a human to translate a sentence but it's not a huge space by any means so evaluation type 2 is kind of what we talked about so that evaluation type 1 is very much the generative model perspective of like well how good of a probabilistic model is this and so type 2 is instead kind of transfer and the things we're really talking about and caring about more there's a lot of ways we could use these language models so you could say how well does a better language model potentially improve the word error rate | 00:48:48 | 00:49:14 | 2928 | 2954 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2928s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | for your speech recognition system or the BLEU score for your translation system or the accuracy for your document classification and this is kind of where NLP has really taken off leveraging these language models and kind of the history of the last five years has been discovering more and more ways we could use smarter and smarter language models or better and better language models to do more and more things so let's go through kind of the history here of kind of the sequence of developing real context models models | 00:49:14 | 00:49:39 | 2954 | 2979 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2954s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | that can generalize better than these kind of count based methods we've so far been kind of using for all of our discussion so the first one here is surprisingly you know like honestly this paper is amazing if you go back and read it it's from Yoshua Bengio and from 2003 and it has a ton of very modern things in it it has skip connections like you see in things like ResNets in 2003 you know it's learning distributed representations of words and this is kind of that core concept we mentioned right at the beginning of like | 00:49:39 | 00:50:09 | 2979 | 3009 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=2979s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | representing a word by a vector with learned values for each location this is like the paper that kind of really introduced this in the neural setting and they were doing like large-scale distributed asynchronous SGD on a cluster even back then in 2003 they had to do it because single machine CPUs were so slow so this is like I think it's a 64 or 128 CPU cluster and it would take them I think a month to train a model with like three layers and you know sixty hidden units so this is still an n-gram model but we're using a multi-layer | 00:50:09 | 00:50:45 | 3009 | 3045 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3009s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | perceptron to compute the kind of conditioning on the context so instead of this kind of you know count based method we have an MLP that looks at you know the index for word t minus one the index for word t minus two and you know t minus let's say just a three word context so these three vectors get concatenated together of you know the last three words seen we then run it through a hidden layer and then we feed it through a softmax to try to predict what the next word would be so this is a trigram language model still but we've changed | 00:50:45 | 00:51:14 | 3045 | 3074 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3045s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | the model from a count based method to a distributed setting with an MLP and you know this was kind of the first paper that heroically showed that they could match the performance of some of those super optimized n-gram models but again it took like ten days or a month and it was on a giant cluster so you know neural language models really had some catch up to play compared to these smart quick count methods and a lot of what took this so long was just unfortunately they do need a lot more compute so then the next major step | 00:51:14 | 00:51:43 | 3074 | 3103 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3074s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | here was kind of moving away from these fixed context windows which are kind of unsatisfying we know that as humans we can look back in pieces of text and condition on multiple sentences but these kind of methods so far have had fixed context windows and have only been able to process or condition on just the last few words so this is kind of where RNNs come in and Tomas Mikolov's 2010 paper it's kind of the first modern deep learning version of this that kind of started working quite well so we | 00:51:43 | 00:52:12 | 3103 | 3132 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3103s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | replaced that MLP with a recurrent neural network and that allows for handling potentially unbounded context now it handles that context in a learn fashion so you'll get an input word vector at one time step and you'll have this context buffer which is a learned memory state that the RNN kind of modifies and updates and you'll use that to kind of represent a running summary of everything you've seen that's important for predicting the next potential word this has potentially unbounded context but in practice we'll | 00:52:12 | 00:52:39 | 3132 | 3159 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3132s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | train it with method methods like truncated back prop where we only update for and compute you know how to modify the the transition function of the state for up to like maybe 32 words or 64 words so it might be biased in that way but it's kind of just still can potentially learn to encode a lot of information into a learned memory system instead of kind of using like hard coded methods of just like keeping the explicit input presentations so here we get again like probably one of the first real language models where on that | 00:52:39 | 00:53:12 | 3159 | 3192 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3159s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | previous one we were still using a type 1 evaluation we're just like how well can you predict the next word but here with Mikolov's paper they showed that if you ran this for a speech recognition system you can actually get a much lower word error rate you not only predict better and you start really improving over the traditional like n-gram based language models but if you look at this word error rate table here you actually see that it improves the speech recognition system so your transcriber will make much potentially | 00:53:12 | 00:53:42 | 3192 | 3222 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3192s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | like you know over 1 to 2 so the k10 baseline here is 13.5 percent were word error rate so you messed up 13.5 percent words and using all these are nouns together you could actually reduce that by over two points and so you're talking about like a 20 percent error reduction which is quite significant yeah this is like a lot of early language models were actually published in speech conferences because this was such a important and exciting application of them to start with and again you don't need to collect | 00:53:42 | 00:54:09 | 3222 | 3249 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3222s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | a bunch of speech transcription data here in the limit you could just run this thing over like New York Times articles and then use it to help potentially with your speech transcriber and that's where a lot of the power comes in from an unsupervised scalable method and transfer capabilities so we already showed samples from this one but it's kind of a slightly different version where all these models so far have been operating on words and kind of pre-built tokenizers to split it off and chunk it and kind of fixed | 00:54:09 | 00:54:36 | 3249 | 3276 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3249s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | vocabularies the exciting thing with character level models so it's the same kind of architecture a recurrent network it approximates a richer transition function where you might have a different set of weights with multiplicative interactions this was back when we thought optimization was hard so it's using second-order optimizers because RNNs are scary and we still hadn't gotten used to like just first-order methods working well and you know it begins to handle much longer term dependencies when you work on | 00:54:36 | 00:55:05 | 3276 | 3305 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3276s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | character level you're you know suddenly talking about sequences that are five times longer so you start having models that maybe handle hundreds of time steps and that starts you know abstractly meaning maybe you could have a model that could actually parse a paragraph or parse a page and you know it wasn't a lot better than n-gram models in terms of its perplexities but it was very easy to sample from and this was kind of one of the first I think demos that people might have seen online of a language model back on | 00:55:05 | 00:55:29 | 3305 | 3329 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3305s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | some University of Toronto static website from like 2011 so the next model oh like quick question when you look at character level models versus word level models can you directly compare the perplexity uh if you're careful yes so you know in the limit these are both just predicting a sequence and if you set it up correctly you could just like you know here would be the simplest way to do it with a character level model sorry I should clarify you can go one way so you can compute for a character or byte | 00:55:29 | 00:56:09 | 3329 | 3369 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3329s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | level model the perplexities that it would assign to a word level model but some word level models might have limitations like using unknown tokens or out of vocabulary that means they can't actually compute probabilities of arbitrary sentences whereas that's one of the expressive benefits of a character level model so the simplest way to do this would be you would convert the word level model's like token sequence it processed like let's just say it's split on spaces you'll just rejoin on spaces and then compute | 00:56:09 | 00:56:34 | 3369 | 3394 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3369s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | the probabilities the character level model assigns and you'll have an adjustment factor you could just sum the probabilities over the full sequence and then renormalize by the relevant metric and we'll actually be using that later to talk about how to compare different language models more appropriately but again you need to have the expressivity to handle an arbitrary string to be able to compute this and you know old models because of their computation often worked with small vocabularies so they wouldn't truly be computing the | 00:56:34 | 00:57:01 | 3394 | 3421 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3394s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | probability of arbitrary strings because they might normalize them in various ways got it thank you yep so the next step is kind of going to multi-layer LSTMs and also introducing the LSTM again even though it came out in 2000 and kind of got popularized primarily by you know one of the major people that popularized it was Alex Graves in kind of 2013 ish so this is again a character level model except we now have a gated RNN which uses kind of these multiplicative gates and more complicated transition dynamics to | 00:57:01 | 00:57:33 | 3421 | 3453 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3421s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | better store state and to help compared to like a multiplicative RNN with kind of credit assignment and just trainability and you start to get the things that like can handle kind of arbitrary strings of text so you get you know something that's learning how to parse Wikipedia markdown or XML and Andrej Karpathy kind of really popularized these models with like some blog post in 2015 showing that they like you know work for LaTeX they work for XML they work for Python programs you know they're | 00:57:33 | 00:58:01 | 3453 | 3481 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3453s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | not generating valid things but they kind of can handle this they really have this flexibility of kind of you know feeling exciting from an unsupervised learning perspective you give them some data distribution of like Python programs or something and you just have it you know train over that and then you get something that looks like it's really drawn from that distribution so we've kind of been talking through like a lot of the early work here and although there was one example with the Tomas Mikolov paper | 00:58:01 | 00:58:27 | 3481 | 3507 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3481s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | of actually having an application a lot of this was kind of just like competing for competing's sake on type one evals or like look at the funny samples so again one of the very fascinating things about the last few years of NLP has been how we figured out how to really use these things much more broadly across the board and this is where I think it really starts to get exciting so one of the first papers to do this was the skip-thought vectors paper from Jamie Kiros and collaborators in 2015 and so what they did is they proposed learning | 00:58:27 | 00:58:56 | 3507 | 3536 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3507s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | an RNN sequence encoder to provide context to a language model and basically to learn how to use a sentence level feature extractor so what I mean by that is let's say we have a sentence I could see the cat on the steps what this model is trying to do is it first ingests this context sentence in the middle and they call it skip-thought vectors because you can think of this as basically that skip-gram model that was again you take a word in the center and then you predict the word before and the word after this is | 00:58:56 | 00:59:25 | 3536 | 3565 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3536s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | generalizing it to sentences and it's using an RN end to kind of learn to summarize the context of the long sequence and handle kind of predicting complex dependencies between multiple words so we encode that center sentence with an RNN we iterate over it in the left-to-right fashion and then we have Anna linguish model that predicts the previous sentence so what might have happened before the sentence and then a language model that also predicts the suffix sentence that comes after it so what they then do is they say well | 00:59:25 | 00:59:55 | 3565 | 3595 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3565s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | you know a model that does this test very well should learn to kind of summarize this sentence in the middle and for our nen's it's you know still these distributed representations so you have this state vector that's representing kind of an alert fashion all of the previous words you've seen so importantly that's now generalized from representations of single words to representations of sequences that can exploit context and potentially handle more complex properties and just big u8 meanings of words and they showed across | 00:59:55 | 01:00:23 | 3595 | 3623 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3595s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | the board that these models healthily outperformed classic methods like so Cibao with word effect would be the simplest version so you know what does a document how do you represent a document with word effect well one of the future observations you could take is to just average the embeddings of each of the each of the words in the document and that would be what this like Cibao baseline here is on a bunch of different data sets and so you could instead say well we're gonna you know we somehow learned this sorry we somehow learned | 01:00:23 | 01:00:52 | 3623 | 3652 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3623s | |
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | this sequence extractor, we could run it, take its feature representation for each sentence, and use that instead. And you kind of see that these models, you know, if you use the combine-skip variant, the bi-directional models using the forward and backward versions, you can actually get these to start to outperform the word2vec models kind of across the board. And this paper was kind of exciting, especially for the breadth of things they do; they have things like image captioning representations that they | 01:00:52 | 01:01:20 | 3652 | 3680 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3652s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | learn, and they kind of show, you know, with analysis methods like t-SNE, that you see clustering according to classes. You know, it was kind of like on the edge where it showed some pretty exciting promise, and it was, you know, a lot stronger than, potentially, a simple baseline, but there were still other discriminative methods for, like, training models from scratch that were still matching it, with, you know, well-designed convnet architectures or things like this. So although this had, like, a very exciting kind of 'oh, it's a | 01:01:20 | 01:01:49 | 3680 | 3709 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3680s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | learned feature extractor that's able to handle long-term contexts and dependencies', it kind of worked, but it wasn't like sweeping the SOTAs away. It was, you know, exciting, and honestly I think a lot of people ended up using it more as a language model, where they saw some cool demos of having it generate multiple sentences, but it never really quite, you know, blew everyone away with its quality. So, you know, this is like a good early hit, but it wasn't a home run by any means. And so this is where Andrew Dai's paper from 2015, | 01:01:49 | 01:02:19 | 3709 | 3739 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3709s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | semi-supervised sequence learning, kind of comes at it from a slightly different angle. So again, for skip-thought vectors, we just used this vector representation as an input to a model, and we fix the model itself and just, like, train another model on top of this vector representation, and it's a vector representation summarizing the whole sentence. So maybe that's kind of a difficult task, to summarize all the complexities of long sentences, short sentences. So what Dai et al. did instead was they said, we'll take this language | 01:02:19 | 01:02:48 | 3739 | 3768 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3739s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | model that we've learned and we're just going to fine-tune it directly. We're not going to, like, precache the features, kind of word2vec style; we're just gonna, you know, take whatever parameters that language model learned predicting the sequence, and we're going to use that as an initialization point for training a supervised model for a downstream task. And this is the one that started to get good results, and they were showing, compared to standard supervised learning on, you know, datasets with like 20,000 labeled examples and stuff like that, | 01:02:48 | 01:03:14 | 3768 | 3794 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3768s | 
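A minimal sketch of that two-stage recipe: pretrain a language model on unlabeled text, then reuse its weights as the initialization for a supervised classifier and fine-tune everything. The module names, sizes, and the plain LSTM here are illustrative assumptions; the original paper also explores a sequence autoencoder as the pretraining objective.

```python
import torch
import torch.nn as nn

class LSTMLM(nn.Module):
    """Stage 1: a plain LSTM language model trained on unlabeled text."""
    def __init__(self, vocab_size, emb=256, hid=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True)
        self.head = nn.Linear(hid, vocab_size)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h)  # next-token logits

class FineTunedClassifier(nn.Module):
    """Stage 2: reuse the pretrained embedding and LSTM as the initialization
    for a supervised classifier, then train the whole thing end to end."""
    def __init__(self, lm: LSTMLM, num_classes):
        super().__init__()
        self.embed, self.lstm = lm.embed, lm.lstm   # pretrained weights
        self.classifier = nn.Linear(lm.lstm.hidden_size, num_classes)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.classifier(h[:, -1])  # classify from the final state

lm = LSTMLM(vocab_size=20000)
# ... pretrain `lm` with next-word cross-entropy on unlabeled text ...
clf = FineTunedClassifier(lm, num_classes=2)
# ... fine-tune `clf` with cross-entropy on the labeled downstream dataset ...
```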
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | that these models could get quite far. And so you see in the limit that, you know, you kind of have all of your different baselines here of, you know, word vectors feeding in as inputs, but then we could use like a sequence autoencoder or a sequence language model and fine-tune that, and you start getting quite large drops here. And what's kind of cool here is these two different rows here: one of these is pre-training only on the IMDB movie reviews, so basically the same data set, it's a two-stage algorithm, and then this third table here, | 01:03:14 | 01:03:45 | 3794 | 3825 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3794s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | or this third row here, is using a bunch of unlabeled Amazon reviews, and that's, you know, starting to get towards transfer learning, starting to get towards, well, we can run this thing over a lot of data, and as we get more compute we can just get more data from the internet, we can feed in more, and we see that that actually improves things significantly over only using, like, the small standard supervised learning dataset in isolation. Some of this might have just been that at the time it was difficult to train language models and | 01:03:45 | 01:04:09 | 3825 | 3849 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3825s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | train RNNs back in the day, but you know, as we'll see for the rest of this lecture, the methods have kind of continued to build on top of this and continue to make progress. This is the first one where it got a strong SOTA, and you know, there were strong baselines before. Well, to be fair, it came out and not much work happened in the space for the next two years, and a lot of that was because it, like, really just killed it on these sentiment datasets and not as much elsewhere, and it really | 01:04:09 | 01:04:40 | 3849 | 3880 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3849s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | took some further work to kind of figure out how do we make this a generalizable approach that works kind of everywhere, the same way that, like, plugging in word vectors does. So moving back for one moment to a type 1 eval, there was a follow-up paper in the next year that really started to push on the scale and compute used for training language models; as we mentioned before, they've kind of always been compute limited. So this was that Google paper that showed kind of the first language model that could generate something | 01:04:40 | 01:05:05 | 3880 | 3905 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3880s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | like a coherent sentence, and a lot of it was using a larger data set, so the Billion Word Benchmark was a big data set at the time, and they use an 8K hidden unit projection LSTM, which is basically a low-rank factorization of, like, the transition matrix, just to keep the parameter count down while keeping the state size high. It's character-aware, with some improvements that let it process character-level inputs, so you kind of see on the right that this is starting to get to be a kind of complex system, and then they throw | 01:05:05 | 01:05:35 | 3905 | 3935 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3905s | 
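To make the projection idea concrete, here is a small back-of-the-envelope Python sketch comparing the recurrent parameter count of a vanilla LSTM with that of a projected one; the 8192 and 1024 sizes are illustrative guesses at that style of configuration, not numbers taken from the paper.

```python
# Rough parameter count for the recurrent weights of one LSTM layer,
# with and without a projection layer (sizes are illustrative).
hidden = 8192      # large state size
proj = 1024        # low-rank projection dimension

# A vanilla LSTM has 4 gate matrices acting on the full recurrent state:
vanilla_recurrent = 4 * hidden * hidden

# With projection, the state is first mapped down to `proj` dims, so the
# recurrent matrices become 4 * (hidden x proj) plus one (hidden x proj)
# projection matrix: a low-rank factorization of the transition.
projected_recurrent = 4 * hidden * proj + hidden * proj

print(f"vanilla:   {vanilla_recurrent / 1e6:.1f}M recurrent params")
print(f"projected: {projected_recurrent / 1e6:.1f}M recurrent params")
# prints roughly 268.4M vs 41.9M: same state size, far fewer parameters
```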
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | you know, a large vocabulary, they throw 32 K40s at it, so 32 GPUs for three weeks, and they really got a huge improvement over the previous results. And at this point those old n-gram language models, the old statistical methods, were in the mid 40s, or even in the 50s and 60s, or were hybrid systems, and suddenly you're at like 23.7. So you basically have this metric, you know, again it's exponentiated, so it's actually like a 20 percent reduction in just actual log loss, but you know, you're starting to see a lot of | 01:05:35 | 01:06:08 | 3935 | 3968 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3935s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | significant progress in this space just throwing scale at it. And this ended up being, you know, something that was really developed just to push it and say how far can we get, you know, sentence quality, can we start to get something that looks like coherence. And one of the surprising results is it turned out that this actually paved the way for further methods; even though it was just designed to be a really good language model and just better predict the next word, it ends up laying the foundations for a method we'll talk about in a little bit called ELMo | 01:06:08 | 01:06:35 | 3968 | 3995 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3968s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | that really was the first one to crack how do we use these LMs all over the place and start seeing it working for question answering and, you know, summarization and all these different domains. So there's kind of a bit of a tidbit here; we're at an hour, should we stop for a little bit, or let me check out a stopping point, I'm gonna go a bit farther, we could go to about an hour 30 and stop for a little bit longer, that is my kind of preference, yeah. So, you know, I've motivated scale a little bit; so like I mentioned, there's a whole | 01:06:35 | 01:07:06 | 3995 | 4026 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=3995s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | internet out there, there's so much information, and that perfect language model would, you know, basically from one view need to fit the Internet into its parameters. Given how big it is, it's not surprising that we're going to need a big model to do that; we're going to need a lot of compute, potentially, to do it, to get as close as possible. And for many of these tasks we're talking about, where you want to learn long-term dependencies, we want to learn complicated tasks, you know, they might be quite rare, they also are quite difficult, so you know, the | 01:07:06 | 01:07:30 | 4026 | 4050 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4026s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | closer you get, the better you are at maybe learning real interesting behaviors, versus kind of a basic system that just, like, is locally plugging a few words together. So another, you know, just vivid way of pointing this out is a small character RNN is basically gibberish; you know, this is what happens. You know, this can be a very good architecture, but if you don't give it capacity it just can't really learn language; there's so many words, there's so many objects, there's so many relations, you really need a lot of | 01:07:30 | 01:07:56 | 4050 | 4076 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4050s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | expressivity to handle all that complexity. And you know, another way of pointing this out is classic resources that were built by humans trying to map out kind of like the relations between all words in natural language, you know, build hierarchies over them; so there's really heroic efforts here like WordNet. They were larger than many of the language models we were still training, especially a few years ago; so it might have like five point five million relational features in this package, and you know, when you | 01:07:56 | 01:08:21 | 4076 | 4101 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4076s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | have it zipped on disk or unzipped on disk, it's like already 55 megabytes, and you know, a lot of common language models, especially early on, were only a few megabytes of parameters. And so we know this is probably going to be insufficient, and you know, we're probably going to need quite large models, and right now, you know, the answer we have so far is to kind of address these facts with scale. And you know, hopefully we do find out more efficient approaches, and we'll talk a bit about that later too, but right now, you know, kind of the | 01:08:21 | 01:08:51 | 4101 | 4131 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4101s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | first dumb thing you try is to brute force it with scale. And you know, another reason why this is worth investing in is it's now a very well validated empirical trend. So across the bottom here is, for both language modeling and for like computer vision, kind of the performance of models laid out on log-scale plots, where you see you have a log-scale x-axis, which might be the amount of words you train on, so every new tick is a doubling of the data set size; you know, log scale is not great because it quickly gets inefficient, but these | 01:08:51 | 01:09:22 | 4131 | 4162 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4131s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | trends are incredibly linear, they're very predictable, so it's almost like the natural kind of domain to think about is, like, how does this look on a log scale. And you see that again for language models on the left, and for, like, the performance of captioning systems, or sorry, image classification systems, on ImageNet in the middle. So these are quite consistent trends and they span now quite a few orders of magnitude; so far they've continued to improve from 6 million parameters up to 600 million on, like, | 01:09:22 | 01:09:51 | 4162 | 4191 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4162s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | ImageNet, and you know, data set sizes spanning probably over two orders of magnitude there, yeah. And also compute is becoming available, as investment of more resources in machine learning and AI and improvements in hardware and distributed training have kind of allowed for; you know, even though there's this logarithmic, or this heavy, demand for additional compute to see kind of finite-sized improvements, at least as of yet, kind of the industry as a whole has been developing techniques | 01:09:51 | 01:10:21 | 4191 | 4221 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4191s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | and systems to keep providing that additional compute to keep these trend lines going. So that was kind of just a quick digression on why scale might be important, and it really intimately plays into, like, where these language models came from and how they kind of had their success. So here's, like, kind of a cute example, starting to get away from just learning these kind of feature representations that could then be reused by downstream tasks, towards maybe we can learn the tasks themselves without having to have | 01:10:21 | 01:10:49 | 4221 | 4249 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4221s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | standard human-labeled feedback, and this kind of shares that intuition with, like, what we were talking about, you know, computing the probability of the string 'I rate this, you know, one star out of five' after seeing the prefix of the product review. So this is a paper I did in 2017, which was, like, kind of a very targeted experiment here, and one of the hypotheses I was working on was that maybe just data was the bottleneck; you know, our models are so inefficient that if we were able to just tile, kind of in an unsupervised fashion, the landscape of | 01:10:49 | 01:11:19 | 4249 | 4279 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4249s | 
BnpB3GrpsfM | L11 Language Models -- guest instructor: Alec Radford (OpenAI) --- Deep Unsupervised Learning SP20 | one domain we might care about, like product reviews, we could maybe do quite well. So we made a much larger dataset, or rather we used an existing data set from, I think, UCSD and Amazon in partnership, which had 40 gigabytes of text, so that was way bigger than that Billion Word Benchmark, and it's all in just one domain. And we trained a byte-level language model on this for, you know, a reasonable amount of compute, a month on four Titan Xs; the model ended up underfitting a lot, but you know, one of the most interesting | 01:11:19 | 01:11:52 | 4279 | 4312 | https://www.youtube.com/watch?v=BnpB3GrpsfM&t=4279s | 
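To ground what a byte-level language model means in practice, here is a minimal PyTorch sketch where the vocabulary is just the 256 possible byte values and the objective is next-byte prediction; the plain LSTM, sizes, and example string are stand-in assumptions (the actual model in that work was a multiplicative LSTM trained at much larger scale).

```python
import torch
import torch.nn as nn

# A byte-level language model: the "vocabulary" is just the 256 possible
# byte values, so any UTF-8 text can be fed in with no tokenizer at all.
class ByteLM(nn.Module):
    def __init__(self, hid=512):
        super().__init__()
        self.embed = nn.Embedding(256, 64)
        self.rnn = nn.LSTM(64, hid, batch_first=True)
        self.head = nn.Linear(hid, 256)

    def forward(self, bytes_in):
        h, _ = self.rnn(self.embed(bytes_in))
        return self.head(h)  # logits over the next byte

text = "This product exceeded my expectations."
data = torch.tensor(list(text.encode("utf-8")))[None, :]   # (1, seq) byte ids
model = ByteLM()
logits = model(data[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, 256), data[:, 1:].reshape(-1))  # next-byte prediction
```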