video_id | text | start_second | end_second | url | title | thumbnail |
---|---|---|---|---|---|---|
hQEnzdLkPj4 | have this confusion matrix of their method, and it shows that the confusion matrix is pretty much block diagonal along these super-clusters right here. So you can see the network confuses the dogs fairly often, and the insects with each other, but not really across the super-clusters, which is still quite remarkable. But I mean, you get the same thing for a | 2,421 | 2,448 | https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2421s | Learning To Classify Images Without Labels (Paper Explained) | |
hQEnzdLkPj4 | lot of these methods, so I don't know how much different this would be in other methods, but certainly it's interesting to look at. Now they go into one last thing, and that is: what if we don't know how many clusters there are? If we don't know anything? So far we have assumed to have knowledge about the number of ground-truth classes, and the model predictions were | 2,448 | 2,474 | https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2448s | Learning To Classify Images Without Labels (Paper Explained) | |
hQEnzdLkPj4 | evaluated using the Hungarian matching algorithm; we already saw this in DETR by Facebook, if you remember. However, what happens if the number of clusters does not match the number of ground-truth classes anymore? So they now say: Table 3 reports the results when we overestimate the number of ground-truth classes by a factor of two. Okay, so now they build twenty clusters for CIFAR-10 | 2,474 | 2,502 | https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2474s | Learning To Classify Images Without Labels (Paper Explained) | |
hQEnzdLkPj4 | instead of ten classes, and we're going to look at Table 3 real quick. Where's Table 3? This is Table 3. Okay, so when they over-cluster, you get the thing here on the bottom, and you can see there is a drop in accuracy right here. Now, they don't actually say how they do the over-cluster matching. So imagine I now have, I don't know, six clusters, but I only need | 2,502 | 2,538 | https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2502s | Learning To Classify Images Without Labels (Paper Explained) | |
hQEnzdLkPj4 | to assign them to three classes. Do I still use this most optimistic thing? I think they still use this most optimistic matching, where you assign everything to its best-fitting cluster: you compute all the permutations and then you give it the best benefit of the doubt. Now, if you imagine the situation where I over- | 2,538 | 2,568 | https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2538s | Learning To Classify Images Without Labels (Paper Explained) | |
hQEnzdLkPj4 | cluster to the point that I have each image in its own cluster, and I run this algorithm to evaluate my clustering, I give it basically the most beneficial view, and then I would get a hundred percent accuracy. Okay, so in this over-cluster approach I would sort of expect that you actually get a better score, because there is more generosity in the matching algorithm involved (see the sketch below). | 2,568 | 2,600 | https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2568s | Learning To Classify Images Without Labels (Paper Explained) | |
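The matching being described can be made concrete with scipy. This is a minimal sketch, not the paper's actual evaluation code; `hungarian_accuracy` and `majority_accuracy` are hypothetical helper names, and the many-to-one "majority" variant is one plausible way to reconcile more clusters than classes, the generous behavior the video speculates about.

```python
# Minimal sketch (not the paper's code): scoring a clustering against
# ground-truth labels with the Hungarian (optimal one-to-one) matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_accuracy(y_true, y_pred, n_classes):
    """One-to-one matching: assumes #clusters == #classes."""
    # Contingency table: counts[i, j] = images of class i put in cluster j.
    counts = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        counts[t, p] += 1
    # linear_sum_assignment minimizes cost, so negate to maximize agreement.
    rows, cols = linear_sum_assignment(-counts)
    return counts[rows, cols].sum() / len(y_true)

def majority_accuracy(y_true, y_pred):
    """Many-to-one 'most optimistic' variant for over-clustering: every
    cluster is credited with its majority ground-truth class. With one
    image per cluster this trivially reaches 100% accuracy."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    correct = 0
    for c in np.unique(y_pred):
        members = y_true[y_pred == c]
        correct += np.bincount(members).max()
    return correct / len(y_true)
```

With `majority_accuracy`, the degenerate one-image-per-cluster case scores perfectly, which is exactly why the reported drop under over-clustering is somewhat surprising.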
hQEnzdLkPj4 | Now, that's counteracted by the fact that you can't group together things that obviously have similar features, because they are in the same class. So there are kind of two forces pulling here, but I was kind of astounded that it's going down, and the evaluation method of this matching algorithm sort of breaks down when you have more classes, at least in my | 2,600 | 2,624 | https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2600s | Learning To Classify Images Without Labels (Paper Explained) | |
hQEnzdLkPj4 | opinion. But it's interesting to see that you can just overshoot, though then you need some sort of heuristic to reconcile that. In any case, I think this paper is pretty cool. It brings together a lot of things that were already present and introduces this kind of step approach. But what you have to keep in mind, and by the way there are lots of samples down here, what you have to keep | 2,624 | 2,653 | https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2624s | Learning To Classify Images Without Labels (Paper Explained) | |
hQEnzdLkPj4 | in mind is that there are a lot of hyperparameters in here: this threshold, and, first of all, the number of classes, the thresholds, the architectures, and so on. And all of this has been tuned to get these numbers really high. All of these steps, all of the augmentations and so on, the chosen data augmentations, have been chosen to get this number as high as | 2,653 | 2,682 | https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2653s | Learning To Classify Images Without Labels (Paper Explained) | |
hQEnzdLkPj4 | possible. So to interpret this as "oh look, we can classify without knowing the labels" is, well, yes in this case, but the hyperparameter choices of the algorithm are all informed by the labels. So it is still very, very unclear how this method will actually work when you really don't have the labels, when you actually have to choose the hyperparameters in | 2,682 | 2,711 | https://www.youtube.com/watch?v=hQEnzdLkPj4&t=2682s | Learning To Classify Images Without Labels (Paper Explained) | |
vzBpSlexTVY | Today I want to talk to you guys about how to get machine learning onto your physical devices when there is no network connectivity, and convince you why you would want to do that, why you would actually prefer, even with network connectivity, to leave it aside in certain situations. But to get there I want to start back. We're gonna go back in time, back to when the computer first | 0 | 29 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=0s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | came about, right, that started this computing revolution that we're in today. But today we're in what we call an AI-first world. How did we get here? Well, first we had the computer, and that gave us computing. Then the internet came; the internet connected all the computers together. And I want to point out that the internet did not wipe out the computer, right? The computer is still | 29 | 57 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=29s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | used today. Then came mobile. The mobile revolution brought the internet to everyone; it brought the connected computers into our pockets. And again we see that mobile did not wipe out the internet. We still use the internet; it's still going strong, even moving to IP version 6 as we run out of IP addresses. And so too will AI not replace mobile, but build on top of it. And so I see mobile as a key foundation | 57 | 90 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=57s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | for the core of how we will interact with AI in the years to come. And so to that end, I want to show you guys one approach for integrating AI into mobile devices and bringing amazing user experiences. Today, over half of the Fortune 500 globally have disappeared, the companies kaput, gone, since 2000. And so how can we not have that situation, first of all, but also, what will happen | 90 | 130 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=90s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | to the other half, right? It's the companies who embrace AI, these startups who, well, nowadays all the startups are saying "we are an AI startup", everyone is a startup. But a few short years ago, machine learning and AI was something that companies would add on to their product. They would say, oh yeah, we do a little machine learning here on the side. But now | 130 | 150 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=130s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | everyone's doing it, it's super popular. And for the most part, people train on the server, right? The server has a lot of compute power, it makes sense. And I'm now going to refute that. Because, yes, it makes sense: mobile is not a compute-powerful platform, so we train on the server, that's fine. But what if we did the predictions on mobile? What if we did inference there instead | 150 | 178 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=150s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | of serving them from a web server? In particular, this approach can lead to amazing user experiences. I want to show just one illustrative example. This is Google Translate, and it has the ability to overlay the translation directly in the image that you see. Now, you can tell just from the speed that it can do this that it is definitely not happening over the | 178 | 213 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=178s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | network, right? The video is not being streamed to Google servers and then sent back. That's why it works when you are offline. It works on a boat, it works underwater, well, if your phone can be underwater, it probably would even work in space. And having access to AI on your phone wherever you go, that is responsive and accurate, that can really change things on a global scale, | 213 | 244 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=213s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | for mobile, for the internet, and for computing. So how can we make something like that? Machine learning is hard enough by itself; then put it on mobile. I mean, mobile apps aren't easy either, and so combining those can be a real challenge. So let me start by showing you guys a little demo of what I've put together, and then we'll talk through how | 244 | 272 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=244s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | we might build something like this. It's a simple demo, mainly just to demonstrate the core functionality. What I've got here is a phone, and I have a little app. If we switch over to the camera here, I have on the table a few different candies. Let's see what we see here. Oh great. So, let's see, can you guys see this okay? So what I'm gonna do is we'll take some of | 272 | 300 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=272s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | these away, and if the lighting is good, you know, here we have a Reese's cup, right, and we can see that as the image isolates down, it recognizes that. Now, I also have a smaller Reese's cup, which just fell on the floor. Okay, so we have a smaller Reese's cup here, and it will switch over and recognize that. Now you might say, well, how do I know that he's not | 300 | 326 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=300s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | cheating by sending all these images over the web? Well, first of all, there's no Wi-Fi here. Secondly, let's just go ahead and hit airplane mode as well, and we'll see here that everything continues to work just fine. These are some more peanut butter cups; I'm a big fan of peanut butter. And so let's push these guys away, and here we have a shot of | 326 | 352 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=326s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | the Justin's. See, here it says white chocolate peanut butter cups, and when my hand enters the frame it may get upset. The candy is a little bit bent out of shape from its days inside the suitcase, but you can see that it clearly recognizes that versus a very similar packaging, right? But this is milk chocolate, this is a milk chocolate peanut butter cup, and this one also | 352 | 376 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=352s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | updates right away. You can see the confidence. And then here we have some Juicy Fruit gum, just for variety; some people don't like peanut butter, I understand that. So that's the gist of the demo here. And so if we switch back to the slides, we can think about how we might build something like this. How do we go from collecting data to having an app that | 376 | 402 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=376s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | can recognize images in real time, custom images? I chose these pretty arbitrarily. If you were to take any generic machine learning visual model and point it at this, it might say candy, candy bar, maybe it even says chocolate, maybe it just says yellow. But how can you get something to recognize something that's specific? You know, imagine having something recognize | 402 | 428 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=402s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | your particular products, whether it's your brand, things in your home. And so these are my guidelines here; this is what we'll follow. This is our little map, our road map, and what we're gonna do is try to fill in each of these blocks and go from gathering data to having an application. So the first thing you've got to do is collect data. Data collection is, as we | 428 | 456 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=428s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | all know, the most fun step of machine learning. Now, I've found a bit of a shortcut for you. Instead of collecting lots of pictures and then trimming them down and then labeling them, which would just be a lot of work, right, it's a lot of pictures, maybe we can shoot some video. We shoot some video, and we only take video of that particular object. So I go through each | 456 | 485 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=456s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | one. Now, you can see there I have one of the peanut butter cups, and we go through each one and we capture a video. And what's nice about that is that in that entire video, every single frame is a picture of that object, hopefully from a different angle, so keep that camera moving. And then what we can do is, well, we chop that up, right? There's a command-line tool called ffmpeg, and | 485 | 510 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=485s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | there are lots of tools and ways you can chop up a video; that's a solved problem (a minimal example follows below). And we put those pictures all into one place, all together in one folder for each of the objects you want to recognize. So one folder for your Juicy Fruit, one folder for your milk chocolate peanut butter cups, one folder for the white chocolate ones, and so on. | 510 | 534 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=510s | On-device Machine Learning With TensorFlow | |
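A minimal sketch of the video-to-frames step the speaker describes, shelling out to ffmpeg from Python. The paths, class names, and frame rate are hypothetical choices; only the ffmpeg flags (`-i`, `-vf fps=...`) are standard.

```python
# Turn one video per object into a folder of labeled frames via ffmpeg.
import pathlib
import subprocess

VIDEOS = {
    "juicy_fruit": "raw/juicy_fruit.mp4",
    "milk_choc_pb_cup": "raw/milk_choc.mp4",
    "white_choc_pb_cup": "raw/white_choc.mp4",
}

for label, video in VIDEOS.items():
    out_dir = pathlib.Path("dataset") / label   # folder name doubles as label
    out_dir.mkdir(parents=True, exist_ok=True)
    # Sample 2 frames per second; every frame is an image of this object.
    subprocess.run(
        ["ffmpeg", "-i", video, "-vf", "fps=2", str(out_dir / "%05d.jpg")],
        check=True,
    )
```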
vzBpSlexTVY | And so now we've effectively just labeled all of the images, right? We didn't have to come up with any sophisticated way to label it for us with some system, so that's great. So we have folders of images. What's next? Well, we take these pictures and we send them to training. In my particular case, I uploaded them to the cloud, because my MacBook was running out of space from all the images and pictures, and so I | 534 | 558 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=534s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | zipped them up, put them in the cloud, and I happened to do my training in the cloud. You can do it in your data center, or on your local machine if you have a lot of storage. And the training we did uses transfer learning. Now, the speaker before me mentioned transfer learning, so I won't go too much into detail about it, but I do have a little story, a little analogy, | 558 | 578 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=558s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | that I recently accidentally discovered. I was playing with a puzzle, a jigsaw puzzle, right, lots of pieces, and this is the final picture, the box showing what I was supposed to build. And I'm not a very good jigsaw puzzle person, I kind of struggle with it, but as I was struggling through this I realized, you know, there are some good tricks in here I could use. First of all, I | 578 | 603 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=578s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | could separate the pieces with no images, the white pieces, from everything else, right? So I moved all of the white pieces to one side. Okay, so that's the obvious first step, but then how do we start from there? I also noticed that in this image the roofs, all the individual roofs, are very distinctly patterned, and so I said, ah, I know, I can look for pieces with this | 603 | 629 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=603s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | pattern, and with that I can begin to put those together. Those were easy to find; I could recognize those patterns. So my brain's neural network, through my eyes, could find those individual small pieces and combine them together and say, ah, there's a roof. Similarly, I noticed the stairs have a distinct pattern: parallel lines, some of those X's there, and I could find | 629 | 656 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=629s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | the pieces that looked similar and begin to squeeze those together, and eventually get that together, and instantly everything else came together. Well, there were a few more steps, right, but that's as far as I got; the edge pieces, they're so hard. So a convolutional neural network kind of works in a similar way. I used transfer learning with the Inception model, which | 656 | 683 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=656s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | is a model that the Google Brain team created a few years ago. It has 48 layers, and to give you some perspective on how big or how small that might be: just a few years ago, I think it was 2011 or 2012, it was impractical to train a neural network that was more than four layers deep. The year before Inception v3 came out, the model that won the international image recognition | 683 | 711 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=683s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | competition called ImageNet had 22 layers. So the Inception v3 model really was a giant leap ahead, right, more than double the number of layers. It really showcased the improvements in both the computational power that was available and network design. So we can take that wonderful research and use it to our advantage: we retrain just the last layer of the network and leave everything else intact (a minimal sketch follows below). | 711 | 739 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=711s | On-device Machine Learning With TensorFlow | |
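A rough modern-Keras equivalent of "retrain only the last layer". The talk actually used TensorFlow's Inception retraining codelab script; this sketch shows the same idea, not the speaker's code, and `num_classes` plus the dataset path are assumptions.

```python
# Freeze a pretrained Inception v3 and train a new classification head.
import tensorflow as tf

num_classes = 4  # e.g. the candy types

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")
base.trainable = False  # keep all the pretrained visual features frozen

model = tf.keras.Sequential([
    tf.keras.layers.Lambda(tf.keras.applications.inception_v3.preprocess_input),
    base,
    tf.keras.layers.Dense(num_classes, activation="softmax"),  # new last layer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The folder-per-class layout from the previous step doubles as the labels.
train = tf.keras.utils.image_dataset_from_directory(
    "dataset", image_size=(299, 299), batch_size=32)
model.fit(train, epochs=5)
```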
vzBpSlexTVY | This means that everything in the visual part, recognizing those pieces, recognizing the little bits, the individual pieces, is already there in place for us. So you have a great model, you can train it, but when you're done, you look at your file system and you say, wow, this model is 84 megabytes, I'm trying to put this in a mobile app, | 739 | 764 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=739s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | can you help me out? Sure thing, we're gonna optimize it for mobile. What can we do to shrink down a model? Well, handily enough, there is a graph transform tool, and there are a couple of steps in there that we can do. The first is a technique called quantizing, or quantization: the floating-point numbers, those 32-bit floating-point numbers that are taking | 764 | 789 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=764s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | up all this space, we're gonna shrink down to just eight bits. How can we get away with this? Aren't we gonna lose so much accuracy? Well, not necessarily. Luckily, neural networks are designed for fuzziness in their inputs, so we can quantize down to eight bits, and not just by rounding, but by actually saying these numbers are close enough, we're gonna make them all, say, this number | 789 | 814 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=789s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | and then this number. So we take the range and split it into 256 pieces. So if the range of values is, say, negative ten to thirty, we divide that up into 256 little steps, and that gives you a little more accuracy than just purely changing it directly to 8-bit (sketched below). Additionally, that means when you do compression, many of those values are the same, and that's what lets you go down literally four x. | 814 | 838 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=814s | On-device Machine Learning With TensorFlow | |
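The 8-bit idea from the talk in plain numpy: map a float range onto 256 levels and back. This is a sketch of the concept only, not what the graph transform tool actually does internally; the example range of -10 to 30 is the one the speaker quotes.

```python
import numpy as np

def quantize(w, lo=-10.0, hi=30.0):
    scale = (hi - lo) / 255.0                        # 256 levels across the range
    q = np.round((w - lo) / scale).astype(np.uint8)  # 1 byte per weight
    return q, lo, scale

def dequantize(q, lo, scale):
    return lo + q.astype(np.float32) * scale

w = np.random.uniform(-10, 30, size=1000).astype(np.float32)
q, lo, scale = quantize(w)
w_hat = dequantize(q, lo, scale)

print(w.nbytes / q.nbytes)                          # 4.0 -> the "literally 4x"
print(np.abs(w - w_hat).max(), "vs half-step", scale / 2)  # error bounded by half a step
```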
vzBpSlexTVY | So you go from 84 megabytes down to around 20, 21 megabytes. And one small additional thing you can do is take away the parts of the graph that you don't need anymore for prediction; there are some graph nodes which are only useful for training, and there's also a tool that will prune those down for you as | 838 | 863 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=838s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | well. So that's also part of this graph transform tool; it's a whole suite of tools, so that's really useful (an illustrative invocation follows below). And I also want to call out that so far, everything we've done is basically running existing code and existing tools. You didn't have to custom-write anything; the only custom thing you had to do was shoot that video and run ffmpeg. | 863 | 889 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=863s | On-device Machine Learning With TensorFlow | |
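For reference, an invocation of the TensorFlow 1.x Graph Transform Tool in the spirit the talk describes. This is from memory of that era's tooling: the binary path, the node names, and the exact transform list can differ by version and by model, so treat it as a sketch rather than a verified recipe.

```python
import subprocess

subprocess.run([
    # Built from tensorflow/tools/graph_transforms:transform_graph (TF 1.x).
    "transform_graph",
    "--in_graph=retrained_graph.pb",
    "--out_graph=optimized_graph.pb",
    "--inputs=Mul",            # input node name used by the Inception retrain codelab
    "--outputs=final_result",  # output node name from the retrained head
    # Drop training-only nodes, fold constants, and 8-bit-quantize the weights.
    "--transforms=strip_unused_nodes fold_constants quantize_weights",
], check=True)
```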
vzBpSlexTVY | So this really puts it in the realm of immediate possibility. There is one more thing, one more consideration to think about when deploying a machine learning model to a mobile device, and that is whether you package it inside the app or alongside the app. You can make it a data file, or you can integrate it into the app. And some of the considerations there are whether you want to be able to secure the model, | 889 | 913 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=889s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | whether you want to be able to download updates without pushing a new version of the app itself, whether or not you care about size, and whether or not you want to protect the model from outside access. So that's our overall design, right? We gather the data, we shoot the videos, slice them up, train and optimize, and then we can deploy. So that's | 913 | 944 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=913s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | our finished model, and this is a video of the same thing, so I won't show that. And the final point here is: how have we done this? What makes it possible? And that's TensorFlow. TensorFlow is Google's machine learning library; hopefully some of you have heard of it. It's too dark so I can't do a show of hands, but it's been incredible to see | 944 | 968 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=944s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | the community adoption and the reception to the launch. It was open-sourced in November of 2015 and hit 1.0 this past February. And with that, we have support for not just the platforms you'd expect, CPUs, GPUs, and of course Android, but also iOS and Raspberry Pi. So for those of you who like to tinker with IoT devices, you can load a model onto a Raspberry Pi, and that means you can | 968 | 997 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=968s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | recognize things without any network traffic; it can be handy. And the community response to TensorFlow: it has been out for more than fourteen months now, and in the first fourteen months there were over fourteen thousand commits and hundreds of non-Google contributors. And now that it's 1.0, it is production-ready; the APIs are stable and backwards compatible, so things won't change out | 997 | 1,021 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=997s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | from under you, so that's really quite nice. And so, in conclusion, putting machine learning on mobile will just make that experience of mobile, the internet, and computing that much more powerful, usher in a new wave of innovation, and open a whole new world of possibilities. And moreover, you can build one easily, by gathering your own labeled data simply by shooting a video | 1,021 | 1,049 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=1021s | On-device Machine Learning With TensorFlow | |
vzBpSlexTVY | and running basically a series of well-defined steps, and having a trained model as the output. So please use this to build magical experiences for your users. And with that, I want to thank you. I have some resources for you here: a codelab to help you do the Inception retraining, as well as a sample app for loading models into TensorFlow, on GitHub. And so, thank you. | 1,049 | 1,081 | https://www.youtube.com/watch?v=vzBpSlexTVY&t=1049s | On-device Machine Learning With TensorFlow | |
oDKXwxaGkNA | [Music] All right, thank you very much for the introduction, and I hope you had a nice lunch. Welcome to this talk about self-supervised deep learning, towards autonomously learning machines. As you already heard, my name is Simon Stiebellehner. I work at craftworks and also lecture at a couple of universities. And craftworks, well, you might have heard about us: we are a Vienna-based | 0 | 33 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=0s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | artificial intelligence and big data company, specializing in solving hard industrial problems using artificial intelligence and, of course, data. Most of our clients come from industry, ranging from the automotive sector all the way to the energy sector, and pretty much everything in between. The topic of this talk, self-supervised deep learning, is also primarily | 33 | 56 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=33s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | motivated by the work we do with our clients. Before we jump right into the topic, we first need to say a few words about the current state of artificial intelligence. As most of you are probably aware, AI has come a pretty long way in the last couple of years. In fact, we have made massive progress. For example, think of popular voice assistants such as Siri or Cortana: they | 56 | 82 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=56s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | have become incredibly good at understanding human natural language, just in the last couple of years. Similarly, in a completely different area of artificial intelligence, computer vision, we can now highly accurately segment even complex images into all of their parts. And not only that, we can even automatically generate text that describes what's happening in an image, | 82 | 106 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=82s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | so-called scene understanding. So obviously, great progress has been made in artificial intelligence, just in the last few years. And it turns out that many of these breakthroughs, many of these things you read about in the news, are actually based on something called supervised deep learning. And this brings us to the first part of this talk: | 106 | 129 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=106s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | supervised deep learning, the good and the ugly. We will find out why supervised deep learning is so great and why so many breakthroughs are based on it, but we will also get to know its very ugly side, one of its major downsides, one of its major weaknesses. Subsequently, we will find out how taking a self-supervised approach to deep learning can help us overcome, or at least mitigate, that core weakness. And | 129 | 154 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=129s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | finally, we will look into an industry case that hopefully gives you a good idea of how you can use self-supervision in practice to make your models better and more robust. But first, let's look into supervised learning. Probably some of you, or maybe many of you, know this dataset. It's an incredibly popular, famous dataset, containing images of, obviously, cats and dogs. And this dataset | 154 | 180 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=154s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | is used by data scientists around the world to take their first steps in image classification, especially using deep learning. So usually the task at hand is building a classifier that differentiates between cats and dogs based on images. Theoretically, you could approach this problem from two sides. You could approach it from an unsupervised side; in unsupervised learning, we do not use | 180 | 202 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=180s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | labels, right? And supervised learning would be the other side: we do use labels. So we use textual or numeric information that tells us if there really is a cat or if there really is a dog in that image. Just a quick recap to bring us onto the same page: in unsupervised learning, as I mentioned, we don't use labels, we just try to detect similarities and dissimilarities in | 202 | 225 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=202s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | these images and, based on that, form homogeneous clusters, so that hopefully we end up with a cat and a dog cluster. Of course, since you're not using labels here, you lack supervision, so usually results will not be optimal. Actually, the far better choice for this kind of task really is supervised learning, especially supervised deep learning. Because in supervised deep learning, by using | 225 | 249 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=225s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | different deep learning techniques such as convolutional neural networks, we can actually teach them what makes a cat a cat and what makes a dog a dog. And they learn that in a fully automated fashion, by us providing them labels, providing them ground truth of whether there really is a cat or a dog in that image. Based on that, they then make the classification, and results can | 249 | 274 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=249s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | actually be astonishingly good. We can achieve human and even superhuman performance, especially for these types of tasks, which are very specific and based on images. You can achieve astonishingly great results using supervised deep learning. But, well, that great performance you often read about in the news: for reaching that, for reaching human- | 274 | 300 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=274s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | level and superhuman-level performance, very often you're gonna need just tons of labeled data. And really tons: we are speaking about tens of thousands, hundreds of thousands, millions of labeled images. That is a lot, and this is also a big, big problem. Because obviously labeled data is so important for achieving this great performance, but at the same time, labeled data is scarce, right? | 300 | 327 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=300s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | Data, we have a lot of it; we have data lakes and data warehouses full of data. But the labels that you need for solving your specific problem, those usually don't exist. Of course, at that point you could argue, well, if I don't have these labels, I can just label the data myself. I can sit down at my desk, look through these images of cats and | 327 | 350 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=327s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | dogs and note whether there is a cat or a dog in each image. And I fully agree, yes, you can do that, absolutely. But you can imagine that this is quite some effort, and that effort rises very quickly with the complexity of the image. For example, this image is from a real practical use case we did together with a client, a large manufacturer of industrial parts, and | 350 | 374 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=350s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | that image actually shows one of these parts. On that image, in theory, you can recognize defects in the underlying part. But it's not an easy task, because these images are really large, they are highly noisy, and defects can be incredibly subtle, incredibly small. So it takes even a human expert quite some time to reliably find, label, and mark all these defects on such an | 374 | 403 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=374s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | image. So assume you want to automate that by training some supervised deep learning model, and for that you need to build up a large labeled dataset of, let's say, a hundred thousand images. You can imagine that that's going to be quite some effort, and, you know, time is money, so it's also going to be expensive. How expensive? Well, it's actually not too hard a calculation to | 403 | 426 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=403s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | compute. Assume you use some publicly available labeling tool, such as Amazon SageMaker Ground Truth, for example; they charge you, I think, if I remember correctly, around four US dollars per labeled image. Then, as we said before, you can do 30 images per hour, because you need two minutes per image. And then, of course, it's not going to be you labeling, but probably you're | 426 | 449 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=426s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | going to employ some working student, for example, who does the labeling, and that working student charges you 15 US dollars per hour. Well, and now this is critical, highly critical, but most people forget about it: you do not only need one person labeling your images. Why? Well, because a person can have a bad day; a person might just not be too accurate on that task. And this | 449 | 475 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=449s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | is actually the biggest problem: these images, as you saw, are highly complex, these defects are so difficult to spot, it's really easy to overlook one. So you need multiple people labeling the same image, and then you need to aggregate these labels to actually end up with a really robustly labeled dataset. If you don't do that, you're gonna end up with a garbage dataset, and you know how it is in | 475 | 501 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=475s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | machine learning: garbage in, garbage out. Your model is not gonna learn accurately what you want it to learn. And if you do the math behind this (one reconstruction is sketched below), you will find that building up that labeled dataset costs you ten thousand hours of work time, which amounts to more than a quarter of a million US dollars in labeling cost alone. And this is significant, right? | 501 | 524 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=501s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
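One plausible reconstruction of the speaker's estimate, as a worked calculation. The inputs (2 minutes per image, $15/hour students, roughly $4/image platform fees, several annotators per image) come from the talk; how they combine into "more than a quarter million" is my assumption, not the speaker's exact arithmetic.

```python
images = 100_000
annotators_per_image = 3        # redundancy against inaccurate labels
minutes_per_image = 2           # i.e. 30 images per hour

hours = images * annotators_per_image * minutes_per_image / 60
labor_cost = hours * 15         # $15/hour working students

print(hours)        # 10,000.0 -> the "ten thousand hours" quoted in the talk
print(labor_cost)   # 150,000.0 -> labor alone; per-image platform fees
                    # (~$4/image on a service like SageMaker Ground Truth)
                    # push the total well past a quarter of a million dollars
```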
oDKXwxaGkNA | Of course, if your business case is big, a quarter of a million might be nothing; you might not even lose a single thought on it. But for most companies, in most departments, spending a quarter of a million on just building up a labeled dataset is a significant challenge. Because at that point you haven't built a single classifier, you haven't deployed anything in production, you basically haven't shown any value. And | 524 | 548 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=524s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | this is a problem. And this is why we argue that supervised learning, and especially supervised deep learning, is just very often not feasible. Because, first, the labels you need to solve your problems probably don't exist, and second, if you want to create these labels, well, that's gonna cost you a lot of money. And this is also why thought leaders of the field argue that the AI | 548 | 572 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=548s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | revolution is not going to be based on supervised learning. And I agree: how can the AI revolution be based on supervised learning if we don't have labels? If labels are not ubiquitous, how can AI be ubiquitous? It can't, at least not when it's based on supervised learning. And when I first thought about this, I still found it a bit strange, because, well, deep learning is based on | 572 | 598 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=572s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | artificial neural networks, and neural networks, as the name already says, are inspired, at least inspired, by the human brain. But we humans need so much less labeled data to learn a task highly accurately. So where did we go wrong here? What makes the difference? And this brings us to the core of this talk: self-supervised deep learning. When we talk about self-supervised deep learning, we | 598 | 627 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=598s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | first need to take a step back and think about the human brain. We need to ask ourselves: how do we humans actually learn? Do we do supervised learning? Well, absolutely, yes, sometimes. For example, in school or at university, you have an explicit supervisor telling you what's right and what's wrong, what is a cat and what is a dog. So yes, we do, but of course not always. You don't always have a | 627 | 652 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=627s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | supervisor standing next to you telling you what's good and what's bad. And even when we have a supervisor, for example at school, you don't need to be taught something 10,000 times before you understand how to do a task; actually, just a few examples suffice. So again, we humans are such efficient and effective learners; our supervised learning seems to be highly | 652 | 676 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=652s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | different from the supervised learning we see in machine learning, obviously. Well, how about trial-and-error learning, or reinforcement learning? Just to bring us onto the same page: what is reinforcement learning? Basically, it's a subfield of machine learning where we try to teach an agent to learn a policy, a behavior, to solve a highly complex, mostly sequential task, such as driving a | 676 | 699 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=676s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | car in a simulation, or playing chess. And that agent learns that by, let's say, smart trial and error. Basically, we humans also do trial-and-error learning, absolutely. Sometimes we try something, we fail, we try again, and then do better, hopefully. But of course, we don't trial-and-error everything. For example, you don't trial-and-error learning how to drive a car. You don't | 699 | 720 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=699s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | just drive around randomly until you manage to stay on the road, right? This is not a good strategy for learning how to drive. Actually, what we do is quite different: we get into a car for the first time, and after a couple of hours we are actually reasonable drivers; we can at least follow traffic in a basic way. And this is fascinating, because if you want to teach a machine | 720 | 742 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=720s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | to follow real-world traffic in a reliable way, well, this is a big, big problem. You need not only a lot of examples but also a lot of engineering power behind it. For us humans it's so easy, because, again, we are such effective and efficient learners. So obviously, the trial-and-error learning that we humans do is also fundamentally different from the trial-and-error learning we see | 742 | 768 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=742s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | in machine learning. So what makes us humans such effective learners? What is this magic ingredient that allows us to require only very little labeled data to learn a task incredibly well? Well, this magic ingredient is something we call having a general understanding of the world. It's something that some also call common sense. We humans just know how | 768 | 796 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=768s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | things work, right? We just know how things work. And we obtained this general understanding through observation. From the day we are born until the day we die, we humans observe; we observe what's happening around us with all our senses: we smell, we touch, we see, we hear. And this continuous observation, this taking the world around us as our supervisor, this | 796 | 821 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=796s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | is actually what forms our general understanding. And all this observation lets us understand the true meaning of things. What does it mean if something is heavy? What does it mean if something is hot? What are the implications, what are the consequences? We also start to understand abstract concepts and concept relationships, such as friendship, for example. And all of this forms our general | 821 | 845 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=821s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | understanding. And this general understanding makes us such effective and efficient learners, because it means that with everything we learn, we do not start from zero. We always have a head start; we always base everything we learn upon our understanding of how the world works and all our previously acquired knowledge. And this is what makes us humans such effective and efficient | 845 | 871 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=845s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | learners. Well, that's great for us humans, right? But this talk is not primarily about human intelligence; of course, it's about artificial intelligence. So the question remains: how do we inject this general understanding into machines? How do we make machines effective and efficient learners? How do we allow machines to not need tens of thousands, hundreds of thousands, millions of labeled | 871 | 899 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=871s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | data points anymore and still achieve great, maybe even human-level, performance? Well, the answer is quite simple, actually: we just need to let them observe the world. We just need to make the data their supervisor. Well, but how do you do that? How do you force a machine to observe the world? It doesn't sound too easy. Well, imagine you put up a video camera at this traffic junction, and that video | 899 | 927 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=899s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | camera just continuously records what's happening in that traffic situation. So you basically end up with an endless sequence of images. And now imagine you have never seen a car before, you have never seen a motorcycle before, you have never seen traffic before, and somebody gives you this video, and you watch it. I promise, after a sufficiently long amount of time, you | 927 | 958 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=927s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | will have figured out what a car is, what a motorcycle is, and how traffic works: when a car is allowed to go right, when a car is allowed to go left, when it needs to stop, and so on. You will have understood the concept of traffic by simple observation. Well, that's how we humans do it, but again, how do we frame this as a machine learning problem? Now, the good news is that | 958 | 979 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=958s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | supervised machine learning problems are always somewhat the same. They always have this very basic structure that you can see here, and I would say 95% of all supervised machine learning problems are framed just like that. You have some kind of input, which in our case, of course, is our video, our sequence of images. That goes into a model, which could be anything; in our | 979 | 999 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=979s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | case, let's say some type of neural network. That model spits out a prediction, and we then compare the prediction to the label, to the truth. Based on the difference, based on the error that our model makes, we take a step in our optimization procedure, for example using gradient descent, and the next time, our model is going to perform better. That's how supervised learning works at | 999 | 1,022 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=999s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | its very foundations. And it's quite clear what to use as input, and the model part is also clear. But what is our model actually going to predict, and what are the labels? We don't have labels, right? We only have a video, only a sequence of images. So what should we teach our model to output? And this is where self-supervision comes in, and this is also where your | 1,022 | 1,045 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1022s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | creativity comes in. You need to think about: how do I shape and frame the unlabeled data that I have, to form a supervised learning problem from it? One example would be: well, you could simply chop up your endless video, your endless sequence of images, into smaller subsequences, always take the last image of each subsequence and use it as the label, and then teach your network to predict | 1,045 | 1,072 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1045s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | the future from the past. You teach it to predict the end of that sequence from the beginning, from the previous elements of that sequence of images. Your model is going to learn to predict the future from the past. It's going to learn when a car is going to move, when it can drive forward, when it has to stop, when it's allowed to go left and when it's allowed to go right. And we | 1,072 | 1,095 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1072s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | can do this in a fully automated fashion. You can simply create a small Python program, for example, that chops up your video into small sequences, always taking the last frame of each sequence and using it as the label; then you train your network, and you're gonna have an expert network when it comes to traffic, basically (a minimal sketch of this chopping follows below). | 1,095 | 1,116 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1095s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
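A minimal sketch of the self-labeling trick for video: slice an unlabeled frame sequence into (past frames, next frame) training pairs. The window length and the dummy frame source are assumptions for illustration.

```python
import numpy as np

def make_pairs(frames, window=8):
    """frames: array of shape (T, H, W, C). Returns (inputs, targets) where
    each target is the frame that follows its window of past frames."""
    inputs, targets = [], []
    for t in range(len(frames) - window):
        inputs.append(frames[t : t + window])  # the "past"
        targets.append(frames[t + window])     # the "future" = a free label
    return np.stack(inputs), np.stack(targets)

# e.g. 1000 dummy frames of a 64x64 RGB traffic camera
frames = np.zeros((1000, 64, 64, 3), dtype=np.float32)
x, y = make_pairs(frames)
print(x.shape, y.shape)  # (992, 8, 64, 64, 3) (992, 64, 64, 3)
```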
oDKXwxaGkNA | And you can use this concept of self-supervision, of automatically creating labels from unlabeled data, on other types of data as well; it's very flexible. For example, you can use it on images: you randomly crop rectangles out of images of, let's say, faces, and teach your model to predict these rectangles. Or you can use it on text: you randomly remove words from a text and use the surrounding words, the context words, to predict the target word, the missing word that you removed (toy example below). | 1,116 | 1,139 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1116s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
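The same idea on text: delete a word and let its context be the supervision. A toy pair generator in the spirit of word2vec-style training; the whitespace tokenization and window size are simplifications.

```python
def masked_word_pairs(tokens, window=2):
    """Yield (context_words, target_word) with the target removed."""
    for i, target in enumerate(tokens):
        left = tokens[max(0, i - window):i]
        right = tokens[i + 1:i + 1 + window]
        yield left + right, target

sentence = "the quick brown fox jumps over the lazy dog".split()
for context, target in masked_word_pairs(sentence):
    print(context, "->", target)
# e.g. ['the', 'quick', 'fox', 'jumps'] -> brown
```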
oDKXwxaGkNA | Well, that's awesome, but so what, right? What are you gonna do with a model that predicts the missing rectangle in an image? That's usually not the problem you want to solve. And that's true. So what do we actually gain from this? What do we gain from a model that knows how to complete an incomplete face? Well, what we gain from it is understanding. That model, by performing | 1,139 | 1,167 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1139s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | this task, by learning this task, builds up an understanding of the concept of a face. It learns very low-level representations of a face, and also high-level representations: for example, it's gonna learn where the ears usually sit in a face, that there are two ears in a face, that these ears sit on the sides of your head, and so on. It will learn a general understanding of the concept of a face. | 1,167 | 1,189 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1167s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | And you can use this understanding, this knowledge, for any other task you really want to solve that is somewhat related. What does that mean? Well, let's say you do exactly that: you randomly remove rectangles from images of faces. Even if you have few images of faces, you can remove rectangles in a variety of different ways, right? So you can actually end up | 1,189 | 1,213 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1189s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
oDKXwxaGkNA | with a large dataset. You train your model to predict these missing rectangles, and your model will then have an understanding of the concept of a face. Then you take the same model, the model with all of its knowledge, and use it for whatever task you really want to solve, for example predicting age based on images. You just fine-tune it a bit, maybe modify the architecture (a minimal sketch follows below). | 1,213 | 1,235 | https://www.youtube.com/watch?v=oDKXwxaGkNA&t=1213s | Self-Supervised Learning - Towards Autonomously Learning Machines—Simon Stiebellehner | |
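A sketch of that hand-off: reuse the self-supervised model's encoder and fine-tune it for the task you actually care about, here age regression. `"face_inpainting_model"` stands in for the inpainting model trained above; all names, the layer-slicing choice, and the frozen-encoder setup are hypothetical illustration, not the speaker's pipeline.

```python
import tensorflow as tf

pretrained = tf.keras.models.load_model("face_inpainting_model")
# Keep everything up to the learned representation; drop the inpainting head.
encoder = tf.keras.Model(pretrained.input, pretrained.layers[-2].output)
encoder.trainable = False  # optionally unfreeze later for full fine-tuning

age_model = tf.keras.Sequential([
    encoder,
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted age in years
])
age_model.compile(optimizer="adam", loss="mse")
# age_model.fit(face_images, ages, epochs=...)  # a small labeled set suffices
```

The point of the design is exactly the talk's argument: the expensive general understanding is learned from free, automatically generated labels, so the downstream task needs only a comparatively tiny labeled dataset.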