Dataset columns:
video_id: string (length 11)
text: string (length 361 to 490)
start_second: int64 (0 to 11.3k)
end_second: int64 (18 to 11.3k)
url: string (length 48 to 52)
title: string (length 0 to 100)
thumbnail: string (length 0 to 52)
hg2Q_O5b9w4
momentum encoder parameters are a moving average of the parameters of the query encoder, and so you kind of get the best of both worlds: you don't have to learn a second neural network, but your second neural network is not the same as your first neural network; it kind of lags behind, yet it is also performing almost as well. So that is, um, I don't know if that
1,209
1,242
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1209s
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
https://i.ytimg.com/vi/h…axresdefault.jpg
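A minimal sketch of the moving-average (EMA) update described in this transcript chunk, assuming PyTorch-style modules; `query_encoder`, `key_encoder`, and `momentum` are illustrative names, not taken from the CURL code.

```python
import torch

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, momentum=0.999):
    # The key encoder lags behind as a moving average of the query encoder.
    for q_param, k_param in zip(query_encoder.parameters(), key_encoder.parameters()):
        k_param.data.mul_(momentum).add_(q_param.data, alpha=1.0 - momentum)
```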
hg2Q_O5b9w4
makes sense, but it is the best I can do to explain it. So to recap: you take your observation and you encode it as a query; sorry, you crop here for your anchor, which gets you your query, and then you random crop for your keys, into positive and negative samples. So you random crop from the same observation or from different observations; these become your positive and negative
1,242
1,276
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1242s
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
samples. Then you push these through your encoders for the query and for the keys respectively; you end up with the q, which is the encoded anchor, and the k's, which are the encoded positive and negative samples. And then you update this encoder here using the contrastive loss, and at the same time you feed the q here into the reinforcement
1,276
1,315
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1276s
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
learning algorithm, and you learn your reinforcement learning algorithm. Instead of having the observation directly as an input here, you now have the q as an input. That is it; the reinforcement learning works exactly the same, except instead of having the input o you now have the representation q as input, and you don't have to worry about anything else in
1,315
1,345
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1315s
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
terms of the reinforcement learning algorithm; it stays exactly the same. This whole thing here can actually run in parallel, or you can think of it as happening beforehand; you can think of it off-policy or on-policy. It is sort of modular how you fit this in; it simply comes up with good representations. So that is basically the deal here, and you hope
1,345
1,371
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1345s
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
that the whole procedure of this contrastive learning then gives you a good representation of this anchor thing here: if you encode that to the q, you hope that this representation is now a good basis for the RL algorithm, and it turns out, at least in their experiments, it is. So here you see the same thing; they actually do something more, because in RL you usually deal
1,371
1,401
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1371s
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
with a stack of observations, not just a single observation; for example, in Atari people always concatenate something like the four last frames. And their point is: okay, if we have this stack here and we do this data augmentation, you know, these crops, we kind of need to do them consistently; we need to crop every single image at the same point for the
1,401
1,428
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1401s
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
query, and also if we do a random crop, let's say a random crop down here, we need to do this same random crop for all of the stack of images here. So that is kind of the additional thing they introduce with respect to RL that deals with stacked time frames, but it's kind of the same diagram as above. So they explain the RL
1,428
1,462
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1428s
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
https://i.ytimg.com/vi/h…axresdefault.jpg
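A sketch of the "crop the whole stack at the same point" idea described in this chunk; the frame stack is assumed to be a NumPy array of shape (num_frames, H, W), and the crop size is illustrative.

```python
import numpy as np

def random_crop_stack(stack, out_size=64):
    # Sample one crop position and reuse it for every frame in the stack,
    # so the augmentation is consistent across the stacked time frames.
    _, h, w = stack.shape
    top = np.random.randint(0, h - out_size + 1)
    left = np.random.randint(0, w - out_size + 1)
    return stack[:, top:top + out_size, left:left + out_size]
```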
hg2Q_O5b9w4
algorithms they use and exactly what their thing is, and here you can see that the anchor is a crop, and the positive sample is a random crop from the same image (this would be up here somewhere; the anchor is cropped from the middle), and then the negative would be a random crop from a different image or a different stack of images. They have pseudocode here that
1,462
1,489
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1462s
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
is pretty simple; we'll just go through it quickly. You start off with f_q and f_k, the encoders for the query and the keys, and you start them off the same. Then you go through your data loader and do this random augmentation of your query and your keys, and I'm not even sure if the augmentation actually needs to be a central crop for the anchor or
1,489
1,516
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1489s
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
just two different crops from the same image; that might work as well, so I guess it's a thing you could choose, I don't know what exactly is the best thing. Alright, then I forward the query through f_q and I forward the keys through f_k. Then, importantly, I detach this, because I don't want to train f_k, I only want to train f_q. Then I
1,516
1,550
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1516s
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
do the bilinear product here with the W (these are the bilinear products), and then I put all of this into a cross-entropy loss. In the end I update my f_q and my W, and I do this exponential moving average for my key encoder. And they test on two different things: they test on the DeepMind control tasks, and they always test at 100k time steps, so their big point is data efficiency.
1,550
1,594
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1550s
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
https://i.ytimg.com/vi/h…axresdefault.jpg
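A rough PyTorch sketch of the training step walked through in the pseudocode above, not the authors' code: `f_q` and `f_k` stand for the query and key encoders, `W` for the bilinear matrix; names and shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def curl_step(f_q, f_k, W, anchors, keys):
    q = f_q(anchors)                    # (batch, dim) encoded anchors (queries)
    k = f_k(keys).detach()              # (batch, dim) encoded keys, not trained directly
    logits = q @ W @ k.t()              # bilinear similarities, shape (batch, batch)
    labels = torch.arange(q.size(0), device=q.device)  # positive key is the matching row
    loss = F.cross_entropy(logits, labels)
    return loss                         # backprop updates f_q and W; f_k gets the EMA update
```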
hg2Q_O5b9w4
They claim they can learn useful representations with not much data, so the task here is: how good are you at 100k time steps? So you don't optimize until the end; you just get 100k time steps, and then the question is how good are you. And CURL here outperforms all of the baselines handily in the DeepMind control tasks, and it also outperforms a
1,594
1,624
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1594s
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
lot of the baselines in the Atari tasks. If you look at the results, it doesn't outperform everything, but for example here the red is CURL and the dashed gray is state SAC. Now with state SAC the important thing to note is that it has access to the state, whereas CURL only works from pixels. So that's what I said before: if I had to hand-craft a useful
1,624
1,656
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1624s
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
https://i.ytimg.com/vi/h…axresdefault.jpg
hg2Q_O5b9w4
representation, basically state SAC has access to that, and you see that in many of the tasks CURL comes close to or performs equally well as state SAC. So that's pretty impressive, especially if you look at pixel SAC, which is the same algorithm but does not have access to the state, just the pixels; it often fails terribly. So that is pretty interesting to see
1,656
1,692
https://www.youtube.com/watch?v=hg2Q_O5b9w4&t=1656s
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
https://i.ytimg.com/vi/h…axresdefault.jpg
9cHAjRWI2oQ
TilinGNN: Learning to Tile with Self-Supervised Graph Neural Network. Many problems in computer graphics face combinatorial optimizations, which are typically solved by approximation algorithms or heuristic search methods. In this work, we explore whether a learning-based approach can solve a classical combinatorial geometric problem: tiling. Specifically, we focus on tiling the
0
37
https://www.youtube.com/watch?v=9cHAjRWI2oQ&t=0s
TilinGNN: Learning to Tile with Self-Supervised Graph Neural Network (SIGGRAPH 2020)
https://i.ytimg.com/vi/9…axresdefault.jpg
9cHAjRWI2oQ
interior of an arbitrary 2D shape using a given tile set while avoiding holes and tile overlaps. Our trained network can help produce tilings in time roughly linear in the number of candidate tile locations, significantly outperforming traditional combinatorial search. In our learn-to-tile approach, given an input tile set, we first enumerate candidate tile locations,
37
72
https://www.youtube.com/watch?v=9cHAjRWI2oQ&t=37s
TilinGNN: Learning to Tile with Self-Supervised Graph Neural Network (SIGGRAPH 2020)
https://i.ytimg.com/vi/9…axresdefault.jpg
9cHAjRWI2oQ
then we generate random shapes to crop and locate candidate tile locations. After that, we create a graph to describe each tile placement and train our network to predict tile placements with our self-supervised loss. At test time, we apply the trained network to predict tile locations for arbitrary shapes, where we first locate tile placements and progressively fill
72
104
https://www.youtube.com/watch?v=9cHAjRWI2oQ&t=72s
TilinGNN: Learning to Tile with Self-Supervised Graph Neural Network (SIGGRAPH 2020)
https://i.ytimg.com/vi/9…axresdefault.jpg
9cHAjRWI2oQ
the shape with the help of our network. Overall, this work has three technical contributions. First, we model this tiling problem as an instance of graph learning. Second, we design a graph convolutional neural network to predict tile placements via graph convolutions; we call our network TilinGNN. Third, we define loss terms directly on the network output, so TilinGNN can be trained
104
137
https://www.youtube.com/watch?v=9cHAjRWI2oQ&t=104s
TilinGNN: Learning to Tile with Self-Supervised Graph Neural Network (SIGGRAPH 2020)
https://i.ytimg.com/vi/9…axresdefault.jpg
Alkm-PJu9To
Morning everybody, and welcome to day two of PyCon. Our first two speakers are Angela and Melody; they are data scientists from a major telco and are going to be talking about anomaly detection using autoencoders. Hello, okay. Good morning everyone, today we'll be sharing a talk about anomaly detection using autoencoders. I'm Melody. Hi,
0
53
https://www.youtube.com/watch?v=Alkm-PJu9To&t=0s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
I'm Angela, and we are both data scientists, obviously, at a telco, and this is also our first time speaking at PyCon. Yeah, so just to take you through some of the content we'll be sharing with you today: we'll give you an introduction to autoencoders, how the algorithm actually works, a brief history about them; thereafter we will give you some architectures, different types of popular
53
80
https://www.youtube.com/watch?v=Alkm-PJu9To&t=53s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
architectures of autoencoders which may be useful for your use cases, and also particular use cases that are out there which autoencoders are good at solving. We'll also be sharing some popular Python packages that you could use; we'll take you through a Jupyter notebook; we will introduce the notion of fraud anomalies and how to actually implement that; then right after
80
107
https://www.youtube.com/watch?v=Alkm-PJu9To&t=80s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
we'll have a Qlik Sense visualization to show you how we as data scientists interpret the results, as well as for business stakeholders. Then lastly we'll be sharing key takeaways from our experience implementing this type of problem. So, how many of you are aware of neural networks? I'm sure most of us were there yesterday at Alex's talk, so I'm sure you are familiar with
107
136
https://www.youtube.com/watch?v=Alkm-PJu9To&t=107s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
convolutional neural networks; I mean, he went into quite a lot of detail: feed-forward neural networks, recurrent neural networks, and all these types solve particular problems like computer vision, machine translation, and so forth. Autoencoders are a part of the family of neural networks. So yeah, as Melody mentioned before, autoencoders are a type of neural
136
161
https://www.youtube.com/watch?v=Alkm-PJu9To&t=136s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
network whose goal is to produce an output similar to its input. So as you can see, the goal is for the input data to be compressed so that it's in a lower-dimensional space, such that when the decoder comes along it takes that learned representation of that data, the pattern, and is able to replicate this learned image of this mushroom. So now, just to get a bit more in depth in
161
189
https://www.youtube.com/watch?v=Alkm-PJu9To&t=161s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
terms of the algorithm of the autoencoder: it's split up into an encoder and a decoder. The encoder is simply just a function of your input, and your decoder is a function of your hidden layer. Now as you can see, overall your algorithm is represented by g(f(x)) = r, and you want r to be as close as possible to your input, so you want that data to be very, very
189
217
https://www.youtube.com/watch?v=Alkm-PJu9To&t=189s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
close to each other, and that's exactly why the objective of an autoencoder is to minimize the loss function. What the loss function means is that you want to reduce and minimize the error between your input and your output. The way that these neural networks are trained is through backpropagation, and what that means is that it's a recursive process such that it's
217
240
https://www.youtube.com/watch?v=Alkm-PJu9To&t=217s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
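A minimal sketch of the encoder/decoder idea described above (train g(f(x)) to match x by minimizing reconstruction error), written in Keras only because the speakers later list it as one option; the layer sizes and feature count are illustrative assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(30,))                          # e.g. 30 input features
encoded = layers.Dense(8, activation="tanh")(inputs)       # f(x): undercomplete bottleneck
decoded = layers.Dense(30, activation="linear")(encoded)   # g(f(x)): reconstruction r
autoencoder = keras.Model(inputs, decoded)

# Minimize the error between input and output (reconstruction loss).
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_train, x_train, epochs=10)  # the model learns to reproduce its input
```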
Alkm-PJu9To
able to minimize the error between your input and your output. And also something I think you might find interesting: autoencoders have been around for decades now; people such as LeCun and Hinton have used them. Are you all familiar with them? Okay, cool. Now let's move on to uses of autoencoders, the first being dimensionality
240
265
https://www.youtube.com/watch?v=Alkm-PJu9To&t=240s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
reduction. That means that you take your data and condense it into a lower-dimensional space; the reason for doing that is so that your data itself can be more easily represented visually, and this will really assist before you feed it into a neural network. The next example would be denoising of data: you can see that initially these images are very hazy, fuzzy, you know, you can't really see
265
290
https://www.youtube.com/watch?v=Alkm-PJu9To&t=265s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
what's going on right now, but through the power of autoencoders the noise is removed and you get a crisper image. Now a third example is anomaly detection. What anomaly detection is, it's basically a technique for identifying patterns within data that do not follow the norm. So for example, in autoencoders we have this
290
317
https://www.youtube.com/watch?v=Alkm-PJu9To&t=290s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
idea of reconstruction errors: if an observation is passed in and its output doesn't seem very similar to its input, like there's a drastic difference, then that would be considered an outlier, hence it would be anomalous. So you would see these red dots, that's an outlier. And lastly we get a view of feature extraction, so autoencoders give you a
317
342
https://www.youtube.com/watch?v=Alkm-PJu9To&t=317s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
view of which features in your dataset are useful or not. So, to take you through some of the different autoencoder architectures that are out there: a very popular one is the restricted Boltzmann machine, and this particular paper was produced by our beloved Hinton. A restricted Boltzmann machine is basically a two-layer autoencoder, so how it works is
342
368
https://www.youtube.com/watch?v=Alkm-PJu9To&t=342s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
that it has a visible layer and a hidden layer. The visible layer is where our input would come in, our variable inputs; it uses a combination of that to get into the hidden layer. Then basically what it's learning is the difference between the hidden layer and the visible layer; it uses a metric called KL divergence to measure that difference between the two. This particular paper by
368
393
https://www.youtube.com/watch?v=Alkm-PJu9To&t=368s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
Hinton, I would encourage you to go and read it if you want to get into autoencoders; it's basically how he used restricted Boltzmann machines and autoencoders for dimensionality reduction, and he actually compares this with PCA. The result he gets is that with autoencoders he's able to reduce dimensions of nonlinear types of data, so the results
393
421
https://www.youtube.com/watch?v=Alkm-PJu9To&t=393s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
he gets, the underlying patterns, are much better than what he got with PCA. Within the field of autoencoders there are two different types of popular architectures: undercomplete and overcomplete. What Angela has just described to you is an undercomplete architecture. Remember what she said is that we're trying to find the underlying pattern within our input, but
421
447
https://www.youtube.com/watch?v=Alkm-PJu9To&t=421s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
to do that, what we need to do is ensure that the neurons within our hidden layer are fewer than the neurons within our input layer, to ensure that whatever our reconstruction of our output is, it's not a direct copy of the input; if it were, it didn't learn the underlying pattern. It needs to be a pattern, and that's how we ensure that. So that is undercomplete, and that's usually
447
471
https://www.youtube.com/watch?v=Alkm-PJu9To&t=447s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
how we implement an autoencoder for most use cases. Then we have the overcomplete architectures, and there are three different types of them: sparse, denoising, and contractive. So how many of you are familiar with regularization in neural networks? Okay, a few of you. So within neural networks, what we usually do is that if we find that our neural
471
504
https://www.youtube.com/watch?v=Alkm-PJu9To&t=471s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
network is overfitting, one technique to lessen that is to put in a regularizer, which means penalizing the weights. But with an autoencoder, the sparse autoencoder uses a regularizer that regularizes the activations before getting into the hidden layer. That is to say that you could have an
504
530
https://www.youtube.com/watch?v=Alkm-PJu9To&t=504s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
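One way to express the "regularize the activations" idea of a sparse autoencoder in Keras; this is a sketch, not the speakers' code, and the layer size and penalty strength are illustrative.

```python
from tensorflow.keras import layers, regularizers

# An L1 activity regularizer pushes most hidden activations toward zero,
# so only a few units fire for any given input.
sparse_hidden = layers.Dense(
    64,
    activation="relu",
    activity_regularizer=regularizers.l1(1e-5),
)
```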
Alkm-PJu9To
architecture of any type; however, some of the activations are not activated, which means not all of the inputs will necessarily have been used. So if you build your autoencoder and you're like, oh my gosh, it's still not finding that underlying pattern, there's a lot of noise in my data, this is a good technique that you could use.
530
556
https://www.youtube.com/watch?v=Alkm-PJu9To&t=530s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
Another problem that occurs when implementing an autoencoder is that you get an exact copy, which is so annoying. What you can do if you have such a problem is use denoising: what denoising does is add noise to your input layer, and then you use the same undercomplete architecture of an autoencoder. So it helps quite a lot if your reconstruction layer is exactly
556
581
https://www.youtube.com/watch?v=Alkm-PJu9To&t=556s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
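A sketch of the denoising setup described above: corrupt the input with noise and keep the clean data as the target. `autoencoder` could be any undercomplete model such as the earlier Keras sketch; the noise level is an illustrative assumption.

```python
import numpy as np

def add_noise(x, noise_factor=0.1):
    # Add Gaussian noise to the inputs; the targets stay clean.
    return x + noise_factor * np.random.normal(size=x.shape)

# x_noisy = add_noise(x_train)
# autoencoder.fit(x_noisy, x_train)   # learn to reconstruct the clean input
```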
Alkm-PJu9To
like your input. The contractive autoencoder is similar to denoising; the problem with adding noise to an input is that you really don't know how much noise you should add in. What contractive does is, for your activation functions, it finds the derivative of each activation with respect to the inputs, which reduces the sensitivity to the inputs; what that entails is that it's more robust to noise, so the
581
608
https://www.youtube.com/watch?v=Alkm-PJu9To&t=581s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
more noise you have in your input, because of those derivatives it's easier to learn that particular inherent pattern. So, we have many Python libraries available to us if you are interested in building your very own autoencoders: the first being Keras, which is basically just an abstraction layer that sits on top of TensorFlow, then we have PyTorch, and then we all
608
631
https://www.youtube.com/watch?v=Alkm-PJu9To&t=608s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
know scikit-learn, I'm sure, the very well-known scikit-learn, and then we have H2O; for today's purposes we'll be showcasing H2O. Alright, so now we've reached the stage of the Jupyter notebook, but before we begin there I just want to ask you all a question: who of you have experienced any fraudulent acts in your life? Just raise your hands. Cool, so that seems like quite a
631
656
https://www.youtube.com/watch?v=Alkm-PJu9To&t=631s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
few of you. Now imagine, within industry as well, they probably experience vast amounts of fraudulent activity on a daily basis. For instance, within the banking sector we're all very familiar with the tap-and-go system; now imagine if a card is being tapped 200 times on the same day, isn't that a huge red flag? Like
656
679
https://www.youtube.com/watch?v=Alkm-PJu9To&t=656s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
someone's clearly taking your money, unless you really like shopping a lot, LOL, sorry. And then in telecoms we get fraudulent cases like SIM swap fraud or delivery fraud: so for instance, it's your customer information, however the product that's being delivered is not sent to your address but to an address that's, who knows, 200 kilometers away
679
703
https://www.youtube.com/watch?v=Alkm-PJu9To&t=679s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
from where you stay; once again, yet another red flag. And then in the retail space you can get fraudulent acts like stock taking or online purchases. Yeah, so an example of an actual fraud case that happened is what's called the Japan ATM scam; this affected the Standard Bank that we know, though it happened within Japan. So what these fraudsters did, it's
703
728
https://www.youtube.com/watch?v=Alkm-PJu9To&t=703s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
like a real Ocean's Eleven, but with these fraudsters it was around a hundred people; according to the article it's suspected that around a hundred people went to various ATMs within Japan and started taking out cash. One of the banks affected within South Africa was Standard Bank, and Standard Bank lost 295 million rand from this particular activity; they did this in under three hours;
728
758
https://www.youtube.com/watch?v=Alkm-PJu9To&t=728s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
that's quite something. Now for the solutions: if I was the CEO of Standard Bank I'd definitely be like, okay, you fooled me once, I definitely wouldn't want criminals stealing from me the exact same way again. So when such an emergent type of fraud occurs and a business gets scared, what we do to reduce that is we'd have either a supervised learning model or
758
785
https://www.youtube.com/watch?v=Alkm-PJu9To&t=758s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
rules, so that if in, let's say, the first month we get such a big spike of fraud, in the next month we want to reduce that, so we would combat that. But definitely those guys who stole that money, I'm sure they have a new creative way of stealing from a different type of company or from Standard Bank, and so what you want to do within an organization is to try to combat
785
809
https://www.youtube.com/watch?v=Alkm-PJu9To&t=785s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
that emergent type of fraud. So you can have usual fraud cases but also new types of fraud, and if we have an algorithm that does work well, let's say it's 70% accurate, maybe 50% of that money could have been saved. Cool, so just like Melody was mentioning, you have the whole idea of emerging fraud and then rule-based fraud; so banks, if they really know the kind of
809
839
https://www.youtube.com/watch?v=Alkm-PJu9To&t=809s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
fraud that is happening right now, there are just these rule-based systems that'll combat it. So just to explain the concept behind anomalies versus fraud: as you can see in this Venn diagram, something that is anomalous does not necessarily mean that it's fraudulent, but something that's fraudulent may mean that it's anomalous. Cool, so now I just want to ask you guys
839
863
https://www.youtube.com/watch?v=Alkm-PJu9To&t=839s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
as well: have a look at this table, what stands out to you, what is the anomaly? Yay, you get a chocolate, cool, fantastic. But now if we think about this, imagine a real-life situation where we're not just looking at six rows; now we're looking at ten million rows and we want to cater for real-time situations. In real time, are we able to identify the anomalies in
863
896
https://www.youtube.com/watch?v=Alkm-PJu9To&t=863s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
the data set? And not only will we have password change occurrence as a variable, we'll have millions more. Cool, that's where anomaly detection using autoencoders can play a role. So now we move on to the Kaggle data set; apologies for spelling, we are data scientists, not English teachers. Right, so the Kaggle data set is just a data set which I'm sure you're all quite familiar
896
923
https://www.youtube.com/watch?v=Alkm-PJu9To&t=896s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
with; it's called the credit card data set and it's based on transactions of customers. As we begin, you can see that this data set is highly imbalanced: there are very few fraudulent cases, which make up 0.17% of the data set. Now, for a machine learning algorithm to learn such a thing, this makes it really difficult, but we'll explain how to combat that later on, and
923
949
https://www.youtube.com/watch?v=Alkm-PJu9To&t=923s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
then we read in our normal imports. Because we were speaking about H2O, we'll be using the H2O deep learning estimator library; we read in our normal packages, and we then begin by initiating your Spark context and your H2O context. Now this is where the fun begins: we read in our data set using Spark, and we transform that Spark data set into H2O, because remember we're working with H2O
949
981
https://www.youtube.com/watch?v=Alkm-PJu9To&t=949s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
models. Now, you can't pass a Spark data frame into an H2O library, so hence you need to convert it. Then over here we define our features list. Because this is an online data set it's anonymized, but in real situations these features could represent things like maybe the number of times you've withdrawn from an ATM, is your card linked to the app, how often are
981
1,006
https://www.youtube.com/watch?v=Alkm-PJu9To&t=981s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
you in overdraft, you know, just those kinds of features. Then you take your data set and you split it into a train and test set. And remember before we were showing you how the data itself was highly imbalanced; in order to combat that, you train the model on what looks normal, what is considered normal, so that the model learns, so that when it's given
1,006
1,029
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1006s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
unseen data and it picks up patterns that don't follow what it learned, it will flag that as an anomaly. Cool, so now we begin with defining our H2O deep learning estimator; we pass it a variety of parameters, and I'll just go through a few: one being the model ID, which is purely just the name of your autoencoder, so when you save it for reuse later on you can reference it,
1,029
1,054
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1029s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
an activation function of tanh, and a few hidden layers. Then you train your model over here, and now you save your model. Cool, so now that we've saved our model we want to reload it. Now that we've reloaded the model, this is where the fun begins; this is where we actually identify anomalous behavior: we apply it to a testing set and we produce these reconstruction errors. Now if you
1,054
1,082
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1054s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
remember, these reconstruction errors are how different the output is from the input. So as you can see, this is the overall reconstruction error, but what if you are interested in identifying the reconstruction errors per feature? We can view that over here: this will show you the reconstruction error per feature, just to give you a sense of
1,082
1,103
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1082s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
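A hedged sketch of the H2O flow described over the last few chunks, not the speakers' notebook: train a deep learning autoencoder on "normal" rows only, then score reconstruction errors (overall and per feature) on a test frame. The file paths, column names, and layer sizes are illustrative assumptions.

```python
import h2o
from h2o.estimators.deeplearning import H2ODeepLearningEstimator

h2o.init()
train = h2o.import_file("creditcard_normal.csv")   # hypothetical path: normal rows only
test = h2o.import_file("creditcard_test.csv")      # hypothetical path: held-out rows
features = [c for c in train.columns if c != "Class"]

model = H2ODeepLearningEstimator(
    model_id="fraud_autoencoder",   # name used when saving/reloading the model
    autoencoder=True,
    activation="Tanh",
    hidden=[16, 8, 16],
    epochs=20,
)
model.train(x=features, training_frame=train)

overall_error = model.anomaly(test)                        # per-row reconstruction MSE
per_feature_error = model.anomaly(test, per_feature=True)  # reconstruction error per feature
```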
Alkm-PJu9To
which feature contributed more for a particular observation for a customer. Yeah, and then if you are interested, after this presentation you can go home and build your own autoencoders; you can visit the kaggle.com website and get this data set. So just a recap of reconstruction errors in terms of a real-life situation: with this image,
1,103
1,130
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1103s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
the input data would be your pixels and then the output would be the reconstruction without the noise. Cool, so I hope that clarifies reconstruction errors. All right, so now we're going to get into showing you the Qlik Sense dashboard that we've built, that we as data scientists and business stakeholders may be interested in; it might be a bit tricky with clicking and
1,130
1,158
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1130s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
holding the mic. So as you can see within this dashboard, what you see over here down in red is the normal pattern that the algorithm caught, and what we have above in blue is the anomalies. How we picked the anomalies was that we picked a threshold of 0.01, and we picked that threshold just based on what we saw from this particular diagram. So the number of anomalies
1,158
1,193
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1158s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
we caught is a hundred and five anomalies. So let's say I want to check, as a data scientist, how much was actual fraud and how much my prediction got right: as you can see, my anomaly detection model picked up 69 fraudulent cases out of 89. If I want to check, as the data scientist, how many fraud
1,193
1,231
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1193s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
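A small sketch of the thresholding step described here: flag rows whose reconstruction error exceeds a chosen cutoff (0.01 in the talk) and compare against known fraud labels. `errors` and `is_fraud` are illustrative NumPy arrays, not the speakers' variables.

```python
import numpy as np

def flag_anomalies(errors, is_fraud, threshold=0.01):
    flagged = errors > threshold
    caught = int(np.sum(flagged & is_fraud))    # fraud cases the model picked up
    missed = int(np.sum(~flagged & is_fraud))   # fraud cases it did not pick up
    return flagged, caught, missed
```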
Alkm-PJu9To
cases my predictor did not get: out of all the fraudulent cases, the anomaly detection model didn't pick up 20. So it's not a bad model, it's pretty neat for a fraud project. As you can see here, we have 0.16 percent fraudulent cases and there are more anomalies that we found, which goes with the results that we have. What you see here is actually the
1,231
1,262
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1231s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
reconstruction errors that Angela described, for each and every variable. So from a fraud analyst's point of view, that first one might be number of password changes, our initial example; the fraud analyst would see that, okay, these are the variables that are most impactful for different types of fraud. If the fraud analyst wants to see, for a particular customer, customer one
1,262
1,292
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1262s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
seven zero five, they experienced fraud and we picked up that fraud; they would see that these are the variables that actually impact that particular customer. What we added to the dataset (Angela added it) was places; we just added that randomly, but usually with a project you want to know maybe a particular area that is experiencing fraud, and by how much, and see what variables are impacting there,
1,292
1,322
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1292s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
so that you're able to contact the customer and help them accordingly. Okay, so just to share with you some key takeaways that we have from building these models at scale: the first thing is interpretability. We showed you how to interpret for this particular model, and what does happen is that if you have quite a lot of features it can be difficult for a fraud
1,322
1,357
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1322s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
analyst, or whoever the business stakeholder is, to interpret why this particular case is fraud; so that's a common problem that we have. Another problem, and maybe this is a general machine learning problem, is that if there isn't an underlying pattern in your data then the autoencoder won't do anything for it; if that's the case you could think about building
1,357
1,384
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1357s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
more features that may assist you to get a particular pattern. Then, when it comes to maintainability when we build at scale: a big reason why we chose H2O is because we build models at scale with it, so if you want to build it you can use the Kaggle data set, but we also chose it because of that. Then, just the difference between k-means and an autoencoder for an anomaly detection
1,384
1,413
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1384s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
problem: we have used k-means before, and you'd find the distance between the cluster centroid and the observation does show anomalies, but maintaining that code is hard; when you have to retrain your model with new data, your cluster centers change, your clusters change, and inevitably what you are trying to find as anomalies changes too, whereas with an autoencoder it is much more consistent. Then, with a
1,413
1,439
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1413s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
threshold, this goes to capacity: you saw we detected 105 anomalies; when you are working at scale with much more data it might be 10,000 anomalies, and sending 10,000 anomalies to an actual business call centre to work through might be a bit difficult, so they might not have the capacity to do that. So, picking a threshold, usually you have to work with the business area to understand what
1,439
1,466
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1439s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
threshold is best suited for them. And then lastly, a feedback loop: we would want, over time, to know that what we picked up as anomalies was actual fraudulent behavior, and sometimes getting that feedback loop is difficult. So just to really sum it up: concerned parent: if all your friends jumped off a bridge, would you follow them? Machine learning algorithm:
1,466
1,498
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1466s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
yes. So basically, all in all, just to sum up what we spoke about today: just because a model may say that something is anomalous, at the end of the day you also need to check, does it make sense for the use case, does it make sense to the stakeholders? Don't just listen to the machine learning algorithm; there's still more to it, you still need to bring in the human side
1,498
1,519
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1498s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
and to understand that it makes sense with the business. Yeah, thank you so much for listening. Thanks ladies, the presentation was great. With regards to the reconstruction error, have you experienced any cases where you've got really high variance in the range of your reconstruction errors, and if you have, what approaches have you taken to scale those, or have
1,519
1,557
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1519s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
you worked with them? Just so you know, let's say you've got a reconstruction error of 0.05 on one observation and then on another observation a reconstruction error of, I don't know, like a hundred; in that case, have you experienced that, and if you have, have you dealt with any sort of normalization or standardization step for the reconstruction error? The one that is
1,557
1,595
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1557s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
Alkm-PJu9To
higher, to us we're looking for that: it's noise, it's the anomaly that we're trying to find. But so, yeah, we haven't dealt with that; even with these real cases that we've worked on, it's like, okay. But I think it goes to those different architectures: if your generic reconstruction error is not finding the patterns and there's still quite a lot of
1,595
1,620
https://www.youtube.com/watch?v=Alkm-PJu9To&t=1595s
Anomaly Detection using Autoencoders
https://i.ytimg.com/vi/A…axresdefault.jpg
pPyOlGvWoXA
So then let's get started for today. Welcome to lecture 10 of CS294-158, Deep Unsupervised Learning; this lecture will be on compression. Before we dive into that, a couple of logistical things. The main logistical things ahead of you are your project milestone, which is a three-page intermediate report due on Monday, so we look forward
0
30
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=0s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
to reading those and giving you feedback in the days after the deadline, so you can make sure you're maximally on track for your final project. The other thing that's coming up: in two weeks we'll have our midterm, and we'll figure out how to do it remotely under the current circumstances, but the main thing we'll do later this week is release a set of study materials for you that
30
56
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=30s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
capture the core of the things covered in the class, the very core, compressed a little bit in terms of how much we're going to have you study, because of course it's a more difficult semester than most due to outside circumstances. So, a relatively short study guide, and it'll be a PDF with the questions and the answers, so you'll know exactly what the questions can be and what the
56
78
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=56s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
answers are that we expect you to get; that will come out later today or tomorrow for you to study. Let me pause here and see if there are any questions about logistics. Oh, and by the way, this lecture is recorded, so if for some reason you don't like your voice to be heard, just like with the in-class lectures that were recorded, please be aware of that. Alright, then let's get started with the
78
111
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=78s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
content for today. So, compression: what is it and why would we care, in general, and why would we care in this class? So what is it: with data, you might want to reduce the number of bits for encoding a message; a message could be an image you want to send, or a piece of speech, or maybe some music you want to send across a communication line, and in its original
111
140
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=111s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
format it might take up a very large number of bits, and you might want to be able to get that same information across by sending fewer bits over the communication channel. So what does it look like: you have some bit stream B on the left here, that's what you start out with; then what happens next is you want to compress it and end up with a compressed version of that bit stream,
140
168
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=140s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
and the hope is that the compressed version has fewer bits than the original. So when you send that compressed bit stream over a channel or store it on a hard drive, or whatever you want to do with those bits in a more compressed way, it's ideally a lot fewer bits, but then when you want to use it later you should be able to expand it back out, decompress it into the original. Alright, so
168
194
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=168s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
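A tiny illustration of the lossless pipeline described in this chunk (compress a bit stream, then expand it back to exactly the original), using Python's standard zlib module; this is just a generic off-the-shelf codec, not one of the learned compressors discussed later in the lecture.

```python
import zlib

original = b"an example message " * 100      # the bit stream B
compressed = zlib.compress(original)          # fewer bytes to store or send
restored = zlib.decompress(compressed)        # lossless: identical to the original
assert restored == original
print(len(original), "->", len(compressed), "bytes")
```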
pPyOlGvWoXA
why do we care? Well, you could save time, you could save bandwidth over a communications channel, you could save space when you're storing it; so there are many reasons you might care about this. From the AI point of view, and part of why it's interesting for this class, is that often the ability to compress data reflects understanding of the data by the system that compressed the data, so
194
218
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=194s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
if a system is really good at compressing data, that means the system somehow has absorbed an understanding of the data. So now, there are two types of compression: lossy versus lossless. In this lecture we'll be fully focused on lossless compression, where the original bits can be completely reconstructed at the output. Now, sometimes in practice you might care
218
243
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=218s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
about lossy compression: you say, well, I don't need all the details back, as long as I can save more bits I'm happy to lose some detail; that would be lossy compression, not the topic for this class but also a topic you might be interested in at some point, so I want to make sure you know it exists. Now, one of the very interesting things with compression is that there are some prizes associated with it.
243
264
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=243s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
So, recently Hutter actually increased the prize: it used to be a 50,000 euro prize for compressing human knowledge, and recently it went up by a factor of 10, so it's now a five hundred thousand euro prize if you can compress human knowledge. What does it mean more concretely: there's a one gigabyte file of, I believe, text, this enwik9 file, and if you can compress that to less than one hundred
264
293
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=264s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
sixteen megabytes, you win the prize, you won this thing, you cracked it. The reason Hutter put out this prize is not so much because he specifically wants that one gigabyte compressed into 116 megabytes, but because he believes that one gigabyte has interesting information, such that any system that can represent it as compactly as 116 megabytes must have made, hopefully, that's what he
293
318
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=293s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
thinks, some AI advances to be able to do that. It's pretty interesting here because, unlike most things we've covered in this class and you'll see in any kind of machine learning, there's no train/test split: it's not that he asks you to send in a compressor and has a secret test set he's going to test your compressor on to see how it works; no, it's literally there's a 1 gigabyte file, and if you can make it
318
341
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=318s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
smaller, small enough, you win the prize. Well, you've got to be able to decompress it, so you've got to be able to effectively send him something that's 116 megabytes or less that includes the code for decoding back into the one gigabyte; so you'd be sending, effectively, both the decoder program and some encoding of this one gigabyte file together, which would be able to reconstruct the original 1 gigabyte file.
341
367
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=341s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
So, a very, very specific problem: there's no test set, just that one training example, but nobody's gotten close to actually making this work. So, an interesting challenge, maybe something you want to think about at some point and see if you can make some progress. Then there's another compression challenge on images; this is often held at CVPR, the main conference for computer vision, and so
367
392
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=367s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
there's a workshop there that looks at how well your compressor does, and there it's really about a compressor that you send in, a compressor of course, and they have a secret test set on which they test how well you can compress and decompress the test examples. So, two very different challenges, but both very much at the core of what we're going to be thinking about today in lecture. All right, so why
392
419
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=392s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
in this course? It turns out that we've studied a lot of generative models in this course, and it also turns out that compression utilizes generative models: the better the generative model, the better the compression can be. And in fact Jonathan, who will cover the second half of this lecture, has made several breakthroughs in his PhD research showing how some of the state-of-the-art
419
445
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=419s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
generative models can be converted into compression algorithms, with these generative models under the hood, such that you can get better compression than you might get otherwise, and we'll cover that later. So there's a very close connection between better generative models and better compression. The material we would recommend for this lecture is this PDF
445
472
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=445s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
overview, a nice write-up that covers the background on essentially information theory slash compression, which we'll be covering in this lecture, at least in the first half; in the second half we'll dive a lot more into the deep learning aspects and how they tie into this. So, some applications you might have seen: generic file compression (gzip, 7z, zip), file systems, various multimedia formats
472
499
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=472s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
you might have seen: a JPEG file, a GIF file, MP3, MP4; communications that maybe you don't use much anymore now but where compression played a big role in the past: fax, modem, Skype, and so forth. All of these are examples where the original information might have been represented with many, many bits, too large for you to store in a file in that format, and because you can reduce them
499
527
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=499s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
and then get back out the original, you can now store it more efficiently or send it more efficiently over a communication line. When you send it more efficiently over a communication line, it can reduce both the amount of data you need to send and, in the process, also reduce the latency, because there might be less delay, assuming you can decode quickly on the other side. Now, maybe you might have
527
550
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=527s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
followed this TV show called Silicon Valley; it's, uh, well, pretty funny I would say, with many things that are maybe a little too close to home and too close to true, but still pretty funny. And if you watch that show on HBO, you'll have noticed that the central company, Pied Piper, what they put forward as their product is, well, a middle-out compression algorithm.
550
578
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=550s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg
pPyOlGvWoXA
Nobody knows what middle-out is, but they put forward a compression algorithm, and that's the secret sauce of their company. It turns out that some people really do this for their actual company: there are various startups out there that don't disclose exactly what's under the hood but invent new compression algorithms, using machine learning under the hood most likely, to
578
602
https://www.youtube.com/watch?v=pPyOlGvWoXA&t=578s
L10 Compression -- UC Berkeley, Spring 2020, CS294-158 Deep Unsupervised Learning
https://i.ytimg.com/vi/p…axresdefault.jpg