|
WEBVTT Kind: captions; Language: en-US |
|
|
|
NOTE |
|
Created on 2024-02-07T20:51:27.1993072Z by ClassTranscribe |
|
|
|
00:01:22.450 --> 00:01:23.720 |
|
Good morning, everybody. |
|
|
|
00:01:25.160 --> 00:01:25.690 |
|
Morning. |
|
|
|
00:01:29.980 --> 00:01:31.830 |
|
Alright, so I'm going to get started. |
|
|
|
00:01:31.830 --> 00:01:33.590 |
|
Just a note. |
|
|
|
00:01:33.590 --> 00:01:37.980 |
|
So I'll generally start at 9:31 |
|
|
|
00:01:37.980 --> 00:01:38.590 |
|
exactly. |
|
|
|
00:01:38.590 --> 00:01:42.600 |
|
So I give a minute of slack and. |
|
|
|
00:01:43.360 --> 00:01:44.640 |
|
At the end of the class, I'll make it |
|
|
|
00:01:44.640 --> 00:01:45.056 |
|
pretty clear. |
|
|
|
00:01:45.056 --> 00:01:46.640 |
|
When class is over, just wait till I |
|
|
|
00:01:46.640 --> 00:01:47.945 |
|
say thank you or something. |
|
|
|
00:01:47.945 --> 00:01:49.580 |
|
That kind of indicates that class is |
|
|
|
00:01:49.580 --> 00:01:50.659 |
|
over before you pack up. |
|
|
|
00:01:50.660 --> 00:01:52.684 |
|
Because otherwise like when students |
|
|
|
00:01:52.684 --> 00:01:56.080 |
|
start to pack up, like if I get to the |
|
|
|
00:01:56.080 --> 00:01:57.507 |
|
last slide and then students start to |
|
|
|
00:01:57.507 --> 00:01:59.170 |
|
pack up, it makes quite a lot of noise |
|
|
|
00:01:59.170 --> 00:02:01.590 |
|
if like a couple hundred people are packing
|
|
|
00:02:01.590 --> 00:02:02.260 |
|
up at the same time. |
|
|
|
00:02:03.490 --> 00:02:05.930 |
|
Right, so by the way, I forgot
|
|
|
00:02:05.930 --> 00:02:08.210 |
|
to mention that brain image is an image |
|
|
|
00:02:08.210 --> 00:02:09.600 |
|
that's created by DALL-E.
|
|
|
00:02:09.600 --> 00:02:10.770 |
|
You might have heard of that. |
|
|
|
00:02:10.770 --> 00:02:14.580 |
|
It's an AI image generation method
|
|
|
00:02:14.580 --> 00:02:17.440 |
|
that can take a text and
|
|
|
00:02:17.440 --> 00:02:19.210 |
|
then generate an image that matches a |
|
|
|
00:02:19.210 --> 00:02:19.480 |
|
text. |
|
|
|
00:02:20.320 --> 00:02:22.470 |
|
This is also an image that's created by |
|
|
|
00:02:22.470 --> 00:02:22.950 |
|
DALL-E.
|
|
|
00:02:24.430 --> 00:02:26.280 |
|
I forget exactly what the prompt was on |
|
|
|
00:02:26.280 --> 00:02:26.720 |
|
this one. |
|
|
|
00:02:26.720 --> 00:02:28.854 |
|
It didn't exactly match the
|
|
|
00:02:28.854 --> 00:02:29.028 |
|
prompt. |
|
|
|
00:02:29.028 --> 00:02:30.660 |
|
I think it was like, I think I said |
|
|
|
00:02:30.660 --> 00:02:32.170 |
|
something like a bunch of animals |
|
|
|
00:02:32.170 --> 00:02:32.960 |
|
some wearing
|
|
|
00:02:33.970 --> 00:02:36.580 |
|
orange vests and some wearing green
|
|
|
00:02:36.580 --> 00:02:38.090 |
|
vests standing in a line. |
|
|
|
00:02:38.090 --> 00:02:39.956 |
|
It has some trouble, like associating |
|
|
|
00:02:39.956 --> 00:02:41.900 |
|
the right words with the right objects, |
|
|
|
00:02:41.900 --> 00:02:44.030 |
|
but I still think it's pretty fitting |
|
|
|
00:02:44.030 --> 00:02:44.780 |
|
for nearest neighbor. |
|
|
|
00:02:46.130 --> 00:02:47.775 |
|
I like how there's like that one guy |
|
|
|
00:02:47.775 --> 00:02:49.700 |
|
that is like standing out. |
|
|
|
00:02:54.930 --> 00:02:58.120 |
|
So today I'm going to talk about two |
|
|
|
00:02:58.120 --> 00:02:58.860 |
|
things really. |
|
|
|
00:02:58.860 --> 00:03:01.249 |
|
So one is talking a bit more about the |
|
|
|
00:03:01.250 --> 00:03:03.540 |
|
basic process of supervised machine |
|
|
|
00:03:03.540 --> 00:03:06.320 |
|
learning, and the other is about the K |
|
|
|
00:03:06.320 --> 00:03:07.945 |
|
nearest neighbor algorithm, which is |
|
|
|
00:03:07.945 --> 00:03:10.330 |
|
one of the kind of like fundamental |
|
|
|
00:03:10.330 --> 00:03:11.800 |
|
algorithms in machine learning.
|
|
|
00:03:12.560 --> 00:03:15.635 |
|
And I'll also talk about what
|
|
|
00:03:15.635 --> 00:03:17.160 |
|
are the sources of error. |
|
|
|
00:03:17.160 --> 00:03:18.270 |
|
So why is it that —
|
|
|
00:03:18.270 --> 00:03:19.939 |
|
What are the different reasons that a |
|
|
|
00:03:19.940 --> 00:03:21.460 |
|
machine learning algorithm will make |
|
|
|
00:03:21.460 --> 00:03:24.390 |
|
test error even after it's fit the |
|
|
|
00:03:24.390 --> 00:03:24.970 |
|
training set? |
|
|
|
00:03:25.640 --> 00:03:28.650 |
|
And I'll talk about a couple of |
|
|
|
00:03:28.650 --> 00:03:29.979 |
|
applications, so I'll talk about |
|
|
|
00:03:29.980 --> 00:03:32.344 |
|
homework one, which has a couple of
|
|
|
00:03:32.344 --> 00:03:32.940 |
|
applications in it. |
|
|
|
00:03:33.540 --> 00:03:36.410 |
|
And I'll also talk about the DeepFace
|
|
|
00:03:36.410 --> 00:03:37.210 |
|
algorithm. |
|
|
|
00:03:41.160 --> 00:03:43.620 |
|
So a machine learning model is |
|
|
|
00:03:43.620 --> 00:03:46.539 |
|
something that maps from features to |
|
|
|
00:03:46.540 --> 00:03:47.430 |
|
prediction. |
|
|
|
00:03:47.430 --> 00:03:51.040 |
|
So in this notation I've got F of X is |
|
|
|
00:03:51.040 --> 00:03:53.700 |
|
mapping to Y. X are the features, F is
|
|
|
00:03:53.700 --> 00:03:55.460 |
|
some function that will have some
|
|
|
00:03:55.460 --> 00:03:56.150 |
|
parameters. |
|
|
|
00:03:56.800 --> 00:03:59.120 |
|
And Y is the prediction.
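
NOTE Added example (not from the lecture): a minimal sketch of the idea that a
model is just a function f(x; theta) mapping features to a prediction; the
dictionary layout for theta and the linear form are illustrative assumptions.
    # Hypothetical model: map a list of feature values x to a single prediction y.
    def f(x, theta):
        # e.g., a linear model: a weighted sum of the features plus a constant term
        return sum(w * xi for w, xi in zip(theta["weights"], x)) + theta["bias"]
    # Usage sketch: f([49.0, 48.0], {"weights": [0.6, 0.4], "bias": 1.0}) -> a predicted value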
|
|
|
00:04:00.050 --> 00:04:01.450 |
|
So for example you could have a |
|
|
|
00:04:01.450 --> 00:04:03.810 |
|
classification problem like is this a |
|
|
|
00:04:03.810 --> 00:04:04.530 |
|
dog or a cat? |
|
|
|
00:04:04.530 --> 00:04:06.660 |
|
And it might be based on image features |
|
|
|
00:04:06.660 --> 00:04:10.263 |
|
or image pixels and so then X are the |
|
|
|
00:04:10.263 --> 00:04:13.985 |
|
image pixels, Y is yes or no, or it |
|
|
|
00:04:13.985 --> 00:04:15.460 |
|
could be dog or cat depending on how |
|
|
|
00:04:15.460 --> 00:04:15.920 |
|
you frame it. |
|
|
|
00:04:16.940 --> 00:04:20.200 |
|
Or if the problem is, is this e-mail spam
|
|
|
00:04:20.200 --> 00:04:23.210 |
|
or not, then the features might be like |
|
|
|
00:04:23.210 --> 00:04:25.440 |
|
some summary of the words in the |
|
|
|
00:04:25.440 --> 00:04:27.680 |
|
document and the words in the subject |
|
|
|
00:04:27.680 --> 00:04:31.430 |
|
and the sender and the output is like |
|
|
|
00:04:31.430 --> 00:04:33.420 |
|
true or false or one or zero. |
|
|
|
00:04:34.600 --> 00:04:36.450 |
|
You could also have regression tasks,
|
|
|
00:04:36.450 --> 00:04:38.144 |
|
for example, what will the stock price |
|
|
|
00:04:38.144 --> 00:04:39.572 |
|
be of NVIDIA tomorrow? |
|
|
|
00:04:39.572 --> 00:04:42.260 |
|
And then the features might be the |
|
|
|
00:04:42.260 --> 00:04:44.530 |
|
historic stock prices, maybe some |
|
|
|
00:04:44.530 --> 00:04:45.960 |
|
features about what's trending on |
|
|
|
00:04:45.960 --> 00:04:47.610 |
|
Twitter, I don't know anything you |
|
|
|
00:04:47.610 --> 00:04:48.070 |
|
want. |
|
|
|
00:04:48.070 --> 00:04:50.780 |
|
And then the prediction would be the |
|
|
|
00:04:50.780 --> 00:04:53.236 |
|
numerical value of the stock price |
|
|
|
00:04:53.236 --> 00:04:53.619 |
|
tomorrow. |
|
|
|
00:04:54.360 --> 00:04:55.640 |
|
When you're training something like |
|
|
|
00:04:55.640 --> 00:04:57.150 |
|
that, you've got like a whole bunch of |
|
|
|
00:04:57.150 --> 00:04:58.020 |
|
historical data. |
|
|
|
00:04:58.020 --> 00:04:59.710 |
|
So you try to learn a model that can |
|
|
|
00:04:59.710 --> 00:05:01.780 |
|
predict the historical
|
|
|
00:05:01.780 --> 00:05:04.140 |
|
stock prices given the preceding ones, |
|
|
|
00:05:04.140 --> 00:05:05.579 |
|
and then you would hope that when you |
|
|
|
00:05:05.580 --> 00:05:08.830 |
|
apply it to today's data that it would |
|
|
|
00:05:08.830 --> 00:05:10.520 |
|
be able to predict the price tomorrow. |
|
|
|
00:05:12.140 --> 00:05:13.390 |
|
Likewise, what will be the high |
|
|
|
00:05:13.390 --> 00:05:14.410 |
|
temperature tomorrow? |
|
|
|
00:05:14.410 --> 00:05:16.297 |
|
Features might be other temperatures, |
|
|
|
00:05:16.297 --> 00:05:17.900 |
|
temperatures in other locations, other |
|
|
|
00:05:17.900 --> 00:05:21.410 |
|
kinds of barometric data, and the |
|
|
|
00:05:21.410 --> 00:05:23.530 |
|
output is some temperature. |
|
|
|
00:05:24.410 --> 00:05:25.600 |
|
Or you could have a structured |
|
|
|
00:05:25.600 --> 00:05:27.630 |
|
prediction task where you're outputting |
|
|
|
00:05:27.630 --> 00:05:29.295 |
|
not just one number, but a whole bunch |
|
|
|
00:05:29.295 --> 00:05:31.025 |
|
of numbers that are somehow related to |
|
|
|
00:05:31.025 --> 00:05:31.690 |
|
each other. |
|
|
|
00:05:31.690 --> 00:05:33.420 |
|
For example, what is the pose of this |
|
|
|
00:05:33.420 --> 00:05:33.925 |
|
person? |
|
|
|
00:05:33.925 --> 00:05:36.767 |
|
You would output positions of each of |
|
|
|
00:05:36.767 --> 00:05:38.520 |
|
the key points on the person's body. |
|
|
|
00:05:40.140 --> 00:05:40.440 |
|
Right. |
|
|
|
00:05:40.440 --> 00:05:42.313 |
|
All of these though are just mapping a |
|
|
|
00:05:42.313 --> 00:05:45.315 |
|
set of features to some label or to some
|
|
|
00:05:45.315 --> 00:05:46.170 |
|
set of labels. |
|
|
|
00:05:48.630 --> 00:05:50.720 |
|
Machine learning has three stages.
|
|
|
00:05:50.720 --> 00:05:52.870 |
|
There's a training stage which is when |
|
|
|
00:05:52.870 --> 00:05:54.580 |
|
you optimize the model parameters. |
|
|
|
00:05:55.620 --> 00:05:58.260 |
|
There is a validation stage, which is |
|
|
|
00:05:58.260 --> 00:06:00.820 |
|
when you evaluate some model that's |
|
|
|
00:06:00.820 --> 00:06:03.357 |
|
been optimized and use the validation |
|
|
|
00:06:03.357 --> 00:06:06.317 |
|
to select among possible models or to |
|
|
|
00:06:06.317 --> 00:06:08.720 |
|
select among some parameters that you |
|
|
|
00:06:08.720 --> 00:06:09.900 |
|
set for those models. |
|
|
|
00:06:10.520 --> 00:06:13.380 |
|
So the training is purely optimizing |
|
|
|
00:06:13.380 --> 00:06:15.530 |
|
some model design that you have on
|
|
|
00:06:15.530 --> 00:06:16.590 |
|
the training data. |
|
|
|
00:06:16.590 --> 00:06:18.480 |
|
The validation is saying whether that |
|
|
|
00:06:18.480 --> 00:06:19.800 |
|
was a good model design. |
|
|
|
00:06:20.420 --> 00:06:23.150 |
|
And so you might iterate between the |
|
|
|
00:06:23.150 --> 00:06:25.290 |
|
training and the validation many times. |
|
|
|
00:06:25.290 --> 00:06:27.290 |
|
At the end of that, you'll pick what |
|
|
|
00:06:27.290 --> 00:06:29.290 |
|
you think is the most effective model, |
|
|
|
00:06:29.290 --> 00:06:30.680 |
|
and then ideally that should be |
|
|
|
00:06:30.680 --> 00:06:33.710 |
|
evaluated only once on the test data as |
|
|
|
00:06:33.710 --> 00:06:35.440 |
|
a measure of the final performance. |
|
|
|
00:06:39.330 --> 00:06:43.010 |
|
So training is fitting the data to |
|
|
|
00:06:43.010 --> 00:06:46.190 |
|
minimize some loss or maximize some |
|
|
|
00:06:46.190 --> 00:06:47.115 |
|
objective function. |
|
|
|
00:06:47.115 --> 00:06:49.490 |
|
So there's kind of a lot to unpack in |
|
|
|
00:06:49.490 --> 00:06:51.180 |
|
this one little equation. |
|
|
|
00:06:51.180 --> 00:06:54.290 |
|
So first the Theta here are the |
|
|
|
00:06:54.290 --> 00:06:56.405 |
|
parameters of the model, so that's what |
|
|
|
00:06:56.405 --> 00:06:57.610 |
|
would be optimized. |
|
|
|
00:06:57.610 --> 00:06:59.566 |
|
And here I'm writing it as minimizing |
|
|
|
00:06:59.566 --> 00:07:01.650 |
|
some loss, which is the most common way |
|
|
|
00:07:01.650 --> 00:07:02.280 |
|
you would see it. |
|
|
|
00:07:03.350 --> 00:07:06.020 |
|
Theta star is the Theta that minimizes |
|
|
|
00:07:06.020 --> 00:07:06.920 |
|
that loss. |
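
NOTE The objective being described, written out as a sketch in LaTeX (L is the
loss, the (x_i, y_i) are the N training pairs, and theta are the model parameters):
    \theta^{*} \;=\; \arg\min_{\theta} \; \sum_{i=1}^{N} L\bigl(f(x_i;\theta),\, y_i\bigr)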
|
|
|
00:07:07.290 --> 00:07:10.080 |
|
The loss, I'll get to it, can have
|
|
|
00:07:10.080 --> 00:07:11.440 |
|
different definitions.
|
|
|
00:07:11.440 --> 00:07:13.209 |
|
It could be, for example, a 0-1
|
|
|
00:07:13.210 --> 00:07:15.220 |
|
classification loss or a cross entropy |
|
|
|
00:07:15.220 --> 00:07:15.620 |
|
loss. |
|
|
|
00:07:15.620 --> 00:07:18.070 |
|
That's evaluating the likelihood of the |
|
|
|
00:07:18.070 --> 00:07:19.820 |
|
ground truth labels given the data. |
|
|
|
00:07:21.330 --> 00:07:23.427 |
|
You've got your model F, you've got |
|
|
|
00:07:23.427 --> 00:07:24.840 |
|
your features X. |
|
|
|
00:07:25.850 --> 00:07:28.430 |
|
Those arrows are slightly off. And your
|
|
|
00:07:28.430 --> 00:07:29.510 |
|
ground truth prediction. |
|
|
|
00:07:29.510 --> 00:07:31.580 |
|
So Capital X, capital Y here are the |
|
|
|
00:07:31.580 --> 00:07:34.230 |
|
training data, and those are
|
|
|
00:07:34.230 --> 00:07:36.980 |
|
pairs of examples, meaning
|
|
|
00:07:36.980 --> 00:07:38.920 |
|
that you've got pairs of features and |
|
|
|
00:07:38.920 --> 00:07:40.250 |
|
then what you're supposed to predict |
|
|
|
00:07:40.250 --> 00:07:41.020 |
|
from those features. |
|
|
|
00:07:43.820 --> 00:07:45.750 |
|
So here's one example. |
|
|
|
00:07:45.750 --> 00:07:48.040 |
|
Let's say that we want to learn to |
|
|
|
00:07:48.040 --> 00:07:49.590 |
|
predict the next day's temperature |
|
|
|
00:07:49.590 --> 00:07:51.410 |
|
given the preceding day temperatures. |
|
|
|
00:07:51.410 --> 00:07:53.520 |
|
So the way that you would commonly |
|
|
|
00:07:53.520 --> 00:07:55.000 |
|
formulate this is you'd have some |
|
|
|
00:07:55.000 --> 00:07:56.810 |
|
matrix of features this X. |
|
|
|
00:07:56.810 --> 00:08:00.000 |
|
So in Python you just have a 2D NumPy
|
|
|
00:08:00.000 --> 00:08:00.400 |
|
array.
|
|
|
00:08:01.110 --> 00:08:04.462 |
|
And you would often store it so that
|
|
|
00:08:04.462 --> 00:08:06.330 |
|
you have one row per example. |
|
|
|
00:08:06.330 --> 00:08:07.970 |
|
So each one of these rows. |
|
|
|
00:08:07.970 --> 00:08:10.716 |
|
Here is a different example, and if you |
|
|
|
00:08:10.716 --> 00:08:12.850 |
|
have 1000 training examples, you'd have |
|
|
|
00:08:12.850 --> 00:08:13.510 |
|
1000 rows. |
|
|
|
00:08:14.410 --> 00:08:16.090 |
|
And then you have one column per |
|
|
|
00:08:16.090 --> 00:08:16.840 |
|
feature. |
|
|
|
00:08:16.840 --> 00:08:19.730 |
|
So this might be the temperature of the |
|
|
|
00:08:19.730 --> 00:08:21.650 |
|
preceding day, the temperature of two |
|
|
|
00:08:21.650 --> 00:08:23.415 |
|
days ago, three days ago, four days |
|
|
|
00:08:23.415 --> 00:08:23.740 |
|
ago. |
|
|
|
00:08:23.740 --> 00:08:25.530 |
|
And this training data would probably |
|
|
|
00:08:25.530 --> 00:08:27.390 |
|
be based on, like, historical data |
|
|
|
00:08:27.390 --> 00:08:28.170 |
|
that's available. |
|
|
|
00:08:29.840 --> 00:08:32.505 |
|
And then Y is what you need to predict. |
|
|
|
00:08:32.505 --> 00:08:35.275 |
|
So the goal is to predict, for example |
|
|
|
00:08:35.275 --> 00:08:38.840 |
|
50.5 based on these numbers here, to |
|
|
|
00:08:38.840 --> 00:08:41.025 |
|
predict 47.3 from these numbers here,
|
|
|
00:08:41.025 --> 00:08:43.290 |
|
and so on. So you'll have the same
|
|
|
00:08:43.290 --> 00:08:45.640 |
|
number of rows and your Y as you have |
|
|
|
00:08:45.640 --> 00:08:48.040 |
|
in your X, but X will have a number of |
|
|
|
00:08:48.040 --> 00:08:50.677 |
|
columns that corresponds to the number |
|
|
|
00:08:50.677 --> 00:08:51.570 |
|
of features. |
|
|
|
00:08:51.570 --> 00:08:53.890 |
|
And if you're just predicting
|
|
|
00:08:53.890 --> 00:08:55.250 |
|
a single number, then you will only |
|
|
|
00:08:55.250 --> 00:08:56.100 |
|
have one column. |
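
NOTE Added example (not from the slides): a minimal NumPy sketch of the shapes
being described; the temperature values are made up.
    import numpy as np
    # One row per example, one column per feature (temperature 1, 2, 3, 4 days ago).
    X = np.array([[49.0, 48.0, 52.0, 51.0],
                  [46.0, 49.0, 48.0, 52.0],
                  [50.0, 46.0, 49.0, 48.0]])   # shape (3, 4): 3 examples, 4 features
    # One target per row of X: the next day's temperature to predict.
    y = np.array([50.5, 47.3, 49.0])           # shape (3,): a single column of targets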
|
|
|
00:08:58.790 --> 00:09:00.270 |
|
So for this problem, it might be |
|
|
|
00:09:00.270 --> 00:09:03.240 |
|
natural to use a squared loss, which is |
|
|
|
00:09:03.240 --> 00:09:06.620 |
|
that we're going to say:
|
|
|
00:09:07.360 --> 00:09:09.330 |
|
We want to minimize the squared |
|
|
|
00:09:09.330 --> 00:09:11.710 |
|
difference between each prediction F of |
|
|
|
00:09:11.710 --> 00:09:13.580 |
|
XI given Theta. |
|
|
|
00:09:14.400 --> 00:09:17.630 |
|
That is the prediction on the ith training
|
|
|
00:09:17.630 --> 00:09:20.900 |
|
features given the parameters Theta. |
|
|
|
00:09:21.730 --> 00:09:24.810 |
|
And I want to make that as close as |
|
|
|
00:09:24.810 --> 00:09:27.387 |
|
possible to the correct value Yi and |
|
|
|
00:09:27.387 --> 00:09:29.973 |
|
I'm going to say as close as
|
|
|
00:09:29.973 --> 00:09:32.040 |
|
possible is defined by a squared |
|
|
|
00:09:32.040 --> 00:09:32.990 |
|
difference. |
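
NOTE The squared-loss version of the same objective, written out as a sketch:
    \theta^{*} \;=\; \arg\min_{\theta} \; \sum_{i=1}^{N} \bigl(f(x_i;\theta) - y_i\bigr)^{2}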
|
|
|
00:09:35.410 --> 00:09:37.470 |
|
And I might say for this I'm going to |
|
|
|
00:09:37.470 --> 00:09:39.720 |
|
use a linear model, so we'll talk about |
|
|
|
00:09:39.720 --> 00:09:42.720 |
|
linear models in more detail next |
|
|
|
00:09:42.720 --> 00:09:45.385 |
|
Thursday, but it's pretty intuitive. |
|
|
|
00:09:45.385 --> 00:09:47.850 |
|
You just have: for each of your
|
|
|
00:09:47.850 --> 00:09:48.105 |
|
features,
|
|
|
00:09:48.105 --> 00:09:49.710 |
|
you have some coefficient that's
|
|
|
00:09:49.710 --> 00:09:51.800 |
|
multiplied by those features, you sum |
|
|
|
00:09:51.800 --> 00:09:53.099 |
|
them up, and then you have some |
|
|
|
00:09:53.100 --> 00:09:53.980 |
|
constant term. |
|
|
|
00:09:55.170 --> 00:09:56.829 |
|
And then if we wanted to optimize this |
|
|
|
00:09:56.830 --> 00:09:58.900 |
|
model, we could optimize it using |
|
|
|
00:09:58.900 --> 00:10:00.390 |
|
ordinary least squares regression, |
|
|
|
00:10:00.390 --> 00:10:01.610 |
|
which again we'll talk about next |
|
|
|
00:10:01.610 --> 00:10:02.300 |
|
Thursday. |
|
|
|
00:10:02.300 --> 00:10:03.980 |
|
So the details of this aren't |
|
|
|
00:10:03.980 --> 00:10:06.170 |
|
important, but the example is just to |
|
|
|
00:10:06.170 --> 00:10:08.820 |
|
give you a sense of what the training |
|
|
|
00:10:08.820 --> 00:10:09.710 |
|
process involves. |
|
|
|
00:10:09.710 --> 00:10:11.770 |
|
You have a feature matrix X. |
|
|
|
00:10:11.770 --> 00:10:13.789 |
|
You have a prediction vector or matrix
|
|
|
00:10:13.790 --> 00:10:14.120 |
|
Y. |
|
|
|
00:10:14.950 --> 00:10:16.550 |
|
You have to define a loss, define a |
|
|
|
00:10:16.550 --> 00:10:18.130 |
|
model and figure out how you're going |
|
|
|
00:10:18.130 --> 00:10:18.895 |
|
to optimize it. |
|
|
|
00:10:18.895 --> 00:10:20.350 |
|
And then you would actually perform the |
|
|
|
00:10:20.350 --> 00:10:22.420 |
|
optimization, get the parameters, and |
|
|
|
00:10:22.420 --> 00:10:23.110 |
|
that's training. |
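
NOTE Added example (not from the slides): a sketch of that training step for a
linear model with a squared loss, solved by ordinary least squares; X and y are
the toy temperature arrays from the note above.
    import numpy as np
    X = np.array([[49.0, 48.0, 52.0, 51.0],
                  [46.0, 49.0, 48.0, 52.0],
                  [50.0, 46.0, 49.0, 48.0]])
    y = np.array([50.5, 47.3, 49.0])
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])    # extra column of ones for the constant term
    theta, *_ = np.linalg.lstsq(Xb, y, rcond=None)   # parameters minimizing the sum of squared errors
    y_hat = Xb @ theta                               # predictions on the training examples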
|
|
|
00:10:25.480 --> 00:10:28.050 |
|
So often you'll have a bunch of design |
|
|
|
00:10:28.050 --> 00:10:29.470 |
|
decisions when you're faced with some |
|
|
|
00:10:29.470 --> 00:10:30.782 |
|
kind of machine learning problem. |
|
|
|
00:10:30.782 --> 00:10:33.660 |
|
So you might say, well, maybe that |
|
|
|
00:10:33.660 --> 00:10:35.520 |
|
temperature prediction problem, maybe a |
|
|
|
00:10:35.520 --> 00:10:38.450 |
|
linear regressor is good enough. |
|
|
|
00:10:38.450 --> 00:10:40.493 |
|
Maybe I need a neural network. |
|
|
|
00:10:40.493 --> 00:10:42.370 |
|
Maybe I should use a decision tree. |
|
|
|
00:10:42.370 --> 00:10:44.066 |
|
So you might have different algorithms |
|
|
|
00:10:44.066 --> 00:10:45.610 |
|
that you're considering trying. |
|
|
|
00:10:46.240 --> 00:10:48.460 |
|
And even for each of those algorithms, |
|
|
|
00:10:48.460 --> 00:10:50.320 |
|
there might be different parameters |
|
|
|
00:10:50.320 --> 00:10:51.925 |
|
that you're considering, like what's |
|
|
|
00:10:51.925 --> 00:10:53.280 |
|
the depth of the tree that I should |
|
|
|
00:10:53.280 --> 00:10:53.580 |
|
use. |
|
|
|
00:10:55.190 --> 00:10:58.160 |
|
And so it's important to have
|
|
|
00:10:58.160 --> 00:11:00.450 |
|
some kind of validation set that you |
|
|
|
00:11:00.450 --> 00:11:01.960 |
|
can use to. |
|
|
|
00:11:02.990 --> 00:11:05.022 |
|
That you can use to determine how good |
|
|
|
00:11:05.022 --> 00:11:07.020 |
|
the model is that you chose, or
|
|
|
00:11:07.020 --> 00:11:08.755 |
|
how good the design parameters of that |
|
|
|
00:11:08.755 --> 00:11:09.160 |
|
model are. |
|
|
|
00:11:09.920 --> 00:11:12.766 |
|
So for each one of the different kind |
|
|
|
00:11:12.766 --> 00:11:13.940 |
|
of model designs that you're |
|
|
|
00:11:13.940 --> 00:11:15.592 |
|
considering, you would train your model |
|
|
|
00:11:15.592 --> 00:11:16.980 |
|
and then you evaluate it on a |
|
|
|
00:11:16.980 --> 00:11:19.300 |
|
validation set and then you choose the |
|
|
|
00:11:19.300 --> 00:11:20.210 |
|
best of those. |
|
|
|
00:11:21.100 --> 00:11:23.390 |
|
The best of those models as your
|
|
|
00:11:23.390 --> 00:11:24.160 |
|
final model. |
|
|
|
00:11:25.280 --> 00:11:28.296 |
|
So if
|
|
|
00:11:28.296 --> 00:11:30.900 |
|
you're getting data sets from online, |
|
|
|
00:11:30.900 --> 00:11:32.460 |
|
the data sets
|
|
|
00:11:32.460 --> 00:11:35.050 |
|
will almost always have a train and
|
|
|
00:11:35.050 --> 00:11:37.200 |
|
a test set that is designated for you. |
|
|
|
00:11:37.200 --> 00:11:38.620 |
|
Which means that you can do all the |
|
|
|
00:11:38.620 --> 00:11:39.980 |
|
training on the train set, but you |
|
|
|
00:11:39.980 --> 00:11:41.400 |
|
shouldn't look at the test set until |
|
|
|
00:11:41.400 --> 00:11:42.500 |
|
you're ready to do your final |
|
|
|
00:11:42.500 --> 00:11:43.210 |
|
evaluation. |
|
|
|
00:11:44.090 --> 00:11:45.790 |
|
They don't always have a train and
|
|
|
00:11:45.790 --> 00:11:47.900 |
|
val split, so sometimes you need to
|
|
|
00:11:47.900 --> 00:11:49.680 |
|
separate out a portion of the training |
|
|
|
00:11:49.680 --> 00:11:51.240 |
|
data and use it for validation. |
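
NOTE Added example (not from the lecture): a sketch of holding out part of the
training data as a validation set; the toy arrays and the 20% fraction are illustrative.
    import numpy as np
    X_train = np.arange(20.0).reshape(10, 2)     # placeholder training features
    y_train = np.arange(10.0)                    # placeholder training targets
    rng = np.random.default_rng(0)
    idx = rng.permutation(len(X_train))          # shuffle the example indices
    n_val = int(0.2 * len(idx))                  # e.g., keep 20% of examples for validation
    X_val, y_val = X_train[idx[:n_val]], y_train[idx[:n_val]]
    X_tr,  y_tr  = X_train[idx[n_val:]], y_train[idx[n_val:]]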
|
|
|
00:11:52.820 --> 00:11:55.750 |
|
So the reason that this is important is
|
|
|
00:11:55.750 --> 00:11:59.680 |
|
because otherwise you will end up over |
|
|
|
00:11:59.680 --> 00:12:01.050 |
|
optimizing for your test set. |
|
|
|
00:12:01.050 --> 00:12:03.120 |
|
If you evaluate 1000 different models |
|
|
|
00:12:03.120 --> 00:12:04.820 |
|
and you choose the best one for |
|
|
|
00:12:04.820 --> 00:12:08.401 |
|
testing, then you don't really know if |
|
|
|
00:12:08.401 --> 00:12:09.759 |
|
that test performance is really |
|
|
|
00:12:09.760 --> 00:12:12.080 |
|
reflecting the performance that you |
|
|
|
00:12:12.080 --> 00:12:13.510 |
|
would see with another random set of |
|
|
|
00:12:13.510 --> 00:12:15.250 |
|
test examples because you've optimized |
|
|
|
00:12:15.250 --> 00:12:17.120 |
|
your model selection for that test set. |
|
|
|
00:12:20.220 --> 00:12:22.440 |
|
And then the final stage is evaluation
|
|
|
00:12:22.440 --> 00:12:23.570 |
|
or testing. |
|
|
|
00:12:23.570 --> 00:12:26.090 |
|
And here you have some held-out
|
|
|
00:12:26.090 --> 00:12:28.220 |
|
set of test examples that are not
|
|
|
00:12:28.220 --> 00:12:29.340 |
|
used in training. |
|
|
|
00:12:29.340 --> 00:12:30.890 |
|
Because you want to make sure that your |
|
|
|
00:12:30.890 --> 00:12:33.370 |
|
model does not only work well on the |
|
|
|
00:12:33.370 --> 00:12:35.580 |
|
things that it fit to, but it will also |
|
|
|
00:12:35.580 --> 00:12:36.999 |
|
work well if you give it some new |
|
|
|
00:12:37.000 --> 00:12:37.610 |
|
example. |
|
|
|
00:12:37.610 --> 00:12:39.445 |
|
Because you're not really interested in |
|
|
|
00:12:39.445 --> 00:12:41.045 |
|
making predictions for the data where |
|
|
|
00:12:41.045 --> 00:12:42.740 |
|
you already know the value of the |
|
|
|
00:12:42.740 --> 00:12:44.370 |
|
prediction, you're interested in making |
|
|
|
00:12:44.370 --> 00:12:44.870 |
|
new predictions. |
|
|
|
00:12:44.870 --> 00:12:46.955 |
|
You want to predict tomorrow's |
|
|
|
00:12:46.955 --> 00:12:48.650 |
|
temperature, even though nobody knows |
|
|
|
00:12:48.650 --> 00:12:50.090 |
|
tomorrow's temperature or tomorrow's |
|
|
|
00:12:50.090 --> 00:12:50.680 |
|
stock price. |
|
|
|
00:12:52.790 --> 00:12:55.255 |
|
Though the term held out means that |
|
|
|
00:12:55.255 --> 00:12:57.440 |
|
it's not used at all in the training |
|
|
|
00:12:57.440 --> 00:13:00.337 |
|
process, and that should mean that it's |
|
|
|
00:13:00.337 --> 00:13:00.690 |
|
not. |
|
|
|
00:13:00.690 --> 00:13:02.123 |
|
You don't even look at it, you're not |
|
|
|
00:13:02.123 --> 00:13:04.380 |
|
even aware of what those values are. |
|
|
|
00:13:04.380 --> 00:13:07.335 |
|
So in the cleanest setups,
|
|
|
00:13:07.335 --> 00:13:10.660 |
|
the test data is on some evaluation |
|
|
|
00:13:10.660 --> 00:13:13.090 |
|
server that people cannot access if |
|
|
|
00:13:13.090 --> 00:13:13.530 |
|
they're doing. |
|
|
|
00:13:13.530 --> 00:13:15.830 |
|
If there's some kind of benchmark, |
|
|
|
00:13:15.830 --> 00:13:16.900 |
|
like a research benchmark.
|
|
|
00:13:17.610 --> 00:13:19.720 |
|
And in many setups you're not allowed |
|
|
|
00:13:19.720 --> 00:13:22.690 |
|
to even evaluate your method more than |
|
|
|
00:13:22.690 --> 00:13:25.185 |
|
once a week, to make sure
|
|
|
00:13:25.185 --> 00:13:27.325 |
|
that people are not like trying out |
|
|
|
00:13:27.325 --> 00:13:28.695 |
|
many different things and then choosing |
|
|
|
00:13:28.695 --> 00:13:30.140 |
|
the best one based on the test set. |
|
|
|
00:13:31.830 --> 00:13:33.180 |
|
So I'm not going to go through these |
|
|
|
00:13:33.180 --> 00:13:34.580 |
|
performance measures, but there's lots |
|
|
|
00:13:34.580 --> 00:13:36.369 |
|
of different performance measures that |
|
|
|
00:13:36.370 --> 00:13:37.340 |
|
people could use. |
|
|
|
00:13:37.340 --> 00:13:39.390 |
|
The most common for classification is |
|
|
|
00:13:39.390 --> 00:13:41.410 |
|
just the classification
|
|
|
00:13:41.410 --> 00:13:43.740 |
|
error, which is the percent of times |
|
|
|
00:13:43.740 --> 00:13:46.680 |
|
that your classifier is wrong. |
|
|
|
00:13:46.680 --> 00:13:48.200 |
|
Obviously you want that to be low. |
|
|
|
00:13:49.020 --> 00:13:50.850 |
|
Accuracy is just one minus the error. |
|
|
|
00:13:51.660 --> 00:13:54.835 |
|
And then for regression you might use |
|
|
|
00:13:54.835 --> 00:13:56.780 |
|
like a root mean squared error, which |
|
|
|
00:13:56.780 --> 00:13:59.725 |
|
is more or less your
|
|
|
00:13:59.725 --> 00:14:02.410 |
|
average distance from prediction to
|
|
|
00:14:03.930 --> 00:14:08.370 |
|
the true value. Or like an R2,
|
|
|
00:14:08.370 --> 00:14:10.060 |
|
which is like how much of the variance |
|
|
|
00:14:10.060 --> 00:14:11.380 |
|
does your regressor explain?
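
NOTE Added example (not from the slides): a sketch of the performance measures
just mentioned; the toy label and value arrays are made up.
    import numpy as np
    y_true_cls = np.array([1, 0, 1, 1])
    y_pred_cls = np.array([1, 0, 0, 1])
    error = np.mean(y_pred_cls != y_true_cls)        # classification error: fraction of wrong predictions
    accuracy = 1.0 - error                           # accuracy is one minus the error
    y_true = np.array([50.5, 47.3, 49.0])
    y_pred = np.array([51.0, 46.0, 49.5])
    rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))  # root mean squared error
    r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)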
|
|
|
00:14:14.400 --> 00:14:15.190 |
|
So. |
|
|
|
00:14:15.300 --> 00:14:15.900 |
|
|
|
|
|
00:14:16.730 --> 00:14:18.470 |
|
If you're doing machine learning
|
|
|
00:14:18.470 --> 00:14:19.190 |
|
research. |
|
|
|
00:14:20.060 --> 00:14:21.890 |
|
Usually the way the data is collected |
|
|
|
00:14:21.890 --> 00:14:23.700 |
|
is that somebody collects like a
|
|
|
00:14:23.700 --> 00:14:26.010 |
|
big pool of data and then they randomly |
|
|
|
00:14:26.010 --> 00:14:28.750 |
|
sample from that one pool of data to |
|
|
|
00:14:28.750 --> 00:14:30.520 |
|
get their training and test splits. |
|
|
|
00:14:31.300 --> 00:14:33.790 |
|
And that means that those training and |
|
|
|
00:14:33.790 --> 00:14:36.930 |
|
test samples are sampled from the same |
|
|
|
00:14:36.930 --> 00:14:37.350 |
|
distribution. |
|
|
|
00:14:37.350 --> 00:14:40.300 |
|
They're what's called IID, which means |
|
|
|
00:14:40.300 --> 00:14:41.610 |
|
independent and identically |
|
|
|
00:14:41.610 --> 00:14:43.695 |
|
distributed, and it just means that |
|
|
|
00:14:43.695 --> 00:14:44.975 |
|
they're coming from the same |
|
|
|
00:14:44.975 --> 00:14:45.260 |
|
distribution. |
|
|
|
00:14:46.290 --> 00:14:48.000 |
|
In the real world, though, that's often |
|
|
|
00:14:48.000 --> 00:14:48.840 |
|
not the case. |
|
|
|
00:14:48.840 --> 00:14:50.640 |
|
So a lot of machine learning
|
|
|
00:14:50.640 --> 00:14:52.080 |
|
theory is predicated. |
|
|
|
00:14:52.080 --> 00:14:55.990 |
|
It depends on the assumption that the |
|
|
|
00:14:55.990 --> 00:14:57.540 |
|
training and test data are coming from |
|
|
|
00:14:57.540 --> 00:15:00.205 |
|
the same distribution but in the real |
|
|
|
00:15:00.205 --> 00:15:00.500 |
|
world. |
|
|
|
00:15:01.550 --> 00:15:03.120 |
|
Often they're different distributions. |
|
|
|
00:15:03.120 --> 00:15:07.330 |
|
For example, you might be
|
|
|
00:15:07.330 --> 00:15:11.272 |
|
trying to, like, categorize
|
|
|
00:15:11.272 --> 00:15:14.980 |
|
images, but the images that you collect |
|
|
|
00:15:14.980 --> 00:15:17.680 |
|
for training are going to be
|
|
|
00:15:17.680 --> 00:15:19.240 |
|
different from what the users provide
|
|
|
00:15:19.240 --> 00:15:20.220 |
|
to your system. |
|
|
|
00:15:20.220 --> 00:15:21.740 |
|
Or you might be trying to recognize |
|
|
|
00:15:21.740 --> 00:15:23.650 |
|
faces, but you don't have access to all |
|
|
|
00:15:23.650 --> 00:15:24.900 |
|
the faces in the world. |
|
|
|
00:15:24.900 --> 00:15:26.830 |
|
You have access to faces of people that |
|
|
|
00:15:26.830 --> 00:15:28.490 |
|
volunteer to give you their data, which
|
|
|
00:15:28.490 --> 00:15:29.955 |
|
may be a different distribution than |
|
|
|
00:15:29.955 --> 00:15:30.960 |
|
the end users
|
|
|
00:15:31.190 --> 00:15:32.200 |
|
of your application.
|
|
|
00:15:33.440 --> 00:15:34.760 |
|
Or it may be that things change |
|
|
|
00:15:34.760 --> 00:15:37.490 |
|
over time and so the distribution
|
|
|
00:15:37.490 --> 00:15:37.970 |
|
changes. |
|
|
|
00:15:39.350 --> 00:15:41.180 |
|
So yes, go ahead. |
|
|
|
00:15:47.900 --> 00:15:52.190 |
|
So if the distribution changes, the. |
|
|
|
00:15:54.170 --> 00:15:55.660 |
|
So this is kind of where it gets |
|
|
|
00:15:55.660 --> 00:15:57.908 |
|
different between research and |
|
|
|
00:15:57.908 --> 00:16:00.679 |
|
practice, because in practice the |
|
|
|
00:16:00.680 --> 00:16:02.580 |
|
distribution changes and you don't |
|
|
|
00:16:02.580 --> 00:16:02.976 |
|
know. |
|
|
|
00:16:02.976 --> 00:16:05.570 |
|
Like you have to then collect another |
|
|
|
00:16:05.570 --> 00:16:08.210 |
|
test set based on your users data and |
|
|
|
00:16:08.210 --> 00:16:08.980 |
|
annotate it. |
|
|
|
00:16:08.980 --> 00:16:10.810 |
|
And then you could evaluate how you're |
|
|
|
00:16:10.810 --> 00:16:12.740 |
|
actually doing on user data, but then |
|
|
|
00:16:12.740 --> 00:16:14.635 |
|
it might change again because things in |
|
|
|
00:16:14.635 --> 00:16:16.345 |
|
the world change and your users change |
|
|
|
00:16:16.345 --> 00:16:16.620 |
|
so. |
|
|
|
00:16:17.660 --> 00:16:19.270 |
|
So you have like this kind of |
|
|
|
00:16:19.270 --> 00:16:21.480 |
|
intrinsically unknown thing about what |
|
|
|
00:16:21.480 --> 00:16:23.460 |
|
is the true test distribution in |
|
|
|
00:16:23.460 --> 00:16:24.020 |
|
practice. |
|
|
|
00:16:24.810 --> 00:16:28.835 |
|
In an experiment, if you
|
|
|
00:16:28.835 --> 00:16:30.816 |
|
have what's called a
|
|
|
00:16:30.816 --> 00:16:33.579 |
|
domain shift, where the test
|
|
|
00:16:33.580 --> 00:16:34.780 |
|
distribution is different than |
|
|
|
00:16:34.780 --> 00:16:35.450 |
|
training. |
|
|
|
00:16:35.450 --> 00:16:37.033 |
|
For example, in a driving application |
|
|
|
00:16:37.033 --> 00:16:41.014 |
|
you could say
|
|
|
00:16:41.014 --> 00:16:44.575 |
|
you have to train it on nice weather |
|
|
|
00:16:44.575 --> 00:16:46.240 |
|
days, but it could be tested on foggy |
|
|
|
00:16:46.240 --> 00:16:46.580 |
|
days. |
|
|
|
00:16:47.440 --> 00:16:49.970 |
|
And then you kind of can know what the |
|
|
|
00:16:49.970 --> 00:16:52.230 |
|
distribution shift is, and sometimes |
|
|
|
00:16:52.230 --> 00:16:54.135 |
|
you're allowed to take that test data |
|
|
|
00:16:54.135 --> 00:16:56.591 |
|
and learn unsupervised to adapt to that |
|
|
|
00:16:56.591 --> 00:16:58.970 |
|
test data, and you can evaluate how you |
|
|
|
00:16:58.970 --> 00:16:59.390 |
|
did. |
|
|
|
00:16:59.390 --> 00:17:01.800 |
|
So in the research world, where like
|
|
|
00:17:01.800 --> 00:17:03.590 |
|
all the test and training data is
|
|
|
00:17:03.590 --> 00:17:04.470 |
|
known up front. |
|
|
|
00:17:04.470 --> 00:17:06.070 |
|
You still have like a lot more control |
|
|
|
00:17:06.070 --> 00:17:07.630 |
|
and a lot more knowledge than you often |
|
|
|
00:17:07.630 --> 00:17:09.090 |
|
do in application scenario. |
|
|
|
00:17:16.920 --> 00:17:20.800 |
|
So this is a recap of the training and |
|
|
|
00:17:20.800 --> 00:17:21.920 |
|
evaluation procedure. |
|
|
|
00:17:22.700 --> 00:17:25.790 |
|
You start with, ideally,
|
|
|
00:17:25.790 --> 00:17:27.500 |
|
some training data, some validation |
|
|
|
00:17:27.500 --> 00:17:28.750 |
|
data, some test data. |
|
|
|
00:17:29.900 --> 00:17:33.400 |
|
You have some model training and design |
|
|
|
00:17:33.400 --> 00:17:35.060 |
|
phase, so you. |
|
|
|
00:17:36.270 --> 00:17:39.670 |
|
You have some idea of what the
|
|
|
00:17:39.670 --> 00:17:41.370 |
|
different models might be that you want |
|
|
|
00:17:41.370 --> 00:17:42.060 |
|
to evaluate. |
|
|
|
00:17:42.060 --> 00:17:44.260 |
|
You have an algorithm to train those |
|
|
|
00:17:44.260 --> 00:17:44.790 |
|
models. |
|
|
|
00:17:44.790 --> 00:17:47.003 |
|
So you take the training data, apply it |
|
|
|
00:17:47.003 --> 00:17:48.503 |
|
to that design, you get some |
|
|
|
00:17:48.503 --> 00:17:49.935 |
|
parameters, that's your model. |
|
|
|
00:17:49.935 --> 00:17:52.180 |
|
Evaluate those parameters on the |
|
|
|
00:17:52.180 --> 00:17:55.730 |
|
validation set, and that's the model validation
|
|
|
00:17:55.730 --> 00:17:55.970 |
|
there. |
|
|
|
00:17:56.590 --> 00:17:59.160 |
|
And then you might look at those |
|
|
|
00:17:59.160 --> 00:18:01.160 |
|
results and be like, I think I can do |
|
|
|
00:18:01.160 --> 00:18:01.510 |
|
better. |
|
|
|
00:18:01.510 --> 00:18:03.390 |
|
So you go back to the drawing board, |
|
|
|
00:18:03.390 --> 00:18:05.880 |
|
redo your designs and then you repeat |
|
|
|
00:18:05.880 --> 00:18:08.642 |
|
that process until finally you say now |
|
|
|
00:18:08.642 --> 00:18:10.830 |
|
I think this is the best model that I can
|
|
|
00:18:10.830 --> 00:18:12.790 |
|
possibly get, and then you evaluate it |
|
|
|
00:18:12.790 --> 00:18:13.560 |
|
on your test set. |
|
|
|
00:18:19.970 --> 00:18:21.640 |
|
So any other questions about that |
|
|
|
00:18:21.640 --> 00:18:23.170 |
|
before I actually get into one of the |
|
|
|
00:18:23.170 --> 00:18:25.520 |
|
algorithms, the KNN? |
|
|
|
00:18:28.100 --> 00:18:30.635 |
|
OK, this obviously like this is going |
|
|
|
00:18:30.635 --> 00:18:32.530 |
|
to feel second nature to you by the end |
|
|
|
00:18:32.530 --> 00:18:34.090 |
|
of the course because it's what you use |
|
|
|
00:18:34.090 --> 00:18:35.440 |
|
for every single machine learning |
|
|
|
00:18:35.440 --> 00:18:35.890 |
|
algorithm. |
|
|
|
00:18:35.890 --> 00:18:39.660 |
|
So even if it seems like a little |
|
|
|
00:18:39.660 --> 00:18:42.030 |
|
abstract or foggy right now, I'm sure |
|
|
|
00:18:42.030 --> 00:18:42.490 |
|
it will not. |
|
|
|
00:18:43.290 --> 00:18:44.120 |
|
Before too long. |
|
|
|
00:18:46.020 --> 00:18:49.050 |
|
All right, so first see if you can |
|
|
|
00:18:49.050 --> 00:18:52.070 |
|
apply your own machine learning, I |
|
|
|
00:18:52.070 --> 00:18:52.340 |
|
guess. |
|
|
|
00:18:53.350 --> 00:18:55.470 |
|
So let's say I've got two classes here. |
|
|
|
00:18:55.470 --> 00:18:58.704 |
|
I've got O's and I've got X's. |
|
|
|
00:18:58.704 --> 00:19:01.430 |
|
And the plus is a new test sample.
|
|
|
00:19:01.430 --> 00:19:03.930 |
|
So what class do you think the black |
|
|
|
00:19:03.930 --> 00:19:05.460 |
|
plus corresponds to? |
|
|
|
00:19:09.830 --> 00:19:11.300 |
|
Alright, so I'll do a vote. |
|
|
|
00:19:11.300 --> 00:19:13.040 |
|
How many people think it's an X? |
|
|
|
00:19:14.940 --> 00:19:16.480 |
|
How many people think it's an O?
|
|
|
00:19:18.550 --> 00:19:23.014 |
|
So it's about 90, maybe like 99.5%, think
|
|
|
00:19:23.014 --> 00:19:25.990 |
|
it's an X and about .5% think it's an
|
|
|
00:19:25.990 --> 00:19:27.410 |
|
O.
|
|
|
00:19:27.410 --> 00:19:27.755 |
|
All right. |
|
|
|
00:19:27.755 --> 00:19:28.830 |
|
So why is it an X? |
|
|
|
00:19:29.630 --> 00:19:30.020 |
|
Yeah. |
|
|
|
00:19:42.250 --> 00:19:45.860 |
|
That's like a math-y way to put it,
|
|
|
00:19:45.860 --> 00:19:46.902 |
|
but that's right, yeah. |
|
|
|
00:19:46.902 --> 00:19:49.137 |
|
So one reason you might think it's an X |
|
|
|
00:19:49.137 --> 00:19:51.988 |
|
is that it's closest to X. |
|
|
|
00:19:51.988 --> 00:19:54.716 |
|
The closest example to it is an
|
|
|
00:19:54.716 --> 00:19:55.069 |
|
X, right? |
|
|
|
00:19:55.790 --> 00:19:57.240 |
|
Are there any other reasons that you |
|
|
|
00:19:57.240 --> 00:19:58.160 |
|
think it might be an X?
|
|
|
00:19:58.160 --> 00:19:58.360 |
|
Yeah. |
|
|
|
00:20:01.500 --> 00:20:02.370 |
|
It looks like what? |
|
|
|
00:20:03.330 --> 00:20:04.360 |
|
It looks like an X. |
|
|
|
00:20:06.090 --> 00:20:07.220 |
|
I guess that's true. |
|
|
|
00:20:08.290 --> 00:20:08.630 |
|
Yeah. |
|
|
|
00:20:09.960 --> 00:20:10.500 |
|
Any other? |
|
|
|
00:20:24.830 --> 00:20:25.143 |
|
OK. |
|
|
|
00:20:25.143 --> 00:20:27.410 |
|
And then this one was, if you think |
|
|
|
00:20:27.410 --> 00:20:29.120 |
|
about like drawing, trying to draw a |
|
|
|
00:20:29.120 --> 00:20:31.917 |
|
line between the X's and the O's, then |
|
|
|
00:20:31.917 --> 00:20:34.660 |
|
with the best line you could draw, the plus
|
|
|
00:20:34.660 --> 00:20:36.710 |
|
would be on the X side of the line. |
|
|
|
00:20:37.940 --> 00:20:39.530 |
|
So those are all good answers. |
|
|
|
00:20:39.530 --> 00:20:41.150 |
|
And actually there, so there's like. |
|
|
|
00:20:41.920 --> 00:20:43.840 |
|
There's basically like 3 different ways |
|
|
|
00:20:43.840 --> 00:20:45.920 |
|
that you can solve this problem. |
|
|
|
00:20:45.920 --> 00:20:48.220 |
|
One is nearest neighbor, which is what |
|
|
|
00:20:48.220 --> 00:20:50.440 |
|
I'll talk about, which is when you say |
|
|
|
00:20:50.440 --> 00:20:52.423 |
|
it's closest to the X, so therefore |
|
|
|
00:20:52.423 --> 00:20:53.020 |
|
it's an X. |
|
|
|
00:20:53.020 --> 00:20:55.086 |
|
Or most of the points that are. |
|
|
|
00:20:55.086 --> 00:20:57.070 |
|
Most of the known points that are close |
|
|
|
00:20:57.070 --> 00:20:59.299 |
|
to it are X's, so therefore it's an X. |
|
|
|
00:20:59.300 --> 00:21:01.440 |
|
That's an instance-based method.
|
|
|
00:21:01.440 --> 00:21:03.990 |
|
Another method is a linear method where |
|
|
|
00:21:03.990 --> 00:21:06.120 |
|
you draw a line and you say, well it's |
|
|
|
00:21:06.120 --> 00:21:07.706 |
|
on the X side of the line, so
|
|
|
00:21:07.706 --> 00:21:08.519 |
|
therefore it's an X. |
|
|
|
00:21:09.230 --> 00:21:11.360 |
|
And the third method is a probabilistic |
|
|
|
00:21:11.360 --> 00:21:13.056 |
|
method where you fit some probabilities |
|
|
|
00:21:13.056 --> 00:21:14.935 |
|
to the O's and to the X's.
|
|
|
00:21:14.935 --> 00:21:16.830 |
|
And you say given those probabilities, |
|
|
|
00:21:16.830 --> 00:21:18.510 |
|
it's more likely to be an X than an O.
|
|
|
00:21:19.170 --> 00:21:21.629 |
|
Those are really, like, all the different
|
|
|
00:21:21.630 --> 00:21:23.833 |
|
methods that you can use, and the |
|
|
|
00:21:23.833 --> 00:21:25.210 |
|
different algorithms are just different |
|
|
|
00:21:25.210 --> 00:21:26.520 |
|
ways of parameterizing those |
|
|
|
00:21:26.520 --> 00:21:27.070 |
|
approaches. |
|
|
|
00:21:28.610 --> 00:21:30.089 |
|
Or different ways of solving them or |
|
|
|
00:21:30.090 --> 00:21:31.460 |
|
putting constraints on them. |
|
|
|
00:21:34.430 --> 00:21:36.990 |
|
So this is the key principle of machine |
|
|
|
00:21:36.990 --> 00:21:40.460 |
|
learning that given some feature target |
|
|
|
00:21:40.460 --> 00:21:44.660 |
|
pairs (X1, Y1) to (XN, YN):
|
|
|
00:21:44.660 --> 00:21:49.570 |
|
If XI is similar to XJ, then Yi is |
|
|
|
00:21:49.570 --> 00:21:50.850 |
|
probably similar to YJ. |
|
|
|
00:21:51.450 --> 00:21:53.115 |
|
In other words, if the features are |
|
|
|
00:21:53.115 --> 00:21:55.100 |
|
similar, then the targets are also |
|
|
|
00:21:55.100 --> 00:21:55.900 |
|
probably similar. |
|
|
|
00:21:57.020 --> 00:21:57.790 |
|
And this is. |
|
|
|
00:21:58.440 --> 00:21:59.586 |
|
This is kind of the. |
|
|
|
00:21:59.586 --> 00:22:01.220 |
|
This is, I would say, an assumption of |
|
|
|
00:22:01.220 --> 00:22:02.720 |
|
every single machine learning algorithm |
|
|
|
00:22:02.720 --> 00:22:03.810 |
|
that I can think of. |
|
|
|
00:22:03.810 --> 00:22:05.500 |
|
If it's not the case, things get really |
|
|
|
00:22:05.500 --> 00:22:06.010 |
|
complicated. |
|
|
|
00:22:06.010 --> 00:22:07.210 |
|
I don't know how you would possibly |
|
|
|
00:22:07.210 --> 00:22:10.250 |
|
solve it.
|
|
|
00:22:11.430 --> 00:22:13.750 |
|
If XI being similar to XJ tells you |
|
|
|
00:22:13.750 --> 00:22:17.390 |
|
nothing about how Yi and YJ relate to |
|
|
|
00:22:17.390 --> 00:22:19.310 |
|
each other, then it seems like you |
|
|
|
00:22:19.310 --> 00:22:20.320 |
|
can't do better than chance. |
|
|
|
00:22:21.960 --> 00:22:23.920 |
|
So with variations on how you define |
|
|
|
00:22:23.920 --> 00:22:24.790 |
|
the similarity. |
|
|
|
00:22:24.790 --> 00:22:26.330 |
|
So what does it mean for XI to be |
|
|
|
00:22:26.330 --> 00:22:27.570 |
|
similar to XJ? |
|
|
|
00:22:27.570 --> 00:22:29.650 |
|
And also, if you've got a bunch of |
|
|
|
00:22:29.650 --> 00:22:31.520 |
|
similar points, how you combine those |
|
|
|
00:22:31.520 --> 00:22:32.830 |
|
similarities to make a final |
|
|
|
00:22:32.830 --> 00:22:33.500 |
|
prediction. |
|
|
|
00:22:33.500 --> 00:22:36.010 |
|
Those differences are what distinguish |
|
|
|
00:22:36.010 --> 00:22:37.340 |
|
the different algorithms from each |
|
|
|
00:22:37.340 --> 00:22:39.050 |
|
other, but they're all based on this |
|
|
|
00:22:39.050 --> 00:22:41.063 |
|
idea that if the features are similar, |
|
|
|
00:22:41.063 --> 00:22:42.609 |
|
the predictions are also similar. |
|
|
|
00:22:45.500 --> 00:22:46.940 |
|
So this brings us to the nearest |
|
|
|
00:22:46.940 --> 00:22:47.810 |
|
neighbor algorithm. |
|
|
|
00:22:48.780 --> 00:22:50.960 |
|
Probably the simplest, but also one of |
|
|
|
00:22:50.960 --> 00:22:52.600 |
|
the most useful machine learning |
|
|
|
00:22:52.600 --> 00:22:53.170 |
|
algorithms. |
|
|
|
00:22:54.210 --> 00:22:56.760 |
|
And it kind of encodes that simple |
|
|
|
00:22:56.760 --> 00:22:58.540 |
|
intuition most directly. |
|
|
|
00:22:58.540 --> 00:23:02.339 |
|
So for a given set of test features, |
|
|
|
00:23:02.339 --> 00:23:05.365 |
|
assign the label or target value to the |
|
|
|
00:23:05.365 --> 00:23:07.505 |
|
most similar training features. |
|
|
|
00:23:07.505 --> 00:23:11.170 |
|
And you can sometimes say
|
|
|
00:23:11.170 --> 00:23:13.910 |
|
how many of these similar examples |
|
|
|
00:23:13.910 --> 00:23:15.200 |
|
you're going to consider. |
|
|
|
00:23:15.200 --> 00:23:17.814 |
|
The default is often K equals one.
|
|
|
00:23:17.814 --> 00:23:20.193 |
|
So the most similar single example, you |
|
|
|
00:23:20.193 --> 00:23:23.460 |
|
assign its label to the test data. |
|
|
|
00:23:24.140 --> 00:23:25.530 |
|
So here's the algorithm. |
|
|
|
00:23:25.530 --> 00:23:27.730 |
|
It's pretty short. |
|
|
|
00:23:28.860 --> 00:23:30.620 |
|
You compute the distance of each of |
|
|
|
00:23:30.620 --> 00:23:32.030 |
|
your training samples to the test |
|
|
|
00:23:32.030 --> 00:23:32.530 |
|
sample. |
|
|
|
00:23:33.510 --> 00:23:35.870 |
|
Take the index of the training sample |
|
|
|
00:23:35.870 --> 00:23:37.810 |
|
with the minimum distance and then you |
|
|
|
00:23:37.810 --> 00:23:38.600 |
|
get that label. |
|
|
|
00:23:38.600 --> 00:23:39.505 |
|
That's it. |
|
|
|
00:23:39.505 --> 00:23:41.780 |
|
I can literally like code it faster |
|
|
|
00:23:41.780 --> 00:23:43.830 |
|
than I can look up how you would use |
|
|
|
00:23:43.830 --> 00:23:45.770 |
|
some library for the nearest
|
|
|
00:23:45.770 --> 00:23:46.440 |
|
neighbor algorithm. |
|
|
|
00:23:46.440 --> 00:23:47.420 |
|
It's like a few lines. |
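
NOTE Added example (not from the lecture): the few-lines version of one-nearest
neighbor described above, in NumPy; the toy points and labels are made up.
    import numpy as np
    X_train = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 4.0]])   # training features
    y_train = np.array(["O", "X", "X"])                        # training labels
    x_test = np.array([0.9, 1.2])                               # one test sample
    dists = np.sum((X_train - x_test) ** 2, axis=1)  # squared L2 distance to each training sample
    pred = y_train[np.argmin(dists)]                 # label of the closest one -> "X"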
|
|
|
00:23:49.320 --> 00:23:50.290 |
|
So. |
|
|
|
00:23:51.460 --> 00:23:54.450 |
|
And then within this, so there's just a |
|
|
|
00:23:54.450 --> 00:23:56.520 |
|
couple of designs. |
|
|
|
00:23:56.520 --> 00:23:58.720 |
|
One is what distance measure do you |
|
|
|
00:23:58.720 --> 00:24:00.780 |
|
use, another is like how many nearest |
|
|
|
00:24:00.780 --> 00:24:01.870 |
|
neighbors do you consider? |
|
|
|
00:24:02.500 --> 00:24:04.160 |
|
And then often if you're applying this |
|
|
|
00:24:04.160 --> 00:24:06.390 |
|
algorithm, you might want to apply some |
|
|
|
00:24:06.390 --> 00:24:08.020 |
|
kind of transformation to the input |
|
|
|
00:24:08.020 --> 00:24:08.600 |
|
features. |
|
|
|
00:24:09.380 --> 00:24:11.343 |
|
So that they behave better
|
|
|
00:24:11.343 --> 00:24:13.690 |
|
according to your similarity measure. |
|
|
|
00:24:14.430 --> 00:24:16.060 |
|
The simplest distance function we can |
|
|
|
00:24:16.060 --> 00:24:18.946 |
|
use is the L2 distance. |
|
|
|
00:24:18.946 --> 00:24:24.030 |
|
So L2 means like the two norm or the |
|
|
|
00:24:24.030 --> 00:24:25.510 |
|
Euclidean distance.
|
|
|
00:24:25.510 --> 00:24:28.570 |
|
It's the straight-line distance in
|
|
|
00:24:28.570 --> 00:24:29.605 |
|
space basically. |
|
|
|
00:24:29.605 --> 00:24:31.930 |
|
So usually if you think of a distance |
|
|
|
00:24:31.930 --> 00:24:33.819 |
|
intuitively, you're thinking of the L2. |
|
|
|
00:24:37.810 --> 00:24:41.040 |
|
So K nearest neighbor
|
|
|
00:24:41.040 --> 00:24:42.820 |
|
is just the generalization of nearest |
|
|
|
00:24:42.820 --> 00:24:44.060 |
|
neighbor where you allow there to be |
|
|
|
00:24:44.060 --> 00:24:45.996 |
|
more than 1 sample, so you can look at |
|
|
|
00:24:45.996 --> 00:24:47.340 |
|
the K closest samples. |
|
|
|
00:24:49.110 --> 00:24:50.500 |
|
So we'll try it with these. |
|
|
|
00:24:50.500 --> 00:24:53.840 |
|
So let's say for this plus up here my |
|
|
|
00:24:53.840 --> 00:24:55.632 |
|
pointer is not working for this one |
|
|
|
00:24:55.632 --> 00:24:55.950 |
|
here. |
|
|
|
00:24:55.950 --> 00:24:57.700 |
|
If you do one nearest neighbor, what |
|
|
|
00:24:57.700 --> 00:24:58.920 |
|
would be the closest? |
|
|
|
00:25:00.360 --> 00:25:03.190 |
|
Yeah, I'd say X and for the other one. |
|
|
|
00:25:05.760 --> 00:25:06.940 |
|
Right. |
|
|
|
00:25:06.940 --> 00:25:08.766 |
|
So for one nearest neighbor that the |
|
|
|
00:25:08.766 --> 00:25:11.010 |
|
plus on the left would probably be X |
|
|
|
00:25:11.010 --> 00:25:12.610 |
|
and the plus on the right would be O. |
|
|
|
00:25:13.940 --> 00:25:16.930 |
|
And I should clarify here that the plus |
|
|
|
00:25:16.930 --> 00:25:19.690 |
|
symbol itself is not really relevant, |
|
|
|
00:25:19.690 --> 00:25:21.810 |
|
it's just the position. |
|
|
|
00:25:21.810 --> 00:25:24.251 |
|
So here I've got 2 features X1 and X2, |
|
|
|
00:25:24.251 --> 00:25:28.880 |
|
and I've got two classes, O and X, but the
|
|
|
00:25:28.880 --> 00:25:31.360 |
|
shapes of them are just
|
|
|
00:25:31.360 --> 00:25:34.480 |
|
abstract ways of representing some |
|
|
|
00:25:34.480 --> 00:25:34.990 |
|
class. |
|
|
|
00:25:36.400 --> 00:25:37.830 |
|
In these examples. |
|
|
|
00:25:38.740 --> 00:25:40.930 |
|
So three nearest neighbor. |
|
|
|
00:25:40.930 --> 00:25:42.200 |
|
Then you would look at the three |
|
|
|
00:25:42.200 --> 00:25:42.600 |
|
nearest neighbors. |
|
|
|
00:25:42.600 --> 00:25:44.280 |
|
So now one of the labels would flip in |
|
|
|
00:25:44.280 --> 00:25:44.985 |
|
this case. |
|
|
|
00:25:44.985 --> 00:25:47.760 |
|
So these circles are not meant to |
|
|
|
00:25:47.760 --> 00:25:49.800 |
|
indicate like the region of influence. |
|
|
|
00:25:49.800 --> 00:25:51.953 |
|
They're just circling the three nearest |
|
|
|
00:25:51.953 --> 00:25:52.346 |
|
neighbors. |
|
|
|
00:25:52.346 --> 00:25:53.100 |
|
They're ovals. |
|
|
|
00:25:54.010 --> 00:25:58.520 |
|
So this one now has 2 O's closer to it
|
|
|
00:25:58.520 --> 00:26:00.405 |
|
and so its label would flip.
|
|
|
00:26:00.405 --> 00:26:02.733 |
|
Its most likely label would flip
|
|
|
00:26:02.733 --> 00:26:04.840 |
|
to O and if you wanted to you could |
|
|
|
00:26:04.840 --> 00:26:06.700 |
|
output some confidence that says. |
|
|
|
00:26:08.030 --> 00:26:10.650 |
|
You could say 2/3 of them are close to |
|
|
|
00:26:10.650 --> 00:26:12.470 |
|
O, so I think it's a 2/3 chance that |
|
|
|
00:26:12.470 --> 00:26:13.160 |
|
it's an O.
|
|
|
00:26:13.160 --> 00:26:15.130 |
|
It would be a pretty crude like |
|
|
|
00:26:15.130 --> 00:26:17.440 |
|
probability estimate, but maybe better |
|
|
|
00:26:17.440 --> 00:26:18.220 |
|
than nothing. |
|
|
|
00:26:18.220 --> 00:26:20.400 |
|
Another way that you could get |
|
|
|
00:26:20.400 --> 00:26:21.760 |
|
confidence if you were doing one |
|
|
|
00:26:21.760 --> 00:26:23.090 |
|
nearest neighbor is to look at the |
|
|
|
00:26:23.090 --> 00:26:26.025 |
|
ratio of the distances between the |
|
|
|
00:26:26.025 --> 00:26:28.832 |
|
closest example and the closest example |
|
|
|
00:26:28.832 --> 00:26:30.310 |
|
from another class.
|
|
|
00:26:32.310 --> 00:26:33.980 |
|
And then likewise I could do 5 nearest |
|
|
|
00:26:33.980 --> 00:26:36.430 |
|
neighbor, so K could be anything. |
|
|
|
00:26:36.430 --> 00:26:38.590 |
|
Typically it's not too large though. |
|
|
|
00:26:39.350 --> 00:26:40.030 |
|
And. |
|
|
|
00:26:41.490 --> 00:26:43.940 |
|
And in classification the most common
|
|
|
00:26:43.940 --> 00:26:45.530 |
|
case is K = 1. |
|
|
|
00:26:45.530 --> 00:26:48.130 |
|
But you'll see in regression it can be |
|
|
|
00:26:48.130 --> 00:26:50.020 |
|
kind of helpful to have a larger K. |
|
|
|
00:26:52.480 --> 00:26:52.800 |
|
Right. |
|
|
|
00:26:52.800 --> 00:26:55.080 |
|
So then what distance function do we |
|
|
|
00:26:55.080 --> 00:26:58.150 |
|
use for KNN?
|
|
|
00:26:59.750 --> 00:27:01.990 |
|
We've got a few choices.
|
|
|
00:27:01.990 --> 00:27:03.360 |
|
There's actually many choices, of |
|
|
|
00:27:03.360 --> 00:27:05.170 |
|
course, but these are the most common. |
|
|
|
00:27:05.170 --> 00:27:06.980 |
|
One is Euclidean, so I just put the
|
|
|
00:27:06.980 --> 00:27:07.722 |
|
equation there. |
|
|
|
00:27:07.722 --> 00:27:08.870 |
|
It's the it's. |
|
|
|
00:27:08.870 --> 00:27:11.540 |
|
You don't even need the square root if you're just
|
|
|
00:27:11.540 --> 00:27:14.090 |
|
trying to find the closest, because |
|
|
|
00:27:14.090 --> 00:27:15.540 |
|
square root is monotonic. |
|
|
|
00:27:15.540 --> 00:27:15.880 |
|
So. |
|
|
|
00:27:16.630 --> 00:27:19.790 |
|
If the squared distance is
|
|
|
00:27:19.790 --> 00:27:21.732 |
|
minimized, then the square root of the
|
|
|
00:27:21.732 --> 00:27:23.010 |
|
squared distance is also minimized.
|
|
|
00:27:24.710 --> 00:27:26.910 |
|
So you've got Euclidean
|
|
|
00:27:26.910 --> 00:27:28.130 |
|
distance there, sum of squared
|
|
|
00:27:28.130 --> 00:27:30.890 |
|
differences, city block which is sum of |
|
|
|
00:27:30.890 --> 00:27:32.210 |
|
absolute differences.
|
|
|
00:27:33.250 --> 00:27:34.740 |
|
Mahalanobis distance. |
|
|
|
00:27:34.740 --> 00:27:37.290 |
|
This is the most complicated where you |
|
|
|
00:27:37.290 --> 00:27:39.080 |
|
first, like, do what's
|
|
|
00:27:39.080 --> 00:27:41.430 |
|
called whitening, which is when you |
|
|
|
00:27:41.430 --> 00:27:45.630 |
|
just put an inverse covariance matrix
|
|
|
00:27:46.400 --> 00:27:50.225 |
|
in between the product.
|
|
|
00:27:50.225 --> 00:27:52.340 |
|
So basically this makes it so that if |
|
|
|
00:27:52.340 --> 00:27:54.670 |
|
some features have a lot more variance, |
|
|
|
00:27:54.670 --> 00:27:56.510 |
|
a lot more like spread than other |
|
|
|
00:27:56.510 --> 00:27:57.070 |
|
features. |
|
|
|
00:27:57.760 --> 00:28:00.260 |
|
Then it first reduces that
|
|
|
00:28:00.260 --> 00:28:02.280 |
|
spread so that they all have about the |
|
|
|
00:28:02.280 --> 00:28:03.560 |
|
same amount of spread, so that the
|
|
|
00:28:03.560 --> 00:28:05.770 |
|
distance functions are like normalized, |
|
|
|
00:28:05.770 --> 00:28:06.520 |
|
more comparable. |
|
|
|
00:28:07.600 --> 00:28:09.687 |
|
Between the different features and it |
|
|
|
00:28:09.687 --> 00:28:10.925 |
|
will also rotate. |
|
|
|
00:28:10.925 --> 00:28:13.870 |
|
It will also like rotate the data to |
|
|
|
00:28:13.870 --> 00:28:15.660 |
|
find the major axis. |
|
|
|
00:28:15.660 --> 00:28:18.020 |
|
We'll talk about that more later. |
|
|
|
00:28:18.020 --> 00:28:19.940 |
|
I don't want to get too much into the |
|
|
|
00:28:19.940 --> 00:28:22.436 |
|
distance metric, just be aware of like |
|
|
|
00:28:22.436 --> 00:28:23.610 |
|
that it's there and what it is. |
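
NOTE Added example (not from the lecture): a sketch of the three distance
choices mentioned; X_toy stands in for some training data and the numbers are made up.
    import numpy as np
    X_toy = np.array([[1.0, 100.0], [2.0, 300.0], [3.0, 200.0], [2.0, 250.0]])
    a, b = X_toy[0], X_toy[1]
    euclidean  = np.sqrt(np.sum((a - b) ** 2))               # L2: root of the sum of squared differences
    city_block = np.sum(np.abs(a - b))                       # L1: sum of absolute differences
    Sigma_inv  = np.linalg.inv(np.cov(X_toy, rowvar=False))  # inverse covariance used for whitening
    mahalanobis = np.sqrt((a - b) @ Sigma_inv @ (a - b))     # Mahalanobis distance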
|
|
|
00:28:25.650 --> 00:28:28.830 |
|
So of these measures, L2
|
|
|
00:28:30.060 --> 00:28:32.600 |
|
kind of implicitly assumes that
|
|
|
00:28:32.600 --> 00:28:34.660 |
|
all the dimensions are equally scaled, |
|
|
|
00:28:34.660 --> 00:28:37.740 |
|
because if you have a distance of three |
|
|
|
00:28:37.740 --> 00:28:40.140 |
|
for one feature and a distance of three |
|
|
|
00:28:40.140 --> 00:28:41.579 |
|
for another feature, it'll
|
|
|
00:28:41.580 --> 00:28:43.580 |
|
contribute the same to the distance. |
|
|
|
00:28:43.580 --> 00:28:46.400 |
|
But it could be that one feature is |
|
|
|
00:28:46.400 --> 00:28:48.400 |
|
height and one feature is income, and |
|
|
|
00:28:48.400 --> 00:28:49.930 |
|
then the scales are totally different. |
|
|
|
00:28:50.770 --> 00:28:52.510 |
|
And if you were to compute nearest |
|
|
|
00:28:52.510 --> 00:28:55.447 |
|
neighbor, where your data is like the |
|
|
|
00:28:55.447 --> 00:28:57.057 |
|
height of a person and their income, |
|
|
|
00:28:57.057 --> 00:28:58.396 |
|
and you're trying to predict
|
|
|
00:28:58.396 --> 00:29:01.490 |
|
their age, then the income is obviously |
|
|
|
00:29:01.490 --> 00:29:03.250 |
|
going to dominate those distances. |
|
|
|
00:29:03.250 --> 00:29:04.850 |
|
Because the height distances, if you |
|
|
|
00:29:04.850 --> 00:29:06.970 |
|
don't normalize, are going to be at |
|
|
|
00:29:06.970 --> 00:29:10.570 |
|
most like one or two depending on your |
|
|
|
00:29:10.570 --> 00:29:10.980 |
|
units. |
|
|
|
00:29:11.780 --> 00:29:16.120 |
|
And the income differences could be in |
|
|
|
00:29:16.120 --> 00:29:17.210 |
|
the thousands or millions. |
|
|
|
00:29:18.980 --> 00:29:23.890 |
|
So city block is kind of similar; you're
|
|
|
00:29:23.890 --> 00:29:25.970 |
|
just taking the absolute instead of the |
|
|
|
00:29:25.970 --> 00:29:26.960 |
|
squared differences. |
|
|
|
00:29:27.700 --> 00:29:28.870 |
|
And the main difference between |
|
|
|
00:29:28.870 --> 00:29:30.826 |
|
Euclidean and city block is that city |
|
|
|
00:29:30.826 --> 00:29:33.937 |
|
block will be less sensitive to the |
|
|
|
00:29:33.937 --> 00:29:35.880 |
|
biggest differences, biggest |
|
|
|
00:29:35.880 --> 00:29:37.060 |
|
dimensional differences. |
|
|
|
00:29:37.930 --> 00:29:41.360 |
|
So with Euclidean, if you have, say, 5
|
|
|
00:29:41.360 --> 00:29:43.601 |
|
features and four of them have a |
|
|
|
00:29:43.601 --> 00:29:45.895 |
|
distance of one and one of them has a |
|
|
|
00:29:45.895 --> 00:29:47.926 |
|
distance of 1000, then your total |
|
|
|
00:29:47.926 --> 00:29:50.990 |
|
distance is going to be like a million, |
|
|
|
00:29:50.990 --> 00:29:54.649 |
|
roughly a million and four, for your total
|
|
|
00:29:54.650 --> 00:29:55.420 |
|
squared distance.
|
|
|
00:29:56.120 --> 00:29:58.910 |
|
And so that 1000 totally dominates, or |
|
|
|
00:29:58.910 --> 00:30:00.480 |
|
even if that one is 10. |
|
|
|
00:30:00.480 --> 00:30:02.920 |
|
Let's say you have 4 distances of 1 and |
|
|
|
00:30:02.920 --> 00:30:05.965 |
|
a distance of 10, then your total is |
|
|
|
00:30:05.965 --> 00:30:08.010 |
|
104 once you square them and sum them. |
|
|
|
00:30:09.600 --> 00:30:13.250 |
|
But with city block, if you have 4 |
|
|
|
00:30:13.250 --> 00:30:15.564 |
|
distances that are one and one distance |
|
|
|
00:30:15.564 --> 00:30:17.724 |
|
that is 10, then the city block |
|
|
|
00:30:17.724 --> 00:30:19.877 |
|
distance is 14 because it's one plus |
|
|
|
00:30:19.877 --> 00:30:21.340 |
|
one, four times, plus 10. |
|
|
|
00:30:22.010 --> 00:30:24.460 |
|
So city block is less sensitive to like |
|
|
|
00:30:24.460 --> 00:30:26.916 |
|
the biggest feature dimension, the |
|
|
|
00:30:26.916 --> 00:30:27.980 |
|
biggest feature difference. |
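A quick check of the arithmetic in this example (five per-feature differences, four of them 1 and one of them 10), just to make the contrast concrete:

```python
import numpy as np

diffs = np.array([1, 1, 1, 1, 10])   # per-feature differences between two points
print((diffs ** 2).sum())            # sum of squares (Euclidean, before the root): 104
print(np.abs(diffs).sum())           # city block / L1 distance: 14
```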
|
|
|
00:30:29.730 --> 00:30:32.010 |
|
And then Mahalanobis does not assume |
|
|
|
00:30:32.010 --> 00:30:33.360 |
|
that all the features are already |
|
|
|
00:30:33.360 --> 00:30:35.020 |
|
scaled, for it will rescale them. |
|
|
|
00:30:35.020 --> 00:30:37.290 |
|
So if you were to do this thing where |
|
|
|
00:30:37.290 --> 00:30:39.260 |
|
you're trying to predict somebody's age |
|
|
|
00:30:39.260 --> 00:30:41.090 |
|
given income and height. |
|
|
|
00:30:41.730 --> 00:30:43.770 |
|
Then after you apply your inverse |
|
|
|
00:30:43.770 --> 00:30:46.420 |
|
covariance matrix, it will rescale the |
|
|
|
00:30:46.420 --> 00:30:48.970 |
|
heights and the incomes so that they both |
|
|
|
00:30:48.970 --> 00:30:49.750 |
|
follow some |
|
|
|
00:30:50.950 --> 00:30:54.000 |
|
unit-variance distribution or normalized |
|
|
|
00:30:54.000 --> 00:30:57.240 |
|
distribution where the variance is now |
|
|
|
00:30:57.240 --> 00:30:58.840 |
|
one in each of those dimensions. |
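A rough sketch of the Mahalanobis idea, assuming NumPy and made-up height/income data; the covariance is estimated from the training features and its inverse rescales (and rotates) the differences:

```python
import numpy as np

# Made-up features: height in meters and income in dollars, very different scales.
rng = np.random.default_rng(0)
X = rng.normal(loc=[1.7, 50000.0], scale=[0.1, 15000.0], size=(200, 2))

cov_inv = np.linalg.inv(np.cov(X, rowvar=False))   # inverse covariance of the features

def mahalanobis(a, b, cov_inv):
    """sqrt((a - b)^T Sigma^{-1} (a - b)); reduces to Euclidean if Sigma is the identity."""
    d = a - b
    return np.sqrt(d @ cov_inv @ d)

print(mahalanobis(X[0], X[1], cov_inv))
```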
|
|
|
00:31:05.200 --> 00:31:07.790 |
|
So with KNN, if you're doing |
|
|
|
00:31:07.790 --> 00:31:10.720 |
|
classification, then the prediction is |
|
|
|
00:31:10.720 --> 00:31:12.470 |
|
usually just the most common class. |
|
|
|
00:31:13.430 --> 00:31:15.520 |
|
If you're doing regression and you get |
|
|
|
00:31:15.520 --> 00:31:17.510 |
|
the K nearest neighbors, then the |
|
|
|
00:31:17.510 --> 00:31:19.290 |
|
prediction is usually the average of |
|
|
|
00:31:19.290 --> 00:31:21.406 |
|
the labels of those K nearest |
|
|
|
00:31:21.406 --> 00:31:21.869 |
|
neighbors. |
|
|
|
00:31:21.870 --> 00:31:23.820 |
|
So for classification, if you're doing |
|
|
|
00:31:23.820 --> 00:31:26.026 |
|
digit classification and you're 3 |
|
|
|
00:31:26.026 --> 00:31:29.100 |
|
nearest neighbors are 9, 9, 2, you would |
|
|
|
00:31:29.100 --> 00:31:29.760 |
|
predict 9. |
|
|
|
00:31:30.980 --> 00:31:32.210 |
|
If you're, |
|
|
|
00:31:32.630 --> 00:31:38.850 |
|
say, trying to predict how aesthetic |
|
|
|
00:31:38.850 --> 00:31:41.170 |
|
people would find images, on a score on a |
|
|
|
00:31:41.170 --> 00:31:43.680 |
|
scale of zero to 10, and your neighbors' |
|
|
|
00:31:43.680 --> 00:31:45.850 |
|
labels are 9, 9, 2, then you would take the |
|
|
|
00:31:45.850 --> 00:31:47.670 |
|
average of those, most likely, so it |
|
|
|
00:31:47.670 --> 00:31:48.769 |
|
would be 20 / 3. |
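Putting those two rules together, here is a minimal KNN prediction sketch (majority vote for classification, average label for regression); the names and data are made up, assuming NumPy:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=3, task="classification"):
    """Minimal KNN: find the k closest training points, then vote or average."""
    dists = np.sqrt(((X_train - x_query) ** 2).sum(axis=1))        # Euclidean distances
    nn = np.argsort(dists)[:k]                                     # indices of the k nearest
    if task == "classification":
        return Counter(y_train[nn].tolist()).most_common(1)[0][0]  # most common class
    return y_train[nn].mean()                                      # regression: average label

# The digit example above: neighbor labels 9, 9, 2.
labels = np.array([9, 9, 2])
print(Counter(labels.tolist()).most_common(1)[0][0])   # classification -> 9
print(labels.mean())                                   # regression -> 20/3 ~= 6.67
```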
|
|
|
00:31:52.440 --> 00:31:54.710 |
|
So let's just do another example. |
|
|
|
00:31:55.040 --> 00:31:55.700 |
|
|
|
|
|
00:31:56.920 --> 00:31:58.130 |
|
So let's say that we're doing |
|
|
|
00:31:58.130 --> 00:31:58.960 |
|
classification. |
|
|
|
00:31:58.960 --> 00:32:00.470 |
|
I just kind of randomly found some |
|
|
|
00:32:00.470 --> 00:32:03.000 |
|
scatter plot on the Internet links down |
|
|
|
00:32:03.000 --> 00:32:03.380 |
|
there. |
|
|
|
00:32:03.380 --> 00:32:05.640 |
|
And let's say that we're trying to |
|
|
|
00:32:05.640 --> 00:32:07.890 |
|
predict the sex, male or female, from |
|
|
|
00:32:07.890 --> 00:32:09.370 |
|
standing and sitting heights. |
|
|
|
00:32:09.370 --> 00:32:11.032 |
|
So we've got this standing height on |
|
|
|
00:32:11.032 --> 00:32:13.320 |
|
the X dimension and the sitting height |
|
|
|
00:32:13.320 --> 00:32:14.845 |
|
on the Y dimension. |
|
|
|
00:32:14.845 --> 00:32:19.035 |
|
The circles are female, and the other |
|
|
|
00:32:19.035 --> 00:32:19.370 |
|
marker is male. |
|
|
|
00:32:20.320 --> 00:32:22.590 |
|
And let's say that I want to predict |
|
|
|
00:32:22.590 --> 00:32:26.240 |
|
for the X is it a male or a female and |
|
|
|
00:32:26.240 --> 00:32:28.060 |
|
I'm doing 1 nearest neighbor. |
|
|
|
00:32:28.060 --> 00:32:29.890 |
|
So what would what would the answer be? |
|
|
|
00:32:31.770 --> 00:32:34.580 |
|
Right, the answer would be female |
|
|
|
00:32:34.580 --> 00:32:37.290 |
|
because the closest circle is a female. |
|
|
|
00:32:37.290 --> 00:32:38.580 |
|
And what if I do three nearest |
|
|
|
00:32:38.580 --> 00:32:38.990 |
|
neighbor? |
|
|
|
00:32:41.270 --> 00:32:41.540 |
|
Right. |
|
|
|
00:32:41.540 --> 00:32:42.490 |
|
Also female. |
|
|
|
00:32:42.490 --> 00:32:46.665 |
|
I need to get super large K before it's |
|
|
|
00:32:46.665 --> 00:32:48.710 |
|
even plausible that it could be male. |
|
|
|
00:32:48.710 --> 00:32:50.570 |
|
Maybe even like K would have to be the |
|
|
|
00:32:50.570 --> 00:32:52.070 |
|
whole data set, and that would only |
|
|
|
00:32:52.070 --> 00:32:53.180 |
|
work if there's more males than |
|
|
|
00:32:53.180 --> 00:32:53.600 |
|
females. |
|
|
|
00:32:54.720 --> 00:32:55.926 |
|
And what about the plus? |
|
|
|
00:32:55.926 --> 00:32:58.760 |
|
If I do 1-NN, is it male or |
|
|
|
00:32:58.760 --> 00:32:59.190 |
|
female? |
|
|
|
00:33:00.850 --> 00:33:01.095 |
|
OK. |
|
|
|
00:33:01.095 --> 00:33:02.560 |
|
And what if I do 3-NN? |
|
|
|
00:33:04.950 --> 00:33:08.386 |
|
Right, female, because now, out of |
|
|
|
00:33:08.386 --> 00:33:10.770 |
|
the three closest neighbors, the three |
|
|
|
00:33:10.770 --> 00:33:12.600 |
|
most relevant |
|
|
|
00:33:12.600 --> 00:33:14.060 |
|
neighbors, two of them are female and |
|
|
|
00:33:14.060 --> 00:33:14.630 |
|
one is male. |
|
|
|
00:33:15.970 --> 00:33:17.740 |
|
What about the circle, male or female? |
|
|
|
00:33:19.450 --> 00:33:21.220 |
|
Right, it will be male for |
|
|
|
00:33:22.060 --> 00:33:23.070 |
|
virtually any K. |
|
|
|
00:33:24.350 --> 00:33:24.740 |
|
All right. |
|
|
|
00:33:24.740 --> 00:33:26.010 |
|
So that's classification. |
|
|
|
00:33:27.880 --> 00:33:29.450 |
|
And now let's say we want to do |
|
|
|
00:33:29.450 --> 00:33:30.540 |
|
regression. |
|
|
|
00:33:30.540 --> 00:33:32.530 |
|
So we want to predict the sitting |
|
|
|
00:33:32.530 --> 00:33:35.104 |
|
height given the standing height. |
|
|
|
00:33:35.104 --> 00:33:37.360 |
|
The standing height is on the X axis. |
|
|
|
00:33:38.020 --> 00:33:39.720 |
|
And I want to predict this sitting |
|
|
|
00:33:39.720 --> 00:33:40.410 |
|
height. |
|
|
|
00:33:41.670 --> 00:33:43.730 |
|
So it might be hard to see if you're |
|
|
|
00:33:43.730 --> 00:33:44.060 |
|
far away. |
|
|
|
00:33:44.060 --> 00:33:47.300 |
|
It might be kind of hard to see it very |
|
|
|
00:33:47.300 --> 00:33:51.150 |
|
clearly but for this height, so that I |
|
|
|
00:33:51.150 --> 00:33:52.850 |
|
don't know exactly what the value is, |
|
|
|
00:33:52.850 --> 00:33:56.360 |
|
but whatever, 100 and 4144 or |
|
|
|
00:33:56.360 --> 00:33:56.790 |
|
something. |
|
|
|
00:33:57.530 --> 00:33:59.750 |
|
What would be the sitting height? |
|
|
|
00:34:00.620 --> 00:34:01.360 |
|
Roughly. |
|
|
|
00:34:05.400 --> 00:34:08.050 |
|
So it would be whatever this is here |
|
|
|
00:34:08.050 --> 00:34:10.630 |
|
let me use my, I'll use my cursor. |
|
|
|
00:34:12.500 --> 00:34:14.760 |
|
So it would be whatever this point is |
|
|
|
00:34:14.760 --> 00:34:16.200 |
|
here it would be the sitting height. |
|
|
|
00:34:17.100 --> 00:34:18.716 |
|
And notice that if I moved a little bit |
|
|
|
00:34:18.716 --> 00:34:20.750 |
|
to the left it would drop quite a lot, |
|
|
|
00:34:20.750 --> 00:34:22.390 |
|
and if I move a little bit to the right |
|
|
|
00:34:22.390 --> 00:34:23.685 |
|
then this would be the closest point |
|
|
|
00:34:23.685 --> 00:34:24.660 |
|
and then drop a little. |
|
|
|
00:34:25.380 --> 00:34:28.110 |
|
So it's kind of unstable if I'm |
|
|
|
00:34:28.110 --> 00:34:30.677 |
|
doing 1-NN. And what if I were doing |
|
|
|
00:34:30.677 --> 00:34:33.830 |
|
3-NN, then would it be higher than |
|
|
|
00:34:33.830 --> 00:34:35.000 |
|
1-NN or lower? |
|
|
|
00:34:39.130 --> 00:34:41.030 |
|
Yes, it would be lower because if I |
|
|
|
00:34:41.030 --> 00:34:42.720 |
|
were doing 3-NN then it would be the |
|
|
|
00:34:42.720 --> 00:34:44.883 |
|
average of this point and this point |
|
|
|
00:34:44.883 --> 00:34:47.820 |
|
and this point which is lower than the |
|
|
|
00:34:47.820 --> 00:34:48.310 |
|
center point. |
|
|
|
00:34:50.130 --> 00:34:51.670 |
|
And now let's look at this one. |
|
|
|
00:34:51.670 --> 00:34:54.329 |
|
Now this one. |
|
|
|
00:34:54.330 --> 00:34:56.090 |
|
What is the sitting height, roughly? |
|
|
|
00:34:56.730 --> 00:34:57.890 |
|
If I do 1-NN? |
|
|
|
00:35:02.740 --> 00:35:04.570 |
|
So it's this guy up here. |
|
|
|
00:35:04.570 --> 00:35:07.700 |
|
So it would be around 84 and what is it |
|
|
|
00:35:07.700 --> 00:35:09.990 |
|
roughly if I do 3-NN? |
|
|
|
00:35:17.040 --> 00:35:19.556 |
|
So it's probably around here. |
|
|
|
00:35:19.556 --> 00:35:22.955 |
|
So I'd say around like 81 maybe, but |
|
|
|
00:35:22.955 --> 00:35:25.110 |
|
it's a big drop because these guys, |
|
|
|
00:35:25.110 --> 00:35:27.625 |
|
these three points here are the are the |
|
|
|
00:35:27.625 --> 00:35:28.820 |
|
three nearest neighbors. |
|
|
|
00:35:30.010 --> 00:35:32.100 |
|
And if I am doing one nearest neighbor |
|
|
|
00:35:32.100 --> 00:35:34.020 |
|
and I were to plot the regressed |
|
|
|
00:35:34.020 --> 00:35:36.390 |
|
height, it would be like jumping all |
|
|
|
00:35:36.390 --> 00:35:37.280 |
|
over the place, right? |
|
|
|
00:35:37.280 --> 00:35:38.900 |
|
Because every time it only depends on |
|
|
|
00:35:38.900 --> 00:35:40.410 |
|
that one nearest neighbor. |
|
|
|
00:35:40.410 --> 00:35:42.335 |
|
So it gives us a really, it can give us |
|
|
|
00:35:42.335 --> 00:35:44.580 |
|
a really unintuitively jumpy |
|
|
|
00:35:44.580 --> 00:35:46.180 |
|
regression value. |
|
|
|
00:35:46.180 --> 00:35:48.006 |
|
But if I do three or five nearest |
|
|
|
00:35:48.006 --> 00:35:49.340 |
|
neighbor, it's going to end up being |
|
|
|
00:35:49.340 --> 00:35:51.230 |
|
much smoother as I move from left to |
|
|
|
00:35:51.230 --> 00:35:51.380 |
|
right. |
|
|
|
00:35:52.330 --> 00:35:53.350 |
|
And then this is like. |
|
|
|
00:35:54.440 --> 00:35:56.080 |
|
This happens to be showing a linear |
|
|
|
00:35:56.080 --> 00:35:57.970 |
|
regression on just all the data. |
|
|
|
00:35:57.970 --> 00:36:00.060 |
|
We'll talk about linear regression next |
|
|
|
00:36:00.060 --> 00:36:01.920 |
|
Thursday, but that's kind of the |
|
|
|
00:36:01.920 --> 00:36:02.860 |
|
smoothest estimate. |
|
|
|
00:36:05.470 --> 00:36:07.830 |
|
Alright, I'll show. |
|
|
|
00:36:07.830 --> 00:36:09.075 |
|
Actually, I want to. |
|
|
|
00:36:09.075 --> 00:36:10.380 |
|
I know it's kind of. |
|
|
|
00:36:11.770 --> 00:36:14.200 |
|
Let's see 93935. |
|
|
|
00:36:15.450 --> 00:36:17.830 |
|
So about in the middle of the class, I |
|
|
|
00:36:17.830 --> 00:36:19.480 |
|
want to like give everyone a chance to |
|
|
|
00:36:19.480 --> 00:36:20.880 |
|
like stand up and. |
|
|
|
00:36:22.090 --> 00:36:23.625 |
|
Check your e-mail or phone or whatever, |
|
|
|
00:36:23.625 --> 00:36:24.670 |
|
because I think it's hard to |
|
|
|
00:36:24.670 --> 00:36:27.040 |
|
concentrate for an hour and 15 minutes |
|
|
|
00:36:27.040 --> 00:36:27.480 |
|
in a row. |
|
|
|
00:36:27.480 --> 00:36:29.020 |
|
It's easy for me because I'm teaching, |
|
|
|
00:36:29.020 --> 00:36:30.120 |
|
but harder. |
|
|
|
00:36:30.120 --> 00:36:31.400 |
|
I would not be able to do it if I were |
|
|
|
00:36:31.400 --> 00:36:32.280 |
|
sitting in your seats. |
|
|
|
00:36:32.280 --> 00:36:33.980 |
|
So I'm going to take a break for like |
|
|
|
00:36:33.980 --> 00:36:34.580 |
|
one minute. |
|
|
|
00:36:34.580 --> 00:36:36.660 |
|
So feel free to stand up and stretch, |
|
|
|
00:36:36.660 --> 00:36:39.500 |
|
check your e-mail, whatever you want, |
|
|
|
00:36:39.500 --> 00:36:41.640 |
|
and then I'll show you these demos. |
|
|
|
00:38:28.140 --> 00:38:29.990 |
|
Alright, I'm going to pick up again. |
|
|
|
00:38:38.340 --> 00:38:39.740 |
|
Alright, I'm going to start again. |
|
|
|
00:38:41.070 --> 00:38:43.830 |
|
Sorry, I know I'm interrupting a lot of |
|
|
|
00:38:43.830 --> 00:38:44.860 |
|
conversations. |
|
|
|
00:38:44.860 --> 00:38:49.488 |
|
So here's the first demo here. |
|
|
|
00:38:49.488 --> 00:38:50.570 |
|
It's kind of simple. |
|
|
|
00:38:50.570 --> 00:38:52.600 |
|
It's a KNN demo actually. |
|
|
|
00:38:52.600 --> 00:38:53.820 |
|
They're both KNN demos. |
|
|
|
00:38:53.820 --> 00:38:54.510 |
|
Obviously. |
|
|
|
00:38:54.510 --> 00:38:57.810 |
|
The thing I like about this demo is, I |
|
|
|
00:38:57.810 --> 00:38:59.070 |
|
guess first I'll explain what it's |
|
|
|
00:38:59.070 --> 00:38:59.380 |
|
doing. |
|
|
|
00:38:59.380 --> 00:39:00.958 |
|
So it's got some red points here. |
|
|
|
00:39:00.958 --> 00:39:01.881 |
|
This is one class. |
|
|
|
00:39:01.881 --> 00:39:03.289 |
|
It's got some blue points. |
|
|
|
00:39:03.290 --> 00:39:04.310 |
|
That's another class. |
|
|
|
00:39:04.310 --> 00:39:07.035 |
|
The red area are all the areas that |
|
|
|
00:39:07.035 --> 00:39:09.026 |
|
will be classified as red, and the blue |
|
|
|
00:39:09.026 --> 00:39:10.614 |
|
areas are all the areas that will be |
|
|
|
00:39:10.614 --> 00:39:11.209 |
|
classified as blue. |
|
|
|
00:39:11.930 --> 00:39:15.344 |
|
And you can change K and you can change |
|
|
|
00:39:15.344 --> 00:39:16.610 |
|
the distance measure. |
|
|
|
00:39:16.610 --> 00:39:19.090 |
|
And then if I click somewhere here, it |
|
|
|
00:39:19.090 --> 00:39:21.390 |
|
shows me which point is determining the |
|
|
|
00:39:21.390 --> 00:39:22.560 |
|
classification. |
|
|
|
00:39:22.560 --> 00:39:26.073 |
|
So I'm clicking on the center point and |
|
|
|
00:39:26.073 --> 00:39:28.190 |
|
then it's drawing a connecting line and |
|
|
|
00:39:28.190 --> 00:39:29.949 |
|
radius that correspond to the one |
|
|
|
00:39:29.950 --> 00:39:31.465 |
|
nearest neighbor because this is set to |
|
|
|
00:39:31.465 --> 00:39:31.670 |
|
1. |
|
|
|
00:39:33.160 --> 00:39:35.640 |
|
So one thing I'll do is just |
|
|
|
00:39:35.640 --> 00:39:36.116 |
|
change |
|
|
|
00:39:36.116 --> 00:39:38.750 |
|
K. K is almost always odd because if it's |
|
|
|
00:39:38.750 --> 00:39:40.400 |
|
even then you have like a split |
|
|
|
00:39:40.400 --> 00:39:41.560 |
|
decision a lot of times. |
|
|
|
00:39:42.770 --> 00:39:45.310 |
|
So if I have K = 3, just notice how the |
|
|
|
00:39:45.310 --> 00:39:47.790 |
|
boundary changes as I increase K. |
|
|
|
00:39:50.120 --> 00:39:52.370 |
|
It becomes simpler and simpler, right? |
|
|
|
00:39:52.370 --> 00:39:54.300 |
|
It just becomes like eventually it |
|
|
|
00:39:54.300 --> 00:39:55.710 |
|
should become well. |
|
|
|
00:39:57.440 --> 00:39:59.990 |
|
K got bigger than the data, so at K = |
|
|
|
00:39:59.990 --> 00:40:01.770 |
|
23 I think there's probably 23 points, |
|
|
|
00:40:01.770 --> 00:40:03.250 |
|
so it's just the most common class. |
|
|
|
00:40:04.790 --> 00:40:07.450 |
|
And then it kind of becomes more like a |
|
|
|
00:40:07.450 --> 00:40:09.880 |
|
straight line with a very high K. |
|
|
|
00:40:10.190 --> 00:40:10.720 |
|
|
|
|
|
00:40:16.330 --> 00:40:18.820 |
|
Then if I change the distance measure, |
|
|
|
00:40:18.820 --> 00:40:19.915 |
|
I've got Manhattan. |
|
|
|
00:40:19.915 --> 00:40:22.610 |
|
Manhattan is that L1 distance, so it |
|
|
|
00:40:22.610 --> 00:40:24.800 |
|
becomes like a little bit more. |
|
|
|
00:40:24.890 --> 00:40:25.470 |
|
|
|
|
|
00:40:26.300 --> 00:40:27.590 |
|
A little bit more like. |
|
|
|
00:40:28.360 --> 00:40:30.720 |
|
Vertical horizontal lines in the |
|
|
|
00:40:30.720 --> 00:40:33.410 |
|
decision boundary compared to. |
|
|
|
00:40:33.530 --> 00:40:34.060 |
|
|
|
|
|
00:40:34.830 --> 00:40:37.120 |
|
Compared to the Euclidean distance. |
|
|
|
00:40:39.280 --> 00:40:40.780 |
|
|
|
|
|
00:40:41.460 --> 00:40:45.023 |
|
And then this is showing this box is |
|
|
|
00:40:45.023 --> 00:40:47.945 |
|
showing like the box that contains all |
|
|
|
00:40:47.945 --> 00:40:51.970 |
|
the points that are, with K = 7, the |
|
|
|
00:40:51.970 --> 00:40:53.800 |
|
seven nearest neighbors according to |
|
|
|
00:40:53.800 --> 00:40:55.100 |
|
Manhattan distance. |
|
|
|
00:40:55.100 --> 00:40:57.504 |
|
So you can see that it's kind of like a |
|
|
|
00:40:57.504 --> 00:40:59.436 |
|
weird in some ways it feels like a |
|
|
|
00:40:59.436 --> 00:41:00.440 |
|
weird distance measure. |
|
|
|
00:41:00.440 --> 00:41:02.910 |
|
Another thing that I should bring up. |
|
|
|
00:41:02.910 --> 00:41:05.950 |
|
I decide not to go into too much detail |
|
|
|
00:41:05.950 --> 00:41:07.803 |
|
in this today because I think it's like |
|
|
|
00:41:07.803 --> 00:41:10.890 |
|
not as central a point |
|
|
|
00:41:10.890 --> 00:41:11.710 |
|
as the things that I am |
|
|
|
00:41:11.850 --> 00:41:12.210 |
|
talking about. |
|
|
|
00:41:12.920 --> 00:41:16.990 |
|
But our intuition for high dimensions |
|
|
|
00:41:16.990 --> 00:41:17.730 |
|
is really bad. |
|
|
|
00:41:18.370 --> 00:41:21.325 |
|
So everything I visualize, almost |
|
|
|
00:41:21.325 --> 00:41:23.090 |
|
everything is in two dimensions because |
|
|
|
00:41:23.090 --> 00:41:25.110 |
|
that's all I can put on a piece of |
|
|
|
00:41:25.110 --> 00:41:26.060 |
|
paper or screen. |
|
|
|
00:41:27.700 --> 00:41:30.620 |
|
I can't visualize 1000 dimensions, but |
|
|
|
00:41:30.620 --> 00:41:32.167 |
|
things behave kind of differently in |
|
|
|
00:41:32.167 --> 00:41:33.790 |
|
1000 dimensions than in two dimensions. |
|
|
|
00:41:33.790 --> 00:41:37.280 |
|
So for example, if I randomly sample a |
|
|
|
00:41:37.280 --> 00:41:39.197 |
|
whole bunch of points in a unit cube |
|
|
|
00:41:39.197 --> 00:41:41.944 |
|
and 1000 dimensions, almost all the |
|
|
|
00:41:41.944 --> 00:41:44.082 |
|
points lie like right on the surface of |
|
|
|
00:41:44.082 --> 00:41:46.219 |
|
that cube, and they'll all lie within |
|
|
|
00:41:46.220 --> 00:41:47.025 |
|
some epsilon of the surface. |
|
|
|
00:41:47.025 --> 00:41:48.750 |
|
Even if epsilon is like really really tiny, |
|
|
|
00:41:48.750 --> 00:41:50.420 |
|
they'll still all be like right on the |
|
|
|
00:41:50.420 --> 00:41:51.170 |
|
surface of that cube. |
|
|
|
00:41:51.880 --> 00:41:54.400 |
|
And in high dimensional spaces it takes |
|
|
|
00:41:54.400 --> 00:41:56.510 |
|
like tons and tons of data to populate |
|
|
|
00:41:56.510 --> 00:41:59.320 |
|
that space, and so every point tends to |
|
|
|
00:41:59.320 --> 00:42:00.890 |
|
be pretty far away from every other |
|
|
|
00:42:00.890 --> 00:42:02.269 |
|
point in a high dimensional space. |
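A small experiment you could run to see this (a sketch, with an arbitrary epsilon, assuming NumPy): sample points uniformly in the unit cube and count how many land within epsilon of the surface.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.01
for d in (2, 1000):
    X = rng.uniform(size=(10_000, d))               # random points in the unit cube [0, 1]^d
    near = ((X < eps) | (X > 1 - eps)).any(axis=1)  # within eps of the surface in some coordinate
    print(d, near.mean())   # around 0.04 for d = 2, essentially 1.0 for d = 1000
```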
|
|
|
00:42:04.440 --> 00:42:06.639 |
|
They're just worth being aware of that |
|
|
|
00:42:06.640 --> 00:42:08.560 |
|
limitation of our minds that we don't |
|
|
|
00:42:08.560 --> 00:42:11.200 |
|
think well in high dimensions, but I'll |
|
|
|
00:42:11.200 --> 00:42:12.680 |
|
probably talk about it in more detail |
|
|
|
00:42:12.680 --> 00:42:13.910 |
|
at some later time. |
|
|
|
00:42:14.500 --> 00:42:17.290 |
|
So this demo I like even more. |
|
|
|
00:42:17.290 --> 00:42:19.260 |
|
This is another nearest neighbor demo. |
|
|
|
00:42:19.260 --> 00:42:21.280 |
|
Again, I get to choose the metric, I'll |
|
|
|
00:42:21.280 --> 00:42:22.820 |
|
leave it at L2. |
|
|
|
00:42:23.550 --> 00:42:25.360 |
|
It's that one nearest neighbor I can |
|
|
|
00:42:25.360 --> 00:42:26.700 |
|
choose the number of points. |
|
|
|
00:42:27.470 --> 00:42:31.110 |
|
And I'll do three classes. |
|
|
|
00:42:32.390 --> 00:42:33.050 |
|
So. |
|
|
|
00:42:35.540 --> 00:42:36.480 |
|
Let's see. |
|
|
|
00:42:39.720 --> 00:42:41.800 |
|
Alright, so one thing I wanted to point |
|
|
|
00:42:41.800 --> 00:42:45.600 |
|
out is that one nearest neighbor can be |
|
|
|
00:42:45.600 --> 00:42:48.006 |
|
pretty sensitive to an individual |
|
|
|
00:42:48.006 --> 00:42:48.423 |
|
point. |
|
|
|
00:42:48.423 --> 00:42:50.670 |
|
So let's say I take this one green |
|
|
|
00:42:50.670 --> 00:42:52.150 |
|
point and I drag it around. |
|
|
|
00:42:54.460 --> 00:42:56.770 |
|
It can make a really big impact on the |
|
|
|
00:42:56.770 --> 00:42:58.810 |
|
decision boundary all by itself. |
|
|
|
00:43:00.470 --> 00:43:02.200 |
|
Right, because only that point matters. |
|
|
|
00:43:02.200 --> 00:43:03.920 |
|
There's nothing else in this space, so |
|
|
|
00:43:03.920 --> 00:43:05.620 |
|
it gets to claim the entire space by |
|
|
|
00:43:05.620 --> 00:43:06.070 |
|
itself. |
|
|
|
00:43:07.220 --> 00:43:09.600 |
|
Another thing to note about KNN is that |
|
|
|
00:43:09.600 --> 00:43:12.660 |
|
for 1-NN, if you create a Voronoi |
|
|
|
00:43:12.660 --> 00:43:15.690 |
|
diagram which is, you split this into |
|
|
|
00:43:15.690 --> 00:43:18.380 |
|
different cells where each cell, |
|
|
|
00:43:18.380 --> 00:43:20.250 |
|
everything within each cell is closest |
|
|
|
00:43:20.250 --> 00:43:21.390 |
|
to a single point. |
|
|
|
00:43:22.160 --> 00:43:23.500 |
|
That's kind of the decision |
|
|
|
00:43:23.500 --> 00:43:24.550 |
|
boundary of the 1-NN. |
|
|
|
00:43:26.750 --> 00:43:29.460 |
|
So it's pretty sensitive. If I change it |
|
|
|
00:43:29.460 --> 00:43:30.740 |
|
to 3-NN. |
|
|
|
00:43:31.850 --> 00:43:34.760 |
|
It's not going to be as sensitive. This |
|
|
|
00:43:34.760 --> 00:43:36.310 |
|
area they're marking white because it's a 3 |
|
|
|
00:43:36.310 --> 00:43:36.840 |
|
way tie. |
|
|
|
00:43:38.430 --> 00:43:40.910 |
|
So it's still somewhat sensitive, but |
|
|
|
00:43:40.910 --> 00:43:42.960 |
|
now if this guy invades the red zone, |
|
|
|
00:43:42.960 --> 00:43:45.213 |
|
he doesn't really have any impact. |
|
|
|
00:43:45.213 --> 00:43:48.220 |
|
If he's off by himself, he has a little |
|
|
|
00:43:48.220 --> 00:43:49.510 |
|
impact, but there has to be like |
|
|
|
00:43:49.510 --> 00:43:51.795 |
|
another green that is also close. |
|
|
|
00:43:51.795 --> 00:43:54.569 |
|
So this guy is a supporting guy, so if |
|
|
|
00:43:54.570 --> 00:43:55.310 |
|
I take him away. |
|
|
|
00:43:55.970 --> 00:43:57.400 |
|
Then this guy is not going to have too |
|
|
|
00:43:57.400 --> 00:43:58.380 |
|
much effect out here. |
|
|
|
00:43:59.460 --> 00:44:02.280 |
|
And obviously as I increase K, that |
|
|
|
00:44:02.730 --> 00:44:06.350 |
|
happens even more. So now this has |
|
|
|
00:44:06.350 --> 00:44:08.240 |
|
relatively little influence. |
|
|
|
00:44:08.310 --> 00:44:08.890 |
|
|
|
|
|
00:44:10.540 --> 00:44:12.510 |
|
A single point by itself can't do too |
|
|
|
00:44:12.510 --> 00:44:14.670 |
|
much if you have K = 5. |
|
|
|
00:44:17.270 --> 00:44:19.740 |
|
And then as I change again, you'll see |
|
|
|
00:44:19.740 --> 00:44:21.760 |
|
that the decision boundary becomes a |
|
|
|
00:44:21.760 --> 00:44:22.430 |
|
lot smoother. |
|
|
|
00:44:22.430 --> 00:44:23.599 |
|
So here's K = 1. |
|
|
|
00:44:23.600 --> 00:44:24.890 |
|
Notice how there's like little blue |
|
|
|
00:44:24.890 --> 00:44:25.520 |
|
islands. |
|
|
|
00:44:26.550 --> 00:44:29.549 |
|
K = 3 the islands go away, but it's |
|
|
|
00:44:29.550 --> 00:44:30.410 |
|
still mostly. |
|
|
|
00:44:30.410 --> 00:44:32.630 |
|
There's like a little tiny blue area |
|
|
|
00:44:32.630 --> 00:44:34.490 |
|
here, but it's a kind of jagged |
|
|
|
00:44:34.490 --> 00:44:35.490 |
|
decision boundary. |
|
|
|
00:44:36.110 --> 00:44:39.870 |
|
K = 5 Now there's only three regions. |
|
|
|
00:44:40.810 --> 00:44:43.510 |
|
And K = 7, the boundaries get smoother. |
|
|
|
00:44:44.680 --> 00:44:47.200 |
|
Also it's worth noting that if K = 1, |
|
|
|
00:44:47.200 --> 00:44:48.870 |
|
you can never have any training error. |
|
|
|
00:44:48.870 --> 00:44:51.890 |
|
So obviously like every training point |
|
|
|
00:44:51.890 --> 00:44:53.930 |
|
will be closest to itself, so therefore |
|
|
|
00:44:53.930 --> 00:44:55.163 |
|
it will make the correct prediction, it |
|
|
|
00:44:55.163 --> 00:44:56.350 |
|
will predict its own value. |
|
|
|
00:44:57.170 --> 00:44:58.740 |
|
Unless you have a bunch of points that |
|
|
|
00:44:58.740 --> 00:45:00.720 |
|
are right on top of each other, but |
|
|
|
00:45:00.720 --> 00:45:02.510 |
|
that's kind of a weird edge case. |
|
|
|
00:45:03.260 --> 00:45:06.840 |
|
But if K = 7, you can actually have |
|
|
|
00:45:06.840 --> 00:45:07.820 |
|
misclassifications. |
|
|
|
00:45:07.820 --> 00:45:10.970 |
|
So there are green points |
|
|
|
00:45:10.970 --> 00:45:12.536 |
|
that are in the training data but would |
|
|
|
00:45:12.536 --> 00:45:14.200 |
|
be classified as blue. |
|
|
|
00:45:19.540 --> 00:45:22.166 |
|
So some comments on KNN. |
|
|
|
00:45:22.166 --> 00:45:26.130 |
|
So it's really simple, which is a good |
|
|
|
00:45:26.130 --> 00:45:26.410 |
|
thing. |
|
|
|
00:45:27.200 --> 00:45:29.440 |
|
It's an excellent baseline and |
|
|
|
00:45:29.440 --> 00:45:30.660 |
|
sometimes it's hard to beat. |
|
|
|
00:45:30.660 --> 00:45:33.050 |
|
For example, we'll look at the digits |
|
|
|
00:45:33.050 --> 00:45:36.740 |
|
task later; the digit KNN with |
|
|
|
00:45:36.740 --> 00:45:39.590 |
|
some relatively simple feature |
|
|
|
00:45:39.590 --> 00:45:40.540 |
|
transformations. |
|
|
|
00:45:41.220 --> 00:45:43.330 |
|
Can do as well as any other algorithm |
|
|
|
00:45:43.330 --> 00:45:44.500 |
|
on digits. |
|
|
|
00:45:45.480 --> 00:45:47.220 |
|
Even the very simple case that I give |
|
|
|
00:45:47.220 --> 00:45:50.080 |
|
you gets within a couple percent error |
|
|
|
00:45:50.080 --> 00:45:52.040 |
|
of the best error that's reported on |
|
|
|
00:45:52.040 --> 00:45:52.600 |
|
that data set. |
|
|
|
00:45:55.640 --> 00:45:56.820 |
|
Yeah, so it's simple. |
|
|
|
00:45:56.820 --> 00:45:57.540 |
|
Hard to beat. |
|
|
|
00:45:57.540 --> 00:45:59.408 |
|
Naturally scales with the data. |
|
|
|
00:45:59.408 --> 00:46:02.659 |
|
So you can apply KNN even if you |
|
|
|
00:46:02.660 --> 00:46:04.100 |
|
only have one training example per |
|
|
|
00:46:04.100 --> 00:46:06.312 |
|
class, and you can also apply if you |
|
|
|
00:46:06.312 --> 00:46:07.970 |
|
have a million training examples per |
|
|
|
00:46:07.970 --> 00:46:08.370 |
|
class. |
|
|
|
00:46:08.370 --> 00:46:10.050 |
|
And it will tend to get better the more |
|
|
|
00:46:10.050 --> 00:46:11.169 |
|
data you have. |
|
|
|
00:46:11.760 --> 00:46:13.380 |
|
And if you only have one training example |
|
|
|
00:46:13.380 --> 00:46:15.160 |
|
per class, a lot of other algorithms |
|
|
|
00:46:15.160 --> 00:46:16.680 |
|
can't be used because there's not |
|
|
|
00:46:16.680 --> 00:46:18.980 |
|
enough data to fit models to your one |
|
|
|
00:46:18.980 --> 00:46:22.040 |
|
example, but KNN can be used. So for |
|
|
|
00:46:22.040 --> 00:46:22.970 |
|
things like |
|
|
|
00:46:23.720 --> 00:46:26.090 |
|
person identity verification or |
|
|
|
00:46:26.090 --> 00:46:26.330 |
|
something, |
|
|
|
00:46:26.330 --> 00:46:27.850 |
|
You might only have one example of a |
|
|
|
00:46:27.850 --> 00:46:29.420 |
|
face and you need to match based on |
|
|
|
00:46:29.420 --> 00:46:30.560 |
|
that example. |
|
|
|
00:46:30.560 --> 00:46:31.880 |
|
Then you're almost certainly going to |
|
|
|
00:46:31.880 --> 00:46:34.510 |
|
end up using nearest neighbor as part |
|
|
|
00:46:34.510 --> 00:46:35.300 |
|
of your algorithm. |
|
|
|
00:46:37.250 --> 00:46:40.040 |
|
Higher K gives you smoother functions, |
|
|
|
00:46:40.040 --> 00:46:42.330 |
|
so if you increase K you get a smoother |
|
|
|
00:46:42.330 --> 00:46:43.180 |
|
prediction function. |
|
|
|
00:46:44.630 --> 00:46:47.500 |
|
Now one disadvantage of KNN is that it |
|
|
|
00:46:47.500 --> 00:46:48.440 |
|
can be slow. |
|
|
|
00:46:48.440 --> 00:46:50.910 |
|
So in homework one, if you apply your |
|
|
|
00:46:50.910 --> 00:46:52.965 |
|
full test set to the full training set, |
|
|
|
00:46:52.965 --> 00:46:56.390 |
|
it will take 10s of minutes to |
|
|
|
00:46:56.390 --> 00:46:57.220 |
|
evaluate. |
|
|
|
00:46:58.100 --> 00:47:00.080 |
|
Maybe 30 minutes or 60 minutes. |
|
|
|
00:47:01.660 --> 00:47:03.210 |
|
But there's tricks to speed it up. |
|
|
|
00:47:03.210 --> 00:47:05.300 |
|
So like a simple thing that makes a |
|
|
|
00:47:05.300 --> 00:47:07.360 |
|
little bit of impact is that when |
|
|
|
00:47:07.360 --> 00:47:11.950 |
|
you're minimizing the L2 distance of XI |
|
|
|
00:47:11.950 --> 00:47:14.780 |
|
and XT, you can actually like expand it |
|
|
|
00:47:14.780 --> 00:47:16.490 |
|
and then notice that some terms don't |
|
|
|
00:47:16.490 --> 00:47:17.380 |
|
have any impact. |
|
|
|
00:47:17.380 --> 00:47:17.880 |
|
So. |
|
|
|
00:47:18.670 --> 00:47:19.645 |
|
XT is the test image. |
|
|
|
00:47:19.645 --> 00:47:21.745 |
|
I want to find the minimum training |
|
|
|
00:47:21.745 --> 00:47:24.910 |
|
image indexed by I that minimizes the |
|
|
|
00:47:24.910 --> 00:47:27.930 |
|
distance from all my Xis to XT which is |
|
|
|
00:47:27.930 --> 00:47:28.890 |
|
a test image. |
|
|
|
00:47:28.890 --> 00:47:32.905 |
|
It doesn't depend on this XT squared, the |
|
|
|
00:47:32.905 --> 00:47:35.910 |
|
XT-transpose-XT term, and so I don't need to |
|
|
|
00:47:35.910 --> 00:47:36.530 |
|
compute that. |
|
|
|
00:47:37.170 --> 00:47:39.170 |
|
Also, this only needs to be computed |
|
|
|
00:47:39.170 --> 00:47:40.460 |
|
once per training image. |
|
|
|
00:47:41.410 --> 00:47:43.405 |
|
Not for every single XT that I'm |
|
|
|
00:47:43.405 --> 00:47:45.905 |
|
testing, not for every test image that |
|
|
|
00:47:45.905 --> 00:47:47.509 |
|
test example that I'm testing. |
|
|
|
00:47:48.220 --> 00:47:51.460 |
|
And so it this is the only thing that |
|
|
|
00:47:51.460 --> 00:47:52.860 |
|
you have to compute for every pair of |
|
|
|
00:47:52.860 --> 00:47:54.060 |
|
training and test examples. |
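A sketch of that speed-up, assuming NumPy: since the squared distance expands as xi.xi - 2 xi.xt + xt.xt, and the xt.xt term is the same for every training example, the argmin only needs the first two terms. The function and data names here are made up for illustration.

```python
import numpy as np

def nearest_neighbor_indices(X_train, X_test):
    """1-NN for every test row, dropping the xt.xt term that doesn't affect the argmin."""
    train_sq = (X_train ** 2).sum(axis=1)                     # xi.xi, once per training example
    scores = train_sq[:, None] - 2.0 * (X_train @ X_test.T)   # one score per (train, test) pair
    return scores.argmin(axis=0)                              # index of the closest training row

# Made-up data just to show the call.
rng = np.random.default_rng(0)
print(nearest_neighbor_indices(rng.normal(size=(1000, 784)), rng.normal(size=(5, 784))))
```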
|
|
|
00:47:56.600 --> 00:47:59.517 |
|
In a GPU you can actually do the. |
|
|
|
00:47:59.517 --> 00:48:01.770 |
|
You could do the MNIST nearest neighbor |
|
|
|
00:48:01.770 --> 00:48:03.595 |
|
in sub second. |
|
|
|
00:48:03.595 --> 00:48:06.260 |
|
It's extremely fast, it's just not fast |
|
|
|
00:48:06.260 --> 00:48:06.830 |
|
on a CPU. |
|
|
|
00:48:08.020 --> 00:48:09.475 |
|
There's also approximate nearest |
|
|
|
00:48:09.475 --> 00:48:11.560 |
|
neighbor methods like FLANN, or even |
|
|
|
00:48:11.560 --> 00:48:13.930 |
|
exact nearest neighbor methods that are |
|
|
|
00:48:13.930 --> 00:48:15.970 |
|
much more efficient than the simple |
|
|
|
00:48:15.970 --> 00:48:17.310 |
|
method that you would want to use for |
|
|
|
00:48:17.310 --> 00:48:17.750 |
|
the assignment. |
|
|
|
00:48:20.720 --> 00:48:22.010 |
|
Another thing that's nice is that |
|
|
|
00:48:22.010 --> 00:48:24.020 |
|
there's no training time, so there's |
|
|
|
00:48:24.020 --> 00:48:25.243 |
|
not really any training. |
|
|
|
00:48:25.243 --> 00:48:27.800 |
|
The training data is your model, so you |
|
|
|
00:48:27.800 --> 00:48:29.115 |
|
don't have to do anything to train it. |
|
|
|
00:48:29.115 --> 00:48:30.760 |
|
You just get your data, you input the |
|
|
|
00:48:30.760 --> 00:48:30.950 |
|
data. |
|
|
|
00:48:32.220 --> 00:48:33.680 |
|
Unless you want to learn a distance |
|
|
|
00:48:33.680 --> 00:48:34.940 |
|
function or learned features or |
|
|
|
00:48:34.940 --> 00:48:35.570 |
|
something like that. |
|
|
|
00:48:37.730 --> 00:48:41.170 |
|
Another thing is that with infinite |
|
|
|
00:48:41.170 --> 00:48:43.910 |
|
examples, one nearest neighbor |
|
|
|
00:48:43.910 --> 00:48:48.030 |
|
provably has error that is |
|
|
|
00:48:48.030 --> 00:48:50.140 |
|
at most twice the Bayes optimal error. |
|
|
|
00:48:52.250 --> 00:48:55.640 |
|
But that's kind of a useless, somewhat |
|
|
|
00:48:55.640 --> 00:48:59.573 |
|
useless claim because you never have |
|
|
|
00:48:59.573 --> 00:49:02.116 |
|
infinite examples. And |
|
|
|
00:49:02.116 --> 00:49:05.550 |
|
so I'll explain why that thing works. |
|
|
|
00:49:05.550 --> 00:49:07.880 |
|
I'm going to have to write on chalk so |
|
|
|
00:49:07.880 --> 00:49:09.220 |
|
this might not carry over to the |
|
|
|
00:49:09.220 --> 00:49:12.101 |
|
recording, but basically the idea is |
|
|
|
00:49:12.101 --> 00:49:15.949 |
|
that if you have if you have infinite |
|
|
|
00:49:15.949 --> 00:49:17.509 |
|
examples, then what it means is that |
|
|
|
00:49:17.510 --> 00:49:19.630 |
|
for any possible feature value where |
|
|
|
00:49:19.630 --> 00:49:21.280 |
|
there's non 0 probability. |
|
|
|
00:49:21.380 --> 00:49:23.040 |
|
You've got infinite examples on that |
|
|
|
00:49:23.040 --> 00:49:24.310 |
|
one feature value as well. |
|
|
|
00:49:25.210 --> 00:49:28.150 |
|
And so when you assign a new test |
|
|
|
00:49:28.150 --> 00:49:30.430 |
|
point to a label. |
|
|
|
00:49:31.130 --> 00:49:34.870 |
|
You're randomly choosing one of those |
|
|
|
00:49:34.870 --> 00:49:37.010 |
|
infinite samples that has the exact |
|
|
|
00:49:37.010 --> 00:49:38.770 |
|
same features as your test point. |
|
|
|
00:49:39.470 --> 00:49:42.140 |
|
So if we look at a binary, this is for |
|
|
|
00:49:42.140 --> 00:49:43.740 |
|
binary classification. |
|
|
|
00:49:43.740 --> 00:49:47.570 |
|
So let's say that we have like. |
|
|
|
00:49:48.850 --> 00:49:52.940 |
|
Given some, given some features X, this |
|
|
|
00:49:52.940 --> 00:49:54.580 |
|
is just like the X of the test that I |
|
|
|
00:49:54.580 --> 00:49:55.210 |
|
sampled. |
|
|
|
00:49:55.850 --> 00:49:59.360 |
|
Let's say probability of y = 1 equals |
|
|
|
00:49:59.360 --> 00:50:00.050 |
|
epsilon. |
|
|
|
00:50:00.720 --> 00:50:07.199 |
|
And so probability of y = 0 given X = 1 |
|
|
|
00:50:07.200 --> 00:50:08.330 |
|
minus epsilon. |
|
|
|
00:50:09.650 --> 00:50:12.380 |
|
Then when I sample a test value and |
|
|
|
00:50:12.380 --> 00:50:14.000 |
|
let's say epsilon is really small. |
|
|
|
00:50:16.060 --> 00:50:18.710 |
|
When I sample a test value, one thing |
|
|
|
00:50:18.710 --> 00:50:21.123 |
|
that could happen is that I could |
|
|
|
00:50:21.123 --> 00:50:23.335 |
|
sample one of these epsilon probability |
|
|
|
00:50:23.335 --> 00:50:27.360 |
|
test values or test samples, and so the |
|
|
|
00:50:27.360 --> 00:50:28.469 |
|
true label is 1. |
|
|
|
00:50:29.460 --> 00:50:33.010 |
|
And then my error will be epsilon times |
|
|
|
00:50:33.010 --> 00:50:34.320 |
|
1 minus epsilon. |
|
|
|
00:50:35.560 --> 00:50:38.520 |
|
Or more probably, if Epsilon is small, |
|
|
|
00:50:38.520 --> 00:50:40.160 |
|
I could sample one of the test samples |
|
|
|
00:50:40.160 --> 00:50:41.299 |
|
where y = 0. |
|
|
|
00:50:42.420 --> 00:50:45.550 |
|
And then my probability of sampling |
|
|
|
00:50:45.550 --> 00:50:47.634 |
|
that is 1 minus epsilon and the |
|
|
|
00:50:47.634 --> 00:50:49.180 |
|
probability of an error given that I |
|
|
|
00:50:49.180 --> 00:50:50.940 |
|
sampled it is epsilon. |
|
|
|
00:50:50.940 --> 00:50:52.985 |
|
So that's the probability that then I |
|
|
|
00:50:52.985 --> 00:50:54.149 |
|
sample a training sample. |
|
|
|
00:50:54.149 --> 00:50:56.080 |
|
I randomly choose a training sample of |
|
|
|
00:50:56.080 --> 00:50:58.020 |
|
all the exact match matching training |
|
|
|
00:50:58.020 --> 00:51:00.390 |
|
samples that has that class. |
|
|
|
00:51:01.330 --> 00:51:02.760 |
|
And so the total error |
|
|
|
00:51:03.790 --> 00:51:09.105 |
|
is 2 epsilon minus two |
|
|
|
00:51:09.105 --> 00:51:10.480 |
|
epsilon squared. |
|
|
|
00:51:12.440 --> 00:51:15.130 |
|
As Epsilon gets really small, this guy |
|
|
|
00:51:15.130 --> 00:51:16.350 |
|
goes away, right? |
|
|
|
00:51:16.350 --> 00:51:18.540 |
|
This will go to zero faster than this. |
|
|
|
00:51:19.490 --> 00:51:22.950 |
|
And so my error is 2 epsilon. |
|
|
|
00:51:23.610 --> 00:51:26.000 |
|
But the best thing I could have done |
|
|
|
00:51:26.000 --> 00:51:27.680 |
|
was to just choose. |
|
|
|
00:51:27.680 --> 00:51:30.420 |
|
In this case, the optimal decision |
|
|
|
00:51:30.420 --> 00:51:33.220 |
|
would have been to choose Class 0 every |
|
|
|
00:51:33.220 --> 00:51:35.137 |
|
time in this scenario, because this is |
|
|
|
00:51:35.137 --> 00:51:37.370 |
|
the more probable one, and the error |
|
|
|
00:51:37.370 --> 00:51:38.970 |
|
for this would just be epsilon. |
|
|
|
00:51:38.970 --> 00:51:41.014 |
|
So my nearest neighbor error is 2 |
|
|
|
00:51:41.014 --> 00:51:41.385 |
|
epsilon. |
|
|
|
00:51:41.385 --> 00:51:43.240 |
|
The optimal error is epsilon. |
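Written out, the board argument is roughly this (with P(y = 1 given x) equal to epsilon, epsilon small, and infinitely many training samples at x):

```latex
P(\text{1-NN error}) =
  \underbrace{\varepsilon(1-\varepsilon)}_{\text{test label }1,\ \text{neighbor label }0}
  + \underbrace{(1-\varepsilon)\,\varepsilon}_{\text{test label }0,\ \text{neighbor label }1}
  = 2\varepsilon - 2\varepsilon^2 \;\le\; 2\varepsilon,
\qquad
P(\text{Bayes error}) = \min(\varepsilon,\, 1-\varepsilon) = \varepsilon .
```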
|
|
|
00:51:44.950 --> 00:51:46.950 |
|
So the reason that I show the |
|
|
|
00:51:46.950 --> 00:51:49.540 |
|
derivation of that theorem is just |
|
|
|
00:51:49.540 --> 00:51:50.180 |
|
that. |
|
|
|
00:51:50.300 --> 00:51:50.890 |
|
|
|
|
|
00:51:52.000 --> 00:51:54.090 |
|
It's like kind of ridiculously |
|
|
|
00:51:54.090 --> 00:51:54.606 |
|
implausible. |
|
|
|
00:51:54.606 --> 00:51:56.910 |
|
So the theorem only holds if you |
|
|
|
00:51:56.910 --> 00:51:58.626 |
|
actually have infinite training samples |
|
|
|
00:51:58.626 --> 00:52:00.479 |
|
for every single possible value of the |
|
|
|
00:52:00.480 --> 00:52:01.050 |
|
features. |
|
|
|
00:52:01.050 --> 00:52:04.327 |
|
So while while theoretically with |
|
|
|
00:52:04.327 --> 00:52:06.490 |
|
infinite training samples one NN |
|
|
|
00:52:06.490 --> 00:52:08.120 |
|
has error, that's at most twice the |
|
|
|
00:52:08.120 --> 00:52:10.950 |
|
Bayes optimal error rate, in practice |
|
|
|
00:52:10.950 --> 00:52:12.355 |
|
like that tells you absolutely nothing |
|
|
|
00:52:12.355 --> 00:52:12.870 |
|
at all. |
|
|
|
00:52:12.870 --> 00:52:14.650 |
|
So I just want to mention that because |
|
|
|
00:52:14.650 --> 00:52:16.690 |
|
it's an often, it's an often quoted |
|
|
|
00:52:16.690 --> 00:52:17.690 |
|
thing about nearest neighbor. |
|
|
|
00:52:17.690 --> 00:52:18.880 |
|
It doesn't mean that it's any good, |
|
|
|
00:52:18.880 --> 00:52:21.980 |
|
although it is good, just not for that. |
|
|
|
00:52:23.180 --> 00:52:24.420 |
|
So then. |
|
|
|
00:52:24.500 --> 00:52:24.950 |
|
|
|
|
|
00:52:25.830 --> 00:52:27.710 |
|
So that was nearest neighbor. |
|
|
|
00:52:27.710 --> 00:52:29.570 |
|
Now I want to talk a little bit about |
|
|
|
00:52:29.570 --> 00:52:31.930 |
|
error, how we measure it and what |
|
|
|
00:52:31.930 --> 00:52:32.560 |
|
causes it. |
|
|
|
00:52:33.690 --> 00:52:34.300 |
|
So. |
|
|
|
00:52:34.950 --> 00:52:36.660 |
|
When we measure and analyze |
|
|
|
00:52:36.660 --> 00:52:38.080 |
|
classification error. |
|
|
|
00:52:39.760 --> 00:52:43.060 |
|
The most common sounds a little |
|
|
|
00:52:43.060 --> 00:52:45.760 |
|
redundant, but the most common way to |
|
|
|
00:52:45.760 --> 00:52:48.320 |
|
measure the error of a classifier is |
|
|
|
00:52:48.320 --> 00:52:50.510 |
|
with the classification error, which is |
|
|
|
00:52:50.510 --> 00:52:51.930 |
|
the percent of examples that are |
|
|
|
00:52:51.930 --> 00:52:52.440 |
|
incorrect. |
|
|
|
00:52:53.400 --> 00:52:55.470 |
|
So mathematically it's just the sum |
|
|
|
00:52:55.470 --> 00:52:56.140 |
|
over. |
|
|
|
00:52:57.850 --> 00:53:00.229 |
|
I'm assuming that this like not equal |
|
|
|
00:53:00.230 --> 00:53:02.829 |
|
sign just returns a 1 or a 0: 1 if they're |
|
|
|
00:53:02.829 --> 00:53:04.609 |
|
not equal, 0 if they're equal. |
|
|
|
00:53:05.120 --> 00:53:08.716 |
|
And so it's just a count of the number |
|
|
|
00:53:08.716 --> 00:53:10.120 |
|
of cases where the prediction is |
|
|
|
00:53:10.120 --> 00:53:12.390 |
|
different than the true value divided |
|
|
|
00:53:12.390 --> 00:53:13.610 |
|
by the number of cases that are |
|
|
|
00:53:13.610 --> 00:53:14.140 |
|
evaluated. |
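As a formula, with y-hat the prediction and the bracket being that 0/1 indicator:

```latex
\text{err} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}\!\left[\hat{y}_i \neq y_i\right]
```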
|
|
|
00:53:15.550 --> 00:53:17.570 |
|
And then if you want to provide more |
|
|
|
00:53:17.570 --> 00:53:19.220 |
|
insight into the kinds of errors that |
|
|
|
00:53:19.220 --> 00:53:21.030 |
|
you get, you would use a confusion |
|
|
|
00:53:21.030 --> 00:53:21.590 |
|
matrix. |
|
|
|
00:53:22.400 --> 00:53:24.950 |
|
So a confusion matrix is a count of for |
|
|
|
00:53:24.950 --> 00:53:26.379 |
|
each how many. |
|
|
|
00:53:26.380 --> 00:53:27.533 |
|
There's two ways of doing it. |
|
|
|
00:53:27.533 --> 00:53:29.370 |
|
One is just count wise. |
|
|
|
00:53:29.370 --> 00:53:32.850 |
|
How many examples had a true prediction |
|
|
|
00:53:32.850 --> 00:53:35.580 |
|
or a true value of 1 label and a |
|
|
|
00:53:35.580 --> 00:53:37.200 |
|
predicted value of another label. |
|
|
|
00:53:37.860 --> 00:53:40.242 |
|
So here these are the true labels. |
|
|
|
00:53:40.242 --> 00:53:43.210 |
|
These are the predicted labels, and |
|
|
|
00:53:43.210 --> 00:53:48.520 |
|
sometimes you normalize it by the |
|
|
|
00:53:48.520 --> 00:53:50.620 |
|
fraction of true labels, typically. |
|
|
|
00:53:50.620 --> 00:53:53.352 |
|
So this means that out of all of the |
|
|
|
00:53:53.352 --> 00:53:55.460 |
|
true labels that were setosa, |
|
|
|
00:53:55.460 --> 00:53:58.230 |
|
whatever that means, 100% of them |
|
|
|
00:53:58.230 --> 00:53:59.760 |
|
were assigned to setosa. |
|
|
|
00:54:01.330 --> 00:54:04.890 |
|
Out of all the test samples where the |
|
|
|
00:54:04.890 --> 00:54:07.762 |
|
true label was versicolor, 62% were |
|
|
|
00:54:07.762 --> 00:54:10.740 |
|
assigned a versicolor and 38% were |
|
|
|
00:54:10.740 --> 00:54:12.210 |
|
assigned to VIRGINICA. |
|
|
|
00:54:13.150 --> 00:54:15.950 |
|
And out of all the test samples where |
|
|
|
00:54:15.950 --> 00:54:18.320 |
|
the true label is virginica, 100% were |
|
|
|
00:54:18.320 --> 00:54:19.650 |
|
assigned to virginica. |
|
|
|
00:54:19.650 --> 00:54:21.610 |
|
So this tells you like a little bit |
|
|
|
00:54:21.610 --> 00:54:22.950 |
|
more than the classification error, |
|
|
|
00:54:22.950 --> 00:54:24.420 |
|
because now you can see there's only |
|
|
|
00:54:24.420 --> 00:54:26.590 |
|
mistakes made on this versicolor and |
|
|
|
00:54:26.590 --> 00:54:28.250 |
|
it only gets confused with virginica. |
|
|
|
00:54:30.900 --> 00:54:32.760 |
|
So I'll give you an example here. |
|
|
|
00:54:34.790 --> 00:54:38.620 |
|
So there's no document projector thing, |
|
|
|
00:54:38.620 --> 00:54:39.480 |
|
unfortunately. |
|
|
|
00:54:40.140 --> 00:54:44.077 |
|
Which I will try to fix, but I will |
|
|
|
00:54:44.077 --> 00:54:45.870 |
|
this is simple enough that I can just |
|
|
|
00:54:45.870 --> 00:54:48.175 |
|
draw on this slide or type on this |
|
|
|
00:54:48.175 --> 00:54:48.410 |
|
slide. |
|
|
|
00:54:50.880 --> 00:54:51.120 |
|
Yeah. |
|
|
|
00:54:54.590 --> 00:54:55.370 |
|
|
|
|
|
00:54:58.460 --> 00:54:59.040 |
|
There. |
|
|
|
00:55:05.270 --> 00:55:07.190 |
|
OK, I don't want to figure that out. |
|
|
|
00:55:07.190 --> 00:55:08.530 |
|
So. |
|
|
|
00:55:14.470 --> 00:55:14.940 |
|
I. |
|
|
|
00:55:21.420 --> 00:55:23.060 |
|
That sounds good. |
|
|
|
00:55:23.060 --> 00:55:23.500 |
|
There it goes. |
|
|
|
00:55:25.990 --> 00:55:29.845 |
|
OK, so I will just verbally do it. |
|
|
|
00:55:29.845 --> 00:55:31.770 |
|
So let's say so these are the true |
|
|
|
00:55:31.770 --> 00:55:32.055 |
|
labels. |
|
|
|
00:55:32.055 --> 00:55:34.430 |
|
These are the predicted labels. |
|
|
|
00:55:34.430 --> 00:55:36.020 |
|
What is the classification error? |
|
|
|
00:55:58.730 --> 00:56:00.082 |
|
Yeah, 3 / 7. |
|
|
|
00:56:00.082 --> 00:56:04.860 |
|
So there's 7 rows, right, that are other |
|
|
|
00:56:04.860 --> 00:56:08.463 |
|
than the label row, and there are |
|
|
|
00:56:08.463 --> 00:56:10.023 |
|
three times that |
|
|
|
00:56:10.023 --> 00:56:12.300 |
|
one of the values is no and one of the |
|
|
|
00:56:12.300 --> 00:56:13.090 |
|
values is yes. |
|
|
|
00:56:13.090 --> 00:56:16.580 |
|
So the classification error is 3 / 7. |
|
|
|
00:56:17.810 --> 00:56:21.170 |
|
And let's do the confusion matrix. |
|
|
|
00:56:28.020 --> 00:56:30.960 |
|
Right, so the so the true label. |
|
|
|
00:56:30.960 --> 00:56:33.060 |
|
So how many times do I have a true |
|
|
|
00:56:33.060 --> 00:56:35.070 |
|
label that's no and a predicted |
|
|
|
00:56:35.070 --> 00:56:35.960 |
|
label that's no. |
|
|
|
00:56:37.520 --> 00:56:38.080 |
|
Two. |
|
|
|
00:56:38.080 --> 00:56:39.570 |
|
OK, how many times do I have a true |
|
|
|
00:56:39.570 --> 00:56:41.265 |
|
label that's no and a predicted label |
|
|
|
00:56:41.265 --> 00:56:41.850 |
|
that's yes? |
|
|
|
00:56:45.390 --> 00:56:48.026 |
|
OK, how many times do I have a true |
|
|
|
00:56:48.026 --> 00:56:49.535 |
|
label that's yes and predicted label |
|
|
|
00:56:49.535 --> 00:56:50.060 |
|
that's no? |
|
|
|
00:56:51.800 --> 00:56:54.190 |
|
One, and I guess I have two of the |
|
|
|
00:56:54.190 --> 00:56:54.540 |
|
others. |
|
|
|
00:56:55.650 --> 00:56:56.329 |
|
Is that right? |
|
|
|
00:56:56.330 --> 00:56:58.300 |
|
I have two times that there's a true |
|
|
|
00:56:58.300 --> 00:57:00.420 |
|
label yes and a predicted label of no. |
|
|
|
00:57:00.420 --> 00:57:01.260 |
|
Is that right? |
|
|
|
00:57:03.730 --> 00:57:05.049 |
|
Or no, yes and yes. |
|
|
|
00:57:05.050 --> 00:57:06.050 |
|
I'm on yes and yes. |
|
|
|
00:57:06.050 --> 00:57:06.920 |
|
Two, yes. |
|
|
|
00:57:07.890 --> 00:57:08.740 |
|
OK, cool. |
|
|
|
00:57:08.740 --> 00:57:09.380 |
|
All right, good. |
|
|
|
00:57:09.380 --> 00:57:09.903 |
|
Thumbs up. |
|
|
|
00:57:09.903 --> 00:57:11.750 |
|
All right, so this sums up to 7. |
|
|
|
00:57:11.750 --> 00:57:13.510 |
|
So this is a confusion matrix. |
|
|
|
00:57:13.510 --> 00:57:14.920 |
|
That's just in terms of the total |
|
|
|
00:57:14.920 --> 00:57:15.380 |
|
counts. |
|
|
|
00:57:16.340 --> 00:57:18.510 |
|
And then if I want to convert this to. |
|
|
|
00:57:19.290 --> 00:57:23.000 |
|
A normalized matrix, which is basically |
|
|
|
00:57:23.000 --> 00:57:25.330 |
|
the probability that I predict a |
|
|
|
00:57:25.330 --> 00:57:27.530 |
|
particular value given the true label. |
|
|
|
00:57:27.530 --> 00:57:29.540 |
|
So this will be the probability that I |
|
|
|
00:57:29.540 --> 00:57:32.025 |
|
predicted no given that the true label |
|
|
|
00:57:32.025 --> 00:57:32.720 |
|
is no. |
|
|
|
00:57:33.360 --> 00:57:35.280 |
|
Then I just divide by the total count |
|
|
|
00:57:35.280 --> 00:57:37.230 |
|
or I divide by the. |
|
|
|
00:57:37.880 --> 00:57:40.250 |
|
By the number of examples in each row. |
|
|
|
00:57:40.250 --> 00:57:42.580 |
|
So this one would be what? |
|
|
|
00:57:42.580 --> 00:57:44.295 |
|
What's the probability that I predict |
|
|
|
00:57:44.295 --> 00:57:46.260 |
|
no given that the true answer is no? |
|
|
|
00:57:48.530 --> 00:57:49.200 |
|
A half, right? |
|
|
|
00:57:49.200 --> 00:57:50.760 |
|
I just divide this by 4. |
|
|
|
00:57:51.790 --> 00:57:53.680 |
|
And likewise divide this by 4. |
|
|
|
00:57:53.680 --> 00:57:55.360 |
|
And what is the probability that I |
|
|
|
00:57:55.360 --> 00:57:56.800 |
|
predict no given that the true answer |
|
|
|
00:57:56.800 --> 00:57:57.340 |
|
is yes? |
|
|
|
00:57:59.660 --> 00:58:02.440 |
|
Right 1 / 3 and this will be 2 / 3. |
|
|
|
00:58:03.400 --> 00:58:05.210 |
|
So that's how you compute the confusion |
|
|
|
00:58:05.210 --> 00:58:07.260 |
|
matrix and the classification error. |
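A small sketch of the same computation, using made-up yes/no labels chosen to match the counts worked out above (assuming NumPy):

```python
import numpy as np

# Hypothetical labels giving the counts above: no->no 2, no->yes 2, yes->no 1, yes->yes 2.
y_true = np.array(["no", "no", "no", "no", "yes", "yes", "yes"])
y_pred = np.array(["no", "no", "yes", "yes", "no", "yes", "yes"])

print((y_true != y_pred).mean())                   # classification error: 3/7

labels = ["no", "yes"]
counts = np.zeros((2, 2))
for t, p in zip(y_true, y_pred):
    counts[labels.index(t), labels.index(p)] += 1  # rows = true label, columns = predicted
print(counts)                                      # [[2. 2.] [1. 2.]]

# Row-normalize to get P(predicted label | true label).
print(counts / counts.sum(axis=1, keepdims=True))  # [[0.5 0.5] [0.333 0.667]]
```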
|
|
|
00:58:12.880 --> 00:58:15.560 |
|
All right, so for regression error. |
|
|
|
00:58:15.650 --> 00:58:16.380 |
|
|
|
|
|
00:58:17.890 --> 00:58:20.920 |
|
You will usually use one of these. |
|
|
|
00:58:20.920 --> 00:58:23.316 |
|
Root mean squared error is probably one |
|
|
|
00:58:23.316 --> 00:58:26.320 |
|
of the most common, so that's just |
|
|
|
00:58:26.320 --> 00:58:27.280 |
|
written there. |
|
|
|
00:58:27.280 --> 00:58:30.790 |
|
You take this sum of squared values, |
|
|
|
00:58:30.790 --> 00:58:33.780 |
|
and then you divide it by the total |
|
|
|
00:58:33.780 --> 00:58:34.989 |
|
number of values. |
|
|
|
00:58:34.990 --> 00:58:37.580 |
|
N is the range of I. |
|
|
|
00:58:38.250 --> 00:58:40.050 |
|
And then you take the square root. |
|
|
|
00:58:40.050 --> 00:58:42.630 |
|
So sometimes the mistake you can make |
|
|
|
00:58:42.630 --> 00:58:44.000 |
|
on this is to do the order of |
|
|
|
00:58:44.000 --> 00:58:44.950 |
|
operations wrong. |
|
|
|
00:58:45.570 --> 00:58:47.855 |
|
Just remember it's in the name root |
|
|
|
00:58:47.855 --> 00:58:48.846 |
|
mean squared. |
|
|
|
00:58:48.846 --> 00:58:53.260 |
|
So as an equation, reading it from the |
|
|
|
00:58:53.260 --> 00:58:55.594 |
|
outside in, it's the root, |
|
|
|
00:58:55.594 --> 00:58:58.346 |
|
then the mean, the division by N, and then |
|
|
|
00:58:58.346 --> 00:59:00.946 |
|
you have this summation of squared |
|
|
|
00:59:00.946 --> 00:59:01.428 |
|
differences inside. |
|
|
|
00:59:01.428 --> 00:59:02.210 |
|
So yeah. |
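So, reading the name right to left, the formula is:

```latex
\text{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\hat{y}_i - y_i\right)^2}
```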
|
|
|
00:59:05.010 --> 00:59:05.500 |
|
All right. |
|
|
|
00:59:05.500 --> 00:59:08.490 |
|
So root mean squared error is kind of |
|
|
|
00:59:08.490 --> 00:59:09.960 |
|
sensitive to your outliers. |
|
|
|
00:59:09.960 --> 00:59:13.850 |
|
If you had like some things |
|
|
|
00:59:13.850 --> 00:59:15.510 |
|
that are mislabeled or just really |
|
|
|
00:59:15.510 --> 00:59:17.510 |
|
weird examples they could end up |
|
|
|
00:59:17.510 --> 00:59:19.260 |
|
dominating your RMSE error. |
|
|
|
00:59:19.260 --> 00:59:21.560 |
|
So if like one of these guys, if I'm |
|
|
|
00:59:21.560 --> 00:59:23.490 |
|
doing some regression or something and |
|
|
|
00:59:23.490 --> 00:59:26.500 |
|
one of them is like way, way off, |
|
|
|
00:59:26.500 --> 00:59:29.122 |
|
then the squared error of |
|
|
|
00:59:29.122 --> 00:59:31.580 |
|
that one example |
|
|
|
00:59:31.580 --> 00:59:33.060 |
|
is going to be most of the |
|
|
|
00:59:33.130 --> 00:59:34.010 |
|
mean squared error. |
|
|
|
00:59:35.430 --> 00:59:36.930 |
|
So you can also sometimes do mean |
|
|
|
00:59:36.930 --> 00:59:39.000 |
|
absolute error that will be less |
|
|
|
00:59:39.000 --> 00:59:40.940 |
|
sensitive to outliers, things that have |
|
|
|
00:59:40.940 --> 00:59:42.000 |
|
extraordinary error. |
|
|
|
00:59:43.150 --> 00:59:45.700 |
|
And then both of these are sensitive to |
|
|
|
00:59:45.700 --> 00:59:46.480 |
|
your units. |
|
|
|
00:59:46.480 --> 00:59:48.590 |
|
So if you're measuring the root mean |
|
|
|
00:59:48.590 --> 00:59:51.090 |
|
squared error and feet versus meters, |
|
|
|
00:59:51.090 --> 00:59:52.740 |
|
you'll obviously get different values. |
|
|
|
00:59:53.900 --> 00:59:56.120 |
|
And so a lot of times sometimes people |
|
|
|
00:59:56.120 --> 01:00:01.250 |
|
use R2, which is the amount of |
|
|
|
01:00:01.250 --> 01:00:02.520 |
|
explained variance. |
|
|
|
01:00:02.520 --> 01:00:07.329 |
|
So you're normalizing so the R2 is 1 |
|
|
|
01:00:07.330 --> 01:00:09.740 |
|
minus this thing here, this ratio. |
|
|
|
01:00:10.470 --> 01:00:13.583 |
|
And the numerator of this ratio is the |
|
|
|
01:00:13.583 --> 01:00:16.890 |
|
sum of squared difference between your |
|
|
|
01:00:16.890 --> 01:00:18.460 |
|
prediction and the true value. |
|
|
|
01:00:19.470 --> 01:00:21.535 |
|
So if you divide that by N, it's the |
|
|
|
01:00:21.535 --> 01:00:21.800 |
|
variance. |
|
|
|
01:00:21.800 --> 01:00:23.930 |
|
It's the conditional variance of the |
|
|
|
01:00:24.860 --> 01:00:27.936 |
|
true value given your model's |
|
|
|
01:00:27.936 --> 01:00:28.819 |
|
prediction. |
|
|
|
01:00:30.130 --> 01:00:32.746 |
|
And then you divide it by the variance |
|
|
|
01:00:32.746 --> 01:00:35.854 |
|
or, you could have a one over |
|
|
|
01:00:35.854 --> 01:00:37.402 |
|
N here and a one over N here, and |
|
|
|
01:00:37.402 --> 01:00:39.230 |
|
then this would be the |
|
|
|
01:00:39.230 --> 01:00:40.805 |
|
conditional variance and this is the |
|
|
|
01:00:40.805 --> 01:00:42.060 |
|
variance of the true labels. |
|
|
|
01:00:43.280 --> 01:00:46.710 |
|
So 1 minus that ratio is the amount of |
|
|
|
01:00:46.710 --> 01:00:48.160 |
|
the variance that's explained and it |
|
|
|
01:00:48.160 --> 01:00:49.340 |
|
doesn't have any units. |
|
|
|
01:00:49.340 --> 01:00:52.359 |
|
If you measure it in feet or meters, |
|
|
|
01:00:52.360 --> 01:00:53.770 |
|
you're going to get exactly the same |
|
|
|
01:00:53.770 --> 01:00:55.440 |
|
value because the feet or the meters |
|
|
|
01:00:55.440 --> 01:00:57.519 |
|
will cancel out and that ratio. |
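A sketch of all three measures side by side (assuming NumPy; the data is made up). Rescaling the units changes RMSE and MAE but leaves R^2 alone, which is the point about feet versus meters:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    err = y_pred - y_true
    rmse = np.sqrt((err ** 2).mean())   # sensitive to outliers and to units
    mae = np.abs(err).mean()            # less sensitive to outliers, still unit-dependent
    r2 = 1.0 - (err ** 2).sum() / ((y_true - y_true.mean()) ** 2).sum()   # unit-free
    return rmse, mae, r2

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 4.4])
print(regression_metrics(y_true, y_pred))                 # say, in meters
print(regression_metrics(y_true * 3.28, y_pred * 3.28))   # same data in feet: same R^2
```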
|
|
|
01:01:00.130 --> 01:01:01.520 |
|
That we might talk, well, we'll talk |
|
|
|
01:01:01.520 --> 01:01:03.120 |
|
about that more perhaps when we talk |
|
|
|
01:01:03.120 --> 01:01:04.070 |
|
about linear regression. |
|
|
|
01:01:05.230 --> 01:01:06.360 |
|
But just worth knowing. |
|
|
|
01:01:07.750 --> 01:01:08.780 |
|
At least at a high level. |
|
|
|
01:01:10.070 --> 01:01:12.100 |
|
All right, so then there's a question |
|
|
|
01:01:12.100 --> 01:01:15.620 |
|
of why if I fit a model as I can |
|
|
|
01:01:15.620 --> 01:01:18.120 |
|
possibly fit it, then why do I still |
|
|
|
01:01:18.120 --> 01:01:20.230 |
|
have error when I evaluate on my test |
|
|
|
01:01:20.230 --> 01:01:20.830 |
|
samples? |
|
|
|
01:01:20.830 --> 01:01:23.060 |
|
You'll see in your homework |
|
|
|
01:01:23.060 --> 01:01:24.670 |
|
problem, you're not going to have any |
|
|
|
01:01:24.670 --> 01:01:26.180 |
|
methods that achieve 0 error in |
|
|
|
01:01:26.180 --> 01:01:26.650 |
|
testing. |
|
|
|
01:01:29.320 --> 01:01:31.050 |
|
So there's several possible reasons. |
|
|
|
01:01:31.050 --> 01:01:33.280 |
|
So one is that there could be an error |
|
|
|
01:01:33.280 --> 01:01:34.770 |
|
that's intrinsic to the problem. |
|
|
|
01:01:34.770 --> 01:01:37.150 |
|
It's not possible to have 0 error. |
|
|
|
01:01:37.150 --> 01:01:39.020 |
|
So if you're trying to predict, for |
|
|
|
01:01:39.020 --> 01:01:41.660 |
|
example, what the weather is tomorrow, |
|
|
|
01:01:41.660 --> 01:01:42.989 |
|
then given your features, you're not |
|
|
|
01:01:42.990 --> 01:01:44.130 |
|
going to have a perfect prediction. |
|
|
|
01:01:44.130 --> 01:01:45.666 |
|
Nobody knows exactly what the weather |
|
|
|
01:01:45.666 --> 01:01:46.139 |
|
is tomorrow. |
|
|
|
01:01:47.350 --> 01:01:49.350 |
|
If you're trying to classify a |
|
|
|
01:01:49.350 --> 01:01:51.420 |
|
handwritten character again, it might. |
|
|
|
01:01:51.420 --> 01:01:53.520 |
|
You might not be able to get 0 error |
|
|
|
01:01:53.520 --> 01:01:55.630 |
|
because somebody might write an A |
|
|
|
01:01:55.630 --> 01:01:57.370 |
|
exactly the same way that somebody |
|
|
|
01:01:57.370 --> 01:02:00.260 |
|
wrote a no another time or whatever. |
|
|
|
01:02:00.260 --> 01:02:02.190 |
|
Sometimes it's just not possible to |
|
|
|
01:02:02.190 --> 01:02:04.630 |
|
know exact, to be completely confident |
|
|
|
01:02:04.630 --> 01:02:07.783 |
|
about what the true character of a |
|
|
|
01:02:07.783 --> 01:02:08.730 |
|
handwritten character is. |
|
|
|
01:02:10.160 --> 01:02:11.810 |
|
So there's a notion called the Bayes |
|
|
|
01:02:11.810 --> 01:02:14.410 |
|
optimal error, and that's the error if |
|
|
|
01:02:14.410 --> 01:02:16.945 |
|
the true function, the probability of |
|
|
|
01:02:16.945 --> 01:02:18.770 |
|
the label given the data is known. |
|
|
|
01:02:18.770 --> 01:02:20.320 |
|
So you can't do any better than that. |
|
|
|
01:02:23.510 --> 01:02:25.955 |
|
Another source of error is called |
|
|
|
01:02:25.955 --> 01:02:28.470 |
|
model bias, which means that the model |
|
|
|
01:02:28.470 --> 01:02:29.970 |
|
doesn't allow you to fit whatever you |
|
|
|
01:02:29.970 --> 01:02:30.200 |
|
want. |
|
|
|
01:02:30.850 --> 01:02:33.600 |
|
There's some things that some training |
|
|
|
01:02:33.600 --> 01:02:35.500 |
|
data can't be fit necessarily. |
|
|
|
01:02:36.330 --> 01:02:39.290 |
|
And so you can't achieve. |
|
|
|
01:02:39.290 --> 01:02:40.890 |
|
Even if you had an infinite training |
|
|
|
01:02:40.890 --> 01:02:42.530 |
|
set, you won't be able to achieve the |
|
|
|
01:02:42.530 --> 01:02:43.510 |
|
Bayes optimal error. |
|
|
|
01:02:44.320 --> 01:02:47.030 |
|
So one nearest neighbor, for example, |
|
|
|
01:02:47.030 --> 01:02:48.010 |
|
has no bias. |
|
|
|
01:02:48.010 --> 01:02:50.550 |
|
With one nearest neighbor you can fit |
|
|
|
01:02:50.550 --> 01:02:52.280 |
|
the training set perfectly and if your |
|
|
|
01:02:52.280 --> 01:02:53.420 |
|
test set comes from the same |
|
|
|
01:02:53.420 --> 01:02:54.160 |
|
distribution. |
|
|
|
01:02:54.780 --> 01:02:56.519 |
|
Then asymptotically you're going to |
|
|
|
01:02:56.520 --> 01:02:57.860 |
|
get at most twice the Bayes optimal error. |
|
|
|
01:02:59.130 --> 01:03:00.360 |
|
You'll get close. |
|
|
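NOTE
For reference, the asymptotic result alluded to here is the Cover and Hart bound for one nearest neighbor:

\[ \mathrm{Err}_{\mathrm{Bayes}} \;\le\; \lim_{n\to\infty} \mathrm{Err}_{\mathrm{1NN}} \;\le\; 2\,\mathrm{Err}_{\mathrm{Bayes}} \]

So with enough data, one nearest neighbor is at most a factor of two worse than the best possible classifier.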
|
01:03:01.040 --> 01:03:04.695 |
|
So one nearest neighbor has very minimal bias, |
|
|
|
01:03:04.695 --> 01:03:06.280 |
|
I guess I should say. |
|
|
|
01:03:06.280 --> 01:03:08.060 |
|
But if you're doing a linear fit, that |
|
|
|
01:03:08.060 --> 01:03:10.060 |
|
has really high bias, because all you |
|
|
|
01:03:10.060 --> 01:03:10.850 |
|
can do is fit a line. |
|
|
|
01:03:10.850 --> 01:03:12.147 |
|
If the data isn't on a line, you'll still |
|
|
|
01:03:12.147 --> 01:03:13.390 |
|
fit a line, but it won't be a very good |
|
|
|
01:03:13.390 --> 01:03:13.540 |
|
fit. |
|
|
|
01:03:15.390 --> 01:03:18.155 |
|
Model variance means that if you were |
|
|
|
01:03:18.155 --> 01:03:20.290 |
|
to sample different sets of data, |
|
|
|
01:03:20.290 --> 01:03:22.190 |
|
you're going to come up with different |
|
|
|
01:03:22.190 --> 01:03:24.480 |
|
predictions on your test data, or |
|
|
|
01:03:24.480 --> 01:03:26.870 |
|
different parameters for your model. |
|
|
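NOTE
A minimal sketch (not from the lecture or homework; it assumes only NumPy, with made-up data) of what model variance means: refit the same kind of model on many resampled training sets and watch how much its prediction at one test point moves around.

import numpy as np

rng = np.random.default_rng(0)

def sample_training_set(n=20):
    # Noisy samples from a fixed "true" function.
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)
    return x, y

x_test = 0.5
for degree in [1, 9]:  # simple model vs. flexible model
    preds = []
    for _ in range(200):  # many different training samples
        x, y = sample_training_set()
        coeffs = np.polyfit(x, y, degree)         # fit a polynomial of this degree
        preds.append(np.polyval(coeffs, x_test))  # predict at the same test point
    preds = np.array(preds)
    print(f"degree {degree}: mean prediction {preds.mean():.2f}, "
          f"variance across training sets {preds.var():.3f}")

The flexible degree-9 polynomial typically shows much larger variance across training sets than the simple line.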
|
01:03:27.490 --> 01:03:31.100 |
|
So. |
|
|
|
01:03:32.070 --> 01:03:34.070 |
|
Bias and variance both have to do with |
|
|
|
01:03:34.070 --> 01:03:35.810 |
|
the simplicity of your model. |
|
|
|
01:03:35.810 --> 01:03:37.780 |
|
If you have a really complex model that |
|
|
|
01:03:37.780 --> 01:03:39.340 |
|
can fit anything. |
|
|
|
01:03:39.980 --> 01:03:42.600 |
|
Then it's |
|
|
|
01:03:42.600 --> 01:03:44.892 |
|
going to have low bias but high |
|
|
|
01:03:44.892 --> 01:03:45.220 |
|
variance. |
|
|
|
01:03:45.220 --> 01:03:47.178 |
|
If you have a really simple model, it's |
|
|
|
01:03:47.178 --> 01:03:50.216 |
|
going to have high bias but low |
|
|
|
01:03:50.216 --> 01:03:50.650 |
|
variance. |
|
|
|
01:03:52.150 --> 01:03:53.400 |
|
The variance means that you have |
|
|
|
01:03:53.400 --> 01:03:55.200 |
|
trouble fitting your model given a |
|
|
|
01:03:55.200 --> 01:03:56.510 |
|
limited amount of training data. |
|
|
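NOTE
For squared-error regression, this trade-off has a standard decomposition, given here only as a reference formula:

\[ \mathbb{E}\big[(y - \hat{f}(x))^2\big] \;=\; \underbrace{\big(\mathbb{E}[\hat{f}(x)] - f(x)\big)^2}_{\text{bias}^2} \;+\; \underbrace{\mathrm{Var}\big[\hat{f}(x)\big]}_{\text{variance}} \;+\; \underbrace{\sigma^2}_{\text{irreducible error}} \]

Complex models shrink the bias term but grow the variance term; simple models do the opposite.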
|
01:03:57.880 --> 01:03:59.030 |
|
You can also have things like |
|
|
|
01:03:59.030 --> 01:04:00.850 |
|
distribution shift, where some samples are |
|
|
|
01:04:00.850 --> 01:04:03.150 |
|
more common in the test set and some are |
|
|
|
01:04:03.150 --> 01:04:04.220 |
|
more common in the training set, so |
|
|
|
01:04:04.220 --> 01:04:06.450 |
|
they're not IID, which |
|
|
|
01:04:06.450 --> 01:04:07.610 |
|
I discussed before. |
|
|
|
01:04:08.710 --> 01:04:10.350 |
|
Or, in the worst case, you could have |
|
|
|
01:04:10.350 --> 01:04:12.360 |
|
function shift, which means that |
|
|
|
01:04:13.490 --> 01:04:16.375 |
|
the answer in the test data, the |
|
|
|
01:04:16.375 --> 01:04:17.691 |
|
probability of a particular answer |
|
|
|
01:04:17.691 --> 01:04:20.023 |
|
given the data, given the features, is |
|
|
|
01:04:20.023 --> 01:04:21.560 |
|
different in testing than training. |
|
|
|
01:04:21.560 --> 01:04:24.305 |
|
So one example is if |
|
|
|
01:04:24.305 --> 01:04:26.065 |
|
you're doing like language prediction |
|
|
|
01:04:26.065 --> 01:04:28.070 |
|
and somebody says what is your favorite |
|
|
|
01:04:28.070 --> 01:04:31.250 |
|
TV show and you trained based on data |
|
|
|
01:04:31.250 --> 01:04:36.197 |
|
from 2010 to 2020, then probably the |
|
|
|
01:04:36.197 --> 01:04:38.192 |
|
answer in that time range, the |
|
|
|
01:04:38.192 --> 01:04:40.047 |
|
probability of different answers then |
|
|
|
01:04:40.047 --> 01:04:41.560 |
|
is different than it is today. |
|
|
|
01:04:41.560 --> 01:04:42.980 |
|
So the answer has actually |
|
|
|
01:04:43.030 --> 01:04:44.470 |
|
changed. |
|
|
|
01:04:44.470 --> 01:04:48.510 |
|
If your test set is from 2022, then |
|
|
|
01:04:48.510 --> 01:04:50.980 |
|
the probability of Y, the answer to that |
|
|
|
01:04:50.980 --> 01:04:53.910 |
|
question is different in the test set |
|
|
|
01:04:53.910 --> 01:04:55.550 |
|
than it is in a training set that came |
|
|
|
01:04:55.550 --> 01:04:57.130 |
|
from 2000 to 2020. |
|
|
|
01:05:00.450 --> 01:05:03.714 |
|
Then there are other things that |
|
|
|
01:05:03.714 --> 01:05:06.760 |
|
can also be issues if |
|
|
|
01:05:06.760 --> 01:05:08.210 |
|
you're imperfectly optimized on the |
|
|
|
01:05:08.210 --> 01:05:08.880 |
|
training set. |
|
|
|
01:05:09.660 --> 01:05:12.550 |
|
Or if you are not able to optimize |
|
|
|
01:05:13.420 --> 01:05:16.050 |
|
for the same thing, if your training loss |
|
|
|
01:05:16.050 --> 01:05:17.480 |
|
is different than your final |
|
|
|
01:05:17.480 --> 01:05:18.190 |
|
evaluation. |
|
|
|
01:05:18.980 --> 01:05:20.450 |
|
That actually happens all the time |
|
|
|
01:05:20.450 --> 01:05:22.310 |
|
because it's really hard to optimize |
|
|
|
01:05:22.310 --> 01:05:23.040 |
|
directly for the training error. |
|
|
|
01:05:26.620 --> 01:05:27.040 |
|
So. |
|
|
|
01:05:28.040 --> 01:05:28.830 |
|
Here's a question. |
|
|
|
01:05:28.830 --> 01:05:31.540 |
|
So what happens in this case: |
|
|
|
01:05:31.540 --> 01:05:34.222 |
|
Suppose that you train a model and then |
|
|
|
01:05:34.222 --> 01:05:35.879 |
|
you increase the number of training |
|
|
|
01:05:35.879 --> 01:05:38.200 |
|
samples, and then you train it again. |
|
|
|
01:05:38.200 --> 01:05:40.170 |
|
As you increase the number of training |
|
|
|
01:05:40.170 --> 01:05:41.880 |
|
samples, do you expect the test error |
|
|
|
01:05:41.880 --> 01:05:43.850 |
|
to go up or down or stay the same? |
|
|
|
01:05:48.170 --> 01:05:49.710 |
|
So you'd expect it. |
|
|
|
01:05:49.710 --> 01:05:51.260 |
|
Some people are saying down as you get |
|
|
|
01:05:51.260 --> 01:05:54.052 |
|
more training data you should fit a |
|
|
|
01:05:54.052 --> 01:05:54.305 |
|
better model. |
|
|
|
01:05:54.305 --> 01:05:55.710 |
|
You should have like a better |
|
|
|
01:05:55.710 --> 01:05:57.540 |
|
understanding of your true parameters. |
|
|
|
01:05:57.540 --> 01:05:59.110 |
|
So the test error should go down. |
|
|
|
01:05:59.870 --> 01:06:01.510 |
|
So it might look something like this. |
|
|
|
01:06:03.130 --> 01:06:07.910 |
|
If I get more training data and then I |
|
|
|
01:06:07.910 --> 01:06:09.170 |
|
measure the training error. |
|
|
|
01:06:10.070 --> 01:06:12.510 |
|
Do you expect the training error to go |
|
|
|
01:06:12.510 --> 01:06:14.080 |
|
up or down or stay the same? |
|
|
|
01:06:16.740 --> 01:06:17.890 |
|
How many people think it |
|
|
|
01:06:17.890 --> 01:06:18.560 |
|
would go up? |
|
|
|
01:06:21.510 --> 01:06:23.280 |
|
How many people think the training error |
|
|
|
01:06:23.280 --> 01:06:25.080 |
|
would go down as they get more training |
|
|
|
01:06:25.080 --> 01:06:25.400 |
|
data? |
|
|
|
01:06:27.750 --> 01:06:29.760 |
|
OK, so there's a lot of uncertainty. |
|
|
|
01:06:29.760 --> 01:06:32.593 |
|
So what I would expect is that the |
|
|
|
01:06:32.593 --> 01:06:35.410 |
|
training error will go up because as |
|
|
|
01:06:35.410 --> 01:06:37.170 |
|
you get more training data, it becomes |
|
|
|
01:06:37.170 --> 01:06:38.670 |
|
harder to fit that data. |
|
|
|
01:06:38.670 --> 01:06:40.720 |
|
Given the same model, it becomes harder |
|
|
|
01:06:40.720 --> 01:06:42.660 |
|
and harder to fit an increasing size |
|
|
|
01:06:42.660 --> 01:06:43.250 |
|
training set. |
|
|
|
01:06:44.120 --> 01:06:46.920 |
|
And if you get infinite examples and |
|
|
|
01:06:46.920 --> 01:06:49.230 |
|
you don't have any things like a |
|
|
|
01:06:49.230 --> 01:06:51.000 |
|
function shift, then these two will |
|
|
|
01:06:51.000 --> 01:06:51.340 |
|
meet. |
|
|
|
01:06:51.340 --> 01:06:54.122 |
|
If you get infinite examples, then you |
|
|
|
01:06:54.122 --> 01:06:54.520 |
|
will find that |
|
|
|
01:06:54.520 --> 01:06:56.030 |
|
your training and test sets are basically |
|
|
|
01:06:56.030 --> 01:06:56.520 |
|
the same. |
|
|
|
01:06:57.140 --> 01:06:58.690 |
|
And then you will have the same error, |
|
|
|
01:06:58.690 --> 01:07:00.030 |
|
so they start to converge. |
|
|
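NOTE
A small illustration of these learning curves (a toy example that assumes scikit-learn is installed; it is not the homework solution): as the training set grows, training error tends to rise, test error tends to fall, and the two converge.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=20000, n_features=20, random_state=0)
X_test, y_test = X[10000:], y[10000:]  # hold out half as a fixed test set

for n_train in [50, 200, 1000, 5000, 10000]:
    X_tr, y_tr = X[:n_train], y[:n_train]
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    train_err = 1 - model.score(X_tr, y_tr)
    test_err = 1 - model.score(X_test, y_test)
    print(f"n_train={n_train:6d}  train error={train_err:.3f}  test error={test_err:.3f}")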
|
01:07:02.070 --> 01:07:03.490 |
|
And this is an important concept: |
|
|
|
01:07:03.490 --> 01:07:04.350 |
|
generalization error. |
|
|
|
01:07:04.350 --> 01:07:06.530 |
|
Generalization error is the difference |
|
|
|
01:07:06.530 --> 01:07:08.240 |
|
between your test error and your |
|
|
|
01:07:08.240 --> 01:07:08.810 |
|
training error. |
|
|
|
01:07:08.810 --> 01:07:10.805 |
|
So your test error is your training |
|
|
|
01:07:10.805 --> 01:07:12.479 |
|
error plus your generalization error. |
|
|
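NOTE
Restating that definition as a formula:

\[ \text{generalization error} \;=\; \mathrm{Err}_{\mathrm{test}} - \mathrm{Err}_{\mathrm{train}}, \qquad \mathrm{Err}_{\mathrm{test}} \;=\; \mathrm{Err}_{\mathrm{train}} + \text{generalization error} \]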
|
01:07:12.479 --> 01:07:15.250 |
|
Generalization error is due to the |
|
|
|
01:07:15.250 --> 01:07:19.370 |
|
failure of your |
|
|
|
01:07:19.370 --> 01:07:21.520 |
|
model to make predictions on data it |
|
|
|
01:07:21.520 --> 01:07:22.500 |
|
hasn't seen yet. |
|
|
|
01:07:22.500 --> 01:07:24.670 |
|
So you could have something that has |
|
|
|
01:07:24.670 --> 01:07:26.480 |
|
absolutely perfect training error but |
|
|
|
01:07:26.480 --> 01:07:28.370 |
|
has enormous generalization error and |
|
|
|
01:07:28.370 --> 01:07:29.140 |
|
that's no good. |
|
|
|
01:07:29.140 --> 01:07:30.780 |
|
Or you could have something that has a |
|
|
|
01:07:30.780 --> 01:07:32.170 |
|
lot of trouble fitting the training |
|
|
|
01:07:32.230 --> 01:07:33.750 |
|
data, but its generalization error is |
|
|
|
01:07:33.750 --> 01:07:34.320 |
|
very small. |
|
|
|
01:07:39.000 --> 01:07:39.460 |
|
So. |
|
|
|
01:07:41.680 --> 01:07:43.470 |
|
So suppose you have |
|
|
|
01:07:43.470 --> 01:07:45.820 |
|
infinite training examples, then |
|
|
|
01:07:45.820 --> 01:07:48.508 |
|
eventually your training error will |
|
|
|
01:07:48.508 --> 01:07:51.175 |
|
reach some plateau, and your test error |
|
|
|
01:07:51.175 --> 01:07:53.239 |
|
will also reach some plateau. |
|
|
|
01:07:54.150 --> 01:07:56.773 |
|
These will reach the same point if |
|
|
|
01:07:56.773 --> 01:07:58.640 |
|
you don't have any function shift. |
|
|
|
01:07:58.640 --> 01:08:01.795 |
|
So if you have some difference, if you |
|
|
|
01:08:01.795 --> 01:08:03.820 |
|
have some gap to where they're |
|
|
|
01:08:03.820 --> 01:08:06.130 |
|
converging, it either means that you |
|
|
|
01:08:06.130 --> 01:08:07.640 |
|
are not able to fully |
|
|
|
01:08:07.640 --> 01:08:10.056 |
|
optimize your function, or |
|
|
|
01:08:10.056 --> 01:08:11.890 |
|
that you have a function shift: the |
|
|
|
01:08:11.890 --> 01:08:13.160 |
|
probability of the true label is |
|
|
|
01:08:13.160 --> 01:08:14.760 |
|
changing between training and test. |
|
|
|
01:08:16.520 --> 01:08:19.117 |
|
Now, this gap between the test error |
|
|
|
01:08:19.117 --> 01:08:20.750 |
|
that you would get from infinite |
|
|
|
01:08:20.750 --> 01:08:22.733 |
|
training examples and the actual test |
|
|
|
01:08:22.733 --> 01:08:24.310 |
|
error that you're getting given finite |
|
|
|
01:08:24.310 --> 01:08:28.020 |
|
training examples is due to the model |
|
|
|
01:08:28.020 --> 01:08:28.670 |
|
variance. |
|
|
|
01:08:28.670 --> 01:08:30.615 |
|
It's due to the model complexity and |
|
|
|
01:08:30.615 --> 01:08:32.500 |
|
the inability to perfectly solve for |
|
|
|
01:08:32.500 --> 01:08:34.200 |
|
the best parameters given your limited |
|
|
|
01:08:34.200 --> 01:08:34.770 |
|
training data. |
|
|
|
01:08:35.900 --> 01:08:38.800 |
|
And it can also be exacerbated by |
|
|
|
01:08:38.800 --> 01:08:40.840 |
|
distribution shift, if your |
|
|
|
01:08:40.840 --> 01:08:42.710 |
|
training data is more likely to sample |
|
|
|
01:08:42.710 --> 01:08:44.410 |
|
some areas of the feature space than |
|
|
|
01:08:44.410 --> 01:08:45.110 |
|
your test data. |
|
|
|
01:08:46.970 --> 01:08:49.990 |
|
And this gap, the training error, |
|
|
|
01:08:50.830 --> 01:08:52.876 |
|
is due to the limited power of your |
|
|
|
01:08:52.876 --> 01:08:55.360 |
|
model to fit whatever you give |
|
|
|
01:08:55.360 --> 01:08:55.590 |
|
it. |
|
|
|
01:08:55.590 --> 01:08:58.200 |
|
So it's due to the model bias, and it's |
|
|
|
01:08:58.200 --> 01:09:00.120 |
|
also due to the unavoidable intrinsic |
|
|
|
01:09:00.120 --> 01:09:02.580 |
|
error that even if you have infinite |
|
|
|
01:09:02.580 --> 01:09:04.180 |
|
examples, there's some error that's |
|
|
|
01:09:04.180 --> 01:09:04.950 |
|
unavoidable. |
|
|
|
01:09:05.780 --> 01:09:07.420 |
|
Either because it's intrinsic to the |
|
|
|
01:09:07.420 --> 01:09:09.320 |
|
problem or because your model has |
|
|
|
01:09:09.320 --> 01:09:10.250 |
|
limited capacity. |
|
|
|
01:09:16.100 --> 01:09:16.590 |
|
All right. |
|
|
|
01:09:16.590 --> 01:09:18.230 |
|
So I'm bringing up a point that I |
|
|
|
01:09:18.230 --> 01:09:19.590 |
|
raised earlier. |
|
|
|
01:09:20.930 --> 01:09:24.070 |
|
And I want to see if you can still |
|
|
|
01:09:24.070 --> 01:09:25.350 |
|
explain the answer. |
|
|
|
01:09:25.350 --> 01:09:27.510 |
|
So why is it important to have a |
|
|
|
01:09:27.510 --> 01:09:28.570 |
|
validation set? |
|
|
|
01:09:30.680 --> 01:09:32.180 |
|
If I've got a bunch of models that I |
|
|
|
01:09:32.180 --> 01:09:35.400 |
|
want to evaluate, why don't I just |
|
|
|
01:09:35.400 --> 01:09:37.060 |
|
do a train set and test set? |
|
|
|
01:09:37.710 --> 01:09:39.110 |
|
Train them all on the training set, |
|
|
|
01:09:39.110 --> 01:09:40.760 |
|
evaluate them all on the test set and |
|
|
|
01:09:40.760 --> 01:09:42.650 |
|
then report the best performance. |
|
|
|
01:09:42.650 --> 01:09:43.970 |
|
What's the issue with that? |
|
|
|
01:09:43.970 --> 01:09:46.120 |
|
Why is that not a good procedure? |
|
|
|
01:09:47.970 --> 01:09:49.590 |
|
I guess, in the back with the orange shirt, |
|
|
|
01:09:49.590 --> 01:09:50.370 |
|
you were first. |
|
|
|
01:09:52.350 --> 01:09:54.756 |
|
So you risk overfitting the model. |
|
|
|
01:09:54.756 --> 01:09:56.190 |
|
So the problem is that |
|
|
|
01:09:56.980 --> 01:09:59.915 |
|
your test |
|
|
|
01:09:59.915 --> 01:10:02.840 |
|
error measure will be biased, which |
|
|
|
01:10:02.840 --> 01:10:05.170 |
|
means that its expected |
|
|
|
01:10:05.170 --> 01:10:07.620 |
|
value is not the true value. |
|
|
|
01:10:07.620 --> 01:10:08.980 |
|
In other words, you're going to tend to |
|
|
|
01:10:08.980 --> 01:10:11.400 |
|
underestimate the error if you do this |
|
|
|
01:10:11.400 --> 01:10:13.800 |
|
procedure because you're choosing the |
|
|
|
01:10:13.800 --> 01:10:15.529 |
|
best model based on the test |
|
|
|
01:10:15.530 --> 01:10:16.430 |
|
performance. |
|
|
|
01:10:16.430 --> 01:10:18.370 |
|
But this test sample is just one random |
|
|
|
01:10:18.370 --> 01:10:19.880 |
|
sample from the general test |
|
|
|
01:10:19.880 --> 01:10:21.250 |
|
distribution, so if you were to take |
|
|
|
01:10:21.250 --> 01:10:22.530 |
|
another sample, it might have a |
|
|
|
01:10:22.530 --> 01:10:23.200 |
|
different answer. |
|
|
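NOTE
A minimal sketch of the recommended procedure (toy data, assumes scikit-learn; not the homework solution): choose the hyperparameter on a validation set, then report error once on the held-out test set.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
# 60% train, 20% validation, 20% test
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

best_k, best_val_err = None, 1.0
for k in [1, 3, 5, 9, 15]:
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    val_err = 1 - knn.score(X_val, y_val)  # model selection uses validation only
    if val_err < best_val_err:
        best_k, best_val_err = k, val_err

final = KNeighborsClassifier(n_neighbors=best_k).fit(X_train, y_train)
print("chosen k:", best_k, " test error:", round(1 - final.score(X_test, y_test), 3))

The test set is touched exactly once, so its error estimate stays unbiased.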
|
01:10:24.770 --> 01:10:28.290 |
|
And there have been cases where one time |
|
|
|
01:10:28.290 --> 01:10:30.840 |
|
some agency had some big |
|
|
|
01:10:30.840 --> 01:10:34.819 |
|
challenge, and they |
|
|
|
01:10:34.820 --> 01:10:35.840 |
|
thought they were doing the right |
|
|
|
01:10:35.840 --> 01:10:36.045 |
|
thing. |
|
|
|
01:10:36.045 --> 01:10:37.898 |
|
They had a test set, they had a train |
|
|
|
01:10:37.898 --> 01:10:38.104 |
|
set. |
|
|
|
01:10:38.104 --> 01:10:40.827 |
|
They said you can only train on the |
|
|
|
01:10:40.827 --> 01:10:43.176 |
|
train set and only test on the test |
|
|
|
01:10:43.176 --> 01:10:43.469 |
|
set. |
|
|
|
01:10:43.470 --> 01:10:45.135 |
|
But they provided both the train set |
|
|
|
01:10:45.135 --> 01:10:46.960 |
|
and the test set to the researchers. |
|
|
|
01:10:47.600 --> 01:10:50.780 |
|
And one group like iterated through a |
|
|
|
01:10:50.780 --> 01:10:53.400 |
|
million different models and found a |
|
|
|
01:10:53.400 --> 01:10:55.451 |
|
model that you could train on |
|
|
|
01:10:55.451 --> 01:10:57.080 |
|
the train set and that achieved zero |
|
|
|
01:10:57.080 --> 01:10:58.400 |
|
error on the test set. |
|
|
|
01:10:58.400 --> 01:11:00.182 |
|
But then when they applied it to a held-out |
|
|
|
01:11:00.182 --> 01:11:02.459 |
|
test set, it did like really really |
|
|
|
01:11:02.460 --> 01:11:04.180 |
|
badly, like almost chance performance. |
|
|
|
01:11:05.170 --> 01:11:08.930 |
|
So training on, or even doing |
|
|
|
01:11:08.930 --> 01:11:10.319 |
|
model selection on, |
|
|
|
01:11:11.920 --> 01:11:13.850 |
|
your test set, it's called like meta |
|
|
|
01:11:13.850 --> 01:11:16.405 |
|
overfitting, meaning that you're still |
|
|
|
01:11:16.405 --> 01:11:17.920 |
|
overfit to that test set. |
|
|
|
01:11:21.020 --> 01:11:21.330 |
|
Right. |
|
|
|
01:11:21.330 --> 01:11:24.730 |
|
So I have just a little more time. |
|
|
|
01:11:26.140 --> 01:11:28.790 |
|
And I'm going to show you two things. |
|
|
|
01:11:28.790 --> 01:11:30.660 |
|
So one is homework #1. |
|
|
|
01:11:31.810 --> 01:11:33.840 |
|
So, in homework one you have |
|
|
|
01:11:35.670 --> 01:11:37.000 |
|
2 problems. |
|
|
|
01:11:37.000 --> 01:11:38.580 |
|
One is digit classification. |
|
|
|
01:11:38.580 --> 01:11:40.140 |
|
You have to try to assign each of these |
|
|
|
01:11:40.140 --> 01:11:42.960 |
|
digits into a particular category. |
|
|
|
01:11:43.900 --> 01:11:47.060 |
|
And so the digit numbers are zero to |
|
|
|
01:11:47.060 --> 01:11:47.440 |
|
9. |
|
|
|
01:11:48.430 --> 01:11:52.110 |
|
And these are small images 28 by 28. |
|
|
|
01:11:52.110 --> 01:11:53.910 |
|
The code is there to just reshape it |
|
|
|
01:11:53.910 --> 01:11:56.150 |
|
into a 784 dimensional vector. |
|
|
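NOTE
A tiny illustration of that reshape (not the actual starter code, which you should use instead), flattening 28x28 images into 784-dimensional vectors with NumPy:

import numpy as np

images = np.zeros((100, 28, 28))            # stand-in for 100 digit images
features = images.reshape(len(images), -1)  # shape becomes (100, 784)
print(features.shape)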
|
01:11:57.270 --> 01:11:59.500 |
|
And I've split it into multiple |
|
|
|
01:11:59.500 --> 01:12:02.650 |
|
different training and test sets, so I |
|
|
|
01:12:02.650 --> 01:12:03.940 |
|
provide starter code. |
|
|
|
01:12:05.220 --> 01:12:07.720 |
|
But the starter code is really just to |
|
|
|
01:12:07.720 --> 01:12:09.025 |
|
get the data there for you. |
|
|
|
01:12:09.025 --> 01:12:11.550 |
|
It doesn't do the actual KNN or |
|
|
|
01:12:11.550 --> 01:12:13.100 |
|
anything like that; you do that yourself. |
|
|
|
01:12:13.100 --> 01:12:14.422 |
|
So this is starter code. |
|
|
|
01:12:14.422 --> 01:12:15.660 |
|
You can look at it to get an |
|
|
|
01:12:15.660 --> 01:12:17.120 |
|
understanding of the syntax if you're |
|
|
|
01:12:17.120 --> 01:12:19.140 |
|
not too familiar with Python, but it's |
|
|
|
01:12:19.140 --> 01:12:20.735 |
|
just creating train, Val, test splits |
|
|
|
01:12:20.735 --> 01:12:22.460 |
|
and I also create train splits at |
|
|
|
01:12:22.460 --> 01:12:23.310 |
|
different sizes. |
|
|
|
01:12:24.090 --> 01:12:25.210 |
|
So you can see that here. |
|
|
|
01:12:26.210 --> 01:12:27.980 |
|
And darn it. |
|
|
|
01:12:29.460 --> 01:12:30.040 |
|
OK, good. |
|
|
|
01:12:33.290 --> 01:12:34.290 |
|
Sorry about that. |
|
|
|
01:12:36.090 --> 01:12:38.060 |
|
Alright, so here's the starter code. |
|
|
|
01:12:39.120 --> 01:12:42.110 |
|
So you fill in like the KNN function, |
|
|
|
01:12:42.110 --> 01:12:43.740 |
|
you can change the function definition |
|
|
|
01:12:43.740 --> 01:12:45.540 |
|
if you want, and then you'll also do |
|
|
|
01:12:45.540 --> 01:12:47.232 |
|
Naive Bayes and logistic regression, |
|
|
|
01:12:47.232 --> 01:12:49.000 |
|
and then you can have some code for |
|
|
|
01:12:49.000 --> 01:12:51.550 |
|
experiments, and then there's a |
|
|
|
01:12:51.550 --> 01:12:52.850 |
|
temperature regression problem. |
|
|
|
01:12:54.950 --> 01:12:57.770 |
|
So there's a couple things that I want |
|
|
|
01:12:57.770 --> 01:12:59.640 |
|
to say about all this. |
|
|
|
01:12:59.640 --> 01:13:02.930 |
|
So one is that there's two challenges. |
|
|
|
01:13:02.930 --> 01:13:05.830 |
|
One is digit classification. |
|
|
|
01:13:06.810 --> 01:13:08.400 |
|
And one is temperature regression. |
|
|
|
01:13:08.400 --> 01:13:10.210 |
|
For temperature regression, you get the |
|
|
|
01:13:10.210 --> 01:13:11.750 |
|
previous temperatures of a bunch of |
|
|
|
01:13:11.750 --> 01:13:11.960 |
|
U.S. |
|
|
|
01:13:11.960 --> 01:13:13.397 |
|
cities, and you have to predict the |
|
|
|
01:13:13.397 --> 01:13:14.400 |
|
temperature for the next day in |
|
|
|
01:13:14.400 --> 01:13:14.930 |
|
Cleveland. |
|
|
|
01:13:16.170 --> 01:13:17.881 |
|
And you're going to use. |
|
|
|
01:13:17.881 --> 01:13:18.907 |
|
You're going to. |
|
|
|
01:13:18.907 --> 01:13:20.960 |
|
For both of these you'll use KNN and |
|
|
|
01:13:20.960 --> 01:13:22.720 |
|
Naive Bayes, and for one you'll use |
|
|
|
01:13:22.720 --> 01:13:24.190 |
|
logistic regression, the other linear |
|
|
|
01:13:24.190 --> 01:13:24.690 |
|
regression. |
|
|
|
01:13:25.510 --> 01:13:26.900 |
|
At the end of today you should be able |
|
|
|
01:13:26.900 --> 01:13:28.440 |
|
to do the KNN part of these. |
|
|
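NOTE
A minimal KNN sketch (not the official solution, and not necessarily matching the starter code's function signature): predict by majority vote of the k closest training points under Euclidean distance; for the temperature regression you would average the neighbors' values instead of voting.

import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)  # distance to every training point
        nearest = np.argsort(dists)[:k]              # indices of the k nearest neighbors
        labels = y_train[nearest]
        values, counts = np.unique(labels, return_counts=True)
        preds.append(values[np.argmax(counts)])      # majority vote among neighbors
    return np.array(preds)

# Usage on toy 2D data:
X_tr = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
y_tr = np.array([0, 0, 1, 1])
print(knn_predict(X_tr, y_tr, np.array([[0.05, 0.05], [1.0, 0.9]]), k=3))  # -> [0 1]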
|
01:13:29.520 --> 01:13:30.940 |
|
And then for. |
|
|
|
01:13:32.620 --> 01:13:34.880 |
|
For the digits, you'll look at the |
|
|
|
01:13:34.880 --> 01:13:37.830 |
|
error versus training size and also do |
|
|
|
01:13:37.830 --> 01:13:39.300 |
|
some parameter selection. |
|
|
|
01:13:40.350 --> 01:13:43.790 |
|
Using a validation set and then for |
|
|
|
01:13:43.790 --> 01:13:46.280 |
|
temperature, you'll identify the most |
|
|
|
01:13:46.280 --> 01:13:47.270 |
|
important features. |
|
|
|
01:13:47.270 --> 01:13:49.450 |
|
I'll explain how you do that next |
|
|
|
01:13:49.450 --> 01:13:51.070 |
|
Thursday, so that's not something you |
|
|
|
01:13:51.070 --> 01:13:52.270 |
|
can implement based on the lecture |
|
|
|
01:13:52.270 --> 01:13:52.670 |
|
today yet. |
|
|
|
01:13:53.370 --> 01:13:55.070 |
|
And then there are also stretch goals |
|
|
|
01:13:55.070 --> 01:13:56.890 |
|
if you want to earn additional points. |
|
|
|
01:13:57.490 --> 01:13:59.230 |
|
So these are just trying to improve the |
|
|
|
01:13:59.230 --> 01:14:00.540 |
|
classification or regression |
|
|
|
01:14:00.540 --> 01:14:03.430 |
|
performance, or to design a data set. |
|
|
|
01:14:03.430 --> 01:14:05.160 |
|
where Naive Bayes outperforms the other |
|
|
|
01:14:05.160 --> 01:14:05.390 |
|
two. |
|
|
|
01:14:07.080 --> 01:14:09.400 |
|
When you do these homeworks, you have this. |
|
|
|
01:14:09.400 --> 01:14:11.145 |
|
It's linked from the website, so |
|
|
|
01:14:11.145 --> 01:14:12.280 |
|
this gives you like the main |
|
|
|
01:14:12.280 --> 01:14:12.840 |
|
assignment. |
|
|
|
01:14:14.200 --> 01:14:16.920 |
|
There's the starter code, the data. |
|
|
|
01:14:17.620 --> 01:14:19.290 |
|
You can look at the tips and tricks. |
|
|
|
01:14:19.290 --> 01:14:25.780 |
|
So this has different examples of |
|
|
|
01:14:25.780 --> 01:14:28.510 |
|
Python usage that might be |
|
|
|
01:14:28.510 --> 01:14:30.740 |
|
handy, and also talks about Google |
|
|
|
01:14:30.740 --> 01:14:32.820 |
|
Colab which you can use to do the |
|
|
|
01:14:32.820 --> 01:14:33.230 |
|
assignment. |
|
|
|
01:14:33.230 --> 01:14:34.900 |
|
And then there's some more general tips |
|
|
|
01:14:34.900 --> 01:14:35.710 |
|
on the assignment. |
|
|
|
01:14:38.340 --> 01:14:42.380 |
|
And then when you report things, |
|
|
|
01:14:42.380 --> 01:14:44.990 |
|
you'll do like a PDF or |
|
|
|
01:14:44.990 --> 01:14:46.810 |
|
HTML of your Jupyter notebook. |
|
|
|
01:14:47.470 --> 01:14:50.540 |
|
But you will also mainly just fill out |
|
|
|
01:14:50.540 --> 01:14:53.700 |
|
these numbers, which are kind |
|
|
|
01:14:53.700 --> 01:14:56.120 |
|
of the answers to the experiments, and |
|
|
|
01:14:56.120 --> 01:14:57.655 |
|
this is the main thing that we'll look |
|
|
|
01:14:57.655 --> 01:14:58.660 |
|
at to grade. |
|
|
|
01:14:58.660 --> 01:15:00.340 |
|
And then they may only |
|
|
|
01:15:00.340 --> 01:15:01.955 |
|
look at the code if they're not sure if |
|
|
|
01:15:01.955 --> 01:15:03.490 |
|
you did it right given your answers |
|
|
|
01:15:03.490 --> 01:15:03.710 |
|
here. |
|
|
|
01:15:04.620 --> 01:15:05.970 |
|
So you need to fill this out. |
|
|
|
01:15:07.150 --> 01:15:09.060 |
|
And you say, how many points do you |
|
|
|
01:15:09.060 --> 01:15:10.115 |
|
think you should get for that? |
|
|
|
01:15:10.115 --> 01:15:12.190 |
|
And so then the TAs, the |
|
|
|
01:15:12.190 --> 01:15:14.148 |
|
graders, will say the difference between |
|
|
|
01:15:14.148 --> 01:15:15.790 |
|
the points that you get and what you |
|
|
|
01:15:15.790 --> 01:15:16.460 |
|
thought you should get. |
|
|
|
01:15:20.560 --> 01:15:22.590 |
|
So I think that's all I want to say |
|
|
|
01:15:22.590 --> 01:15:23.740 |
|
about homework one. |
|
|
|
01:15:26.900 --> 01:15:27.590 |
|
Let me see. |
|
|
|
01:15:27.590 --> 01:15:28.155 |
|
All right. |
|
|
|
01:15:28.155 --> 01:15:29.480 |
|
So we're out of time. |
|
|
|
01:15:29.480 --> 01:15:31.130 |
|
So I'm going to talk about this at the |
|
|
|
01:15:31.130 --> 01:15:33.470 |
|
start of the next class and I'll do a |
|
|
|
01:15:33.470 --> 01:15:35.390 |
|
recap of KNN. |
|
|
|
01:15:37.160 --> 01:15:40.330 |
|
And so next week I'll talk about Naive |
|
|
|
01:15:40.330 --> 01:15:43.010 |
|
Bayes and linear and logistic regression. |
|
|
|
01:15:44.260 --> 01:15:44.810 |
|
Thanks. |
|
|
|
|