|
WEBVTT Kind: captions; Language: en-US |
|
|
|
NOTE |
|
Created on 2024-02-07T20:53:13.8059397Z by ClassTranscribe |
|
|
|
00:01:57.930 --> 00:01:59.190 |
|
It seems like there's like. |
|
|
|
00:02:01.950 --> 00:02:02.550 |
|
Yes, it's OK. |
|
|
|
00:02:03.590 --> 00:02:04.800 |
|
Alright, good morning everybody. |
|
|
|
00:02:08.160 --> 00:02:10.626 |
|
So I thought — I was trying to figure
|
|
|
00:02:10.626 --> 00:02:12.030 |
|
out why this seems like there's a lot |
|
|
|
00:02:12.030 --> 00:02:13.520 |
|
of light on the screen, but I can't |
|
|
|
00:02:13.520 --> 00:02:14.192 |
|
figure it out. |
|
|
|
00:02:14.192 --> 00:02:16.300 |
|
I thought it was interesting that this |
|
|
|
00:02:16.300 --> 00:02:17.490 |
|
for this picture. |
|
|
|
00:02:17.580 --> 00:02:18.110 |
|
And. |
|
|
|
00:02:19.140 --> 00:02:20.760 |
|
So I'm generating all of these with |
|
|
|
00:02:20.760 --> 00:02:21.510 |
|
DALL-E.
|
|
|
00:02:21.510 --> 00:02:22.880 |
|
This one was "a dirt road
|
|
|
00:02:22.880 --> 00:02:24.500 |
|
splits around a large gnarly tree |
|
|
|
00:02:24.500 --> 00:02:25.640 |
|
fractal art".
|
|
|
00:02:25.640 --> 00:02:27.830 |
|
But I thought it was really funny how |
|
|
|
00:02:27.830 --> 00:02:30.780 |
|
it without my bidding it put like some |
|
|
|
00:02:30.780 --> 00:02:32.530 |
|
kind of superhero or something behind |
|
|
|
00:02:32.530 --> 00:02:34.756 |
|
the tree there's like some looks like |
|
|
|
00:02:34.756 --> 00:02:36.360 |
|
there's like some superhero that's like |
|
|
|
00:02:36.360 --> 00:02:37.230 |
|
flying in and. |
|
|
|
00:02:38.130 --> 00:02:39.420 |
|
I don't know where that came from. |
|
|
|
00:02:41.170 --> 00:02:42.430 |
|
Can you guys see the screen OK? |
|
|
|
00:02:43.750 --> 00:02:44.830 |
|
Seems a little faded. |
|
|
|
00:03:05.390 --> 00:03:05.650 |
|
But. |
|
|
|
00:03:12.610 --> 00:03:13.580 |
|
OK I. |
|
|
|
00:03:16.440 --> 00:03:18.730 |
|
Yeah, I put the lights all off.
|
|
|
00:03:21.400 --> 00:03:22.580 |
|
But those are still on. |
|
|
|
00:03:25.090 --> 00:03:27.020 |
|
Alright, let me just take one second. |
|
|
|
00:03:48.110 --> 00:03:48.600 |
|
All right. |
|
|
|
00:03:48.600 --> 00:03:50.860 |
|
Anyway, I'll move on with it.
|
|
|
00:03:51.500 --> 00:03:53.410 |
|
Alright, so. |
|
|
|
00:03:53.540 --> 00:03:55.570 |
|
And so for some Logistics, I wanted to |
|
|
|
00:03:55.570 --> 00:03:57.160 |
|
I never got to introduce some of the |
|
|
|
00:03:57.160 --> 00:03:58.740 |
|
TAs because a couple couldn't be here
|
|
|
00:03:58.740 --> 00:04:00.110 |
|
in the first day and I kept forgetting. |
|
|
|
00:04:01.130 --> 00:04:03.890 |
|
So, Josh, are you here? |
|
|
|
00:04:04.950 --> 00:04:08.900 |
|
OK, cool if you want to just actually. |
|
|
|
00:04:10.130 --> 00:04:11.160 |
|
I can give my mic. |
|
|
|
00:04:11.160 --> 00:04:12.780 |
|
If you want to just introduce yourself |
|
|
|
00:04:12.780 --> 00:04:14.540 |
|
a little bit, you can say like what |
|
|
|
00:04:14.540 --> 00:04:14.940 |
|
kind of. |
|
|
|
00:04:19.890 --> 00:04:20.450 |
|
Yeah. |
|
|
|
00:04:20.450 --> 00:04:20.870 |
|
Hi, everyone. |
|
|
|
00:04:20.870 --> 00:04:21.290 |
|
I'm Josh. |
|
|
|
00:04:21.290 --> 00:04:23.110 |
|
I've been applying machine learning to |
|
|
|
00:04:23.110 --> 00:04:25.980 |
|
autonomous cars and airplanes. |
|
|
|
00:04:27.150 --> 00:04:27.480 |
|
Cool. |
|
|
|
00:04:27.480 --> 00:04:27.900 |
|
Thank you. |
|
|
|
00:04:28.830 --> 00:04:31.020 |
|
And cassette, cassette. |
|
|
|
00:04:37.760 --> 00:04:38.320 |
|
Yeah. |
|
|
|
00:04:41.120 --> 00:04:46.150 |
|
OK hey everyone, I'm a TA for CS441 and |
|
|
|
00:04:46.150 --> 00:04:49.230 |
|
I mainly have experience with NLP.
|
|
|
00:04:49.230 --> 00:04:49.720 |
|
Thank you. |
|
|
|
00:04:50.960 --> 00:04:51.190 |
|
Great. |
|
|
|
00:04:51.190 --> 00:04:52.080 |
|
Thank you. |
|
|
|
00:04:52.080 --> 00:04:54.230 |
|
And I don't think Peter's here, but |
|
|
|
00:04:54.230 --> 00:04:55.820 |
|
Peter, are you here, OK. |
|
|
|
00:04:56.520 --> 00:04:58.240 |
|
He usually has a conflict on Tuesday, so
|
|
|
00:04:58.240 --> 00:05:00.780 |
|
also we have Pedro, who is a
|
|
|
00:05:01.510 --> 00:05:04.170 |
|
postdoc course assistant.
|
|
|
00:05:04.170 --> 00:05:07.020 |
|
So it's not like a regular TA, but |
|
|
|
00:05:07.020 --> 00:05:07.650 |
|
he's. |
|
|
|
00:05:08.730 --> 00:05:10.510 |
|
Doing a postdoc with Nancy Amato. |
|
|
|
00:05:11.170 --> 00:05:12.880 |
|
And he has been helping out with the online
|
|
|
00:05:12.880 --> 00:05:15.190 |
|
course for a couple semesters. |
|
|
|
00:05:15.830 --> 00:05:17.060 |
|
And so he's helping out with this |
|
|
|
00:05:17.060 --> 00:05:19.140 |
|
course and he's. |
|
|
|
00:05:20.920 --> 00:05:23.700 |
|
One of the things he's doing is holding |
|
|
|
00:05:23.700 --> 00:05:25.640 |
|
office hours, and so especially if you |
|
|
|
00:05:25.640 --> 00:05:27.710 |
|
have, if you want help with your |
|
|
|
00:05:27.710 --> 00:05:31.940 |
|
projects or homeworks — like
|
|
|
00:05:31.940 --> 00:05:34.000 |
|
higher level advice, then he can be a |
|
|
|
00:05:34.000 --> 00:05:35.676 |
|
really good resource for that. |
|
|
|
00:05:35.676 --> 00:05:37.400 |
|
So I know a lot of people want to meet |
|
|
|
00:05:37.400 --> 00:05:39.250 |
|
with me about their side projects, |
|
|
|
00:05:39.250 --> 00:05:40.720 |
|
which is also fine, you're welcome to |
|
|
|
00:05:40.720 --> 00:05:42.100 |
|
do that. |
|
|
|
00:05:42.100 --> 00:05:44.090 |
|
But he's also a good person for that. |
|
|
|
00:05:46.480 --> 00:05:49.550 |
|
Alright, so just as a reminder for |
|
|
|
00:05:49.550 --> 00:05:51.210 |
|
anybody who wasn't here, the first |
|
|
|
00:05:51.210 --> 00:05:53.630 |
|
lecture, all the notes and everything |
|
|
|
00:05:53.630 --> 00:05:55.635 |
|
are on this web page. |
|
|
|
00:05:55.635 --> 00:05:57.890 |
|
So make sure that you go there and sign |
|
|
|
00:05:57.890 --> 00:06:00.290 |
|
up for CampusWire where announcements |
|
|
|
00:06:00.290 --> 00:06:01.600 |
|
will be made. |
|
|
|
00:06:01.600 --> 00:06:06.120 |
|
Also, I sent a survey by e-mail and I |
|
|
|
00:06:06.120 --> 00:06:07.760 |
|
got a few responses last
|
|
|
00:06:07.760 --> 00:06:08.240 |
|
night. |
|
|
|
00:06:08.240 --> 00:06:10.390 |
|
Do take some time to respond to it,
|
|
|
00:06:10.390 --> 00:06:11.300 |
|
please? |
|
|
|
00:06:11.300 --> 00:06:12.290 |
|
There's two parts. |
|
|
|
00:06:12.290 --> 00:06:14.340 |
|
One is just asking for feedback about |
|
|
|
00:06:14.340 --> 00:06:15.300 |
|
like pace of the course.
|
|
|
00:06:15.380 --> 00:06:16.340 |
|
And stuff like that. |
|
|
|
00:06:16.420 --> 00:06:16.900 |
|
And. |
|
|
|
00:06:17.720 --> 00:06:20.640 |
|
One part is asking about your interests |
|
|
|
00:06:20.640 --> 00:06:23.060 |
|
for some of the possible. |
|
|
|
00:06:24.070 --> 00:06:26.332 |
|
Challenges that I'll pick for final |
|
|
|
00:06:26.332 --> 00:06:29.810 |
|
project and so basically for the final |
|
|
|
00:06:29.810 --> 00:06:32.149 |
|
project there will be 3 challenges that |
|
|
|
00:06:32.150 --> 00:06:33.600 |
|
are like pre selected. |
|
|
|
00:06:34.230 --> 00:06:35.720 |
|
But if you don't want to do those, you |
|
|
|
00:06:35.720 --> 00:06:38.370 |
|
can also just do some benchmark that's |
|
|
|
00:06:38.370 --> 00:06:40.070 |
|
online or you can even do a custom |
|
|
|
00:06:40.070 --> 00:06:40.860 |
|
task. |
|
|
|
00:06:40.860 --> 00:06:43.960 |
|
And I'll post the specifications for |
|
|
|
00:06:43.960 --> 00:06:46.640 |
|
the final project soon, along with homework 2.
|
|
|
00:06:47.960 --> 00:06:50.140 |
|
Also, just based on the feedback I've |
|
|
|
00:06:50.140 --> 00:06:52.810 |
|
seen so far, I think nobody thinks it's |
|
|
|
00:06:52.810 --> 00:06:54.570 |
|
way too easy or too slow. |
|
|
|
00:06:54.570 --> 00:06:57.150 |
|
Some people think it's much too fast |
|
|
|
00:06:57.150 --> 00:06:57.930 |
|
and too hard. |
|
|
|
00:06:57.930 --> 00:06:59.710 |
|
So I'm going to take some time on |
|
|
|
00:06:59.710 --> 00:07:03.450 |
|
Thursday to Reconsolidate and present. |
|
|
|
00:07:03.450 --> 00:07:07.280 |
|
Kind of go over what we've done so far, |
|
|
|
00:07:07.280 --> 00:07:09.750 |
|
talk in more depth or maybe not more |
|
|
|
00:07:09.750 --> 00:07:11.439 |
|
depth, but at least go over the |
|
|
|
00:07:11.440 --> 00:07:12.150 |
|
concepts. |
|
|
|
00:07:13.270 --> 00:07:16.080 |
|
And the algorithms and a little bit of |
|
|
|
00:07:16.080 --> 00:07:18.380 |
|
code now that you've had a first pass |
|
|
|
00:07:18.380 --> 00:07:18.640 |
|
at it.
|
|
|
00:07:20.460 --> 00:07:22.830 |
|
So I'll tap the brakes a little bit to |
|
|
|
00:07:22.830 --> 00:07:24.500 |
|
do that because I think it's really |
|
|
|
00:07:24.500 --> 00:07:27.215 |
|
important that these that everyone is |
|
|
|
00:07:27.215 --> 00:07:28.790 |
|
really solid on these fundamentals. |
|
|
|
00:07:28.790 --> 00:07:31.260 |
|
And I know that there's a pretty big |
|
|
|
00:07:31.260 --> 00:07:33.090 |
|
range of backgrounds of people taking |
|
|
|
00:07:33.090 --> 00:07:35.060 |
|
the course, many people from other |
|
|
|
00:07:35.060 --> 00:07:35.710 |
|
departments. |
|
|
|
00:07:37.290 --> 00:07:39.900 |
|
As well as other different kinds of. |
|
|
|
00:07:41.230 --> 00:07:43.280 |
|
Of like academic foundations. |
|
|
|
00:07:44.270 --> 00:07:44.610 |
|
Alright. |
|
|
|
00:07:45.910 --> 00:07:47.890 |
|
So just to recap what we talked about |
|
|
|
00:07:47.890 --> 00:07:49.640 |
|
in the last few lectures, very briefly, |
|
|
|
00:07:49.640 --> 00:07:51.040 |
|
we talked about Nearest neighbor. |
|
|
|
00:07:51.780 --> 00:07:53.210 |
|
And the superpowers of nearest
|
|
|
00:07:53.210 --> 00:07:55.170 |
|
neighbor are that it can instantly |
|
|
|
00:07:55.170 --> 00:07:56.230 |
|
learn new classes. |
|
|
|
00:07:56.230 --> 00:07:58.020 |
|
You can just add a new example to your |
|
|
|
00:07:58.020 --> 00:07:58.790 |
|
training set. |
|
|
|
00:07:58.790 --> 00:08:00.780 |
|
And since there's no model that has to |
|
|
|
00:08:00.780 --> 00:08:04.110 |
|
be like tuned, you can just learn super |
|
|
|
00:08:04.110 --> 00:08:04.720 |
|
quickly. |
|
|
|
00:08:04.720 --> 00:08:07.450 |
|
And it's also a pretty good predictor |
|
|
|
00:08:07.450 --> 00:08:08.980 |
|
from either one or many examples. |
|
|
|
00:08:08.980 --> 00:08:10.430 |
|
So it's a really good. |
|
|
|
00:08:10.530 --> 00:08:13.690 |
|
It's a really good algorithm to have in |
|
|
|
00:08:13.690 --> 00:08:15.330 |
|
your tool belt as a baseline and |
|
|
|
00:08:15.330 --> 00:08:16.760 |
|
sometimes as a best performer. |
|
|
|
00:08:18.500 --> 00:08:20.160 |
|
We also talked about Naive Bayes.
|
|
|
00:08:21.050 --> 00:08:24.140 |
|
Naive Bayes is not a great performer as
|
|
|
00:08:24.140 --> 00:08:26.984 |
|
like a full algorithm, but it's often |
|
|
|
00:08:26.984 --> 00:08:27.426 |
|
a. |
|
|
|
00:08:27.426 --> 00:08:30.075 |
|
It's an important concept because it's |
|
|
|
00:08:30.075 --> 00:08:31.760 |
|
often part of an assumption that you |
|
|
|
00:08:31.760 --> 00:08:32.920 |
|
make when you're trying to model |
|
|
|
00:08:32.920 --> 00:08:35.560 |
|
probabilities that you'll assume that |
|
|
|
00:08:35.560 --> 00:08:37.630 |
|
the different features are independent |
|
|
|
00:08:37.630 --> 00:08:39.010 |
|
given the thing that you're trying to |
|
|
|
00:08:39.010 --> 00:08:39.330 |
|
predict. |
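
NOTE

The Naive Bayes assumption written out in standard notation (not copied from the slides): for features x_1, ..., x_d and label y,

    P(x_1, ..., x_d | y) = \prod_{i=1}^{d} P(x_i | y)

so the prediction is \hat{y} = \arg\max_y P(y) \prod_i P(x_i | y).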
|
|
|
00:08:41.780 --> 00:08:44.290 |
|
It does have its pros, so the pros are |
|
|
|
00:08:44.290 --> 00:08:46.560 |
|
that it's really fast to estimate even |
|
|
|
00:08:46.560 --> 00:08:48.113 |
|
if you've got a lot of data. |
|
|
|
00:08:48.113 --> 00:08:49.909 |
|
And if you don't have a lot of data and |
|
|
|
00:08:49.910 --> 00:08:51.130 |
|
you're trying to get a probabilistic |
|
|
|
00:08:51.130 --> 00:08:53.300 |
|
classifier, then it might be your best |
|
|
|
00:08:53.300 --> 00:08:53.750 |
|
choice. |
|
|
|
00:08:53.750 --> 00:08:56.700 |
|
Because of its strong assumptions, you |
|
|
|
00:08:56.700 --> 00:08:59.880 |
|
can get decent estimates on those |
|
|
|
00:08:59.880 --> 00:09:02.160 |
|
single variable functions from even |
|
|
|
00:09:02.160 --> 00:09:02.800 |
|
limited data. |
|
|
|
00:09:05.460 --> 00:09:07.964 |
|
We talked about logistic regression. |
|
|
|
00:09:07.964 --> 00:09:10.830 |
|
Logistic regression is another super |
|
|
|
00:09:10.830 --> 00:09:12.230 |
|
widely used classifier. |
|
|
|
00:09:13.580 --> 00:09:16.400 |
|
I think the AML book says that SVM
|
|
|
00:09:16.400 --> 00:09:18.776 |
|
should be, like, your go-to — the first
|
|
|
00:09:18.776 --> 00:09:20.660 |
|
thing you try — but in my
|
|
|
00:09:20.660 --> 00:09:22.130 |
|
opinion Logistic Regression is. |
|
|
|
00:09:23.810 --> 00:09:25.085 |
|
It's very effective. |
|
|
|
00:09:25.085 --> 00:09:26.810 |
|
It's a very effective predictor if you |
|
|
|
00:09:26.810 --> 00:09:28.720 |
|
have high dimensional features and it |
|
|
|
00:09:28.720 --> 00:09:30.250 |
|
also provides good confidence |
|
|
|
00:09:30.250 --> 00:09:31.460 |
|
estimates, meaning that. |
|
|
|
00:09:32.150 --> 00:09:35.320 |
|
You get not only the most likely class, but
|
|
|
00:09:35.320 --> 00:09:37.470 |
|
the probability that the prediction is
|
|
|
00:09:37.470 --> 00:09:40.800 |
|
correct, and those probabilities are fairly
|
|
|
00:09:40.800 --> 00:09:41.400 |
|
trustworthy. |
|
|
|
00:09:43.320 --> 00:09:44.970 |
|
We also talked about Linear Regression, |
|
|
|
00:09:44.970 --> 00:09:46.560 |
|
where you're fitting a line to a set of |
|
|
|
00:09:46.560 --> 00:09:47.050 |
|
points. |
|
|
|
00:09:47.670 --> 00:09:50.610 |
|
And you can extrapolate to predict like |
|
|
|
00:09:50.610 --> 00:09:52.620 |
|
new values that are outside of your |
|
|
|
00:09:52.620 --> 00:09:53.790 |
|
Training range. |
|
|
|
00:09:54.530 --> 00:09:55.840 |
|
And so. |
|
|
|
00:09:56.730 --> 00:09:58.450 |
|
Linear regression is also useful for |
|
|
|
00:09:58.450 --> 00:10:00.270 |
|
explaining relationships. You very
|
|
|
00:10:00.270 --> 00:10:02.100 |
|
commonly see, like trend lines. |
|
|
|
00:10:02.100 --> 00:10:03.390 |
|
That's just Linear Regression. |
|
|
|
00:10:04.130 --> 00:10:05.850 |
|
And you can predict continuous values |
|
|
|
00:10:05.850 --> 00:10:07.600 |
|
from many variables. And linear
|
|
|
00:10:07.600 --> 00:10:10.130 |
|
regression is also like probably the |
|
|
|
00:10:10.130 --> 00:10:12.760 |
|
most common tool for. |
|
|
|
00:10:12.830 --> 00:10:15.590 |
|
For things like, I don't know, like |
|
|
|
00:10:15.590 --> 00:10:17.790 |
|
economics or analyzing. |
|
|
|
00:10:18.770 --> 00:10:23.100 |
|
Yeah, time series analyzing like fMRI |
|
|
|
00:10:23.100 --> 00:10:25.930 |
|
data or all kinds of scientific and |
|
|
|
00:10:25.930 --> 00:10:27.180 |
|
economic analysis. |
|
|
|
00:10:30.420 --> 00:10:33.810 |
|
So almost all algorithms involve these |
|
|
|
00:10:33.810 --> 00:10:35.760 |
|
Nearest neighbor, logistic regression |
|
|
|
00:10:35.760 --> 00:10:36.850 |
|
or linear regression. |
|
|
|
00:10:37.540 --> 00:10:41.040 |
|
And the reason that there are thousands of
|
|
|
00:10:41.040 --> 00:10:43.330 |
|
papers published in the last 10 years |
|
|
|
00:10:43.330 --> 00:10:45.060 |
|
or so, probably a lot more than that |
|
|
|
00:10:45.060 --> 00:10:47.030 |
|
actually —
|
|
|
00:10:47.850 --> 00:10:50.120 |
|
is really the feature learning: it's
|
|
|
00:10:50.120 --> 00:10:52.090 |
|
getting the right representation so |
|
|
|
00:10:52.090 --> 00:10:54.490 |
|
that when you feed that representation |
|
|
|
00:10:54.490 --> 00:10:56.610 |
|
into these like Linear models or |
|
|
|
00:10:56.610 --> 00:10:59.080 |
|
Nearest neighbor, you get good results. |
|
|
|
00:11:00.080 --> 00:11:00.660 |
|
And so. |
|
|
|
00:11:01.510 --> 00:11:03.020 |
|
Pretty much the rest of what we're |
|
|
|
00:11:03.020 --> 00:11:05.160 |
|
going to learn in the supervised |
|
|
|
00:11:05.160 --> 00:11:07.520 |
|
learning section of the course is how |
|
|
|
00:11:07.520 --> 00:11:08.460 |
|
to learn features. |
|
|
|
00:11:11.930 --> 00:11:14.150 |
|
So I did want to just briefly go over |
|
|
|
00:11:14.150 --> 00:11:15.640 |
|
the homework and remind you that it's |
|
|
|
00:11:15.640 --> 00:11:18.180 |
|
due on Monday, February 6th.
|
|
|
00:11:19.060 --> 00:11:21.400 |
|
And I'll be going over some related |
|
|
|
00:11:21.400 --> 00:11:22.830 |
|
things again in more detail on |
|
|
|
00:11:22.830 --> 00:11:23.350 |
|
Thursday. |
|
|
|
00:11:24.300 --> 00:11:27.200 |
|
But there's two parts to the main |
|
|
|
00:11:27.200 --> 00:11:27.590 |
|
homework. |
|
|
|
00:11:27.590 --> 00:11:29.770 |
|
There's Digit Classification where |
|
|
|
00:11:29.770 --> 00:11:31.186 |
|
you're trying to predict a label zero |
|
|
|
00:11:31.186 --> 00:11:33.530 |
|
to 9 based on a 28 by 28 image. |
|
|
|
00:11:34.410 --> 00:11:36.409 |
|
These images get reshaped into like a |
|
|
|
00:11:36.410 --> 00:11:38.840 |
|
single vector, so you have a feature |
|
|
|
00:11:38.840 --> 00:11:41.020 |
|
vector that corresponds to the pixel |
|
|
|
00:11:41.020 --> 00:11:42.280 |
|
intensities of the image. |
|
|
|
00:11:44.350 --> 00:11:46.510 |
|
And then you have to do KNN and Naive
|
|
|
00:11:46.510 --> 00:11:49.200 |
|
Bayes, and linear or logistic regression.
|
|
|
00:11:50.060 --> 00:11:52.510 |
|
And plot the Error versus —
|
|
|
00:11:52.670 --> 00:11:55.420 |
|
plot Error versus Training size, to
|
|
|
00:11:55.420 --> 00:11:57.310 |
|
get a sense for like how performance |
|
|
|
00:11:57.310 --> 00:11:58.745 |
|
changes as you vary the number of |
|
|
|
00:11:58.745 --> 00:11:59.530 |
|
training examples. |
|
|
|
00:12:00.380 --> 00:12:02.820 |
|
And then to select the best parameter |
|
|
|
00:12:02.820 --> 00:12:06.490 |
|
using a validation set.
|
|
|
00:12:06.490 --> 00:12:07.240 |
|
Hyperparameter
|
|
|
00:12:07.240 --> 00:12:09.020 |
|
tuning is something that you do
|
|
|
00:12:09.020 --> 00:12:10.100 |
|
all the time in machine learning. |
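
NOTE

A minimal sketch of hyperparameter selection with a validation set (not the starter code; sklearn's small digits dataset stands in for MNIST):

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_digits(return_X_y=True)  # 8x8 digit images, already flattened
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

    best_k, best_acc = None, -1.0
    for k in [1, 3, 5, 9, 15]:
        acc = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train).score(X_val, y_val)
        if acc > best_acc:
            best_k, best_acc = k, acc
    print("chosen K:", best_k, "validation accuracy:", best_acc)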
|
|
|
00:12:13.150 --> 00:12:14.839 |
|
The second problem is Temperature |
|
|
|
00:12:14.840 --> 00:12:15.700 |
|
Regression. |
|
|
|
00:12:15.700 --> 00:12:18.182 |
|
So I got this Temperature. |
|
|
|
00:12:18.182 --> 00:12:20.178 |
|
This data set of, like, the temperatures
|
|
|
00:12:20.178 --> 00:12:22.890 |
|
in big cities in the US, and then
|
|
|
00:12:22.890 --> 00:12:24.111 |
|
made up a problem from it.
|
|
|
00:12:24.111 --> 00:12:26.550 |
|
So the problem is to try to predict the |
|
|
|
00:12:26.550 --> 00:12:28.220 |
|
next day's temperature in Cleveland |
|
|
|
00:12:28.220 --> 00:12:30.000 |
|
which is day zero, given all the previous
|
|
|
00:12:30.000 --> 00:12:30.550 |
|
temperatures. |
|
|
|
00:12:31.490 --> 00:12:34.350 |
|
And these features have meanings. |
|
|
|
00:12:34.350 --> 00:12:37.417 |
|
Every feature is, like,
|
|
|
00:12:37.417 --> 00:12:39.530 |
|
the temperature of one of the big cities
|
|
|
00:12:39.530 --> 00:12:40.990 |
|
on one of the past five days.
|
|
|
00:12:42.570 --> 00:12:44.753 |
|
But you can kind of. |
|
|
|
00:12:44.753 --> 00:12:46.110 |
|
You don't really need to know those |
|
|
|
00:12:46.110 --> 00:12:48.110 |
|
meanings in order to solve the problem |
|
|
|
00:12:48.110 --> 00:12:48.480 |
|
again. |
|
|
|
00:12:48.480 --> 00:12:50.780 |
|
You essentially just have a feature |
|
|
|
00:12:50.780 --> 00:12:53.730 |
|
vector of a bunch of continuous values |
|
|
|
00:12:53.730 --> 00:12:55.570 |
|
in this case, and you're trying to |
|
|
|
00:12:55.570 --> 00:12:57.290 |
|
predict a new continuous value, which |
|
|
|
00:12:57.290 --> 00:13:00.460 |
|
is Cleveland's temperature on
|
|
|
00:13:00.460 --> 00:13:01.240 |
|
the next day. |
|
|
|
00:13:02.010 --> 00:13:04.545 |
|
And again you can use KNN and Naive Bayes
|
|
|
00:13:04.545 --> 00:13:06.000 |
|
and now Linear Regression. |
|
|
|
00:13:07.020 --> 00:13:08.935 |
|
KNN implementation will be essentially |
|
|
|
00:13:08.935 --> 00:13:11.440 |
|
the same for these — a two-line change of
|
|
|
00:13:11.440 --> 00:13:13.820 |
|
code because now instead of predicting |
|
|
|
00:13:13.820 --> 00:13:16.230 |
|
a categorical variable, you're |
|
|
|
00:13:16.230 --> 00:13:18.440 |
|
predicting a continuous variable. |
|
|
|
00:13:18.440 --> 00:13:20.580 |
|
So if K is greater than one, you |
|
|
|
00:13:20.580 --> 00:13:23.770 |
|
average the predictions for Regression |
|
|
|
00:13:23.770 --> 00:13:26.280 |
|
where for the Classification you choose |
|
|
|
00:13:26.280 --> 00:13:27.430 |
|
the most common prediction. |
|
|
|
00:13:28.580 --> 00:13:29.840 |
|
That's the only change. |
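
NOTE

A sketch of that one change, assuming a hypothetical array neighbor_labels holding the K neighbors' targets:

    import numpy as np
    from collections import Counter

    def knn_predict(neighbor_labels, task):
        if task == "regression":
            return np.mean(neighbor_labels)                   # average the K values
        return Counter(neighbor_labels).most_common(1)[0][0]  # majority vote for classification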
|
|
|
00:13:29.840 --> 00:13:31.590 |
|
Naive Bayes does change quite a bit |
|
|
|
00:13:31.590 --> 00:13:32.710 |
|
because you're using a different |
|
|
|
00:13:32.710 --> 00:13:33.590 |
|
probabilistic model. |
|
|
|
00:13:34.360 --> 00:13:36.710 |
|
And remember that there's one lecture |
|
|
|
00:13:36.710 --> 00:13:38.670 |
|
slide that has the derivation for how |
|
|
|
00:13:38.670 --> 00:13:40.545 |
|
you do the inference for Naive Bayes under
|
|
|
00:13:40.545 --> 00:13:41.050 |
|
this setting.
|
|
|
00:13:42.330 --> 00:13:44.760 |
|
And then for linear and logistic |
|
|
|
00:13:44.760 --> 00:13:46.820 |
|
regression you're able to use the |
|
|
|
00:13:46.820 --> 00:13:48.350 |
|
modules in sklearn. |
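
NOTE

A minimal sketch of those sklearn modules on made-up data (shapes and names are illustrative, not the homework data):

    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    X = np.random.rand(100, 5)
    y_cont = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * np.random.randn(100)
    y_cls = (y_cont > y_cont.mean()).astype(int)

    reg = LinearRegression().fit(X, y_cont)        # continuous target
    clf = LogisticRegression(C=1.0).fit(X, y_cls)  # C is the inverse of the regularization weight
    print(reg.predict(X[:3]), clf.predict_proba(X[:3]))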
|
|
|
00:13:49.890 --> 00:13:51.680 |
|
And then the final part is to identify |
|
|
|
00:13:51.680 --> 00:13:53.550 |
|
the most important features using L1 |
|
|
|
00:13:53.550 --> 00:13:54.320 |
|
Linear Regression. |
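
NOTE

A sketch of L1 (Lasso) feature selection on synthetic data; in the homework the columns would be the previous-day city temperatures:

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))
    y = 3.0 * X[:, 2] - 2.0 * X[:, 7] + 0.1 * rng.normal(size=200)  # only features 2 and 7 matter

    lasso = Lasso(alpha=0.1).fit(X, y)
    print("selected features:", np.nonzero(lasso.coef_)[0])  # the L1 penalty zeros out the rest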
|
|
|
00:13:55.030 --> 00:13:57.160 |
|
So the reason that we use. |
|
|
|
00:13:58.020 --> 00:13:59.810 |
|
And when we do like. |
|
|
|
00:14:01.000 --> 00:14:03.170 |
|
Linear and logistic regression, we're |
|
|
|
00:14:03.170 --> 00:14:03.580 |
|
trying. |
|
|
|
00:14:03.580 --> 00:14:05.228 |
|
We're mainly trying to fit the data. |
|
|
|
00:14:05.228 --> 00:14:06.600 |
|
We're trying to come up with a model |
|
|
|
00:14:06.600 --> 00:14:08.340 |
|
that fits the data or fits our |
|
|
|
00:14:08.340 --> 00:14:09.920 |
|
predictions given the features. |
|
|
|
00:14:10.630 --> 00:14:13.720 |
|
But also we often express some |
|
|
|
00:14:13.720 --> 00:14:14.490 |
|
preference. |
|
|
|
00:14:15.190 --> 00:14:19.892 |
|
Over the model, in particular that the |
|
|
|
00:14:19.892 --> 00:14:21.669 |
|
weights don't get too large, and the |
|
|
|
00:14:21.670 --> 00:14:25.170 |
|
reason for that is to avoid like |
|
|
|
00:14:25.170 --> 00:14:27.070 |
|
overfitting or over relying on |
|
|
|
00:14:27.070 --> 00:14:30.410 |
|
particular features, as well as to |
|
|
|
00:14:30.410 --> 00:14:34.795 |
|
improve the generalization to new data.
|
|
|
00:14:34.795 --> 00:14:36.209 |
|
And for generalization —
|
|
|
00:14:36.210 --> 00:14:37.810 |
|
research shows that if you can fit
|
|
|
00:14:37.810 --> 00:14:39.220 |
|
something with smaller weights, then |
|
|
|
00:14:39.220 --> 00:14:42.013 |
|
you're more likely to generalize to new |
|
|
|
00:14:42.013 --> 00:14:42.209 |
|
data. |
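
NOTE

That preference written as the usual regularized objective (standard form, not verbatim from the slides):

    \min_w \sum_i L(y_i, w^\top x_i) + \lambda \lVert w \rVert_2^2

with the L2 penalty keeping the weights small, or \lambda \lVert w \rVert_1 for L1, which also drives many weights to exactly zero.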
|
|
|
00:14:44.640 --> 00:14:46.550 |
|
And here we're going to use it for |
|
|
|
00:14:46.550 --> 00:14:47.520 |
|
feature selection, yeah? |
|
|
|
00:14:58.910 --> 00:15:02.370 |
|
The so the parameters are. |
|
|
|
00:15:02.370 --> 00:15:04.510 |
|
You're talking about 1/3. |
|
|
|
00:15:04.510 --> 00:15:07.825 |
|
OK, so for Naive Bayes the parameter is |
|
|
|
00:15:07.825 --> 00:15:10.430 |
|
the prior, so that's like the alpha of |
|
|
|
00:15:10.430 --> 00:15:11.010 |
|
like your. |
|
|
|
00:15:11.670 --> 00:15:14.903 |
|
In the, it's the initial count, so you |
|
|
|
00:15:14.903 --> 00:15:15.882 |
|
have a Naive Bayes. |
|
|
|
00:15:15.882 --> 00:15:17.360 |
|
You have a prior that's essentially |
|
|
|
00:15:17.360 --> 00:15:19.200 |
|
that you pretend like you've seen all |
|
|
|
00:15:19.200 --> 00:15:20.230 |
|
combinations of. |
|
|
|
00:15:20.950 --> 00:15:23.930 |
|
Of things that you're counting, you |
|
|
|
00:15:23.930 --> 00:15:26.210 |
|
pretend that you've seen them alpha times, and
|
|
|
00:15:26.210 --> 00:15:28.510 |
|
so that kind of gives you a bias |
|
|
|
00:15:28.510 --> 00:15:30.200 |
|
towards estimating that everything's |
|
|
|
00:15:30.200 --> 00:15:33.170 |
|
equally likely, and that alpha is a |
|
|
|
00:15:33.170 --> 00:15:34.190 |
|
parameter that you can use. |
|
|
|
00:15:34.810 --> 00:15:36.270 |
|
You can learn using validation. |
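
NOTE

A sketch of the alpha prior as pseudo-counts (Laplace smoothing) for one discrete feature within one class; the function name is made up:

    import numpy as np

    def smoothed_probs(counts, alpha):
        # pretend every value was already seen alpha times before counting the data
        counts = np.asarray(counts, dtype=float)
        return (counts + alpha) / (counts.sum() + alpha * len(counts))

    print(smoothed_probs([0, 3, 7], alpha=1.0))  # no value ends up with probability 0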
|
|
|
00:15:37.010 --> 00:15:39.920 |
|
For Logistic Regression, it's the |
|
|
|
00:15:39.920 --> 00:15:42.650 |
|
Lambda which is your weight on the |
|
|
|
00:15:42.650 --> 00:15:43.760 |
|
regularization term. |
|
|
|
00:15:45.650 --> 00:15:48.180 |
|
And for KNN, it's your K, which is the
|
|
|
00:15:48.180 --> 00:15:49.320 |
|
number of nearest neighbors you |
|
|
|
00:15:49.320 --> 00:15:49.710 |
|
consider. |
|
|
|
00:15:57.960 --> 00:15:58.220 |
|
Yeah. |
|
|
|
00:16:00.180 --> 00:16:03.284 |
|
So the KNN is —
|
|
|
00:16:03.284 --> 00:16:05.686 |
|
It's almost the same whether you're |
|
|
|
00:16:05.686 --> 00:16:08.260 |
|
doing Regression or Classification. |
|
|
|
00:16:08.260 --> 00:16:09.980 |
|
When you find the K nearest neighbors, |
|
|
|
00:16:09.980 --> 00:16:11.790 |
|
it's the exact same code. |
|
|
|
00:16:11.790 --> 00:16:14.016 |
|
The difference is that if you're doing |
|
|
|
00:16:14.016 --> 00:16:15.270 |
|
Regression, you're trying to predict |
|
|
|
00:16:15.270 --> 00:16:16.200 |
|
continuous values. |
|
|
|
00:16:16.200 --> 00:16:19.340 |
|
So if K is greater than one, then you |
|
|
|
00:16:19.340 --> 00:16:21.532 |
|
want to average those continuous values |
|
|
|
00:16:21.532 --> 00:16:23.150 |
|
to get your final prediction. |
|
|
|
00:16:23.850 --> 00:16:26.060 |
|
And if you're doing Classification, you |
|
|
|
00:16:26.060 --> 00:16:28.490 |
|
find the most common label instead of |
|
|
|
00:16:28.490 --> 00:16:29.803 |
|
averaging because you don't want to |
|
|
|
00:16:29.803 --> 00:16:31.470 |
|
say, well it could be a four, it could |
|
|
|
00:16:31.470 --> 00:16:31.980 |
|
be a 9. |
|
|
|
00:16:31.980 --> 00:16:33.110 |
|
So I'm going to like split the |
|
|
|
00:16:33.110 --> 00:16:34.270 |
|
difference and say it's a 6. |
|
|
|
00:16:42.030 --> 00:16:45.420 |
|
The averaging is just that — so,
|
|
|
00:16:45.420 --> 00:16:47.600 |
|
like if KNN returns, like, the
|
|
|
00:16:47.600 --> 00:16:54.870 |
|
temperatures of 10, 12, and 13, then you
|
|
|
00:16:54.870 --> 00:16:57.190 |
|
would say that the average temperature |
|
|
|
00:16:57.190 --> 00:16:59.530 |
|
is like 11.7 or whatever that works out
|
|
|
00:16:59.530 --> 00:16:59.730 |
|
to. |
|
|
|
00:17:04.600 --> 00:17:06.440 |
|
Yeah, at the end, if K is greater than |
|
|
|
00:17:06.440 --> 00:17:09.333 |
|
one, then you take the arithmetic mean |
|
|
|
00:17:09.333 --> 00:17:11.210 |
|
— the average — of the
|
|
|
00:17:11.940 --> 00:17:14.560 |
|
Predictions of your K nearest |
|
|
|
00:17:14.560 --> 00:17:14.970 |
|
neighbors. |
|
|
|
00:17:16.590 --> 00:17:16.790 |
|
Yeah. |
|
|
|
00:17:18.610 --> 00:17:20.560 |
|
And so you could also get a variance |
|
|
|
00:17:20.560 --> 00:17:22.370 |
|
from that, which you don't need to do |
|
|
|
00:17:22.370 --> 00:17:24.500 |
|
for the homework, but so as a result |
|
|
|
00:17:24.500 --> 00:17:26.550 |
|
you can have some like confidence bound |
|
|
|
00:17:26.550 --> 00:17:27.840 |
|
on your estimate as well. |
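
NOTE

A tiny worked sketch of the mean prediction and a spread estimate from the neighbors' values, reusing the numbers above:

    import numpy as np

    neighbor_temps = np.array([10.0, 12.0, 13.0])
    prediction = neighbor_temps.mean()    # about 11.7
    spread = neighbor_temps.std(ddof=1)   # sample standard deviation
    print(prediction, prediction - 2 * spread, prediction + 2 * spread)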
|
|
|
00:17:30.050 --> 00:17:31.780 |
|
Alright then you have stretch goals, |
|
|
|
00:17:31.780 --> 00:17:32.170 |
|
so. |
|
|
|
00:17:32.850 --> 00:17:34.100 |
|
Stretch goals are. |
|
|
|
00:17:35.130 --> 00:17:37.000 |
|
Mainly intended for people taking the |
|
|
|
00:17:37.000 --> 00:17:39.020 |
|
four credit version, but anyone
|
|
|
00:17:39.020 --> 00:17:39.510 |
|
can try them. |
|
|
|
00:17:40.240 --> 00:17:42.570 |
|
So there's just improving the MNIST |
|
|
|
00:17:42.570 --> 00:17:44.370 |
|
classification, like some ideas. |
|
|
|
00:17:44.370 --> 00:17:47.360 |
|
Or you could try to crop around the |
|
|
|
00:17:47.360 --> 00:17:49.000 |
|
Digit, or you could make sure that |
|
|
|
00:17:49.000 --> 00:17:51.840 |
|
they're all centered, or do some |
|
|
|
00:17:51.840 --> 00:17:53.410 |
|
whitening or other kinds of feature |
|
|
|
00:17:53.410 --> 00:17:54.340 |
|
transformations. |
|
|
|
00:17:55.430 --> 00:17:56.770 |
|
Improving Temperature Regression. |
|
|
|
00:17:56.770 --> 00:18:00.070 |
|
To be honest, I'm not sure exactly how |
|
|
|
00:18:00.070 --> 00:18:01.829 |
|
much this can be improved or how to |
|
|
|
00:18:01.830 --> 00:18:02.280 |
|
improve it. |
|
|
|
00:18:03.030 --> 00:18:04.720 |
|
Again, there's. |
|
|
|
00:18:04.720 --> 00:18:07.370 |
|
What I would do is try like subtracting |
|
|
|
00:18:07.370 --> 00:18:08.110 |
|
off the mean. |
|
|
|
00:18:08.110 --> 00:18:09.220 |
|
For example, you can. |
|
|
|
00:18:10.380 --> 00:18:12.370 |
|
You can normalize your features before |
|
|
|
00:18:12.370 --> 00:18:15.540 |
|
you do the fitting by subtracting off |
|
|
|
00:18:15.540 --> 00:18:16.750 |
|
means and dividing by standard
|
|
|
00:18:16.750 --> 00:18:17.410 |
|
deviations. |
|
|
|
00:18:17.410 --> 00:18:18.140 |
|
That's one idea. |
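
NOTE

A sketch of that normalization idea: fit the per-feature mean and standard deviation on the training features only, then apply the same transform everywhere (the array names are placeholders):

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    X_train = np.random.rand(100, 5) * 40 - 10  # stand-in for the temperature features
    X_test = np.random.rand(20, 5) * 40 - 10

    scaler = StandardScaler().fit(X_train)      # learns mean and std per feature
    X_train_n = scaler.transform(X_train)       # (x - mean) / std
    X_test_n = scaler.transform(X_test)         # same train-set statistics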
|
|
|
00:18:19.060 --> 00:18:22.320 |
|
But we'll look at it after submissions |
|
|
|
00:18:22.320 --> 00:18:24.095 |
|
if it turns out that. |
|
|
|
00:18:24.095 --> 00:18:27.020 |
|
So the targets I chose are because I
|
|
|
00:18:27.020 --> 00:18:29.273 |
|
was able to do like some simple things |
|
|
|
00:18:29.273 --> 00:18:32.383 |
|
to bring down the Error by a few tenths |
|
|
|
00:18:32.383 --> 00:18:33.510 |
|
of a percent. |
|
|
|
00:18:33.510 --> 00:18:35.000 |
|
So I kind of figured that if you do |
|
|
|
00:18:35.000 --> 00:18:36.346 |
|
more things, you'll be able to bring it |
|
|
|
00:18:36.346 --> 00:18:38.420 |
|
down further, but it's hard to tell so. |
|
|
|
00:18:39.240 --> 00:18:40.960 |
|
If you do this and you put a lot of |
|
|
|
00:18:40.960 --> 00:18:42.600 |
|
effort into it, describe your effort |
|
|
|
00:18:42.600 --> 00:18:45.594 |
|
and we'll assign points even if you |
|
|
|
00:18:45.594 --> 00:18:47.680 |
|
even if it turns out that there's not |
|
|
|
00:18:47.680 --> 00:18:48.640 |
|
like a big improvement. |
|
|
|
00:18:48.640 --> 00:18:50.676 |
|
So don't stress out if you can't get |
|
|
|
00:18:50.676 --> 00:18:51.609 |
|
like a 1.19
|
|
|
00:18:52.450 --> 00:18:54.200 |
|
RMSE or something like that.
|
|
|
00:18:55.130 --> 00:18:55.335 |
|
Right. |
|
|
|
00:18:55.335 --> 00:18:57.306 |
|
The last one is to generate a train |
|
|
|
00:18:57.306 --> 00:18:58.806 |
|
set — a train/test Classification set.
|
|
|
00:18:58.806 --> 00:19:00.380 |
|
So this actually means: don't, like,
|
|
|
00:19:00.380 --> 00:19:02.020 |
|
generate it out of MNIST — create
|
|
|
00:19:02.020 --> 00:19:02.804 |
|
synthetic data. |
|
|
|
00:19:02.804 --> 00:19:05.020 |
|
So Naive Bayes makes certain
|
|
|
00:19:05.020 --> 00:19:05.405 |
|
assumptions. |
|
|
|
00:19:05.405 --> 00:19:07.180 |
|
So if you generate your data according |
|
|
|
00:19:07.180 --> 00:19:09.390 |
|
to those Assumptions, you should be |
|
|
|
00:19:09.390 --> 00:19:11.900 |
|
able to create a problem where
|
|
|
00:19:11.900 --> 00:19:13.520 |
|
Naive Bayes can outperform the other
|
|
|
00:19:13.520 --> 00:19:13.980 |
|
methods. |
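
NOTE

One possible way to generate data matching the Naive Bayes assumptions (an illustration, not the required recipe): pick a class, then sample each feature independently given that class:

    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 1000, 10
    y = rng.integers(0, 2, size=n)                   # class label, 0 or 1
    theta = rng.random((2, d))                       # P(x_j = 1 | y) for each class and feature
    X = (rng.random((n, d)) < theta[y]).astype(int)  # features conditionally independent given y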
|
|
|
00:19:18.970 --> 00:19:22.130 |
|
So for these homeworks, make sure that |
|
|
|
00:19:22.130 --> 00:19:24.020 |
|
you of course read the assignment. |
|
|
|
00:19:24.020 --> 00:19:25.040 |
|
Read the tips. |
|
|
|
00:19:25.530 --> 00:19:26.210 |
|
|
|
|
|
00:19:27.060 --> 00:19:29.190 |
|
And then you should be adding code to |
|
|
|
00:19:29.190 --> 00:19:30.045 |
|
the starter code. |
|
|
|
00:19:30.045 --> 00:19:31.610 |
|
The starter code doesn't really solve |
|
|
|
00:19:31.610 --> 00:19:33.030 |
|
the problems for you, but it loads the |
|
|
|
00:19:33.030 --> 00:19:34.570 |
|
data and gives you some examples. |
|
|
|
00:19:34.570 --> 00:19:38.160 |
|
So, for example, there's a —
|
|
|
00:19:38.810 --> 00:19:41.340 |
|
In the Regression, I think it includes |
|
|
|
00:19:41.340 --> 00:19:43.710 |
|
like a baseline where it computes RMSE |
|
|
|
00:19:43.710 --> 00:19:46.450 |
|
and median absolute error, so that |
|
|
|
00:19:46.450 --> 00:19:48.760 |
|
function can essentially be reused |
|
|
|
00:19:48.760 --> 00:19:50.060 |
|
later to compute the errors. |
|
|
|
00:19:51.120 --> 00:19:53.073 |
|
And that baseline gives you some idea |
|
|
|
00:19:53.073 --> 00:19:55.390 |
|
of like what kind of performance you |
|
|
|
00:19:55.390 --> 00:19:55.870 |
|
might get. |
|
|
|
00:19:55.870 --> 00:19:57.320 |
|
Like you should beat that baseline |
|
|
|
00:19:57.320 --> 00:19:58.620 |
|
because that's just based on a single |
|
|
|
00:19:58.620 --> 00:19:58.940 |
|
feature. |
|
|
|
00:20:00.300 --> 00:20:02.980 |
|
And then you complete the report and |
|
|
|
00:20:02.980 --> 00:20:04.940 |
|
make sure to include expected points. |
|
|
|
00:20:04.940 --> 00:20:07.040 |
|
So when the graders grade, they will
|
|
|
00:20:07.040 --> 00:20:09.140 |
|
essentially just say if they disagree |
|
|
|
00:20:09.140 --> 00:20:09.846 |
|
with you. |
|
|
|
00:20:09.846 --> 00:20:12.470 |
|
So you if you claim like 10 points but |
|
|
|
00:20:12.470 --> 00:20:14.345 |
|
something was wrong then they might say |
|
|
|
00:20:14.345 --> 00:20:16.310 |
|
you lose like 3 points for this reason |
|
|
|
00:20:16.310 --> 00:20:20.340 |
|
and so that streamlines their grading. |
|
|
|
00:20:21.930 --> 00:20:23.580 |
|
For the assignment, submit the report and your
|
|
|
00:20:23.580 --> 00:20:26.070 |
|
notebook, and if you just have
|
|
|
00:20:26.070 --> 00:20:28.800 |
|
one file, submitting the IPYNB is fine |
|
|
|
00:20:28.800 --> 00:20:29.890 |
|
or otherwise you can zip it. |
|
|
|
00:20:30.860 --> 00:20:32.220 |
|
And that's it. |
|
|
|
00:20:33.960 --> 00:20:34.620 |
|
Yeah, question. |
|
|
|
00:20:41.730 --> 00:20:47.160 |
|
So you need — in the three credit it was 450,
|
|
|
00:20:47.160 --> 00:20:47.640 |
|
is that right? |
|
|
|
00:20:48.650 --> 00:20:50.810 |
|
So I think in the three credit
|
|
|
00:20:50.810 --> 00:20:52.300 |
|
you need 450 points. |
|
|
|
00:20:53.660 --> 00:20:55.430 |
|
Each assignment without doing any |
|
|
|
00:20:55.430 --> 00:20:56.230 |
|
stretch goals. |
|
|
|
00:20:56.230 --> 00:20:58.620 |
|
Each assignment is worth 100 points and |
|
|
|
00:20:58.620 --> 00:21:01.240 |
|
the final project is worth 50 points. |
|
|
|
00:21:01.240 --> 00:21:02.673 |
|
I mean sorry, the final projects worth |
|
|
|
00:21:02.673 --> 00:21:03.460 |
|
100 points also. |
|
|
|
00:21:04.150 --> 00:21:06.310 |
|
So if you're in the three Credit |
|
|
|
00:21:06.310 --> 00:21:08.210 |
|
version and you don't do any stretch |
|
|
|
00:21:08.210 --> 00:21:10.960 |
|
goals, and you do all the assignments |
|
|
|
00:21:10.960 --> 00:21:12.500 |
|
and you do the final project, you will |
|
|
|
00:21:12.500 --> 00:21:13.570 |
|
have more points than you need. |
|
|
|
00:21:14.190 --> 00:21:17.740 |
|
So you can kind of pick
|
|
|
00:21:17.740 --> 00:21:19.270 |
|
something that you don't want to do and |
|
|
|
00:21:19.270 --> 00:21:20.910 |
|
skip it if you're in the three credit |
|
|
|
00:21:20.910 --> 00:21:24.100 |
|
course and or like if you just are |
|
|
|
00:21:24.100 --> 00:21:26.330 |
|
already a machine learning guru, you |
|
|
|
00:21:26.330 --> 00:21:29.290 |
|
can do like 3 assignments with all the |
|
|
|
00:21:29.290 --> 00:21:31.630 |
|
extra parts and then take a vacation. |
|
|
|
00:21:32.920 --> 00:21:34.720 |
|
If you're in the four credit version, |
|
|
|
00:21:34.720 --> 00:21:37.490 |
|
then you will have to do some of the. |
|
|
|
00:21:37.670 --> 00:21:39.520 |
|
Some of the stretch goals in order to |
|
|
|
00:21:39.520 --> 00:21:41.470 |
|
get your full points, which are 550. |
|
|
|
00:21:49.580 --> 00:21:52.715 |
|
Alright, so now I'm going to move on to |
|
|
|
00:21:52.715 --> 00:21:54.180 |
|
the main topic. |
|
|
|
00:21:54.180 --> 00:21:57.340 |
|
So we've seen so far, we've seen 2 main |
|
|
|
00:21:57.340 --> 00:21:59.116 |
|
choices for how to use the features. |
|
|
|
00:21:59.116 --> 00:22:01.025 |
|
We could do Nearest neighbor, where we
|
|
|
00:22:01.025 --> 00:22:03.200 |
|
use all the features jointly in order |
|
|
|
00:22:03.200 --> 00:22:05.280 |
|
to find similar examples, and then we |
|
|
|
00:22:05.280 --> 00:22:06.970 |
|
predict the most similar label. |
|
|
|
00:22:07.910 --> 00:22:10.160 |
|
Or we can use a linear model where |
|
|
|
00:22:10.160 --> 00:22:11.980 |
|
essentially you're making a prediction |
|
|
|
00:22:11.980 --> 00:22:14.530 |
|
out of a combination of all the feature values.
|
|
|
00:22:16.070 --> 00:22:18.490 |
|
But there's some other things that are |
|
|
|
00:22:18.490 --> 00:22:20.270 |
|
kind of intuitive, so. |
|
|
|
00:22:21.220 --> 00:22:24.010 |
|
For example, if you consider this where |
|
|
|
00:22:24.010 --> 00:22:26.260 |
|
you're trying to split the red X's from |
|
|
|
00:22:26.260 --> 00:22:27.710 |
|
the Green O's. |
|
|
|
00:22:28.370 --> 00:22:30.820 |
|
What's like another way that you might |
|
|
|
00:22:30.820 --> 00:22:33.180 |
|
try to define what that Decision |
|
|
|
00:22:33.180 --> 00:22:35.130 |
|
boundary is if you wanted to, say, tell |
|
|
|
00:22:35.130 --> 00:22:35.730 |
|
somebody else? |
|
|
|
00:22:35.730 --> 00:22:37.110 |
|
Like how do you identify whether |
|
|
|
00:22:37.110 --> 00:22:38.770 |
|
something is a no? |
|
|
|
00:22:52.240 --> 00:22:55.600 |
|
Yeah, I mean, so you draw some kind
|
|
|
00:22:55.600 --> 00:22:56.200 |
|
of boundary. |
|
|
|
00:22:57.150 --> 00:22:57.690 |
|
And. |
|
|
|
00:22:58.620 --> 00:23:00.315 |
|
And one way that you might think about |
|
|
|
00:23:00.315 --> 00:23:03.440 |
|
that is creating a kind of like simple |
|
|
|
00:23:03.440 --> 00:23:04.220 |
|
rule like this. |
|
|
|
00:23:04.220 --> 00:23:05.890 |
|
Like you might say that if. |
|
|
|
00:23:06.600 --> 00:23:09.040 |
|
You basically draw a boundary, but if |
|
|
|
00:23:09.040 --> 00:23:11.252 |
|
you want to specify you might say if X2 |
|
|
|
00:23:11.252 --> 00:23:15.820 |
|
is less than .6 and X2 is greater than |
|
|
|
00:23:15.820 --> 00:23:16.500 |
|
two. |
|
|
|
00:23:17.460 --> 00:23:21.480 |
|
And that X2 — oops, let's just say X1 in
|
|
|
00:23:21.480 --> 00:23:22.082 |
|
the last one. |
|
|
|
00:23:22.082 --> 00:23:24.630 |
|
And if X1 is less than seven then it's |
|
|
|
00:23:24.630 --> 00:23:26.672 |
|
an O and otherwise it's an X. |
|
|
|
00:23:26.672 --> 00:23:28.110 |
|
So basically you could create like a |
|
|
|
00:23:28.110 --> 00:23:29.502 |
|
set of rules like that, right? |
|
|
|
00:23:29.502 --> 00:23:32.161 |
|
So say if it meets these criteria then |
|
|
|
00:23:32.161 --> 00:23:34.819 |
|
it's one class and if it meets these |
|
|
|
00:23:34.820 --> 00:23:37.070 |
|
other criteria it's another class. |
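
NOTE

That kind of hand-written rule as code, using the illustrative thresholds from the example above:

    def classify(x1, x2):
        if x2 < 0.6 and 2 < x1 < 7:
            return "O"
        return "X"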
|
|
|
00:23:40.160 --> 00:23:42.930 |
|
And So what we're going to learn today |
|
|
|
00:23:42.930 --> 00:23:45.280 |
|
is how we can try to learn these rules |
|
|
|
00:23:45.280 --> 00:23:48.220 |
|
automatically, even if we have a lot of |
|
|
|
00:23:48.220 --> 00:23:50.520 |
|
features in more complicated kinds of |
|
|
|
00:23:50.520 --> 00:23:51.250 |
|
predictions. |
|
|
|
00:23:52.920 --> 00:23:55.108 |
|
So this is basically the idea of |
|
|
|
00:23:55.108 --> 00:23:55.744 |
|
Decision trees. |
|
|
|
00:23:55.744 --> 00:23:58.490 |
|
So we all use Decision trees in our own |
|
|
|
00:23:58.490 --> 00:24:00.264 |
|
life, even if we don't think about it |
|
|
|
00:24:00.264 --> 00:24:00.812 |
|
that way. |
|
|
|
00:24:00.812 --> 00:24:02.710 |
|
Like you often say, if this happens, |
|
|
|
00:24:02.710 --> 00:24:04.121 |
|
I'll do that, and if it doesn't, then |
|
|
|
00:24:04.121 --> 00:24:05.029 |
|
I'll do this other thing. |
|
|
|
00:24:05.030 --> 00:24:06.685 |
|
That's like a Decision tree, right? |
|
|
|
00:24:06.685 --> 00:24:10.400 |
|
You had some kind of criteria, and |
|
|
|
00:24:10.400 --> 00:24:12.306 |
|
depending on the outcome of that |
|
|
|
00:24:12.306 --> 00:24:13.886 |
|
criteria, you do one thing. |
|
|
|
00:24:13.886 --> 00:24:16.680 |
|
And if it's the other way, if you get |
|
|
|
00:24:16.680 --> 00:24:17.900 |
|
the other outcome, then you would be |
|
|
|
00:24:17.900 --> 00:24:18.920 |
|
doing the other thing. |
|
|
|
00:24:18.920 --> 00:24:20.310 |
|
And maybe you have a whole chain of |
|
|
|
00:24:20.310 --> 00:24:22.090 |
|
them if I. |
|
|
|
00:24:22.250 --> 00:24:23.700 |
|
If I have time today, I'm going to go |
|
|
|
00:24:23.700 --> 00:24:25.990 |
|
to the grocery store, but if the car is |
|
|
|
00:24:25.990 --> 00:24:27.330 |
|
not there then I'm going to do this |
|
|
|
00:24:27.330 --> 00:24:28.480 |
|
instead and so on. |
|
|
|
00:24:29.850 --> 00:24:32.370 |
|
All right, so in Decision trees, the |
|
|
|
00:24:32.370 --> 00:24:34.500 |
|
Training is essentially to iteratively |
|
|
|
00:24:34.500 --> 00:24:37.340 |
|
Choose the attribute and a
|
|
|
00:24:37.340 --> 00:24:40.080 |
|
split value that will best separate |
|
|
|
00:24:40.080 --> 00:24:41.530 |
|
your classes from each other. |
|
|
|
00:24:42.920 --> 00:24:44.610 |
|
Or if you're doing continuous values |
|
|
|
00:24:44.610 --> 00:24:47.010 |
|
that kind of group things into similar |
|
|
|
00:24:47.010 --> 00:24:48.240 |
|
prediction values. |
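
NOTE

A sketch of what "choose the best split" can look like for one feature, scoring candidate thresholds by a weighted Gini impurity (a common criterion, not necessarily the one on the slides):

    import numpy as np

    def gini(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    def best_threshold(values, labels):
        best_t, best_score = None, np.inf
        for t in np.unique(values):
            left, right = labels[values <= t], labels[values > t]
            if len(left) == 0 or len(right) == 0:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if score < best_score:
                best_t, best_score = t, score
        return best_t, best_score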
|
|
|
00:24:49.480 --> 00:24:52.440 |
|
So for example you might say if these |
|
|
|
00:24:52.440 --> 00:24:56.600 |
|
red circles are oranges and these |
|
|
|
00:24:56.600 --> 00:24:59.264 |
|
triangles are lemons, where there |
|
|
|
00:24:59.264 --> 00:25:01.090 |
|
oranges and lemons are plotted |
|
|
|
00:25:01.090 --> 00:25:02.250 |
|
according to their width and their |
|
|
|
00:25:02.250 --> 00:25:02.750 |
|
height. |
|
|
|
00:25:02.750 --> 00:25:07.726 |
|
You might decide well if it's less than |
|
|
|
00:25:07.726 --> 00:25:10.170 |
|
6.5 centimeters then. |
|
|
|
00:25:10.170 --> 00:25:12.690 |
|
Or I'll use greater since it's there if |
|
|
|
00:25:12.690 --> 00:25:14.190 |
|
it's greater than 6.5 centimeters. |
|
|
|
00:25:15.450 --> 00:25:17.267 |
|
Then I'm going to split it into this |
|
|
|
00:25:17.267 --> 00:25:19.410 |
|
section where it's like mostly oranges |
|
|
|
00:25:19.410 --> 00:25:22.110 |
|
and if it's less than 6.5 centimeters |
|
|
|
00:25:22.110 --> 00:25:24.395 |
|
width, then I'll split it into this |
|
|
|
00:25:24.395 --> 00:25:26.220 |
|
section where it's mostly lemons. |
|
|
|
00:25:27.250 --> 00:25:30.560 |
|
Neither of these is a perfect split, still.
|
|
|
00:25:30.560 --> 00:25:32.910 |
|
So then I go further and say if it was |
|
|
|
00:25:32.910 --> 00:25:35.309 |
|
on this side of the split, if it's |
|
|
|
00:25:35.310 --> 00:25:37.915 |
|
greater than 9.5 centimeters height then
|
|
|
00:25:37.915 --> 00:25:40.350 |
|
it's a lemon, and if it's less than |
|
|
|
00:25:40.350 --> 00:25:42.130 |
|
that then it's a. |
|
|
|
00:25:42.820 --> 00:25:43.760 |
|
Then it's an orange. |
|
|
|
00:25:44.900 --> 00:25:46.660 |
|
And now that's like a pretty confident |
|
|
|
00:25:46.660 --> 00:25:47.170 |
|
prediction. |
|
|
|
00:25:47.930 --> 00:25:49.610 |
|
And then if I'm on this side then I can |
|
|
|
00:25:49.610 --> 00:25:51.560 |
|
split it by height and say if it's less |
|
|
|
00:25:51.560 --> 00:25:51.990 |
|
than. |
|
|
|
00:25:53.690 --> 00:25:55.530 |
|
If it's greater than 6 centimeters then |
|
|
|
00:25:55.530 --> 00:25:57.714 |
|
it's a lemon, and if it's less than 6 |
|
|
|
00:25:57.714 --> 00:25:59.450 |
|
centimeters then it's an orange. |
|
|
|
00:25:59.450 --> 00:26:01.130 |
|
So you can like iteratively Choose a |
|
|
|
00:26:01.130 --> 00:26:03.180 |
|
test and then keep splitting the data. |
|
|
|
00:26:03.780 --> 00:26:06.510 |
|
And every time you choose a test, you choose
|
|
|
00:26:06.510 --> 00:26:09.510 |
|
another test that splits the data |
|
|
|
00:26:09.510 --> 00:26:10.910 |
|
further according to what you're trying |
|
|
|
00:26:10.910 --> 00:26:11.320 |
|
to predict. |
|
|
|
00:26:12.270 --> 00:26:14.890 |
|
Essentially, this method Combines
|
|
|
00:26:14.890 --> 00:26:16.760 |
|
feature selection and modeling with |
|
|
|
00:26:16.760 --> 00:26:17.410 |
|
prediction. |
|
|
|
00:26:18.670 --> 00:26:20.420 |
|
So at the end of this, you transform |
|
|
|
00:26:20.420 --> 00:26:22.940 |
|
what were two continuous values into
|
|
|
00:26:22.940 --> 00:26:24.770 |
|
these four discrete values. |
|
|
|
00:26:25.450 --> 00:26:27.360 |
|
Of different chunks, different |
|
|
|
00:26:27.360 --> 00:26:30.130 |
|
partitions of the feature space and for |
|
|
|
00:26:30.130 --> 00:26:31.350 |
|
each of those. |
|
|
|
00:26:32.420 --> 00:26:34.850 |
|
Each of those parts of the partition. |
|
|
|
00:26:35.810 --> 00:26:38.360 |
|
You make a prediction. |
|
|
|
00:26:39.240 --> 00:26:41.620 |
|
A partitioning is just when you take a |
|
|
|
00:26:41.620 --> 00:26:44.390 |
|
continuous space and divide it up into |
|
|
|
00:26:44.390 --> 00:26:46.850 |
|
different cells that cover the entire |
|
|
|
00:26:46.850 --> 00:26:47.400 |
|
space. |
|
|
|
00:26:47.400 --> 00:26:49.859 |
|
That's a partition where the cells |
|
|
|
00:26:49.860 --> 00:26:51.040 |
|
don't overlap with each other. |
|
|
|
00:26:54.340 --> 00:26:56.460 |
|
And then if you want to classify, once |
|
|
|
00:26:56.460 --> 00:26:57.940 |
|
you've trained your tree, you get some |
|
|
|
00:26:57.940 --> 00:26:59.450 |
|
new test sample and you want to know is |
|
|
|
00:26:59.450 --> 00:27:01.450 |
|
that a lemon or an orange kind of looks |
|
|
|
00:27:01.450 --> 00:27:01.920 |
|
in between. |
|
|
|
00:27:02.610 --> 00:27:05.295 |
|
So — is the width greater than 6.5
|
|
|
00:27:05.295 --> 00:27:05.740 |
|
centimeters? |
|
|
|
00:27:05.740 --> 00:27:06.185 |
|
No. |
|
|
|
00:27:06.185 --> 00:27:08.355 |
|
Is the height greater than 6 centimeters?
|
|
|
00:27:08.355 --> 00:27:08.690 |
|
No. |
|
|
|
00:27:08.690 --> 00:27:10.110 |
|
And so therefore it's an orange |
|
|
|
00:27:10.110 --> 00:27:10.970 |
|
according to your rule. |
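
NOTE

The walk through that tree written as nested ifs, with the thresholds from the lecture example (width 6.5 cm, heights 9.5 cm and 6 cm):

    def classify_fruit(width_cm, height_cm):
        if width_cm > 6.5:
            return "lemon" if height_cm > 9.5 else "orange"
        return "lemon" if height_cm > 6.0 else "orange"

    print(classify_fruit(6.0, 5.5))  # the in-between test sample comes out as orange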
|
|
|
00:27:13.260 --> 00:27:15.053 |
|
And you could take this tree and
|
|
|
00:27:15.053 --> 00:27:17.456 |
|
you could rewrite it as a set of rules, |
|
|
|
00:27:17.456 --> 00:27:20.560 |
|
like one rule is width greater than 6.5,
|
|
|
00:27:20.560 --> 00:27:23.478 |
|
height greater than 9.5, another rule |
|
|
|
00:27:23.478 --> 00:27:26.020 |
|
is width greater than 6.5, height less than
|
|
|
00:27:26.020 --> 00:27:27.330 |
|
9.5, and so on. |
|
|
|
00:27:27.330 --> 00:27:28.640 |
|
There's like 4 different rules |
|
|
|
00:27:28.640 --> 00:27:31.180 |
|
represented by this tree, and each rule |
|
|
|
00:27:31.180 --> 00:27:33.950 |
|
corresponds to some section of the |
|
|
|
00:27:33.950 --> 00:27:36.440 |
|
feature space, and each rule yields |
|
|
|
00:27:36.440 --> 00:27:37.140 |
|
some prediction. |
|
|
|
00:27:40.950 --> 00:27:44.020 |
|
So here's another example with some |
|
|
|
00:27:44.020 --> 00:27:45.580 |
|
discrete inputs. |
|
|
|
00:27:45.580 --> 00:27:48.030 |
|
So here the prediction problem is to |
|
|
|
00:27:48.030 --> 00:27:49.955 |
|
tell whether or not somebody's going to |
|
|
|
00:27:49.955 --> 00:27:50.350 |
|
wait. |
|
|
|
00:27:50.350 --> 00:27:52.440 |
|
If they go to a restaurant and they're |
|
|
|
00:27:52.440 --> 00:27:54.173 |
|
told they have to wait, so do they wait |
|
|
|
00:27:54.173 --> 00:27:55.160 |
|
or do they leave? |
|
|
|
00:27:56.290 --> 00:27:58.170 |
|
And the features are things like |
|
|
|
00:27:58.170 --> 00:28:00.160 |
|
whether there's an alternative nearby, |
|
|
|
00:28:00.160 --> 00:28:02.240 |
|
whether there's a bar they can wait at, |
|
|
|
00:28:02.240 --> 00:28:03.900 |
|
whether it's Friday or Saturday, |
|
|
|
00:28:03.900 --> 00:28:05.289 |
|
whether they're Hungry, whether the |
|
|
|
00:28:05.290 --> 00:28:07.106 |
|
restaurants full, what the price is, |
|
|
|
00:28:07.106 --> 00:28:08.740 |
|
whether it's raining, whether they had |
|
|
|
00:28:08.740 --> 00:28:10.560 |
|
a Reservation, what type of restaurant |
|
|
|
00:28:10.560 --> 00:28:12.900 |
|
it is, and the wait time.
|
|
|
00:28:12.900 --> 00:28:14.747 |
|
And these are all categorical, so the |
|
|
|
00:28:14.747 --> 00:28:16.100 |
|
wait time is split into different |
|
|
|
00:28:16.100 --> 00:28:16.540 |
|
chunks. |
|
|
|
00:28:20.660 --> 00:28:22.670 |
|
And so you could. |
|
|
|
00:28:24.110 --> 00:28:27.810 |
|
You could train a tree from these |
|
|
|
00:28:27.810 --> 00:28:29.820 |
|
categorical variables, and of course I |
|
|
|
00:28:29.820 --> 00:28:31.590 |
|
will tell you more about like how you |
|
|
|
00:28:31.590 --> 00:28:32.390 |
|
would learn this tree. |
|
|
|
00:28:33.960 --> 00:28:35.670 |
|
But you might have a tree like this |
|
|
|
00:28:35.670 --> 00:28:36.500 |
|
where you say. |
|
|
|
00:28:37.730 --> 00:28:39.770 |
|
First, are there are there people in |
|
|
|
00:28:39.770 --> 00:28:40.370 |
|
the restaurant? |
|
|
|
00:28:40.370 --> 00:28:41.800 |
|
Patrons means, like, is the restaurant
|
|
|
00:28:41.800 --> 00:28:42.684 |
|
full or not. |
|
|
|
00:28:42.684 --> 00:28:46.310 |
|
If it's not full, then you leave right |
|
|
|
00:28:46.310 --> 00:28:47.790 |
|
away because they're just being rude. |
|
|
|
00:28:47.790 --> 00:28:49.330 |
|
if they tell you you have to wait, I guess.
|
|
|
00:28:49.930 --> 00:28:52.140 |
|
If it's partly full then you'll wait, |
|
|
|
00:28:52.140 --> 00:28:54.080 |
|
and if it's full then you have
|
|
|
00:28:54.080 --> 00:28:55.680 |
|
to, like, consider further things.
|
|
|
00:28:55.680 --> 00:28:58.360 |
|
If the WaitEstimate is really short,
|
|
|
00:28:58.360 --> 00:28:58.990 |
|
then you wait. |
|
|
|
00:28:58.990 --> 00:28:59.660 |
|
Is it really long? |
|
|
|
00:28:59.660 --> 00:29:00.170 |
|
Then you don't. |
|
|
|
00:29:00.960 --> 00:29:03.290 |
|
Otherwise, are you hungry? |
|
|
|
00:29:03.290 --> 00:29:04.693 |
|
If you're not, then you'll wait. |
|
|
|
00:29:04.693 --> 00:29:06.600 |
|
If you are, then you keep thinking. |
|
|
|
00:29:06.600 --> 00:29:08.320 |
|
So you have like, all this series of |
|
|
|
00:29:08.320 --> 00:29:08.820 |
|
choices. |
|
|
|
00:29:10.350 --> 00:29:12.790 |
|
These trees in practice — like, if
|
|
|
00:29:12.790 --> 00:29:14.230 |
|
you were to use a Decision tree on |
|
|
|
00:29:14.230 --> 00:29:14.680 |
|
MNIST. |
|
|
|
00:29:15.810 --> 00:29:17.600 |
|
Where the features are pretty weak |
|
|
|
00:29:17.600 --> 00:29:19.510 |
|
individually, they're just like pixel |
|
|
|
00:29:19.510 --> 00:29:20.140 |
|
values. |
|
|
|
00:29:20.140 --> 00:29:21.610 |
|
You can imagine that this tree could |
|
|
|
00:29:21.610 --> 00:29:23.160 |
|
get really complicated and long. |
|
|
|
00:29:27.970 --> 00:29:28.390 |
|
Right. |
|
|
|
00:29:28.390 --> 00:29:31.840 |
|
So just to mostly restate:
|
|
|
00:29:32.450 --> 00:29:34.080 |
|
in a Decision tree,
|
|
|
00:29:34.080 --> 00:29:36.410 |
|
the internal nodes are Test Attributes,
|
|
|
00:29:36.410 --> 00:29:38.150 |
|
so it's some kind of like feature. |
|
|
|
00:29:38.150 --> 00:29:40.050 |
|
Attribute and feature are synonymous, |
|
|
|
00:29:40.050 --> 00:29:41.110 |
|
they're the same thing. |
|
|
|
00:29:41.830 --> 00:29:45.880 |
|
Some kind of feature attribute and. |
|
|
|
00:29:45.960 --> 00:29:47.420 |
|
And if it's a continuous attribute then |
|
|
|
00:29:47.420 --> 00:29:48.420 |
|
you have to have some kind of |
|
|
|
00:29:48.420 --> 00:29:53.420 |
|
threshold, so width greater than 6.5 or |
|
|
|
00:29:53.420 --> 00:29:54.650 |
|
is it raining or not? |
|
|
|
00:29:54.650 --> 00:29:56.050 |
|
Those are two examples of. |
|
|
|
00:29:56.740 --> 00:29:57.440 |
|
Of tests. |
|
|
|
00:29:58.370 --> 00:29:59.984 |
|
Then depending on the outcome of that |
|
|
|
00:29:59.984 --> 00:30:02.310 |
|
test, you split in different ways, and |
|
|
|
00:30:02.310 --> 00:30:03.860 |
|
when you're Training, you split all |
|
|
|
00:30:03.860 --> 00:30:05.935 |
|
your data according to that test, and |
|
|
|
00:30:05.935 --> 00:30:07.960 |
|
then you're going to solve again within |
|
|
|
00:30:07.960 --> 00:30:09.390 |
|
each of those nodes separately. |
|
|
|
00:30:10.480 --> 00:30:11.570 |
|
For the next Test. |
|
|
|
00:30:12.260 --> 00:30:14.110 |
|
Until you get to a leaf node, and at |
|
|
|
00:30:14.110 --> 00:30:16.125 |
|
the leaf node you provide an output or |
|
|
|
00:30:16.125 --> 00:30:18.532 |
|
a prediction, which could be, which in |
|
|
|
00:30:18.532 --> 00:30:20.480 |
|
this case is a class, in this |
|
|
|
00:30:20.480 --> 00:30:21.780 |
|
particular example whether it's a |
|
|
|
00:30:21.780 --> 00:30:22.540 |
|
Linear orange. |
|
|
|
00:30:25.060 --> 00:30:25.260 |
|
Yep. |
|
|
|
00:30:29.360 --> 00:30:31.850 |
|
So the question is how does a Decision
|
|
|
00:30:31.850 --> 00:30:34.700 |
|
tree account for anomalies — as in, like,
|
|
|
00:30:34.700 --> 00:30:36.480 |
|
mislabeled data or really weird |
|
|
|
00:30:36.480 --> 00:30:37.260 |
|
examples or? |
|
|
|
00:30:50.100 --> 00:30:52.370 |
|
So the question is, like, how does
|
|
|
00:30:52.370 --> 00:30:54.400 |
|
a Decision tree deal with weird or
|
|
|
00:30:54.400 --> 00:30:55.860 |
|
unlikely examples? |
|
|
|
00:30:55.860 --> 00:30:58.020 |
|
And that's a good question because one |
|
|
|
00:30:58.020 --> 00:31:00.200 |
|
of the things about a Decision tree is |
|
|
|
00:31:00.200 --> 00:31:01.510 |
|
that if you train it. |
|
|
|
00:31:02.350 --> 00:31:04.460 |
|
If you train it, if you train the full |
|
|
|
00:31:04.460 --> 00:31:06.560 |
|
tree, then you can always. |
|
|
|
00:31:06.560 --> 00:31:09.970 |
|
As long as the feature vectors for each |
|
|
|
00:31:09.970 --> 00:31:11.560 |
|
sample are unique, you can always get |
|
|
|
00:31:11.560 --> 00:31:13.470 |
|
perfect Classification accuracy.
|
|
|
00:31:13.470 --> 00:31:14.900 |
|
A tree has no bias. |
|
|
|
00:31:14.900 --> 00:31:16.580 |
|
You can always like fit your training |
|
|
|
00:31:16.580 --> 00:31:18.980 |
|
data perfectly because you just keep on |
|
|
|
00:31:18.980 --> 00:31:20.530 |
|
chopping it into smaller and smaller |
|
|
|
00:31:20.530 --> 00:31:22.070 |
|
bits until finally you get the answer.
|
|
|
00:31:22.800 --> 00:31:24.960 |
|
So as a result, that can be dangerous |
|
|
|
00:31:24.960 --> 00:31:26.767 |
|
because if you do have some unusual |
|
|
|
00:31:26.767 --> 00:31:29.100 |
|
examples, you can end up creating rules |
|
|
|
00:31:29.100 --> 00:31:31.410 |
|
based on those examples that don't |
|
|
|
00:31:31.410 --> 00:31:32.920 |
|
generalize well to new data.
|
|
|
00:31:33.640 --> 00:31:36.191 |
|
And so some things that you can do are |
|
|
|
00:31:36.191 --> 00:31:38.119 |
|
you can stop Training, stop Training |
|
|
|
00:31:38.120 --> 00:31:38.590 |
|
early. |
|
|
|
00:31:38.590 --> 00:31:40.440 |
|
So you can say I'm not going to split |
|
|
|
00:31:40.440 --> 00:31:42.530 |
|
once I only have 5 examples in my leaf
|
|
|
00:31:42.530 --> 00:31:43.990 |
|
node, I'm going to quit splitting and |
|
|
|
00:31:43.990 --> 00:31:45.460 |
|
I'll just output my best guess. |
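
NOTE

A sketch of "stop splitting early" in sklearn: cap how small a leaf can get (and optionally how deep the tree can grow), so a single odd example can't carve out its own rule:

    from sklearn.tree import DecisionTreeClassifier

    tree = DecisionTreeClassifier(min_samples_leaf=5, max_depth=10)
    # tree.fit(X_train, y_train) with your own data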
|
|
|
00:31:46.520 --> 00:31:47.070 |
|
|
|
|
|
00:31:47.990 --> 00:31:49.240 |
|
There's also like. |
|
|
|
00:31:52.250 --> 00:31:53.770 |
|
Probably on Tuesday. |
|
|
|
00:31:53.770 --> 00:31:54.810 |
|
Actually, I'm going to talk about |
|
|
|
00:31:54.810 --> 00:31:56.790 |
|
ensembles, which is ways of combining |
|
|
|
00:31:56.790 --> 00:31:58.770 |
|
many trees, which is another way of
|
|
|
00:31:58.770 --> 00:31:59.840 |
|
getting rid of this problem. |
|
|
|
00:32:01.360 --> 00:32:01.750 |
|
Question. |
|
|
|
00:32:09.850 --> 00:32:11.190 |
|
That's a good question too. |
|
|
|
00:32:11.190 --> 00:32:12.890 |
|
So the question is whether Decision |
|
|
|
00:32:12.890 --> 00:32:14.500 |
|
trees are always binary. |
|
|
|
00:32:14.500 --> 00:32:17.880 |
|
So like in this example, it's not |
|
|
|
00:32:17.880 --> 00:32:21.640 |
|
binary — they're splitting, like, the
|
|
|
00:32:21.640 --> 00:32:23.250 |
|
Patrons attribute based on three
|
|
|
00:32:23.250 --> 00:32:23.780 |
|
values. |
|
|
|
00:32:24.510 --> 00:32:28.260 |
|
But typically they are binary. |
|
|
|
00:32:28.260 --> 00:32:29.030 |
|
So if you're. |
|
|
|
00:32:29.810 --> 00:32:32.277 |
|
If you're using continuous values, it |
|
|
|
00:32:32.277 --> 00:32:32.535 |
|
will. |
|
|
|
00:32:32.535 --> 00:32:34.410 |
|
It will almost always be binary, |
|
|
|
00:32:34.410 --> 00:32:35.440 |
|
because you could. |
|
|
|
00:32:35.440 --> 00:32:37.330 |
|
Even if you wanted to split continuous |
|
|
|
00:32:37.330 --> 00:32:40.260 |
|
variables into many different chunks, |
|
|
|
00:32:40.260 --> 00:32:42.350 |
|
you can do that through a sequence of |
|
|
|
00:32:42.350 --> 00:32:43.380 |
|
binary decisions. |
|
|
|
00:32:44.780 --> 00:32:47.620 |
|
In SK learn as well, their Decision |
|
|
|
00:32:47.620 --> 00:32:50.160 |
|
trees cannot deal with like multi |
|
|
|
00:32:50.160 --> 00:32:53.040 |
|
valued attributes and so you need to |
|
|
|
00:32:53.040 --> 00:32:55.000 |
|
convert them into binary attributes in |
|
|
|
00:32:55.000 --> 00:32:56.470 |
|
order to use sklearn. |
|
|
|
00:32:57.350 --> 00:32:59.470 |
|
And I think often that's done as a |
|
|
|
00:32:59.470 --> 00:33:01.590 |
|
design Decision, because otherwise like |
|
|
|
00:33:01.590 --> 00:33:03.050 |
|
some features will be like |
|
|
|
00:33:03.050 --> 00:33:04.780 |
|
intrinsically more powerful than other |
|
|
|
00:33:04.780 --> 00:33:06.690 |
|
features if they create like more |
|
|
|
00:33:06.690 --> 00:33:07.340 |
|
splits. |
|
|
|
00:33:07.340 --> 00:33:09.160 |
|
So it can cause like a bias in your |
|
|
|
00:33:09.160 --> 00:33:09.990 |
|
feature selection. |
|
|
|
00:33:10.740 --> 00:33:12.990 |
|
So they don't have to be binary, but |
|
|
|
00:33:12.990 --> 00:33:15.160 |
|
it's a common common setting. |
|
|
|
00:33:21.880 --> 00:33:24.480 |
|
Alright, so the Training 4 Decision |
|
|
|
00:33:24.480 --> 00:33:26.760 |
|
tree again without yet getting into the |
|
|
|
00:33:26.760 --> 00:33:27.000 |
|
math. |
|
|
|
00:33:27.710 --> 00:33:30.935 |
|
Is Recursively for each node in the |
|
|
|
00:33:30.935 --> 00:33:32.590 |
|
tree, if the labels and the node are |
|
|
|
00:33:32.590 --> 00:33:33.120 |
|
mixed. |
|
|
|
00:33:33.120 --> 00:33:35.256 |
|
So to start with, we're at the root of the |
|
|
|
00:33:35.256 --> 00:33:38.030 |
|
tree and we have all this data, and so |
|
|
|
00:33:38.030 --> 00:33:39.676 |
|
essentially there's just right now some |
|
|
|
00:33:39.676 --> 00:33:41.025 |
|
probability that it's an O, some |
|
|
|
00:33:41.025 --> 00:33:41.950 |
|
probability that it's an X. |
|
|
|
00:33:41.950 --> 00:33:43.900 |
|
Those probabilities are close to 5050. |
|
|
|
00:33:45.530 --> 00:33:47.410 |
|
Then I'm going to choose some attribute |
|
|
|
00:33:47.410 --> 00:33:50.020 |
|
and split the values based on the data |
|
|
|
00:33:50.020 --> 00:33:51.200 |
|
that reaches that node. |
|
|
|
00:33:52.310 --> 00:33:54.310 |
|
So here I Choose this attribute the |
|
|
|
00:33:54.310 --> 00:33:55.430 |
|
tree I'm creating up there. |
|
|
|
00:33:56.110 --> 00:33:58.060 |
|
X2 is less than .6. |
|
|
|
00:34:00.630 --> 00:34:05.310 |
|
If it's less than .6 then I go down one |
|
|
|
00:34:05.310 --> 00:34:07.067 |
|
branch and if it's greater than I go |
|
|
|
00:34:07.067 --> 00:34:08.210 |
|
down the other branch. |
|
|
|
00:34:08.210 --> 00:34:10.870 |
|
So now then I can now start making |
|
|
|
00:34:10.870 --> 00:34:13.440 |
|
decisions separately about this region |
|
|
|
00:34:13.440 --> 00:34:14.260 |
|
in this region. |
|
|
|
00:34:15.910 --> 00:34:19.200 |
|
So then I Choose another node and I say |
|
|
|
00:34:19.200 --> 00:34:21.660 |
|
if X1 is less than .7. |
|
|
|
00:34:22.630 --> 00:34:24.360 |
|
So I create this split and this only |
|
|
|
00:34:24.360 --> 00:34:25.490 |
|
pertains to the data. |
|
|
|
00:34:25.490 --> 00:34:27.570 |
|
Now that came down the first node so |
|
|
|
00:34:27.570 --> 00:34:28.989 |
|
it's this side of the data. |
|
|
|
00:34:29.710 --> 00:34:31.292 |
|
So if it's over here, then it's an O, |
|
|
|
00:34:31.292 --> 00:34:33.220 |
|
if it's over here, then it's an X and |
|
|
|
00:34:33.220 --> 00:34:34.620 |
|
Now I don't need to create anymore |
|
|
|
00:34:34.620 --> 00:34:36.690 |
|
Decision nodes for this whole region of |
|
|
|
00:34:36.690 --> 00:34:38.870 |
|
space because I have perfect |
|
|
|
00:34:38.870 --> 00:34:39.630 |
|
Classification. |
|
|
|
00:34:40.760 --> 00:34:43.010 |
|
Then I go to my top side. |
|
|
|
00:34:43.730 --> 00:34:45.390 |
|
And I can make another split. |
|
|
|
00:34:45.390 --> 00:34:47.760 |
|
So here there's actually more than one |
|
|
|
00:34:47.760 --> 00:34:48.015 |
|
choice. |
|
|
|
00:34:48.015 --> 00:34:49.960 |
|
I think that's like kind of equally |
|
|
|
00:34:49.960 --> 00:34:51.460 |
|
good, but. |
|
|
|
00:34:51.570 --> 00:34:56.230 |
|
Again, say if X2 is less than .8, then |
|
|
|
00:34:56.230 --> 00:34:57.960 |
|
it goes down here where I'm still |
|
|
|
00:34:57.960 --> 00:34:58.250 |
|
unsure. |
|
|
|
00:34:58.250 --> 00:35:00.080 |
|
If it's greater than .8, then it's |
|
|
|
00:35:00.080 --> 00:35:00.980 |
|
definitely a red X. |
|
|
|
00:35:03.260 --> 00:35:05.120 |
|
And then I can keep doing that until I |
|
|
|
00:35:05.120 --> 00:35:07.190 |
|
finally have a perfect Classification |
|
|
|
00:35:07.190 --> 00:35:08.030 |
|
in the training data. |
|
|
|
00:35:08.810 --> 00:35:10.000 |
|
So that's the full tree. |
|
|
|
00:35:11.070 --> 00:35:13.830 |
|
And you could stop early; you could |
|
|
|
00:35:13.830 --> 00:35:15.739 |
|
say I'm not going to go past like 3 |
|
|
|
00:35:15.740 --> 00:35:18.310 |
|
levels, or that I'm going to stop |
|
|
|
00:35:18.310 --> 00:35:20.910 |
|
splitting once my leaf node doesn't |
|
|
|
00:35:20.910 --> 00:35:23.210 |
|
have more than five examples. |
|
|
|
00:35:39.470 --> 00:35:41.560 |
|
Well, the question was does the first |
|
|
|
00:35:41.560 --> 00:35:42.320 |
|
split matter? |
|
|
|
00:35:42.320 --> 00:35:43.929 |
|
So I guess there's two parts to that. |
|
|
|
00:35:43.930 --> 00:35:45.880 |
|
One is that I will tell you how we do |
|
|
|
00:35:45.880 --> 00:35:46.980 |
|
this computationally. |
|
|
|
00:35:46.980 --> 00:35:48.780 |
|
So you try to greedily find like the |
|
|
|
00:35:48.780 --> 00:35:49.900 |
|
best split every time. |
|
|
|
00:35:50.990 --> 00:35:53.530 |
|
And the other thing is that finding the |
|
|
|
00:35:53.530 --> 00:35:57.290 |
|
minimum size tree is like a |
|
|
|
00:35:57.290 --> 00:35:59.540 |
|
computationally hard problem. |
|
|
|
00:36:00.540 --> 00:36:01.766 |
|
So it's infeasible. |
|
|
|
00:36:01.766 --> 00:36:04.530 |
|
So you end up with a greedy solution |
|
|
|
00:36:04.530 --> 00:36:06.020 |
|
where for every node you're choosing |
|
|
|
00:36:06.020 --> 00:36:08.200 |
|
the best split for that node. |
|
|
|
00:36:08.200 --> 00:36:10.045 |
|
But that doesn't necessarily give you |
|
|
|
00:36:10.045 --> 00:36:11.680 |
|
the shortest tree overall, because you |
|
|
|
00:36:11.680 --> 00:36:13.020 |
|
don't know like what kinds of splits |
|
|
|
00:36:13.020 --> 00:36:14.250 |
|
will be available to you later. |
|
|
|
00:36:16.710 --> 00:36:19.050 |
|
So it does matter, but you have like |
|
|
|
00:36:19.050 --> 00:36:20.630 |
|
there's an algorithm for doing it in a |
|
|
|
00:36:20.630 --> 00:36:21.550 |
|
decent way, yeah. |
|
|
|
00:36:55.320 --> 00:36:55.860 |
|
|
|
|
|
00:36:57.660 --> 00:36:59.080 |
|
There have well. |
|
|
|
00:37:01.160 --> 00:37:02.650 |
|
How will you know that it will work for |
|
|
|
00:37:02.650 --> 00:37:03.320 |
|
like new data? |
|
|
|
00:37:05.000 --> 00:37:09.209 |
|
So basically if you want to know, you |
|
|
|
00:37:09.210 --> 00:37:10.740 |
|
do always want to know, you always want |
|
|
|
00:37:10.740 --> 00:37:12.420 |
|
to know, right, if the model |
|
|
|
00:37:12.420 --> 00:37:13.620 |
|
that you learned is going to work for |
|
|
|
00:37:13.620 --> 00:37:14.540 |
|
new data. |
|
|
|
00:37:14.540 --> 00:37:16.370 |
|
And so that's why typically you would |
|
|
|
00:37:16.370 --> 00:37:18.030 |
|
carve off, if you have some Training |
|
|
|
00:37:18.030 --> 00:37:19.800 |
|
set, you'd carve off a validation set. |
|
|
|
00:37:20.450 --> 00:37:22.380 |
|
And you would train it say with like |
|
|
|
00:37:22.380 --> 00:37:25.040 |
|
70% of the Training examples and test |
|
|
|
00:37:25.040 --> 00:37:27.850 |
|
it on the 30% of the held out Samples? |
|
|
|
00:37:28.530 --> 00:37:30.040 |
|
And then those held out Samples |
|
|
|
00:37:30.040 --> 00:37:32.170 |
|
will give you an estimate of how well |
|
|
|
00:37:32.170 --> 00:37:33.260 |
|
your method works. |
|
|
|
00:37:33.260 --> 00:37:35.250 |
|
And so then like if you find for |
|
|
|
00:37:35.250 --> 00:37:37.626 |
|
example that I trained a full tree and |
|
|
|
00:37:37.626 --> 00:37:39.850 |
|
of course I got like 0% Training error, |
|
|
|
00:37:39.850 --> 00:37:41.850 |
|
but my Test error is like 40%. |
|
|
|
00:37:42.590 --> 00:37:44.990 |
|
Then you would probably say maybe I |
|
|
|
00:37:44.990 --> 00:37:46.803 |
|
should try Training a shorter tree and |
|
|
|
00:37:46.803 --> 00:37:48.930 |
|
then you can like retrain it with some |
|
|
|
00:37:48.930 --> 00:37:51.120 |
|
constraints and then test it again on |
|
|
|
00:37:51.120 --> 00:37:52.755 |
|
your validation set and Choose like |
|
|
|
00:37:52.755 --> 00:37:53.830 |
|
your Parameters that way. |
|
|
|
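NOTE
A sketch of the 70/30 validation idea just described, assuming scikit-learn; the dataset is a stand-in and the candidate depths are arbitrary.

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

    for depth in [None, 2, 4, 8]:            # None grows the full tree
        tree = DecisionTreeClassifier(max_depth=depth, random_state=0)
        tree.fit(X_tr, y_tr)
        train_err = 1 - tree.score(X_tr, y_tr)
        val_err = 1 - tree.score(X_val, y_val)
        print(depth, round(train_err, 3), round(val_err, 3))
    # the full tree gets ~0 training error; pick the depth with the lowest validation error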
00:37:54.860 --> 00:37:57.581 |
|
There's also I'll talk about most |
|
|
|
00:37:57.581 --> 00:37:59.140 |
|
likely, most likely this. |
|
|
|
00:37:59.140 --> 00:38:01.020 |
|
I was planning to do it Thursday, but |
|
|
|
00:38:01.020 --> 00:38:02.187 |
|
I'll probably do it next Tuesday. |
|
|
|
00:38:02.187 --> 00:38:04.580 |
|
I'll talk about ensembles, including |
|
|
|
00:38:04.580 --> 00:38:06.622 |
|
random forests, and those are like kind |
|
|
|
00:38:06.622 --> 00:38:09.150 |
|
of like brain dead always work methods |
|
|
|
00:38:09.150 --> 00:38:11.080 |
|
that combine a lot of trees and are |
|
|
|
00:38:11.080 --> 00:38:13.439 |
|
really reliable whether you have a lot |
|
|
|
00:38:13.439 --> 00:38:16.087 |
|
of data or well, you kind of need data. |
|
|
|
00:38:16.087 --> 00:38:17.665 |
|
But whether you have a lot of features |
|
|
|
00:38:17.665 --> 00:38:19.850 |
|
or only a few features, they always work. |
|
|
|
00:38:20.480 --> 00:38:21.480 |
|
They always work pretty well. |
|
|
|
00:38:23.820 --> 00:38:25.950 |
|
Right, so in prediction then you just |
|
|
|
00:38:25.950 --> 00:38:27.560 |
|
basically descend the tree, so you |
|
|
|
00:38:27.560 --> 00:38:29.920 |
|
check the conditions: is X2 greater |
|
|
|
00:38:29.920 --> 00:38:32.370 |
|
than .6 blah blah blah blah blah until |
|
|
|
00:38:32.370 --> 00:38:33.750 |
|
you find yourself in a leaf node. |
|
|
|
00:38:34.380 --> 00:38:36.630 |
|
So for example, if I have this data |
|
|
|
00:38:36.630 --> 00:38:38.500 |
|
point and I'm trying to classify it, I |
|
|
|
00:38:38.500 --> 00:38:40.902 |
|
would end up following these rules down |
|
|
|
00:38:40.902 --> 00:38:44.290 |
|
to down to the leaf node of. |
|
|
|
00:38:45.960 --> 00:38:47.418 |
|
Yeah, like right over here, right? |
|
|
|
00:38:47.418 --> 00:38:50.158 |
|
X2 is less than .6 and X1 is less than |
|
|
|
00:38:50.158 --> 00:38:50.460 |
|
.7. |
|
|
|
00:38:51.260 --> 00:38:52.740 |
|
And so that's going to be an O. |
|
|
|
00:38:53.860 --> 00:38:56.500 |
|
And if I am over here then I end up |
|
|
|
00:38:56.500 --> 00:38:59.420 |
|
following going down to here to here. |
|
|
|
00:39:00.480 --> 00:39:03.020 |
|
To here to here and I end up in this |
|
|
|
00:39:03.020 --> 00:39:07.299 |
|
leaf node and so it's an X and it |
|
|
|
00:39:07.300 --> 00:39:09.395 |
|
doesn't matter like where it falls in |
|
|
|
00:39:09.395 --> 00:39:10.580 |
|
this part of the space. |
|
|
|
00:39:10.580 --> 00:39:11.700 |
|
Usually this isn't like. |
|
|
|
00:39:12.390 --> 00:39:13.770 |
|
Even something you necessarily |
|
|
|
00:39:13.770 --> 00:39:15.520 |
|
visualize, but. |
|
|
|
00:39:16.060 --> 00:39:18.025 |
|
But it's worth noting that even parts |
|
|
|
00:39:18.025 --> 00:39:20.020 |
|
of your feature space that are kind of |
|
|
|
00:39:20.020 --> 00:39:22.640 |
|
far away from any Example can still get |
|
|
|
00:39:22.640 --> 00:39:24.360 |
|
classified by this Decision tree. |
|
|
|
00:39:25.070 --> 00:39:27.670 |
|
And it's not necessarily the Nearest |
|
|
|
00:39:27.670 --> 00:39:28.450 |
|
neighbor Decision. |
|
|
|
00:39:28.450 --> 00:39:31.186 |
|
Like this star here is actually closer |
|
|
|
00:39:31.186 --> 00:39:33.390 |
|
to the X's than it is to the O's, but |
|
|
|
00:39:33.390 --> 00:39:35.010 |
|
it would still be an O because it's on |
|
|
|
00:39:35.010 --> 00:39:36.050 |
|
that side of the boundary. |
|
|
|
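NOTE
The tree from the walkthrough written out as nested ifs, just to show that prediction is a sequence of threshold checks. The thresholds (.6, .7, .8) are the ones sketched in the lecture; the branch the lecture keeps splitting is left as a placeholder, and the test point is made up.

    def predict(x1, x2):
        if x2 < 0.6:
            return "O" if x1 < 0.7 else "X"
        if x2 >= 0.8:
            return "X"
        return "?"  # region the lecture would keep splitting further

    print(predict(0.3, 0.2))  # x2 < .6 and x1 < .7, so this lands in the "O" leaf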
00:39:40.650 --> 00:39:42.350 |
|
So the key question is, how do you |
|
|
|
00:39:42.350 --> 00:39:45.810 |
|
choose what attribute to split and |
|
|
|
00:39:45.810 --> 00:39:46.384 |
|
where to split? |
|
|
|
00:39:46.384 --> 00:39:48.390 |
|
So how do you decide what test you're |
|
|
|
00:39:48.390 --> 00:39:50.000 |
|
going to use for a given node? |
|
|
|
00:39:50.920 --> 00:39:53.615 |
|
And so let's take this example. |
|
|
|
00:39:53.615 --> 00:39:56.290 |
|
So here I've got some table of features |
|
|
|
00:39:56.290 --> 00:39:57.180 |
|
and predictions. |
|
|
|
00:39:58.020 --> 00:39:59.010 |
|
And if. |
|
|
|
00:40:00.410 --> 00:40:02.280 |
|
And if I were to split, these are |
|
|
|
00:40:02.280 --> 00:40:04.570 |
|
binary features so they just have two |
|
|
|
00:40:04.570 --> 00:40:06.430 |
|
values, true or false I guess. |
|
|
|
00:40:07.400 --> 00:40:07.940 |
|
If. |
|
|
|
00:40:09.620 --> 00:40:12.440 |
|
If I split based on X1 and I go in One |
|
|
|
00:40:12.440 --> 00:40:14.570 |
|
Direction, then it's all true. |
|
|
|
00:40:15.200 --> 00:40:17.585 |
|
The prediction is true and if I go in |
|
|
|
00:40:17.585 --> 00:40:19.920 |
|
the other direction then 3/4 of the |
|
|
|
00:40:19.920 --> 00:40:21.080 |
|
time the prediction is false. |
|
|
|
00:40:22.810 --> 00:40:26.343 |
|
If I split based on X2, then 3/4 of the |
|
|
|
00:40:26.343 --> 00:40:27.948 |
|
time the prediction is true |
|
|
|
00:40:27.948 --> 00:40:31.096 |
|
if X2 is true, and 50% of the time the |
|
|
|
00:40:31.096 --> 00:40:32.819 |
|
prediction is false if X2 is false. |
|
|
|
00:40:33.530 --> 00:40:36.300 |
|
So which of these features is a better |
|
|
|
00:40:36.300 --> 00:40:37.400 |
|
Test? |
|
|
|
00:40:39.550 --> 00:40:41.530 |
|
So how many people think that the left |
|
|
|
00:40:41.530 --> 00:40:42.530 |
|
is a better Test? |
|
|
|
00:40:43.790 --> 00:40:45.070 |
|
How many people think the right is |
|
|
|
00:40:45.070 --> 00:40:45.730 |
|
a better Test. |
|
|
|
00:40:46.840 --> 00:40:48.380 |
|
Right the left is a better Test |
|
|
|
00:40:48.380 --> 00:40:48.950 |
|
because. |
|
|
|
00:40:50.620 --> 00:40:53.380 |
|
Because my uncertainty is greatly |
|
|
|
00:40:53.380 --> 00:40:54.990 |
|
reduced on the left side. |
|
|
|
00:40:54.990 --> 00:40:58.750 |
|
So initially, initially I had like a |
|
|
|
00:40:58.750 --> 00:41:01.280 |
|
5/8 chance of getting it right if I |
|
|
|
00:41:01.280 --> 00:41:02.280 |
|
just guessed true. |
|
|
|
00:41:02.910 --> 00:41:06.706 |
|
But if I know X1, then I've got a 100% |
|
|
|
00:41:06.706 --> 00:41:08.600 |
|
chance of getting it right, at least in |
|
|
|
00:41:08.600 --> 00:41:09.494 |
|
the training data. |
|
|
|
00:41:09.494 --> 00:41:13.338 |
|
If I know that X1 is true, and I've got |
|
|
|
00:41:13.338 --> 00:41:15.132 |
|
a 3/4 chance of getting it right if I |
|
|
|
00:41:15.132 --> 00:41:16.280 |
|
know that X1 is false. |
|
|
|
00:41:16.280 --> 00:41:19.135 |
|
So X1 tells me a lot about the |
|
|
|
00:41:19.135 --> 00:41:19.572 |
|
prediction. |
|
|
|
00:41:19.572 --> 00:41:22.035 |
|
It greatly reduces my uncertainty about |
|
|
|
00:41:22.035 --> 00:41:22.890 |
|
the prediction. |
|
|
|
00:41:24.510 --> 00:41:26.412 |
|
And to quantify this, we need to |
|
|
|
00:41:26.412 --> 00:41:28.560 |
|
quantify uncertainty and then be able |
|
|
|
00:41:28.560 --> 00:41:32.350 |
|
to measure how much a certain feature |
|
|
|
00:41:32.350 --> 00:41:33.950 |
|
reduces our uncertainty in the |
|
|
|
00:41:33.950 --> 00:41:34.720 |
|
prediction. |
|
|
|
00:41:34.720 --> 00:41:36.800 |
|
And that's called the information gain. |
|
|
|
00:41:40.470 --> 00:41:44.540 |
|
So to quantify the uncertainty, I'll |
|
|
|
00:41:44.540 --> 00:41:45.595 |
|
use these two examples. |
|
|
|
00:41:45.595 --> 00:41:47.790 |
|
So imagine that you're flipping a coin. |
|
|
|
00:41:47.790 --> 00:41:50.150 |
|
These are like heads and tails, or |
|
|
|
00:41:50.150 --> 00:41:51.510 |
|
represent them as zeros and ones. |
|
|
|
00:41:52.180 --> 00:41:54.820 |
|
And so one time I've got two different |
|
|
|
00:41:54.820 --> 00:41:56.186 |
|
sequences, let's say two different |
|
|
|
00:41:56.186 --> 00:41:57.740 |
|
coins, and one of the coins. |
|
|
|
00:41:57.740 --> 00:42:00.120 |
|
It's a biased coin, so I end up with |
|
|
|
00:42:00.120 --> 00:42:03.330 |
|
zeros or heads like 16 out of 18 times. |
|
|
|
00:42:04.250 --> 00:42:06.520 |
|
And for the other Coin I get |
|
|
|
00:42:06.520 --> 00:42:09.400 |
|
closer to 50/50: 8 out of |
|
|
|
00:42:10.050 --> 00:42:12.390 |
|
18 times I get heads so. |
|
|
|
00:42:13.530 --> 00:42:17.520 |
|
Which of these has higher uncertainty? |
|
|
|
00:42:18.540 --> 00:42:19.730 |
|
The left or the right? |
|
|
|
00:42:21.330 --> 00:42:22.580 |
|
Right, correct. |
|
|
|
00:42:22.580 --> 00:42:23.070 |
|
The right |
|
|
|
00:42:23.070 --> 00:42:24.900 |
|
Has a lot higher uncertainty. |
|
|
|
00:42:24.900 --> 00:42:27.370 |
|
So with that Coin, I really don't |
|
|
|
00:42:27.370 --> 00:42:28.470 |
|
know if it's going to be heads or |
|
|
|
00:42:28.470 --> 00:42:30.860 |
|
tails, but on the left side, I'm pretty |
|
|
|
00:42:30.860 --> 00:42:31.820 |
|
sure it's going to be heads. |
|
|
|
00:42:32.590 --> 00:42:33.360 |
|
Or zeros. |
|
|
|
00:42:34.720 --> 00:42:36.770 |
|
So we can measure that with this |
|
|
|
00:42:36.770 --> 00:42:38.645 |
|
function called Entropy. |
|
|
|
00:42:38.645 --> 00:42:41.350 |
|
So the entropy is a measure of |
|
|
|
00:42:41.350 --> 00:42:42.030 |
|
uncertainty. |
|
|
|
00:42:42.960 --> 00:42:45.740 |
|
And it's defined as the negative sum |
|
|
|
00:42:45.740 --> 00:42:48.070 |
|
over all the values of some variable of |
|
|
|
00:42:48.070 --> 00:42:50.220 |
|
the probability of that value. |
|
|
|
00:42:51.020 --> 00:42:53.490 |
|
Times the log probability of that value, |
|
|
|
00:42:53.490 --> 00:42:56.470 |
|
and people usually sometimes use like |
|
|
|
00:42:56.470 --> 00:42:57.520 |
|
log base 2. |
|
|
|
00:42:58.630 --> 00:43:00.700 |
|
Just because that way the Entropy |
|
|
|
00:43:00.700 --> 00:43:02.550 |
|
ranges from zero to 1 if you have |
|
|
|
00:43:02.550 --> 00:43:03.490 |
|
binary variables. |
|
|
|
00:43:07.600 --> 00:43:10.820 |
|
So for this case here, the Entropy |
|
|
|
00:43:10.820 --> 00:43:13.280 |
|
would be minus 8/9, because eight out |
|
|
|
00:43:13.280 --> 00:43:14.600 |
|
of nine times it's zero. |
|
|
|
00:43:15.270 --> 00:43:17.300 |
|
Times log two of eight ninths. |
|
|
|
00:43:18.230 --> 00:43:21.210 |
|
Minus one ninth times log 2 of 1 ninth, |
|
|
|
00:43:21.210 --> 00:43:22.790 |
|
and that works out to about 1/2. |
|
|
|
00:43:24.370 --> 00:43:28.480 |
|
And over here the Entropy is minus 4/9, |
|
|
|
00:43:28.480 --> 00:43:30.270 |
|
because four out of nine times, |
|
|
|
00:43:30.270 --> 00:43:32.900 |
|
or 8 out of 18 times, it's a 0. |
|
|
|
00:43:34.410 --> 00:43:37.104 |
|
Times log 2 of 4/9, minus five ninths |
|
|
|
00:43:37.104 --> 00:43:39.010 |
|
times log two of five ninths, and |
|
|
|
00:43:39.010 --> 00:43:41.490 |
|
that's about .99. |
|
|
|
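NOTE
A quick check of the two entropy numbers above (a sketch, not code from the course).

    import math

    def entropy(probs):
        # H = -sum of p * log2(p) over the values, skipping zero probabilities
        return -sum(p * math.log2(p) for p in probs if p > 0)

    print(entropy([8/9, 1/9]))  # biased coin: about 0.50 bits
    print(entropy([4/9, 5/9]))  # near-fair coin: about 0.99 bits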
00:43:43.430 --> 00:43:45.280 |
|
The Entropy measure is how surprised |
|
|
|
00:43:45.280 --> 00:43:47.595 |
|
are we by some new value of this |
|
|
|
00:43:47.595 --> 00:43:47.830 |
|
Sequence? |
|
|
|
00:43:47.830 --> 00:43:50.123 |
|
How surprised are we likely to be, |
|
|
|
00:43:50.123 --> 00:43:52.460 |
|
or how much information does it convey |
|
|
|
00:43:52.460 --> 00:43:54.895 |
|
that we know that we're in this |
|
|
|
00:43:54.895 --> 00:43:56.974 |
|
Sequence, or more generally, that we |
|
|
|
00:43:56.974 --> 00:43:57.940 |
|
know some feature? |
|
|
|
00:44:01.100 --> 00:44:03.425 |
|
So this is just showing the Entropy if |
|
|
|
00:44:03.425 --> 00:44:05.450 |
|
the probability if you have a binary |
|
|
|
00:44:05.450 --> 00:44:06.340 |
|
variable X. |
|
|
|
00:44:07.110 --> 00:44:09.720 |
|
And the probability of X is 0, then |
|
|
|
00:44:09.720 --> 00:44:12.180 |
|
your Entropy is 0 because you always |
|
|
|
00:44:12.180 --> 00:44:15.127 |
|
know that if probability of X = true is |
|
|
|
00:44:15.127 --> 00:44:16.818 |
|
zero, that means that probability of X |
|
|
|
00:44:16.818 --> 00:44:18.210 |
|
equals false is 1. |
|
|
|
00:44:18.860 --> 00:44:20.530 |
|
And so therefore you have complete |
|
|
|
00:44:20.530 --> 00:44:22.470 |
|
confidence that the value will be |
|
|
|
00:44:22.470 --> 00:44:22.810 |
|
false. |
|
|
|
00:44:24.070 --> 00:44:27.740 |
|
If probability of X is true is 1, then |
|
|
|
00:44:27.740 --> 00:44:29.590 |
|
you have complete confidence that the |
|
|
|
00:44:29.590 --> 00:44:30.650 |
|
value will be true. |
|
|
|
00:44:31.440 --> 00:44:35.570 |
|
But if it's .5, then you have no |
|
|
|
00:44:35.570 --> 00:44:37.120 |
|
information about whether it's true or |
|
|
|
00:44:37.120 --> 00:44:39.520 |
|
false, and so you have maximum entropy, |
|
|
|
00:44:39.520 --> 00:44:40.190 |
|
which is 1. |
|
|
|
00:44:45.770 --> 00:44:47.280 |
|
So here's another example. |
|
|
|
00:44:47.280 --> 00:44:49.340 |
|
So suppose that we've got two |
|
|
|
00:44:49.340 --> 00:44:51.070 |
|
variables, whether it's raining or not, |
|
|
|
00:44:51.070 --> 00:44:52.220 |
|
and whether it's cloudy or not. |
|
|
|
00:44:52.820 --> 00:44:55.700 |
|
And we've observed 100 days and marked |
|
|
|
00:44:55.700 --> 00:44:57.260 |
|
down whether it's rainy or cloudy. |
|
|
|
00:44:58.870 --> 00:45:00.150 |
|
Rainy and or Cloudy. |
|
|
|
00:45:00.930 --> 00:45:01.500 |
|
|
|
|
|
00:45:02.600 --> 00:45:06.300 |
|
So 24 days it was raining and cloudy. |
|
|
|
00:45:06.300 --> 00:45:08.210 |
|
One day it was raining and not Cloudy. |
|
|
|
00:45:09.320 --> 00:45:11.244 |
|
25 days it was not raining and cloudy |
|
|
|
00:45:11.244 --> 00:45:13.409 |
|
and 50 days it was not raining and not |
|
|
|
00:45:13.409 --> 00:45:13.649 |
|
Cloudy. |
|
|
|
00:45:15.620 --> 00:45:17.980 |
|
The probabilities are just dividing by |
|
|
|
00:45:17.980 --> 00:45:18.766 |
|
the total there. |
|
|
|
00:45:18.766 --> 00:45:20.850 |
|
So the probability of Cloudy and not |
|
|
|
00:45:20.850 --> 00:45:22.630 |
|
raining is 25 out of 100. |
|
|
|
00:45:24.040 --> 00:45:26.660 |
|
And so I can also compute an Entropy of |
|
|
|
00:45:26.660 --> 00:45:27.855 |
|
this whole joint distribution. |
|
|
|
00:45:27.855 --> 00:45:31.150 |
|
So I can say that the entropy of X&Y |
|
|
|
00:45:31.150 --> 00:45:33.446 |
|
together is the negative sum over all |
|
|
|
00:45:33.446 --> 00:45:35.428 |
|
the values of X and over all the |
|
|
|
00:45:35.428 --> 00:45:36.419 |
|
different values of Y. |
|
|
|
00:45:37.060 --> 00:45:39.770 |
|
Of probability of X&Y times log 2, |
|
|
|
00:45:39.770 --> 00:45:41.920 |
|
probability of X&Y, and then that's all |
|
|
|
00:45:41.920 --> 00:45:42.880 |
|
just like written out here. |
|
|
|
00:45:43.650 --> 00:45:45.115 |
|
And then I get some Entropy value. |
|
|
|
00:45:45.115 --> 00:45:47.940 |
|
And sometimes people call those units |
|
|
|
00:45:47.940 --> 00:45:51.490 |
|
bits, so 1.56 bits because that's the |
|
|
|
00:45:51.490 --> 00:45:53.008 |
|
amount of, that's the number of bits |
|
|
|
00:45:53.008 --> 00:45:54.680 |
|
that I would need that I would expect |
|
|
|
00:45:54.680 --> 00:45:55.040 |
|
to. |
|
|
|
00:45:55.790 --> 00:45:57.780 |
|
Be able to like represent this. |
|
|
|
00:45:58.630 --> 00:45:59.700 |
|
This information. |
|
|
|
00:46:00.430 --> 00:46:04.395 |
|
If it were always not Cloudy and |
|
|
|
00:46:04.395 --> 00:46:04.990 |
|
not raining. |
|
|
|
00:46:05.850 --> 00:46:08.020 |
|
If it were 100% of the time not Cloudy |
|
|
|
00:46:08.020 --> 00:46:10.280 |
|
and not raining, then you'd have 0 bits |
|
|
|
00:46:10.280 --> 00:46:11.830 |
|
because you don't need any data to |
|
|
|
00:46:11.830 --> 00:46:12.810 |
|
represent the. |
|
|
|
00:46:13.710 --> 00:46:15.770 |
|
That uncertainty, it's just always |
|
|
|
00:46:15.770 --> 00:46:16.300 |
|
true. |
|
|
|
00:46:16.300 --> 00:46:18.300 |
|
I mean it's always like one value. |
|
|
|
00:46:18.300 --> 00:46:20.790 |
|
So 1.5 bits means that you have pretty |
|
|
|
00:46:20.790 --> 00:46:21.490 |
|
high uncertainty. |
|
|
|
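NOTE
Re-computing the joint entropy of the 100-day raining/cloudy table above; the counts are the ones given in the lecture.

    import math

    counts = {("raining", "cloudy"): 24, ("raining", "clear"): 1,
              ("dry", "cloudy"): 25, ("dry", "clear"): 50}
    total = sum(counts.values())

    H_xy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    print(round(H_xy, 2))  # about 1.56 bits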
00:46:25.250 --> 00:46:27.680 |
|
There's also a concept called specific |
|
|
|
00:46:27.680 --> 00:46:28.510 |
|
Entropy. |
|
|
|
00:46:28.510 --> 00:46:29.780 |
|
So that is. |
|
|
|
00:46:29.780 --> 00:46:33.560 |
|
That means that if you know one thing, how |
|
|
|
00:46:33.560 --> 00:46:34.516 |
|
much does that? |
|
|
|
00:46:34.516 --> 00:46:36.610 |
|
How much uncertainty do you have left? |
|
|
|
00:46:37.460 --> 00:46:41.170 |
|
So, for example, what is the entropy of |
|
|
|
00:46:41.170 --> 00:46:43.610 |
|
cloudiness given that I know that it's |
|
|
|
00:46:43.610 --> 00:46:44.000 |
|
raining? |
|
|
|
00:46:45.420 --> 00:46:48.940 |
|
And the Conditional Entropy is very |
|
|
|
00:46:48.940 --> 00:46:51.280 |
|
similar form, it's just negative sum |
|
|
|
00:46:51.280 --> 00:46:52.720 |
|
over the values of the. |
|
|
|
00:46:53.710 --> 00:46:54.970 |
|
The thing that you're measuring the |
|
|
|
00:46:54.970 --> 00:46:55.780 |
|
Entropy over. |
|
|
|
00:46:56.800 --> 00:46:58.880 |
|
The probability of that given the thing |
|
|
|
00:46:58.880 --> 00:46:59.500 |
|
that. |
|
|
|
00:47:00.150 --> 00:47:03.610 |
|
Times the log probability of Y given X, |
|
|
|
00:47:03.610 --> 00:47:04.760 |
|
where Y is the thing you're measuring |
|
|
|
00:47:04.760 --> 00:47:06.400 |
|
the uncertainty of, and X is a thing |
|
|
|
00:47:06.400 --> 00:47:06.850 |
|
that you know. |
|
|
|
00:47:09.200 --> 00:47:12.660 |
|
So if I know that it's Cloudy, then |
|
|
|
00:47:12.660 --> 00:47:15.690 |
|
there's a 24 out of 25 chance that |
|
|
|
00:47:15.690 --> 00:47:16.150 |
|
it's. |
|
|
|
00:47:17.340 --> 00:47:17.950 |
|
Wait, no. |
|
|
|
00:47:17.950 --> 00:47:19.910 |
|
If I know that it's raining, sorry. |
|
|
|
00:47:19.910 --> 00:47:21.599 |
|
If I know that it's raining, then |
|
|
|
00:47:21.600 --> 00:47:23.931 |
|
there's a 24 out of 25 chance that it's |
|
|
|
00:47:23.931 --> 00:47:24.430 |
|
Cloudy, right? |
|
|
|
00:47:24.430 --> 00:47:26.190 |
|
And then one out of 25 chance that it's |
|
|
|
00:47:26.190 --> 00:47:26.760 |
|
not Cloudy. |
|
|
|
00:47:27.600 --> 00:47:30.340 |
|
So I get 24 to 25 there and one out of |
|
|
|
00:47:30.340 --> 00:47:33.070 |
|
25 there, and now my Entropy is greatly |
|
|
|
00:47:33.070 --> 00:47:33.660 |
|
reduced. |
|
|
|
00:47:39.810 --> 00:47:41.280 |
|
And then you can also measure. |
|
|
|
00:47:41.930 --> 00:47:44.250 |
|
In expected Conditional Entropy. |
|
|
|
00:47:46.020 --> 00:47:50.570 |
|
So that's just the probability of. |
|
|
|
00:47:50.570 --> 00:47:53.870 |
|
That's just taking the specific |
|
|
|
00:47:53.870 --> 00:47:54.940 |
|
Conditional Entropy. |
|
|
|
00:47:55.780 --> 00:47:58.660 |
|
At times the probability of each of the |
|
|
|
00:47:58.660 --> 00:48:00.360 |
|
values that I might know. |
|
|
|
00:48:01.260 --> 00:48:03.710 |
|
Summed up over the different values, |
|
|
|
00:48:03.710 --> 00:48:04.180 |
|
so. |
|
|
|
00:48:04.900 --> 00:48:06.130 |
|
The. |
|
|
|
00:48:06.820 --> 00:48:09.920 |
|
The expected Conditional Entropy |
|
|
|
00:48:09.920 --> 00:48:11.950 |
|
for knowing whether or not it's raining |
|
|
|
00:48:11.950 --> 00:48:15.179 |
|
would be the Conditional Entropy. |
|
|
|
00:48:16.040 --> 00:48:19.010 |
|
Of cloudiness given that it's raining. |
|
|
|
00:48:19.920 --> 00:48:21.280 |
|
Times the probability that it's |
|
|
|
00:48:21.280 --> 00:48:21.720 |
|
raining. |
|
|
|
00:48:22.460 --> 00:48:24.460 |
|
Plus the. |
|
|
|
00:48:25.190 --> 00:48:28.270 |
|
Entropy of cloudiness given that it's |
|
|
|
00:48:28.270 --> 00:48:30.210 |
|
not raining, times the probability |
|
|
|
00:48:30.210 --> 00:48:30.900 |
|
that's not raining. |
|
|
|
00:48:33.530 --> 00:48:35.550 |
|
And that's also equal to this thing. |
|
|
|
00:48:42.960 --> 00:48:43.400 |
|
Right. |
|
|
|
00:48:43.400 --> 00:48:46.168 |
|
So if I want to know what is the |
|
|
|
00:48:46.168 --> 00:48:47.790 |
|
entropy of cloudiness, I guess I said |
|
|
|
00:48:47.790 --> 00:48:48.730 |
|
it a little early. |
|
|
|
00:48:48.730 --> 00:48:50.890 |
|
What is the entropy of cloudiness given |
|
|
|
00:48:50.890 --> 00:48:52.720 |
|
that we know whether or not |
|
|
|
00:48:52.720 --> 00:48:53.340 |
|
it's raining? |
|
|
|
00:48:54.310 --> 00:48:56.240 |
|
Then that is. |
|
|
|
00:48:56.850 --> 00:48:59.540 |
|
Going to be like 1/4, which is the |
|
|
|
00:48:59.540 --> 00:49:02.009 |
|
probability that it's raining, is that |
|
|
|
00:49:02.010 --> 00:49:02.320 |
|
right? |
|
|
|
00:49:02.320 --> 00:49:04.790 |
|
25 out of 100 times it's raining. |
|
|
|
00:49:05.490 --> 00:49:08.225 |
|
So 1/4 is the probability that it's |
|
|
|
00:49:08.225 --> 00:49:11.240 |
|
raining times the Entropy of cloudiness |
|
|
|
00:49:11.240 --> 00:49:13.840 |
|
given that it's raining plus three |
|
|
|
00:49:13.840 --> 00:49:15.710 |
|
quarters, the probability it's not raining, times |
|
|
|
00:49:15.710 --> 00:49:17.570 |
|
the entropy of the cloudiness given |
|
|
|
00:49:17.570 --> 00:49:18.810 |
|
that it's not raining. |
|
|
|
00:49:20.470 --> 00:49:23.420 |
|
So that's a measure of how much does |
|
|
|
00:49:23.420 --> 00:49:25.930 |
|
knowing whether or not it's rainy, or |
|
|
|
00:49:25.930 --> 00:49:28.470 |
|
how much uncertainty do I have left if |
|
|
|
00:49:28.470 --> 00:49:29.880 |
|
I know whether or not it's raining. |
|
|
|
00:49:32.430 --> 00:49:34.030 |
|
How much do I expect to have left? |
|
|
|
00:49:37.700 --> 00:49:39.800 |
|
So some useful things to know is that |
|
|
|
00:49:39.800 --> 00:49:41.585 |
|
the Entropy is always nonnegative. |
|
|
|
00:49:41.585 --> 00:49:43.580 |
|
You can never have negative Entropy, |
|
|
|
00:49:43.580 --> 00:49:45.410 |
|
but do make sure you remember. |
|
|
|
00:49:46.480 --> 00:49:47.310 |
|
|
|
|
|
00:49:48.750 --> 00:49:50.380 |
|
So do make sure you remember these |
|
|
|
00:49:50.380 --> 00:49:53.390 |
|
negative signs in this log |
|
|
|
00:49:53.390 --> 00:49:54.910 |
|
probability, otherwise if you end up |
|
|
|
00:49:54.910 --> 00:49:56.780 |
|
with a negative Entropy, it means you left |
|
|
|
00:49:56.780 --> 00:49:57.490 |
|
something out. |
|
|
|
00:49:59.760 --> 00:50:02.815 |
|
You also have this chain rule, so the |
|
|
|
00:50:02.815 --> 00:50:06.320 |
|
entropy of X&Y is the entropy of X given Y |
|
|
|
00:50:06.320 --> 00:50:08.580 |
|
plus the entropy of Y, which kind of |
|
|
|
00:50:08.580 --> 00:50:10.260 |
|
makes sense because the Entropy of |
|
|
|
00:50:10.260 --> 00:50:11.280 |
|
knowing two things. |
|
|
|
00:50:12.310 --> 00:50:14.540 |
|
Of the values of two things, is the |
|
|
|
00:50:14.540 --> 00:50:15.914 |
|
value of knowing one. |
|
|
|
00:50:15.914 --> 00:50:18.785 |
|
Is the OR sorry, the Entropy or the |
|
|
|
00:50:18.785 --> 00:50:20.199 |
|
uncertainty of knowing two things? |
|
|
|
00:50:20.199 --> 00:50:22.179 |
|
Is the uncertainty of knowing one of |
|
|
|
00:50:22.180 --> 00:50:22.580 |
|
them? |
|
|
|
00:50:23.280 --> 00:50:24.940 |
|
Plus the uncertainty of knowing the |
|
|
|
00:50:24.940 --> 00:50:26.515 |
|
other one, given that you already know |
|
|
|
00:50:26.515 --> 00:50:27.350 |
|
one. |
|
|
|
00:50:27.350 --> 00:50:30.169 |
|
It's either Entropy of X given Y plus |
|
|
|
00:50:30.169 --> 00:50:32.323 |
|
Entropy of Y, or Entropy of Y given X |
|
|
|
00:50:32.323 --> 00:50:33.250 |
|
plus Entropy of X. |
|
|
|
00:50:34.640 --> 00:50:38.739 |
|
If X&Y are independent, then Entropy of Y |
|
|
|
00:50:38.740 --> 00:50:40.659 |
|
given X is equal the entropy of Y. |
|
|
|
00:50:42.870 --> 00:50:44.520 |
|
Meaning that X doesn't reduce our |
|
|
|
00:50:44.520 --> 00:50:45.240 |
|
uncertainty at all. |
|
|
|
00:50:46.530 --> 00:50:48.845 |
|
And Entropy of anything with itself is |
|
|
|
00:50:48.845 --> 00:50:50.330 |
|
0, because once you know it, then |
|
|
|
00:50:50.330 --> 00:50:51.480 |
|
there's no uncertainty anymore. |
|
|
|
00:50:52.880 --> 00:50:53.390 |
|
And then? |
|
|
|
00:50:54.110 --> 00:50:57.970 |
|
If you do know something, Entropy of Y |
|
|
|
00:50:57.970 --> 00:50:59.780 |
|
given X at least has to be less than |
|
|
|
00:50:59.780 --> 00:51:01.430 |
|
or equal to the entropy of Y. |
|
|
|
00:51:01.430 --> 00:51:04.020 |
|
So knowing something can never increase |
|
|
|
00:51:04.020 --> 00:51:04.690 |
|
your uncertainty. |
|
|
|
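NOTE
A quick numeric check of the chain rule above using the same raining/cloudy table: H(Raining, Cloudy) should equal H(Cloudy | Raining) + H(Raining).

    import math

    def entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    H_joint = entropy([0.24, 0.01, 0.25, 0.50])                       # ~1.56 bits
    H_rain = entropy([0.25, 0.75])                                    # ~0.81 bits
    H_cloudy_given_rain = 0.25 * entropy([24/25, 1/25]) + \
                          0.75 * entropy([25/75, 50/75])              # ~0.75 bits

    print(round(H_joint, 2), round(H_cloudy_given_rain + H_rain, 2))  # both ~1.56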
00:51:07.660 --> 00:51:09.520 |
|
So then finally we can get to this |
|
|
|
00:51:09.520 --> 00:51:11.132 |
|
information gain. |
|
|
|
00:51:11.132 --> 00:51:14.730 |
|
So information gain is the change in |
|
|
|
00:51:14.730 --> 00:51:17.530 |
|
the Entropy due to learning something |
|
|
|
00:51:17.530 --> 00:51:17.810 |
|
new. |
|
|
|
00:51:20.100 --> 00:51:23.310 |
|
So I can say, for example, what is? |
|
|
|
00:51:23.310 --> 00:51:26.160 |
|
How much does knowing whether or not |
|
|
|
00:51:26.160 --> 00:51:27.010 |
|
it's rainy |
|
|
|
00:51:27.960 --> 00:51:30.610 |
|
reduce my uncertainty of cloudiness? |
|
|
|
00:51:31.620 --> 00:51:34.542 |
|
So that would be the Entropy of |
|
|
|
00:51:34.542 --> 00:51:37.242 |
|
cloudiness minus the entropy of |
|
|
|
00:51:37.242 --> 00:51:39.120 |
|
cloudiness given whether or not it's |
|
|
|
00:51:39.120 --> 00:51:39.450 |
|
raining. |
|
|
|
00:51:41.710 --> 00:51:43.990 |
|
So that's the Entropy of cloudiness |
|
|
|
00:51:43.990 --> 00:51:46.500 |
|
minus the entropy of cloudiness given |
|
|
|
00:51:46.500 --> 00:51:47.640 |
|
whether it's raining. |
|
|
|
00:51:47.640 --> 00:51:49.660 |
|
And that's .25 bits. |
|
|
|
00:51:49.660 --> 00:51:50.993 |
|
So that's like the value. |
|
|
|
00:51:50.993 --> 00:51:52.860 |
|
It's essentially the value of knowing |
|
|
|
00:51:52.860 --> 00:51:54.100 |
|
whether or not it's raining. |
|
|
|
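NOTE
A sketch that reproduces the conditional-entropy and information-gain numbers from the same 100-day table; just the arithmetic described above, no library code.

    import math

    def entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    n = 100
    H_cloudy = entropy([49/n, 51/n])              # ~1.00 bit before knowing anything

    # specific conditional entropies H(Cloudy | Raining = yes / no)
    H_given_rain = entropy([24/25, 1/25])         # ~0.24 bits on the 25 rainy days
    H_given_dry  = entropy([25/75, 50/75])        # ~0.92 bits on the 75 dry days

    # expected conditional entropy H(Cloudy | Raining)
    H_cond = (25/n) * H_given_rain + (75/n) * H_given_dry   # ~0.75 bits

    print(round(H_cloudy - H_cond, 2))            # information gain: ~0.25 bits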
00:51:59.210 --> 00:52:01.140 |
|
And then finally we can use this in our |
|
|
|
00:52:01.140 --> 00:52:02.140 |
|
Decision tree. |
|
|
|
00:52:02.140 --> 00:52:03.660 |
|
So if we recall. |
|
|
|
00:52:04.300 --> 00:52:07.310 |
|
The Decision tree algorithm is that. |
|
|
|
00:52:08.410 --> 00:52:10.940 |
|
If I'm trying to I go through like |
|
|
|
00:52:10.940 --> 00:52:12.700 |
|
splitting my data. |
|
|
|
00:52:13.550 --> 00:52:15.050 |
|
Choose some Test. |
|
|
|
00:52:15.050 --> 00:52:17.280 |
|
According to the test, I split the data |
|
|
|
00:52:17.280 --> 00:52:18.970 |
|
into different nodes and then I choose |
|
|
|
00:52:18.970 --> 00:52:20.440 |
|
a new test for each of those nodes. |
|
|
|
00:52:21.440 --> 00:52:22.840 |
|
So the key thing we're trying to figure |
|
|
|
00:52:22.840 --> 00:52:24.007 |
|
out is how do we do that Test? |
|
|
|
00:52:24.007 --> 00:52:25.800 |
|
How do we choose the features or |
|
|
|
00:52:25.800 --> 00:52:27.480 |
|
attributes and the splitting value? |
|
|
|
00:52:28.370 --> 00:52:30.100 |
|
To try to split things into different |
|
|
|
00:52:30.100 --> 00:52:32.030 |
|
classes, or in other words, to try to |
|
|
|
00:52:32.030 --> 00:52:33.640 |
|
reduce the uncertainty of our |
|
|
|
00:52:33.640 --> 00:52:34.150 |
|
prediction. |
|
|
|
00:52:36.190 --> 00:52:39.790 |
|
And the solution is to choose the |
|
|
|
00:52:39.790 --> 00:52:42.450 |
|
attribute to choose the Test that |
|
|
|
00:52:42.450 --> 00:52:44.780 |
|
maximizes the information gain. |
|
|
|
00:52:44.780 --> 00:52:46.770 |
|
In other words, that reduces the |
|
|
|
00:52:46.770 --> 00:52:49.600 |
|
entropy the most for the current |
|
|
|
00:52:49.600 --> 00:52:50.370 |
|
data in that node. |
|
|
|
00:52:52.000 --> 00:52:52.530 |
|
So. |
|
|
|
00:52:53.260 --> 00:52:56.478 |
|
What you would do is for each for each |
|
|
|
00:52:56.478 --> 00:52:58.700 |
|
discrete attribute or discrete feature. |
|
|
|
00:52:59.630 --> 00:53:02.063 |
|
You can compute the information gain of |
|
|
|
00:53:02.063 --> 00:53:04.140 |
|
using that feature. |
|
|
|
00:53:04.140 --> 00:53:06.620 |
|
So in the case of. |
|
|
|
00:53:07.360 --> 00:53:08.280 |
|
Go back a bit. |
|
|
|
00:53:09.010 --> 00:53:11.670 |
|
To this simple true false all right, so |
|
|
|
00:53:11.670 --> 00:53:12.650 |
|
for example. |
|
|
|
00:53:13.650 --> 00:53:15.520 |
|
Here I started out with a pretty high |
|
|
|
00:53:15.520 --> 00:53:17.550 |
|
Entropy, close to one because 5/8 of |
|
|
|
00:53:17.550 --> 00:53:18.050 |
|
the time |
|
|
|
00:53:18.690 --> 00:53:20.850 |
|
the value of Y is true and 3/8 it's |
|
|
|
00:53:20.850 --> 00:53:21.180 |
|
false. |
|
|
|
00:53:22.030 --> 00:53:26.620 |
|
And so I can say for X1, what's my |
|
|
|
00:53:26.620 --> 00:53:28.970 |
|
Entropy after X1? |
|
|
|
00:53:28.970 --> 00:53:31.020 |
|
It's a 5050 chance that it goes either |
|
|
|
00:53:31.020 --> 00:53:31.313 |
|
way. |
|
|
|
00:53:31.313 --> 00:53:34.020 |
|
So this will be .5 * 0 because the |
|
|
|
00:53:34.020 --> 00:53:36.541 |
|
Entropy here is 0 and this will be .5 |
|
|
|
00:53:36.541 --> 00:53:36.815 |
|
times. |
|
|
|
00:53:36.815 --> 00:53:38.630 |
|
I don't know, one or something, |
|
|
|
00:53:38.630 --> 00:53:40.659 |
|
whatever that Entropy is, and so this |
|
|
|
00:53:40.659 --> 00:53:42.100 |
|
Entropy will be really low. |
|
|
|
00:53:43.000 --> 00:53:45.700 |
|
And this Entropy is just about as high |
|
|
|
00:53:45.700 --> 00:53:46.590 |
|
as I started with. |
|
|
|
00:53:46.590 --> 00:53:48.330 |
|
It's only a little bit lower maybe |
|
|
|
00:53:48.330 --> 00:53:50.510 |
|
because if I go this way, I have |
|
|
|
00:53:50.510 --> 00:53:52.691 |
|
Entropy of 1, there's a 50% chance of |
|
|
|
00:53:52.691 --> 00:53:55.188 |
|
that, and if I go this way, then I have |
|
|
|
00:53:55.188 --> 00:53:56.721 |
|
lower Entropy and there's a 50% chance |
|
|
|
00:53:56.721 --> 00:53:57.159 |
|
of that. |
|
|
|
00:53:57.870 --> 00:54:00.010 |
|
And so my information gain is my |
|
|
|
00:54:00.010 --> 00:54:01.600 |
|
initial entropy of Y. |
|
|
|
00:54:02.980 --> 00:54:06.550 |
|
Minus the entropy of each of these, and |
|
|
|
00:54:06.550 --> 00:54:08.005 |
|
here the Entropy gain. |
|
|
|
00:54:08.005 --> 00:54:10.210 |
|
The information gain of X1 is much |
|
|
|
00:54:10.210 --> 00:54:12.940 |
|
higher than that of X2, and so I Choose X1. |
|
|
|
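NOTE
A sketch that makes the X1-versus-X2 comparison above concrete. The eight rows are a reconstruction consistent with the fractions quoted above (5/8 true overall; all true when X1 is true; 3/4 false when X1 is false), not the actual slide data.

    import math

    def entropy(labels):
        n = len(labels)
        return -sum((labels.count(v) / n) * math.log2(labels.count(v) / n)
                    for v in set(labels))

    def info_gain(xs, ys):
        n = len(ys)
        after = sum((xs.count(v) / n) * entropy([y for x, y in zip(xs, ys) if x == v])
                    for v in set(xs))
        return entropy(ys) - after

    X1 = [1, 1, 1, 1, 0, 0, 0, 0]
    X2 = [1, 1, 1, 0, 0, 1, 0, 0]
    Y  = [1, 1, 1, 1, 1, 0, 0, 0]   # 5 true, 3 false overall

    print(round(info_gain(X1, Y), 2))  # ~0.55: knowing X1 removes most of the uncertainty
    print(round(info_gain(X2, Y), 2))  # ~0.05: knowing X2 barely helps, so split on X1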
00:54:18.810 --> 00:54:20.420 |
|
So if I have discrete values, I just |
|
|
|
00:54:20.420 --> 00:54:22.449 |
|
compute the information gain for the |
|
|
|
00:54:22.450 --> 00:54:24.290 |
|
current node for each of those discrete |
|
|
|
00:54:24.290 --> 00:54:25.725 |
|
values, and then I choose the one with |
|
|
|
00:54:25.725 --> 00:54:26.860 |
|
the highest information gain. |
|
|
|
00:54:27.780 --> 00:54:29.650 |
|
If I have continuous values, it's |
|
|
|
00:54:29.650 --> 00:54:31.576 |
|
slightly more complicated because then |
|
|
|
00:54:31.576 --> 00:54:34.395 |
|
I have to also choose a threshold. In |
|
|
|
00:54:34.395 --> 00:54:36.230 |
|
the lemons and |
|
|
|
00:54:36.920 --> 00:54:40.150 |
|
oranges example we were saying if |
|
|
|
00:54:40.150 --> 00:54:42.010 |
|
the height is greater than six then we |
|
|
|
00:54:42.010 --> 00:54:42.620 |
|
go one way. |
|
|
|
00:54:44.560 --> 00:54:46.640 |
|
So we have to choose which feature and |
|
|
|
00:54:46.640 --> 00:54:47.420 |
|
which threshold. |
|
|
|
00:54:48.430 --> 00:54:49.580 |
|
So typically. |
|
|
|
00:54:51.060 --> 00:54:53.512 |
|
Something this I don't know. |
|
|
|
00:54:53.512 --> 00:54:56.295 |
|
Like who thought putting a projector in |
|
|
|
00:54:56.295 --> 00:54:57.930 |
|
a jewel would be like a nice way to? |
|
|
|
00:54:58.590 --> 00:55:00.260 |
|
Right and stuff, but anyway. |
|
|
|
00:55:04.700 --> 00:55:06.340 |
|
But at least it's something, all right? |
|
|
|
00:55:06.340 --> 00:55:08.400 |
|
So let's say that I have some feature. |
|
|
|
00:55:09.420 --> 00:55:11.910 |
|
And I've got like some different |
|
|
|
00:55:11.910 --> 00:55:13.240 |
|
classes and that feature. |
|
|
|
00:55:16.190 --> 00:55:18.560 |
|
So what I would do is I would usually |
|
|
|
00:55:18.560 --> 00:55:19.950 |
|
you would sort the values. |
|
|
|
00:55:20.890 --> 00:55:22.440 |
|
And you're never going to want to split |
|
|
|
00:55:22.440 --> 00:55:24.010 |
|
between two of the same class, so I |
|
|
|
00:55:24.010 --> 00:55:26.469 |
|
would never split between the two X's, |
|
|
|
00:55:26.470 --> 00:55:29.250 |
|
because that's always going to be worse |
|
|
|
00:55:29.250 --> 00:55:31.070 |
|
than some split that's between |
|
|
|
00:55:31.070 --> 00:55:31.930 |
|
different classes. |
|
|
|
00:55:32.630 --> 00:55:35.450 |
|
So I can consider the thresholds that |
|
|
|
00:55:35.450 --> 00:55:35.810 |
|
are. |
|
|
|
00:55:36.460 --> 00:55:37.730 |
|
Between different classes. |
|
|
|
00:55:42.380 --> 00:55:44.000 |
|
Really. |
|
|
|
00:55:44.000 --> 00:55:44.530 |
|
No. |
|
|
|
00:55:46.130 --> 00:55:48.380 |
|
Yeah, I can, but I'm not going to draw |
|
|
|
00:55:48.380 --> 00:55:50.220 |
|
that long, so it's not worth it to me |
|
|
|
00:55:50.220 --> 00:55:50.860 |
|
to move on here. |
|
|
|
00:55:50.860 --> 00:55:52.160 |
|
Then I have to move my laptop and. |
|
|
|
00:55:53.030 --> 00:55:56.030 |
|
So I'm fine. |
|
|
|
00:55:56.750 --> 00:55:59.310 |
|
So I would choose these two thresholds. |
|
|
|
00:55:59.310 --> 00:56:01.680 |
|
If it's this threshold, then it's |
|
|
|
00:56:01.680 --> 00:56:04.152 |
|
basically two and zero. |
|
|
|
00:56:04.152 --> 00:56:07.470 |
|
So it's a very low Entropy here. |
|
|
|
00:56:07.470 --> 00:56:10.420 |
|
And the probability of that is 2 out of |
|
|
|
00:56:10.420 --> 00:56:11.505 |
|
five, right? |
|
|
|
00:56:11.505 --> 00:56:16.820 |
|
So it would be 0.4 times 0, since 0 is the |
|
|
|
00:56:17.460 --> 00:56:18.580 |
|
Entropy on this side. |
|
|
|
00:56:19.570 --> 00:56:20.820 |
|
And if I go this way? |
|
|
|
00:56:21.670 --> 00:56:23.500 |
|
Then it's going to be. |
|
|
|
00:56:24.440 --> 00:56:25.290 |
|
Then I've got. |
|
|
|
00:56:26.660 --> 00:56:27.840 |
|
Sorry, two out of seven. |
|
|
|
00:56:29.750 --> 00:56:31.320 |
|
Out of seven times. |
|
|
|
00:56:32.470 --> 00:56:33.930 |
|
Times Entropy of 0 this way. |
|
|
|
00:56:34.650 --> 00:56:37.630 |
|
And if I go this way, then it's five |
|
|
|
00:56:37.630 --> 00:56:38.020 |
|
out of. |
|
|
|
00:56:38.980 --> 00:56:39.760 |
|
7. |
|
|
|
00:56:41.040 --> 00:56:41.770 |
|
Times. |
|
|
|
00:56:44.510 --> 00:56:47.560 |
|
Two out of five times log. |
|
|
|
00:56:52.980 --> 00:56:53.690 |
|
Thank you. |
|
|
|
00:56:53.690 --> 00:56:55.330 |
|
I always forget the minus sign. |
|
|
|
00:56:56.140 --> 00:56:58.270 |
|
OK, so 5/7, which is the |
|
|
|
00:56:58.270 --> 00:56:59.880 |
|
probability that I go in this direction |
|
|
|
00:56:59.880 --> 00:57:03.805 |
|
times one out of five times log one out |
|
|
|
00:57:03.805 --> 00:57:04.700 |
|
of five. |
|
|
|
00:57:05.550 --> 00:57:07.760 |
|
Plus four out of five. |
|
|
|
00:57:09.170 --> 00:57:10.710 |
|
Four fifths times log. |
|
|
|
00:57:13.360 --> 00:57:14.100 |
|
Right. |
|
|
|
00:57:14.100 --> 00:57:15.750 |
|
So there's a one fifth chance that it's |
|
|
|
00:57:15.750 --> 00:57:16.270 |
|
an X. |
|
|
|
00:57:17.350 --> 00:57:19.180 |
|
I do 1/5 times log 1/5. |
|
|
|
00:57:19.820 --> 00:57:22.200 |
|
Minus 4/5 chance that it's an O, so |
|
|
|
00:57:22.200 --> 00:57:23.790 |
|
minus 4/5 times log four fifth. |
|
|
|
00:57:24.510 --> 00:57:26.210 |
|
And this whole thing is the Entropy |
|
|
|
00:57:26.210 --> 00:57:27.140 |
|
after that split. |
|
|
|
00:57:28.590 --> 00:57:30.650 |
|
And then likewise I can evaluate this |
|
|
|
00:57:30.650 --> 00:57:32.850 |
|
split as well and so. |
|
|
|
00:57:33.620 --> 00:57:35.650 |
|
Out of these two splits, which one do |
|
|
|
00:57:35.650 --> 00:57:37.190 |
|
you think will have the most |
|
|
|
00:57:37.190 --> 00:57:38.040 |
|
information gain? |
|
|
|
00:57:41.220 --> 00:57:43.320 |
|
Yeah, the left split, the first one has |
|
|
|
00:57:43.320 --> 00:57:45.050 |
|
the most information gain because then |
|
|
|
00:57:45.050 --> 00:57:47.168 |
|
I get a confident Decision about two |
|
|
|
00:57:47.168 --> 00:57:49.943 |
|
X's and like 4 out of five chance of |
|
|
|
00:57:49.943 --> 00:57:51.739 |
|
getting it right on the other side, |
|
|
|
00:57:51.740 --> 00:57:53.520 |
|
where if I choose the right split, I |
|
|
|
00:57:53.520 --> 00:57:56.791 |
|
only get perfect confidence about one X |
|
|
|
00:57:56.791 --> 00:57:59.529 |
|
and a 2 out of 3 chance of getting |
|
|
|
00:57:59.529 --> 00:58:00.529 |
|
it right on the other side. |
|
|
|
00:58:15.920 --> 00:58:19.580 |
|
OK, so if I continuous features I would |
|
|
|
00:58:19.580 --> 00:58:21.490 |
|
just try all the different like |
|
|
|
00:58:21.490 --> 00:58:23.110 |
|
candidate thresholds for all those |
|
|
|
00:58:23.110 --> 00:58:24.690 |
|
features and then choose the best one. |
|
|
|
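NOTE
A sketch of the candidate-threshold search just described: sort by the feature, only consider thresholds that sit between two different classes, and keep the one that leaves the lowest weighted entropy. The class ordering follows the board example; the numeric feature values are made up.

    import math

    def entropy(labels):
        n = len(labels)
        return -sum((labels.count(v) / n) * math.log2(labels.count(v) / n)
                    for v in set(labels))

    xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]       # assumed feature values, already sorted
    labels = ["X", "X", "O", "O", "O", "O", "X"]

    best = None
    for i in range(len(xs) - 1):
        if labels[i] == labels[i + 1]:
            continue                                # never split between two of the same class
        t = (xs[i] + xs[i + 1]) / 2
        left, right = labels[:i + 1], labels[i + 1:]
        n = len(labels)
        h_after = len(left)/n * entropy(left) + len(right)/n * entropy(right)
        if best is None or h_after < best[1]:
            best = (t, h_after)

    print(best)  # keeps the threshold after the first two X's (lowest remaining entropy)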
00:58:26.430 --> 00:58:28.360 |
|
And. |
|
|
|
00:58:28.460 --> 00:58:29.720 |
|
Choose the best one, all right. |
|
|
|
00:58:29.720 --> 00:58:30.090 |
|
That's it. |
|
|
|
00:58:30.090 --> 00:58:31.430 |
|
And then I do that for all the nodes, |
|
|
|
00:58:31.430 --> 00:58:32.590 |
|
then I do it Recursively. |
|
|
|
00:58:33.670 --> 00:58:35.660 |
|
So if you have a lot of features and a |
|
|
|
00:58:35.660 --> 00:58:37.050 |
|
lot of data, this can kind of take a |
|
|
|
00:58:37.050 --> 00:58:37.600 |
|
long time. |
|
|
|
00:58:38.250 --> 00:58:40.610 |
|
But I mean these operations are super |
|
|
|
00:58:40.610 --> 00:58:41.710 |
|
fast so. |
|
|
|
00:58:42.980 --> 00:58:45.919 |
|
In practice, when you run it so in |
|
|
|
00:58:45.920 --> 00:58:48.380 |
|
homework two, I'll have you train |
|
|
|
00:58:48.380 --> 00:58:50.930 |
|
forests of Decision trees, where |
|
|
|
00:58:50.930 --> 00:58:54.165 |
|
you train 100 of them for example, and |
|
|
|
00:58:54.165 --> 00:58:56.090 |
|
it takes like a few seconds, so it's |
|
|
|
00:58:56.090 --> 00:58:57.204 |
|
like pretty fast. |
|
|
|
00:58:57.204 --> 00:58:59.070 |
|
These are these are actually not that |
|
|
|
00:58:59.070 --> 00:59:01.030 |
|
computationally expensive, even though |
|
|
|
00:59:01.030 --> 00:59:02.610 |
|
doing it manually would take forever. |
|
|
|
00:59:05.590 --> 00:59:06.980 |
|
So. |
|
|
|
00:59:08.860 --> 00:59:10.970 |
|
We're close to the we're close to the |
|
|
|
00:59:10.970 --> 00:59:11.690 |
|
end of the lecture. |
|
|
|
00:59:12.320 --> 00:59:14.320 |
|
But I will give you just a second to |
|
|
|
00:59:14.320 --> 00:59:15.230 |
|
catch your breath. |
|
|
|
00:59:15.230 --> 00:59:17.030 |
|
And while you're doing that, think |
|
|
|
00:59:17.030 --> 00:59:17.690 |
|
about. |
|
|
|
00:59:19.060 --> 00:59:22.640 |
|
If I were to try and in this case I'm |
|
|
|
00:59:22.640 --> 00:59:23.760 |
|
showing like all the different |
|
|
|
00:59:23.760 --> 00:59:25.210 |
|
examples, the numbers are different |
|
|
|
00:59:25.210 --> 00:59:27.530 |
|
examples there and the color is whether |
|
|
|
00:59:27.530 --> 00:59:28.270 |
|
they wait or not. |
|
|
|
00:59:28.850 --> 00:59:30.570 |
|
And I'm trying to decide whether I'm |
|
|
|
00:59:30.570 --> 00:59:33.090 |
|
going to make a decision based on the |
|
|
|
00:59:33.090 --> 00:59:35.096 |
|
type of restaurant or based on whether |
|
|
|
00:59:35.096 --> 00:59:35.860 |
|
the restaurant's full. |
|
|
|
00:59:36.490 --> 00:59:40.840 |
|
So take a moment to stretch or zone |
|
|
|
00:59:40.840 --> 00:59:42.760 |
|
out, and then I'll ask you what the |
|
|
|
00:59:42.760 --> 00:59:43.200 |
|
answer is. |
|
|
|
01:00:05.270 --> 01:00:06.606 |
|
Part of it, yeah. |
|
|
|
01:00:06.606 --> 01:00:08.755 |
|
So this is all Training one tree. |
|
|
|
01:00:08.755 --> 01:00:10.840 |
|
And for a random forest you just |
|
|
|
01:00:10.840 --> 01:00:14.246 |
|
randomly sample features and randomly |
|
|
|
01:00:14.246 --> 01:00:16.760 |
|
sample data, and then you train a tree |
|
|
|
01:00:16.760 --> 01:00:19.250 |
|
and then you do that like N times and |
|
|
|
01:00:19.250 --> 01:00:20.600 |
|
then you average the predictions. |
|
|
|
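NOTE
A minimal sketch of the forest idea just mentioned, assuming scikit-learn (this is not the homework starter code); each tree sees a bootstrap sample of the rows and a random subset of features at each split, and the predictions are averaged.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    forest = RandomForestClassifier(n_estimators=100, random_state=0)  # 100 trees
    forest.fit(X_tr, y_tr)
    print(forest.score(X_te, y_te))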
01:00:27.420 --> 01:00:27.810 |
|
Yeah. |
|
|
|
01:00:30.860 --> 01:00:33.610 |
|
And so essentially, since the previous |
|
|
|
01:00:33.610 --> 01:00:35.440 |
|
Entropy is fixed when you're trying to |
|
|
|
01:00:35.440 --> 01:00:36.140 |
|
make a decision. |
|
|
|
01:00:36.910 --> 01:00:38.739 |
|
You're just essentially choosing the |
|
|
|
01:00:38.740 --> 01:00:41.810 |
|
Decision, choosing the attribute that |
|
|
|
01:00:41.810 --> 01:00:45.320 |
|
will minimize your expected Entropy |
|
|
|
01:00:45.320 --> 01:00:47.160 |
|
after, like given that attribute. |
|
|
|
01:00:57.950 --> 01:01:00.790 |
|
Alright, so how many people think that |
|
|
|
01:01:00.790 --> 01:01:02.610 |
|
we should split? |
|
|
|
01:01:03.300 --> 01:01:04.710 |
|
How many people think we should split |
|
|
|
01:01:04.710 --> 01:01:05.580 |
|
based on type? |
|
|
|
01:01:08.180 --> 01:01:09.580 |
|
How many people think we should split |
|
|
|
01:01:09.580 --> 01:01:10.520 |
|
based on Patrons? |
|
|
|
01:01:12.730 --> 01:01:13.680 |
|
Yeah, OK. |
|
|
|
01:01:14.430 --> 01:01:17.380 |
|
So I would say the answer is Patrons |
|
|
|
01:01:17.380 --> 01:01:19.870 |
|
and because splitting based on type. |
|
|
|
01:01:20.590 --> 01:01:22.310 |
|
I end up no matter what type of |
|
|
|
01:01:22.310 --> 01:01:24.200 |
|
restaurant is, I end up with an equal |
|
|
|
01:01:24.200 --> 01:01:26.120 |
|
number of greens and Reds. |
|
|
|
01:01:26.120 --> 01:01:30.140 |
|
So green green means I didn't like say |
|
|
|
01:01:30.140 --> 01:01:32.672 |
|
it very clearly, but green means that |
|
|
|
01:01:32.672 --> 01:01:35.842 |
|
you think that you go, that you wait, |
|
|
|
01:01:35.842 --> 01:01:37.540 |
|
and red means that you don't wait. |
|
|
|
01:01:38.460 --> 01:01:40.820 |
|
So type tells me nothing, right? |
|
|
|
01:01:40.820 --> 01:01:42.310 |
|
It doesn't help me split anything at |
|
|
|
01:01:42.310 --> 01:01:42.455 |
|
all. |
|
|
|
01:01:42.455 --> 01:01:44.898 |
|
Initially I had complete Entropy, an |
|
|
|
01:01:44.898 --> 01:01:47.780 |
|
Entropy of 1, and after knowing type I |
|
|
|
01:01:47.780 --> 01:01:48.880 |
|
still have Entropy of 1. |
|
|
|
01:01:49.900 --> 01:01:52.140 |
|
Where if I know Patrons, then a lot of |
|
|
|
01:01:52.140 --> 01:01:55.720 |
|
the time I have my Decision, and only |
|
|
|
01:01:55.720 --> 01:01:57.230 |
|
some fraction of the time I still have |
|
|
|
01:01:57.230 --> 01:01:57.590 |
|
to. |
|
|
|
01:01:57.590 --> 01:01:59.040 |
|
I need more information. |
|
|
|
01:02:00.990 --> 01:02:02.700 |
|
So here's like all the math. |
|
|
|
01:02:04.250 --> 01:02:05.230 |
|
To go through that but. |
|
|
|
01:02:08.910 --> 01:02:11.790 |
|
All right, So what if I? |
|
|
|
01:02:12.780 --> 01:02:14.730 |
|
So sometimes a lot of times trees are |
|
|
|
01:02:14.730 --> 01:02:16.930 |
|
used for continuous values and then |
|
|
|
01:02:16.930 --> 01:02:18.320 |
|
it's called a Regression tree. |
|
|
|
01:02:20.760 --> 01:02:22.960 |
|
The Regression tree is learned in the |
|
|
|
01:02:22.960 --> 01:02:23.510 |
|
same way. |
|
|
|
01:02:24.570 --> 01:02:29.490 |
|
Except that you would use the instead |
|
|
|
01:02:29.490 --> 01:02:30.840 |
|
of, sorry. |
|
|
|
01:02:32.260 --> 01:02:34.170 |
|
In the Regression tree, it's the same |
|
|
|
01:02:34.170 --> 01:02:36.530 |
|
way, but you're typically trying to |
|
|
|
01:02:36.530 --> 01:02:38.703 |
|
minimize the sum of squared error of |
|
|
|
01:02:38.703 --> 01:02:41.862 |
|
the node instead of minimizing the |
|
|
|
01:02:41.862 --> 01:02:42.427 |
|
cross entropy. |
|
|
|
01:02:42.427 --> 01:02:44.050 |
|
You could still do it actually based on |
|
|
|
01:02:44.050 --> 01:02:45.500 |
|
cross entropy if you're assuming like |
|
|
|
01:02:45.500 --> 01:02:47.770 |
|
Gaussian distributions, but here let me |
|
|
|
01:02:47.770 --> 01:02:48.900 |
|
show you an example. |
|
|
|
01:02:54.540 --> 01:02:55.230 |
|
So. |
|
|
|
01:02:57.600 --> 01:02:59.170 |
|
Let's just say I'm doing like one |
|
|
|
01:02:59.170 --> 01:03:00.700 |
|
feature, let's say like. |
|
|
|
01:03:01.480 --> 01:03:04.610 |
|
This is my feature X and my prediction |
|
|
|
01:03:04.610 --> 01:03:05.240 |
|
value. |
|
|
|
01:03:06.000 --> 01:03:07.830 |
|
Is the number that I'm putting here. |
|
|
|
01:03:18.340 --> 01:03:18.690 |
|
OK. |
|
|
|
01:03:19.430 --> 01:03:20.160 |
|
So. |
|
|
|
01:03:21.350 --> 01:03:22.980 |
|
I'm trying to predict what this number |
|
|
|
01:03:22.980 --> 01:03:25.460 |
|
is given like where I fell on this X |
|
|
|
01:03:25.460 --> 01:03:25.950 |
|
axis. |
|
|
|
01:03:27.030 --> 01:03:28.940 |
|
So the best split I could do is |
|
|
|
01:03:28.940 --> 01:03:30.550 |
|
probably like here, right? |
|
|
|
01:03:31.330 --> 01:03:33.800 |
|
And if I take this split, then I would |
|
|
|
01:03:33.800 --> 01:03:37.313 |
|
say that if I'm in this side of the |
|
|
|
01:03:37.313 --> 01:03:37.879 |
|
split. |
|
|
|
01:03:38.640 --> 01:03:42.450 |
|
Then my prediction is 4 out of three, |
|
|
|
01:03:42.450 --> 01:03:44.560 |
|
which is the average of the values that |
|
|
|
01:03:44.560 --> 01:03:45.690 |
|
are on this side of the split. |
|
|
|
01:03:46.510 --> 01:03:48.710 |
|
And if I'm on this side of the split, |
|
|
|
01:03:48.710 --> 01:03:51.030 |
|
then my prediction is 6. |
|
|
|
01:03:51.800 --> 01:03:53.900 |
|
Which is 18 / 3, right? |
|
|
|
01:03:53.900 --> 01:03:55.815 |
|
So it's the average of these values. |
|
|
|
01:03:55.815 --> 01:03:58.270 |
|
So if I'm doing Regression, I'm still |
|
|
|
01:03:58.270 --> 01:04:00.520 |
|
like I'm choosing a split that's going |
|
|
|
01:04:00.520 --> 01:04:03.580 |
|
to give me the best prediction in each |
|
|
|
01:04:03.580 --> 01:04:04.390 |
|
side of the split. |
|
|
|
01:04:04.980 --> 01:04:06.745 |
|
And then my estimate on each side of |
|
|
|
01:04:06.745 --> 01:04:08.170 |
|
the split is just the average of the |
|
|
|
01:04:08.170 --> 01:04:10.100 |
|
values after that split. |
|
|
|
01:04:11.000 --> 01:04:13.950 |
|
And the scoring, the scoring that I can |
|
|
|
01:04:13.950 --> 01:04:16.120 |
|
use is the squared error. |
|
|
|
01:04:16.120 --> 01:04:20.464 |
|
So the squared error would be (1 - 4/3) |
|
|
|
01:04:20.464 --> 01:04:22.536 |
|
squared, plus (2 - 4/3) squared, plus |
|
|
|
01:04:22.536 --> 01:04:24.905 |
|
(1 - 4/3) squared, plus |
|
|
|
01:04:24.905 --> 01:04:29.049 |
|
(5 - 6)^2 + (8 - 6)^2 + (5 - 6)^2. |
|
|
|
01:04:29.890 --> 01:04:31.665 |
|
And so I could try like every |
|
|
|
01:04:31.665 --> 01:04:33.549 |
|
threshold, compute my squared error |
|
|
|
01:04:33.550 --> 01:04:35.635 |
|
given every threshold and then choose |
|
|
|
01:04:35.635 --> 01:04:37.060 |
|
the one that gives me the lowest |
|
|
|
01:04:37.060 --> 01:04:37.740 |
|
squared error. |
|
|
|
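NOTE
A sketch of the regression split above: try each threshold, predict the mean on each side, and keep the split with the lowest total squared error. The six target values (1, 2, 1 and 5, 8, 5) are the board example; the feature positions are made up.

    def sse(ys):
        if not ys:
            return 0.0
        mean = sum(ys) / len(ys)
        return sum((y - mean) ** 2 for y in ys)

    data = [(1, 1), (2, 2), (3, 1), (4, 5), (5, 8), (6, 5)]   # (feature value, target)
    data.sort()

    best = None
    for i in range(1, len(data)):
        t = (data[i - 1][0] + data[i][0]) / 2
        left = [y for x, y in data if x < t]
        right = [y for x, y in data if x >= t]
        score = sse(left) + sse(right)
        if best is None or score < best[1]:
            best = (t, score)

    print(best)  # splits between the 1,2,1 group (mean 4/3) and the 5,8,5 group (mean 6)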
01:04:41.040 --> 01:04:43.055 |
|
So it's the same algorithm, except that |
|
|
|
01:04:43.055 --> 01:04:44.245 |
|
you have a different Criterion. |
|
|
|
01:04:44.245 --> 01:04:46.370 |
|
You might use squared error. |
|
|
|
01:04:47.820 --> 01:04:49.530 |
|
Because it's continuous values that I'm |
|
|
|
01:04:49.530 --> 01:04:50.100 |
|
predicting. |
|
|
|
01:04:50.840 --> 01:04:53.030 |
|
And then the output of the node will be |
|
|
|
01:04:53.030 --> 01:04:54.390 |
|
the average of the Samples that fall |
|
|
|
01:04:54.390 --> 01:04:55.069 |
|
into that node. |
|
|
|
01:04:56.480 --> 01:04:58.300 |
|
And for Regression trees, that's |
|
|
|
01:04:58.300 --> 01:05:00.020 |
|
especially important to
|
|
|
01:05:01.330 --> 01:05:03.490 |
|
Stop growing your tree early, because |
|
|
|
01:05:03.490 --> 01:05:05.420 |
|
obviously otherwise you're going to |
|
|
|
01:05:05.420 --> 01:05:10.090 |
|
always separate your data into one leaf |
|
|
|
01:05:10.090 --> 01:05:12.030 |
|
node per data point, since you have |
|
|
|
01:05:12.030 --> 01:05:13.995 |
|
like continuous values, unless there's |
|
|
|
01:05:13.995 --> 01:05:15.410 |
|
like many of the same value. |
|
|
|
01:05:16.020 --> 01:05:17.410 |
|
And so you're going to tend to like |
|
|
|
01:05:17.410 --> 01:05:17.870 |
|
overfit. |
|
|
|
01:05:23.330 --> 01:05:25.920 |
|
Overfitting, by the way, that's a term |
|
|
|
01:05:25.920 --> 01:05:27.060 |
|
that comes up a lot in machine |
|
|
|
01:05:27.060 --> 01:05:27.865 |
|
learning. |
|
|
|
01:05:27.865 --> 01:05:30.870 |
|
Overfitting means that you
|
|
|
01:05:30.870 --> 01:05:32.910 |
|
have a very complex model so that you |
|
|
|
01:05:32.910 --> 01:05:34.940 |
|
achieve like really low Training Error. |
|
|
|
01:05:35.660 --> 01:05:37.550 |
|
But due to the complexity your Test
|
|
|
01:05:37.550 --> 01:05:38.740 |
|
error has gone up. |
|
|
|
01:05:38.740 --> 01:05:41.570 |
|
So if you plot your. |
|
|
|
01:05:42.250 --> 01:05:44.210 |
|
If you plot your Test error as you
|
|
|
01:05:44.210 --> 01:05:45.510 |
|
increase complexity. |
|
|
|
01:05:46.200 --> 01:05:48.000 |
|
Your Test error will go down for some
|
|
|
01:05:48.000 --> 01:05:50.030 |
|
time, but then at some point as your |
|
|
|
01:05:50.030 --> 01:05:51.880 |
|
complexity keeps rising, your Test
|
|
|
01:05:51.880 --> 01:05:53.590 |
|
Error will start to increase. |
|
|
|
01:05:53.590 --> 01:05:55.040 |
|
So the point at which your
|
|
|
01:05:55.740 --> 01:05:57.650 |
|
Test Error increases due to
|
|
|
01:05:57.650 --> 01:05:59.260 |
|
increasing complexity is where you |
|
|
|
01:05:59.260 --> 01:06:00.040 |
|
start overfitting. |
|
|
|
01:06:00.870 --> 01:06:02.300 |
|
We'll talk about that more at the start |
|
|
|
01:06:02.300 --> 01:06:03.000 |
|
of the ensembles. |
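
A minimal sketch of that curve, assuming scikit-learn and a synthetic dataset (not the course's data): tree depth plays the role of model complexity, training error keeps falling, and test error eventually turns back up.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

for depth in [1, 2, 4, 8, 16, None]:  # None grows the tree until leaves are pure
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    train_err = 1 - tree.score(X_tr, y_tr)  # keeps decreasing with depth
    test_err = 1 - tree.score(X_te, y_te)   # eventually rises again: overfitting
    print(depth, round(train_err, 3), round(test_err, 3))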
|
|
|
01:06:04.840 --> 01:06:06.610 |
|
Right, so there's a few variants. |
|
|
|
01:06:06.610 --> 01:06:08.620 |
|
You can use different splitting |
|
|
|
01:06:08.620 --> 01:06:09.490 |
|
criteria. |
|
|
|
01:06:09.490 --> 01:06:12.010 |
|
For example, the Gini impurity or
|
|
|
01:06:12.010 --> 01:06:14.580 |
|
Gini diversity index is just one minus
|
|
|
01:06:14.580 --> 01:06:17.460 |
|
the sum over all the values of x of the
|
|
|
01:06:17.460 --> 01:06:18.700 |
|
probability of x, squared.
|
|
|
01:06:19.480 --> 01:06:22.140 |
|
This actually is like almost the same |
|
|
|
01:06:22.140 --> 01:06:25.800 |
|
thing as the Entropy. |
|
|
|
01:06:26.410 --> 01:06:27.800 |
|
But it's a little bit faster to |
|
|
|
01:06:27.800 --> 01:06:29.840 |
|
compute, so it's actually more often |
|
|
|
01:06:29.840 --> 01:06:30.740 |
|
used as the default. |
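
A quick sketch of both criteria in Python with toy class probabilities (my own numbers, assuming NumPy): the Gini index only needs squares, while entropy needs a logarithm, which is part of why Gini is cheaper and commonly the default, as in scikit-learn's trees.

import numpy as np

def gini(p):
    """Gini diversity index: 1 - sum over classes of P(class)^2."""
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def entropy(p):
    """Entropy in bits: -sum over classes of P(class) * log2 P(class)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

# Both are zero for a pure node and largest for a uniform distribution.
print(gini([1.0, 0.0]), entropy([1.0, 0.0]))  # 0.0 0.0
print(gini([0.5, 0.5]), entropy([0.5, 0.5]))  # 0.5 1.0
print(gini([0.9, 0.1]), entropy([0.9, 0.1]))  # 0.18 ~0.469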
|
|
|
01:06:33.830 --> 01:06:35.890 |
|
Most times you split on one attribute |
|
|
|
01:06:35.890 --> 01:06:38.460 |
|
at a time, but you can also. |
|
|
|
01:06:39.190 --> 01:06:40.820 |
|
There are some algorithms where
|
|
|
01:06:40.820 --> 01:06:42.790 |
|
you can solve for slices through the
|
|
|
01:06:42.790 --> 01:06:44.600 |
|
feature space. You can
|
|
|
01:06:45.280 --> 01:06:47.490 |
|
do like linear discriminant analysis or
|
|
|
01:06:47.490 --> 01:06:49.200 |
|
something like that to try to find like |
|
|
|
01:06:49.200 --> 01:06:51.970 |
|
a multivariable split that separates |
|
|
|
01:06:51.970 --> 01:06:53.870 |
|
the data, but usually it's just single |
|
|
|
01:06:53.870 --> 01:06:54.310 |
|
attribute. |
|
|
|
01:06:56.180 --> 01:06:57.970 |
|
And as I mentioned a couple of times, |
|
|
|
01:06:57.970 --> 01:07:00.010 |
|
you can stop early so you don't need to |
|
|
|
01:07:00.010 --> 01:07:02.010 |
|
grow like the full tree until you get |
|
|
|
01:07:02.010 --> 01:07:03.025 |
|
perfect Training accuracy. |
|
|
|
01:07:03.025 --> 01:07:06.110 |
|
You can stop after you reach a Max |
|
|
|
01:07:06.110 --> 01:07:09.475 |
|
depth or stop after you have a certain |
|
|
|
01:07:09.475 --> 01:07:11.540 |
|
number of nodes, or a certain number of
|
|
|
01:07:11.540 --> 01:07:12.620 |
|
data points per node. |
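
In scikit-learn's trees, for example, those stopping rules map onto constructor arguments; a minimal sketch on synthetic data (the specific limit values here are arbitrary):

from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

tree = DecisionTreeClassifier(
    max_depth=5,           # stop once the tree reaches this depth
    min_samples_split=20,  # don't split a node with fewer than 20 samples
    min_samples_leaf=10,   # every leaf must keep at least 10 samples
).fit(X, y)

print(tree.get_depth(), tree.get_n_leaves())  # depth and leaf count actually grown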
|
|
|
01:07:13.710 --> 01:07:15.470 |
|
And the reason that you want to stop
|
|
|
01:07:15.470 --> 01:07:16.920 |
|
early is because you want the tree to
|
|
|
01:07:16.920 --> 01:07:18.990 |
|
generalize to new data, and if you grow
|
|
|
01:07:18.990 --> 01:07:20.450 |
|
like a really big tree, you're going to |
|
|
|
01:07:20.450 --> 01:07:23.000 |
|
end up with these little,
|
|
|
01:07:23.000 --> 01:07:26.240 |
|
narrowly applicable rules that might not
|
|
|
01:07:26.240 --> 01:07:27.750 |
|
work well when you get new Test |
|
|
|
01:07:27.750 --> 01:07:28.270 |
|
Samples. |
|
|
|
01:07:29.220 --> 01:07:31.190 |
|
Whereas if you have a shorter tree,
|
|
|
01:07:31.190 --> 01:07:34.260 |
|
then you might have some uncertainty |
|
|
|
01:07:34.260 --> 01:07:36.147 |
|
left in your leaf nodes, but you can |
|
|
|
01:07:36.147 --> 01:07:38.300 |
|
have more confidence that it will reflect
|
|
|
01:07:38.300 --> 01:07:39.240 |
|
the true distribution. |
|
|
|
01:07:42.350 --> 01:07:45.630 |
|
So if we look at Decision trees versus |
|
|
|
01:07:45.630 --> 01:07:46.280 |
|
1-nearest neighbor.
|
|
|
01:07:46.980 --> 01:07:49.500 |
|
They're actually kind of similar in a |
|
|
|
01:07:49.500 --> 01:07:49.950 |
|
way. |
|
|
|
01:07:49.950 --> 01:07:51.620 |
|
They both have piecewise linear |
|
|
|
01:07:51.620 --> 01:07:52.120 |
|
decisions. |
|
|
|
01:07:52.750 --> 01:07:54.620 |
|
So here's the boundary that I get with |
|
|
|
01:07:54.620 --> 01:07:56.420 |
|
1-NN in this example.
|
|
|
01:07:57.110 --> 01:08:00.550 |
|
It's going to be based on like if you |
|
|
|
01:08:00.550 --> 01:08:03.380 |
|
chop things up into cells where
|
|
|
01:08:03.380 --> 01:08:05.770 |
|
everything within the
|
|
|
01:08:05.770 --> 01:08:07.550 |
|
cell is closest to a particular sample. |
|
|
|
01:08:08.260 --> 01:08:09.460 |
|
I would get this boundary. |
|
|
|
01:08:11.100 --> 01:08:12.915 |
|
And with the Decision tree you tend to |
|
|
|
01:08:12.915 --> 01:08:14.440 |
|
get, if you're doing 1 attribute at a |
|
|
|
01:08:14.440 --> 01:08:15.980 |
|
time, you get this axis-aligned
|
|
|
01:08:15.980 --> 01:08:16.630 |
|
boundary. |
|
|
|
01:08:16.630 --> 01:08:18.832 |
|
So it ends up being like going straight |
|
|
|
01:08:18.832 --> 01:08:20.453 |
|
over and then up and then straight over |
|
|
|
01:08:20.453 --> 01:08:22.160 |
|
and then down and then a little bit |
|
|
|
01:08:22.160 --> 01:08:23.320 |
|
over and then down. |
|
|
|
01:08:23.320 --> 01:08:25.226 |
|
But they're kind of similar. |
|
|
|
01:08:25.226 --> 01:08:28.220 |
|
So the overlap of those spaces
|
|
|
01:08:28.220 --> 01:08:28.690 |
|
is similar. |
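
A small sketch of how you could draw that comparison yourself, assuming scikit-learn, matplotlib, and a made-up 2D dataset (this is not the lecture's figure): 1-NN carves the plane into nearest-sample cells, while a single-attribute tree gives axis-aligned rectangles.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_blobs(n_samples=60, centers=3, random_state=2)

# Evaluate both models on a dense grid and color each grid point by its prediction.
xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 300),
                     np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 300))
grid = np.c_[xx.ravel(), yy.ravel()]

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, model, title in [
    (axes[0], KNeighborsClassifier(n_neighbors=1), "1-NN: nearest-sample cells"),
    (axes[1], DecisionTreeClassifier(random_state=0), "Tree: axis-aligned splits"),
]:
    z = model.fit(X, y).predict(grid).reshape(xx.shape)
    ax.contourf(xx, yy, z, alpha=0.3)
    ax.scatter(X[:, 0], X[:, 1], c=y, edgecolor="k")
    ax.set_title(title)
plt.show()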
|
|
|
01:08:31.900 --> 01:08:34.170 |
|
The Decision tree also has the ability |
|
|
|
01:08:34.170 --> 01:08:36.042 |
|
for early stopping to improve
|
|
|
01:08:36.042 --> 01:08:36.520 |
|
generalization,
|
|
|
01:08:36.520 --> 01:08:38.530 |
|
while k-NN doesn't. With k-NN, you
|
|
|
01:08:38.530 --> 01:08:40.700 |
|
can increase K to try to improve |
|
|
|
01:08:40.700 --> 01:08:42.110 |
|
generalization to make it like a |
|
|
|
01:08:42.110 --> 01:08:44.506 |
|
smoother boundary, but it doesn't have |
|
|
|
01:08:44.506 --> 01:08:46.540 |
|
very many
|
|
|
01:08:46.540 --> 01:08:47.930 |
|
controls or knobs to tune. |
|
|
|
01:08:50.390 --> 01:08:53.010 |
|
And the true power of Decision trees
|
|
|
01:08:53.010 --> 01:08:54.580 |
|
arises with ensembles.
|
|
|
01:08:54.580 --> 01:08:56.920 |
|
So if you combine lots of these trees |
|
|
|
01:08:56.920 --> 01:08:59.250 |
|
together to make a prediction, then |
|
|
|
01:08:59.250 --> 01:09:01.050 |
|
suddenly it becomes very effective. |
|
|
|
01:09:01.750 --> 01:09:04.430 |
|
In practice, people don't usually use |
|
|
|
01:09:04.430 --> 01:09:06.620 |
|
just one Decision tree in machine
|
|
|
01:09:06.620 --> 01:09:07.998 |
|
learning to make an automated |
|
|
|
01:09:07.998 --> 01:09:08.396 |
|
prediction. |
|
|
|
01:09:08.396 --> 01:09:10.710 |
|
They usually use a whole bunch of them |
|
|
|
01:09:10.710 --> 01:09:12.397 |
|
and then average the results or train |
|
|
|
01:09:12.397 --> 01:09:14.870 |
|
them in a way that they
|
|
|
01:09:14.870 --> 01:09:17.126 |
|
incrementally build up your prediction. |
|
|
|
01:09:17.126 --> 01:09:18.850 |
|
And that's what I'll talk about when I |
|
|
|
01:09:18.850 --> 01:09:19.730 |
|
talk about ensembles. |
|
|
|
01:09:22.360 --> 01:09:23.750 |
|
So Decision trees are really a |
|
|
|
01:09:23.750 --> 01:09:26.740 |
|
component in two of the most successful |
|
|
|
01:09:26.740 --> 01:09:28.970 |
|
algorithms of all time, but they're not |
|
|
|
01:09:28.970 --> 01:09:29.630 |
|
the whole thing. |
|
|
|
01:09:30.940 --> 01:09:33.160 |
|
Here's an example of a Regression tree |
|
|
|
01:09:33.160 --> 01:09:34.470 |
|
for Temperature prediction. |
|
|
|
01:09:35.560 --> 01:09:37.200 |
|
Just so that I can make the tree simple |
|
|
|
01:09:37.200 --> 01:09:39.370 |
|
enough to put on a Slide, I set the Min |
|
|
|
01:09:39.370 --> 01:09:41.840 |
|
leaf size to 200.
|
|
|
01:09:41.840 --> 01:09:44.000 |
|
So I stopped splitting once the node |
|
|
|
01:09:44.000 --> 01:09:44.990 |
|
has 200 points. |
|
|
|
01:09:46.120 --> 01:09:49.080 |
|
And then I computed the root mean |
|
|
|
01:09:49.080 --> 01:09:50.450 |
|
squared error and the R2. |
|
|
|
01:09:51.680 --> 01:09:53.280 |
|
And so you can see for example like. |
|
|
|
01:09:54.430 --> 01:09:55.990 |
|
One thing that is interesting to me |
|
|
|
01:09:55.990 --> 01:09:58.278 |
|
about this is that I would have thought |
|
|
|
01:09:58.278 --> 01:09:59.872 |
|
that the temperature in Cleveland |
|
|
|
01:09:59.872 --> 01:10:01.510 |
|
yesterday would be the best predictor |
|
|
|
01:10:01.510 --> 01:10:03.150 |
|
of the temperature in Cleveland today, |
|
|
|
01:10:03.150 --> 01:10:05.056 |
|
but it's actually not the best |
|
|
|
01:10:05.056 --> 01:10:05.469 |
|
predictor. |
|
|
|
01:10:05.470 --> 01:10:09.090 |
|
So the best single criterion is the
|
|
|
01:10:09.090 --> 01:10:11.090 |
|
temperature in Chicago yesterday, |
|
|
|
01:10:11.090 --> 01:10:13.590 |
|
because I guess the weather like moves |
|
|
|
01:10:13.590 --> 01:10:15.460 |
|
from West to east a bit. |
|
|
|
01:10:16.850 --> 01:10:19.665 |
|
And I guess downward, so knowing the |
|
|
|
01:10:19.665 --> 01:10:21.477 |
|
weather in Chicago yesterday, whether |
|
|
|
01:10:21.477 --> 01:10:23.420 |
|
the
|
|
|
01:10:23.420 --> 01:10:25.530 |
|
Temperature was less than 8.4 Celsius |
|
|
|
01:10:25.530 --> 01:10:26.950 |
|
or greater than 8.4 Celsius. |
|
|
|
01:10:27.590 --> 01:10:29.040 |
|
Is the best single thing that I can |
|
|
|
01:10:29.040 --> 01:10:29.290 |
|
know. |
|
|
|
01:10:30.480 --> 01:10:31.160 |
|
And then? |
|
|
|
01:10:32.290 --> 01:10:34.920 |
|
That reduces my squared error; my initial squared error
|
|
|
01:10:34.920 --> 01:10:36.400 |
|
was 112. |
|
|
|
01:10:38.170 --> 01:10:39.680 |
|
And then if you divide it by number of |
|
|
|
01:10:39.680 --> 01:10:41.560 |
|
Samples, then.
|
|
|
01:10:42.810 --> 01:10:44.720 |
|
Yeah, divide by the number of samples
|
|
|
01:10:44.720 --> 01:10:45.960 |
|
and take square root or something to |
|
|
|
01:10:45.960 --> 01:10:47.390 |
|
get the per-sample error.
|
|
|
01:10:48.300 --> 01:10:51.010 |
|
Then depending on that answer, then I |
|
|
|
01:10:51.010 --> 01:10:53.209 |
|
check to see what is the temperature in |
|
|
|
01:10:53.210 --> 01:10:55.458 |
|
Milwaukee yesterday or what is the |
|
|
|
01:10:55.458 --> 01:10:57.140 |
|
temperature in Grand Rapids yesterday. |
|
|
|
01:10:58.060 --> 01:10:59.600 |
|
And then depending on those answers, I |
|
|
|
01:10:59.600 --> 01:11:02.040 |
|
check Chicago again, a different value |
|
|
|
01:11:02.040 --> 01:11:04.170 |
|
of Chicago, and then I get my final |
|
|
|
01:11:04.170 --> 01:11:04.840 |
|
decision here. |
|
|
|
01:11:11.120 --> 01:11:13.720 |
|
Yeah, it's like my sister lives in |
|
|
|
01:11:13.720 --> 01:11:16.140 |
|
Harrisburg, so I always know that |
|
|
|
01:11:16.140 --> 01:11:17.750 |
|
they're going to get our weather like a |
|
|
|
01:11:17.750 --> 01:11:18.240 |
|
day later. |
|
|
|
01:11:19.020 --> 01:11:20.680 |
|
So it's like, it's really warm here. |
|
|
|
01:11:20.680 --> 01:11:22.190 |
|
They're like, it's cold, it's warm |
|
|
|
01:11:22.190 --> 01:11:22.450 |
|
here. |
|
|
|
01:11:22.450 --> 01:11:23.600 |
|
Well, I guess it will be warm for you |
|
|
|
01:11:23.600 --> 01:11:24.930 |
|
tomorrow or in two days. |
|
|
|
01:11:26.130 --> 01:11:26.620 |
|
Yeah. |
|
|
|
01:11:27.540 --> 01:11:29.772 |
|
But part of the reason that I share |
|
|
|
01:11:29.772 --> 01:11:31.300 |
|
this is that the one thing that's |
|
|
|
01:11:31.300 --> 01:11:32.890 |
|
really cool about Decision trees is |
|
|
|
01:11:32.890 --> 01:11:34.910 |
|
that you get some explanation, like you |
|
|
|
01:11:34.910 --> 01:11:37.450 |
|
can understand the data better by |
|
|
|
01:11:37.450 --> 01:11:39.600 |
|
looking at the tree. This kind of
|
|
|
01:11:39.600 --> 01:11:41.860 |
|
violated my initial assumption that the |
|
|
|
01:11:41.860 --> 01:11:43.340 |
|
best thing to know for the Temperature |
|
|
|
01:11:43.340 --> 01:11:44.830 |
|
is your Temperature the previous day. |
|
|
|
01:11:45.460 --> 01:11:46.600 |
|
It's actually the temperature of |
|
|
|
01:11:46.600 --> 01:11:48.965 |
|
another city the previous day and you |
|
|
|
01:11:48.965 --> 01:11:51.100 |
|
can get you can create these rules that |
|
|
|
01:11:51.100 --> 01:11:53.030 |
|
help you understand, like how to make |
|
|
|
01:11:53.030 --> 01:11:53.740 |
|
predictions. |
|
|
|
01:11:56.130 --> 01:11:56.710 |
|
This is. |
|
|
|
01:11:56.710 --> 01:11:58.320 |
|
I'm not expecting you to read this now, |
|
|
|
01:11:58.320 --> 01:12:00.370 |
|
but this is the code to generate this |
|
|
|
01:12:00.370 --> 01:12:00.820 |
|
tree. |
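
Since the slide itself isn't reproduced in the transcript, here is a hedged reconstruction of what such code could look like with scikit-learn; the file name temperatures.csv, the column names, and the train/test split are placeholders, not the lecturer's actual setup:

import numpy as np
import pandas as pd
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor, plot_tree

# Hypothetical table: yesterday's temperatures in several cities as features,
# today's temperature in Cleveland as the regression target.
df = pd.read_csv("temperatures.csv")                  # placeholder path
X = df.drop(columns=["cleveland_today"])
y = df["cleveland_today"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# min_samples_leaf=200 mirrors the "min leaf size of 200" mentioned above.
tree = DecisionTreeRegressor(min_samples_leaf=200).fit(X_tr, y_tr)

pred = tree.predict(X_te)
rmse = np.sqrt(mean_squared_error(y_te, pred))        # per-sample error in degrees
print("RMSE:", rmse, "R2:", r2_score(y_te, pred))

plot_tree(tree, feature_names=list(X.columns), filled=True)  # draw the fitted tree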
|
|
|
01:12:06.080 --> 01:12:07.600 |
|
Alright, on to the Summary.
|
|
|
01:12:08.580 --> 01:12:10.800 |
|
The key assumptions of the
|
|
|
01:12:10.800 --> 01:12:12.570 |
|
Classification or Regression trees are |
|
|
|
01:12:12.570 --> 01:12:15.255 |
|
that Samples with similar features have |
|
|
|
01:12:15.255 --> 01:12:16.070 |
|
similar predictions. |
|
|
|
01:12:16.070 --> 01:12:17.590 |
|
So it's a similar assumption as in Nearest
|
|
|
01:12:17.590 --> 01:12:19.580 |
|
neighbor, except this time we're trying |
|
|
|
01:12:19.580 --> 01:12:21.680 |
|
to figure out how to like split up the |
|
|
|
01:12:21.680 --> 01:12:23.420 |
|
feature space to define that |
|
|
|
01:12:23.420 --> 01:12:25.159 |
|
similarity, rather than using like a |
|
|
|
01:12:25.160 --> 01:12:29.090 |
|
preset distance function like Euclidean |
|
|
|
01:12:29.090 --> 01:12:29.560 |
|
distance. |
|
|
|
01:12:30.970 --> 01:12:32.610 |
|
The model parameters are the split |
|
|
|
01:12:32.610 --> 01:12:34.560 |
|
criteria at each internal node, and then
|
|
|
01:12:34.560 --> 01:12:36.630 |
|
the final prediction at each leaf node. |
|
|
|
01:12:38.200 --> 01:12:40.020 |
|
The design choices are putting limits on the
|
|
|
01:12:40.020 --> 01:12:42.080 |
|
tree growth and what kinds of splits |
|
|
|
01:12:42.080 --> 01:12:43.545 |
|
you can consider, like whether to split |
|
|
|
01:12:43.545 --> 01:12:45.260 |
|
on one attribute or whole groups of |
|
|
|
01:12:45.260 --> 01:12:48.930 |
|
attributes, and then choosing the
|
|
|
01:12:48.930 --> 01:12:50.030 |
|
criterion for the split.
|
|
|
01:12:51.520 --> 01:12:52.120 |
|
|
|
|
|
01:12:53.300 --> 01:12:56.060 |
|
Decision trees by themselves are
|
|
|
01:12:56.060 --> 01:12:57.645 |
|
useful if you want some explainable |
|
|
|
01:12:57.645 --> 01:12:58.710 |
|
Decision function. |
|
|
|
01:12:58.710 --> 01:13:00.270 |
|
So they could be used for like medical |
|
|
|
01:13:00.270 --> 01:13:02.090 |
|
diagnosis for example, because you want |
|
|
|
01:13:02.090 --> 01:13:03.750 |
|
to be able to tell people like why. |
|
|
|
01:13:04.710 --> 01:13:07.324 |
|
Like why I know you have cancer, like |
|
|
|
01:13:07.324 --> 01:13:08.840 |
|
you don't want to just be like I use |
|
|
|
01:13:08.840 --> 01:13:10.270 |
|
this machine learning algorithm and it |
|
|
|
01:13:10.270 --> 01:13:12.070 |
|
says you have like a 93% chance of |
|
|
|
01:13:12.070 --> 01:13:13.750 |
|
having cancer and so sorry. |
|
|
|
01:13:15.000 --> 01:13:16.785 |
|
You want to be able to say like because |
|
|
|
01:13:16.785 --> 01:13:19.086 |
|
of like this thing and because of this |
|
|
|
01:13:19.086 --> 01:13:21.099 |
|
thing and because of this thing like |
|
|
|
01:13:21.100 --> 01:13:24.919 |
|
out of all these 1500 cases like 90% of |
|
|
|
01:13:24.920 --> 01:13:26.750 |
|
them ended up having cancer. |
|
|
|
01:13:26.750 --> 01:13:28.180 |
|
So we need to do a
|
|
|
01:13:28.180 --> 01:13:29.110 |
|
biopsy, right. |
|
|
|
01:13:29.110 --> 01:13:30.305 |
|
So you want some explanation. |
|
|
|
01:13:30.305 --> 01:13:31.900 |
|
A lot of times it's not always good |
|
|
|
01:13:31.900 --> 01:13:33.600 |
|
enough to have like a good prediction. |
|
|
|
01:13:35.590 --> 01:13:37.240 |
|
And they're also like really effective |
|
|
|
01:13:37.240 --> 01:13:38.520 |
|
as part of an ensemble.
|
|
|
01:13:38.520 --> 01:13:39.650 |
|
And again, I think we might see that on
|
|
|
01:13:39.650 --> 01:13:40.650 |
|
Tuesday instead of Thursday. |
|
|
|
01:13:43.150 --> 01:13:44.960 |
|
It's not like a really good predictor |
|
|
|
01:13:44.960 --> 01:13:47.320 |
|
by itself, but it is really good as |
|
|
|
01:13:47.320 --> 01:13:47.770 |
|
part of an ensemble.
|
|
|
01:13:48.500 --> 01:13:48.790 |
|
Alright. |
|
|
|
01:13:49.670 --> 01:13:51.250 |
|
So, things to remember: Decision and
|
|
|
01:13:51.250 --> 01:13:52.690 |
|
Regression trees learn to split up the |
|
|
|
01:13:52.690 --> 01:13:54.590 |
|
feature space into partitions:
|
|
|
01:13:54.590 --> 01:13:56.110 |
|
different cells with similar values. |
|
|
|
01:13:57.150 --> 01:13:59.070 |
|
And then Entropy is a really important |
|
|
|
01:13:59.070 --> 01:13:59.600 |
|
concept. |
|
|
|
01:13:59.600 --> 01:14:01.030 |
|
It's a measure of uncertainty. |
|
|
|
01:14:02.730 --> 01:14:05.170 |
|
Information gain measures how much |
|
|
|
01:14:05.170 --> 01:14:07.090 |
|
particular knowledge reduces the |
|
|
|
01:14:07.090 --> 01:14:08.710 |
|
prediction uncertainty, and that's the |
|
|
|
01:14:08.710 --> 01:14:10.260 |
|
basis for forming our tree. |
|
|
|
01:14:11.630 --> 01:14:13.650 |
|
So on Thursday I'm going to do a bit of |
|
|
|
01:14:13.650 --> 01:14:15.680 |
|
review of our concepts and then I think |
|
|
|
01:14:15.680 --> 01:14:17.200 |
|
most likely next Tuesday I'll talk |
|
|
|
01:14:17.200 --> 01:14:19.730 |
|
about ensembles and random forests and |
|
|
|
01:14:19.730 --> 01:14:21.540 |
|
give you an extensive example of how |
|
|
|
01:14:21.540 --> 01:14:23.560 |
|
it's used in the Kinect algorithm. |
|
|
|
01:14:24.800 --> 01:14:25.770 |
|
Alright, thanks everyone. |
|
|
|
01:14:25.770 --> 01:14:26.660 |
|
See you Thursday. |
|
|
|