Ensemble learners

November 30, 2019 | By Stanley Isaacs


Creating an ensemble of learners is one way to make the learners you've already got better. So we're not talking about creating a new algorithm, but instead about assembling several different algorithms, or several different models, into an ensemble learner. One thing I want to emphasize here is that you can take what you learn about ensemble learners and plug it right into what you're already doing with your KNN and linear regression models.

Now, what we've been doing so far is that we've had one kind of learning method, say KNN: we plug our data in and we learn a model. We can query our model with an X and it will give us a Y. So this is not an ensemble learner, this is just a single learner.
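To make that single-learner picture concrete, here is a minimal sketch in Python, assuming scikit-learn and using a KNN regressor as the learning method (the data arrays are made-up placeholders, not anything from this post):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Placeholder training data: X holds the features, Y the values to predict.
X_train = np.random.rand(100, 3)
Y_train = np.random.rand(100)

# One learning method (KNN), one model, trained on the data.
model = KNeighborsRegressor(n_neighbors=3)
model.fit(X_train, Y_train)

# Query the model with an X; it gives us back a Y.
x_query = np.random.rand(1, 3)
y = model.predict(x_query)
```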
The idea with ensemble learners is that we have several different learners. In addition to the KNN model we already have, we might have a linear regression based model, a decision tree based model, and a support vector machine based model, and you could continue on with any number of different algorithms. They're all trained on the same data, and so now we have, in this case, four different models.
To query this ensemble of learners, we query each model by itself and combine the answers. If we want to query the ensemble with X, we plug the same X into each model and a Y comes out of each one. So we have a Y output from each of these models; how do we combine them? If we're doing classification, where for instance we're trying to identify what something is, we might have each of these Ys vote on what it is.
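As a rough sketch of combining by vote for classification (the particular learners and the data below are placeholder choices of mine, assuming scikit-learn):

```python
import numpy as np
from collections import Counter
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

X_train = np.random.rand(100, 3)        # placeholder features
Y_train = np.random.randint(0, 2, 100)  # placeholder class labels

# Several different algorithms, all trained on the same data.
models = [KNeighborsClassifier(n_neighbors=3), DecisionTreeClassifier(), SVC()]
for m in models:
    m.fit(X_train, Y_train)

# Query each model with the same X, then let the Ys vote.
x_query = np.random.rand(1, 3)
votes = [m.predict(x_query)[0] for m in models]
y = Counter(votes).most_common(1)[0][0]  # majority-vote answer
```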
But since we're doing regression, the typical thing to do is take the mean of the Ys, and that is the result for the ensemble learner. We can then test the overall ensemble learner using the test data we set aside.
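Putting it together for regression, a mean-combining ensemble might look like this sketch (the four learners and the data are again placeholder choices, assuming scikit-learn):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR

X_train = np.random.rand(100, 3)  # placeholder training data
Y_train = np.random.rand(100)

# Four different algorithms, all trained on the same data.
models = [KNeighborsRegressor(n_neighbors=3), LinearRegression(),
          DecisionTreeRegressor(), SVR()]
for m in models:
    m.fit(X_train, Y_train)

# Query each model with the same X and take the mean of the four Ys.
x_query = np.random.rand(1, 3)
y = np.mean([m.predict(x_query)[0] for m in models])
```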
Why ensembles? Why do we use them, and why might they be better? Well, there are a few reasons. First of all, ensembles often have lower error than any individual method by itself. Second, ensembles offer less overfitting: the ensemble of learners typically does not overfit as much as any individual learner by itself.
Now why is that? Here's at least an intuitive answer. Each kind of learner you might use has a sort of bias, and linear regression is the easiest place to explain what I mean by bias: with linear regression, our bias is the assumption that the data is linear. KNN has its own kind of bias, and decision trees have their own kind of bias, but when you put them together you tend to reduce the overall bias, because the individual biases are fighting against each other in some sort of way. Anyway, that's what an ensemble learner looks like when we use multiple types of learners.