
February 9, 2014

K-Nearest Neighbours

I spent a few hours over the past few days working on an interactive K-NN (K-Nearest Neighbours) classification map.

Try it out here: http://projects.bitnode.co.uk/KNN/

If you’re unfamiliar with it, K-NN is a machine learning algorithm. Specifically, it can be used for classification (i.e. does this belong to class x or to class y?). The points you see on the map are training data. They essentially define the boundaries for classification, so that if we were to bring an unclassified point into the data set, we could decide which class it belongs to based upon the training data. The map shows the classification of each individual pixel in the feature space.

The algorithm for K-NN is really simple. You look at the K training points nearest to the particular input point (in this case, the location of a pixel) and average out their classes. For example, with K=3 at pixel (10,10), we compute the distance from the pixel to every point in the training set, take the nearest 3 points, and average out their classes. Here, class is a colour (red or blue), so we can sum up the RGB components of those points channel by channel and divide each sum by 3, giving us an averaged colour, and therefore a class, which we assign to the pixel in question.
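The per-pixel step above can be sketched roughly as follows. This is not the code behind the demo, just a minimal illustration; the training set, function names, and red/blue colour values are assumptions for the example.

```python
import math

# Hypothetical training set: each point is (x, y, (r, g, b)).
# Red and blue classes, as in the map.
training = [
    (10, 20, (255, 0, 0)),
    (40, 35, (255, 0, 0)),
    (80, 60, (0, 0, 255)),
    (70, 90, (0, 0, 255)),
]

def classify_pixel(px, py, points, k=3):
    """Colour a pixel by averaging the RGB of its k nearest training points."""
    # Sort the training points by Euclidean distance to the pixel
    # and keep the k nearest.
    nearest = sorted(points, key=lambda p: math.hypot(p[0] - px, p[1] - py))[:k]
    # Average each colour channel over the k neighbours.
    r = sum(p[2][0] for p in nearest) // k
    g = sum(p[2][1] for p in nearest) // k
    b = sum(p[2][2] for p in nearest) // k
    return (r, g, b)
```

Running `classify_pixel` over every pixel and painting the result produces the kind of map shown here; pixels deep inside a cluster come out a pure colour, while pixels near the boundary get a blend when their k neighbours disagree.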

Here are some interesting outputs:

 




Also an interesting bug I encountered while making this:

Try it for yourself here: http://projects.bitnode.co.uk/KNN/
