## Supervised Learning - Classification

Seth Juarez, 7/12/2010

## Supervised Learning

In supervised learning, the algorithm is given labeled examples in order to build a model that describes the data and can also correctly (or at least adequately) label future examples. Supervised learning can be grouped into the following categories depending on the label type:

1. Binary Classification (think yes/no)
2. Multi-class classification (any answer from a finite set)
3. Regression (any answer from an infinite set)

In the machine learning library I am trying to put together, each of the three groups mentioned above maps to a distinct .NET data type:

1. Binary Classification (bool)
2. Multi-Class Classification (enum)
3. Regression (double, float, int, decimal, long, etc...)

As mentioned in my earlier post, classes (which are how we generally describe our data or examples) can be decorated as follows:

public class Student
{
    [Feature]
    public string Name { get; set; }

    [Feature]
    public int Grade { get; set; }

    [Feature]
    public double GPA { get; set; }

    [Feature]
    public int Age { get; set; }

    [Feature]
    public bool Tall { get; set; }

    [Feature]
    public int Friends { get; set; }

    [Label]
    public bool Nice { get; set; }
}


Why the breaking change from Learn to Label? In the machine learning literature, examples all have features as well as a label. The features are the data used to generalize, and the label turns out to be the answer. Notice that in the case above, we are using 6 features to learn a boolean label. The way it's been set up, this is an example of binary classification.

## Binary Classification

In the case of our student class, we are trying to learn whether a particular student is nice or not given their Name, Grade, GPA, Age, Tallness, and number of Friends. Eventually, the library will automatically detect which type of learning it needs to do, but for now, here is how we generate the model:

Student[] students = Student.GetData();

// test point
Student s = new Student {
    Name = "Seth",
    Age = 30,
    Friends = 16,
    GPA = 4.0,
    Tall = true
};

var model = new PerceptronModel<Student>();
var predictor = model.Generate(students);

s = predictor.Predict(s);


In essence, we get a bunch of students and spin up a new student on which we will run predictions. The classification algorithm used in this case is the perceptron algorithm (more on this later). Once the model is generated, we can run a prediction by simply passing in the new student; the predictor fills in the appropriate property. Magic! This is coming from a guy whose magic repertoire only includes making a coin disappear by dropping it on the floor, as well as the "I-can-pull-my-finger-off" trick that only amuses my 5-year-old. It is actually using some really simple math to find a way to separate the examples.
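To give a rough idea of the "simple math" involved (this is a generic sketch of the classic perceptron training rule, not the library's actual code): the algorithm keeps a weight vector and a bias, and whenever an example lands on the wrong side of the line, it nudges the weights toward that example until everything is classified correctly.

```python
def perceptron_train(X, y, epochs=100):
    """X: list of feature vectors, y: labels in {-1, +1}."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            activation = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * activation <= 0:  # misclassified (or on the boundary)
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
                b += yi
                errors += 1
        if errors == 0:  # no mistakes this pass: the data is separated
            break
    return w, b

def perceptron_predict(w, b, x):
    # the decision is just the sign of the weighted sum plus bias
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else -1

# tiny linearly separable example (AND-like data)
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [-1, -1, -1, 1]
w, b = perceptron_train(X, y)
```

If the examples are linearly separable, this loop is guaranteed to converge; that guarantee is the classic perceptron convergence theorem.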

## Reusing what you've learned

Once you've generated the model, it would be a waste to have to regenerate it for every subsequent run of the program. As such, there is a way to save the model and later reuse it:

var model = new PerceptronModel<Student>();
var predictor = model.Generate(students);
predictor.Save(path);
...
Student s = new Student {
    Name = "Seth",
    Age = 30,
    Friends = 16,
    GPA = 4.0,
    Tall = true
};

var model = new PerceptronModel<Student>();
var predictor = model.Load(path);
s = predictor.Predict(s);


As one of my goals is to actually help out in the understanding of these models, the serialized XML also includes some information regarding your data (although it is not needed for the actual algorithm):

<?xml version="1.0"?>
<perceptron type="ml.Tests.Model.Student">
  <weight>
    <v size="6">
      <e>435.552223888056</e>
      <e>-4.9275362318840576</e>
      <e>-123.6006996501749</e>
      <e>50.744252873563212</e>
      <e>-45.477261369315343</e>
      <e>-62.145927036481758</e>
    </v>
  </weight>
  <bias>-11.525237381309346</bias>
  <!--The following section is for informational purposes only-->
  <model>
    <features>
      <feature type="System.Boolean" converter="None">Tall</feature>
      <feature type="System.Int32" converter="None">Age</feature>
      <feature type="System.Double" converter="None">GPA</feature>
      <feature type="System.String" converter="NameToLength">Name</feature>
    </features>
    <learn>Nice</learn>
  </model>
</perceptron>


Notice that in this particular model, the weight with the largest magnitude (435.5522) corresponds to the Friends feature. This means that the number of friends (multiplied by 4, more on this too later) is a strong indicator of niceness.
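To see how those serialized numbers get used at prediction time, here is a sketch of the decision function a perceptron implies: the sign of the dot product of the weight vector with the feature vector, plus the bias. The weights and bias below are the ones from the XML; the feature ordering and the feature vector itself are hypothetical, since the converters that turn a Student into numbers live inside the library.

```python
# weight vector and bias copied from the serialized <perceptron> above
w = [435.552223888056, -4.9275362318840576, -123.6006996501749,
     50.744252873563212, -45.477261369315343, -62.145927036481758]
b = -11.525237381309346

def decide(w, b, x):
    # True (e.g. "Nice") exactly when the weighted sum of the
    # features clears the bias, i.e. w.x + b > 0
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0
```

A feature attached to a large-magnitude weight moves this sum a lot per unit of input, which is why it reads as a strong indicator one way or the other.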

## In Summary

The neatest thing about these models is how creepily accurate they are! Next time, I will try to show exactly what the perceptron (or any linear classifier, for that matter) is actually doing. Please drop me a line if you have any questions.