Assignment 2
Due: Tuesday, October 13
For this assignment you will experiment with various classification models using
subsets of some real-world data sets. In particular, you will use the
K-Nearest-Neighbor algorithm to classify text documents, experiment with and
compare classifiers that are part of the
scikit-learn machine learning
package for Python, and use some additional preprocessing capabilities of the pandas and
scikit-learn packages.
Please also see the video:
Notes on Assignment 2 (55 mins) and the corresponding PPT slides:
Notes on Assignment 2.
- K-Nearest-Neighbor (KNN) classification on Newsgroups [Dataset:
newsgroups.zip]
For this problem you will use a subset of the 20 Newsgroup data set.
The full data set contains 20,000 newsgroup documents, partitioned (nearly)
evenly across 20 different newsgroups, and has often been used for
experiments in text applications of machine learning techniques, such as
text classification and text clustering (see the
description of the full
dataset). The assignment data set contains a subset of 1000 documents
and a vocabulary of 5,500 terms. Each document belongs to one of two classes:
Hockey (class label 1) and Microsoft Windows (class label 0). The data has
already been split (80%, 20%) into training and test data. The class labels
for the training and test data are also provided in separate files. The
training and test data are in term x document format, containing a row for each term in the vocabulary and a
column for each document. The values in the table represent raw term
occurrence counts. The data has already been preprocessed to extract tokens, remove
stop words, and perform stemming (so the terms in the vocabulary are stems, not full
terms). Please be sure to read the readme.txt file in the
distribution.
Your tasks in this problem are the following [Note: for this
problem you should not use scikit-learn for classification; instead, create your
own KNN classifier. You may use Pandas, NumPy, standard Python libraries, and Matplotlib.]
- Create your own KNN classifier function. Your classifier
should take as input the training data matrix, the training labels, the
instance to be classified, and the value of K, and should return the predicted
class for the instance and the indices of the top K neighbors. Your classifier should work
with Euclidean distance as well as Cosine distance (which is 1 minus the
Cosine similarity). You may create
two separate classifiers, or add the distance metric as a parameter of the
classifier function (an example implementation of a KNN classifier was
provided in class examples; a minimal sketch also appears after this list).
- Create an evaluation function to
measure the accuracy of your classifier. This function should call the classifier function from part a on each test instance
and, in each case, compare the actual test class label to the predicted class
label. It should take as input the training data, the training labels, the
test instances, the labels for the test instances, and the value of K. Your
evaluation function should return the Classification Accuracy (the ratio
of correct predictions to the number of test instances); a sketch appears
after this list. [See class notes:
Classification & Prediction - Review of
Basic Concepts.]
- Run your evaluation function on a range of values for K
from 5 to 100 (in increments of 5) in order to
compare accuracy values for different numbers of neighbors. Do this for both
the Euclidean distance and the Cosine similarity versions of the classifier.
Present the results as a graph with K on the x-axis
and the evaluation metric (accuracy) on the y-axis.
Use a single plot to compare the two versions of the classifier
(Euclidean distance version vs. Cosine similarity version).
- Next, modify the training and test data sets so that
term weights are converted to TFxIDF weights (instead of raw term
frequencies). [See class notes on Text Categorization; a TFxIDF sketch also
appears after this list.] Then rerun your
evaluation (only for the Cosine similarity version of the classifier) on the same range of K values as above and compare the results to
the results without TFxIDF weights.
- Create a new classifier based on the Rocchio Method adapted
for text categorization [see class notes on Text Categorization and the
Rocchio sketch after this list].
You should separate the training function from the classification function. The training part of the classifier can be
implemented as a function that takes as input the training data matrix and the training labels,
returning the prototype vectors for each class. The classification part can
be implemented as another function that takes as input the prototypes
returned by the training function and the instance to be classified. This
function should measure the Cosine
similarity of the test instance to each prototype vector. Your output should
indicate the predicted class for the test instance and the similarity values of the
instance to each of the category prototypes. Finally, use your evaluation
function to compare your results to
the best KNN results you obtained in part d. [Note: your functions
should work regardless of the number of categories (class labels) and should
not be limited to a two-class categorization scenario.]
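A minimal sketch of such a classifier is given below. It assumes the data have been loaded into NumPy arrays and transposed so that rows are documents and columns are terms; all names here are illustrative, not prescribed:

```python
import numpy as np

def knn_classify(train_data, train_labels, instance, k, metric="euclidean"):
    """Return (predicted_class, indices_of_top_k_neighbors).

    train_data   : 2-D NumPy array, one row per training document
    train_labels : 1-D NumPy array of class labels aligned with train_data
    instance     : 1-D NumPy array, the document to classify
    metric       : "euclidean" or "cosine" (cosine distance = 1 - cosine similarity)
    """
    if metric == "euclidean":
        dists = np.sqrt(((train_data - instance) ** 2).sum(axis=1))
    else:
        norms = np.linalg.norm(train_data, axis=1) * np.linalg.norm(instance)
        dists = 1 - (train_data @ instance) / norms
    neighbors = np.argsort(dists)[:k]              # indices of the K closest documents
    classes, counts = np.unique(train_labels[neighbors], return_counts=True)
    return classes[np.argmax(counts)], neighbors   # majority vote among the K neighbors
```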
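A matching evaluation function and comparison plot for parts b and c might look like the following sketch; it reuses knn_classify from the sketch above and assumes train_data, train_labels, test_data, and test_labels have already been loaded as NumPy arrays:

```python
import matplotlib.pyplot as plt

def knn_evaluate(train_data, train_labels, test_data, test_labels, k, metric="euclidean"):
    """Classify every test instance and return the classification accuracy."""
    correct = sum(
        knn_classify(train_data, train_labels, inst, k, metric)[0] == actual
        for inst, actual in zip(test_data, test_labels)
    )
    return correct / len(test_labels)

ks = range(5, 101, 5)
acc_euc = [knn_evaluate(train_data, train_labels, test_data, test_labels, k, "euclidean") for k in ks]
acc_cos = [knn_evaluate(train_data, train_labels, test_data, test_labels, k, "cosine") for k in ks]

# single plot comparing the two versions of the classifier
plt.plot(ks, acc_euc, marker="o", label="Euclidean")
plt.plot(ks, acc_cos, marker="s", label="Cosine")
plt.xlabel("K")
plt.ylabel("Accuracy")
plt.legend()
plt.show()
```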
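For part d, one common TFxIDF formulation uses idf = log2(N / df); check the class notes for the exact variant expected. A sketch, again assuming documents x terms matrices (train_counts and test_counts stand for the loaded, transposed count matrices):

```python
import numpy as np

def tfidf_transform(counts, idf=None):
    """Convert a documents x terms count matrix to TFxIDF weights.

    If idf is None, it is computed from `counts`; pass the training IDF
    vector when transforming the test set so that both sets use the same
    document frequencies.
    """
    if idf is None:
        n_docs = counts.shape[0]
        df = np.count_nonzero(counts, axis=0)       # document frequency of each term
        idf = np.log2(n_docs / np.maximum(df, 1))   # guard against df = 0
    return counts * idf, idf

train_tfidf, idf = tfidf_transform(train_counts)    # train_counts assumed loaded
test_tfidf, _ = tfidf_transform(test_counts, idf)
```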
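For part e, the Rocchio method represents each class by the mean (prototype) of its training vectors and predicts the class whose prototype is most similar to the test instance. A sketch that works for any number of classes:

```python
import numpy as np

def rocchio_train(train_data, train_labels):
    """Return a dict mapping each class label to its prototype (mean) vector."""
    return {c: train_data[train_labels == c].mean(axis=0)
            for c in np.unique(train_labels)}

def rocchio_classify(prototypes, instance):
    """Return (predicted_class, {class: cosine similarity to its prototype})."""
    sims = {c: (proto @ instance) /
               (np.linalg.norm(proto) * np.linalg.norm(instance))
            for c, proto in prototypes.items()}
    return max(sims, key=sims.get), sims
```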
- Classification using scikit-learn [Dataset:
bank_data.csv]
For this problem you will experiment with various classifiers provided as
part of the scikit-learn (sklearn) machine learning module, as well
as with some of its preprocessing and model evaluation capabilities. [Note:
This module is already part of the Anaconda distribution.] You will work with a modified subset of a real data set of customers for a
bank. This is the same data set used in Assignment 1. The data is provided in a CSV-formatted file with the first row
containing the attribute names. The descriptions of the different
fields in the data are provided in this
document.
Your tasks in this problem are the following:
- Load and preprocess the data using Pandas or similar
tools. Specifically, you need to separate the target attribute
("pep")
from the portion of the data to be used for training and testing. You will
need to convert the selected dataset into the Standard Spreadsheet format (i.e.,
convert categorical attributes into numeric ones by creating dummy variables).
Finally, split the transformed data into training and test sets
(using an 80%-20% randomized split). [Review Jupyter Notebooks
from class to see examples of how to perform these tasks; a sketch also appears after this list.]
- Run scikit-learn's KNN classifier on the test set. Note: in the case of
KNN, you should first normalize the data so that all attributes are on the
same scale (normalize so that the values are between 0 and 1; see the sketch
after this list). Generate the
confusion matrix (visualize it using Matplotlib), as well as the
classification report. Experiment with different values of K and the weight
parameter (i.e., with or without distance weighting) for KNN to see if you can improve accuracy (you do not need to provide the
details of all of your experimentation, but provide a short discussion of what
parameters worked
best as well as your final results).
- Using the non-normalized training and
test data, perform classification using scikit-learn's decision tree classifier
(with the default parameters). As above, generate the confusion
matrix, classification report, and average accuracy score for the classifier.
Compare the average accuracy scores on the test and the
training data sets. What does the comparison tell you in terms of the
bias-variance trade-off?
- Create another decision
tree model (trained on the non-normalized training data) using "entropy"
as the selection criterion, min_samples_split=10, and
max_depth=5. For this model, generate a
visualization of the tree and submit it as a separate file (png, jpg, or
pdf) or embed it in the Jupyter Notebook (a visualization sketch appears after this list).
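The preprocessing in part a might look like the following sketch; the file name and the "pep" target column come from the assignment description, and random_state is an arbitrary choice for reproducibility:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

bank = pd.read_csv("bank_data.csv")
y = bank["pep"]                                   # target attribute
X = pd.get_dummies(bank.drop(columns=["pep"]))    # dummy variables for categoricals

# 80%-20% randomized split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)
```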
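For part b, min-max scaling followed by a first KNN run could look like this sketch; the parameter values shown are starting points for experimentation, not recommended settings (ConfusionMatrixDisplay requires a reasonably recent scikit-learn; confusion_matrix plus plt.imshow works in older versions):

```python
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report, ConfusionMatrixDisplay

scaler = MinMaxScaler()                       # rescale all attributes to [0, 1]
X_train_n = scaler.fit_transform(X_train)     # fit the scaler on training data only
X_test_n = scaler.transform(X_test)

knn = KNeighborsClassifier(n_neighbors=5, weights="distance")
knn.fit(X_train_n, y_train)
y_pred = knn.predict(X_test_n)

print(classification_report(y_test, y_pred))
ConfusionMatrixDisplay.from_predictions(y_test, y_pred)
plt.show()
```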
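For parts c and d, a sketch of the two decision tree models, continuing from the variables above (plot_tree is one way to produce the visualization; export_graphviz is an alternative):

```python
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier, plot_tree

# part c: default parameters on the non-normalized data
dt = DecisionTreeClassifier()
dt.fit(X_train, y_train)
print("training accuracy:", dt.score(X_train, y_train))
print("test accuracy:    ", dt.score(X_test, y_test))

# part d: entropy criterion with the specified pre-pruning parameters
dt2 = DecisionTreeClassifier(criterion="entropy", min_samples_split=10, max_depth=5)
dt2.fit(X_train, y_train)

plt.figure(figsize=(20, 10))
plot_tree(dt2, feature_names=list(X.columns), class_names=True, filled=True)
plt.savefig("tree.png")   # or display it inline in the notebook
```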
- Data Analysis and Predictive Modeling on
Census data [Dataset:
adult-modified.csv]
For this problem you will use a simplified version of the
Adult Census Data Set. In the subset provided here, some of the
attributes have been removed and some preprocessing has been performed.
Your tasks in this problem are the following:
- Preprocessing and data analysis:
- Examine the data for missing values. In the case of categorical
attributes, remove instances with missing values. In the case of numeric
attributes, impute and fill in the missing values using the attribute
mean (a sketch of this step appears after this list).
- Examine the characteristics of the attributes, including summary
statistics for the attributes, histograms illustrating the
distributions of numeric attributes, and bar graphs showing value counts
for categorical attributes.
- Perform the following cross-tabulations
(including generating bar charts): education+race, work-class+income,
work-class+race, and race+income. In the latter case (race+income), also
create a table or chart showing the percentage of each race category that
falls in the low-income group (see the crosstab sketch after this list).
Discuss your observations from this analysis.
- Compare and contrast the characteristics of the low-income
and high-income categories across the different attributes.
- Predictive Modeling and Model Evaluation:
- Using either Pandas or Scikit-learn, create dummy variables for the
categorical attributes. Then separate the target attribute ("income_>50K")
from the attributes used for training. [Note: you need to drop "income_<=50K",
which is also created as a dummy variable in the earlier steps.] Split the
data into training and test sets (80%-20% split).
- Use
scikit-learn to build classifiers using Naive Bayes (Gaussian), decision
tree (using "gini" as the selection criterion), and linear discriminant
analysis (LDA). For each of these, perform 10-fold cross-validation
on the training data
(using the cross-validation module in scikit-learn) and report the overall
average accuracy. Compare this to the model accuracy on the training
data. Finally, run your model on the set-aside test data (a cross-validation
sketch appears after this list).
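A sketch of the missing-value handling for the preprocessing step; it assumes the file name from the assignment and that missing values are encoded as "?" in the CSV (adjust na_values to match the actual file):

```python
import pandas as pd

adult = pd.read_csv("adult-modified.csv", na_values="?")

# numeric attributes: impute missing values with the attribute mean
num_cols = adult.select_dtypes(include="number").columns
adult[num_cols] = adult[num_cols].fillna(adult[num_cols].mean())

# categorical attributes: remove instances that still have missing values
adult = adult.dropna()
```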
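The cross-tabulations can be produced with pd.crosstab. A sketch for the race+income case, continuing from the adult DataFrame above and assuming the columns are named "race" and "income" (check the actual column names in the file):

```python
import matplotlib.pyplot as plt

# counts of income values within each race category, plotted as a bar chart
ri = pd.crosstab(adult["race"], adult["income"])
ri.plot(kind="bar")
plt.show()

# percentage of each race category falling in each income group;
# normalize="index" converts each row to proportions
ri_pct = pd.crosstab(adult["race"], adult["income"], normalize="index") * 100
print(ri_pct)
```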
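For the model-building step, a sketch of 10-fold cross-validation for the three classifiers; X_train, y_train, X_test, and y_test are assumed to come from a get_dummies/train_test_split step like the bank-data sketch earlier:

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

models = {
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(criterion="gini"),
    "LDA": LinearDiscriminantAnalysis(),
}

for name, model in models.items():
    cv_scores = cross_val_score(model, X_train, y_train, cv=10)  # 10-fold CV
    model.fit(X_train, y_train)
    print(f"{name}: CV mean = {cv_scores.mean():.3f}, "
          f"train = {model.score(X_train, y_train):.3f}, "
          f"test = {model.score(X_test, y_test):.3f}")
```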
Notes on Submission: You must submit your Jupyter Notebooks
(similar to examples in class), which include your documented code, the results of
your interactions, and any discussions or explanations of the results. Please
organize your notebook so that it's clear what parts of the notebook correspond
to which problems in the assignment. Please submit the notebook in both IPYNB
and HTML formats (along with any auxiliary files). Your assignment should be
submitted via D2L.