










APPLICATION OF ANALYSIS OF VARIANCE (ANOVA) IN FEATURE SELECTION

Feature selection is one of the important topics in the field of data science. It is extremely important in machine learning primarily because it is a fundamental technique for directing the use of variables to what is most efficient and effective for a given machine learning system.

What is feature selection?

In some datasets there are features that are either redundant with other features or irrelevant in the context of the dataset. Deleting those features does not noticeably hurt model accuracy, but keeping them makes the model more complex. Selecting a subset of the original set of features in the dataset is therefore useful, and this process is known as feature selection.

What is the importance of feature selection?

Machine learning works on a simple rule: if you put garbage in, you will get garbage out. Unnecessary features make the model more complex, so it is essential to select the relevant features when the number of features is large. Including unnecessary features in the model may also result in overfitting.
Feature selection methods aid in our mission to create an accurate predictive model. They help by choosing features that will give you as good or better accuracy whilst requiring less data. Feature selection methods can be used to identify and remove unneeded, irrelevant and redundant attributes from data that do not contribute to the accuracy of a predictive model or may in fact decrease the accuracy of the model. An empirical bias/variance analysis as feature selection progresses indicates that the most accurate feature set corresponds to the best bias-variance tradeoff point for the learning algorithm.

What are the different types of feature selection methods?

Various methodologies and techniques can be used to select the optimum feature space that will give the best accuracy.

  • Filter method
  • Wrapper method
  • Embedded method

Can we apply ANOVA for feature selection?

In filter-based feature selection methods we use different statistical tools to select the features with the best predictive power. We choose an appropriate statistical test that provides a score for each feature column. The features with the best scores are included in the model, while the remaining features are kept in the dataset but not used in the analysis.


ANOVA is a statistical test that examines whether there is a significant difference between the means of several groups. ANOVA partitions the total variability in the sample data into two components: variation within the classes and variation between the classes. The total variability in the dataset is described by the total sum of squares. So,

Total sum of squares (SST) = Between-group sum of squares (SSA) + Within-group sum of squares (SSE)

The between-group sum of squares is also known as the treatment sum of squares, and the within-group sum of squares is also known as the error sum of squares. The ratio SSA/SST gives the proportion of the total variance in the dataset that is explained by the feature or group of features; the features that explain the largest proportion of the variance should be retained. Suppose there are a total of $K$ treatments (groups) under a feature and each treatment has $n_i$ observations; the total number of observations is then $N=\sum_{i=1}^{K} n_i$.
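As a rough illustration of this decomposition (my own sketch, not code from the original post), the snippet below computes SST, SSA and SSE for one IRIS feature grouped by the class label; the helper name anova_sums_of_squares is a made-up name for this example.

import numpy as np
from sklearn.datasets import load_iris

def anova_sums_of_squares(x, y):
    # Split the total variability of feature x into between-group (SSA)
    # and within-group (SSE) sums of squares, grouping by the labels in y.
    grand_mean = x.mean()
    sst = ((x - grand_mean) ** 2).sum()                      # total sum of squares
    ssa = sum(len(x[y == g]) * (x[y == g].mean() - grand_mean) ** 2
              for g in np.unique(y))                         # treatment sum of squares
    sse = sst - ssa                                          # error sum of squares
    return sst, ssa, sse

X, y = load_iris(return_X_y=True)
sst, ssa, sse = anova_sums_of_squares(X[:, 2], y)            # petal length column
print(f"SST={sst:.2f}, SSA={ssa:.2f}, SSE={sse:.2f}, explained={ssa / sst:.3f}")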

The F-statistic examines whether, when we group the numerical feature by the target vector, the means for each group are significantly different. Features are ranked by sorting them according to their p-value in ascending order; if a tie occurs, they are sorted by the F-statistic in descending order. The features are then labelled 'important', 'marginal' and 'unimportant' for values above 0.998, between 0.997 and 0.998, and below 0.997, respectively.
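A minimal sketch of this ranking using scikit-learn's f_classif, which returns an F-statistic and p-value per feature (the important/marginal/unimportant cut-offs above are not part of this function and would be applied afterwards):

from sklearn.datasets import load_iris
from sklearn.feature_selection import f_classif

X, y = load_iris(return_X_y=True)
f_scores, p_values = f_classif(X, y)

# Rank features by ascending p-value, breaking ties by descending F-statistic.
order = sorted(range(X.shape[1]), key=lambda i: (p_values[i], -f_scores[i]))
for i in order:
    print(f"feature {i}: F = {f_scores[i]:.1f}, p = {p_values[i]:.2e}")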

What can be done if F-statistic is not a good measure for classification?

[Figure: a two-class scatter plot in which the horizontal feature separates the classes well while the vertical feature does not]

The horizontal feature separates the classes better than the vertical one, so it has a higher value of the F-statistic. But in some cases none of the individual features is good enough for classification, i.e., the per-feature F-statistic is not sufficient. In that case we define the F-statistic as a function of the data: we project the classes onto an axis that does not coincide with any single variable (the Fisher discriminant) and evaluate the separation along that projection.
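One way to sketch this idea in Python (my own illustration, not the original notebook's code) is to project the data onto the Fisher discriminant axes with scikit-learn's LinearDiscriminantAnalysis and recompute the F-statistic on the projected values:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import f_classif

X, y = load_iris(return_X_y=True)

# F-statistic of each raw feature, taken one axis at a time.
raw_f, _ = f_classif(X, y)

# Project onto the Fisher discriminant axes and recompute the F-statistic there.
projected = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)
proj_f, _ = f_classif(projected, y)

print("F on original axes:    ", np.round(raw_f, 1))
print("F on discriminant axes:", np.round(proj_f, 1))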


Although the IRIS dataset has only four features, I have demonstrated the process using Python just for reference.
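A sketch of how this can be done with scikit-learn's SelectKBest and the ANOVA F-test, keeping the two best IRIS features (the exact code in the original demonstration may have differed):

from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

iris = load_iris()
X, y = iris.data, iris.target

# Keep the two features with the highest ANOVA F-scores.
selector = SelectKBest(score_func=f_classif, k=2)
X_selected = selector.fit_transform(X, y)

print("F-scores:         ", selector.scores_.round(1))
print("Selected features:", [iris.feature_names[i]
                             for i in selector.get_support(indices=True)])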


I have used a Random Forest on both the original dataset and on the dataset obtained after selecting the optimum features, and the model accuracy does not suffer after deleting two features. (*I have used the model on the IRIS dataset, which is very simple and does not really need feature selection.)
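A sketch of that comparison (the 100 trees, 5-fold cross-validation and fixed random seed are my assumptions), again selecting two features with the ANOVA F-test:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
X_selected = SelectKBest(f_classif, k=2).fit_transform(X, y)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
print("All four features:    ", cross_val_score(rf, X, y, cv=5).mean().round(3))
print("Two selected features:", cross_val_score(rf, X_selected, y, cv=5).mean().round(3))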

P.S. Feature extraction is different from feature selection: feature extraction creates new features from functions of the original features, whereas feature selection returns a subset of the features.

Principal Component Analysis in Regression


We will apply PCA on the wine dataset.

Applying PCA on relevant predictors

Analyzing components of the output
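A minimal Python sketch of these two steps, assuming the red-wine-quality data has been downloaded locally as winequality-red.csv (semicolon-separated) with quality as the response; the file name and tooling here are my assumptions, and the original walkthrough may have used a different wine file. The predictors are standardized before PCA, and the component loadings are inspected afterwards.

import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Assumed local copy of the red-wine-quality data (semicolon-separated).
wine = pd.read_csv("winequality-red.csv", sep=";")
X = wine.drop(columns="quality")        # predictors
y = wine["quality"]                     # response

# PCA is scale-sensitive, so standardize the predictors first.
scaler = StandardScaler().fit(X)
pca = PCA().fit(scaler.transform(X))
scores = pca.transform(scaler.transform(X))   # principal component scores

# Loadings of the first two components on the original predictors.
print(pd.DataFrame(pca.components_[:2], columns=X.columns, index=["PC1", "PC2"]).round(2))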






Creating biplot
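Continuing with the pca, scores and X objects from the sketch above, a basic matplotlib biplot (my own approximation of this step) overlays the loading vectors on the scores of the first two components; the arrow scaling factor of 3 is arbitrary.

import matplotlib.pyplot as plt

plt.figure(figsize=(7, 6))
plt.scatter(scores[:, 0], scores[:, 1], s=10, alpha=0.4)        # observations
for i, name in enumerate(X.columns):
    plt.arrow(0, 0, pca.components_[0, i] * 3, pca.components_[1, i] * 3,
              color="red", head_width=0.08)                     # loading vectors
    plt.text(pca.components_[0, i] * 3.3, pca.components_[1, i] * 3.3, name, color="red")
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.title("Biplot of the wine predictors")
plt.show()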

Calculating proportion of variance

Creating scree plot and cumulative plots
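Again continuing from the fitted pca object above, the proportion of variance explained by each component and its cumulative sum can be read from explained_variance_ratio_ and drawn as scree and cumulative-variance plots:

import numpy as np
import matplotlib.pyplot as plt

prop_var = pca.explained_variance_ratio_     # proportion of variance per component
cum_var = np.cumsum(prop_var)                # cumulative proportion
print(prop_var.round(3))

components = np.arange(1, len(prop_var) + 1)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(components, prop_var, "o-")         # scree plot
ax1.set(xlabel="Principal component", ylabel="Proportion of variance", title="Scree plot")
ax2.plot(components, cum_var, "o-")          # cumulative variance plot
ax2.set(xlabel="Principal component", ylabel="Cumulative proportion", title="Cumulative variance")
plt.tight_layout()
plt.show()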




Building model using PC1 to PC4
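A hedged sketch of this step, continuing from the earlier snippets (StandardScaler and PCA already imported): split the data first, fit the scaler and PCA on the training portion only, keep the first four components, and fit a linear regression on them. The 70/30 split and random seed are my assumptions.

from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Hold out a test set before fitting the scaler/PCA so the test data stays unseen.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

scaler = StandardScaler().fit(X_train)
pca4 = PCA(n_components=4).fit(scaler.transform(X_train))   # keep PC1 to PC4
Z_train = pca4.transform(scaler.transform(X_train))

model = LinearRegression().fit(Z_train, y_train)
print("Train R^2:", round(model.score(Z_train, y_train), 3))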

Making Predictions

We cannot convert the test data into principal components by fitting PCA on it separately. Instead, we have to apply the same transformations to the test data that we applied to the training data.

Making predictions
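Continuing the sketch above, the held-out data is pushed through the same fitted scaler and PCA before predicting, instead of fitting a new PCA on the test set:

# Apply the scaler and PCA fitted on the training data to the test data.
Z_test = pca4.transform(scaler.transform(X_test))
y_pred = model.predict(Z_test)
print("Test R^2:", round(model.score(Z_test, y_test), 3))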