APPLICATION OF ANALYSIS OF VARIANCE (ANOVA) IN FEATURE SELECTION

                                              - AYAN KUNDU

Feature selection is one of the important topics in the field of data science. It is extremely important in machine learning because it serves as a fundamental technique for directing the use of variables towards what is most efficient and effective for a given machine learning system.

What is feature selection?

In some datasets there may be features that are either redundant with other features or irrelevant in the context of the dataset. Deleting those features does not hamper model accuracy much, but keeping them makes the model more complex. The process of selecting a subset of the full set of features in the dataset is known as feature selection.

What is the importance of feature selection?

Machine learning works on a simple rule: if you put garbage in, you get garbage out. Unnecessary features make the model more complex, so it is essential to select the right features when the number of features is large; including unnecessary features may also result in overfitting.
Feature selection methods aid in our mission to create an accurate predictive model. They help by choosing features that give as good or better accuracy while requiring less data. Feature selection methods can be used to identify and remove unneeded, irrelevant and redundant attributes that do not contribute to the accuracy of a predictive model, or may in fact decrease it. An empirical bias/variance analysis as feature selection progresses indicates that the most accurate feature set corresponds to the best bias-variance trade-off point for the learning algorithm.

What are the different types of feature selection method?

Various methodologies and techniques can be used to select the feature space that gives the best accuracy. The three main families are listed below; a brief illustrative sketch of each follows the list.

  • Filter method
  • Wrapper method
  • Embedded method
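
A rough sketch of what each family looks like in scikit-learn (the particular estimators and parameter values are illustrative choices, not the only options):

# Illustrative sketch of the three families of feature selection methods
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif, RFE
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# Filter: score every feature with a statistical test (here the ANOVA F-test) and keep the k best
X_filter = SelectKBest(f_classif, k=2).fit_transform(X, y)

# Wrapper: repeatedly fit a model and eliminate the weakest features (recursive feature elimination)
X_wrapper = RFE(LogisticRegression(max_iter=1000), n_features_to_select=2).fit_transform(X, y)

# Embedded: the model itself produces feature importances as a by-product of training
importances = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y).feature_importances_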

Can we apply ANOVA for feature selection?

In filter-based feature selection we use statistical tools to select the features with the best predictive power. We pick an appropriate statistical test that provides a score for each feature column. The features with the best scores are included in the model; the other features are kept in the dataset but not used for the analysis.


ANOVA is a statistical test that examines whether there is a significant difference between the means of several groups. ANOVA partitions the total variability in the sample data into two components: variation within the classes and variation between the classes. The total variability in the dataset is described by the total sum of squares, so
Total sum of squares (SST) = Between-group sum of squares (SSA) + Within-group sum of squares (SSE)
The between-group sum of squares is also known as the treatment sum of squares, and the within-group sum of squares is also known as the error sum of squares. The ratio SSA/SST tells us the proportion of the total variance in the dataset that is explained by the feature grouping; the features that explain the largest proportion of the variance should be retained. Suppose there are a total of $K$ treatments (groups) and treatment $i$ has $n_i$ observations, so the total number of observations is $N=\sum_{i=1}^{K} n_i$. Then

                                       F-statistic = (SSA/(K-1)) / (SSE/(N-K))
                                       p-value = P[F(K-1, N-K) > F-statistic]

The F-statistic examines whether, when we group the numerical feature by the target vector, the means for each group are significantly different. Features are ranked by sorting them according to p-value in ascending order; if ties occur, they are sorted by F-statistic in descending order. The features can then be labelled 'important', 'marginal' and 'unimportant', with values above 0.998, between 0.997 and 0.998, and below 0.997 respectively.
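
To make the formula concrete, here is a minimal sketch (assuming NumPy, SciPy and scikit-learn are available; sepal length from the iris data is used as the feature and the species as the groups) that computes SSA, SSE and the F-statistic by hand and checks the result against scipy.stats.f_oneway:

# Minimal sketch: one-way ANOVA F-statistic computed from SSA and SSE by hand
import numpy as np
from scipy import stats
from sklearn.datasets import load_iris

iris = load_iris()
feature = iris.data[:, 0]                              # sepal length
groups = [feature[iris.target == c] for c in np.unique(iris.target)]

grand_mean = feature.mean()
K, N = len(groups), len(feature)

# Between-group (treatment) sum of squares: SSA = sum of n_i * (group mean - grand mean)^2
SSA = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Within-group (error) sum of squares: SSE = sum of squared deviations inside each group
SSE = sum(((g - g.mean()) ** 2).sum() for g in groups)

F = (SSA / (K - 1)) / (SSE / (N - K))
p = stats.f.sf(F, K - 1, N - K)                        # P[F(K-1, N-K) > F-statistic]

print(F, p)
print(stats.f_oneway(*groups))                         # should agree with the hand computation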

What can be done if the F-statistic is not a good measure for classification?

[Figure: two classes plotted against two features, with the horizontal feature separating the classes better than the vertical one]

The horizontal feature separates the classes better than the vertical one, so it has a higher value of the F-statistic. In some cases, however, none of the individual features is good enough for classification, i.e. the F-statistic of each single feature is low. In that case we define the F-statistic on a function of our data: we project the classes onto an axis that is not one of the original variables (the Fisher discriminant) and compute the F-statistic on that projection.

[Figure: projection of the classes onto an axis that is not one of the original features (Fisher discriminant)]
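
One practical way to construct such a projection, sketched below, is to use scikit-learn's LinearDiscriminantAnalysis as a stand-in for the Fisher discriminant: project the data onto the learned discriminant axis and then compute the F-statistic on the projected values.

# Sketch: project onto a discriminant axis, then score the projection with the ANOVA F-test
import numpy as np
from scipy import stats
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

iris = load_iris()
X, y = iris.data[:, :2], iris.target                   # pretend only two original features exist

# The discriminant axis is a learned linear combination of the features,
# not one of the original variables
projection = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y).ravel()

# F-statistic of the projected values, grouped by class
groups = [projection[y == c] for c in np.unique(y)]
print(stats.f_oneway(*groups))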

Although the IRIS dataset has only four features, I have demonstrated the process using Python just for reference.

In [6]:
# Load the necessary libraries
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
In [7]:
# Load iris data
iris = load_iris()

# Create features and target
x = iris.data
y = iris.target
In [8]:
iris
Out[8]:
{'data': array([[5.1, 3.5, 1.4, 0.2],
        [4.9, 3. , 1.4, 0.2],
        [4.7, 3.2, 1.3, 0.2],
        ...,
        [6.2, 3.4, 5.4, 2.3],
        [5.9, 3. , 5.1, 1.8]]),
 'target': array([0, 0, 0, ..., 2, 2, 2]),
 'target_names': array(['setosa', 'versicolor', 'virginica'], dtype='<U10'),
 'DESCR': 'Iris Plants Database\n====================\n... (150 instances, 3 classes of 50 each, 4 numeric predictive attributes) ...',
 'feature_names': ['sepal length (cm)',
  'sepal width (cm)',
  'petal length (cm)',
  'petal width (cm)']}
In [9]:
# Create a SelectKBest object to select the features with the two best ANOVA F-values
fvalue_selector = SelectKBest(f_classif, k=2)

# Apply the SelectKBest object to the features and target
X_kbest = fvalue_selector.fit_transform(x, y)
In [10]:
# Show results
print('Original number of features:', x.shape[1])
print('Reduced number of features:', X_kbest.shape[1])
Original number of features: 4
Reduced number of features: 2
In [11]:
X_kbest
Out[11]:
array([[1.4, 0.2],
       [1.4, 0.2],
       [1.3, 0.2],
       ...,
       [5.4, 2.3],
       [5.1, 1.8]])
In [13]:
# Implementing Random Forest on the original dataset and calculating accuracy
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
x = iris.data
y = iris.target
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=100)
model = RandomForestClassifier(n_estimators=130, max_features=None)
model.fit(x_train, y_train)
model.score(x_train, y_train)   # training accuracy (not printed here)
pred = model.predict(x_test)
accuracy = accuracy_score(y_test, pred)
print(accuracy)
0.9666666666666667
In [14]:
# Implementing Random Forest on the dataset after feature selection and calculating accuracy
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
x = X_kbest
y = iris.target
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=100)
model = RandomForestClassifier(n_estimators=150, max_features=None)
model.fit(x_train, y_train)
model.score(x_train, y_train)   # training accuracy (not printed here)
pred = model.predict(x_test)
accuracy = accuracy_score(y_test, pred)
print(accuracy)
0.9666666666666667

I have used Random Forest on both the original dataset and on the dataset after selecting the optimum features, and the model accuracy is not hampered by deleting two features. (Note that the IRIS dataset is very simple, so there is no real need for feature selection here; it is used purely for demonstration.)
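
To see which columns SelectKBest actually kept (and their F-scores and p-values), something along the following lines should work; get_support, scores_ and pvalues_ are standard attributes of the fitted selector.

# Inspect the fitted selector: F-scores, p-values and the chosen columns
import numpy as np

print(fvalue_selector.scores_)                         # ANOVA F-value of each feature
print(fvalue_selector.pvalues_)                        # corresponding p-values
selected = np.array(iris.feature_names)[fvalue_selector.get_support()]
print(selected)                                        # for iris these are the petal length/width columns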

P.S. Feature extraction is different from feature selection: feature extraction creates new features from functions of the original features, whereas feature selection returns a subset of the features.
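
As a minimal illustration of that difference (a sketch only, not part of the analysis above): PCA builds new features that are linear combinations of all the original columns, whereas SelectKBest simply returns a subset of the existing columns.

# Feature extraction (new combined features) vs feature selection (a subset of the original columns)
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)
X_extracted = PCA(n_components=2).fit_transform(X)            # two new features, combinations of all four
X_selected = SelectKBest(f_classif, k=2).fit_transform(X, y)  # two of the original four columns
print(X_extracted[0], X_selected[0])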

Principal Component Analysis in Regression


We will apply PCA on the wine dataset.

wine = read.csv("https://storage.googleapis.com/dimensionless/Analytics/wine.csv")

Applying PCA on relevant predictors

pca <- prcomp(wine[, 3:7], scale. = TRUE)

Analyzing components of the output

#Std Dev
pca$sdev
## [1] 1.45691795 1.16557440 0.99526725 0.72328569 0.07160523
# Loadings
pca$rotation
##                     PC1        PC2        PC3         PC4         PC5
## WinterRain   0.09395915  0.7384046 -0.1256430 -0.65563602  0.01689675
## AGST        -0.32836427 -0.3806578  0.6264975 -0.59544647  0.01486508
## HarvestRain  0.03679770 -0.5244412 -0.7238807 -0.44675373 -0.00390888
## Age         -0.66342357  0.1258942 -0.1914225  0.10156506  0.70502609
## FrancePop    0.66472828 -0.1377328  0.1762640 -0.07536942  0.70881341
# Principal Components
pca$x
##               PC1         PC2         PC3         PC4          PC5
##  [1,] -2.66441523  0.01812071 -0.19940771 -0.26187403  0.017848626
##  [2,] -2.31090775  1.27230388  0.17749206  0.09070174 -0.006316369
##  [3,] -2.31872688 -0.42425903  0.34077385  0.31372038 -0.067315308
##  [4,] -1.55060520 -0.23588712 -0.23518124  1.69094289 -0.101731306
##  [5,] -1.35803408 -0.06913418 -0.82614968  0.15237445 -0.073508609
##  [6,] -1.77313036 -1.24596188  0.30308288 -0.33015372 -0.062254812
##  [7,] -0.83734190  0.14770821 -1.90545030 -1.40861601 -0.059226672
##  [8,] -1.17507833  1.74417439  1.38340778 -1.06038701 -0.003711288
##  [9,] -0.49978424  1.43298732  0.48615479  0.39280758  0.049944991
## [10,] -0.01341322  0.49601115 -0.91321708  0.70204963  0.066036711
## [11,] -0.75505205 -1.14907041  1.34584178  0.68608150  0.093179804
## [12,]  0.56223704 -0.19991293 -2.22360713  0.32097131  0.062303660
## [13,]  0.22813081  1.59605527  0.45968547 -0.71903876  0.121180565
## [14,]  0.47318950  0.92227025  0.01377674 -0.14755601  0.084300103
## [15,]  0.65743468 -0.89650446 -1.56747979 -0.66837607  0.043747752
## [16,]  0.60397262 -0.98362933 -0.69683131 -0.53748100  0.042134220
## [17,]  0.67149628  0.27205617  0.92090308  0.03475269  0.053849458
## [18,]  0.76315093 -0.37837929  0.90694860  0.13667046  0.053372925
## [19,]  1.81242805  0.18510809 -1.13339807  1.48444569  0.007580131
## [20,]  0.83436088 -1.66846501  1.33756198  0.62859729  0.028001330
## [21,]  1.52887804 -0.59071652 -0.11300095 -0.06358380  0.010558586
## [22,]  1.33939957 -0.90295396  0.65594023 -0.56734753 -0.015034228
## [23,]  1.05051137 -2.71675250  0.74697721 -0.89482443 -0.075496028
## [24,]  2.38846524  1.80061406  0.03888058 -0.12744556 -0.110346034
## [25,]  2.34283421  1.57421714  0.69629625  0.15256833 -0.159098209

Creating biplot

biplot(pca,scale=0)

Calculating proportion of variance

pr.var<-pca$sdev^2
pve<-pr.var/sum(pr.var)

Creating scree plot and cumulative plots

plot(pve, xlab = "Principal Component",
     ylab = "Proportion of Variance Explained", ylim = c(0, 1), type = "b")

plot(cumsum(pve), xlab = "Principal Component",
     ylab = "Cumulative Proportion of Variance Explained", ylim = c(0, 1), type = "b")


Building model using PC1 to PC4

predictor<-pca$x[,1:4]
wine<-cbind(wine,predictor)
model<-lm(Price~PC1+PC2+PC3+PC4,data=wine)
summary(model)
## 
## Call:
## lm(formula = Price ~ PC1 + PC2 + PC3 + PC4, data = wine)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -0.46899 -0.24789 -0.00215  0.20607  0.52709 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  7.06722    0.05889 120.016  < 2e-16 ***
## PC1         -0.25487    0.04125  -6.178 4.91e-06 ***
## PC2          0.12730    0.05156   2.469   0.0227 *  
## PC3          0.41744    0.06039   6.913 1.03e-06 ***
## PC4         -0.18647    0.08309  -2.244   0.0363 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.2944 on 20 degrees of freedom
## Multiple R-squared:  0.8292, Adjusted R-squared:  0.795 
## F-statistic: 24.27 on 4 and 20 DF,  p-value: 1.964e-07

Making Predictions

We should not convert the test data into principal components by fitting a new PCA on it. Instead, we have to apply the same transformation (the loadings learned from the training data) to the test data as we did for the training data.

wineTest = read.csv("https://storage.googleapis.com/dimensionless/Analytics/wine_test.csv")
wineTest
##   Year  Price WinterRain    AGST HarvestRain Age FrancePop
## 1 1979 6.9541        717 16.1667         122   4  54835.83
## 2 1980 6.4979        578 16.0000          74   3  55110.24
pca_test<-predict(pca,wineTest[,3:7])
class(pca_test)
## [1] "matrix"
pca_test
##           PC1       PC2       PC3        PC4        PC5
## [1,] 2.303725 0.5946824 0.4101509 -0.3722356 -0.2074747
## [2,] 2.398317 0.2242893 0.8925278  0.7329912 -0.2649691
# Converting to data frame
pca_test<-as.data.frame(pca_test)
pca_test
##        PC1       PC2       PC3        PC4        PC5
## 1 2.303725 0.5946824 0.4101509 -0.3722356 -0.2074747
## 2 2.398317 0.2242893 0.8925278  0.7329912 -0.2649691

Making predictions

pred_pca<-predict(object = model, newdata=pca_test)
pred_pca
##        1        2 
## 6.796398 6.720412
wineTest$Price
## [1] 6.9541 6.4979