Bi-Variate Analysis
In my previous post, we covered Uni-Variate Analysis as an initial stage of the data-exploration process. In this post, we will cover bi-variate analysis. The objective of bi-variate analysis is to understand the relationship between each pair of variables using statistics and visualizations. We need to analyze the relationship between:
- the target variable and each predictor variable
- two predictor variables
Why Analyze Relationship between Target Variable and each Predictor Variable
It is important to analyze the relationship between the predictor and the target variables to understand the trend for the following reasons:
- The bi-variate analysis and our model should communicate the same story. This helps us understand and assess the accuracy of our models, and make sure that our model has not over-fit the training data.
- If the data has too many predictor variables, we should include in our regression models only those predictor variables that show some trend with the target variable. Our aim with regression models is to understand the story each significant variable is communicating, and its behaviour with the other predictor variables and the target variable. A variable that shows no pattern with the target variable may not have a direct relation with it (although a transformation of that variable might).
- If we understand the correlations and trends between the predictor and the target variables, we can arrive at better and faster transformations of predictor variables, to get more accurate models faster.
E.g., a curve of the shape shown below indicates a logarithmic relation between the predictor x and the target y. To make such a curve linear, we can take the logarithm of the predictor variable (or, equivalently, raise 10 to the power of the target). Hence, a simple scatter plot can give us a good first estimate of the variable transformations required to arrive at an appropriate model, as sketched in the example below.
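A minimal R sketch of this idea, using simulated data (the variable names, the log10 relationship, and the noise level are assumptions chosen purely for illustration):

```r
# simulate a predictor and a target with a logarithmic relationship (illustrative)
set.seed(1)
x <- seq(1, 1000, length.out = 200)
y <- log10(x) + rnorm(200, sd = 0.1)

par(mfrow = c(1, 2))
plot(x, y, main = "Curved: y vs x")                 # logarithmic-looking scatter
plot(log10(x), y, main = "Linear: y vs log10(x)")   # transformed predictor, now roughly linear
par(mfrow = c(1, 1))
```

The second panel shows why a log transformation of the predictor is a natural candidate when the raw scatter plot looks logarithmic.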
Why Analyze Relationship between 2 Predictor Variables
- It is important to understand the correlations between each pair of predictor variables. Correlated predictors lead to multi-collinearity. Essentially, two correlated variables carry much of the same information and are therefore redundant. Multi-collinearity inflates the standard errors of the coefficient estimates, giving wider confidence intervals (reflecting greater uncertainty in the estimates).
- When there are too many variables in a data-set, we use techniques like PCA for dimensionality reduction. Dimensionality-reduction techniques combine correlated variables to remove redundant information, so that we run our modelling algorithms on components that explain the maximum variance (see the sketch below).
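As a rough sketch of how PCA could be applied here (it uses the hitters_cont data frame constructed later in this post, drops the Salary target first, and scales the variables; these are illustrative choices rather than part of the original analysis):

```r
# PCA on the continuous predictors; scaling puts variables on a comparable footing
predictors <- hitters_cont[, setdiff(names(hitters_cont), "Salary")]
pca <- prcomp(predictors, scale. = TRUE)
summary(pca)   # proportion of variance explained by each principal component
```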
Method to do Bi-Variate Analysis
We have seen why bi-variate analysis is an essential step in data exploration. Now we will discuss the techniques for bi-variate analysis.
For illustration, we will use the Hitters data-set from the ISLR library. This is Major League Baseball data from the 1986 and 1987 seasons: a data frame with 322 observations of major league players on 20 variables. The target variable is Salary, while the remaining 19 are predictor variables. Through this data-set, we will demonstrate bi-variate analysis between:
- two continuous variables,
- one continuous and one categorical variable,
- two categorical variables
Following is the summary of all the variables in the Hitters data-set:
```r
library(ISLR)   # provides the Hitters data-set
data(Hitters)
hitters = Hitters
summary(hitters)
```
```
## AtBat Hits HmRun Runs
## Min. : 16.0 Min. : 1 Min. : 0.00 Min. : 0.00
## 1st Qu.:255.2 1st Qu.: 64 1st Qu.: 4.00 1st Qu.: 30.25
## Median :379.5 Median : 96 Median : 8.00 Median : 48.00
## Mean :380.9 Mean :101 Mean :10.77 Mean : 50.91
## 3rd Qu.:512.0 3rd Qu.:137 3rd Qu.:16.00 3rd Qu.: 69.00
## Max. :687.0 Max. :238 Max. :40.00 Max. :130.00
##
## RBI Walks Years CAtBat
## Min. : 0.00 Min. : 0.00 Min. : 1.000 Min. : 19.0
## 1st Qu.: 28.00 1st Qu.: 22.00 1st Qu.: 4.000 1st Qu.: 816.8
## Median : 44.00 Median : 35.00 Median : 6.000 Median : 1928.0
## Mean : 48.03 Mean : 38.74 Mean : 7.444 Mean : 2648.7
## 3rd Qu.: 64.75 3rd Qu.: 53.00 3rd Qu.:11.000 3rd Qu.: 3924.2
## Max. :121.00 Max. :105.00 Max. :24.000 Max. :14053.0
##
## CHits CHmRun CRuns CRBI
## Min. : 4.0 Min. : 0.00 Min. : 1.0 Min. : 0.00
## 1st Qu.: 209.0 1st Qu.: 14.00 1st Qu.: 100.2 1st Qu.: 88.75
## Median : 508.0 Median : 37.50 Median : 247.0 Median : 220.50
## Mean : 717.6 Mean : 69.49 Mean : 358.8 Mean : 330.12
## 3rd Qu.:1059.2 3rd Qu.: 90.00 3rd Qu.: 526.2 3rd Qu.: 426.25
## Max. :4256.0 Max. :548.00 Max. :2165.0 Max. :1659.00
##
## CWalks League Division PutOuts Assists
## Min. : 0.00 A:175 E:157 Min. : 0.0 Min. : 0.0
## 1st Qu.: 67.25 N:147 W:165 1st Qu.: 109.2 1st Qu.: 7.0
## Median : 170.50 Median : 212.0 Median : 39.5
## Mean : 260.24 Mean : 288.9 Mean :106.9
## 3rd Qu.: 339.25 3rd Qu.: 325.0 3rd Qu.:166.0
## Max. :1566.00 Max. :1378.0 Max. :492.0
##
## Errors Salary NewLeague
## Min. : 0.00 Min. : 67.5 A:176
## 1st Qu.: 3.00 1st Qu.: 190.0 N:146
## Median : 6.00 Median : 425.0
## Mean : 8.04 Mean : 535.9
## 3rd Qu.:11.00 3rd Qu.: 750.0
## Max. :32.00 Max. :2460.0
## NA's :59
```
Before we conduct the bi-variate analysis, we will separate the continuous and factor variables and perform basic cleaning with the following code:
```r
# delete rows with missing values in the target variable
hitters = hitters[-which(is.na(hitters$Salary)), ]

# separate continuous and factor variables
hitters_factor = data.frame(1:nrow(hitters))
hitters_cont = data.frame(1:nrow(hitters))
i = 1
while (i <= length(names(hitters))) {
  if (class(hitters[, i]) == "factor")
    hitters_factor = cbind(hitters_factor, hitters[i])
  else
    hitters_cont = cbind(hitters_cont, hitters[i])
  i = i + 1
}
hitters_cont = hitters_cont[, -1]        # drop the placeholder column
hitters_factor = hitters_factor[, -1]
```
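For reference, the same split can be written more compactly with sapply; this is just an alternative sketch, not the code used in the rest of the post:

```r
# flag factor columns in one pass, then split the data frame on that flag
is_factor <- sapply(hitters, is.factor)
hitters_factor <- hitters[, is_factor]
hitters_cont <- hitters[, !is_factor]
```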
Techniques to do Bi-Variate Analysis
Please note that the way to do bi-variate analysis is the same irrespective of whether the variables involved are predictors or the target.
Bi-Variate Analysis between Two Continuous Variables
To do bi-variate analysis between two continuous variables, we look at a scatter plot of the two variables. The pattern of the scatter plot indicates the relationship between them. As long as there is a pattern between the two variables, a transformation can be applied to the predictor / target variable to achieve a linear relationship for modelling purposes. If no pattern is observed, it suggests there is no direct relationship between the two variables. The strength of the linear relationship between two continuous variables can be quantified using the Pearson correlation coefficient. A coefficient of -1 indicates perfect negative correlation, 0 indicates no linear correlation, and +1 indicates perfect positive correlation.
Correlation is simply the co-variance normalized by the standard deviations of the two variables; this ensures we get a number between -1 and +1. Co-variance itself is hard to compare across pairs of variables because it depends on their units, so we prefer correlation.
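A quick numerical check of this definition in R (the two vectors below are made up purely for illustration):

```r
# correlation = covariance divided by the product of the two standard deviations
x <- c(2, 4, 6, 8, 10)
y <- c(1, 3, 2, 5, 4)
cov(x, y) / (sd(x) * sd(y))   # same value as the line below
cor(x, y)                     # Pearson correlation (the default method)
```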
Please note:
- If two variables are linearly related, they will have a high Pearson correlation coefficient.
- If two variables are correlated, that does not necessarily mean they are linearly related, because correlation is deeply impacted by outliers. E.g., the correlation for both the graphs below is the same, but the linear relation is absent in the second graph:
- If two variables are related, it does not mean they are necessarily correlated. E.g., in the graph below of y = x^2 - 1, x and y are related but not correlated (the correlation coefficient is 0).
```r
x = c(-1, -.75, -.5, -.25, 0, .25, .5, .75, 1)
y = x^2 - 1
plot(x, y, col = "dark green", type = "l")
```
```r
cor(x, y)
```
```
## [1] 0
```
For our hitters illustration, the following are the correlations and scatter plots:
```r
cor(hitters_cont[1:4])
```
```
## AtBat Hits HmRun Runs
## AtBat 1.0000000 0.9639691 0.5551022 0.8998291
## Hits 0.9639691 1.0000000 0.5306274 0.9106301
## HmRun 0.5551022 0.5306274 1.0000000 0.6310759
## Runs 0.8998291 0.9106301 0.6310759 1.0000000
```
This gives the correlation coefficients between the continuous variables in the hitters data-set. Since it is difficult to analyse so many values, we prefer a quick visualization of the correlations through scatter plots. The following command gives the scatter plots for the first 4 continuous variables in the hitters data-set:
```r
pairs(hitters_cont[1:4], col = "brown")
```
Observations:
- A linear pattern can be observed between AtBat and Hits, confirmed by the correlation value of 0.96
- A linear pattern can be observed between Hits and Runs, confirmed by the correlation value of 0.91
- A linear pattern can be observed between AtBat and Runs, confirmed by the correlation value of 0.90
To get a scatter plot and correlation between two continuous variables:
```r
plot(hitters_cont[, 1], hitters_cont[, 2], col = "yellow", xlab = "AtBat", ylab = "Hits")
```
The graph shows a strong positive linear correlation.
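To make the trend easier to see, a least-squares line can be overlaid on the same plot; this is an optional sketch rather than part of the original analysis:

```r
# redraw the scatter plot and overlay a simple regression line
plot(hitters_cont[, 1], hitters_cont[, 2], col = "yellow", xlab = "AtBat", ylab = "Hits")
abline(lm(hitters_cont[, 2] ~ hitters_cont[, 1]), col = "red")
```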
Pearson Correlation Coefficient between AtBat and Hits
```r
cor(hitters_cont[, 1], hitters_cont[, 2])
```
```
## [1] 0.9639691
```
The correlation value of 0.96 verifies our claim of a strong positive correlation. We can obtain better visualizations of the correlations through the corrgram and corrplot libraries:
```r
library(corrgram)
corrgram(hitters)
```
In the corrgram, blue indicates positive correlation and red indicates negative correlation; the darker the shade, the stronger the correlation, while a faint shade indicates a weak correlation.
To visualize the correlation between each pair of continuous variables:
```r
library(corrplot)
continuous_correlation = cor(hitters_cont)
corrplot(continuous_correlation, method = "circle", type = "full", is.corr = TRUE, diag = TRUE)
```
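Beyond the visualization, it can be useful to list the variable pairs whose correlation exceeds a chosen threshold. The sketch below uses 0.9 as an arbitrary, assumed cut-off:

```r
# find pairs of continuous variables with absolute correlation above 0.9
high <- which(abs(continuous_correlation) > 0.9 & upper.tri(continuous_correlation),
              arr.ind = TRUE)
data.frame(var1 = rownames(continuous_correlation)[high[, 1]],
           var2 = colnames(continuous_correlation)[high[, 2]],
           corr = round(continuous_correlation[high], 3))
```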