
Bi-Variate Analysis

by Samridhi Dutta | Jan 5, 2017 

 

In my previous post, we covered uni-variate analysis as the initial stage of the data-exploration process. In this post, we will cover bi-variate analysis. The objective of bi-variate analysis is to understand the relationship between each pair of variables using statistics and visualizations. We need to analyze the relationship between:

- the target variable and each predictor variable
- two predictor variables

Why Analyze the Relationship between the Target Variable and Each Predictor Variable

It is important to analyze the relationship between the predictor and the target variables to understand the trend for the following reasons:

1. The bi-variate analysis and our model should tell the same story. This helps us understand and assess the accuracy of our models, and make sure that the model has not over-fit the training data.
2. If the data has too many predictor variables, we should include in our regression models only those predictor variables that show some trend with the target variable. Our aim with regression models is to understand the story each significant variable tells, and its behaviour with the other predictor variables and the target variable.
3. A variable that shows no pattern with the target variable may not have a direct relation with it (while a transformation of that variable might). If we understand the correlations and trends between the predictor and target variables, we can arrive at better and faster transformations of the predictor variables, and hence at accurate models sooner.
E.g. a curve of the shape below indicates a logarithmic relation between the target and predictor variables. To transform such a curve into a linear one, we take the logarithm (base 10) of the predictor variable (equivalently, we could raise 10 to the power of the target variable). Hence, a simple scatter plot can give us a good estimate of the variable transformations required to arrive at an appropriate model.
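As a minimal sketch (with simulated data, not from the original post), we can see how log-transforming the predictor straightens out a logarithmic curve:

# simulated logarithmic relation: y = log10(x) plus a little noise
set.seed(42)
x <- seq(1, 100, by = 1)
y <- log10(x) + rnorm(length(x), sd = 0.05)

par(mfrow = c(1, 2))
plot(x, y, main = "Original: logarithmic pattern")        # curved scatter
plot(log10(x), y, main = "Transformed: linear pattern")   # straight-line scatter
par(mfrow = c(1, 1))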

Why Analyze the Relationship between Two Predictor Variables

It is important to understand the correlations between each pair of predictor variables. Correlated predictors lead to multi-collinearity. Essentially, two correlated variables carry the same information and hence are redundant. Multi-collinearity leads to inflated standard errors and wider confidence intervals (reflecting greater uncertainty in the coefficient estimates). When there are too many variables in a data-set, we use techniques like PCA for dimensionality reduction. Dimensionality reduction techniques combine or drop correlated variables, reducing the extraneous information so that we run our modelling algorithms on the components that explain the maximum variance.
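As a brief, hedged illustration of the dimensionality-reduction idea (using the built-in mtcars data-set, since Hitters is only loaded later in this post):

# PCA on a small numeric data-set; scale. = TRUE puts all variables on a common footing
pca <- prcomp(mtcars, scale. = TRUE)
summary(pca)  # "Proportion of Variance" shows how few components capture most of the variance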

Methods for Bi-Variate Analysis

We have seen why bi-variate analysis is an essential step in data exploration. Now we will discuss the techniques for bi-variate analysis.

For illustration, we will use the Hitters data-set from the library ISLR. This is Major League Baseball data from the 1986 and 1987 seasons: a data frame with 322 observations of major-league players on 20 variables. The target variable is Salary, while the remaining 19 are predictor variables. Through this data-set, we will demonstrate bi-variate analysis between:

- two continuous variables,
- one continuous and one categorical variable,
- two categorical variables

Following is the summary of all the variables in the Hitters data-set:

library(ISLR)
data(Hitters)
hitters = Hitters
summary(hitters)
##      AtBat            Hits         HmRun            Runs
##  Min.   : 16.0   Min.   :  1   Min.   : 0.00   Min.   :  0.00
##  1st Qu.:255.2   1st Qu.: 64   1st Qu.: 4.00   1st Qu.: 30.25
##  Median :379.5   Median : 96   Median : 8.00   Median : 48.00
##  Mean   :380.9   Mean   :101   Mean   :10.77   Mean   : 50.91
##  3rd Qu.:512.0   3rd Qu.:137   3rd Qu.:16.00   3rd Qu.: 69.00
##  Max.   :687.0   Max.   :238   Max.   :40.00   Max.   :130.00
##
##       RBI             Walks            Years            CAtBat
##  Min.   :  0.00   Min.   :  0.00   Min.   : 1.000   Min.   :   19.0
##  1st Qu.: 28.00   1st Qu.: 22.00   1st Qu.: 4.000   1st Qu.:  816.8
##  Median : 44.00   Median : 35.00   Median : 6.000   Median : 1928.0
##  Mean   : 48.03   Mean   : 38.74   Mean   : 7.444   Mean   : 2648.7
##  3rd Qu.: 64.75   3rd Qu.: 53.00   3rd Qu.:11.000   3rd Qu.: 3924.2
##  Max.   :121.00   Max.   :105.00   Max.   :24.000   Max.   :14053.0
##
##      CHits            CHmRun           CRuns             CRBI
##  Min.   :   4.0   Min.   :  0.00   Min.   :   1.0   Min.   :   0.00
##  1st Qu.: 209.0   1st Qu.: 14.00   1st Qu.: 100.2   1st Qu.:  88.75
##  Median : 508.0   Median : 37.50   Median : 247.0   Median : 220.50
##  Mean   : 717.6   Mean   : 69.49   Mean   : 358.8   Mean   : 330.12
##  3rd Qu.:1059.2   3rd Qu.: 90.00   3rd Qu.: 526.2   3rd Qu.: 426.25
##  Max.   :4256.0   Max.   :548.00   Max.   :2165.0   Max.   :1659.00
##
##      CWalks        League  Division    PutOuts          Assists
##  Min.   :   0.00   A:175   E:157    Min.   :   0.0   Min.   :  0.0
##  1st Qu.:  67.25   N:147   W:165    1st Qu.: 109.2   1st Qu.:  7.0
##  Median : 170.50                    Median : 212.0   Median : 39.5
##  Mean   : 260.24                    Mean   : 288.9   Mean   :106.9
##  3rd Qu.: 339.25                    3rd Qu.: 325.0   3rd Qu.:166.0
##  Max.   :1566.00                    Max.   :1378.0   Max.   :492.0
##
##      Errors          Salary       NewLeague
##  Min.   : 0.00   Min.   :  67.5   A:176
##  1st Qu.: 3.00   1st Qu.: 190.0   N:146
##  Median : 6.00   Median : 425.0
##  Mean   : 8.04   Mean   : 535.9
##  3rd Qu.:11.00   3rd Qu.: 750.0
##  Max.   :32.00   Max.   :2460.0
##  NA's   :59

Before we conduct the bi-variate analysis, we'll separate the continuous and factor variables and perform basic cleaning through the following code:

# deleting rows with missing values in the target variable
hitters = hitters[-which(is.na(hitters$Salary)), ]

# separating continuous and factor variables
hitters_factor = data.frame(1:nrow(hitters))
hitters_cont = data.frame(1:nrow(hitters))
i = 1
while (i <= length(names(hitters))) {
  if (class(hitters[, i]) == "factor")
    hitters_factor = cbind(hitters_factor, hitters[i])
  else
    hitters_cont = cbind(hitters_cont, hitters[i])
  i = i + 1
}
hitters_cont = hitters_cont[, -1]
hitters_factor = hitters_factor[, -1]

Techniques for Bi-Variate Analysis

Please note that the method of bi-variate analysis is the same irrespective of whether the variables involved are predictors or the target.

Bi-Variate Analysis between Two Continuous Variables

To do bi-variate analysis between two continuous variables, we look at the scatter plot of the two variables. The pattern of the scatter plot indicates the relationship between them. As long as there is a pattern between the two variables, a transformation can usually be applied to the predictor / target variable to achieve a linear relationship for modelling purposes. If no pattern is observed, there is no apparent relationship between the two variables. The strength of the linear relationship between two continuous variables can be quantified using the Pearson correlation coefficient. A coefficient of -1 indicates perfect negative correlation, 0 indicates no correlation, and 1 indicates perfect positive correlation.

Correlation is simply the covariance normalized by the standard deviations of both variables. This ensures we get a number between -1 and +1. Covariance values are difficult to compare because they depend on the units of the two variables, so we prefer correlation.
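We can verify this relation numerically with a quick sketch (simulated data, not from the original post):

# correlation = covariance scaled by the two standard deviations
set.seed(1)
x <- rnorm(100)
y <- 2 * x + rnorm(100)
cov(x, y) / (sd(x) * sd(y))  # manual computation...
cor(x, y)                    # ...matches the built-in Pearson correlation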

Please note:
- If two variables are linearly related, they will have a high Pearson correlation coefficient.
- The converse does not hold: two variables being correlated does not mean they are linearly related, because correlation is deeply impacted by outliers. E.g. the correlation is the same for both the graphs below, but the linear relation is absent in the second graph.
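A minimal simulated illustration (not the post's original figures) of how a single outlier can manufacture a high correlation:

# two unrelated variables: correlation near zero
set.seed(2)
x <- rnorm(20)
y <- rnorm(20)
cor(x, y)

# adding one extreme point drags the correlation towards 1
x_out <- c(x, 10)
y_out <- c(y, 10)
cor(x_out, y_out)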

If two variables are related, it does not mean they are necessarily correlated. E.g. in the graph of y = x^2 - 1 below, x and y are related but not correlated (the correlation coefficient is 0).

x = c(-1, -.75, -.5, -.25, 0, .25, .5, .75, 1)
y = x^2 - 1
plot(x, y, col = "darkgreen", type = "l")

cor(x, y)
## [1] 0

For our Hitters illustration, the following are the correlations and scatter plots:

cor(hitters_cont[1:4])
##           AtBat      Hits     HmRun      Runs
## AtBat 1.0000000 0.9639691 0.5551022 0.8998291
## Hits  0.9639691 1.0000000 0.5306274 0.9106301
## HmRun 0.5551022 0.5306274 1.0000000 0.6310759
## Runs  0.8998291 0.9106301 0.6310759 1.0000000

This gives the correlation coefficients between the continuous variables in the Hitters data-set. Since it is difficult to analyse so many values, we prefer a quick visualization of the correlations through scatter plots. The following command gives the scatter plots for the first 4 continuous variables in the Hitters data-set:

pairs(hitters_cont[1:4], col = "brown")

Observations:

- A linear pattern can be observed between AtBat and Hits, and is confirmed by the correlation value of 0.96.
- A linear pattern can be observed between Hits and Runs, and is confirmed by the correlation value of 0.91.
- A linear pattern can be observed between AtBat and Runs, and is confirmed by the correlation value of 0.899.

To get a scatter plot and correlation between two continuous variables:

plot(hitters_cont[, 1], hitters_cont[, 2], col = "yellow", xlab = "AtBat", ylab = "Hits")

The graph shows a strong positive linear correlation.

Pearson Correlation Coefficient between AtBat and Hits

cor(hitters_cont[, 1], hitters_cont[, 2])
## [1] 0.9639691

The correlation value of 0.96 verifies our claim of a strong positive correlation. We can obtain better visualizations of the correlations through the corrgram and corrplot libraries:

library(corrgram)
corrgram(hitters)

Strong blue means strong positive correlation; strong red means strong negative correlation. Dark colors mean strong correlation, and weak colors mean weak correlation.

To find the correlation between each pair of continuous variables:

library(corrplot)
continuous_correlation = cor(hitters_cont)
corrplot(continuous_correlation, method = "circle", type = "full", is.corr = T, diag = T)

This gives a good visual representation of the correlations and relationships between variables, especially when the number of variables is high. A dark blue, large circle represents a high positive correlation; a dark red, large circle represents a high negative correlation; weak colors and smaller circles represent weak correlations.
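When the matrix is large, it can also help to list the strongly correlated pairs programmatically. A small sketch using the continuous_correlation matrix computed above (the 0.9 cut-off is an arbitrary choice for illustration):

# list variable pairs whose absolute correlation exceeds 0.9
high = which(abs(continuous_correlation) > 0.9 & upper.tri(continuous_correlation),
             arr.ind = TRUE)
data.frame(var1 = rownames(continuous_correlation)[high[, 1]],
           var2 = colnames(continuous_correlation)[high[, 2]],
           corr = continuous_correlation[high])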

Bi-Variate Analysis between Two Categorical Variables

2-way Frequency Table: We can make a 2-way frequency table to understand the relationship between two categorical variables.

2-Way Frequency Table

head(hitters_factor)
##                   League Division NewLeague
## -Alan Ashby            N        W         N
## -Alvin Davis           A        W         A
## -Andre Dawson          N        E         N
## -Andres Galarraga      N        E         N
## -Alfredo Griffin       A        W         A
## -Al Newman             N        E         A

tab = table(League = hitters_factor$League, Division = hitters_factor$Division)  # gives the frequency count
tab
##       Division
## League  E  W
##      A 68 71
##      N 61 63

This gives the frequency counts of League versus Division.

Chi-Square Test of Association: To understand whether there is an association / relation between two categorical variables.

Chi-Square Test for Hitters Data-set

chisq.test(tab)
##
##  Pearson's Chi-squared test with Yates' continuity correction
##
## data:  tab
## X-squared = 0, df = 1, p-value = 1

Since the chi-square p-value is 1, we fail to reject the null hypothesis: there is no evidence of an association between the two factor variables.
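If needed, the expected counts under independence can be inspected from the test object that chisq.test returns:

chisq.test(tab)$expected  # counts expected if League and Division were independent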

Visualization of two categorical variables can be obtained through Heat Map and Fluctuation Charts as illustrated in the example:

Heat Map

library(ggplot2)
runningcounts.df <- as.data.frame(tab)
ggplot(runningcounts.df, aes(League, Division)) +
  geom_tile(aes(fill = Freq), colour = "black") +
  scale_fill_gradient(low = "white", high = "red") +
  theme_classic()

Darker colors represent higher frequencies and lighter colors represent lower frequencies.

Fluctuation Plot

ggplot(runningcounts.df, aes(League, Division)) +
  geom_point(aes(size = Freq, color = Freq), shape = 15) +  # stat/position removed from aes(); geom_point's identity defaults already apply
  scale_size_continuous(range = c(3, 15)) +
  scale_color_gradient(low = "white", high = "black") +
  theme_bw()

Darker colors represent higher frequencies and lighter colors represent lower frequencies.

Bi-Variate Analysis between a Continuous Variable and a Categorical Variable

Aggregations can be obtained using the functions xtabs and aggregate, or using the dplyr library. E.g. in the Hitters data-set, we will use the factor variable "Division" and the continuous variable "Salary".

Aggregation of Salary Division-Wise

xtabs(hitters$Salary ~ hitters$Division)  # gives the division-wise sum of salaries
## hitters$Division
##        E        W
## 80531.01 60417.50

aggregate(hitters$Salary, by = list(hitters$Division), mean, na.rm = T)  # gives the division-wise mean of salaries
##   Group.1        x
## 1       E 624.2714
## 2       W 450.8769

library(dplyr)
hitters %>% group_by(Division) %>%
  summarise(Sum_Salary = sum(Salary, na.rm = T), Mean_Salary = mean(Salary, na.rm = T),
            Min_Salary = min(Salary, na.rm = T), Max_Salary = max(Salary, na.rm = T))
## # A tibble: 2 × 5
##   Division Sum_Salary Mean_Salary Min_Salary Max_Salary
##     <fctr>      <dbl>       <dbl>      <dbl>      <dbl>
## 1        E   80531.01    624.2714       67.5       2460
## 2        W   60417.50    450.8769       68.0       1900

T-Test: A 2-sample test (paired or unpaired) can be used to understand whether there is a relationship between a continuous and a categorical variable. The 2-sample T-test can be used for categorical variables with only two levels. For more than two levels, we use ANOVA.

E.g. we conduct a T-test on the Hitters data-set to check whether Division (a predictor factor variable) has an impact on Salary (the continuous target variable).
H0: The mean salary for Divisions E and W is the same, i.e. division has no real impact on salary.
HA: The mean salaries are different, i.e. division has a significant impact on salary.

df = data.frame(hitters$Division, hitters$Salary)
head(df)
##   hitters.Division hitters.Salary
## 1                W          475.0
## 2                W          480.0
## 3                E          500.0
## 4                E           91.5
## 5                W          750.0
## 6                E           70.0

df %>% filter(hitters.Division == "W") %>% data.frame() -> sal_distribution_W
df %>% filter(hitters.Division == "E") %>% data.frame() -> sal_distribution_E
x = sal_distribution_E$hitters.Salary
y = sal_distribution_W$hitters.Salary
t.test(x, y, alternative = "two.sided", mu = 0)
##
##  Welch Two Sample t-test
##
## data:  x and y
## t = 3.145, df = 218.46, p-value = 0.001892
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
##   64.73206 282.05692
## sample estimates:
## mean of x mean of y
##  624.2714  450.8769

"two.sided" indicates a two-tailed test. The p-value of 0.19% means we can reject the null hypothesis: there is a significant difference in mean salary based on division. Hence, division does have an impact on salary.
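The same test can be run more compactly through t.test's formula interface (the two-sided Welch test is R's default):

t.test(Salary ~ Division, data = hitters)  # equivalent two-sample comparison of Salary by Division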

Anova: Analysis of variance (ANOVA) is a collection of statistical models used to analyze the differences among group means and their associated procedures (such as "variation" among and between groups).

Anova

anova = aov(Salary ~ Division, data = hitters)
summary(anova)
##              Df   Sum Sq Mean Sq F value  Pr(>F)
## Division      1  1976102 1976102   10.04 0.00171 **
## Residuals   261 51343011  196717
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Since the p-value is below 1%, the difference between the average salaries of the two divisions is significant.
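With only two levels, the ANOVA reproduces the T-test comparison; for factors with more than two levels, a post-hoc pairwise comparison can follow the aov fit:

TukeyHSD(anova)  # pairwise differences in mean Salary between groups, with adjusted p-values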

Visualization can be obtained through box-plots, bar-plots and density plots.

Box plots to establish the relationship between the Division and Salary variables in the Hitters data-set

x = plot(hitters$Division, hitters$Salary, col = "red", xlab = "Division", ylab = "Salary")

x
## $stats
##          [,1] [,2]
## [1,]   67.500   68
## [2,]  215.000  165
## [3,]  517.143  375
## [4,]  850.000  725
## [5,] 1800.000 1500
##
## $n
## [1] 129 134
##
## $conf
##          [,1]     [,2]
## [1,] 428.8074 298.5649
## [2,] 605.4786 451.4351
##
## $out
## [1] 1975.000 1861.460 2460.000 1925.571 2412.500 2127.333 1940.000 1900.000
##
## $group
## [1] 1 1 1 1 1 1 1 2
##
## $names
## [1] "E" "W"

The output indicates that there are 129 observations for E and 134 observations for W.
The statistics in the output indicate:
- The minimum salary values for the E and W divisions are 67.5 and 68 respectively.
- The salary values at the 1st quartile for E and W are 215 and 165 respectively.
- The salary values at the 2nd quartile (the median) for E and W are 517.14 and 375 respectively. This implies the median salary of division E is roughly 38% higher than that of W.
- The salary values at the 3rd quartile for E and W are 850 and 725 respectively.
- The upper-whisker salary values (the largest non-outlier values) for E and W are 1800 and 1500 respectively.

Outlier values are given in $out and classified by $group.
The outlier salary values are 1975, 1861.46, 2460, 1925.57, 2412.50, 2127.33 and 1940 for Division E (group 1).
The outlier salary value is 1900 for Division W (group 2).
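The same quartiles can be cross-checked directly (note that quantile's extremes include the outliers, so the maxima differ from the boxplot whiskers):

tapply(hitters$Salary, hitters$Division, quantile)  # five-number summary of Salary per division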

We can also use ggplot to make more beautiful visuals:

p = ggplot(hitters, aes(x = Division, y = Salary))
p + geom_boxplot(aes(color = Division, fill = Division)) + theme_classic() +
  xlab("Division") + ylab("Salary")

To understand the distribution of the first 4 continuous variables with respect to Division:

plot(hitters_cont[1:4], col=hitters$Division)

plot(hitters$Salary, col = hitters$Division, lwd = 2,
     cex = as.numeric(hitters$Division), ylab = "Salary")
legend(230, 2500, c("E", "W"), pch = 1, col = (1:2))

The plot shows generally lower salaries for division W, and a higher frequency of high salaries for division E.

Bar Plot of Players' Salary, Division-Wise

hitters %>% group_by(Division) %>% summarize(Salary = sum(Salary)) %>% data.frame() -> df
df$Percentage = round(df$Salary / sum(df$Salary), 3)
df
##   Division   Salary Percentage
## 1        E 80531.01      0.571
## 2        W 60417.50      0.429

p = ggplot(df, aes(x = Division, y = Salary, fill = Division))
p + geom_bar(stat = "identity", alpha = .7) +
  geom_text(aes(label = Percentage, col = Division, vjust = -.25)) +
  guides(color = FALSE) + theme_classic() + ggtitle("Division-Wise Salary")

The above graph visualizes the total salary for each division, labelled with each division's percentage share.

Dodged Graph

p = ggplot(hitters, aes(x = Salary))
p + geom_histogram(aes(fill = Division), position = "dodge", bins = 30) +
  xlab("Salary") + theme_classic()

Stacked Graph

p = ggplot(hitters, aes(x = Salary))
p + geom_histogram(aes(fill = Division), position = "stack", alpha = .7) +
  xlab("Salary") + theme_classic()
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.

Creating Graphs in Separate Grids

p + geom_histogram(aes(color = Division, fill = Division), bins = 25, alpha = .3) +
  facet_grid(Division ~ .) + xlab("Salary") + theme_classic()

The histogram gives a clearly demarcated pictorial representation of the salaries of the two divisions.

Density Plot for Salary

p = ggplot(hitters, aes(x = Salary, col = Division, fill = Division))
p + geom_density(alpha = .4) + theme_classic() + ylab("Density")

The plot indicates a high density for division W in the salary range 0 to 1000.

Conclusion

Bi-variate analysis provides a visual representation of the inter-relationships between the predictor variables. If the correlation between predictor variables is high, we have to reduce the correlated variables in order to avoid multi-collinearity in our prediction models.

We also need to understand the bi-variate relationships between the target variable and the predictor variables, based on which we understand, analyze and validate our modelling results.

In essence, it is one of the most important steps in data exploration, giving us insight into the interactions between the variables.

Read more at Dimensionless.in

Data Mining Applications


Data mining is primarily used today by companies with a strong consumer focus, such as retail, financial, communication, and marketing organizations. It lets them "drill down" into their transactional data to determine pricing, customer preferences and product positioning, and their impact on sales, customer satisfaction and corporate profits. With data mining, a retailer can use point-of-sale records of customer purchases to develop products and promotions that appeal to specific customer segments.

Here is the list of 14 other important areas where data mining is widely used:

Future Healthcare:


Data mining holds great potential to improve health systems. It uses data and analytics to identify best practices that improve care and reduce costs. Researchers use data mining approaches like multi-dimensional databases, machine learning, soft computing, data visualization and statistics. Mining can be used to predict the volume of patients in every category. Processes are developed that make sure that the patients receive appropriate care at the right place and at the right time. Data mining can also help healthcare insurers to detect fraud and abuse.

Market Basket Analysis:

Market basket analysis is a modelling technique based on the theory that if you buy a certain group of items, you are more likely to buy another group of items. This technique can help a retailer understand the purchase behaviour of buyers, learn their needs, and change the store's layout accordingly. Using differential analysis, comparisons can be made between different stores and between customers in different demographic groups.
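As a brief illustrative sketch (not part of the original article), association rules of the kind used in market basket analysis can be mined in R with the arules package and its bundled Groceries transactions; the support and confidence thresholds below are arbitrary:

library(arules)
data("Groceries")  # point-of-sale transactions bundled with arules

# mine "if you buy X, you tend to buy Y" rules
rules <- apriori(Groceries, parameter = list(support = 0.01, confidence = 0.5))
inspect(head(sort(rules, by = "lift"), 3))  # the three strongest rules by lift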

Education:

There is a new emerging field, called Educational Data Mining (EDM), concerned with developing methods that discover knowledge from data originating in educational environments. The goals of EDM include predicting students' future learning behaviour, studying the effects of educational support, and advancing scientific knowledge about learning. Data mining can be used by an institution to make accurate decisions and to predict students' results. With these results, the institution can focus on what to teach and how to teach it. Students' learning patterns can be captured and used to develop techniques for teaching them.

Manufacturing Engineering:

Knowledge is the best asset a manufacturing enterprise can possess. Data mining tools can be very useful for discovering patterns in complex manufacturing processes. Data mining can be used in system-level design to extract the relationships between product architecture, product portfolio, and customer-needs data. It can also be used to predict product development span time, cost, and dependencies among tasks.

CRM:

Customer Relationship Management (CRM) is all about acquiring and retaining customers, improving customer loyalty, and implementing customer-focused strategies. To maintain a proper relationship with its customers, a business needs to collect data and analyse the information. This is where data mining plays its part. With data mining technologies, the collected data can be used for analysis, so that instead of guessing where to focus their retention efforts, businesses get filtered results.

Fraud Detection:

Billions of dollars have been lost to fraud. Traditional methods of fraud detection are time-consuming and complex. Data mining aids in providing meaningful patterns and turning data into information; any information that is valid and useful is knowledge. A good fraud detection system should also protect the information of all its users. A supervised method starts with a collection of sample records classified as fraudulent or non-fraudulent. A model is built from this data, and the algorithm is then used to identify whether a new record is fraudulent or not.

Intrusion Detection:

Any action that compromises the integrity and confidentiality of a resource is an intrusion. The defensive measures to avoid an intrusion include user authentication, avoiding programming errors, and information protection. Data mining can help improve intrusion detection by adding a level of focus to anomaly detection. It helps an analyst distinguish unusual activity from common everyday network activity. Data mining also helps extract data that is more relevant to the problem.

Lie Detection:

Apprehending a criminal is easy, whereas bringing out the truth from him is difficult. Law enforcement can use mining techniques to investigate crimes and monitor the communication of suspected terrorists. This field also includes text mining, which seeks to find meaningful patterns in data that is usually unstructured text. Data samples collected from previous investigations are compared, and a model for lie detection is created. With this model, processes can be created as needed.

Customer Segmentation:

Traditional market research may help us segment customers, but data mining goes deeper and increases market effectiveness. Data mining aids in grouping customers into distinct segments and tailoring offerings to each segment's needs. The market is always about retaining customers. Data mining allows a business to find a segment of customers based on vulnerability, offer them special deals, and enhance satisfaction.

Financial Banking:

With computerised banking everywhere, a huge amount of data is generated with every new transaction. Data mining can contribute to solving business problems in banking and finance by finding patterns, causalities, and correlations in business information and market prices that are not immediately apparent to managers, because the volume of data is too large or is generated too quickly for experts to screen. Managers can use this information to better segment, target, acquire, retain and serve profitable customers.

Corporate Surveillance:

Corporate surveillance is the monitoring of a person's or group's behaviour by a corporation. The data collected is most often used for marketing purposes or sold to other corporations, but it is also regularly shared with government agencies. It can be used by a business to tailor its products to what its customers desire. The data can also be used for direct marketing, such as the targeted advertisements on Google and Yahoo, where ads are targeted to the users of a search engine by analyzing their search history and emails.

Research Analysis:

History shows that we have witnessed revolutionary changes in research. Data mining is helpful in data cleaning, data pre-processing, and the integration of databases. Researchers can find similar data in the database that might bring a change to the research. Co-occurring sequences can be identified, and the correlations between activities discovered. Data visualization and visual data mining give us a clear view of the data.

Criminal Investigation:

Criminology is a process that aims to identify crime characteristics. Crime analysis involves exploring and detecting crimes and their relationships with criminals. The high volume of crime data-sets, and the complexity of the relationships between these kinds of data, have made criminology an appropriate field for applying data mining techniques. Text-based crime reports can be converted into word-processing files, and this information can then be used to perform the crime-matching process.

Bioinformatics:

Data Mining approaches seem ideally suited for Bioinformatics, since it is data-rich. Mining biological data helps to extract useful knowledge from massive datasets gathered in biology, and in other related life sciences areas such as medicine and neuroscience. Applications of data mining to bioinformatics include gene finding, protein function inference, disease diagnosis, disease prognosis, disease treatment optimization, protein and gene interaction network reconstruction, data cleansing, and protein subcellular location prediction.

Read also: Data Science in various domains