What’s After MBA: Upgrade Your Skills

[Image source: Career Core]

 

After completing high school, I chased the best colleges and universities in the country and abroad, hoping to graduate with flying colours because that seemed necessary to land a good job or take the next step in my career. However, once I finished my degree, I was unsure of what to do next – whether to get placed with a big organization or continue with higher studies.

While some decide to start earning straight away, others fulfill their dream of a management degree by completing an MBA from a reputed institution. The sky is the limit when you enroll in an MBA program full of belief, hope, desire, and enthusiasm. However, as graduation draws closer, many of us feel lost about what to do next after acquiring the MBA degree.

 

Facts About MBA Graduates       

 

First, let us look at some statistics. Employer demand for recent business-school graduates has reached its highest level since 2010, as announced in the forward-looking Corporate Recruiters Survey by GMAC, the organization that administers the GMAT exam. In the survey, conducted in February and March 2016, about eighty-eight percent of recruiters said they planned to hire recent business-school graduates, compared with eighty percent of companies that did so in the previous year.

In the current MBA market, jobs are still available and there is no reason to worry. However, you might not be satisfied with the job role or the company you are being recruited for, and hence you may need to reskill yourself, as I did, for the role you want and be ready when the opportunity comes. You need to have patience and keep improving your skills.

 

Have a Plan

 

Every MBA graduate should have a plan; not having one is atypical. Pursuing an MBA is a once-in-a-lifetime experience, and students generally know in advance what they will do after graduation. The decision to do an MBA is usually taken after careful consideration, on the back of several years of work experience. Most MBA graduates have a clear picture of their career and know what they want to do. Waiting until after you obtain the degree to decide what to do next is not a wise strategy.

I was fully aware of what I wanted to achieve in life, and that is exactly what the admissions committee looked for. A clearly thought-out career strategy based on self-understanding is what admissions officers want to see. They are most interested in why you want to do an MBA and what your goals are after completing it.

 

I chose my MBA program based on my professional path. You can only select the right MBA program when you have clear career goals in mind. Only then can you know whether an MBA meets your needs and how it could boost your career. You can rate the value of each program on your own scale and uncover the facts. For a newly minted MBA grad, it is very important to be realistic. Adam Heyler once said on his YouTube channel that an MBA degree makes your CV more credible and expands your network, but it certainly will not make up for a lack of work experience. Time management is another important skill that an MBA teaches.

 

The Post MBA Dilemma

 

[Image source: PrepAdviser]

 

The current job market poses a tremendous challenge to every professional, even someone with a degree as lucrative as an MBA. Gone are the days when an MBA degree guaranteed a high-paying job in a big firm. Nowadays, the wave of entrepreneurship has engulfed many, and a lot of graduates are moving towards entrepreneurship and starting their own ventures. However, being an entrepreneur in this competitive market is no piece of cake, and almost ninety percent of start-ups fail after inception. I was not drawn to entrepreneurship and chose instead to upgrade myself and follow my dream.

I pursued my MBA in Finance, which is often a go-to choice for many students, largely because of the prospect of working in a major insurance or banking company. I wanted to work as a Business Analyst, Risk Analyst, or in a similar role. Thus it was pertinent for me to develop an analyst's intuition and master analytical tools such as SQL, Excel, and Tableau. If you are interested in working as a Decision Scientist or a Data Scientist, you need to upgrade your skills further, as I did, to more advanced topics like Machine Learning and Deep Learning.

However, once I discovered the potential that data carries and the diverse nature of this field, I wanted to expand my horizons and work as a Data Science consultant in a big corporation, and hence I started exploring other domains like Marketing and Human Resources. An MBA in Marketing is another lucrative path with strong post-graduation opportunities. Some of the designations after completing an MBA in Marketing are Research Manager, Senior Analyst, Marketing Analyst, and so on. Data is the new oil, and marketing firms are using its unprecedented potential to market their products to the right customers and stay ahead in the race.

As a Marketing Analyst, you would be responsible for gathering data from various sources, so skills in data collection and web scraping are very important. Additionally, I learned at least one data visualization tool – Excel, Tableau, Power BI, or similar – to analyse the performance of different marketing campaigns, which could then be presented to the stakeholders who make the final business decisions. Overall, it was about finding patterns in the data using various tools and easing the decision-making process for the stakeholders.

An MBA in Human Resources may not be as lucrative as the two above, but it certainly has its own share of value in terms of responsibility and decision making. Whether or not you are employed as an HR professional right after graduation, it pays to master HR Analytics, which I did, as it helps greatly in dealing with employees.

As an HR professional, you would be engaged mostly in employee relations, so it is necessary to understand each employee's satisfaction level and deal with them individually. Onboarding a resource involves a significant financial cost, so predicting an employee's attrition probability can help avoid financial loss. Thus, data collection and machine learning are two important skills that I learned, alongside my interpersonal skills.

Supply Chain Management has been in demand, and I realized it is important to understand the applications of Data Science in this area because it could take me a long way in my career. The impact of supply chain dynamics can be analysed using the right analytical tools, and data can be collected and leveraged to measure the efficiency of the supply chain.

Additionally, price fluctuations and commodity availability can also be analysed using data. If you master data analytics, as I did, you can reduce the risk burden of an organization.

Healthcare management is another important field where students pursue an MBA; it deals with the practices of the healthcare industry. As Data Science has vast applications in healthcare, I had to get my hands dirty and learn the nitty-gritty of analyzing a healthcare dataset. In healthcare, careful usage of data can lead to ground-breaking achievements in medical science. Applying analytics to relevant data can help reduce medical costs and channel the right medicine to the right patient.

Deep Learning has made tremendous progress in the healthcare industry, so I took some time to understand the underlying working of neural networks. They can unearth hidden information from patient data and help in prescribing the appropriate care for a patient.

 

Conclusion

 

This was a generic overview of the skills I mastered for my own career aspirations after pursuing my MBA. In general, analytics is the need of the hour, and every MBA graduate or professional, irrespective of their field, can dive into it without prior relevant experience. In the beginning I felt a bit overwhelmed by the vastness of the field, but as I moved along I found it interesting and gradually became inclined towards it.

Overall, along with management skills, the technical expertise to deal with data and derive relevant information from it can land you a much higher role as a manager or consultant in a firm – which I eventually managed to achieve – where you are the decision maker for your team. Upskilling is very important in today's world to stay relevant and keep up with the rapid advancement of technology.

Dimensionless has several blogs and training to get you started with Data Analytics and Data Science.

Follow this link, if you are looking to learn about data science online!

Additionally, if you are interested in learning AWS Big Data, Learn AWS Course Online to boost your career.

Furthermore, if you want to read more about data science and big data, you can read our blogs here

The Upcoming Revolution in Predictive Analytics (And Data Science)

The Next Generation of Data Science

Quite literally, I am stunned.

I have just completed my survey of data (from articles, blogs, white papers, university websites, curated tech websites, and research papers all available online) about predictive analytics.

And I have a reason to believe that we are standing on the brink of a revolution that will transform everything we know about data science and predictive analytics.

But before we go there, you need to know: why the hype about predictive analytics? What is predictive analytics?

Let’s cover that first.

 Importance of Predictive Analytics


 

According to Wikipedia:

Predictive analytics is an area of statistics that deals with extracting information from data and using it to predict trends and behavior patterns. The enhancement of predictive web analytics calculates statistical probabilities of future events online. Predictive analytics statistical techniques include data modeling, machine learning, AI, deep learning algorithms and data mining.

Predictive analytics is why every business wants data scientists. Analytics is not just about answering questions; it is also about finding the right questions to answer. The applications of this field are many – nearly every human endeavor appears in the following excerpt from Wikipedia listing the applications of predictive analytics:

From Wikipedia:

Predictive analytics is used in actuarial science, marketing, financial services, insurance, telecommunications, retail, travel, mobility, healthcare, child protection, pharmaceuticals, capacity planning, social networking, and a multitude of numerous other fields ranging from the military to online shopping websites, Internet of Things (IoT), and advertising.

In a very real sense, predictive analytics means applying data science models to given scenarios that forecast or generate a score of the likelihood of an event occurring. The data generated today is so voluminous that experts estimate that less than 1% is actually used for analysis, optimization, and prediction. In the case of Big Data, that estimate falls to 0.01% or less.
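To make the idea of "a score of the likelihood of an event" concrete, here is a minimal, hypothetical sketch (not tied to any particular product or dataset) that trains a classifier and outputs event probabilities with scikit-learn:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy data standing in for historical records labelled "event happened" (1) or not (0)
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# predict_proba returns a likelihood score per class;
# the second column is the probability that the event occurs
scores = model.predict_proba(X_test)[:, 1]
print(scores[:5])
```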

Common Example Use-Cases of Predictive Analytics

 

Components of Predictive Analytics

 

A skilled data scientist can utilize the prediction scores to optimize and improve the profit margin of a business or a company by a massive amount. For example:

  • If you buy a book for children on the Amazon website, the website identifies that you have an interest in that author and that genre and shows you more books similar to the one you just browsed or purchased.
  • YouTube also has a very similar algorithm behind its video suggestions when you view a particular video. The site identifies (or rather, the analytics algorithms running on the site identify) more videos that you would enjoy watching based upon what you are watching now. In ML, this is called a recommender system (a minimal sketch of the idea follows this list).
  • Netflix is another famous example where recommender systems play a massive role in the 'shows you may like' section, and the recommendations are well known for their accuracy in most cases.
  • Google AdWords (the text ads displayed at the top of every Google search) is another example of a machine learning algorithm whose usage can be classified under predictive analytics.
  • Departmental stores often optimize products so that common groups are easy to find. For example, the fresh fruits and vegetables would be close to the health foods supplements and diet control foods that weight-watchers commonly use. Coffee/tea/milk and biscuits/rusks make another possible grouping. You might think this is trivial, but department stores have recorded up to 20% increase in sales when such optimal grouping and placement was performed – again, through a form of analytics.
  • Bank loans and home loans are often approved with the credit scores of a customer. How is that calculated? An expert system of rules, classification, and extrapolation of existing patterns – you guessed it – using predictive analytics.
  • Allocating budgets in a company to maximize the total profit in the upcoming year is predictive analytics. This is simple at a startup, but imagine the situation in a company like Google, with thousands of departments and employees, all clamoring for funding. Predictive Analytics is the way to go in this case as well.
  • IoT (Internet of Things) smart devices are one of the most promising applications of predictive analytics. It will not be too long before the sensor data from aircraft parts use predictive analytics to tell its operators that it has a high likelihood of failure. Ditto for cars, refrigerators, military equipment, military infrastructure and aircraft, anything that uses IoT (which is nearly every embedded processing device available in the 21st century).
  • Fraud detection, malware detection, hacker intrusion detection, cryptocurrency hacking, and cryptocurrency theft are all ideal use cases for predictive analytics. In this case, the ML system detects anomalous behavior on an interface used by the hackers and cybercriminals to identify when a theft or a fraud is taking place, has taken place, or will take place in the future. Obviously, this is a dream come true for law enforcement agencies.
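As a rough illustration of the recommender-system idea mentioned in the list above (a toy sketch, not how Amazon, YouTube, or Netflix actually implement it), item-to-item recommendations can be computed from a user-item rating matrix with cosine similarity:

```python
import numpy as np

# Toy user-item rating matrix: rows = users, columns = items (0 = not rated)
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

# Cosine similarity between item columns
norms = np.linalg.norm(ratings, axis=0)
item_sim = (ratings.T @ ratings) / (np.outer(norms, norms) + 1e-9)

def recommend(user_idx, top_n=2):
    """Score unseen items by their similarity to the items the user already rated."""
    user_ratings = ratings[user_idx]
    scores = item_sim @ user_ratings
    scores[user_ratings > 0] = -np.inf          # never recommend items already rated
    ranked = np.argsort(scores)[::-1]
    return [i for i in ranked if np.isfinite(scores[i])][:top_n]

print(recommend(0))   # items suggested for the first user
```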

So now you know what predictive analytics is and what it can do. Now let’s come to the revolutionary new technology.

Meet Endor – The ‘Social Physics’ Phenomenon

 

End-to-End Predictive Analytics Product – for non-tech users!

 

In a remarkable first, a research team at MIT, USA has created a new science called social physics, or sociophysics. Much about this field is deliberately kept highly confidential because of its massive disruptive power for data science, especially predictive analytics. The only requirement of this science is that the system being modeled has to be a human-interaction-based environment. To keep the discussion simple, we shall explain the entire system in points.

  • All systems in which human beings are involved follow scientific laws.
  • These laws have been identified, verified experimentally and derived scientifically.
  • By laws we mean equations, such as (just an example) Newton's second law: F = m·a (force equals mass times acceleration)
  • These equations establish laws of invariance – that are the same regardless of which human-interaction system is being modeled.
  • Hence the term social physics – like Maxwell’s laws of electromagnetism or Newton’s theory of gravitation, these laws are a new discovery that are universal as long as the agents interacting in the system are humans.
  • The invariance and universality of these laws have two important consequences:
    1. The need for large amounts of data disappears – Because of the laws, many of the predictive capacities of the model can be obtained with a minimal amount of data. Hence small companies now have the power to use analytics that was mostly used by the FAMGA (Facebook, Amazon, Microsoft, Google, Apple) set of companies since they were the only ones with the money to maintain Big Data warehouses and data lakes.
    2. There is no need for data cleaning. Since the model being used is canonical, it is independent of data problems like outliers, missing data, nonsense data, unavailable data, and data corruption. This is due to the orthogonality of the model (a Knowledge Sphere) being constructed and the data available.
  • Performance is superior to deep learning, Google TensorFlow, Python, R, Julia, PyTorch, and scikit-learn. Consistently, the model has outscored the latter models in Kaggle competitions, without any data pre-processing or data preparation and cleansing!
  • Data being orthogonal to interpretation and manipulation means that encrypted data can be used as-is. There is no need to decrypt encrypted data to perform a data science task or experiment. This is significant because the independence of the model functioning even for encrypted data opens the door to blockchain technology and blockchain data to be used in standard data science tasks. Furthermore, this allows hashing techniques to be used to hide confidential data and perform the data mining task without any knowledge of what the data indicates.

Are You Serious?


That’s a valid question given these claims! And that is why I recommend everyone who has the slightest or smallest interest in data science to visit and completely read and explore the following links:

  1. https://www.endor.com
  2. https://www.endor.com/white-paper
  3. http://socialphysics.media.mit.edu/
  4. https://en.wikipedia.org/wiki/Social_physics

Now when I say completely read, I mean completely read. Visit every section and read every bit of text available on the sites above. You will soon understand why this is such a revolutionary idea.

  1. https://ssir.org/book_reviews/entry/going_with_the_idea_flow#
  2. https://www.datanami.com/2014/05/21/social-physics-harnesses-big-data-predict-human-behavior/

These links above are articles about the social physics book and about the science of sociophysics in general.

For more details, please visit the following articles on Medium. These further document Endor.coin, a cryptocurrency built around the idea of sharing data with the public and getting paid for the use of your data. Preferably read all of them; if you are busy, at least read Article No. 1.

  1. https://medium.com/endor/ama-session-with-prof-alex-sandy-pentland
  2. https://medium.com/endor/endor-token-distribution
  3. https://medium.com/endor/https-medium-com-endor-paradigm-shift-ai-predictive-analytics
  4. https://medium.com/endor/unleash-the-power-of-your-data

Operation of the Endor System

For every data set, the first action performed by the Endor Analytics Platform is clustering, also popularly known as automatic classification. Endor constructs what is known as a Knowledge Sphere, a canonical representation of the data set which can be constructed even with 10% of the data volume needed for the same project when deep learning is used.

Creation of the Knowledge Sphere takes 1-4 hours for a billion records dataset (which is pretty standard these days).

Now an explanation of the mathematics behind social physics is beyond our scope, but I will include the change in the data science process when the Endor platform was compared to a deep learning system built to solve the same problem the traditional way (with a 6-figure salary expert data scientist).

An edited excerpt from Link here

From Appendix A: Social Physics Explained, Section 3.1, pages 28-34 (some material not included):

Prediction Demonstration using the Endor System:

Data:
The data that was used in this example originated from a retail financial investment platform and contained the entire investment transactions of members of an investment community. The data was anonymized and made public for research purposes at MIT (the data can be shared upon request).

 

Summary of the dataset:
– 7 days of data
– 3,719,023 rows
– 178,266 unique users

 

Automatic Clusters Extraction:
Upon first analysis of the data, the Endor system detects and extracts "behavioral clusters" – groups of users whose data dynamics violate the mathematical invariances of Social Physics. These clusters are based on all the columns of the data, but are limited to the last 7 days, as this is the data that was provided to the system as input.

 

Behavioural Clusters Summary

Number of clusters: 268,218
Cluster sizes: 62 (mean), 15 (median), 52,508 (max), 5 (min)
Clusters per user: 164 (mean), 118 (median), 703 (max), 2 (min)
Users in clusters: 102,770 out of the 178,266 users
Records per user: 33 (mean), 6 (median) – applies only to users in clusters

 

Prediction Queries
The following prediction queries were defined:
1. New users to become "whales": users who joined in the last 2 weeks that will generate at least $500 in commission in the next 90 days.
2. Reducing activity: users who were active in the last week that will reduce activity by 50% in the next 30 days (but will not churn, and will still continue trading).
3. Churn in "whales": currently active "whales" (as defined by their activity during the last 90 days), who were active in the past week, to become inactive for the next 30 days.
4. Will trade in Apple shares for the first time: users who had never invested in Apple shares, and would buy them for the first time in the coming 30 days.

 

Knowledge Sphere Manifestation of Queries
It is again important to note that the definition of the search queries is completely orthogonal to the extraction of behavioral clusters and the generation of the Knowledge Sphere, which was done independently of the queries' definition.

Therefore, it is interesting to analyze the manifestation of the queries in the clusters detected by the system: Do the clusters contain information that is relevant to the definition of the queries, despite the fact that:

1. The clusters were extracted in a fully automatic way, using no semantic information about the data, and
2. The queries were defined after the clusters were extracted, and did not affect this process.

This analysis is done by measuring the number of clusters that contain a very high concentration of "samples"; in other words, by looking for clusters that contain "many more examples than statistically expected".

A high number of such clusters (provided that it is significantly higher than the number obtained when randomly sampling the same population) proves the ability of this process to extract valuable, relevant semantic insights in a fully automatic way.
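The excerpt does not include code for this step, but the idea of flagging clusters that contain "many more examples than statistically expected" can be sketched as a simple over-representation test – a hypothetical illustration using a hypergeometric tail probability, not Endor's actual procedure:

```python
from scipy.stats import hypergeom

def cluster_is_enriched(cluster_size, positives_in_cluster,
                        population_size, positives_in_population,
                        alpha=0.01):
    """True if the cluster holds significantly more positive samples
    than random sampling of the same size would explain."""
    # P(X >= positives_in_cluster) when drawing cluster_size users
    # without replacement from the whole population
    p_value = hypergeom.sf(positives_in_cluster - 1,
                           population_size,
                           positives_in_population,
                           cluster_size)
    return p_value < alpha

# Example: 40 of a cluster's 62 users are "whales", out of 178,266 users,
# of whom 1,500 are "whales" overall (these counts are made up for illustration)
print(cluster_is_enriched(62, 40, 178_266, 1_500))
```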

 

Comparison to Google TensorFlow

In this section a comparison between the prediction process of the Endor system and Google's TensorFlow is presented. It is important to note that TensorFlow, like any other deep learning library, faces some difficulties when dealing with data similar to the one under discussion:

1. An extremely uneven distribution of the number of records per user requires some canonization of the data, which in turn requires:

2. Some manual work, done by an individual who has at least some understanding of data science.

3. Some understanding of the semantics of the data, which requires an investment of time, as well as access to the owner or provider of the data.

4. A single-class classification, using an extremely uneven distribution of positive vs. negative samples, tends to lead to overfitting of the results and requires some non-trivial maneuvering.

This again necessitates the involvement of an expert in deep learning (unlike the Endor system, which can be used by business, product, or marketing experts, with no prerequisites in machine learning or data science).

 

Traditional Methods

An expert in deep learning spent 2 weeks crafting a solution based on TensorFlow, and had sufficient expertise to be able to handle the data. The solution that was created used the following auxiliary techniques:

1. Trimming the data sequence to 200 records per customer, and padding the streams of users who have fewer than 200 records with neutral records.

2. Creating 200 training sets, each having 1,000 customers (50% known positive labels, 50% unknown), and then using these training sets to train the model.

3. Using sequence classification (an RNN with 128 LSTMs) with 2 output neurons (positive, negative), with the overall result being the difference between the scores of the two.
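As a rough idea of what such a TensorFlow baseline might look like – a hedged sketch with assumed shapes and hyperparameters, not the expert's actual code – a 128-unit LSTM sequence classifier with two output neurons can be written in Keras as:

```python
import tensorflow as tf

SEQ_LEN = 200      # records per customer, after trimming/padding
N_FEATURES = 16    # assumed number of columns per record

model = tf.keras.Sequential([
    # ignore the neutral padding records added for users with fewer than 200 rows
    tf.keras.layers.Masking(mask_value=0.0, input_shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(128),                      # sequence classification with 128 LSTM units
    tf.keras.layers.Dense(2, activation="softmax")  # two output neurons: positive / negative
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# The final prediction score is then the difference between the two class scores.
```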

Observations (all statistics are available in the white paper – and they are stunning):

1. Endor outperforms TensorFlow in 3 out of 4 queries, and gives the same accuracy in the 4th.

2. The superiority of Endor becomes increasingly evident as the task becomes "more difficult" – focusing on the top-100 rather than the top-500.

3. There is a clear distinction between the "less dynamic" queries (becoming a whale, churn, reducing activity – for which static signals should likely be easier to detect) and the "who will trade in Apple for the first time" query, which is (a) more dynamic and (b) has a very low baseline, such that for the latter, Endor is 10x more accurate!

4. As previously mentioned, the TensorFlow results illustrated here required 2 weeks of manual improvement by a deep learning expert, whereas the Endor results are 100% automatic and the entire prediction process in Endor took 4 hours.

Clearly, the path going forward for predictive analytics and data science is Endor, Endor, and Endor again!

Predictions for the Future

Personally, one thing has me sold – the robustness of the Endor system to handle noise and missing data. Earlier, this was the biggest bane of the data scientist in most companies (when data engineers are not available). 90% of the time of a professional data scientist would go into data cleaning and data preprocessing since our ML models were acutely sensitive to noise. This is the first solution that has eliminated this ‘grunt’ level work from data science completely.

The second prediction: the Endor system works upon principles of human interaction dynamics. My intuition tells me that data collected at random has its own dynamical systems that appear clearly to experts in complexity theory. I am completely certain that just as this tool developed a prediction tool with human society dynamical laws, data collected in general has its own laws of invariance. And the first person to identify these laws and build another Endor-style platform on them will be at the top of the data science pyramid – the alpha unicorn.

Final prediction – democratizing data science means that now data scientists are not required to have six-figure salaries. The success of the Endor platform means that anyone can perform advanced data science without resorting to TensorFlow, Python, R, Anaconda, etc. This platform will completely disrupt the entire data science technological sector. The first people to master it and build upon it to formalize the rules of invariance in the case of general data dynamics will for sure make a killing.

It is an exciting time to be a data science researcher!

Data Science is a broad field, and mastering all these skills requires learning quite a few things.

Dimensionless has several resources to get started with.

To Learn Data Science, Get Data Science Training in Pune and Mumbai from Dimensionless Technologies.

To learn more about analytics, be sure to have a look at the following articles on this blog:

Machine Learning for Transactional Analytics

and

Text Analytics and its applications

Enjoy data science!

Machine Learning Algorithms Every Data Scientist Should Know

Types Of ML Algorithms

There are a huge number of ML algorithms out there. In classifying them, distinctions can be made by the type of training procedure, by application, by the latest advances, and by the standard algorithms that ML scientists use in their daily work. There is a lot to cover, and we shall proceed in the order given in the following listing:

  1. Statistical Algorithms
  2. Classification
  3. Regression
  4. Clustering
  5. Dimensionality Reduction
  6. Ensemble Algorithms
  7. Deep Learning
  8. Reinforcement Learning
  9. AutoML (Bonus)

1. Statistical Algorithms

Statistics is necessary for every machine learning expert. Hypothesis testing and confidence intervals are some of the many statistical concepts to know if you are a data scientist. Here, we consider the phenomenon of overfitting. Basically, overfitting occurs when an ML model learns so many features of the training data set that its capacity to generalize to the test set takes a toss. The tradeoff between performance and overfitting is well illustrated by the following illustration:

Overfitting – from Wikipedia

 

Here, the black curve represents the performance of a classifier that has appropriately classified the dataset into two categories. Obviously, training the classifier was stopped at the right time in this instance. The green curve indicates what happens when we allow the training of the classifier to ‘overlearn the features’ in the training set. What happens is that we get an accuracy of 100%, but we lose out on performance on the test set because the test set will have a feature boundary that is usually similar but definitely not the same as the training set. This will result in a high error level when the classifier for the green curve is presented with new data. How can we prevent this?

Cross-Validation

Cross-Validation is the killer technique used to avoid overfitting. How does it work? A visual representation of the k-fold cross-validation process is given below:

From Quora

The entire dataset is split into equal subsets, and the model is trained and tested on all possible combinations of training and testing subsets, as shown in the image above. Finally, the results of all the folds are averaged. The advantage of this method is that it reduces sampling error, guards against overfitting, and accounts for bias. There are further variations of cross-validation, like non-exhaustive cross-validation and nested k-fold cross-validation (shown above). For more on cross-validation, visit the following link.
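As a quick, hedged sketch of k-fold cross-validation with scikit-learn (a generic example on the iris data, not tied to any dataset from this article):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# 5-fold cross-validation: the data is split into 5 parts, and each part takes a
# turn as the held-out test set while the remaining parts are used for training
scores = cross_val_score(model, X, y, cv=5)
print(scores, scores.mean())
```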

There are many more statistical algorithms that a data scientist has to know. Some examples include the chi-squared test, the Student’s t-test, how to calculate confidence intervals, how to interpret p-values, advanced probability theory, and many more. For more, please visit the excellent article given below:

Learning Statistics Online for Data Science

2. Classification Algorithms

Classification refers to the process of categorizing data input as a member of a target class. An example could be that we can classify customers into low-income, medium-income, and high-income depending upon their spending activity over a financial year. This knowledge can help us tailor the ads shown to them accurately when they come online and maximises the chance of a conversion or a sale. There are various types of classification like binary classification, multi-class classification, and various other variants. It is perhaps the most well known and most common of all data science algorithm categories. The algorithms that can be used for classification include:

  1. Logistic Regression
  2. Support Vector Machines
  3. Linear Discriminant Analysis
  4. K-Nearest Neighbours
  5. Decision Trees
  6. Random Forests

and many more. A short illustration of a binary classification visualization is given below:

binary classification visualization

From openclassroom.stanford.edu
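For a minimal, hedged example of classification in code (a decision tree on scikit-learn's breast-cancer dataset, chosen purely for illustration):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# A decision tree, one of the classification algorithms listed above
clf = DecisionTreeClassifier(max_depth=4, random_state=42).fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```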

 

For more information on classification algorithms, refer to the following excellent links:

How to train a decision tree classifier for churn prediction

3. Regression Algorithms

Regression is similar to classification, and many algorithms used are similar (e.g. random forests). The difference is that while classification categorizes a data point, regression predicts a continuous real-number value. So classification works with classes while regression works with real numbers. And yes – many algorithms can be used for both classification and regression. Hence the presence of logistic regression in both lists. Some of the common algorithms used for regression are

  1. Linear Regression
  2. Support Vector Regression
  3. Logistic Regression
  4. Ridge Regression
  5. Partial Least-Squares Regression
  6. Non-Linear Regression
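As a brief, hedged sketch of regression in code (plain linear regression predicting a continuous value; the dataset is a synthetic stand-in):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic data with a continuous, real-valued target
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

reg = LinearRegression().fit(X_train, y_train)
print("R^2 on test data:", reg.score(X_test, y_test))
print("Predictions:", reg.predict(X_test[:3]))
```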

For more on regression, I suggest that you visit the following link for an excellent article:

Multiple Linear Regression & Assumptions of Linear Regression: A-Z

Another article you can refer to is:

Logistic Regression: Concept & Application

Both articles have a remarkably clear discussion of the statistical theory that you need to know to understand regression and apply it to non-linear problems. They also have source code in Python and R that you can use.

4. Clustering

Clustering is a category of unsupervised learning algorithms that divides the data set into groups based on common characteristics or properties. A good example would be grouping the data set instances into categories automatically, using any of the several algorithms that we shall soon list. For this reason, clustering is sometimes known as automatic classification. It is also a critical part of exploratory data analysis (EDA). Some of the algorithms commonly used for clustering are:

  1. Hierarchical  Clustering – Agglomerative
  2. Hierarchical Clustering – Divisive
  3. K-Means Clustering
  4. K-Nearest Neighbours Clustering
  5. EM (Expectation Maximization) Clustering
  6. Principal Components Analysis Clustering (PCA)

An example of a common clustering problem visualization is given below:

clustering problem visualization

From Wikipedia

 

The above visualization clearly contains three clusters.
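A minimal, hedged k-means example (on synthetic blob data, for illustration only):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic data with three natural groups, like the visualization above
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("Cluster labels of the first 10 points:", kmeans.labels_[:10])
print("Cluster centers:\n", kmeans.cluster_centers_)
```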

For another excellent article on clustering, refer to the link.

You can also refer to the following article:

 

ML Methods for Prediction and Personalization

5. Dimensionality Reduction

Dimensionality Reduction is an extremely important tool that should be completely clear and lucid for any serious data scientist. Dimensionality reduction is also referred to as feature selection or feature extraction. This means that the principal variables of the data set that have the highest covariance with the output are extracted, and the features/variables that are not important are ignored. It is an essential part of EDA (Exploratory Data Analysis) and is nearly always used in every moderately or highly difficult problem. The advantages of dimensionality reduction are (from Wikipedia):

  1. It reduces the time and storage space required.
  2. Removal of multi-collinearity improves the interpretation of the parameters of the machine learning model.
  3. It becomes easier to visualize the data when reduced to very low dimensions such as 2D or 3D.
  4. It avoids the curse of dimensionality.

The most commonly used algorithm for dimensionality reduction is Principal Components Analysis or PCA. While this is a linear model, it can be converted to a non-linear model through a kernel trick similar to that used in a Support Vector Machine, in which case the technique is known as Kernel PCA. Thus, the algorithms commonly used are:

  1. Principal Component Analysis (PCA)
  2. Non-Negative Matrix Factorization (NMF)
  3. Kernel PCA
  4. Linear Discriminant Analysis (LDA)
  5. Generalized Discriminant Analysis (kernel trick again)

The result of a PCA operation is visualized below:

PCA operation visualization

By Nicoguaro – Own work, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=46871195
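A short, hedged PCA example with scikit-learn (reducing the iris data from 4 dimensions to 2 for visualization):

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)

# Keep the two principal components with the highest explained variance
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)

print("Explained variance ratio:", pca.explained_variance_ratio_)
print("Reduced shape:", X_2d.shape)   # (150, 2)
```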

 

You can refer to this article for a general discussion of dimensionality reduction:

This article below gives you a brief description of dimensionality reduction using PCA by coding an ML example:

MULTI-VARIATE ANALYSIS

6. Ensembling Algorithms

Ensembling means combining multiple ML learners into one pipeline so that the combination of all the weak learners gives an ML application with higher accuracy than each learner taken separately. Intuitively, this makes sense, since the disadvantages of using one model would be offset by combining it with another model that does not suffer from that disadvantage. There are various algorithms used for ensembling machine learning models. The common techniques usually employed in practice are:

  1. Simple/Weighted Average/Voting: Simplest one, just takes the vote of models in Classification and average in Regression.
  2. Bagging: We train models (same algorithm) in parallel for random sub-samples of data-set with replacement. Eventually, take an average/vote of obtained results.
  3. Boosting: In this approach, models are trained sequentially, where the n-th model uses the output of the (n-1)-th model and works on the limitations of the previous model; the process stops when the result stops improving.
  4. Stacking: We combine two or more than two models using another machine learning algorithm.

(from Amardeep Chauhan on Medium.com)

In all four cases, the combination of the different models ends up having better performance than any single learner. One particular ensembling technique that has done extremely well in data science competitions on Kaggle is the GBRT model, or Gradient Boosted Regression Tree model.

 

We include the source code from the scikit-learn module for Gradient Boosted Regression Trees since this is one of the most popular ML models which can be used in competitions like Kaggle, HackerRank, and TopCoder.

Refer Link here

GradientBoostingClassifier supports both binary and multi-class classification. The following example shows how to fit a gradient boosting classifier with 100 decision stumps as weak learners:
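(A sketch along the lines of the scikit-learn documentation example; exact numbers may vary slightly by version.)

```python
from sklearn.datasets import make_hastie_10_2
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_hastie_10_2(random_state=0)
X_train, X_test = X[:2000], X[2000:]
y_train, y_test = y[:2000], y[2000:]

# 100 decision stumps (max_depth=1) as weak learners
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
                                 max_depth=1, random_state=0).fit(X_train, y_train)
print(clf.score(X_test, y_test))   # roughly 0.91 on this synthetic dataset
```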


 

GradientBoostingRegressor supports a number of different loss functions for regression which can be specified via the argument loss; the default loss function for regression is least squares ('ls').
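(Again a sketch in the spirit of the scikit-learn documentation example; note that in recent scikit-learn versions the least-squares loss is named 'squared_error' rather than 'ls'.)

```python
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

X, y = make_friedman1(n_samples=1200, random_state=0, noise=1.0)
X_train, X_test = X[:200], X[200:]
y_train, y_test = y[:200], y[200:]

# Uses the default least-squares loss ('ls' / 'squared_error' depending on version)
est = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1,
                                max_depth=1, random_state=0).fit(X_train, y_train)
print(mean_squared_error(y_test, est.predict(X_test)))
```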


 

You can also refer to the following article which discusses Random Forests, which is a (rather basic) ensembling method.

Introduction to Random forest

 

7. Deep Learning

In the last decade, there has been a renaissance of sorts within the machine learning community worldwide. Since about 2002, neural network research had struck a dead end, as networks would get stuck in local minima in the non-linear hyperspace of the energy landscape of a three-layer network. Many thought that neural networks had outlived their usefulness. However, starting with Geoffrey Hinton in 2006, researchers found that adding multiple layers of neurons to a neural network created an energy landscape of such high dimensionality that local minima were shown to be statistically extremely unlikely to occur in practice. Today, in 2019, more than a decade of innovation later, this method of adding additional hidden layers of neurons to a neural network is the classical practice of the field known as deep learning.

Deep Learning has truly taken the computing world by storm and has been applied to nearly every field of computation, with great success. Now with advances in Computer Vision, Image Processing, Reinforcement Learning, and Evolutionary Computation, we have marvellous feats of technology like self-driving cars and self-learning expert systems that perform enormously complex tasks like playing the game of Go (not to be confused with the Go programming language). The main reason these feats are possible is the success of deep learning and reinforcement learning (more on the latter given in the next section below). Some of the important algorithms and applications that data scientists have to be aware of in deep learning are:

  1. Long Short term Memories (LSTMs) for Natural Language Processing
  2. Recurrent Neural Networks (RNNs) for Speech Recognition
  3. Convolutional Neural Networks (CNNs) for Image Processing
  4. Deep Neural Networks (DNNs) for Image Recognition and Classification
  5. Hybrid Architectures for Recommender Systems
  6. Autoencoders (ANNs) for Bioinformatics, Wearables, and Healthcare

 

Deep learning networks typically have millions of neurons and hundreds of millions of connections between neurons. Training such networks is such a computationally intensive task that companies are now turning to (1) cloud computing systems and (2) graphical processing unit (GPU) parallel high-performance processing systems for their computational needs. It is now common to find hundreds of GPUs operating in parallel to train ridiculously high-dimensional neural networks for amazing applications like machine 'dreaming' and computer artistry – artistic creativity pleasing to our aesthetic senses.
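To make this concrete, here is a minimal, hedged Keras sketch of a convolutional neural network for image classification (one of the architectures listed above); the dataset and layer sizes are illustrative choices, not a recommendation:

```python
import tensorflow as tf

# MNIST digits as a small, standard image-classification task
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0   # add a channel dimension and scale to [0, 1]
x_test = x_test[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```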

 

Artistic Image Created By A Deep Learning Network. From blog.kadenze.com.

 

For more on Deep Learning, please visit the following links:

Machine Learning and Deep Learning : Differences

For information on a full-fledged course in deep learning, visit the following link:

Deep Learning

8. Reinforcement Learning (RL)

In the recent past, and the last three years in particular, reinforcement learning has become remarkably famous for a number of achievements in cognition that were earlier thought to be limited to humans. Simply put, reinforcement learning deals with the ability of a computer to teach itself. We have the idea of a reward vs. penalty approach: the computer is given a scenario and 'rewarded' with points for correct behaviour, while 'penalties' are imposed for wrong behaviour. The problem is provided to the computer formulated as a Markov Decision Process, or MDP. Some basic types of reinforcement learning algorithms to be aware of are (some extracts from Wikipedia):

 

1. Q-Learning

Q-Learning is a model-free reinforcement learning algorithm. The goal of Q-learning is to learn a policy, which tells an agent what action to take under what circumstances. It does not require a model (hence the connotation “model-free”) of the environment, and it can handle problems with stochastic transitions and rewards, without requiring adaptations. For any finite Markov decision process (FMDP), Q-learning finds a policy that is optimal in the sense that it maximizes the expected value of the total reward over any and all successive steps, starting from the current state. Q-learning can identify an optimal action-selection policy for any given FMDP, given infinite exploration time and a partly-random policy. “Q” names the function that returns the reward used to provide the reinforcement and can be said to stand for the “quality” of an action taken in a given state.
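As a minimal, hedged sketch of the Q-learning update rule (on a toy deterministic chain environment invented here purely for illustration):

```python
import numpy as np

# Toy chain environment: states 0..4, actions 0 = left, 1 = right.
# Reaching the last state ends the episode with reward 1; other steps give reward 0.
N_STATES, N_ACTIONS = 5, 2
alpha, gamma, epsilon = 0.1, 0.9, 0.3   # learning rate, discount factor, exploration rate

Q = np.zeros((N_STATES, N_ACTIONS))
rng = np.random.default_rng(0)

def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

for episode in range(300):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            action = int(rng.integers(N_ACTIONS))
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward = step(state, action)
        # Q-learning update: move Q(s, a) towards reward + gamma * max_a' Q(s', a')
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(np.argmax(Q, axis=1))   # non-terminal states should learn action 1 ("right")
```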

 

2. SARSA

State–action–reward–state–action (SARSA) is an algorithm for learning a Markov decision process policy. The name simply reflects the fact that the main function for updating the Q-value depends on the current state of the agent S1, the action the agent chooses A1, the reward R the agent gets for choosing this action, the state S2 that the agent enters after taking that action, and finally the next action A2 the agent chooses in its new state. The acronym comes from the quintuple (s_t, a_t, r_t, s_{t+1}, a_{t+1}).

 

3. Deep Reinforcement Learning

This approach extends reinforcement learning by using a deep neural network, without explicitly designing the state space. The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning, or end-to-end reinforcement learning. Remarkably, such agents have achieved levels of skill higher than humans at playing computer games. Even a complex game like Dota 2 has been won by deep reinforcement learning agents (OpenAI's system, trained in OpenAI Gym-style environments) against strong human players in best-of-five matches.

For more information, go through the following links:

Reinforcement Learning: Super Mario, AlphaGo and beyond

and

How to Optimise Ad CTR with Reinforcement Learning

 

Finally:

9. AutoML (Bonus)

If reinforcement learning is cutting-edge data science, AutoML is bleeding-edge data science. AutoML (Automated Machine Learning) is a remarkable open-source project, available on GitHub at the following link, that uses an algorithm and a data-analysis approach to construct an end-to-end data science pipeline – data pre-processing, algorithm selection, hyperparameter tuning, cross-validation, and algorithm optimization – completely automating the ML process. Amazingly, this means that computers can now handle the ML expertise that was earlier in the hands of a few ML practitioners and AI experts.

AutoML has found its way into Google TensorFlow through AutoKeras, as well as Microsoft CNTK, Google Cloud Platform, Microsoft Azure, and Amazon Web Services (AWS). Currently it is a premium paid offering for anything beyond tiny datasets, and one entire run can take a day or two or more to execute completely. But at least the computer AI industry has come full circle: we now have computers so capable that they are taking the machine learning process out of human hands and creating models that are significantly more accurate and faster than the ones created by human beings!
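For a taste of the open-source side of AutoML, here is a hedged sketch using the TPOT library (one of several AutoML tools; the NAS-based systems described below use different machinery):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# TPOT searches over pre-processing steps, models, and hyperparameters automatically
tpot = TPOTClassifier(generations=5, population_size=20,
                      verbosity=2, random_state=42)
tpot.fit(X_train, y_train)

print(tpot.score(X_test, y_test))
tpot.export("best_pipeline.py")   # writes the winning scikit-learn pipeline as Python code
```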

The basic algorithm used by AutoML is Network Architecture Search and its variants, given below:

  1. Network Architecture Search (NAS)
  2. PNAS (Progressive NAS)
  3. ENAS (Efficient NAS)

The functioning of AutoML is given by the following diagram:

how autoML works

From cloud.google.com

 

For more on AutoML, please visit the link

and

Top 10 Artificial Intelligence Trends in 2019

 

If you’ve stayed with me till now, congratulations; you have learnt a lot of information and cutting edge technology that you must read up on, much, much more. You could start with the links in this article, and of course, Google is your best friend as a Machine Learning Practitioner. Enjoy machine learning!

Follow this link, if you are looking to learn data science online!

You can follow this link for our Big Data course, which is a step further into advanced data analysis and processing!

Additionally, if you are interested in learning Data Science, click here to start the Online Data Science Course

Furthermore, if you want to read more about data science, read our Data Science Blogs

MULTI-VARIATE ANALYSIS

WHY DO MULTI-VARIATE ANALYSIS

  1. Every data-set comprises multiple variables, so we need to understand how those variables interact with each other.
  2. After uni-variate analysis – where we understand the behaviour of each distribution – and bi-variate analysis – where we understand how each variable relates to the other variables – we need to understand how the trend changes when more variables are introduced.
  3. Multi-variate analysis has good application in clustering, where we need to visualize how multiple variables show different patterns in different clusters.
  4. When there are too many inter-correlated variables in the data, we have to do a dimensionality reduction through techniques like Principal Component Analysis and Factor Analysis. We will cover dimensionality reduction techniques in a different post.

We will illustrate multi-variate analysis with the following case study:

Data:

Each row corresponds to the annual spending of a customer of a wholesale distributor on milk, fresh produce, grocery, frozen products, detergents_paper, and delicassen, across 3 different regions – Lisbon, Oporto, and Others (coded 1/2/3 respectively) – and through 2 different channels – Horeca (Hotel/Restaurant/Cafe) or Retail (coded 1/2 respectively).

PROCEDURE TO ANALYZE MULTIPLE VARIABLES

I. TABLES

Tables can be generated using the xtabs function, the tapply function, the aggregate function, and the dplyr library.

To get the spending on milk channel-wise and region-wise, using xtabs function

To get percentage spending

To get %age spending on grocery channel-wise and region-wise, using aggregate function

To get %age spending on frozen channel-wise and region-wise, using tapply function

Percentage spending

To get %age spending on detergent_paper channel-wise and region-wise, using dplyr library
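The original post demonstrates these tables with R code (xtabs, aggregate, tapply, and dplyr); the code screenshots are not reproduced here. A rough Python/pandas equivalent is sketched below, with the column names and file name assumed from the wholesale-customers data described above:

```python
import pandas as pd

# Assumed columns: Channel, Region, Fresh, Milk, Grocery, Frozen, Detergents_Paper, Delicassen
data = pd.read_csv("Wholesale customers data.csv")   # file name is an assumption

# Spending on milk, channel-wise and region-wise (analogous to xtabs in R)
milk_table = data.pivot_table(values="Milk", index="Channel",
                              columns="Region", aggfunc="sum")
print(milk_table)

# Percentage spending on grocery, channel-wise and region-wise
grocery_table = data.pivot_table(values="Grocery", index="Channel",
                                 columns="Region", aggfunc="sum")
print(100 * grocery_table / grocery_table.values.sum())
```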

II. STATISTICAL TESTS

Anova

Anova can be used to understand how a continuous variable depends on categorical independent variables.

In the following code we try to understand whether the sales of milk are a function of Region, Channel, and their interaction.

This shows that the expenditure on milk depends on the channel.

Chi-Square Test

A chi-square test is used to understand the association between 2 factor variables.

The probability (p-value) is quite high, at 11.37%, hence we fail to reject the null hypothesis and conclude that there is no evidence of an association between channel and region.
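The post runs these tests in R (aov and chisq.test); a hedged Python equivalent using statsmodels and scipy might look like this (same assumed column and file names as above):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

data = pd.read_csv("Wholesale customers data.csv")   # file name is an assumption

# Two-way ANOVA: is milk spending a function of Region, Channel, and their interaction?
model = smf.ols("Milk ~ C(Region) * C(Channel)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))

# Chi-square test of association between the two factor variables
contingency = pd.crosstab(data["Channel"], data["Region"])
chi2, p_value, dof, _ = chi2_contingency(contingency)
print("p-value:", p_value)
```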

III. CLUSTERING

Multi-variate analysis has a very wide application in unsupervised learning. Clustering has the largest share of applications of multi-variate understanding and visualization. Many times we prefer to perform clustering before applying regression algorithms, to get more accurate predictions for each cluster.

We will do hierarchical clustering for our case study, using the following steps:

1. Separating the columns to be analyzed

Let's get a sample of the data comprising all the items whose expenditure is to be analyzed, i.e. all columns except Channel and Region – fresh, milk, grocery, frozen, etc.

2. Scaling the data, to get all the columns onto the same scale. This is done by calculating the z-score:

3. Identifying the appropriate number of clusters for k-means clustering

Though 2 or 3 clusters explain the maximum variance, in this case study we divide the data into 10 clusters to get more specific results, visualizations, and targeting strategies.

We can also use within-sum-of-squares method to find the number of clusters.

Also read:
Data Exploration and Uni-Variate Analysis
Bi-Variate Analysis
Data-Cleaning, Categorization and Normalization

4. Finding the most suitable number of clusters through wss method

5. Plot wss using ggplot2 Library

We will plot the within-sum-of-squares distance using ggplot library:

We notice that beyond 10 clusters there is little further improvement in the wss distance, so we can choose 10 clusters.

6. Dividing data into 10 clusters

We will apply the k-means algorithm to divide the data into 10 clusters:
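The post does this in R (scale, the wss elbow plot, and kmeans); a compact, hedged Python version of the scaling, elbow, and k-means steps could look like this:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("Wholesale customers data.csv")    # file name is an assumption
items = data.drop(columns=["Channel", "Region"])      # step 1: columns to analyze

scaled = StandardScaler().fit_transform(items)        # step 2: z-score scaling

# Steps 3-5: within-sum-of-squares (wss) for different cluster counts
wss = {k: KMeans(n_clusters=k, n_init=10, random_state=42).fit(scaled).inertia_
       for k in range(2, 16)}
print(wss)    # plot these values to pick the elbow (the post settles on 10 clusters)

# Step 6: divide the data into 10 clusters
kmeans = KMeans(n_clusters=10, n_init=10, random_state=42).fit(scaled)
data["cluster"] = kmeans.labels_

# Step 7: attributes of the k-means object – centers and cluster sizes
print(kmeans.cluster_centers_)
print(data["cluster"].value_counts())
```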

7. Checking the Attributes of the k-means Object

We will check the centers and size of the clusters

8. Visualizing the Clusters

9. Profiling Clusters

Getting Cluster-wise summaries through mean function

10. Population-Wise Summaries

11. Z-Value Normalisation

z score = (cluster_mean - population_mean) / population_sd

Wherever we have a very high z-score, it indicates that the cluster is different from the population:
* Very high z-score for fresh in clusters 8 and 9
* Very high z-score for milk in clusters 5, 6 and 9
* Very high z-score for grocery in clusters 5 and 6
* Very high z-score for frozen products in clusters 7, 9 and 10
* Very high z-score for detergents paper in clusters 5 and 6

We would like to find out why these clusters are so different from the population.
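In Python/pandas, the cluster-vs-population z-score profiling described above could be sketched as follows (a rough, hedged equivalent of the R code in the post, reusing the assumed file and column names):

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("Wholesale customers data.csv")    # file name is an assumption
items = data.drop(columns=["Channel", "Region"])
data["cluster"] = KMeans(n_clusters=10, n_init=10, random_state=42).fit_predict(
    StandardScaler().fit_transform(items))

# Cluster-wise means compared with the population mean / sd, expressed as z-scores
cluster_means = data.groupby("cluster")[list(items.columns)].mean()
z_scores = (cluster_means - items.mean()) / items.std()
print(z_scores.round(2))   # large |z| flags clusters that differ strongly from the population
```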

IV. MULTI-VARIATE VISUALIZATIONS

  1. To understand the correlations between each column

We observe positive correlation between:

  • Milk & Grocery
  • Milk & Detergents_Paper
  • Grocery & Detergents_Paper

Next we will import the ggplot2 library to create graphical representations of the data frame.

We’ll also add the column cluster number to the data-frame object “data”.

Next we will check the cluster-wise views and how the patterns differ cluster-wise.

Milk vs Grocery vs Fresh cluster wise analysis

  • We notice that if expenditure on milk is high, expenditure on grocery or fresh is high, but not both
  • We notice cluster 4 contains data points on the high end of milk or grocery
  • Cluster 3 has people with high spending on milk and average spending on grocery

Relationship between Milk, Grocery and Fresh across Region across Channel

  • Region 3 has more people than Region 1 and 2
  • In Region 3 we observe an increasing trend between milk and fresh and grocery
  • In Region 1 we notice that there is an increasing trend between milk and grocery but fresh is low
  • In Region 2 we notice medium purchase of milk and grocery and fresh
  • High milk / grocery sales and medium fresh sales is through channel 2
  • In channel 2 there is an increasing trend between consumption of milk and consumption of grocery
  • Cluster 4 has either high sales of milk or grocery or both
  • Channel 2 contributes to high sales of milk and grocery, while low and medium sales of fresh

Milk vs Grocery vs Frozen Products Cluster wise analysis

  • Very high sales of frozen products by cluster 11 and cluster 7
  • People purchasing high quantities of milk and grocery are purchasing low quantities of frozen products

Relationship between Milk, Grocery and Frozen Products across Region

  • In Region 2 and Region 3, we have clusters 1 and 3 respectively, which have high expenditure patterns on frozen products

Relationship between Milk, Grocery and Frozen across Channel

  • We notice that channel 1 has many people with high purchase pattern of frozen products
  • Channel 2 has some clusters (cluster no.: 5 and 6) with very high purchase pattern of milk

Relationship between Frozen Products, Grocery and Detergents Paper across Region across Channel

  • In channel-2, people who are spending high on grocery are also spending low on frozen
  • High sales of detergents paper and grocery are observed through channel 2
  • Sales of frozen products is almost nil through channel 2
  • Cluster 4 has high expenditure on Detergents_Paper
  • Through channel 2 sales of frozen products is 0

Relationship between Milk, Delicassen and Detergents Paper across Region

  • People who spend a lot on milk hardly spend on Delicassen, though in region 3 we do see comparatively more expenditure on Delicassen
  • Cluster 3 in region 3 has very high expenditure on delicassen and high expenditure on milk
  • Cluster 4 has high consumption pattern on milk and detergents paper

Relationship between Milk, Grocery and Detergents Paper across Channel

  • Channel 2 is having an increasing trend between milk and Detergents Paper
  • Where sales of detergents paper is high, the sales of milk is also high
  • Cluster 4 has a high expense pattern on Detergents Paper or Milk

Relationship between Milk, Grocery and Detergents Paper across Region across Channel

  • There is a linear trend between Milk and Grocery in channel 2
  • There is a linear trend between Grocery and Detergents Paper
  • Cluster 4 has high consumption of grocery and detergents paper
  • Cluster 10 has medium consumption of milk, grocery and detergents paper
  • Cluster 6 has low consumption of milk, grocery and detergents paper
  • Cluster 2 has the lowest consumption of milk, grocery and detergents paper

Based on the above understanding of cluster-wise trends, we can devise cluster-wise, region-wise, channel-wise strategies to improve the sales.

V. DIMENSIONALITY REDUCTION TECHNIQUES

We use dimensionality reduction techniques like PCA to transform a larger number of independent variables into a smaller set of variables:

Principal Component Analysis

Principal component analysis (PCA) tries to explain the variance-covariance structure of a set of variables through a few linear combinations of these variables. Its general objectives are data reduction and interpretation. Principal components are often more effective in summarizing the variability in a set of variables when these variables are highly correlated.

Also, PCA is normally an intermediate step in a larger data analysis, since the new variables it creates (the principal component scores) can be used in subsequent analyses such as multivariate regression and cluster analysis.
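As a quick illustration, here is a minimal sketch using R's built-in prcomp function on the built-in USArrests dataset; the dataset choice and the decision to centre and scale the variables are assumptions made purely for this example:

# Minimal PCA sketch on a built-in dataset (illustrative only)
pca <- prcomp(USArrests, center = TRUE, scale. = TRUE)
summary(pca)   # proportion of variance explained by each component
head(pca$x)    # the new variables (principal component scores)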

We will discuss PCA in detail in further posts.

Introduction to Random forest


Introduction

Random forest is one of those algorithms that comes to every data scientist's mind for almost any given problem. It has been around for a long time and has successfully been used for such a wide range of tasks that it has become common to think of it as a basic tool. It is a versatile algorithm and can be used for both regression and classification.
This post aims to give an informal introduction to Random Forest and its implementation in R.


Table of contents

  1. What is a Decision Tree?
  2. What is Random Forest?
  3. Random forest in R.
  4. Pros and Cons?
  5. Applications

What is a Decision Tree?

A decision tree is a simple, deterministic data structure for modelling decision rules for a specific classification problem. At each node, one feature is selected to make the separating decision. We stop splitting once a node contains sufficiently few data points. Such a leaf node then gives us the final result (probabilities for the different classes, in the case of classification).
Refer to the figure below for a clearer understanding:

[Figure: decision_tree]

How does it split?

The most decisive factor for the efficiency of a decision tree is the efficiency of its splitting process. We split at each node in such a way that the resulting purity is maximized. Purity simply refers to how well we can segregate the classes and increase our knowledge with the split performed. An image is worth a thousand words; have a look at the image below for some intuition:

[Figure: gini]

Two popular methods for splitting are:

  1. Gini Impurity
  2. Information Gain

Explaining each of these methods in detail is beyond the scope of this post, but I highly recommend going through the linked resources for an in-depth understanding.
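Just for intuition, here is a tiny sketch of how Gini impurity could be computed for a node and for a candidate split; the helper name gini and the class proportions are made up for illustration:

# Gini impurity of a node, given the class proportions p
gini <- function(p) 1 - sum(p^2)

gini(c(0.5, 0.5))   # 0.5  -> maximally impure for two classes
gini(c(0.9, 0.1))   # 0.18 -> a much purer node
# Impurity of a split = weighted average of the child nodes' impurities
0.6 * gini(c(0.9, 0.1)) + 0.4 * gini(c(0.25, 0.75))   # 0.258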

Visualization:

  • Each split leads to a straight line classifying the dataset into two parts. Thus, the final decision boundary will consist of straight lines (or boxes).
[Figure: dt_boundary]
  • In comparison to regression, a decision tree fits a staircase-shaped boundary to classify the data.
[Figure: reg vs dt]

What is Random Forest?

Random forest is an improvement on top of the decision tree algorithm. The core idea behind Random Forest is to generate multiple small decision trees from random subsets of the data (hence the name “Random Forest”).
Each of these decision trees gives a biased classifier (as it only considers a subset of the data), and each captures different trends in the data. This ensemble of trees is like a team of experts, each with a little knowledge of the overall subject but thorough in their own area of expertise.
In the case of classification, the majority vote across trees decides the class. In the expert analogy, it is like asking the same multiple-choice question to each expert and taking the answer that most experts vote as correct. In the case of regression, we can use the average of all trees as our prediction. In addition, we can weight the more decisive trees higher than the others by testing on validation data.

Visualization:

  • Majority vote is taken from the experts (trees) for classification.
[Figure: voting]
  • We can also use probabilities and set the threshold for classification.
[Figure: rf]

Major hyperparameters in Random Forest

  1. ntree: the number of trees to grow in the forest. A typical value is around 100; adding more trees mainly increases training time, as random forests rarely overfit simply from growing more trees.
  2. mtry: the number of variables randomly sampled as candidates at each split of a particular tree.
  3. replace: whether the sampling of observations should be done with or without replacement.

Decision boundary in Random Forest:

As Random Forest uses an ensemble of trees, it is capable of generating complex decision boundaries. Below are the kinds of decision boundaries that Random Forest can generate:

[Figure: rf_boundary]

Random forest in R.


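(The original code chunk is not reproduced here; the snippet below is a minimal sketch using the randomForest package on the built-in iris data, chosen only to illustrate the hyperparameters discussed above.)

# install.packages("randomForest")   # if not already installed
library(randomForest)

set.seed(42)
# Simple train/test split on the built-in iris data
idx   <- sample(nrow(iris), 0.7 * nrow(iris))
train <- iris[idx, ]
test  <- iris[-idx, ]

# ntree, mtry and replace are the hyperparameters discussed above
rf_model <- randomForest(Species ~ ., data = train,
                         ntree = 100, mtry = 2, replace = TRUE,
                         importance = TRUE)

print(rf_model)                # OOB error estimate and confusion matrix
importance(rf_model)           # variable importance
pred <- predict(rf_model, newdata = test)
mean(pred == test$Species)     # accuracy on the held-out test set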
I highly encourage you to play with the hyperparameters for a while and see their effect on the output.

Pros and Cons?

Pros:

  • One of the most accurate decision models.
  • Works well on large datasets.
  • Can be used to extract variable importance.
  • Does not require feature scaling (normalization or standardization)

Cons:

  • Can overfit on noisy data.
  • Unlike a single decision tree, the results are difficult to interpret.
  • Hyperparameters need careful tuning for high accuracy.

Applications

Random forests have successfully been implemented in a variety of fields. Some applications include:

  • Object recognition.
  • Molecular Biology (Analyzing amino acid sequences)
  • Remote sensing (Pattern recognition)
  • Astronomy (Star Galaxy classification, etc)

Additional resources:

I highly recommend going through the links below for an in-depth understanding of the maths behind this algorithm.

  1. Random forest (University of British Columbia)
  2. Random forest Intuition

 

Data Cleaning, Categorization and Normalization

Data cleaning, categorization and normalization are among the most important steps in preparing data for analysis. Captured data is generally dirty and unfit for statistical analysis. It has to be cleaned, standardized, categorized and normalized first, and only then explored.

Definition of Clean Data

Happy families are all alike; every unhappy family is unhappy in its own way – Leo Tolstoy

Like families, clean datasets are all alike, but every messy dataset is unreadable by our modeling algorithms in its own way. Clean datasets provide a standardized way to link the structure of a dataset with its semantics.

We will take the following text file with dirty data:

%% Data
Sonu ,1861, 1892, male
Arun , 1892, M
1871, Monica, 1937, Female
1880, RUCHI, F
Geetu, 1850, 1950, fem
BaLa, 1893,1863
% Names, birth dates, death dates, gender

Let us start with tidying the above data. We’ll adopt the following steps:

1. Read the unclean data from the text file and analyse the structure, content, and quality of data.

The following functions let us read in data that is technically correct, or close to it:

  • read.table
  • read.csv
  • read.csv2
  • read.delim
  • read.delim2

When the rows in the data file are not uniformly formatted, we can consider reading in the text line by line and transforming the data to rectangular form ourselves.

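(A minimal sketch of the line-by-line read, assuming the dirty data above has been saved to a hypothetical file called dirty_data.txt:)

# Read the raw text file line by line into a character vector
txt <- readLines("dirty_data.txt")
txt
length(txt)   # 8 elements, one per line of the file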
The variable txt is a vector of type “character” with 8 elements, equal to the number of lines in our text file.

2. Delete the irrelevant/duplicate data. This improves data protection, increases the speed of processing and reduces the overall costs.

In our example, we will delete the comments from the data. Comments are the lines beginning with a “%” sign.

Following is the code:
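(The original code chunk is not shown; a minimal sketch that drops the comment lines could look like this:)

# Remove comment lines, i.e. lines starting with a "%" sign
txt <- txt[!grepl("^%", txt)]
txt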

3. Split lines into separate fields. This can be done using the strsplit function:
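(A minimal sketch, splitting each line on the comma separator:)

# Split every line into its fields
fields <- strsplit(txt, split = ",")
fields[[1]]    # e.g. "Sonu "  "1861"  " 1892"  " male"
class(fields)  # "list"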

Here, txt was a character vector, while after splitting, the output is stored in the variable “fields”, which is of type “list”.

4. Standardize and Categorize fields

This is a crucial process in which the data is defined, formatted, represented and structured across all data layers. It involves developing the schema, or set of attributes.

The goal of this step is to make sure every row has the same number of fields and that the fields are in the same order. With the read.table command, any row with fewer than the maximum number of fields gets padded with NA values. One advantage of the do-it-yourself approach is that we don’t have to make this assumption. The easiest way to standardize the rows is to write a function that takes a single character vector as input and assigns the values in the right order.

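(The original function is not shown; the sketch below follows the description in the next paragraph. The function name Categorize comes from the lapply call further down, while the internal details, such as the 1890 cut-off, are taken from the text and should be treated as illustrative.)

# Assign the split fields of one row to a standard order:
# Name, Birth Year, Death Year, Gender
Categorize <- function(x) {
  out <- rep(NA_character_, 4)       # missing fields stay NA
  i <- grep("[[:alpha:]]", x)        # positions of the alphabetical fields
  out[1] <- x[i][1]                  # first alphabetical field: the name
  out[4] <- x[i][2]                  # second alphabetical field: the gender (if any)
  num <- as.numeric(x[-i])           # the remaining, numeric fields
  out[2] <- num[num < 1890][1]       # birth year: less than 1890
  out[3] <- num[num > 1890][1]       # death year: greater than 1890
  out
}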
The above function takes each (split) line of txt as its input x. It returns a character vector in a standard order: Name, Birth Year, Death Year, Gender. Wherever a field is missing, NA is introduced. The grep statement locates the alphabetical fields in the input x. There are two kinds of alphabetical fields in our input, one representing the name and the other the gender; they are assigned to the 1st and 4th positions of the output vector respectively. The year of birth and the year of death are recognized as “less than 1890” and “greater than 1890” respectively.

To retrieve the fields for each row in the example, we need to apply this function to every element of fields.

stdfields <- lapply(fields, Categorize)

Our Categorize function here is quite fragile; it misbehaves, for example, when the input vector contains three or more alphabetical fields. The data analyst has to decide how general the Categorize function should be.

5. Transform the Data to a Data Frame

First we copy the elements of the list to a matrix, which is then coerced into a data frame.

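(A minimal sketch; the column names are an assumption based on the fields described above:)

# Stack the standardized rows into a matrix (by row), then coerce to a data frame
M   <- matrix(unlist(stdfields), nrow = length(stdfields), byrow = TRUE)
dat <- as.data.frame(M, stringsAsFactors = FALSE)
colnames(dat) <- c("Name", "Birth", "Death", "Gender")
dat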
Here, we build the matrix by row. The number of rows in the matrix is the same as the number of elements in stdfields. Thereafter, we convert the matrix to a data.frame.

6. Data Normalization

This is the systematic process of ensuring that the data structure is suitable and serves its purpose. Undesirable characteristics of the data are eliminated or updated to improve consistency and quality. The goal is to reduce redundancy and inaccuracy and to organize the data.

String normalization techniques are aimed at transforming a variety of strings to a smaller set of string values which are more easily processed.

  • Remove the white spaces. We can use the str_trim function from the stringr library.

The sapply function will return an object of type matrix, so we convert it back to a data frame:
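(A minimal sketch of this step:)

library(stringr)

# Trim leading/trailing white space in every column; sapply returns a matrix,
# so we coerce the result back to a data frame
dat <- as.data.frame(sapply(dat, str_trim), stringsAsFactors = FALSE)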

  • Convert all the letters in the column “Name” to upper case for standardization:
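(A one-line sketch using base R's toupper:)

# Upper-case the Name column
dat$Name <- toupper(dat$Name)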

  • Normalize the gender variable. The gender variable appears in five different formats. Following are two ways to normalize it:

The first uses the ^ (start-of-string) operator in a regular expression. This finds the values in the Gender column that begin with “m” or “f” respectively.

Following is the code:
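(A sketch of this first approach. I work on a copy, gender_v1, a name made up for this example, so that the raw values remain available for the string-distance method below; ignore.case is used because the raw values mix upper and lower case:)

# Method 1: regular expressions anchored with ^ (start of string)
gender_v1 <- dat$Gender                    # work on a copy; raw values are reused below
gender_v1[grep("^m", gender_v1, ignore.case = TRUE)] <- "male"
gender_v1[grep("^f", gender_v1, ignore.case = TRUE)] <- "female"
gender_v1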

The second method is approximate string matching using string distances. A string distance measures how much two strings differ from each other, i.e. how many operations are required to turn one string into the other. For example:

Here, two operations are required to convert “pqr” to “qpr”:

  • replace q: pqr -> ppr
  • replace p: ppr -> qpr

We’ll use the adist function on the Gender column in the following manner:
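(A minimal sketch; the vector name codes is an assumption:)

codes <- c("male", "female")
# Edit distance between every raw Gender value and the two codes
D <- adist(dat$Gender, codes)
D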

The first entry, male, is 0 replacements away from “male” and 2 replacements away from “female”.
The second entry, M, is 4 replacements away from “male” and 6 replacements away from “female”.
The third entry, Female, is 2 replacements away from “male” and 1 replacement away from “female”.

We can use the which.min() function on each row to get the column with the minimum distance from the codes.

The result contains the column number with the minimum distance from our codes. If the distance in column 1 is the minimum, we substitute the gender with codes[1]; if the distance in column 2 is the minimum, we substitute it with codes[2]. Following is the code:
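(A sketch of this substitution; the extra is.na check is an addition so that a missing gender stays NA:)

# For every row, pick the code with the smallest distance
ind <- apply(D, 1, function(d) if (all(is.na(d))) NA_integer_ else which.min(d))
dat$Gender <- codes[ind]
dat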

  • Normalize the data-types

Let’s retrieve the classes of each column in the data frame dat:

We will convert the Birth and Death columns to numeric data types using the transform function.
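(A minimal sketch covering both the class check and the conversion:)

sapply(dat, class)   # every column currently comes in as "character"
# Convert Birth and Death to numeric using transform
dat <- transform(dat, Birth = as.numeric(Birth), Death = as.numeric(Death))
sapply(dat, class)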

The data in the dat data frame is now clean, standardized, categorized and normalized.

Cleaning the Date Variable

When we talk about data cleaning, we cannot miss cleaning and normalizing the date variable. Dates are generally keyed in by different people in different formats, so we always face problems standardizing them.

In the following example, we’ll standardize and normalize the date variable:

We notice that the date variable is in a mixed format. We will first substitute “/” and “ ” in the date variable with “-”, as a first step towards standardizing the dates.
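(The original example vector is not shown, so the dates below are purely hypothetical; the gsub call is a sketch of the substitution just described:)

# Hypothetical mixed-format dates, for illustration only
date <- c("11/07/98", "23-jan-1987", "2001", "4 12 92", "07-08-1995")

# Standardize the separators: replace "/" and " " with "-"
date <- gsub("[/ ]", "-", date)
date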

In the code below, we make the assumption that the month cannot be in the first two or last two characters; in all cases the month comes in the middle, between the date and the year (or the year and the date).

Next, we will write a function to find the first two digits of each date (see the sketch after this list):

  • Split the date string and store the output in dt. Eg. “110798” to dt = ‘1’ ‘1’ ‘0’ ‘7’ ‘9’ ‘8’
  • If dt[2]=“-”, then first_2 numbers will be 0 and dt[1]
  • We will convert the first_2 characters to numeric type.
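(A sketch of such a function; the name first_two is an assumption:)

first_two <- function(d) {
  dt <- strsplit(d, split = "")[[1]]   # e.g. "110798" -> "1" "1" "0" "7" "9" "8"
  if (dt[2] == "-") {
    f2 <- paste0("0", dt[1])           # single-digit date at the start
  } else {
    f2 <- paste0(dt[1], dt[2])
  }
  as.numeric(f2)
}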

Next, we will write a function to find the last two digits of each date (see the sketch after this list):

  • Split the date string and store the output in dt. Eg. “110798” to dt = ‘1’ ‘1’ ‘0’ ‘7’ ‘9’ ‘8’
  • If the second-to-last element of dt is “-”, then last_2 will be 0 followed by the last element of dt
  • We will convert the last_2 characters to numeric type.
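(A sketch mirroring first_two; the name last_two is an assumption:)

last_two <- function(d) {
  dt <- strsplit(d, split = "")[[1]]
  n  <- length(dt)
  if (dt[n - 1] == "-") {
    l2 <- paste0("0", dt[n])           # single-digit date at the end
  } else {
    l2 <- paste0(dt[n - 1], dt[n])
  }
  as.numeric(l2)
}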

The middle two characters can be alphabetic or numeric. We will write a function to find the middle two characters, adopting the following steps (see the sketch after this list):

  • Split the date string and store the output in dt. Eg. “110798” to dt = ‘1’ ‘1’ ‘0’ ‘7’ ‘9’ ‘8’
  • If dt[3]=“-”, then, dt[4:5] are the middle characters. But, if dt[5] is a “-”, then, dt[4] (single digit) is the middle character.
  • If dt[3]!=“-”, then, dt[3:4] are the middle characters. But, if dt[4] is a “-”, then, dt[3] (single digit)is the middle character.
  • If the middle character is a single digit, then we need to prepend a 0 to it, to return a 2-digit middle character.
  • Later in the code, we will evaluate if the middle character is an alphabetic or numeric, and process accordingly.
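(A sketch of such a function; the name mid_chars is an assumption. It returns the middle characters as text, since they may be alphabetic:)

mid_chars <- function(d) {
  dt <- strsplit(d, split = "")[[1]]
  if (dt[3] == "-") {
    mid <- if (dt[5] == "-") dt[4] else paste0(dt[4], dt[5])
  } else {
    mid <- if (dt[4] == "-") dt[3] else paste0(dt[3], dt[4])
  }
  if (nchar(mid) == 1) mid <- paste0("0", mid)   # pad a single digit to two characters
  mid
}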

Next we will initialize variables before applying a for-loop to process and segregate the date, month and year for each date field.

  • df is a dummy data.frame object.
  • datf will be the final data frame object, comprising the fields dd, mm, year, the month format (i.e. numeric or alphabetical) and the concatenated date in the standard format.
  • snum refers to the serial number of each date in the vector “date”.
  • We keep the default date as 01, the default month as 01 and the default year as 1990 in case any of these fields is missing.
  • mon_format records whether the month is in numeric or alphabetical format.
  • The cat_date field contains the date, month and year concatenated.
  • format_date will contain cat_date converted to the “date” format.

Now, for each date field, we will find the first-two numbers, the last-two numbers and the middle characters. Then we’ll apply following rules:

  • If the number of characters in the date string is 4, then it represents the year.
  • If first_2 > 31, then the first two numbers represent the year.
  • If first_2 is 31 or less, then first_2 represents the date.
  • If the 2nd character is a “-”, then the 1st character represents a single-digit date.
  • If last_2 > 31, then the last two numbers represent the year.
  • If last_2 is 31 or less, then the last two numbers represent the date.
  • If the 2nd-last character is a “-”, then the last character represents a single-digit date.
  • If the middle characters are not alphabetical (i.e. they are numeric), then mid_num holds the numeric middle characters, and mon_format is labelled numeric in this case.
  • If mid_num > 31, then mid_num represents the year; if mid_num is 12 or less, it represents the month.
  • If the date contains an alphabetical character, then the first 3 letters represent the month, and mon_format is alphabetical in this case.
  • Next we concatenate the date in the format dd-mm-yy, where mm can be numeric (with mon_format “num”) or alphabetic (with mon_format “alpha”), and store it in the variable cat_date.
  • We change the format of cat_date to the “date” data type and store it in the variable formatted_date.

Following is the code:
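(The original code is not shown. The sketch below strings together the helper sketches above and the rules just listed; the expansion of two-digit years to 19xx is an added assumption so that as.Date can parse cat_date, and the %b format for alphabetic months assumes an English locale.)

datf <- data.frame()                        # final data frame, built up row by row

for (snum in seq_along(date)) {
  d  <- date[snum]
  dd <- "01"; mm <- "01"; yy <- "1990"      # defaults for missing fields
  mon_format <- "num"

  if (nchar(d) == 4) {                      # a bare 4-character field is the year
    yy <- d
  } else {
    f2  <- first_two(d)
    l2  <- last_two(d)
    mid <- mid_chars(d)

    if (f2 > 31) yy <- as.character(f2) else dd <- sprintf("%02d", as.integer(f2))
    if (l2 > 31) yy <- as.character(l2) else dd <- sprintf("%02d", as.integer(l2))

    if (grepl("[[:alpha:]]", d)) {          # alphabetic month, e.g. "jan"
      mm <- substr(gsub("[^[:alpha:]]", "", d), 1, 3)
      mon_format <- "alpha"
    } else {
      mid_num <- as.numeric(mid)            # numeric middle characters
      if (mid_num > 31) yy <- mid else if (mid_num <= 12) mm <- mid
    }
  }

  if (nchar(yy) == 2) yy <- paste0("19", yy)   # assume 19xx for two-digit years

  cat_date <- paste(dd, mm, yy, sep = "-")
  fmt      <- if (mon_format == "alpha") "%d-%b-%Y" else "%d-%m-%Y"

  df   <- data.frame(snum = snum, dd = dd, mm = mm, year = yy,
                     mon_format = mon_format, cat_date = cat_date,
                     format_date = as.Date(cat_date, format = fmt),
                     stringsAsFactors = FALSE)
  datf <- rbind(datf, df)
}

formatted_date <- datf$format_date
datf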

The vector formatted_date contains the date in the correct format.

The data.frame datf comprises all the entries segregated into date, month and year.

We note that the dates are now all in a standard format; the class of the formatted_date variable is “Date”.

I would like to conclude this article with:

“Bad Data leads to Bad Decisions”

“Good Data leads to Efficient Decisions”

Also read:

Data Exploration and Uni-Variate Analysis
Bi-Variate Analysis