Your First Step in Machine Learning with R

 

Machine Learning is the study of statistics and algorithms that help computers arrive at conclusions without explicit external guidance, relying solely on recurring trends and patterns in the available data.

Machine Learning uses several families of techniques to solve problems. They are as follows:

  • Supervised Learning – The data provided is labeled with the output variable. When the labels are categorical, classification algorithms are used; when they are continuous, regression algorithms are used.
  • Unsupervised Learning – The data provided is unlabeled, and clustering algorithms are used to identify distinct groups in the data.
  • Semi-Supervised Learning – Unlabeled data is grouped together and a new label is devised for the group. Facebook's facial recognition is a popular example: once the algorithm identifies that a face falls into a group of similar faces, it gets tagged with the respective person's name, even if that person has only been tagged two or three times before.
  • Reinforcement Learning – Algorithms learn from feedback from the environment they act upon, getting rewarded for correct predictions and penalized for incorrect ones.

For this introductory stage, we will start with supervised and unsupervised learning techniques. In fact, even highly skilled professionals with years of experience continue to research and deepen their knowledge of these techniques, since they are the most common and the most relevant to the problems that typically need solving.

 

These are the models which come under supervised learning:

Regression Models:

  • Linear Regression
  • Lasso and Ridge Regression
  • Decision Tree Regressor
  • Random Forest Regressor
  • Support Vector Regressor
  • Neural Networks

 

Classification Models:

  • Logistic Regression
  • Naive Bayes Classifier
  • Support Vector Classifier
  • Decision Trees
  • Boosted Trees
  • Random Forest
  • Neural Networks
  • Nearest Neighbor

All these models might feel overwhelming and hard to grasp at first, but with R's extensively diverse libraries and ease of implementation, one can implement these algorithms in just a few lines of code. All one needs is a conceptual understanding of each algorithm so that the model can be tweaked sensibly as per requirement. You can follow our Data Science course to build up your concepts from scratch to excellence.

Now let us explore this extraordinary language to enhance our machine learning experience!

 

What is R?

R is a language originally developed for scientists, mathematicians, and statisticians, allowing them to explore complex data and track recurring patterns and trends far faster than with traditional techniques. With the evolution of Data Science, R took a leap and began serving the corporate and IT sectors alongside academia. This happened when skilled statisticians and data experts migrated into IT as opportunities sprouted there to harness their skills in industry. They brought R along with them and set a milestone right where they stood.

 

Is R as Relevant as Python?

There is a constant debate as to whether Python is more competent and relevant than R. It must be made clear that this is mostly a fruitless discussion, since both languages are founding pillars of advanced Data Science and Machine Learning. R evolved from a mathematical perspective and Python from a programming perspective, but they have come to serve the same purpose of solving analytical problems, and have done so competently for several years. Choosing between them is simply a matter of personal comfort.

 

What are the Basic Operations in R with Respect to Machine Learning?

In order to solve machine learning problems, one has to explore a bit further than plain programming. R provides a series of libraries that need to be kept at hand while exploring varied data, in order to minimize obstacles during analysis.

R can perform the following operations on data-related structures:

 

Vectors:

Vectors are like lists or columns that store a series of values of the same type; they can be compared to arrays in general programming terms. Vectors can be created using the following code:

vector1 = c(93, 34, 6.7, 10)

R supports several operations on vectors:

  • Sequence Generation: sequence = c(1:100)
  • Appending: vector1 = c(vector1, 123)
  • Vector Addition:

v1 = c(1,2,3,4)

v2 = c(9,8,7,6)

v1+v2 returns (10,10,10,10)

  • Indexing: Indexing starts at 1 in R.

v1[1] will return 1

v1[c(1,3)] will return 1st and 3rd elements (1,3)

v1[1:3] will return 1st to 3rd elements (1,2,3)

 

Data Frames:

Data Frames are data structures that hold data in memory in a tabular, readable format. It is extremely easy to create data frames in R:

Vector1 = c(1, 2, 3, 4)

Vector2 = c('a', 'b', 'c', 'd')

df = data.frame(numbers = Vector1, chars = Vector2)

R supports the following operations on data frames (a short sketch follows this list):

  • The shape of the data frame (the number of rows and columns)
  • Unique value counts of columns
  • Addition of columns
  • Deletion of columns
  • Sorting based on given columns
  • Conditional selections
  • Discovery and deletion of duplicates
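As a quick illustration, here is a minimal sketch of these operations in base R, applied to the small df built above (its columns are numbers and chars; the added column squares is hypothetical):

dim(df)                                      # shape: number of rows and columns
table(df$chars)                              # unique value counts of a column
df$squares = df$numbers^2                    # add a column
df$squares = NULL                            # delete a column
df[order(df$numbers, decreasing = TRUE), ]   # sort by a column
subset(df, numbers > 2)                      # conditional selection
df[!duplicated(df), ]                        # discover and drop duplicate rows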

Now let us explore data at a fundamental level with R and walk through a simple end-to-end process, from reading the data to predicting results. For this purpose, we will use a supervised machine learning approach for the time being.

 

Step 1: Read Data

quality = read.csv('quality.csv')

You can collect this data from here. This data is for a classification task where the dependent variable, or the variable to be predicted, is 'PoorCare'. The dataset has 14 columns overall, including 'MemberID', which is the unique key identifier.

 

Step 2: Analyze the Dataset

Observe the different columns and their respective characteristics. This will help to formulate an initial idea about the data and help to devise useful techniques during the exploratory data analysis stage.

Code to get a summarized description of the data:

str(quality)

Since this dataset is simple and small, we will not be going into a detailed analysis.

 

Step 3: Dividing Data into Training and Testing Sets

Every machine learning algorithm has some data it learns from and another set on which it quizzes itself to test the validity of its learning. These sets are called the training and testing sets respectively. Here is how to create them:

install.packages("caTools")  # This library provides the essential functionality for splitting data

library(caTools)

set.seed(88)  # This fixes the starting point of the random number generator, so the split is reproducible

split = sample.split(quality$PoorCare, SplitRatio = 0.75)  # Randomly split the data

This means 75% of the data will be allocated to the training set and the remaining 25% to the testing set. The variable 'split' now holds a series of TRUE and FALSE values that have been randomly allocated to the records: TRUE maps a record to the training set and FALSE to the testing set. sample.split also keeps the proportion of the outcome variable roughly the same in both sets.

#Create training and testing sets

qualityTrain = subset(quality, split == TRUE)  # Selects all the records that were assigned the value TRUE by the split

qualityTest = subset(quality, split == FALSE)  # Selects all the records that were assigned the value FALSE by the split

 

Step 4: Modeling

Since our problem is a classification problem, we will start with a basic supervised learning algorithm for classification: logistic regression. The internal mathematics can be overlooked if need be, but as mentioned above, it is imperative to know the concept behind every model. Here is a simple overview of logistic regression:

Logistic regression is a linear model: it starts from the familiar linear equation y = mx + c. What differentiates it from ordinary regression is the sigmoid function, which squashes the linear output into a probability between 0 and 1 so that it can be mapped to binary classes. One can also play with the threshold to change the probability limit for classification. Multi-class classification is possible with logistic regression too, implemented with a technique called the one-vs-all method; that is out of scope for this article and will be taken up in a more advanced one.
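As a small illustration of the idea (separate from the model fitting below), the sigmoid and the thresholding step can be written in a couple of lines of R:

sigmoid = function(z) 1 / (1 + exp(-z))   # maps any real number to a probability in (0, 1)

sigmoid(0)                                # 0.5
sigmoid(2.5)                              # roughly 0.92
as.integer(sigmoid(2.5) >= 0.5)           # a 0.5 threshold maps this probability to class 1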

So let us train our first model!

# Logistic Regression Model

QualityLog = glm(PoorCare ~ OfficeVisits + Narcotics, data = qualityTrain, family = binomial)  # The family argument specifies which model to use; 'binomial' tells glm to fit a logistic regression model

Printing the fitted model (or its summary) echoes the call that produced it:

Call: glm(formula = PoorCare ~ OfficeVisits + Narcotics, family = binomial, data = qualityTrain)

 

Step 5: Prediction

After the model is trained on the training set, we need to see how it performs on similar data. For this, we will use the test set.

predictTest = predict(QualityLog, type = "response", newdata = qualityTest)

To evaluate the results, a simple matrix called the confusion matrix can be used. It counts records by their actual and predicted classes:

table(qualityTest$PoorCare, predictTest >= 0.3)

# 0.3 is the threshold on the predicted probability: if logistic regression gives a probability of 0.3 or higher, the record is predicted as class 1, otherwise as class 0.

    FALSE  TRUE
  0    19     5
  1     2     6

From this confusion matrix, a series of evaluation metrics can be calculated. Some of the primary ones are as follows:

  • Accuracy
  • Recall
  • Precision
  • F1 score

Based on the problem's demands, the appropriate evaluation metric needs to be selected so that the model can be optimized accordingly and the threshold value can be chosen.
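As a minimal sketch, and treating class 1 (poor care) as the positive class, these metrics can be computed directly from the confusion matrix above:

cm = table(qualityTest$PoorCare, predictTest >= 0.3)

TN = cm["0", "FALSE"]; FP = cm["0", "TRUE"]    # actual class 0
FN = cm["1", "FALSE"]; TP = cm["1", "TRUE"]    # actual class 1

accuracy = (TP + TN) / sum(cm)                 # (6 + 19) / 32, roughly 0.78
recall = TP / (TP + FN)                        # 6 / 8 = 0.75
precision = TP / (TP + FP)                     # 6 / 11, roughly 0.55
f1 = 2 * precision * recall / (precision + recall)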

 

Conclusion

This was a very simple pipeline of how a machine learning problem is solved, and it only offers a peek into the efficiency of R as a language. R has many more functionalities and libraries that can perform advanced tasks in a few simple lines of code. It not only helps programmers accomplish desired tasks easily but also improves the time and memory efficiency of the code, since R libraries are optimized by experts. Detailed and more in-depth discussions and explanations of various other models and their optimization techniques can be found in our Data Science courses and blogs!

Follow this link, if you are looking to learn data science online!

You can follow this link for our Big Data course, which is a step further into advanced data analysis and processing!

Additionally, if you are interested in learning Data Science, click here to start the Online Data Science Course

Furthermore, if you want to read more about data science, read our Data Science Blogs

Is Data Scraping One of the Most Demanded Skills in Data Science?


 

Imagine every living and non-living entity hooked to the internet, generating bits of information as long as connectivity is maintained. This stream of small bits is a vital sign showing whether the entity is active or inactive on the wide global network we call the Internet. All these vitals are recorded and stored, which makes the Internet an easily accessible hub hosting an overwhelming volume of data, generated at immense speed with every passing moment. This data can be extracted to study recurring patterns and trends, helping us derive advanced insights and useful predictions in any given domain, as long as the data is relevant.

This is where the concept of Data Scraping, or Web Scraping, crawls in and demands well-deserved attention.

 

What is Data Scraping?

 

Data Scraping is the act of automating the extraction of information from sources of unstructured data, such as websites, tables, and even visual and audio sources, in order to restructure it and make it ingestible for machine learning systems. These systems then absorb the structured data, analyze it, and provide intelligent insights on the same.

Previously, data scraping was not a very popular skill, since the internet was in its adolescent phase and there was seldom any innovation or ongoing research that suggested ways to utilize such unstructured data. However, with the evolution of technology, and especially of machine learning and data science over the last two decades, the internet has become almost the equivalent of the oil fields of the Arabian Peninsula.

The volume of data generated by the global network of the internet is overwhelming and concerns almost every major and minor sector that runs our modern world. If we leave this data lying around in its dormant state just for human eyes and deprive machines of it, we not only waste vast expanses of storage but also drain ourselves of highly promising opportunities in the near future. Major industries seem to have grasped this fact and are putting out job openings for people who have experience with data scraping and keep these coveted skills at hand.

 

Why is Data Scraping a Desirable Add-On for a Data Scientist?

 

When a Data Scientist is armed with web scraping skills, he or she can easily work around data roadblocks. For instance, if the data provided by a client is insufficient, the first step a competent data scientist can take is to look for relevant websites in the same domain and check whether valuable data can be retrieved from them. Only if the required data is not found should the client be approached for further data: that route extends timelines unnecessarily and prevents the client from having a smooth experience, and if the client again provides insufficient data, another similar loop begins and timelines stretch even further. The scraping-first approach, on the other hand, promises higher value addition both in terms of data (since the internet is usually loaded with rich data) and client experience.

Furthermore, one can assume that a Data Scientist possesses decent programming skills. With such skills, he or she can easily make use of either of the following:

  • A scraper written from scratch
  • Web scraping libraries

When data scraper code is written from scratch, there is room for extreme customization. When web scraping libraries, which are available in abundance, are used, a decent programmer can tweak the library code appropriately based on the domain data in order to optimize the results.

 

With good programming knowledge, even the following vital points can be taken care of:

  • Scaling
  • Time Optimization
  • Memory Optimization
  • Optimum Network Configuration

If data scraping skills are missing among its hires, an ambitious firm that plans to handle large-scale client data will have to enlist Data Service Companies that provide services in Data Handling and Machine Learning. However, if the firm hires a handful of Data Scientists or Engineers who can design web scraping code, or know how to tweak built-in data scraping libraries for optimum results, it will cost the firm much less in terms of investment in data gathering. With data scraping, it is also easy to impute missing data with the latest information instead of declaring the data in use irrelevant altogether. For instance, if there are a hundred records concerning the population of different countries and every feature is available except the historical population data, one can easily scrape the web for the year-wise population of a given country and fill in the relevant details with one piece of code.

Acquiring data scraping skills will, no doubt, increase an applicant’s overall value on a relative scale.

 

How Relevant is Data Scraping in the Present World?

 

When data is extracted through web scraping techniques, real-time data is added to your existing database. This helps to track current trends and also provides real-life, service-based data for research purposes. Once a firm deploys its product, the system has to process and analyze real-world data at every instant. Scraped data gives the machine an environment to learn from realistic information and helps it keep pace with real-time trends and patterns.

This also comes in very useful when firms need to monitor their deployed products and gather audience reviews and feedback from multiple sources. Scraping this information directly can give the firm a general idea of the product's performance and can also suggest ways to improve it.

 

So, What are the Most Useful Coding Languages for Data Scraping?

 

While choosing a coding language, it is important to keep in mind the features of the language in question. It must satisfy important criteria like flexibility, scalability, maintainability, database integration, and ease of use. Even though the speed of data scraping depends more on the speed of your network than on your code optimization, it is still advisable to optimize where you can. Here are a few coding languages that provide efficient data scraping libraries and are easy to implement:

  • Python

Python is an excellent language for data scraping and is, in fact, the most commonly recommended. It provides a host of libraries like Beautiful Soup and Scrapy for easy data extraction and takes care of formatting and scaling issues. Even people with minimal programming knowledge can implement these at a basic level (a minimal sketch follows after this list of languages).

  • C and C++

Both are high-performance languages (and C++ is additionally object-oriented), which means scraping code can be optimized very heavily. However, the cost of developing such code is much higher than in other languages, as it requires a great deal of specialized work.

  • Node.js

Node.js is good for small-scale projects and is especially recommended for crawling dynamic websites. However, its communication layer can become unstable, so it is not recommended for large-scale projects.

  • PHP

Even though it is possible to implement data scraping in PHP, it is the least recommended language for the task. This is because PHP lacks proper support for multi-threading, which can, in turn, lead to complications during scraping runs.
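Returning to the Python option above, here is a minimal sketch using the requests and Beautiful Soup libraries (the URL and the tags being extracted are placeholders, not tied to any particular site):

import requests
from bs4 import BeautifulSoup

url = "https://example.com"                 # hypothetical target page
response = requests.get(url, timeout=10)
response.raise_for_status()                 # fail early on HTTP errors

soup = BeautifulSoup(response.text, "html.parser")

# Pull every second-level heading and every link into plain Python lists
headings = [h.get_text(strip=True) for h in soup.find_all("h2")]
links = [a["href"] for a in soup.find_all("a", href=True)]

print(headings[:5])
print(links[:5])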

All this being said, it is important to understand that coding languages are just tools to reach a desired goal. If you are comfortable with a certain language, it is advisable to learn data scraping techniques in that very language, since your existing command over it gives you an upper hand.

 

What is the Intensity of Demand for Data Scraping as a Skill?

 

According to research conducted by KDnuggets.com on the professional network LinkedIn, 54 industries were found to require Web Scraping Specialists. The top five sectors were Computer Software, Information Technology and Services, the Financial sector, the Internet domain, and finally the Marketing and Advertising industry. Even non-technical jobs were found to have a high demand for data scraping specialists. This should not come as a shock to anybody, since the relevance of data has grown to such a level over the last decade that industries are trying to brace themselves for future shifts with as much data as possible. Data has indeed become the golden key to a secure and profitable future for all modern industries, and one needs to master the right skills to master the age of data we live in today.

 

Conclusion

 

As is clear from the above discussion, we can say without much doubt that data scraping has become one of the most sought-after and coveted skills of the 21st century. It is recommended not only for aspiring data scientists but also for other technical professionals to keep such skills handy, since they add value for both the employing firm and the employed individual.

If you are interested in learning core Data Science and subsidiary skills which come along with it, these links can help you with the same:

Follow this link, if you are looking to learn data science online!

You can follow this link for our Big Data course, which is a step further into advanced data analysis and processing!

Additionally, if you are interested in learning Data Science, click here to start the Online Data Science Course

Furthermore, if you want to read more about data science, read our Data Science Blogs

Univariate Analysis – A Key to the Mystery Behind Data!


 

Exploratory Data Analysis, or EDA, is the stage of data handling where the data is studied intensely and its many facets are explored. EDA helps to unfold the mystery behind data that might not make sense at first glance. With detailed analysis, the same data can provide remarkable results that help drive large-scale business decisions with excellent accuracy. This not only helps businesses evade likely pitfalls in the future but also helps them capitalize on the best opportunities that might emerge.

 

EDA employs three primary statistical techniques to go about this exploration:

  • Univariate Analysis
  • Bivariate Analysis
  • Multivariate Analysis

Univariate, as the name suggests, means 'one variable'; univariate analysis studies one variable at a time and helps us formulate conclusions such as the following:

  • Outlier detection
  • Concentrated points
  • Pattern recognition
  • Required transformations

 

In order to understand these points, we will take up the iris dataset, which ships with fundamental Python libraries like scikit-learn.

The iris dataset is very simple and consists of just 4 measurements of iris flowers: sepal length and width, and petal length and width (all in centimeters). The objective is to identify the type of iris plant a flower belongs to. There are three such categories: Iris Setosa, Iris Versicolour, and Iris Virginica.

So let’s dig right in then!

 

1. Description Based Analysis

 

The purpose of this stage is to get an initial idea about each variable independently. This helps to identify irregularities and probable patterns in the variables. Python's pandas library executes this task with extreme ease, using just one line of code.


Code:

from sklearn import datasets
import pandas as pd

data = datasets.load_iris()

The iris dataset is loaded in a dictionary-like format and thus needs to be converted to a data frame so that the pandas library can be leveraged.

We will store the independent variables in ‘X’. ‘data’ will be extracted and converted as follows:

X = data['data']  # extract

X = pd.DataFrame(X)  # convert

On conversion to the required format, we just need to run the following code to get the desired information:

X.describe()  # One simple line to get the entire description of every column

Output:

[Output of X.describe(): count, mean, std, min, 25%, 50%, 75%, and max for each of the four columns]

 

  • Count refers to the number of records under each column.
  • Mean gives the average of all the samples combined. It is important to note that the mean is highly affected by outliers and skewed data, and we will soon see how to detect skewed data using just the information above.
  • Std, or standard deviation, is a measure of the "spread" of the data. With its help we can tell whether a variable's values are packed closely around the mean or distributed over a wide range.
  • Min and Max give the minimum and maximum values of the columns across all records/samples.

 

25%, 50%, and 75% constitute the most interesting part of the description. These are percentiles: the values below which the respective percentage of records fall. They can be interpreted in the following way:

  1. 25% of the flowers have sepal length equal to or less than 5.1 cm.
  2. 50% of the flowers have a sepal width equal to or less than 3.0 cm and so on.

50% is also the median of the variable: it represents the central value of the data. For example, if a variable has values between 1 and 100 and its median is 80, it means that the data points lean towards higher values; in simpler terms, half of the data points have values greater than or equal to 80.

By comparing the mean and the median, one can judge whether the data is skewed: if the difference between them is large, the distribution is skewed; if it is negligible, the distribution is close to symmetric (roughly normal).
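As a quick, hedged check on the X data frame built above, the gap between the mean and the median can be computed directly (dividing by the standard deviation just makes the columns comparable; this is an illustrative shortcut, not a formal skewness statistic):

skew_hint = (X.mean() - X.median()) / X.std()
print(skew_hint)   # values far from zero hint at a skewed column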

These options work well with continuous variables like the ones mentioned above. However, for categorical variables which have distinct values, such a description seldom makes any sense. For instance, the mean of a categorical variable would barely be of any value.

 

For such cases, we use yet another pandas operation called value_counts(). Its usefulness can be demonstrated with our target variable y, which was extracted in the following manner:

y = data['target']  # extract

This is done since the iris dataset is in dictionary format and stores the target variable under the key named 'target'. After the extraction is complete, convert the data into a pandas Series. This must be done because value_counts() is a method of pandas Series objects.

y = pd.Series(y)  # convert

y.value_counts()

On applying the function, we get the following result:

Output:

2    50

1    50

0    50

dtype: int64

 

This means that the categories 0, 1, and 2 each have an equal count of 50. Equal representation means there will be minimal bias during training. For example, if the data had far more records of one particular category 'A', the trained model would learn that category 'A' is the most frequent and would tend to predict records as 'A'. When unequal representation is found, one of the following must be done:

  • Gather more data
  • Generate synthetic samples (oversample the under-represented class)
  • Eliminate samples (undersample the over-represented class)

Now let us move on to visual techniques to analyze the same data, but reveal further hidden patterns!

 

2.  Visualization Based Analysis

 

Even though a descriptive analysis is highly informative, it does not quite reveal the patterns that may exist within a variable. From the difference between the mean and the median we may be able to detect skewed data, but we cannot pinpoint the exact reason for the skewness. This is where visualizations come into the picture and help us deal with the myriad patterns that arise in the variables independently.

Let's start by observing the frequency distribution of sepal width in our dataset.

[Histogram: frequency distribution of sepal width]

Std: 0.435
Mean: 3.057
Median (50%): 3.000
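A plot like this can be reproduced with a short matplotlib sketch (assuming X is the feature data frame built earlier; renaming its columns so the labels match the plots is an assumption made here):

import matplotlib.pyplot as plt

X.columns = ["sepal_length", "sepal_width", "petal_length", "petal_width"]

col = X["sepal_width"]
plt.hist(col, bins=20, edgecolor="white")
plt.axvline(col.median(), color="red", linestyle="--", label="median")
plt.axvline(col.mean(), color="black", linestyle="--", label="mean")
plt.xlabel("sepal width (cm)")
plt.ylabel("frequency")
plt.legend()
plt.show()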

 

The red dashed line represents the median and the black dashed line represents the mean. As you may have observed, the standard deviation of this variable is the smallest of the four. Also, the difference between the mean and the median is not significant. This means that the data points are concentrated around the median and the distribution is not skewed. In other words, it is a nearly Gaussian (or normal) distribution. This is what a Gaussian distribution looks like:

[Figure: Normal distribution generated from random data]

 

The data for the above distribution is generated with the random.normal function of the numpy library (one of the Python libraries for handling arrays and lists).
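As a minimal sketch, such a curve can be produced with numpy and plotted with matplotlib:

import numpy as np
import matplotlib.pyplot as plt

samples = np.random.normal(loc=0.0, scale=1.0, size=10000)   # mean 0, standard deviation 1
plt.hist(samples, bins=50)
plt.show()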

It should always be one's aim to get close to a Gaussian distribution before applying modeling algorithms. This is because the most commonly encountered distribution in real-life scenarios is the Gaussian curve, and algorithms have largely been designed over the years to cater to this distribution, often assuming beforehand that the data follows a Gaussian trend. The way to handle data that does not is to transform its distribution accordingly.

Let us visualize the other variables and understand what the distributions mean.

Sepal Length:

[Histogram: frequency distribution of sepal length]

Std: 0.828
Mean: 5.843
Median: 5.80

 

As is visible, the distribution of sepal length spans a wide range of values (4.3 cm to 7.9 cm), and thus its standard deviation is higher than that of sepal width. Also, the mean and the median differ only insignificantly, which indicates that the data is not skewed. Here, however, visualization comes to great use: we can clearly see that the distribution is not perfectly Gaussian, since the tails of the distribution hold ample data, whereas in a Gaussian distribution only about 5% of the data lies in the tail regions. From this visualization, though, we can still be confident that the data is not skewed.

Petal Length:

[Histogram: frequency distribution of petal length]

Std: 1.765
Mean: 3.758
Median: 4.350

This is a very interesting graph, since we find an unexpected gap in the distribution. This can either mean that the data is missing or that the feature does not occur for those values; in other words, that the petal lengths of iris plants never fall in the range 2 to 3 cm. The mean is thus, justifiably, pulled towards the left, while the median shows the central value of the variable, which lies towards the right, since most of the data points are concentrated in a roughly Gaussian cluster on the right. If you move on to the next visual and observe the pattern of petal width, you will come across an even more interesting revelation.

 

Petal Width:

[Histogram: frequency distribution of petal width]

Std: 0.762
Mean: 1.199
Median: 1.3

In the case of petal width, the values in roughly the same region as the gap in the petal length plot are again sparse relative to the rest of the frequency distribution. Here the values in the range 0.5 cm to 1.0 cm are almost, but not completely, absent. A recurring dip in the same region of two different frequency distributions indicates that data is missing there, and also confirms that petals of the missing sizes do occur in nature but went unrecorded.

This conclusion can be followed up with further data gathering, or one can simply continue working with the limited data at hand, since it is not always possible to gather data representing every element of a given subject.

In summary, using histograms we came to know about the following:

  • Data distribution/pattern
  • Skewed distribution or not
  • Missing data

Now, with the help of another univariate analysis tool, we can find out whether our data contains anomalies, or outliers. Outliers are data points which do not follow the usual pattern and behave unpredictably. Let us find out how to spot outliers with the help of simple visualizations!

We will use a plot called the box plot to identify the features/columns which contain outliers.

[Figure: Box plot for the iris dataset]

 

The box plot is a visual representation of five important aspects of a variable, namely:

  • Minimum
  • Lower Quartile
  • Median
  • Upper Quartile
  • Maximum

As can be seen from the above graph, each variable is divided into four parts using three horizontal lines, and each section contains approximately 25% of the data. The area enclosed by the box holds the central 50% of the data, and the horizontal green line inside it represents the median. A point is identified as an outlier if it is spotted beyond the maximum and minimum lines (the whiskers).
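A box plot like the one above can be drawn directly from the feature data frame in a couple of lines (a minimal sketch, assuming the X data frame with the renamed columns from the earlier histogram sketch):

import matplotlib.pyplot as plt

X.boxplot()          # one box per column
plt.ylabel("cm")
plt.show()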

From the plot, we can say that sepal_width has outlying points. These points can be handled in two ways:

  • Discard the outliers
  • Study the outliers separately

Sometimes outliers are imperative bits of information, especially in cases where anomaly detection is a major concern. For instance, during the detection of fraudulent credit card behavior, detection of outliers is all that matters.

 

Conclusion

 

Overall, EDA is a very important step and requires plenty of creativity and domain knowledge to dig up the maximum number of patterns from the available data. Keep following this space to learn more about bivariate and multivariate analysis techniques. It only gets more interesting from here!

Follow this link, if you are looking to learn data science online!

You can follow this link for our Big Data course, which is a step further into advanced data analysis and processing!

Additionally, if you are interested in learning Data Science, click here to start the Online Data Science Course

Furthermore, if you want to read more about data science, read our Data Science Blogs