The Upcoming Revolution in Predictive Analytics (And Data Science)

The Next Generation of Data Science

Quite literally, I am stunned.

I have just completed my survey of data (from articles, blogs, white papers, university websites, curated tech websites, and research papers all available online) about predictive analytics.

And I have a reason to believe that we are standing on the brink of a revolution that will transform everything we know about data science and predictive analytics.

But before we go there, you need to know: why the hype about predictive analytics? What is predictive analytics?

Let’s cover that first.

Importance of Predictive Analytics


 

According to Wikipedia:

Predictive analytics is an area of statistics that deals with extracting information from data and using it to predict trends and behavior patterns. The enhancement of predictive web analytics calculates statistical probabilities of future events online. Predictive analytics statistical techniques include data modeling, machine learning, AI, deep learning algorithms and data mining.

Predictive analytics is why every business wants data scientists. Analytics is not just about answering questions; it is also about finding the right questions to answer. The applications of this field are many; nearly every human endeavor appears in the following excerpt from Wikipedia listing where predictive analytics is used:

From Wikipedia:

Predictive analytics is used in actuarial science, marketing, financial services, insurance, telecommunications, retail, travel, mobility, healthcare, child protection, pharmaceuticals, capacity planning, social networking, and a multitude of numerous other fields ranging from the military to online shopping websites, Internet of Things (IoT), and advertising.

In a very real sense, predictive analytics means applying data science models to given scenarios that forecast or generate a score of the likelihood of an event occurring. The data generated today is so voluminous that experts estimate that less than 1% is actually used for analysis, optimization, and prediction. In the case of Big Data, that estimate falls to 0.01% or less.

Common Example Use-Cases of Predictive Analytics

 

Components of Predictive Analytics


 

A skilled data scientist can use prediction scores to optimize and improve a company's profit margin significantly. For example:

  • If you buy a book for children on the Amazon website, the website identifies that you have an interest in that author and that genre and shows you more books similar to the one you just browsed or purchased.
  • YouTube has a very similar algorithm behind its video suggestions. The site (or rather, the analytics algorithms running on the site) identifies more videos that you would enjoy watching based on what you are watching now. In ML, this is called a recommender system (a minimal sketch appears after this list).
  • Netflix is another famous example where recommender systems play a massive role in the 'shows you may like' section, and its recommendations are well known for their accuracy in most cases.
  • The Google AdWords text ads displayed at the top of every Google search are another example of a machine learning algorithm whose usage can be classified under predictive analytics.
  • Department stores often optimize product placement so that common groupings are easy to find. For example, fresh fruits and vegetables would be close to the health food supplements and diet-control foods that weight-watchers commonly buy. Coffee/tea/milk and biscuits/rusks make another possible grouping. You might think this is trivial, but department stores have recorded up to a 20% increase in sales when such optimal grouping and placement was performed, again through a form of analytics.
  • Bank loans and home loans are often approved based on a customer's credit score. How is that calculated? An expert system of rules, classification, and extrapolation of existing patterns: you guessed it, predictive analytics.
  • Allocating budgets in a company to maximize the total profit in the upcoming year is predictive analytics. This is simple at a startup, but imagine the situation in a company like Google, with thousands of departments and employees, all clamoring for funding. Predictive Analytics is the way to go in this case as well.
  • IoT (Internet of Things) smart devices are one of the most promising applications of predictive analytics. It will not be too long before sensor data from aircraft parts is fed to predictive analytics that tells operators a part has a high likelihood of failure. Ditto for cars, refrigerators, military equipment, military infrastructure and aircraft, and anything else that uses IoT (which is nearly every embedded processing device available in the 21st century).
  • Fraud detection, malware detection, hacker intrusion detection, cryptocurrency hacking, and cryptocurrency theft are all ideal use cases for predictive analytics. Here, the ML system detects anomalous behavior on an interface used by hackers and cybercriminals, identifying when a theft or fraud is taking place, has taken place, or will take place in the future. Obviously, this is a dream come true for law enforcement agencies.
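
To make the recommender idea above concrete, here is a minimal item-based recommender sketch in Python. It is illustrative only, not the actual algorithm used by Amazon, YouTube, or Netflix; the interaction matrix and item indices are made up.

```python
# A minimal item-based recommender sketch (illustrative only).
# It recommends items whose purchase patterns are most similar to a given item.
import numpy as np

# Toy user-item interaction matrix: rows = users, columns = items (1 = purchased)
interactions = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 1, 0],
], dtype=float)

def recommend_similar(item_idx, interactions, top_n=2):
    """Return indices of the top_n items most similar to item_idx (cosine similarity)."""
    item_vectors = interactions.T                        # one row per item
    target = item_vectors[item_idx]
    norms = np.linalg.norm(item_vectors, axis=1) * np.linalg.norm(target)
    sims = item_vectors @ target / np.where(norms == 0, 1, norms)
    sims[item_idx] = -1                                  # exclude the item itself
    return np.argsort(sims)[::-1][:top_n]

print(recommend_similar(0, interactions))  # items most often bought alongside item 0
```

Real recommender systems add implicit feedback, matrix factorization, and ranking models on top of this basic similarity idea.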

So now you know what predictive analytics is and what it can do. Now let’s come to the revolutionary new technology.

Meet Endor – The ‘Social Physics’ Phenomenon

 


End-to-End Predictive Analytics Product – for non-tech users!

 

In a remarkable first, a research team at MIT has created a new science called social physics, or sociophysics. Much about this field is deliberately kept confidential because of its massive disruptive power for data science, especially predictive analytics. The only requirement of this science is that the system being modeled has to be a human-interaction-based environment. To keep the discussion simple, we shall explain the entire system in points.

  • All systems in which human beings are involved follow scientific laws.
  • These laws have been identified, verified experimentally and derived scientifically.
  • By laws we mean equations, such as (just an example) Newton's second law: F = ma (force equals mass times acceleration).
  • These equations establish laws of invariance – that are the same regardless of which human-interaction system is being modeled.
  • Hence the term social physics: like Maxwell's laws of electromagnetism or Newton's theory of gravitation, these laws are a new discovery and are universal as long as the agents interacting in the system are humans.
  • The invariance and universality of these laws have two important consequences:
    1. The need for large amounts of data disappears – Because of the laws, many of the predictive capacities of the model can be obtained with a minimal amount of data. Hence small companies now have the power to use analytics that was mostly used by the FAMGA (Facebook, Amazon, Microsoft, Google, Apple) set of companies since they were the only ones with the money to maintain Big Data warehouses and data lakes.
    2. There is no need for data cleaning. Since the model being used is canonical, it is independent of data problems like outliers, missing data, nonsense data, unavailable data, and data corruption. This is due to the orthogonality of the model (a Knowledge Sphere) being constructed and the data available.
  • Performance is superior to deep learning approaches built with tools like Google TensorFlow, PyTorch, and scikit-learn (whether used from Python, R, or Julia). The model has consistently outscored these approaches in Kaggle competitions, without any data pre-processing, preparation, or cleansing!
  • Because the data is orthogonal to its interpretation and manipulation, encrypted data can be used as-is; there is no need to decrypt it to perform a data science task or experiment. This is significant because a model that works even on encrypted data opens the door to using blockchain technology and blockchain data in standard data science tasks. Furthermore, it allows hashing techniques to be used to hide confidential data and perform the data mining task without any knowledge of what the data indicates (a simple illustration follows this list).
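
As a simple illustration of the hashing idea (and only the general idea, not Endor's implementation), confidential identifiers can be replaced by one-way hashes before any analysis. The salt value and field names below are hypothetical.

```python
# Illustrative only: pseudonymizing a confidential identifier before analysis,
# so the downstream pipeline never sees the raw value.
import hashlib

SALT = "replace-with-a-secret-salt"  # hypothetical; keep secret and constant per dataset

def pseudonymize(user_id: str) -> str:
    """Return a stable, one-way hash of a user identifier."""
    return hashlib.sha256((SALT + user_id).encode("utf-8")).hexdigest()

records = [("alice@example.com", 120.50), ("bob@example.com", 87.25)]
anonymized = [(pseudonymize(uid), amount) for uid, amount in records]
print(anonymized)  # analysis can proceed on hashed IDs without knowing who they belong to
```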

Are You Serious?


That's a valid question, given these claims! And that is why I recommend everyone with even the slightest interest in data science to visit, completely read, and explore the following links:

  1. https://www.endor.com
  2. https://www.endor.com/white-paper
  3. http://socialphysics.media.mit.edu/
  4. https://en.wikipedia.org/wiki/Social_physics

Now when I say completely read, I mean completely read. Visit every section and read every bit of text available on the sites above. You will soon understand why this is such a revolutionary idea.

  1. https://ssir.org/book_reviews/entry/going_with_the_idea_flow#
  2. https://www.datanami.com/2014/05/21/social-physics-harnesses-big-data-predict-human-behavior/

These links above are articles about the social physics book and about the science of sociophysics in general.

For more details, please visit the following articles on Medium. These further document Endor.coin, a cryptocurrency built around the idea of sharing data with the public and getting paid for the system's usage of your data. Preferably read all of them; if busy, at least read Article No. 1.

  1. https://medium.com/endor/ama-session-with-prof-alex-sandy-pentland
  2. https://medium.com/endor/endor-token-distribution
  3. https://medium.com/endor/https-medium-com-endor-paradigm-shift-ai-predictive-analytics
  4. https://medium.com/endor/unleash-the-power-of-your-data

Operation of the Endor System

For every data set, the first action performed by the Endor Analytics Platform is clustering, also popularly known as automatic classification. Endor constructs what is known as a Knowledge Sphere, a canonical representation of the data set that can be constructed with as little as 10% of the data volume needed for the same project using deep learning.

Creation of the Knowledge Sphere takes 1-4 hours for a billion records dataset (which is pretty standard these days).
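
Endor's Knowledge Sphere construction is proprietary, so it cannot be reproduced here. Purely as a point of reference for what plain automatic clustering looks like in open-source tooling (and not as Endor's method), here is a scikit-learn sketch on made-up behavioral features:

```python
# Generic automatic clustering for comparison only; the Knowledge Sphere is NOT k-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Toy behavioral features, e.g. trades per day and average trade size (hypothetical)
X = rng.normal(size=(1000, 2))

model = KMeans(n_clusters=5, n_init=10, random_state=42).fit(X)
print(np.bincount(model.labels_))  # size of each behavioral cluster
```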

An explanation of the mathematics behind social physics is beyond our scope, but I will show how the data science process changed when the Endor platform was compared to a deep learning system built to solve the same problem the traditional way (by an expert data scientist on a six-figure salary).

An edited excerpt from the Endor white paper (linked above):

From Appendix A: Social Physics Explained, Section 3.1, pages 28-34 (some material not included):

Prediction Demonstration using the Endor System:

Data:
The data that was used in this example originated from a retail financial investment platform
and contained the entire investment transactions of members of an investment community.
The data was anonymized and made public for research purposes at MIT (the data can be
shared upon request).

 

Summary of the dataset:
– 7 days of data
– 3,719,023 rows
– 178,266 unique users

 

Automatic Clusters Extraction:
Upon first analysis of the data, the Endor system detects and extracts "behavioral clusters": groups of users whose data dynamics violate the mathematical invariances of Social Physics. These clusters are based on all the columns of the data, but are limited to the last 7 days, as this is the data that was provided to the system as input.

 

Behavioural Clusters Summary

Number of clusters: 268,218
Cluster sizes: 62 (mean), 15 (median), 52,508 (max), 5 (min)
Clusters per user: 164 (mean), 118 (median), 703 (max), 2 (min)
Users in clusters: 102,770 out of the 178,266 users
Records per user: 6 (median), 33 (mean); applies only to users in clusters

 

Prediction Queries
The following prediction queries were defined:
1. New users to become “whales”: users who joined in the last 2 weeks that will generate at least
$500 in commission in the next 90 days
2. Reducing activity : users who were active in the last week that will reduce activity by 50% in the
next 30 days (but will not churn, and will still continue trading)
3. Churn in “whales”: currently active “whales” (as defined by their activity during the last 90 days),
who were active in the past week, to become inactive for the next 30 days
4. Will trade in Apple shares for the first time: users who had never invested in Apple shares and would buy them for the first time in the coming 30 days
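
The white paper does not publish code for these queries. Purely as an illustration of what query 1 ("new whales") could look like as labels on a transaction table, here is a pandas sketch with hypothetical column names:

```python
# Illustrative only (not from the Endor white paper): labelling "new whales",
# i.e. recent joiners whose commission reaches $500. Column names are hypothetical;
# in a real setup the commission window would be the 90 days after the prediction date.
import pandas as pd

tx = pd.DataFrame({
    "user_id":         [1, 1, 2, 3, 3, 3],
    "joined_days_ago": [5, 5, 40, 10, 10, 10],
    "commission":      [300.0, 250.0, 20.0, 100.0, 50.0, 10.0],
})

new_users = tx[tx["joined_days_ago"] <= 14]                      # joined in the last 2 weeks
commission_per_user = new_users.groupby("user_id")["commission"].sum()
labels = (commission_per_user >= 500).astype(int)                # 1 = "whale"
print(labels)
```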

 

Knowledge Sphere Manifestation of Queries
It is again important to note that the definition of the search queries is completely orthogonal to the
extraction of behavioral clusters and the generation of the Knowledge Sphere, which was done
independently of the queries definition.

Therefore, it is interesting to analyze the manifestation of the queries in the clusters detected by the system: Do the clusters contain information that is relevant to the definition of the queries, despite the fact that:

1. The clusters were extracted in a fully automatic way, using no semantic information about the
data, and –

2. The queries were defined after the clusters were extracted, and did not affect this process.

This analysis is done by measuring the number of clusters that contain a very high concentration of
“samples”; In other words, by looking for clusters that contain “many more examples than statistically
expected”.

A high number of such clusters (provided that it is significantly higher than the number obtained when randomly sampling the same population) demonstrates the ability of this process to extract valuable, relevant semantic insights in a fully automatic way.
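
The white paper does not spell out its exact test statistic here. One standard way to check whether a cluster contains "many more examples than statistically expected" is a hypergeometric (enrichment) test; the counts below are hypothetical:

```python
# Illustrative only: over-representation of query-positive users in one cluster.
from scipy.stats import hypergeom

population = 102_770        # users assigned to clusters
positives = 1_200           # users matching a query, e.g. future "whales" (hypothetical)
cluster_size = 60
positives_in_cluster = 9

# P(observing >= positives_in_cluster positives in a random sample of cluster_size users)
p_value = hypergeom.sf(positives_in_cluster - 1, population, positives, cluster_size)
print(f"p-value = {p_value:.2e}")
```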

 

Comparison to Google TensorFlow

In this section a comparison between prediction process of the Endor system and Google’s
TensorFlow is presented. It is important to note that TensorFlow, like any other Deep Learning library,
faces some difficulties when dealing with data similar to the one under discussion:

1. An extremely uneven distribution of the number of records per user requires some canonization
of the data, which in turn requires:

2. Some manual work, done by an individual who has at least some understanding of data
science.

3. Some understanding of the semantics of the data, that requires an investment of time, as
well as access to the owner or provider of the data

4. A single-class classification, using an extremely uneven distribution of positive vs. negative
samples, tends to lead to the overfitting of the results and require some non-trivial maneuvering.

This again necessitates the involvement of an expert in Deep Learning (unlike the Endor system, which can be used by Business, Product, or Marketing experts with no prerequisites in Machine Learning or Data Science).

 

Traditional Methods

An expert in Deep Learning spent 2 weeks crafting a solution based on TensorFlow, having sufficient expertise to handle the data. The solution that was created used the following auxiliary techniques:

1. Trimming the data sequence to 200 records per customer, and padding the streams for users who have fewer than 200 records with neutral records.

2. Creating 200 training sets, each having 1,000 customers (50% known positive labels, 50% unknown), and then using these training sets to train the model.

3. Using sequence classification (an RNN with 128 LSTM units) with 2 output neurons (positive, negative), with the overall result being the difference between the scores of the two (a minimal sketch of this architecture follows).
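
The white paper does not publish the actual TensorFlow code, so input shapes, feature counts, and preprocessing below are assumptions; this Keras sketch only mirrors the architecture described in point 3:

```python
# A minimal sketch of the described architecture: sequence classifier,
# 128 LSTM units, 2 output neurons, score = difference of the two outputs.
import numpy as np
import tensorflow as tf

SEQ_LEN, N_FEATURES = 200, 8          # 200 records per customer (padded/trimmed); 8 features assumed

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(2, activation="softmax"),   # (positive, negative)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Toy training set: 1,000 customers, half labelled positive (stand-in data only)
X = np.random.rand(1000, SEQ_LEN, N_FEATURES).astype("float32")
y = np.concatenate([np.ones(500), np.zeros(500)]).astype("int32")
model.fit(X, y, epochs=1, batch_size=32, verbose=0)

probs = model.predict(X[:5], verbose=0)
score = probs[:, 0] - probs[:, 1]     # overall result = difference between the two scores
print(score)
```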

Observations (all statistics are available in the white paper, and they are stunning)

1. Endor outperforms TensorFlow in 3 out of 4 queries, and achieves the same accuracy in the 4th.

2. The superiority of Endor becomes increasingly evident as the task becomes "more difficult", i.e. when focusing on the top-100 rather than the top-500.

3. There is a clear distinction between the "less dynamic" queries (becoming a whale, churn, reduced activity), for which static signals should likely be easier to detect, and the "who will trade in Apple for the first time" query, which is (a) more dynamic and (b) has a very low baseline, such that for the latter, Endor is 10x more accurate!

4. As previously mentioned, the TensorFlow results illustrated here employ 2 weeks of manual improvements by a Deep Learning expert, whereas the Endor results are 100% automatic and the entire prediction process in Endor took 4 hours.

Clearly, the path going forward for predictive analytics and data science is Endor, Endor, and Endor again!

Predictions for the Future

Personally, one thing has me sold: the robustness of the Endor system in handling noise and missing data. Earlier, this was the biggest bane of the data scientist in most companies (when data engineers were not available). 90% of a professional data scientist's time would go into data cleaning and preprocessing, since our ML models were acutely sensitive to noise. This is the first solution that has eliminated this 'grunt'-level work from data science completely.

The second prediction: the Endor system works on principles of human interaction dynamics. My intuition tells me that data collected at random has its own dynamical systems, which appear clearly to experts in complexity theory. I am completely certain that just as this platform built a prediction tool on the dynamical laws of human society, data in general has its own laws of invariance. And the first person to identify these laws and build another Endor-style platform on them will be at the top of the data science pyramid: the alpha unicorn.

Final prediction: democratizing data science means that six-figure-salary data scientists are no longer a requirement. The success of the Endor platform means that anyone can perform advanced data science without resorting to TensorFlow, Python, R, Anaconda, etc. This platform will completely disrupt the data science technology sector. The first people to master it, and to build on it to formalize the rules of invariance for general data dynamics, will surely make a killing.

It is an exciting time to be a data science researcher!

Data science is a broad field, and mastering all of these skills requires learning quite a few things.

Dimensionless has several resources to get started with.

To Learn Data Science, Get Data Science Training in Pune and Mumbai from Dimensionless Technologies.

To learn more about analytics, be sure to have a look at the following articles on this blog:

Machine Learning for Transactional Analytics

and

Text Analytics and its applications

Enjoy data science!

7 Technical Concepts Every Data Science Beginner Should Know


Welcome to Data Science!

 

So you want to learn data science but you don't know where to start? Or you are a beginner and you want to learn the basic concepts? Welcome to your new career and your new life! You will discover a lot of things on your journey to becoming a data scientist and being part of a new revolution. I am a firm believer that you can learn data science and become a data scientist regardless of your age, your background, your current knowledge level, your gender, and your current position in life. I believe, from experience, that anyone can learn anything at any stage in their lives. What is required is just determination, persistence, and a tireless commitment to hard work. Nothing else matters as far as learning new things, or learning data science, is concerned. Your commitment, persistence, and your investment of available daily time are enough.

I hope you understood my statement. Anyone can learn data science if they have the right motivation. In fact, I believe anyone can learn anything at any stage in their lives if they invest enough time, effort, and hard work into it, alongside their current occupation. From my experience, I strongly recommend that you continue your day job and work on data science as a side hustle, because of the hard work that will be involved. Your commitment is more important than your current life situation. Carrying on a full-time job and working on data science part-time is the best way to go if you want to learn in the best possible manner.

 

Technical Concepts of Data Science

So what are the important concepts of data science that you should know as a beginner? They are, in order of sequential learning, the following:

  1. Python Programming
  2. R Programming
  3. Statistics & Probability
  4. Linear Algebra
  5. Data Preparation and Data ETL*
  6. Machine Learning with Python and R
  7. Data Visualization and Summary

*Extraction, Transformation, and Loading

Now if you were to look at the above list and go to a library, you would most likely come back with 9-10 books at an average of 1,000 pages each. Even if you could speed-read, 10,000 pages is a lot to get through. I could list the best books for each topic in this post, but even the most seasoned reader would balk at 10,000 pages. And who reads books these days? So what I am going to give you is a distilled extract of each topic. Keep in mind, however, that every topic given above could be a series of blog posts in its own right; these short paragraphs are just a tiny taste of each topic, and there is an ocean of depth involved in every one of them. You might ask: if that is the case, how can everybody be a possible candidate for a data scientist role? Two words: persistence and motivation. With the right amount of these two characteristics, anyone can be anything they want to be.

 

1) Python Programming:

Python is one of the most popular programming languages in the world. It is the ABC of data science, because Python is the language every beginner starts with in data science. It is used for nearly any purpose since it is so amazingly versatile: web applications and websites with Django, microservices with Flask, general programming projects with the standard library and packages from PyPI, GUIs with PyQt5 or Tkinter, and interoperability with Java (Jython), C (Cython), and nearly every other programming language available today.

Of course, Python is also the first language used for data science, with the standard stack of scikit-learn (machine learning), pandas (data manipulation), matplotlib and seaborn (visualization), and numpy (vectorized computation). Nowadays, the most common distribution used is Anaconda, available from www.anaconda.com (at the time of writing, version 2018.12 of Anaconda Distribution 5). To learn more about Python, I strongly recommend the following books: Head First Python and the Python Cookbook. A tiny example of the standard stack follows.
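
Here is a small taste of that stack in action; the data is made up, and the plot call assumes you are running in an environment with a display.

```python
# A tiny taste of the standard Python data science stack mentioned above.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({"x": np.arange(10), "y": 2 * np.arange(10) + np.random.randn(10)})

model = LinearRegression().fit(df[["x"]], df["y"])    # fit a simple model
print("slope:", model.coef_[0])

df.plot.scatter(x="x", y="y")                          # quick visualization
plt.show()
```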

 

2) R Programming

R is The Best Language for statistical needs since it is a language designed by statisticians, for statisticians. If you know statistics and mathematics well, you will enjoy programming in R. The language gives you the best support available for every probability distribution, statistics functions, mathematical functions, plotting, visualization, interoperability, and even machine learning and AI. In fact, everything that you can do in Python can be done in R. R is the second most popular language for data science in the world, second only to Python. R has a rich ecosystem for every data science requirement and is the favorite language of academicians and researchers in the academic domain.

Learning Python is not enough to be a professional data scientist; you need to know R as well. A good book to start with is R for Data Science, available on Amazon at a very reasonable price. Some of the most popular packages in R that you need to know are ggplot2, threejs, DT (tables), networkD3, and leaflet for visualization; dplyr and tidyr for data manipulation; shiny and R Markdown for reporting; parallel, Rcpp, and data.table for high-performance computing; and caret, glmnet, and randomForest for machine learning.

 

3)  Statistics and Probability

This is the bread and butter of every data scientist. The best programming skills in the world will be useless without knowledge of statistics. You need to master statistics, especially the practical knowledge used in scientific experimental analysis. There is a lot to cover, and any subtopic given below could be a blog post in its own right. Some of the more important areas that a data scientist needs to master are:

  1. Analysis of Experiments
  2. Tests of Significance
  3. Confidence Intervals
  4. Probability Distributions
  5. Sampling Theory
  6. Central Limit Theorem
  7. Bell Curve
  8. Dimensionality Reduction
  9. Bayesian Statistics

Some places on the Internet to learn statistics are the MIT OpenCourseWare page Introduction to Statistics and Probability, and the Khan Academy Statistics and Probability course. Good books to learn statistics: Naked Statistics by Charles Wheelan, an awesome, comic-like but highly insightful book that can be read enjoyably by anyone, including those from non-technical backgrounds, and Practical Statistics for Data Scientists by Peter Bruce and Andrew Bruce. A quick worked example follows.
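
To make two of the topics above concrete (tests of significance and confidence intervals), here is a short SciPy sketch on simulated data; the group sizes and effect are invented for illustration.

```python
# A quick, practical example: a two-sample t-test and a 95% confidence interval.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=100, scale=15, size=50)   # e.g. control group scores
group_b = rng.normal(loc=108, scale=15, size=50)   # e.g. treatment group scores

t_stat, p_value = stats.ttest_ind(group_a, group_b)   # test of significance
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# 95% confidence interval for the mean of group_b
ci = stats.t.interval(0.95, len(group_b) - 1,
                      loc=np.mean(group_b), scale=stats.sem(group_b))
print("95% CI for group_b mean:", ci)
```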

 

4) Linear Algebra

Succinctly, linear algebra is about vectors, matrices, and the operations that can be performed on them. This is a fundamental area for data science, since nearly every operation we perform as data scientists has a linear algebra background: we usually work with collections of vectors and matrices. The following topics in linear algebra are all covered in the world-famous book Linear Algebra and Its Applications by Gilbert Strang, an MIT professor. You can also go to the popular MIT OpenCourseWare page, Linear Algebra (MIT OCW). These two resources cover everything you need to know. Some of the most fundamental concepts, which you can also Google or look up on Wikipedia, are listed below (with a short NumPy sketch after the list):

  1. Vector Algebra
  2. Matrix Algebra
  3. Operations on Matrices
  4. Determinants
  5. Eigenvalues and Eigenvectors
  6. Solving Linear Systems of Equations
  7. Computer Algebra Software (Mathematica, Maple, MATLAB, etc.)
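
Several of the operations listed above can be tried directly in NumPy; the matrix below is just an example.

```python
# Core linear algebra operations from the list above, in NumPy.
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])
b = np.array([10.0, 8.0])

print(A @ A)                          # matrix multiplication
print(np.linalg.det(A))               # determinant
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                    # eigenvalues of A
x = np.linalg.solve(A, b)             # solve the linear system A x = b
print(x)
```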

 

5) Data Preparation and Data ETL (Extraction, Transformation, and Loading)


 

Yes, welcome to one of the more infamous sides of data science! If data science has a dark side, this is it. Know for sure that unless your company has dedicated data engineers who do all the data munging and data wrangling for you, 90% of your time on the job will be spent working with raw data. Real-world data has major problems. Usually, it's unstructured, in the wrong formats, poorly organized, contains many missing and invalid values, and contains types that are not suitable for data mining.

Dealing with this problem takes up a lot of a data scientist's time, and your analysis can go massively wrong when there is invalid or missing data. Practically speaking, unless you are unusually blessed, you will have to manage your own data, and that means conducting your own ETL (Extraction, Transformation, and Loading). ETL is a data mining and data warehousing term that means loading data from an external data store or data mart into a form suitable for data mining and data analysis (which usually involves a lot of data preprocessing). Finally, you often have to load data that is too big for your working memory, a problem referred to as external loading. During your data wrangling phase, be sure to look into the following components:

  1. Missing data
  2. Invalid data
  3. Data preprocessing
  4. Data validation
  5. Data verification
  6. Automating the Data ETL Pipeline
  7. Automation of Data Validation and Verification

Usually, expert data scientists try to automate this process as much as possible, since a human being is quickly wearied by this task and is remarkably prone to errors, which will not happen with a Python or R script doing the same operations. Be sure to automate every stage in your data processing pipeline. A minimal cleaning sketch follows.
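
The sketch below shows the kind of scripted cleaning meant above, covering missing and invalid values; the column names and validity rules are hypothetical, since real pipelines encode domain-specific rules.

```python
# A minimal, scriptable data-cleaning sketch: flag invalid values, impute missing ones, validate.
import numpy as np
import pandas as pd

raw = pd.DataFrame({
    "age":    [25, None, 200, 41],        # 200 is clearly invalid
    "income": [50000, 62000, None, -10],  # negative income is invalid
})

clean = raw.copy()
clean.loc[(clean["age"] < 0) | (clean["age"] > 120), "age"] = np.nan   # invalid -> missing
clean.loc[clean["income"] < 0, "income"] = np.nan
clean = clean.fillna(clean.median(numeric_only=True))                  # impute with column medians

assert clean.notna().all().all(), "validation: no missing values remain"
print(clean)
```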

 

6) Machine Learning with Python and R

An expert machine learning scientist has to be proficient in the following areas at the very least:

Data Science Topics Listing (image)

 

Now if you are just starting out in Machine Learning (ML), Python, and R, you will get a sense of how huge the field is, and the entire set of lists above might seem more like advanced Greek than plain English. But not to worry; there are ways to streamline your learning and consume as little time as possible covering nearly every topic given above. After you learn the basics of Python and R, you need to go on to start building machine learning models. From experience, I suggest you split your time 50% on Python and 50% on R, but spend as long as possible on one language before switching to the other. What do I mean? Spend the maximum time learning one programming language at a time. That will prevent syntax errors, conceptual errors, and language confusion.

Now, on the job, in real life, it is much more likely that you will work in a team and be responsible for only one part of the work. However, if you're working at a startup or learning on your own initially, you will end up doing every phase of the work yourself. Be sure to give yourself time to process information and let your brain rest and get a handle on the topics you are trying to learn. For more on this, check out the Learning How to Learn MOOC on Coursera, which is the best way to learn mathematical or scientific topics without burning out. In fact, I would recommend this approach to every programmer out there trying to learn a programming language, or anything considered difficult, like Quantum Mechanics and Quantum Computation, String Theory, or even Microsoft F# or C# for a non-Java programmer.

I strongly recommend the book Hands-On Machine Learning with Scikit-Learn and TensorFlow to learn machine learning in Python. The R book was given earlier in the section on R. A minimal model-building sketch follows.
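
As a first taste of "building a model", here is a minimal scikit-learn workflow on one of the library's bundled datasets; real projects add feature engineering, tuning, and proper validation on top of this.

```python
# A minimal "build a model" sketch: split the data, train, evaluate.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```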

 

7) Data Visualization and Summary

Common tools that you have with which you can produce powerful visualizations include:

  1. Matplotlib
  2. Seaborn
  3. Bokeh
  4. ggplot2
  5. plot.ly
  6. D3.js
  7. Tableau
  8. Google Data Studio
  9. Microsoft Power BI Desktop

Some involve coding, some are drag-and-drop, some are difficult for beginners, some have no coding at all. All of these tools will help you with data visualization. But one of the most overlooked but critical practical functions of a data scientist has been included under this heading: summarisation. 

Summarisation means the practical result of your data science workflow. What does the result of your analysis mean for the operation of the business or the research problem you are currently working on? How do you convert your result into the maximum improvement for your business? Can you measure the impact this result will have on the profit of your enterprise? If so, how? Being able to come out of a data science workflow with this kind of result is one of the most important capacities of a data scientist. And most of the time, efficient summarisation equals excellent knowledge of statistics. Know for sure that statistics is the start and the end of every data science workflow, and you cannot afford to be ignorant about it. Refer to the section on statistics or Google the term for extra sources of information. A small combined sketch follows.
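
Here is a small sketch of how visualization and summarisation go together: aggregate first, then plot. The sales data and region names are made up.

```python
# Visualization plus summarisation: build the summary table, then chart it.
import pandas as pd
import matplotlib.pyplot as plt

sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "East"],
    "revenue": [120, 95, 210, 180, 75],
})

summary = sales.groupby("region")["revenue"].agg(["sum", "mean"])   # the summary
print(summary)

summary["sum"].plot.bar(title="Revenue by region")                   # the visualization
plt.ylabel("Revenue")
plt.tight_layout()
plt.show()
```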

How Can I Learn Everything Above In the Shortest Possible Time?

You might wonder: how can I learn everything given above? Is there a course or a pathway to learn every single concept described in this article in one shot? It turns out there is. There is a dream course for a data scientist that contains nearly everything talked about in this article.

Want to become a data scientist? Welcome to Dimensionless Technologies! It just so happens that the course Data Science using Python and R, a ten-week course that includes ML, Python and R programming, statistics, GitHub project guidance, and job placement, offers nearly every component spoken about above, and more besides. You don't need to buy the books or do any courses other than this one to learn the topics in this article. Everything is covered by this single course, tailor-made to turn you into a data scientist within the shortest possible time. For more, I'd like to refer you to the following link:

Data Science using R & Python

Does this seem too good to be true? Perhaps, because this is a paid course. With a scholarship concession, you could end up paying around INR 40,000 for this ten-week course; you can register for the first two weeks for INR 5,000 and pay the remainder after the two-week trial period, once you've seen whether the course really suits you. If it doesn't, you can always drop out after two weeks and be poorer by just 5k. But in most cases, this course has been found to carry genuine worth. And nothing worthwhile was ever achieved without some payment, right?

In case you want to learn more about data science, please check out the following articles:

Data Science: What to Expect in 2019

and:

Big Data and Blockchain

Also, see:

AI and intelligent applications

and:

Evolution of Chatbots & their Performance

All the best, and enjoy data science. Every single day of your life!

Are the Data Scientists New Business Analysts?


Introduction

The data industry is booming today, and there seems to be no shortage of intelligent opinions about the job responsibilities and roles accelerating it. Most people are confused between the role of a Data Scientist and that of a Business Analyst. Even though both of them deal with data, there are plenty of significant differences that make them suitable for different job positions.

Here, we will discuss how to differentiate a Data Scientist from a Business Analyst, and their job roles too. Before we move on to the actual topic, let us have a quick look at the differences. Later on, we will try to find out the reasons for the diminishing gap between data scientists and business analysts today. We will try to analyse whether there is actually any gap between the two roles and look further into it.

Difference Between a Data Scientist and Business Analyst

A company relies on its business analysts to gain business insights by interpreting and analyzing data and predicting trends, which helps in making critical business decisions. Business analysts also focus on end-to-end automation to eliminate manual intervention and on optimizing business process flows, which can increase productivity and turnaround time for an efficient and successful end result. They also recommend the system changes needed to optimize an organization's overall execution.

Data scientists, on the other hand, specialize in and rely purely on data, which they break down into simpler facts and figures using tools such as statistical calculations, big data technology, and subject matter expertise. They use data comparison algorithms and methodologies to identify potential competitors or resolve day-to-day business issues.

Business analysts often work from preconceived notions or judgments about the factors that drive their businesses. Data scientists, by contrast, have an edge over business analysts, as they leverage data-driven algorithms that provide accuracy and make mathematical, statistical, fact-based predictions.

As organizations proactively define new initiatives and campaigns to evaluate how big data can help transform their businesses, the role of the business analyst is slowly but surely widening into a major one.

Upgradation in Duties of Business Analysts and Data Scientists

In recent times, there have been a lot of advancements in the data science industry. With these advancements, businesses are in better shape to extract much more value out of their data. With increased expectations, there is a shift in the roles of both data scientists and business analysts. Data scientists have moved from a statistics-focused phase to more of a research phase, while business analysts are now filling the gap left by data scientists and taking up parts of their role.

We can see it as an upgrade in both job roles. Business analysts still hold the business angle firmly, but are also handling the statistical and technical parts of the job. Business analysts are now more into predictive analytics. They have reached a stage where they can use off-the-shelf algorithms for predictions in their business domains. BAs are no longer limited to reporting and business questions, but are moving into prescriptive analytics too. They are taking on model building, data warehousing, and statistical analysis.

Keep in mind that business analysts are in no way replacing data scientists. Data scientists are now researching new methods and algorithms which business analysts can then combine with their business acumen in specific business domains.

Recent Advancements in Data Analytics

Data analytics is a field that witnesses continuous revolution. Since data is becoming increasingly valuable with each passing day, it is now treated with great care and concern. To cope with the constant changes in industries and societies as a whole, new tools, techniques, theories, and trends are always being introduced in the data analytics sector. In this article, we will go through some of the latest data analytics opportunities that have come up in the industry.

1. Self-service BI

With self-service BI tools, such as Tableau, Qlik Sense, Power BI, and Domo, managers can obtain current business information in graphical form on demand. While a certain amount of setup by IT may be needed at the outset and when adding a data source, most of the work in cleaning data and creating analyses can be done by business analysts, and the analyses can update automatically from the latest data any time they are opened.

Managers can then interact with the analyses graphically to identify issues that need to be addressed. In a BI-generated dashboard or “story” about sales numbers, that might mean drilling down to find underperforming stores, salespeople, and products, or discovering trends in year-over-year same-store comparisons. These discoveries might in turn guide decisions about future stocking levels, product sales and promotions, and even the building of additional stores in under-served areas.

2. Artificial Intelligence and Machine Learning

Artificial intelligence is one data analytics opportunity that is finding widespread adoption in businesses and decision-making applications. As per Gartner (2018), as much as 41 per cent of organizations have already adopted AI into some aspect of their functioning, while the remaining 59 per cent are striving hard to do the same. There is considerable research going on at present to incorporate artificial intelligence into the field of data science too. With data becoming larger and more complex with each passing minute, managing such data is rapidly moving beyond manual capacities. Scholars have therefore turned to the use of AI for storing, handling, manipulating, and managing larger chunks of data in a safe environment.

3. R language

Data scientists have a number of options for analyzing data using statistical methods. One of the most convenient and powerful is the free R programming language. R is one of the best ways to create reproducible, high-quality analysis since, unlike a spreadsheet, R scripts can be audited and re-run easily. The R language and its package repositories provide a wide range of statistical techniques, data manipulation, and plotting, to the point that if a technique exists, it is probably implemented in an R package. R is almost as strong in its support for machine learning, although it may not be the first choice for deep neural networks, which require higher-performance computing than R currently delivers.

R is available as free open source and is embedded into dozens of commercial products, including Microsoft Azure Machine Learning Studio and SQL Server 2016.

4. Big Data

Most of us are now more than familiar with Big Data terms like Hadoop, Spark, NoSQL, Hive, and Cloud. There are at least 20 NoSQL databases and a number of other Big Data solutions emerging every month. But which of these technologies have good prospects going forward? Which technologies are going to fetch you big benefits?

Why the Role Update?

1. Advancement in technology

There have been a lot of technological advancements in data science: machine learning, deep learning, and automatic data processing, to name just a few. With all these new technologies, organisations are expecting more out of their business analysts. Organisations are looking to leverage these technologies in their decision-making processes. To fulfil this, business analysts need to upgrade their role and take on parts of the data scientist's role too. Meanwhile, data scientists are moving more towards researching new methods and algorithms; they are the ones bringing innovation to data science, one advance after another.

2. Identification of more areas of application

Organisations are now able to explore more areas where they can leverage the power of data science. With more applications, organisations are aiming to automate their decision-making processes. Business analysts need to step up for more diversified applications. Hence, they have to expand their skillsets and take on upgraded roles. Decision scientists, meanwhile, focus on finding newer methods that can help BAs solve complex business problems.

3. Increase in complexity of the business problem

Applications of data science in business are getting both more complicated and more complex day by day. With this increase in complexity, business analysts now have more prominent and complex roles. This is one reason why new BAs may need to expand their skillsets: organisations are expecting more out of them.

4. Growth of data

There has been a tremendous increase in data generation, and practices like Big Data have become prominent players in the picture. Business analysts today may need to be hands-on with Big Data technologies rather than just having a business mindset towards the problem.

5. Lack of qualified talent

Today, there is also a lack of qualified professionals in data science. This results in one individual taking on multiple roles: BA, data engineer, data scientist, etc. There are no clear boundaries between these roles in most organisations today, so a business analyst today should also have knowledge of maths and technology. This is another reason why business analysts act as data scientists in many organisations.

The Tools of the Trade

The world of a business analyst is business-model centric. Either they are reporting, discussing, or modifying the business model. Not only must they be proficient with Microsoft Office, but they also must be excellent researchers and problem-solvers. Elite communication skills are also a must, as business analysts interact with every facet of the business. They must also be “team players” and able to interact and work with all departments within a company.

Data scientists' job descriptions are quite different from business analysts'. They are mathematicians who understand programming languages, as opposed to report writers and company communicators. They therefore use a different set of tools. Utilizing programming languages, understanding the principles of machine learning, and being able to generate and apply mathematical models are critical skills for a data scientist.

The commonality between business analysts and data scientists is that both of them need to generate and communicate figure-rich reports. The software used to generate such reports may be the same for the two positions, but the content of the reports will be substantially different.

Which is Right for You?

If deciding between a future career as a business analyst or a data scientist, envisioning the type of position you want should steer you in the right direction. Do you like interacting with people? Do you like summarizing information to make reports? If so, you are more likely to be happy in a business analyst position than as a data scientist, because data scientists work more independently. Data scientists are also more technical in nature, so if you have a more technical background, a career as a data scientist might be for you.

Summary

In any case, organisations are now on the lookout for new-age business analysts, who need to combine knowledge of the right analytic tools, big data technology, and machine learning. Companies should no longer rely on business analysts alone to predict the future of a business. So if you are a business analyst, you have a lot to learn to stay relevant. But the good news is that there are various data science programs which can help you retool to stay competitive.

Follow this link, if you are looking to learn more about data science online!

You can follow this link for our Big Data course!

Additionally, if you are interested in learning Data Science, click here to start

Furthermore, if you want to read more about data science, you can read our blogs here

Also, the following are some blogs you may like to read

How to train a decision tree classifier for churn prediction

AI and intelligent applications

What is Predictive Model Performance Evaluation

 

Top 5 Advantages of AWS Big Data Speciality


The Biggest Disruption in the IT Sector

Now unless you've been a hermit or a monk living in total isolation, you will have heard of Amazon Web Services and AWS Big Data. It's a sign of an emerging global market and of the entire world becoming smaller every day. Why? The current estimate for the cloud computing market in 2020, according to Forbes (a recent and highly reliable prediction), is a staggering 411 billion USD! Visit the following link to read more and see the statistics for yourself:

https://www.forbes.com/sites/louiscolumbus/2017/10/18/cloud-computing-market-projected-to-reach-411b-by-2020

To know more, refer to Wikipedia for the following terms by clicking on them; they mark, in order, the evolution of cloud computing (I will also provide basic information to keep this article as self-contained as possible):


1. Software-as-a-Service (SaaS)

This was the beginning of the revolution called cloud computing. Companies and industries across verticals understood that they could let experts manage their software development, deployment, and management for them, leaving them free to focus on their key principle – adding value to their business sector. This was mostly confined to the application level. Follow the heading link for more information, if required.

2. Platform-as-a-Service (PaaS)

PaaS began when companies started to understand that they could outsource both software management and operating systems and maintenance of these platforms to other companies that specialized in taking care of them. Basically, this was SaaS taken to the next level of virtualization, on the Internet. Amazon was the pioneer, offering SaaS and PaaS services worldwide from the year 2006. Again the heading link gives information in depth.

3. Infrastructure-as-a-Service (IaaS)

A few years later, around 2011, the big giants like Microsoft, Google, and a variety of other big names began to realize that this industry was starting to boom beyond all expectations, as more and more businesses moved to the Internet for worldwide visibility. However, Amazon was the market leader by a big margin, since it had a five-year head start on the other tech giants. This led to unprecedented disruption across verticals, as more and more companies transferred their IT requirements to IaaS providers like Amazon, leading to (in some cases) savings of well over 25% and per-employee costs coming down by 30%.

After all, why should companies set up their own servers, data warehouse centres, development centres, maintenance divisions, security divisions, and software and hardware monitoring systems if there are companies with the world's best experts in every one of these fields that will do the job at less than 1% of the cost the company would incur if it had to hire staff, train them, monitor them, buy its own hardware, and hire staff for that as well? The list goes on and on. If you are already a tech giant like, say, Oracle, you have everything set up for you already. But suppose you are a startup trying to save every penny, and there are tens of thousands of such startups right now: why do all that when you have professionals to do it for you?

There is a story behind how AWS got started in 2006 – I’m giving you a link, so as to not make this article too long:

https://medium.com/@furrier/original-content-the-story-of-aws-and-andy-jassys-trillion-dollar-baby

For even more information on AWS and how Big Data comes into the picture, I recommend the following blog:

Introduction to AWS Big Data

AWS Big Data Speciality

OK. So now you may be thinking, so this is cloud computing and AWS – but what does it have to do with Big Data Speciality, especially for startups? Let’s answer that question right now.

A startup today has a herculean task ahead of them.

Not only do they have to get noticed in the big, booming startup industry; they also have to scale well if their product goes viral and receives a million hits in a day, provide security for their data in case a competitor hires hackers from the Dark Web to take down their site, follow up everything they do on social media with a division managing only social media, and maintain all their hardware and software in case of outages. If you are a startup counting every penny you make, how much easier is it to outsource all your computing needs (except social media) to an IaaS firm like AWS?

You will be ready for anything that can happen, and nothing will take down your website or service other than your own self. Oh, and not to mention saving around 1 million USD in costs over the year! If you count nothing but your own social media statistics, every company that goes viral has to manage Big Data! And if your startup disrupts an industry, again, you will be flooded with GET requests, site accesses, purchases, CRM, scaling problems, downtime to avoid, and practically everything a major tech company has to deal with!

Bro, save your grey hairs, and outsource all your IT needs (except social media – that you personally need to do) to Amazon with AWS!

And the Big Data Speciality?

Having laid the groundwork, let’s get to the meat of our article. The AWS certified Big Data Speciality website mentions the following details:

From https://aws.amazon.com/certification/certified-big-data-specialty/

The AWS Certified Big Data – Specialty exam validates technical skills and experience in designing and implementing AWS services to derive value from data. The examination is for individuals who perform complex Big Data analyses and validates an individual’s ability to:

  • Implement core AWS Big Data services according to basic architecture best practices

  • Design and maintain Big Data

  • Leverage tools to automate data analysis

So, what is an AWS Big Data Speciality certified expert? Nothing more than the holder of an internationally recognized certification that says that you, as a data scientist, can work professionally with AWS and Big Data in data science.

Please note: the eligibility criteria for an AWS Big Data Speciality Certification is:

From https://aws.amazon.com/certification/certified-big-data-specialty/

To put it in layman’s terms, if you, the data scientist, were Priyanka Chopra, getting the AWS Big Data Speciality certification passed successfully is the equivalent of going to Hollywood and working in the USA starring in Quantico!

Suddenly, a whole new world is open at your feet!

But don't get too excited: unless you already have five years of experience with Big Data, there's a long way to go. But work hard, take one step at a time, don't stare at the distant goal but focus on every single day, one task at a time, and in the end you will reach your destination. Persistence, discipline, and determination matter. As simple as that.


Five Advantages of an AWS Big Data Speciality

1. Massive Increase in Income as a Certified AWS Big Data Speciality Professional (a long term 5 years plus goal)

Everyone who's anyone in data science knows that a data scientist in the US earns an average of 100,000 USD every year. But what is the average salary of an AWS Big Data Speciality certified professional? Hold on to your hats, folks; it's a 160,000 USD starting salary. And with just two years of additional experience, that salary can cross 250,000 USD per year if you are a superstar at your work, depending upon your performance on the job. Do you still need a push to get into AWS? The following table shows the average salaries for specialists in the following Amazon products (from www.dezyre.com):

Top Paying AWS Skills According to Indeed.com

AWS Skill: Salary
DynamoDB: $141,813
Elastic MapReduce (EMR): $136,250
CloudFormation: $132,308
Elastic Cache: $125,625
CloudWatch: $121,980
Lambda: $121,481
Kinesis: $121,429
Key Management Service: $117,297
Elastic Beanstalk: $114,219
Redshift: $113,950

2. Wide Ecosystem of Tools, Libraries, and Amazon Products


Amazon Web Services, compared to other cloud IaaS services, has by far the widest ecosystem of products and tools. As a Big Data specialist, you are free to choose your career path. Do you want to get into AI? Do you have an interest in S3 (the storage system) or high-performance serverless computing (AWS Lambda)? You get to choose, along with the company you work for. I don't know about you, but I'm just writing this article and I'm seriously excited!

3. Maximum Demand Among All Cloud Computing jobs

If you manage to clear the AWS certification, then guess what: AWS certified professionals have by far the maximum market demand, simply because more than half of all cloud computing IaaS companies use AWS. The demand for AWS certifications is the highest right now. To mention some figures: in 2019, 350,000 professionals will be required for AWS jobs, and 60% of cloud computing job postings ask for AWS skills (naturally, considering that it has half the market share).

4. Worldwide Demand In Every Country that Has IT

It’s not just in the US that demand is peaking. There are jobs available in England, France, Australia, Canada, India, China, EU – practically every nation that wants to get into IT will welcome you with open arms if you are an AWS certified professional. And look no further than this site. AWS training will be offered soon, here: on Dimensionless.in. Within the next six months at the latest!

5. Affordable Pricing and Free One Year Tier to Learn AWS

Amazon has always been able to command the lowest prices because of its dominant market share. AWS offers one year of its cloud IaaS platform services completely free. AWS training materials are also less expensive compared to other offerings. The following features are offered free for one full year under Amazon's AWS Free Tier system:

https://aws.amazon.com/free/

The following is a web-scrape of their free-tier offering:


AWS Free Tier One Year Resources Available

There were initially seven pages in the document that I scraped from www.aws.com/free. To really have a look, go to the website at the previous link and see for yourself (there is much more detail there, in much higher resolution). That alone will show you why AWS is sitting pretty on top of the cloud, literally.

Final Words

Right now, AWS rules the roost in cloud computing, but there is competition from Microsoft, Google, and IBM. Microsoft Azure has a lot of glitches which cost a lot to fix. Google Cloud Platform is cheaper but has very high technical support charges. A dark horse here: IBM Cloud. Their product has a lot of offerings and a lot of potential, third only to Google and AWS. If you are working and want to go abroad, or have a thirst for achievement, go for AWS. Totally. Finally, good news for all current Dimensionless students and alumni: AWS has 100% support for Python! (It also supports Go, Ruby, Java, Node.js, and many more, but Python has 100% support.)

Keep coming to this website – expect to see AWS courses here in the near future!

AWS in the Cloud

 

Business Analysis (BA) Career Path

Career path in Business Analysis

More organizations are adopting data-driven and technology-focused approaches to business, and hence the need for analytics expertise continues to grow. As a result, career opportunities in analytics are around every corner, and identifying analytics talent has become a priority for companies in nearly every industry, from healthcare, finance, and telecommunications to retail, energy, and sports.

In this blog, we will be talking about the different career paths and options in the Business Analytics field. Furthermore, we will be discussing the qualifications required to become a business analyst and the primary roles a Business Analyst handles at a firm.

In this blog, we will be discussing

  1. Who is a Business Analyst
  2. Qualifications required for the BA role
  3. Career options in the BA role
  4. Career growth in the BA role
  5. Responsibilities of a Business Analyst
  6. Expected salary packages in BA

Who is a Business Analyst?

Business analysts, also known as management analysts, work for all kinds of businesses, nonprofit organizations, and government agencies. While job functions vary depending on the position, the work of business analysts involves studying business processes and operating procedures in search of ways to improve an organization’s operational efficiency and achieve better performance. Simply put, a Business Analyst is someone who works with people within an organization to understand their business problems and needs, and then interprets, translates, and documents those needs as specific business requirements for solution providers to implement.

Qualifications required to become a Business Analyst

Most entry-level business analyst positions require at least a bachelor’s degree, and beginning Business Analysts need either a strong business background or extensive IT knowledge. At this stage, you can start to work as a business analyst with job responsibilities that include collecting, analyzing, communicating, and documenting requirements, user testing, and so on. Entry-level jobs may include industry/domain expert, developer, and/or quality assurance.

With sufficient experience and good performance, a young professional can move into a junior business analyst position. Some choose instead to return to school and earn a master’s degree before beginning work as business analysts in large organizations or consultancies.

Skills required to be a Business Analyst

Professional business analysts play a critical role in a company’s productivity, efficiency, and profitability, so the essential skill set ranges from communication and interpersonal skills to problem-solving and critical thinking. Let us discuss each in a bit more detail.

Communication Skills

Business analysts spend a significant amount of time interacting with clients, users, management, and developers, so being an effective communicator is key. You will be expected to facilitate work meetings, ask the right questions, and actively listen to your colleagues to take in new information and build relationships.

Problem-Solving Skills

Every project you work on is, at its core, about developing a solution to a problem. Business analysts work to build a shared understanding of problems, outline the parameters of the project, and determine potential solutions. Problem-solving is therefore a must-have skill for this position.

Negotiation Skills

A business analyst is an intermediary between a variety of people with various personalities: clients, developers, users, management, and IT. You have to be able to achieve a profitable outcome for your company while finding a solution that keeps the client happy. This balancing act demands the ability to influence a mutual solution while maintaining professional relationships.

Critical Thinking Skills

Business analysts must assess multiple options before leading the team toward a solution. Doing so effectively requires a critical review of data, documentation, user input surveys, and workflows. They ask probing questions until every issue has been evaluated in its entirety to determine the best resolution, which makes critical thinking a must-have prerequisite for this position.

Career options in BA role

A business analyst’s career usually begins at an entry level; with experience and a better understanding of how businesses function, they gradually move up the ladder.

Business Analysts also enjoy a seamless transition to different roles according to their interests, because the profession rests on a set of highly specialized skills that can be applied successfully to any industry or subject matter area. This allows a Business Analyst to move between industries, companies, and subject matter areas with ease, and that mobility becomes the core of their career progression and professional development.

Other roles that one can take up after gaining experience as a Business Analyst include:

  1. Operations Manager
  2. Product Owner
  3. Management Consultant
  4. Project Manager
  5. Subject Matter Expert
  6. Business Architect
  7. Program Manager

Career growth in BA role

Once you have several years of experience in the industry, you will reach a turning point where you can choose the next step in your business analyst career. After three to five years, you can be positioned to move up into roles such as IT business analyst, senior/lead business analyst, or product manager.

But broadly, beyond all the fancy names given to designations, we can consider four levels of professional analytics roles:

Level 1: The Business Analyst

  • Analyzes information for patterns and trends
  • Applies analytics to solve business problems
  • Identifies processes and business areas in need of improvement

Level 2: The Data Scientist

  • Builds analytics models and algorithms
  • Implements technical solutions to solve business problems
  • Extracts meaning from and interprets data

Level 3: The Analytics Decision Maker

  • Leverages data to influence decision-making, strategy, and operations
  • Explores and integrates the use of data to gain competitive advantages
  • Uses analytics to drive growth and create better organizational outcomes

Level 4: The Analytics Leader

  • Leads advanced analytics projects
  • Aligns business and analytics within the organization
  • Oversees data management and data governance

Responsibilities of a Business Analyst

Modern Analyst identifies several characteristics that make up the role of a business analyst as follows:

  • Working with the business to identify opportunities for improvement in business operations and processes
  • Being involved in the design or modification of business systems or IT systems
  • Interacting with business stakeholders and subject matter experts in order to understand their problems and needs
  • Gathering, documenting, and analyzing business needs and requirements
  • Solving business problems and, as needed, designing technical solutions
  • Documenting the functional and, sometimes, technical design of the system
  • Interacting with system architects and developers to ensure the system is properly implemented
  • Testing the system and creating system documentation and user manuals

Expected Salary Packages of BA

The average salary of a business analyst in India is around INR 6.5 lakhs per annum. As one continues to gain experience in this field, the salary becomes more lucrative.

The more experience you have as a business analyst, the more likely you are to be assigned larger and/or more complex projects. After eight to 10 years in various business analysis positions, you can advance to chief technology officer or work as a consultant. You can take the business analyst career path as far as you would like, progressing through management levels as far as your expertise, talents, and desires take you.

Conclusion

So, with so many interesting, promising, and rewarding options available, Business Analysts first need to get a firm grasp of the basics of data analysis. You can also have a look at this post to learn more about the different components of data science; it will help you boost your business analyst career.

We, at Dimensionless Technologies, offer a data science course which helps make you industry-ready. Do go through our website and let us know how we may help you.