Top 10 reasons why Dimensionless is the Best Data Science Course Provider Online


Introduction

Data science was called "the sexiest job of the 21st century" by the Harvard Business Review. Data scientists are problem solvers and analysts who identify patterns, spot trends, and make new discoveries, often using real-time data, machine learning, and AI. This is where a data science course comes into the picture.

There is strong demand for qualified data scientists. Projections from IBM suggest that demand for data scientists will grow by 28% by 2020, with roughly 2.7 million positions for data professionals in the United States alone. In addition, powerful software tools have given us far better access to detailed analyses.

Dimensionless Tech offers the finest online data science and big data courses to meet this demand, with extensive course coverage, case studies, and completely hands-on sessions with personal attention for every learner. To satisfy the elevated demand, we provide only live online instruction by our instructors, not classroom or pre-recorded training.

About Dimensionless Technologies

Dimensionless Technologies is a training firm providing live online training in data science. Courses include data science with R and Python, deep learning, and big data analytics. It was founded in 2014 by two IITians, Himanshu Arora and Kushagra Singhania, with the goal of offering quality data science training at an affordable cost.
Dimensionless provides a range of live online data science courses. It aims to equip learners with the right skill set through the right methodology, flexible and adaptable, at the right time, helping them make informed business decisions and sail towards a successful career.

Why Dimensionless Technologies

Experienced Faculty and Industry experts

Data science is a vast field, and a comprehensive grasp of the subject requires real effort. With our experienced faculty, we are committed to imparting quality, practical knowledge to every learner. Our faculty, with more than 10 years of industry experience in data science, are best placed to guide students on their journey to success. Our trainers also boast strong academic credentials (they are IITians)!

End to End domain-specific projects

We at Dimensionless believe that concepts are learned best when the theory covered in the classroom is actually implemented. With our meticulously designed courses and projects, we make sure our students get hands-on projects ranging from the pharma, retail, and insurance domains to banking and financial-sector problems. End-to-end projects ensure that students understand the entire problem-solving lifecycle in data science.

Up to date and adaptive courses

All our courses have been developed around recent trends in data science, and we have made sure to include all current industry requirements for data scientists. Courses start from level 0 and assume no prerequisites; learners move gradually from basic introductions to advanced concepts with constant assistance from our experienced faculty. Every concept is covered in enough depth that learners are never left wanting more. Our courses have something for everyone, whether you are a beginner or a professional.

Resource assistance

Dimensionless Technologies has all the hardware required, from running a regression equation to training a deep neural network. Our online lab gives learners a platform where they can execute all their projects. A laptop with a bare-minimum configuration (2 GB RAM and Windows 7) is enough to pave your way into the world of deep learning. Pre-configured environments save learners a lot of time installing the required tools, so all the software is ready from day one and learning is accelerated.

Live and interactive sessions

Dimensionless delivers its classes as live, interactive sessions on our platform. All classes are taught live by instructors and are not pre-recorded. This format lets learners keep up their studies from the comfort of their own homes; you don't need to spend time or money on travel and can attend from any location you prefer. After each class, we share the recording with all learners so they can review it and clear their doubts, and trainers are available after class for doubt-clearing as well.

Lifetime access to study materials

Dimensionless provides lifetime access to the learning material supplied in the course. Many other course providers grant access only while you are enrolled in classes. With all the resources available afterwards, our students' learning does not stop even after they have completed the entire course.

Placement assistance

Dimensionless Technologies provides placement assistance to all its students. With highly experienced faculty and contacts in the industry, we make sure our students land a data science job and kick-start their careers. We help at every stage, from resume building to final interviews, so Dimensionless is by your side to help you achieve your goals.

Course completion certificate

Apart from the training itself, we issue a course completion certificate once the training is complete. The certificate adds credibility to a learner's resume and helps in landing their dream data science job.

Small batch sizes

We make sure our batch sizes stay small. Keeping batches small allows us to focus on students individually and give them a better learning experience. With personalized attention, students learn as much as possible and we are able to clear all their doubts as well.

Conclusion

If you want to start a career in data science, Dimensionless has the right courses for you. Not only are all the key concepts and techniques covered, they are also implemented and applied to real-world business problems.

Follow this link, if you are looking to learn data science online!

You can follow this link for our Big Data course! This course will equip you with the exact skills required. Packed with content, this course teaches you all about AWS tools and prepares you for your next ‘Data Engineer’ role

Additionally, if you are interested in learning Data Science, click here to start the Online Data Science Course

Furthermore, if you want to read more about data science, read our Data Science Blogs

Concept of Cluster Analysis in Data Science

A Comprehensive Guide to Data Mining: Techniques, Tools and Application

A Comprehensive Introduction to Data Wrangling and Its Importance

Face Recognition Using Python


Introduction

AI is revolutionizing the world, and face recognition is one part of that spectrum. Most of us already use face recognition systems; they are everywhere, from devices such as our mobile phones to platforms like Facebook, photo gallery apps, and advanced security cameras.

In this blog, we are going to get our hands dirty with facial recognition in Python and learn how to train a model to recognize faces from images! Before we start with the implementation, let us dive a little into the basics of face recognition theory.

 

What is Face Recognition?

A face recognition system answers one question: does the face in one image match the face in another? It takes a face image and predicts whether that face corresponds to any of the other images in a supplied database. Face recognition systems have been developed to compare faces and forecast a possible match irrespective of expression, facial hair, and age.

Facial recognition is the process of identifying or verifying the identity of a person using their face. It captures, analyzes and compares patterns based on the person’s facial details.

  • The face detection process detects and points out human faces in images.
  • The face capture process transforms the camera feed (a face) into a set of mathematical representations based on the person’s facial features.
  • The face match process verifies if two faces are of the same person.

Today it’s considered to be the most natural of all biometric measurements.

What are the Steps in Face Recognition?

Step 1: Detecting the Faces

Face detection is the first phase in our pipeline. We must locate the faces in a picture before trying to recognize them. Methods such as HOG (Histogram of Oriented Gradients) can be used to find the faces in a given picture. In the HOG feature descriptor, the distribution (histogram) of gradient directions is used as the feature set. Gradients (X and Y derivatives) are useful because the gradient magnitude is large around edges and corners, and edges and corners carry more information about the shape of an object than flat regions do. HOG is essentially a way to describe a picture by identifying its edges and corners through comparison of the various sections of the image.

Step 2: Face Landmark Estimation

Next, we have to cope with issues such as faces turned in different directions. Such images look completely different to a computer, and their similarity cannot be found as-is. We can use an algorithm known as face landmark estimation to handle this; Vahid Kazemi and Josephine Sullivan published such an approach in 2014. The fundamental idea is that we locate 68 particular points on every face (called landmarks). Once we know where the distinct facial features are, we can scale, rotate, and shear the picture so that the features line up for every person.

Step 3: Face Encoding

At this point, we need a way to extract a few fundamental measurements from each face. Then we can measure the unfamiliar face in the same way and find the closest known face. This can be done with deep learning (CNNs), which produces an embedding of the features from the previous steps. Once trained, the same network can generate this embedding for an unknown face.

Step 4: Classifying Unknown Faces into Known Ones

This is actually the simplest phase. All we have to do is find the person in our database of known individuals whose measurements are nearest to those of our test picture. We can do this with a basic machine learning classification algorithm: train a classifier that takes the measurements from a new sample picture and tells us which known person is the closest match. Running this classifier takes milliseconds, and its output is the person’s name!

 

Transfer Learning for Face Recognition

Transfer learning is a machine learning technique in which a model developed for one task is reused as the starting point for a model on a second task. It is a popular approach in deep learning, where pre-trained models are used as the starting point for computer vision and natural language processing tasks, given the huge compute and time resources required to train neural networks on these problems from scratch. We use transfer learning in this blog as well: for face detection and recognition we use pre-built models, because training a face recognition model from scratch is very costly and requires a lot of data and computing power.

For our task, we will use Python’s face_recognition library. The library uses a deep learning model trained with a triplet loss function, known as a Siamese network. “Siamese” means linked or attached; perhaps you have heard of Siamese twins. Siamese networks can be built from convolutional layers, dense layers, or LSTM layers. We will use a convolutional Siamese network, since we are working with images to identify faces. You can understand the architecture from this image:

Convolutional Siamese Network Architecture

This is the fundamental algorithm:

  1. We take two photographs (images 1 and 2) and feed each of them to the same CNN. The last layer of the CNN produces a fixed-shape vector (the image embedding), so we get two embeddings, h1 and h2.
  2. The absolute distance between the two vectors is calculated.
  3. That distance is passed through a sigmoid function, which generates a similarity score.
  4. The score is nearer to 1 if the pictures are similar, and nearer to 0 if they are not. (A minimal code sketch of this scoring step follows below.)
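To make the scoring step concrete, here is a hedged sketch of the distance-and-sigmoid computation. The 128-dimensional embedding size, the random placeholder vectors, and the learned weights are illustrative assumptions, not the library’s actual implementation:

```python
import numpy as np

def similarity_score(h1, h2, weights, bias=0.0):
    """Siamese-style score: weighted absolute distance between two embeddings,
    squashed by a sigmoid so ~1 means 'same person' and ~0 means 'different'."""
    distance = np.abs(h1 - h2)                # element-wise absolute difference
    logit = np.dot(weights, distance) + bias  # 'weights' stand in for a learned output layer
    return 1.0 / (1.0 + np.exp(-logit))

# Example with random 128-d embeddings as placeholders for real CNN outputs
h1, h2 = np.random.rand(128), np.random.rand(128)
print(similarity_score(h1, h2, weights=np.random.rand(128)))
```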

 

Implementation

Getting the libraries

The first step is to load all the libraries. We will be using the face_recognition library for detection and recognition in this case. This library provides out-of-the-box methods to perform the various tasks involved in a facial recognition process.
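As a hedged sketch, the imports assumed for the rest of this walkthrough might look like this (face_recognition, NumPy, and pandas are the only third-party packages used; the exact setup in the original notebook may differ):

```python
# Assumed imports for the walkthrough; install with: pip install face_recognition numpy pandas
import os

import face_recognition   # face detection and 128-d face encodings
import numpy as np
import pandas as pd
```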

 

Generating the Encodings for the Known Users

In this section, we convert images of the known users into a mathematical representation: a high-dimensional vector called an embedding. Each image has its own embedding, and these embeddings describe the image in a high-dimensional space.

The code below tries to identify a face in a given image. Once the model detects a face, it extracts the facial features and passes them to another model, which converts these features into a mathematical representation known as an embedding. In the end, we collate all the images and their corresponding embeddings in a list.
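A minimal sketch of this step might look like the following (the folder name known_users and the convention that the filename carries the user’s name are assumptions made for illustration; the imports from the earlier sketch are assumed):

```python
# Build embeddings for every image of the known users.
known_dir = "known_users"                 # hypothetical folder of known-user images
known_names, known_encodings = [], []

for filename in os.listdir(known_dir):
    image = face_recognition.load_image_file(os.path.join(known_dir, filename))
    encodings = face_recognition.face_encodings(image)    # one 128-d vector per detected face
    if encodings:                                          # skip images where no face was found
        known_encodings.append(encodings[0])
        known_names.append(os.path.splitext(filename)[0])  # e.g. "alice.jpg" -> "alice"
```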

This is our set of ground-truth values. All the users present in this list are the ones we want to recognize correctly; any user outside this set should be flagged as “unknown” by the model!

 

Matching New Users

In the previous section, we generated embeddings for known users. Now, we need to generate these embeddings for the new test users whom we want to predict through our model.

We have written a predict-face utility function which takes the path of a test image as input and returns the name of the recognized person!
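A minimal sketch of such a utility, assuming the known_names and known_encodings lists built above (the tolerance of 0.6 mirrors the library’s default and is an assumption here), could be:

```python
def predict_face(test_image_path, tolerance=0.6):
    """Return the name of the closest known user, or 'unknown'."""
    image = face_recognition.load_image_file(test_image_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        return "unknown"                                    # no face detected in the test image
    distances = face_recognition.face_distance(known_encodings, encodings[0])
    best = int(np.argmin(distances))                        # index of the closest known user
    return known_names[best] if distances[best] <= tolerance else "unknown"
```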

 

Getting the Predictions

The previous utility function takes one image as input. The code below iterates over multiple test images present in a folder, passes each one to the predict function, and collects the predicted name. All the results are stored in a data frame!
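A sketch of that loop (the folder name test_users and the convention that the filename carries the true label are assumptions for illustration) might be:

```python
test_dir = "test_users"                   # hypothetical folder of test images
rows = []
for filename in os.listdir(test_dir):
    rows.append({
        "image": filename,
        "actual": os.path.splitext(filename)[0],            # ground truth assumed from filename
        "predicted": predict_face(os.path.join(test_dir, filename)),
    })

results = pd.DataFrame(rows)
print(results.head())
```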

 

Calculating Model Metrics

This is an extension to measure the model’s performance. We calculate the accuracy, specificity, recall, and F1 score of our face prediction model.
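One hedged way to compute these numbers from the results data frame above is with scikit-learn (an extra assumed dependency); specificity is not a built-in scorer, so it would be derived from the confusion matrix:

```python
from sklearn.metrics import accuracy_score, recall_score, f1_score

y_true, y_pred = results["actual"], results["predicted"]

accuracy = accuracy_score(y_true, y_pred)
recall = recall_score(y_true, y_pred, average="macro", zero_division=0)   # macro-averaged over users
f1 = f1_score(y_true, y_pred, average="macro", zero_division=0)
print(f"accuracy={accuracy:.3f}  recall={recall:.3f}  f1={f1:.3f}")
# Specificity can be computed per class from sklearn.metrics.confusion_matrix if needed.
```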

 

Summary

Security is now one of the areas that uses face recognition the most. Facial recognition is a very efficient tool that law enforcement can use to identify criminals and that software businesses can use to help consumers access their products. The technology can be developed further for use in other areas, such as ATMs, private records, or other sensitive equipment, and may eventually make other security measures, such as passwords and keys, obsolete.

Innovators are also seeking to introduce facial recognition in subways and other rail networks, where it could be used to pay your transport fare, with your face acting as a credit card. The system takes your picture, runs it through a recognition scheme, and charges an account you set up earlier, instead of making you go to a counter and buy a ticket. This can streamline the process and dramatically improve passenger flow. That is the future.

Follow this link, if you are looking to learn data science online!

You can follow this link for our Big Data course! This course will equip you with the exact skills required. Packed with content, this course teaches you all about AWS tools and prepares you for your next ‘Data Engineer’ role.

Additionally, if you are interested in learning Data Science, click here to start the Online Data Science Course

Furthermore, if you want to read more about data science, read our Data Science Blogs

Concept of Cluster Analysis in Data Science

A Comprehensive Guide to Data Mining: Techniques, Tools and Application

A Comprehensive Introduction to Data Wrangling and Its Importance

 

The Upcoming Revolution in Predictive Analytics (And Data Science)


The Next Generation of Data Science

Quite literally, I am stunned.

I have just completed my survey of data (from articles, blogs, white papers, university websites, curated tech websites, and research papers all available online) about predictive analytics.

And I have a reason to believe that we are standing on the brink of a revolution that will transform everything we know about data science and predictive analytics.

But before we go there, you need to know: why the hype about predictive analytics? What is predictive analytics?

Let’s cover that first.

 Importance of Predictive Analytics


 

According to Wikipedia:

Predictive analytics is an area of statistics that deals with extracting information from data and using it to predict trends and behavior patterns. The enhancement of predictive web analytics calculates statistical probabilities of future events online. Predictive analytics statistical techniques include data modeling, machine learning, AI, deep learning algorithms and data mining.

Predictive analytics is why every business wants data scientists. Analytics is not just about answering questions; it is also about finding the right questions to answer. The applications of this field are many: nearly every human endeavor appears in the excerpt from Wikipedia that follows, which lists the applications of predictive analytics:

From Wikipedia:

Predictive analytics is used in actuarial science, marketing, financial services, insurance, telecommunications, retail, travel, mobility, healthcare, child protection, pharmaceuticals, capacity planning, social networking, and a multitude of numerous other fields ranging from the military to online shopping websites, Internet of Things (IoT), and advertising.

In a very real sense, predictive analytics means applying data science models to given scenarios that forecast or generate a score of the likelihood of an event occurring. The data generated today is so voluminous that experts estimate that less than 1% is actually used for analysis, optimization, and prediction. In the case of Big Data, that estimate falls to 0.01% or less.

Common Example Use-Cases of Predictive Analytics

 


Components of Predictive Analytics

 

A skilled data scientist can utilize the prediction scores to optimize and improve the profit margin of a business or a company by a massive amount. For example:

  • If you buy a book for children on the Amazon website, the website identifies that you have an interest in that author and that genre and shows you more books similar to the one you just browsed or purchased.
  • YouTube also has a very similar algorithm behind its video suggestions when you view a particular video. The site (or rather, the analytics algorithms running on the site) identifies more videos that you would enjoy watching based upon what you are watching now. In ML, this is called a recommender system.
  • Netflix is another famous example where recommender systems play a massive role in the suggestions for the ‘shows you may like’ section, and the recommendations are well known for their accuracy in most cases.
  • Google AdWords (text ads at the top of every Google Search) that are displayed is another example of a machine learning algorithm whose usage can be classified under predictive analytics.
  • Department stores often arrange products so that commonly bought groups are easy to find together. For example, the fresh fruits and vegetables would be close to the health food supplements and diet-control foods that weight-watchers commonly use. Coffee/tea/milk and biscuits/rusks make another possible grouping. You might think this is trivial, but department stores have recorded up to a 20% increase in sales when such optimal grouping and placement was performed – again, through a form of analytics.
  • Bank loans and home loans are often approved based on the credit score of a customer. How is that calculated? An expert system of rules, classification, and extrapolation of existing patterns – you guessed it – using predictive analytics.
  • Allocating budgets in a company to maximize the total profit in the upcoming year is predictive analytics. This is simple at a startup, but imagine the situation in a company like Google, with thousands of departments and employees, all clamoring for funding. Predictive Analytics is the way to go in this case as well.
  • IoT (Internet of Things) smart devices are one of the most promising applications of predictive analytics. It will not be too long before sensor data from aircraft parts is run through predictive analytics to tell operators that a part has a high likelihood of failure. Ditto for cars, refrigerators, military equipment, military infrastructure and aircraft – anything that uses IoT (which is nearly every embedded processing device available in the 21st century).
  • Fraud detection, malware detection, hacker intrusion detection, cryptocurrency hacking, and cryptocurrency theft are all ideal use cases for predictive analytics. In this case, the ML system detects anomalous behavior on an interface used by the hackers and cybercriminals to identify when a theft or a fraud is taking place, has taken place, or will take place in the future. Obviously, this is a dream come true for law enforcement agencies.

So now you know what predictive analytics is and what it can do. Now let’s come to the revolutionary new technology.

Meet Endor – The ‘Social Physics’ Phenomenon

 


End-to-End Predictive Analytics Product – for non-tech users!

 

In a remarkable first, a research team at MIT, USA, has created a new science called social physics, or sociophysics. Now, much about this field is deliberately kept highly confidential because of its massive disruptive power as far as data science is concerned, especially predictive analytics. The only requirement of this science is that the system being modeled has to be a human-interaction-based environment. To keep the discussion simple, we shall explain the entire system in points.

  • All systems in which human beings are involved follow scientific laws.
  • These laws have been identified, verified experimentally and derived scientifically.
  • By laws we mean equations, such as (just an example) Newton’s second law: F = ma (force equals mass times acceleration).
  • These equations establish laws of invariance – that are the same regardless of which human-interaction system is being modeled.
  • Hence the term social physics – like Maxwell’s laws of electromagnetism or Newton’s theory of gravitation, these laws are a new discovery that are universal as long as the agents interacting in the system are humans.
  • The invariance and universality of these laws have two important consequences:
    1. The need for large amounts of data disappears – Because of the laws, many of the predictive capacities of the model can be obtained with a minimal amount of data. Hence small companies now have the power to use analytics that was mostly used by the FAMGA (Facebook, Amazon, Microsoft, Google, Apple) set of companies since they were the only ones with the money to maintain Big Data warehouses and data lakes.
    2. There is no need for data cleaning. Since the model being used is canonical, it is independent of data problems like outliers, missing data, nonsense data, unavailable data, and data corruption. This is due to the orthogonality of the model ( a Knowledge Sphere) being constructed and the data available.
  • Performance is superior to deep learning, Google TensorFlow, Python, R, Julia, PyTorch, and scikit-learn. Consistently, the model has outscored the latter models in Kaggle competitions, without any data pre-processing or data preparation and cleansing!
  • Data being orthogonal to interpretation and manipulation means that encrypted data can be used as-is. There is no need to decrypt encrypted data to perform a data science task or experiment. This is significant because the independence of the model functioning even for encrypted data opens the door to blockchain technology and blockchain data to be used in standard data science tasks. Furthermore, this allows hashing techniques to be used to hide confidential data and perform the data mining task without any knowledge of what the data indicates.

Are You Serious?


That’s a valid question given these claims! And that is why I recommend everyone who has even the slightest interest in data science to visit and completely read and explore the following links:

  1. https://www.endor.com
  2. https://www.endor.com/white-paper
  3. http://socialphysics.media.mit.edu/
  4. https://en.wikipedia.org/wiki/Social_physics

Now when I say completely read, I mean completely read. Visit every section and read every bit of text that is available on the three sites above. You will soon understand why this is such a revolutionary idea.

  1. https://ssir.org/book_reviews/entry/going_with_the_idea_flow#
  2. https://www.datanami.com/2014/05/21/social-physics-harnesses-big-data-predict-human-behavior/

These links above are articles about the social physics book and about the science of sociophysics in general.

For more details, please visit the following articles on Medium. These further document Endor.coin, a cryptocurrency built around the idea of sharing data with the public and getting paid for the system’s usage of your data. Preferably read them all; if you are busy, at least read Article No. 1.

  1. https://medium.com/endor/ama-session-with-prof-alex-sandy-pentland
  2. https://medium.com/endor/endor-token-distribution
  3. https://medium.com/endor/https-medium-com-endor-paradigm-shift-ai-predictive-analytics
  4. https://medium.com/endor/unleash-the-power-of-your-data

Operation of the Endor System

Upon every data set, the first action performed by the Endor Analytics Platform is clustering, also popularly known as automatic classification. Endor constructs what is known as a Knowledge Sphere, a canonical representation of the data set which can be constructed even with 10% of the data volume needed for the same project when deep learning was used.

Creation of the Knowledge Sphere takes 1-4 hours for a billion records dataset (which is pretty standard these days).

Now an explanation of the mathematics behind social physics is beyond our scope, but I will include the change in the data science process when the Endor platform was compared to a deep learning system built to solve the same problem the traditional way (with a 6-figure salary expert data scientist).

An edited excerpt from Link here

From Appendix A: Social Physics Explained, Section 3.1, pages 28-34 (some material not included):

Prediction Demonstration using the Endor System:

Data:
The data that was used in this example originated from a retail financial investment platform and contained the entire investment transactions of members of an investment community. The data was anonymized and made public for research purposes at MIT (the data can be shared upon request).

 

Summary of the dataset:
– 7 days of data
– 3,719,023 rows
– 178,266 unique users

 

Automatic Clusters Extraction:
Upon first analysis of the data, the Endor system detects and extracts “behavioral clusters” – groups of users whose data dynamics violate the mathematical invariances of the Social Physics. These clusters are based on all the columns of the data, but are limited only to the last 7 days – as this is the data that was provided to the system as input.

 

Behavioural Clusters Summary

Number of clusters: 268,218
Cluster sizes: 62 (mean), 15 (median), 52,508 (max), 5 (min)
Clusters per user: 164 (mean), 118 (median), 703 (max), 2 (min)
Users in clusters: 102,770 out of the 178,266 users
Records per user: 6 (median), 33 (mean); applies only to users in clusters

 

Prediction Queries
The following prediction queries were defined:
1. New users to become “whales”: users who joined in the last 2 weeks that will generate at least $500 in commission in the next 90 days.
2. Reducing activity: users who were active in the last week that will reduce activity by 50% in the next 30 days (but will not churn, and will still continue trading).
3. Churn in “whales”: currently active “whales” (as defined by their activity during the last 90 days), who were active in the past week, to become inactive for the next 30 days.
4. Will trade in Apple shares for the first time: users who had never invested in Apple shares, and would buy them for the first time in the coming 30 days.

 

Knowledge Sphere Manifestation of Queries
It is again important to note that the definition of the search queries is completely orthogonal to the extraction of behavioral clusters and the generation of the Knowledge Sphere, which was done independently of the queries’ definition.

Therefore, it is interesting to analyze the manifestation of the queries in the clusters detected by the system: Do the clusters contain information that is relevant to the definition of the queries, despite the fact that:

1. The clusters were extracted in a fully automatic way, using no semantic information about the data, and –

2. The queries were defined after the clusters were extracted, and did not affect this process.

This analysis is done by measuring the number of clusters that contain a very high concentration of “samples”; in other words, by looking for clusters that contain “many more examples than statistically expected”.

A high number of such clusters (provided that it is significantly higher than the amount received when randomly sampling the same population) proves the ability of this process to extract valuable, relevant semantic insights in a fully automatic way.

 

Comparison to Google TensorFlow

In this section, a comparison between the prediction process of the Endor system and Google’s TensorFlow is presented. It is important to note that TensorFlow, like any other Deep Learning library, faces some difficulties when dealing with data similar to the one under discussion:

1. An extremely uneven distribution of the number of records per user requires some canonization of the data, which in turn requires:

2. Some manual work, done by an individual who has at least some understanding of data science.

3. Some understanding of the semantics of the data, which requires an investment of time, as well as access to the owner or provider of the data.

4. A single-class classification, using an extremely uneven distribution of positive vs. negative samples, tends to lead to overfitting of the results and requires some non-trivial maneuvering.

This again necessitates the involvement of an expert in Deep Learning (unlike the Endor system, which can be used by Business, Product or Marketing experts, with no prerequisites in Machine Learning or Data Science).

 

Traditional Methods

An expert in Deep Learning, with sufficient expertise to handle the data, spent 2 weeks crafting a solution based on TensorFlow. The solution that was created used the following auxiliary techniques:

1. Trimming the data sequence to 200 records per customer, and padding the streams for users who have fewer than 200 records with neutral records.

2. Creating 200 training sets, each having 1,000 customers (50% known positive labels, 50% unknown), and then using these training sets to train the model.

3. Using sequence classification (an RNN with 128 LSTMs) with 2 output neurons (positive, negative), with the overall result being the difference between the scores of the two.

Observations (all statistics available in the white paper – and it’s stunning)

1. Endor outperforms TensorFlow in 3 out of 4 queries, and achieves the same accuracy in the 4th.

2. The superiority of Endor is increasingly evident as the task becomes “more difficult” – focusing on the top-100 rather than the top-500.

3. There is a clear distinction between “less dynamic” queries (becoming a whale, churn, reduced activity – for which static signals should likely be easier to detect) and the “Who will trade in Apple for the first time” query, which is (a) more dynamic and (b) has a very low baseline, such that for the latter, Endor is 10x more accurate!

4. As previously mentioned – the TensorFlow results illustrated here employ 2 weeks of manual improvements done by a Deep Learning expert, whereas the Endor results are 100% automatic and the entire prediction process in Endor took 4 hours.

Clearly, the path going forward for predictive analytics and data science is Endor, Endor, and Endor again!

Predictions for the Future

Personally, one thing has me sold – the robustness of the Endor system to handle noise and missing data. Earlier, this was the biggest bane of the data scientist in most companies (when data engineers are not available). 90% of the time of a professional data scientist would go into data cleaning and data preprocessing since our ML models were acutely sensitive to noise. This is the first solution that has eliminated this ‘grunt’ level work from data science completely.

The second prediction: the Endor system works upon principles of human interaction dynamics. My intuition tells me that data collected at random has its own dynamical systems that appear clearly to experts in complexity theory. I am completely certain that just as this tool developed a prediction tool with human society dynamical laws, data collected in general has its own laws of invariance. And the first person to identify these laws and build another Endor-style platform on them will be at the top of the data science pyramid – the alpha unicorn.

Final prediction – democratizing data science means that now data scientists are not required to have six-figure salaries. The success of the Endor platform means that anyone can perform advanced data science without resorting to TensorFlow, Python, R, Anaconda, etc. This platform will completely disrupt the entire data science technological sector. The first people to master it and build upon it to formalize the rules of invariance in the case of general data dynamics will for sure make a killing.

It is an exciting time to be a data science researcher!

Data Science is a broad field and it would require quite a few things to learn to master all these skills.

Dimensionless has several resources to get started with.

To Learn Data Science, Get Data Science Training in Pune and Mumbai from Dimensionless Technologies.

To learn more about analytics, be sure to have a look at the following articles on this blog:

Machine Learning for Transactional Analytics

and

Text Analytics and its applications

Enjoy data science!

Machine Learning Algorithms Every Data Scientist Should Know


Types Of ML Algorithms

There are a huge number of ML algorithms out there. Trying to classify them leads to distinctions based on the type of training procedure, the applications, the latest advances, and the standard algorithms used by ML scientists in their daily work. There is a lot to cover, and we shall proceed in the order given in the following list:

  1. Statistical Algorithms
  2. Classification
  3. Regression
  4. Clustering
  5. Dimensionality Reduction
  6. Ensemble Algorithms
  7. Deep Learning
  8. Reinforcement Learning
  9. AutoML (Bonus)

1. Statistical Algorithms

Statistics is necessary for every machine learning expert. Hypothesis testing and confidence intervals are some of the many statistical concepts to know if you are a data scientist. Here, we consider the phenomenon of overfitting. Basically, overfitting occurs when an ML model learns so many features of the training data set that its ability to generalize to the test set takes a hit. The tradeoff between performance and overfitting is well illustrated by the following figure:


Overfitting – from Wikipedia

 

Here, the black curve represents the performance of a classifier that has appropriately classified the dataset into two categories. Obviously, training the classifier was stopped at the right time in this instance. The green curve indicates what happens when we allow the training of the classifier to ‘overlearn the features’ in the training set. What happens is that we get an accuracy of 100%, but we lose out on performance on the test set because the test set will have a feature boundary that is usually similar but definitely not the same as the training set. This will result in a high error level when the classifier for the green curve is presented with new data. How can we prevent this?

Cross-Validation

Cross-Validation is the killer technique used to avoid overfitting. How does it work? A visual representation of the k-fold cross-validation process is given below:

From Quora

The entire dataset is split into equal subsets, and the model is trained and tested on every possible combination of training and testing subsets, as shown in the image above. Finally, the results of all the models are averaged. The advantage is that this method reduces sampling error, prevents overfitting, and accounts for bias. There are further variations of cross-validation, such as non-exhaustive cross-validation and nested k-fold cross-validation (shown above). For more on cross-validation, visit the following link.
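As a minimal sketch (the iris dataset and logistic regression are stand-ins chosen only for illustration), 5-fold cross-validation with scikit-learn looks like this:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

scores = cross_val_score(model, X, y, cv=5)    # 5-fold cross-validation
print(scores.mean(), scores.std())             # average accuracy and its spread across folds
```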

There are many more statistical algorithms that a data scientist has to know. Some examples include the chi-squared test, the Student’s t-test, how to calculate confidence intervals, how to interpret p-values, advanced probability theory, and many more. For more, please visit the excellent article given below:

Learning Statistics Online for Data Science

2. Classification Algorithms

Classification refers to the process of categorizing data input as a member of a target class. An example could be that we can classify customers into low-income, medium-income, and high-income depending upon their spending activity over a financial year. This knowledge can help us tailor the ads shown to them accurately when they come online and maximises the chance of a conversion or a sale. There are various types of classification like binary classification, multi-class classification, and various other variants. It is perhaps the most well known and most common of all data science algorithm categories. The algorithms that can be used for classification include:

  1. Logistic Regression
  2. Support Vector Machines
  3. Linear Discriminant Analysis
  4. K-Nearest Neighbours
  5. Decision Trees
  6. Random Forests

and many more. A short illustration of a binary classification visualization is given below:

binary classification visualization

From openclassroom.stanford.edu

 

For more information on classification algorithms, refer to the following excellent links:

How to train a decision tree classifier for churn prediction

3. Regression Algorithms

Regression is similar to classification, and many algorithms used are similar (e.g. random forests). The difference is that while classification categorizes a data point, regression predicts a continuous real-number value. So classification works with classes while regression works with real numbers. And yes – many algorithms can be used for both classification and regression. Hence the presence of logistic regression in both lists. Some of the common algorithms used for regression are

  1. Linear Regression
  2. Support Vector Regression
  3. Logistic Regression
  4. Ridge Regression
  5. Partial Least-Squares Regression
  6. Non-Linear Regression

For more on regression, I suggest that you visit the following link for an excellent article:

Multiple Linear Regression & Assumptions of Linear Regression: A-Z

Another article you can refer to is:

Logistic Regression: Concept & Application

Both articles have a remarkably clear discussion of the statistical theory that you need to know to understand regression and apply it to non-linear problems. They also have source code in Python and R that you can use.

4. Clustering

Clustering is an unsupervised learning algorithm category that divides the data set into groups based on common characteristics or properties. A good example would be grouping the data set instances into categories automatically; the process would use any one of the several algorithms that we list shortly. For this reason, clustering is sometimes known as automatic classification. It is also a critical part of exploratory data analysis (EDA). Some of the algorithms commonly used for clustering are:

  1. Hierarchical  Clustering – Agglomerative
  2. Hierarchical Clustering – Divisive
  3. K-Means Clustering
  4. K-Nearest Neighbours Clustering
  5. EM (Expectation Maximization) Clustering
  6. Principal Components Analysis Clustering (PCA)

An example of a common clustering problem visualization is given below:

clustering problem visualization

From Wikipedia

 

The above visualization clearly contains three clusters.

For another excellent article on clustering, refer to the link.

You can also refer to the following article:

 

ML Methods for Prediction and Personalization

5. Dimensionality Reduction

Dimensionality Reduction is an extremely important tool that should be completely clear to any serious data scientist. Dimensionality reduction is also referred to as feature selection or feature extraction. This means that the principal variables of the data set, those with the highest covariance with the output data, are extracted, and the features/variables that are not important are ignored. It is an essential part of EDA (Exploratory Data Analysis) and is nearly always used in every moderately or highly difficult problem. The advantages of dimensionality reduction are (from Wikipedia):

  1. It reduces the time and storage space required.
  2. Removal of multi-collinearity improves the interpretation of the parameters of the machine learning model.
  3. It becomes easier to visualize the data when reduced to very low dimensions such as 2D or 3D.
  4. It avoids the curse of dimensionality.

The most commonly used algorithm for dimensionality reduction is Principal Components Analysis or PCA. While this is a linear model, it can be converted to a non-linear model through a kernel trick similar to that used in a Support Vector Machine, in which case the technique is known as Kernel PCA. Thus, the algorithms commonly used are:

  1. Principal Component Analysis (PCA)
  2. Non-Negative Matrix Factorization (NMF)
  3. Kernel PCA
  4. Linear Discriminant Analysis (LDA)
  5. Generalized Discriminant Analysis (kernel trick again)
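As a minimal sketch (the iris dataset and the choice of two components are assumptions made purely for illustration), PCA with scikit-learn looks like this:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)        # 4-dimensional feature space
pca = PCA(n_components=2)                # keep the two directions of highest variance
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                   # (150, 2)
print(pca.explained_variance_ratio_)     # share of variance captured by each component
```

The explained-variance ratio tells you how much of the original information each retained component preserves.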

The result of a PCA operation is visualized below:

PCA operation visualization

By Nicoguaro – Own work, CC BY 4.0, https://commons.wikimedia.org/w/index.php?curid=46871195

 

You can refer to this article for a general discussion of dimensionality reduction:

This article below gives you a brief description of dimensionality reduction using PCA by coding an ML example:

MULTI-VARIATE ANALYSIS

6. Ensembling Algorithms

Ensembling means combining multiple ML learners into one pipeline so that the combination of all the weak learners makes an ML application with higher accuracy than any learner taken separately. Intuitively, this makes sense, since the disadvantages of one model can be offset by combining it with another model that does not suffer from the same disadvantage. There are various algorithms used for ensembling machine learning models. The common techniques usually employed in practice are:

  1. Simple/Weighted Average/Voting: Simplest one, just takes the vote of models in Classification and average in Regression.
  2. Bagging: We train models (same algorithm) in parallel for random sub-samples of data-set with replacement. Eventually, take an average/vote of obtained results.
  3. Boosting: models are trained sequentially, where the (n)th model uses the output of the (n-1)th model and works on the limitations of the previous model; the process stops when the results stop improving.
  4. Stacking: we combine two or more models using another machine learning algorithm.

(from Amardeep Chauhan on Medium.com)

In all four cases, the combination of the different models ends up having better performance than any single learner. One particular ensembling technique that has done extremely well in data science competitions on Kaggle is the GBRT model, or Gradient Boosted Regression Tree model.

 

We include the source code from the scikit-learn module for Gradient Boosted Regression Trees since this is one of the most popular ML models which can be used in competitions like Kaggle, HackerRank, and TopCoder.

Refer Link here

GradientBoostingClassifier supports both binary and multi-class classification. The following example shows how to fit a gradient boosting classifier with 100 decision stumps as weak learners:
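A sketch along the lines of the scikit-learn documentation example (the synthetic Hastie dataset is used purely for illustration; max_depth=1 is what makes each weak learner a decision stump) is:

```python
from sklearn.datasets import make_hastie_10_2
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_hastie_10_2(random_state=0)
X_train, X_test = X[:2000], X[2000:]
y_train, y_test = y[:2000], y[2000:]

clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
                                 max_depth=1, random_state=0).fit(X_train, y_train)
print(clf.score(X_test, y_test))   # accuracy on the held-out portion
```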


 

GradientBoostingRegressor supports a number of different loss functions for regression which can be specified via the argument loss; the default loss function for regression is least squares ('ls').
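A corresponding regression sketch (again following the shape of the scikit-learn documentation example; the Friedman #1 synthetic dataset is an illustrative assumption, and loss='ls' matches the older scikit-learn versions referenced in the text, where it was the default name for least squares):

```python
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

X, y = make_friedman1(n_samples=1200, random_state=0, noise=1.0)
X_train, X_test = X[:200], X[200:]
y_train, y_test = y[:200], y[200:]

est = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1, max_depth=1,
                                random_state=0, loss='ls').fit(X_train, y_train)
print(mean_squared_error(y_test, est.predict(X_test)))   # held-out mean squared error
```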


 

You can also refer to the following article which discusses Random Forests, which is a (rather basic) ensembling method.

Introduction to Random forest

 

7. Deep Learning

In the last decade, there has been a renaissance of sorts within the Machine Learning community worldwide. Since 2002, neural network research had struck a dead end, as networks would get stuck in local minima in the non-linear hyperspace of the energy landscape of a three-layer network. Many thought that neural networks had outlived their usefulness. However, starting with Geoffrey Hinton in 2006, researchers found that adding multiple layers of neurons to a neural network created an energy landscape of such high dimensionality that local minima were statistically shown to be extremely unlikely to occur in practice. Today, in 2019, more than a decade of innovation later, this method of adding additional hidden layers of neurons to a neural network is the classical practice of the field known as deep learning.

Deep Learning has truly taken the computing world by storm and has been applied to nearly every field of computation, with great success. Now with advances in Computer Vision, Image Processing, Reinforcement Learning, and Evolutionary Computation, we have marvellous feats of technology like self-driving cars and self-learning expert systems that perform enormously complex tasks like playing the game of Go (not to be confused with the Go programming language). The main reason these feats are possible is the success of deep learning and reinforcement learning (more on the latter given in the next section below). Some of the important algorithms and applications that data scientists have to be aware of in deep learning are:

  1. Long Short term Memories (LSTMs) for Natural Language Processing
  2. Recurrent Neural Networks (RNNs) for Speech Recognition
  3. Convolutional Neural Networks (CNNs) for Image Processing
  4. Deep Neural Networks (DNNs) for Image Recognition and Classification
  5. Hybrid Architectures for Recommender Systems
  6. Autoencoders (ANNs) for Bioinformatics, Wearables, and Healthcare

 

Deep Learning networks typically have millions of neurons and hundreds of millions of connections between neurons. Training such networks is such a computationally intensive task that companies are now turning to 1) cloud computing systems and 2) Graphical Processing Unit (GPU) parallel high-performance processing systems for their computational needs. It is now common to find hundreds of GPUs operating in parallel to train ridiculously high-dimensional neural networks for amazing applications like dream-like generated imagery, computer artistry, and artistic creativity pleasing to our aesthetic senses.

 


Artistic Image Created By A Deep Learning Network. From blog.kadenze.com.

 

For more on Deep Learning, please visit the following links:

Machine Learning and Deep Learning : Differences

For information on a full-fledged course in deep learning, visit the following link:

Deep Learning

8. Reinforcement Learning (RL)

In the recent past and the last three years in particular, reinforcement learning has become remarkably famous for a number of achievements in cognition that were earlier thought to be limited to humans. Basically put, reinforcement learning deals with the ability of a computer to teach itself. We have the idea of a reward vs. penalty approach. The computer is given a scenario and ‘rewarded’ with points for correct behaviour and ‘penalties’ are imposed for wrong behaviour. The computer is provided with a problem formulated as a Markov Decision Process, or MDP. Some basic types of Reinforcement Learning algorithms to be aware of are (some extracts from Wikipedia):

 

1. Q-Learning

Q-Learning is a model-free reinforcement learning algorithm. The goal of Q-learning is to learn a policy, which tells an agent what action to take under what circumstances. It does not require a model (hence the connotation “model-free”) of the environment, and it can handle problems with stochastic transitions and rewards, without requiring adaptations. For any finite Markov decision process (FMDP), Q-learning finds a policy that is optimal in the sense that it maximizes the expected value of the total reward over any and all successive steps, starting from the current state. Q-learning can identify an optimal action-selection policy for any given FMDP, given infinite exploration time and a partly-random policy. “Q” names the function that returns the reward used to provide the reinforcement and can be said to stand for the “quality” of an action taken in a given state.
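As a hedged, toy illustration (the two actions, the hyperparameter values, and the dictionary-based Q-table below are made-up placeholders, not tied to any particular environment), the core Q-learning update can be sketched as:

```python
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.9, 0.1            # learning rate, discount factor, exploration rate
actions = ["left", "right"]                      # placeholder action set
Q = defaultdict(float)                           # Q[(state, action)] -> estimated long-term reward

def choose_action(state):
    """Epsilon-greedy policy: explore occasionally, otherwise act greedily."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """Classic model-free update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```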

 

2. SARSA

State–action–reward–state–action (SARSA) is an algorithm for learning a Markov decision process policy. The name simply reflects the fact that the main function for updating the Q-value depends on the current state of the agent “S1“, the action the agent chooses “A1“, the reward “R” the agent gets for choosing this action, the state “S2” that the agent enters after taking that action, and finally the next action “A2” the agent chooses in its new state. The acronym for the quintuple (st, at, rt, st+1, at+1) is SARSA.

 

3. Deep Reinforcement Learning

This approach extends reinforcement learning by using a deep neural network and without explicitly designing the state space. The work on learning ATARI games by Google DeepMind increased attention to deep reinforcement learning or end-to-end reinforcement learning. Remarkably, the computer agent DeepMind has achieved levels of skill higher than humans at playing computer games. Even a complex game like DOTA 2 was won by a deep reinforcement learning network based upon DeepMind and OpenAI Gym environments that beat human players 3-2 in a tournament of best of five matches.

For more information, go through the following links:

Reinforcement Learning: Super Mario, AlphaGo and beyond

and

How to Optimise Ad CTR with Reinforcement Learning

 

Finally:

9. AutoML (Bonus)

If reinforcement learning was cutting-edge data science, AutoML is bleeding-edge data science. AutoML (Automated Machine Learning) is a remarkable open-source project, available on GitHub at the following link, that uses algorithms and a data-analysis approach to construct an end-to-end data science pipeline: data preprocessing, algorithm selection, hyperparameter tuning, cross-validation, and algorithm optimization, completely automating the ML process. Amazingly, what this means is that computers can now handle ML expertise that was earlier in the hands of a limited number of ML practitioners and AI experts.

AutoML has found its way into Google TensorFlow through AutoKeras, into Microsoft CNTK, and into Google Cloud Platform, Microsoft Azure, and Amazon Web Services (AWS). Currently it is a premium paid offering even for a moderately sized dataset and is free only for tiny datasets. However, one entire run can take one to two or more days to execute completely. But at least the computer AI industry has come full circle: we now have computers so complex that they are taking the machine learning process out of the hands of humans and creating models that are significantly more accurate and faster than the ones created by human beings!

The basic algorithm used by AutoML is Network Architecture Search and its variants, given below:

  1. Network Architecture Search (NAS)
  2. PNAS (Progressive NAS)
  3. ENAS (Efficient NAS)

The functioning of AutoML is given by the following diagram:

how autoML works

From cloud.google.com

 

For more on AutoML, please visit the link

and

Top 10 Artificial Intelligence Trends in 2019

 

If you’ve stayed with me till now, congratulations: you have learnt about a lot of cutting-edge technology, and there is much, much more that you should read up on. You could start with the links in this article, and of course, Google is your best friend as a Machine Learning practitioner. Enjoy machine learning!

Follow this link, if you are looking to learn data science online!

You can follow this link for our Big Data course, which is a step further into advanced data analysis and processing!

Additionally, if you are interested in learning Data Science, click here to start the Online Data Science Course

Furthermore, if you want to read more about data science, read our Data Science Blogs

What’s New in TensorFlow 2.0


New Features in TensorFlow 2.0

TensorFlow 2.0 is coming soon. And boy, are we super-excited! TensorFlow first began the trend of open-sourcing AI and DL frameworks for use by the community. And what has been the result? TensorFlow has become an entire ML ecosystem for all kinds of AI technology. Just to give you an idea,  here are the features that an absolutely incredible community has added to the original TensorFlow package:

TF 2.0 Features

From Medium.com  

Features of TensorFlow contributed from the Open Source Community

TensorFlow started out as a difficult-to-learn framework for deep learning from Google. With one difference – it was open-sourced. That may appear as stupidity for a commercial company that focuses on profits, but it was the right thing to do. Because the open source community took it up as their own property and ported it to nearly every platform available today including mobile, web, IoT, embedded, Edge Computing and so much more. And even more: from Python and C, it was ported to JavaScript, C++, C#, Node.js, F#, React.js, Go, Julia, R, Rust, Android, Swift, Kotlin, and even a port to Scala, Haskell, and numerous other coding languages. Then, after that complete conquest, Google went into the next level for optimization – hardware.

Which means – now we have CUDA (library for executing ML code on GPUs) v8-v9-v10 (9.2 left out), GPGPU, GPU-Native Code, TPU (Tensor Processing Unit – custom hardware provided by Google specially designed for TensorFlow), Cloud TPUs, FPGAs (Field-Programmable Gate Arrays – Custom Programmable Hardware), ASIC (Application Specific Integrated Circuits) chip hardware specially designed for TensorFlow, and now MKL for Intel, BLAS optimization, LINPACK optimization (the last three all low-level software optimization for matrix algebra, vector algebra, and linear algebra packages), and so much more that I can’t fit it into the space I have to write this article. To give you a rough idea of what the TensorFlow architecture looks like now, have a look at this highly limited graphic:

Some of TensorFlow features

Source: planspaces.org

Note: XLA stands for Accelerated Linear Algebra, a compiler still in development that provides highly optimized computational performance gains.

And Now TensorFlow 2.0

This release is expected from Google within the next six months. Some of its most exciting features are:

  1. Keras Integration as the Main API instead of raw TensorFlow code
  2. Simplified and Integrated Workflow
  3. Eager Execution
  4. More Support for TensorFlow Lite and TensorFlow Edge Computing
  5. Extensions to TensorFlow.js for Web Applications and Node.js
  6. TensorFlow Integration for Swift and iOS
  7. TensorFlow Optimization for Android
  8. Unified Programming Paradigms (Directed Acyclic Graph/Functional and Stack/Sequential)
  9. Support for the new upcoming WebGPU Chrome RFC proposal
  10. Integration of tf.contrib best Package implementations into the core package
  11. Expansion of tf.contrib into Separate Repos
  12. TensorFlow AIY (Artificial Intelligence for Yourself) support
  13. Improved TPU & TPU Pod support, Distributed Computation Support
  14. Improved HPC integration for Parallel Computing
  15. Support for TPU Pods up to v3
  16. Community Integration for Development, Support and Research
  17. Domain-Specific Community Support
  18. Extra Support for Model Validation and Reuse
  19. End-to-End ML Pipelines and Products available at TensorFlow Hub

And yes – there is still much more that I can’t cover in this blog.  

Wow – that’s an Ocean! What can you Expand Upon?

Yes – that is an ocean. But to keep things as simple as possible (and to stick to the word limit – because I could write a thousand words on every one of these topics and end up with a book instead of a blog post!), we’ll focus on the most exciting and striking topics. All of them are exciting – we’ll cover the ones with the most scope for our audience.

1. Keras as the Main API to TensorFlow

Keras Deep Learning

From www.keras.io

Earlier, comments like these below were common on the Internet:

“TensorFlow is broken” – Reddit user

“Implementation so tightly coupled to specification that there is no scope for extension and modification easily in TensorFlow” – from a post on Blogger.com

“We need a better way to design deep learning systems than TensorFlow” – Google Plus user

In response to feedback like this from the community, Keras was created as an open source project designed to be an easier interface to TensorFlow. Its popularity grew very rapidly, and the large majority of real-world ML tasks can now be written using Keras alone. Billed as ‘Deep Learning for Humans’, Keras is simply easier to use – though, of course, PyTorch gives it a real run for its money as far as simplicity is concerned!

In TensorFlow 2.0, Keras has been adopted as the main API for interacting with TensorFlow. Support for pure TensorFlow has not been removed, and TensorFlow 2.0 will remain largely backwards-compatible, including a conversion tool that can upgrade TensorFlow 1.x code to TensorFlow 2.0 where implementation details differ – kind of like the Python tool 2to3.py! So Keras is now the main API for TensorFlow deep learning applications, which takes a huge amount of unnecessary complexity off the ML engineer’s shoulders.
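
To make this concrete, here is a minimal sketch (my own illustration, assuming TensorFlow 2.x with the bundled tf.keras) of defining and training a small classifier entirely through the Keras API – no sessions or manual graph management required:

    import tensorflow as tf

    # Load a toy dataset that ships with tf.keras.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # Build, compile, train and evaluate the model purely through Keras.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5)
    model.evaluate(x_test, y_test)

A few lines of readable code replace what used to be pages of graph, session and placeholder boilerplate in TensorFlow 1.x.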

2. Simplified and Integrated WorkFlow

There is a step-by-step, simpler and extremely flexible workflow for designing deep learning models (visit https://medium.com/tensorflow/whats-coming-in-tensorflow-2-0 for more details; a short sketch of the whole workflow follows the list):

  1. Use tf.data for data loading and preprocessing or use NumPy.
  2. Use Keras or Premade Estimators to do your model construction and validation work.
  3. Use tf.function for DAG graph-based execution, or use eager execution (a technique that lets you run and debug your deep learning model line by line; on by default in TF 2.0).
  4. For TPUs, GPUs, distributed computing, or TPU Pods, utilize Distribution Strategy for high-performance-computing distributed deep learning applications.
  5. TF 2.0 standardizes on SavedModel as the serialized form of a TensorFlow graph for a variety of platforms – Mobile, JavaScript, Edge, Lite, TensorBoard, TensorFlow Hub and TensorFlow Serving – which makes it much easier to move models between architectures. This was a feature that was badly needed compared to the older scenario.
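
To tie these steps together, here is a minimal sketch of that workflow (my own illustration on toy data, not official sample code), using tf.data for input, Keras for the model, tf.function for graph execution and SavedModel for export:

    import numpy as np
    import tensorflow as tf

    # 1. tf.data (or NumPy) for data loading and preprocessing.
    features = np.random.rand(1000, 10).astype("float32")
    labels = np.random.randint(0, 2, size=(1000,)).astype("float32")
    dataset = tf.data.Dataset.from_tensor_slices((features, labels)).shuffle(1000).batch(32)

    # 2. Keras for model construction and validation.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(dataset, epochs=3)

    # 3. Eager execution is on by default; tf.function traces this into a graph.
    @tf.function
    def predict(batch):
        return model(batch)

    print(predict(features[:5]))

    # 4. For multi-GPU/TPU training, the model/compile/fit calls above would sit
    #    inside a tf.distribute Strategy scope (omitted here).
    # 5. SavedModel is the standard serialized format for Serving, Lite, JS, etc.
    tf.saved_model.save(model, "exported_model")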

This means that now even novices at machine learning can perform deep learning tasks with relative ease. And of course, did we mention the wide variety of end-to-end pluggable deep learning solutions available on TensorFlow Hub and in the Tutorials section? And guess what – they’re all free to download and use for commercial purposes. Google, you are truly the best friend of the open source community!

3. Expanded Support for Mobile (Android and iOS), Web (JavaScript), TF Lite, TF Edge and IoT

TF Lite Architecture

From Medium.com

On all the above platforms, where computational and memory resources are scarce, TF 2.0 follows a common set of improvements:

  1. Greater support for various ops in TF 2.0 and several deployment techniques
  2. SIMD+ support for WebAssembly
  3. Support for Swift (iOS) in Colab.
  4. Increased support for data input pipelines, and data visualization libraries in JavaScript.
  5. A smaller and lighter footprint for Edge Computing, Mobile Computing and IoT
  6. Better support for audio and text-based models
  7. Easier conversion of trained TF 2.0 graphs
  8. Increased and improved mobile model optimization techniques

As you can see, Google knows that Edge and Mobile are the future of computing and has designed its products accordingly; TF Mobile is expected to be replaced by TF Lite soon.
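
As an illustration of the conversion point above, here is a minimal sketch (assuming the TF 2.x converter API) of taking a trained Keras model down to a TensorFlow Lite flatbuffer for mobile or edge deployment:

    import tensorflow as tf

    # Stand-in for any trained tf.keras model.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(3, activation="softmax"),
    ])

    # Convert the Keras model to a compact TF Lite flatbuffer.
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # default size/latency optimizations
    tflite_model = converter.convert()

    # Write the model to disk for use on Android, iOS or embedded devices.
    with open("model.tflite", "wb") as f:
        f.write(tflite_model)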

4. Unified Programming Models and Methodologies

There are two major ways to code deep learning networks in Keras. They are:

  1. Symbolic or Declarative APIs
  2. Imperative APIs / Subclassing

We shall look briefly at both of them in turn. For more on this topic, visit https://medium.com/tensorflow/what-are-symbolic-and-imperative-apis-in-tensorflow-2-0

Symbolic/Declarative/Graph-Based/Functional API

We build models symbolically by describing the structure of their DAG (Directed Acyclic Graph) or sequential stack of layers. The following image is an example of Keras code written symbolically.

Keras code

From Medium.com TensorFlow publication

This looks familiar to most of us, since it is how we usually use Keras. The advantages of this approach are that it is easy to visualize, most errors are caught at model-construction (compile) time, and the code corresponds to our mental model of the deep learning network, which makes it easy to work with.
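
Since the screenshot itself is not reproduced here, the following is a minimal sketch of my own showing what symbolic (Functional API) Keras code typically looks like:

    import tensorflow as tf
    from tensorflow.keras import layers

    # Symbolic / Functional style: declare the graph of layers first...
    inputs = tf.keras.Input(shape=(784,))
    x = layers.Dense(64, activation="relu")(inputs)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(10, activation="softmax")(x)

    # ...then build the model from the declared inputs and outputs.
    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    model.summary()  # the DAG can be inspected and checked before training
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")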

Stack-Based/Subclassing/Imperative API

The following code is an example of the subclassing, or imperative, paradigm for building a deep learning network:

Subclassing

From Medium.com TensorFlow publication (code still in development)

Rather similar to Object Oriented Python, this style was first introduced into the deep learning community in 2015 and has since been used by a variety of deep learning libraries. TF 2.0 has complete support for it. Although it appears simpler, it has some serious disadvantages.

Imperative models are not transparent data structures but opaque class instances, so you are prone to many more errors at runtime with this approach. As a deep learning practitioner, you should know both the symbolic and the imperative/subclassing styles of coding your deep neural network: for example, dynamic architectures such as recursive networks and Tree-RNNs cannot easily be expressed in the symbolic programming model. So it is good to know both – just be aware of their respective advantages and disadvantages!
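
Again, in place of the original screenshot, here is a minimal sketch of my own showing the imperative/subclassing style for the same kind of model:

    import tensorflow as tf
    from tensorflow.keras import layers

    # Imperative / subclassing style: the model is an ordinary Python class,
    # and the forward pass is plain imperative code inside call().
    class MyModel(tf.keras.Model):
        def __init__(self):
            super().__init__()
            self.dense1 = layers.Dense(64, activation="relu")
            self.dense2 = layers.Dense(64, activation="relu")
            self.out = layers.Dense(10, activation="softmax")

        def call(self, inputs):
            x = self.dense1(inputs)
            x = self.dense2(x)
            return self.out(x)

    model = MyModel()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # Note: the structure is opaque until the model is called on real data,
    # so mistakes tend to surface at runtime rather than at construction time.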

5. TensorFlow AIY

AIY / DIY

From slideshare.com

This is a brand-new offering from Google and other AI companies such as Intel. AIY stands for Artificial Intelligence for Yourself (a play on DIY – Do It Yourself): a new initiative from Google to show consumers how easily TensorFlow can be used in DIY devices to create their own AI-enabled projects and gadgets. This is a very welcome trend, since it brings the power of AI to the masses at a very low price. I honestly feel the day is nearing when schoolchildren will bring AIY projects to school exhibitions, and the next generation of whiz kids will be full of AI expertise, building creative and innovative AI products of their own. It’s a fantastic trend, and I now have my own to-buy-and-play-with list, if only I can order these products from Google at a minimal shipping charge. So cool!

6. Guidelines and New Incentives for Community Participation and Research Papers

We are running up against the word limit very fast! I had hoped to cover TPUs, TPU Pods and distributed computation, but for now this is my final point. Recognizing the massive role the open source community has played in making TensorFlow a worldwide brand for deep learning, Google has set up guidelines to encourage domain-specific innovation and the collaborative authoring of research papers and white papers from within the TensorFlow community. To quote:

From the website https://www.tensorflow.org/community/roadmap :

Community

  • Continue public feedback on significant design decisions through the Request-for-Comment (RFC) process.

  • Create a contributors’ guide to augment our published governance and process.

  • Grow global TensorFlow communities and user groups.

  • Collaborate with partners to co-develop and publish research papers.

  • Continue to publish blog posts and YouTube videos showcasing applications of TensorFlow and build user case studies for high impact application

In fact, once I read more about the benefits of participating in the TensorFlow open source development process, I could not help myself – I joined the TensorFlow development community too!

TensorFlow Community

A Dimensionless Technologies employee contributing to TensorFlow!

Who knows – maybe, God-willing, one day my code will be a part of TensorFlow 2.0/2.x! Or – even better – there could be a research paper published under my name with collaborators, perhaps. The world is now built around open source technologies, and as a developer, there has never been a better time to be alive!

In Conclusion

So don’t forget: as of the day of writing this article, 31st January 2019, TensorFlow 2.0 is yet to be released. But since it is an open source project, there are no secrets, and Google is (literally) being completely ‘open’ about the steps it will take to keep TF the world market leader in deep learning. I hope this article has increased your interest in AI, open source development, Google, TensorFlow, deep learning, and artificial neural nets. Finally, I would like to point you to some other articles on this blog that focus on Google TensorFlow. Visit any of the following blog posts for more details on TensorFlow, artificial intelligence trends and deep learning:

Top 10 Data Science Tools (other than SQL Python R)

Top Trends for Data Science in 2019

Finally, do apply for our Deep Learning course (link given below) if you truly wish to become acquainted with TensorFlow in detail:

Deep Learning

May the joy of learning something new never leave you, no matter how old or young you are. Cheers!

Top Trends for Data Science in 2019

Trends to Watch Out For and Prepare Yourself

OK. So there has been a lot of coverage by various websites, data science gurus, and AI experts about what 2019 holds in store for us. Everywhere you look there are new fads and concepts for the new year. This article is going to be rather different: we are going to highlight the dark horses – the trends that few have talked about but that could significantly disrupt the working IT environment (for good or bad, depending on which side of the disruption you are on). So, to give you a taste of what’s coming up, let’s go through the top four (plus one bonus, making five) trends of 2019 for data science:

  1. AutoML
  2. Interoperability (ONNX)
  3. Cyber Data Science Crime
  4. Cloud AI-as-a-Service
  5. (Bonus) Quantum Computation & Data Science

1. AutoML (& AutoKeras)

How AutoML works
Google AutoML Architecture
From https://cloud.google.com/automl/

This single innovation is going to change the way machine learning works in the real world. Earlier, deep learning and even advanced machine learning were the preserve of PhD holders and research scientists. AutoML has changed that entire domain – especially now that AutoKeras is out.

AutoML automates machine learning. Using a technique called Neural Architecture Search (NAS), it analyses your data, tries out various model architectures, and automatically gives you the best possible model and hyperparameters for your scenario. Google’s offering was priced at a steep $76 USD per hour, but we now have a free, open source competitor: AutoKeras.

The open source killer of AutoML
From https://www.pyimagesearch.com

AutoKeras is a free, open source alternative to AutoML, developed by the DATA Lab at Texas A&M University together with the open source community. This project should make deep learning accessible to everyone on the planet who can code even a little. To give you an example, this is (roughly) the code used to train an image classifier with deep learning:

From: https://autokeras.com/
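
The original post embedded a screenshot from autokeras.com; the sketch below follows the AutoKeras 0.x examples of that time (the exact import path and API have changed in later releases, so treat it as illustrative):

    from keras.datasets import mnist
    from autokeras import ImageClassifier  # import path varied across early releases

    # Load MNIST and add a channel dimension.
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train.reshape(x_train.shape + (1,))
    x_test = x_test.reshape(x_test.shape + (1,))

    # Let AutoKeras search for an architecture and hyperparameters automatically.
    clf = ImageClassifier(verbose=True)
    clf.fit(x_train, y_train, time_limit=12 * 60 * 60)  # search for up to 12 hours
    clf.final_fit(x_train, y_train, x_test, y_test, retrain=True)
    print(clf.evaluate(x_test, y_test))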

Folks, it really doesn’t get simpler than this!

Note: Of course, the entire training and testing process will take at least a day to complete – less if you have some high-throughput GPUs or Google’s TPUs (Tensor Processing Units – custom hardware for data science computation), or plenty of money to spend on AutoML’s cloud computation resources.

2. Interoperability (ONNX)

For those of you who are new to what interoperability means for neural networks: we now have several deep learning libraries competing with each other for market dominance. The most highly rated ones are:

  1. TensorFlow
  2. Caffe
  3. Theano
  4. Torch & PyTorch
  5. Keras
  6. MXNet
  7. Chainer
  8. CNTK

However, converting an artificial neural network written in CNTK (Microsoft Cognitive Toolkit) to Caffe is a laborious task. Why can’t we simply have one single standard, so that discoveries in AI can be shared freely with the public and the open source community?

To solve this problem, the following standard has been proposed:

Open Neural Network Exchange Format

One Neural Network Standard over them all.
From https://www.softwarelab.it

ONNX (Open Neural Network Exchange) is a standard in which deep learning networks are represented as a directed acyclic computation graph that is compatible with (almost) every deep learning framework available today. Watch this space: if the standard is widely adopted and proper transparency is enforced, sharing models and results between frameworks becomes far easier, and we could see decades’ worth of research progress happen far faster!
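
As a concrete illustration, here is a minimal sketch (my own, using PyTorch’s built-in exporter, which is just one of several ways to produce an ONNX graph) of exporting a trained model to the ONNX format:

    import torch
    import torch.nn as nn

    # A small network standing in for any trained PyTorch model.
    model = nn.Sequential(
        nn.Linear(10, 32),
        nn.ReLU(),
        nn.Linear(32, 2),
    )
    model.eval()

    # The exporter traces the model with a dummy input of the right shape.
    dummy_input = torch.randn(1, 10)
    torch.onnx.export(model, dummy_input, "model.onnx",
                      input_names=["input"], output_names=["output"])

    # model.onnx can now be loaded by other runtimes and frameworks
    # (for example ONNX Runtime, or converters for other libraries).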

3. Cyber Data Science Crime

There are allegations that the recent US elections were subject to socially engineered manipulation of the democratic process using data science and data mining techniques. The most incriminated product was Facebook, on which fake news was spread by Russian agents and Russian intelligence services, with the allegation that an external agency, rather than the US people themselves, effectively decided who the US president would be. And yes, one of the major tools used was data science! So this is not so much a new trend as an existing phenomenon – one that needs to be recognized and dealt with effectively.

While this is a controversial topic, it needs to be addressed. If the world’s most technologically advanced nation can allegedly be manipulated by its adversaries in the election of its own leader, then how much more easily could nations like India or the UK be manipulated as well?

This has already begun in a small way in India with BJP social media departments putting up pictures of clearly identifiable cities (there was one of Dubai) as cities in Gujarat on WhatsApp. This trend will not change any time soon. There needs to be a way to filter the truth from the lies. The threat comes not from information but from misinformation.

Are you interested in the elections? Then pay attention to what spreads on platforms like Facebook. What happened in the USA could easily happen in India in 2019. The very fabric of democracy could be strained. As data scientists, we need to be aware of every issue in our field, and we owe it to the public – and to ourselves – to be honest and to hold ourselves to the highest levels of integrity.

We could write more than thirty blog posts on this topic alone – but we digress.

4. Cloud AI-as-a-Service

AI-as-a-Service Overview
From: http://www.digitaljournal.com

To understand Cloud AI-as-a-Service, we need to recognize that maintaining an in-house AI analytics solution is overkill for most companies. It is far easier to outsource the construction, deployment and maintenance of an AI system to a company that provides it online, at a much lower cost than building and updating an in-house version that has to be managed by a separate department with hard-to-find, esoteric skills. So many start-ups have appeared in this area over the last year (over 100) that listing them all would be a difficult task – and, of course, all the big names are heavily involved.
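
To give a feel for what AI-as-a-Service looks like from the consumer’s side, here is a minimal sketch using Google’s Cloud Vision Python client as one example (assuming a recent google-cloud-vision package is installed and credentials are configured; other vendors expose very similar APIs):

    from google.cloud import vision

    # The heavy lifting (models, GPUs, scaling) lives in the provider's cloud;
    # the client only sends data and reads back predictions.
    client = vision.ImageAnnotatorClient()

    # "storefront.jpg" is any local image file you want labelled.
    with open("storefront.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(label.description, round(label.score, 2))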

This is a trend that has already manifested and will only continue to grow in popularity. There are already a number of major players offering AI as a Service, including but not limited to Google, IBM, Amazon, Nvidia and Oracle. In the coming year, companies without access to AI risk falling behind badly – hence the importance of keeping AI open and as cheap as possible for everyone. What will be the end result? Only time will tell.

5. Quantum Computing and AI (Bonus topic)

Quantum computing is very much an active research topic right now, and arguably the country making the greatest advances is not the US but China. Even the EU has invested over 1 billion euros in its quest to build a feasible quantum computer. A little bit of information about quantum computing, in 5 crisp points:

  1. It has the potential to become the greatest quantum leap since the invention of the computer itself. (pun intended)
  2. However, practical hardware difficulties mean that the quantum computers constructed so far remain laboratory experiments.
  3. If a quantum computer that can reliably manipulate a few hundred error-free qubits (quantum bits) is built, many of the encryption algorithms used today could be broken quite easily.
  4. The difficulty of keeping qubits in isolated states consistently (decoherence) keeps current research largely academic.
  5. Experts say a fully functional quantum computer could be 5-15 years away – but when it arrives, it would herald a new era in the history of mankind.

In fact, some even argue that the greatest example of a quantum computer today is the human brain itself. If we develop quantum computing to practical levels, we may also gain the insight needed to create a truly intelligent computer.

Cognition in real-world AI that is self-aware – how awesome would that be?

D-Wave Quantum Computer processor
(From wikipedia.org)

Conclusion

So there you have it: the five most interesting and important trends that could become mainstream in 2019 (although the jury is still out on the quantum computer – it could arrive this year or ten years from now – but it is immensely exciting either way).

What are your thoughts? Do you find any of these topics worth further investigation? Feel free to comment and share!

For additional information, I strongly recommend the articles given below:

https://dimensionless.in/how-to-land-a-job-as-a-data-scientist-in-2019/
https://dimensionless.in/what-is-cloud-computing-aws-vs-gcs/

Alternate views or thoughts? Share your feelings below!