What’s After MBA: Upgrade Your Skills

(Image source: Career Core)

 

After completing high school, I found myself running after the best colleges and universities, in the country and abroad, to finish my graduation with flying colours, because that seemed necessary to land a good job and take the next step in my career. However, once I completed my graduation, I was unsure of what to do next – whether to get placed with a big organization or to continue with higher studies.

While some decide to start earning straight away, others fulfil their dream of a management degree and complete an MBA at a reputed institution. The sky seems the limit when you enrol in an MBA program full of belief, hope, desire, and enthusiasm. Yet as graduation draws closer, many of us feel lost about what to do next with the degree.

 

Facts About MBA Graduates       

 

First, let us look at some statistics. Employer demand for recent graduates has reached its highest level since 2010, according to the forward-looking Corporate Recruiters Survey by GMAC, the organization that administers the GMAT exam. In that survey, conducted in February and March 2016, about eighty-eight percent of recruiters said they planned to hire recent business school graduates, up from the eighty percent of companies that hired them the previous year.

In the current MBA market, jobs are still there, and there is no reason to worry. However, you might not be satisfied with the job role or the company recruiting you, and hence you may need to reskill yourself for the role you actually want, as I did, and be ready when the opportunity comes. You need to be patient and keep improving your skills.

 

Have a Plan

 

Every MBA graduate should have a plan; not having one is atypical. Pursuing an MBA is a once-in-a-lifetime experience, and students generally know in advance what they will do after graduation. The decision to do an MBA is usually taken on the back of several years of work experience and after careful consideration, so most MBA graduates have a clear picture of their career and know what they want. Waiting until after you obtain the degree to decide what to do with it is not a wise strategy.

I was fully aware of what I wanted to achieve in life, and that is exactly what admissions committees look for. Admissions officers want to see a clearly thought-out career strategy based on self-understanding. What interests them most is why you want to do an MBA and what your goals are after completing it.

 

I chose my MBA program based on my professional path. You can only select the right MBA program when you have clear career goals in mind; only then can you judge whether an MBA meets your needs and how it could boost your career, rate each program's value on your own scale, and uncover the facts. For a newly minted MBA grad, it is very important to be realistic. Adam Heyler once said on his YouTube channel that an MBA degree makes your CV more credible and expands your network, but it certainly will not make up for a lack of work experience. Time management is another important skill that an MBA teaches.

 

The Post MBA Dilemma

 

(Image source: PrepAdviser)

 

The current job market poses a tremendous challenge to every professional, even someone with a degree as lucrative as an MBA. Gone are the days when an MBA degree guaranteed you a high-paying job in a big firm. Nowadays, the wave of entrepreneurship has engulfed many graduates, who are starting their own ventures. However, being an entrepreneur in this competitive market is no piece of cake, and almost ninety percent of start-ups fail after inception. I was not drawn to entrepreneurship and chose instead to upgrade my skills and follow my dream.

I pursued my MBA in Finance, a go-to choice for many students, largely because of the prospect of working at a major insurance or banking company. I wanted to work as a Business Analyst, Risk Analyst, or in a similar role, so it was pertinent for me to develop an analyst's intuition and master analytical tools such as SQL, Excel, and Tableau. If you are interested in working as a Decision Scientist or a Data Scientist, you need to go further, as I did, and pick up more advanced skills like Machine Learning and Deep Learning.

However, once I found the potential that data carries and the diverse nature of this field, I wanted to expand my horizons and work as a Data Science consultant in some big corporation and hence I started exploring other domains like Marketing, Human Resource and so on. MBA in Marketing is another such lucrative career with high post-graduation opportunity. Some of the work designation after completing MBA in marketing are – Research Manager or a Senior Analyst, Marketing Analyst, and so on. Data is the new oil and all marketing firms are using the unprecedented potential of data to market their product to the right customers and stay ahead in the race.

As a Marketing Analyst, you would be responsible for gathering data from various sources, so skills in data collection and web scraping are very important. Additionally, I learned at least one data visualization tool, from among Excel, Tableau, Power BI, and others, to analyse the performance of different marketing campaigns and present the results to the stakeholders who make the final business decisions. Overall, it was about finding patterns in the data using various tools and easing the decision-making process for stakeholders. A minimal data-collection sketch follows below.
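For illustration, here is a minimal web-scraping sketch in Python using requests and BeautifulSoup. The URL and the CSS selector are hypothetical placeholders rather than a real endpoint; adapt them to whichever page you actually need to collect data from.

```python
# A minimal web-scraping sketch. The URL and the "div.review" selector
# are hypothetical placeholders -- point them at your real data source.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/product-reviews"  # hypothetical page
response = requests.get(url, timeout=10)
response.raise_for_status()  # fail loudly on HTTP errors

soup = BeautifulSoup(response.text, "html.parser")

# Collect the text of every review block on the page
reviews = [div.get_text(strip=True) for div in soup.select("div.review")]
print(f"Collected {len(reviews)} reviews")
```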

An MBA in Human Resources may not be as lucrative as the two above, but it certainly has its own share of value in terms of responsibility and decision making. Whether or not you are employed as an HR professional right after graduation, mastering HR Analytics, as I did, will help you deal with employees effectively.

As an HR professional, you would be engaged mostly in employee relations, so it is necessary to understand each employee's satisfaction level and deal with them individually. Onboarding a new hire carries a huge financial cost, so predicting an employee's attrition probability can avoid financial loss. Thus, data collection and machine learning are two important skills I picked up alongside my interpersonal skills; a toy example follows.
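To make that concrete, here is a toy sketch of attrition prediction with scikit-learn. The features and labels are synthetic stand-ins for real HR data, so treat it as an illustration of the mechanics, not a production model.

```python
# A toy attrition-prediction sketch on synthetic HR data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
# Hypothetical features: tenure (years), satisfaction (1-5), salary percentile
X = np.column_stack([
    rng.uniform(0, 15, n),
    rng.uniform(1, 5, n),
    rng.uniform(0, 100, n),
])
# Synthetic label: lower satisfaction and tenure -> higher attrition odds
logits = 1.5 - 0.8 * X[:, 1] - 0.1 * X[:, 0] + 0.005 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Predicted attrition probability for each employee in the test set
probabilities = model.predict_proba(X_test)[:, 1]
print("Mean predicted attrition probability:", probabilities.mean().round(3))
```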

Supply Chain Management has been in demand, and I realized that understanding the applications of Data Science in this area could take me a long way in my career. The impact of supply chain dynamics can be analysed using the right analytical tools, and data can be collected and leveraged to measure the efficiency of the supply chain.

Additionally, price fluctuations and commodity availability can be analysed using data. If you master Data Analytics, as I did, you can reduce the risk burden of an organization. The short sketch below shows the idea.
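As a small illustration, the sketch below simulates a commodity price series with pandas and computes rolling statistics to surface trends and volatile periods; swap the simulated series for your real data.

```python
# Rolling trend and volatility for a (simulated) commodity price series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
dates = pd.date_range("2019-01-01", periods=180, freq="D")
prices = pd.Series(100 + rng.normal(0, 1, len(dates)).cumsum(), index=dates)

# A 30-day rolling mean shows the trend; the rolling std of daily
# returns highlights risky, volatile stretches.
analysis = pd.DataFrame({
    "price": prices,
    "mean_30d": prices.rolling(30).mean(),
    "volatility_30d": prices.pct_change().rolling(30).std(),
})
print(analysis.dropna().tail())
```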

Healthcare management is another important field where students pursue an MBA, dealing with practices in the healthcare industry. As Data Science has vast applications in healthcare, I had to get my hands dirty and learn the nitty-gritty of analyzing a healthcare dataset. In healthcare, careful use of data can lead to ground-breaking achievements in medical science; applying analytics to relevant data can help reduce medical costs and channel the right medicine to each patient.

Deep Learning has made tremendous progress in the healthcare industry, so I took some time to understand the underlying workings of neural networks. They can unearth hidden information from patient data and help in prescribing the appropriate care for a patient.
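For a flavour of the mechanics, here is a minimal feed-forward network in Keras trained on synthetic stand-in data; real healthcare datasets demand careful preprocessing, validation, and governance that this sketch deliberately omits.

```python
# A minimal neural-network sketch on synthetic "patient" data.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))            # 10 synthetic clinical features
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # synthetic diagnosis label

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("Training accuracy:", model.evaluate(X, y, verbose=0)[1])
```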

 

Conclusion

 

This was a generic overview of the skills I mastered for my own career aspirations after pursuing my MBA. In general, analytics is the need of the hour, and every MBA graduate or professional, irrespective of field, can dive into it without prior relevant experience. In the beginning, I felt a bit overwhelmed by the vastness of the field, but as I moved along I found it interesting and gradually grew inclined towards it.

Overall, combining management skills with the technical expertise to deal with data and derive relevant information from it can land you the much higher role of a manager or consultant in a firm, where you are the decision maker for your team – which is what I eventually achieved. Upskilling is very important in today's world to stay relevant and keep pace with the rapid advancement of technology.

Dimensionless has several blogs and training to get you started with Data Analytics and Data Science.

Follow this link, if you are looking to learn about data science online!

Additionally, if you are interested in AWS Big Data, Learn AWS Course Online to boost your career.

Furthermore, if you want to read more about data science and big data, you can read our blogs here

Learn Data Science with the Best Available Free Courses Online

Data Scientist Training Free of Charge

Now, in theory, it is possible to become a data scientist without paying a dime. What we want to do in this article is list the best of the best options to learn what you need to know to become a data scientist. Many articles offer 4-5 courses under each heading. What I have done is search through the Internet, covering all the free courses, and choose the single best course for each topic.

These courses have been carefully curated and offer the best possible options if you're learning for free. However, there's a caveat – an interesting twist to this entire story. Interested? Read on! And please, make sure you read the full article.

Topics For A Data Scientist Course

The basic topics that a data scientist needs to know are:

  1. Machine Learning Theory and Applications
  2. Python Programming
  3. R Programming
  4. SQL
  5. Statistics & Probability
  6. Linear Algebra
  7. Calculus Basics (short)
  8. Machine Learning in Python
  9. Machine Learning in R
  10. Tableau

So let’s get to it. Here is the list of the best possible options to learn every one of these topics, carefully selected and curated.

 

Machine Learning – Stanford University – Andrew Ng (audit option)

Machine Learning Course From Stanford University


The world-famous machine learning course with the highest rating of all the MOOCs on Coursera, from Andrew Ng, a giant in the ML field and now famous worldwide as an online instructor. It uses MATLAB/Octave. From the website:

This course provides a broad introduction to machine learning, data mining, and statistical pattern recognition. Topics include:

(i) Supervised learning (parametric/non-parametric algorithms, support vector machines, kernels, neural networks)

(ii) Unsupervised learning (clustering, dimensionality reduction, recommender systems, deep learning)

(iii) Best practices in machine learning (bias/variance theory; innovation process in machine learning and AI)

The course will also draw from numerous case studies and applications, so that you’ll also learn how to apply learning algorithms to building smart robots (perception, control), text understanding (web search, anti-spam), computer vision, medical informatics, audio, database mining, and other areas.

This course is extremely effective and has many benefits. However, you will need high levels of self-discipline and self-motivation. Statistics show that 90% of those who sign up for a MOOC without a classroom or group environment never complete the course.

 

Learn Python The Hard Way – Zed Shaw – Free Online Access

 


 

Learn Python The Hard Way Online Access

You may ask: why would I want to learn the hard way? Shouldn't we learn the smart way, not the hard way? Don't worry. This ebook, online course, and website are a highly popular way to learn Python. OK, so it says the hard way. Well, the only way to learn how to code is to practice what you have learned. This course integrates practice with learning; with other Python books, you have to take the initiative to practice yourself.

This book shows you what to practice and how to practice it. There is only one con here: although this is the best self-driven method, most people will not complete all of it, mainly because there is no external instructor for supervision and no group environment to motivate you. Still, if you want to learn Python by yourself, this is the best way – though not the optimal one, as you will see at the end of this article, since the book costs $30 USD (approx. 2100 INR).

Interactive R and Data Science Programming – SwiRl

Interactive R and Data Science Course (In Console)

 


 

Swirlstats is a wonderful tool to learn R and data science scripting in R interactively and intuitively, teaching you R commands from within the R console. It might seem like a very simple tool, but as you use it, you will notice its elegance in teaching you how to express yourself in R, the finer nuances of the language, and its integration with the console and the tidyverse. This is a powerful method of learning R and, what is more, it is also a lot of fun!

Descriptive and Inferential Statistics

Course on Statistics and Probability from KhanAcademy

 


KhanAcademy is a free non-profit organization on a mission: to provide a world-class education to everyone, regardless of where they may be in the world. And they're doing a fantastic job! This course has been cited in several very high-profile blogs and Quora posts as the best online course for statistics – period. What is more, it is extremely high quality, suitable for beginners, and free! This organization is doing wonderful work. More power to them!

Mathematics for Data Science

The basic mathematics content for data science includes linear algebra, single-variable and multivariable calculus (selected topics), discrete mathematics, and the basics of differential equations. You could take all of these topics separately on KhanAcademy, which is a good option for Linear Algebra and Multivariate Calculus (in addition to Statistics and Probability).

For Linear Algebra, the KhanAcademy course covering what you need to know is given below:

Course on Linear Algebra From KhanAcademy


 

For Multivariate Calculus

Course on MultiVariate Calculus From KhanAcademy


These courses are completely free and very accessible to beginners.

Discrete Mathematics

This topic deserves a section to itself, because discrete mathematics is the foundation of all computer science. There are a variety of options available for learning discrete mathematics, from ebooks to MOOCs, but today we'll focus on the best possible option. MIT (Massachusetts Institute of Technology) is known as one of the best colleges in the world, and it has an open courseware initiative known as MIT OpenCourseWare (MIT OCW). These are actual videos of the lectures attended by students at one of the best engineering colleges in the world. You will benefit a lot if you follow the lectures at the link below; they present all the basic concepts as clearly as possible. It is a bit technical, because it is aimed mostly at students at an advanced level. The link is given below:

MIT OpenCourseWare Course: Mathematics for Computer Science


For beginners, one slightly less technical option is the following course:

Course on Discrete Mathematics for Computer Science

It is also technical and from MIT but might be a little more accessible than the earlier option.

SQL

SQL (pronounced "sequel"), or Structured Query Language, is a must-learn if you are a data scientist. You will be working with a lot of databases, and SQL is the language used to access and retrieve data from database systems like Oracle and Microsoft SQL Server. The best free course I could find online is undoubtedly the one below:

Udemy Course for SQL Beginners

 


SQL For Newcomers – A Free Crash Course from Udemy.com.

5-plus hours covering every SQL command and concept you need to know. And – completely free. A tiny runnable taste of SQL follows.
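If you want a taste before starting the course, the sketch below runs a few core SQL commands in-memory using Python's built-in sqlite3 module; the table and rows are made up, but the syntax carries over to systems like Oracle and SQL Server.

```python
# Core SQL commands, runnable in-memory with Python's built-in sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
cur.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Asha", "Finance", 70000), ("Ben", "Marketing", 65000),
     ("Chitra", "Finance", 80000)],
)

# Aggregate query: average salary per department
for row in cur.execute(
    "SELECT department, AVG(salary) FROM employees GROUP BY department"
):
    print(row)

conn.close()
```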

Machine Learning with Scikit-Learn

 


 

Scikit-Learn Online Documentation Main Page

We have covered Python, R, Machine Learning using MATLAB, Data Science with R (SwiRl teaches data science as well), Statistics, Probability, Linear Algebra, and Basic Calculus. Now we just need a course for Data Science with Python, and we are done! I looked at many options but was not satisfied, so instead of a course, I have given you a link to the scikit-learn documentation. Why?

Because that’s as good as an online course by itself. If you read through the main sections, get the code (Ctrl-X, Ctrl-V) and execute it in an Anaconda environment, and then play around with it, experiment, and observe and read up on what every line does, you will already know who to solve standard textbook problems. I recommend the following order:

  1. Classification
  2. Regression
  3. Clustering
  4. Preprocessing
  5. Model Evaluation
  6. 5 classification examples (execute)
  7. 5 regression examples (run them)
  8. 5 clustering examples (ditto)
  9. 6 sample preprocessing functions
  10. Dimensionality Reduction
  11. Model Selection
  12. Hyperparameter Tuning
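As a starter, here is a minimal classification example in the spirit of the documentation's own walkthroughs: load a toy dataset, split it, fit a model, and evaluate. Only the choice of dataset and classifier is mine; everything else mirrors the standard scikit-learn pattern.

```python
# Minimal scikit-learn classification: load, split, fit, evaluate.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```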

Machine Learning with R

 


 

Online Documentation for Machine Learning in R with Tidyverse

This book is free to learn online. Get the data files, get the script files, use RStudio, and just as with Python, play, enjoy, experiment, execute, and explore. A little hard work will have you up and running with R in no time! But make sure you try as many code examples as possible. The libraries you can focus on are:

  1. dplyr (data manipulation)
  2. tidyr (data preprocessing “tidying”)
  3. ggplot2 (graphical package)
  4. purrr (functional toolkit)
  5. readr (reading rectangular data files easily)
  6. stringr (string manipulation)
  7. tibble (dataframes)

Tableau

To make it short, simple, and sweet, since we have already covered SQL and this content is for beginners, I recommend the following course:

Udemy Course on Tableau for Beginners

This is a course on Udemy rated 4.2/5 and completely free. You will learn everything you need to work with Tableau (the most commonly used corporate-level visualization tool). This is an extremely important part of your skill set. You can make all the greatest analyses, but if you don’t visualize them and do it well, management will never buy into your machine learning solution, and neither will anyone who doesn’t know the technical details of ML (which is a large set of people on this planet). Visualization is important. Please make sure to learn the basics (at least!) of Tableau.


 

Kaggle Micro-Courses (Add-Ons – Short Concise Tutorials)

Kaggle Micro-Courses (from www.kaggle.com!)

 

Kaggle Learn Home Page

Kaggle is a wonderful site to practice your data science skills, and recently it added a set of hands-on micro-courses for learning practical data science. And, if I do say so myself, it's brilliant: very nicely presented, superb examples, clear and concise explanations. And of course, it covers more than we discussed earlier. If you read through all the courses discussed so far in this article and then do just the courses at Kaggle.com, you will have spent your time wisely (though not optimally – as we shall see).


Dimensionless Technologies

 


 

Now, if you are reading this article, you might have a fundamental question. This is a blog of a company that offers courses in data science, deep learning, and cloud computing. Why would we want to list all our competitors and publish it on our site? Isn’t that negative publicity?

Quite the opposite. 

This is the caveat we were talking about.

Our course is a better solution than every single option given above!

We have nothing to hide.

And we have an absolutely brilliant top-class product.

Every option given above is a separate course by itself.

And they all suffer from a very prickly problem – you need to have excellent levels of discipline and self-motivation to complete just one of the courses above – let alone all ten.

 

You also have no classroom environment, no guidance for doubts and questions, and you need to know the basics about programming.

Our product is the most cost-effective option on the market for learning data science, as well as the most effective methodology for everyone: every course is conducted live in a classroom environment from the comfort of your home. You can hold a standard job, spend two hours on the Internet every day, do extra work and reading on weekends, and become a professional data scientist in six months' time.

We also have personalized GitHub project portfolio creation, management, and faculty guidance. Not to mention individual attention for each student.

And IITians for faculty who also happen to have 9+ years of industry experience.

So when we say that our product is the best on the market, we really mean it, because of the live teaching of the classes, which no other option on the Internet today offers.

 

Am I kidding? Absolutely not. And you can get started with the Dimensionless Technologies Data Science with Python and R course for just 70-odd USD – the most cost-effective option on the market!

And unlike the 10 courses and resources detailed above, you need to do just one single course, with the extracted essence of everything you need to know as a data scientist. And yes, we cover:

  1. Machine Learning
  2. Python Programming
  3. R Programming
  4. SQL
  5. Statistics & Probability
  6. Linear Algebra
  7. Calculus Basics
  8. Machine Learning in Python
  9. Machine Learning in R
  10. Tableau
  11. GitHub Personal Project Portfolio Creation
  12. Live Remote Daily Sessions
  13. Experts with Industrial Experience
  14. A Classroom Environment (to keep you motivated)
  15. Individual Attention to Every Student

I hope this information has you seriously interested. Please sign up for the course – you will not regret it.

And we even have a two-week trial for you to experience the course for yourself.

Choose wisely and optimally.

Unleash the data scientist within!

 

An excellent general article on emerging state-of-the-art technology, AI, and blockchain:

The Exciting Future with Blockchain and Artificial Intelligence

For more on data science, check out our blog:

Blog

And of course, enjoy machine learning!

7 Technical Concepts Every Data Science Beginner Should Know

Welcome to Data Science!

 

So you want to learn data science but you don't know where to start? Or you are a beginner and want to learn the basic concepts? Welcome to your new career and your new life! You will discover a lot of things on your journey to becoming a data scientist and being part of a new revolution. I am a firm believer that you can learn data science and become a data scientist regardless of your age, your background, your current knowledge level, your gender, and your current position in life. I believe – from experience – that anyone can learn anything at any stage in their lives. What is required is just determination, persistence, and a tireless commitment to hard work. Nothing else matters as far as learning new things – or learning data science – is concerned. Your commitment, persistence, and investment of your available daily time are enough.

To repeat: anyone can learn data science with the right motivation. In fact, I believe anyone can learn anything at any stage in their life if they invest enough time, effort, and hard work into it, alongside their current occupation. From experience, I strongly recommend that you continue your day job and work on data science as a side hustle, because of the hard work involved. Your commitment is more important than your current life situation. Carrying on a full-time job and working on data science part-time is the best way to go if you want to learn in the best possible manner.

 

Technical Concepts of Data Science

So what are the important concepts of data science that you should know as a beginner? They are, in order of sequential learning, the following:

  1. Python Programming
  2. R Programming
  3. Statistics & Probability
  4. Linear Algebra
  5. Data Preparation and Data ETL*
  6. Machine Learning with Python and R
  7. Data Visualization and Summary

*Extraction, Transformation, and Loading

Now if you were to take the above list and go to a library, you would most likely come back with 9-10 books at an average of 1000 pages each. Even if you could speed-read, 10,000 pages is a lot to get through. I could list the best books for each topic in this post, but even the most seasoned reader would balk at 10,000 pages. And who reads books these days? So what I am going to give you is a distilled extract of each topic. Keep in mind, however, that every topic above could be a series of blog posts in its own right; these 80-word paragraphs are just a tiny taste of each topic, and there is an ocean of depth involved in every one. You might ask: if that is the case, how can everybody be a possible candidate for a data scientist role? Two words: persistence and motivation. With the right amount of these two characteristics, anyone can be anything they want to be.

 

1) Python Programming:

Python is one of the most popular programming languages in the world. It is the ABC of data science, because it is the language nearly every beginner starts with in data science. It is used for almost any purpose, since it is so amazingly versatile: web applications and websites with Django, microservices with Flask, general programming with the standard library and packages from PyPI, GUIs with PyQt5 or Tkinter, and interoperability with nearly every other programming language available today through projects such as Jython (Java) and Cython (C).

Of course, Python is also the first language used for data science, with the standard stack of scikit-learn (machine learning), pandas (data manipulation), matplotlib and seaborn (visualization), and numpy (vectorized computation). Nowadays, the most common distribution used is Anaconda, available from www.anaconda.com (current version 2018.12, or Anaconda Distribution 5). To learn more about Python, I strongly recommend the following books: Head First Python and the Python Cookbook. A tiny taste of the standard stack is sketched below.
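Here is a tiny, self-contained taste of that stack: numpy generates data, pandas summarizes it, and matplotlib plots it. The numbers are random, so only the workflow matters.

```python
# numpy for computation, pandas for tables, matplotlib for plots.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

values = np.random.default_rng(1).normal(loc=50, scale=10, size=200)

df = pd.DataFrame({"score": values})
print(df.describe())  # quick statistical summary

df["score"].plot(kind="hist", bins=20, title="Score distribution")
plt.show()
```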

 

2) R Programming

R is The Best Language for statistical needs since it is a language designed by statisticians, for statisticians. If you know statistics and mathematics well, you will enjoy programming in R. The language gives you the best support available for every probability distribution, statistics functions, mathematical functions, plotting, visualization, interoperability, and even machine learning and AI. In fact, everything that you can do in Python can be done in R. R is the second most popular language for data science in the world, second only to Python. R has a rich ecosystem for every data science requirement and is the favorite language of academicians and researchers in the academic domain.

Learning Python is not enough to be a professional data scientist; you need to know R as well. A good book to start with is R for Data Science, available on Amazon at a very reasonable price. Some of the most popular packages in R that you need to know are ggplot2, threejs, DT (tables), networkD3, and leaflet for visualization; dplyr and tidyr for data manipulation; shiny and R Markdown for reporting; parallel, Rcpp, and data.table for high-performance computing; and caret, glmnet, and randomForest for machine learning.

 

3)  Statistics and Probability

This is the bread and butter of every data scientist. The best programming skills in the world are useless without knowledge of statistics. You need to master statistics, especially the practical knowledge used in scientific experimental analysis. There is a lot to cover, and any subtopic below could be a blog post in its own right. Some of the more important areas that a data scientist needs to master are:

  1. Analysis of Experiments
  2. Tests of Significance
  3. Confidence Intervals
  4. Probability Distributions
  5. Sampling Theory
  6. Central Limit Theorem
  7. Bell Curve
  8. Dimensionality Reduction
  9. Bayesian Statistics

Some places on the Internet to learn statistics are the MIT OpenCourseWare page Introduction to Statistics and Probability and the Khan Academy Statistics and Probability course. Good books for learning statistics are Naked Statistics by Charles Wheelan, an awesome comic-like yet highly insightful book that can be read enjoyably by anyone, including those from non-technical backgrounds, and Practical Statistics for Data Scientists by Peter Bruce and Andrew Bruce. A small code sketch of two core ideas follows.
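As a quick code sketch of two of the topics above – a confidence interval and a test of significance – here is what they look like with scipy.stats on simulated data.

```python
# Confidence interval and one-sample t-test with scipy.stats.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sample = rng.normal(loc=100, scale=15, size=50)

# 95% confidence interval for the mean (t-distribution)
ci = stats.t.interval(0.95, df=len(sample) - 1,
                      loc=sample.mean(), scale=stats.sem(sample))
print("95% CI for the mean:", ci)

# One-sample t-test: is the true mean 100?
t_stat, p_value = stats.ttest_1samp(sample, popmean=100)
print("t =", round(float(t_stat), 3), "p =", round(float(p_value), 3))
```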

 

4) Linear Algebra

Succinctly, linear algebra is about vectors, matrices, and the operations that can be performed on them. It is a fundamental area for data science, since every operation we perform as data scientists has a linear algebra background, and we usually work with collections of vectors or matrices. The following topics in linear algebra are all covered in the world-famous book Linear Algebra and Its Applications by Gilbert Strang, an MIT professor. You can also go to the popular MIT OpenCourseWare page, Linear Algebra (MIT OCW). These two resources cover everything you need to know. Some of the most fundamental concepts, which you can also Google or look up on Wikipedia, are listed below, followed by a short numpy sketch:

  1. Vector Algebra
  2. Matrix Algebra
  3. Operations on Matrices
  4. Determinants
  5. Eigenvalues and Eigenvectors
  6. Solving Linear Systems of Equations
  7. Computer-Aided Algebra Software (Mathematica, Maple, MATLAB, etc)
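And here is how the core operations above look in numpy, which is likely how you will first meet them in practice:

```python
# Determinant, eigenvalues/eigenvectors, and solving A x = b with numpy.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

print("Determinant:", np.linalg.det(A))

eigenvalues, eigenvectors = np.linalg.eig(A)
print("Eigenvalues:", eigenvalues)

b = np.array([5.0, 10.0])
x = np.linalg.solve(A, b)   # solves the linear system A x = b
print("Solution x:", x)
```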

 

5) Data Preparation and Data ETL (Extraction, Transformation, and Loading)


 

Yes – welcome to one of the more infamous sides of data science! If data science has a dark side, this is it. Know for sure that unless your company has dedicated data engineers who do all the data munging and data wrangling for you, 90% of your time on the job will be spent working with raw data. Real-world data has major problems: usually it is unstructured, in the wrong formats, poorly organized, full of missing and invalid values, and of types unsuitable for data mining.

Dealing with this problem takes up a lot of a data scientist's time, and an analysis has the potential to go massively wrong when there is invalid or missing data. Practically speaking, unless you are unusually blessed, you will have to manage your own data, and that means conducting your own ETL (Extraction, Transformation, and Loading). ETL is a data mining and data warehousing term for loading data from an external data store or data mart into a form suitable for data mining and analysis (which usually involves a lot of data preprocessing). Finally, you often have to load data that is too big for your working memory – a problem referred to as external loading. During your data wrangling phase, be sure to look into the following components (a small pandas sketch follows below):

  1. Missing data
  2. Invalid data
  3. Data preprocessing
  4. Data validation
  5. Data verification
  6. Automating the Data ETL Pipeline
  7. Automation of Data Validation and Verification

Usually, expert data scientists try to automate this process as much as possible, since a human being would be wearied by this task very fast and is remarkably prone to errors, which will not happen in the case of a Python or an R script doing the same operations. Be sure to try to automate every stage in your data processing pipeline.
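Here is a small pandas sketch of the first few components above – missing data, invalid data, type coercion, and validation – on a made-up five-row table:

```python
# Cleaning a tiny, deliberately messy table with pandas.
import numpy as np
import pandas as pd

raw = pd.DataFrame({
    "age": [25, np.nan, 42, -5, 31],            # missing and invalid values
    "salary": ["50000", "62000", "n/a", "58000", "71000"],
})

# Missing data: fill numeric gaps with the median
raw["age"] = raw["age"].fillna(raw["age"].median())

# Invalid data: negative ages make no sense, drop those rows
raw = raw[raw["age"] > 0].copy()

# Type coercion: "n/a" becomes NaN, then drop rows that can't be parsed
raw["salary"] = pd.to_numeric(raw["salary"], errors="coerce")
clean = raw.dropna()

# Validation: assert the cleaned frame obeys basic rules
assert (clean["age"] > 0).all() and clean["salary"].notna().all()
print(clean)
```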

 

6) Machine Learning with Python and R

An expert machine learning scientist has to be proficient in the following areas at the very least:

Data Science Topics Listing – Thomas

 

Now if you are just starting out in Machine Learning (ML), Python, and R, you will gain a sense of how huge the field is, and the entire set of lists above might seem more like advanced Greek than plain English. But not to worry; there are ways to streamline your learning and spend as little time as possible covering nearly every topic above. After you learn the basics of Python and R, you need to start building machine learning models. From experience, I suggest you split your time 50% Python and 50% R, but spend as long as possible on one language before switching to the other. Spending maximum time learning one programming language at a time prevents syntax errors, conceptual errors, and language confusion.

Now, on the job, in real life, it is much more likely that you will work in a team and be responsible for only one part of the work. However, if you're working in a startup or learning on your own initially, you will end up doing every phase of the work yourself. Be sure to give yourself time to process information and let your brain rest to get a handle on the topics you are trying to learn. For more on this, do check out the Learning How to Learn MOOC on Coursera, which is the best way to learn mathematical or scientific topics without ending up burnt out. In fact, I would recommend this approach to every programmer trying to learn a programming language, or anything considered difficult, like Quantum Mechanics and Quantum Computation, String Theory, or even Microsoft F# or C# for a non-Java programmer.

I strongly recommend the book, Hands-On Machine Learning with Scikit-Learn and TensorFlow to learn Python for Data Science. The R book was given earlier in the section on R.

 

7) Data Visualization and Summary

Common tools that you have with which you can produce powerful visualizations include:

  1. Matplotlib
  2. Seaborn
  3. Bokeh
  4. ggplot2
  5. plot.ly
  6. D3.js
  7. Tableau
  8. Google Data Studio
  9. Microsoft Power BI Desktop

Some involve coding, some are drag-and-drop, some are difficult for beginners, and some require no coding at all. All of these tools will help you with data visualization. But one of the most overlooked yet critical practical functions of a data scientist is also included under this heading: summarisation.

Summarisation is the practical result of your data science workflow. What does the result of your analysis mean for the operation of the business or the research problem you are currently working on? How do you convert your result into the maximum improvement for your business? Can you measure the impact this result will have on the profit of your enterprise? If so, how? Being able to come out of a data science workflow with this result is one of the most important capacities of a data scientist. And most of the time, efficient summarisation = excellent knowledge of statistics. Know for sure that statistics is the start and the end of every data science workflow, and you cannot afford to be ignorant of it. Refer to the section on statistics, or Google the term for extra sources of information. A tiny summarisation sketch follows.
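As a toy example of summarisation, the sketch below collapses raw sales records into the handful of numbers a stakeholder actually needs, then plots them; the data is invented.

```python
# Summarise raw records into stakeholder-ready numbers, then visualize.
import pandas as pd
import matplotlib.pyplot as plt

sales = pd.DataFrame({
    "region": ["North", "South", "North", "East", "South", "East"],
    "revenue": [120, 95, 140, 80, 110, 60],
})

summary = sales.groupby("region")["revenue"].agg(["sum", "mean"])
print(summary)

summary["sum"].plot(kind="bar", title="Revenue by region")
plt.ylabel("Revenue")
plt.show()
```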

How Can I Learn Everything Above In the Shortest Possible Time?

You might wonder: how can I learn everything given above? Is there a course or a pathway to learn every single concept described in this article in one shot? It turns out there is. There is a dream course for a data scientist that contains nearly everything discussed in this article.

Want to become a data scientist? Welcome to Dimensionless Technologies! It just so happens that the course Data Science using Python and R – a ten-week course that includes ML, Python and R programming, Statistics, GitHub project guidance, and job placement – offers nearly every component spoken about above, and more besides. You don't need to buy the books or do any of the courses other than this one to learn the topics in this article. Everything is covered by this single course, tailor-made to turn you into a data scientist in the shortest possible time. For more, I'd like to refer you to the following link:

Data Science using R & Python

Does this seem too good to be true? Perhaps, because this is a paid course. With a scholarship concession, you could end up paying around INR 40,000 for this ten-week course. You can register for the two-week trial period for 5,000 and pay the remainder afterwards if the course really suits you. If it doesn't, you can always drop out after two weeks and be poorer by just 5k. But in most cases, this course has been found to carry genuine worth. And nothing worthwhile was ever achieved without some payment, right?

In case you want to learn more about data science, please check out the following articles:

Data Science: What to Expect in 2019

and:

Big Data and Blockchain

Also, see:

AI and intelligent applications

and:

Evolution of Chatbots & their Performance

All the best, and enjoy data science. Every single day of your life!

The Demand and Salary Of A Data Scientist

Data, Data Generated Everywhere

Mind-Blowing Statistics

The amount of data that is generated every day is mind-boggling. There was an article on Forbes by Bernard Marr that blew my mind. Here are some excerpts. For the full article, go to Link

 

There are 2.5 quintillion bytes of data created each day. Over the last two years alone 90 percent of the data in the world was generated.

On average, Google now processes more than 40,000 searches EVERY second (3.5 billion searches per day)!

Every minute of the day:

Snapchat users share 527,760 photos

More than 120 professionals join LinkedIn

Users watch 4,146,600 YouTube videos

456,000 tweets are sent on Twitter

Instagram users post 46,740 photos

With 2 billion active users Facebook is still the largest social media platform.

Here are some more intriguing Facebook statistics:

1.5 billion people are active on Facebook daily

Europe has more than 307 million people on Facebook

There are five new Facebook profiles created every second!

More than 300 million photos get uploaded per day

Every minute there are 510,000 comments posted and 293,000 statuses updated (on Facebook)

And all this data was gathered on 21st May of last year!

Data Scientist Salary


 

So I decided to do a more up-to-date survey. The data below is from an article written on 25th January 2019, given at the following link:

 

By 2020, the accumulated volume of big data will increase from 4.4 zettabytes to roughly 44 zettabytes or 44 trillion GB.

Originally, data scientists maintained that the volume of data would double every two years thus reaching the 40 ZB point by 2020. That number was later bumped to 44ZB when the impact of IoT was brought into consideration.

The rate at which data is created is increasing exponentially. For instance, 40,000 search queries are performed per second (on Google alone), which makes 3.46 billion searches per day and 1.2 trillion every year.

Every minute Facebook users send roughly 31.25 million messages and watch 2.77 million videos.

The data gathered is no longer text-only. Exponential growth in videos and photos is equally prominent: on YouTube alone, 300 hours of video are uploaded every minute.

IDC estimates that by 2020, business transactions (including both B2B and B2C) via the internet will reach up to 450 billion per day.

Globally, the number of smartphone users will grow to 6.1 billion by 2020 (this will overtake the number of basic fixed phone subscriptions).

In just 5 years the number of smart connected devices in the world will be more than 50 billion – all of which will create data that can be shared, collected and analyzed.

So what does that mean for us, as data scientists?

Data = raw information. Information = processed data.

Theoretically, inside every 100 MB of the 44,000,000,000,000 GB of data available in the world today, there lies a possible business-sector-disrupting insight!

But who has the skills to look through 44 trillion GB of data?

The answer: Data Scientists! With Creativity and Originality in their Out-of-the-Box Thinking, as well as Disciplined Focus


Here is a description estimating the salaries of data scientists, followed by a graphic that shows you why data science is so hyped right now:

From Quora

Answer by Vidita Mehta

Salary Trends in Data Analytics

Freshers in analytics get paid more than in any other field: they can earn up to 6-7 lakhs per annum (LPA) without any experience. A professional with 3-7 years of experience can expect around 10-11 LPA, and anyone with more than 7-10 years can expect 20-30 LPA.

Opportunities in Tier 2 cities can be more plentiful, but the pay scale in Tier 1 cities is much higher.

E-commerce is the most rewarding sector, with a great pay scale especially for freshers, offering close to 7-8 LPA, while analytics service providers offer the lowest packages, around 6 LPA.

It is advisable to combine skills to attract better packages; expertise in tools such as SAS, R, Python, or other open-source tools can fetch around 13 LPA.

Machine Learning is the new entrant in the analytics field, attracting better packages than big data skills alone; for significant leverage, acquiring the skill sets of both Big Data and Machine Learning will fetch you a starting salary of around 13 LPA.

Combination of knowledge and skills makes you unique in the job market and hence attracts high pay packages.

Picking up the top five tools of big data analytics – R, Python, SAS, Tableau, and Spark – along with popular machine learning algorithms, NoSQL databases, and data visualization will make you irresistible to any talent hunter, and you can demand a high pay package.

As a professional, you can upscale your salary by upskilling in the analytics field.

So there is no doubt about the demand or the need for data scientists in the 21st century.

Now we have covered India, but what about the USA?

The following data is an excerpt from a Forbes article based on IBM research, which tells the story much better than I ever could:

From: Forbes magazine

 

Jobs requiring machine learning skills are paying an average of $114,000.

Advertised data scientist jobs pay an average of $105,000, and advertised data engineering jobs pay an average of $117,000. 59% of all Data Science and Analytics (DSA) job demand is in Finance and Insurance, Professional Services, and IT.

Annual demand for the fast-growing new roles of data scientist, data developers, and data engineers will reach nearly 700,000 openings by 2020.

By 2020, the number of jobs for all US data professionals will increase by 364,000 openings to 2,720,000 according to IBM.

Data Science and Analytics (DSA) jobs remain open an average of 45 days, five days longer than the market average.

And yet still more! Look below:

 

By 2020, the number of Data Science and Analytics job listings is projected to grow by nearly 364,000 listings to approximately 2,720,000. The following summary of the study highlights how in-demand data science and analytics skill sets are today and are projected to be through 2020.

There were 2,350,000 DSA job listings in 2015

By 2020, DSA jobs are projected to grow by 15%

Demand for data scientists and data engineers is projected to grow by nearly 40%

DSA jobs advertise average salaries of $80,265

81% of DSA jobs require workers with 3-5 years of experience or more.

For more, see: Link
  • Machine learning, big data, and data science skills are the most challenging to recruit for and potentially can create the greatest disruption to ongoing product development and go-to-market strategies if not filled.

So where does Dimensionless Technologies, with courses in Python, R, Deep Learning, NLP, Big Data, Analytics, and AWS coming soon, stand in the middle of all the demand?

The answer: right in the epicentre of the data science earthquake that is now hitting our IT sector harder than ever. The main reason I say this is the salaries, which are growing like your tummy would if you finished a fifth Domino's Dominator Cheese and Pepperoni pizza in a row every day for seven days! Have a look at the salaries for data science:

In India

From Quora

Do you know which city in India pays the highest salaries to data scientists?

Mumbai pays the highest salary in India, around 12.19 LPA.

Report of Data Analytics Salary of the Top Companies in India

  • Accenture’s Data Analytics Salary in India: 90% gets a salary of about Rs 980,000 per year
  • Tata Consultancy Services Limited Data Analytics Salary in India: 90% of the employees get a salary of about Rs 550,000 per year. A bonus of Rs 20,000 is paid to the employees.
  • EY (Ernst & Young) Data Analytics Salary in India: 75% of the employees get a salary of Rs 620,000 and 90% of the employees get a salary of Rs 770,000.
  • HCL Technologies Ltd. Data Analytics Salary in India: 90% of the people are paid Rs 940,000 per year approximately.

 

In the USA

Data scientist salaries in the United States (from glassdoor.com)

 

Converted into INR, the salaries of a data scientist in the US stack up as follows:

Lowest: 86,000 USD = 6,020,000 INR per year (60 lakh per year)

Average: 117,000 USD = 8,190,000 INR per year (81 lakh per year)

Highest: 157,000 USD = 10,990,000 INR per year (109 lakh per year, or approximately one crore)

at the exchange rate of 70 INR = 1 USD.

By now you should be able to understand why everyone is running after data science degrees and data science certifications everywhere.

The only other industry that offers similar salaries is cloud computing.

A Personal View

I often wondered why everyone talks about following your passion and not just about the money. The literature everywhere advertises, "Follow your heart and it will lead you to the land of your dreams." But then I realized: passion is more than your dreams. A dream, if it does not serve others in some way, is of no inspirational value. That is when I found the fundamental rule: focus on helping others achieve their hearts' desires, and you will automatically discover your passion. I have many interests, and I found my happiness doing research in advanced data science, quantum computing, and dynamical systems, focusing on experiments that combine all three of them into a single unified theory. I found that this was my dream. However, I have a family, I need to serve them, and I need to earn.

Thus I relegated my dreams of research to a part-time level and focused fully on earning for my extended family, serving them as best I can. Maybe you will come to your own epiphany moment while reading this article. What do you want to do with your life? Personally, I wish to improve the lives of those around me, especially the poor and the malnourished. That feeds my heart. Hence my career decision: invest wisely in the choices I make to garner maximum benefit for those around me, and work on my research papers in the free time that I get.

So my hope for you today is this: having read this article, understand the rich potential that lies before you if you complete your journey as a data scientist. The only reason I am not going into data science myself is that I am 34 years old and no longer in the prime of my life to chase this American dream; I found my niche in my research interests instead, where I realized a fundamental "quantum leap" would be made if my efforts were to succeed. But as for you, the reader, perhaps you have been inspired or had your world-view expanded by this article and the data within it. My advice to you: follow your heart. It knows you best and will not lead you astray. Data science is the future for the world – make no mistake about that. From whatever inspiration you have received, go forward boldly and take action. Take one day at a time; don't fixate on the final goal. If you can do that, you will definitely achieve your goals.

Company-wise Salaries

The salary at the top, per year. From glassdoor.com. Try not to drool. 🙂

Finding Your Passion

Many times when you’re sure you’ve discovered your passion and you run into a difficult topic, that leaves you stuck, you are prone to the famous impostor syndrome. “Maybe this is too much for me. Maybe this is too difficult for me. Maybe this is not my passion. Otherwise, it wouldn’t be this hard for me.” My dear friend, this will hit you. At one point or the other. At such moments, what I do, based upon lessons from the following course, which I highly recommend to every human being on the planet, is:  Take a break. Do something different that completely removes the mind from your current work. Be completely immersed in something else. Or take a nap. Or – best of all – go for a run or a cycle. Exercise. Workout.  This gives your brain cells rest and allows them to process the data in the background. When you come back to your topic, fresh, completely free of worry and tension, completely recharged, you will have an insight into the problem for you that completely solves it. Guaranteed. For more information, I highly suggest the following two resources:

 

The most popular MOOC of all time, based on this very topic, on Coursera:

 

How to learn powerful mental tools

Learning How to Learn – Coursera and IEEE

 

This should be your action every time you feel stuck. I have completely finished this MOOC and the book and it has given me the confidence to tackle any subject in the world, including quantum mechanics, topology, string theory, and supersymmetry theory. I strongly recommend this resource (from experience).

Conclusion

Dimensionless | Data Science Courses

So Dimensionless Technologies (link given above) is your entry point to all things data science. Before you go to TensorFlow, Hadoop, Keras, Hive, Pig, MapReduce, BigQuery, BigTable, you need to know the following topics first:  

Data Science using R & Python

Python and R – the A, B, C, D, E, F, and G of data science!

Big Data Analytics NLP

Big Data and Analytics – this is what we talked about in this post!

Deep Learning

Deep Learning – the X, Y, and Z of data science today!

For further reading, I strongly recommend the following blog posts:

2019 Predictions for AI & Analytics

and:

Big Data : Meaning, Components, Collection & Analysis

All the best. Your passion is not just a feeling; it is a choice you make day in and day out, whether you like it or not. That is the definition of character: doing what must be done even when you don't feel like it. Internalize this advice, and there will be no limits to how high you can go. All the best!

What’s New in TensorFlow 2.0

New Features in TensorFlow 2.0

TensorFlow 2.0 is coming soon. And boy, are we super-excited! TensorFlow began the trend of open-sourcing AI and DL frameworks for use by the community. And what has been the result? TensorFlow has become an entire ML ecosystem for all kinds of AI technology. Just to give you an idea, here are the features that an absolutely incredible community has added to the original TensorFlow package:

TF 2.0 features contributed by the open source community (image from Medium.com)

TensorFlow started out as a difficult-to-learn framework for deep learning from Google – with one difference: it was open-sourced. That may appear foolish for a commercial company focused on profits, but it was the right thing to do, because the open source community took it up as its own and ported it to nearly every platform available today, including mobile, web, IoT, embedded systems, and Edge Computing. And even more: from Python and C, it was ported to JavaScript, C++, C#, Node.js, F#, React.js, Go, Julia, R, Rust, Android, Swift, Kotlin, and even Scala, Haskell, and numerous other coding languages. Then, after that complete conquest, Google went to the next level of optimization: hardware.

This means we now have CUDA (a library for executing ML code on GPUs) v8, v9, and v10 (9.2 excepted), GPGPU, GPU-native code, TPUs (Tensor Processing Units – custom hardware from Google designed specifically for TensorFlow), Cloud TPUs, FPGAs (Field-Programmable Gate Arrays – custom programmable hardware), ASICs (Application-Specific Integrated Circuits – chip hardware specially designed for TensorFlow), and now MKL for Intel, BLAS optimization, and LINPACK optimization (the last three being low-level software optimizations for matrix, vector, and linear algebra packages), and so much more that I can't fit it all into the space of this article. To give you a rough idea of what the TensorFlow architecture looks like now, have a look at this highly limited graphic:

Some of TensorFlow's features (source: planspaces.org)

Note: XLA stands for Accelerated Linear Algebra, a compiler (still in development) that provides highly optimized computational performance gains.

And Now TensorFlow 2.0

This release is expected from Google within the next six months. Some of its most exciting features are:

  1. Keras Integration as the Main API instead of raw TensorFlow code
  2. Simplified and Integrated Workflow
  3. Eager Execution
  4. More Support for TensorFlow Lite and TensorFlow Edge Computing
  5. Extensions to TensorFlow.js for Web Applications and Node.js
  6. TensorFlow Integration for Swift and iOS
  7. TensorFlow Optimization for Android
  8. Unified Programming Paradigms (Directed Acyclic Graph/Functional and Stack/Sequential)
  9. Support for the new upcoming WebGPU Chrome RFC proposal
  10. Integration of tf.contrib best Package implementations into the core package
  11. Expansion of tf.contrib into Separate Repos
  12. TensorFlow AIY (Artificial Intelligence for Yourself) support
  13. Improved TPU & TPU Pod support, Distributed Computation Support
  14. Improved HPC integration for Parallel Computing
  15. Support for TPU Pods up to v3
  16. Community Integration for Development, Support and Research
  17. Domain-Specific Community Support
  18. Extra Support for Model Validation and Reuse
  19. End-to-End ML Pipelines and Products available at TensorFlow Hub

And yes – there is still much more that I can’t cover in this blog.  

Wow – that’s an Ocean! What can you Expand Upon?

Yes – that is an ocean. But to keep things as simple as possible (and yes, to stick to the word limit, because I could write a thousand words on every one of these topics and end up with a book instead of a blog post!), we'll focus on the most striking topics (ALL are exciting – we'll cover the ones with the most scope for our audience).

1. Keras as the Main API to TensorFlow

Keras Deep Learning (from www.keras.io)

Earlier, comments like these below were common on the Internet:

“TensorFlow is broken” – Reddit user

“Implementation so tightly coupled to specification that there is no scope for extension and modification easily in TensorFlow” – from a post on Blogger.com

“We need a better way to design deep learning systems than TensorFlow” – Google Plus user

In response to feedback from the community, Keras was created as an open source project designed to be an easier interface to TensorFlow. Its popularity grew very rapidly, and now nearly 95% of real-world ML tasks can be written using Keras alone. Packaged as "Deep Learning for Humans", Keras is simply easier to use. Though, of course, PyTorch gives it a real run for its money as far as simplicity is concerned!

In TensorFlow 2.0, Keras has been adopted as the main API for interacting with TensorFlow. Support for TensorFlow 1.x code is retained through a compatibility layer, and a conversion tool is provided to migrate 1.x code to 2.0 where implementation details differ – rather like the Python tool 2to3.py! So Keras is now the main API for TensorFlow deep learning applications, which takes a huge amount of unnecessary complexity off the ML engineer's shoulders.
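To make that concrete, here is a minimal sketch of the Keras-first style, assuming the tf.keras API as previewed for TF 2.0 – the toy dataset, layer sizes, and hyperparameters are illustrative, not canonical:

import tensorflow as tf

# Load a toy dataset that ships with Keras and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Define, compile, train, and evaluate - all through the Keras API,
# with no raw TensorFlow graph or session code anywhere in sight.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)

A dozen readable lines for a full training run – that is the kind of complexity reduction we are talking about.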

2. Simplified and Integrated Workflow

TF 2.0 defines a simpler, step-by-step, yet extremely flexible workflow for designing deep learning models (visit https://medium.com/tensorflow/whats-coming-in-tensorflow-2-0 for more details); a code sketch follows this list:

  1. Use tf.data for data loading and preprocessing or use NumPy.
  2. Use Keras or Premade Estimators to do your model construction and validation work.
  3. Use tf.function for DAG graph-based execution, or use eager execution (a mode that lets you run and debug your deep learning model line by line; on by default in TF 2.0).
  4. For TPUs, GPUs, distributed computing, or TPU Pods, utilize Distribution Strategy for high-performance-computing distributed deep learning applications.
  5. TF 2.0 standardizes on SavedModel as the serialized form of a TensorFlow graph for a variety of platforms such as Mobile, JavaScript, Edge, Lite, TensorBoard, TensorFlow Hub, and TensorFlow Serving. This makes it much easier to move models between architectures – a feature that was badly needed compared to the older scenario.

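Here is a hedged sketch of that workflow end to end, using random NumPy data purely for illustration – the dataset, layer sizes, and file path are placeholders, not canonical choices:

import numpy as np
import tensorflow as tf

# Step 1: tf.data for loading and preprocessing (NumPy arrays work too).
features = np.random.rand(1000, 10).astype('float32')
labels = np.random.randint(0, 2, size=(1000,)).astype('float32')
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).shuffle(1000).batch(32)

# Step 2: Keras for model construction and validation.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(10,)),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Step 3: eager execution is on by default, so model(x) just runs;
# wrapping a function in tf.function compiles it into a DAG for speed.
@tf.function
def predict(x):
    return model(x)

# Steps 4/5: train, then export a SavedModel for Lite, JS, Serving, etc.
model.fit(dataset, epochs=3)
tf.saved_model.save(model, '/tmp/demo_saved_model')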
This means that even novices at machine learning can now perform deep learning tasks with relative ease. And of course, did we mention the wide variety of end-to-end pluggable deep learning solutions available on TensorFlow Hub and in the Tutorials section? And guess what – they’re all free to download and use for commercial purposes. Google, you truly are the best friend of the open source community!

3. Expanded Support for Mobile (Android and iOS), Web (JavaScript), TF Lite, TF Edge and IoT

TF Lite Architecture

From Medium.com

All of the above platforms share one constraint – scarce computational and memory resources – and TF 2.0 brings a common set of improvements across most of them:

  1. Greater support for various ops in TF 2.0, and several deployment techniques
  2. SIMD+ support for WebAssembly
  3. Support for Swift (iOS) in Colab
  4. Increased support for data input pipelines and data visualization libraries in JavaScript
  5. A smaller and lighter footprint for Edge Computing, Mobile Computing, and IoT
  6. Better support for audio- and text-based models
  7. Easier conversion of trained TF 2.0 graphs (a conversion sketch follows below)
  8. Increased and improved mobile model optimization techniques

As you can see, Google knows that Edge and Mobile are the future as far as computing is concerned and has designed its products accordingly. TF Mobile should be replaced by TF Lite soon.
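As an example of that easier conversion, here is a hedged sketch based on the TF 2.0 converter API as previewed at the time of writing – exact names could still shift before release, and the one-layer model is a stand-in for any trained tf.keras model:

import tensorflow as tf

# A trivial stand-in for a real trained model.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Convert the Keras model to the TF Lite flatbuffer format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write it out for deployment to Android, iOS, or Edge/IoT devices.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)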

4. Unified Programming Models and Methodologies

There are two major ways to code deep learning networks in Keras. They are:

  1. Symbolic or Declarative APIs
  2. Imperative APIs / Subclassing

We shall look briefly at each of them in turn. For more on this topic, visit https://medium.com/tensorflow/what-are-symbolic-and-imperative-apis-in-tensorflow-2-0

Symbolic/Declarative/Graph-Based/Functional API

Here we build a model symbolically, by describing the structure of its DAG (Directed Acyclic Graph) or its sequential stack of layers up front. The following image is an example of Keras code written symbolically.

Keras code

From Medium.com TensorFlow publication
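In case the image does not render for you, here is a hedged sketch of the same idea – a model declared with the Keras Functional API, with illustrative layer sizes:

import tensorflow as tf

# Declare the graph's input, then wire layers together like a DAG.
inputs = tf.keras.Input(shape=(784,))
x = tf.keras.layers.Dense(64, activation='relu')(inputs)
x = tf.keras.layers.Dense(64, activation='relu')(x)
outputs = tf.keras.layers.Dense(10, activation='softmax')(x)

# The full structure is known before any data flows through it.
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.summary()  # easy to inspect, visualize, and sanity-check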

This looks familiar to most of us, since this is how we usually use Keras. The advantages of this style are that it is easy to visualize, most errors surface at graph-construction time rather than at runtime, and the code corresponds directly to our mental model of the deep learning network, making it easy to work with.

Stack-Based/Subclassing/Imperative API

The following code is an example of the subclassing (imperative) paradigm for building a deep learning network:

Subclassing

From Medium.com TensorFlow publication (code still in development)
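Again, in case the image does not render, here is a hedged sketch of the same kind of model written imperatively by subclassing tf.keras.Model – names and layer sizes are illustrative:

import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.dense1 = tf.keras.layers.Dense(64, activation='relu')
        self.dense2 = tf.keras.layers.Dense(10, activation='softmax')

    def call(self, inputs):
        # The forward pass is ordinary imperative Python code,
        # so arbitrary control flow (loops, conditionals) is easy.
        x = self.dense1(inputs)
        return self.dense2(x)

model = MyModel()  # the architecture is hidden inside the class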

Rather similar to object-oriented Python, this style was first introduced into the deep learning community in 2015 and has since been adopted by a variety of deep learning libraries. TF 2.0 supports it completely. Although it appears simpler, it has some serious disadvantages.

An imperative model is not a transparent data structure but an opaque class, so errors tend to surface only at runtime with this approach. As a deep learning practitioner, you should know both the symbolic and the imperative/subclassing styles of coding your deep neural network – dynamic architectures such as recursive neural networks, for example, cannot easily be expressed in the symbolic programming model. So it is good to know both, but be aware of the respective advantages and disadvantages of each!

5. TensorFlow AIY

AIY / DIY

From slideshare.com

This is a brand new offering from Google and other AI companies such as Intel. AIY stands for Artificial Intelligence for Yourself (a play on DIY – Do It Yourself); it is a new initiative from Google to show consumers how easy it is to use TensorFlow in DIY devices and build your own AI-enabled projects and gadgets. This is a very welcome trend, since it literally brings the power of AI to the masses at a very low price. I honestly feel the day is nearing when schoolchildren will bring AIY projects to school exhibitions, and the next generation of whiz kids will be chock-full of AI expertise, building new and highly creative AI products. It is a fantastic trend, and I now have my own to-buy-and-play-with list, if I can order these products from Google at a minimal shipping charge. So cool!

6. Guidelines and New Incentives for Community Participation and Research Papers

We are running up against the word limit very fast! I had hoped to cover TPUs, TPU Pods, and distributed computation, but for now this is my final point. Recognizing the massive role the open source community has played in making TensorFlow a worldwide brand for deep learning neural nets, Google has set up guidelines to encourage domain-specific innovation and the collaborative authoring of research papers and white papers from within the TensorFlow community. To quote:

From the website https://www.tensorflow.org/community/roadmap :

Community

  • Continue public feedback on significant design decisions through the Request-for-Comment (RFC) process.

  • Create a contributors’ guide to augment our published governance and process.

  • Grow global TensorFlow communities and user groups.

  • Collaborate with partners to co-develop and publish research papers.

  • Continue to publish blog posts and YouTube videos showcasing applications of TensorFlow, and build user case studies for high-impact applications

In fact, when I read more about the benefits of participating in the TensorFlow open source development process, I could not help it – I joined the TensorFlow development community myself!

TensorFlow Community

A Dimensionless Technologies employee contributing to TensorFlow!

Who knows – maybe, God willing, one day my code will be part of TensorFlow 2.0/2.x! Or, even better, there could one day be a research paper published under my name with collaborators. The world is now built around open source technologies, and as a developer, there has never been a better time to be alive!

In Conclusion

So don’t forget: as of the day of writing this blog article, 31st January 2019, TensorFlow 2.0 is yet to be released. But since it's an open source project there are no secrets, and Google is (literally) being completely ‘open’ about the steps it will take to push TF further as the world market leader in deep learning. I hope this article has increased your interest in AI, open source development, Google, TensorFlow, deep learning, and artificial neural nets. Finally, I would like to point you to some other articles on this blog that focus on Google TensorFlow. Visit any of the following blog posts for more details on TensorFlow, artificial intelligence trends, and deep learning:

Top 10 Data Science Tools (other than SQL Python R)

Top Trends for Data Science in 2019

Finally, do apply for our Deep Learning course (link given below) if you truly wish to become acquainted with TensorFlow in detail:

Deep Learning

May the joy of learning something new never leave you, no matter how old or young you are. Cheers!

Top 5 Advantages of AWS Big Data Speciality


The Biggest Disruption in the IT Sector

Now unless you’ve been a hermit or a monk living in total isolation, you will have heard of Amazon Web Services and AWS Big Data. It’s a sign of an emerging global market and of the entire world becoming smaller every day. Why? The current estimate for the cloud computing market in 2020, according to a recent and highly reliable Forbes prediction, is a staggering USD 411 billion! Visit the following link to read more and see the statistics for yourself:

https://www.forbes.com/sites/louiscolumbus/2017/10/18/cloud-computing-market-projected-to-reach-411b-by-2020

To know more, refer to Wikipedia for the following terms, which mark, in order, the evolution of cloud computing (I will also provide the basic information to keep this article as self-contained as possible):

Wikimedia

1. Software-as-a-Service (SaaS)

This was the beginning of the revolution called cloud computing. Companies across industry verticals understood that they could let experts manage software development, deployment, and maintenance for them, leaving them free to focus on their core business – adding value to their own sector. This was mostly confined to the application level. Follow the heading link for more information, if required.

2. Platform-as-a-Service (PaaS)

PaaS began when companies realized they could outsource not just software management but also the operating systems and the maintenance of the underlying platforms to other companies that specialized in taking care of them. Basically, this was SaaS taken to the next level of virtualization over the Internet. Amazon was the pioneer, offering SaaS and PaaS services worldwide from 2006. Again, the heading link gives more in-depth information.

3. Infrastructure-as-a-Service (IaaS)

A few years later, around 2011, the big giants like Microsoft, Google, and a variety of other big names began to realize that this industry was booming beyond all expectations, as more and more businesses moved to the Internet for worldwide visibility. However, Amazon remained the market leader by a big margin, since it had a five-year head start on the other tech giants. This led to unprecedented disruption across verticals, as more and more companies transferred their IT requirements to IaaS providers like Amazon – leading, in some cases, to savings of well over 25% and per-employee costs coming down by 30%.

After all, why should companies set up their own servers, data warehouse centres, development centres, maintenance divisions, security divisions, and software and hardware monitoring systems when there are companies with the world's best experts in every one of these fields who will do the job at less than 1% of the cost the company would incur if it had to hire staff, train them, monitor them, buy its own hardware, and hire staff for that as well – the list goes on and on. If you are already a tech giant like, say, Oracle, you have everything set up for you already. But suppose you are a startup trying to save every penny – and there are tens of thousands of such startups right now – why do all that when you have professionals to do it for you?

There is a story behind how AWS got started in 2006 – I’m giving you a link, so as to not make this article too long:

https://medium.com/@furrier/original-content-the-story-of-aws-and-andy-jassys-trillion-dollar-baby

For even more information on AWS and how Big Data comes into the picture, I recommend the following blog:

Introduction to AWS Big Data

AWS Big Data Speciality

OK. So now you may be thinking: this is cloud computing and AWS, but what does it have to do with the Big Data Speciality, especially for startups? Let’s answer that question right now.

A startup today has a herculean task ahead of it.

Not only do startups have to get noticed in the big booming startup industry; they also have to scale well if their product goes viral and receives a million hits in a day, provide security for their data in case a competitor hires hackers from the Dark Web to take down their site, follow up everything they do on social media with a dedicated division managing only social media, and maintain all their hardware and software in case of outages. If you are a startup counting every penny you make, how much easier is it to outsource all your computing needs (except social media) to an IaaS firm like AWS?

You will be ready for anything that can happen, and nothing will take down your website or service other than you yourself. Oh, and not to mention saving around USD 1 million in costs over the year! Even counting nothing but its own social media statistics, every company that goes viral has to manage Big Data! And if your startup disrupts an industry, again, you will be flooded with GET requests, site accesses, purchases, CRM, scaling problems, downtime avoidance, and practically everything a major tech company has to deal with!

Bro, save your grey hairs and outsource all your IT needs (except social media – that you need to do personally) to Amazon with AWS!

And the Big Data Speciality?

Having laid the groundwork, let’s get to the meat of our article. The AWS Certified Big Data – Speciality website mentions the following details:

From https://aws.amazon.com/certification/certified-big-data-specialty/

The AWS Certified Big Data – Specialty exam validates technical skills and experience in designing and implementing AWS services to derive value from data. The examination is for individuals who perform complex Big Data analyses and validates an individual’s ability to:

  • Implement core AWS Big Data services according to basic architecture best practices

  • Design and maintain Big Data

  • Leverage tools to automate data analysis

So, what is an AWS Big Data Speciality certified expert? It is an internationally recognized certification that says that you, as a data scientist, can work professionally with AWS and Big Data in data science applications.
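To give a flavour of what "leveraging tools to automate data analysis" looks like in practice, here is a hedged sketch using boto3, the AWS SDK for Python – the bucket, table, and database names are hypothetical placeholders, and you would need valid AWS credentials configured:

import boto3

# S3 is where Big Data typically lives on AWS; list what the account can see.
s3 = boto3.client('s3')
for bucket in s3.list_buckets()['Buckets']:
    print(bucket['Name'])

# Athena runs serverless SQL directly over data stored in S3.
athena = boto3.client('athena')
athena.start_query_execution(
    QueryString='SELECT COUNT(*) FROM my_table',  # hypothetical table
    QueryExecutionContext={'Database': 'my_analytics_db'},  # hypothetical DB
    ResultConfiguration={'OutputLocation': 's3://my-query-results-bucket/'},
)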

Please note: the eligibility criteria for the AWS Big Data Speciality Certification are:

From https://aws.amazon.com/certification/certified-big-data-specialty/

To put it in layman’s terms: if you, the data scientist, were Priyanka Chopra, passing the AWS Big Data Speciality certification would be the equivalent of going to Hollywood and starring in Quantico in the USA!

Suddenly, a whole new world is open at your feet!

But don’t get too excited: unless you already have five years of experience with Big Data, there’s a long way to go. But work hard, take one step at a time, don’t stare at the goal far ahead but focus on every single day, one task at a time, and in the end you will reach your destination. Persistence, discipline, and determination matter. As simple as that.

Certification

From whizlabs.org

Five Advantages of an AWS Big Data Speciality

1. Massive Increase in Income as a Certified AWS Big Data Speciality Professional (a long-term goal: 5+ years)

Everyone who’s anyone in data science knows that a data scientist in the US earns an average of USD 100,000 every year. But what is the average salary of an AWS Big Data Speciality certified professional? Hold on to your hats, folks: it’s a starting salary of USD 160,000. And with just two years of additional experience, that salary can cross USD 250,000 a year if you are a superstar at your work, depending upon your performance on the job. Do you still need a push to get into AWS? The following table shows the average salaries for specialists in the following Amazon products (from www.dezyre.com):

Top Paying AWS Skills According to Indeed.com

AWS Skill                    Salary
DynamoDB                     $141,813
Elastic MapReduce (EMR)      $136,250
CloudFormation               $132,308
ElastiCache                  $125,625
CloudWatch                   $121,980
Lambda                       $121,481
Kinesis                      $121,429
Key Management Service       $117,297
Elastic Beanstalk            $114,219
Redshift                     $113,950

2. Wide Ecosystem of Tools, Libraries, and Amazon Products

AWS

From slideshare.net

Amazon Web Services, compared to other cloud IaaS providers, has by far the widest ecosystem of products and tools. As a Big Data specialist, you are free to choose your career path. Do you want to get into AI? Do you have an interest in S3 (Amazon's storage service) or in high-performance serverless computing (AWS Lambda)? You get to choose, along with the company you work for. I don’t know about you, but just writing this article has me seriously excited!

3. Maximum Demand Among All Cloud Computing Jobs

If you manage to clear the AWS certification, then guess what – AWS certified professionals command by far the greatest market demand, simply because more than half of all cloud computing IaaS deployments run on AWS. To mention some figures: in 2019, 350,000 professionals will be required for AWS jobs, and 60% of cloud computing job listings ask for AWS skills (naturally, considering that AWS holds half the market share).

4. Worldwide Demand In Every Country that Has IT

It’s not just in the US that demand is peaking. There are jobs available in England, France, Australia, Canada, India, China, the EU – practically every nation that wants to get into IT will welcome you with open arms if you are an AWS certified professional. And look no further than this site: AWS training will be offered soon, right here on Dimensionless.in – within the next six months at the latest!

5. Affordable Pricing and a Free One-Year Tier to Learn AWS

Amazon has always been able to command the lowest prices because of its dominant market share. AWS offers a full year of selected services free on its cloud IaaS platform. Completely free for one year. AWS training materials are also less expensive than competing offerings. The following features are offered free for one year under Amazon’s AWS Free Tier:

https://aws.amazon.com/free/

The following is a web-scrape of their free-tier offering:

Freemium

AWS Free Tier One Year Resources Available

My scrape of www.aws.com/free initially ran to seven pages in a Word document. To really have a look, go to the website at the link above and see the details for yourself, in much higher resolution. That alone will show you why AWS is sitting pretty on top of the cloud – literally.

Final Words

Right now, AWS rules the roost in cloud computing, but there is competition from Microsoft, Google, and IBM. Microsoft Azure has a lot of glitches which cost a lot to fix. Google Cloud Platform is cheaper but has very high technical support charges. A dark horse here is IBM Cloud: its product has a lot of offerings and a lot of potential, third only to Google and AWS. If you are working and want to go abroad, or simply have a thirst for achievement, go for AWS. Totally. Finally, good news for all current Dimensionless students and alumni: AWS has 100% support for Python! (It also supports Go, Ruby, Java, Node.js, and many more – but Python enjoys first-class support.)

Keep coming to this website – expect to see AWS courses here in the near future!
