Learn Data Science with the Best Available Free Courses Online

Data Scientist Training Free of Charge

Now, in theory, it is possible to become a data scientist without paying a dime. What we want to do in this article is list the best of the best options for learning what you need to know to become a data scientist. Many articles offer four or five courses under each heading. What I have done instead is search the Internet across all the free courses and choose the single best course for each topic.

These courses have been carefully curated and are the best possible options if you’re learning for free. However, there’s a caveat – an interesting twist to this entire story. Interested? Read on! And please make sure you read the full article.

Topics For A Data Scientist Course

The basic topics that a data scientist needs to know are:

  1. Machine Learning Theory and Applications
  2. Python Programming
  3. R Programming
  4. SQL
  5. Statistics & Probability
  6. Linear Algebra
  7. Calculus Basics (short)
  8. Machine Learning in Python
  9. Machine Learning in R
  10. Tableau

So let’s get to it. Here is the list of the best possible options to learn every one of these topics, carefully selected and curated.

 

Machine Learning – Stanford University – Andrew Ng (audit option)

Machine Learning Course From Stanford University


The world-famous machine learning course – the highest-rated of all the MOOCs on Coursera – from Andrew Ng, a giant in the ML field and now famous worldwide as an online instructor. It uses MATLAB/Octave. From the website:

This course provides a broad introduction to machine learning, data mining, and statistical pattern recognition. Topics include:

(i) Supervised learning (parametric/non-parametric algorithms, support vector machines, kernels, neural networks)

(ii) Unsupervised learning (clustering, dimensionality reduction, recommender systems, deep learning)

(iii) Best practices in machine learning (bias/variance theory; innovation process in machine learning and AI)

The course will also draw from numerous case studies and applications, so that you’ll also learn how to apply learning algorithms to building smart robots (perception, control), text understanding (web search, anti-spam), computer vision, medical informatics, audio, database mining, and other areas.

This course is extremely effective and has many benefits. However, you will need high levels of self-discipline and self-motivation: studies of MOOC completion suggest that around 90% of those who sign up without a classroom or group environment never finish the course.

 

Learn Python The Hard Way – Zed Shaw – Free Online Access

 


 

Learn Python The Hard Way Online Access

You may ask: why would I want to learn the hard way? Shouldn’t we learn the smart way instead of the hard way? Don’t worry. This ebook, online course, and website is a highly popular way to learn Python. OK, so it says “the hard way”. Well, the only way to learn how to code is to practice what you have learned, and this course integrates practice with learning. With other Python books, you have to take the initiative to practice on your own.

Here, the book shows you what to practice and how to practice it. There is only one con: although this is the best self-driven method, most people will not complete all of it. The main reason is that there is no external instructor for supervision and no group environment to motivate you. Still, if you want to learn Python by yourself, this is the best way – but not the optimal one, as you will see at the end of this article, since the book costs $30 USD (approx. ₹2,100).

Interactive R and Data Science Programming – swirl

Interactive R and Data Science Course (In Console)

 


 

swirl is a wonderful tool for learning R and data science scripting in R, interactively and intuitively, by teaching you R commands from within the R console itself. It might seem like a very simple tool, but as you use it, you will notice its elegance in teaching you how to express yourself in R, the finer nuances of the language, and its integration with the console and the tidyverse. This is a powerful method of learning R and, what is more, it is also a lot of fun!

Descriptive and Inferential Statistics

Course on Statistics and Probability from KhanAcademy

 


Khan Academy is a non-profit organization on a mission: to provide a free, world-class education to anyone, regardless of where they may be in the world. And they’re doing a fantastic job! This course has been cited in several very high-profile blogs and Quora posts as the best online course for statistics, period. What is more, it is extremely high quality, suitable for beginners – and free! This organization is doing wonderful work. More power to them!

Mathematics for Data Science

Now, the basic mathematics content for data science includes linear algebra, single-variable and multivariable calculus (selected topics), discrete mathematics, and the basics of differential equations. You could take each of these topics separately on Khan Academy, and that is a good option for linear algebra and multivariate calculus (in addition to statistics and probability).

For linear algebra, the Khan Academy course covering what you need to know is linked below:

Course on Linear Algebra From KhanAcademy


 

For Multivariate Calculus

Course on MultiVariate Calculus From KhanAcademy


These courses are completely free and very accessible to beginners.

Discrete Mathematics

This topic deserves a section to itself, because discrete mathematics is the foundation of all computer science. There are a variety of options for learning discrete mathematics, from ebooks to MOOCs, but today we’ll focus on the best possible option. MIT (Massachusetts Institute of Technology) is known as one of the best colleges in the world, and it has an open-education initiative known as MIT OpenCourseWare (MIT OCW). These are recordings of the actual lectures given to students at one of the best engineering colleges in the world. You will benefit a lot if you follow these lectures; they cover all the basic concepts as clearly as possible. The material is a bit technical, because it is aimed mostly at students at an advanced level. The link is given below:

MIT OpenCourseWare Course: Mathematics for Computer Science


For beginners, one slightly less technical option is the following course:

Course on Discrete Mathematics for Computer Science

It is also technical and from MIT but might be a little more accessible than the earlier option.

SQL

SQL (pronounced “sequel”), or Structured Query Language, is a must-learn if you are a data scientist. You will be working with a lot of databases, and SQL is the language used to query and retrieve data from database systems like Oracle and Microsoft SQL Server. The best free course I could find online is undoubtedly the one below:

Udemy Course for SQL Beginners

 


SQL For Newcomers – A Free Crash Course from Udemy.com.

Five-plus hours of every SQL command and concept you need to know. And – completely free.
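If you want to try these commands immediately, Python’s built-in sqlite3 module is enough to practice core SQL without installing a database server. A minimal sketch (the table and rows below are invented for illustration):

```python
import sqlite3

# Throwaway in-memory database; nothing is written to disk
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (region TEXT, amount REAL)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [("North", 120.0), ("South", 80.5), ("North", 45.25)])
cur.execute("SELECT region, SUM(amount) FROM sales GROUP BY region")
print(cur.fetchall())  # [('North', 165.25), ('South', 80.5)]
conn.close()
```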

Machine Learning with Scikit-Learn

 


 

Scikit-Learn Online Documentation Main Page

We have covered Python, R, machine learning using MATLAB/Octave, data science with R (swirl teaches data science as well), statistics, probability, linear algebra, and basic calculus. Now we just need a course for data science with Python, and we are done! I looked at many options but was not satisfied. So instead of a course, I am giving you a link to the scikit-learn documentation. Why?

Because that’s as good as an online course by itself. If you read through the main sections, copy the code (Ctrl-C, Ctrl-V), execute it in an Anaconda environment, and then play around with it, experiment, observe, and read up on what every line does, you will already know how to solve standard textbook problems. I recommend the following order (a minimal sketch follows the list):

  1. Classification
  2. Regression
  3. Clustering
  4. Preprocessing
  5. Model Evaluation
  6. 5 classification examples (execute)
  7. 5 regression examples (run them)
  8. 5 clustering examples (ditto)
  9. 6 sample preprocessing functions
  10. Dimensionality Reduction
  11. Model Selection
  12. Hyperparameter Tuning
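As promised, here is a minimal sketch in the spirit of the scikit-learn documentation: a preprocessing step chained to a classification step on the bundled digits dataset (no external files needed). Treat it as a starting point to experiment with, not a definitive recipe:

```python
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 8x8 images of handwritten digits, bundled with scikit-learn
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Preprocessing + classification chained into one estimator
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))  # typically ~0.97-0.98
```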

Machine Learning with R

 


 

Online Documentation for Machine Learning in R with Tidyverse

This book is free to read online. Get the data files, get the script files, use RStudio, and – just as with Python – play, enjoy, experiment, execute, and explore. A little hard work will have you up and running with R in no time! But make sure you try as many code examples as possible. The libraries to focus on are:

  1. dplyr (data manipulation)
  2. tidyr (data preprocessing “tidying”)
  3. ggplot2 (graphical package)
  4. purrr (functional toolkit)
  5. readr (reading rectangular data files easily)
  6. stringr (string manipulation)
  7. tibble (dataframes)

Tableau

To make it short, simple, and sweet, since we have already covered SQL and this content is for beginners, I recommend the following course:

Udemy Course on Tableau for Beginners

This is a course on Udemy, rated 4.2/5 and completely free. You will learn everything you need to work with Tableau (the most commonly used corporate-level visualization tool). This is an extremely important part of your skill set: you can produce the greatest analyses, but if you don’t visualize them well, management will never buy into your machine learning solution – and neither will anyone who doesn’t know the technical details of ML (which is most people on this planet). Visualization matters, so please make sure to learn at least the basics of Tableau.


 

Kaggle Micro-Courses (Add-Ons – Short Concise Tutorials)


 

Kaggle Learn Home Page

Kaggle is a wonderful site to practice your data science skills, and recently it has added a set of hands-on micro-courses for learning practical data science. And, if I do say so myself, it’s brilliant: very nicely presented, with superb examples and clear, concise explanations. And of course, it covers more than we discussed earlier. If you read through all the courses discussed so far in this article and then do the courses at Kaggle.com, you will have spent your time wisely (though not optimally – as we shall see).


Dimensionless Technologies

 


 

Now, if you are reading this article, you might have a fundamental question. This is the blog of a company that offers courses in data science, deep learning, and cloud computing. Why would we list all our competitors and publish the list on our site? Isn’t that negative publicity?

Quite the opposite. 

This is the caveat we were talking about.

Our course is a better solution than every single option given above!

We have nothing to hide.

And we have an absolutely brilliant top-class product.

Every option given above is a separate course by itself.

And they all suffer from one very prickly problem: you need excellent levels of discipline and self-motivation to complete even one of the courses above – let alone all ten.

 

You also have no classroom environment, no guidance for doubts and questions, and you need to already know the basics of programming.

Our product is the most cost-effective option on the market for learning data science, as well as the most effective methodology for everyone: every course is conducted live in a classroom environment from the comfort of your home. You can keep a standard job, spend two hours on the Internet every day, do extra work and reading on weekends, and become a professional data scientist in six months’ time.

We also have personalized GitHub project portfolio creation, management, and faculty guidance. Not to mention individual attention for each student.

And our faculty are IITians who also happen to have 9+ years of industry experience.

So when we say that our product is the best on the market, we really mean it – because of the live classroom teaching of the sessions, which no other option on the Internet today has.

 

Am I kidding? Absolutely not. And you can get started with the Dimensionless Technologies Data Science with Python and R course for just 70-odd USD – the most cost-effective option on the market!

And unlike the 10 courses and resources detailed above, you need to do just one single course, with the extracted essence of everything you need to know as a data scientist. And yes, we cover:

  1. Machine Learning
  2. Python Programming
  3. R Programming
  4. SQL
  5. Statistics & Probability
  6. Linear Algebra
  7. Calculus Basics
  8. Machine Learning in Python
  9. Machine Learning in R
  10. Tableau
  11. GitHub Personal Project Portfolio Creation
  12. Live Remote Daily Sessions
  13. Experts with Industrial Experience
  14. A Classroom Environment (to keep you motivated)
  15. Individual Attention to Every Student

I hope this information has you seriously interested. Please sign up for the course – you will not regret it.

And we even have a two-week trial for you to experience the course for yourself.

Choose wisely and optimally.

Unleash the data scientist within!

 

An excellent general article on emerging state-of-the-art technology, AI, and blockchain:

The Exciting Future with Blockchain and Artificial Intelligence

For more on data science, check out our blog:

Blog

And of course, enjoy machine learning!

Accurate Bitcoin Price Forecasting with Python – Blockchain Applications of Data Science Part 2


Recap

We discussed in Part 1 of Blockchain Applications of Data Science on this blog how the world could be made much more profitable, not just for a select set of the super-rich, but for the common man – for anyone who participates in creating a digitally trackable product. We discussed how large-scale adoption of cryptocurrencies and blockchain technology worldwide could herald a change in the economic demography of the world that could last for generations to come. In this article, we discuss how AI and data science can be used to tackle one of the most pressing questions of the blockchain revolution: how to model the future price of the Bitcoin cryptocurrency for profitable trading.

A Detour

But first, we take a short detour to explore an aspect of cryptocurrency that is not commonly talked about. Given the state of the world right now, it deserves more discussion, and I feel compelled to share this information with you before we skip to the juicy part about cryptocurrency price forecasting.

The Environmental Impact of Cryptocurrency Mining

First, two assumptions: I assume you’ve read Part 1, and that you’ve seen the visual guide to how cryptocurrencies work that it linked to. In case you missed the guide, here’s the link for you to check again.

The following articles discuss the impact of cryptocurrency mining on the environment. Read at least one of them, at least partially, so that you can follow along as we progress with this article:

https://www.theguardian.com/technology/2018/jan/17/bitcoin-electricity-usage-huge-climate-cryptocurrency

So cryptocurrency mining wastes huge amounts of computational resources and energy – enough electrical power to run an entire country. This is mainly due to the Proof-of-Work (PoW) mining model used by Bitcoin. For more, see the following article:

https://www.investopedia.com/tech/whats-environmental-impact-cryptocurrency/

In PoW mining, miners compete against each other in a desperate race to see who can find the solution to a mathematical hashing problem the quickest. And in every race, only one miner is rewarded with the Bitcoin.
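To make the idea concrete, here is a toy proof-of-work sketch in Python. It is a drastic simplification (real Bitcoin mining uses double SHA-256 over block headers at a far higher difficulty), but the principle is the same:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce such that SHA-256(block_data + nonce) starts with
    `difficulty` zero hex digits; the first miner to find one 'wins'."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each extra zero of difficulty multiplies the expected work by 16
print(mine("example block"))
```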

Ethereum goes Green! (From Pixabay)

 

In a significant step forward, Vitalik Buterin’s Ethereum cryptocurrency is shifting to a Proof-of-Stake (PoS) based system, which makes the mining process significantly less energy-intensive than PoW. Some claim PoS may be up to 99.9% more energy-efficient than PoW. Whatever the exact statistics, a PoS-based process is a big step forward and may completely change the way environmentalists feel about cryptocurrencies.

So by shifting to PoS, a huge amount of energy can be saved. That is a caveat you need to remember and be aware of, because Bitcoin uses PoW mining only. It would be a dream come true for environmentalists if Bitcoin could shift to PoS. Let’s hope it happens.

Now back to our main topic.

Use AI and Data Science to Predict Future Prices of Cryptocurrency – Including the Burst of the Bitcoin Bubble

What is a blockchain? A distributed database that is decentralized, with no central point of control. As of February 2018, the Bitcoin blockchain on a full node was 160-odd GB in size; as of April 2019, it is 210 GB. So this is the question I am going to pose to you: would it be possible to use the data in this distributed database to identify patterns and statistical invariances, and to invest minimally for the maximum possible profit? Can we build models to forecast future cryptocurrency prices using AI and data science? The answer is a definite yes.

Practical Considerations

You may wonder whether applying data science techniques and statistical analysis can actually produce information that helps in forecasting the future price of bitcoin. I came across a remarkable kernel on www.kaggle.com (a website where data scientists practice problems and compete with each other in competitions) by a user with the handle “wayward artisan” and the profile name Tania J. I thought it was worth sharing, since it is a statistical analysis of the rise and fall of the bitcoin bubble that vividly illustrates how statistical methods helped this user forecast the future price of bitcoin. The entire kernel is very large and interesting; please do visit it at the link given below. Only the start and a middle section of the kernel are reproduced here, for reasons of space and of intellectual property.


A Kaggle Kernel That Modelled the Bitcoin Bubble Burst Within Reasonable Error Limits

This kernel uses cryptocurrency financial data scraped from www.coinmarketcap.com. It is a sobering example of how statistical predictions anticipated the collapse of the bitcoin bubble. Coming across this kernel was one of the main motivations for writing this article. I have omitted a lot of details, especially the building of the model and the analysis of its accuracy; I just wanted to show that it is possible.

For more details, visit the kernel on Kaggle at the link:
https://www.kaggle.com/taniaj/cryptocurrency-price-forecasting (Please visit this page, all aspiring data scientists. And pay attention to every concept discussed and used. Use Google and Wikipedia and you will learn a lot.)

A subset of the code is given below (the first section):

<subsequent code not shown for brevity>

The dataset is available at the following link as a CSV file (which opens in Microsoft Excel):

 

We focus on one of the middle sections, the first ARIMA model, fitted with SARIMAX (do look up ARIMA and SARIMAX on Wikipedia and Google to learn more about them), which makes the actual prediction at the time the bitcoin bubble burst (only a subset of the code is shown). Visit the Kaggle kernel page at the link below this extract to get the entire code:

<data analysis and model analysis code section not shown here for brevity>

<more code, not shown>

From the Kaggle kernel code.

This code, together with the earlier code in the kernel (not shown for the sake of brevity) that built and validated the model, gave the following predictions as output:

Bitcoin price forecasting at the time of the burst of the Bitcoin bubble

What do we learn? Surprisingly, the model captures the Bitcoin bubble burst with a remarkably accurate prediction (error levels ~ 10%)!
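If you would like to experiment with a similar model yourself, here is a minimal, generic SARIMAX sketch using statsmodels. It is an illustration under assumed file and column names (bitcoin_price.csv with Date and Close columns), not the kernel’s actual code:

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical daily price file with 'Date' and 'Close' columns
btc = pd.read_csv("bitcoin_price.csv", index_col="Date",
                  parse_dates=True)["Close"].asfreq("D")

# The order/seasonal parameters are placeholders; choose them via
# diagnostics such as AIC, as the kernel does
model = SARIMAX(btc, order=(1, 1, 1), seasonal_order=(1, 1, 1, 7))
results = model.fit(disp=False)

forecast = results.get_forecast(steps=30)  # 30 days ahead
print(forecast.predicted_mean)             # point forecasts
print(forecast.conf_int())                 # confidence intervals
```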

Conclusion

So, do AI and data science have anything to do with blockchain technology and cryptocurrency? The answer is a resounding yes. Expect data science, statistical analysis, neural networks, and probability distributions to play a heavy part when you want to forecast cryptocurrency prices.

For all the data science students out there, I am going to include one more screen from the same kernel on Kaggle (link):

The reason I want to show you this screen is that terms and statistical lingo like kurtosis and heteroskedasticity are concepts you need to master in order to conduct forecasts like this – mainly in order to analyze the accuracy of the model you have constructed. The output window is given below:

So yes, blockchain technology and cryptocurrencies have a lot of overlap with data science applications. But also remember: data science can be applied to any field where finance is a factor.

For more on blockchain and data science, see:

A Beginner’s Guide to Big Data and Blockchain

Enjoy data science!

Follow this link, if you are looking to learn data science online!

You can follow this link for our Big Data course, which is a step further into advanced data analysis and processing!

Additionally, if you are interested in learning data science, click here to start the Online Data Science Course

Furthermore, if you want to read more about data science, read our Data Science Blogs

What’s New in TensorFlow 2.0


New Features in TensorFlow 2.0

TensorFlow 2.0 is coming soon. And boy, are we super-excited! TensorFlow helped set the trend of open-sourcing AI and DL frameworks for use by the community. And what has been the result? TensorFlow has become an entire ML ecosystem for all kinds of AI technology. Just to give you an idea, here are the features that an absolutely incredible community has added to the original TensorFlow package:

Features of TensorFlow contributed by the open source community (image from Medium.com)

TensorFlow started out as a difficult-to-learn framework for deep learning from Google – with one difference: it was open-sourced. That may seem like a poor move for a commercial company focused on profits, but it was the right thing to do, because the open source community took it up as its own and ported it to nearly every platform available today, including mobile, web, IoT, embedded systems, and edge computing. And even more: from Python and C, it was ported to JavaScript, C++, C#, Node.js, F#, React.js, Go, Julia, R, Rust, Android, Swift, Kotlin, and even Scala, Haskell, and numerous other languages. Then, after that complete conquest, Google went to the next level of optimization – hardware.

Which means we now have CUDA (NVIDIA’s library for executing ML code on GPUs) versions 8, 9, and 10 (9.2 excepted), GPGPU and GPU-native code, the TPU (Tensor Processing Unit – custom hardware provided by Google, specially designed for TensorFlow), Cloud TPUs, FPGAs (Field-Programmable Gate Arrays – custom programmable hardware), ASIC (Application-Specific Integrated Circuit) chips specially designed for TensorFlow, plus Intel MKL, BLAS, and LINPACK optimizations (the last three being low-level software optimizations for matrix algebra, vector algebra, and linear algebra packages) – and much more than I can fit into the space I have for this article. To give you a rough idea of what the TensorFlow architecture looks like now, have a look at this (highly limited) graphic:

Some of TensorFlow’s features (source: planspaces.org)

Note: XLA stands for Accelerated Linear Algebra, a compiler (still in development) that provides highly optimized computational performance gains.

And Now TensorFlow 2.0

This release is expected from Google within the next six months. Some of its most exciting features are:

  1. Keras Integration as the Main API instead of raw TensorFlow code
  2. Simplified and Integrated Workflow
  3. Eager Execution
  4. More Support for TensorFlow Lite and TensorFlow Edge Computing
  5. Extensions to TensorFlow.js for Web Applications and Node.js
  6. TensorFlow Integration for Swift and iOS
  7. TensorFlow Optimization for Android
  8. Unified Programming Paradigms (Directed Acyclic Graph/Functional and Stack/Sequential)
  9. Support for the new upcoming WebGPU Chrome RFC proposal
  10. Integration of the best tf.contrib package implementations into the core package
  11. Expansion of tf.contrib into Separate Repos
  12. TensorFlow AIY (Artificial Intelligence for Yourself) support
  13. Improved TPU & TPU Pod support, Distributed Computation Support
  14. Improved HPC integration for Parallel Computing
  15. Support for TPU Pods up to v3
  16. Community Integration for Development, Support and Research
  17. Domain-Specific Community Support
  18. Extra Support for Model Validation and Reuse
  19. End-to-End ML Pipelines and Products available at TensorFlow Hub

And yes – there is still much more that I can’t cover in this blog.  

Wow – that’s an Ocean! What can you Expand Upon?

Yes – that is an ocean. But to keep things as simple as possible (and to stick to the word limit – I could write a thousand words on every one of these topics and end up with a book instead of a blog post!), we’ll focus on the most exciting and striking topics. All of them are exciting; we’ll cover the ones with the most scope for our audience.

1. Keras as the Main API to TensorFlow


Earlier, comments like these below were common on the Internet:

“TensorFlow is broken” – Reddit user

“Implementation so tightly coupled to specification that there is no scope for extension and modification easily in TensorFlow” – from a post on Blogger.com

“We need a better way to design deep learning systems than TensorFlow” – Google Plus user

Responding to this feedback from the community, Keras was created as an open source project designed to be an easier interface to TensorFlow. Its popularity grew very rapidly, and by some estimates, nearly 95% of real-world ML tasks can now be written using Keras alone. Packaged as “Deep Learning for Humans”, Keras is simply easier to use. (Though, of course, PyTorch gives it a real run for its money as far as simplicity is concerned!)

In TensorFlow 2.0, Keras has been adopted as the main API for interacting with TensorFlow. Support for pure TensorFlow has not been removed, and a conversion tool – rather like Python’s 2to3 tool – helps migrate TensorFlow 1.x code to TensorFlow 2.0 where implementation details differ. So Keras is now the main API for TensorFlow deep learning applications, which takes a huge amount of unnecessary complexity off the ML engineer’s shoulders.
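As a taste of the new default style, here is a minimal tf.keras sketch (assuming TensorFlow 2.0 and a 28x28 image input, as in MNIST; the layer sizes are arbitrary):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784 vector
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10-class output
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # ready for model.fit(x_train, y_train, epochs=...)
```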

2. Simplified and Integrated Workflow

There is a step-by-step, simpler, and extremely flexible workflow process for designing deep learning models (visit https://medium.com/tensorflow/whats-coming-in-tensorflow-2-0 for more details):

  1. Use tf.data for data loading and preprocessing or use NumPy.
  2. Use Keras or Premade Estimators to do your model construction and validation work.
  3. Use tf.function for DAG graph-based execution, or use eager execution (a technique to smoothly debug and run your deep learning model, on by default in TF 2.0; see the sketch after this list).
  4. For TPUs, GPUs, distributed computing, or TPU Pods, utilize Distribution Strategy for high-performance-computing distributed deep learning applications.
  5. TF 2.0 standardizes SavedModel as the serialized form of a TensorFlow graph for a variety of platforms, such as Mobile, JavaScript, Edge, Lite, TensorBoard, TensorFlow Hub, and TensorFlow Serving. This makes it much easier to move models between architectures – a feature that was badly needed compared to the older scenario.
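As a sketch of step 3, the same function can run eagerly (the TF 2.0 default, easy to debug line by line) or be compiled into a graph with tf.function; the function below is made up for illustration:

```python
import tensorflow as tf

def weighted_sum(x, y):
    return tf.reduce_sum(x) + 2.0 * tf.reduce_sum(y)

graph_fn = tf.function(weighted_sum)  # same logic, compiled into a TF graph

x = tf.constant([1.0, 2.0])
y = tf.constant([3.0, 4.0])
print(weighted_sum(x, y))  # eager execution: runs immediately, easy to debug
print(graph_fn(x, y))      # graph execution: traced once, then run as a DAG
```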

This means that now even novices at machine learning can perform deep learning tasks with relative ease. And of course, did we mention the wide variety of end-to-end pluggable deep learning solutions available at TensorFlow Hub and in the Tutorials section? And guess what – they’re all free to download and use, even for commercial purposes. Google, you are truly a great friend of the open source community!

3. Expanded Support for Mobile (Android and iOS), Web (JavaScript), TF Lite, TF Edge and IoT

TF Lite Architecture

From Medium.com

On all of the above platforms, where computational and memory resources are scarce, TF 2.0 follows a common set of trends:

  1. Greater support for various ops in TF 2.0 and several deployment techniques
  2. SIMD+ support for WebAssembly
  3. Support for Swift (iOS) in Colab.
  4. Increased support for data input pipelines, and data visualization libraries in JavaScript.
  5. A smaller and lighter footprint for Edge Computing, Mobile Computing and IoT
  6. Better support for audio and text-based models
  7. Easier conversion of trained TF 2.0 graphs
  8. Increased and improved mobile model optimization techniques

As you can see, Google knows that edge and mobile are the future as far as computing is concerned, and has designed its products accordingly. TF Mobile should be replaced by TF Lite soon.

4. Unified Programming Models and Methodologies

There are two major ways to code deep learning networks in Keras. They are:

  1. Symbolic or Declarative APIs
  2. Imperative APIs / Subclassing

We shall look briefly at each of them in turn. For more on this topic, visit https://medium.com/tensorflow/what-are-symbolic-and-imperative-apis-in-tensorflow-2-0

Symbolic/Declarative/Graph-Based/Functional API

We build models symbolically by describing the structure of the DAG (Directed Acyclic Graph) or the sequential stack of layers. (The original post illustrated this with a Keras screenshot from the Medium.com TensorFlow publication.)
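A minimal sketch in this symbolic/functional style (assuming TF 2.0; the input and layer sizes are arbitrary):

```python
import tensorflow as tf

# Describe the graph of layers first; wiring errors surface immediately
inputs = tf.keras.Input(shape=(32,))
x = tf.keras.layers.Dense(64, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)

model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```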

This style looks familiar to most of us, since this is how Keras is usually used. The advantages of this approach are that it is easy to visualize, errors are usually caught at model-construction time rather than at runtime, and the code corresponds to our mental model of the deep learning network, which makes it easy to work with.

Stack-Based/Subclassing/Imperative API

The subclassing (or sequential/imperative) paradigm builds a deep learning network as a Python class. (The original post illustrated this with a screenshot from the Medium.com TensorFlow publication, noting that the code was still in development.)
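A minimal, hedged sketch of an equivalent model in the subclassing style (assuming TF 2.0; sizes arbitrary):

```python
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.dense1 = tf.keras.layers.Dense(64, activation="relu")
        self.dense2 = tf.keras.layers.Dense(10, activation="softmax")

    def call(self, inputs):
        # The forward pass is ordinary Python, executed at runtime
        return self.dense2(self.dense1(inputs))

model = MyModel()  # an opaque class: its structure is only known when called
```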

Rather similar to object-oriented Python, this style was first introduced to the deep learning community in 2015 and has since been used by a variety of deep learning libraries. TF 2.0 has complete support for it. Although it appears simpler, it has some serious disadvantages.

An imperative model is not a transparent data structure but an opaque class, so you are prone to more errors at runtime with this approach. As a deep learning practitioner, you are obliged to know both the symbolic and the imperative/subclassing methodologies for coding your deep neural network: some architectures, such as recursive networks (Tree-RNNs, for example), cannot easily be defined in the symbolic model. So it is good to know both – but be aware of their disparate advantages and disadvantages!

5. TensorFlow AIY


This is a brand new offering from Google, alongside similar AI kits from companies such as Intel. AIY stands for Artificial Intelligence for Yourself (a play on DIY – Do It Yourself) and is a new initiative from Google to show consumers how easy it is to use TensorFlow in your own DIY devices and create your own AI-enabled projects and gadgets. This is a very welcome trend, since it literally brings the power of AI to the masses at a very low price. I honestly feel that the day is nearing when schoolchildren will bring their AIY projects to school exhibitions, and the next generation of whiz kids will be chock full of AI expertise, developing new, highly creative, and innovative AI products. This is a fantastic trend, and now I have my own to-buy-and-play-with list, if I can order these products from Google at a minimal shipping charge. So cool!

6. Guidelines and New Incentives for Community Participation and Research Papers

We are running out of word limit very fast! I had hoped to cover TPUs, TPU Pods, and distributed computation, but for now, this is my final point. Recognizing the massive role the open source community has played in making TensorFlow a worldwide brand for deep learning neural nets, the company has set up various guidelines to encourage domain-specific innovation and the authoring of research papers and white papers from within the TensorFlow community, in collaboration with each other. To quote:

From the website https://www.tensorflow.org/community/roadmap :

Community

  • Continue public feedback on significant design decisions through the Request-for-Comment (RFC) process.

  • Create a contributors’ guide to augment our published governance and process.

  • Grow global TensorFlow communities and user groups.

  • Collaborate with partners to co-develop and publish research papers.

  • Continue to publish blog posts and YouTube videos showcasing applications of TensorFlow, and build user case studies for high-impact applications

In fact, when I read about the benefits of participating in the TensorFlow open source development process, I could not help myself – I joined the TensorFlow development community as well!

A Dimensionless Technologies employee contributing to TensorFlow!

Who knows – maybe, God willing, one day my code will be a part of TensorFlow 2.x! Or, even better, there could be a research paper published under my name with collaborators, perhaps. The world is now built around open source technologies, and as a developer, there has never been a better time to be alive!

In Conclusion

So don’t forget: on the day of writing this blog article, 31st January 2019, TensorFlow 2.0 is yet to be released. But since it is an open source project, there are no secrets, and Google is (literally) being completely ‘open’ about the steps it will take to keep TF the world market leader in deep learning. I hope this article has increased your interest in AI, open source development, Google, TensorFlow, deep learning, and artificial neural nets. Finally, I would like to point you to some other articles on this blog that focus on Google TensorFlow. Visit any of the following blog posts for more details on TensorFlow, artificial intelligence trends, and deep learning:

Top 10 Data Science Tools (other than SQL Python R)

Top Trends for Data Science in 2019

Finally, do apply for our Deep Learning course (link given below) if you truly wish to become acquainted with TensorFlow in detail:

Deep Learning

May the joy of learning something new never leave you, no matter how old or young you are. Cheers!

Top 10 Advantages of a Data Science Certification


Data science is a booming industry, with potentially millions of job openings by 2020, according to analysts’ business predictions. But what if you want to learn data science without the heavy cost of a postgraduate degree or a US-university MOOC specialization? What is the best way to prepare for this upcoming wave of opportunity and maximize your chances of landing a 100K+ USD (annual) job? Well, there are many challenges that stand before you. Not only is the market saturated with an abundance of fresh talent, but most of the training you receive in college bears little relationship to the actual work you do on the job. With so many engineering graduates emerging every year from established institutions such as the IITs, how can you realistically hope to compete? There is one way to stand out from the competition: a high-quality data science program or course. In this article, we list the top ten advantages of choosing such a course over the other options, like a Ph.D. or an online MOOC specialization from a US university (very tempting options, especially if you have the money for them).

Top Ten Advantages of Data Science Certification

1. Stick to Essentials, Cut the Fluff.

Now, as a professional data scientist, no one expects you to derive AI algorithms from first principles. You also don’t need to dig extensively into the (relatively) trivial history behind each algorithm, or perform SVD (Singular Value Decomposition) or Gaussian elimination on a real matrix without a computer to assist you. An academic degree covers a great deal of material that is never used on the job! Yes, you need an intuitive idea of how the algorithms work, but unless you’re going into ML research, there is not much use in knowing, say, Jacobians or Hessians in depth. Professional data scientists work in very different domains from their academic counterparts. Learn what you need on the job. If you try to cover everything mentioned in class, you’ve already lost the race. Focus on learning the bare essentials thoroughly. You always have Google and StackOverflow to assist you, as long as you’re not writing an exam!

2. Learning from Instructors with Work Experience, not PhD scientists!

Now, from whom should you receive training: PhD academics who have published extensively but never worked on a real professional project, or instructors with real-life professional project experience? Very often, the teachers and instructors in colleges and universities belong to the former category. The latter category are rare and difficult to find, and you are lucky – even remarkably so – if you get to study under them. They can teach you in the context of real-life job experience, which is exactly what you need the most.

3. Working with the Latest Technology Stacks.

Now, who would be better able to land you a job: teachers who teach what they studied ten years ago, or professionals who work with the latest tools available in the industry? People with industry experience can help you choose which technologies to learn and master; academics, in comparison, may be working with technology stacks over ten years old! Please try to stick with instructors who have work experience.

4. Individual Attention.

In a college or a MOOC with thousands of students, it is simply not possible for each student to get individual attention. In a data science program, however, every student receives individual attention tailored to their needs, which is exactly what you want. Every student is different and will have their own understanding of the projects available. This customized attention, possible when batch sizes are under 30 or so, is the greatest advantage such students have over college and MOOC students.

5. GitHub Project Portfolio Guidance.

Every college lecturer will advise you to develop a GitHub project portfolio, but they cannot give your individual profile genuine attention: they have too many students and too many demands on their time to mentor individual project portfolios. Data science programs are different – it is genuinely possible for the instructors to mentor you individually in designing your project portfolio. Experienced industry professionals can even help you identify ‘niches’ within your field in which you can shine and carve out a special brand for your own project specialties, so that you can truly distinguish yourself and be a class apart from the rest of your competition.

6. Mentoring even After Getting Placed in a Company and Working by Yourself.

Trust me, no college professor will be able, or even available, to help you once you are placed in the industry, since your domains will be so different. It is a very different story with industry professionals who become instructors: you can contact them for guidance even after placement, which is simply not something most academic professors can offer, unless they too have industry experience – which is very rare.

7. Placement Assistance.

People who have worked in the industry know the importance of company referrals in the placement process. It is one thing to cold-call a company with no internal referral; having someone already established within the company you apply to can be the difference between a successful and an unsuccessful recruitment process. Industry professionals have contacts in many companies, which puts them in a unique position to help you when placement opportunities arise.

8. Learn Critical but Non-Technical Job Skills, such as Networking, Communication, and Teamwork


While it is important to know the basics, one reason why brilliant students do badly in the industry after they get a job is the lack of soft skills like communication and teamwork. A job in the industry is much more than the bare skills studied in class. You need to communicate effectively and work well in teams – skills that industry professionals can coach you in, but that professors usually cannot, since they have never worked in the industry themselves. Professionals know how to guide you on this aspect of your expertise, having been in that position and learned the necessary skills through their own job experience.

9. Reduced Cost Requirements

It is one thing to sponsor your own PhD doctoral fees. It is quite another to learn the very same skills for a small fraction of the cost of a PhD degree in, say, the USA. Not only is it financially less demanding, but you also don’t have to worry about paying off massive student loans through industry work and fat paychecks, often at the cost of compromising your health or your family’s needs. Why take a Rs. 75 lakh student loan when you can get the same outcome from a course costing a tiny fraction of that price? The takeaways will still be the same! In most cases, you will even receive better training through the data science program than through an academic qualification, because your instructors will have job experience.

10. Highly Reduced Time Requirements

A PhD degree takes, on average, five years. A data science program gets you job-ready in a few months’ time. You decide which is better for you. This is especially true if you already have job experience in another domain, or if you are older than 23-25: a full PhD program could put you on the wrong side of 30 with almost no job experience. Please go for the data science program – the time spent working in your 20s is critical, since most companies hiring today consider you a better ‘cultural fit’ for the company environment when you have less than 3-4 years of experience.

Summary

Thus, it is easy to see that in so many ways, a data science program can be much better for you than an academic degree. The critical takeaway of this article is that there is no need to spend Rs. 75 lakh or more for skills that you can acquire for Rs. 35,000 at most. It really is a no-brainer: these data science programs offer true value for money. In case you’re interested, please do check out the following data science programs, each of which has every one of the advantages listed above:

Data Science Programs Offered by Dimensionless.in

  1. Data Science with Python and R: https://dimensionless.in/data-science-using-r-python/ 
  2. Big Data Analytics and NLP: https://dimensionless.in/big-data-analytics-nlp/
  3. Deep Learning: https://dimensionless.in/deep-learning/

All the best and happy learning!

8 Data Science Projects to Build your Portfolio


Introduction

A decade ago, machine learning was simply a concept, but today it has changed the way we interact with technology. Devices are becoming smarter, faster, and better, with machine learning at the helm.

Thus, we have designed a comprehensive list of projects in our Machine Learning course that offers hands-on experience with ML and with building actual projects using machine learning algorithms. This course is a follow-up to our Introduction to Machine Learning course and delves deeper into the practical applications of machine learning.

Progressing step by step

In this blog, we will look at projects divided into two levels: beginner and advanced. Projects under the beginner heading cover the important concepts of a particular technique or algorithm, while projects in the advanced category involve applying multiple algorithms, along with key concepts, to reach the solution of the problem at hand.

Projects offered by Dimensionless Technologies

We have tried to take a more exciting approach to machine learning: not simply working on the theory of it, but using the technology to actually build real-world projects that you can use. You will learn how to write the code, see it in action, and learn how to think like a machine learning expert.

Following are some of the projects, among many others, that we cover in our courses:

Disease Detection — In this project, you will use classification algorithms such as k-nearest neighbors and support vector machines to help detect breast cancer malignancies.

Credit Card Fraud Detection — In this project, you will build a credit card fraud detector, focusing on anomaly detection using probability densities.

Stock Market Clustering Project — In this project, you will use the k-means clustering algorithm to identify related companies by finding correlations among stock market movements over a given time span.
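As a rough sketch of the clustering idea (with synthetic stand-in prices, since the course data is not shown here):

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

# Synthetic stand-in: random-walk 'prices' for 8 hypothetical tickers
rng = np.random.default_rng(0)
prices = pd.DataFrame(100 + rng.normal(0, 1, (250, 8)).cumsum(axis=0),
                      columns=list("ABCDEFGH"))

returns = prices.pct_change().dropna()         # daily returns per ticker
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(returns.T)
print(dict(zip(returns.columns, km.labels_)))  # ticker -> cluster id
```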

Beginners

1) Iris Flowers Classification ML Project – Learn about Supervised Machine Learning Algorithms

The iris flowers dataset is one of the best-known datasets in the classification literature, and the iris classification project is often referred to as the “Hello World” of machine learning. The dataset has numeric attributes, so beginners get to figure out how to load and handle data. It is also small enough to fit easily into memory and requires no special transformations or scaling to begin with.

Iris Dataset can be downloaded from UCI ML Repository — Download Iris Flowers Dataset

The goal of this machine learning project is to classify the flowers into one of three species – virginica, setosa, or versicolor – based on the length and width of the petals and sepals.
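A minimal sketch of the classic exercise, assuming scikit-learn (which bundles the dataset, so no download is strictly needed):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)           # 150 flowers, 4 measurements each
clf = KNeighborsClassifier(n_neighbors=5)
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation
print(scores.mean())                        # typically around 0.96-0.97
```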

2) Social Media Sentiment Analysis using Twitter Dataset

Platforms like Twitter, Facebook, YouTube, and Reddit generate huge amounts of big data that can be mined in various ways to understand trends, public sentiments, and opinions. A sentiment analyzer learns the sentiment behind a “content piece” through machine learning and predicts it using AI. Twitter data is considered a definitive entry point for beginners practicing sentiment analysis: the Twitter dataset offers a captivating blend of tweet contents and related metadata, such as hashtags, retweets, and location, which pave the way for insightful analysis. Using Twitter data, you can find out what the world is saying about a topic, whether it is movies or sentiment about any trending subject. Working with the Twitter dataset will help you understand the challenges associated with social media data mining and teach you about classifiers in depth.
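A hedged baseline sketch, assuming you have already collected labelled tweets (the four examples below are invented placeholders, not real data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = ["loved the movie", "worst service ever",
          "what a great day", "absolutely terrible experience"]  # placeholders
labels = [1, 0, 1, 0]                                            # 1 = positive

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression())
clf.fit(tweets, labels)
print(clf.predict(["great day at the movies", "terrible, just terrible"]))
```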

3) Sales Forecasting using Walmart Dataset

The Walmart dataset has sales data for 98 products across 45 outlets, recorded per store and per department on a weekly basis. The goal of this machine learning project is to forecast sales for each department in each outlet, which in turn helps make better data-driven decisions for channel optimization and inventory planning. The challenging aspect of working with the Walmart dataset is that it contains selected markdown events that affect sales and should be taken into consideration.

Want to work with Walmart Dataset? Access the Complete Solution Here — Walmart Store Sales Forecasting Machine Learning Project

4) Play Money Ball

In the book Moneyball, the Oakland A’s revolutionized baseball through analytical player scouting. They built a competitive squad while spending only a third of what large-market teams like the Yankees were paying in salaries.

First, if you haven’t read the book yet, you should check it out. Certainly, it’s one of our favorites!

Fortunately, the sports world has a ton of data to play with. Data for teams, games, scores, and players are all tracked and freely available online.

There are plenty of fun machine learning projects for beginners. For example, you could try…

  • Sports Betting… Predict box scores given the data available at the time right before each new game.
  • Talent scouting… Use college statistics to predict which players would have the best professional careers.
  • General managing… Create clusters of players based on their strengths in order to build a well-rounded team.

Sports is also an excellent domain for practicing data visualization and exploratory analysis. You can use these skills to help you decide which types of data to include in your analyses.

Data Sources

  • Sports Statistics Database — Sports statistics and historical data covering many professional sports and several college ones. The clean interface makes it easier for web scraping.
  • Sports Reference — Another database of sports statistics. More cluttered interface, but individual tables can be exported as CSV files.
  • cricsheet.org — Ball-by-ball data for international and IPL cricket matches. CSV files for IPL and T20 internationals matches are available.

5) Titanic Data Set

As the name suggests (no points for guessing), this dataset provides data on all the passengers who were aboard the RMS Titanic when it sank on 15 April 1912 after colliding with an iceberg in the North Atlantic Ocean. It is the most commonly used and referenced dataset for beginners in data science. With 891 rows and 12 columns, it provides a combination of variables based on personal characteristics, such as age, ticket class, and sex, and tests one’s classification skills.

Objective: Predict the survival of the passengers aboard RMS Titanic.
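A hedged baseline sketch, assuming the usual Kaggle train.csv with its standard columns (Survived, Pclass, Sex, Age, Fare):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("train.csv")                        # hypothetical local file
df["Sex"] = df["Sex"].map({"male": 0, "female": 1})  # encode the categorical
df["Age"] = df["Age"].fillna(df["Age"].median())     # simple imputation

X = df[["Pclass", "Sex", "Age", "Fare"]]
y = df["Survived"]
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())       # rough baseline accuracy
```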

Advanced-level projects

This is where an aspiring data scientist makes the final push into the big leagues. After acquiring the necessary basics and honing them in the beginner projects, it is time to confidently play the big game. These datasets provide a platform for putting all your learning to use and taking on new, more complex challenges.

1) Yelp Data Set

This dataset is part of the Yelp Dataset Challenge conducted by the crowd-sourced review platform Yelp. It is a subset of Yelp’s data on businesses, reviews, and users, provided by the platform for educational and academic purposes.

In 2017, the tenth round of the Yelp Dataset Challenge was held and the data set contained information about local businesses in 12 metropolitan areas across 4 countries.

Rich data comprising 4,700,000 reviews, 156,000 businesses, and 200,000 pictures makes this an ideal source for multi-faceted data projects. Natural language processing and sentiment analysis, photo classification, and graph mining, among others, are some of the projects that can be carried out using this diverse dataset. The data is available in JSON and SQL formats.

Objective: Provide insights for operational improvements using the data available.

2) Chicago Crime Data Set

With the increasing demand to analyze large amounts of data within short time frames, organizations prefer working with the full data directly rather than with samples. This presents a herculean task for a data scientist working under time constraints.

This dataset contains information on reported incidents of crime in the city of Chicago from 2001 to the present, excluding the most recent seven days. The exception to the incident-level format is murder data, which is recorded separately for each victim.

It contains 6.51 million rows and 22 columns, and poses a multi-class classification problem. To achieve mastery over working with abundant data, this dataset can serve as the ideal stepping stone.

Objective: Explore the data, and provide insights and forecasts about crimes in Chicago.

3) KDD Cup

The KDD Cup is a popular data mining and knowledge discovery competition held annually. It is one of the first-ever data science competitions, dating back to 1997.

Every year, the KDD Cup provides data scientists with an opportunity to work with datasets across different disciplines. Some of the problems tackled in the past include:

  1. Identifying which authors correspond to the same person
  2. Predicting the click-through rate of ads using the given query and user information
  3. Development of algorithms for Computer-Aided Detection (CAD) of early-stage breast cancer, among others

The latest edition of the challenge was held in 2017 and required participants to predict the traffic flow through highway tollgates.

Objective: Solve or make predictions for the problem presented every year.

Conclusion

Undertaking different kinds of projects is one of the best ways to progress in any field. It allows you to get hands-on experience with the problems faced during the implementation phase, and it is easier to learn concepts by applying them. Finally, you will have the feeling of doing actual work rather than being lost in the theoretical part.

There are wonderful competitions available on Kaggle and other similar data science competition platforms, so make sure you take some time out and jump into them. Whether you are a beginner or a pro, there is certainly a lot to learn from attempting these projects.