Data Science Trend Forecast: 2020

Taking into consideration the positive trends in Data Science from previous years, an immense well of possibilities awaits us in the upcoming year, 2020. Some of the Data Science trend forecasts for 2020 are as follows:

  • Augmented Analysis

Complicated code and extensions will no longer be required to get deep insights from data. Augmented analytics lets lay users and analysts (without deep machine learning or data science expertise) make use of AI to analyze data. This will change the way data is consumed, created and shared across all data-intensive fields. Several BI and analytics tools are already trying to implement AI assistance at full strength in their platforms.

  • Continuous/ Real-time Intelligence

There is intensive activity going on every second on real-time platforms. If one can plug into this data, the real-time user experience can be enhanced manifold. Continuous or real-time intelligence aims to do just that by analyzing data in real time, so that instant results can be provided to users while they are still on the platform. It can also help increase profit margins by re-aligning the platform according to the observed interactions of the user.
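As a toy illustration of the idea (not tied to any particular product), the Python sketch below keeps a running click-through rate updated as events arrive, so that a decision could be taken while the session is still live; the simulated event stream and its fields are purely hypothetical.

# A minimal sketch of continuous intelligence: update a metric as each event
# arrives instead of waiting for a batch job. The event source is simulated.
import random
import time

def event_stream(n=20):
    """Simulate a live stream of user events (hypothetical schema)."""
    for _ in range(n):
        yield {"clicked": random.random() < 0.3}
        time.sleep(0.05)  # pretend events arrive over time

views, clicks = 0, 0
for event in event_stream():
    views += 1
    clicks += event["clicked"]
    # A real system would push this to a dashboard or trigger a re-ranking step.
    print(f"live click-through rate after {views} events: {clicks / views:.2f}")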

  • NLP

Natural Language Processing (NLP) is a very important segment of Artificial Intelligence, since most real-world data is in text or voice format. Processing such data requires advanced NLP techniques, which are being improved with each passing day. Today, we can read, understand, classify and even create unique text documents with the help of machines. Further developments such as intelligent summarization, entity recognition and task management from text input are expected, owing to intense research and an increasing number of data science experts choosing to specialise in NLP.
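Entity recognition of the kind mentioned above can already be tried out in a few lines. The sketch below assumes the spaCy library and its small English model (en_core_web_sm) are installed; it is only an illustration of what such tooling looks like, not a recommendation of any specific stack.

# A minimal named-entity recognition sketch, assuming spaCy is installed along
# with its small English model (python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Dimensionless Technologies is a training company based in Mumbai, India.")

# Each recognised entity comes with a label such as ORG, GPE or PERSON.
for ent in doc.ents:
    print(ent.text, ent.label_)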

  • Conversational technology

There has already been a visible surge in the performance of voice assistants in 2019. In 2020, they are expected to improve further, becoming more sensitive to human language and more humane in their responses. By more humane, we mean that the systems will be able to keep track of previous questions and responses (not a well-developed feature in any voice assistant on the market to this day). Most client interactions are also expected to be taken over by conversational technology, increasing response rates and efficiency.

  • Explainable AI

The last decade has certainly seen massive growth in AI-aided decisions, but it has been a persistent problem to explain these decisions, or why the AI prefers one option over another. Recently, however, a lot of research has expanded the scope of explainable AI. In 2020, further effort can be invested in understanding questions such as how and why a certain neural network arrived at a certain decision. This will definitely increase clients' faith in this still-maturing technology.
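One simple, model-agnostic way to peek inside such decisions is permutation importance, which measures how much a model's score drops when each feature is shuffled. The sketch below uses scikit-learn and its built-in breast cancer dataset purely as an illustration of the idea, not as a production explainability pipeline.

# A minimal sketch of a model-agnostic explanation: permutation feature
# importance with scikit-learn (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test accuracy drops.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

ranked = sorted(zip(data.feature_names, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")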

  • Persistent memory/ In-memory computation

In-memory computing (IMC) can deliver extremely high performance thanks to an optimized memory architecture. It has also become more feasible due to the decreasing cost of memory, driven by constantly emerging innovations.

  • Data Fabric

Data Fabric helps in the smooth access and sharing of data in distributed environments. It is usually custom made and helps in the transfer and storage of data, data pipelines, APIs and previously used data services that may be re-invoked. A trusted and efficient data fabric can help catalyze data science pipelines and reduce delays in customer-client interactions and iterations.

  • Advances in Quantum Computing

Research in Quantum Computing has very high momentum at the moment. Even though quantum computing architecture is still at a very basic stage, increased investment and research are helping the field grow with every passing day. A quantum computer is said to perform, in just a few seconds, calculations that would take conventional computers years. As remarkable as it sounds, it could bestow near-superpowers on mankind: imagine crunching years and years of historical data to arrive at conclusions about the future in just a few seconds. A whole lot of astonishing things await us, and we are fortunate to be part of this century.

It is expected that India's job openings in the analytics sector will double to about 200,000 (two lakh) jobs in 2020. Here is what 2020 will look like for job seekers in data science:

  • Fields like finance, IT, professional services and insurance will see a boom in demand for data science and analytics.
  • Having analytics skills such as MapReduce, Apache Pig, Machine Learning and Hadoop can provide an edge over other candidates in the field. The most fundamental in-demand skills will be Python and Machine Learning, with Statistics as an added advantage.
  • Vacancies for roles like data developers, data engineers and data scientists will go over 700,000 by 2020.
  • The most promising sectors that will tend to create increasing opportunities include Aviation, Agriculture, Security, Healthcare and Automation.
  • The average salaries in India in development roles like Data Scientist or Data Engineer will range from 5 to 8 Lakh per annum.
  • The average salaries in India in management/strategizing roles like data architect or business intelligence manager will range from 10 to 20 Lakh per annum.

As exciting as all of it sounds, there is always a bag of unforeseen advancements that are bound to take us all by surprise, as has always happened with Data Science and AI in the past. So, hold tight for yet another mind-boggling ride through the lanes of technology this 2020!

Best of 2019 Trends in Data Science

Data Science has seen a massive boom in the past few years. It has also been claimed to be one of the fastest-growing fields in the IT and academic sectors. One of the most notable Trends in Data Science this year was a major hike in jobs compared to previous years!

Such unprecedented growth owes its dues to the unimaginable benefits that artificial intelligence has brought to mankind for the very first time. It was never before imagined that machines could aid us with the sophistication that is present today. Owing to this, it is imperative that every individual, irrespective of his or her calling, has at least a superficial knowledge of the past advances and future possibilities of this field of study. Even if it is the job of scientists and engineers to figure out solutions using machine learning and data science, those solutions are undoubtedly bound to affect all our lives in the upcoming years. Moreover, if you are planning to plug into the huge well of job openings in data science, exploring the past and upcoming trends in this field will surely take you a step ahead.

Looking back on the achievements of the year 2019, there is much which has happened. Here is a brief glimpse of what Trends in Data Science of 2019 looked like:

  • Accessible AI

The once-popular belief that AI technology was only meant for large-scale, high-tech industries is now an old wives' tale. AI has spread so rapidly across every phase of our lives that sometimes we do not even realize we are being aided by it. For instance, we have become very used to the recommendations we get on online platforms, yet few of us are consciously aware that these recommendations are driven by AI. There are also several instances where a layperson can use AI to get optimized outputs, as in automated machine learning pipelines. We even have AI-aided security systems, music systems and voice assistants in our very homes! Overall, the impact of AI on everyday life saw a massive boost in 2019, and it is only bound to increase.

  • The rapid growth of IoT products

As was already forecast, the number of machines and devices that came online in 2019 was immense. Billions were invested in research to back the rising IoT industry. Today it is nothing out of the ordinary to control home appliances like televisions and air conditioners with our smartphones, or to lock and unlock our cars from the opposite end of the globe. Bringing devices online not only makes the user experience far smoother but also generates crucial data for analysis. With such data, several unopened gates can be explored across many domains. The investment in, and count of, IoT devices are expected to go up at an increasing rate in the upcoming years.

  • Evolution of Predictive Analysis

The concept of predictive analysis is to learn recurring patterns from past data, so that outcomes of future events can be predicted based on the patterns learnt. Today, with ever-increasing data volumes, it becomes extremely important to use optimized predictive solutions. This is where big data comes into the picture, and significant advances were made here in 2019. Tools like PySpark and MLlib have helped scale simple predictive solutions to massive datasets.
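As a rough sketch of what such a scalable pipeline looks like, the snippet below trains a PySpark MLlib classifier on a dataset that could be far larger than a single machine's memory. The file name and column names are hypothetical, and a working PySpark installation is assumed.

# A minimal PySpark MLlib predictive pipeline (illustrative; file and column
# names are hypothetical).
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("predictive-demo").getOrCreate()

# Load historical event data with a numeric label column.
df = spark.read.csv("events.csv", header=True, inferSchema=True)

# MLlib expects all features packed into a single vector column.
assembler = VectorAssembler(inputCols=["feature1", "feature2"], outputCol="features")
train = assembler.transform(df)

# Fit a simple classifier on past data to predict the outcome of future events.
model = LogisticRegression(featuresCol="features", labelCol="label").fit(train)
model.transform(train).select("label", "prediction").show(5)

spark.stop()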

  • Migration of Dark Data

Dark data is very old data that has probably been sitting in obsolete archives, such as legacy systems or even files in storage rooms! There is a general understanding that such unexplored data can point us to crucial insights about past trends, which can help grab useful opportunities and avoid past mistakes. Therefore, there have been visible initiatives to make dark data more available to present-day systems with the help of efficient storage and migration tools.

  • Implementation of Regulations

In 2018, the General Data Protection Regulation (GDPR) introduced data governance rules that underlined the importance of data governance. The rules were laid down so quickly that, even at the end of 2019, several companies dealing with data were still trying to comply fully with all the principles. These principles have not only set a standard for data consumption and data handling, but are also bound to shape the future of data handling with great impact.

  • DataOps

DataOps is an initiative to bring some order to the way the data science pipeline functions. It is essentially a reflection of agile and DevOps methods in the field of data science. In 2019, integrating DataOps into their respective teams was one of the major concerns of data science management. Previously, such integration was not possible, since the generic pipeline was still in the making or under research. Now, with a more robust structure, integrating DataOps can do wonders for data science teams.

  • Edge Computing

As stated by Gartner, Inc., cloud computing and edge computing evolved into complementary models in 2019. Edge computing follows the principle that the closer the computation is to the source of the data, the better the efficiency. It allows workloads to be located closer to consumers and thus reduces latency several-fold.

There is, however, a huge recurring gap between the need for and the availability of skilled people who can launch and contribute to these developments. India contributed 6% of job openings worldwide in 2019, which scales to around 97,000 jobs!

The job trends of 2019 looked as follows:

  • BFSI sector had a massive demand for analytics professionals, followed by the e-commerce and telecom sectors. The banking and financial sectors continued to have high demand throughout.
  • Python remained a key skill for job seekers to attract employers
  • A 2% increase in jobs offering over 15 Lakh per annum was observed
  • Also, 21% of jobs demanded young talent in data science, a great contrast to all previous years. 70% of job openings were for professionals with less than 5 years of experience.
  • The top in-demand designations were Analytics Manager, Business Analyst, Research Analyst, Data Analyst, SAS Analyst, Analytics Consultants, Statistical Analyst and Hadoop Developer
  • Big data skills like Hadoop and Spark were extremely in demand due to the growing rate of data.
  • Telecom industry saw a fall in demand for data science professionals.
  • The median salary of analytics jobs was just over 11 Lakh per annum.

On to 2020 now!

Read more of our blogs here!

Accurate Bitcoin Price Forecasting with Python – Blockchain Applications of Data Science Part 2

Recap

In Part 1 of Blockchain Applications of Data Science on this blog, we discussed how the world could be made much more profitable, not just for a select set of the super-rich, but for the common man, for anyone who participates in creating a digitally trackable product. We discussed how large-scale adoption of cryptocurrencies and blockchain technology worldwide could herald a change in the economic demography of the world that could last for generations to come. In this article, we discuss how AI and data science can be used to tackle one of the most pressing questions of the blockchain revolution: how to model the future price of the Bitcoin cryptocurrency for trading at a massive profit.

A Detour

But first, we take a short detour to explore an aspect of cryptocurrency that is not commonly talked about. Looking at the state of the world right now, it deserves more discussion, and I feel compelled to share it with you before we skip to the juicy part about cryptocurrency price forecasting.

The Environmental Impact of Cryptocurrency Mining

Now, two fundamental assumptions: that you have read Part 1, and that you have gone through the visual guide to how cryptocurrencies work linked from it. In case you missed the latter, here is a link for you to check again.

The following articles speak about the impact of cryptocurrency mining on the environment. Read at least one of them, at least partially, so that you will understand as we progress with this article:

https://www.theguardian.com/technology/2018/jan/17/bitcoin-electricity-usage-huge-climate-cryptocurrency

So cryptocurrency mining involves a huge wastage of computational resources and energy – enough electrical power to run an entire country. This is mainly due to the Proof-of-Work (PoW) mining system used by Bitcoin. For more, see the following article:

https://www.investopedia.com/tech/whats-environmental-impact-cryptocurrency/

In PoW mining, miners compete against each other in a desperate race to see who can find the solution to a mathematical hashing problem the quickest. And in every race, only one miner is rewarded with the Bitcoin value.
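To see why this race is so wasteful, here is a toy Python illustration of the kind of brute-force search PoW requires. The difficulty and block contents are made up; real mining uses double SHA-256 over a block header at an enormously higher difficulty.

# A toy proof-of-work search: find a nonce whose hash starts with several
# leading zeros. Every miner repeats work like this, but only one wins.
import hashlib

def mine(block_data, difficulty=4):
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1  # real miners perform billions of such guesses per second

nonce, digest = mine("block 42: a batch of transactions")
print("winning nonce:", nonce)
print("block hash:", digest)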

Ethereum goes Green! (From Pixabay)

 

In a significant step forward, Vitalik Buterin's Ethereum cryptocurrency is moving to a Proof-of-Stake (PoS) based mining system. This makes the mining process significantly less energy intensive than PoW; some claim PoS may be up to 99.9% more energy efficient. Whatever the exact figures, a PoS-based mining process is a big step forward and may completely change the way environmentalists feel about cryptocurrencies.

So by shifting to PoS mining, a huge amount of energy can be saved. That is a caveat you need to remember and be aware of, because Bitcoin still uses PoW mining only. It would be a dream come true for environmentalists if Bitcoin could shift to PoS mining. Let's hope and pray that it happens.

Now back to our main topic.

Use AI and Data Science to Predict Future Prices of Cryptocurrency – Including the Burst of the Bitcoin Bubble

What is a blockchain? A distributed database that is decentralized and has no central point of control. As of February 2018, the Bitcoin blockchain on a full node was around 160 GB in size; by April 2019, it had grown to 210 GB. So this is the question I am going to pose to you: would it be possible to use the data in this distributed database to identify patterns and statistical invariances, so as to invest minimally for the maximum possible profit? Can we build models to forecast the prices of cryptocurrency in the future using AI and data science? The answer is a definite yes.

Practical Considerations

You may wonder whether applying data science techniques and statistical analysis can actually produce information that helps in forecasting the future price of bitcoin. I came across a remarkable kernel on www.Kaggle.com (a website where data scientists practice problems and compete with each other) by a user with the handle wayward artisan and the profile name Tania J. I thought it was worth sharing, since it is a statistical analysis of the rise and fall of the bitcoin bubble, vividly illustrating how statistical methods helped this user forecast the future price of bitcoin. The entire kernel is very large and interesting; please do visit it at the link given below. Only the start and the middle section of the kernel are reproduced here, because of space and intellectual property considerations.

Kaggle – Your home for data science.

A Kaggle Kernel That Modelled the Bitcoin Bubble Burst Within Reasonable Error Limits

The following kernel uses cryptocurrency financial data scraped from www.coinmarketcap.com. It is a sobering example of how such predictions captured the collapse of the bitcoin bubble. Coming across this kernel was one of the main motivations for writing this article. I have omitted a lot of details, especially the building of the model and the analysis of its accuracy; I just wanted to show that it is possible.

For more details, visit the kernel on Kaggle at the link:
https://www.kaggle.com/taniaj/cryptocurrency-price-forecasting (Please visit this page, all aspiring data scientists. And pay attention to every concept discussed and used. Use Google and Wikipedia and you will learn a lot.)

A subset of the code is given below (the first section):

import pandas as pd
from pandas import DataFrame
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from datetime import datetime, timedelta
from statsmodels.tsa.arima_model import ARIMA
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.seasonal import seasonal_decompose
from scipy import stats
import statsmodels.api as sm
from itertools import product
from math import sqrt
from sklearn.metrics import mean_squared_error 

import warnings
warnings.filterwarnings('ignore')

%matplotlib inline

colors = ["windows blue", "amber", "faded green", "dusty purple"]
sns.set(rc={"figure.figsize": (20,10), "axes.titlesize" : 18, "axes.labelsize" : 12, 
            "xtick.labelsize" : 14, "ytick.labelsize" : 14 })

<subsequent code not shown for brevity>

The dataset is available at the following link as a CSV file (which can be opened in Microsoft Excel):

 

We focus on one of the middle sections, where the first ARIMA model with SARIMAX (do look up ARIMA and SARIMAX on Wikipedia and Google) does the actual prediction at the time the bitcoin bubble burst (only a subset of the code is shown). Visit the Kaggle kernel page at the link given earlier to see the entire code:

<data analysis and model analysis code section not shown here for brevity>
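Since the model-fitting code is omitted above, here is a rough, purely illustrative sketch of how such a model is typically specified: the monthly closing prices are Box-Cox transformed, and a seasonal SARIMAX with the (1, 1, 0)x(1, 1, 1, 4) order reported in the results table further below is fitted on the transformed series. This is not the kernel's exact code; the variable names simply mirror those used in the prediction snippet that follows.

# Rough sketch of the omitted model-fitting step (illustrative only).
# btc_month is assumed to be the monthly-resampled DataFrame with a 'Close' column.
btc_month['close_box'], lmbda = stats.boxcox(btc_month['Close'])

best_model = sm.tsa.statespace.SARIMAX(btc_month['close_box'],
                                       order=(1, 1, 0),
                                       seasonal_order=(1, 1, 1, 4)).fit(disp=False)
print(best_model.summary())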

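# Inverse Box-Cox transform: maps forecasts on the transformed scale back to the original price scale.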
def invboxcox(y, lmbda):
    if lmbda == 0:
        return np.exp(y)
    else:
        return np.exp(np.log(lmbda * y + 1) / lmbda)


# Prediction
btc_month_pred = btc_month[['Close']]
date_list = [datetime(2018, 6, 30), datetime(2018, 7, 31), datetime(2018, 8, 31)]
future = pd.DataFrame(index=date_list, columns= btc_month.columns)
btc_month_pred = pd.concat([btc_month_pred, future])

#btc_month_pred['forecast'] = invboxcox(best_model.predict(start=0, end=75), lmbda)
btc_month_pred['forecast'] = invboxcox(best_model.predict(start=datetime(2014, 1, 31), end=datetime(2018, 8, 31)), lmbda)

btc_month_pred.Close.plot(linewidth=3)
btc_month_pred.forecast.plot(color='r', ls='--', label='Predicted Close', linewidth=3)
plt.legend()
plt.grid()
plt.title('Bitcoin monthly forecast')
plt.ylabel('USD')

<more code, not shown>

(From the Kaggle kernel code)

This code, together with the earlier code in the kernel (not shown for brevity) that built and validated the model, gave the following predictions as output:

Bitcoin price forecasting at the time of the burst of the Bitcoin bubble

What do we learn? Surprisingly, the model captures the Bitcoin bubble burst with a remarkably accurate prediction (error levels ~ 10%)!

Conclusion

So, do AI and data science have anything to do with blockchain technology and cryptocurrency? The answer is a resounding yes. Expect data science, statistical analysis, neural networks, and probability distributions to play a heavy part when you want to forecast cryptocurrency prices.

For all the data science students out there, I am going to include one more screen from the same kernel on Kaggle (link):

The reason I want to show you this screen is that terms like kurtosis and heteroskedasticity are statistical concepts you need to master in order to conduct forecasts like this, mainly so that you can analyze the accuracy of the model you have constructed. The output window is given below:

      parameters         aic
20  (1, 0, 1, 1) -198.417805
18  (1, 0, 0, 1) -198.151123
6   (0, 1, 0, 1) -197.632770
8   (0, 1, 1, 1) -197.415548
22  (1, 0, 2, 1) -197.335392
                                 Statespace Model Results                                
=========================================================================================
Dep. Variable:                         close_box   No. Observations:                   63
Model:             SARIMAX(1, 1, 0)x(1, 1, 1, 4)   Log Likelihood                 103.209
Date:                           Sun, 03 Jun 2018   AIC                           -198.418
Time:                                   19:11:23   BIC                           -189.845
Sample:                               04-30-2013   HQIC                          -195.046
                                    - 06-30-2018                                         
Covariance Type:                             opg                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1          0.3760      0.121      3.119      0.002       0.140       0.612
ar.S.L4       -0.2801      0.085     -3.312      0.001      -0.446      -0.114
ma.S.L4       -0.7974      0.192     -4.163      0.000      -1.173      -0.422
sigma2         0.0015      0.000      5.331      0.000       0.001       0.002
===================================================================================
Ljung-Box (Q):                       29.83   Jarque-Bera (JB):                94.63
Prob(Q):                              0.88   Prob(JB):                         0.00
Heteroskedasticity (H):               0.21   Skew:                             1.21
Prob(H) (two-sided):                  0.00   Kurtosis:                         8.77
===================================================================================

Warnings:
[1] Covariance matrix calculated using the outer product of gradients (complex-step).

So yes, blockchain technology and cryptocurrencies have a lot of overlap with data science applications. But also remember, data science can be applied to any field where finance is a factor.

For more on blockchain and data science, see:

A Beginner’s Guide to Big Data and Blockchain

Enjoy data science!

Follow this link, if you are looking to learn data science online!

You can follow this link for our Big Data course, which is a step further into advanced data analysis and processing!

Additionally, if you are interested in learning Data Science, click here to start the Online Data Science Course

Furthermore, if you want to read more about data science, read our Data Science Blogs

A Personal Digital Assets Manager – Blockchain Applications of Data Science Part 1

The Potential of Blockchain Technology

 

Unless you’ve been living with your head under a rock for the last four years, you will definitely have heard of Bitcoin. You would also have heard about the technology behind Bitcoin: blockchain. Cryptocurrencies are currently heavily restricted or banned in India and China, but the Americas and Europe still use them extensively. And in my opinion, Asia stands to lose a lot if blockchain is not adopted extensively everywhere. Because make no mistake about it – blockchain technology will change the world as we know it. Forever.

Blockchain is the technology powering Bitcoin and other cryptocurrencies. To understand what blockchain and bitcoin are, you can go through any one of the articles below. Don’t worry, these articles are carefully selected to be as interesting and fun to read as possible. (This also gives me space to add my own original ideas instead of copying or rewording existing articles – and I have plenty of ideas!)

References to Understand Blockchain

For technical readers:

Wiki-Blockchain

For non-technical readers:

What’s blockchain in plain English – Quora

For the researchers:

NCBI-Article

For those of you with no time and who like visual explanations:

Graphics Reuters- Visual Explanations

In fact, that last link is so amazingly simple, visual and clear that I recommend everyone read it, just so that we’re on the same page.

Exciting Applications

Bitcoin (From Pixabay)

 

Cut to the chase. A little confession here: I was asked to write this article nearly 16 days ago. I have some prior experience with blockchain, having gone through it extensively as a research topic for my own blog. Then a remarkable idea hit me: an idea for a startup that could (in theory) become a multi-billion dollar enterprise. I spent a few days refining it, even going so far as to see if I could start a company in this area myself, until reality set in – I lacked the experience and the business skills.

No sooner had this realization struck me and the excitement cooled a little than another idea to improve blockchain struck me, and I promise to sketch out that idea as well. I am doing this for the following reasons:

  • I am a staunch supporter of the FOSS (free and open source software) movement and would like to be credited with the idea. I am starting a free-to-use, open source project on GitHub – currently working towards an alpha release.
  • I believe in the power of technology to remove economic inequality. Now you may say that technology has evolved to the point that 4-5 monolithic companies dominate the entire world. But I believe that technology when used ethically has the potential to create more opportunities than it removes.
  • Blockchain has two major problems – energy consumption and resource consumption. But there are techniques that can alleviate both of these problems. We’ll deal with that as well in Part 2.
  • Finally, the vaunted hype about security for blockchain and cryptocurrencies is ridiculous when you think about it. For the sake of brevity, I will address the main security issues with blockchain in a separate article on Medium – (not here, since it has no relation to data science).

Application – A Personal Blockchain For Every Person On The Planet

In points (I assume you’ve gone through the graphical explanation of blockchain at least – if not you can review it here):

  • The trouble with end products of all types produced today is that there are so many intermediaries between the producer and the consumer that the producer receives a pittance compared to the final price. It would be nice if we could track a product everywhere it is used.
  • This also applies to books, music, articles, poems, pictures – digital content of any sort. Currently, Amazon and YouTube monopolize content distribution, the latter with a complete disregard for copyright, media ownership and payment. Suppose we had a tracking system that recorded every view of a video and rewarded the original producer for it?
  • To emphasize the previous point, consider the case of Lindsey Stirling, a famous contemporary violinist who dances while playing. Her 118 video uploads have earned her 2,575,305,706 views (approximately 2.5 billion), and her earnings from YouTube ads last month were about 100K. Her net worth as of 10th April 2019 is 12 million USD (12,000,000).
  • But suppose Lindsey Stirling distributed her videos at a price of 1 USD per view. Her net worth would be 2.6 billion USD at the very least! She would be a multi-billionaire had this platform existed. It doesn’t – yet. And because it doesn’t exist, she is 2.49 billion USD poorer!

Everyone who knows blockchain technology will now see the idea, the concept, and how blockchain can be used to overcome this problem – and its power. Disruptive power!

The Solution

The blockchain is a service that immutably assigns ownership.

The blockchain is also a database that stores every single transaction on a particular digitisable entity.

Finally, the Ethereum smart contract technology means that we can assign payments to go to every person on his own personal blockchain of all his digitisable goods.

This means we can build a world where a user-defined amount is paid to every entity that helped create a particular digitisable product.

On this platform or website or marketplace, producers can adjust their prices and their payments and consumers can buy directly from them.

Everything can be tracked on the blockchain. Your own database of your own transactions can be used with smart contracts to pay the maximum possible fee to the most deserving person in the supply chain – fixed by each producer.
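To make the idea concrete, here is a minimal, purely illustrative Python sketch of an append-only ledger in which each record of an asset's views, sales or royalty payments is chained to the previous record by its hash, so past entries cannot be quietly altered. A real implementation would of course live on an actual blockchain with smart contracts, not in a single Python list; the asset IDs and amounts below are made up.

# A toy append-only ledger: each entry embeds the hash of the previous entry,
# so tampering with any past record breaks the chain. Illustrative only.
import hashlib
import json
import time

ledger = []

def add_entry(asset_id, event, amount_usd):
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    record = {
        "asset_id": asset_id,
        "event": event,          # e.g. "view", "sale", "royalty payment"
        "amount_usd": amount_usd,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    ledger.append(record)
    return record

add_entry("video:example-001", "view", 1.00)
add_entry("video:example-001", "view", 1.00)
print(len(ledger), "entries, latest hash:", ledger[-1]["hash"][:16], "...")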

Hugely, Massively Disruptive

If you are interested or want to know more, you can leave a comment below with your email address. If you want to be a part of this new revolution and the new decentralised world – with all services provided free – please provide a comment below asking for my email ID with a statement of what and how you want to contribute to this endeavor. I promise to reply to every sincere query.

This is a fledgling project and a lot of work remains to be done. I will be writing articles and creating a team to work on this idea. Those of you who are interested please mail me at thomascherickal@gmail.com.

This will be an open source project and all services have to be offered free of cost. How do you go about making a profit from this? You don’t! The only way this can be fair to all players in countries like India is if it is specially designed to be applicable to anyone.

So this article gave a small glimpse into a world without intermediaries, corporations or money-making middlemen, running purely on smart contracts. It is relevant to AI and data science because such a platform will not get anywhere significant without extensive use of both.

The more data that is available, the more analysis can be performed on it. And unless we have analysts running monitoring and fraud detection systems full-time on such a platform, we might as well never build it – because while blockchain data integrity cannot be hacked, cryptocurrency exchanges and wallets are hackable, and have been hacked extensively since the beginning of Bitcoin.

For Part 2 of this series on Blockchain Applications of Data Science, you can go to the link below:

https://dimensionless.in/how-a-kaggle-jupyter-notebook-with-a-python-kernel-correctly-predicted-the-burst-of-the-bitcoin-bubble-with-reasonable-accuracy

For more on AI and Blockchain, I suggest that you refer to the article below:

A Beginner’s Guide to Big Data and Blockchain

and you can go to the link below to access our course on Python and R.

Data Science using R & Python

As always, enjoy artificial intelligence! You are privileged to be at the forefront of humanity’s push into the new uncharted future. All the best!