
Introduction

One of the most difficult tasks in Big Data is selecting an apt programming language for its applications. Most data scientists prefer R and Python for data processing and machine learning tasks, whereas Hadoop developers tend to prefer Java and Scala. Scala did not have much traction early on, but with key technologies like Spark and Kafka built on it, Scala has recently gained a lot of traction, and for good reason. It sits somewhere between R/Python and Java for the developers out there!

With Spark written entirely in Scala, more than 70% of Big Data practitioners use Scala as their programming language. With Spark enabling big data processing across multiple organisations and depending heavily on Scala, Scala is an important language to keep in your arsenal. In this blog, we look at how to implement a Spark application using Scala.

 

What is Spark?

Apache Spark is a MapReduce-based cluster computing framework which enables lightning-fast processing of data. Spark provides a more efficient layer over Hadoop’s MapReduce framework, adding stream/batch processing and interactive queries. Spark leverages its in-memory model to speed up processing within applications and make them lightning quick!

Spark is designed to handle a broad variety of tasks and computations, such as batch jobs, iterative and recursive algorithms, interactive queries, and live streaming data. Supporting all of these workloads in a single system reduces the management burden of maintaining separate tools.

 

What is Scala?

Scala is a programming language which mixes the object-oriented and functional paradigms, allowing applications to run at large scale. Scala is strongly influenced by Java and is a statically typed programming language. Java and Scala have a lot of similarities: to begin with, Scala is coded in a manner very similar to Java, and Scala can make use of many Java libraries and other third-party plugins.

Also, Scala is named after its headline feature, ‘scalability’, which most programming languages like R and Python lack to a great degree. Scala allows us to perform many common programming tasks in a cleaner, crisper and more elegant manner. It is a pure object-oriented programming language that also provides the capabilities of the functional paradigm.
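To make this concrete, here is a minimal sketch (not from the original post; the class and values are made up for illustration) of how Scala blends object-oriented constructs with functional, higher-order operations on immutable collections.

case class Employee(name: String, salary: Double)

object ScalaTaste {
  def main(args: Array[String]): Unit = {
    val team = List(Employee("Asha", 50000), Employee("Ravi", 65000))
    // Higher-order functions such as filter and map work on immutable collections
    val wellPaid = team.filter(_.salary > 55000).map(_.name)
    println(wellPaid) // List(Ravi)
  }
}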

 

Spark+Scala Overview

At a high level, each Spark application consists of a driver program that runs the user’s main function and executes various parallel operations on a cluster. The primary abstraction provided by Spark is the resilient distributed dataset (RDD), a collection of elements partitioned across the cluster nodes that can be operated on in parallel. RDDs can be created by parallelising an existing Scala collection in the driver program, or by transforming other RDDs, through the Spark context. Users may also ask Spark to keep an RDD in memory so that it can be reused efficiently across parallel operations.
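Below is a hedged sketch of the idea described above: parallelising a local Scala collection into an RDD, asking Spark to keep it in memory, and reusing it across two parallel operations. It assumes an existing SparkContext named sc, as provided by the spark-shell.

val numbers = sc.parallelize(1 to 1000000)
numbers.persist()                              // keep the RDD in memory for reuse
val evens = numbers.filter(_ % 2 == 0).count() // first parallel operation
val total = numbers.reduce(_ + _)              // second operation reuses the cached RDD
println(s"$evens even numbers, sum $total")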

 

Understanding MapReduce

MapReduce is an algorithm for distributed data processing, introduced by Google in one of its technical publications. The algorithm takes its inspiration from the functional paradigm of the programming world. In a cluster environment, MapReduce is highly efficient at processing large chunks of data in parallel. Powered by the divide-and-conquer technique, it takes care of dividing any input task into smaller subtasks so that they can be run in parallel.

MapReduce Algorithm uses the following three main steps:

  1. Map Function
  2. Shuffle Function
  3. Reduce Function

 

1. Map Function

Mapping is the first step of the MapReduce algorithm. It takes the input task and divides it into smaller, parallel executable sub-tasks. In the end, all the required computations happen over these sub-tasks, providing the real essence of parallel computation.

This step performs the following two sub-steps:

  • Splitting takes input data from a source and divides it into smaller datasets
  • Mapping happens after the splitting step. It takes the smaller datasets and performs the required computation on each of them

The output from this mapping module is a key-value pair of the form (Key, Value)

 

2. Shuffle Function

It is the second step in the MapReduce algorithm and works more like a combine function. It takes the list of key-value outputs from the mapper module and performs the following two sub-steps on every key-value pair:

  • Merging: This step combines key-value pairs having the same key. It can be viewed as an aggregate function applied over all keys, and it returns an unsorted list of the form (Key, List(Value)).
  • Sorting: This step is in charge of sorting the output of the merge step by key. It returns a (Key, List(Value)) list as a sorted key-value pair output.

 

3. Reduce Function

The reduce function is the final step of the MapReduce algorithm. It has only one task, the reduce step. Given the sorted key-value pairs as input, its sole responsibility is to run a reduce algorithm (a group function) that combines the values of each key into a single result.
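The following small, local illustration runs the three steps on plain Scala collections (the input lines are made up for the example); real MapReduce frameworks distribute the same work across a cluster.

val lines = List("spark is fast", "scala is fun")

// Map: split each line and emit (word, 1) pairs
val mapped = lines.flatMap(_.split(" ")).map(word => (word, 1))

// Shuffle: merge pairs sharing a key into (word, List(counts))
val shuffled = mapped.groupBy(_._1).map { case (word, pairs) => (word, pairs.map(_._2)) }

// Reduce: collapse each key's list of counts into a single total
val reduced = shuffled.map { case (word, counts) => (word, counts.sum) }
// reduced: Map(spark -> 1, is -> 2, fast -> 1, scala -> 1, fun -> 1)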

 

Understanding RDD

Spark as a framework is built heavily on top of its basic element, the RDD (Resilient Distributed Dataset). An RDD is an immutable, distributed collection of objects. Each RDD is split into logical partitions that can be computed on different nodes of the cluster. RDDs may contain any type of Python, Java, or Scala objects, including user-defined classes.
Formally, an RDD is a read-only, partitioned collection of records. RDDs can be created through deterministic operations on data in stable storage or on other RDDs. An RDD is a fault-tolerant collection of elements that can be operated on in parallel.

There are two different ways to create RDDs: parallelising an existing collection in the driver program, or referencing a dataset in an external storage system such as a shared filesystem or a Hadoop cluster.
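Here is a brief sketch of the two creation paths mentioned above; it assumes a SparkContext named sc, and the input path is hypothetical.

val fromCollection = sc.parallelize(Seq("spark", "scala", "hadoop"))
val fromStorage = sc.textFile("hdfs:///data/input.txt") // or a local or S3 path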

To accomplish fast and efficient MapReduce operations, Spark relies on this RDD notion, following the MapReduce flow discussed above.

 

Goal

Our goal in this article is to understand the implementation of Spark using Scala. Below, we will implement two pieces of Scala code to understand how it functions. The first will be the word count example, where we count the frequency of the words present in a document. Our second task will be to utilise Spark’s rich ML library and implement our first decision tree classifier!

 

Setting up Spark and Scala

Our first task is to install the required software before we can actually start implementing.

 

Installing Java

sudo apt-get install openjdk-7-jdk

Installing Scala

sudo apt-get install scala

Installing Spark

After downloading the pre-built Spark package from the Apache Spark downloads page, extract it:

tar -xvf spark-2.4.1-bin-hadoop2.7.tgz

Installing OpenSSH

sudo apt-get install openssh-server

Editing the bashrc file for spark shell

sudo nano ~/.bashrc
export PATH=$PATH:/usr/local/spark/bin

Running the spark shell

spark-shell

 

Understanding Word Count Example in Scala

 

Step 1: Creating a Spark Session

Every program needs an entry point to begin execution. In Scala, we do that through a Spark session object; the Spark session is the entry point from which our dataset is created and execution commences.
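A minimal sketch of creating this entry point is shown below; the application name and the local master are illustrative choices, not part of the original post.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("WordCountExample")
  .master("local[*]")
  .getOrCreate()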

 

Step 2: Data read and dataset formation

Data reading can be performed with many of the available APIs. We can use the read.text API, or the textFile API which is generally used to read RDDs.
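Continuing from the session created in Step 1 (the input path here is hypothetical), a sketch of the read might look like this:

import spark.implicits._

val lines = spark.read.textFile("data/input.txt") // Dataset[String], one element per line
// Alternative, RDD-style read: spark.sparkContext.textFile("data/input.txt")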

 

Step 3: Split and group by word

The Dataset API has a lot of features similar to the RDD API, such as map, groupByKey, etc. In the code below, lines are split by spaces to obtain words, and identical words are then grouped together.

Notice that we are not generating any (key, value) pair here. The reason is that, unlike an RDD, a Dataset is a higher-level abstraction: each value is a row with several columns, and any column can act as the grouping key, just as in a database.
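Continuing from the lines Dataset read in Step 2, the split and grouping could look like this sketch:

val words = lines.flatMap(line => line.split(" ")) // Dataset[String] of individual words
val grouped = words.groupByKey(identity)           // grouped by the word itself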

 

Step 4: Counting the occurrences of the words

After grouping all the words as keys, we can count the occurrences of each word in the set. For this, we can use the reduceByKey function of the RDD API, or directly employ the count function available on grouped Datasets.
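Continuing from the grouped words in Step 3, the count is a one-liner:

val counts = grouped.count() // Dataset[(String, Long)] of (word, occurrences)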

 

Step 5: Displaying the results

Once we are done counting the word occurrences, we need a way of displaying them. There are multiple ways of doing this; we will use the collect function to display our results in this case.
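Continuing from the counts Dataset in Step 4, a sketch of the display step:

counts.collect().foreach { case (word, n) => println(s"$word: $n") }
// counts.show() would also print a tabular preview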

You can find the implementation of the word-count example below

package com.oreilly.learningsparkexamples.mini.scala
import org.apache.spark._
import org.apache.spark.SparkContext._
object WordCountObject {
   def main(args: Array[String]) {
      val inputData = args(0)
      val outputData = args(1)
      val conf = new SparkConf().setAppName("wordCountExample")
      // Creating a SparkContext as the application entry point
      val sc = new SparkContext(conf)
      // Loading the input data
      val input = sc.textFile(inputData)
      // Splitting sentences into words based on spaces
      val all_words = input.flatMap(line => line.split(" "))
      // Transforming words into (word, count) key-value pairs
      val wordCounts = all_words.map(word => (word, 1)).reduceByKey { case (x, y) => x + y }
      // Saving the results to the output file
      wordCounts.saveAsTextFile(outputData)
   }
}

 

Summary

In this blog, we looked at the Scala programming language, showcasing its abilities from the functional paradigm and applying them with Spark. Spark has been a key player in the big data industry for quite a long time now, and it provides all of its full-fledged features in the form of Scala APIs. Hence it is very clear that in order to unlock Spark to its full potential, Scala is the way forward!

Follow this link, if you are looking to learn data science online!

You can follow this link for our Big Data course!

Additionally, if you are interested in learning Data Science, click here to start the Online Data Science Course

Furthermore, if you want to read more about data science, read our Data Science Blogs

Top 5 Statistical Concepts for Data Scientist

Concept of Cross-Validation in R

Top 10 Machine Learning Algorithms