
Apache Spark, the end of MapReduce and Hadoop?

Is Apache Spark the end of MapReduce?

All the previous articles have discussed the Hadoop framework and the way it performs actions on data through MapReduce programs. However, the MapReduce framework has some drawbacks. Let us analyze the key differences between MapReduce and Apache Spark in detail.

MapReduce runs in three phases:
1) The mapper is fed data from HDFS, the corresponding metadata is fetched, and the map operation is applied to the data.
2) The intermediate key-value pairs returned by the mapper are written to the local filesystem instead of HDFS, so the reducer can operate on them.
3) The reducer picks up this intermediate data from the local filesystem (the mapper's output), applies the reduce operation, and writes the final output back to HDFS.
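
To make the three phases concrete, here is a minimal word count sketched against the Hadoop MapReduce API, written in Scala since it interoperates directly with the JVM classes. The class names are illustrative, not taken from the earlier articles.

import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
import org.apache.hadoop.mapreduce.{Mapper, Reducer}
import scala.jdk.CollectionConverters._ // on Scala 2.12 and older, use scala.collection.JavaConverters._

// Phase 1: the mapper reads records from HDFS and emits (word, 1) pairs.
class TokenizerMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
  private val one  = new IntWritable(1)
  private val word = new Text()
  override def map(key: LongWritable, value: Text,
                   context: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit =
    value.toString.split("\\s+").filter(_.nonEmpty).foreach { token =>
      word.set(token)
      context.write(word, one) // Phase 2: intermediate output spilled to the local filesystem, not HDFS
    }
}

// Phase 3: the reducer sums the counts per word and writes the result back to HDFS.
class IntSumReducer extends Reducer[Text, IntWritable, Text, IntWritable] {
  override def reduce(key: Text, values: java.lang.Iterable[IntWritable],
                      context: Reducer[Text, IntWritable, Text, IntWritable]#Context): Unit =
    context.write(key, new IntWritable(values.asScala.map(_.get).sum))
}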

However, there are inherent problems with this design. Imagine a case where a MapReduce program fails due to a network, read, or write error and no output is produced. This might not be a big issue if the input data is small or there is no urgent need for the output. However, when a complex MapReduce job running over huge data fails, we have no option except to rerun the entire job. In the case of bigger failures, such as a NameNode failure in the Hadoop cluster or a network outage, there is significant delay and interruption in the data pipeline.

The second major problem with the MapReduce framework is the complexity of joins. Any join over structured datasets is not an easy task; it takes considerable programming effort and a good understanding of the framework to implement.
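
For contrast, here is a rough sketch of the same kind of join in Spark's DataFrame API (Scala). The table names, join key, and file paths below are hypothetical, purely for illustration:

import org.apache.spark.sql.SparkSession

object ClaimsJoin {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("claims-join").getOrCreate()

    // Hypothetical inputs: a claims table and a members table stored as Parquet on HDFS.
    val claims  = spark.read.parquet("hdfs:///data/claims")
    val members = spark.read.parquet("hdfs:///data/members")

    // One line replaces what would be a hand-written reduce-side join in MapReduce.
    val joined = claims.join(members, Seq("member_id"), "inner")

    joined.write.parquet("hdfs:///data/claims_enriched")
    spark.stop()
  }
}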

Similar to Hadoop, Spark provides streaming support and makes it seamlessly possible to work in Java, Python, and Scala. However, since Spark is natively written in Scala, which runs on the JVM, it is generally better to develop programs in Scala.
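
As an illustration of how compact the Scala API is, here is a minimal Spark word count; it is only a sketch, and the input and output paths are hypothetical:

import org.apache.spark.sql.SparkSession

object WordCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("word-count").getOrCreate()
    val sc = spark.sparkContext

    // The whole map -> shuffle -> reduce cycle from the MapReduce example, in a few lines.
    val counts = sc.textFile("hdfs:///data/input.txt")
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.saveAsTextFile("hdfs:///data/word_counts")
    spark.stop()
  }
}

Spark avoids writing intermediate results to HDFS between stages and can keep data in memory, which is the key difference from the MapReduce flow described above.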

Another great advantage of Spark is the ability to cache data transformations. When applying transformations, one often needs to keep the current state of the data for later debugging and reuse. In a complex pipeline, there are several points where different teams consume the data after a given set of transformations. To understand this better, imagine a healthcare company processing medical claims data that is used by the data warehousing, software/app, and analytics/business teams. They have different use cases and require the data at different transformation levels. Here is an example pipeline.
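
The sketch below is a hedged illustration of that idea; the table names, columns, and HDFS paths are invented for this example. The cleansed claims DataFrame is cached once and then served to the warehousing, application, and analytics consumers without recomputing the shared transformations:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object ClaimsPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("claims-pipeline").getOrCreate()

    val rawClaims = spark.read.parquet("hdfs:///data/claims_raw") // hypothetical input

    // Shared cleansing step, cached so each downstream team reuses the same
    // in-memory state instead of re-running the transformations from scratch.
    val cleansed = rawClaims
      .filter(col("claim_status").isNotNull)
      .withColumn("claim_amount", col("claim_amount").cast("decimal(12,2)"))
      .cache()

    // Data warehousing team: the full cleansed feed.
    cleansed.write.parquet("hdfs:///warehouse/claims")

    // Software/app team: only the columns the application needs.
    cleansed.select("claim_id", "member_id", "claim_status")
      .write.parquet("hdfs:///apps/claims_slim")

    // Analytics and business teams: an aggregated view.
    cleansed.groupBy("provider_id")
      .agg(sum("claim_amount").as("total_amount"))
      .write.parquet("hdfs:///analytics/claims_by_provider")

    spark.stop()
  }
}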

