
Posts

Apache Spark, the end of MapReduce and Hadoop?

Is Apache Spark the end of MapReduce? All the previous articles have discussed the Hadoop framework and the way it performs actions on data through MapReduce programs. However, there are some drawbacks to the MapReduce framework. Let us analyze the key differences between MapReduce and Apache Spark in detail. MapReduce runs in three phases. 1) The mapper is fed data from HDFS, the corresponding metadata is fetched, and the map operation is applied to the data. 2) The intermediate data is written to the local filesystem instead of HDFS, so that the reducer can operate on the key-value pairs returned by the mapper. 3) The reducer picks up the data from the local filesystem (the mapper's output) and writes its output back to HDFS. However, there are inherent problems with this approach. Imagine a case where the MapReduce program fails due to some network/read/write error and the output is never produced. This might not be a big issue in case the data read is sma...
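To make the contrast concrete, here is a minimal sketch (not from the original post) of the sales-per-item aggregation used in the later examples, written against Spark's Java API. The input path, app name and class name are assumptions for illustration; the point is that the intermediate (item, amount) pairs stay in memory between the map and reduce steps instead of being spilled to the local filesystem between phases.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SalesPerItemSpark {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("SalesPerItem").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Read the comma-separated sales file (item,units,unitCost); path is a placeholder
        JavaRDD<String> lines = sc.textFile("hdfs:///user/cloudera/SampleInput.txt");

        // Map each line to (item, units * unitCost); this intermediate data stays in memory
        JavaPairRDD<String, Integer> sales = lines.mapToPair(line -> {
            String[] parts = line.split(",");
            return new Tuple2<>(parts[0], Integer.parseInt(parts[1]) * Integer.parseInt(parts[2]));
        });

        // Aggregate per item without writing intermediate results to disk
        JavaPairRDD<String, Integer> totals = sales.reduceByKey(Integer::sum);

        totals.collect().forEach(t -> System.out.println(t._1() + " " + t._2()));
        sc.stop();
    }
}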
Recent posts

Let us 'Sqoop' it!

SQOOP - the bridge between traditional and novel big data systems. By now, we have seen articles about writing MapReduce programs using the Hadoop MapReduce framework. However, all the operations were actually performed on sample text files. Here comes Sqoop to our rescue. Apache Sqoop is a tool developed to fetch/put data from traditional SQL-based data storage systems like MySQL, PostgreSQL and MS SQL Server. Sqoop can also be used to fetch/push data from NoSQL systems too. This versatility comes from Sqoop's architecture, which is built on top of the MapReduce framework. Sqoop has an extension framework that makes it possible to import from and export to any external storage system that has bulk data transfer capabilities. A Sqoop connector is a modular component that uses this framework to enable Sqoop imports and exports. Sqoop comes with connectors for working with a range of popular databases including MySQL, PostgreSQL, Oracle, SQL Server, DB2 and Netezza. Apart from the above connectors, Sqoop als...
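As a rough illustration (not from the original post), the sketch below kicks off a Sqoop import from Java, assuming Sqoop 1.x's org.apache.sqoop.Sqoop.runTool entry point is on the classpath; the MySQL URL, credentials, table name and target directory are placeholders you would replace with your own.

import org.apache.sqoop.Sqoop;

public class SqoopImportSketch {
    public static void main(String[] args) {
        // Placeholder connection details; swap in your own MySQL host, credentials and table
        String[] importArgs = new String[] {
            "import",
            "--connect", "jdbc:mysql://localhost/salesdb",
            "--username", "root",
            "--password", "cloudera",
            "--table", "ITEM_SALES_RECORD",
            "--target-dir", "/user/cloudera/item_sales",
            "-m", "1"
        };
        // Sqoop translates these arguments into a MapReduce job that pulls the table into HDFS
        int exitCode = Sqoop.runTool(importArgs);
        System.exit(exitCode);
    }
}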

Hive Example

Use Case: A supermarket records its sales in a file. Whenever an item is sold, the name of the item, the number of units sold and the cost of each unit are written in a comma-separated format. A sample file would look like below: Apple,10,10 Mango,20,5 Guava,10,3 Banana,30,4 Apple,10,5 At the end of the day we are required to find the total sales per item. Expected Output: Apple 150 Mango 100 Guava 30 Banana 120 Implementing in HIVE Getting started with HIVE: Open a terminal and type hive; this will open the Hive shell. Create and use the sales database: Create database: create database salesdb; Use the database: use salesdb; Create the sales table: CREATE TABLE ITEM_SALES_RECORD ( ITEM_NAME string, UNITS int, UNIT_COST decimal) ROW FORMAT DELIMITED FIELDS TERMINATED BY "," LINES TERMINATED BY "\n"; NOTE: Table names and column names are not case sensitive. Insert data into table from file: Use the java file to gener...
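The preview cuts off before the actual aggregation, so here is a minimal sketch (not from the original post) of how the total-sales query could be run from Java over Hive's JDBC driver. It assumes HiveServer2 is listening on localhost:10000 on the VM and that the ITEM_SALES_RECORD table above has already been loaded; the host, port and user are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveSalesQuerySketch {
    public static void main(String[] args) throws Exception {
        // Loads the Hive JDBC driver (bundled with the hive-jdbc jar)
        Class.forName("org.apache.hive.jdbc.HiveDriver");

        // Assumed HiveServer2 endpoint on the Cloudera VM; adjust host, port and user as needed
        Connection con = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/salesdb", "cloudera", "");
        Statement stmt = con.createStatement();

        // Total sales per item = sum of (units * unit cost) grouped by item name
        ResultSet rs = stmt.executeQuery(
                "SELECT ITEM_NAME, SUM(UNITS * UNIT_COST) AS TOTAL_SALES "
              + "FROM ITEM_SALES_RECORD GROUP BY ITEM_NAME");
        while (rs.next()) {
            System.out.println(rs.getString(1) + " " + rs.getString(2));
        }
        rs.close();
        stmt.close();
        con.close();
    }
}

Running the same SELECT ... GROUP BY statement directly in the hive shell produces the expected output shown above (Apple 150, Mango 100, Guava 30, Banana 120).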

Launching Map Reduce Job

Use Case: A supermarket records its sales in a file. Whenever an item is sold, the name of the item, the number of units sold and the cost of each unit are written in a comma-separated format. A sample file would look like below: Apple,10,10 Mango,20,5 Guava,10,3 Banana,30,4 Apple,10,5 At the end of the day we are required to find the total sales per item. Expected Output: Apple 150 Mango 100 Guava 30 Banana 120 Input file: Download the input file from the below location https://github.com/nachiketagudi/Map-Reduce/blob/master/SampleInput.txt and place this file at /home/cloudera. Map Reduce Executable File: Download the java file from the below location https://github.com/nachiketagudi/Map-Reduce/blob/master/SalesPerItem.java In the Cloudera VM, open Eclipse and create a new Java project and name it MapReducePractice. Create a new Java class, name it SalesPerItem.java and use the package name "com.nachiketa.mapreduce.example". After creating the f...
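For readers who cannot pull the GitHub file, the sketch below shows roughly what a mapper/reducer pair for this use case looks like with the Hadoop mapreduce API. It is a minimal sketch, not the linked SalesPerItem.java; the class name is made up here and integer unit costs are assumed, as in the sample file.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class SalesPerItemSketch {

    // Mapper: parse "item,units,unitCost" and emit (item, units * unitCost)
    public static class SalesMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] parts = value.toString().split(",");
            long amount = Long.parseLong(parts[1]) * Long.parseLong(parts[2]);
            context.write(new Text(parts[0]), new LongWritable(amount));
        }
    }

    // Reducer: sum all sale amounts for the same item
    public static class SalesReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
        @Override
        protected void reduce(Text key, Iterable<LongWritable> values, Context context)
                throws IOException, InterruptedException {
            long total = 0;
            for (LongWritable v : values) {
                total += v.get();
            }
            context.write(key, new LongWritable(total));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "SalesPerItem");
        job.setJarByClass(SalesPerItemSketch.class);
        job.setMapperClass(SalesMapper.class);
        job.setReducerClass(SalesReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Once the project is exported as a jar, the job can be launched with hadoop jar, passing the input and output HDFS paths as the two arguments.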

Working with HDFS - Hadoop Distributed File System

Architecture: The concept of HDFS is to split a file into blocks and save multiple copies of the blocks on different nodes. The advantage of doing this is that multiple operations can be performed on the blocks in parallel and the results aggregated later; keeping multiple copies also provides fault tolerance. HDFS has the following nodes: 1.) Name node - This stores the metadata of the file, such as how many blocks the file is split into and which nodes hold the copies. It also decides how to split the file and where to store the blocks. 2.) Secondary name node - This assists the name node by periodically checkpointing its metadata (note that it is not an automatic failover for the name node). 3.) Data nodes - These can be any number depending on the requirement. These nodes actually store the data and send a heartbeat to the name node periodically. In a clustered Hadoop environment, a node would be on a physical machine. In a pseudo-cluster environment (like the Cloudera VM) all the nodes would run on the same...
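As a small hands-on illustration (not part of the original post), here is a minimal sketch using Hadoop's FileSystem Java API to copy a local file into HDFS and list the directory. It assumes the default configuration on the Cloudera VM and reuses the sample input path from the MapReduce post.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS from the cluster configuration (core-site.xml)
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Copy a local file into HDFS; the name node decides how it is split
        // into blocks and which data nodes hold the replicas
        fs.copyFromLocalFile(new Path("/home/cloudera/SampleInput.txt"),
                             new Path("/user/cloudera/SampleInput.txt"));

        // List the directory to confirm the file landed in HDFS
        for (FileStatus status : fs.listStatus(new Path("/user/cloudera"))) {
            System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
        }
        fs.close();
    }
}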

Cloudera setup

Installing Cloudera is the best way to kick-start the setup. Follow the below steps to set up Cloudera on your Windows machine: 1.) Download VMware Player to open the Cloudera machine from your Windows machine. Link: https://www.vmware.com/products/player/playerpro-evaluation.html Install VMware Player. 2.) Download the Cloudera VM. Complete the signup required to download the Cloudera VM. Link: http://www.cloudera.com/downloads.html Click on QuickStarts from the above link, select the latest version and VMware, and click on download. Approximately 5GB of data will be downloaded, so sit back and relax. Upon completion of the Cloudera VM download, extract the downloaded zip file to a convenient location. Launching the VM 1.) Open VMware Player and click on "Open a Virtual Machine". Open the VM from the path where you have extracted the Cloudera VM. ...