Architecture: The core idea of HDFS is to split a file into blocks and store multiple copies of each block on different nodes. This has two advantages: operations can run on the blocks in parallel and the results can be aggregated later, and keeping multiple copies provides fault tolerance. HDFS has the following node types:
1.) Name node - Stores the metadata of a file, such as how many blocks the file is split into and which nodes hold the copies. It also decides how to split the file and where to store the blocks.
2.) Secondary Name node - Despite the name, this is not a failover for the Name node; it periodically merges the Name node's edit log into the filesystem image (checkpointing) so the Name node can restart quickly.
3.) Data node - There can be any number of these, depending on the requirement. These nodes actually store the data, and they send a heartbeat to the Name node periodically.
In a clustered Hadoop environment, each node runs on its own physical machine. In a pseudo-distributed environment (like the Cloudera VM), all the nodes run on the same machine.
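The splitting and replication idea above can be sketched in a few lines of Python. This is a toy model, not the real HDFS implementation: the block size, replication factor, node names, and round-robin placement are all illustrative assumptions (real HDFS uses a 128 MB default block size and rack-aware placement).

```python
# Toy model of HDFS-style block splitting and replica placement.
# All names and sizes here are illustrative, not the real HDFS code.

BLOCK_SIZE = 4    # bytes per block (real HDFS default: 128 MB)
REPLICATION = 3   # copies of each block (real HDFS default: 3)

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE) -> list:
    """Split a file's bytes into fixed-size blocks (the last may be shorter)."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_replicas(num_blocks: int, data_nodes: list, replication: int = REPLICATION) -> dict:
    """Build the Name-node-style metadata: a map from block index to the
    data nodes holding its copies, using simple round-robin placement."""
    return {
        b: [data_nodes[(b + r) % len(data_nodes)] for r in range(replication)]
        for b in range(num_blocks)
    }

data = b"hello hdfs world"
blocks = split_into_blocks(data)                                  # 16 bytes -> 4 blocks
metadata = place_replicas(len(blocks), ["node1", "node2", "node3", "node4"])
```

Reassembling the blocks in order recovers the original file, and losing any single node still leaves two copies of every block, which is the fault-tolerance argument made above.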