Traditional data storage techniques cannot store or process large volumes of structured and unstructured data; such data sets are referred to as big data. Hadoop, on the other hand, is a tool used to handle big data. It is an open-source framework developed by the Apache Software Foundation.
Affluenz IT Academy will emphasize how to design distributed applications to manage big data using Hadoop. The course will also detail how to use Pig and Spark to write scripts that process data sets on a Hadoop cluster.
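As a small taste of the processing model the course covers, the classic MapReduce word count can be sketched in plain Python. This is a simplified illustration of the map and reduce phases that Hadoop implements, not a real Hadoop, Pig, or Spark job; the function names and sample input are hypothetical.

```python
from collections import defaultdict

def map_phase(lines):
    # Mapper: emit a (word, 1) pair for every word in every input line.
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

def reduce_phase(pairs):
    # Reducer: sum the counts emitted for each distinct word.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# Hypothetical sample input standing in for files stored on HDFS.
sample = ["big data needs big tools", "hadoop handles big data"]
print(reduce_phase(map_phase(sample)))
# → {'big': 3, 'data': 2, 'needs': 1, 'tools': 1, 'hadoop': 1, 'handles': 1}
```

On a real Hadoop cluster, the framework runs many mapper and reducer instances in parallel across machines and shuffles the intermediate pairs between them; the two functions above capture only the logical shape of that computation.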
Module 1 - Introduction to Big Data
Module 2 - Introduction to Hadoop
Module 3 - Hadoop Components
Module 4 - Hadoop Architecture and HDFS
Module 5 - Live Project on Big Data & Hadoop
Module 6 - Certification & Closure
Module 1 - Introduction to Big Data
Module 2 - Introduction to Hadoop
Module 3 - Hadoop Components
Module 4 - Advanced Hadoop Architecture and HDFS
Module 5 - Hadoop MapReduce Framework
Module 6 - Live Project on Big Data & Hadoop
Module 7 - Certification & Closure