Abstract
Big Data comprises both structured and unstructured data collected from various sources. Collecting, managing, storing, and analyzing such large datasets requires efficient tools. Hadoop is an open-source framework for processing large datasets, and MapReduce, its programming model, reduces the computation time of large-scale data processing in a distributed architecture. Machine and deep learning algorithms implemented on MapReduce over huge datasets can further reduce processing time. This paper surveys various MapReduce-based models and algorithms for analyzing huge data and discusses how algorithms can be implemented in MapReduce to reduce computing time.