Abstract

Hadoop File System with Elastic Replication Management: An Overview

Mamatha S R, Saheli G S, Rajesh R, Arti Arya

This paper gives an overview of how the Hadoop File System manages massive data sets as well as small files. As data pours in exponentially from all sides in all domains, it has become a necessity to manage and analyze such huge amounts of data to extract useful information. This huge amount of data is technically termed Big Data, which in turn falls under Data Science. Currently, much research is devoted to handling such a vast pool of data. Apache Hadoop is a software framework that uses a simple programming paradigm to process and analyze large data sets (Big Data) across clusters of computers. The Hadoop Distributed File System (HDFS) is one such technology that manages Big Data efficiently. In this paper, an insight into how HDFS handles big as well as small amounts of data is presented, reviewed, and analyzed. The limitations of existing systems are summarized in the paper, along with their future scope.

