
Shared Disk Big Data Analytics using Apache Hadoop


     




Authors

R. Gayathri
Computer Science and Engineering, University of Trichy, India
M. Balaanand
V.R.S. College of Engineering and Technology, Arasur, Villupuram, India

Abstract


Big Data is a term applied to data sets whose size is beyond the ability of traditional software technologies to capture, store, manage, and process within a tolerable elapsed time. The popular assumption about Big Data analytics is that it requires web-scale scalability: hundreds of compute nodes with attached storage. In this paper, we debate the need for a massively scalable distributed computing platform for Big Data analytics in traditional enterprises. For organizations that do not need flat, web-scale scalability in their analytics platform, Big Data analytics can be built on top of a traditional POSIX clustered file system using a shared storage model. In this study, we compared a widely used clustered file system (SF-CFS) with the Hadoop Distributed File System (HDFS) using popular MapReduce workloads. In our experiments, VxCFS could not only match the performance of HDFS but also beat it in most cases. Enterprises can therefore meet their Big Data analytics needs with their traditional, existing shared storage model, without migrating to a different storage model in their data centers. This also brings other benefits such as stability and robustness, a rich feature set, and compatibility with traditional analytics applications.
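As an illustrative sketch of the shared storage model described above (not taken from the paper itself): assuming the clustered file system is mounted at the same path on every Hadoop node, say a hypothetical mount point /mnt/cfs, Hadoop can be pointed at a POSIX file system instead of HDFS by setting fs.defaultFS in core-site.xml:

<!-- core-site.xml: run MapReduce jobs over a shared POSIX mount instead of HDFS.
     Minimal sketch; /mnt/cfs is a hypothetical mount point for the clustered
     file system (e.g. SF-CFS/VxCFS), mounted identically on all nodes. -->
<configuration>
  <property>
    <!-- file:/// selects Hadoop's LocalFileSystem, so job input and output
         paths resolve against the shared mount rather than HDFS. -->
    <name>fs.defaultFS</name>
    <value>file:///</value>
  </property>
</configuration>

A job then reads its input from and writes its output to the shared mount, for example: hadoop jar hadoop-mapreduce-examples.jar wordcount /mnt/cfs/input /mnt/cfs/output. Because every node sees the same storage over the shared disk, HDFS-style data locality scheduling no longer applies; the trade-off is the stability, feature richness, and compatibility with traditional applications that the abstract cites.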

Keywords


Big Data, Hadoop, Clustered File Systems, Analytics, Cloud.