
Map-Reduce Based High Performance Clustering on Large Scale Dataset Using Parallel Data Processing








Authors

Anusha Vasudevan
Computer Science and Engineering, JCT College of Engineering and Technology, Anna University, Chennai, Tamilnadu, India
M. Swetha
Computer Science and Engineering, JCT College of Engineering and Technology, Anna University, Chennai, Tamilnadu, India
H. Hyba
Computer Science and Engineering, JCT College of Engineering and Technology, Anna University, Chennai, Tamilnadu, India
G. Rajiv Suresh Kumar
Computer Science and Engineering, JCT College of Engineering and Technology, Anna University, Chennai, Tamilnadu, India

Abstract


The amount of data in our world has been exploding, and analyzing large data sets, so-called big data, is becoming a key basis of competition, reinforcing new waves of productivity growth, innovation, and consumer surplus. Big data refers to datasets that have grown too large to be handled by traditional methods, that is, to be captured, stored, and processed within a tolerable amount of time. Apache Hadoop is an open-source software framework for the storage and large-scale processing of datasets on clusters of commodity hardware. It works with the MapReduce programming model, which makes it easy to write applications that process vast amounts of data (multi-terabyte datasets) in parallel on large clusters (thousands of nodes) in a reliable, fault-tolerant manner. Clustering analysis is an unsupervised learning task that classifies objects into groups, such that objects within one group share similar features and differ from objects belonging to other groups. This paper shows that a K-means clustering algorithm implemented on the MapReduce framework achieves higher performance when handling large-scale automatic document classification in a multi-node environment.
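The paper itself presents no source code here, so the following is only a rough conceptual sketch of how a single K-means iteration decomposes into map and reduce phases: the mapper assigns each point to its nearest centroid, and the reducer averages the points grouped under each centroid to produce the updated centroids. The function names (map_phase, reduce_phase, nearest_centroid) and the toy data are illustrative assumptions, not from the paper; a real Hadoop job would express the same logic through the Java Mapper/Reducer API, with the framework's shuffle phase performing the grouping.

```python
# Conceptual sketch (not the authors' code): one K-means iteration
# expressed as MapReduce-style map and reduce functions.
import math
from collections import defaultdict

def nearest_centroid(point, centroids):
    """Return the index of the centroid closest to the given point."""
    return min(range(len(centroids)),
               key=lambda i: math.dist(point, centroids[i]))

def map_phase(points, centroids):
    """Mapper: emit a (centroid_id, point) pair for each input point."""
    for p in points:
        yield nearest_centroid(p, centroids), p

def reduce_phase(pairs):
    """Reducer: average the points assigned to each centroid id."""
    groups = defaultdict(list)
    for cid, p in pairs:  # in Hadoop, the shuffle phase does this grouping
        groups[cid].append(p)
    return {cid: tuple(sum(dim) / len(pts) for dim in zip(*pts))
            for cid, pts in groups.items()}

# Toy data: two obvious clusters and two starting centroids (ours, hypothetical).
points = [(1.0, 1.0), (1.2, 0.8), (9.0, 9.0), (8.8, 9.2)]
centroids = [(0.0, 0.0), (10.0, 10.0)]
print(reduce_phase(map_phase(points, centroids)))
# -> roughly {0: (1.1, 0.9), 1: (8.9, 9.1)}
```

In a full job this map/reduce pair would be run repeatedly, feeding each iteration's output centroids back in as the next iteration's input, until the centroids stop moving; because each mapper only needs the point it is reading plus the small current centroid set, the assignment step parallelizes naturally across the cluster.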

Keywords


Big Data, Hadoop, Map-Reduce, Clustering, HDFS.