Map-Reduce Based High Performance Clustering on Large Scale Dataset Using Parallel Data Processing
The amount of data in our world has been exploding, and analyzing large datasets, so-called big data, will become a key basis of competition, underpinning new waves of productivity growth, innovation, and consumer surplus. Big data refers to datasets that have grown too large to be handled by traditional methods, which include capturing, storing, and processing the data in a tolerable amount of time. Apache Hadoop is an open-source software framework for the storage and large-scale processing of datasets on clusters of commodity hardware. It works with the MapReduce programming model, which makes it easy to write applications that process vast amounts of data (multi-terabyte datasets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. Clustering analysis is an unsupervised learning task that groups objects so that objects within a group share similar features and differ from objects belonging to other groups. This paper shows that a MapReduce-based K-means clustering algorithm can achieve higher performance when handling large-scale automatic document classification in a multi-node environment.
Keywords
Big Data, Hadoop, Map-Reduce, Clustering, HDFS.
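The MapReduce formulation of K-means referred to in the abstract can be outlined concretely. The following is a minimal, illustrative sketch, not the authors' implementation, of a single K-means iteration written against the Hadoop MapReduce Java API: the mapper assigns each input vector to its nearest centroid, and the reducer averages the vectors assigned to each centroid to produce the updated centroid. The configuration key "kmeans.centroids", the class names, and the comma-separated input format are assumptions made for this example; the driver that reruns the job until the centroids converge is omitted.

// Sketch of one K-means iteration as a Hadoop MapReduce job.
// Assumption: current centroids are passed through the job Configuration
// under the hypothetical key "kmeans.centroids" as a semicolon-separated
// list of comma-separated coordinates; input records are comma-separated
// numeric vectors, one per line.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class KMeansIteration {

    /** Map step: assign each point to its nearest centroid. */
    public static class AssignMapper
            extends Mapper<LongWritable, Text, IntWritable, Text> {
        private double[][] centroids;

        @Override
        protected void setup(Context context) {
            Configuration conf = context.getConfiguration();
            String[] parts = conf.get("kmeans.centroids").split(";");
            centroids = new double[parts.length][];
            for (int i = 0; i < parts.length; i++) {
                String[] c = parts[i].split(",");
                centroids[i] = new double[c.length];
                for (int j = 0; j < c.length; j++) {
                    centroids[i][j] = Double.parseDouble(c[j]);
                }
            }
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",");
            double[] point = new double[fields.length];
            for (int j = 0; j < fields.length; j++) {
                point[j] = Double.parseDouble(fields[j]);
            }
            // Find the index of the closest centroid (squared Euclidean distance).
            int best = 0;
            double bestDist = Double.MAX_VALUE;
            for (int i = 0; i < centroids.length; i++) {
                double d = 0.0;
                for (int j = 0; j < point.length; j++) {
                    double diff = point[j] - centroids[i][j];
                    d += diff * diff;
                }
                if (d < bestDist) {
                    bestDist = d;
                    best = i;
                }
            }
            // Emit (cluster id, original point) for the reduce step.
            context.write(new IntWritable(best), value);
        }
    }

    /** Reduce step: average the points of each cluster to get the new centroid. */
    public static class RecomputeReducer
            extends Reducer<IntWritable, Text, IntWritable, Text> {
        @Override
        protected void reduce(IntWritable key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            double[] sum = null;
            long count = 0;
            for (Text value : values) {
                String[] fields = value.toString().split(",");
                if (sum == null) {
                    sum = new double[fields.length];
                }
                for (int j = 0; j < fields.length; j++) {
                    sum[j] += Double.parseDouble(fields[j]);
                }
                count++;
            }
            // Build the new centroid as a comma-separated coordinate string.
            StringBuilder centroid = new StringBuilder();
            for (int j = 0; j < sum.length; j++) {
                if (j > 0) centroid.append(",");
                centroid.append(sum[j] / count);
            }
            context.write(key, new Text(centroid.toString()));
        }
    }
}

Emitting the raw point text from the mapper keeps the sketch short; in practice a combiner that pre-aggregates partial sums and counts per cluster would reduce shuffle traffic across the cluster, which is one of the optimizations that makes the MapReduce K-means formulation scale to large document collections.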