Cost based Model for Big Data Processing with Hadoop Architecture

Authors

  • Mayank Bhushan

  • Sumit Kumar Yadav

Keywords:

big data, hadoop, cloud computing, mapreduce

Abstract

With the fast pace of growth in technology, we have more options for building better and more optimized systems. Handling huge amounts of data requires scalable resources, and moving that data to the computation takes a measurable amount of time. This is where Hadoop comes in, which works on a distributed file system. In it, huge amounts of data are stored in a distributed manner for computation. Data is saved in blocks across many racks with fault tolerance, keeping at least three copies of each block. The MapReduce framework handles all computation and produces the results. The JobTracker and TaskTracker work with MapReduce to process current as well as historical data, whose cost is calculated in this paper.
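As an illustration of the MapReduce framework referred to in the abstract (not code from the paper itself), the sketch below shows the canonical word-count job written against Hadoop's Java MapReduce API: the mapper emits (word, 1) pairs from each input split stored in HDFS, and the reducer sums the counts per word. Input and output paths are assumed to be passed on the command line.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Mapper: emits (word, 1) for every token in a line of the input split
      public static class TokenizerMapper
           extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
          }
        }
      }

      // Reducer: sums the partial counts gathered for each word
      public static class IntSumReducer
           extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // local aggregation before the shuffle
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

In the classic (MRv1) Hadoop architecture discussed in the paper, the JobTracker schedules such a job's map and reduce tasks onto TaskTrackers running on the data nodes, moving computation to where the replicated blocks reside.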

How to Cite

Mayank Bhushan, & Sumit Kumar Yadav. (2014). Cost based Model for Big Data Processing with Hadoop Architecture. Global Journal of Computer Science and Technology, 14(C2), 13–17. Retrieved from https://computerresearch.org/index.php/computer/article/view/69

Published

2014-01-15