Hadoop is a software platform that lets one easily write and run applications that process vast amounts of data.
Hadoop implements MapReduce, using the Hadoop Distributed File System (HDFS). MapReduce divides each application into many small blocks of work.
HDFS creates multiple replicas of data blocks for reliability, placing them on compute nodes around the cluster. MapReduce can then process the data where it is located.
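The map/shuffle/reduce flow described above can be sketched as a toy simulation. This is not the Hadoop API; the function names (`map_phase`, `shuffle`, `reduce_phase`) and the word-count task are illustrative assumptions, with input splits standing in for HDFS blocks processed on separate nodes.

```python
from collections import defaultdict

# Toy simulation of the MapReduce model (not the Hadoop API):
# each input split is mapped to (word, 1) pairs, the pairs are
# grouped by key (the shuffle), and a reduce step sums the counts.

def map_phase(split):
    return [(word, 1) for word in split.split()]

def shuffle(mapped_pairs):
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

# Two "splits" stand in for data blocks processed on different nodes.
splits = ["hadoop stores data", "hadoop processes data"]
mapped = [pair for split in splits for pair in map_phase(split)]
counts = reduce_phase(shuffle(mapped))
print(counts)  # {'hadoop': 2, 'stores': 1, 'data': 2, 'processes': 1}
```

In real Hadoop, each map task runs on the node holding the block it reads, so only the much smaller intermediate pairs travel over the network during the shuffle.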
Hadoop has been demonstrated on clusters with 2,000 nodes; the current design target is 10,000-node clusters.
- Scalable: Hadoop can reliably store and process petabytes.
- Economical: It distributes the data and processing across clusters of commonly available computers. These clusters can number into the thousands of nodes.
- Efficient: By distributing the data, Hadoop can process it in parallel on the nodes where the data is located, which makes processing extremely fast.
- Reliable: Hadoop automatically maintains multiple copies of data and automatically redeploys computing tasks based on failures.
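The reliability point above rests on block replication: with several copies of each block on distinct nodes, a single node failure leaves every block readable. A minimal sketch of that idea, assuming a hypothetical 4-node cluster and the common replication factor of 3 (the node and block names are made up for illustration):

```python
import random

# Toy sketch of HDFS-style block replication (replication factor 3):
# each block's replicas are placed on distinct nodes, so losing one
# node still leaves readable copies of every block.

REPLICATION = 3
nodes = ["node1", "node2", "node3", "node4"]
blocks = ["blk_1", "blk_2", "blk_3"]

# Place each block's replicas on distinct nodes.
placement = {blk: random.sample(nodes, REPLICATION) for blk in blocks}

# Simulate one node failing.
failed = "node2"
surviving = {blk: [n for n in locs if n != failed]
             for blk, locs in placement.items()}

# With 3 replicas, every block still has at least 2 live copies.
ok = all(len(locs) >= REPLICATION - 1 for locs in surviving.values())
print(ok)  # True
```

When a node is lost, Hadoop also re-replicates the affected blocks from the surviving copies and reschedules that node's tasks elsewhere, restoring the target replication factor.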
- Changes since Hadoop 2.5.0:
- MAPREDUCE-6033. Major bug reported by Yu Gao and fixed by Yu Gao. Users are not allowed to view their own jobs, denied by JobACLsManager.
- HADOOP-11065. Blocker bug reported by Karthik Kambatla and fixed by Karthik Kambatla. Rat check should exclude **/build/**.
- HADOOP-11001. Blocker bug reported by Karthik Kambatla and fixed by Karthik Kambatla (scripts). Fix test-patch to work with the git repo.