How Does Hadoop handle the Big Data problem?


Let us now look at the software architecture that makes Hadoop so unique and powerful. For this, we shall see a little of what happens inside Hadoop and what it does differently to solve the problem of Big Data. Grace Murray Hopper, the great American computer scientist, gave an example that helps in understanding the distributed computing paradigm. This is how she explained it: historically, oxen were employed to carry heavy loads. When the load increased, people didn't try to grow a bigger ox; rather, several oxen were yoked together to pull the heavy load. The same idea is applied while analyzing Big Data.


When this idea is applied to a computing load, it is called distributed computing! Until the introduction of Hadoop, as data increased we tried to increase the computation power of a single machine, which worked up to a certain point. But a point was reached where there was an enormous amount of data, and it was constantly increasing as well. So, to meet this growing requirement without having to scale up a single machine's capacity, distributed computing is used. Distributed frameworks, which use a cluster of computers, are the solution for scenarios where the data to be processed is very large.
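The "many oxen" idea can be sketched in a few lines of plain Python. This is only an illustrative toy, not Hadoop code: it splits one large summing job into chunks and hands each chunk to a separate worker, then combines the partial results. On a real cluster the workers would be separate machines rather than threads in one process.

```python
from concurrent.futures import ThreadPoolExecutor

# Instead of one worker summing one huge list (a "bigger ox"),
# split the list into chunks and let several workers ("more oxen")
# each sum their own chunk in parallel.
data = list(range(1_000_000))
num_workers = 4
chunk_size = len(data) // num_workers
chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

with ThreadPoolExecutor(max_workers=num_workers) as pool:
    partial_sums = list(pool.map(sum, chunks))

# Combine the partial results from all workers into the final answer.
total = sum(partial_sums)
print(total == sum(data))  # prints True: same answer as a single worker
```

The key point is that each worker only ever sees its own chunk, so the size of any one worker's task stays constant even as the overall data grows; you add workers, not capacity.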


Having a cluster of commodity computers process the large data has a two-fold price advantage: custom hardware with higher configurations is very expensive, and the license fees of traditional RDBMS products are expensive too, whereas Hadoop is free and open source. Hadoop is one of the frameworks that help applications be developed on distributed computing concepts.


When talking of Hadoop, there are two major concepts that form its core. These are the file system, the Hadoop Distributed File System (also known as HDFS), and the programming framework on top of it that works in tandem with HDFS, known as the MapReduce framework. In many ways, we can think of Hadoop as providing a layer between the user and the cluster of machines under it; like an operating system, it has many features for managing the multiple nodes under it. Thus, users don't need to worry about the multiple storage and computation resources; these are handled by Hadoop and abstracted away from the user's point of view.
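To make the MapReduce idea concrete, here is a minimal word-count sketch written in plain Python. This is only a simulation of the three phases (map, shuffle, reduce) in a single process; production Hadoop jobs are normally written in Java against the `org.apache.hadoop.mapreduce` API, and the framework itself handles the shuffle between distributed machines.

```python
from collections import defaultdict
from itertools import chain

def map_phase(line):
    # Map: for each input line, emit a (key, value) pair per word.
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # does between the map and reduce phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    # Reduce: combine all values for one key into a single result.
    return key, sum(values)

lines = ["big data is big", "hadoop handles big data"]
pairs = list(chain.from_iterable(map_phase(line) for line in lines))
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts["big"])  # prints 3: the word "big" occurs three times
```

Notice that each call to `map_phase` needs only its own line of input, and each call to `reduce_phase` needs only the values for its own key; this independence is exactly what lets Hadoop spread the map and reduce tasks across many nodes of the cluster.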

Updated: April 18, 2015 — 2:48 pm
