HDFS Architecture

Apache HDFS is highly fault-tolerant, is designed to store and process large data sets, and runs on low-cost commodity hardware. The key architectural considerations that make HDFS suitable for applications with large data sets are described below.

Hardware Failure

An Apache Hadoop cluster typically consists of hundreds of server machines, each storing part of the file system's data. With such a huge number of components, each with a non-trivial probability of failure, some component is likely to be failing at any given time. Detecting faults and recovering from them quickly and automatically is therefore a core architectural goal of HDFS.
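The main mechanism HDFS uses to tolerate these failures is block replication: each block is stored on several DataNodes, and the NameNode re-replicates blocks when a node is lost. The following is a minimal sketch, assuming a reachable HDFS cluster and a hypothetical file path, of how a client can set the replication factor through the standard Hadoop Configuration and FileSystem APIs.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Store each block of files written by this client on three DataNodes,
            // so the loss of any single machine does not lose data.
            conf.set("dfs.replication", "3");

            FileSystem fs = FileSystem.get(conf);
            Path file = new Path("/data/events.log");   // hypothetical path

            // Replication can also be adjusted per file after creation.
            fs.setReplication(file, (short) 3);
            fs.close();
        }
    }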

Large Data Sets

Applications that run on HDFS have large data sets. A typical file in HDFS is gigabytes to terabytes in size. As a result, HDFS is tuned to support large files. It can provide high aggregate data bandwidth and scale to hundreds of nodes in a single cluster.
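Large files are handled by splitting them into large blocks (128 MB by default in recent Hadoop releases) that are spread across the cluster. The sketch below, assuming a reachable cluster and a hypothetical file path, shows how a client can inspect the default block size and request a larger per-file block size at creation time using the standard FileSystem API.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class LargeFileExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path file = new Path("/data/large-dataset.bin");  // hypothetical path

            // A terabyte-scale file is simply many blocks spread across DataNodes.
            System.out.println("Default block size: " + fs.getDefaultBlockSize(file));

            // A per-file block size and replication factor can be passed at creation time.
            long blockSize = 256L * 1024 * 1024;              // 256 MB blocks
            FSDataOutputStream out =
                fs.create(file, true, 4096, (short) 3, blockSize);
            out.writeBytes("example payload\n");
            out.close();
            fs.close();
        }
    }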

Moving Computation is Cheaper than Moving Data

A computation requested by an application is much more efficient if it is executed near the data it operates on, especially when the data set is huge. Running the computation close to the data minimizes network congestion and increases the overall throughput of the system. It is therefore often better to migrate the computation to where the data is located than to move the data to where the application is running.
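HDFS exposes the information needed for this through block location metadata: a client can ask the NameNode which DataNodes hold each block of a file, and a compute framework can then schedule tasks on or near those machines. The following is a minimal sketch, assuming a reachable cluster and a hypothetical file path, of querying block locations with the FileSystem API.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockLocationsExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path file = new Path("/data/events.log");   // hypothetical path

            // Ask the NameNode which DataNodes hold each block of the file.
            FileStatus status = fs.getFileStatus(file);
            BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());

            // A compute framework can use these hosts to place tasks
            // on (or near) the machines that already store the data.
            for (BlockLocation block : blocks) {
                System.out.printf("offset=%d length=%d hosts=%s%n",
                    block.getOffset(), block.getLength(),
                    String.join(",", block.getHosts()));
            }
            fs.close();
        }
    }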

Simple Coherency Model

HDFS applications need a "write-once-read-many" access model for files. Once a file is created, written, and closed, it need not be changed except by appends and truncates. Appending content to the end of a file is supported, but updating it at arbitrary offsets is not. This assumption simplifies data coherency issues and enables high-throughput data access.
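The sketch below illustrates this access model with the FileSystem API, assuming a reachable cluster with appends enabled and a hypothetical file path: the file is written once and closed, content may later be appended, and the closed file can then be read any number of times.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class WriteOnceReadManyExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path file = new Path("/data/audit.log");    // hypothetical path

            // Write once: create the file, write its content, then close it.
            FSDataOutputStream out = fs.create(file, true);
            out.writeBytes("first record\n");
            out.close();

            // Appending to the end of the file is supported ...
            FSDataOutputStream appender = fs.append(file);
            appender.writeBytes("appended record\n");
            appender.close();

            // ... but there is no API for overwriting bytes at an arbitrary offset.
            // Read many: any number of readers can stream the closed file.
            FSDataInputStream in = fs.open(file);
            IOUtils.copyBytes(in, System.out, 4096, false);
            in.close();
            fs.close();
        }
    }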
