4.1. Hardware selection for slave nodes (DataNodes, TaskTrackers, and RegionServers)

The following recommendations are based on Hortonworks’ experience gleaned in production data centers:

Choosing a server platform

Typically, dual-socket servers are optimal for Hadoop deployments. For medium to large clusters, these servers are a better choice than entry-level servers because of their load-balancing and parallelization capabilities. In terms of density, it is advisable to select server hardware that fits into a low number of rack units. Typically, 1U or 2U servers are used in 19” racks or cabinets.

Storage options

For general-purpose Hadoop applications, we recommend using a relatively large number of hard drives (typically eight to twelve SATA LFF drives) per server. At the time of this publication, typical capacity in production environments is around 2 TB per drive. Based on our experience, highly I/O-intensive environments have started using 12 x 2 TB SATA drives. The optimal balance between cost and performance is generally achieved with 7,200 RPM SATA drives. If your current or predicted storage needs are growing rapidly, you should also consider using 3 TB disks.

SFF disks are being adopted in some configurations for better disk bandwidth. However, we recommend that you monitor your cluster for potential disk failures, because more disks will increase the rate of disk failures. If you do have a large number of disks per server, you can use two disk controllers so that the I/O load can be shared across multiple cores. Hortonworks strongly recommends using only SATA or SAS interconnects.

Once you have set up an HDFS cluster using a low-cost, reliable storage option, you will observe that the old data stays on the cluster indefinitely and the storage demands will grow quickly. With 12-drive systems, you typically get 24 TB or 36 TB per node. Using this storage capacity in a node is only practical with Hadoop release 1.0.0 or later (because disk failures are handled gracefully, allowing machines to continue serving from their remaining disks).
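
To translate raw drive counts into usable HDFS space, the sketch below works through the arithmetic for the 12-drive configurations mentioned above. The default HDFS replication factor of 3 and the roughly 25% allowance for non-HDFS and intermediate data are assumptions chosen for illustration, not figures from this guide.

    def usable_hdfs_capacity_tb(drives_per_node, drive_size_tb,
                                replication_factor=3, non_hdfs_overhead=0.25):
        """Return (raw TB per node, effective HDFS TB per node after replication)."""
        raw = drives_per_node * drive_size_tb
        hdfs_space = raw * (1.0 - non_hdfs_overhead)   # leave room for temp/intermediate data
        return raw, hdfs_space / replication_factor    # each block is stored three times

    for drives, size in [(12, 2), (12, 3)]:
        raw, effective = usable_hdfs_capacity_tb(drives, size)
        print(f"{drives} x {size} TB drives: {raw} TB raw, ~{effective:.1f} TB effective HDFS capacity")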

It is important to note that Hadoop is storage intensive and seek efficient, but does not require fast and expensive hard drives. If your workload pattern is not I/O intensive, it is safe to add only four or six disks per node. Note that power costs are proportional to the number of disks and not to terabytes. We therefore recommend that you add disks for storage and not for seeks.
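
The point that power scales with spindles rather than terabytes can be seen with a quick comparison; the per-drive wattage below is an assumed ballpark for a 7,200 RPM SATA drive, not a figure from this guide.

    WATTS_PER_DRIVE = 8  # assumed draw for one 7,200 RPM SATA drive under load

    for name, drives, size_tb in [("12 x 2 TB", 12, 2), ("8 x 3 TB", 8, 3)]:
        capacity = drives * size_tb            # raw terabytes per node
        power = drives * WATTS_PER_DRIVE       # power scales with spindle count
        print(f"{name}: {capacity} TB raw, ~{power} W for disks")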

Note

RAID vs. JBOD:

Using RAID on Hadoop slave machines is not recommended, because Hadoop orchestrates data redundancy across all the slave nodes. However, it is strongly recommended to use RAID for Hadoop master machines (especially the NameNode server).

As a final consideration, we strongly recommend purchasing disk drives with good MTBF numbers, because the slave nodes in Hadoop suffer routine probabilistic failures.

Your slave nodes do not need expensive support contracts that offer services like replacement of disks within two hours or less. Hadoop is designed to adapt to slave node disk failure and therefore you should treat maintenance activity for the slave nodes as an ongoing task rather than an emergency.

It is good to be able to swap out disks without taking the server out of the rack, though switching them off (briefly) is an inexpensive operation in a Hadoop cluster. Using SSD disks for master nodes can increase your costs for bulk storage at present. As the costs for these disks decrease, it could present opportunities in future.

Memory sizing

In a Hadoop cluster, it is critical to provide sufficient memory to keep the processors busy without swapping and without incurring excessive costs for non-standard motherboards. Depending on the number of cores, your slave nodes typically require 24 GB to 48 GB of RAM for Hadoop applications. For large clusters, this amount of memory provides sufficient extra RAM (approximately 4 GB) for the Hadoop framework and for your query and analysis processes (HBase and/or MapReduce).
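
A simple way to sanity-check the 24 GB to 48 GB range is to budget RAM per concurrent task. The 4 GB framework set-aside comes from the text above; the per-task heap size and task counts below are assumptions chosen only to illustrate the arithmetic.

    FRAMEWORK_RAM_GB = 4    # DataNode/TaskTracker daemons plus OS headroom (from the text above)
    RAM_PER_TASK_GB = 2     # assumed heap per concurrent map or reduce task

    def node_ram_needed(map_slots, reduce_slots):
        """Estimate RAM needed so concurrent tasks never push the node into swap."""
        return FRAMEWORK_RAM_GB + (map_slots + reduce_slots) * RAM_PER_TASK_GB

    # e.g. a slave node running 10 maps and 6 reduces concurrently
    print(node_ram_needed(10, 6), "GB")   # -> 36 GB, inside the 24-48 GB range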

To detect and correct random transient errors introduced due to thermodynamic effects and cosmic rays, we strongly recommend using error-correcting code (ECC) memory. Error-correcting RAM allows you to trust the quality of your computations. Some parts (chip-kill/chip spare) have been shown to offer better protection than traditional designs, as they show less recurrence of bit errors. (See DRAM Errors in the Wild: A Large-Scale Field Study, Schroeder et al., 2009.)

If you want to retain the option of adding more memory to your servers in the future, ensure there is space to do so alongside the initial memory modules. It is expensive to replace all the memory modules.

Memory provisioning

Memory can also be provisioned at commodity prices on low-end server motherboards. It is typical to over-provision memory. The unused RAM will be consumed either by your Hadoop applications (typically when you run more processes in parallel) or by the infrastructure (used for caching disk data to improve performance).

Processors

Although it is important to understand your workload pattern, for most systems we recommend using medium clock speed processors with less than two sockets. For most workloads, the extra performance per node is not cost-effective. For large clusters, use at least two quad-core CPUs for the slave machines.

Power consideration

Power is a major concern when designing Hadoop clusters. Instead of purchasing the biggest and fastest nodes, it is important to analyze the power utilization for the existing hardware. We have observed huge savings in pricing and power by avoiding the fastest CPUs, redundant power supplies, and so on.

Nowadays, vendors are building machines for cloud data centers that are designed to reduce cost and power consumption and to be lightweight. Supermicro, Dell, and HP all have such product lines for cloud providers. So, if you are buying in large volume, we recommend evaluating the stripped-down “cloud servers”.

For slave nodes, a single power supply unit (PSU) is sufficient, but for master servers use redundant PSUs. Server designs that share PSUs across adjacent servers can offer increased reliability without increased cost.

Some co-location sites bill based on the maximum possible power budget rather than the actual power consumed. In such locations, the benefits of the power-saving features of the latest CPUs are not fully realized. We therefore recommend checking the power billing options of the site in advance.
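
The difference between the two billing models can be significant at cluster scale. The wattages, node count, and rate below are placeholder assumptions used only to show the comparison.

    NAMEPLATE_W_PER_NODE = 400   # worst-case PSU rating the co-location site bills on (assumed)
    MEASURED_W_PER_NODE = 250    # typical draw with power-saving CPU features active (assumed)
    NODES = 200
    RATE_PER_KW_MONTH = 150.0    # assumed co-location power charge per kW per month

    def monthly_power_bill(watts_per_node):
        return watts_per_node * NODES / 1000.0 * RATE_PER_KW_MONTH

    print("Billed on maximum budget:", monthly_power_bill(NAMEPLATE_W_PER_NODE))  # 12000.0
    print("Billed on actual draw:   ", monthly_power_bill(MEASURED_W_PER_NODE))   # 7500.0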

Note

Power consumption of the cluster:

Electricity and cooling account for 33.33% to 50% of the equipment's total life-cycle cost in modern data centers.

Network

This is the most challenging parameter to estimate because Hadoop workloads vary a lot. The key is buying enough network capacity, at a reasonable cost, so that all nodes in the cluster can communicate with each other at reasonable speeds. Large clusters typically use dual 1 GbE links for all nodes in each 20-node rack and 2 x 10 GbE interconnect links per rack going up to a pair of central switches.

A good network design will consider the possibility of unacceptable congestion at critical points in the network under realistic loads. Generally accepted oversubscription ratios are around 4:1 at the server access layer and 2:1 between the access layer and the aggregation layer or core. Lower oversubscription ratios can be considered if higher performance is required. Additionally, we recommend having 1 GbE oversubscription between racks.
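
As a worked example, the rack design mentioned above (20 nodes with dual 1 GbE links and 2 x 10 GbE uplinks) can be checked against these ratios; the helper below is an illustrative sketch, not part of any Hadoop tooling.

    def oversubscription(nodes, gbps_per_node, uplinks, uplink_gbps):
        access_bw = nodes * gbps_per_node    # total edge bandwidth into the rack
        uplink_bw = uplinks * uplink_gbps    # total bandwidth out of the rack
        return access_bw / uplink_bw

    ratio = oversubscription(nodes=20, gbps_per_node=2,   # dual 1 GbE per node
                             uplinks=2, uplink_gbps=10)   # 2 x 10 GbE to the core
    print(f"Rack-to-core oversubscription: {ratio:.0f}:1")   # -> 2:1, within the guidance above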

It is critical to have dedicated switches for the cluster instead of trying to allocate a VC in existing switches; the load of a Hadoop cluster would impact the rest of the users of the switch. It is equally critical to work with the networking team to ensure that the switches suit both Hadoop and their monitoring tools.

Design the networking so as to retain the option of adding more racks of Hadoop/HBase servers. Getting the networking wrong can be expensive to fix. The quoted bandwidth of a switch is analogous to the miles-per-gallon rating of an automobile: you are unlikely to replicate it. “Deep buffering” is preferable to low latency in switches. Enabling jumbo frames across the cluster improves bandwidth through better checksums and may also improve packet integrity.

Note

Network strategy for your Hadoop clusters:

Analyze the ratio of network-to-compute cost. Ensure that the network cost is always around 20% of your total cost. Network costs should include your complete network: core switches, rack switches, any network cards needed, and so on. Keep in mind that Hadoop grew up on commodity hardware!
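
A quick way to apply this guideline is to total the networking line items and compare them against the rest of the cluster budget. The dollar figures below are placeholders for illustration; only the ratio matters.

    def network_cost_fraction(core_switches, rack_switches, nics, servers_and_storage):
        network = core_switches + rack_switches + nics
        return network / (network + servers_and_storage)

    fraction = network_cost_fraction(core_switches=60_000, rack_switches=40_000,
                                     nics=10_000, servers_and_storage=450_000)
    print(f"Network share of total cost: {fraction:.0%}")   # ~20%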