Adding Druid to a cluster

Configure Druid for high availability

To make Druid highly available, you need to configure a sophisticated, multinode structure.

The architecture of a Druid cluster includes a number of nodes, two types of which are designated Overlord and Coordinator. Within each Overlord and Coordinator domain, ZooKeeper determines which node is the Active node; the other nodes supporting each process remain in Standby state until the Active node stops running and failover occurs. Multiple Historical and Realtime nodes also support a failover mechanism. However, Broker and Realtime processes have no designated Active and Standby nodes, and multiple instances of the Druid Broker process are required for HA.

Recommendation: Use an external virtual IP address or load balancer to direct user queries to the multiple Druid Broker instances. A Druid Router can also serve as a mechanism to route queries to multiple Broker nodes.
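As one way to realize the load-balancer recommendation above, the sketch below shows a minimal HAProxy configuration that spreads user queries across two Broker instances. The host names, port, and backend names are placeholders for your environment, and the standard Broker HTTP port (8082) is assumed; a Druid Router or another load balancer would work equally well.

```
# Hypothetical HAProxy excerpt: round-robin user queries across two
# Druid Broker instances. Host names are placeholders.
frontend druid_queries
    bind *:8082
    default_backend druid_brokers

backend druid_brokers
    balance roundrobin
    server broker1 broker1.example.com:8082 check
    server broker2 broker2.example.com:8082 check
```

With health checks (`check`) enabled, queries stop flowing to a Broker that goes down, which is what makes multiple Broker instances behave as an HA tier even though no Active/Standby election takes place.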

Before you begin, confirm the following prerequisites:
  • You ensured that no local storage is used.
  • You installed MySQL or Postgres as the metadata storage layer. You cannot use Derby because it does not support a multinode cluster with HA.
  • You configured your metadata storage for HA mode to avoid outages that impact cluster operations.
  • You planned to dedicate at least three ZooKeeper nodes to HA mode.
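The metadata-storage and ZooKeeper prerequisites above correspond to a few Druid properties. The excerpt below is a hedged sketch of `common.runtime.properties` settings, assuming MySQL as the metadata store and a three-node ZooKeeper ensemble; all host names and the database name are placeholders.

```
# Hypothetical excerpt from common.runtime.properties.

# Use MySQL (or PostgreSQL) rather than Derby, which does not support
# a multinode HA cluster.
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://db.example.com:3306/druid
druid.metadata.storage.connector.user=druid

# Point Druid at a dedicated three-node ZooKeeper ensemble.
druid.zk.service.host=zk1.example.com,zk2.example.com,zk3.example.com
```

In an Ambari-managed cluster these values are typically set through the Druid service configuration screens rather than edited by hand, but the underlying properties are the same.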
  1. Enable NameNode HA using the Ambari wizard.
  2. Install the Druid Overlord, Coordinator, Broker, Realtime, and Historical processes on multiple nodes that are distributed among different hardware servers.
  3. Ensure that the replication factor for each datasource is greater than 1 in the Coordinator process rules. If no datasource rule configurations were changed, no action is required because the default value is 2.
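To illustrate step 3, the fragment below is a sketch of a Coordinator retention (load) rule expressed as JSON, of the kind you can set through the Coordinator console or API. It assumes the default `_default_tier`; a replication factor of 2 means each segment is loaded on two Historical nodes, so losing one node does not make data unavailable.

```json
[
  {
    "type": "loadForever",
    "tieredReplicants": {
      "_default_tier": 2
    }
  }
]
```

If you never changed the datasource rule configurations, this is effectively what is already in place, since the default replication factor is 2.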