2. Configure Ambari properties across the HA cluster

To enable Ambari to run relocate_host_component.py, use a text editor to edit the cluster configuration file on each failover host in the HA cluster.

In /etc/cluster/cluster.conf, set values for each of the following properties:

  • server=<ambari-hostname / ip>

  • port=<8080>

  • protocol=<http / https>

  • user=<admin>

  • password=<admin>

  • cluster=<cluster-name>

  • output=</var/log/ambari_relocate.log>

For example, the Hadoop daemon section of cluster.conf on the NameNode host in an HA cluster looks like the following:

<hadoop
    __independent_subtree="1" __max_restarts="10" __restart_expire_time="600"
    name="NameNode Process"
    daemon="namenode" boottime="10000" probetime="10000" stoptime="10000"
    url="http://10.0.0.30:50070/dfshealth.jsp"
    pid="/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid" path="/"
    ambariproperties="server=localhost,port=8080,protocol=http,user=admin,password=admin,cluster=c1,output=/var/log/ambari_relocate.log"
/>
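
The ambariproperties attribute packs these settings into a single comma-separated list of key=value pairs. As an illustrative sketch only (the helper function below is hypothetical and not part of Ambari), the string can be split back into its individual properties like this:

# Illustrative only: split the comma-separated ambariproperties string
# into a dict of connection settings. Hypothetical helper, not Ambari code.

def parse_ambari_properties(raw):
    """Split 'key1=value1,key2=value2,...' into a dict."""
    props = {}
    for pair in raw.split(","):
        key, _, value = pair.partition("=")
        props[key.strip()] = value.strip()
    return props

ambariproperties = (
    "server=localhost,port=8080,protocol=http,"
    "user=admin,password=admin,cluster=c1,"
    "output=/var/log/ambari_relocate.log"
)

print(parse_ambari_properties(ambariproperties))
# {'server': 'localhost', 'port': '8080', 'protocol': 'http', ...}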

When you start or restart the Ambari server, the relocate_host_component.py script reassigns components on failover of any host in the HA cluster.
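
The exact steps the script performs are not reproduced here. As a rough sketch of what reassigning a component involves, the standard Ambari REST API can register the component on the surviving host, install and start it there, and then remove the stale entry from the failed host. The hostnames, the component name, and the assumption that the relocation works through these REST endpoints are illustrative, not taken from relocate_host_component.py:

# Hedged sketch: moving a host component with the Ambari REST API,
# using the connection settings from ambariproperties.
# NOT the relocate_host_component.py source; hosts and component are placeholders.
import requests

props = {
    "server": "localhost", "port": "8080", "protocol": "http",
    "user": "admin", "password": "admin", "cluster": "c1",
}
base = "{protocol}://{server}:{port}/api/v1/clusters/{cluster}".format(**props)
auth = (props["user"], props["password"])
headers = {"X-Requested-By": "ambari"}   # header required by the Ambari API

component = "NAMENODE"                                              # placeholder
failed_host, failover_host = "nn1.example.com", "nn2.example.com"   # placeholders

# 1. Register the component on the failover host.
requests.post(f"{base}/hosts/{failover_host}/host_components/{component}",
              auth=auth, headers=headers)

# 2. Install, then start, the component on the failover host.
for state in ("INSTALLED", "STARTED"):
    requests.put(f"{base}/hosts/{failover_host}/host_components/{component}",
                 json={"HostRoles": {"state": state}},
                 auth=auth, headers=headers)

# 3. Remove the stale component entry from the failed host.
requests.delete(f"{base}/hosts/{failed_host}/host_components/{component}",
                auth=auth, headers=headers)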
