1. Starting HDP Services

Start all the Hadoop services in the following order:

  • HDFS

  • YARN

  • ZooKeeper

  • HBase

  • Hive Metastore

  • HiveServer2

  • WebHCat

  • Oozie
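
The list above is a fixed sequence, so a wrapper script must preserve it. A minimal shell sketch of that ordering (the echo is a stand-in for each service's actual start commands from the Instructions that follow):

```shell
# Start-order sketch. Each iteration corresponds to one step in the
# Instructions; the echo stands in for that step's real start commands.
started=""
for svc in HDFS YARN ZooKeeper HBase "Hive Metastore" HiveServer2 WebHCat Oozie
do
  echo "Starting ${svc}"
  started="${started}${svc};"
done
```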

Instructions

  1. Start HDFS

    1. Execute this command on the NameNode host machine:

      su -l hdfs -c "/usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start namenode" 
    2. Execute this command on the Secondary NameNode host machine:

      su -l hdfs -c "/usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start secondarynamenode"
    3. Execute this command on all DataNodes:

      su -l hdfs -c "/usr/lib/hadoop/sbin/hadoop-daemon.sh --config /etc/hadoop/conf start datanode"
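
To confirm the HDFS daemons actually came up, you can grep the JVM process list. A hedged sketch: the `jps_output` string below is an illustrative sample; on a real node, replace it with the output of the `jps` command run as the hdfs user.

```shell
# Illustrative jps output; on a live node use: jps_output=$(jps)
jps_output="4321 NameNode
5432 DataNode
6543 Jps"

# Check for each expected HDFS daemon name in the process list.
for daemon in NameNode DataNode; do
  if echo "$jps_output" | grep -qw "$daemon"; then
    echo "$daemon: running"
  else
    echo "$daemon: NOT running"
  fi
done
```

On the Secondary NameNode host, check for `SecondaryNameNode` the same way.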
  2. Start YARN

    1. Execute this command on the ResourceManager host machine:

      su - yarn -c "export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config /etc/hadoop/conf start resourcemanager"
    2. Execute this command on the History Server host machine:

      su - mapred -c "export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop-mapreduce/sbin/mr-jobhistory-daemon.sh --config /etc/hadoop/conf start historyserver"
    3. Execute this command on all NodeManagers:

      su - yarn -c "export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop-yarn/sbin/yarn-daemon.sh --config /etc/hadoop/conf start nodemanager"
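
The daemon scripts above record each process ID in a pid file, which gives another way to verify a daemon is alive. A hedged sketch: the exact pid-file location depends on your YARN_PID_DIR / HADOOP_MAPRED_PID_DIR settings, and the ResourceManager path in the comment is only an example.

```shell
# Check that a pid file exists and that the recorded process is alive.
check_pid_file() {
  pidfile="$1"
  if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
    echo "daemon alive (pid $(cat "$pidfile"))"
  else
    echo "daemon not running"
  fi
}

# Demonstrated here with this shell's own PID; on a real host pass e.g.
# /var/run/hadoop-yarn/yarn/yarn-yarn-resourcemanager.pid
tmp=$(mktemp)
echo $$ > "$tmp"
check_pid_file "$tmp"
rm -f "$tmp"
```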
  3. Start ZooKeeper. Execute this command on the ZooKeeper host machine(s):

    su - zookeeper -c "export ZOOCFGDIR=/etc/zookeeper/conf ; export ZOOCFG=zoo.cfg ; source /etc/zookeeper/conf/zookeeper-env.sh ; /usr/lib/zookeeper/bin/zkServer.sh start"
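
A quick health check: ZooKeeper answers the four-letter command "ruok" with "imok" when it is serving. The response below is simulated for illustration; on a live host, obtain it with `echo ruok | nc <zk-host> 2181` (2181 is the default client port).

```shell
# Simulated response; on a live host: response=$(echo ruok | nc <zk-host> 2181)
response="imok"
if [ "$response" = "imok" ]; then
  echo "ZooKeeper is healthy"
else
  echo "ZooKeeper did not respond"
fi
```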
  4. Start HBase

    1. Execute this command on the HBase Master host machine:

      su -l hbase -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf start master; sleep 25"
    2. Execute this command on all RegionServers:

      su -l hbase -c "/usr/lib/hbase/bin/hbase-daemon.sh --config /etc/hbase/conf start regionserver" 
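
The `sleep 25` in the Master command simply gives the HMaster time to initialize before the RegionServers start. A hedged alternative, sketched below, is to poll for a condition instead of sleeping a fixed time; `wait_for` is an illustrative helper, not an HBase tool.

```shell
# Retry a command up to N times, one second apart; succeed as soon as it does.
wait_for() {
  tries="$1"; shift
  while [ "$tries" -gt 0 ]; do
    if "$@"; then return 0; fi
    tries=$((tries - 1))
    sleep 1
  done
  return 1
}

# On a live HBase Master host you might wait for the HMaster JVM to appear:
#   wait_for 30 sh -c 'jps | grep -q HMaster'
wait_for 3 true && echo "condition met"
```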
  5. Start Hive Metastore. On the Hive Metastore host machine, execute the following command:

    su - hive -c "env HADOOP_HOME=/usr JAVA_HOME=/usr/jdk64/jdk1.6.0_31 /tmp/startMetastore.sh /var/log/hive/hive.out /var/log/hive/hive.log /var/run/hive/hive.pid /etc/hive/conf.server" 

    where /var/log/hive is the directory where the Hive Metastore logs are stored. Substitute your own Hive log directory if it differs.

  6. Start HiveServer2. On the HiveServer2 host machine, execute the following command:

    su - hive -c "env JAVA_HOME=/usr/jdk64/jdk1.6.0_31 /tmp/startHiveserver2.sh /var/log/hive/hive-server2.out /var/log/hive/hive-server2.log /var/run/hive/hive-server.pid /etc/hive/conf.server"

    where /var/log/hive is the directory where the HiveServer2 logs are stored. Substitute your own Hive log directory if it differs.

  7. Start WebHCat. On the WebHCat host machine, execute the following command:

    su -l hcat -c "/usr/lib/hcatalog/sbin/webhcat_server.sh start"
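
WebHCat exposes a REST status endpoint (default port 50111) that can confirm the server is answering. The JSON below is a sample response for illustration; on a live host, fetch it with `curl -s "http://<webhcat-host>:50111/templeton/v1/status"`.

```shell
# Sample response; on a live host capture it with curl as shown above.
status_json='{"status":"ok","version":"v1"}'
if echo "$status_json" | grep -q '"status":"ok"'; then
  echo "WebHCat is up"
else
  echo "WebHCat is not responding"
fi
```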

  8. Start Oozie. Execute these commands on the Oozie host machine.

    <login as $OOZIE_USER>
    /usr/lib/oozie/bin/oozie-start.sh

    where $OOZIE_USER is the Oozie user. For example, oozie.
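
Once started, Oozie reports "System mode: NORMAL" from its admin status command. The output below is simulated for illustration; on a live host, obtain it with `oozie admin -oozie http://<oozie-host>:11000/oozie -status` (11000 is the default Oozie port).

```shell
# Simulated output; on a live host capture the oozie admin -status output.
mode="System mode: NORMAL"
case "$mode" in
  *NORMAL*) echo "Oozie is up" ;;
  *)        echo "Oozie is not ready" ;;
esac
```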