4. Start MapReduce JobHistory Server

  1. Change permissions on the container-executor file.

    chown root:hadoop /usr/lib/hadoop-yarn/bin/container-executor
    chmod 6050 /usr/lib/hadoop-yarn/bin/container-executor
    

    Note

    If these permissions are not set, the health check script returns an error stating that the DataNode is UNHEALTHY.
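    To confirm that the bits took effect, a quick check such as the following can help. This is a sketch: the check_mode helper is illustrative, not part of Hadoop.

    ```shell
    #!/bin/bash
    # check_mode: compare a file's octal mode against an expected value.
    # (Illustrative helper; not shipped with Hadoop.)
    check_mode() {
      local path=$1 expected=$2
      local actual
      actual=$(stat -c '%a' "$path") || return 1
      [ "$actual" = "$expected" ]
    }

    # Example, using the path from step 1:
    # check_mode /usr/lib/hadoop-yarn/bin/container-executor 6050 \
    #   || echo "container-executor mode is wrong" >&2
    ```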

  2. Execute these commands from the JobHistory server to set up directories on HDFS:

    su $HDFS_USER
    hadoop fs -mkdir -p /mr-history/tmp
    hadoop fs -chmod -R 1777 /mr-history/tmp
    hadoop fs -mkdir -p /mr-history/done
    hadoop fs -chmod -R 1777 /mr-history/done
    hadoop fs -chown -R $MAPRED_USER:$HDFS_USER /mr-history
    
    hadoop fs -mkdir -p /app-logs
    hadoop fs -chmod -R 1777 /app-logs 
    hadoop fs -chown yarn /app-logs 
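    The per-directory commands in step 2 follow one pattern (mkdir, chmod, optional chown), so they can be wrapped in a small helper. This is a sketch: setup_dir and the HADOOP_CMD override are illustrative, not part of Hadoop.

    ```shell
    #!/bin/bash
    # setup_dir: create an HDFS directory, set its mode, and optionally its owner.
    # HADOOP_CMD can be overridden (e.g. for a dry run); defaults to "hadoop".
    setup_dir() {
      local dir=$1 mode=$2 owner=$3
      local cmd=${HADOOP_CMD:-hadoop}
      "$cmd" fs -mkdir -p "$dir" || return 1
      "$cmd" fs -chmod -R "$mode" "$dir" || return 1
      if [ -n "$owner" ]; then
        "$cmd" fs -chown -R "$owner" "$dir" || return 1
      fi
    }

    # Usage mirroring step 2 (run as $HDFS_USER):
    # setup_dir /mr-history/tmp  1777
    # setup_dir /mr-history/done 1777
    # hadoop fs -chown -R "$MAPRED_USER:$HDFS_USER" /mr-history
    # setup_dir /app-logs 1777 yarn
    ```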

  3. Execute these commands from the JobHistory server:

    su $MAPRED_USER
    export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec/
    /usr/lib/hadoop-mapreduce/sbin/mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR start historyserver
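    To confirm the daemon actually came up, you can probe its web UI port; 19888 is the default port for mapreduce.jobhistory.webapp.address. The jhs_up helper below is illustrative, not part of Hadoop.

    ```shell
    #!/bin/bash
    # jhs_up: return success if a TCP connection to host:port succeeds.
    # Uses bash's built-in /dev/tcp redirection.
    jhs_up() {
      local host=$1 port=$2
      (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null
    }

    # Example: check the JobHistory web UI on its default port.
    # jhs_up localhost 19888 && echo "JobHistory server is listening"
    ```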

where:

  • $HDFS_USER is the user owning the HDFS services. For example, hdfs.

  • $MAPRED_USER is the user owning the MapReduce services. For example, mapred.

  • $HADOOP_CONF_DIR is the directory for storing the Hadoop configuration files. For example, /etc/hadoop/conf.
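  Using the example values above, the placeholders can be set once before running the steps. These values are site-specific; adjust them to match your installation.

  ```shell
  # Example values only; substitute the users and paths for your cluster.
  export HDFS_USER=hdfs
  export MAPRED_USER=mapred
  export HADOOP_CONF_DIR=/etc/hadoop/conf
  ```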

