4.4. Complete the Stack Upgrade

  1. Because the file system version has now changed, you must start the NameNode manually. On the NameNode host:

     su -l $HDFS_USER -c "export HADOOP_LIBEXEC_DIR=/usr/lib/hadoop/libexec && /usr/lib/hadoop/sbin/hadoop-daemon.sh start namenode -upgrade" 

    Depending on the size of your system, this step may take up to 10 minutes.

  2. This upgrade can take a long time depending on the number of files you have. You can use tail to monitor the log file so that you can track your progress:

     tail -f /var/log/$HDFS_LOG_FOLDER/hadoop-hdfs-namenode-$HOSTNAME.log 

    Look for lines that confirm the upgrade is complete, one line per name directory, such as Upgrade of /hadoop/hdfs/namenode is complete. You can also look for Registered FSNamesystem State MBean, which follows the upgrade of all name directories.
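    As a sketch of what to look for, the following greps the completion markers described above. The log path and lines here are samples created for illustration; on a real NameNode host you would point LOG at the actual hadoop-hdfs-namenode log:

    ```shell
    # Sample log standing in for /var/log/$HDFS_LOG_FOLDER/hadoop-hdfs-namenode-$HOSTNAME.log
    LOG=/tmp/sample-namenode.log
    cat > "$LOG" <<'EOF'
    2014-01-01 00:00:01 INFO common.Storage: Upgrade of /hadoop/hdfs/namenode is complete.
    2014-01-01 00:00:02 INFO namenode.FSNamesystem: Registered FSNamesystem State MBean
    EOF

    # One "is complete" line is expected per configured name directory.
    grep -c "Upgrade of .* is complete" "$LOG"

    # This line follows the upgrade of all name directories.
    grep -q "Registered FSNamesystem State MBean" "$LOG" && echo "all name directories upgraded"
    ```
    
    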

  3. Prepare the NameNode to work with Ambari:

    1. Open the Ambari Web GUI. If it has been open throughout the process, do a hard reset on your browser to force a reload.

    2. On the Services view, click HDFS to open the HDFS service.

    3. Click View Host to open the NameNode host details page.

    4. Use the dropdown menu to stop the NameNode.

    5. On the Services view, restart the HDFS service. Make sure it passes the ServiceCheck. It is now under Ambari's control.

  4. After the DataNodes are started, HDFS exits safemode. To monitor the status:

    sudo su -l $HDFS_USER -c "hdfs dfsadmin -safemode get"

    Depending on the size of your system, this may take up to 10 minutes. When HDFS exits safemode, the command returns:

    Safe mode is OFF
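    To avoid re-running the command by hand, the check can be wrapped in a polling loop. This is a sketch only: SAFEMODE_CMD below is a stub so the loop is runnable without a live cluster; on a real system it would be su -l $HDFS_USER -c "hdfs dfsadmin -safemode get":

    ```shell
    # Stub standing in for: su -l $HDFS_USER -c "hdfs dfsadmin -safemode get"
    SAFEMODE_CMD() { echo "Safe mode is OFF"; }

    # Poll until the safemode report says OFF.
    until SAFEMODE_CMD | grep -q "Safe mode is OFF"; do
      echo "still in safemode, waiting..."
      sleep 30
    done
    echo "HDFS has exited safemode"
    ```
    
    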
  5. Make sure that the HDFS upgrade was successful. Go through steps 2 and 3 in Section 9.1 to create new versions of the logs and reports, substituting "new" for "old" in the file names as necessary.

  6. Compare the old and new versions of the following:

    • dfs-old-fsck-1.log versus dfs-new-fsck-1.log.

      The files should be identical unless the hadoop fsck reporting format has changed in the new version.

    • dfs-old-lsr-1.log versus dfs-new-lsr-1.log.

      The files should be identical unless the format of hadoop fs -lsr reporting or the data structures have changed in the new version.

    • dfs-old-report-1.log versus dfs-new-report-1.log.

      Make sure all DataNodes previously belonging to the cluster are up and running.
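    The comparisons above can be scripted with diff. The sample files below are created for illustration; in practice you would run this in the directory holding the logs generated in Section 9.1:

    ```shell
    # Create sample old/new fsck logs so the comparison pattern is runnable.
    mkdir -p /tmp/upgrade_check && cd /tmp/upgrade_check
    printf 'Status: HEALTHY\nTotal files: 1000\n' > dfs-old-fsck-1.log
    printf 'Status: HEALTHY\nTotal files: 1000\n' > dfs-new-fsck-1.log

    # diff -q is silent when the files match; inspect any differences manually.
    if diff -q dfs-old-fsck-1.log dfs-new-fsck-1.log >/dev/null; then
      echo "fsck reports match"
    else
      echo "fsck reports differ -- inspect with: diff dfs-old-fsck-1.log dfs-new-fsck-1.log"
    fi
    ```
    
    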

  7. Use the Ambari Web Services view to start YARN.

  8. Use the Ambari Web Services view to start MapReduce2.

  9. Upgrade HBase:

    1. Make sure that all HBase components - RegionServers and HBase Master - are stopped.

    2. Using the Ambari Web Services view, start the ZooKeeper service. Wait until the ZooKeeper service is up and running.

    3. On the HBase Master host, make these configuration changes:

      1. In HBASE_CONFDIR/hbase-site.xml, set the property dfs.client.read.shortcircuit to false.

      2. In the configuration file, find the value of the hbase.tmp.dir property and make sure that the directory exists and is readable and writeable for the HBase service user and group.

        chown -R $HBASE_USER:$HADOOP_GROUP $HBASE_TMP_DIR

        where $HBASE_TMP_DIR is the value of the hbase.tmp.dir property.
      3. Go to the Upgrade Folder and check in the saved global configuration file named global_<$TAG> for the value of the property hbase_pid_dir and hbase_log_dir. Make sure that the directories are readable and writeable for the HBase service user and group.

        chown -R $HBASE_USER:$HADOOP_GROUP $hbase_pid_dir
        chown -R $HBASE_USER:$HADOOP_GROUP $hbase_log_dir

        Do this on every host where a RegionServer is installed as well as on the HBase Master host.
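        A quick pre-flight check like the following can confirm a directory is usable before starting HBase. DIR is a sample path standing in for the values of hbase.tmp.dir, hbase_pid_dir, and hbase_log_dir:

        ```shell
        # Sample directory standing in for an HBase tmp/pid/log directory.
        DIR=/tmp/hbase_tmp_check
        mkdir -p "$DIR"

        # Verify the directory exists and is readable and writable for the
        # current user; otherwise ownership needs fixing with chown.
        if [ -d "$DIR" ] && [ -r "$DIR" ] && [ -w "$DIR" ]; then
          echo "$DIR is readable and writable"
        else
          echo "fix ownership, e.g.: chown -R \$HBASE_USER:\$HADOOP_GROUP $DIR"
        fi
        ```
        
        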

      4. Upgrade HBase. You must be the HBase service user.

        su $HBASE_USER
        /usr/lib/hbase/bin/hbase upgrade -execute

        Make sure that the output contains the string "Successfully completed Znode upgrade".
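        Checks like this are easy to automate by capturing the command's output. As a sketch, UPGRADE_CMD below is a stub so the pattern is runnable here; on the HBase Master it would be /usr/lib/hbase/bin/hbase upgrade -execute:

        ```shell
        # Stub standing in for: /usr/lib/hbase/bin/hbase upgrade -execute
        UPGRADE_CMD() { echo "Successfully completed Znode upgrade"; }

        # Capture output and fail loudly if the success marker is missing.
        OUT=$(UPGRADE_CMD 2>&1)
        if echo "$OUT" | grep -q "Successfully completed Znode upgrade"; then
          echo "znode upgrade ok"
        else
          echo "znode upgrade FAILED:"
          echo "$OUT"
          exit 1
        fi
        ```
        
        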

      5. Use the Services view to start the HBase service. Make sure that Service Check passes.

  10. Upgrade Oozie:

    1. On the Services view, make sure YARN and MapReduce2 are running.

    2. Make sure that the Oozie service is stopped.

    3. Upgrade Oozie. You must be the Oozie service user. On the Oozie host:

      su $OOZIE_USER
      /usr/lib/oozie/bin/ooziedb.sh upgrade -run

      Make sure that the output contains the string "Oozie DB has been upgraded to Oozie version 'OOZIE Build Version'".

    4. Prepare the WAR file:

      [Note]Note

      The Oozie server must not be running for this step. If you get the message "ERROR: Stop Oozie first", the script still thinks the server is running. Check, and if needed, remove the process id (pid) file indicated in the output.

      /usr/lib/oozie/bin/oozie-setup.sh prepare-war

      Make sure that the output contains the string "New Oozie WAR file with added".
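      A stale pid file can be detected by checking whether the recorded process still exists. The sketch below uses a sample pid file and a short-lived process for illustration; the real pid file path is the one printed in the "ERROR: Stop Oozie first" output:

      ```shell
      # Sample pid file standing in for the path printed by oozie-setup.sh.
      PIDFILE=/tmp/oozie-sample.pid

      # Create a process, wait for it to exit, and record its (now stale) pid.
      sleep 0 & SAMPLE_PID=$!
      wait "$SAMPLE_PID"
      echo "$SAMPLE_PID" > "$PIDFILE"

      # kill -0 succeeds only if the process exists; if it fails, the pid
      # file is stale and safe to remove.
      if [ -f "$PIDFILE" ] && ! kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
        echo "stale pid file, removing"
        rm -f "$PIDFILE"
      fi
      ```
      
      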

    5. Modify the following configuration properties in oozie-site.xml. On the Ambari Server, use /var/lib/ambari-server/resources/scripts/configs.sh to inspect and update the configuration properties as described here.

      Table II.6.2. Properties to Modify

      • Add: oozie.service.URIHandlerService.uri.handlers =
        org.apache.oozie.dependency.FSURIHandler,org.apache.oozie.dependency.HCatURIHandler

      • Add: oozie.service.coord.push.check.requeue.interval = 30000

      • Add: oozie.services.ext =
        org.apache.oozie.service.PartitionDependencyManagerService,org.apache.oozie.service.HCatAccessorService

      • Add/Modify: oozie.service.SchemaService.wf.ext.schemas =
        shell-action-0.1.xsd,email-action-0.1.xsd,hive-action-0.2.xsd,sqoop-action-0.2.xsd,ssh-action-0.1.xsd,distcp-action-0.1.xsd,shell-action-0.2.xsd,oozie-sla-0.1.xsd,oozie-sla-0.2.xsd [a]

      [a] Use this list if you have not modified the default Ambari values. If you have added custom schemas, make sure they exist after the modification. The schemas being added here are shell-action-0.2.xsd, oozie-sla-0.1.xsd, and oozie-sla-0.2.xsd. You can add these to your existing list.
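      As a sketch of applying these changes, assuming configs.sh accepts the usual "set <ambari-host> <cluster> <config-type> <key> <value>" form, the additions might look like the following. AMBARI_HOST and CLUSTER_NAME are placeholders for your environment:

      ```shell
      # Hypothetical invocation sketch; verify the configs.sh usage on your
      # Ambari Server before running. AMBARI_HOST and CLUSTER_NAME are
      # placeholders, not values from this document.
      cd /var/lib/ambari-server/resources/scripts
      ./configs.sh set $AMBARI_HOST $CLUSTER_NAME oozie-site \
        "oozie.service.coord.push.check.requeue.interval" "30000"
      ./configs.sh set $AMBARI_HOST $CLUSTER_NAME oozie-site \
        "oozie.services.ext" \
        "org.apache.oozie.service.PartitionDependencyManagerService,org.apache.oozie.service.HCatAccessorService"
      ```
      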

    6. Replace the content of /user/oozie/share in HDFS. On the Oozie server host:

      1. Extract the Oozie sharelib into a tmp folder.

        mkdir -p /tmp/oozie_tmp
        cp /usr/lib/oozie/oozie-sharelib.tar.gz /tmp/oozie_tmp
        cd /tmp/oozie_tmp
        tar xzvf oozie-sharelib.tar.gz
      2. Back up the /user/oozie/share folder in HDFS and then delete it. If you have any custom files in this folder, back them up separately and then add them back after the share folder is updated.

        su -l hdfs -c "hdfs dfs -copyToLocal /user/oozie/share /tmp/oozie_tmp/oozie_share_backup"
        su -l hdfs -c "hdfs dfs -rm -r /user/oozie/share"
      3. Add the latest share libs that you extracted in step 1. After you have added the files, modify ownership and acl.

        su -l hdfs -c "hdfs dfs -copyFromLocal /tmp/oozie_tmp/share /user/oozie/."
        su -l hdfs -c "hdfs dfs -chown -R oozie:hadoop /user/oozie"
        su -l hdfs -c "hdfs dfs -chmod -R 755 /user/oozie"
    7. Use the Services view to start the Oozie service. Make sure that ServiceCheck passes for Oozie.

  11. Make sure Ganglia no longer attempts to monitor JobTracker.

    1. Make sure Ganglia is stopped.

    2. Log into the host where JobTracker was installed (and where ResourceManager is installed after the upgrade).

    3. Back up the folder /etc/ganglia/hdp/HDPJobTracker.

    4. Remove the folder /etc/ganglia/hdp/HDPJobTracker.

    5. Remove the folder $ganglia_runtime_dir/HDPJobTracker.

      [Note]Note

      For the value of $ganglia_runtime_dir, in the Upgrade Folder, check the saved global configuration file global_<$TAG>.
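      As a sketch of reading such a value, the snippet below assumes the saved global configuration file uses a flat key=value layout; a sample file is created here for illustration since the real format depends on how the file was saved:

      ```shell
      # Sample file standing in for the saved global_<$TAG> configuration.
      CFG=/tmp/global_sample
      cat > "$CFG" <<'EOF'
      ganglia_runtime_dir=/var/run/ganglia/hdp
      hbase_pid_dir=/var/run/hbase
      EOF

      # Extract the value of ganglia_runtime_dir (assumed key=value format).
      ganglia_runtime_dir=$(grep 'ganglia_runtime_dir=' "$CFG" | cut -d= -f2)
      echo "$ganglia_runtime_dir"
      ```
      
      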

  12. Use the Services view to start the remaining services back up.

  13. The upgrade is now fully functional but not yet finalized. Using the finalize command removes the previous version of the NameNode and DataNode storage directories.

    [Important]Important

    After the upgrade is finalized, the system cannot be rolled back. Usually this step is not taken until a thorough testing of the upgrade has been performed.

    The upgrade must be finalized before another upgrade can be performed.

    [Note]Note

    Directories used by Hadoop 1 services set in /etc/hadoop/conf/taskcontroller.cfg are not automatically deleted after upgrade. Administrators can choose to delete these directories after the upgrade.

    To finalize the upgrade:

    sudo su -l $HDFS_USER -c "hadoop dfsadmin -finalizeUpgrade"

    where $HDFS_USER is the HDFS Service user (by default, hdfs).

