1. Preparing for the Upgrade

Use the following steps to prepare your system for the upgrade.

  1. Use the Services View in the Ambari Web UI to stop all services that run on HDFS, including MapReduce and all clients. Do not stop HDFS itself yet.
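
    If you prefer to script this step, services can also be stopped through the Ambari REST API. The following is only a sketch; it assumes an Ambari server at localhost:8080, admin/admin credentials, a cluster named MyCluster, and the MAPREDUCE service name, so substitute your own values. Setting a service's state to INSTALLED stops it.

    curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
      -d '{"RequestInfo":{"context":"Stop MapReduce"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
      'http://localhost:8080/api/v1/clusters/MyCluster/services/MAPREDUCE'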

  2. Run fsck with the following flags and send the results to a log. The resulting file contains a complete block map of the file system. You use this log later to confirm the upgrade.

    su $HDFS_USER
    hadoop fsck / -files -blocks -locations > /tmp/dfs-old-fsck-1.log 

    where $HDFS_USER is the HDFS Service user (by default, hdfs).
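
    Before moving on, you can optionally confirm that fsck reported a healthy file system; the summary near the end of the log normally includes a Status line (for example, Status: HEALTHY).

    grep "Status:" /tmp/dfs-old-fsck-1.log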

  3. Prepare other logs for comparing your system's state before and after the upgrade.

    Note

    You must be the HDFS service user (by default, hdfs) when you run these commands.

    1. Capture the complete namespace of the filesystem. (The following command does a recursive listing of the root file system.)

      su $HDFS_USER
      hadoop dfs -lsr / > /tmp/dfs-old-lsr-1.log 

      where $HDFS_USER is the HDFS Service user (by default, hdfs).

    2. Create a list of all the DataNodes in the cluster.

      su $HDFS_USER
      hadoop dfsadmin -report > /tmp/dfs-old-report-1.log

      where $HDFS_USER is the HDFS Service user (by default, hdfs).
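
      To make the before/after comparison easier, you can also note the number of live DataNodes; the report normally includes a "Datanodes available" summary line, for example:

      grep "Datanodes available" /tmp/dfs-old-report-1.log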

    3. Optional: Copy all of the data stored in HDFS, or only the data that could not be recreated if it were lost, to a local file system or to a backup instance of HDFS.
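
      The exact commands depend on the backup target. As one hedged sketch, assuming a hypothetical source path /critical-data and a local backup directory /tmp/hdfs-backup:

      su $HDFS_USER
      mkdir -p /tmp/hdfs-backup
      hadoop dfs -copyToLocal /critical-data /tmp/hdfs-backup/

      To copy to a backup HDFS instance instead, hadoop dfs -cp also accepts full hdfs:// URIs as the destination.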

    4. Optional: create the logs again and check to make sure the results are identical.
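
      For example, to re-capture the namespace listing and compare it with the first capture (the file name /tmp/dfs-old-lsr-2.log is only an example):

      su $HDFS_USER
      hadoop dfs -lsr / > /tmp/dfs-old-lsr-2.log
      diff /tmp/dfs-old-lsr-1.log /tmp/dfs-old-lsr-2.log

      An empty diff means the namespace has not changed since the first capture.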

  4. Save the namespace. You must be the HDFS service user to do this, and you must first put the cluster in Safe Mode.

    hadoop dfsadmin -safemode enter
    hadoop dfsadmin -saveNamespace
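
    If you want to confirm that the cluster actually entered Safe Mode, you can check the current status:

    hadoop dfsadmin -safemode get
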
  5. Copy the following checkpoint files into a backup directory. The directory is on your NameNode host; to find it, use the Services View in the Ambari Web UI: select the HDFS service, open the Configs tab, and in the NameNode section look up the NameNode Directories property.

    • dfs.name.dir/edits

    • dfs.name.dir/image/fsimage
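
    For example, if the NameNode Directories property shows /hadoop/hdfs/namenode (a hypothetical value; use the one reported by Ambari) and /backup/namenode-checkpoint is a backup directory you created on the NameNode host, the copy might look like this:

    mkdir -p /backup/namenode-checkpoint/image
    cp /hadoop/hdfs/namenode/edits /backup/namenode-checkpoint/
    cp /hadoop/hdfs/namenode/image/fsimage /backup/namenode-checkpoint/image/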

  6. Stop HDFS. Make sure all services in the cluster are completely stopped.

  7. If you are upgrading Hive, back up the Hive database.
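
    The backup method depends on the database that hosts the Hive Metastore. As one hedged example, if the Metastore lives in a MySQL database named hive, a dump could look like this (substitute your own host, user, and database name):

    mysqldump -u hive -p hive > /tmp/hive-metastore-backup.sql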

  8. Move the conf.save directory for the Ambari server and agents to a backup location:

    mv /etc/ambari-server/conf.save/ /etc/ambari-server/conf.save.bak 
    mv /etc/ambari-agent/conf.save/ /etc/ambari-agent/conf.save.bak 
