4.1. Prepare for the Stack Upgrade

  1. Make sure that you have saved the namespace. You should have done this in an earlier step; the upgrade will fail if you do not save the namespace. If you have not saved the namespace yet:

    1. Restart Ambari Server and Ambari Agents.
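
      If you prefer the command line to the Ambari web UI, the server and agents can typically be restarted as follows (run the agent command on every agent host):

      ambari-server restart
      ambari-agent restart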

    2. Restart HDFS only.

    3. On the NameNode host:

      su $HDFS_USER
      hadoop dfsadmin -safemode enter
      hadoop dfsadmin -saveNamespace
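
      Before moving on, you can optionally confirm that the NameNode is still in safe mode (a quick sanity check, not part of the original procedure):

      hadoop dfsadmin -safemode get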
    4. Stop the HDFS service and wait for it to be fully stopped.

    5. Stop the Ambari Server and Ambari Agents.
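
      As with the restart above, the server and agents can typically be stopped from the command line (run the agent command on each agent host):

      ambari-server stop
      ambari-agent stop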

  2. Prepare for the upgrade:

    1. Create an "Upgrade Folder", for example /work/upgrade_hdp_2, on a host that can communicate with Ambari Server. The Ambari Server host would be a suitable candidate.

    2. Copy the upgrade script to the Upgrade Folder. The script is available on the Ambari Server host at /var/lib/ambari-server/resources/scripts/UpgradeHelper_HDP2.py.
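
      For example, using the folder name suggested above:

      mkdir -p /work/upgrade_hdp_2
      cp /var/lib/ambari-server/resources/scripts/UpgradeHelper_HDP2.py /work/upgrade_hdp_2/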

    3. Make sure that Python is available on the host and that the version is 2.6 or higher:

      python --version
      Note: For RHEL/CentOS/Oracle Linux 5, you must use Python 2.6.

  3. Start Ambari Server only. On the Ambari Server host:

    ambari-server start
  4. Back up the current configuration settings and the MapReduce component host mappings:

    1. Go to the Upgrade Folder.

    2. Execute the backup-configs action:

      python UpgradeHelper_HDP2.py --hostname $HOSTNAME --user $USERNAME --password $PASSWORD --clustername $CLUSTERNAME backup-configs

      Where

      • $HOSTNAME is the name of the Ambari Server host

      • $USERNAME is the admin user for Ambari Server

      • $PASSWORD is the password for the admin user

      • $CLUSTERNAME is the name of the cluster

      This step produces a set of files named TYPE_TAG, where TYPE is the configuration type and TAG is the tag. These files contain copies of the various configuration settings for the current (pre-upgrade) cluster. You can use these files as a reference later.
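
      For example, an invocation with illustrative values (substitute your own Ambari host, credentials, and cluster name) might look like:

      python UpgradeHelper_HDP2.py --hostname ambari.example.com --user admin --password admin --clustername MyCluster backup-configs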

    3. Execute the save-mr-mapping action:

      python UpgradeHelper_HDP2.py --hostname $HOSTNAME --user $USERNAME --password $PASSWORD --clustername $CLUSTERNAME save-mr-mapping

      This step produces a file named mr_mapping that stores the host-level mapping of MapReduce components such as the JobTracker, TaskTrackers, and MapReduce clients.
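
      At this point you can list the Upgrade Folder to confirm that both the configuration backups and mr_mapping are present (using the example folder name from above):

      ls /work/upgrade_hdp_2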

  5. Delete all the MapReduce server components installed on the cluster.

    1. If you are not already there, go to the Upgrade Folder.

    2. Execute the delete-mr action:

      python UpgradeHelper_HDP2.py --hostname $HOSTNAME --user $USERNAME --password $PASSWORD --clustername $CLUSTERNAME delete-mr

      Optionally, execute the delete script with the -n option first to view and verify the API calls that the script would make (see the note and example below).

      Note: Running the delete script with the -n option exposes the API calls but does not remove installed components. Use the -n option for validation purposes only.
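
      For example, a dry run with illustrative values might look like this (no components are removed):

      python UpgradeHelper_HDP2.py -n --hostname ambari.example.com --user admin --password admin --clustername MyCluster delete-mr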

    3. The script asks you to confirm that you have executed the save-mr-mapping action and that you have a file named mr_mapping in the Upgrade Folder.

