4. Upgrading HDP 1.3.9 to 1.3.10

Before You Begin

Make sure you know what HDP components need to be upgraded at your installation. Decide if you are going to upgrade using a local repository or a remote repository.

Upgrading to 1.3.10

Use the following instructions to upgrade HDP 1.3.9 to HDP 1.3.10 manually:

  1. Download the appropriate hdp.repo file for your OS:

    RHEL/CentOS/Oracle Linux 5: http://public-repo-1.hortonworks.com/HDP/centos5/1.x/updates/1.3.10.0/hdp.repo
    RHEL/CentOS/Oracle Linux 6: http://public-repo-1.hortonworks.com/HDP/centos6/1.x/updates/1.3.10.0/hdp.repo
    SLES 11 SP1: http://public-repo-1.hortonworks.com/HDP/suse11/1.x/updates/1.3.10.0/hdp.repo

    OR Download the HDP RPMs single repository tarball. (For further information, see the local repository instructions.)
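
    For example, on RHEL/CentOS/Oracle Linux 6 you could fetch the repo file straight into the yum repository directory (a minimal sketch; the target path /etc/yum.repos.d/hdp.repo is an assumption, and the URL should match your OS from the list above):

    wget http://public-repo-1.hortonworks.com/HDP/centos6/1.x/updates/1.3.10.0/hdp.repo -O /etc/yum.repos.d/hdp.repo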

  2. Stop all services.

    If you are managing your deployment via Ambari, open Ambari Web, browse to Services and use the Service Actions command to stop each service.

    If you have a manually installed cluster, use the instructions provided here.

    Note

    If you are upgrading an HA NameNode configuration, keep your JournalNodes running while performing this upgrade procedure. Upgrade, rollback and finalization operations on HA NameNodes must be performed with all JournalNodes running.
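
    Before proceeding, a quick way to confirm that the JournalNodes are still running on each JournalNode host is a process check along these lines (a sketch; adjust the pattern to your deployment):

    ps -ef | grep -i journalnode | grep -v grep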

  3. Run the fsck command as instructed below and fix any errors. (The resulting file will contain a complete block map of the file system.)

    su $HDFS_USER
    hadoop fsck / -files -blocks -locations > dfs-old-fsck-1.log 

    where $HDFS_USER is the HDFS Service user. For example, hdfs.

  4. Use the following instructions to compare the status before and after the upgrade:

    Note

    The following commands must be executed by the user running the HDFS service (by default, the user is hdfs).

    1. Capture the complete namespace of the file system. Run the recursive listing of the root file system:

      su $HDFS_USER
      hadoop dfs -lsr / > dfs-old-lsr-1.log 

      where $HDFS_USER is the HDFS Service user. For example, hdfs.

    2. Run the report command to create a list of DataNodes in the cluster.

      su $HDFS_USER
      hadoop dfsadmin -report > dfs-old-report-1.log

      where $HDFS_USER is the HDFS Service user. For example, hdfs.

    3. Copy all unrecoverable data stored in HDFS to a local file system or to a backup instance of HDFS.

    4. Optionally, repeat sub-steps 1 through 3 and compare the results with the previous run to verify that the state of the file system has remained unchanged; a comparison sketch follows this step.
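
    As a sketch of that post-upgrade comparison, you can capture the same listings again after the upgrade (the dfs-new-*.log file names are illustrative) and diff them against the files saved above:

      hadoop dfs -lsr / > dfs-new-lsr-1.log
      diff dfs-old-lsr-1.log dfs-new-lsr-1.log
      hadoop dfsadmin -report > dfs-new-report-1.log
      diff dfs-old-report-1.log dfs-new-report-1.log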

  5. As the HDFS Service user, execute the following commands to save the namespace:

    su $HDFS_USER
    hadoop dfsadmin -safemode enter
    hadoop dfsadmin -saveNamespace

    where $HDFS_USER is the HDFS Service user. For example, hdfs.

  6. Copy the following checkpoint files into a backup directory:

    • dfs.name.dir/current/edits

    • dfs.name.dir/image/fsimage
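
    For example (a sketch only; /hadoop/hdfs/namenode as the dfs.name.dir value and /backup/namenode-checkpoint as the backup location are assumptions to adapt to your configuration):

    mkdir -p /backup/namenode-checkpoint
    cp -r /hadoop/hdfs/namenode/current/edits /backup/namenode-checkpoint/
    cp -r /hadoop/hdfs/namenode/image/fsimage /backup/namenode-checkpoint/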

  7. Stop the HDFS service.

    If you are managing your deployment via Ambari, open Ambari Web, browse to Services and use the Service Actions command to stop the service.

    If you are installing manually, use the instructions provided here.

    Note

    Verify that all the HDP services in the cluster are stopped.
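
    A quick way to double-check for leftover Hadoop processes on a host is a process listing such as the following (a sketch; extend the patterns to cover the services installed on that host):

    ps -ef | grep -i -e namenode -e datanode -e jobtracker -e tasktracker | grep -v grep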

  8. If you have Oozie installed:

    Back up the files in the following directory on the Oozie server host, and make sure that all files, including the *site.xml files, are copied:

    mkdir oozie-conf-bak
    cp -R /etc/oozie/conf/* oozie-conf-bak

    Remove the old Oozie configuration directories on all Oozie server and client hosts:

    rm -R /etc/oozie/conf/*

  9. Back up the Hive Database.
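
    For example, if the Hive Metastore database is MySQL (an assumption; use the equivalent tool for your database), a dump along these lines produces a restorable backup, where the output path is illustrative:

    mysqldump hive > /tmp/mydir/backup_hive.sql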

  10. Upgrade the stack on all Agent hosts.

    RHEL/CentOS/Oracle Linux

      Upgrade the following components:

      yum upgrade "collectd*" "gccxml*" "pig*" "hadoop*" "oozie" "oozie-client" "sqoop*" "zookeeper*" "hbase*" "webhcat" "hive*" "hcatalog" hdp_mon_nagios_addons

      Verify that the components were upgraded:

      yum list installed | grep HDP-$old-stack-version-number

    SLES

      Upgrade the following components:

      zypper up "collectd*" "epel-release*" "gccxml*" "pig*" "hadoop*" "oozie" "oozie-client" "sqoop*" "zookeeper*" "hbase*" "hive*" "hcatalog" hdp_mon_nagios_addons

      zypper up -r HDP-1.3.10.0

      Verify that the components were upgraded:

      rpm -qa | grep hadoop
      rpm -qa | grep hive
      rpm -qa | grep hcatalog

  11. Complete the Stack upgrade.

    If this is an Ambari-managed cluster, update the Repository Base URLs to use the HDP 1.3.10 repositories for HDP and HDP-UTILS. For Ambari 1.6.1 or earlier, enter:

    ambari-server upgradestack HDP-1.3 http://public-repo-1.hortonworks.com/HDP/{$os}/1.x/updates/1.3.10.0 {$os}

    where {$os} is the Operating System Family (OS Family). See the following table:

     

    Table 1.1. Operating Systems mapped to each OS Family

    OS Family   Operating System
    redhat5     Red Hat 5, CentOS 5, Oracle Linux 5
    redhat6     Red Hat 6, CentOS 6, Oracle Linux 6
    sles11      SUSE Linux Enterprise Server 11 SP1
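
    As an illustration of the substitution, on a Red Hat/CentOS/Oracle Linux 6 cluster (an assumption) {$os} resolves to redhat6, and the command template above becomes:

    ambari-server upgradestack HDP-1.3 http://public-repo-1.hortonworks.com/HDP/redhat6/1.x/updates/1.3.10.0 redhat6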


  12. Restart services.

    If you are managing your deployment via Ambari, open Ambari Web, browse to Services and use the Service Actions command to start each service.

    If you have a manually installed cluster, use the Starting HDP Services instructions.

Note

Remember to restart Hue as the root user: /etc/init.d/hue restart

