8. Upgrading From HDP 2.1.2 to HDP 2.1.3

This section describes how to upgrade an existing HDP 2.1.2 installation to HDP 2.1.3.

If you are upgrading from an earlier HDP version, such as HDP 2.0, follow the complete Stack upgrade instructions for HDP 2.1 instead (see the HDP 2.1 Stack upgrade documentation).

Before You Begin

  • Make sure you know what HDP components need to be upgraded at your installation.

  • Decide whether you will upgrade using a local repository or a remote repository.

To upgrade from HDP 2.1.2 to HDP 2.1.3, do the following:

  1. Download the appropriate hdp.repo file for your OS:

    RHEL/CentOS/Oracle Linux 5: http://public-repo-1.hortonworks.com/HDP/centos5/2.x/updates/2.1.3.0/hdp.repo
    RHEL/CentOS/Oracle Linux 6: http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.1.3.0/hdp.repo
    SLES 11: http://public-repo-1.hortonworks.com/HDP/suse11/2.x/updates/2.1.3.0/hdp.repo

    OR Download the HDP RPMs single repository tarball. (For further information, see the local repository instructions.)
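
    As an illustration, the repo file can be saved directly into the package manager's repository directory. The sketch below assumes a RHEL/CentOS 6 host and the standard /etc/yum.repos.d location; use the matching URL above (and /etc/zypp/repos.d with zypper on SLES) for other platforms:

    # Example only: fetch the HDP 2.1.3 repo file on a RHEL/CentOS 6 host.
    # The URL is the centos6 entry listed above; adjust it for your OS.
    wget -nv http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.1.3.0/hdp.repo \
         -O /etc/yum.repos.d/hdp.repo

    # Confirm that the HDP 2.1.3.0 repository is now visible to yum.
    yum repolist | grep -i hdp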

  2. Stop all services.

    If you are managing your deployment via Ambari, open Ambari Web, browse to Services and use the Service Actions command to stop each service.
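
    If you prefer to script this step rather than use the Ambari Web UI, the stop can also be requested through Ambari's REST API. The call below is only a sketch: AMBARI_HOST, CLUSTER_NAME, and the admin:admin credentials are placeholders for your environment.

    # Sketch: ask Ambari to move all services to the INSTALLED (stopped) state.
    # Replace AMBARI_HOST, CLUSTER_NAME, and the credentials with your own values.
    curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
         -d '{"RequestInfo":{"context":"Stop all services"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
         http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services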

    If you are upgrading manually, follow the instructions in the HDP 2.1.3 Reference Guide.

    [Note]Note

    If you are upgrading an HA NameNode configuration, keep your JournalNodes running while performing this upgrade procedure. Upgrade, rollback and finalization operations on HA NameNodes must be performed with all JournalNodes running.

  3. Back up Oozie files.

    If you have Oozie installed and running, back up the files in the following directories on the Oozie server host, and make sure all files, including *site.xml files, are copied:

    mkdir oozie-conf-bak 
    cp -R /etc/oozie/conf/* oozie-conf-bak
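
    A quick, optional check (not part of the original procedure) that the backup is complete is to compare the copy against the live configuration directory; no output means the two trees match:

    # Recursively compare the live Oozie configuration with the backup.
    diff -r /etc/oozie/conf oozie-conf-bak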
                        
  4. Upgrade the stack on all Agent hosts.

    The following instructions include all possible components that can be upgraded. If your installation does not use a particular component, skip those installation instructions.

    On RHEL/CentOS/Oracle Linux:

    a. Remove the WebHCat components:

        yum erase "webhcat*"

    b. If you have not already done so, stop the Hive Metastore.

    c. Upgrade Hive and HCatalog. On the Hive and HCatalog host machines, enter:

        yum upgrade hive
        yum erase hcatalog
        yum install hive-hcatalog

    d. Upgrade the Hive Metastore database schema. On the Hive host machine, enter:

        $HIVE_HOME/bin/schematool -upgradeSchema -dbType <$databaseType>

        The value for $databaseType can be derby, mysql, oracle, or postgres. (A concrete example follows these OS-specific instructions.)

    e. Upgrade the following components:

        yum upgrade "collectd*" "gccxml*" "pig*" "hadoop*" "phoenix*" "knox*" "tez*" "falcon*" "storm*" "sqoop*" \
            "zookeeper*" "hbase*" "hive*" hdp_mon_nagios_addons
        yum install webhcat-tar-hive webhcat-tar-pig
        yum install "hive-*"
        yum install oozie oozie-client
        rpm -e --nodeps bigtop-jsvc
        yum install bigtop-jsvc

    f. Verify that the components were upgraded. Enter:

        yum list installed | grep HDP-$old-stack-version-number

    On SLES:

    a. Remove the WebHCat components:

        zypper remove webhcat\*

    b. If you have not already done so, stop the Hive Metastore.

    c. Upgrade Hive and HCatalog. On the Hive and HCatalog host machines, enter:

        zypper install hive-hcatalog

    d. Upgrade the Hive Metastore database schema. On the Hive host machine, enter:

        $HIVE_HOME/bin/schematool -upgradeSchema -dbType <$databaseType>

        The value for $databaseType can be derby, mysql, oracle, or postgres.

    e. Upgrade the following components:

        zypper up "collectd*" "epel-release*" "gccxml*" "pig*" "hadoop*" "phoenix*" "knox*" "falcon*" "tez*" "storm*" "sqoop*" \
            "zookeeper*" "hbase*" "hive*" hdp_mon_nagios_addons
        zypper install webhcat-tar-hive webhcat-tar-pig
        zypper up -r HDP-2.1.3.0
        zypper install hive\*
        zypper install oozie oozie-client

    f. Verify that the components were upgraded:

        rpm -qa | grep hadoop
        rpm -qa | grep hive
        rpm -qa | grep hcatalog

    g. If any components were not upgraded, upgrade them:

        yast --update hadoop hcatalog hive
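
    As a concrete illustration of the schematool step above, the following assumes a MySQL-backed Metastore and the default HDP 2.1 install path /usr/lib/hive; substitute your own database type and path:

    # Example only: upgrade a MySQL-backed Hive Metastore schema.
    export HIVE_HOME=/usr/lib/hive
    $HIVE_HOME/bin/schematool -upgradeSchema -dbType mysql

    # Optionally confirm the resulting schema version.
    $HIVE_HOME/bin/schematool -info -dbType mysql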

  5. If you are upgrading an HA NameNode configuration, restart all JournalNodes. On each JournalNode host, enter the following command:

    su -l {HDFS_USER} -c "/usr/lib/hadoop/sbin/hadoop-daemon.sh start journalnode"
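
    To confirm that a JournalNode came back up on each host, an optional check (added here for convenience) is to look for its process:

    # The JournalNode daemon should appear in the process list on every JournalNode host.
    ps -ef | grep -i "[j]ournalnode"
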
  6. Complete the Stack upgrade.

    For HDP 2.1, use the version of Hue shipped with HDP 2.1. If you have a previous version of Hue, follow the instructions for upgrading Hue in Installing HDP Manually.

    If this is an Ambari-managed cluster, update the Repository Base URLs to use the HDP 2.1.3 repositories for HDP and HDP-UTILS. For Ambari 1.6.0 or earlier, enter:

    ambari-server upgradestack HDP-2.1 http://public-repo-1.hortonworks.com/HDP/{$os}/2.x/updates/2.1.3.0 {$os}

    where {$os} is the Operating System Family (OS Family) shown in the following table. An example command follows the table.

    Table 4.1. Operating Systems mapped to each OS Family

    OS Family   Operating System
    redhat5     Red Hat 5, CentOS 5, Oracle Linux 5
    redhat6     Red Hat 6, CentOS 6, Oracle Linux 6
    sles11      SUSE Linux Enterprise Server 11
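
    For example, on a RHEL/CentOS/Oracle Linux 6 cluster the OS Family is redhat6, and the base URL should point at the HDP 2.1.3.0 repository used in step 1 (the centos6 path for this OS Family); adjust both values for your platform:

    # Example only: point an Ambari 1.6.0 (or earlier) server at the HDP 2.1.3.0 repositories.
    ambari-server upgradestack HDP-2.1 \
        http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.1.3.0 redhat6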


  7. Finalize the upgrade.

    If you are not yet ready to discard your backup, you can start the upgraded HDFS without finalizing the upgrade. (At this stage, you can still roll back if need be.)

    Verify your filesystem health. When you are ready to commit to the upgrade (that is, once you are certain you will not need to roll back), discard your backup and finalize the upgrade.
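
    A few common health checks, run as $HDFS_USER, can help with this verification (a suggested sketch rather than a required step):

    # Summarize DataNode status and cluster capacity.
    hdfs dfsadmin -report

    # Confirm the NameNode has left safe mode.
    hdfs dfsadmin -safemode get

    # Scan the filesystem for missing or corrupt blocks and keep the report for review.
    hdfs fsck / -files -blocks > /tmp/fsck-post-upgrade.log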

    As $HDFS_USER, execute the following command:

    hdfs dfsadmin -finalizeUpgrade

