HORTONWORKS PATCH RELEASE
Version HDP-2.1.4.0
July 2014

CONTENTS
--------
* NOTICE OF LIMITED SUPPORT
* Features
* Defects Fixed
* Downloads
* Version Info
* Contact
* Copyright

NOTICE OF LIMITED SUPPORT
-------------------------
This patch release of HDP 2.1.4 is tested and supported on CentOS 6 only. Support for all other operating systems will follow in the next maintenance release.

FEATURES
--------
* Hue 2.5.0
* Preemption support

DEFECTS FIXED
-------------
The following YARN and MapReduce fixes are included:

BUG-17995: [YARN-2074] Preemption of AM containers shouldn't count towards AM failures
BUG-17996: [YARN-1957] ProportionalCapacitPreemptionPolicy handling of corner cases
BUG-18039: [MAPREDUCE-4951 and 5900] Container preemption interpreted as task failures and eventually job failures
BUG-18120: [YARN-1408] Preemption caused Invalid State Event: ACQUIRED at KILLED and caused a task timeout for 30 mins
BUG-18468: [YARN-2144] Add logs when preemption occurs.
BUG-18535: [YARN-2124] ProportionalCapacityPreemptionPolicy cannot work because it's initialized before the scheduler is initialized
BUG-18536: [YARN-2125] ProportionalCapacityPreemptionPolicy should only log CSV when debug enabled
BUG-19086: [YARN-2022] Preempting an Application Master container can be kept as least priority when multiple applications are marked for preemption by ProportionalCapacityPreemptionPolicy
BUG-19182: [YARN-2181] Add preemption info to RM Web UI and RM logs.
BUG-19735: [MAPREDUCE-5956] MapReduce AM should not use maxAttempts to determine if this is the last retry
BUG-19944: The short-circuit cache doesn't correctly time out replicas that haven't been used in a while
BUG-20419: [MAPREDUCE-6002] MR task should prevent report error to AM when process is shutting down
BUG-20530: [HDFS-6604] Major dependency change in hive causes wrong hadoop configurations to be loaded

The following Hue fixes are included:

BUG-19530: Users get errors when trying to refresh and import new members of LDAP groups
BUG-19762: Beeswax server gets "_message='java.lang.OutOfMemoryError: unable to create new native thread', errorCode=0)" and the query hangs until Hue is restarted
BUG-20528: Add force_username_uppercase option to Hue for AD authorization.

DOWNLOADS AND INSTALLATION
--------------------------
This patch release is provided as a complete HDP 2.1 distribution; you do not need to install HDP 2.1 GA before installing this patch release.

If you are installing for the first time, make sure you meet the minimum system requirements:

Manual: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1-latest/bk_installing_manually_book/content/rpm-chap1-2.html
Ambari: http://docs.hortonworks.com/HDPDocuments/Ambari-1.6.1.0/bk_using_Ambari_book/content/ambari-chap1-2.html

To install the update from a remote repository:

1. Download the repo file.

   a. Manual. Download the hdp.repo file for your OS:

      http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.1.4.0/hdp.repo

      Note: Further documentation on installing using a local repository is available here:
      http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1-latest/bk_installing_manually_book/content/rpm-chap1-3.html

      Further documentation on installing using a remote repository is available here:
      http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1-latest/bk_installing_manually_book/content/rpm-chap1-3.html

   b. Ambari. Download the Ambari file for your OS.
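The repo URLs in step 1 follow a fixed pattern: the OS tag, then the HDP version. If you are scripting the download across several hosts, that pattern can be captured in a small helper. A sketch only; the function name is mine, not part of any HDP tooling:

```shell
# Build the hdp.repo URL for a given OS tag and HDP version.
# (Helper name is illustrative; the URL pattern is taken from step 1a above.)
build_hdp_repo_url() {
  os="$1"
  version="$2"
  echo "http://public-repo-1.hortonworks.com/HDP/${os}/2.x/updates/${version}/hdp.repo"
}

# For this patch release on CentOS 6:
url="$(build_hdp_repo_url centos6 2.1.4.0)"

# As root, fetch the file to where yum looks for repo definitions:
#   wget -O /etc/yum.repos.d/hdp.repo "$url"
```

The fetch itself is left as a comment because it requires root and network access on the target host.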
      wget http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.6.1/ambari.repo
      cp ambari.repo /etc/yum.repos.d

      Note: Further documentation on installing Ambari is available here:
      http://docs.hortonworks.com/HDPDocuments/Ambari-1.6.1.0/bk_using_Ambari_book/content/ambari-chap2.html

2. If you have previously installed HDP, you must upgrade. Stop all services.

   If you are managing your deployment via Ambari, open Ambari Web, browse to Services, and use the Service Actions command to stop each service.

   If you are upgrading manually, follow the instructions in the HDP 2.1.3 Reference Guide.

   Note: If you are upgrading an HA NameNode configuration, keep your JournalNodes running while performing this upgrade procedure. Upgrade, rollback, and finalization operations on HA NameNodes must be performed with all JournalNodes running.

3. If you have Oozie installed/running:

   a. Back up the files in the following directories on the Oozie server host, and make sure all files, including *site.xml files, are copied:

      mkdir oozie-conf-bak
      cp -R /etc/oozie/conf/* oozie-conf-bak

   b. Remove the old Oozie directories on all Oozie server and client hosts:

      rm -R /etc/oozie/conf

4. Upgrade the stack on all Agent hosts. The following instructions include all possible components that can be upgraded. If your installation does not use a particular component, skip those instructions.

   a. Remove WebHCat components:

      yum erase "webhcat*"

      Note: If you haven't done so already, stop the Hive Metastore.

   b. Upgrade Hive and HCatalog. On the Hive and HCatalog host machines, enter:

      yum upgrade hive
      yum erase hcatalog
      yum install hive-hcatalog

   c. Upgrade the Hive Metastore database schema. On the Hive host machine, enter:

      $HIVE_HOME/bin/schematool -upgradeSchema -dbType <$databaseType>

      The value for $databaseType can be derby, mysql, oracle, or postgres.

   d. Upgrade the following components:

      yum upgrade "collectd*" "gccxml*" "pig*" "hadoop*" "phoenix*" "knox*" "tez*" "falcon*" "storm*" "sqoop*" "zookeeper*" "hbase*" "hive*" hdp_mon_nagios_addons
      yum install webhcat-tar-hive webhcat-tar-pig
      yum install "hive-*"
      yum install oozie oozie-client
      rpm -e --nodeps bigtop-jsvc
      yum install bigtop-jsvc

   e. Verify that the components were upgraded. Enter:

      yum list installed | grep HDP-$old-stack-version-number

5. If you are upgrading from an HA NameNode configuration, restart all JournalNodes. On each JournalNode host, enter the following command:

      su -l {HDFS_USER} -c "/usr/lib/hadoop/sbin/hadoop-daemon.sh start journalnode"

6. Complete the stack upgrade.

   For HDP 2.1, use the version of Hue shipped with HDP 2.1. If you have a previous version of Hue, follow the instructions for upgrading Hue in Installing HDP Manually.

   If this is an Ambari-managed cluster, update the Repository Base URLs to use the HDP 2.1.4 repositories for HDP and HDP-UTILS. For Ambari 1.6.0 or earlier, enter:

      ambari-server upgradestack HDP-2.1 http://public-repo-1.hortonworks.com/HDP/redhat6/2.x/updates/2.1.4.0 redhat6

7. Finalize the upgrade.

   If you are not yet ready to discard your backup, you can start the upgraded HDFS without finalizing the upgrade. (At this stage, you can still roll back if need be.)

   Verify your filesystem health. When you are ready to commit to this upgrade (that is, when you are certain you will not want to roll back), discard your backup and finalize the upgrade.
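Step 7 says to verify filesystem health before finalizing. One way to script that gate is to run `hdfs fsck /` and only proceed when the report says HEALTHY. A sketch under the assumption of the standard fsck summary line; the helper name is mine:

```shell
# Succeeds (exit 0) only if an `hdfs fsck` report piped to stdin
# reports a healthy filesystem. fsck prints a summary line such as
# "The filesystem under path '/' is HEALTHY" (or "is CORRUPT").
check_fsck_healthy() {
  grep -q "is HEALTHY"
}

# Usage on the cluster, as the $HDFS_USER:
#   hdfs fsck / | check_fsck_healthy && echo "OK to finalize" \
#     || echo "do NOT finalize -- investigate fsck report first"
```

This is only a convenience wrapper around the fsck output; review the full report before discarding your backup.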
   As the $HDFS_USER, execute the following command:

      hdfs dfsadmin -finalizeUpgrade

RPMs
-----
Alternately, you can download the HDP RPMs and hdp.repo as a single tar file.

RHEL/CentOS/Oracle Linux 6:
http://public-repo-1.hortonworks.com/HDP/centos6/HDP-2.1.4.0-centos6-rpm.tar.gz

Using this tar file, follow the local repo installation instructions provided here:
http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1-latest/bk_reference/content/deployinghdp_appendix_chap4_3_1_2.html

VERSION INFO
-------------
If you installed HDP 2.1 for the first time, you should see all of these versions:

Name      | Release-Version            | Git Repo                                      | Branch Name | Git Hash                                 | RPM Count
----------|----------------------------|-----------------------------------------------|-------------|------------------------------------------|----------
accumulo  | 1.5.1.2.1.4.0-632          | git@github.com:hortonworks/accumulo.git       | 2.1-maint   | 312099477a146a19f367ee944e2f9219e9fcfdff | 12
falcon    | 0.5.0.2.1.4.0-632          | git@github.com:hortonworks/falcon.git         | 2.1-maint   | e2e41d74ee91fd04879c4d2fd057e369fceb32e2 | 4
flume     | 1.4.0.2.1.4.0-632          | git@github.com:hortonworks/flume.git          | 2.1-maint   | 902c2e63136662e97aa93589aeba40ab4a68a232 | 3
hadoop    | 2.4.0.2.1.4.0-632          | git@github.com:hortonworks/hadoop.git         | 2.1-maint   | 5875a08eb52c3c2dc082b48eba7282be7dc8c6c2 | 22
hbase     | 0.98.0.2.1.4.0-632-hadoop2 | git@github.com:hortonworks/hbase.git          | 2.1-maint   | 060e67deb5c42821d78cc23940341f6dd394d43e | 8
hive      | 0.13.0.2.1.4.0-632         | git@github.com:hortonworks/hive.git           | 2.1-maint   | 742cba80e91e48bf0cc4615b7141440efa7b82c0 | 11
hue       | 2.5.0.2.1.4.0-632          | git@github.com:hortonworks/sandbox-shared.git | 2.1-maint   | e8aa0ced957598e4f3964e7de565406566f872db | 10
knox      | 0.4.0.2.1.4.0-632          | git@github.com:hortonworks/knox.git           | 2.1-maint   | 18fb865431ead5003f39d9cb01c4631550d643e1 | 2
mahout    | 0.9.0.2.1.4.0-632          | git@github.com:hortonworks/mahout.git         | 2.1-maint   | 2d684f61204c1c8f185f00f3108d24b4fc6fda57 | 3
oozie     | 4.0.0.2.1.4.0-632          | git@github.com:hortonworks/oozie.git          | 2.1-maint   | e4dd0d82a313092dc6ab63755cdd3be3524b9676 | 3
phoenix   | 4.0.0.2.1.4.0-632          | git@github.com:hortonworks/phoenix.git        | 2.1-maint   | f9931d0c847ce5a51b7dd382df33060340dcb94d | 2
pig       | 0.12.1.2.1.4.0-632         | git@github.com:hortonworks/pig.git            | 2.1-maint   | 0bf3a788866a8ab91c838ad8a034960e3359eb53 | 3
sqoop     | 1.4.4.2.1.4.0-632          | git@github.com:hortonworks/sqoop.git          | 2.1-maint   | d3c37763356e55bbf152053f6db24b1bfe582972 | 3
storm     | 0.9.1.2.1.4.0-632          | git@github.com:hortonworks/storm.git          | 2.1-maint   | e003b13d4ec0cc0266a3f4bc67185ac80ace78ff | 2
tez       | 0.4.0.2.1.4.0-632          | git@github.com:hortonworks/tez.git            | 2.1-maint   | 2b1ae6533de5f0be991b21bf5f576f72356b89b2 | 2
zookeeper | 3.4.5.2.1.4.0-632          | git@github.com:hortonworks/zookeeper.git      | 2.1-maint   | d7de11f6619e1c454af5768739dc2014e671e3a0 | 3

CONTACT
-------
For any questions regarding this patch release, please contact your Support representative directly.

COPYRIGHT
---------
This work by Hortonworks, Inc. is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.