2.4.8.  Upgrade Hive and WebHCat

  1. Upgrade the Hive Metastore database schema. Before running the upgrade script, restart the Hive Metastore database server (MySQL, Oracle, or Postgres); do not restart the Hive Metastore process.

    On each HiveServer2 host, run:

    su -l <HIVE_USER> -c "export HIVE_CONF_DIR=/etc/hive/conf.server/; /usr/hdp/current/hive-metastore/bin/schematool -upgradeSchema -dbType <DATABASE_TYPE>"

    where <DATABASE_TYPE> is mysql, oracle, or postgres, and <HIVE_USER> is the Hive Service user (for example, hive).

    Note

    If you are using Postgres 8 or Postgres 9, you should reset the Hive Metastore database owner to <HIVE_USER>:

    psql -U <POSTGRES_USER> -c "ALTER DATABASE <HIVE-METASTORE-DB-NAME> OWNER TO <HIVE_USER>"
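    For example, with MySQL as the Metastore database and hive as the service user, the upgrade might look like the following. This is a sketch assuming a default HDP layout; the optional -dryRun pass prints the SQL that would run without applying it:

    su -l hive -c "export HIVE_CONF_DIR=/etc/hive/conf.server/; /usr/hdp/current/hive-metastore/bin/schematool -upgradeSchema -dbType mysql -dryRun"
    su -l hive -c "export HIVE_CONF_DIR=/etc/hive/conf.server/; /usr/hdp/current/hive-metastore/bin/schematool -upgradeSchema -dbType mysql"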

  2. Save the old Hive configuration and add a symlink from /etc/hive/conf to the current Hive configuration directory:

    mv /etc/hive/conf /etc/hive/conf.saved

    mv /etc/hive/conf.server /etc/hive/conf.server.saved

    ln -s /usr/hdp/current/hive-client/conf /etc/hive/conf

    ls -la /etc/hive
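    As a quick sanity check, confirm that the symlink resolves where you expect (readlink is standard on Linux hosts):

    readlink /etc/hive/conf

    This should print /usr/hdp/current/hive-client/conf.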

  3. If you use Tez as the Hive execution engine, and if the variable hive.server2.enable.doAs is set to true, you must create a scratch directory in HDFS for the user that runs the HiveServer2 service:

    su -l <HDFS_USER> -c "hdfs dfs -mkdir /tmp/hive-<HIVE_USER>"

    su -l <HDFS_USER> -c "hdfs dfs -chmod 777 /tmp/hive-<HIVE_USER>"

    where <HIVE_USER> is the Hive Service user (for example, hive) and <HDFS_USER> is the HDFS Service user (for example, hdfs).
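    With the default service accounts (hive and hdfs), the commands would read:

    su -l hdfs -c "hdfs dfs -mkdir /tmp/hive-hive"
    su -l hdfs -c "hdfs dfs -chmod 777 /tmp/hive-hive"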

  4. From Ambari Web, browse to Services > Hive > Configs. On the Advanced tab, add the following properties to Advanced hive-site if they do not already exist on the cluster:

    Name:  hive.cluster.delegation.token.store.zookeeper.connectString
    Value: The ZooKeeper token store connect string. For example:
           ZooKeeperHost:2181

    Name:  hive.zookeeper.quorum
    Value: The comma-separated list of ZooKeeper hosts to talk to. For example:
           ZooKeeperHost1:2181,ZooKeeperHost2:2181
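    Before saving, you can verify that each ZooKeeper host in the quorum is reachable. One quick check (assuming nc is installed; ZooKeeper's four-letter-word commands such as ruok are enabled by default in the ZooKeeper version shipped with HDP 2.3):

    echo ruok | nc ZooKeeperHost1 2181

    A healthy ZooKeeper server replies with imok.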

  5. For WebHCat, upload the new Pig, Hive, and Sqoop tarballs to HDFS. Run the following commands from a node that has the HDP clients installed:

    su -l <HDFS_USER> -c "hdfs dfs -mkdir -p /hdp/apps/2.3.x.y-z/pig/"
    su -l <HDFS_USER> -c "hdfs dfs -mkdir -p /hdp/apps/2.3.x.y-z/hive/"
    su -l <HDFS_USER> -c "hdfs dfs -mkdir -p /hdp/apps/2.3.x.y-z/sqoop/"
    su -l <HDFS_USER> -c "hdfs dfs -put /usr/hdp/current/pig-client/pig.tar.gz /hdp/apps/2.3.x.y-z/pig/"
    su -l <HDFS_USER> -c "hdfs dfs -put /usr/hdp/current/hive-client/hive.tar.gz /hdp/apps/2.3.x.y-z/hive/"
    su -l <HDFS_USER> -c "hdfs dfs -put /usr/hdp/current/sqoop-client/sqoop.tar.gz /hdp/apps/2.3.x.y-z/sqoop/"
    su -l <HDFS_USER> -c "hdfs dfs -chmod -R 555 /hdp/apps/2.3.x.y-z/pig"
    su -l <HDFS_USER> -c "hdfs dfs -chmod -R 444 /hdp/apps/2.3.x.y-z/pig/pig.tar.gz"
    su -l <HDFS_USER> -c "hdfs dfs -chmod -R 555 /hdp/apps/2.3.x.y-z/hive"
    su -l <HDFS_USER> -c "hdfs dfs -chmod -R 444 /hdp/apps/2.3.x.y-z/hive/hive.tar.gz"
    su -l <HDFS_USER> -c "hdfs dfs -chmod -R 555 /hdp/apps/2.3.x.y-z/sqoop"
    su -l <HDFS_USER> -c "hdfs dfs -chmod -R 444 /hdp/apps/2.3.x.y-z/sqoop/sqoop.tar.gz"
    su -l <HDFS_USER> -c "hdfs dfs -chown -R <HDFS_USER>:<HADOOP_GROUP> /hdp"     

    where <HDFS_USER> is the HDFS Service user (for example, hdfs) and <HADOOP_GROUP> is the Hadoop group (for example, hadoop).
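    To verify that the tarballs landed with the intended permissions, list the upload directory recursively (2.3.x.y-z is the same version placeholder used above):

    su -l <HDFS_USER> -c "hdfs dfs -ls -R /hdp/apps/2.3.x.y-z/"

    Each tarball should show mode r--r--r-- and each directory r-xr-xr-x.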

  6. In Ambari Web, browse to Services > Hive and start Hive.
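    If you prefer to script this step, the same start action can be issued through the Ambari REST API. This is a sketch; <AMBARI_HOST>, <CLUSTER_NAME>, and the admin credentials are placeholders for your environment:

    curl -u admin:<PASSWORD> -H "X-Requested-By: ambari" -X PUT \
      -d '{"RequestInfo":{"context":"Start Hive"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
      "http://<AMBARI_HOST>:8080/api/v1/clusters/<CLUSTER_NAME>/services/HIVE"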

  7. After Hive has started, select Run Service Check from the Service Actions menu. Confirm the check passes.
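    In addition to the built-in service check, you can run a quick manual smoke test from any client node. This sketch assumes an unsecured cluster and HiveServer2 listening on its default port 10000; adjust the JDBC URL for your environment:

    beeline -u "jdbc:hive2://<HIVESERVER2_HOST>:10000" -e "SHOW DATABASES;"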

