Upgrading HDP Manually

Configure and Start Apache Hive and Apache HCatalog

[Warning]Warning

In HDP 2.1.3 and later, the Decimal data type is now treated as the type Decimal(10,0): 10 digits of precision and 0 scale. This is a change from the variable precision and scale available in Hive 0.11.0 and Hive 0.12.0, which allowed up to 38 digits of precision and unlimited scale.

To avoid unintended rounding of decimal data, sites that were previously running Apache Hive 0.11.0 and Hive 0.12.0 may need to migrate tables with Decimal columns after upgrading to Hive 0.13.0. For details, see the Apache Hive wiki. For assistance with upgrades that involve Decimal data, please contact Hortonworks Support.
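One common approach is to alter each affected column to an explicit precision and scale after the upgrade. This is a sketch only; the table name, column name, and the DECIMAL(38,18) type below are illustrative, so choose a precision and scale that fit your data:

    -- run in the Hive CLI or Beeline after the upgrade
    ALTER TABLE sales CHANGE COLUMN amount amount DECIMAL(38,18);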

  1. Prior to starting the upgrade process, set the following property in your Hive configuration file (hive-site.xml):

    datanucleus.autoCreateSchema=false
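
    A quick way to confirm the current value, assuming your Hive configuration directory is /etc/hive/conf (adjust the path for your installation):

    grep -A 1 "datanucleus.autoCreateSchema" /etc/hive/conf/hive-site.xml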
  2. Stop the Hive Metastore, if you have not done so already.
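
    If the metastore runs as a standalone process started with nohup (as in step 8 below), one way to stop it is to locate and kill that process. This is only a sketch, assuming the metastore JVM command line contains the HiveMetaStore class name; confirm the match with ps before killing anything:

    # list candidate metastore processes first
    ps aux | grep '[H]iveMetaStore'
    # then stop them
    ps aux | grep '[H]iveMetaStore' | awk '{print $2}' | xargs kill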

  3. Upgrade the Hive Metastore database schema by running the upgrade scripts included in HDP for your metastore database, and then running the schematool to upgrade to Hive 0.14:

    MySQL (run the upgrade scripts as user root):

    > cd /usr/hdp/current/hive-metastore/scripts/metastore/upgrade/mysql

    > mysql hive

    > source upgrade-0.11.0-to-0.12.0.mysql.sql

    > source upgrade-0.12.0-to-0.13.0.mysql.sql

    > source upgrade-0.13.0-to-0.14.0.mysql.sql

    Postgres (run the upgrade scripts as user root):

    cd /usr/hdp/current/hive-metastore/scripts/metastore/upgrade/postgres

    psql -d hive -a -f upgrade-0.11.0-to-0.12.0.postgres.sql

    psql -d hive -a -f upgrade-0.12.0-to-0.13.0.postgres.sql

    psql -d hive -a -f upgrade-0.13.0-to-0.14.0.postgres.sql

    Oracle (run the upgrade scripts as user root):

    cd /usr/hdp/2.3.4.0-<version>/hive-metastore/scripts/metastore/upgrade/oracle

    sqlplus

    SQL> @upgrade-0.11.0-to-0.12.0.oracle.sql

    SQL> @upgrade-0.12.0-to-0.13.0.oracle.sql

    SQL> @upgrade-0.13.0-to-0.14.0.oracle.sql

    After the scripts for your database complete, run the schematool to finish the upgrade:

    /usr/hdp/current/hive-metastore/bin/schematool -upgradeSchema -dbType <$databaseType>

    [Note]Note

    If you are using Postgres 8 or Postgres 9, reset the Hive Metastore database owner to the <HIVE_USER>. Open a psql session as the Postgres administrative user and run the ALTER DATABASE statement:

    sudo -u <POSTGRES_USER> psql

    ALTER DATABASE <HIVE-METASTORE-DB-NAME> OWNER TO <HIVE_USER>;
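
    For example, a concrete session, assuming the metastore database is named hive, the Hive service user is hive, and the PostgreSQL administrative account is postgres:

    sudo -u postgres psql

    postgres=# ALTER DATABASE hive OWNER TO hive;
    postgres=# \q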

    [Note]Note

    If you are using Oracle 11, you may see the following error message:

    14/11/17 14:11:38 WARN conf.HiveConf: HiveConf of name hive.optimize.mapjoin.mapreduce does not exist
    14/11/17 14:11:38 WARN conf.HiveConf: HiveConf of name hive.heapsize does not exist
    14/11/17 14:11:38 WARN conf.HiveConf: HiveConf of name hive.server2.enable.impersonation does not exist
    14/11/17 14:11:38 WARN conf.HiveConf: HiveConf of name hive.semantic.analyzer.factory.impl does not exist
    14/11/17 14:11:38 WARN conf.HiveConf: HiveConf of name hive.auto.convert.sortmerge.join.noconditionaltask does not exist
    Metastore connection URL: jdbc:oracle:thin:@//ip-172-31-42-1.ec2.internal:1521/XE
    Metastore Connection Driver : oracle.jdbc.driver.OracleDriver
    Metastore connection User: hiveuser
    Starting upgrade metastore schema from version 0.13.0 to 0.14.0
    Upgrade script upgrade-0.13.0-to-0.14.0.oracle.sql
    Error: ORA-00955: name is already used by an existing object (state=42000,code=955)
    Warning in pre-upgrade script pre-0-upgrade-0.13.0-to-0.14.0.oracle.sql: Schema script failed, errorcode 2
    Completed upgrade-0.13.0-to-0.14.0.oracle.sql
    schemaTool completed

    You can safely ignore this message: the error occurs in the pre-upgrade script and does not affect the metastore upgrade, which schematool completes successfully.
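
    As an illustration, for a MySQL-backed metastore the schematool call and a follow-up schema version check might look like this (typically run as the Hive service user; -dbType also accepts postgres, oracle, and derby):

    /usr/hdp/current/hive-metastore/bin/schematool -upgradeSchema -dbType mysql

    # verify the resulting schema version
    /usr/hdp/current/hive-metastore/bin/schematool -info -dbType mysql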

  4. Edit hive-site.xml to update the value of hive.metadata.export.location to the new, joint hive-hcatalog jar (previously hcatalog-core.jar):

    <property>
     <name>hive.metadata.export.location</name>
     <value>/usr/hdp/2.3.4.0-<version>/hive-hcatalog/share/hcatalog/hive-hcatalog-core.jar</value>
    </property>
  5. Edit hive-env.sh to point to the new hive-hcatalog.jar:

    if [ "${HIVE_AUX_JARS_PATH}" != " " ]; then
    export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-hcatalog/share/hcatalog/hive- hcatalog-core.jar:${HIVE_AUX_JARS_PATH}
    else
    export HIVE_AUX_JARS_PATH=/usr/hdp/current/hive-hcatalog/share/hcatalog/hive- hcatalog-core.jar
    fi
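
    Before restarting Hive, it is worth confirming that the joint jar exists at the configured location. The jar may carry a version suffix depending on your package layout, so a glob is used here:

    ls -l /usr/hdp/current/hive-hcatalog/share/hcatalog/hive-hcatalog-core*.jar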
  6. Edit the hive-site.xml file and modify the properties based on your environment. Search for TODO in the file for the properties to replace.

    • Edit the connection properties for your Hive metastore database in hive-site.xml (a filled-in MySQL example follows this list):

      <property>
       <name>javax.jdo.option.ConnectionURL</name>
       <value>jdbc:mysql://TODO-HIVE-METASTORE-DB-SERVER:TODO-HIVE-METASTORE-DB-PORT/TODO-HIVE-METASTORE-DB-NAME?createDatabaseIfNotExist=true</value>
       <description>Enter your Hive Metastore Connection URL, 
         for example if MySQL: 
         jdbc:mysql://localhost:3306/mysql?createDatabaseIfNotExist=true
       </description>
      </property>
      
      <property>
       <name>javax.jdo.option.ConnectionUserName</name>
       <value>TODO-HIVE-METASTORE-DB-USER-NAME</value>
       <description>Enter your Hive Metastore database user name.</description>
      </property>
      
      <property>
       <name>javax.jdo.option.ConnectionPassword</name>
       <value>TODO-HIVE-METASTORE-DB-PASSWORD</value>
       <description>Enter your Hive Metastore database password.</description>
      </property>
      
      <property>
       <name>javax.jdo.option.ConnectionDriverName</name>
       <value>TODO-HIVE-METASTORE-DB-CONNECTION-DRIVER-NAME</value>
       <description>Enter your Hive Metastore Connection Driver Name, for example if 
       MySQL: com.mysql.jdbc.Driver</description>
      </property>
    • Edit the following properties in the hive-site.xml file:

      <property>
       <name>fs.file.impl.disable.cache</name>
       <value>false</value>
       <description>Set to false or remove fs.file.impl.disable.cache</description> 
      </property>
       
      <property>
       <name>fs.hdfs.impl.disable.cache</name>
       <value>false</value>
       <description>Set to false or remove fs.hdfs.impl.disable.cache</description>
      </property>
    • Optional: If you want Hive Authorization, set the following Hive authorization parameters in the hive-site.xml file:

      <property>
       <name>hive.security.authorization.enabled</name>
       <value>true</value>
      </property>
      
      <property>
       <name>hive.security.authorization.manager</name>
       <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value>
      </property>
      
      <property>
       <name>hive.security.metastore.authorization.manager</name>
       <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value>
      </property>
      
      <property>
       <name>hive.security.authenticator.manager</name>
       <value>org.apache.hadoop.hive.ql.security.ProxyUserAuthenticator</value>
      </property>
    • For a remote Hive metastore database, use the following hive-site.xml property value to set the IP address (or fully-qualified domain name) and port of the metastore host.

      To enable HiveServer2, leave this property value empty.

      <property>
        <name>hive.metastore.uris</name>
        <value>thrift://$metastore.server.full.hostname:9083</value>
        <description>URI for client to contact metastore server. 
          To enable HiveServer2, leave the property value empty. 
        </description>
      </property>
      [Note]Note

      You can also use the HDP utility script to fine-tune your configuration settings based on node hardware specifications.
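
    As an illustration only, the connection properties for a MySQL-backed metastore might be filled in as follows. The host name, database name, user name, and password are placeholders, not values from your cluster:

    <property>
     <name>javax.jdo.option.ConnectionURL</name>
     <value>jdbc:mysql://metastore-db.example.com:3306/hive?createDatabaseIfNotExist=true</value>
    </property>

    <property>
     <name>javax.jdo.option.ConnectionUserName</name>
     <value>hive</value>
    </property>

    <property>
     <name>javax.jdo.option.ConnectionPassword</name>
     <value>hivepassword</value>
    </property>

    <property>
     <name>javax.jdo.option.ConnectionDriverName</name>
     <value>com.mysql.jdbc.Driver</value>
    </property>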

  7. Disable autocreation of schemas to match HDP 2.1+ configurations. Edit hive-site.xml to set the value of datanucleus.autoCreateSchema to false.

    <property>
     <name>datanucleus.autoCreateSchema</name>
     <value>false</value>
     <description>Creates necessary schema on startup if one does not exist. Set this to false after the schema has been created.</description>
    </property>
  8. Start Hive. On the Hive Metastore host machine, run the following commands:

    su - $HIVE_USER

    nohup /usr/hdp/current/hive-metastore/bin/hive --service metastore > /var/log/hive/hive.out 2>/var/log/hive/hive.log &
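
    To confirm the metastore came up, check its log output and verify that it is listening on the Thrift port (9083 by default, matching hive.metastore.uris). The netstat command assumes the net-tools package is installed; ss -tlnp is an alternative:

    tail -n 50 /var/log/hive/hive.log

    netstat -tlnp | grep 9083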

  9. Start HiveServer2.

    On the HiveServer2 host machine, run the following commands:

    su - $HIVE_USER

    /usr/hdp/current/hive-server2/bin/hiveserver2 > /var/log/hive/hiveserver2.out 2>/var/log/hive/hiveserver2.log &

    where $HIVE_USER is the Hive Service user. For example, hive.
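
    As a quick smoke test, you can connect to HiveServer2 with Beeline and run a trivial query. This assumes HiveServer2 listens on its default port 10000 on the local host with no Kerberos or SSL options; adjust the JDBC URL for your security configuration:

    beeline -u "jdbc:hive2://localhost:10000" -n hive -e "SHOW DATABASES;"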