17. Configure and Start Apache WebHCat

  1. You must replace your configuration after upgrading. Copy /etc/webhcat/conf from the template to the conf directory on the WebHCat hosts.
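
    For example, a minimal sketch of that copy, assuming the upgrade left a template configuration directory at /etc/webhcat/conf.dist (a hypothetical path; use the template location from your installation):

    # Back up the existing configuration, then copy the template into place.
    # /etc/webhcat/conf.dist is an assumed template path; adjust for your environment.
    cp -r /etc/webhcat/conf /etc/webhcat/conf.backup
    cp -r /etc/webhcat/conf.dist/* /etc/webhcat/conf/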

  2. Modify the Apache WebHCat configuration files.

    1. Upload the Pig, Hive, and Sqoop tarballs to HDFS as the $HDFS_USER (in this example, hdfs):

      hdfs dfs -mkdir -p /hdp/apps/2.2.8.0-<$version>/pig/
      hdfs dfs -mkdir -p /hdp/apps/2.2.8.0-<$version>/hive/
      hdfs dfs -mkdir -p /hdp/apps/2.2.8.0-<$version>/sqoop/
      hdfs dfs -put /usr/hdp/2.2.8.0-<$version>/pig/pig.tar.gz /hdp/apps/2.2.8.0-<$version>/pig/
      hdfs dfs -put /usr/hdp/2.2.8.0-<$version>/hive/hive.tar.gz /hdp/apps/2.2.8.0-<$version>/hive/
      hdfs dfs -put /usr/hdp/2.2.8.0-<$version>/sqoop/sqoop.tar.gz /hdp/apps/2.2.8.0-<$version>/sqoop/
      hdfs dfs -chmod -R 555 /hdp/apps/2.2.8.0-<$version>/pig
      hdfs dfs -chmod -R 444 /hdp/apps/2.2.8.0-<$version>/pig/pig.tar.gz
      hdfs dfs -chmod -R 555 /hdp/apps/2.2.8.0-<$version>/hive
      hdfs dfs -chmod -R 444 /hdp/apps/2.2.8.0-<$version>/hive/hive.tar.gz
      hdfs dfs -chmod -R 555 /hdp/apps/2.2.8.0-<$version>/sqoop
      hdfs dfs -chmod -R 444 /hdp/apps/2.2.8.0-<$version>/sqoop/sqoop.tar.gz
      hdfs dfs -chown -R hdfs:hadoop /hdp
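
      To confirm that the uploads and permissions took effect, you can list the new directories (an optional check, not part of the original procedure):

      # Verify that each tarball landed in HDFS with the expected permissions.
      hdfs dfs -ls -R /hdp/apps/2.2.8.0-<$version>/pig /hdp/apps/2.2.8.0-<$version>/hive /hdp/apps/2.2.8.0-<$version>/sqoop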
    2. Update the following properties in the webhcat-site.xml configuration file, as their values have changed:

      <property>
       <name>templeton.pig.archive</name>
       <value>hdfs:///hdp/apps/${hdp.version}/pig/pig.tar.gz</value>
      </property>
       
      <property>
       <name>templeton.hive.archive</name>
       <value>hdfs:///hdp/apps/${hdp.version}/hive/hive.tar.gz</value>
      </property>
       
      <property>
       <name>templeton.streaming.jar</name>
       <value>hdfs:///hdp/apps/${hdp.version}/mapreduce/hadoop-streaming.jar</value>
       <description>The hdfs path to the Hadoop streaming jar file.</description>
      </property>
       
      <property>
       <name>templeton.sqoop.archive</name>
       <value>hdfs:///hdp/apps/${hdp.version}/sqoop/sqoop.tar.gz</value>
       <description>The path to the Sqoop archive.</description>
      </property>
       
      <property>
       <name>templeton.sqoop.path</name>
       <value>sqoop.tar.gz/sqoop/bin/sqoop</value>
       <description>The path to the Sqoop executable.</description>
      </property>
       
      <property>
       <name>templeton.sqoop.home</name>
       <value>sqoop.tar.gz/sqoop</value>
       <description>The path to the Sqoop home in the exploded archive.</description>
      </property>
      Note: You do not need to modify ${hdp.version}.
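
      The templeton.sqoop.path and templeton.sqoop.home values are paths inside the exploded archive, so the archive's top-level directory must be named sqoop. If you want to confirm that the tarball layout matches, you can list its contents on a local host (an optional check against the archive uploaded earlier):

      # The entries should start with "sqoop/", matching the "sqoop.tar.gz/sqoop"
      # prefix used in templeton.sqoop.path and templeton.sqoop.home.
      tar -tzf /usr/hdp/2.2.8.0-<$version>/sqoop/sqoop.tar.gz | head -n 5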

    3. Remove the following obsolete properties from webhcat-site.xml:

      <property>
       <name>templeton.controller.map.mem</name>
       <value>1600</value>
       <description>Total virtual memory available to map tasks.</description>
      </property>
      
      <property>
       <name>hive.metastore.warehouse.dir</name>
       <value>/path/to/warehouse/dir</value>
      </property> 
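
      After editing, you can verify that neither obsolete property remains (an optional check):

      # Should produce no output once the obsolete properties have been removed.
      grep -n -e 'templeton.controller.map.mem' -e 'hive.metastore.warehouse.dir' /etc/webhcat/conf/webhcat-site.xml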
    4. Add new proxy users, if needed. In core-site.xml, make sure the following properties are also set to allow WebHCat to impersonate your additional HDP 2.2 groups and hosts:

      <property>
       <name>hadoop.proxyuser.hcat.groups</name>
       <value>*</value>
      </property> 
       
      <property>
       <name>hadoop.proxyuser.hcat.hosts</name>
       <value>*</value>
      </property> 

      Where:

      hadoop.proxyuser.hcat.groups

      A comma-separated list of the Unix groups whose users may be impersonated by 'hcat'.

      hadoop.proxyuser.hcat.hosts

      A comma-separated list of the hosts from which 'hcat' is allowed to submit requests.
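
      If you change these proxy-user properties on a running cluster, the NameNode and ResourceManager must pick up the new values before WebHCat can impersonate users. A sketch of how this is commonly done without a full restart (assuming the refresh commands are available in your cluster; otherwise restart the affected daemons):

      # Reload superuser/proxyuser settings from core-site.xml on HDFS and YARN.
      hdfs dfsadmin -refreshSuperUserGroupsConfiguration
      yarn rmadmin -refreshSuperUserGroupsConfiguration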

  3. Start WebHCat:

    sudo su -l $WEBHCAT_USER -c "/usr/hdp/current/hive-hcatalog/sbin/webhcat_server.sh start"
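
    To confirm that the server process came up cleanly before running the smoke test, you can check for the process and scan its log (the log path below is an assumed HDP default and may differ in your environment):

    # Check that the WebHCat (Templeton) server process is running,
    # then look for startup errors in its log (assumed default location).
    ps -ef | grep [w]ebhcat
    tail -n 50 /var/log/webhcat/webhcat.log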

  4. Smoke test WebHCat.

    1. At the WebHCat host machine, check the server status by requesting the following URL (for example, with curl):

      curl http://$WEBHCAT_HOST_MACHINE:50111/templeton/v1/status

    2. If you are using a secure cluster, run the following command instead:

      curl --negotiate -u: http://cluster.$PRINCIPAL.$REALM:50111/templeton/v1/status

      The expected response is:

      {"status":"ok","version":"v1"}
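
      Beyond the status check, you can optionally exercise the service end to end by submitting a small Hive query through the WebHCat REST API (a sketch only; $WEBHCAT_USER and the statusdir path are placeholders to adjust for your cluster, and on a secure cluster add --negotiate -u: as above):

      # Submit "show databases;" as a WebHCat job; results and logs are written
      # to the statusdir location in HDFS.
      curl -s -d execute="show databases;" \
           -d statusdir="/tmp/webhcat.smoke" \
           "http://$WEBHCAT_HOST_MACHINE:50111/templeton/v1/hive?user.name=$WEBHCAT_USER"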

