Data Access

Troubleshooting

The following is only a small, partial list of issues and recommended solutions.

  • Hive transaction queries on an Oracle database fail with the error org.apache.hadoop.hive.ql.lockmgr.LockException: No record of lock could be found, may have timed out

    This error can be caused by a bug in the BoneCP connection pooling library. In this case, Hortonworks recommends that you set the datanucleus.connectionPoolingType property to dbcp so the DBCP library is used.
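    The property named above is set in hive-site.xml (or through Ambari). A minimal configuration fragment, using the standard Hadoop XML property format, might look like this:

    ```xml
    <!-- hive-site.xml: switch the metastore's connection pool from BoneCP to DBCP -->
    <property>
      <name>datanucleus.connectionPoolingType</name>
      <value>dbcp</value>
    </property>
    ```

    After changing the property, restart the Hive Metastore service so the new pool type takes effect.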

  • Error related to character set used for MySQL: "Specified key was too long; max key length is 767 bytes"

    MySQL is the default database used by the Hive metastore. Depending on several factors, such as the version and configuration of MySQL, Hive developers may encounter an error message similar to the following:

    An exception was thrown while adding/validating classes) : Specified key was too long; max key length is 767 bytes 

    Administrators can resolve this issue by altering the Hive metastore database to use the Latin-1 character set, as shown in the following example:

    mysql> ALTER DATABASE <metastore_database_name> character set latin1; 
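    To confirm the change took effect, you can query the database's default character set through the standard MySQL information_schema tables (substitute your actual metastore database name):

    ```sql
    -- Verify the metastore database's default character set (should report latin1)
    SELECT default_character_set_name
    FROM information_schema.SCHEMATA
    WHERE schema_name = '<metastore_database_name>';
    ```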
  • Limitations when using the timestamp.formats SerDe parameter

    The timestamp.formats SerDe parameter, introduced in HDP 2.3, produces the following behaviors:

    • Displays only 3 fractional digits when it returns values, although it accepts input with more fractional digits.

      For example, if you run the following commands:

      drop table if exists src_hbase_ts;
      
      create table src_hbase_ts( rowkey string, ts1 string, ts2 string, ts3 string, ts4 string ) 
      STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES 
      ('hbase.columns.mapping' = 'm:ts1,m:ts2,m:ts3,m:ts4') TBLPROPERTIES ('hbase.table.name' = 'hbase_ts');
      
      insert into src_hbase_ts values ('1','2011-01-01T01:01:01.111111111', '2011-01-01T01:01:01.123456111', 
      '2011-01-01T01:01:01.111111111', '2011-01-01T01:01:01.134567890');
      
      drop table if exists hbase_ts_1;
      
      create external table hbase_ts_1( rowkey string, ts1 timestamp, ts2 timestamp, ts3 timestamp, ts4 timestamp ) 
      STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES 
      ( 'hbase.columns.mapping' = 'm:ts1,m:ts2,m:ts3,m:ts4', 'timestamp.formats' = "yyyy-MM-dd'T'HH:mm:ss.SSSSSSSSS") 
      TBLPROPERTIES ('hbase.table.name' = 'hbase_ts');
      
      select * from hbase_ts_1;

      The actual output is:

      1 2011-01-01 01:01:01.111 2011-01-01 01:01:01.123 2011-01-01 01:01:01.111 2011-01-01 01:01:01.134

      The expected output is:

      1 2011-01-01 01:01:01.111111111 2011-01-01 01:01:01.123456111 2011-01-01 01:01:01.111111111 2011-01-01 01:01:01.134567890
    • The yyyy-MM-dd'T'HH:mm:ss.SSSSSSSSS format accepts timestamp data with any number of fractional digits up to 9 (places to the right of the decimal point), instead of accepting only data with exactly 9 fractional digits.

      For example, if you run the following commands:

      drop table if exists src_hbase_ts;
      
      create table src_hbase_ts( rowkey string, ts1 string, ts2 string, ts3 string, ts4 string ) 
      STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES 
      ('hbase.columns.mapping' = 'm:ts1,m:ts2,m:ts3,m:ts4') TBLPROPERTIES ('hbase.table.name' = 'hbase_ts');
      
      insert into src_hbase_ts values ('1','2011-01-01T01:01:01.111111111', '2011-01-01T01:01:01.111', 
      '2011-01-01T01:01:01.11', '2011-01-01T01:01:01.1');
      
      drop table if exists hbase_ts_1;
      
      create external table hbase_ts_1( rowkey string, ts1 timestamp, ts2 timestamp, ts3 timestamp, ts4 timestamp ) 
      STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES 
      ( 'hbase.columns.mapping' = 'm:ts1,m:ts2,m:ts3,m:ts4', 'timestamp.formats' = "yyyy-MM-dd'T'HH:mm:ss.SSSSSSSSS") 
      TBLPROPERTIES ('hbase.table.name' = 'hbase_ts');
      
      select * from hbase_ts_1;

      The actual output is:

      1 2011-01-01 01:01:01.111 2011-01-01 01:01:01.111 2011-01-01 01:01:01.11 2011-01-01 01:01:01.1

      The expected output is:

      1 2011-01-01 01:01:01.111 NULL NULL NULL
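    If full fractional-second precision must be preserved despite this limitation, one possible workaround (a sketch, not a documented fix; hbase_ts_str is an illustrative name) is to map the HBase columns as STRING rather than TIMESTAMP, so the stored values pass through unconverted and consumers parse them as needed:

    ```sql
    -- Hypothetical workaround: declare the columns as STRING so the SerDe does not
    -- convert (and truncate) the timestamp values stored in HBase.
    create external table hbase_ts_str( rowkey string, ts1 string, ts2 string, ts3 string, ts4 string )
    STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES
    ('hbase.columns.mapping' = 'm:ts1,m:ts2,m:ts3,m:ts4')
    TBLPROPERTIES ('hbase.table.name' = 'hbase_ts');
    ```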
  • DROP TABLE and DROP PARTITION do not update table content

    When HDFS is encrypted and the Hadoop trash feature is enabled, the DROP TABLE and DROP PARTITION commands might not remove the table data. In this case, creating a table with the same name as before results in a table containing the old data.

    When trash is enabled, the data files for a dropped table are normally moved to the trash directory. If the table is inside an encryption zone, this move operation is not allowed. For more information, see HDFS "Data at Rest" Encryption.

    To work around this, use the PURGE clause of the DROP TABLE statement, as shown in the following example.

    drop table if exists hbase_ts_1 PURGE;
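    Hive also provides an auto.purge table property that makes subsequent DROP TABLE statements on that table bypass the trash without requiring the PURGE clause each time; support for this property depends on the Hive version shipped with your HDP release, so verify it before relying on it:

    ```sql
    -- Sketch: mark the table so future DROP TABLE statements skip the trash.
    -- (auto.purge availability varies by Hive version; check your release notes.)
    ALTER TABLE hbase_ts_1 SET TBLPROPERTIES ('auto.purge' = 'true');
    ```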