Hortonworks Data Platform

Known Issues

Summary of known issues for this release.

Hortonworks Bug ID Apache JIRA Apache component Summary
RMP-11408 ZEPPELIN-2170 Zeppelin

Description of the problem or behavior

Zeppelin does not show all WARN messages thrown by spark-shell at the Zeppelin notebook level.

Workaround

There is currently no workaround for this.

BUG-106917 N/A Sqoop

Description of the problem or behavior

In HDP 3, managed Hive tables must be transactional (hive.strict.managed.tables=true). Hive does not support transactional tables in Parquet format. Hive imports with --as-parquetfile must therefore use external tables, specified with --external-table-dir.

Associated error message

Table db.table failed strict managed table checks due to the following reason: Table is marked as a managed table but is not transactional. 

Workaround

When using --hive-import with --as-parquetfile, users must also provide --external-table-dir with a fully qualified location of the table:
sqoop import ... --hive-import --as-parquetfile --external-table-dir hdfs:///path/to/table
BUG-106494 N/A Documentation, Hive

Description of Problem

When you partition a Hive table on a column of type double and the column value is 0.0, the partition directory is created as "0", and an IndexOutOfBoundsException occurs.

Associated error message

2018-06-28T22:43:55,498 ERROR [441773a0-851c-4b25-9e47-729183946a26 main] exec.StatsTask: Failed to run stats task
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IndexOutOfBoundsException: Index: 8, Size: 8
    at org.apache.hadoop.hive.ql.metadata.Hive.setPartitionColumnStatistics(Hive.java:4395) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
    at org.apache.hadoop.hive.ql.stats.ColStatsProcessor.persistColumnStats(ColStatsProcessor.java:179) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
    at org.apache.hadoop.hive.ql.stats.ColStatsProcessor.process(ColStatsProcessor.java:83) ~[hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
    at org.apache.hadoop.hive.ql.exec.StatsTask.execute(StatsTask.java:108) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2689) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2341) [hive-exec-4.0.0-SNAPSHOT.jar:4.0.0-SNAPSHOT]
    at org.apache.hadoop.hive.ql.Driver.run

Workaround

Do not partition tables on columns of type double.
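
As a hedged illustration of this workaround, the following sketch partitions on a string column instead of a double; the table and column names are made up for this example and should be adapted to your data:

-- Hypothetical sketch: partition on a STRING column rather than a DOUBLE.
-- Table and column names are illustrative only.
CREATE TABLE readings (sensor_id INT, reading DOUBLE)
PARTITIONED BY (reading_bucket STRING);

INSERT INTO TABLE readings PARTITION (reading_bucket = '0.0')
VALUES (1, 0.0);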

BUG-106379 N/A Documentation, Hive

Description of the Problem

The upgrade process fails to perform necessary compaction of ACID tables and can cause permanent data loss.

Workaround

If you have ACID tables in your Hive metastore, enable ACID operations in Ambari or set the Hive configuration properties that enable ACID before you upgrade. If ACID operations are disabled, the upgrade process does not convert ACID tables, which causes permanent loss of data; you cannot recover the data in your ACID tables later.

BUG-106286 N/A Documentation, Hive

Description of the Problem

The upgrade process might fail to make a backup of the Hive metastore. This backup is critically important.

Workaround

Manually back up your Hive metastore database before upgrading. Making a backup is especially important if you did not use Ambari to install Hive and create the metastore database, because Ambari might not have the necessary permissions to perform the backup automatically; however, a manual backup is highly recommended in all cases. The upgrade can succeed even if the automatic backup fails, so having your own backup is critically important.

BUG-106266 OOZIE-3156 Oozie

Description of the problem or behavior

When the check() method of SshActionExecutor is invoked, Oozie executes the command "ssh <host-ip> ps -p <pid>" to determine whether the SSH action has completed. If the connection to the host fails during the status check, the command returns an error code, but the action status is reported as OK, which may not be correct.

Associated error message

Ssh command exits with the exit status of the remote command or with 255 if an error occurred

Workaround

Retrying the connection solves the problem.

BUG-102672 N/A Sqoop

Description of the problem or behavior

In HDP 3, managed Hive tables must be transactional (hive.strict.managed.tables=true). Hive does not support writing to transactional tables through HCatalog. This leads to errors during HCatalog Sqoop imports if the specified Hive table does not exist or is not an external table.

Associated error message

Store into a transactional table db.table from Pig/Mapreduce is not supported

Workaround

Before running an HCatalog import with Sqoop, create the external table in Hive; the --create-hcatalog-table option does not support creating external tables.
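
The following is a minimal sketch of pre-creating such an external table; the database, table, columns, storage format, and HDFS location are placeholders and must match the data you are importing:

-- Hypothetical sketch: pre-create the external table that the Sqoop
-- HCatalog import writes into. All names, columns, and the location
-- below are placeholders.
CREATE EXTERNAL TABLE db.my_import_table (
  id INT,
  name STRING
)
STORED AS ORC
LOCATION 'hdfs:///path/to/table';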

BUG-101227 N/A Spark

Description of the problem or behavior

When Spark Thriftserver has to run several queries concurrently, some of them can fail with a timeout exception when performing broadcast join.

Associated error message

Caused by: java.util.concurrent.TimeoutException: Futures timed out after [300 seconds]
	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
	at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
	at scala.concurrent.Await$.result(package.scala:107)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoin.doExecute(BroadcastHashJoin.scala:107)

Workaround

You can resolve this issue by increasing the spark.sql.broadcastTimeout value.
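
For example, a session-level sketch follows; the value 600 is illustrative (the default is 300 seconds, as the error message shows), and the property can also be set in the Spark Thriftserver configuration through Ambari:

-- Illustrative only: raise the broadcast join timeout for the current
-- session from the default 300 seconds to 600 seconds.
SET spark.sql.broadcastTimeout=600;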

BUG-101082 N/A Documentation, Hive

Description of the problem or behavior

When running Beeline in batch mode, queries killed by the Workload Management process can on rare occasions mistakenly return success on the command line.

Workaround

There is currently no workaround.

BUG-100187 SPARK-23942 Spark

Description of the problem or behavior

In Spark, users can register a QueryExecutionListener to add callbacks for query executions, for example for actions such as collect, foreach, and show on a DataFrame. You can use spark.session().listenerManager().register(...) or the spark.sql.queryExecutionListeners configuration to set the query execution listener. This works in the other API languages as well; however, because of a bug in the Python collect API, the callback was not being called. With the upstream fix (SPARK-23942), the callback is called correctly on the Spark side.

Workaround

The workaround is to manually call the callbacks right after collect in the Python API, wrapped in a try-catch.

BUG-98628 HBASE-20530 HBase

Description of the problem or behavior

When running the restore of an incremental backup, the restore task may fail with the error "java.io.IOException: No input paths specified in job". This only happens intermittently.

Workaround

Because the exact cause of the error is unknown, there is no known workaround. Re-running the restore task may or may not succeed.

BUG-96402 HIVE-18687 Hive

Description of the problem or behavior

When HiveServer2 is running in HA (high-availability) mode in HDP 3.0.0, resource plans are loaded in memory by all HiveServer2 instances. If a client makes changes to a resource plan, the changes are reflected (pushed) only to the HiveServer2 instance to which the client is connected.

Workaround

For resource plan changes to be reflected on all HiveServer2 instances, all instances must be restarted so that they reload the resource plan from the metastore.

BUG-95909 RANGER-1960 Ranger

Description of problem or behavior

The delete snapshot operation fails even if the user has Administrator privileges, because the namespace is not considered in the authorization flow of the HBase Ranger plugin.

Associated error message

ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions for user '<username>' (action=admin)

Workaround

For the delete snapshot operation to succeed, you need system-wide Administrator privileges.

BUG-91996 LIVY-299 Livy, Zeppelin

Description of the problem or behavior

The Livy Spark interpreter prints only the output of the last line of code. For example, if you submit:

print(10)
print(11)

Only "11" will be printed out, the output of first line "10" will be ignored.

Workaround

If you want to see the output of a particular line, then it must be the last line in the code block.

BUG-91364 AMBARI-22506 Zeppelin

Description of the problem or behavior

Pie charts in Zeppelin do not display the correct distribution for the provided data. This occurs when the data contains a comma (","), that is, when number formatting is applied to the data.

Workaround

Add a manual configuration setting in Zeppelin's JDBC interpreter settings: add "phoenix.phoenix.query.numberFormat" with the value "#.#".

BUG-89714 N/A Ranger

Description of the problem or behavior

Sudden increase in Login Session audit events from Ranger Usersync and Ranger Tagsync.

Workaround

If the policy store database size increases suddenly, take a backup of the policy DB store and periodically purge the 'x_auth_sess' table from the Ranger DB schema.
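
A minimal sketch of the purge step, run against the Ranger policy database only after the backup described above has completed:

-- Purge the login-session audit rows from the Ranger policy DB schema.
-- Run this only after backing up the policy database.
DELETE FROM x_auth_sess;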

BUG-88614 N/A Hive

Description of the problem or behavior

The RDBMS schema for the Hive metastore contains an index HL_TXNID_INDEX defined as:

CREATE INDEX HL_TXNID_INDEX ON HIVE_LOCKS USING hash (HL_TXNID);

Hash indexes are not recommended by PostgreSQL. For more information, see https://www.postgresql.org/docs/9.4/static/indexes-types.html

Workaround

It is recommended that you change this index to type BTREE.
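
A sketch of the change for a PostgreSQL-backed metastore follows; verify the index, table, and column names against your metastore schema before running it:

-- Recreate HL_TXNID_INDEX as a BTREE index (PostgreSQL).
DROP INDEX HL_TXNID_INDEX;
CREATE INDEX HL_TXNID_INDEX ON HIVE_LOCKS USING btree (HL_TXNID);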

BUG-79238 N/A Documentation, HBase, HDFS, Hive, MapReduce, Zookeeper

Description of the problem or behavior

SSL is deprecated and its use in production is not recommended. Use TLS.

Workaround

In Ambari: Use ssl.enabled.protocols=TLSv1|TLSv1.1|TLSv1.2 and security.server.disabled.protocols=SSL|SSLv2|SSLv3. For help configuring TLS for other components, contact customer support. Documentation will be provided in a future release.

BUG-60904 KNOX-823 Knox

Description of the problem or behavior

When Ambari is proxied by Apache Knox, the QuickLinks are not rewritten to go back through the gateway. If all access to Ambari in the deployment is through Knox, the new Ambari QuickLink profile may be used to hide and/or change URLs to go through Knox permanently. A future release will make these links reflect the gateway appropriately.

Workaround

There is currently no workaround.

BUG-101836 HIVE-19416, HIVE-19820 Hive

Description of the problem or behavior

Statistics-based optimizations for metadata-only queries, such as count, count(distinct <partcol>), do not currently work for managed tables.

Workaround

There is currently no workaround.

BUG-103495 N/A HBase

Description of the problem or behavior

Because region assignment has been refactored in HBase, there are unresolved issues that may affect the stability of the RegionServer Groups feature. If you rely on this feature, you are recommended to wait for a future HDP 3.x release, which will restore the stability that this feature had in HBase 1.x/HDP 2.x releases.

Workaround

There is currently no workaround.

BUG-98727 N/A HBase

Description of the problem or behavior

Because region assignment has been refactored in HBase, there are unresolved issues that may affect the stability of the Region replication feature. If you rely on this feature, you are recommended to wait for a future HDP 3.x release, which will restore the stability that this feature had in HBase 1.x/HDP 2.x releases.

Workaround

There is currently no workaround.

BUG-107399 N/A Knox

Description of the problem or behavior

After upgrading from previous HDP versions, certain topology deployments may return a 503 error. This includes, but may not be limited to, knoxsso.xml for KnoxSSO-enabled services.

Workaround

When this is encountered, making a minor change (even whitespace) through Ambari to the knoxsso topology (or any other topology with this issue) and restarting the Knox gateway server should eliminate the issue.

BUG-107236 N/A Ranger

Description of the problem or behavior

The Atlas REST sync source is not supported for tagsync.

Workaround

Using Kafka as the tagsync source is recommended.

BUG-107434 N/A Hive

Description of the problem or behavior

Bucketed tables must be recreated when you upgrade a cluster to HDP 3.0. The hash function for bucketing has changed in HDP 3.0, causing problems in certain operations, such as INSERT and JOIN. In these cases, Hive does not handle queries correctly if you mix old and new tables in the same query. To avoid this problem, recreate bucketed tables using the following workaround after upgrading, but before running any queries on the cluster. A consolidated example follows the steps.

Workaround:

  1. Identify tables that have obsolete buckets. You can identify these tables by the TBLPROPERTY bucketing_version. If bucketing_version is 1, you need to recreate the table.

    Example: SHOW TBLPROPERTIES x;

    +---------------------------+-------------+
    |         prpt_name         | prpt_value  |
    +---------------------------+-------------+
    | bucketing_version         | 1           |
    | numFiles                  | 1           |
    …

    Proceed to the next step.

  2. Using a CTAS statement, create a new table. For example, create table x_new based on the old table x:

    Example: CREATE TABLE x_new AS SELECT * FROM x;

  3. Verify that table x_new has all the data.
  4. Drop old table x.
  5. Create a table named the same as the old table x, using its original schema (definition).

    Example: CREATE TABLE x …

  6. Insert the data from table x_new into the recreated table x.

    Example: INSERT INTO x SELECT * FROM x_new;

  7. Verify that the data is in table x.
  8. Drop the table x_new.
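
Putting the steps together, the following is a minimal, hypothetical sketch for a table x with a single INT column a bucketed into 2 buckets; substitute your actual table definition in step 5:

-- Hypothetical end-to-end sketch for recreating a bucketed table x.
-- 1. Check the bucketing version; bucketing_version = 1 means recreate.
SHOW TBLPROPERTIES x;

-- 2. Copy the data into a new table with CTAS.
CREATE TABLE x_new AS SELECT * FROM x;

-- 3. and 4. Verify the copy, then drop the old table.
SELECT COUNT(*) FROM x_new;
DROP TABLE x;

-- 5. Recreate x with its original definition (illustrative columns/buckets).
CREATE TABLE x (a INT) CLUSTERED BY (a) INTO 2 BUCKETS;

-- 6. and 7. Reload and verify the data.
INSERT INTO x SELECT * FROM x_new;
SELECT COUNT(*) FROM x;

-- 8. Drop the intermediate table.
DROP TABLE x_new;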