Release Notes

Known Issues

Hortonworks Bug ID / Apache JIRA / Apache Component / Summary

BUG-50023 PHOENIX-3916 Phoenix

Description of Problem: The hbck repair tool sometimes generates local indexes that are inconsistent with table data when overlapping regions are encountered.

Workaround: If you know the database schema, fix this issue by dropping and recreating all local indexes of the table after the hbck tool completes its operation. Alternatively, rebuild the local indexes using the following ALTER query:

ALTER INDEX IF EXISTS index_name ON data_table_name REBUILD

BUG-60904 KNOX-823 Knox

Description of Problem: When Ambari is proxied by Apache Knox, the QuickLinks are not rewritten to go back through the gateway.

Workaround: If all access to Ambari is through Knox, you can use the new Ambari quicklink profile to hide URLs and/or change them to go through Knox permanently. A future release will make these links reflect the gateway appropriately.

BUG-65977 SPARK-14922 Spark

Description of Problem: Since Spark 2.0.0, `DROP PARTITION BY RANGE` does not support comparison operators; only '=' is supported, while '<', '>', '<=', and '>=' are not.

Error Message:

scala> sql("alter table t drop partition (b<1)").show
org.apache.spark.sql.catalyst.parser.ParseException:
mismatched input '<' expecting {')', ','}(line 1, pos 31)

== SQL ==
alter table t drop partition (b<1)
-------------------------------^^^

Workaround: To drop a partition, use an exact match with '=':

scala> sql("alter table t drop partition (b=1)").show
BUG-70956 N/A Zeppelin

Description of Problem: A Hive query submitted to the %jdbc interpreter returns a proxy validation error.

Associated error messages:

  • HiveSQLException: Failed to validate proxy privilege of zeppelin for <user>

  • The hiveserver2.log file lists a permission denied exception: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=<user>, access=WRITE, inode="/user/<user>":hdfs:hdfs:drwxr-xr-x at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319 … org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)

Workaround:

  1. Create the user account on all worker nodes. For example, if the account is user3, issue the following command: $ adduser user3

  2. Restart the %jdbc interpreter.

BUG-70956 N/A Zeppelin

Description of Problem: When used with Hive, the %jdbc interpreter might require Hadoop common jar files that need to be added manually.

Workaround:

  1. On the Interpreters page, add a new repository.

  2. Set the repository ID to “hortonworks”.

  3. Set the URL to “http://nexus-private.hortonworks.com/nexus/content/groups/public/”.

  4. Identify the version of HDP running on the cluster. You can find this in the Zeppelin UI by clicking on your user ID and choosing "About Zeppelin." The number after the Zeppelin version is the HDP version number.

  5. In the Dependencies section on the Interpreters page, remove existing jar files and add the following three jar files, with the correct HDP version for your cluster. The following example uses version 2.6.0.0-484:

    • org.apache.hive:hive-jdbc::standalone:1.2.1000.2.6.0.0-484

    • Xerces:xerces:2.4.0

    • org.apache.hadoop:hadoop-common:2.7.3.2.6.0.0-484

BUG-74152 PHOENIX-3688 Phoenix

Description of Problem: Rebuilding (ALTER INDEX IDX ON TABLE REBUILD) an index created on a table that has a row_timestamp column results in no data being visible to the user through that index.

Workaround: Drop the index and recreate it. Recreating the index has no extra overhead compared with rebuilding it.

BUG-75179 ZEPPELIN-2170 Zeppelin

Description of Problem: Zeppelin does not show all WARN messages thrown by spark-shell. The log level of the output shown at the Zeppelin notebook level cannot be changed.

Workaround: Currently, there is no known workaround.

BUG-76996 N/A Spark 2 (Livy)

Description of Problem: When upgrading from HDP 2.5.x to HDP 2.6.0 and using Spark 2, the Livy interpreter is configured with a scope of 'global'; it should be changed to 'scoped'.

Workaround: After upgrading from HDP 2.5 to HDP 2.6, set the interpreter mode for %livy (Spark 2) to "scoped" using the pulldown menu in the %livy section of the Interpreters page.

BUG-78919 N/A Zeppelin

Description of problem: Restarting Zeppelin while the disk is 100% full fails with "ValueError: No JSON object could be decoded".

Associated error message: The following appears in the error logs:

Traceback (most recent call last):
File , line 312, in <module>
Master().execute()
File , line 280, in execute
method(env)
File , line 182, in start
self.update_kerberos_properties()
File , line 232, in update_kerberos_properties
config_data = self.get_interpreter_settings()
File , line 207, in get_interpreter_settings
config_data = json.loads(config_content)
File , line 339, in loads
 return _default_decoder.decode(s)

Workaround: Free up disk space, delete /etc/zeppelin/conf/*.json, and then restart the Zeppelin server.

BUG-79238 N/A Ranger

Component Affected: Ranger, all

Description of Problem: SSL is deprecated; its use in production is not recommended. Use TLS.

Workaround: For Ambari: Use ssl.enabled.protocols=TLSv1|TLSv1.1|TLSv1.2 and security.server.disabled.protocols=SSL|SSLv2|SSLv3. For help configuring TLS for other components, contact customer support. Documentation will be provided in a future release.

BUG-80656 N/A Zeppelin

Description of Problem: Zeppelin fails to start during the upgrade process from HDP 2.5 to HDP 2.6. The error starts with

Exception in thread "main" org.apache.shiro.config.ConfigurationException: Unable to instantiate class org.apache.zeppelin.server.ActiveDirectoryGroupRealm for object named 'activeDirectoryRealm'. Please ensure you've specified the fully qualified class name correctly.

Workaround: This error is due to a change in the configuration class for Active Directory.

In HDP 2.5:

org.apache.zeppelin.server.ActiveDirectoryGroupRealm

In HDP 2.6:

org.apache.zeppelin.realm.ActiveDirectoryGroupRealm

To resolve this issue, choose one of the following two alternatives:

  • Proceed with the upgrade, and change the configuration in the shiro.ini file after the upgrade is complete (when Ambari allows configuration change).

  • At time of failure, change the class name in /usr/hdp/current/zeppelin-server/conf/shiro.ini, and then start Zeppelin manually.
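
For the second alternative, the fix is a one-line class-name update in shiro.ini. A sketch, assuming the realm is defined in the standard [main] section (other settings in the file stay as they are):

```
[main]
# HDP 2.6: the realm class moved from the .server to the .realm package
activeDirectoryRealm = org.apache.zeppelin.realm.ActiveDirectoryGroupRealm
```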

BUG-80901 N/A Zeppelin

Component Affected: Zeppelin/Livy

Description of Problem: This occurs when running applications through Zeppelin/Livy that require third-party libraries. These libraries are not installed on all nodes in the cluster, but they are installed on the edge nodes. In yarn-client mode this works, because the job is submitted and runs on the edge node where the libraries are installed. In yarn-cluster mode it fails because the libraries are missing.

Workaround: Set the location in spark.jars in spark-defaults.conf. For Livy, you also need to set livy.spark.jars (the HDFS location) in the Livy interpreter configuration. Both are globally applicable, and in both cases the jars need to be present on the Livy machine. Updating the Livy configuration is preferable because it affects only Zeppelin users.

BUG-81637 N/A Spark

Description of Problem: Executing concurrent queries over Spark via the Spark1-llap package spawns multiple threads. This may cause multiple queries to fail, but it does not break the Spark Thrift Server. Spark 1.6 is built with Scala 2.10, which is where this issue manifests (Scala 2.10 reflection is not thread-safe, so reflection code must be synchronized). The issue was subsequently fixed in Scala 2.11 by this patch: https://issues.scala-lang.org/browse/SI-6240.

Associated error messages:

  • scala.reflect.internal.Symbols$CyclicReference: illegal cyclic reference involving class LlapContext
  • SparkExecuteStatementOperation: Error running hive query:
    org.apache.hive.service.cli.HiveSQLException: scala.reflect.internal.Symbols$CyclicReference: illegal cyclic reference involving class LlapContext

Workaround: Isolate the broken queries and re-run them one by one. This limits each query to one spawned thread.

BUG-86418 N/A Zeppelin

Description of Problem: After upgrading from Ambari 2.4.2 to Ambari 2.5.2 and a subsequent HDP stack upgrade from 2.5 to 2.6, the jdbc(hive) interpreter fails to work correctly in Zeppelin.

Associated Error Message: You might see one of the following errors in the Zeppelin stacktrace after running jdbc(hive):

  • Error in doAs

  • Failed to validate proxy privilege of zeppelin

Workaround:

  1. Make sure hadoop.proxyuser.zeppelin.groups=* and hadoop.proxyuser.zeppelin.hosts=* are set in HDFS core-site.xml. If not, then configure these properties and restart all stale services. (AMBARI-21772 is currently tracking this item).

  2. Make sure hive.url is configured correctly in Zeppelin's JDBC (Hive) interpreter.

    Note:

    The URL configured might be wrong, especially on secured and/or wire-encrypted clusters, due to a known issue that we will address in a future release.

  3. Restart HS2.
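
Step 1 above corresponds to core-site.xml entries like the following (the permissive '*' values named in that step are shown):

```xml
<property>
  <name>hadoop.proxyuser.zeppelin.groups</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.zeppelin.hosts</name>
  <value>*</value>
</property>
```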

BUG-87128 N/A Mahout

Since Mahout is deprecated in favor of Spark ML, and every code change carries the risk of creating additional incompatibilities, we will document these difficulties rather than change these established behaviors in Mahout. These issues affect only Mahout.

  • BUG-80945: Potential DoS vulnerability due to ReadLine usage in 9 files

  • BUG-80944: Potential JavaScript vulnerability in RecommenderServlet.java

  • BUG-80942: Potential Path Manipulation vulnerability due to path input usage in 8 files

  • BUG-80941: Potential DoS vulnerability due to StringBuilder usage in ConfusionMatrix.java

  • BUG-80940: Potential DoS vulnerability due to ReadLine usage in 8 additional files

  • BUG-80938: Potential invalid use of Object.equals in FastMap.java and FastByIDMap.java

BUG-88614 N/A Hive

Description of Problem: The RDBMS schema for the Hive metastore contains an index HL_TXNID_INDEX defined as:

CREATE INDEX HL_TXNID_INDEX ON HIVE_LOCKS USING hash (HL_TXNID);

Hash indexes are not recommended by Postgres; details can be found at https://www.postgresql.org/docs/9.4/static/indexes-types.html.

Workaround: Change this index to type BTREE.
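
A sketch of the change in Postgres, assuming the metastore is quiesced so the index can be dropped safely:

```sql
-- Replace the hash index with a btree index on the same column
DROP INDEX HL_TXNID_INDEX;
CREATE INDEX HL_TXNID_INDEX ON HIVE_LOCKS USING btree (HL_TXNID);
```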

BUG-89714 N/A Ranger

Description of problem: Sudden increase in Login Session audit events from Ranger Usersync and Ranger Tagsync.

Workaround: If the policy store DB size increases suddenly, take a backup of the policy store DB and periodically purge the 'x_auth_sess' table from the Ranger DB schema.
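
An illustrative sketch of the purge, assuming the default Ranger schema; the column name and retention window here are assumptions, and a backup should be taken first:

```sql
-- Remove login-session audit rows older than 90 days (illustrative window)
DELETE FROM x_auth_sess WHERE create_time < NOW() - INTERVAL '90 days';
```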

BUG-91304 HIVE-18099 Ambari, Hive, MapReduce, Tez

Description of problem: Running Hive with Tez fails to load a configured native library, for example the Snappy compression library.

Associated error message:

 java.lang.RuntimeException: java.io.IOException: Unable to get CompressorType for codec (org.apache.hadoop.io.compress.SnappyCodec). This is most likely due to missing native libraries for the codec.

Workaround: Add the configuration parameter mapreduce.admin.user.env to tez-site.xml, specifying the native library path. For example:

<property>
  <name>mapreduce.admin.user.env</name>
  <value>LD_LIBRARY_PATH=./tezlib/lib/</value>
</property>

BUG-91364 AMBARI-22506 Zeppelin

Description of problem: The pie chart does not display the correct distribution when the data contains a "," character, that is, when number formatting is applied to the data.

Associated error message: No error message.

Workaround: In Zeppelin's JDBC interpreter settings, manually add the property "phoenix.phoenix.query.numberFormat" with the value "#.#".

BUG-91996 LIVY-299 Livy, Zeppelin

Description of Problem: The Livy Spark interpreter prints only the last line of code in the output. For example, if you submit the following:

print(10)
print(11)

Livy will only print "11" and ignore the first line.

Workaround: If you want to see the output of a particular line, make it the last line of the code block in a paragraph.

BUG-94623 HIVE-12505 Hive

Description of problem: Spark does not handle INSERT OVERWRITE operations correctly when an HDFS quota is set on the Trash folder, and can record the result of the operation incorrectly. If Spark is unable to move files into the trash due to a quota limit, they are incorrectly recorded as part of the result.

Workaround: An available patch permanently deletes files when the quota on the trash is reached.

BUG-95909 RANGER-1960 Ranger

Description of problem: Deleting a snapshot fails even if the user has the Admin privilege, because the namespace is not considered in the authorization flow of the HBase Ranger plugin.

Associated error message: ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient permissions for user '<username>' (action=admin)

Workaround: For delete snapshot to succeed, the user needs to be a system-wide admin.

BUG-96378 SPARK-23355 Spark

Description of problem or behavior: spark.sql.hive.convertMetastoreParquet and spark.sql.hive.convertMetastoreOrc are the options that make Spark use its built-in reader and writer instead of Hive SerDe. When these options are true, Spark ignores the table properties. Since Apache Spark 2.0, Spark ignores Parquet Hive table properties because convertMetastoreParquet is true by default.

Workaround:

  • Use CREATE TABLE USING HIVE syntax for table properties.

  • Use spark.hadoop configurations explicitly every time for the missed properties.

  • Use Hive SerDe by setting convertMetastoreParquet=false.

    Note:

    When Hive SerDe is used, the vectorized reader is also turned off.
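
The first workaround bullet above can be sketched as follows; the table name and properties are hypothetical, and the USING HIVE clause creates a Hive SerDe table so that table properties are respected:

```sql
CREATE TABLE t (id INT, payload STRING)
USING HIVE
OPTIONS (fileFormat 'parquet')
TBLPROPERTIES ('parquet.compression' = 'SNAPPY')
```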

BUG-97052 HIVE-17403 Spark

Description of problem: Concatenating ORC tables from Spark can cause data loss.

Workaround: Use Hive to concatenate ORC tables instead.

BUG-98058 SQOOP-3291 Sqoop

Description of problem: Job data is published to listeners (for example, through Atlas via sqoop.job.data.publish.class) during Hive and HCat imports. Currently this happens before the Hive import completes, so the data is reported even if the Hive import fails.

Workaround: Currently, there is no known workaround.

BUG-100266 PHOENIX-3521, PHOENIX-4190 Phoenix

Description of Problem: In a rare condition, queries against index tables may not return all expected records. This happens only when a Region of the index table is being compacted and scanned at the same time. The issue is difficult to reproduce because the incorrect result does not occur every time.

RMP-7861 HBASE-14138 HBase

Description of Problem: Only an HBase superuser can perform HBase backup-and-restore.

BUG-103805 SPARK-24322 Spark

Description of problem: This is an issue with the Spark native ORC file implementation, where the timestamp type returns incorrect millisecond values when spark.sql.orc.impl=native.

Workaround: Set spark.sql.orc.impl=hive.
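
The setting can be applied per session as well as in spark-defaults.conf; a minimal sketch in spark-shell:

```scala
// Fall back to the Hive ORC reader, avoiding the native implementation
spark.conf.set("spark.sql.orc.impl", "hive")
```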