Release Notes

Known Issues

Hortonworks Bug ID | Apache JIRA | Apache Component | Summary
BUG-60904 KNOX-823 Knox

Component Affected: Knox

Description of Problem: When Ambari is proxied by Apache Knox, the QuickLinks are not rewritten to go back through the gateway.

Workaround: If all access to Ambari in the deployment is through Knox, the new Ambari quicklink profile may be used to hide and/or change URLs so that they go through Knox permanently. A future release will make these links reflect the gateway appropriately.

BUG-61512 FALCON-1965 Falcon

Component Affected: Falcon

Description of Problem: When upgrading to HDP 2.6 or later with Falcon, you might encounter the following error when starting the ActiveMQ server:

ERROR - [main:] ~ Failed to start ActiveMQ JMS Message Broker. Reason: java.lang.NegativeArraySizeException (BrokerService:528)

You might also encounter this error when downgrading Falcon from HDP 2.6 to an earlier release.

Workaround: If you encounter this error, delete the ActiveMQ history and then restart Falcon. If you want to retain the history, be sure to back up the ActiveMQ history prior to deleting it.


cd <ACTIVEMQ_DATA_DIR>
rm -rf ./localhost
cd /usr/hdp/current/falcon-server
su -l <FALCON_USER>
./bin/falcon-stop
./bin/falcon-start

BUG-65977 SPARK-14922 Spark

Component Affected: Spark

Description of Problem: Since Spark 2.0.0, `DROP PARTITION BY RANGE` is not supported by the SQL grammar. In other words, only '=' is supported, while '<', '>', '<=', and '>=' are not.

Associated Error Message:

scala> sql("alter table t drop partition (b<1) ").show
org.apache.spark.sql.catalyst.parser.ParseException:
mismatched input '<' expecting {')', ','}(line 1, pos 31)

== SQL ==
alter table t drop partition (b<1)
-------------------------------^^^ 

Workaround: To drop a partition, use an exact match with '=':

scala> sql("alter table t drop partition (b=0) ").show

BUG-68628 SPARK-16605, SPARK-16628 Spark

Component Affected: Spark

Description of Problem: Column names are not created for a Spark DataFrame.

Workaround: There are two workaround options for this issue:

  • Use Spark to create the tables instead of Hive.

  • Set: sqlContext.setConf("spark.sql.hive.convertMetastoreOrc", "false")

BUG-68632 SPARK-18355 Spark

Component Affected: Spark

Description of Problem: The property spark.sql.hive.convertMetastoreOrc is set to "true" by default in HDP, which may impact some workloads.

Workaround: You can set this property to "false". Note that the "true" setting cannot be used for ORC tables with new columns added by `ALTER TABLE`; upstream Apache Spark defaults this property to false for safety.

BUG-70956 N/A Zeppelin

Component Affected: Zeppelin

Description of Problem: A Hive query submitted to the %jdbc interpreter returns a proxy validation error.

Associated error messages:

  • HiveSQLException: Failed to validate proxy privilege of zeppelin for <user>

  • The hiveserver2.log file lists a permission denied exception: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=<user>, access=WRITE, inode="/user/<user>":hdfs:hdfs:drwxr-xr-x at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319 … org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)

Workaround:

  1. Create the user account on all worker nodes. For example, if the account is user3, issue the following command: $ adduser user3

  2. Restart the %jdbc interpreter.

BUG-70956 N/A Zeppelin

Component Affected: Zeppelin

Description of Problem: When used with Hive, the %jdbc interpreter might require Hadoop common jar files that need to be added manually.

Workaround:

  1. On the Interpreters page, add a new repository.

  2. Set the repository ID to “hortonworks”.

  3. Set the URL to “http://nexus-private.hortonworks.com/nexus/content/groups/public/”.

  4. Identify the version of HDP running on the cluster. You can find this in the Zeppelin UI by clicking on your user ID and choosing "About Zeppelin." The number after the Zeppelin version is the HDP version number.

  5. In the Dependencies section on the Interpreters page, remove existing jar files and add the following three jar files, with the correct HDP version for your cluster. The following example uses version 2.6.0.0-484:

    • org.apache.hive:hive-jdbc::standalone:1.2.1000.2.6.0.0-484

    • xerces:xerces:2.4.0

    • org.apache.hadoop:hadoop-common:2.7.3.2.6.0.0-484

BUG-74152 PHOENIX-3688 Phoenix

Component Affected: Phoenix

Description of Problem: Rebuilding an index (ALTER INDEX IDX ON TABLE REBUILD) created on a table that has a row_timestamp column results in no data being visible to the user for that index.

Workaround: Drop the index and recreate it. Recreating the index incurs no extra overhead compared with rebuilding it.
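
As a sketch in Phoenix SQL (the index, table, and column names below are hypothetical; substitute your own), the workaround replaces the rebuild with a drop and re-create:

```sql
-- Hypothetical names; adjust to your schema.
-- Instead of: ALTER INDEX IDX ON MY_TABLE REBUILD
DROP INDEX IDX ON MY_TABLE;
CREATE INDEX IDX ON MY_TABLE (COL1) INCLUDE (COL2);
```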

BUG-75179 ZEPPELIN-2170 Zeppelin

Component Affected: Zeppelin

Description of Problem: Zeppelin does not show all WARN messages thrown by spark-shell. The log level of notebook output cannot be changed at the Zeppelin notebook level.

Associated Error Message: There is no error message; this issue relates only to increasing or decreasing the logging in notebook output.

Workaround: Currently, there is no workaround for this.

BUG-76417 N/A YARN

Component Affected: YARN

Description of problem: The default heap size (Xmx) for the YARN Timeline Service may not be sufficient, depending on your workloads. Consider the setting yarn.timeline-service.entity-group-fs-store.app-cache-size in conjunction with the heap size.

Workaround: Increase the Timeline Service heap size to a minimum of 8 GB, and increase it further as the total load grows.
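
A minimal sketch of the change, assuming the heap is controlled through the YARN_TIMELINESERVER_HEAPSIZE variable in the yarn-env template (the variable name and MB units are assumptions; verify them against your yarn-env.sh):

```shell
# Assumption: the Timeline Service heap is set via YARN_TIMELINESERVER_HEAPSIZE
# (in MB) in yarn-env.sh, or the yarn-env template in Ambari. 8192 MB = 8 GB.
export YARN_TIMELINESERVER_HEAPSIZE=8192
```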

BUG-76996 N/A Spark

Component Affected: Spark (Livy)

Description of Problem: When upgrading from HDP-2.5.x to HDP-2.6.0, the Livy interpreter is configured with a scope of 'Global'.

Workaround: After upgrading from HDP 2.5 to HDP 2.6, set the scope for the Livy interpreter to 'scoped' on the Interpreters page.

BUG-76996 N/A Spark 2 (Livy)

Description of Problem: When upgrading from HDP-2.5.x to HDP-2.6.0 and using Spark2, the Livy interpreter is configured with a scope of 'global', and should be changed to 'scoped'.

Workaround: After upgrading from HDP 2.5 to HDP 2.6, set the interpreter mode for %livy (Spark 2) to "scoped" using the pulldown menu in the %livy section of the Interpreters page.

BUG-77238 ZEPPELIN-2261 Spark

Component Affected: Spark (Livy), Zeppelin

Description of Problem: Encrypted calls to Livy REST APIs are not supported, and Zeppelin calls to Livy over an encrypted connection are not supported.

BUG-77311 N/A Zeppelin

Description of Problem: When one user restarts the %livy interpreter from the Interpreters (admin) page, other users' sessions restart too.

Workaround: Restart the %livy interpreter from within a notebook.

BUG-78035 ZEPPELIN-1263 Spark

Component Affected: Zeppelin

Description of Problem: Setting spark.driver.memory has no effect; the driver memory is always 1 GB.

Workaround: To change the driver memory, specify it in the SPARK_DRIVER_MEMORY property on the interpreter setting page for your spark interpreter.

BUG-78244 RANGER-1484 Ranger

See "RangerUI: Escape of policy condition text entered in the policy form" for a complete description and workaround options.

BUG-80656 N/A Zeppelin

Component Affected: Zeppelin

Description of Problem: Zeppelin fails to start during the upgrade process from HDP 2.5 to HDP 2.6. The error starts with:

Exception in thread "main" org.apache.shiro.config.ConfigurationException: Unable to instantiate class org.apache.zeppelin.server.ActiveDirectoryGroupRealm for object named 'activeDirectoryRealm'. Please ensure you've specified the fully qualified class name correctly.

Workaround: This error is caused by a change in the configuration class for Active Directory between releases:

In HDP 2.5:

org.apache.zeppelin.server.ActiveDirectoryGroupRealm

In HDP 2.6:

org.apache.zeppelin.realm.ActiveDirectoryGroupRealm

To resolve this issue, choose one of the following two alternatives:

  • Proceed with the upgrade, and change the configuration in the shiro.ini file after the upgrade is complete (when Ambari allows configuration changes).

  • At time of failure, change the class name in /usr/hdp/current/zeppelin-server/conf/shiro.ini, and then start Zeppelin manually.
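
The second option amounts to a one-line substitution in shiro.ini. A sketch, run here against a scratch copy so it is self-contained; on a real cluster the target file would be /usr/hdp/current/zeppelin-server/conf/shiro.ini:

```shell
# Demonstrate the fix against a scratch copy of the relevant shiro.ini line.
SHIRO=/tmp/shiro-demo.ini
printf 'activeDirectoryRealm = org.apache.zeppelin.server.ActiveDirectoryGroupRealm\n' > "$SHIRO"

# Swap the HDP 2.5 class name for the HDP 2.6 one (GNU sed in-place edit).
sed -i 's/org\.apache\.zeppelin\.server\.ActiveDirectoryGroupRealm/org.apache.zeppelin.realm.ActiveDirectoryGroupRealm/' "$SHIRO"

cat "$SHIRO"
```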

BUG-80901 N/A Zeppelin

Component Affected: Zeppelin/Livy

Description of Problem: This issue occurs when applications that require third-party libraries are run through Zeppelin/Livy. The libraries are installed on the edge nodes but not on all nodes in the cluster. In yarn-client mode this works, because the job is submitted and runs on the edge node where the libraries are installed. In yarn-cluster mode it fails because the libraries are missing.

Workaround: Set the location in spark.jars in spark-defaults.conf. For Livy, set livy.spark.jars (an HDFS location) in the Livy interpreter configuration. Both settings apply globally, and in both cases the jars must be present on the Livy machine. Updating the Livy configuration is preferable because it affects only Zeppelin users.
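
A sketch of the two settings (the jar name and paths below are hypothetical):

```
# spark-defaults.conf -- spark.jars takes paths local to the Livy machine
spark.jars /usr/hdp/current/spark-client/extra-jars/mylib.jar

# Livy interpreter configuration in Zeppelin -- livy.spark.jars takes an HDFS
# location; the jar must also be present on the Livy machine
livy.spark.jars hdfs:///apps/livy/jars/mylib.jar
```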

HOPS-35 N/A Ambari, Falcon

Description of Problem: The Falcon service check fails on PPC.

Workaround: Install the Berkeley DB jar file on the Ambari server host, as follows:

  1. wget -O je-5.0.73.jar http://search.maven.org/remotecontent?filepath=com/sleepycat/je/5.0.73/je-5.0.73.jar

  2. cp je-5.0.73.jar /usr/share/

  3. chmod 644 /usr/share/je-5.0.73.jar

  4. ambari-server setup --jdbc-db=bdb --jdbc-driver=/usr/share/je-5.0.73.jar

  5. ambari-server restart

  6. Restart the Falcon service.

N/A N/A N/A

Description of problem: OpenJDK 8u242 is not supported, as it causes Kerberos failures.

Workaround: Use a different version of OpenJDK.

Technical Service Bulletin | Apache JIRA | Apache Component | Summary
TSB-327 HDFS-5698 HDFS

CVE-2018-11768: HDFS FSImage Corruption (potential DoS, file/dir takeover)

In very large clusters, the in-memory representation of user, group, ACL, and extended attribute data may exceed the size limits of the on-disk format, causing corruption of the fsImage.

For more information on this issue, see the corresponding Knowledge article: TSB 2021-327: CVE-2018-11768: HDFS FSImage Corruption (potential DoS, file/dir takeover)

TSB-405 N/A N/A

Impact of LDAP Channel Binding and LDAP signing changes in Microsoft Active Directory

Microsoft has introduced changes in LDAP Signing and LDAP Channel Binding to increase the security of communications between LDAP clients and Active Directory domain controllers. These optional changes will have an impact on how third-party products integrate with Active Directory using the LDAP protocol.

Workaround: Disable the LDAP Signing and LDAP Channel Binding features in Microsoft Active Directory if they are enabled.

For more information on this issue, see the corresponding Knowledge article: TSB 2021-405: Impact of LDAP Channel Binding and LDAP signing changes in Microsoft Active Directory

TSB-406 N/A HDFS

CVE-2020-9492 Hadoop filesystem bindings (ie: webhdfs) allows credential stealing

WebHDFS clients might send a SPNEGO authorization header to a remote URL without proper verification. A maliciously crafted request can trigger services to send server credentials to a webhdfs path (ie: webhdfs://…), allowing capture of the service principal.

For more information on this issue, see the corresponding Knowledge article: TSB 2021-406: CVE-2020-9492 Hadoop filesystem bindings (ie: webhdfs) allows credential stealing

TSB-434 HADOOP-17208, HADOOP-17304 Hadoop

KMS Load Balancing Provider Fails to invalidate Cache on Key Delete

For more information on this issue, see the corresponding Knowledge article: TSB 2020-434: KMS Load Balancing Provider Fails to invalidate Cache on Key Delete

TSB-465 N/A HBase

Corruption of HBase data stored with MOB feature

For more information on this issue, see the corresponding Knowledge article: TSB 2021-465: Corruption of HBase data stored with MOB feature on upgrade from CDH 5 and HDP 2

TSB-497 N/A Solr

CVE-2021-27905: Apache Solr SSRF vulnerability with the Replication handler

The Apache Solr ReplicationHandler (normally registered at "/replication" under a Solr core) has a "masterUrl" (also "leaderUrl" alias) parameter. The “masterUrl” parameter is used to designate another ReplicationHandler on another Solr core to replicate index data into the local core. To help prevent the CVE-2021-27905 SSRF vulnerability, Solr should check these parameters against a similar configuration used for the "shards" parameter.

For more information on this issue, see the corresponding Knowledge article: TSB 2021-497: CVE-2021-27905: Apache Solr SSRF vulnerability with the Replication handler

TSB-512 N/A HBase

HBase MOB data loss

HBase tables with the MOB feature enabled may encounter problems which result in data loss.

For more information on this issue, see the corresponding Knowledge article: TSB 2021-512: HBase MOB data loss