Release Notes

Known Issues

BUG-87028 (DPS Platform UI)

Summary: Cluster status is not consistent between the DLM App and the DPS Platform pages.

Description: The status for NodeManager and for the DataNode on a cluster can take up to 10 minutes to update in DPS Platform.

Workaround: If the information does not update, wait a few minutes and refresh the page to view the status.

BUG-90784 (DLM Service UI)

Summary: The Ranger UI does not display deny policy items.

Description: When a policy with deny conditions is created in Ranger admin for a replication relationship, the Policy Details page in Ranger does not display the deny policy items.

Workaround: If deny policy items do not appear on the Ranger admin Policy Details page, update the respective service-def with the enableDenyAndExceptionsInPolicies="true" option.

Refer to section "2.2 Enhanced Policy model" in
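As a sketch of that workaround (the endpoint cited in the comments is from the Ranger public REST API; the sample JSON below is a minimal stand-in, not a complete service-def):

```python
import json

# Stand-in for the service-def JSON fetched from Ranger admin, e.g. via
# GET /service/public/v2/api/servicedef/name/hdfs (only the relevant
# fields are shown here).
servicedef = {"name": "hdfs", "options": {}}

# Enable deny and exception policy items so they render on the policy pages.
servicedef["options"]["enableDenyAndExceptionsInPolicies"] = "true"

# The updated definition is then PUT back to the same endpoint.
print(json.dumps(servicedef))
```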

DLM Service UI

Summary: Under some circumstances, a successful HDFS data transfer displays incorrect information, instead of the actual bytes transferred.


Description: The bytes transferred are not properly shown when map tasks are killed because nodes are lost in a cluster. New map tasks are launched in an attempt to recover, resulting in incorrectly displayed statistics.

Workaround: None


Issue: During installation, the DP Profiler service fails to start with the error: "No java installations was detected."

Description: There is an issue in the way DP Profiler locates the Java path. If Java is installed from a tarball, rather than through a package manager such as yum, it is not added to the system path.

Workaround: The system on which DP Profiler is being installed must have Java in the system's PATH variable, so that Java can be detected correctly.
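A minimal pre-install check along these lines can confirm that Java is resolvable through the PATH before the installer runs; the JDK path below is hypothetical:

```python
import os
import shutil

# DP Profiler detects Java via the system PATH, so check resolution
# the same way before running the installer.
java = shutil.which("java")
if java is None:
    # Hypothetical tarball install location; adjust to where the JDK
    # was actually unpacked.
    os.environ["PATH"] = "/usr/java/jdk1.8.0_112/bin" + os.pathsep + os.environ["PATH"]
    java = shutil.which("java")
print("java resolved to:", java)
```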

BUG-91018 (DLM Engine, API)

Issue: If a trailing slash is appended to the HDFS path, HDFS replication fails for Ranger.

Description: When defining HDFS replication policies, including a slash at the end of the HDFS path causes the replication job to fail.

Workaround: Do not add a trailing slash to the HDFS path. This limitation applies when an HDFS replication policy is created through a REST API call.
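Until the limitation is addressed, callers building the REST request can defensively strip the trailing slash before submitting the policy; this helper is illustrative, not part of the DLM API:

```python
def normalize_hdfs_path(path: str) -> str:
    """Strip trailing slashes from an HDFS dataset path, preserving root '/'."""
    stripped = path.rstrip("/")
    return stripped if stripped else "/"

print(normalize_hdfs_path("/apps/warehouse/"))  # -> /apps/warehouse
print(normalize_hdfs_path("/"))                 # -> /
```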


Summary: Spark history from DSS jobs is filling up HDFS capacity.

Description: The profiler jobs in DSS generate a large amount of data in spark-history on HDFS, which can fill up HDFS capacity if not managed properly.

Workaround:

  1. Log in to Ambari on the cluster.

  2. Select Spark2 > Configs > Custom spark2-defaults.

  3. Add the following, one per line:


    This causes Spark history from jobs older than 7 days to be cleaned up once per day. Adjust the values as needed.
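The property names are not shown above; assuming the intent is Spark's standard history-server cleaner settings, a configuration matching the described behavior (daily cleanup, 7-day retention) would be:

```properties
spark.history.fs.cleaner.enabled=true
spark.history.fs.cleaner.interval=1d
spark.history.fs.cleaner.maxAge=7d
```

After saving the configuration, restart the affected Spark2 services when Ambari prompts for it so the change takes effect.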