DLM Release Notes

Known Issues

DLM has the following known issues, scheduled for resolution in a future release. Where available, a workaround has been provided.

Each entry below lists the Hortonworks bug ID, the affected category, and a summary of the issue.
BUG-87028 DPS Platform UI

Problem: Cluster status is not consistent between the DLM App and the DPS Platform pages.

Description: The status for NodeManager and for the DataNode on a cluster can take up to 10 minutes to update in DPS Platform.

Workaround: If the information does not update, wait a few minutes and refresh the page to view the status.

BUG-100998 Hive cloud replication

Problem: Hive cloud replication reports success, but does not copy the data.

Workaround: When upgrading HDP to 2.6.5 with DLM 1.1, you must add the following parameters in Ambari, in the HDFS core-site configuration:

fs.s3a.fast.upload = true

fs.s3a.fast.upload.buffer = disk

fs.s3a.multipart.size = 67108864
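Expressed as core-site.xml entries (the same settings you would add under Custom core-site in Ambari), the workaround looks like the following sketch; the values are the ones listed above:

```xml
<!-- core-site.xml: S3A upload settings required for Hive cloud replication -->
<property>
  <name>fs.s3a.fast.upload</name>
  <value>true</value>
</property>
<property>
  <name>fs.s3a.fast.upload.buffer</name>
  <!-- buffer multipart uploads on local disk rather than in memory -->
  <value>disk</value>
</property>
<property>
  <name>fs.s3a.multipart.size</name>
  <!-- 67108864 bytes = 64 MB per multipart upload part -->
  <value>67108864</value>
</property>
```

After saving the configuration, restart the affected HDFS services so the settings take effect.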

BUG-101787 Replication failure in TDE setup

Problem: HDFS replication with both TDE and plain directory fails

Description: HDFS replication fails when some files are encrypted and some are unencrypted. If the source directory is unencrypted but contains both encrypted and unencrypted subfolders, replication jobs fail with a checksum mismatch error.

Workaround: Ensure that all folders under the source root directory have the same encryption setting: either all unencrypted, or all encrypted with the same key.

BUG-77340 Restart of HiveServer2 and Knox

Problem: HS2 failover requires a Knox restart if cookie-based authentication is enabled for HS2.

Description: When HiveServer2 is accessed via Knox Gateway and HiveServer2 has cookie-based authentication enabled, a HiveServer2 restart requires that Knox also be restarted to get Knox-HiveServer2 interaction working again.

Workaround: Set hive.server2.thrift.http.cookie.auth.enabled=false in hive-site.xml in Ambari.
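The Ambari change corresponds to the following hive-site.xml entry (a sketch of the setting named in the workaround); restart HiveServer2 after applying it:

```xml
<!-- hive-site.xml: disable cookie-based authentication for the
     HiveServer2 HTTP (Thrift-over-HTTP) transport -->
<property>
  <name>hive.server2.thrift.http.cookie.auth.enabled</name>
  <value>false</value>
</property>
```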

BUG-99572 Hive cloud cluster setup

Problem: Hive replication (on-premise to cloud) does not work when using credential providers.

Description: Hive commands do not work on a cloud cluster when AWS S3 credentials are set up using hadoop.security.credential.provider.path configuration.

Workaround: Set credentials using the fs.s3a.security.credential.provider.path configuration property instead.
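As a sketch, the S3A-scoped provider path can be set in core-site.xml as shown below; the jceks keystore path is hypothetical and should point at the credential store that holds your AWS access keys:

```xml
<!-- core-site.xml: scope the credential provider to the S3A connector,
     instead of the global hadoop.security.credential.provider.path -->
<property>
  <name>fs.s3a.security.credential.provider.path</name>
  <!-- hypothetical keystore location; replace with your own jceks path -->
  <value>jceks://hdfs/user/admin/aws.jceks</value>
</property>
```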

BUG-99489 Replication failure in TDE setup

Problem: The beacon user is not added to the KMS policy by default.

Error Message: When the source or destination dataset is set up with TDE, replication fails with the message:
User:beacon not allowed to do 'DECRYPT_EEK' on '<keyname>'

Workaround: Encryption key management and policies are not handled by DLM. Set up the keys and key policies manually.

BUG-102726 Replication policy creation fails when directory names conflict

Problem: HDFS: Onprem to Onprem: Parent dataset identification failed.

Error Message: Replication policy create job fails with message:
Source dataset already in replication

Description: Creating a replication policy fails when a directory name is a prefix of another directory name that is already part of a replication job. For example, if a replication policy exists for data in /data/report, you cannot create another replication policy on /data/reporting, because "report" is a prefix of "reporting".

This conflict only arises when the directories are at the same level. For example, replication for /data/2018/report would execute correctly.

Workaround: Rename the directories, if possible, to ensure directory names do not have a prefix match with other folders being replicated.