DLM Administration

Policy guidelines and considerations

Consider the following items when creating or modifying a replication policy.

Data security
  • If using TDE for encryption, the entire source directory must be either encrypted or unencrypted; if it contains a mix of encrypted and unencrypted content, policy creation fails (see the sketch after this list).
  • If using an S3 cluster for your policy, your credentials must have been registered on the Cloud Credentials page.
  • On destination clusters, the DLM Engine must have been granted write permissions on folders being replicated.
  • Any user with access to the DLM UI can browse, from within the UI, the folder structure of any cluster enabled for DLM.

    Therefore, DataPlane Admins and Infra Admins can see folders, files, and databases in the DLM UI that they might not have access to in HDFS. However, these administrators cannot view the contents of files on the source or destination clusters from the DLM UI, nor can they modify or delete folders or files that are viewable from the DLM UI.
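
As a rough illustration of the TDE requirement above, here is a minimal Python sketch, assuming the hdfs client is on the PATH and that the caller may run the Apache Hadoop command `hdfs crypto -listZones`. It checks whether a source directory lies entirely inside one encryption zone (fully encrypted) or entirely outside all zones (fully unencrypted); the helper names are hypothetical, and real validation is performed by DLM at policy creation.

    import subprocess

    def list_encryption_zones():
        """Return encryption zone paths from `hdfs crypto -listZones`."""
        out = subprocess.run(
            ["hdfs", "crypto", "-listZones"],
            capture_output=True, text=True, check=True,
        ).stdout
        zones = []
        for line in out.splitlines():
            parts = line.split()
            if parts and parts[0].startswith("/"):
                zones.append(parts[0].rstrip("/") + "/")
        return zones

    def tde_uniform(source_dir, zones):
        """True if source_dir is wholly encrypted or wholly unencrypted."""
        src = source_dir.rstrip("/") + "/"
        inside = any(src.startswith(z) for z in zones)    # whole tree encrypted
        nested = [z for z in zones if z.startswith(src)]  # zones under the source
        # An unencrypted parent containing an encrypted child is a mix,
        # which causes policy creation to fail.
        return inside or not nested

    zones = list_encryption_zones()
    print(tde_uniform("/data/finance", zones))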

Policy properties and settings
  • Ensure that the frequency is set so that a job finishes before the next job starts. Jobs based on the same policy cannot overlap: if a job does not complete before another job starts, the second job does not execute and is given the status Skipped. If jobs are consistently skipped, you might need to modify the frequency of the policy.
  • Specify the bandwidth per map, in MBps. Each map is restricted to consume only the specified bandwidth. The limit is not always exact: a map throttles back its bandwidth consumption during a copy so that the net bandwidth used tends toward the specified value. The aggregate bandwidth available to a job is therefore roughly the number of maps multiplied by the per-map limit (see the sketch after this list).
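
To make the scheduling and throttling guidance above concrete, the following is a minimal back-of-the-envelope sketch in Python; all numbers are hypothetical examples, not DLM defaults. It estimates a job's aggregate bandwidth as the number of maps multiplied by the per-map limit, and flags a frequency that would cause runs to overlap and be Skipped.

    # Rough planning arithmetic; DLM sets frequency and per-map bandwidth
    # in the policy itself, not through this script.
    data_size_mb = 900_000        # data copied per run (~900 GB, hypothetical)
    num_maps = 20                 # concurrent map tasks for the copy
    bandwidth_per_map_mbps = 10   # per-map throttle from the policy (MBps)
    frequency_minutes = 60        # how often the policy runs

    aggregate_mbps = num_maps * bandwidth_per_map_mbps    # rough throughput ceiling
    est_runtime_min = data_size_mb / aggregate_mbps / 60  # seconds -> minutes

    print(f"Aggregate bandwidth: ~{aggregate_mbps} MBps")
    print(f"Estimated runtime:   ~{est_runtime_min:.0f} minutes")
    if est_runtime_min >= frequency_minutes:
        print("Warning: runs would overlap; subsequent jobs would be Skipped.")
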
Cluster requirements
  • The target folder or database on the destination cluster must either be empty or not exist prior to starting a new policy instance.
  • The clusters you want to include in the replication policy must have been paired already.
  • On the Create Policy page, the only requirement for clusters to display in the Source Cluster or Destination Cluster fields is that they are DLM-enabled. You must ensure that the clusters you select are healthy before you start a policy instance (job).
Hive restrictions
  • ACID tables, external tables, storage handler-based tables (such as HBase), and column statistics are currently not replicated.
  • When creating a schedule for a Hive replication policy, you should set the frequency so that changes are replicated often enough to avoid overly large copies.
Time considerations
  • The first time you execute a job (an instance of a policy) with data that has not been previously replicated, Data Lifecycle Manager creates a new folder or database and bootstraps the data.

    During a bootstrap operation, all data is replicated from the source cluster to the destination. As a result, the initial execution of a job can take a significant amount of time, depending on how much data is being replicated, available network bandwidth, and so forth, so plan the bootstrap accordingly (a sizing sketch follows at the end of this section).

    After initial bootstrap, data replication is performed incrementally, so only updated data is transferred. Data is in a consistent state only after incremental replication has captured any new changes that occurred during bootstrap.

  • Achieving a one-hour Recovery Point Objective (RPO) depends on how you set up your replication jobs and the configuration of your environment (see the sketch after this list):
    • Select data in sizes that replicate within 30 minutes.
    • Set replication frequency to 45 minutes or fewer.
    • Ensure that network bandwidth is sufficient, so that data can move fast enough to meet your RPO.
    • Consider the rate of change of data being replicated.
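
As a companion to the bootstrap and RPO guidance above, here is a minimal sizing sketch in Python; the figures are hypothetical and should be replaced with your own measured change rate and throughput. It estimates bootstrap duration from data size and effective bandwidth, then checks a policy against the one-hour-RPO rules of thumb: incremental copies that fit within 30 minutes and a frequency of 45 minutes or fewer.

    # Rough planning arithmetic for bootstrap and a one-hour RPO.
    TOTAL_DATA_GB = 5_000         # full dataset to bootstrap (hypothetical)
    CHANGE_RATE_GB_PER_HR = 40    # how fast the source data changes
    EFFECTIVE_MBPS = 200          # sustained copy throughput (maps x per-map limit)
    FREQUENCY_MIN = 45            # replication frequency in the policy

    def copy_minutes(gb, mbps):
        """Minutes to copy `gb` gigabytes at `mbps` megabytes per second."""
        return gb * 1024 / mbps / 60

    bootstrap_min = copy_minutes(TOTAL_DATA_GB, EFFECTIVE_MBPS)
    print(f"Bootstrap estimate: ~{bootstrap_min / 60:.1f} hours")

    # Each incremental run copies roughly one interval's worth of changes.
    incremental_gb = CHANGE_RATE_GB_PER_HR * FREQUENCY_MIN / 60
    incremental_min = copy_minutes(incremental_gb, EFFECTIVE_MBPS)
    print(f"Incremental copy:   ~{incremental_min:.0f} minutes per run")

    if incremental_min > 30:
        print("Warning: incremental copies exceed 30 minutes; reduce data size or add bandwidth.")
    if FREQUENCY_MIN > 45:
        print("Warning: frequency exceeds 45 minutes; tighten the schedule for a one-hour RPO.")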