Apache Ambari Release Notes

Known Issues

Ambari 2.7.0 has the following known issues, scheduled for resolution in a future release.

Table 1.5. Ambari 2.7.0 Known Issues

Apache Jira

HWX Jira

Problem

Solution

N/A | BUG-110831

'ambari-server setup-ldap' fails with AttributeError when master_key is not persisted

  1. Installed ambari-server and configured password encryption, but chose not to persist the master key:

    ===================
    Choose one of the following options:
      [1] Enable HTTPS for Ambari server.
      [2] Encrypt passwords stored in ambari.properties file.
      [3] Setup Ambari kerberos JAAS configuration.
      [4] Setup truststore.
      [5] Import certificate to truststore.
    ===========================================================================
    Enter choice, (1-5): 2
    Password encryption is enabled.
    Do you want to reset Master Key? [y/n] (n): y
    Master Key not persisted.
    Enter current Master Key:
    Enter new Master Key:
    Re-enter master key:
    Do you want to persist master key. If you choose not to persist, you need to
    provide the Master Key while starting the ambari server as an env variable
    named AMBARI_SECURITY_MASTER_KEY or the start will prompt for the master key.
    Persist [y/n] (y)? n
    Adjusting ambari-server permissions and ownership...
    Ambari Server 'setup-security' completed successfully.

    Then, export the environment variable:

    export AMBARI_SECURITY_MASTER_KEY=hadoop

    Then, run LDAP setup with the following settings:

    ambari-server setup-ldap -v
    ====================
    Review Settings
    ====================
    Primary LDAP Host (ldap.ambari.apache.org):
     ctr-e138-1518143905142-473336-01-000002.hwx.site
    Primary LDAP Port (389):  389
    Use SSL [true/false] (false):  false
    User object class (posixUser):  posixUser
    User ID attribute (uid):  uid
    Group object class (posixGroup):  posixGroup
    Group name attribute (cn):  cn
    Group member attribute (memberUid):  memberUid
    Distinguished name attribute (dn):  dn
    Search Base (dc=ambari,dc=apache,dc=org):  dc=apache,dc=org
    Referral method [follow/ignore] (follow):  follow
    Bind anonymously [true/false] (false):  false
    Handling behavior for username collisions [convert/skip] for LDAP sync (skip):
     skip
    ambari.ldap.connectivity.bind_dn: uid=hdfs,ou=people,ou=dev,dc=apache,dc=org
    ambari.ldap.connectivity.bind_password: *****
    Save settings [y/n] (y)? y
  2. Issues:

    1. Master key retrieval fails with an AttributeError:

      INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
      Traceback (most recent call last):
        File "/usr/sbin/ambari-server.py", line 1060, in <module>
          mainBody()
        File "/usr/sbin/ambari-server.py", line 1030, in mainBody
          main(options, args, parser)
        File "/usr/sbin/ambari-server.py", line 980, in main
          action_obj.execute()
        File "/usr/sbin/ambari-server.py", line 79, in execute
          self.fn(*self.args, **self.kwargs)
        File "/usr/lib/ambari-server/lib/ambari_server/setupSecurity.py", line 860, in setup_ldap
          encrypted_passwd = encrypt_password(LDAP_MGR_PASSWORD_ALIAS, mgr_password, options)
        File "/usr/lib/ambari-server/lib/ambari_server/serverConfiguration.py", line 858, in encrypt_password
          return get_encrypted_password(alias, password, properties, options)
        File "/usr/lib/ambari-server/lib/ambari_server/serverConfiguration.py", line 867, in get_encrypted_password
          masterKey = get_original_master_key(properties, options)
        File "/usr/lib/ambari-server/lib/ambari_server/serverConfiguration.py", line 1022, in get_original_master_key
          if options is not None and options.master_key is not None and options.master_key:
      AttributeError: Values instance has no attribute 'master_key'
      [root@ctr-e138-1518143905142-473336-01-000002 ~]#
    2. Repeated prompts for the Master Key, despite providing the correct value.

    3. When an incorrect master key value is entered, the shell repeatedly prints "ERROR: ERROR: Master key does not match." and scrolls the page.

  3. These issues occur when the master key is not persisted as part of the initial password encryption step.

Persist the master key BEFORE setting up LDAP.
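The env-variable fallback described in the setup prompt above can be illustrated with a short sketch. This is illustrative Python only, not Ambari's actual implementation; only the variable name AMBARI_SECURITY_MASTER_KEY comes from the prompt text.

```python
import os
import getpass

def get_master_key() -> str:
    """Return the master key from AMBARI_SECURITY_MASTER_KEY if set,
    otherwise prompt for it interactively (as the start prompt describes)."""
    key = os.environ.get("AMBARI_SECURITY_MASTER_KEY")
    if key:
        return key
    return getpass.getpass("Enter current Master Key: ")

# With the variable exported (as in the workaround above), no prompt appears.
os.environ["AMBARI_SECURITY_MASTER_KEY"] = "hadoop"
print(get_master_key())  # -> hadoop
```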
AMBARI-24536 | BUG-109839

When SPNEGO is enabled (`ambari-server setup-kerberos`), the SSO (`ambari-server setup-sso`) redirect no longer works.

No known workaround. Do not enable both Kerberos and SSO using ambari-server setup.
N/A | BUG-109047

If your cluster has ever had a Jethro Data mpack installed, the Ambari 2.7.0 upgrade fails due to an NPE when processing Ambari Infra changes, with the following message:

2018-07-17 01:57:48,282 ERROR [main] SchemaUpgradeHelper:238 - Upgrade failed.
java.lang.NullPointerException
 at org.apache.ambari.server.upgrade.UpgradeCatalog270.updateInfraKerberosDescriptor(UpgradeCatalog270.java:1282)
 at org.apache.ambari.server.upgrade.UpgradeCatalog270.updateKerberosDescriptorArtifact(UpgradeCatalog270.java:1202)
 at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.updateKerberosDescriptorArtifacts(AbstractUpgradeCatalog.java:797)
 at org.apache.ambari.server.upgrade.UpgradeCatalog270.executeDMLUpdates(UpgradeCatalog270.java:1052)
 at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeData(AbstractUpgradeCatalog.java:985)
 at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeDMLUpdates(SchemaUpgradeHelper.java:236)
 at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:456)
2018-07-17 01:57:48,282 ERROR [main] SchemaUpgradeHelper:473 - Exception occurred during upgrade, failed
org.apache.ambari.server.AmbariException
 at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeDMLUpdates(SchemaUpgradeHelper.java:239)
 at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:456)
Caused by: java.lang.NullPointerException
 at org.apache.ambari.server.upgrade.UpgradeCatalog270.updateInfraKerberosDescriptor(UpgradeCatalog270.java:1282)
 at org.apache.ambari.server.upgrade.UpgradeCatalog270.updateKerberosDescriptorArtifact(UpgradeCatalog270.java:1202)
 at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.updateKerberosDescriptorArtifacts(AbstractUpgradeCatalog.java:797)
 at org.apache.ambari.server.upgrade.UpgradeCatalog270.executeDMLUpdates(UpgradeCatalog270.java:1052)
 at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeData(AbstractUpgradeCatalog.java:985)
 at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeDMLUpdates(SchemaUpgradeHelper.java:236)
... 1 more
If your cluster has a Jethro Data mpack installed, consider upgrading from Ambari 2.6.2.2 directly to Ambari 2.7.1.
N/A | BUG-103704

Slider service check fails during express upgrade in a Kerberos environment when the HDFS client is not present on the host where the smoke test command is issued.

Move an existing hdfs keytab from any other host to the host where the Slider service check failed.
N/A | BUG-105092

If Ranger HA and/or Oozie Server HA is configured and a custom composite keytab file is being used, service checks for Ranger and Oozie will fail during the HDP 2.6 to HDP 3.0 Upgrade.

Re-create the custom Ranger and/or Oozie Server keytab files and re-try the service check, or ignore and proceed past the service check.

N/A | BUG-105451

Pre-upgrade checks can take a few minutes to run in large clusters.

No known workaround.
N/A | BUG-105515

When installing a cluster with a small number of hosts, Ambari can place too many services on the first node.

Use the installation wizard to place components on other servers in the cluster that have adequate resources to run them.
N/A | BUG-105584

If Knox is being used to access the Ambari Server, and the Knox instance is managed by that Ambari Server, Knox will be restarted as part of certain move master operations. This can cause the UI to become unavailable.

Use Ambari Server directly when performing move master operations, as they may require a Knox restart: Knox must be restarted to pick up the configuration pointing to the new location of the moved service.
N/A | BUG-105609

Because of the ellipses added to component names, it can be tough to distinguish component actions for TIMELINE SERVICE V1.5 and TIMELINE SERVICE V2.0 READER.

Hover the cursor over the component name to see the full name.
N/A | BUG-105700

When changing the value of the Ranger Admin or Usersync log directories, Ambari will not prompt you to restart the Ranger components.

Restart Ranger using the Ambari UI after editing either of these properties.
N/A | BUG-106557

Service checks for Atlas will fail when Atlas is installed on a wire-encrypted cluster.

Remove Atlas, and then re-add it.
N/A | BUG-106672

If an Ambari agent's local cache of stack data has been corrupted, any attempt to add a component to that host will time out after one hour.
N/A | BUG-106836

It's possible for administrators to create a short URL for an Ambari View that overlaps an existing short URL. The existing short URL will no longer work.

Do not create duplicate short URLs; if you do, remove the newly created short URL.
N/A | BUG-106995

When saving a configuration change in Ambari, if there are any configuration validation warnings and you click Cancel instead of Proceed Anyway, configuration elements will not be visible in the Advanced tab.

Refresh the page, or navigate away from the configuration section and back again.
N/A | BUG-107022

Ambari will show that a component has been successfully decommissioned or recommissioned even if the process was not successful.

Check the state of the component in the actual service before attempting to retry the operation. For example, if decommissioning a DataNode, use the HDFS NameNode UI to check the state of that DataNode before retrying the operation in Ambari.
N/A | BUG-107040

The Ranger KMS Database Testing button will not display unless "Setup Database and Database User" is set to No.

If warned about not completing testing for Ranger KMS, continue without testing the database.
N/A | BUG-107057

When the OneFS Management Pack has been installed and OneFS is being used as the cluster filesystem, the OneFS service can disappear from the Customize Services step of the installation wizard after navigating past this step and back again.

Go back to Step 4 (Choose Services), re-select OneFS, and proceed through the wizard.
N/A | BUG-107067

Service-level keytab regeneration should not be used in this release.

Cluster-wide keytab regeneration is the only recommended approach for regenerating keytabs.
N/A | BUG-107479

The Zeppelin package will fail to install when using Amazon Linux 2.

On the host where you wish to install Zeppelin, run `yum install zeppelin*`, and then retry the failed install operation in Ambari.
N/A | BUG-109559

An NPE occurs when migrating the users table during upgrade to Ambari 2.7.0 with an Oracle DB:

2018-08-20 11:36:46,395 ERROR [main] SchemaUpgradeHelper:207 - Upgrade failed.
java.lang.NullPointerException
	at org.apache.ambari.server.upgrade.UpgradeCatalog270.convertUserCreationTimeToLong(UpgradeCatalog270.java:595)
	at org.apache.ambari.server.upgrade.UpgradeCatalog270.upgradeUserTables(UpgradeCatalog270.java:342)
	at org.apache.ambari.server.upgrade.UpgradeCatalog270.executeDDLUpdates(UpgradeCatalog270.java:318)
	at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:970)
	at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:205)
	at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:448)
2018-08-20 11:36:46,395 ERROR [main] SchemaUpgradeHelper:473 - Exception occurred during upgrade, failed
org.apache.ambari.server.AmbariException
	at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:208)
	at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.main(SchemaUpgradeHelper.java:448)
Caused by: java.lang.NullPointerException
	at org.apache.ambari.server.upgrade.UpgradeCatalog270.convertUserCreationTimeToLong(UpgradeCatalog270.java:595)
	at org.apache.ambari.server.upgrade.UpgradeCatalog270.upgradeUserTables(UpgradeCatalog270.java:342)
	at org.apache.ambari.server.upgrade.UpgradeCatalog270.executeDDLUpdates(UpgradeCatalog270.java:318)
	at org.apache.ambari.server.upgrade.AbstractUpgradeCatalog.upgradeSchema(AbstractUpgradeCatalog.java:970)
	at org.apache.ambari.server.upgrade.SchemaUpgradeHelper.executeUpgrade(SchemaUpgradeHelper.java:205)
	... 1 more

This is caused by one or more records with a NULL value in the create_time field.

For example:

user_id | user_name | user_type | create_time
1       | admin     | LOCAL     | NULL

Workaround:

Update the relevant records so they do not have a NULL in the create_time column.

For example:

UPDATE users SET create_time=systimestamp WHERE create_time IS NULL;
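The workaround can be rehearsed before running the real UPDATE. The sketch below is illustrative only: it uses Python's sqlite3 with an in-memory stand-in for the Ambari users table (SQLite has no systimestamp, so CURRENT_TIMESTAMP is used instead; run the actual statement against your Ambari database).

```python
import sqlite3

# Illustrative stand-in for the Ambari "users" table (SQLite, not Oracle).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (user_id INTEGER, user_name TEXT, "
             "user_type TEXT, create_time TIMESTAMP)")
conn.execute("INSERT INTO users VALUES (1, 'admin', 'LOCAL', NULL)")
conn.execute("INSERT INTO users VALUES (2, 'ops', 'LOCAL', '2018-08-20 11:00:00')")

# Find the rows that would trip the NPE during upgrade.
bad = conn.execute(
    "SELECT user_id, user_name FROM users WHERE create_time IS NULL").fetchall()
print(bad)  # -> [(1, 'admin')]

# Backfill them; on Oracle use systimestamp instead of CURRENT_TIMESTAMP.
conn.execute("UPDATE users SET create_time = CURRENT_TIMESTAMP "
             "WHERE create_time IS NULL")
remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE create_time IS NULL").fetchone()[0]
print(remaining)  # -> 0
```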

Solution:

During upgrade, protect against a null value from currentUserCreateTime.getValue() at org/apache/ambari/server/upgrade/UpgradeCatalog270.java:595:

    dbAccessor.updateTable(USERS_TABLE, temporaryColumnName, currentUserCreateTime.getValue().getTime(),
        "WHERE " + USERS_USER_ID_COLUMN + "=" + currentUserCreateTime.getKey());

AMBARI-25069 | SPEC-58, BUG-116328

HDP 3.0.0 with a local repository fails to deploy: empty baseurl values written to the repo files when using a local repository cause an HDP stack installation failure.
  1. Go to the folder /usr/lib/ambari-server/web/javascripts using cd /usr/lib/ambari-server/web/javascripts

  2. Take a backup of app.js using cp app.js app.js_backup

  3. Edit the app.js file. Find the line (39892): onNetworkIssuesExist: function () {

    Change the code from:

    /**
       * Use Local Repo if some network issues exist
       */
      onNetworkIssuesExist: function () {
        if (this.get('networkIssuesExist')) {
          this.get('content.stacks').forEach(function (stack) {
              stack.setProperties({
                usePublicRepo: false,
                useLocalRepo: true
              });
              stack.cleanReposBaseUrls();
          });
        }
      }.observes('networkIssuesExist'),

    to

    /**
       * Use Local Repo if some network issues exist
       */
      onNetworkIssuesExist: function () {
        if (this.get('networkIssuesExist')) {
          this.get('content.stacks').forEach(function (stack) {
            if(stack.get('useLocalRepo') != true){
              stack.setProperties({
                usePublicRepo: false,
                useLocalRepo: true
              });
              stack.cleanReposBaseUrls();
            }
          });
        }
      }.observes('networkIssuesExist'), 
  4. Reload the page, and then start the create cluster wizard again.
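The essence of the patch above is an idempotency guard: a stack is switched to the local repo (and its base URLs cleared) only if it is not already using the local repo, so baseurl values a user has already entered are not wiped every time the networkIssuesExist event fires. A minimal sketch of that guard pattern (illustrative Python with plain dicts, not the app.js code itself):

```python
def on_network_issues_exist(stacks):
    """Switch each stack to the local repo, but only once per stack,
    so base URLs the user has already entered are not cleared again."""
    for stack in stacks:
        if not stack["useLocalRepo"]:      # the guard the patch adds
            stack["usePublicRepo"] = False
            stack["useLocalRepo"] = True
            stack["baseurl"] = ""          # stands in for cleanReposBaseUrls()

stacks = [{"useLocalRepo": False, "usePublicRepo": True,
           "baseurl": "http://public.example/repo"}]
on_network_issues_exist(stacks)                  # first event: clears baseurl
stacks[0]["baseurl"] = "http://my-local-repo"    # user enters a local URL
on_network_issues_exist(stacks)                  # later event: guard preserves it
print(stacks[0]["baseurl"])  # -> http://my-local-repo
```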