Apache Ambari Release Notes

Known Issues

Ambari 2.5.2 has the following known issues, scheduled for resolution in a future release. Also, refer to the Ambari Troubleshooting Guide for additional information.

Table 1.6. Ambari 2.5.2 Known Issues

Oozie Hive job failed with NoClassDefFoundError

When Oozie HA is enabled, the Oozie Hive job may fail with the following error in the YARN application log:

<<< Invocation of Main class completed<<<
 Failing Oozie Launcher, Main class[org.apache.oozie.action.hadoop.HiveMain], main() threw exception,
 org/apache/hadoop/hive/shims/ShimLoaderjava.lang.NoClassDefFoundError: org/apache/hadoop/hive/shims/ShimLoader
   at org.apache.hadoop.hive.conf.HiveConf$ConfVars.<clinit>(HiveConf.java:400)
   at org.apache.hadoop.hive.conf.HiveConf.<clinit>(HiveConf.java:109)
   at sun.misc.Unsafe.ensureClassInitialized(Native Method)

Use the following workaround:

- Run the following command as the oozie user:

oozie admin -oozie http://<oozie-server-host>:11000/oozie -sharelibupdate

- Rerun the Oozie Hive job to verify that it succeeds.
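The sharelib refresh can also be checked afterward with `oozie admin -shareliblist`. Because both commands require a live Oozie server, this sketch (with a hypothetical host name) only echoes the commands to run:

```shell
# Hypothetical Oozie server URL -- substitute your own host.
OOZIE_URL="http://oozie-server.example.com:11000/oozie"

# Refresh the sharelib (run as the oozie user), then list the hive
# sharelib to confirm the update took effect. The commands are echoed
# rather than executed, since they need a running Oozie server.
echo "oozie admin -oozie $OOZIE_URL -sharelibupdate"
echo "oozie admin -oozie $OOZIE_URL -shareliblist hive"
```

If the ShimLoader class is still missing after the update, confirm that the hive sharelib listing actually includes the Hive shims jars.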


Ranger KMS fails to start after regenerating keytabs (on an IOP-upgraded cluster), with the following error:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/RANGER_KMS/", line 125, in <module>
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 865, in restart
    self.start(env, upgrade_type=upgrade_type)
  File "/var/lib/ambari-agent/cache/common-services/RANGER_KMS/", line 70, in start
  File "/var/lib/ambari-agent/cache/common-services/RANGER_KMS/", line 517, in enable_kms_plugin
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 604, in action_create_on_execute
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 601, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 328, in action_delayed
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 287, in _assert_valid
    self.target_status = self._get_file_status(target)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 430, in _get_file_status
    list_status = self.util.run_command(target, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 177, in run_command
    return self._run_command(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 235, in _run_command
    _, out, err = get_user_call_output(cmd, user=self.run_user, logoutput=self.logoutput, quiet=False)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 61, in get_user_call_output
    raise ExecutionFailed(err_msg, code, files_output[0], files_output[1])
resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -sS -L -w '%{http_code}' -X GET --negotiate -u : 
'http://natr66-kods-iop420tofnsec-r6-6.openstacklocal:50070/webhdfs/v1/ranger/audit?op=GETFILESTATUS' 1>/tmp/tmpPy6Oos 2>/tmp/tmpYjY6SZ' 
returned 7. curl: (7) couldn't connect to host401
Restart the Ambari server after the Express Upgrade completes.
BUG-86606: After upgrading a cluster from Ambari 2.4.2 and HDP 2.5.2, Zeppelin proxy user settings are not configured in core-site.xml.

Manually configure hadoop.proxyuser.zeppelin.hosts and hadoop.proxyuser.zeppelin.groups in core-site.xml.
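A sketch of the two properties as they would appear in core-site.xml; the wildcard values below are illustrative only, and should be narrowed to the Zeppelin host and permitted groups per your security policy:

```xml
<property>
  <name>hadoop.proxyuser.zeppelin.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.zeppelin.groups</name>
  <value>*</value>
</property>
```

On an Ambari-managed cluster, set these through the Ambari UI (HDFS > Configs > Custom core-site) rather than editing the file directly, so Ambari does not overwrite them on the next restart.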
BUG-82900: If hosts are in Maintenance Mode during the Enable Kerberos wizard, the Kerberos client is not installed on them, and keytabs and principals are not created for those hosts.

For each host that was in Maintenance Mode while Kerberos was being enabled, install the Kerberos client using either the REST API or the Ambari Web UI (go to the affected host and choose Install Clients). Then generate the missing keytabs: go to Admin > Kerberos > Regenerate Keytabs and choose the "Only regenerate keytabs for missing hosts and components" option.
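The REST API route can be sketched as below. The server, cluster, host, and credentials are hypothetical placeholders, and the commands are only echoed here since they require a live Ambari server; the general pattern is a POST to register the KERBEROS_CLIENT host component followed by a PUT that moves it to the INSTALLED state:

```shell
# Hypothetical Ambari server, cluster, and host names -- substitute your own.
AMBARI="http://ambari.example.com:8080"
CLUSTER="mycluster"
HOST="worker1.example.com"
COMP="$AMBARI/api/v1/clusters/$CLUSTER/hosts/$HOST/host_components/KERBEROS_CLIENT"

# Register the KERBEROS_CLIENT component on the host, then ask Ambari to
# install it. Echoed rather than executed; the X-Requested-By header is
# required by the Ambari REST API.
echo "curl -u admin:admin -H 'X-Requested-By: ambari' -X POST $COMP"
echo "curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{\"HostRoles\":{\"state\":\"INSTALLED\"}}' $COMP"
```

Repeat for each host that was in Maintenance Mode, then regenerate the missing keytabs from the Ambari UI.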


NiFi service fails to start on an HDP + HDF integration cluster with the following exception:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi_ca.py",
 line 114, in <module> CertificateAuthority().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 329, in execute method(env)
  File "/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/nifi_ca.py",
 line 92, in start Execute((run_ca_script, params.jdk64_home, ca_server_script,
   params.nifi_config_dir + '/nifi-certificate-authority.json', params.nifi_ca_log_file_stdout,
   params.nifi_ca_log_file_stderr, status_params.nifi_ca_pid_file), user=params.nifi_user)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py",
 line 155, in __init__ self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py",
 line 160, in run self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py",
 line 124, in run_action provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
 line 262, in action_run tries=self.resource.tries, try_sleep=self.resource.try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py",
 line 72, in inner result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py",
 line 102, in checked_call tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py",
 line 150, in _call_wrapper result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py",
 line 303, in _call raise ExecutionFailed(err_msg, code, out, err)
 Execution of '/var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/run_ca.sh /usr/jdk64/jdk1.8.0_112
 /usr/hdf/current/nifi/conf/nifi-certificate-authority.json /grid/0/log/nifi/nifi-ca.stdout /grid/0/log/nifi/nifi-ca.stderr
 /var/run/nifi/nifi-ca.pid' returned 126. -bash:
 /var/lib/ambari-agent/cache/common-services/NIFI/1.0.0/package/scripts/run_ca.sh: Permission denied

Use the following workaround:

- Take the host out of Maintenance Mode

- Use Install Clients on the host's detail page to install all of the clients, including the Kerberos client.

- Regenerate keytabs using Admin > Kerberos > Regenerate Keytabs



No log information displays in Ambari Host Details > Log tab.

Host Environment: RHEL 6 / MySQL 5.6

Restart the Ambari Server host.



Hive client fails to start on a blueprint-deployed cluster with only Hive Metastore and ZooKeeper deployed.

No known workaround. Please contact Hortonworks Customer Support for assistance.



The Hive Visual Explain Plan does not work in the Internet Explorer or Edge browsers.

Use the Chrome or Firefox browser to work with the Hive Visual Explain Plan feature.



After restarting Hive Server Interactive (HSI), stale values calculated from earlier YARN configurations are applied, because restarting HSI does not automatically recalculate the HSI configurations.

Manually trigger the HSI/LLAP configuration recalculation using one of the following actions (try them in the order listed):

  • Turn HSI off, then turn it back on manually.

  • Re-select a queue from the drop-down menu.

  • Adjust the number of LLAP nodes (this slider is editable only if the selected queue is named "llap" and is at root level).

  • Adjust the Maximum Total Concurrent Queries value, if necessary.