4. Known Issues

Ambari 2.1 has the following known issues, scheduled for resolution in a future release. Use the workarounds described below until then:

 

Table 1.6. Ambari 2.1 Known Issues

Apache Jira

HWX Jira

Problem

Solution

BUG-47666

SNMPv3 is not supported.

The SNMPv3 notification method is not supported.

BUG-41365

With umask 027 and an Oracle database, the Ranger Admin install fails with a JDBC class-not-found error.

Change the permissions on the Oracle JDBC jar at /usr/share/java/ojdbc6.jar to 644 on all hosts and retry.
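A minimal sketch of the permission fix. The jar path is the one named above; the commands are demonstrated on a scratch file so they can be tried anywhere:

```shell
# The actual fix, run on every host, is:
#   chmod 644 /usr/share/java/ojdbc6.jar
# Demonstrated here on a scratch file standing in for the jar:
f=$(mktemp)
chmod 640 "$f"       # what a 027 umask typically produces
chmod 644 "$f"       # the fix: world-readable so the JDBC driver can be loaded
stat -c '%a' "$f"    # prints 644
rm -f "$f"
```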

BUG-41331

Hyperlink text in the pdf documentation does not link to target topics.

Use hyperlinks in the html documentation to navigate between topics.

BUG-41308

DataNode Fails to Install on RHEL/CentOS 7.

During cluster install, DataNode fails to install with the following error:

resource_management.core.exceptions.
Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install snappy-devel' returned 1.
Error: Package: snappy-devel-1.0.5-1.el6.x86_64 (HDP-UTILS-1.1.0.20)
           Requires: snappy(x86-64) = 1.0.5-1.el6
           Installed: snappy-1.1.0-3.el7.x86_64 (@anaconda/7.1)
               snappy(x86-64) = 1.1.0-3.el7
           Available: snappy-1.0.5-1.el6.x86_64 (HDP-UTILS-1.1.0.20)
               snappy(x86-64) = 1.0.5-1.el6        

Hadoop requires a snappy-devel package that is a lower version than what is already on the machine. Run the following on the host and retry:

yum remove snappy
yum install snappy-devel            

BUG-41250

Widget edit dialog shows for read-only users.

The widget edit dialog is displayed on login by a read-only user. The user will not be able to edit or create a widget. Cancel the dialog, log out, and log back in; the dialog will no longer be displayed.

AMBARI-12434

BUG-41244

After closing the Ranger Admin Wizard, if you re-open it, the wizard opens at the step where you closed it.

Refresh your browser to start from the beginning of the Wizard.

BUG-41177

When the Hive metastore service is installed on a separate host from the provided MySQL server it will fail to start with the following error:

File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 291, in _call raise Fail(err_msg) resource_management.core.exceptions.
Fail: Execution of 'export HIVE_CONF_DIR=/etc/hive/conf.server ; 
/usr/hdp/current/hive-metastore/bin/schematool 
-initSchema -dbType mysql -userName hive -passWord [PROTECTED]' returned 1. 
Metastore connection URL:
  jdbc:mysql://revo2.hortonworks.local/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: hive
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
*** schemaTool failed ***

Create a Hive database user scoped to the host that the Hive metastore is installed on. For example, assuming the Hive metastore is installed on host1.hortonworks.local and the Hive MySQL password is 'hive123', run the following from the server on which MySQL is installed:

mysql> create user 'hive'@'host1.hortonworks.local' identified by 'hive123';
mysql> grant ALL on hive.* to 'hive'@'host1.hortonworks.local';
mysql> flush privileges;

BUG-41165

When installing Knox with a custom pid directory and Ambari Agents running as a non-root user, Knox fails to start with the following error:

File "/var/lib/ambari-agent/cache/common-services/KNOX/
0.5.0.2.2/package/scripts/knox_gateway.py",
line 152, in start
os.unlink(params.knox_managed_pid_symlink)
OSError: [Errno 13] Permission denied: '/usr/hdp/current/knox-server/pids'

Remove the existing symlink and re-link it to the custom pid directory. For example, if the custom pid directory is '/grid/0/run/knox/', issue the following commands:

sudo rm /usr/hdp/current/knox-server/pids
sudo ln -s /grid/0/run/knox/ /usr/hdp/current/knox-server/pids

BUG-41009

Static view instances throw a NullPointerException when attempting to access the view.

If you use a static <instance> in your view, you will receive a NullPointerException in ambari-server.log when attempting to use the view. You must instead use a dynamic view instance. Remove the static <instance> from your view.xml.

BUG-41044

After upgrading from HDP 2.1 and restarting Ambari, the Admin > Stack and Versions > Versions tab does not show in Ambari Web.

After performing an upgrade from HDP 2.1 and restarting Ambari Server and the Agents, if you browse to Admin > Stack and Versions in Ambari Web, the Versions tab does not display. Give all the Agent hosts in the cluster a chance to connect to Ambari Server by waiting for Ambari to show the Agent heartbeats as green, and then refresh your browser.

AMBARI-12389

BUG-41040

After adding Falcon to your cluster, the Oozie configuration is not properly updated.

After adding Falcon to your cluster using "Add Service", the Oozie configuration is not properly updated. After completing the Add Service wizard, add properties on Services > Oozie > Configs > Advanced > Custom oozie-site. The list of properties can be found here: https://github.com/apache/ambari/blob/branch-2.1/ambari-server/src/main/resources/common-services/FALCON/0.5.0.2.1/configuration/oozie-site.xml. Once added, restart Oozie and execute this command on the Oozie Server host:

su oozie -c '/usr/hdp/current/oozie-server/bin/oozie-setup.sh prepare-war'

Start Oozie.

AMBARI-12400

BUG-41025

On a Blueprint-installed cluster, when "Enable Ranger for HBASE" is selected in the "Advanced ranger-hbase-plugin-properties" section, config validation may fail with "KeyError: hbase.coprocessor.region.classes."

In Services > HBase > Configs, add the following property to "Custom hbase-site". Restart HBase and then attempt to "Enable Ranger for HBASE" again.

hbase.coprocessor.region.classes=org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint

BUG-41019

Accumulo Master Start fails on secure cluster if pid and log directories for YARN and MapReduce2 are customized.

If you plan to use Accumulo and a Kerberos-enabled cluster, do not customize the pid and log directories for YARN or MapReduce2.

AMBARI-12412

BUG-41016

Storm has no metrics if service is installed via a Blueprint.

Browse to Services > Storm > Configs, add the following properties to storm-site, and restart the Storm service.

topology.metrics.consumer.register=[{'class': 'org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsSink', 'parallelism.hint': 1}]
metrics.reporter.register=org.apache.hadoop.metrics2.sink.storm.StormTimelineMetricsReporter
AMBARI-12347

BUG-40935

AMS does not work with Kerberos enabled when in distributed mode.

When AMS is configured for distributed mode, Kerberos is enabled, and a custom AMS ZooKeeper principal is used, you will see an error in /var/log/ambari-metrics-collector/ambari-metrics-collector.log:

22:36:44,699 ERROR [main] 
ConnectionManager$HConnectionImplementation:879 
- The node /ams-hbase-secure is not in ZooKeeper. 
It should have been written by the master. 
Check the value configured in 'zookeeper.znode.parent'. 
There could be a mismatch with the one configured in the master.

The issue is with the ZooKeeper principal generated for AMS. For example, if the principal is called amszk/<HOSTNAME>@<REALM>, ZooKeeper does not work with the custom principal name without proper ACL setup. Switch to using an existing keytab if one is present on that host, or create a new keytab as zookeeper/<HOSTNAME>@<REALM>.

BUG-40904

During rolling upgrade, if you browse to Services > Storm, the Quick Links are incorrect.

When performing a rolling upgrade, if you browse to the Storm service, the Quick Links show the content for the YARN service instead of the Storm service. Refresh your browser to display the correct Storm Quick Links.

AMBARI-12374

BUG-40852

Unable to start NameNode when configured for HA with HDP 2.0 cluster.

Please contact Hortonworks support and reference BUG-40852.
BUG-40775

If Kerberos is disabled, Accumulo tracer fails to start.

If your cluster includes Accumulo and you disable Kerberos, the Accumulo tracer fails to start with the following error:

resource_management.core.exceptions.Fail: Execution of 'cat /var/lib/ambari
-agent/data/tmp/pass | ACCUMULO_CONF_DIR=/usr/hdp/current/accumulo
-tracer/conf/server /usr/hdp/current/accumulo-client/bin/accumulo shell -u root -f
/var/lib/ambari-agent/data/tmp/cmds' returned 1. Password: *****
2015-07-04 05:08:44,823 [trace.DistributedTrace] INFO : SpanReceiver
org.apache.accumulo.tracer.ZooTraceClient was loaded successfully.
2015-07-04 05:08:44,897 [shell.Shell] ERROR:
org.apache.accumulo.core.client.AccumuloSecurityException: Error BAD_CREDENTIALS 
for user root - Username or Password is Invalid

To correct this situation, the following command must be run on one of the hosts that has an Accumulo process:

ACCUMULO_CONF_DIR=/etc/accumulo/conf/server accumulo init --reset-security 
--user root

It will prompt for the root user's password. The password you enter must match the Accumulo root password in Ambari's configs for Accumulo. Then Accumulo may be started normally.

BUG-40773

Kafka broker fails to start after disabling Kerberos security.

When Kerberos is enabled, the Kafka security configuration is set and all the Kafka ZooKeeper nodes have ACLs so that only Kafka brokers can modify entries in ZooKeeper. Before disabling Kerberos, you must set all the Kafka ZooKeeper entries to world readable/writable. Log in as user "kafka" on one of the Kafka nodes:

kinit -k -t /etc/security/keytabs/kafka.service.keytab kafka/_HOST

where _HOST should be replaced by the hostname of that node. Run the following command to open the ZooKeeper shell:

/usr/hdp/current/kafka-broker/bin/zookeeper-shell.sh hostname:2181

where hostname should be replaced by one of the ZooKeeper nodes. Then run:

setAcl /brokers world:anyone:crdwa 
setAcl /config world:anyone:crdwa 
setAcl /controller world:anyone:crdwa 
setAcl /admin world:anyone:crdwa

If the above commands are not run prior to disabling Kerberos, the only option is to set the "zookeeper.connect" property to a new ZooKeeper root. This can be done by appending "/newroot" to the "zookeeper.connect" string. For example: "host1:port1,host2:port2,host3:port3/newroot".
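A minimal sketch of the new-root fallback; the hostnames and ports are hypothetical placeholders for your actual ZooKeeper ensemble:

```shell
# Hypothetical zookeeper.connect value; replace with your real hosts/ports.
zk="host1:2181,host2:2181,host3:2181"
# Append a chroot path so Kafka starts from a fresh ZooKeeper root:
zk="${zk}/newroot"
echo "$zk"    # -> host1:2181,host2:2181,host3:2181/newroot
```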

AMBARI-12281

BUG-40769

After making a configuration change, Ambari is not prompting to restart Services.

Refresh your browser and Ambari will show a restart indicator next to the services that need to be restarted.
BUG-40694

The Slider view is not supported on a cluster with SSL (wire encryption) enabled.

Only use the Slider view on clusters without wire encryption enabled. If you must run Slider on a cluster with wire encryption enabled, contact Hortonworks support for further help.
AMBARI-12251

BUG-40651

When viewing the Widgets browser, scrollbars are shown around the widget descriptions.

This is normal and can occur if you are using a browser on Linux since scrollbars are not hidden by default on some Linux X Window systems.

AMBARI-12282

BUG-40615

When changing the JDK to Oracle JDK 1.8 during setup, the JDK is not automatically installed on all hosts.

If you attempt to change the Ambari JDK setup (after you have already configured the JDK previously) and choose Option 1 to automatically download and install the Oracle JDK 1.8, the JDK is not automatically installed on all hosts. You must manually download and configure Oracle JDK 1.8 on all hosts to match that of the Ambari Server.
BUG-40541

If there is a trailing slash in the Ranger External URL, the NameNode will fail to start.

Remove the trailing slash from the External URL and start the NameNode.
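A minimal sketch of the trailing-slash fix; the URL below is a hypothetical example of a Ranger External URL value:

```shell
# Hypothetical Ranger External URL value with the problematic trailing slash:
url="http://ranger-host.example.com:6080/"
# Strip a single trailing slash before saving the value in Ambari:
url="${url%/}"
echo "$url"    # -> http://ranger-host.example.com:6080
```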
AMBARI-12436

BUG-40481

Falcon Service Check may fail when performing Rolling Upgrade, with the following error:

2015-06-25 18:09:07,235 ERROR - [main:]
 ~ Failed to start ActiveMQ JMS Message Broker.
 Reason: java.io.IOException: Invalid location: 1:6763311, :
 java.lang.NegativeArraySizeException (BrokerService:528) 
 java.io.IOException: Invalid location: 1:6763311, :
 java.lang.NegativeArraySizeException
 at
 org.apache.kahadb.journal.DataFileAccessor.readRecord(DataFileAccessor.java:94)

This condition is rare.

When performing a Rolling Upgrade from HDP 2.2 to HDP 2.3 and Falcon Service Check fails with the above error, browse to the Falcon ActiveMQ data directory (specified in the Falcon properties file), remove the corrupted queues, and stop and start the Falcon Server:

cd <ACTIVEMQ_DATA_DIR>
rm -rf ./localhost
cd /usr/hdp/current/falcon-server 
su -l <FALCON_USER> 
./bin/falcon-stop
./bin/falcon-start

BUG-40323

After switching RegionServer ports, Ambari will report RegionServers are live and dead.

HBase maintains its lists of dead servers and live servers according to its own semantics. Normally, a new server coming up again on the same port causes the old server to be removed from the dead server list. Because of the port change, it will stay in that list for approximately 2 hours. If the server does not come back at all, it will still be removed from the list after 2 hours. Ambari will alert based on that list until the RegionServers are removed from it by HBase.

AMBARI-12283

BUG-40300

After adding or deleting ZooKeeper Servers to an existing cluster, Service Check fails.

After adding or deleting ZooKeeper Servers to an existing cluster, Service Check fails due to conflicting ZooKeeper ids. Restart the ZooKeeper service to clear the ids.

AMBARI-12179

BUG-39646

When Wire Encryption is enabled, the Tez View cannot connect to ATS with a Local Cluster configuration.

If you configure the Tez View using the Local Cluster configuration, the view reads the "yarn.timeline-service.webapp.address" property to determine the ATS URL. When Wire Encryption is enabled, the view should read "yarn.timeline-service.webapp.https.address" instead. Because of this, the Tez View will not load the Tez job information and you will see a "Connection Refused" error in the view. You cannot use the Local Cluster configuration option when Wire Encryption is enabled; you must manually enter the ATS Cluster configuration information when creating the view. Be sure to use the value from "yarn.timeline-service.webapp.https.address" for the YARN Timeline Server URL.
AMBARI-12284

BUG-39643

After enabling Kerberos, Spark History Service fails to establish long-lived secure connection to YARN.

From Ambari Web, browse to Services > Spark > Configs and set "spark.history.kerberos.enabled" to "true" in spark-defaults.conf configuration. Restart the Spark service.

BUG-38643

Storm ZooKeeper Servers property is not updated after deleting ZooKeeper instances.

After deleting a ZooKeeper Server instance, Ambari does not update the storm.zookeeper.servers property for Storm. From Ambari Web, browse to Services > Storm > Configs and filter for the "storm.zookeeper.servers" property. Update the property in storm-site to remove the recently deleted ZooKeeper Server from the list.

BUG-38640

When running Ambari Server as non-root, kadmin cannot open its log file.

When running Ambari Server as non-root and enabling Kerberos, if kadmin fails to authenticate, you will see the following error in ambari-server.log because Ambari cannot access kadmind.log:

STDERR: Couldn't open log file /var/log/kadmind.log: Permission denied 
kadmin: GSS-API (or Kerberos) error while initializing kadmin interface

To avoid this error, be sure the kadmind.log file has 644 permissions.
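A minimal sketch of the permission check and fix. The real target is /var/log/kadmind.log on the KDC host; a scratch file stands in for it here so the commands can be tried anywhere:

```shell
# The actual fix on the KDC host would be:
#   chmod 644 /var/log/kadmind.log
# Demonstrated on a scratch file:
log=$(mktemp)
chmod 600 "$log"       # a typical restrictive default that triggers the error
chmod 644 "$log"       # the fix: readable by the non-root Ambari user
stat -c '%a' "$log"    # prints 644
rm -f "$log"
```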

BUG-38000

User Views (Files, Hive and Pig) fail to load when accessing a Kerberos-enabled cluster.

See the Ambari Views Guide for more information on configuring your cluster and the User Views to work with a Kerberos-enabled cluster, including setting-up Ambari Server for Kerberos and having Ambari Server as a Hadoop proxy user.
HADOOP-11764

BUG-33763

YARN ATS server will not start if /tmp is set to noexec.

You must specify a different directory for LevelDB (which is embedded in the ATS component) to use when generating temporal info. This can be done by adding the following information to the hadoop-env.sh configuration:

export JAVA_LIBRARY_PATH="${JAVA_LIBRARY_PATH}:/tktest"
export _JAVA_OPTIONS="${_JAVA_OPTIONS} -Djava.io.tmpdir=/tktest"

BUG-33557

With a Kerberos-enabled cluster that includes Storm, in Ambari Web > Services > Storm, the Summary values for Slots, Tasks, Executors and Topologies show as "n/a". Ambari Server log also includes the following ERROR:

24 Mar 2015 13:32:41,288 ERROR [pool-2-thread-362] AppCookieManager:122 -
SPNego authentication failed, cannot get hadoop.auth cookie for URL:
http://c6402.ambari.apache.org:8744/api/v1/topology/summary?field=topologies

When Kerberos is enabled, Storm API requires SPNEGO authentication. Refer to the Ambari Security Guide to Set Up Ambari for Kerberos to enable Ambari to authenticate against the Storm API via SPNEGO.

BUG-33516

Ranger service cannot be installed in a cluster via the Blueprints API.

You must first create your cluster (via Install Wizard or via Blueprints) and then add Ranger service to the cluster.

BUG-32381

When using an Agent non-root configuration, if you attempt to register hosts automatically using SSH, the Agent registration will fail.

The option to automatically register hosts with SSH is not supported when using an Agent non-root configuration. You must manually register the Agents.

BUG-32284

Adding client-only services does not automatically install component dependencies.

When adding client-only services to a cluster (using Add Service), Ambari does not automatically install dependent client components with the newly added clients. On hosts where client components need to be installed, browse to Hosts and then to the Host Details page. Click + Add and select the client components to install on that host.

BUG-28245

Attempting to create a Slider app using the same name throws an uncaught JS error.

After creating (and deleting) a Slider app, attempting to create a Slider app again with the same name results in an uncaught error. The application does not show up in the Slider app list. Refresh your browser and the application will be shown in the list.
AMBARI-12005

BUG-24902

Setting cluster names hangs Ambari.

If you attempt to rename a cluster to a string longer than 100 characters, Ambari Server will hang. Restart Ambari Server to clear the hang.