5.6. Known Issues for Ambari

  • BUG-10115: The HostCleanup script appears to hang when run distributed.

    Problem: During the host check step of the Cluster Install wizard or the Add Hosts wizard, if warnings or errors are detected in your environment (such as installed packages or running processes), you are given instructions for executing the HostCleanup script. If you execute the HostCleanup script distributed across all the hosts in your cluster without user interaction (using SSH, for example), the execution appears to hang, because the script prompts for responses during execution.

    Workaround: Do not execute the script without user interaction. Execute the script on each host while attending the execution, so that you can follow and respond to any prompts:

      python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py -k "users"

    To delete all resources, omit the -k option. Use -s for a silent cleanup.
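
    For example, the following is a minimal sketch of an attended, host-by-host run over SSH. The host list and SSH user are placeholders; ssh -t allocates a terminal so that the script's prompts reach you:

      # Host list and SSH user are assumptions; substitute your own.
      for h in host1.example.com host2.example.com; do
        echo "=== Cleaning $h ==="
        # -t allocates a pseudo-terminal so you can answer the prompts
        ssh -t root@"$h" \
          'python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py -k "users"'
      done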

  • BUG-9969: Default Authorization Provider needs to be set in Hive configs.

    Problem: If you set up a secure cluster in Ambari and do not manually set the Default Authorization Provider in your Hive configurations, you will see errors.

    Workaround: Select Ambari Web > Services > Hive > Configs and set the following properties in hive-site.xml:

    hive.security.authorization.enabled=true
    hive.security.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
    hive.security.metastore.authorization.manager=org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
    hive.security.authenticator.manager=org.apache.hadoop.hive.ql.security.ProxyUserAuthenticator
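
    These settings correspond to property elements in hive-site.xml. As a sketch, the first two entries would look as follows (the remaining two follow the same pattern):

      <property>
        <name>hive.security.authorization.enabled</name>
        <value>true</value>
      </property>
      <property>
        <name>hive.security.authorization.manager</name>
        <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value>
      </property>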
    
  • BUG-9797: Enable Security fails when the Ambari setup is re-run to set JAVA_HOME to Oracle JDK7.

    Problem:

    1. Run ambari-setup -s

    2. Install the Oracle JDK at /usr/jdk1.7.0_40.

    3. Run setup: ambari-setup -j /usr/jdk1.7.0_40

    4. Install jdk1.7.0_40 on all hosts at the JAVA_HOME path specified in the previous step.

    5. Install the JCE 7 policy files on all hosts and unzip them into /usr/jdk1.7.0_40/jre/lib/security.

    6. Go through the Installation wizard and then the Enable Security wizard.

    7. Enable Security fails because Ambari overwrites the manually downloaded and unzipped JCE 7 policy files with JCE 6 policy files.

    Workaround: If you change your JDK, remove the stale JCE policy files and re-install the JCE policy files that match the new JDK version.
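
    A minimal sketch of that re-installation, assuming the JDK path from the steps above and Oracle's UnlimitedJCEPolicyJDK7.zip archive downloaded to /tmp (the archive name and location are assumptions):

      # Run on every host; paths and archive name are assumptions.
      SECURITY_DIR=/usr/jdk1.7.0_40/jre/lib/security
      # Remove the policy jars that Ambari overwrote with the JCE 6 versions
      rm -f "$SECURITY_DIR"/local_policy.jar "$SECURITY_DIR"/US_export_policy.jar
      # Re-install the JCE 7 jars; -j drops the zip's internal directory
      unzip -o -j /tmp/UnlimitedJCEPolicyJDK7.zip '*local_policy.jar' '*US_export_policy.jar' -d "$SECURITY_DIR"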

  • BUG-9606: Firewall issues display during Host Checks at Install or at Add New Host on CentOS 5 and SLES 11.

    Problem:

    • Start "Add new host" wizard trying to add host with iptables running on Centos05 or SLES 11 host.

    • After host confirmed host checks display firewall issue.

    • Stop iptables on host manually.

    • Rerun checks. Host checks still report about firewall issue.

    • Refresh the page. Confirm process will repeat and finish without warnings this time, but with the message:

      All host checks passed on 1 registered hosts. Click here to see the check results.
    • Select Click here to see the check results. Host checks still report the firewall issue.

    Workaround: Confirm that iptables is disabled or that all necessary ports are open on all cluster hosts. Disable iptables manually or configure your network for the necessary ports (see Configuring Ports for Hadoop 1.x and Configuring Ports for Hadoop 2.x).
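
    For example, to disable the firewall manually (a sketch; the SLES 11 commands assume the default SuSEfirewall2 setup):

      # CentOS 5: stop iptables now and keep it off across reboots
      service iptables stop
      chkconfig iptables off

      # SLES 11: stop SuSEfirewall2 and disable it at boot
      rcSuSEfirewall2 stop
      chkconfig SuSEfirewall2_setup off
      chkconfig SuSEfirewall2_init off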

  • BUG-9597: Log4j property file is overwritten during HDFS/ZooKeeper/Oozie services Start.

    Problem: The Log4j property file is overwritten during HDFS/ZooKeeper/Oozie service start. This is caused by the service client's behavior when its state becomes installed_and_configured after Service Start:

    {'hdp-hadoop::client': stage => 2, service_state => installed_and_configured}
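
    A possible stopgap (a sketch, not a documented workaround) is to copy custom Log4j settings aside before a service start and restore them afterward. This assumes the HDFS configuration directory is /etc/hadoop/conf:

      # Paths are assumptions; repeat for the ZooKeeper and Oozie config directories.
      CONF=/etc/hadoop/conf
      cp "$CONF/log4j.properties" "$CONF/log4j.properties.custom"   # before Start
      # ... start the service from Ambari Web ...
      cp "$CONF/log4j.properties.custom" "$CONF/log4j.properties"   # after Start
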
  • BUG-8898: Ambari no longer stops iptables on Ambari Server or Ambari Agent start.

    Problem: Prior to HDP 2.0, the Ambari Server and Agents automatically stopped iptables if it was running. As of HDP 2.0, Ambari does not stop iptables.

    Workaround: Disable iptables manually or configure your network for the necessary ports (see Configuring Ports for Hadoop 1.x and Configuring Ports for Hadoop 2.x).
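
    To confirm the firewall state across the cluster before starting Ambari, a quick sketch (host list and SSH user are placeholders):

      # Host list and SSH user are assumptions; substitute your own.
      for h in host1.example.com host2.example.com; do
        echo "=== $h ==="
        ssh root@"$h" 'service iptables status'
      done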

