6.7. Known Issues for Ambari

  • Ambari does not support installing or running Stacks on Ubuntu.

  • The component version information displayed by Ambari is based on the Ambari Stack definition. If you have applied patches to the Stack and to your software repository, the component version displayed might differ from the version actually installed. A version mismatch has no functional impact on Ambari. If you have questions about component versions, refer to the rpm version installed on the actual host.

  • Upgrading Ambari Server from v1.3.x/1.4.1 to 1.4.3 using a local repository may fail

    Problem: Upgrading the Ambari Server using a local repository may fail with the following error:

    "Checking database integrity... Database is consistent. Adjusting ambari-server permissions and ownership... ERROR: Exiting with exit code 1. Reason: /var/lib/ambari-server/resources/upgrade/dml/Ambari-DML-Postgres-FIX_LOCAL_REPO.sql: No such file or directory."

    Workaround: On the Ambari Server host, change to the following directory:

    cd /var/lib/ambari-server/resources/upgrade/dml

    Then download the fix scripts using the following commands:

    wget https://raw2.github.com/apache/ambari/branch-1.4.3/ambari-server/src/main/resources/upgrade/dml/Ambari-DML-Postgres-FIX_LOCAL_REPO.sql
    wget https://raw2.github.com/apache/ambari/branch-1.4.3/ambari-server/src/main/resources/upgrade/dml/Ambari-DML-Oracle-FIX_LOCAL_REPO.sql
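
    With the fix scripts in place, retry the upgrade. A minimal sketch, assuming a default Ambari Server installation:

    ambari-server upgrade
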
  • Missing LzoCodec settings in core-site.xml file

    Problem: After cluster install, the io.compression.codecs property in $HADOOP_CONF_DIR/core-site.xml is incorrect. It displays as:

    <property>
        <name>io.compression.codecs</name>
        <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec</value>
    </property>

    Workaround: Use Ambari Web to modify the io.compression.codecs property. Select Services > HDFS > Configs > Advanced and modify the value to:

    <property>
        <name>io.compression.codecs</name>
        <value>org.apache.hadoop.io.compress.GzipCodec,com.hadoop.compression.lzo.LzoCodec,org.apache.hadoop.io.compress.DefaultCodec</value>
    </property>

    Then add the io.compression.codec.lzo.class property to the Custom core-site.xml section:

    <property>
        <name>io.compression.codec.lzo.class</name>
        <value>com.hadoop.compression.lzo.LzoCodec</value>
    </property>
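
    After saving the change and restarting HDFS, you can verify the effective value from any cluster host. A quick check, assuming the HDP client binaries are on the PATH:

    hdfs getconf -confKey io.compression.codecs
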
  • Exception shown when deleting a host associated with a Host Config Group.

    Problem: If a host is associated with a Host Config Group and you delete that host from the cluster, you see the following exception dialog:

     Transaction rolled back because transaction was set to RollbackOnly. 500 status code.

    The host is deleted from the cluster as expected, but the Config Group still shows the deleted host association.

    Workaround: Delete the host from the Config Group.
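
    Alternatively, if the Config Group is no longer needed, it can be deleted through the Ambari REST API. A hedged sketch; the credentials, server address, cluster name MyCluster, and group id 2 are all placeholders:

    curl -u admin:admin -H "X-Requested-By: ambari" -X DELETE "http://ambari.server:8080/api/v1/clusters/MyCluster/config_groups/2"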

  • Incorrect hive.security.authorization.manager property after upgrade to Ambari 1.4.3.

    Problem: After upgrading to Ambari 1.4.3, the hive.security.authorization.manager property in $HIVE_CONFIG_DIR/hive-site.xml is incorrect. It is set to:

    <property>
       <name>hive.security.authorization.manager</name>
       <value>org.apache.hcatalog.security.HdfsAuthorizationProvider</value>
    </property>

    Workaround: Use Ambari Web to modify the hive.security.authorization.manager property to the correct value. Select Services > Hive > Configs > Advanced and set the property as follows:

    <property>
       <name>hive.security.authorization.manager</name>
       <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value>
    </property>
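
    To confirm the corrected value is in effect, you can echo the property from the Hive CLI (assumes the Hive client is installed and configured on the host):

    hive -e "set hive.security.authorization.manager;"
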
  • After upgrading a single-node cluster to HDP 2.0.6, YARN job summary entries may be missing.

    Problem: After upgrading a cluster to HDP 2.0.6 using Ambari, you may notice that YARN job summary entries are missing. This typically happens when the YARN ResourceManager host is shared with MapReduce2 components.

    Workaround: To fix this issue, modify the log4j.properties file at /etc/hadoop/conf on the ResourceManager host by adding the following lines:

    Note: Modify the value of the log4j.appender.RMSUMMARY.File property to contain the actual values of yarn_log_dir_prefix and yarn_user. You can get these values from the latest global config type; use the configs.sh tool to read it, as shown after the property listing below.

    #
    # Job Summary Appender 
    #
    # Use following logger to send summary to separate file defined by 
    # hadoop.mapreduce.jobsummary.log.file rolled daily:
    # hadoop.mapreduce.jobsummary.logger=INFO,JSA
    # 
    hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
    hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
    log4j.appender.JSA=org.apache.log4j.DailyRollingFileAppender
    # Set the ResourceManager summary log filename
    yarn.server.resourcemanager.appsummary.log.file=hadoop-mapreduce.jobsummary.log
    # Set the ResourceManager summary log level and appender
    yarn.server.resourcemanager.appsummary.logger=${hadoop.root.logger}
    #yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY
    
    # To enable AppSummaryLogging for the RM,
    # set yarn.server.resourcemanager.appsummary.logger to
    # <LEVEL>,RMSUMMARY in hadoop-env.sh
    
    # Appender for ResourceManager Application Summary Log
    # Requires the following properties to be set
    #    - hadoop.log.dir (Hadoop Log directory)
    #    - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename)
    #    - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender)
    log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender
    log4j.appender.RMSUMMARY.File=[yarn_log_dir_prefix]/[yarn_user]/${yarn.server.resourcemanager.appsummary.log.file}
    log4j.appender.RMSUMMARY.MaxFileSize=256MB
    log4j.appender.RMSUMMARY.MaxBackupIndex=20
    log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout
    log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
    log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
    log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
    log4j.appender.JSA.DatePattern=.yyyy-MM-dd
    log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}
    log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false 
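
    To read the yarn_log_dir_prefix and yarn_user values referenced in the note above, use the configs.sh script shipped with the Ambari Server. A sketch, assuming default admin credentials, the Ambari Server host name, and a cluster named MyCluster (all placeholders):

    cd /var/lib/ambari-server/resources/scripts
    ./configs.sh -u admin -p admin get ambari.server.host MyCluster global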

  • Hive service check fails after upgrading from Ambari 1.4.1 to Ambari 1.4.2.

    Problem: When Ambari is upgraded to 1.4.2 and security is enabled, the Hive service check can fail due to a conflicting combination of authorization properties.

    Workaround: Disable authorization. Using the Ambari UI, set hive.security.authorization.enabled to false. Alternatively, verify that the correct combination of authorization properties is used. For example:

    hive.security.authorization.manager : org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
    hive.security.metastore.authorization.manager : org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider
    hive.security.authenticator.manager : org.apache.hadoop.hive.ql.security.ProxyUserAuthenticator
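
    The hive.security.authorization.enabled change can also be made from the command line with the same configs.sh tool (a hedged sketch; server host, credentials, and cluster name MyCluster are placeholders). Restart the Hive service afterward:

    cd /var/lib/ambari-server/resources/scripts
    ./configs.sh -u admin -p admin set ambari.server.host MyCluster hive-site hive.security.authorization.enabled false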

  • Enable NameNode HA wizard freezes after you add a host to the cluster.

    Problem: After upgrading from Ambari 1.4.1 to Ambari 1.4.3 and from the HDP 1.3.3 Stack to the HDP 2.0.6 Stack, the Enable NameNode HA wizard freezes when you add a new host to the cluster. The wizard outputs the following JavaScript errors:

    Uncaught TypeError: Cannot set property 'addNNHost' of undefined db.js:428
    App.db.setRollBackHighAvailabilityWizardAddNNHost db.js:428
    module.exports.Em.Route.extend.step2.Em.Route.extend.next high_availability_routes.js:145
    Ember.StateManager.Ember.State.extend.sendRecursively ember-latest.js:15579
    Ember.StateManager.Ember.State.extend.send ember-latest.js:15564
    App.WizardStep5Controller.Em.Controller.extend.submit step5_controller.js:646
    ActionHelper.registeredActions.(anonymous function).handler ember-latest.js:19458
    (anonymous function) ember-latest.js:11250
    f.event.dispatch jquery-1.7.2.min.js:3
    h.handle.i jquery-1.7.2.min.js:3

    Workaround: Close the other open browser windows and log in again from the current window.

  • Unable to start gmond process after upgrade to HDP 2.0.6 Stack from HDP 1.3.2 Stack.

    Problem: The gmond process fails to start on a host during an upgrade.

    Workaround: Use the following steps to work around the issue (a consolidated shell sketch follows the steps):

    1. Log onto the host where gmond fails to start.

    2. For the gmond process that fails, go to the corresponding directory. For example, for HDPSlaves, go to:

      /var/run/ganglia/hdp/HDPSlaves/
    3. Remove the PID file in the directory.

    4. Stop gmond.

      service hdp-gmond stop
    5. Start gmond.

      service hdp-gmond start
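
    The steps above condense to the following shell sketch. HDPSlaves is an example; substitute the directory of the gmond instance that fails, and note that the PID file name pattern is an assumption:

    rm -f /var/run/ganglia/hdp/HDPSlaves/*.pid
    service hdp-gmond stop
    service hdp-gmond start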

  • Add a step to modify fs.defaultFS in default companion file doc

  • After upgrading to Ambari 1.4.2, fs.checkpoint.size must be set in bytes, not GB.

    Problem: Ambari 1.4.1 and earlier assumed this setting was in GB. The setting is actually in bytes.

    Workaround: Modify the fs.checkpoint.size property using Ambari Web. Select Services > HDFS > Configs > General and enter an appropriate integer value in bytes to set the HDFS maximum edit log size for checkpointing. For example, 500000000.

  • Log4j property file is overwritten during HDFS/ZooKeeper/Oozie services Start.

    Problem: The Log4j property file is overwritten during HDFS/ZooKeeper/Oozie service start, when the client state becomes installed_and_configured after Service Start:

    {'hdp-hadoop::client': stage => 2, service_state => installed_and_configured}
