5.7. Known Issues for Ambari

  • Ambari does not support running or installing stacks on Ubuntu.

  • The component version information displayed by Ambari is based on the Ambari Stack definition. If you have applied patches to the Stack and to your software repository, the component version displayed might differ from the version actually installed. A mismatch between patch versions has no functional impact on Ambari. If you have any questions about component versions, refer to the rpm version installed on the actual host.
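
    For example, a minimal check on a host might look like the following (package names vary by stack version and operating system; adjust the grep pattern as needed):

    rpm -qa | grep -i hadoop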

  • Missing LzoCodec settings in core-site.xml file

    Problem: After cluster install, the io.compression.codecs property in $HADOOP_CONF_DIR/core-site.xml is incorrect. It displays as:

    <property>
        <name>io.compression.codecs</name>
        <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec</value>
    </property>

    Workaround: Use Ambari Web to modify the io.compression.codecs property. Select Services > HDFS > Configs > Advanced and modify the property to:

    <property>
        <name>io.compression.codecs</name>
        <value>org.apache.hadoop.io.compress.GzipCodec,com.hadoop.compression.lzo.LzoCodec,org.apache.hadoop.io.compress.DefaultCodec</value>
    </property>

    And add the io.compression.codec.lzo.class property to the Custom core-site.xml section:

    <property>
        <name>io.compression.codec.lzo.class</name>
        <value>com.hadoop.compression.lzo.LzoCodec</value>
    </property>
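
    After saving the change and restarting HDFS, you can confirm that both properties reached the rendered file. A minimal check, assuming the default configuration directory /etc/hadoop/conf:

    grep -A 1 'io.compression.codec' /etc/hadoop/conf/core-site.xml
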
  • Incorrect hive.security.authorization.manager property after upgrade to Ambari 1.4.4.

    Problem: After upgrading to Ambari 1.4.4, the hive.security.authorization.manager property in $HIVE_CONFIG_DIR/hive-site.xml is incorrect. It is set to:

    <property>
       <name>hive.security.authorization.manager</name>
       <value>org.apache.hcatalog.security.HdfsAuthorizationProvider</value>
    </property>

    Workaround: Use Ambari Web to modify the hive.security.authorization.manager property to the correct value. Select Services > Hive > Configs > Advanced and make the following change:

    <property>
       <name>hive.security.authorization.manager</name>
       <value>org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider</value>
    </property>
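
    Alternatively, the same change can be made from the command line with the configs.sh script that ships with the Ambari server. A sketch, assuming an Ambari host of ambari.server, a cluster named MyCluster, and admin/admin credentials (replace all of these with your own values):

    /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set ambari.server MyCluster hive-site "hive.security.authorization.manager" "org.apache.hadoop.hive.ql.security.authorization.StorageBasedAuthorizationProvider"

    Restart the Hive service afterward so the new value is written to hive-site.xml.
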
  • After upgrading a single-node cluster to HDP-2.0.6, YARN job summary entries may be missing.

    Problem: After upgrading a cluster to HDP 2 using Ambari, you may notice that YARN job summary entries are missing. This typically happens when the YARN ResourceManager host is shared with MapReduce2 components.

    Workaround: To fix this issue, modify the log4j.properties file at /etc/hadoop/conf on the ResourceManager host by adding the following lines:

    Note: Modify the value of the log4j.appender.RMSUMMARY.File property to contain the actual values of yarn_log_dir_prefix and yarn_user. You can get these values from the latest global config type; use the configs.sh tool to read it, as shown in the sketch after the listing below.

    #
    # Job Summary Appender 
    #
    # Use following logger to send summary to separate file defined by 
    # hadoop.mapreduce.jobsummary.log.file rolled daily:
    # hadoop.mapreduce.jobsummary.logger=INFO,JSA
    # 
    hadoop.mapreduce.jobsummary.logger=${hadoop.root.logger}
    hadoop.mapreduce.jobsummary.log.file=hadoop-mapreduce.jobsummary.log
    log4j.appender.JSA=org.apache.log4j.DailyRollingFileAppender
    # Set the ResourceManager summary log filename
    yarn.server.resourcemanager.appsummary.log.file=hadoop-mapreduce.jobsummary.log
    # Set the ResourceManager summary log level and appender
    yarn.server.resourcemanager.appsummary.logger=${hadoop.root.logger}
    #yarn.server.resourcemanager.appsummary.logger=INFO,RMSUMMARY
    
    # To enable AppSummaryLogging for the RM,
    # set yarn.server.resourcemanager.appsummary.logger to
    # <LEVEL>,RMSUMMARY in hadoop-env.sh
    
    # Appender for ResourceManager Application Summary Log
    # Requires the following properties to be set
    #    - hadoop.log.dir (Hadoop Log directory)
    #    - yarn.server.resourcemanager.appsummary.log.file (resource manager app summary log filename)
    #    - yarn.server.resourcemanager.appsummary.logger (resource manager app summary log level and appender)
    log4j.appender.RMSUMMARY=org.apache.log4j.RollingFileAppender
    log4j.appender.RMSUMMARY.File=[yarn_log_dir_prefix]/[yarn_user]/${yarn.server.resourcemanager.appsummary.log.file}
    log4j.appender.RMSUMMARY.MaxFileSize=256MB
    log4j.appender.RMSUMMARY.MaxBackupIndex=20
    log4j.appender.RMSUMMARY.layout=org.apache.log4j.PatternLayout
    log4j.appender.RMSUMMARY.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
    log4j.appender.JSA.layout=org.apache.log4j.PatternLayout
    log4j.appender.JSA.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}: %m%n
    log4j.appender.JSA.DatePattern=.yyyy-MM-dd
    log4j.logger.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=${yarn.server.resourcemanager.appsummary.logger}
    log4j.additivity.org.apache.hadoop.yarn.server.resourcemanager.RMAppManager$ApplicationSummary=false 
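
    To look up yarn_log_dir_prefix and yarn_user, you can read the global config type with configs.sh. A sketch, using the same hypothetical ambari.server host, MyCluster cluster name, and admin/admin credentials as in the earlier sketch (replace these with your own values):

    /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin get ambari.server MyCluster global | grep -E 'yarn_log_dir_prefix|yarn_user'

    For example, if yarn_log_dir_prefix is /var/log/hadoop-yarn and yarn_user is yarn, the File property becomes /var/log/hadoop-yarn/yarn/${yarn.server.resourcemanager.appsummary.log.file}.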

  • Unable to start gmond process after upgrade to HDP 2.0.6 Stack from HDP 1.3.2 Stack.

    Problem: The gmond process fails to start on a host during an upgrade.

    Workaround: Use the following steps to work around the issue:

    1. Log onto the host where gmond fails to start.

    2. For the gmond process that fails, go to the corresponding directory. For example, for HDPSlaves, go to:

      /var/run/ganglia/hdp/HDPSlaves/
    3. Remove the PID file in the directory.

    4. Stop gmond.

      service hdp-gmond stop
    5. Start gmond.

      service hdp-gmond start
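
    The steps above can be combined into a short shell sequence. A sketch, assuming the failing process is HDPSlaves and the PID file is named gmond.pid (check the directory for the actual file name):

    rm -f /var/run/ganglia/hdp/HDPSlaves/gmond.pid
    service hdp-gmond stop
    service hdp-gmond start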

  • After upgrading to Ambari 1.4.2, fs.checkpoint.size must be set in bytes, not GB.

    Problem: Ambari 1.4.1 and earlier assumed this setting was in GB; the setting is actually in bytes.

    Workaround: Modify the fs.checkpoint.size property using Ambari Web. Select Services > HDFS > Configs > General and enter an appropriate integer value in bytes to set the HDFS maximum edit log size for checkpointing. For example, 500000000 (approximately 0.5 GB).
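
    To verify the value currently in effect, you can read it back with configs.sh. A sketch, assuming the property lives in the core-site config type (as in HDP 1.x stacks) and the same hypothetical ambari.server, MyCluster, and admin/admin values used in the sketches above:

    /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin get ambari.server MyCluster core-site | grep fs.checkpoint.size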

  • Log4j property file is overwritten when HDFS, ZooKeeper, or Oozie services start.

    Problem: The Log4j property file is overwritten when the HDFS, ZooKeeper, or Oozie services start. This occurs when the client state becomes installed_and_configured after a Service Start:

    {'hdp-hadoop::client': stage => 2, service_state => installed_and_configured}
