Support Matrices

Chapter 2. Ambari 2.5.1: Support

The matrices in this chapter outline what is supported for Ambari 2.5.1.

Determine Stack Compatibility

Use this table to determine whether your Ambari and HDP stack versions are compatible.

Ambari*    HDP 2.6   HDP 2.5   HDP 2.4 (deprecated)   HDP 2.3 (deprecated)   HDP 2.2 (deprecated)
2.5.x      ✓         ✓         ✓                      ✓                      -
2.4.x      -         ✓         ✓                      ✓                      ✓
2.2.2.18   -         -         ✓                      ✓                      ✓
2.2.1      -         -         ✓                      ✓                      ✓
2.2.0      -         -         -                      ✓ **                   ✓

* Ambari does not install Hue or HDP Search (Solr).

** If you plan to install and manage HDP 2.3.4 (or later), you must use Ambari 2.2.0 (or later). Do not use Ambari 2.1.x with HDP 2.3.4 (or later).
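
If you are not sure which versions you are currently running, the commands below are one way to check on an RPM-based host that already has Ambari and HDP installed (a quick sketch; hdp-select is installed with HDP, and output formats vary by release):

rpm -q ambari-server      # installed Ambari Server package and version (run on the Ambari Server host)
hdp-select versions       # HDP stack versions installed on this host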

Meet Minimum System Requirements

Browser Requirements

The Ambari Install Wizard runs as a browser-based Web application. You must have a machine capable of running a graphical browser to use this tool. The minimum required browser versions are:

Table 2.1. Ambari 2.5.1 Browser Requirements

Operating System   Browser
Linux              Chrome 56.0.2924.87, 57.0.2987
                   Firefox 51, 52
Mac OS X           Chrome 56.0.2924.87, 57.0.2987
                   Firefox 51, 52
                   Safari 10.0.1, 10.0.3
Windows            Chrome 56.0.2924.87, 57.0.2987
                   Edge 38
                   Firefox 51.0.1, 52.0
                   Internet Explorer 10, 11

On any platform, we recommend updating your browser to the latest, stable version.

Software Requirements

On each of your hosts:

  • yum and rpm (RHEL/CentOS/Oracle Linux)

  • zypper and php_curl (SLES)

  • apt (Debian/Ubuntu)

  • scp, curl, unzip, tar, and wget

  • OpenSSL (v1.0.1, build 16 or later)

  • Python

    For SLES 11: Python 2.6.x

    For SLES 12: Python 2.7.x

    For CentOS 7, Ubuntu 14, Ubuntu 16, and Debian 7: Python 2.7.x
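
A quick way to confirm these prerequisites on a host is sketched below (this assumes a RHEL/CentOS-style host; substitute zypper or apt checks on SLES and Debian/Ubuntu):

for cmd in yum rpm scp curl unzip tar wget; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done
openssl version    # expect 1.0.1 or later
python -V          # expect Python 2.6.x or 2.7.x, depending on the OS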

JDK Requirements

The following Java Development Kits (JDKs) are supported:

Table 2.2. HDP 2.6.1 JDK Support

JDK           Version
Open Source   JDK 8
              JDK 7, deprecated
Oracle        JDK 8, 64-bit (minimum JDK 1.8.0_77), default
              JDK 7, 64-bit (minimum JDK 1.7.0_67), deprecated

OpenJDK does not run on SLES 11.

Note

JDK support depends on your choice of HDP Stack.
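
To confirm that a host's JDK meets the minimum version, check the version it reports (this assumes java is already on the PATH of the user running the check):

java -version      # for JDK 8, expect a build of 1.8.0_77 or later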

More Information

Changing the JDK Version

Database Requirements

Ambari requires a relational database to store information about the cluster configuration and topology. If you install the HDP Stack with Hive or Oozie, those components also require a relational database.

The following table outlines these database requirements:

Component   Databases                          Description
Ambari      PostgreSQL 9.1.13+, 9.3, 9.4***    By default, Ambari installs an instance of PostgreSQL on the Ambari Server host. Optionally, you can use an existing instance of PostgreSQL, MySQL, or Oracle.
            MariaDB 10*
            MySQL 5.6****
            Oracle 11g r2
            Oracle 12c**

* Use of an existing MariaDB 10 database is only supported with HDP 2.5 when running on RHEL/CentOS/Oracle Linux 7 or SLES 12.

** Use of an existing Oracle 12c database is only supported with HDP 2.3 or later.

*** Use of an existing PostgreSQL 9.4 is only supported with HDP 2.5 or later.

**** Use of an existing MySQL 5.6 database is only supported with the default, InnoDB engine.
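
If you plan to reuse an existing MySQL 5.6 instance, one way to confirm that InnoDB is the default storage engine is shown below (a sketch that assumes the mysql command-line client and credentials for an administrative user):

mysql -u root -p -e "SHOW VARIABLES LIKE 'default_storage_engine';"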

Important

For the Ambari database, if you use an existing Oracle database, make sure the Oracle listener runs on a port other than 8080 to avoid conflict with the default Ambari port.
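
To see whether anything, such as an Oracle listener, is already bound to the default Ambari port on the Ambari Server host, you can list listening TCP sockets (ss is available on most current Linux distributions; netstat -tlnp works similarly):

ss -ltn | grep -w 8080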

Important

The Microsoft SQL Server and SQL Anywhere database options are not supported.

More Information

Using Non-Default Databases - Ambari

Using Non-Default Databases - Hive

Using Non-Default Databases - Oozie

Installing Ranger

Changing the Default Ambari Server Port

Memory Requirements

The Ambari host should have at least 1 GB RAM, with 500 MB free.

To check available memory on any host, run:

free -m

If you plan to install the Ambari Metrics Service (AMS) in your cluster, review Using Ambari Metrics in Hortonworks Data Platform Apache Ambari Operations for guidelines on resource requirements. In general, the host on which you plan to run the Ambari Metrics Collector should have the following memory and disk space available, based on cluster size:

Number of hosts   Memory Available   Disk Space
1                 1024 MB            10 GB
10                1024 MB            20 GB
50                2048 MB            50 GB
100               4096 MB            100 GB
300               4096 MB            100 GB
500               8096 MB            200 GB
1000              12288 MB           200 GB
2000              16384 MB           500 GB
Note

Use these values as guidelines. Be sure to test them for your specific environment.
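
To check the disk space currently available on the host you plan to use for the Ambari Metrics Collector, you can run a standard disk usage report (which mount point backs the Collector's data directory depends on your configuration):

df -h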

More Information

Using Ambari Metrics

Package Size and Inode Count Requirements

                              Size     Inodes
Ambari Server                 100 MB   5,000
Ambari Agent                  8 MB     1,000
Ambari Metrics Collector      225 MB   4,000
Ambari Metrics Monitor        1 MB     100
Ambari Metrics Hadoop Sink    8 MB     100
After Ambari Server Setup     N/A      4,000
After Ambari Server Start     N/A      500
After Ambari Agent Start      N/A      200

* Size and inode values are approximate.
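
To check how many inodes are free on the filesystems that will hold these packages and their data (typically those backing /usr and /var, although layouts vary), run:

df -i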

Check the Maximum Open File Descriptors

The recommended maximum number of open file descriptors is 10000 or more. To check the current soft and hard limits for the maximum number of open file descriptors, run the following commands on each host:

ulimit -Sn

ulimit -Hn

If either value is less than 10000, run the following command to set it to a suitable value:

ulimit -n 10000
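
Note that ulimit -n changes the limit only for the current shell session. To make the limit persist across logins, a common approach on most Linux distributions (not specific to Ambari; the exact mechanism can vary) is to add nofile entries to /etc/security/limits.conf, for example:

* soft nofile 10000
* hard nofile 10000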