Hortonworks Data Platform

Ambari Reference Topics

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

2014-07-15

Abstract

The Hortonworks Data Platform, powered by Apache Hadoop, is a massively scalable and 100% open source platform for storing, processing, and analyzing large volumes of data. It is designed to handle data from many sources and formats quickly, easily, and cost-effectively. The Hortonworks Data Platform consists of the essential set of Apache Hadoop projects, including MapReduce, the Hadoop Distributed File System (HDFS), HCatalog, Pig, Hive, HBase, ZooKeeper, and Ambari. Hortonworks is the major contributor of code and patches to many of these projects. These projects have been integrated and tested as part of the Hortonworks Data Platform release process, and installation and configuration tools have also been included.

Unlike other providers of platforms built using Apache Hadoop, Hortonworks contributes 100% of our code back to the Apache Software Foundation. The Hortonworks Data Platform is Apache-licensed and completely open source. We sell only expert technical support, training, and partner-enablement services. All of our technology is, and will remain, free and open source. Please visit the Hortonworks Data Platform page for more information on Hortonworks technology. For more information on Hortonworks services, please visit either the Support or Training page. Feel free to Contact Us directly to discuss your specific needs.


Contents

1. Installing Ambari Agents Manually
1. RHEL/CentOS/Oracle Linux 5.x and 6.x
2. SLES
2. Customizing HDP Services
1. Customizing Services for an HDP 1.x Stack
1.1. Defining Service Users and Groups for HDP 1.x
1.2. Setting Properties That Depend on Service Usernames/Groups
1.3. Recommended Memory Configurations for the MapReduce Service
2. Customizing Services for an HDP 2.x Stack
2.1. Defining Service Users and Groups for HDP 2.x
2.2. Setting Properties That Depend on Service Usernames/Groups
3. Using Custom Host Names
4. Moving the Ambari Server
1. Back Up Current Data
2. Update Agents
3. Install the New Server and Populate the Databases
5. Configuring LZO Compression
1. Configure core-site.xml for LZO
2. Running Compression with Hive Queries
2.1. Create LZO Files
2.2. Write Custom Java to Create LZO Files
6. Using Non-Default Databases
1. Using Non-Default Databases - Ambari
1.1. Using Ambari with Oracle
1.2. Using Ambari with MySQL
1.3. Using Ambari with PostgreSQL
1.4. Troubleshooting Ambari
2. Using Non-Default Databases - Hive
2.1. Using Hive with Oracle
2.2. Using Hive with MySQL
2.3. Using Hive with PostgreSQL
2.4. Troubleshooting Hive
3. Using Non-Default Databases - Oozie
3.1. Using Oozie with Oracle
3.2. Using Oozie with MySQL
3.3. Using Oozie with PostgreSQL
3.4. Troubleshooting Oozie
7. Setting Up an Internet Proxy Server for Ambari
8. Configuring Network Port Numbers
1. Default Network Port Numbers - Ambari
2. Ganglia Ports
3. Nagios Ports
4. Optional: Changing the Default Ambari Server Port
9. Changing the JDK Version on an Existing Cluster
10. Configuring NameNode High Availability
1. Setting Up NameNode High Availability
2. Rolling Back NameNode HA
2.1. Stop HBase
2.2. Checkpoint the Active NameNode
2.3. Stop All Services
2.4. Prepare the Ambari Server Host for Rollback
2.5. Restore the HBase Configuration
2.6. Delete ZK Failover Controllers
2.7. Modify HDFS Configurations
2.8. Recreate the Secondary NameNode
2.9. Re-enable the Secondary NameNode
2.10. Delete All JournalNodes
2.11. Delete the Additional NameNode
2.12. Verify your HDFS Components
2.13. Start HDFS
11. Configuring RHEL HA for Hadoop 1.x
1. Deploy the Scripts
2. Configure Ambari Properties Across the HA Cluster
3. Troubleshooting RHEL HA
12. Using Ambari Blueprints
13. Configuring HDP Stack Repositories for Red Hat Satellite
14. Configuring Storm for Supervision
15. Tuning Ambari Performance for Large (>2K-Node) Clusters