Configure HTTP Authentication for HDFS, YARN, MapReduce2, HBase, Oozie, Falcon, and Storm
How to configure HTTP authentication for Hadoop components in a Kerberos environment.
- Create a secret key used for signing authentication tokens. This file should contain random data and be placed on every host in the cluster. It should be owned by the hdfs user, group-owned by the hadoop group, and have its permissions set to 440. For example:

  dd if=/dev/urandom of=/etc/security/http_secret bs=1024 count=1
  chown hdfs:hadoop /etc/security/http_secret
  chmod 440 /etc/security/http_secret
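After creating the secret, you can sanity-check the resulting file's mode and size. A minimal sketch, using a scratch path from mktemp so it runs without root (the real file lives at /etc/security/http_secret and must be chown'd to hdfs:hadoop):

```shell
# Create a 1 KiB random secret at a scratch path (a stand-in for
# /etc/security/http_secret, which requires root to create and chown).
SECRET=$(mktemp)
dd if=/dev/urandom of="$SECRET" bs=1024 count=1 2>/dev/null
chmod 440 "$SECRET"

# Verify: octal permission mode and size in bytes.
stat -c '%a %s' "$SECRET"   # prints "440 1024"
rm -f "$SECRET"
```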
- In Ambari Web, browse to Services > HDFS > Configs.
Add or modify the following configuration properties to Advanced core-site:
  hadoop.http.authentication.kerberos.principal = HTTP/_HOST@EXAMPLE.COM
  hadoop.http.filter.initializers = org.apache.hadoop.security.AuthenticationFilterInitializer
  hadoop.http.authentication.cookie.domain = hortonworks.local

Note:
The principal and cookie domain values listed above are site-specific. The hadoop.http.authentication.cookie.domain property is based on the fully qualified domain names of the servers in the cluster. For example, if the FQDN of your NameNode is host1.hortonworks.local, hadoop.http.authentication.cookie.domain should be set to hortonworks.local.
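In Ambari these are entered as key/value pairs; the equivalent core-site.xml fragment is shown below (EXAMPLE.COM and hortonworks.local are placeholders for your own realm and domain):

```xml
<property>
  <name>hadoop.http.filter.initializers</name>
  <value>org.apache.hadoop.security.AuthenticationFilterInitializer</value>
</property>
<property>
  <name>hadoop.http.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hadoop.http.authentication.cookie.domain</name>
  <value>hortonworks.local</value>
</property>
```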
For HBase, you can enable Kerberos authentication for the HBase Web UIs by configuring SPNEGO:
- In Ambari Web, browse to Services > HBase > Configs.
Add the following configuration properties to the custom hbase-site:

  hbase.security.authentication.spnego.kerberos.name.rules (Optional)
  hbase.security.authentication.signature.secret.file (Optional)
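These optional keys normally accompany HBase's SPNEGO principal and keytab settings; a sketch of the corresponding custom hbase-site entries follows (the realm and keytab path are assumptions to adapt to your environment):

```xml
<property>
  <name>hbase.security.authentication.ui</name>
  <value>kerberos</value>
</property>
<property>
  <name>hbase.security.authentication.spnego.kerberos.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hbase.security.authentication.spnego.kerberos.keytab</name>
  <value>/etc/security/keytabs/spnego.service.keytab</value>
</property>
```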
- Save the configuration, then restart the affected services.