Command Line Installation

Configuring Spark 2

To configure Spark 2, edit the following configuration files on all nodes that run Spark 2 jobs. These configuration files reside in the Spark 2 client conf directory /usr/hdp/current/spark2-client/conf on each node.

  • If you plan to use Hive with Spark 2, hive-site.xml

  • spark-env.sh

  • spark-defaults.conf

  • spark-thrift-sparkconf.conf
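
To confirm that each node has the expected client configuration directory, you can list it first (using the path given above):

# Verify the Spark 2 client configuration directory on a node
ls -l /usr/hdp/current/spark2-client/conf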

Note: The following instructions are for a non-Kerberized cluster.

hive-site.xml

If you plan to use Hive with Spark 2, create a hive-site.xml file in the Spark 2 client SPARK_HOME/conf directory. (Note: If you installed the Spark 2 tech preview, you can skip this step.)

Edit the file so that it contains only the hive.metastore.uris property. Make sure that the hostname in the URI points to the node where the Hive Metastore is running.

Important: hive-site.xml contains a number of properties that are not relevant to or supported by the Spark 2 Thrift server. Ensure that your Spark 2 hive-site.xml file contains only the following configuration property.

For example:

<property>
     <name>hive.metastore.uris</name>
     <value>thrift://c6401.ambari.apache.org:9083</value>
     <description>URI for client to contact metastore server</description>
</property>
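
For reference, here is a minimal complete hive-site.xml sketch, using the same example hostname as above (substitute the host where your Hive Metastore runs):

<?xml version="1.0"?>
<configuration>
    <property>
         <name>hive.metastore.uris</name>
         <value>thrift://c6401.ambari.apache.org:9083</value>
         <description>URI for client to contact metastore server</description>
    </property>
</configuration>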

spark-env.sh

Create a spark-env.sh file in the Spark 2 client SPARK_HOME/conf directory, and make sure the file has the following entries:

# Location where log files are stored (default: ${SPARK_HOME}/logs)
# This can be any directory where the spark user has R/W access
export SPARK_LOG_DIR=/var/log/spark2

# Location of the pid file (default: /tmp)
# This can be any directory where the spark user has R/W access
export SPARK_PID_DIR=/var/run/spark2

These settings are required for starting Spark 2 services (for example, the History Service and the Thrift server). The user who starts Spark 2 services needs read and write permissions to the log and PID directories. By default, these files are in the $SPARK_HOME directory, which is typically owned by root in an RPM installation.
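
If these directories do not yet exist, you can create them ahead of time; a minimal sketch, assuming the services will run as the spark user and group:

# Create the log and PID directories and grant the spark user R/W access
mkdir -p /var/log/spark2 /var/run/spark2
chown spark:spark /var/log/spark2 /var/run/spark2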

We recommend that you set HADOOP_CONF_DIR to the appropriate directory; for example:

export HADOOP_CONF_DIR=/etc/hadoop/conf

This minimizes the amount of work you need to do to set up environment variables before running Spark 2 applications.

spark-defaults.conf

Edit the spark-defaults.conf file in the Spark 2 client SPARK_HOME/conf directory.

  • Make sure the following values are specified, including hostname and port. For example:

    spark.yarn.historyServer.address c6401.ambari.apache.org:18080
    spark.history.ui.port 18080
    spark.eventLog.dir hdfs:///spark2-history
    spark.eventLog.enabled true
    spark.history.fs.logDirectory hdfs:///spark2-history
  • Delete the spark.yarn.services property, if specified in the file.

If you submit jobs programmatically in a way that does not execute spark-env.sh during the submit step, or if you wish to specify a cluster version different from the one installed on the client, set the following two additional property values:

spark.driver.extraJavaOptions -Dhdp.version=<HDP-version>
spark.yarn.am.extraJavaOptions -Dhdp.version=<HDP-version>

For example:

spark.driver.extraJavaOptions -Dhdp.version=2.6.0.0-3475
spark.yarn.am.extraJavaOptions -Dhdp.version=2.6.0.0-3475
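
If you are not sure of the exact version string for your cluster, you can read it from a cluster node; for example, the hdp-select utility that ships with HDP lists the installed stack versions:

# Either command shows the installed HDP version string (for example, 2.6.0.0-3475)
hdp-select versions
ls /usr/hdp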

spark-thrift-sparkconf.conf

Add the following properties and values to the spark-thrift-sparkconf.conf file:

spark.eventLog.dir hdfs:///spark2-history
spark.eventLog.enabled true
spark.history.fs.logDirectory hdfs:///spark2-history

Create a spark User

To use the Spark 2 History Service, to run Hive queries as the spark user, or to run Spark 2 jobs, the associated user must have sufficient HDFS access. One way of ensuring this is to add the user to the hdfs group.

The following example creates a spark user:

  • Create the spark user on all nodes and add it to the hdfs group. (The useradd command is needed only for tarball-based Spark installs, not for RPM-based installs.)

    useradd spark

    usermod -a -G hdfs spark

  • Create the spark user directory /user/spark in HDFS:

    sudo su $HDFS_USER

    hdfs dfs -mkdir -p /user/spark

    hdfs dfs -chown spark:spark /user/spark

    hdfs dfs -chmod -R 755 /user/spark
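
To verify the result, you can check the account and its HDFS directory; for example:

# Confirm that the spark user exists and belongs to the hdfs group
id spark

# Confirm ownership and permissions of the spark user directory
hdfs dfs -ls /user | grep spark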

Create an HDFS Directory

As the hdfs service user, create an HDFS directory called /spark2-history, owned by user spark and group hadoop, with permissions set to 777:

hdfs dfs -mkdir /spark2-history
hdfs dfs -chown -R spark:hadoop /spark2-history
hdfs dfs -chmod -R 777 /spark2-history
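
Afterwards, you can confirm that the directory has the expected owner, group, and permissions; for example:

# The listing should show owner spark, group hadoop, and permissions drwxrwxrwx
hdfs dfs -ls / | grep spark2-history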