Developing Apache Spark Applications

Load ORC Data into DataFrames Using Predicate Push-Down

DataFrames are similar to Spark RDDs but have higher-level semantics built into their operators. This allows Spark to push optimizations, such as predicate filtering, down to the underlying query engine.
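Whether ORC filters are actually pushed down is governed by the `spark.sql.orc.filterPushdown` setting; in some Spark releases it is disabled by default, so it may need enabling explicitly. A minimal sketch, assuming an existing `SparkSession` named `spark`:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()
// Enable ORC predicate push-down so filters such as age < 15
// are evaluated inside the ORC reader rather than by Spark after the scan.
spark.conf.set("spark.sql.orc.filterPushdown", "true")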

Here is the SELECT query from the previous section, rewritten in Scala using the DataFrame API:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()
val people = spark.read.format("orc").load("peoplePartitioned")
people.filter(people("age") < 15).select("name").show()

DataFrames are not limited to Scala. There is a Java API and, for data scientists, a Python API binding:

from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
people = spark.read.format("orc").load("peoplePartitioned")
people.filter(people.age < 15).select("name").show()
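To confirm that a predicate was actually pushed down, you can inspect the physical plan with `explain()`; pushed predicates appear in the `PushedFilters` entry of the file scan node. A short sketch in Scala, reusing the `people` DataFrame defined above:

// Print the physical plan; a pushed predicate appears as
// PushedFilters: [LessThan(age,15)] in the FileScan ORC node.
people.filter(people("age") < 15).select("name").explain()

If the filter does not appear under `PushedFilters`, check that `spark.sql.orc.filterPushdown` is enabled for your Spark version.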