Apache Spark - DataFrames

Introducing DataFrames

Under the covers, DataFrames are derived from data structures known as Resilient Distributed Datasets (RDDs). RDDs and DataFrames are immutable distributed collections of data. Let's take a closer look at what some of these terms mean before we understand how they relate to DataFrames:

  • Resilient: They are fault tolerant, so if part of your operation fails, Spark quickly recovers the lost computation.

  • Distributed: RDDs are distributed across networked machines known as a cluster.

  • DataFrame: A data structure where data is organized into named columns, like a table in a relational database, but with richer optimizations under the hood.

Without the named columns and declared types provided by a schema, Spark wouldn't know how to optimize the execution of any computation. Since DataFrames have a schema, they use the Catalyst Optimizer to determine the optimal way to execute your code.

DataFrames were designed to feel familiar to the business community, which already works with tables in relational databases, pandas or R DataFrames, and Excel worksheets. A Spark DataFrame is conceptually equivalent to these, with richer optimizations under the hood and the benefit of being distributed across a cluster.
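As a minimal sketch of that idea (in Python; the SparkSession variable spark and the column names are illustrative assumptions, not part of any course data set), the snippet below builds a small DataFrame from in-memory rows and prints its schema:

```python
from pyspark.sql import SparkSession

# Illustrative only: "spark" and the column names are assumptions for this sketch.
spark = SparkSession.builder.appName("dataframes-intro").getOrCreate()

df = spark.createDataFrame(
    [(1, "alpha"), (2, "beta"), (3, "gamma")],
    ["id", "label"],              # named columns, like a table in a relational database
)

df.printSchema()                  # the schema is what the Catalyst Optimizer relies on
```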

Interacting with DataFrames

Once created (instantiated), a DataFrame object has methods attached to it. Methods are operations one can perform on DataFrames such as filtering, counting, aggregating and many others.

Example: In Scala, to create (instantiate) a DataFrame, use this syntax: val df = .... In Python it is simply df = ...

To display the contents of the DataFrame, apply a show operation (method) on it using the syntax df.show().

The . indicates you are applying a method on the object. These concepts will be relevant whether you use Scala or Python.
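Continuing the illustrative DataFrame from above (column names are still assumptions), a few common methods look like this in Python:

```python
# Each call below is a method applied to the DataFrame object with the "." syntax.
df.show()                             # display the contents
df.count()                            # count the rows
df.filter(df["id"] > 1).show()        # filtering
df.groupBy("label").count().show()    # aggregating
```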

In working with DataFrames, it is common to chain operations together, such as: df.select().filter().groupBy().

By chaining operations together, you don't need to save intermediate DataFrames into local variables (thereby avoiding the creation of extra objects).
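For example, the aggregation above can be written either with intermediate variables or as a single chained expression (a sketch that assumes the same illustrative df and columns):

```python
from pyspark.sql.functions import col

# Without chaining: each step is saved into a local variable
selected = df.select("id", "label")
filtered = selected.filter(col("id") > 1)
result = filtered.groupBy("label").count()

# With chaining: one expression, no intermediate DataFrames to name
result = (
    df.select("id", "label")
      .filter(col("id") > 1)
      .groupBy("label")
      .count()
)
```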

Also note that you do not have to worry about how to order operations, because the optimizer determines the optimal order of execution of the operations for you.

df.select(...).groupBy(...).filter(...)

versus

df.filter(...).select(...).groupBy(...)
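One way to see the optimizer at work is to compare the plans that explain() prints for two orderings of the same logic. This is a simplified sketch using only select and filter (so both orderings are logically equivalent) on the same assumed df; Catalyst typically pushes the filter down in both cases:

```python
from pyspark.sql.functions import col

# The same logic written in two different orders
plan_a = df.select("id", "label").filter(col("id") > 1)
plan_b = df.filter(col("id") > 1).select("id", "label")

# explain() prints the physical plan Catalyst settled on for each expression
plan_a.explain()
plan_b.explain()
```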

DataFrames and SQL

DataFrame syntax is more flexible than SQL syntax. Here we illustrate general usage patterns of SQL and DataFrames.

Suppose we have a data set we loaded as a table called myTable and an equivalent DataFrame called df. We have three fields/columns called col_1 (numeric type), col_2 (string type), and col_3 (timestamp type). Here are basic SQL operations and their DataFrame equivalents.

Notice that columns in DataFrames are referenced with $"..." in Scala or with col("...") in Python.

| SQL | DataFrame (Scala) | DataFrame (Python) |
| --- | --- | --- |
| SELECT col_1 FROM myTable | df.select($"col_1") | df.select(col("col_1")) |
| DESCRIBE myTable | df.printSchema | df.printSchema() |
| SELECT col_1 FROM myTable WHERE col_1 > 0 | df.select($"col_1").filter($"col_1" > 0) | df.select(col("col_1")).filter(col("col_1") > 0) |
| .. GROUP BY col_2 ORDER BY col_2 | ..groupBy($"col_2").orderBy($"col_2") | ..groupBy(col("col_2")).orderBy(col("col_2")) |
| .. WHERE year(col_3) > 1990 | ..filter(year($"col_3") > 1990) | ..filter(year(col("col_3")) > 1990) |
| SELECT * FROM myTable LIMIT 10 | df.select("*").limit(10) | df.select("*").limit(10) |
| display(myTable) (text format) | df.show() | df.show() |
| display(myTable) (HTML format) | display(df) | display(df) |
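As a hedged, runnable version of a few of the Python expressions above (assuming df is the DataFrame equivalent of myTable described earlier), note that col and year come from pyspark.sql.functions and that an aggregation such as count is needed between groupBy and orderBy:

```python
from pyspark.sql.functions import col, year

df.select(col("col_1")).filter(col("col_1") > 0).show()

# groupBy needs an aggregation (here, count) before orderBy can be applied
df.groupBy(col("col_2")).count().orderBy(col("col_2")).show()

df.filter(year(col("col_3")) > 1990).select("*").limit(10).show()
```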

Hint: You can also run SQL queries with the special syntax spark.sql("SELECT * FROM myTable").
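For example (a sketch; the temporary-view step is only needed if the DataFrame is not already registered as a table):

```python
# Make the DataFrame queryable by name, then run SQL against it
df.createOrReplaceTempView("myTable")

# spark.sql returns another DataFrame, so both APIs can be mixed freely
sql_df = spark.sql("SELECT col_1, col_2 FROM myTable WHERE col_1 > 0")
sql_df.show()
```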
