Apache Spark - DataFrames

Introducing DataFrames

Under the covers, DataFrames are derived from data structures known as Resilient Distributed Datasets (RDDs). RDDs and DataFrames are immutable distributed collections of data. Let's take a closer look at what these terms mean and how they relate to DataFrames:

  • Resilient: They are fault tolerant, so if part of your operation fails, Spark quickly recovers the lost computation.

  • Distributed: RDDs are distributed across networked machines known as a cluster.

  • DataFrame: A data structure where data is organized into named columns, like a table in a relational database, but with richer optimizations under the hood.

Without the named columns and declared types provided by a schema, Spark wouldn't know how to optimize the execution of any computation. Since DataFrames have a schema, they use the Catalyst Optimizer to determine the optimal way to execute your code.
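
As a minimal sketch of what such a schema looks like, here is a tiny PySpark example (the column names and rows are hypothetical); printSchema() displays the named, typed columns that the Catalyst Optimizer works with:

from pyspark.sql import SparkSession

# Start (or reuse) a SparkSession; many environments already provide one named spark.
spark = SparkSession.builder.appName("dataframe-intro").getOrCreate()

# A tiny DataFrame with two named columns (hypothetical data).
df = spark.createDataFrame([(1, "a"), (2, "b")], ["col_1", "col_2"])

# Print the schema (column names and types) Spark uses when optimizing queries.
df.printSchema()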

DataFrames were designed to be familiar to the business community, which already works with tables in a relational database, Pandas or R DataFrames, or Excel worksheets. A Spark DataFrame is conceptually equivalent to these, with richer optimizations under the hood and the benefit of being distributed across a cluster.

Interacting with DataFrames

Once created (instantiated), a DataFrame object has methods attached to it. Methods are operations one can perform on DataFrames such as filtering, counting, aggregating and many others.

Example: In Scala, to create (instantiate) a DataFrame, use this syntax: val df = .... In Python it is simply df = ...

To display the contents of the DataFrame, apply a show operation (method) on it using the syntax df.show().

The . indicates you are applying a method on the object. These concepts will be relevant whether you use Scala or Python.
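
Continuing the Python sketch from above (reusing the SparkSession created earlier; the DataFrame df and its data are hypothetical):

# Instantiate a DataFrame from a small in-memory collection.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["col_1", "col_2"])

# The . applies the show method to the df object, printing its rows as a table
# (the first 20 rows by default).
df.show()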

In working with DataFrames, it is common to chain operations together, such as: df.select().filter().groupBy().

By chaining operations together, you don't need to save intermediate DataFrames into local variables (thereby avoiding the creation of extra objects).
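
For instance, a chained expression might look like the following sketch (again using the hypothetical columns col_1 and col_2):

from pyspark.sql.functions import col

# Select, filter, group, and aggregate in one chained expression;
# no intermediate DataFrames are stored in local variables.
df.select("col_1", "col_2").filter(col("col_1") > 1).groupBy("col_2").count().show()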

Also note that you do not have to worry about how to order operations, because the optimizer determines the optimal order of execution for you:

df.select(...).groupBy(...).filter(...)

versus

df.filter(...).select(...).groupBy(...)
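
If you want to see the plan the Catalyst Optimizer actually produces for a chain, you can call explain() on the resulting DataFrame. A minimal sketch, reusing the hypothetical df and columns from above:

from pyspark.sql.functions import col

# Print the parsed, analyzed, optimized logical, and physical plans for this chain.
df.filter(col("col_1") > 1).select("col_2").groupBy("col_2").count().explain(True)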

DataFrames and SQL

DataFrame syntax is more flexible than SQL syntax. Here we illustrate general usage patterns of SQL and DataFrames.

Suppose we have a data set we loaded as a table called myTable and an equivalent DataFrame called df. We have three fields/columns called col_1 (numeric type), col_2 (string type) and col_3 (timestamp type). Here are basic SQL operations and their DataFrame equivalents.

Notice that columns in DataFrames are referenced by $"" in Scala or using col("") in Python.
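
A few representative equivalents, sketched in Python (PySpark); the literal values in the predicates are hypothetical:

from pyspark.sql.functions import col

# SQL: SELECT col_1, col_2 FROM myTable WHERE col_1 > 10
df.select(col("col_1"), col("col_2")).filter(col("col_1") > 10)

# SQL: SELECT col_2, COUNT(*) FROM myTable GROUP BY col_2
df.groupBy(col("col_2")).count()

# SQL: SELECT * FROM myTable ORDER BY col_3 DESC
df.orderBy(col("col_3").desc())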

Hint: You can also run SQL queries with the special syntax spark.sql("SELECT * FROM myTable").
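
If df has not already been saved as a table, it can be registered as a temporary view so that SQL queries can refer to it by name; a minimal sketch:

# Register df under the name myTable (only needed if the table does not
# already exist in the catalog).
df.createOrReplaceTempView("myTable")

# Run a SQL query against it and display the result.
spark.sql("SELECT col_1, col_2 FROM myTable WHERE col_1 > 10").show()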
