What is SQLContext?

The official definition in the Spark documentation is:

The entry point for running relational queries using Spark. Allows the creation of SchemaRDD objects and the execution of SQL queries.

The purpose of SQLContext is to introduce processing of structured data in Spark. Before it, Spark only had RDDs to manipulate data. An RDD is simply a collection of rows (notice the absence of columns) that can be manipulated using lambda functions and other functional operations. SQLContext introduced objects that attach a schema (column names and column data types) to the data, making it similar to a relational database table. This additional information about the data also opened the door to optimizations in data processing.
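
To make the contrast concrete, here is a minimal sketch (the data and variable names are made up for this illustration) of working with a plain RDD, where all processing goes through lambda functions and Spark has no knowledge of columns:

Scala
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("plain-rdd-example").setMaster("local[*]")
val sc = new SparkContext(conf)

// A plain RDD of tuples: Spark only sees opaque rows, no column names or types.
val people = sc.parallelize(Seq(("Alice", 29), ("Bob", 35), ("Cara", 41)))

// Every operation is a lambda; Spark cannot optimize based on which field is used
// because it has no schema information about the data.
val namesOver30 = people.filter { case (_, age) => age > 30 }
                        .map { case (name, _) => name }

namesOver30.collect().foreach(println)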

Looking further at the documentation, SQLContext is a class introduced in version 1.0.0 that provides a set of functions for creating and manipulating SchemaRDD objects. Here is the list of functions:

  • cacheTable
  • createParquetFile
  • createSchemaRDD
  • logicalPlanToSparkQuery
  • parquetFile
  • registerRDDAsTable
  • sparkContext
  • sql
  • table
  • uncacheTable

The APIs revolve around converting between Parquet files and SchemaRDD objects. A SchemaRDD is an RDD of Row objects that has an associated schema. In addition to the standard RDD functions, SchemaRDDs can be used in relational queries, as shown below:

Scala
// Imports for the Spark 1.x SQLContext and SchemaRDD APIs.
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

// One method for defining the schema of an RDD is to make a case class with the desired column
// names and types.
case class Record(key: Int, value: String)

val sc: SparkContext // An existing SparkContext.
val sqlContext = new SQLContext(sc)

// Importing the SQL context gives access to all the SQL functions and implicit conversions.
import sqlContext._

val rdd = sc.parallelize((1 to 100).map(i => Record(i, s"val_$i")))
// Any RDD containing case classes can be registered as a table. The schema of the table is
// automatically inferred using Scala reflection.
rdd.registerAsTable("records")

val results: SchemaRDD = sql("SELECT * FROM records")

The above code would not run on the latest versions of Spark because SchemaRDDs are now obsolete.

Currently, SQLContext itself is not used directly; instead, SparkSession provides a unified interface over the different contexts such as SQLContext, SparkContext, and HiveContext. Inside SparkSession, the SQLContext is still present. Also, instead of SchemaRDDs, Spark now uses Datasets and DataFrames to represent structured data.
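
As a point of comparison, here is a minimal sketch of the earlier example written against the current API, using SparkSession and a DataFrame in place of SQLContext and a SchemaRDD (the application name and view name are arbitrary choices for this sketch):

Scala
import org.apache.spark.sql.SparkSession

case class Record(key: Int, value: String)

// SparkSession is the unified entry point; it wraps SparkContext, SQLContext and HiveContext.
val spark = SparkSession.builder()
  .appName("sparksession-example")
  .master("local[*]")
  .getOrCreate()

import spark.implicits._

// A DataFrame replaces the old SchemaRDD; the schema is still inferred
// from the case class via reflection.
val df = (1 to 100).map(i => Record(i, s"val_$i")).toDF()

// createOrReplaceTempView replaces the old registerAsTable call.
df.createOrReplaceTempView("records")

val results = spark.sql("SELECT * FROM records")
results.show(5)

// The legacy SQLContext is still reachable from the session if needed.
val sqlContext = spark.sqlContext

When running in spark-shell, the spark session is already created for you, so the builder call can be skipped.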

How to Create SQLContext in Spark Using Scala?

Scala stands for "scalable language". It was developed in 2003 by Martin Odersky. It is an object-oriented language that also supports the functional programming paradigm. Everything in Scala is an object. It is a statically typed language, although unlike other statically typed languages such as C, C++, or Java, it does not require type annotations everywhere in the code: types are verified at compile time. Static typing allows building safe systems by default. Smart built-in checks and actionable error messages, combined with thread-safe data structures and collections, prevent many tricky bugs before the program first runs.
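
For instance, here is a tiny sketch (the variable names are arbitrary) showing type inference together with compile-time checking: no types are written out, yet mismatches are rejected before the program runs:

Scala
// The compiler infers count: Int and greeting: String without annotations.
val count = 42
val greeting = "hello"

// Uncommenting the next line fails at compile time: found String, required Int.
// val total: Int = greeting

// Type parameters are inferred too: squares is List[Int].
val squares = List(1, 2, 3).map(n => n * n)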

This article focuses on discussing steps to create SQLContext in Spark using Scala.

Table of Content

  • What is SQLContext?
  • Creating SQLContext
    • 1. Using SparkContext
    • 2. Using Existing SQLContext Object
    • 3. Using SparkSession
