Sunday, January 10, 2021

Catalyst optimizer, Tungsten optimizer

Spark uses two engines to optimize and run queries - Catalyst and Tungsten, in that order. Catalyst generates an optimized physical query plan from the logical query plan by applying a series of transformations such as predicate pushdown, column pruning, and constant folding to the logical plan. Tungsten then uses this optimized plan to generate code that resembles hand-written code, using the Whole-Stage Code Generation functionality introduced in Spark 2.0. This has improved Spark's efficiency by a huge margin over Spark 1.6, which used the traditional Volcano iterator model.
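A quick way to see these optimizations at work is to inspect a query plan. The sketch below (file path, column names, and app name are just placeholders) writes a small Parquet file and then explains a filtered, projected read over it: the physical plan shows PushedFilters (predicate pushdown), a ReadSchema limited to the selected column (column pruning), and operators prefixed with * that run inside a single whole-stage-codegen function.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("catalyst-demo")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

// A tiny Parquet dataset so Catalyst has a pushdown-capable source to optimize against.
Seq((1, "a"), (2, "b"), (3, "c")).toDF("id", "name")
  .write.mode("overwrite").parquet("/tmp/catalyst_demo")

val q = spark.read.parquet("/tmp/catalyst_demo")
  .filter($"id" > 1)   // candidate for predicate pushdown
  .select("name")      // candidate for column pruning

// Physical plan: look for PushedFilters, the pruned ReadSchema,
// and the "*" markers that denote whole-stage-codegen stages.
q.explain()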

Catalyst is based on functional programming constructs in Scala and was designed with two key purposes in mind:

  • Easily add new optimization techniques and features to Spark SQL
  • Enable external developers to extend the optimizer (e.g. adding data-source-specific rules, support for new data types, etc.); a minimal sketch of a custom rule follows below
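As an illustration of that extension point, here is a minimal sketch of a custom optimizer rule registered through spark.experimental.extraOptimizations, the hook Spark SQL exposes for user-supplied logical optimizations. The rule name and the simplification it performs (rewriting "expr * 1" to "expr") are made up purely for the example.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.catalyst.expressions.{Literal, Multiply}
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
import org.apache.spark.sql.catalyst.rules.Rule

// Toy rule: rewrite "expr * 1" to just "expr".
object MultiplyByOne extends Rule[LogicalPlan] {
  override def apply(plan: LogicalPlan): LogicalPlan = plan transformAllExpressions {
    case m: Multiply if m.right == Literal(1) => m.left
  }
}

val spark = SparkSession.builder()
  .appName("custom-rule-demo")
  .master("local[*]")
  .getOrCreate()

// Hook the rule into Catalyst's logical optimization phase.
spark.experimental.extraOptimizations = Seq(MultiplyByOne)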

When you execute code, Spark SQL runs Catalyst's general tree transformation framework in four phases: (1) analysis of the logical plan to resolve references, (2) logical plan optimization, (3) physical planning, and (4) code generation to compile parts of the query to Java bytecode.

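These phases can be observed directly with explain(extended = true), which prints the parsed, analyzed, and optimized logical plans followed by the physical plan. A minimal sketch, with a placeholder DataFrame:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("phases-demo")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._

val df = Seq(1, 2, 3).toDF("id").filter($"id" > 1)

// Prints four sections that line up with Catalyst's phases:
// == Parsed Logical Plan ==, == Analyzed Logical Plan ==,
// == Optimized Logical Plan ==, and == Physical Plan ==.
df.explain(extended = true)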



Tungsten

The goal of Project Tungsten is to improve Spark execution by optimising Spark jobs for CPU and memory efficiency (as opposed to network and disk I/O, which are considered fast enough). It focuses on three areas:

  1. Off-Heap Memory Management: a binary in-memory data representation (the Tungsten row format) with explicitly managed memory
  2. Cache Locality: cache-aware computations and data layouts for high cache hit rates
  3. Whole-Stage Code Generation (aka CodeGen)

Property: spark.sql.tungsten.enabled set to true (the default since Spark 1.5).
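For reference, a minimal sketch of a SparkSession wired up with the standard configuration keys that touch these areas; the values shown are illustrative only.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("tungsten-demo")
  .master("local[*]")
  // 1. Off-heap memory management (Tungsten binary rows live off-heap when enabled)
  .config("spark.memory.offHeap.enabled", "true")
  .config("spark.memory.offHeap.size", "512m")
  // 3. Whole-stage code generation (on by default since Spark 2.0)
  .config("spark.sql.codegen.wholeStage", "true")
  .getOrCreate()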

All thanks to the article below.

https://www.linkedin.com/pulse/catalyst-tungsten-apache-sparks-speeding-engine-deepak-rajak/?articleId=6674601890514378752 
