Apache Spark Application Performance Tuning

This hands-on training course teaches the key concepts and expertise developers need to optimize the performance of their Apache Spark applications. During the course, you will learn how to identify common causes of poor performance in Spark applications, techniques to avoid or resolve them, and best practices for monitoring Spark applications.

The course introduces the architecture and concepts behind Apache Spark and the underlying data platform, and then builds on this foundational understanding by teaching you how to optimize Spark application code. The course format focuses on instructor-led demonstrations that illustrate both performance issues and the techniques that address them, followed by hands-on exercises that give you the opportunity to practice what you've learned in an interactive notebook environment.

Course Contents

  • Spark Architecture
  • Data Sources and Formats
  • Inferring Schemas
  • Dealing With Skewed Data
  • Catalyst and Tungsten Overview
  • Mitigating Spark Shuffles
  • Partitioned and Bucketed Tables
  • Improving Join Performance
  • PySpark Overhead and UDFs
  • Caching Data for Reuse
  • Workload XM (WXM) Introduction
  • What's New in Spark 3.0?

You will receive the original Cloudera course documentation in English as an e-book (PDF).

Request in-house training now

Target Group

This course is aimed at software developers, engineers and data scientists who have experience developing Spark applications and want to learn how to improve the performance of their code. This is not an introduction to Spark.

Knowledge Prerequisites

Spark examples and hands-on exercises are presented in Python, so the ability to program in Python is required. Basic familiarity with the Linux command line is also required, and basic knowledge of SQL is helpful.

We also recommend our training courses in Programming languages and software development, and in Linux.

Course Objective

Once you have successfully completed this course, you will be able to:

  • Understand the architecture and job execution of Apache Spark and how techniques such as lazy execution and pipelining can improve runtime performance
  • Evaluate the performance characteristics of core data structures such as RDDs and DataFrames
  • Select the file formats that offer the best performance for your application
  • Identify and resolve performance issues caused by data skew
  • Use partitioning, bucketing and join optimizations to improve Spark SQL performance
  • Understand the performance overhead of Python-based RDDs, DataFrames and user-defined functions
  • Take advantage of caching for better application performance
  • Understand how the Catalyst and Tungsten optimizers work
  • Use Workload XM to troubleshoot and proactively monitor Spark application performance
  • Explain how the adaptive query execution engine improves performance
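
As a small taste of the Spark 3.0 material covered in the last objective, adaptive query execution (AQE) is controlled through standard Spark configuration keys. A minimal sketch in `spark-defaults.conf` form (the keys shown are standard Spark 3.x settings; when and why to enable them is exactly what the course explores in depth):

```properties
# Enable adaptive query execution (Spark 3.0+)
spark.sql.adaptive.enabled                      true
# Coalesce small shuffle partitions at runtime to reduce task overhead
spark.sql.adaptive.coalescePartitions.enabled   true
# Automatically split skewed partitions during sort-merge joins
spark.sql.adaptive.skewJoin.enabled             true
```

The same settings can also be applied per application, for example via `spark-submit --conf` or `SparkSession.builder.config(...)`.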

Classroom training

Do you prefer the classic training method? A course in one of our Training Centers, with a competent trainer and direct exchange with the other course participants? Then book one of our classroom training dates!

Online training

Would you like to attend this course online? We offer online course dates for this course topic. To participate, you need a PC with Internet access (minimum bandwidth 1 Mbps), a headset when working via VoIP, and optionally a camera.

Tailor-made courses

Do you need a special course for your team? In addition to our standard offering, we can also create customized courses that precisely meet your individual requirements. We will be glad to advise you and prepare an individual offer.
Request in-house training now
The complete description of this course, including dates and prices, is available for download as a PDF.
