Karim Zaveri
In this course, you’ll learn how to define and schedule data pipelines that incrementally ingest and process data through multiple tables in the lakehouse, using Delta Live Tables (DLT) in Spark SQL and Python. The course covers:

- getting started with DLT and how DLT tracks data dependencies in a pipeline;
- configuring and running data pipelines using the Delta Live Tables UI;
- defining pipelines in Python or Spark SQL that ingest and process data through multiple tables using Auto Loader and DLT (sketched below);
- using APPLY CHANGES INTO syntax to process Change Data Capture feeds;
- reviewing event logs and data artifacts created by pipelines, and troubleshooting DLT syntax.
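To make those topics concrete, here is a minimal sketch of a DLT pipeline in Python. It assumes the code runs inside a Databricks DLT pipeline, where the `dlt` module and the `spark` session are provided automatically; the table names and source path are hypothetical placeholders, and `dlt.apply_changes()` is the Python counterpart of the APPLY CHANGES INTO SQL syntax mentioned above.

import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Raw events ingested incrementally with Auto Loader")
def bronze_events():
    # Auto Loader ("cloudFiles") discovers and ingests new files incrementally.
    return (
        spark.readStream
        .format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/Volumes/demo/raw/events")  # hypothetical source location
    )

@dlt.table(comment="Cleaned events")
@dlt.expect_or_drop("valid_id", "event_id IS NOT NULL")  # data-quality expectation
def silver_events():
    # Reading via dlt.read_stream() is how DLT tracks the dependency on bronze_events.
    return dlt.read_stream("bronze_events").select("event_id", "event_type", "updated_at")

# Process a Change Data Capture feed into a target streaming table.
dlt.create_streaming_table("silver_events_current")

dlt.apply_changes(
    target="silver_events_current",   # table created above
    source="silver_events",           # CDC feed source
    keys=["event_id"],                # key used to match rows
    sequence_by=col("updated_at"),    # ordering column for out-of-order events
    stored_as_scd_type=1,             # keep only the latest row per key
)

Once the pipeline runs, DLT infers the bronze-to-silver dependency graph from the dlt.read_stream() call, so the tables update in the right order without manual scheduling of each step.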
Issued on
July 17, 2025
Expires on
Does not expire