Writing flexible pipelines for data platforms with Dagster

RU / Day 2 / 10:45 / Track 1

These days we rarely talk about unifying the design of data pipelines, especially when you have to leave the Java/Scala universe and build chains of components that mix languages and technologies.

How do you combine Spark + Scala jobs with Python applications? Dagster provides convenient components for writing and debugging such pipelines, along with a large number of integrations with de facto standard orchestration and computation systems.

Andrey will explain why this is worth doing and how to use Dagster to write pipelines with reusable blocks and a flexible architecture.
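
To give a flavor of the approach, here is a minimal sketch (not from the talk itself) of a Dagster job that chains a Spark + Scala step with a plain Python step. The op names, the spark-submit arguments, and the output path are hypothetical placeholders.

import subprocess

from dagster import job, op


@op
def submit_spark_job() -> str:
    """Launch a packaged Spark + Scala job and return its output location."""
    # Hypothetical spark-submit invocation; adjust the class, jar, and cluster settings.
    subprocess.run(
        ["spark-submit", "--class", "com.example.Etl", "etl-assembly.jar"],
        check=True,
    )
    return "s3://example-bucket/etl-output/"


@op
def aggregate_in_python(path: str) -> None:
    """Post-process the Spark output in plain Python."""
    print(f"Aggregating results from {path}")


@job
def mixed_language_pipeline():
    # Dagster composes ops into a reusable, testable graph.
    aggregate_in_python(submit_spark_job())

Each op is an independent, reusable block, so the same Spark step can be wired into other jobs or swapped for a different implementation without rewriting the rest of the pipeline.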