Welcome to the blog!

My name is Bartosz Konieczny. I'm a freelance data engineer and author of the Data Engineering Design Patterns (O'Reilly) book. When I'm not helping clients solve data engineering challenges to drive business value, I enjoy sharing what I've learned here.

Consulting β†’ Courses & trainings β†’ Data Engineering Design Patterns Book β†’ Github β†’ Most recent blog posts β†’

On time with data engineering systems - timeline of the data

Timely and accurate data is the Holy Grail for every data practitioner. To make it a reality, data engineers have to be careful about the transformations they apply before exposing a dataset to consumers, but they also need to understand the timeline of the data.

Continue Reading β†’

Lateral column aliases in Apache Spark SQL

This is the second blog post about laterals in Apache Spark SQL. Previously you discovered how to combine queries with lateral subqueries and lateral views. Now it's time to look at a more local feature: lateral column aliases.
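As a teaser, here is a minimal sketch of the feature, assuming an active SparkSession and a hypothetical employees table (lateral column aliases arrived with Apache Spark 3.4):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# A lateral column alias lets a later expression in the SELECT list
# reference an alias defined earlier in the same SELECT list.
spark.sql("""
    SELECT
        salary * 0.10 AS bonus,
        salary + bonus AS total_compensation  -- reuses the bonus alias
    FROM employees
""").show()
```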

Continue Reading β†’

Lateral subquery, aka lateral join, and lateral views in Apache Spark SQL

Seven (!) years have passed since my blog post about Join types in Apache Spark SQL (2017). Coming from a software engineering background, I was so amazed that the world of joins doesn't stop at LEFT/RIGHT/FULL joins that I couldn't not blog about it ;) Time has passed but, lucky me, each new project teaches me something new.
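Before the deep dive, a minimal sketch of a lateral view, assuming an active SparkSession and a hypothetical orders table with an array column:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# LATERAL VIEW applies a generator function (here explode) to each input row
# and joins the generated rows back to the originating row.
spark.sql("""
    SELECT order_id, item
    FROM orders
    LATERAL VIEW explode(items) exploded_items AS item
""").show()
```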

Continue Reading β†’

The keyword I would like to know before thinking about watermarks

When I was learning about watermarks in Apache Flink, I saw they were taking the smallest event times instead of the biggest ones in Apache Spark Structured Streaming. From that I was puzzled... How is it possible the pipeline doesn't go back to the past? The answer came when I reread the Streaming Systems book. There was one keyword I had missed that clarified everything.

Continue Reading β†’

Abstracting column access in PySpark with Proxy design pattern

One of the biggest changes in PySpark has been the DataFrame API. It greatly reduces the JVM-to-PVM communication overhead and improves performance. However, it also complicates the code. Probably some of you have already seen, written, or worked with code like this...
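To make the idea more concrete, here is one possible shape such a proxy could take; it's only an illustration under my own naming, not necessarily the code discussed in the post:

```python
from pyspark.sql import functions as F


class ColumnProxy:
    """Hypothetical proxy turning attribute access into column references."""

    def __init__(self, *columns: str):
        self._columns = set(columns)

    def __getattr__(self, name: str):
        # only called when the attribute is not found on the instance
        if name not in self._columns:
            raise AttributeError(f"Unknown column: {name}")
        return F.col(name)


orders = ColumnProxy("order_id", "amount")
# instead of hardcoded strings scattered across the code:
# orders_df.select(orders.order_id, (orders.amount * 2).alias("doubled_amount"))
```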

Continue Reading β†’

Data quality on Databricks - DQX

In the last blog post of the data quality on Databricks series, we're going to discover a Databricks Labs product: the DQX library.

Continue Reading β†’

Data quality on Databricks - Spark Expectations

Previously we learned how to control data quality with Delta Live Tables. Now it's time to see an open source library in action: Spark Expectations.

Continue Reading β†’

Data quality on Databricks - Delta Live Tables

Data quality is one of the key factors of a successful data project. Without good quality, even the most advanced engineering or analytics work will not be trusted and, therefore, not used. Unfortunately, data quality controls are very often treated as a work item to implement at the end, which sometimes translates to never.
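As a preview, a minimal sketch of expectations in the Delta Live Tables Python API, assuming a hypothetical raw_orders source table:

```python
import dlt


@dlt.table(comment="Orders with basic quality controls")
@dlt.expect("non_negative_amount", "amount >= 0")              # keep the row, record the violation
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # drop rows breaking the rule
def clean_orders():
    # spark is provided by the DLT runtime
    return spark.read.table("raw_orders")
```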

Continue Reading β†’

Data contracts and Bitol project

Data contracts were a hot topic in the data space before LLMs and GenAI came out. They promised a better world with fewer communication issues between teams, leading to more reliable and trustworthy data. Unfortunately, the promise has been too hard to put into practice. Has been, or should I write "was"?

Continue Reading β†’

Apache Airflow XCom in Databricks with task values

If you have been working with Apache Airflow already, you have certainly met XComs at some point. You know, these variables that you can "exchange" between tasks within the same DAG. If, after switching to Databricks Workflows for data orchestration, you're wondering how to do the same, there is good news: Databricks supports this exchange capability natively with task values.
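For the impatient, a minimal sketch of both sides of the exchange; the task name upstream_task is of course a placeholder:

```python
# notebook of the producing task (dbutils is provided by the Databricks runtime)
dbutils.jobs.taskValues.set(key="processed_date", value="2024-05-01")

# notebook of the consuming task
processed_date = dbutils.jobs.taskValues.get(
    taskKey="upstream_task",    # name of the task that set the value
    key="processed_date",
    default="1970-01-01",
    debugValue="1970-01-01"     # returned when the notebook runs outside a job
)
```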

Continue Reading β†’

File trigger in Databricks

For over two years now, you have been able to leverage file triggers in Databricks Jobs to start processing as soon as a new file gets written to your storage. The feature looks amazing but hides some implementation challenges that we're going to see in this blog post.

Continue Reading β†’

The saveAsTable in Apache Spark SQL, alternative to insertInto

Is there an easier way to address insertInto's position-based data writing in Apache Spark SQL? Totally, if you use a column-based method such as saveAsTable in append mode.
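In a nutshell, and assuming a hypothetical orders table and an orders_df dataframe, the column-based alternative looks like this:

```python
# saveAsTable resolves the dataframe columns against the table schema by name,
# so a different column order in the dataframe is not a problem
(orders_df.write
    .mode("append")
    .saveAsTable("orders"))
```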

Continue Reading β†’

Dealing with quotas and limits - Apache Spark Structured Streaming for Amazon Kinesis Data Streams

Using cloud managed services is often a love-and-hate story. On the one hand, they abstract a lot of tedious administrative work to let you focus on the essentials. On the other, they often come with quotas and limits that you, as a data engineer, have to take into account in your daily work. These limits become even more serious when you operate in a latency-sensitive context such as stream processing.

Continue Reading β†’

Overwriting partitioned tables in Apache Spark SQL

After publishing my blog post about the insertInto trap, I got an intriguing question in the comments. The alternative to insertInto, the saveAsTable method, doesn't work well on partitioned data in overwrite mode, while insertInto does. True, but is there an alternative that doesn't require using this position-based function?
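One setting that inevitably comes up in this discussion is the dynamic partition overwrite mode. Below is a minimal sketch of how it is usually enabled, with a hypothetical daily_events partitioned table and events_df dataframe; whether it really answers the question is what the post explores:

```python
# with the dynamic mode, an overwrite only replaces the partitions present
# in the written dataframe instead of truncating the whole table
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

(events_df.write
    .mode("overwrite")
    .insertInto("daily_events"))
```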

Continue Reading β†’

The insertInto trap in Apache Spark SQL

Even though Apache Spark SQL provides an API for structured data, the framework sometimes behaves unexpectedly. That's the case for the insertInto operation, which can even lead to data quality issues. Why? Let's try to understand it in this short article.
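To illustrate the kind of surprise hiding there, a minimal sketch with a hypothetical users table:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.sql("CREATE TABLE IF NOT EXISTS users (id INT, age INT) USING parquet")

# the dataframe declares its columns in a different order than the table
df = spark.createDataFrame([(30, 1)], schema="age INT, id INT")

# insertInto matches columns by position, not by name:
# 30 silently lands in the id column and 1 in the age column
df.write.insertInto("users")
```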

Continue Reading β†’