Delta Lake 3.0 Announced

Delta Lake has been an absolute pleasure to work with over the last couple of years, and it has solved plenty of long-standing issues with data lakes through the Delta file format. Now it becomes even more powerful with its third major version.

Databricks recently announced version 3.0 of Delta Lake and the Delta file format, and in this article we're going to look at what this new iteration brings. I'm also hoping to put together an example of Delta Lake 3.0 at work in the coming weeks to demonstrate these new capabilities. I'm especially excited about UniForm.

Delta Universal Format (UniForm)

Delta files power Delta Lakes all over the globe, adding database capabilities to traditional data lakes. Never mind the ACID transaction support; the MERGE capability alone is a powerful add-on to plain Parquet-based lakes. Delta wasn't alone in making data lakes a better place, however; Apache Hudi and Apache Iceberg have been contributing to the ecosystem too. But as usual, multiple competing file formats cause interoperability issues, requiring clients to support not just Delta files, but also Hudi and Iceberg files.
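To make that concrete, here's a minimal PySpark sketch of a Delta MERGE upsert. It assumes a Spark session already configured with Delta Lake, and the table names are hypothetical:

```python
# Minimal sketch of a Delta MERGE (upsert) through Spark SQL.
# Assumes a Spark session already configured with Delta Lake;
# the table names (customers, customer_updates) are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Update matching rows and insert new ones in one atomic operation --
# something a plain Parquet table can't do.
spark.sql("""
    MERGE INTO customers AS t
    USING customer_updates AS s
    ON t.customer_id = s.customer_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```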

Delta Lake 3.0 brings a solution to this problem: the Delta engine will now generate the necessary metadata for Hudi and Iceberg under the hood, so a table written with Delta can be read by any other client that supports those formats.

Image Credit: Databricks Blog

What's the benefit, you ask? Think of it like this: you want to purchase a third-party tool that supports Apache Hudi and Apache Iceberg, as they are, well, Apache Foundation technologies. But you have invested so much into Delta Lake that you don't want to duplicate data that already exists in your layers as Delta files. The solution? Delta Lake 3.0 will generate the necessary metadata so those files can be easily read and queried by your potential new purchase.

Does this mean that Iceberg and Hudi adapters will support writing this UniForm format? Not right now. It'll require the Apache projects to play along and support the same metadata generation process. Currently, UniForm tables are read-only from the Hudi and Iceberg side.
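For reference, here's a hedged sketch of what enabling UniForm looks like, based on the delta.universalFormat.enabledFormats table property from the 3.0 announcement; the table and columns are hypothetical:

```python
# Hedged sketch: enabling UniForm on a new Delta table.
# Assumes Spark with Delta Lake 3.0; table/column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# With UniForm enabled, Delta also generates Iceberg-compatible metadata,
# so Iceberg readers can query the same underlying Parquet files.
spark.sql("""
    CREATE TABLE sales (order_id BIGINT, amount DOUBLE)
    USING DELTA
    TBLPROPERTIES ('delta.universalFormat.enabledFormats' = 'iceberg')
""")
```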

Delta Kernel

It has also been a pain to keep the connector extensions in sync to support the Delta format, as each platform runs on a different application runtime and language. That requires the Delta protocol to be re-implemented in each runtime and kept regularly updated.

Delta Kernel solves this by acting as a middle layer: it wraps the under-the-hood Delta operations in a set of Java libraries that can read from (and, according to their website, soon write to) Delta tables without requiring each connector to implement the protocol in detail. This should allow the ecosystem to adopt Delta Lake faster, as connector authors won't need to know how Delta works under the hood; Kernel acts as a framework that abstracts the platforms from Delta's inner workings.
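To illustrate the idea only (the actual Delta Kernel API is a Java library; every name below is hypothetical), here's roughly what a connector gains from such an abstraction:

```python
# Purely conceptual illustration of the Kernel idea -- NOT the real
# Delta Kernel API (which is Java). All class and method names here
# are hypothetical.
from dataclasses import dataclass
from typing import List


@dataclass
class Snapshot:
    """A resolved view of a Delta table at a single version."""
    version: int
    data_files: List[str]  # Parquet files a reader should scan


class KernelFacade:
    """Stand-in for the kernel: hides transaction-log replay,
    checkpoints, and protocol details behind one small surface."""

    def latest_snapshot(self, table_path: str) -> Snapshot:
        # The real kernel replays the _delta_log here; a connector
        # never has to implement that protocol itself.
        raise NotImplementedError


def connector_scan(kernel: KernelFacade, table_path: str) -> List[str]:
    # A connector (Trino, Flink, ...) only asks "which files make up
    # the latest version?" and reads them with its own Parquet reader.
    return kernel.latest_snapshot(table_path).data_files
```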

Liquid Clustering

Databases have suffered from changing queries and requirements for as long as they have existed, and Delta Lakes suffer the same fate. You may create your tables with a certain partitioning scheme in mind to answer today's queries fast, but it probably won't be as fast in the future when clients start querying the data in various other ways.

Liquid Clustering brings a solution to the table by dynamically adjusting the data layout based on data patterns. According to Databricks, this helps avoid the over-partitioning and under-partitioning that happen with Hive-style partitioning.
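Here's a hedged sketch of declaring a liquid clustered table with the CLUSTER BY clause from the 3.0 preview; table and column names are hypothetical:

```python
# Hedged sketch: creating a Delta table with Liquid Clustering.
# Assumes Spark with the Delta Lake 3.0 preview; names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# CLUSTER BY replaces PARTITIONED BY: the clustering keys guide the
# file layout without locking the table into rigid directory partitions.
spark.sql("""
    CREATE TABLE events (event_id BIGINT, event_date DATE, country STRING)
    USING DELTA
    CLUSTER BY (event_date, country)
""")
```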

Liquid Clustering looks easy and straightforward to adopt, but I've yet to see it in action before I make up my mind. In my experience, nothing dynamic works without any supervision. So, we'll see soon how well this promise holds up.

Further Reading

Announcing Delta Lake 3.0: New Universal Format Offers Automatic Compatibility for Apache Iceberg and Apache Hudi (Databricks blog)
Announcing Delta Lake 3.0 with New Universal Format and Liquid Clustering
Release Delta Lake 3.0.0 Preview · delta-io/delta (GitHub)
delta/kernel at master · delta-io/delta (GitHub)
Harun Legoz

I'm a cloud solutions architect with a coffee obsession. I've been building apps and data platforms for over 18 years, and I also blog about Azure & Microsoft Fabric. Feel free to say hi on Twitter/X!
