The Data Lakehouse Won. Now Pick a Table Format.
If you’ve been ignoring the data infrastructure conversation for the last few years, here’s where we landed in 2026: the data lakehouse won. The data warehouse vendors will fight about it for another decade, but the architectural argument is over.
Let me back up.
The Quick History
At the bottom of every modern data stack is a cloud storage bucket. S3, Azure Blob, GCS. Pick your hyperscaler. A bucket is dumb on purpose. It stores files cheaply and durably and doesn’t care what’s in them. No schemas, no transactions, no relational anything. Just objects.
When you dump raw logs, IoT telemetry, and CSV exports into a bucket without any organizing layer, congratulations, you have a data lake. Cheap, flexible, and almost completely useless for analytics until someone builds a pipeline to make sense of it.
The traditional answer to that mess was a data warehouse. Snowflake, Redshift, BigQuery, the whole gang. You force your data through ETL, conform it to a strict schema, and pay a premium to keep it sitting in the vendor’s proprietary storage format. You get fast SQL, ACID transactions, and a vendor lock-in problem so severe that exporting your data becomes a major friction point.
The lakehouse is what happens when someone finally says: what if we kept the cheap object storage, but added the warehouse features as a layer on top?
What a Lakehouse Actually Is
The trick is decoupling. Storage stays in your bucket. Compute is whatever engine you point at it. Metadata lives in an open table format that turns a pile of Parquet files into something that behaves like a real database table.
One copy of the data, queryable by multiple engines. Schema evolution, time travel, ACID transactions, all without copying everything into a proprietary system. From what I’ve read, teams that move from a pure warehouse to a lakehouse tend to cut storage costs noticeably, and they stop fighting their ML team over access to the same data.
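Here’s roughly what that looks like from Spark. A minimal sketch, not a recipe: the catalog name, bucket, and staging_events table are all invented, and I’m assuming the Iceberg runtime jar for your Spark version is already on the classpath.

```python
from pyspark.sql import SparkSession

# Compute is just an engine pointed at a bucket; the catalog config
# below is the metadata layer that makes the files behave like a table.
spark = (
    SparkSession.builder
    .appName("lakehouse-sketch")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    # Storage stays in your own bucket. Path is invented.
    .config("spark.sql.catalog.lake.warehouse", "s3a://my-bucket/warehouse")
    .getOrCreate()
)

# ACID write: the whole insert commits atomically or not at all.
# (staging_events is a hypothetical source table.)
spark.sql("INSERT INTO lake.db.events SELECT * FROM staging_events")

# Schema evolution: add a column without rewriting a single data file.
spark.sql("ALTER TABLE lake.db.events ADD COLUMN region STRING")

# Time travel: list snapshots, then query the table as it was.
# (VERSION AS OF needs Spark 3.3+.)
snap = spark.sql(
    "SELECT snapshot_id FROM lake.db.events.snapshots ORDER BY committed_at"
).first()[0]
spark.sql(f"SELECT count(*) FROM lake.db.events VERSION AS OF {snap}").show()
```

Point Trino, Flink, or DuckDB at the same catalog and they read the same files. That’s the decoupling in practice.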
That’s the pitch, and it’s a good one. The hard part is picking your table format.
The Four Formats Worth Knowing
Apache Iceberg
Iceberg is the one to bet on if you care about not getting locked in. It came out of Netflix, and by now even Snowflake and Databricks have been forced to support it. The metadata is hierarchical, which sounds boring but matters: it lets query engines skip enormous chunks of irrelevant data without listing directories one by one. Iceberg also handles partition evolution gracefully, so you can change your partitioning strategy without rewriting petabytes of history.
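Partition evolution is easier to show than explain. A sketch using Iceberg’s Spark SQL extensions, with an invented table name, assuming the extensions jar is configured:

```python
# Start with daily partitions.
spark.sql("""
    CREATE TABLE lake.db.events (
        event_id BIGINT,
        ts TIMESTAMP,
        payload STRING
    )
    USING iceberg
    PARTITIONED BY (days(ts))
""")

# Traffic grew, so switch new writes to hourly partitions. Existing
# files keep their old layout; Iceberg tracks both specs in metadata.
spark.sql("ALTER TABLE lake.db.events ADD PARTITION FIELD hours(ts)")
spark.sql("ALTER TABLE lake.db.events DROP PARTITION FIELD days(ts)")
```

No rewrite of history, which is the part that matters at petabyte scale.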
If I’m starting a new lakehouse in 2026 and I don’t have a strong reason to pick something else, it’s Iceberg.
Delta Lake
Delta is what Databricks ships and what everyone using Spark already knows. It uses an append-only transaction log in a _delta_log directory, and it’s beautifully integrated with the Databricks platform. Z-Ordering, native Spark performance, the whole ecosystem.
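A minimal sketch of those pieces in PySpark, assuming delta-spark is installed and the session was built with Delta’s extensions; the path and columns are invented:

```python
path = "s3a://my-bucket/tables/events"  # invented path

df = spark.createDataFrame([(1, "click"), (2, "view")], ["user_id", "action"])

# Each commit appends a JSON entry under <path>/_delta_log/.
df.write.format("delta").mode("append").save(path)

# Time travel straight off the transaction log.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)

# Z-Ordering co-locates related rows so selective queries skip files.
# OPTIMIZE ... ZORDER BY has been in open-source Delta since 2.0.
spark.sql(f"OPTIMIZE delta.`{path}` ZORDER BY (user_id)")
```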
If your team lives inside Databricks, Delta is the obvious answer. If you don’t, the calculus is harder: Delta’s openness has improved a lot, but it still feels most at home in the Databricks world.
Apache Hudi
Hudi came out of Uber and it was built for one thing: high-frequency upserts. If your problem is Change Data Capture, streaming ingestion, or constant record-level updates, Hudi is probably your answer. It gives you two storage modes. Copy-on-Write rewrites files on update so reads stay fast. Merge-on-Read writes deltas and reconciles them at query time, which is what you want when writes are heavy and reads can tolerate a little merge overhead.
Hudi is the right pick when your pipeline is full of UPSERT and you can’t afford to rewrite large files every time something changes.
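For flavor, here’s roughly what a Merge-on-Read upsert looks like in PySpark. Every name is invented, and you’d need the Hudi Spark bundle on the classpath; treat it as a sketch of the knobs, not a tuned config.

```python
# updates_df is a hypothetical DataFrame of changed records, e.g. from CDC.
hudi_opts = {
    "hoodie.table.name": "rides",
    # Merge-on-Read: updates land in delta files, merged at query time.
    "hoodie.datasource.write.table.type": "MERGE_ON_READ",
    "hoodie.datasource.write.operation": "upsert",
    # Which column identifies a record, and which version wins on conflict.
    "hoodie.datasource.write.recordkey.field": "ride_id",
    "hoodie.datasource.write.precombine.field": "updated_at",
}

(updates_df.write.format("hudi")
    .options(**hudi_opts)
    .mode("append")
    .save("s3a://my-bucket/tables/rides"))
```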
Apache Paimon
Paimon is the newest of the four and it’s worth keeping an eye on. It came from the Flink world and uses an LSM-tree style organization, which is what storage engines like RocksDB use under the hood. The whole point is unifying batch and streaming in a single format. If you’re doing real-time event-driven work and don’t want to maintain separate streaming and batch stacks, Paimon is interesting.
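Here’s roughly what that unification looks like from PyFlink. The warehouse path and table are invented, and I’m assuming the paimon-flink jar is on the Flink classpath.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Register a Paimon catalog backed by plain object storage.
t_env.execute_sql("""
    CREATE CATALOG lake WITH (
        'type' = 'paimon',
        'warehouse' = 's3://my-bucket/paimon'
    )
""")
t_env.execute_sql("USE CATALOG lake")

# A primary-keyed table backed by LSM files: streaming jobs upsert
# into it, and the exact same table serves batch queries.
t_env.execute_sql("""
    CREATE TABLE IF NOT EXISTS user_state (
        user_id BIGINT,
        last_event STRING,
        updated_at TIMESTAMP(3),
        PRIMARY KEY (user_id) NOT ENFORCED
    )
""")
```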
It’s not the safe choice yet, but it’s the one I’d watch most closely over the next two years.
So Which One?
Honestly, the answer depends less on the format and more on which ecosystem you’re already in.
- Mostly Spark and Databricks? Delta.
- Streaming-heavy with constant upserts? Hudi.
- Real-time event-driven and willing to bet on newer tech? Paimon.
- Anything else, or you want to keep your options open? Iceberg.
The format wars have mostly converged. Most major engines support multiple formats now, and the gap between them on raw query performance has shrunk. The choice is more about operational fit than performance ceilings.
The lakehouse pattern itself is the real story. The format is just plumbing.