Here’s my problem: I have gigabytes of LLM conversation logs stored as parquet in S3, and I want to add per-row annotations (LLM-as-a-judge scores), ideally without touching the original text data.
So for a given dataset, I want to add a new column. This seemed like a perfect use case for Iceberg. Iceberg does let you evolve the table schema, including adding a column, BUT the new column only reads back as a default value (typically null). If I want to fill it in with actual annotations, ICEBERG MAKES ME REWRITE EVERY ROW. So despite being built on parquet, a column-oriented format, I have to rewrite the entire source text data (gigabytes) just to add ~1 MB of annotations. This feels wildly inefficient.
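For concreteness, here's roughly what I'm running into with pyiceberg (catalog, table, and column names are made up). The schema change itself is metadata-only; it's filling in the column that forces the rewrite:

```python
from pyiceberg.catalog import load_catalog
from pyiceberg.types import FloatType

catalog = load_catalog("default")                  # assumes a configured catalog
table = catalog.load_table("logs.conversations")   # hypothetical table name

# Adding the column is cheap: it's a metadata-only change, and existing
# rows simply read back null for the new field.
with table.update_schema() as update:
    update.add_column("judge_score", FloatType())

# Populating it is not cheap. There's no "write just this column" path:
# any UPDATE/MERGE (or table.overwrite) ends up rewriting the parquet
# data files that contain the affected rows, text columns and all.
```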
I considered just storing the new column in its own table and joining the two. This works, but the joins are annoying to work with, and I suspect query engines don't optimize a "join on row_number" operation well.
I've also been exploring little-known corners of parquet, like the file_path field in the column-chunk metadata, which in principle lets column data live in external files. But literally zero parquet clients support it.
I'm running out of ideas for how to work with this data efficiently. It's bad enough that I am considering building my own table format if I can’t find a solution. Anyone have suggestions?
joistef•1mo ago
The Delta Lake suggestion is tempting but I don't think it actually solves your core problem - you still need to rewrite rows to populate a new column with non-default values. Their deletion vectors help with row-level updates, not column additions.
The separate table approach is probably your best bet, but I'd avoid row_number joins. If you can do a one-time rewrite to add a content hash or deterministic row ID, your annotation table becomes (row_id, score) and the join is trivial. Yes, it's a one-time cost, but it pays off every time you add a new annotation column.
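As a rough sketch of what I mean (paths and column names are made up, and this assumes DuckDB with S3 access configured):

```python
import duckdb

con = duckdb.connect()

# One-time rewrite: derive a deterministic row ID from the conversation
# text itself, so it stays stable across any future rewrites.
con.sql("""
    COPY (
        SELECT md5(conversation) AS row_id, *
        FROM read_parquet('s3://bucket/logs/*.parquet')
    ) TO 's3://bucket/logs_with_ids.parquet' (FORMAT PARQUET)
""")

# Every annotation pass after that is just a tiny (row_id, score) file
# plus a cheap equality join; the big text data never gets rewritten again.
scored = con.sql("""
    SELECT t.*, a.judge_score
    FROM read_parquet('s3://bucket/logs_with_ids.parquet') t
    LEFT JOIN read_parquet('s3://bucket/annotations/judge_v1.parquet') a
    USING (row_id)
""")
```

If identical conversations can show up more than once you'd want to mix something else into the hash (source file, timestamp), but the shape is the same.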
If a rewrite is truly off the table, the overlay pattern works: keep annotations in separate parquet files keyed by (file_path, row_group_index, row_offset_within_group). DuckDB handles this reasonably well. The ergonomics aren't great but it's honest about the tradeoff you're making.
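One variant that's easy in DuckDB: read_parquet can expose the source filename and the row's position within the file, so a two-part (file, row-in-file) key works without ever touching the originals. Very roughly (made-up paths):

```python
import duckdb

con = duckdb.connect()

# Annotation files only need (file_path, row_offset, judge_score).
# filename=true / file_row_number=true add those keys to the source scan,
# so the big parquet files are read as-is and never rewritten.
overlay = con.sql("""
    SELECT t.*, a.judge_score
    FROM read_parquet('s3://bucket/logs/*.parquet',
                      filename = true,
                      file_row_number = true) t
    LEFT JOIN read_parquet('s3://bucket/annotations/*.parquet') a
      ON t.filename = a.file_path
     AND t.file_row_number = a.row_offset
""")
```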
The parquet metadata idea is clever but metadata is size-limited and not designed for per-row data - you'd be fighting the format.
What query engine are you using? That might change the calculus.