Best way to annotate large parquet LLM logs without full rewrites?

2•platypii•1mo ago
I asked this on the Apache mailing list but haven’t found a good solution yet. Wondering if anyone has ideas for how to engineer this.

Here’s my problem: I have gigabytes of LLM conversation logs in parquet in S3. I want to add per-row annotations (llm-as-a-judge scores), ideally without touching the original text data.

So for a given dataset, I want to add a new column. This seemed like a perfect use case for Iceberg. Iceberg does let you evolve the table schema, including adding a column. BUT you can only add a column with a default value. If I want to fill in that column with annotations, ICEBERG MAKES ME REWRITE EVERY ROW. So despite being built on parquet, a column-oriented format, I have to rewrite the entire source text data (gigabytes) just to add ~1 MB of annotations. This feels wildly inefficient.

I considered just storing the column in its own table and then joining them. This does work, but the joins are annoying to work with, and I suspect query engines don't optimize a "join on row_number" operation well.

I've been exploring using little-known features of parquet like the file_path field to store column data in external files. But literally zero parquet clients support this.

I'm running out of ideas for how to work with this data efficiently. It's bad enough that I am considering building my own table format if I can’t find a solution. Anyone have suggestions?

Comments

joistef•1mo ago
A few things that might help:

The Delta Lake suggestion is tempting but I don't think it actually solves your core problem - you still need to rewrite rows to populate a new column with non-default values. Their deletion vectors help with row-level updates, not column additions.

The separate table approach is probably your best bet, but I'd avoid row_number joins. If you can do a one-time rewrite to add a content hash or deterministic row ID, your annotation table becomes (row_id, score) and the join is trivial. Yes, it's a one-time cost, but it pays off every time you add a new annotation column.

If a rewrite is truly off the table, the overlay pattern works: keep annotations in separate parquet files keyed by (file_path, row_group_index, row_offset_within_group). DuckDB handles this reasonably well. The ergonomics aren't great but it's honest about the tradeoff you're making.

The parquet metadata idea is clever but metadata is size-limited and not designed for per-row data - you'd be fighting the format.

What query engine are you using? That might change the calculus.

hodgesrm•1mo ago
If it's under a few terabytes, copy it to ClickHouse and store it in the native format, make your change, and dump it back out again. ClickHouse reads and writes Parquet very efficiently and supports efficient distribution of work across clusters. It's a rewrite but it's fast.