frontpage.

Show HN: Strange Attractors

https://blog.shashanktomar.com/posts/strange-attractors
162•shashanktomar•2h ago•18 comments

S.a.r.c.a.s.m: Slightly Annoying Rubik's Cube Automatic Solving Machine

https://github.com/vindar/SARCASM
52•chris_overseas•3h ago•7 comments

Futurelock: A subtle risk in async Rust

https://rfd.shared.oxide.computer/rfd/0609
255•bcantrill•9h ago•107 comments

The Last PCB You'll Ever Buy [video]

https://www.youtube.com/watch?v=A_IUIyyqw0M
58•surprisetalk•4d ago•15 comments

Introducing architecture variants

https://discourse.ubuntu.com/t/introducing-architecture-variants-amd64v3-now-available-in-ubuntu-...
168•jnsgruk•1d ago•111 comments

Addiction Markets

https://www.thebignewsletter.com/p/addiction-markets-abolish-corporate
185•toomuchtodo•8h ago•162 comments

My Impressions of the MacBook Pro M4

https://michael.stapelberg.ch/posts/2025-10-31-macbook-pro-m4-impressions/
122•secure•16h ago•166 comments

Use DuckDB-WASM to query TB of data in browser

https://lil.law.harvard.edu/blog/2025/10/24/rethinking-data-discovery-for-libraries-and-digital-h...
143•mlissner•8h ago•35 comments

A theoretical way to circumvent Android developer verification

https://enaix.github.io/2025/10/30/developer-verification.html
94•sleirsgoevy•5h ago•59 comments

Fungus: The Befunge CPU (2015)

https://www.bedroomlan.org/hardware/fungus/
3•onestay42•38m ago•0 comments

Leaker reveals which Pixels are vulnerable to Cellebrite phone hacking

https://arstechnica.com/gadgets/2025/10/leaker-reveals-which-pixels-are-vulnerable-to-cellebrite-...
202•akyuu•1d ago•111 comments

Hacking India's largest automaker: Tata Motors

https://eaton-works.com/2025/10/28/tata-motors-hack/
136•EatonZ•3d ago•46 comments

Kerkship St. Jozef, Antwerp – WWII German Concrete Tanker

https://thecretefleet.com/blog/f/kerkship-st-jozef-antwerp-%E2%80%93-wwii-german-concrete-tanker
6•surprisetalk•1w ago•1 comment

Perfetto: Swiss army knife for Linux client tracing

https://lalitm.com/perfetto-swiss-army-knife/
101•todsacerdoti•14h ago•9 comments

How We Found 7 TiB of Memory Just Sitting Around

https://render.com/blog/how-we-found-7-tib-of-memory-just-sitting-around
105•anurag•1d ago•22 comments

Active listening: the Swiss Army Knife of communication

https://togetherlondon.com/insights/active-listening-swiss-army-knife
15•lucidplot•4d ago•6 comments

I Love My Wife, My Wife Is Dead

https://www.bingqiangao.com/poetry/i-love-my-wife-my-wife-is-dead
15•nsoonhui•1h ago•1 comments

Llamafile Returns

https://blog.mozilla.ai/llamafile-returns/
92•aittalam•2d ago•14 comments

Nix Derivation Madness

https://fzakaria.com/2025/10/29/nix-derivation-madness
153•birdculture•11h ago•54 comments

AI scrapers request commented scripts

https://cryptography.dog/blog/AI-scrapers-request-commented-scripts/
188•ColinWright•10h ago•132 comments

Show HN: Pipelex – Declarative language for repeatable AI workflows

https://github.com/Pipelex/pipelex
77•lchoquel•3d ago•15 comments

Signs of introspection in large language models

https://www.anthropic.com/research/introspection
104•themgt•1d ago•50 comments

Sustainable memristors from shiitake mycelium for high-frequency bioelectronics

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0328965
109•PaulHoule•12h ago•53 comments

Pangolin (YC S25) Is Hiring a Full Stack Software Engineer (Open-Source)

https://docs.pangolin.net/careers/software-engineer-full-stack
1•miloschwartz•9h ago

Lording it, over: A new history of the modern British aristocracy

https://newcriterion.com/article/lording-it-over/
48•smushy•6d ago•98 comments

The cryptography behind electronic passports

https://blog.trailofbits.com/2025/10/31/the-cryptography-behind-electronic-passports/
133•tatersolid•14h ago•82 comments

OpenAI updates terms to forbid usage for medical and legal advice

https://openai.com/policies/usage-policies/
20•piskov•2h ago•11 comments

Attention lapses due to sleep deprivation coincide with flushing of fluid from the brain

https://news.mit.edu/2025/your-brain-without-sleep-1029
516•gmays•13h ago•251 comments

Apple reports fourth quarter results

https://www.apple.com/newsroom/2025/10/apple-reports-fourth-quarter-results/
123•mfiguiere•1d ago•180 comments

The 1924 New Mexico regional banking panic

https://nodumbideas.com/p/labor-day-special-the-1924-new-mexico
44•nodumbideas•1w ago•1 comment

Use DuckDB-WASM to query TB of data in browser

https://lil.law.harvard.edu/blog/2025/10/24/rethinking-data-discovery-for-libraries-and-digital-humanities/
143•mlissner•8h ago

Comments

mlissner•8h ago
OK, this is really neat:

- S3 is really cheap static storage for files.
- DuckDB is a database that uses S3 for its storage.
- WASM lets you run binary (non-JS) code in your browser.
- DuckDB-Wasm allows you to run a database in your browser.

Put all of that together, and you get a website that queries S3 with no backend at all. Amazing.
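For the curious, a minimal sketch of that setup with the @duckdb/duckdb-wasm package, following its documented bundle-selection pattern; the Parquet URL is hypothetical:

  import * as duckdb from "@duckdb/duckdb-wasm";

  async function queryParquetFromS3() {
    // Pick the WASM bundle this browser supports and run it in a Web Worker.
    const bundle = await duckdb.selectBundle(duckdb.getJsDelivrBundles());
    const workerUrl = URL.createObjectURL(
      new Blob([`importScripts("${bundle.mainWorker}");`], { type: "text/javascript" })
    );
    const db = new duckdb.AsyncDuckDB(new duckdb.ConsoleLogger(), new Worker(workerUrl));
    await db.instantiate(bundle.mainModule, bundle.pthreadWorker);

    // Query a remote Parquet file; DuckDB fetches only the byte ranges it needs.
    const conn = await db.connect();
    const result = await conn.query(`
      SELECT year, count(*) AS n
      FROM read_parquet('https://example-bucket.s3.amazonaws.com/records.parquet')
      GROUP BY year ORDER BY n DESC LIMIT 10
    `);
    console.table(result.toArray().map((row) => row.toJSON()));

    await conn.close();
    await db.terminate();
  }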

timeflex•7h ago
S3 might be relatively cheap for storing files, but with bandwidth you could easily be paying $230/mo. If you make it public-facing and want to use their cloud reporting, metrics, etc. to prevent people from running up your bandwidth, your "really cheap" static hosting could easily cost you more than $500/mo.
theultdev•6h ago
R2 is S3-compatible with no egress fees.

Cloudflare actually has built-in Iceberg support for R2 buckets. It's quite nice.

Combine that with their Pipelines and it's a simple HTTP request to ingest; then just point DuckDB at the Iceberg-enabled R2 bucket to analyze.
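A rough sketch of that flow, assuming DuckDB's iceberg extension is available in your client (whether it works under DuckDB-WASM is questioned further down the thread); the account ID, bucket, and table path are hypothetical, and the exact path depends on how your catalog exposes the table:

  // All names here are placeholders; R2 credentials come from a Cloudflare API token.
  const sql = `
    INSTALL iceberg; LOAD iceberg;
    CREATE SECRET r2_secret (
      TYPE r2,
      KEY_ID 'YOUR_KEY_ID',
      SECRET 'YOUR_SECRET',
      ACCOUNT_ID 'YOUR_ACCOUNT_ID'
    );
    SELECT count(*) FROM iceberg_scan('r2://example-bucket/events');
  `;
  // Run through whatever DuckDB connection you have, e.g. conn.query(sql).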

greatNespresso•5h ago
Was about to jump in to say the same thing. R2 is a much cheaper alternative to S3 that just works; I have used it with DuckDB and it works smoothly.
apwheele•2h ago
For a demo of this (although I'm not sure DuckDB-WASM works with Iceberg): https://andrewpwheeler.com/2025/06/29/using-duckdb-wasm-clou...
7952•4h ago
I think this approach makes sense for services with a small number of users relative to the data they are searching. That just isn't a good fit for a lot of hosted services. Think how much those TBs of data would cost on Algolia or similar services.

You have to store the data somehow anyway, and you have to retrieve some of it to service a query. If egress costs too much, you could always move the browser code onto a server later. It should also be possible to quantify the trade-off between processing the data client-side and on the server.

simonw•2h ago
Stick it behind Cloudflare and it should be effectively free.
rubenvanwyk•6h ago
Or use R2 instead. It’s even easier.
thadt•6h ago
S3 is doing quite a lot of sophisticated lifting to qualify as no backend at all.

But yeah - this is pretty neat. This easily seems like where static datasets should wind up in the future: just data, with some well-chosen indices.

theultdev•6h ago
Still qualifies imo. Everything is static and on a CDN.

Lack of server/dynamic code qualifies as no backend.

simonw•2h ago
I believe all S3 has to do here is respond to HTTP Range queries, which are supported by almost every static server out there - Apache, Nginx etc should all support the same trick.
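A range request is just an ordinary GET with a Range header, to which a cooperating server replies 206 Partial Content with only those bytes; a quick sketch against a hypothetical URL:

  // Ask for only the first KiB of a remote file.
  const res = await fetch("https://example-bucket.s3.amazonaws.com/records.parquet", {
    headers: { Range: "bytes=0-1023" },
  });
  console.log(res.status);                        // 206 if ranges are honored, 200 (whole file) if not
  console.log(res.headers.get("Content-Range"));  // e.g. "bytes 0-1023/1073741824"
  const firstKiB = new Uint8Array(await res.arrayBuffer());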
thadt•1h ago
100%. I’m with y’all - this is what I would also call a “no-backend” solution, and I’m all in on this type of approach for static data sets - this is the future, and it could be served with a very simple web server.

I’m just bemused that we all refer to one of the larger, more sophisticated storage systems on the planet, composed of dozens of subsystems and thousands of servers, as “no backend at all.” Kind of a “draw the rest of the owl.”

amazingamazing•6h ago
Neat. Can you use DuckDB backed by another store like RocksDB or something? Also, I wonder how one stops DDoS. Put the whole thing behind Cloudflare?
wewewedxfgdf•6h ago
I tried DuckDB - liked it a lot - was ready to go further.

But I found it a real hassle to get it to use the right number of threads and the right amount of memory.

This led to lots of crashes. If you look at the project's GitHub issues you will see many OOM (out of memory) errors.

And then there was some indexing bug that crashed, seemingly unrelated to memory.

Life is too short for crashy database software, so I reluctantly dropped it. I was disappointed because it was exactly what I was looking for.
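For reference, the knobs in question are plain DuckDB settings; a minimal sketch with illustrative values, reusing a connection like the `conn` in the earlier snippet (the temp directory applies to native DuckDB, not the browser build):

  // Cap memory and parallelism explicitly rather than relying on the defaults.
  await conn.query(`SET memory_limit = '2GB'`);
  await conn.query(`SET threads TO 4`);
  // Where native DuckDB spills when a query exceeds memory_limit.
  await conn.query(`SET temp_directory = '/tmp/duckdb_spill'`);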

lalitmaganti•6h ago
+1, this was my experience trying it out as well. I find that for getting started and for simple use cases it works amazingly. But I have quite a lot of concerns about how it scales to more complex and esoteric workloads.

Non-deterministic OOMs in particular are among the worst failure modes for the sort of tools I'd want to use DuckDB in, and as you say, I found them more common than I would like.

tuhgdetzhh•6h ago
I can recommend earlyoom (https://github.com/rfjakob/earlyoom). Instead of letting your system freeze or crash, it kills the memory-eating process (in this case DuckDB) just in time. That lets you retry with smaller chunks of the dataset until it fits into your memory.
wewewedxfgdf•6h ago
Yeah, memory and thread management is the job of the application, not me.
QuantumNomad_•5h ago
When there is a specific program I want to run with a limit on how much memory it is allowed to allocate, I have found systemd-run to work well.

It uses cgroups to enforce resource limits.

For example, there’s a program I wrote myself which I run on one of my Raspberry Pis. I had a problem where my program would on rare occasions use up too much memory and I wouldn’t even be able to ssh into the Raspberry Pi.

I run it like this:

  systemd-run --scope -p MemoryMax=5G --user env FOOBAR=baz ./target/release/myprog

The only difficulty I had was that I struggled to find the right name to use in the MemoryMax=… part, because they’ve changed the name around between versions, so different Linux systems may or may not use the same name for the limit.

To figure out if I had the right name, I tested different names with a super small limit that I knew was less than the program needs even in normal conditions. When I found the right name, the program was killed right off the bat as expected, so then I could set the limit to 5G (five gigabytes) and be confident that if it exceeds that, it will be killed instead of making my Raspberry Pi impossible to ssh into again.

thenaturalist•3h ago
This looks amazing!

Have you used this in conjunction with DuckDB?

mritchie712•4h ago
What did you use instead? If you hit OOM with the dataset in DuckDB, I'd think you'd hit OOM with most other things on the same machine.
wewewedxfgdf•4h ago
The software should manage its own memory, not require the developer to set specific memory thresholds. Sure, it's a good thing to be able to say "use no more than X RAM".
thenaturalist•3h ago
How long ago was this? Can you share more context about the data and memory sizes you experienced this with?

DuckDB introduced spilling to disk and some other memory-management tweaks a good year ago now: https://duckdb.org/2024/07/09/memory-management

wewewedxfgdf•3h ago
3 days ago.

The final straw was an index which generated fine on macOS and failed on Linux - exact same code.

Machine had plenty of RAM.

The thing is, it is really the responsibility of the application to regulate its behavior based on available memory. Crashing out just should not be an option, but that's the way DuckDB is built.

jdnier•6h ago
Yesterday there was a somewhat similar DuckDB post, "Frozen DuckLakes for Multi-User, Serverless Data Access". https://news.ycombinator.com/item?id=45702831
85392_school•5h ago
This also reminded me of an approach using SQLite: https://news.ycombinator.com/item?id=45748186
pacbard•2h ago
I set up something similar at work. But it was before the DuckLake format was available, so it just uses manually generated Parquet files saved to a bucket and a light DuckDB catalog that uses views to expose the Parquet files. This lets us update the Parquet files through our ETL process and just refresh the catalog when there is a schema change.

We didn't find the frozen DuckLake setup useful for our use case, mostly because the frozen catalog kind of doesn't make sense with the DuckLake philosophy and the cost-benefit wasn't there over a regular DuckDB catalog. It also made updates cumbersome, because you need to pull the DuckLake catalog, commit the changes, and re-upload the catalog (instead of just directly updating the Parquet files). I get that we are missing the time-travel part of DuckLake, but that's not critical for us; if it becomes important, we would just roll out a PostgreSQL database to manage the catalog.
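A rough sketch of that kind of light catalog, with hypothetical bucket and table names; since the catalog is just views over externally managed Parquet, refreshing it after a schema change means re-running statements like these:

  // Views over the ETL's Parquet output; re-run against the catalog database after schema changes.
  const refreshCatalog = `
    CREATE OR REPLACE VIEW orders AS
      SELECT * FROM read_parquet('s3://example-warehouse/orders/*.parquet');
    CREATE OR REPLACE VIEW customers AS
      SELECT * FROM read_parquet('s3://example-warehouse/customers/*.parquet');
  `;
  // e.g. conn.query(refreshCatalog) with whatever DuckDB client the ETL uses.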

SteveMoody73•5h ago
My initial thought is: why query 1 TB of data in a browser? Maybe I'm the wrong target audience for this, but it seems to push the idea that everything has to be in a browser rather than using appropriate tools.
cyanydeez•5h ago
Browsers are now the write-once, run-everywhere target. Where Java failed, many hope browsers will succeed. WASM is definitely a key to that, particularly because it can be output by toolchains like Rust's, so browsers can also be the appropriate tools.
majormajor•5h ago
Why pay for RAM for servers when you can let your users deal with it? ;)

(Does not seem like a realistic scenario to me for many uses, for RAM among other resource reasons.)

some_guy_nobel•4h ago
The one-word answer is cost.

But if you'd like to read the article instead, you'll see that they explain the reasoning in its first section, titled "Rethinking the Old Trade-Off: Cost, Complexity, and Access".

simonw•2h ago
What appropriate tool would you use for this instead?
shawn-butler•1h ago
I doubt they are querying 1 TB of data in the browser. DuckDB-WASM issues HTTP range requests on behalf of the client to fetch only the bytes required, which is especially handy with Parquet files (a columnar format) because it can skip the columns you don't even need.

But the article is a little light on technical details. In some cases it might make sense to bring the entire file client-side.
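A small illustration of the column-pruning point, with a hypothetical file and the `conn` from the earlier sketch: because Parquet stores each column in separate chunks, the second query lets DuckDB skip the byte ranges of every column it doesn't reference.

  const url = "https://example-bucket.s3.amazonaws.com/records.parquet";

  // Touches byte ranges for every column in the file.
  await conn.query(`SELECT * FROM read_parquet('${url}')`);

  // Touches only the footer metadata plus the 'year' and 'title' column chunks.
  await conn.query(`SELECT year, title FROM read_parquet('${url}') WHERE year >= 2020`);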

r3tr0•2h ago
It's one of the best tricks in the book.

We have been doing it for quite some time in our product to bring real-time system observability with eBPF to the browser, and we have even found other techniques to really max it out beyond what you get off the shelf.

https://yeet.cx

leetrout•2h ago
I built something on top of DuckDB last year but it never got deployed. They wanted to trust Postgres.

I didn't use the in-browser WASM, but I did expose an API endpoint that passed data-exploration queries directly to the backend, like a knock-off of what New Relic does. I also used that same endpoint for all the graphs and metrics in the UI.

DuckDB is phenomenal tech, and I love to use it with data ponds instead of data lakes, although it is very capable with large sets as well.

whalesalad•2h ago
The cool thing about DuckDB is that it can be embedded. We have a data pipeline that produces a DuckDB file and puts it on S3. The app periodically checks that asset's ETag and pulls it down when it changes. Most of our DB interactions use Postgres, but we have one module that leverages DuckDB and this file for reads. So it's definitely not all-or-nothing.
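A sketch of that ETag-polling pattern, with a hypothetical bucket URL, local path, and interval (assumes Node 18+ for the built-in fetch):

  import { writeFile } from "node:fs/promises";

  const DB_URL = "https://example-bucket.s3.amazonaws.com/analytics.duckdb";
  let lastEtag: string | null = null;

  // Poll the object's ETag and pull a fresh copy only when the pipeline publishes a new file.
  async function refreshIfChanged(): Promise<void> {
    const head = await fetch(DB_URL, { method: "HEAD" });
    const etag = head.headers.get("etag");
    if (etag === null || etag === lastEtag) return;

    const full = await fetch(DB_URL);
    await writeFile("/var/data/analytics.duckdb", new Uint8Array(await full.arrayBuffer()));
    lastEtag = etag;
    // Reopen the read-only DuckDB connection against the new file here.
  }

  setInterval(refreshIfChanged, 5 * 60 * 1000); // check every five minutes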