
YouTube denies AI was involved with odd removals of tech tutorials

https://arstechnica.com/tech-policy/2025/10/youtube-denies-ai-was-involved-with-odd-removals-of-t...
1•pseudolus•1m ago•0 comments

"Our research is greatly sped up by AI but AI still needs us"

https://twitter.com/wtgowers/status/1984340182351634571
2•wrong-mexican•6m ago•0 comments

Serious Coding: The human–AI discipline for accurate and reliable development

https://isoform.ai/blog/manual-serious-coding-vs-vibe-coding
2•Chrisywz•8m ago•0 comments

Happy Halloween, HN

5•avonmach•15m ago•1 comments

Families say cost of housing means they'll have fewer or no children

https://www.npr.org/2025/10/31/nx-s1-5551108/housing-costs-birth-rate
2•toomuchtodo•15m ago•2 comments

Show HN: DeepShot – NBA game predictor with 70% accuracy using ML and stats

https://github.com/saccofrancesco/deepshot
2•frasacco05•16m ago•0 comments

Viagrid

https://github.com/opulo-inc/viagrid
1•weinzierl•16m ago•0 comments

The goal is right. The plan is fiction

https://federalnewsnetwork.com/commentary/2025/10/the-goal-is-right-the-plan-is-fiction/
1•tailefer•16m ago•0 comments

I Love My Wife, My Wife Is Dead

https://www.bingqiangao.com/poetry/i-love-my-wife-my-wife-is-dead
2•nsoonhui•20m ago•0 comments

Large Language Models Get All the Hype, but Small Models Do the Real Work

https://www.wsj.com/tech/ai/large-language-models-get-all-the-hype-but-small-models-do-the-real-w...
1•sonabinu•22m ago•0 comments

Show HN: Unrav.io Chrome Extension – Turn any page into insights

https://unrav.io
1•rriley•22m ago•0 comments

Ask HN: Folks who had taken VEP (voluntary layoff), was it a good decision?

2•locust101•23m ago•0 comments

Grady Booch's CS Library

https://www.librarycat.org/lib/gbooch
1•mooreds•26m ago•0 comments

Livestream of Last Iceland McDonald's Burger and Fries (10 Years Old)

https://snotrahouse.com/last-mcdonalds/
1•surprisetalk•27m ago•0 comments

Wine 10.18 (Dev) – Run Windows Applications on Linux, BSD, Solaris and macOS

https://gitlab.winehq.org/wine/wine/-/releases/wine-10.18
1•neustradamus•30m ago•0 comments

Tree Tools for Schools

https://www.treetoolsforschools.org.uk/activitymenu/?cat=tree_id
1•tetris11•31m ago•0 comments

What's new in Swift: October 2025 Edition

https://www.swift.org/blog/whats-new-in-swift-october-2025/
1•frizlab•32m ago•0 comments

Little KWin Helpers

https://blog.broulik.de/2025/10/little-kwin-helpers/
1•LorenDB•32m ago•0 comments

The Hasselblad Cameras of Project Mercury

https://www.thequantumcat.space/p/the-hasselblad-cameras-of-project
1•LorenDB•32m ago•0 comments

DPRK Adopts EtherHiding: Nation-State Malware Hiding on Blockchains

https://cloud.google.com/blog/topics/threat-intelligence/dprk-adopts-etherhiding
2•hentrep•42m ago•0 comments

Show HN: Local Push-to-Transcribe

https://github.com/spacefarers/Transcrybe
2•spacefarers•43m ago•0 comments

Biotech Hunter – Crunchbase for Life Sciences

https://biotechhunter.com/
2•johnys•47m ago•0 comments

WASM-Adventure

https://github.com/euhmeuh/wasm-adventure
1•todsacerdoti•48m ago•0 comments

Can you get a star in Super Mario 64 using only one button? [video]

https://www.youtube.com/watch?v=-7VhlsqeeqI
1•unleaded•51m ago•0 comments

The Department of Defense Wants Less Proof Its Software Works

https://www.eff.org/deeplinks/2025/10/department-defense-wants-less-proof-its-software-works
5•hn_acker•54m ago•1 comments

The Situation: How Much Less Free Are We?

https://www.lawfaremedia.org/article/the-situation--how-much-less-free-are-we
2•hn_acker•55m ago•0 comments

Will Paramount Cancel Jon Stewart?

https://www.newyorker.com/culture/the-new-yorker-interview/will-paramount-cancel-jon-stewart
3•mitchbob•57m ago•2 comments

Interesting Research Programs from the 2010s (2020)

https://bcmullins.github.io/interesting-research-2010s/
2•vinhnx•57m ago•0 comments

TeraAgent: Simulating Half a Trillion Agents

https://arxiv.org/abs/2509.24063
1•jonbaer•58m ago•0 comments

Show HN: I've built a web based pdf/docx/pptx editor, the format .ldf

https://learny.academy/about
1•yeargun•1h ago•2 comments

Use DuckDB-WASM to query TB of data in browser

https://lil.law.harvard.edu/blog/2025/10/24/rethinking-data-discovery-for-libraries-and-digital-humanities/
130•mlissner•7h ago

Comments

mlissner•7h ago
OK, this is really neat:
- S3 is really cheap static storage for files.
- DuckDB is a database that can use S3 for its storage.
- WASM lets you run binary (non-JS) code in your browser.
- DuckDB-Wasm allows you to run a database in your browser.

Put all of that together, and you get a website that queries S3 with no backend at all. Amazing.
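Concretely, the whole stack reduces to a couple of SQL statements; a minimal sketch, assuming the httpfs extension and a hypothetical public bucket and schema:

```sql
-- Load the extension that teaches DuckDB to read over HTTP(S)/S3.
INSTALL httpfs;
LOAD httpfs;

-- Query a Parquet file sitting in static storage; only the byte
-- ranges needed for the referenced columns are actually downloaded.
SELECT court, COUNT(*) AS n
FROM read_parquet('https://my-bucket.s3.amazonaws.com/cases/2024.parquet')
GROUP BY court
ORDER BY n DESC;
```

In DuckDB-Wasm the same SQL runs inside the page, with browser HTTP fetches standing in for file reads.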

timeflex•5h ago
S3 might be relatively cheap for storing files, but with bandwidth you could easily be paying $230/mo. If you make it public-facing and want to use their cloud reporting, metrics, etc. to prevent people from running up your bandwidth, your "really cheap" static hosting could easily cost more than $500/mo.
theultdev•4h ago
R2 is S3 compatible with no egress fees.

Cloudflare actually has built in iceberg support for R2 buckets. It's quite nice.

Combine that with their pipelines and ingestion is a simple HTTP request; then just point DuckDB at the Iceberg-enabled R2 bucket to analyze.
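That last step might look like this in DuckDB SQL; a sketch assuming the iceberg extension, with hypothetical R2 credentials and a hypothetical table path:

```sql
INSTALL iceberg;
LOAD iceberg;

-- R2 speaks the S3 API, so point DuckDB at the R2 endpoint
-- (account, key, and secret below are placeholders).
CREATE SECRET r2 (
    TYPE S3,
    KEY_ID 'my-key-id',
    SECRET 'my-secret-key',
    ENDPOINT 'my-account-id.r2.cloudflarestorage.com'
);

-- Scan the Iceberg table that the pipeline writes into.
SELECT COUNT(*) FROM iceberg_scan('s3://analytics/events');
```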

greatNespresso•3h ago
Was about to jump in to say the same thing. R2 is a much cheaper alternative to S3 that just works. I have used it with DuckDB and it works smoothly.
apwheele•55m ago
For a demo of this (though I'm not sure Iceberg works with DuckDB-Wasm): https://andrewpwheeler.com/2025/06/29/using-duckdb-wasm-clou...
7952•2h ago
I think this approach makes sense for services with a small number of users relative to the data they are searching. That just isn't a good fit for a lot of hosted services. Think how much those TBs of data would cost on Algolia or similar services.

You have to store the data somehow anyway, and you have to retrieve some of it to service a query. If egress costs too much, you could always move the code from the browser to a server later. It should also be possible to quantify the trade-off between processing the data client-side versus on the server.

simonw•53m ago
Stick it behind Cloudflare and it should be effectively free.
rubenvanwyk•5h ago
Or use R2 instead. It’s even easier.
thadt•5h ago
S3 is doing quite a lot of sophisticated lifting to qualify as no backend at all.

But yeah - this is pretty neat. It does seem like the future of static datasets should wind up in something like this: just data, with some well-chosen indices.

theultdev•4h ago
Still qualifies imo. Everything is static and on a CDN.

Lack of server/dynamic code qualifies as no backend.

simonw•52m ago
I believe all S3 has to do here is respond to HTTP Range requests, which almost every static server out there supports - Apache, Nginx, etc. can all do the same trick.
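For reference, the exchange underneath is plain HTTP; any server that honors the Range header can play the role of S3 (path, offsets, and sizes below are illustrative):

```
GET /data/cases.parquet HTTP/1.1
Host: example.org
Range: bytes=1048576-1114111

HTTP/1.1 206 Partial Content
Content-Range: bytes 1048576-1114111/2147483648
Content-Length: 65536
```

The client asks for a 64 KiB slice of a 2 GiB file and the server returns only those bytes, which is exactly the primitive DuckDB-Wasm builds on.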
amazingamazing•5h ago
Neat. Can you use DuckDB backed by another store, like RocksDB or something? Also, I wonder how one stops DDoS - put the whole thing behind Cloudflare?
wewewedxfgdf•5h ago
I tried DuckDB - liked it a lot - was ready to go further.

But I found it a real hassle to get it to use the right number of threads and the right amount of memory.

This led to lots of crashes. If you look at the project's GitHub issues you will see many OOM (out-of-memory) errors.

And then there was an index bug that crashed, seemingly unrelated to memory.

Life is too short for crashy database software so I reluctantly dropped it. I was disappointed because it was exactly what I was looking for.
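For what it's worth, the knobs in question can be set explicitly rather than left to auto-detection; a minimal sketch of DuckDB's resource settings (values illustrative):

```sql
-- Cap memory and parallelism instead of relying on auto-detection.
SET memory_limit = '4GB';
SET threads = 4;

-- Give large operators somewhere to spill instead of OOMing.
SET temp_directory = '/tmp/duckdb_spill';
```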

lalitmaganti•4h ago
+1, this was my experience trying it out as well. For getting started and for simple use cases it works amazingly well, but I have quite a lot of concerns about how it scales to more complex and esoteric workloads.

Non-deterministic OOMs in particular are among the worst failure modes for the sort of tools I'd want to use DuckDB in, and as you say, I found them more common than I would like.

tuhgdetzhh•4h ago
I can recommend earlyoom (https://github.com/rfjakob/earlyoom). Instead of letting your system freeze or crash, it kills the memory-eating process just in time (in this case DuckDB). That lets you retry with smaller chunks of the dataset until it fits into memory.
wewewedxfgdf•4h ago
Yeah memory and thread management is the job of the application, not me.
QuantumNomad_•4h ago
When there is a specific program I want to run with a limit on how much memory it is allowed to allocate, I have found systemd-run to work well.

It uses cgroups to enforce resource limits.

For example, there's a program I wrote myself which I run on one of my Raspberry Pis. On rare occasions it would use up so much memory that I couldn't even ssh into the Pi.

I run it like this:

  systemd-run --scope -p MemoryMax=5G --user env FOOBAR=baz ./target/release/myprog

The only difficulty I had was finding the right name for the MemoryMax=… property: it has been renamed between versions, so different Linux systems may or may not use the same name for the limit.

To figure out whether I had the right name, I tested candidates with a limit I knew was smaller than what the program needs even under normal conditions. When I found the right name, the program was killed right off the bat, as expected; then I set the limit to 5G (five gigabytes), confident that if the program ever exceeds it, it will be killed instead of making the Raspberry Pi impossible to ssh into again.

thenaturalist•2h ago
This looks amazing!

Have you used this in conjunction with DuckDB?

mritchie712•3h ago
What did you use instead? If you hit OOM with the dataset in DuckDB, I'd think you'd hit OOM with most other tools on the same machine.
wewewedxfgdf•3h ago
The software should manage its own memory, not require the developer to set specific memory thresholds. That said, it's a good thing to be able to say "use no more than X RAM".
thenaturalist•2h ago
How long ago was this? Can you share more context about the data and memory sizes you saw this with?

DuckDB introduced spilling to disk and some other tweaks a good year ago: https://duckdb.org/2024/07/09/memory-management

wewewedxfgdf•1h ago
3 days ago.

The final straw was an index that generated fine on macOS and failed on Linux - exact same code.

Machine had plenty of RAM.

The thing is, it is really the application's responsibility to regulate its behavior based on available memory. Crashing out just should not be an option, but that's the way DuckDB is built.

jdnier•4h ago
Yesterday there was a somewhat similar DuckDB post, "Frozen DuckLakes for Multi-User, Serverless Data Access". https://news.ycombinator.com/item?id=45702831
85392_school•4h ago
This also reminded me of an approach using SQLite: https://news.ycombinator.com/item?id=45748186
pacbard•59m ago
I set up something similar at work. But it was before the DuckLake format was available, so it just uses manually generated Parquet files saved to a bucket, plus a light DuckDB catalog that exposes the Parquet files through views. This lets us update the Parquet files from our ETL process and only refresh the catalog when there is a schema change.

We didn't find the frozen DuckLake setup useful for our use case, mostly because a frozen catalog kind of defeats the DuckLake philosophy and the cost-benefit wasn't there over a regular DuckDB catalog. It also made updates cumbersome, because you need to pull the DuckLake catalog, commit the changes, and re-upload the catalog (instead of just updating the Parquet files directly). I get that we're giving up the time-travel part of DuckLake, but that's not critical for us, and if it becomes important we would just roll out a PostgreSQL database to manage the catalog.
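The "views over Parquet" catalog described above could be sketched roughly like this (bucket and table names hypothetical):

```sql
-- The catalog is just a small DuckDB file full of views; the data
-- itself stays in Parquet files that the ETL process overwrites.
CREATE VIEW sales AS
SELECT * FROM read_parquet('s3://etl-output/sales/*.parquet');

CREATE VIEW customers AS
SELECT * FROM read_parquet('s3://etl-output/customers/*.parquet');
```

Refreshing the catalog after a schema change is then just re-running the CREATE VIEW statements.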

SteveMoody73•4h ago
My initial thought is: why query 1 TB of data in a browser? Maybe I'm the wrong target audience for this, but it seems to push everything into the browser rather than using the appropriate tools.
cyanydeez•4h ago
Browsers are now the write-once, run-everywhere target. Where Java failed, many hope browsers will succeed. WASM is definitely key to that, particularly because it can be emitted by tools like Rust, so browsers can also be the appropriate tools.
majormajor•3h ago
Why pay for RAM for servers when you can let your users deal with it? ;)

(Does not seem like a realistic scenario to me for many uses, for RAM among other resource reasons.)

some_guy_nobel•3h ago
The one word answer is cost.

But, if you'd like to instead read the article, you'll see that they qualify the reasoning in the first section of the article, titled, "Rethinking the Old Trade-Off: Cost, Complexity, and Access".

simonw•51m ago
What appropriate tool would you use for this instead?
shawn-butler•6m ago
I doubt they are querying 1 TB of data in the browser. DuckDB-WASM issues HTTP range requests on behalf of the client to fetch only the bytes required - especially handy with Parquet files (a columnar format), which let it skip columns you don't even need.

But the article is a little light on technical details. In some cases it might make sense to bring the entire file client-side.
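The pruning works because a query only touches the columns it names; a sketch against a hypothetical remote Parquet file:

```sql
-- Parquet stores columns separately, so this fetches the byte ranges
-- for two columns and leaves the rest of the terabyte untouched.
SELECT passenger_count, AVG(fare_amount) AS avg_fare
FROM read_parquet('https://example.org/data/taxi.parquet')
GROUP BY passenger_count;
```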

r3tr0•1h ago
It's one of the best tricks in the book.

We have been doing it for quite some time in our product to bring real-time system observability with eBPF to the browser, and we have even found other techniques to really max it out beyond what you get off the shelf.

https://yeet.cx

leetrout•59m ago
I built something on top of DuckDB last year but it never got deployed. They wanted to trust Postgres.

I didn't use the in browser WASM but I did expose an api endpoint that passed data exploration queries directly to the backend like a knock off of what new relic does. I also use that same endpoint for all the graphs and metrics in the UI.

DuckDB is phenomenal tech and I love to use it with data ponds instead of data lakes although it is very capable of large sets as well.

whalesalad•50m ago
A cool thing about DuckDB is that it can be embedded. We have a data pipeline that produces a DuckDB file and puts it on S3. The app periodically checks that asset's ETag and pulls the file down when it changes. Most of our DB interactions go through PostgreSQL, but we have one module that uses DuckDB and this file for reads. So it's definitely not all-or-nothing.