frontpage.

I Write Games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
45•valyala•2h ago•19 comments

We Mourn Our Craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
226•ColinWright•1h ago•241 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
30•valyala•2h ago•4 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
128•AlexeyBrin•8h ago•25 comments

Brookhaven Lab's RHIC Concludes 25-Year Run with Final Collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
8•gnufx•1h ago•1 comment

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
71•vinhnx•5h ago•9 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
130•1vuio0pswjnm7•8h ago•160 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
836•klaussilveira•22h ago•251 comments

U.S. Jobs Disappear at Fastest January Pace Since Great Recession

https://www.forbes.com/sites/mikestunson/2026/02/05/us-jobs-disappear-at-fastest-january-pace-sin...
179•alephnerd•2h ago•124 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
57•thelok•4h ago•8 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1064•xnx•1d ago•613 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
85•onurkanbkrc•7h ago•5 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
493•theblazehen•3d ago•178 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
215•jesperordrup•12h ago•77 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
14•momciloo•2h ago•0 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
231•alainrk•7h ago•365 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
575•nar001•6h ago•261 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
41•rbanffy•4d ago•8 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
30•marklit•5d ago•3 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
19•brudgers•5d ago•4 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
8•languid-photic•3d ago•1 comment

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
114•videotopia•4d ago•35 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
80•speckx•4d ago•90 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
278•isitcontent•22h ago•38 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
289•dmpetrov•23h ago•156 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
201•limoce•4d ago•112 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
558•todsacerdoti•1d ago•272 comments

Microsoft Account bugs locked me out of Notepad – are Thin Clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
6•josephcsible•29m ago•1 comment

Making geo joins faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
155•matheusalmeida•2d ago•48 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
22•sandGorgon•2d ago•12 comments

ANN v3: 200ms p99 query latency over 100B vectors

https://turbopuffer.com/blog/ann-v3
109•_peregrine_•2w ago

Comments

jascha_eng•2w ago
This is legitimately pretty impressive. I think the rule of thumb now is: go with Postgres (pgvector) for vector search until it breaks, then go with turbopuffer.
_peregrine_•2w ago
seems like a good rule of thumb to me! though i would perhaps lump "cost" into the "until it breaks" equation. even with decent perf, pgvector's economics can be much worse, especially in multi-tenant scenarios where you need many small indexes (this is true of any vector db that builds indexes primarily in RAM/SSD)
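
For readers who want that baseline concrete, here is a minimal sketch of the "start with pgvector" approach. It assumes a running Postgres with the pgvector extension available and the psycopg driver installed; the table name and dimensions are illustrative, not from the thread:

```python
import psycopg  # pip install "psycopg[binary]"

with psycopg.connect("dbname=app", autocommit=True) as conn:
    with conn.cursor() as cur:
        cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
        cur.execute(
            "CREATE TABLE IF NOT EXISTS docs "
            "(id bigserial PRIMARY KEY, embedding vector(3))"
        )
        cur.execute("INSERT INTO docs (embedding) VALUES ('[1,2,3]'), ('[4,5,6]')")
        # ANN index; HNSW support landed in pgvector 0.5
        cur.execute(
            "CREATE INDEX IF NOT EXISTS docs_hnsw "
            "ON docs USING hnsw (embedding vector_l2_ops)"
        )
        # <-> is pgvector's L2 distance operator
        cur.execute("SELECT id FROM docs ORDER BY embedding <-> '[2,3,4]' LIMIT 5")
        print(cur.fetchall())
```
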
sa-code•1w ago
Qdrant is also a good default choice, since it can run in-memory for development, on a hard drive for small deployments, and also for "web scale" workloads.

As a principal eng, I find that side-stepping a migration and having a good local dev experience is too good a deal to pass up.

That being said, turbopuffer looks interesting. I will check it out. Hopefully their local dev experience is good

benesch•1w ago
For local dev + testing, we recommend just hitting the production turbopuffer service directly, but with a separate test org/API key: https://turbopuffer.com/docs/testing

Works well for the vast majority of our customers (although we get the very occasional complaint about wanting a dev environment that works offline). The dataset sizes for local dev are usually so small that the cost rounds to free.
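
Based on the linked testing docs, that pattern looks roughly like the sketch below. It assumes the turbopuffer Python client; the method names (Namespace, upsert, query, delete_all) follow the client's earlier documented surface and should be treated as assumptions, so verify against the current docs:

```python
# Sketch: run tests against the real turbopuffer service, isolated by a
# dedicated test API key and a per-developer/CI namespace.
# NOTE: client calls here are assumptions based on the docs; verify before use.
import os
import turbopuffer as tpuf

tpuf.api_key = os.environ["TURBOPUFFER_TEST_API_KEY"]  # test org key, not prod

ns = tpuf.Namespace(f"test-{os.environ.get('USER', 'ci')}")

ns.upsert(ids=[1, 2], vectors=[[0.10, 0.20], [0.30, 0.40]])
results = ns.query(vector=[0.12, 0.21], top_k=1)
print(results)

ns.delete_all()  # leave no test data behind
```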

sroussey•1w ago
That’s not local though
enigmo•1w ago
having a local simulator (DynamoDB, Spanner, others) helps me a lot for offline/local development and CI. when a vendor doesn't offer this, I often end up mocking it out (one way or another) and have to wait for integration or e2e tests for feedback that could have been shifted further left.

in many CI environments unit tests don't have network access; it's not purely a price consideration.

(not a turbopuffer customer but I have been looking at it)

benesch•1w ago
> in many CI environments unit tests don't have network access, it's not purely a price consideration.

I've never seen a hard block on network access (how do you install packages/pull images?) but I am sympathetic to wanting to enforce that unit tests run quickly by minimizing/eliminating RTT to networked services.

We've considered the possibility of a local simulator before. Let me know if it winds up being a blocker for your use case.

lambda•1w ago
> how do you install packages/pull images

You pre-build the images with packages installed beforehand, then use those images offline.

benesch•1w ago
My point is it's enough of a hassle to set up that I've yet to see that level of restriction in practice (across hundreds of CI systems).
dzbarsky•1w ago
Look into Bazel, a very standard build system used at many large tech companies. It splits fetches from build/test actions and allows blocking network for build/test actions with a single CLI flag. No hassle at all.

The fact that you haven't come across this kind of setup suggests that your hundreds of CI systems are not representative of the industry as a whole.
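
For what it's worth, a sketch of the Bazel setup described here; the flag comes from the Bazel docs, and exact behavior can vary by version and sandbox backend:

```
# .bazelrc — sandboxed build/test actions get no network by default
build --sandbox_default_allow_network=false
# targets that genuinely need network opt back in via:
#   tags = ["requires-network"]
```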

benesch•1w ago
I agree our sample may not be representative but we try to stay focused on the current and next crop of tpuf customers rather than the software industry as a whole. So far "CI prohibits network access during tests" just hasn't come up as a pain point for any of them, but as I mentioned in another comment [0], we're definitely keeping an open mind about introducing an offline dev experience.

(I am familiar with Bazel, but I'll have to save the war stories for another thread. It's not a build tool we see our particular customers using.)

[0]: https://news.ycombinator.com/item?id=46758156

enigmo•1w ago
you pull packages from a trusted package repository, not from the internet. this is not rare in my experience (financial services, security) and will become increasingly common due to software supply chain issues.
lambda•1w ago
> although we get the very occasional complaint about wanting a dev environment that works offline

It's only occasional because the people who care about dev environments that work offline are most likely to just skip you and move on.

For actual developer experience, as well as a number of use cases like customers with security and privacy concerns, being able to host locally is essential.

Fair enough if you don't care about those segments of the market, but don't confuse a small number of people asking about it with a small number of people wanting it.

sa-code•1w ago
Can confirm. With a setup that works offline, one can

- start small on a laptop. Going through procurement at companies is a pain

- test things in CI reliably. Outages don’t break builds

- transition from laptop scale to web scale easily, with the same API and just a different backend

Otherwise it’s really hard to justify not using S3 vectors here

The current dev experience is to start with faiss for PoCs, move to pgvector, and then to something heavy-duty like one of the Lucene wrappers.
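
A minimal sketch of that faiss-for-PoCs starting point (assumes the faiss-cpu package; IndexFlatL2 is exact brute-force search, so recall is 100% by construction):

```python
import faiss  # pip install faiss-cpu
import numpy as np

d = 128                                            # embedding dimension
xb = np.random.rand(10_000, d).astype("float32")   # corpus vectors
xq = np.random.rand(5, d).astype("float32")        # query vectors

index = faiss.IndexFlatL2(d)  # exact L2 search, no training required
index.add(xb)
distances, ids = index.search(xq, 10)              # top-10 per query
print(ids.shape)  # (5, 10)
```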

nik9000•1w ago
Speaking as someone who works for a competitor: they are probably right to hold off on that segment for a while. Supporting both cloud and local deployments is somewhere between 20% harder and 300% harder depending on the day.

I'm watching them with excitement. We all learn from each other. There's so much to do.

benesch•1w ago
Yep, we're well aware of the selection bias effects in product feedback. As we grow we're thinking about how to make our product more accessible to small orgs / hobby projects. Introducing a local dev environment may be part of that.

Note that we already have an in-your-own-VPC offering for large orgs with strict security/privacy/regulatory controls.

sa-code•1w ago
I should have clarified, by local dev and testing I did in fact mean offline usage.

Without that it’s unfortunately a non-starter

benesch•1w ago
So I can note this down on our roadmap, what's the root of your requirement here? Supporting local dev without internet (airplanes, coffee shops, etc.)? Unit test speed? Something else?
sa-code•1w ago
I listed some reasons in another comment: https://news.ycombinator.com/item?id=46757853

I appreciate your responsiveness and open mind

benesch•1w ago
Thanks, appreciate this! Jotted down some notes on our roadmap.
sa-code•1w ago
I wish you the best
nostrebored•1w ago
Qdrant is one of the few vendors I actively steer people away from. Look at the GitHub issues, look at what their CEO says, look at their fake “advancements” that they pay for publicity on…

The number of people I know who’ve had unrecoverable shard failures on Qdrant is too high to take it seriously.

sa-code•1w ago
I’m curious about this. Could you please point to some things the CEO has said, or reports of shard failures?

The bit about paying for publicity doesn’t bother me.

Edit: I haven’t found anything egregious that the CEO has said, or anything really sketchy. The shard failure warnings look serious, but the issues look closed

https://github.com/qdrant/qdrant/issues/6025

https://github.com/qdrant/qdrant/issues/4939

nostrebored•1w ago
https://x.com/nils_reimers/status/1809334134088622217?s=46

https://x.com/generall931/status/1809303448837582850?s=46

There used to be a benchmarking issue with a founder that was particularly egregious but I can’t find it anymore.

The sharding and consensus issues were from around a year and a half ago, so maybe it’s gotten better.

There are just so many options in the space; I don’t know why you’d go with one of the least correct vendors (whether the incorrectness is deliberate is a different question that I can’t answer)

bobvanluijt•1w ago
> issue with a founder

That would be me

andre-z•1w ago
What do I say? Happy to talk about "fakes". Here is my calendar. Feel free to book a slot. https://qdrant.to/andre-z
jauntywundrkind•1w ago
I'd love to know how they compare versus MixedBread, what relative strengths each has. https://www.mixedbread.com/

I really really enjoy & learn a lot from the mixedbread blog. And they find good stuff to open source (although the product itself is closed). https://www.mixedbread.com/blog

I feel like there's a lot of overlap but also probably a lot of distinction too. Pretty new to this space of products though.

shayonj•1w ago
v cool and impressive!
lmeyerov•1w ago
Fun!

I was curious given the cloud discussion - a quick search suggests default AWS SSD bandwidth is 250 MB/s, and you can pay more for 1 GB/s. Similar for S3: one HTTP connection is < 100 MB/s, and you can pay for more parallel connections. So the hot binary-quantized search index is doing a lot of work to minimize both of these, for the initial hot queries and for pruning later fetches. Very cool!
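
To put rough numbers on this (the dimension count is an illustrative assumption, not a figure from the post):

```python
# Back-of-envelope: why a compact, hot binary-quantized index matters when
# cold storage bandwidth is the bottleneck. Inputs are assumptions.
dims = 1_024
n_vectors = 100_000_000_000

f32_bytes = n_vectors * dims * 4        # full-precision vectors
bq_bytes = n_vectors * dims // 8        # binary quantization: 1 bit per dim

print(f"f32 vectors:      {f32_bytes / 1e12:,.0f} TB")  # ~410 TB
print(f"binary-quantized: {bq_bytes / 1e12:,.1f} TB")   # 12.8 TB (32x smaller)

# At the ~250 MB/s default SSD bandwidth mentioned above, every gigabyte
# you avoid fetching saves ~4 seconds of wall-clock I/O:
print(f"1 GB @ 250 MB/s = {1e9 / 250e6:.1f} s")
```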

kgeist•1w ago
Are there vector DBs with 100B vectors in production which work well? There was a paper which showed a 12% loss in accuracy at just 1M vectors. Maybe some kind of logical sharding is another option, to improve both accuracy and speed.
_peregrine_•1w ago
the solution described in the blog post is currently in production at 100B vectors
rahimnathwani•1w ago
For what/who?
esafak•1w ago
https://turbopuffer.com/customers/cursor
_peregrine_•1w ago
this is actually not how cursor uses turbopuffer, as they index per codebase and thus need many mid-sized indexes, as opposed to the one massive index this post describes
_peregrine_•1w ago
unfortunately i'm not able to share the customer or use case :( but the metrics that you see in the first charts in the post are from a production cluster
jasonjmcghee•1w ago
So many missing details...

Different vector indexes have very different recall and even different parameters for each dramatically impact this.

HNSW can have very good recall even at high vector counts.

There's also the embedding model, whether you're quantizing, whether it's pure RAG vs hybrid BM25 / static word embeddings vs graph connections, whether you're reranking, etc.

lmeyerov•1w ago
I don't know at these scales, but at 1M-100M, we found that switching from out-of-the-box embeddings to fine-tuned embeddings took some of the sting out of the compression/recall trade-off. We saw a 10-100X win here: comparable recall with better compression.

I'm not sure how that'd work with the binary quantization phase, though. For example, we use Matryoshka embeddings, and some of the bits matter way more than others, so that might be super painful.

mmaunder•1w ago
For those of us who operate on site, we have to add back network latency, which negates this win entirely and makes a proprietary cloud solution like this a nonstarter.
benesch•1w ago
Often not a dealbreaker, actually! We can spin up new tpuf regions and procure dedicated interconnects to minimize latency to the on-prem network on request (and we have done this).

When you're operating at the 100B scale, you're pushing beyond the capacity that most on-prem setups can handle. Most orgs have no choice but to put a 100B workload into the nearest public cloud. (For smaller workloads, considerations are different, for sure.)

redskyluan•1w ago
Using Hierarchical Clustering significantly reduces recall; this is a solution we used and abandoned three years ago.
montroser•1w ago
This is at 92% recall. Could be worse, but could definitely be much better. Quantization and hierarchical clustering are tricks that lead to awesome performance at the cost of extremely variable quality, depending on the dataset.
alanwli•1w ago
Out of curiosity, how is the 92% recall calculated? For a given query, is the recall compared to the true top_k of all 100B vectors, or is it recall at each of N shards compared to the top_k of each respective shard?
nvanbenschoten•1w ago
(author here) The 92% mentioned in this post is showing recall@10 across all 100B vectors, calculated by comparing to the global top_k.

turbopuffer will also continuously monitor production recall at the per-shard level (or on-demand with https://turbopuffer.com/docs/recall). Perhaps counterintuitively, the global recall will actually be better than the per-shard recall if each shard is asked for its own, local top_k!
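
A small sketch of that recall@10 definition: compare the engine's approximate top-10 against the exact global top-10. This is pure numpy, and the "approximate" result below is a stand-in for whatever the index actually returns:

```python
import numpy as np

def recall_at_k(approx_ids, exact_ids, k=10):
    # fraction of the true top-k that the approximate top-k recovered
    return len(set(approx_ids[:k]) & set(exact_ids[:k])) / k

rng = np.random.default_rng(0)
corpus = rng.normal(size=(100_000, 128)).astype(np.float32)
query = rng.normal(size=128).astype(np.float32)

# exact global top-10 by L2 distance: the ground truth
exact = np.argsort(((corpus - query) ** 2).sum(axis=1))[:10]

# pretend the ANN index missed one of the ten true neighbors
approx = exact.copy()
approx[9] = -1

print(recall_at_k(approx, exact))  # 0.9
```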

vander_elst•1w ago
> 504MiB shared L3 cache

What CPU are they using here?

benesch•1w ago
The exact CPU depends on the region/cloud provider, but this Granite Rapids CPU is representative: https://www.intel.com/content/www/us/en/products/sku/240777/...
vander_elst•1w ago
Thanks!
hwspeed•1w ago
The offline/local dev point is underrated. Being able to iterate without network latency or metered API costs makes a huge difference for prototyping. The challenge is making sure your local setup actually matches prod behavior. I've been burned by pgvector working fine locally then hitting performance cliffs at scale when the index doesn't fit in memory anymore.