
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
426•klaussilveira•5h ago•97 comments

Hello world does not compile

https://github.com/anthropics/claudes-c-compiler/issues/1
21•mfiguiere•42m ago•8 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
775•xnx•11h ago•472 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
142•isitcontent•6h ago•15 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
135•dmpetrov•6h ago•57 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
41•quibono•4d ago•3 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
246•vecti•8h ago•117 comments

A century of hair samples proves leaded gas ban worked

https://arstechnica.com/science/2026/02/a-century-of-hair-samples-proves-leaded-gas-ban-worked/
70•jnord•3d ago•4 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
180•eljojo•8h ago•124 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
314•aktau•12h ago•154 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
12•matheusalmeida•1d ago•0 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
311•ostacke•12h ago•85 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
397•todsacerdoti•13h ago•217 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
322•lstoll•12h ago•233 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
12•kmm•4d ago•0 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
48•phreda4•5h ago•8 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
109•vmatsiiako•11h ago•34 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
186•i5heu•8h ago•129 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
236•surprisetalk•3d ago•31 comments

I now assume that all ads on Apple News are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
976•cdrnsf•15h ago•415 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
144•limoce•3d ago•79 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
17•gfortaine•3h ago•2 comments

I'm going to cure my girlfriend's brain tumor

https://andrewjrod.substack.com/p/im-going-to-cure-my-girlfriends-brain
49•ray__•2h ago•11 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
41•rescrv•13h ago•17 comments

Evaluating and mitigating the growing risk of LLM-discovered 0-days

https://red.anthropic.com/2026/zero-days/
35•lebovic•1d ago•11 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
52•SerCe•2h ago•42 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
77•antves•1d ago•57 comments

The Oklahoma Architect Who Turned Kitsch into Art

https://www.bloomberg.com/news/features/2026-01-31/oklahoma-architect-bruce-goff-s-wild-home-desi...
18•MarlonPro•3d ago•4 comments

Claude Composer

https://www.josh.ing/blog/claude-composer
108•coloneltcb•2d ago•71 comments

Show HN: Slack CLI for Agents

https://github.com/stablyai/agent-slack
39•nwparker•1d ago•10 comments

Caches: LRU vs. Random

https://danluu.com/2choices-eviction/
112•gslin•6mo ago

Comments

hinkley•6mo ago
I have never been able to wrap my head around why 2 random works better in load balancing than leastconn. At least in caching it makes sense why it would work better than another heuristic.
beoberha•6mo ago
Try giving Marc Brooker’s blog on this a read: https://brooker.co.za/blog/2012/01/17/two-random.html

It is only better than leastconn when you have stale information to base your decision on. If you have perfect, live information, picking the true least-loaded server is always optimal.

contravariant•6mo ago
Technically it doesn't; it's just really hard to implement leastconn correctly.

If you had perfect information and could always pick whichever server was provably least loaded, that would work. However, keeping that information up to date also takes effort, and if your information is outdated it's easy to overload a server that you think doesn't have much to do, or underload one that long since finished its tasks. Picking between 2 random servers introduces some randomness without allowing the spread to become huge.
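
A minimal sketch of the two-random pick being described, in Python, assuming each server exposes a possibly stale in-flight request count (server names and counts are illustrative):

    import random

    def pick_backend(servers, inflight):
        # Power of two choices: sample two distinct servers and send
        # the request to whichever reports fewer in-flight requests.
        # The counts may be stale; the point is that a stale count can
        # only misdirect a choice between two candidates, so no single
        # server gets stampeded the way it can under pure leastconn.
        a, b = random.sample(servers, 2)
        return a if inflight[a] <= inflight[b] else b

    servers = ["s1", "s2", "s3", "s4"]
    inflight = {"s1": 3, "s2": 7, "s3": 2, "s4": 5}  # possibly outdated
    print(pick_backend(servers, inflight))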

hinkley•6mo ago
When the cost of different requests varies widely it's difficult to get it right. When we rolled out Docker I saw a regression in p95 time. I countered this by doubling our instance size and halving the count, which made the number of processes per machine slightly more than, rather than way less than, the number of machines. I reasoned that the local load balancing would be a bit fairer, and that proved out in the results.
contravariant•6mo ago
I'm not 100% sure if it's just load balancing. It would depend on the details of the setup but that situation also allows you to throw more resources at each request.

I mean, obviously there is a point where splitting up the instances doesn't help, because you're just leaving more instances completely idle, or with too few resources to be helpful.

kgeist•6mo ago
By the time you decide to route to a particular node, conditions on that node might have already changed. So, from what I understand, there can be worst-case scenarios in usage patterns where the same nodes keep getting stressed due to repeatedly stale data in the load balancer. Randomization helps ensure the load is spread out more uniformly.
yuliyp•6mo ago
There are a few reasons:

1. Random is the one algorithm that can't be fooled. So even if something skews the connection count as a load metric, not relying on that metric alone dampens the problem.

2. There is a lag between selecting a server and the load metric actually reflecting that request, so using the load metric alone is prone to oscillation.

3. A machine that's broken (immediately errors all requests) can capture almost all requests under leastconn, while 2-random limits its damage to 2x its weight fraction.

4. For requests which are a mix of CPU and IO work, reducing convoying (i.e. many requests in similar phases) is good for reducing CPU scheduling delay. You want some requests to be in CPU-heavy phases while others are in IO-heavy phases, not bunched together.

hinkley•6mo ago
I’m fine with the random part. What I don’t get is why 2 works just as well as 4, or as the square root of n. It seems like 3 should do much, much better, and it doesn’t.

It’s one of those things I put in the “unreasonably effective” category.

nielsole•6mo ago
I wonder if someone tried a probabilistic "best of 1.5" or similar and if two is just a relatively high number.
hinkley•6mo ago
If I had to guess it’s related to e. In which case maybe choosing 2 30% of the time and 3 70% of the time is a better outcome.
adgjlsfhk1•6mo ago
It's not. You can get good behavior by choosing 1 90% of the time and 2 10% of the time.
Straw•6mo ago
It's because going from 1 choice to 2 changes the expected worst-case load from roughly log n / log log n to roughly log log n, and further increases just change a constant (with d choices it's about log log n / log d).

See https://en.wikipedia.org/wiki/Balls_into_bins_problem
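
A quick simulation of the balls-into-bins effect (a sketch; the choice of n and the expectations in the comments are approximate):

    import random

    def max_load(n, d):
        # Throw n balls into n bins; each ball samples d bins uniformly
        # at random and lands in the least-loaded of them.
        bins = [0] * n
        for _ in range(n):
            best = min((random.randrange(n) for _ in range(d)),
                       key=lambda i: bins[i])
            bins[best] += 1
        return max(bins)

    n = 100_000
    for d in (1, 2, 3):
        print(d, max_load(n, d))
    # d=1 gives a max load that grows like log n / log log n;
    # d=2 collapses it to roughly log log n; d=3 and beyond only
    # shave a constant factor off that.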

fwlr•6mo ago
In code the semantic difference is pretty small between “select one at random” and “select two at random and perform a trivial comparison” - roughly the same as the difference between that and “select three at random and perform two trivial comparisons”. That is, they are all just specific instances of the “best-of-x” algorithm: “select x at random and perform x-1 comparisons”. So it's natural to wonder why going from “best-of-1” to “best-of-2” makes such a big difference, but going from “best-of-2” to “best-of-3” doesn’t.

In complexity analysis however it is the presence or absence of “comparisons” that makes all the difference. “Best-of-1” does not have comparisons, while “best-of-2”, “best-of-3”, etc., do have comparisons. There’s a weaker “selections” class, and a more powerful “selections+comparisons” class. Doing more comparisons might move you around within the internal rankings of the “selections+comparisons” class but the differences within the class are small compared to the differences between the classes.

An alternative, less rigorous intuition: behind door number 1 is a Lamborghini, behind door number 2 is a Toyota, and behind door number 3 is cancer. Upgrading to “best of 2” ensures you will never get cancer, while upgrading again to “best of 3” merely gets you a sweeter ride.

bob1029•6mo ago
> But what if we take two random choices (2-random) and just use LRU between those two choices?

> Also, we can see that pseudo 3-random is substantially better than pseudo 2-random, which indicates that k-random is probably an improvement over 2-random for the right k. Some k-random policy might be an improvement over DIP.

This sounds very similar to tournament selection schemes in evolutionary algorithms. You can control the amount of selective pressure by adjusting the tournament size.

I think the biggest advantage here is performance. A 1v1 tournament is extremely cheap to run. You don't need to maintain a total global ordering of anything.
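
As a toy illustration, tournament selection in the same spirit (the population and fitness function here are made up):

    import random

    def tournament_select(population, fitness, k=2):
        # Sample k individuals uniformly and return the fittest.
        # k=2 is the cheap 1v1 tournament; raising k raises the
        # selective pressure. No global ordering is ever maintained.
        return max(random.sample(population, k), key=fitness)

    pop = list(range(100))
    winner = tournament_select(pop, fitness=lambda x: -abs(x - 42))
    print(winner)  # usually closer to 42 than a uniform pick, but noisy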

nielsole•6mo ago
It's also similar to load balancing: least-requests vs. best-of-two, the benefit being that you almost never serve the most loaded backend. I guess the feared failure mode of least-requests and LRU is similar: picking the obvious choice might be the worst choice in certain scenarios (fast failures and cache churning, respectively).
smusamashah•6mo ago
There was an article about this phenomenon with interactive visualisations showing packets moving between load-balanced servers. It found there was an optimal number of random choices; going higher wasn't improving things.
pvillano•6mo ago
The idea of using randomness to extend cliffs really tickles my brain.

Consider repeatedly looping through n+1 objects when only n fit in the cache. In that case LRU misses/evicts on every lookup! Your cache is useless and performance falls off a cliff! 2-random turns that performance cliff into a gentle slope with a long tail(?)

I bet this effect happens when people try to be smart and loop through n items, but have too much additional data to fit in registers.
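
A small simulation of that cliff, treating 2-random as "evict the less recently used of two randomly sampled entries" per the article (cache size and round count are arbitrary):

    import random

    def hit_rate(policy, cache_size, keys, rounds):
        cache, stamp, clock, hits = set(), {}, 0, 0
        for _ in range(rounds):
            for k in keys:
                clock += 1
                if k in cache:
                    hits += 1
                elif len(cache) < cache_size:
                    cache.add(k)
                else:
                    if policy == "lru":
                        victim = min(cache, key=lambda x: stamp[x])
                    else:  # 2-random: LRU between two sampled entries
                        a, b = random.sample(list(cache), 2)
                        victim = a if stamp[a] <= stamp[b] else b
                    cache.remove(victim)
                    cache.add(k)
                stamp[k] = clock
        return hits / (rounds * len(keys))

    keys = list(range(65))  # loop over n+1 = 65 keys, cache holds n = 64
    for policy in ("lru", "2-random"):
        print(policy, round(hit_rate(policy, 64, keys, 100), 3))
    # LRU always evicts exactly the key the loop needs next: ~0% hits.
    # 2-random usually evicts something else, so most of the loop stays
    # resident and the cliff becomes a gentle slope.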

phamilton•6mo ago
This feels similar to when I heard they use bubble sort in game development.

Bubble sort seems pretty terrible, until you realize that it's interruptible. The set is always a little more sorted than before. So if you have realtime requirements and best-effort sorting, you can sort things between renders and live with the possibility of two things relatively close to each other appearing a little glitched for a frame.

8n4vidtmkvmk•6mo ago
I thought it was insertion sort? Works well on nearly sorted lists.
chipsa•6mo ago
Quick sort usually switches to something like insertion sort when the number of items is low, because the constants are better at low n, even if the asymptotic complexity isn't as good.
phamilton•6mo ago
That's a different problem. To quickly sort a nearly sorted list, we can use insertion sort. The goal here, however, is to make progress with as little as one iteration.

One iteration of insertion sort will place one additional element in its correct place, but it leaves the unsorted portion basically untouched.

One iteration of bubble sort will place one additional element into the sorted section and along the way do small swaps/corrections. The % of data in the correct location is the same, but the overall "sortedness" of the data is much better.
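
A sketch of that interruptible, one-pass-per-frame pattern (the depth values are illustrative):

    def bubble_pass(items, key=lambda x: x):
        # One bubble-sort pass: swap adjacent out-of-order pairs.
        # Each call leaves the list at least as sorted as before, so it
        # can run once per frame as best-effort sorting. Returns True
        # once no swaps were needed, i.e. the list is fully sorted.
        swapped = False
        for i in range(len(items) - 1):
            if key(items[i]) > key(items[i + 1]):
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        return not swapped

    draw_list = [5.0, 1.0, 4.0, 2.0, 3.0]  # e.g. depths for translucency
    while not bubble_pass(draw_list):
        pass  # in a game: do one pass here, then render the frame
    print(draw_list)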

8n4vidtmkvmk•6mo ago
That's interesting. I never considered this before. I came across this years ago and settled on insertion sort the first time I tried rendering a waterfall (translucency!). Will have to remember bubble sort for next time.