
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
566•klaussilveira•10h ago•159 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
885•xnx•16h ago•537 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
89•matheusalmeida•1d ago•20 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
15•helloplanets•4d ago•8 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
16•videotopia•3d ago•0 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
195•isitcontent•10h ago•24 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
197•dmpetrov•11h ago•87 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
304•vecti•13h ago•136 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
352•aktau•17h ago•172 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
348•ostacke•16h ago•90 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
20•romes•4d ago•2 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
450•todsacerdoti•18h ago•228 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
77•quibono•4d ago•16 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
50•kmm•4d ago•3 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
246•eljojo•13h ago•150 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
384•lstoll•17h ago•260 comments

Zlob.h: 100% POSIX- and glibc-compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
9•neogoose•3h ago•6 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
227•i5heu•13h ago•172 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
66•phreda4•10h ago•11 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
111•SerCe•6h ago•90 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
134•vmatsiiako•15h ago•59 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
23•gmays•5h ago•4 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
42•gfortaine•8h ago•12 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
263•surprisetalk•3d ago•35 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
165•limoce•3d ago•87 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1037•cdrnsf•20h ago•429 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
14•denuoweb•1d ago•2 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
58•rescrv•18h ago•22 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
86•antves•1d ago•63 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
22•denysonique•7h ago•4 comments

How randomness improves algorithms (2023)

https://www.quantamagazine.org/how-randomness-improves-algorithms-20230403/
75•kehiy•5mo ago

Comments

flerovium114•5mo ago
Randomized numerical linear algebra has proven very useful as well. It allows you to use a black-box function implementing matrix-vector multiplication (MVM) to compute standard decompositions like SVD, QR, etc. Very useful when MVM is O(N log N) or better.
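To make the black-box idea concrete, here is a minimal sketch of a Halko/Martinsson/Tropp-style randomized SVD (function name and signature are illustrative, not from the comment) that touches the m x n matrix A only through `matvec(V) = A @ V` and `rmatvec(V) = A.T @ V` — exactly the setting where MVM is O(N log N) or better:

```python
import numpy as np

def randomized_svd(matvec, rmatvec, m, n, k, p=10, seed=0):
    """Approximate top-k SVD of an implicit m x n matrix A, given only
    black boxes matvec(V) = A @ V and rmatvec(V) = A.T @ V."""
    rng = np.random.default_rng(seed)
    # Hit A with random Gaussian probes; Y's columns nearly span A's range.
    omega = rng.standard_normal((n, k + p))      # p = oversampling
    Y = matvec(omega)                            # (m, k + p)
    Q, _ = np.linalg.qr(Y)                       # orthonormal basis for the sketch
    # Project A into the small subspace and take an exact SVD there.
    B = rmatvec(Q).T                             # B = Q.T @ A, (k + p, n)
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]
```

The only dense work happens on (k+p)-sized objects, so the cost is dominated by the two black-box applications — which is the point when A is, say, an FFT-backed operator.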
EdwardCoffin•5mo ago
It's unmentioned in the article, but Trevor Blackwell's PhD thesis, Applications of Randomness in System Performance Measurement [1] was advocating this in 1998:

This thesis presents and analyzes a simple principle for building systems: that there should be a random component in all arbitrary decisions. If no randomness is used, system performance can vary widely and unpredictably due to small changes in the system workload or configuration. This makes measurements hard to reproduce and less meaningful as predictors of performance that could be expected in similar situations.

[1] https://tlb.org/docs/thesis.pdf

hinkley•5mo ago
All else being equal, I like to have either a prime number of servers or a prime number of in-flight requests per server. I'm always slightly afraid someone will send a batch of requests, or tune a benchmark to run a number of times that divides evenly into the system parallelism, and we won't be testing what we think we're testing, due to accidental locality of reference that doesn't show up in the general population. Not unlike how you get uneven gear wear when two meshing gears share a large common factor of tooth count, like a ratio of 3:1 or 2:3, so the same teeth keep meeting all the time.

But all else is seldom equal and Random 2 works as well or better.
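The resonance effect described above is easy to demonstrate with a toy model (all names and numbers here are made up for illustration): round-robin assignment by id modulo worker count, fed requests whose ids arrive in a fixed stride.

```python
def workers_hit(n_workers, stride=4, n_requests=1000):
    """How many distinct workers ever receive load when request ids
    step by `stride` and are assigned round-robin by id % n_workers."""
    return len({(i * stride) % n_workers for i in range(n_requests)})
```

With 12 workers and a stride of 4 (a shared factor), only 3 workers ever see traffic; with 11 workers (prime, hence coprime to any smaller stride), all 11 do — the gear-teeth situation exactly.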

sestep•5mo ago
Could the question mark in the HN version of the title be removed? It makes it read as a bit silly.
optimalsolver•5mo ago
Written by a shiba inu
k_g_b_•5mo ago
In my experience it's a common mistake of non-native English speakers, of native speakers of Slavic languages in particular. I see it often at work with titles starting with an interrogative word like "how".
pixelpoet•5mo ago
Guaranteed this is the case, I see it a lot too. They've done it twice before on previous submissions: https://news.ycombinator.com/item?id=44755116 and https://news.ycombinator.com/item?id=44785347

In case anyone is curious, the way to phrase it as a question would be, "How does randomness improve algorithms?"

jvanderbot•5mo ago
Weirdly, "Why randomness improves algorithms" is closer to the truth, and it also cannot be correctly expressed with a question mark.
prerok•5mo ago
Indeed. Well, FWIW, the title translated into my native Slavic language would also make no sense with a question mark.

What's interesting is that both How... and How does... would translate into the same words but with a dot or a question mark at the end it would mean two different things.

That said, that would be true for many languages.

egypturnash•5mo ago
It’s not there in the original title.
furyofantares•5mo ago
I don't think 'random' is doing any of the work. These sound like they would work fine with a deterministic PRNG seeded at 0. They don't sound like they need to be looking at lava lamps or the like.

It's that there's a population of values (integers for factoring, nodes-to-delete for the graph) where we know a way to get a lot of information cheaply from most values, but we don't know which values, so we sample them.

Which isn't to say the PRNG isn't doing work - maybe it is, maybe any straightforward iteration through the sample space has problems, failure values being clumped together, or similar values providing overlapping information.

If so that suggests to me that you can do better sampling than PRNG, although maybe the benefit is small. When the article talks about 'derandomizing' an algorithm, is it referring to removing the concept of sampling from this space entirely, or is it talking about doing a better job sampling than 'random'?
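A concrete case of the point above — a deterministic PRNG seeded at 0 being entirely sufficient — is Freivalds' algorithm for verifying matrix products (the classic example of this flavor of randomized checking; the code below is an illustrative sketch, not from the article):

```python
import numpy as np

def freivalds(A, B, C, trials=20, seed=0):
    """Probabilistically check A @ B == C in O(n^2) per trial, versus
    O(n^3) for computing A @ B outright. A wrong C survives one trial
    with probability <= 1/2, so 20 trials push the error below ~1e-6.
    A fixed-seed PRNG is fine here: the guarantee only needs the test
    vectors to be uncorrelated with the particular input matrices."""
    rng = np.random.default_rng(seed)
    n = C.shape[1]
    for _ in range(trials):
        r = rng.integers(0, 2, size=n)        # random 0/1 test vector
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                      # a witness: definitely unequal
    return True                               # equal, with high probability
```

Nothing here needs lava lamps; what matters is sampling test vectors the input can't anticipate.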

jvanderbot•5mo ago
I don't follow the question.

A pseudorandom sequence of choices is still sufficiently detached from the input. Random here means "I'm making decisions in a way that is sufficiently independent of the input that structuring the input adversarially won't cause worst-case performance," coupled with "the cost of this algorithm is expressed assuming truly random numbers."

That's the work Random is doing.

INB4 worst case: you can do worst-case analysis on randomized algorithms, but it's either worst case across any random choice or worst case in expectation, not worst case given a poor implementation of the RNG. Effectively, randomization sometimes serves to shake you out of an increasingly niche and unlikely series of bad decisions that is the crux of an adversarial input.

To wit

> In the rare cases where the algorithm makes an unlucky choice and gets bogged down at the last step, they could just stop and run it again.
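The "unlucky choice" point and the adversarial-input point show up together in quickselect (a standard illustration; this sketch is mine, not from the article): a deterministic first-element pivot degrades to quadratic work on already-sorted input, while a random pivot stays near-linear on the same input.

```python
import random

def quickselect(xs, k, pick):
    """Return (k-th smallest element, total work), where pick(n) chooses
    the pivot index and work counts elements scanned across all passes."""
    xs = list(xs)
    work = 0
    while True:
        work += len(xs)
        p = xs[pick(len(xs))]
        lo = [x for x in xs if x < p]          # strictly below the pivot
        if k < len(lo):
            xs = lo
            continue
        eq = sum(1 for x in xs if x == p)      # pivot's final rank range
        if k < len(lo) + eq:
            return p, work
        k -= len(lo) + eq
        xs = [x for x in xs if x > p]          # recurse into the upper part
```

On `range(2000)`, the deterministic pivot scans n + (n-1) + ... + 1 elements; the seeded random pivot does orders of magnitude less — and, per the quote, an unlucky run can simply be restarted.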

furyofantares•5mo ago
I think "How randomness improves algorithms" makes it sound kind of mystical and paradoxical.

Instead we could say "we know shortcuts that work for most values in a population, but we don't know how to figure out which values without expensive computation" and that's not very mystical or paradoxical.

And using random sampling is now the obvious way to deal with the situation. But it's not randomness doing the work; that's an implementation detail of how you do the sampling.

It very well can be that there isn't another obvious way to iterate through the sample space that isn't a disaster. Maybe bad values are clumped together so when you retry on failure you retry a lot. But if that's the case, there might also be a better way to sample than random - if bad values are clumped then there may be an intelligent way to sample such that you hit two bad values in a row less often than random.

My question is if that's what's being referred to as 'derandomizing' - taking one of these algorithms that uses random sampling and sample more intelligently. Or if they instead mean using what they learned from the (so-called) probabilistic algorithm to go back to a more traditional form.

jvanderbot•5mo ago
Well in fact we know (if P != NP) that for any randomized algorithm there's a good deterministic one. So it's not going to do the kind of game-changing, class-breaking work you're looking for — where random is the only way.

How randomization helps is by making it much easier to design algorithms. E.g., verifying a solution is cheap, so proving your random choice lands in some class of good choices, and building an algorithm that turns that into a solution, is still an interesting approach and opens up solutions to things we can't yet solve.

Derandomizing, as presented, apparently means proving that a few random but careful choices put you in the funnel toward the right solution, or (hopefully) can be detected otherwise, and that a few transformations or reasonably performant steps produce your solution.

lifeinthevoid•5mo ago
> Well in fact we know (if P != NP) that for any randomized ALG there's a good deterministic one

Oh, that's cool, do you have a reference for that?

jvanderbot•5mo ago
TFA (edit: in the politest way)
danhite•5mo ago
>> Well in fact we know (if P != NP) that for any randomized ALG there's a good deterministic one

> Oh, that's cool, do you have a reference for that?

The OP article has such a reference, but it's paywalled and perhaps you missed it, so here's a non-paywalled link to the paper:

Hardness vs. Randomness by Noam Nisan & Avi Wigderson https://www.math.ias.edu/~avi/PUBLICATIONS/MYPAPERS/NOAM/HAR...

furyofantares•5mo ago
I'm not looking for any kind of game changing class breaking work. I don't think I'm being understood here at all.

I must have put too much emphasis on PRNG or on maybe there's a more intelligent way to sample - you can ignore those comments. I just think it's framed poorly to make it sound more counterintuitive than it is.

jvanderbot•5mo ago
It happens. Sorry for misunderstanding
hinkley•5mo ago
If you squint a bit, many of the optimizations for quicksort are essentially a very short Monte Carlo simulation.

Randomness seems to help more with intentionally belligerent users than anything else: there is no worst pattern, because asking the same question twice yields different results. For internal use it can help a little with batch processing. The last time I did that seriously, I found a big problem with clustering of outlier workloads: one team of users had 8-10x the cost of any other team. Since this work involved populating several caches, I sorted by user instead of team, and that sorted it. It also meant that one request for team A was more likely to arrive after another request had already populated the team data in cache, instead of two of them concurrently asking the same questions. So it not only smoothed out server load, it also reduced the overall read intensity a bit. But shuffling the data worked almost as well and took hours instead of days.

jvanderbot•5mo ago
Adversary input is basically at the core of analysis of these types of algorithms, yes.

Here's another crossover analogy. In game theory, there are games where you can't play a deterministic (pure) strategy and hope to get the best outcome; you need randomized (mixed) ones. Otherwise the adversary could anticipate your strategy and generate a pathological response.
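Matching pennies is the textbook case of this: any deterministic rule loses every round to an adversary who knows the rule, while a coin flip guarantees roughly half the rounds no matter what. A toy simulation (written for illustration; both players see only your past moves):

```python
import random

def tournament(my_move, adversary, rounds=1000):
    """Matching pennies: the adversary wins a round by matching my pick.
    Each strategy is a function of my move history (a list of 0s and 1s)."""
    history, my_wins = [], 0
    for _ in range(rounds):
        mine = my_move(history)
        theirs = adversary(history)
        if mine != theirs:
            my_wins += 1
        history.append(mine)
    return my_wins
```

Pit a deterministic alternating strategy against an adversary who simply simulates it, and it wins zero rounds; the adversary can run your rule but not your coin.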

hinkley•5mo ago
I think this started, though, with systems doing incremental work and then passing the result on to an algorithm that does worse with partially sorted or inverted lists than with random ones. Adversarial inputs came later, particularly with the spread of the Internet.
jvanderbot•5mo ago
I assure you adversarial analysis predates the internet