frontpage.

Brookhaven Lab's RHIC concludes 25-year run with final collisions

https://www.hpcwire.com/off-the-wire/brookhaven-labs-rhic-concludes-25-year-run-with-final-collis...
20•gnufx•2h ago•8 comments

SectorC: A C Compiler in 512 bytes

https://xorvoid.com/sectorc.html
61•valyala•3h ago•12 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
36•surprisetalk•3h ago•43 comments

Software factories and the agentic moment

https://factory.strongdm.ai/
75•mellosouls•6h ago•147 comments

I write games in C (yes, C)

https://jonathanwhiting.com/writing/blog/games_in_c/
105•valyala•3h ago•81 comments

Hoot: Scheme on WebAssembly

https://www.spritely.institute/hoot/
138•AlexeyBrin•8h ago•26 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
86•vinhnx•6h ago•11 comments

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
846•klaussilveira•23h ago•253 comments

First Proof

https://arxiv.org/abs/2602.05192
60•samasblack•5h ago•49 comments

The F Word

http://muratbuffalo.blogspot.com/2026/02/friction.html
14•zdw•3d ago•0 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
1080•xnx•1d ago•615 comments

Al Lowe on model trains, funny deaths and working with Disney

https://spillhistorie.no/2026/02/06/interview-with-sierra-veteran-al-lowe/
59•thelok•5h ago•8 comments

Reinforcement Learning from Human Feedback

https://rlhfbook.com/
88•onurkanbkrc•8h ago•5 comments

Start all of your commands with a comma (2009)

https://rhodesmill.org/brandon/2009/commands-with-comma/
509•theblazehen•3d ago•188 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
226•jesperordrup•13h ago•80 comments

Microsoft account bugs locked me out of Notepad – Are thin clients ruining PCs?

https://www.windowscentral.com/microsoft/windows-11/windows-locked-me-out-of-notepad-is-the-thin-...
34•josephcsible•1h ago•26 comments

We mourn our craft

https://nolanlawson.com/2026/02/07/we-mourn-our-craft/
298•ColinWright•2h ago•353 comments

Show HN: I saw this cool navigation reveal, so I made a simple HTML+CSS version

https://github.com/Momciloo/fun-with-clip-path
22•momciloo•3h ago•3 comments

Coding agents have replaced every framework I used

https://blog.alaindichiappari.dev/p/software-engineering-is-back
246•alainrk•8h ago•393 comments

72M Points of Interest

https://tech.marksblogg.com/overture-places-pois.html
34•marklit•5d ago•6 comments

France's homegrown open source online office suite

https://github.com/suitenumerique
601•nar001•7h ago•264 comments

Selection Rather Than Prediction

https://voratiq.com/blog/selection-rather-than-prediction/
11•languid-photic•3d ago•4 comments

The AI boom is causing shortages everywhere else

https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/
172•1vuio0pswjnm7•9h ago•233 comments

A Fresh Look at IBM 3270 Information Display System

https://www.rs-online.com/designspark/a-fresh-look-at-ibm-3270-information-display-system
43•rbanffy•4d ago•9 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
121•videotopia•4d ago•36 comments

History and Timeline of the Proco Rat Pedal (2021)

https://web.archive.org/web/20211030011207/https://thejhsshow.com/articles/history-and-timeline-o...
20•brudgers•5d ago•4 comments

Show HN: Kappal – CLI to Run Docker Compose YML on Kubernetes for Local Dev

https://github.com/sandys/kappal
27•sandGorgon•2d ago•14 comments

Where did all the starships go?

https://www.datawrapper.de/blog/science-fiction-decline
89•speckx•4d ago•99 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
207•limoce•4d ago•113 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
282•isitcontent•23h ago•38 comments

How randomness improves algorithms (2023)

https://www.quantamagazine.org/how-randomness-improves-algorithms-20230403/
75•kehiy•5mo ago

Comments

flerovium114•5mo ago
Randomized numerical linear algebra has proven very useful as well. It allows you to use a black-box function implementing matrix-vector multiplication (MVM) to compute standard decompositions like SVD, QR, etc. Very useful when MVM is O(N log N) or better.
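The black-box MVM idea can be sketched with the standard randomized range-finder. A minimal sketch assuming numpy; the function name and parameters are illustrative, not from any particular library, and the matrix is reachable only through a `matvec` callback:

```python
import numpy as np

def randomized_svd(matvec, n, k, oversample=10, seed=0):
    """Approximate top-k SVD of an m x n matrix A that is reachable
    only through matvec(X) = A @ X for an (n, r) block of probes."""
    rng = np.random.default_rng(seed)
    r = k + oversample
    # Random probes sketch the range of A: span(A @ G) ~= span(A).
    Y = matvec(rng.standard_normal((n, r)))   # (m, r)
    Q, _ = np.linalg.qr(Y)                    # orthonormal basis for that range
    # Project A onto the subspace. Using matvec on the identity keeps the
    # black-box interface, though it costs n probes -- fine for a sketch.
    B = Q.T @ matvec(np.eye(n))               # (r, n), small
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]

# Usage: recover a rank-5 matrix from matvec access only.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = randomized_svd(lambda X: A @ X, n=100, k=5)
```

When MVM is O(N log N) or better (FFT-based operators, sparse matrices), only the probe products touch the full operator, which is where the speedup comes from.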
EdwardCoffin•5mo ago
It's unmentioned in the article, but Trevor Blackwell's PhD thesis, Applications of Randomness in System Performance Measurement [1] was advocating this in 1998:

This thesis presents and analyzes a simple principle for building systems: that there should be a random component in all arbitrary decisions. If no randomness is used, system performance can vary widely and unpredictably due to small changes in the system workload or configuration. This makes measurements hard to reproduce and less meaningful as predictors of performance that could be expected in similar situations.

[1] https://tlb.org/docs/thesis.pdf

hinkley•5mo ago
All else being equal, I like to have either a prime number of servers or a prime number of inflight requests per server. I'm always slightly afraid someone is going to send a batch of requests, or tune a benchmark to run a number of times that divides evenly into the system parallelism, and we won't be testing what we think we are testing, due to accidental locality of reference that doesn't show up in the general population. Not unlike how you get uneven gear wear if you mesh two gears whose tooth counts share a large common factor, like a ratio of 3:1 or 2:3, so the same teeth keep meeting all the time.

But all else is seldom equal, and random-2 (two random choices) works as well or better.
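The divisibility hazard above is easy to demonstrate with a toy model (round-robin assignment is assumed here, and the numbers are made up):

```python
def assign(request_ids, n_servers):
    """Round-robin: request i lands on server i mod n_servers."""
    hits = [0] * n_servers
    for i in request_ids:
        hits[i % n_servers] += 1
    return hits

# A benchmark that issues every 4th request id against 8 servers
# exercises only 2 of them -- accidental locality of reference.
stride4_of_8 = assign(range(0, 1000, 4), 8)
# Against 7 (prime) servers the same strided workload touches all of them.
stride4_of_7 = assign(range(0, 1000, 4), 7)
```

Any stride coprime to the server count spreads load evenly, which is why a prime count is the easy way to make "coprime" the common case.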

sestep•5mo ago
Could the question mark in the HN version of the title be removed? It makes it read as a bit silly.
optimalsolver•5mo ago
Written by a shiba inu
k_g_b_•5mo ago
In my experience it's a common mistake among non-native English speakers, native speakers of Slavic languages in particular. I see it often at work with titles starting with an interrogative word like "how".
pixelpoet•5mo ago
Guaranteed this is the case, I see it a lot too. They've done it twice before on previous submissions: https://news.ycombinator.com/item?id=44755116 and https://news.ycombinator.com/item?id=44785347

In case anyone is curious, the way to phrase it as a question would be, "How does randomness improve algorithms?"

jvanderbot•5mo ago
Weirdly, "Why randomness improves algorithms." is closer to the truth, and also cannot be expressed correctly with a question mark.
prerok•5mo ago
Indeed. Well, FWIW, the title translated into my native Slavic language would also make no sense with a question mark.

What's interesting is that both "How..." and "How does..." would translate into the same words, but with a dot or a question mark at the end they would mean two different things.

That said, that would be true for many languages.

egypturnash•5mo ago
It’s not there in the original title.
furyofantares•5mo ago
I don't think 'random' is doing any of the work. These sound like they would work fine with a deterministic PRNG seeded at 0. They don't sound like they need to be looking at lava lamps or the like.

It's that there's a population of values (integers for factoring, nodes-to-delete for the graph) where we know a way to get a lot of information cheaply from most values, but we don't know which values, so we sample them.

Which isn't to say the PRNG isn't doing work - maybe it is, maybe any straightforward iteration through the sample space has problems, failure values being clumped together, or similar values providing overlapping information.

If so, that suggests to me that you can do better sampling than a PRNG, although maybe the benefit is small. When the article talks about 'derandomizing' an algorithm, is it referring to removing the concept of sampling from this space entirely, or is it talking about doing a better job sampling than 'random'?
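This is in fact how randomized primality testing is deployed in practice: a deterministically seeded PRNG is fine, because the sampled bases only need to be spread across the population of witnesses, not unpredictable. A minimal Miller-Rabin sketch (illustrative, not from the article; the function name and parameters are mine):

```python
import random

def is_probable_prime(n, rounds=20, rng=random.Random(0)):
    """Miller-Rabin test. Each random base exposes a composite with
    probability >= 3/4, so the error rate is at most 4**-rounds.
    A PRNG seeded at 0 works: the bases just need to be spread out,
    not unpredictable (absent an adversary who knows the seed)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):      # cheap trial division first
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                    # write n - 1 as d * 2**s
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = rng.randrange(2, n - 1)      # the random sample
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a is a witness: n is composite
    return True
```

The population-of-values framing fits exactly: at least 3/4 of bases are witnesses for any composite n, we just don't know which ones without sampling.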

jvanderbot•5mo ago
I don't follow the question.

A pseudo-random sequence of choices is still sufficiently detached from the input. Random here means "I'm making a decision in a way that is sufficiently independent of the input that structuring the input adversarially won't cause worst-case performance." Coupled with "the cost of this algorithm is expressed assuming real random numbers".

That's the work Random is doing.

INB4 worst case: you can do worst-case analysis on randomized algorithms, but it's either worst case across any choice or worst case in expectation, not worst case given a poor implementation of the RNG. Effectively, randomization sometimes serves to shake you out of an increasingly niche and unlikely series of bad decisions that is the crux of an adversarial input.

To wit

> In the rare cases where the algorithm makes an unlucky choice and gets bogged down at the last step, they could just stop and run it again.
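Randomized quicksort is the textbook instance of both points: any fixed pivot rule has adversarial inputs that force quadratic time, while a pivot drawn independently of the input order gives expected O(n log n) on every input. A minimal sketch:

```python
import random

def quicksort(xs, rng=random.Random(0)):
    """Quicksort with a pivot chosen independently of the input order,
    so no fixed input can force the worst case."""
    if len(xs) <= 1:
        return xs
    pivot = rng.choice(xs)
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    hi = [x for x in xs if x > pivot]
    return quicksort(lo, rng) + eq + quicksort(hi, rng)

# A descending list is the classic killer input for a "first element"
# pivot rule; with a random pivot it is just another input.
result = quicksort(list(range(10, 0, -1)))
```

Note the seeded `random.Random(0)`: the algorithm is fully deterministic end to end, yet the analysis still goes through because the pivot choices are uncorrelated with the input's structure.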

furyofantares•5mo ago
I think "How randomness improves algorithms" makes it sound kind of mystical and paradoxical.

Instead we could say "we know shortcuts that work for most values in a population, but we don't know how to figure out which values without expensive computation" and that's not very mystical or paradoxical.

And using random sampling is now the obvious way to deal with that situation. But it's not randomness doing the work; that's an implementation detail of how you do the sampling.

It very well can be that there isn't another obvious way to iterate through the sample space that isn't a disaster. Maybe bad values are clumped together so when you retry on failure you retry a lot. But if that's the case, there might also be a better way to sample than random - if bad values are clumped then there may be an intelligent way to sample such that you hit two bad values in a row less often than random.

My question is if that's what's being referred to as 'derandomizing' - taking one of these algorithms that uses random sampling and sample more intelligently. Or if they instead mean using what they learned from the (so-called) probabilistic algorithm to go back to a more traditional form.

jvanderbot•5mo ago
Well in fact we know (if P != NP) that for any randomized ALG there's a good deterministic one. So it's not going to do the kind of game-changing, class-breaking work you're looking for. You know, where random is the only way.

How randomization helps is by making it much easier to design algorithms. E.g. verifying a solution is cheap, so proving your random choice is in some class of good choices, and making an algorithm that turns that into a solution, is still an interesting approach and opens up solutions to things we can't yet solve.

Derandomizing as presented apparently means proving for a few random but careful choices you're in the funnel towards the right solution, or (hopefully) can detect otherwise, and a few transformations or reasonably performant steps produce your solution.

lifeinthevoid•5mo ago
> Well in fact we know (if P != NP) that for any randomized ALG there's a good deterministic one

Oh, that's cool, do you have a reference for that?

jvanderbot•5mo ago
TFA (edit: in the politest way)
danhite•5mo ago
>> Well in fact we know (if P != NP) that for any randomized ALG there's a good deterministic one

> Oh, that's cool, do you have a reference for that?

The OP article has such a reference, but theirs is paywalled, and perhaps you missed it, so you may wish to see this non-paywalled link to the paper:

Hardness vs. Randomness by Noam Nisan & Avi Wigderson https://www.math.ias.edu/~avi/PUBLICATIONS/MYPAPERS/NOAM/HAR...

furyofantares•5mo ago
I'm not looking for any kind of game changing class breaking work. I don't think I'm being understood here at all.

I must have put too much emphasis on PRNG or on maybe there's a more intelligent way to sample - you can ignore those comments. I just think it's framed poorly to make it sound more counterintuitive than it is.

jvanderbot•5mo ago
It happens. Sorry for misunderstanding
hinkley•5mo ago
If you squint a bit, many of the optimizations for quicksort are essentially a very short Monte Carlo simulation.

Randomness seems to help more with intentionally belligerent users than anything else. There is no worst pattern, because the same question asked twice yields a different result. For internal use that can help a little with batch processing: the last time I did that seriously, I found a big problem with clustering of outlier workloads. One team of users had 8-10x the cost of any other team. Since this work involved populating several caches, I sorted by user instead of team, and that sorted it. It also meant that one request for team A was more likely to arrive after another request had populated the team data in cache, instead of two of them concurrently asking the same questions. So it not only smoothed out server load, it also reduced the overall read intensity a bit. But shuffling the data worked almost as well and took hours instead of days.

jvanderbot•5mo ago
Adversarial input is basically at the core of the analysis of these types of algorithms, yes.

Here's another crossover analogy. In game theory, there are games where you can't play a deterministic (pure) strategy and hope to get the best outcome; you need randomized (mixed) ones. Otherwise the adversary could have anticipated your strategy and generated a pathological response.
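The textbook example is matching pennies: every pure strategy is fully exploitable by a best-responding adversary, while the uniform mixed strategy guarantees the game's value. A small sketch (function names are mine):

```python
# Matching pennies, row player's payoff: +1 if the coins match, else -1.
MOVES = ("heads", "tails")

def payoff(row, col):
    return 1 if row == col else -1

def guaranteed_value(p_heads):
    """Row player's guaranteed expected payoff when playing heads with
    probability p_heads: the adversary best-responds with the column
    that minimizes the row player's expectation."""
    return min(
        p_heads * payoff("heads", col) + (1 - p_heads) * payoff("tails", col)
        for col in MOVES
    )

# Any pure (deterministic) strategy is fully exploitable,
# while the uniform mixed strategy guarantees the game's value, 0.
pure = guaranteed_value(1.0)      # -1.0
mixed = guaranteed_value(0.5)     # 0.0
```

The parallel to randomized algorithms: the random pivot (or base, or probe) is a mixed strategy against an adversary who chooses the input.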

hinkley•5mo ago
I think this started, though, with systems doing incremental work and then passing the result on to an algorithm that does worse with partially sorted or inverted lists than with random ones. Adversarial input came later, particularly with the spread of the Internet.
jvanderbot•5mo ago
I assure you adversarial analysis predates the internet