frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
624•klaussilveira•12h ago•182 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
926•xnx•18h ago•548 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
32•helloplanets•4d ago•24 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
109•matheusalmeida•1d ago•27 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
9•kaonwarb•3d ago•7 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
40•videotopia•4d ago•1 comment

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
219•isitcontent•13h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
210•dmpetrov•13h ago•103 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
322•vecti•15h ago•143 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
369•ostacke•18h ago•94 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
358•aktau•19h ago•181 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
477•todsacerdoti•20h ago•232 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
272•eljojo•15h ago•160 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
402•lstoll•19h ago•271 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
85•quibono•4d ago•20 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
14•jesperordrup•2h ago•6 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
25•romes•4d ago•3 comments

Start all of your commands with a comma

https://rhodesmill.org/brandon/2009/commands-with-comma/
3•theblazehen•2d ago•0 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
56•kmm•5d ago•3 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
12•bikenaga•3d ago•2 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
244•i5heu•15h ago•188 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
52•gfortaine•10h ago•21 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
140•vmatsiiako•17h ago•62 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
280•surprisetalk•3d ago•37 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1058•cdrnsf•22h ago•433 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
132•SerCe•8h ago•117 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
70•phreda4•12h ago•14 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•7h ago•11 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
176•limoce•3d ago•96 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
63•rescrv•20h ago•22 comments

Convolutions, Polynomials and Flipped Kernels

https://eli.thegreenplace.net/2025/convolutions-polynomials-and-flipped-kernels/
106•mfrw•8mo ago

Comments

Sourabhsss1•8mo ago
The visualizations make the concept easy to grasp.
bjt12345•8mo ago
This and complex analysis are fascinating topics in undergraduate studies.
esafak•8mo ago
Contour integrals still feel cool.
incognito124•8mo ago
My favourite use case for this: by the same derivation as this blog, one can prove that if you have any two random variables X and Y (their distributions can be different), the probability distribution of X+Y is the convolution of the PMFs/PDFs of X and Y.
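A minimal numpy sketch of that fact, using two independent fair dice as a toy example (my choice, not from the thread): the PMF of the sum is the discrete convolution of the two PMFs.

    import numpy as np

    # PMF of a fair six-sided die over the values 1..6
    die = np.full(6, 1 / 6)

    # PMF of the sum of two independent dice: convolve the PMFs.
    # Index i of the result corresponds to the sum value i + 2.
    pmf_sum = np.convolve(die, die)

    assert abs(pmf_sum.sum() - 1) < 1e-12    # still a probability distribution
    assert abs(pmf_sum[5] - 6 / 36) < 1e-12  # P(sum = 7) = 6/36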
srean•8mo ago
On similar lines, the MAX operator on the random variables becomes the PRODUCT operator on their distributions.

It's fun to play with the (Max, +) algebra of random variables and infer its distribution.

This turns out to be quite useful in estimating completion times of dependent parallel jobs.

Spawning multiple parallel jobs becomes a Max operation and chaining sequential jobs becomes a '+' operation on the completion times. This expression tree of Max'es and Plus'es can be algebraically processed to obtain bounds on the completion time distribution.

One example is the straggler problem in mapreduce/Hadoop. In the naive case, the completion time is the max over the parallel subtasks. (*)

If the tasks have a heavy tail, which sometimes they do, the straggler's completion time can be really bad. This can be mitigated by a k-out-of-n setup, where you encode the problem in such a way that only k out of n jobs need to finish to obtain the final result. One can play with this trade-off between potentially wasted computation and expected completion time.

For heavy-tailed distributions another simplification is possible: the tails of Max and + start becoming of the same order, so one can switch between convolutions and products.

(*) This shows up in microservices architectures also. The owner/maintainer of a microservice endpoint might be very happy with its tail latency. However, an end user who consumes and composes the results of the endpoints can experience really bad tail latencies for the final output.
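A small Monte Carlo sketch of the straggler effect described above; the Pareto parameters and the choice of k are made up purely for illustration. Waiting for the Max of heavy-tailed task times is dominated by the slowest task, while a k-out-of-n setup only waits for the k-th order statistic.

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_jobs, k = 100_000, 100, 90

    # Heavy-tailed task durations (Pareto tail); parameters are illustrative only.
    tasks = rng.pareto(1.5, size=(n_trials, n_jobs)) + 1.0

    # Naive map-reduce: wait for every subtask, i.e. the Max (straggler-bound).
    t_all = tasks.max(axis=1)

    # k-out-of-n coding: only the k fastest subtasks need to finish.
    t_k = np.sort(tasks, axis=1)[:, k - 1]

    for name, t in [("wait for all", t_all), ("k-out-of-n", t_k)]:
        print(f"{name:>12}: median={np.median(t):7.1f}  p99={np.percentile(t, 99):9.1f}")

Under these made-up parameters the tail percentile of the k-out-of-n version is far smaller, which is exactly the trade-off being described.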

incognito124•8mo ago
Thanks for sharing the name of that problem! I've encountered it before while optimizing batched LLM inference. The whole batch would last until all queries in a batch were done, and by changing the batch size, you'd trade off per-query-speed (better in a larger batch) with overall performance (worse with a larger batch).

Nowadays I think this is solved in an entirely different way, though.

srean•8mo ago
It gets more entertaining.

It's common to wrap API calls with

retry on failure, or

spawn an identical request if taking longer than x, or

recursively spawn an identical request if taking longer than x, or

retry on failure but no more than k times.

All of these and similar patterns/decorators can be analysed using the same idea.
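For instance, "spawn an identical request if taking longer than x" turns the completion time into min(T1, x + T2). A quick Monte Carlo sketch under assumed lognormal latencies and an arbitrary cutoff x (all parameters made up):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    t1 = rng.lognormal(mean=0.0, sigma=1.5, size=n)  # first request
    t2 = rng.lognormal(mean=0.0, sigma=1.5, size=n)  # hedged duplicate
    x = 2.0                                          # fire the duplicate after x time units

    plain = t1
    hedged = np.minimum(t1, x + t2)                  # whichever request finishes first wins

    print("p99 plain :", np.percentile(plain, 99))
    print("p99 hedged:", np.percentile(hedged, 99))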

incognito124•8mo ago
Oh wow, pretty cool stuff! If you have more to share, you can always dump it in my mail inbox
silloncito•8mo ago
You should be careful with your estimation. The events should be independent for those properties to apply, but it is very common that one cause influences many factors, so they are not independent and all the beautiful math does not work as it does under independence. On the worst day, everything fails at once, because one resource can block others and the system gets strangled. There is the Black Swan book, about how one rare event made the financial markets realize what risk is.
srean•8mo ago
A load-balancer transparently sitting in front of the API endpoint (not an uncommon scenario) usually decouples things well enough to be practically independent.

That said, silloncito's warning does need to be heeded.

While independence is essential for the proof to go through, the relationships need not break catastrophically when independence is violated; usually the degradation is graceful in the degree of dependence. There are, however, specific, often degenerate, theoretical edge cases where the degradation is rapid.

ttoinou•8mo ago
What? I've never heard of this math. Do you mean those are literally the resulting operations in the general case, or is that an approximate explanation and we need to find more specific cases to make it true?
srean•8mo ago
The math is general and exact.

Max and Plus on the space of random variables become product and convolution on the space of their distribution functions.

    Distr(X + Y) = DistrX ° DistrY

    Distr(X ^ Y) = DistrX * DistrY

    where '^' denotes Max and '°' denotes convolution.
Note that *, +, ° and ^ are commutative and associative, so they can be chained. One can also use their distributive properties. This is really the math of groups and rings.

However, one can and one does resort to approximations to compute the desired end results.

More specifically, people are often interested not in the distribution, but some statistics. For example, mean, standard deviation, some tail percentile etc. To compute those stats from the exact distributions, approximations can be employed.

gjm11•8mo ago
Surely this isn't quite right.

Max of variables = product of cumulative distribution functions.

Sum of variables = convolution of probability density functions.

So both of the equations you write down are correct, but only if you interpret "Distr" as meaning different things in the two cases.

[EDITED to add:] Provided the random variables in question are independent, as mentioned elsewhere in the discussion; if they aren't then none of this works.

srean•8mo ago
The original post, to which I replied, is about the correspondence between summation of random variables and convolution of their distribution. Independence is sufficient for that.

I just carried through that assumption of independence in my own comment, thinking it was obvious to do that (carry over the assumptions).

ttoinou•8mo ago
But is he right about the different meanings of Distr in your equations ?
srean•8mo ago
No he is not.

Both identities work for the cumulative distribution function, which in math is called just the distribution function. I think he got confused by the fact that the convolution relation also works with densities (so he might have assumed that it works with densities only, not with distribution functions).

gjm11•8mo ago
I'm sorry, but I think you are just wrong about convolutions and cumulative distribution functions.

Let's take the simplest possible case: a "random" variable that's always equal to 0. Its cdf is a step function: 0 for negative values, 1 for positive values. (Use whatever convention you prefer for the value at 0.)

The sum of two such random variables is another with the same distribution, of course. So, what's the convolution of the cdfs?

Answer: it's not even well defined.

The convolution of functions f and g is the function h such that h(x) = integral over t of f(t) g(x-t) dt. The integral is over the whole of (in this case) the real numbers.

In this case f and g are both step functions as described above, so (using the convenient Iverson bracket notation for indicator functions) this is the integral over t of [t>0] [x-t>0] dt, i.e., of [0<t<x] dt, whose value is 0 for negative x and x for positive x.

This is not the cdf of any probability distribution since it doesn't -> 1 as x -> oo. In particular, it isn't the cdf of the right probability distribution which, as mentioned above, would be the same step function as f and g.

If X,Y are independent with pdfs f,g and cdfs F,G then the cdf of X+Y is (not F conv G but) f conv G = F conv g.
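A quick numerical check of that last identity, taking X, Y ~ Uniform(0, 1) as an arbitrary example: (f conv G) evaluated at a point matches the known cdf of X + Y.

    import numpy as np

    # X, Y ~ Uniform(0, 1), independent: pdf f = 1 on [0, 1], cdf G(u) = clip(u, 0, 1).
    dx = 1e-4
    t = np.arange(0.0, 1.0, dx)        # support of the pdf f
    s = 1.2                            # point at which to evaluate the cdf of X + Y

    f = np.ones_like(t)                # pdf of X on its support
    G = np.clip(s - t, 0.0, 1.0)       # cdf of Y evaluated at s - t

    cdf_numeric = np.sum(f * G) * dx   # (f conv G)(s) as a Riemann sum
    cdf_exact = 1 - (2 - s) ** 2 / 2   # known cdf of the sum for 1 <= s <= 2

    print(cdf_numeric, cdf_exact)      # both ~0.68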

gjm11•8mo ago
Oops, one thing in the above is completely wrong (I wrote it before thinking things through carefully, and then forgot to delete it).

It is not at all true that "it's not even well defined", and indeed the following couple of paragraphs determine exactly what the thing in question is. It's not an actual cdf, but the problem isn't that it's ill-defined but that it's well-defined but has the wrong shape to be a cdf.

silloncito•8mo ago
Let $M = \max(X, Y)$. If $X$ and $Y$ are independent, then

  $F_M(k) = P(M \leq k) = P((X \leq k) \text{ and } (Y \leq k)) = P(X \leq k) \times P(Y \leq k) = F_X(k) \times F_Y(k)$,

so $F_M = F_X \times F_Y$.
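A one-line empirical check of that identity, with independent exponentials chosen arbitrarily:

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.exponential(size=100_000)
    y = rng.exponential(size=100_000)
    k = 1.0

    lhs = np.mean(np.maximum(x, y) <= k)     # empirical F_M(k)
    rhs = np.mean(x <= k) * np.mean(y <= k)  # F_X(k) * F_Y(k)
    print(lhs, rhs)                          # both ~ (1 - e**-1)**2 ~ 0.40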
srean•8mo ago
Thanks for making the assumption of independence explicit and welcome to HN.
silloncito•8mo ago
Thank you for your welcome; I must have been lurking here for around 30 years or more (always changing accounts). Anyway, in this specific case, since M = Max(X, X) = X, you can't have F(M) = F(X)*F(X) = F(X) except when F(X) is in {0, 1}, so the independence property is essential. Welcome, fellow Lisper (for the txr and related submission), math-inspired (this one and another related to statistical estimation), with OS-related interests (your HN account); OS are not my cup of tea, but awk is not bad.

In another post there are some comments relating topology and deep learning. I wonder if there is a definition, similar to dimension in topology, that would allow you to estimate the minimal size (number of parameters) of a neural network so that it is able to achieve a certain state (for example, obtaining the capacity for one-shot learning with high probability).
srean•8mo ago
Yes, independence is absolutely an assumption that I (implicitly) made. It's essential for the convolution identity to hold as well; I just carried through that assumption.

We share an interest in AWK (*) then :) I don't know OS at all. Did you imply I know Lisp? I enjoy Scheme, but have never used it in anger. Big fan of The Little Schemer series of books.

(*) Have to find that Weinberger face Google-NY t-shirt. Little treasures.

Regarding your dimensions comment, this is well understood for a single layer, that is, for logistic regression. Lehmann's book will have the necessary material. With multiple layers it gets complicated real fast.

The best performance estimates, as in within the realm of being practically useful, largely come from two approaches: one from PAC-Bayesian bounds, the other from Statistical Physics (but these bounds are data-distribution dependent). The intrinsic dimension of the data plays a fundamental role there.

The recommended place to dig around is JMLR (journal of machine learning research).

silloncito•8mo ago
Perhaps your txr submission suggests a lisp flavor. The intrinsic dimension concept looks interesting, also the V.C. dimension, but both concepts are very general. Perhaps Lehmann's book is: Elements of large sample theory.
srean•8mo ago
Txr is super interesting.

I meant Lehmann's Theory of Point Estimation, but large sample theory is a good book too. The newer editions of TPE are a tad hefty in number of pages. The earlier versions would serve you fine.

The generic idea is that the smaller these dimensions, the easier the prediction problem. Intrinsic dimension is the one that comes closest to topology. VC is very combinatorial and gives the worst of worst-case bounds: for a typically sized dataset one ends up with an error probability estimate of less than 420. With PAC-Bayes the bounds are at least less than 1.0.

pizza•8mo ago
Check out tropical algebras
ttoinou•8mo ago
Fascinating thank you for reminding me about that
whatshisface•8mo ago
That doesn't sound right. If P(X) is the vector {0.5,0,0.5} and P(Y) is {0.5,0.5,0}, P(X)P(Y) is {0.25,0,0} and that's both not normalized and clearly not the distribution for max(X,Y). Did you get that from an LLM?
srean•8mo ago
You are using PMFs. I meant and wrote distribution function aka cumulative distribution function. They are closed under products.

> Did you get it from LLM

LOL. There must be a fun and guilty story lurking inside the accusation.

On a more serious note, I would love it if LLMs could do such simplifications and estimations on their own.

whatshisface•8mo ago
Distributions can be either PDFs or CDFs. To be honest I'd never heard of assuming that a distribution was a CDF unless otherwise specified.
srean•8mo ago
May I raise you a

https://en.m.wikipedia.org/wiki/Distribution_function

It's right in the title.

In probability theory, integration theory, as well as electrical engineering, "distribution function", unless further clarified, means that cumulative thing.

In math, nomenclature overloading can be a problem, so context matters. In the context of the Dirac delta, "distribution" means something else entirely -- generalized functions.

Oh! So sad you deleted your point about densities. One can only laugh at and enjoy these idiosyncrasies of nomenclature.

In Electrical Engineering one uses j for imaginary numbers because i is taken (by current).

math_dandy•8mo ago
This is a natural point of confusion. The true (IMO) primitive concept here is the probability measure. Probability measures on the real line are in canonical bijection with CDFs, the latter being axiomatizable as càdlàg functions (see https://en.wikipedia.org/wiki/Càdlàg) asymptotic to 0 (resp. 1) at minus infinity (resp. infinity). On the other hand, not every probability measure has a density function. (If you want the formalism of densities to capture all probability measures, you need to admit more exotic generalized functions à la Dirac.)
deepsun•8mo ago
Almost every probability theorem starts with "let's take independent random variables". But in reality almost nothing is independent. Superdeterminism even claims that exactly nothing is independent.
srean•8mo ago
You are right.

The degree of dependence matters though. Mutual information is one way to measure that.

Thankfully, some theorems remain valid even when independence is violated. The next stop after independence is martingale criteria. Martingale difference sequences can be quite strongly dependent yet allow some of the usual theorems to go through but with worse convergence rates.

mananaysiempre•8mo ago
Almost every elementary probability theorem. There are plenty of theorems conditional on bounds on how weak the dependency should be (or how fast correlations decrease with order, etc.), including CLT-like theorems, it’s just that they are difficult to state, very difficult to prove, and almost impossible to verify the applicability of. In practice you are anyway going to instead use the simpler version and check if the results make sense after the fact.
jerf•8mo ago
3blue1brown has a walkthrough of this: https://www.youtube.com/watch?v=IaSGqQa5O-M
stared•8mo ago
Beware - one step more and you get into the territory of generating functions. I recommend a book by Herbert Wilf with the wonderful name Generatingfunctionology (https://www2.math.upenn.edu/~wilf/gfology2.pdf).
eliben•8mo ago
Indeed, generating functions are mentioned in a footnote :) Very interesting topic
stared•8mo ago
Saw that!

Sometimes it makes things simpler (quite a lot of things in combinatorics), other times it is a tool for nice tricks (I have no idea how I would have solved these equations if it were not for generating functions; see the appendix of a Mafia game paper, https://arxiv.org/abs/1009.1031).

srean•8mo ago
Ooh! Lovely. Thank you.

Generating functions and Z-transforms are indispensable in probability theory, physics, signal processing, and now, it seems, for a good round of Mafia while camping with friends.

esafak•8mo ago
I tip my hat to the person who invented that.
nayuki•8mo ago
You can also multiply polynomials by way of analogy with integer multiplication:

         3  1  2  1
       ×    2  0  6
       ------------
        18  6 12  6
      0  0  0  0
   6  2  4  2
  -----------------
   6  2 22  8 12  6
= 6x^5 + 2x^4 + 22x^3 + 8x^2 + 12x^1 + 6x^0.
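Treating the coefficient lists (highest degree first, as laid out above) as sequences to convolve gives the same answer; a numpy sketch:

    import numpy as np

    p = [3, 1, 2, 1]   # 3x^3 + x^2 + 2x + 1
    q = [2, 0, 6]      # 2x^2 + 6

    print(np.convolve(p, q))   # [ 6  2 22  8 12  6]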
kazinator•8mo ago
Not to mention divide:

  2x^2 + 5x - 3
  -------------
     x + 2

                  2x + 1
          ______________
   x + 2 | 2x^2 + 5x - 3
           2x^2 + 4x
           -------------
                   x - 3
                   x + 2
                   -----
                     - 5
The remainder is -5, which gives us a -5/(x + 2) term. Thus

                    5
   =    2x + 1  - -----
                  x + 2
How about something we know divides:

                     x + 1
             _______________
      x + 1 | x^2 + 2x + 1
              x^2 + x
              --------
                     x + 1
                     x + 1
                     -----
                         0
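Both divisions can be sketched the same way with numpy's polynomial helpers (coefficients highest degree first):

    import numpy as np

    # (2x^2 + 5x - 3) / (x + 2)
    quot, rem = np.polydiv([2, 5, -3], [1, 2])
    print(quot, rem)   # [2. 1.] [-5.]  ->  quotient 2x + 1, remainder -5

    # (x^2 + 2x + 1) / (x + 1) divides exactly
    quot, rem = np.polydiv([1, 2, 1], [1, 1])
    print(quot, rem)   # [1. 1.] [0.]   ->  quotient x + 1, remainder 0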