
OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
616•klaussilveira•12h ago•180 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
920•xnx•17h ago•545 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
32•helloplanets•4d ago•22 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
105•matheusalmeida•1d ago•26 comments

Jeffrey Snover: "Welcome to the Room"

https://www.jsnover.com/blog/2026/02/01/welcome-to-the-room/
8•kaonwarb•3d ago•2 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
37•videotopia•4d ago•1 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
214•isitcontent•12h ago•25 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
207•dmpetrov•12h ago•102 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
319•vecti•14h ago•141 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
356•aktau•19h ago•181 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
367•ostacke•18h ago•94 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
474•todsacerdoti•20h ago•232 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
270•eljojo•15h ago•159 comments

Vocal Guide – belt sing without killing yourself

https://jesperordrup.github.io/vocal-guide/
13•jesperordrup•2h ago•4 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
400•lstoll•18h ago•271 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
25•romes•4d ago•3 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
82•quibono•4d ago•20 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
56•kmm•4d ago•3 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
243•i5heu•15h ago•185 comments

Was Benoit Mandelbrot a hedgehog or a fox?

https://arxiv.org/abs/2602.01122
10•bikenaga•3d ago•2 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
51•gfortaine•10h ago•17 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
139•vmatsiiako•17h ago•61 comments

Understanding Neural Network, Visually

https://visualrambling.space/neural-network/
277•surprisetalk•3d ago•37 comments

I now assume that all ads on Apple news are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1055•cdrnsf•21h ago•433 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
69•phreda4•12h ago•13 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
128•SerCe•8h ago•113 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
28•gmays•7h ago•10 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
173•limoce•3d ago•94 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
62•rescrv•20h ago•22 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
30•denysonique•9h ago•6 comments

Uncertain<T>

https://nshipster.com/uncertainty/
448•samtheprogram•5mo ago

Comments

mackross•5mo ago
Always enjoy mattt’s work. Looks like a great library.
boscillator•5mo ago
Does this handle covariance between different variables? For example, the location of the object you're measuring your distance to presumably also has some error in its position, which may be correlated with your position (if, for example, it comes from another GPS operating at a similar time).

Certainly a univariate model in the type system could be useful, but it would be extra powerful (and more correct) if it could handle covariance.

layer8•5mo ago
To properly model quantum mechanics, you’d have to associate a complex-valued wave function with any set of entangled variables you might have.
evanb•5mo ago
If you need to track covariance you might want to play with gvar https://gvar.readthedocs.io/en/latest/ in python.
joerick•5mo ago
I've been wondering for a while if a program could "learn" covariance somehow, through real-world usage.

Otherwise, it feels to me that it'd be consistently wrong to model the variables as independent. And any program of notable size is gonna be far too big to consider correlations between all the variables.

As for how one might do the learning, I don't know yet!

flaghacker•5mo ago
Using this sampling-based approach you get correct covariance modeling for free. You have to only sample leaf values that are used in multiple places once per evaluation, but it looks like they do just that: https://github.com/mattt/Uncertain/blob/962d4cc802a2b179685d...
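The per-evaluation leaf caching described above can be sketched in a few lines. This is illustrative Python, not the Swift library's actual implementation; `Uncertain`, `normal`, and `sample` are made-up names for this sketch.

```python
import random

class Uncertain:
    """Sampling-based uncertain value. Each sample() draws one joint
    sample of the whole computation graph; leaf values are cached per
    evaluation, so a leaf used in several places contributes the *same*
    draw to each use, which is what preserves correlation."""

    def __init__(self, fn):
        self._fn = fn  # fn(cache) -> float

    @staticmethod
    def normal(mu, sigma):
        leaf = object()  # unique identity for per-evaluation caching
        def fn(cache):
            if leaf not in cache:
                cache[leaf] = random.gauss(mu, sigma)
            return cache[leaf]
        return Uncertain(fn)

    def __add__(self, other):
        return Uncertain(lambda c: self._fn(c) + other._fn(c))

    def __sub__(self, other):
        return Uncertain(lambda c: self._fn(c) - other._fn(c))

    def sample(self):
        return self._fn({})  # fresh cache per joint evaluation

random.seed(0)
x = Uncertain.normal(10.0, 1.0)
d = x - x  # same leaf on both sides: perfectly correlated
assert all(d.sample() == 0.0 for _ in range(100))
```

Without the cache, `x - x` would subtract two independent draws and come out noisy instead of exactly zero.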
jakubmazanec•5mo ago
[flagged]
cobbal•5mo ago
I don't think inference is part of this at all, frequentist or otherwise.

It's not part of the type system, it's just the giry monad as a library.

frizlab•5mo ago
> And why does it need to be part of the type system?

As presented in the article, it is indeed just a library.

geocar•5mo ago
> What if I want Bayesian?

Bayes is mentioned on page 46.

> And why does it need to be part of the type system? It could be just a library.

It is a library that defines a type.

It is not a new type system, or an extension to any particularly complicated type system.

> Am I missing something?

Did you read it?

https://www.microsoft.com/en-us/research/wp-content/uploads/...

https://github.com/klipto/Uncertainty/

jakubmazanec•5mo ago
> Bayes is mentioned on page 46.

Bayes isn't mentioned in the linked article. But thanks for the links.

geocar•5mo ago
That didn't surprise me, because I took the article to be about adapting the .NET library linked on Microsoft's site to Swift. If I wanted to understand the library and the approach, I figured I'd better read the links where I could actually learn about them.
muxl•5mo ago
In this design it's implemented as a generic type because the way uncertainty "pollutes" underlying values maps well onto monads, which are expressed here through generics.
AlotOfReading•5mo ago
A small note, but GPS is only well-approximated by a circular uncertainty in specific conditions, usually open sky and long-time fixes. The full uncertainty model is much more complicated, hence the profusion of ways to measure error. This becomes important in many of the same situations that would lead you to stop treating the fix as a point location in the first place. To give a concrete example, autonomous vehicles will encounter situations where localization uncertainty is dominated by non-circular multipath effects.

If you go down this road far enough you eventually end up reinventing particle filters and similar.

mikepurvis•5mo ago
Vehicle GPS is usually augmented by a lot of additional sensors and assumptions, notably the speedometer, compass, and knowledge that you'll be on one of the roads marked on its map. Not to mention a fast fix because you can assume you haven't changed position since you last powered on.
monocasa•5mo ago
As well as a fast fix because you know what mobile cell or wifi network you're on.
astrange•5mo ago
None of the inputs you mention work against multipath effects in cities, which means car GPS won't know which lane you're in and in a grid system may think you're on the next street over.

If you have an HD map you can solve for it using building shapes or by looking at the street with cameras. WiFi seems like it would help, but the locations of the WiFi terminals are themselves based on crowdsourced GPS.

o11c•5mo ago
> Not to mention a fast fix because you can assume you haven't changed position since you last powered on.

... until you use a ferry.

jeffreygoesto•5mo ago
Well. Some part of the 101 was moved a bunch of feet sideways after construction. Really hard to correct for, the GPS and the map localization were constantly fighting like an old couple... Had to re-map that stretch quickly...
blauditore•5mo ago
> assume you haven't changed position since you last powered on

Sounds like a classic case of programmers ignoring corner cases: Towing, ferries, car trains, pushing the car because it broke down...

It's when you find messages in the log like "this should never happen".

mauvehaus•5mo ago
You can pretty clearly use it to correct errors up to a point though. If you have a 5km difference from when the GPS was turned off, you've probably hit a corner case. If you have a 25m difference, and it's converging on the last location as you pick up satellites, snapping to the prior location is almost certainly correct.
mauvehaus•5mo ago
And yet, sometimes driving down a divided, limited access highway, Apple/Google/whatever maps will suddenly start giving directions from whatever parallel dirt road I happen to be driving next to. As though there's a situation where I left the highway at highway speed, crossing a ditch, crushing a fence, and possibly smashing through a guard rail, and am now traveling 65mph/100kph down a dirt road.
nullhole•5mo ago
Lidar points aren't points; they're spheroids centred on the most likely location.
layer8•5mo ago
Arguably Uncertain should be the default, and you should have to annotate a type as certain T when you are really certain. ;)
esafak•5mo ago
A complement to Optional.
nine_k•5mo ago
Only for physical measurements. For things like money, you should be pretty certain, often down to exact fractional cents.

It appears that a similar approach is implemented in some modern Fortran libraries.

XorNot•5mo ago
Money has the problem that no matter how clever you are, someone will punch all the values into Excel and then complain they don't match.

Or they specify they're paying X per day but want hourly itemized billing... and it should definitely still come out to X per day. (One employer did this, which meant I invoiced them with something like 8 digits of precision due to how it divided, and they refused to accept a line item for mathematical uncertainty aggregates.)

rictic•5mo ago
A person might have mistyped a price, a barcode may have been misread, the unit prices might be correct but the quantity could be mistaken. Modeling uncertainty well isn't just about measurement error from sensors.

I wonder what it'd look like to propagate this kind of uncertainty around. You might want to check the user's input against a representative distribution to see if it's unusual and, depending on the cost of an error vs the friction of asking, double-check the input.

bee_rider•5mo ago
Typos seem like a different type of error from physical tolerances, and one that would be really hard to reason about mathematically.
random3•5mo ago
Have you ever tried working computationally with money? Forget money, have you worked with floating points? There really isn't anything certain.
nine_k•5mo ago
Yes, I worked in a billing department. No, floats are emphatically not suitable for representing money, except the very rounded values in presentations.

Floats try to keep the relative error at bay, so their absolute precision varies greatly. You need to sum them starting with the smallest magnitude, and do many other subtle tricks, to limit rounding errors.
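Both problems are easy to demonstrate in a few lines (a quick Python sketch, using the standard library's `decimal` module for exact money):

```python
from decimal import Decimal

# Ten payments of 10 cents, as floats: the total is *not* one dollar.
payments = [0.1] * 10
assert sum(payments) != 1.0            # actually 0.9999999999999999

# The same amounts as exact decimals: the total is exact.
exact = [Decimal("0.10")] * 10
assert sum(exact) == Decimal("1.00")

# Why summation order matters: adding a small value to a huge float
# loses it entirely, because absolute precision shrinks with magnitude.
assert 1e16 + 1.0 - 1e16 == 0.0        # the 1.0 vanished
```

This is why summing floats from smallest magnitude up (or using compensated summation) limits rounding error, and why exact decimal types are the usual choice for billing.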

geocar•5mo ago
> For things like money, you should be pretty certain, often down to exact fractional cents.

That's one way to look at it.

Another is that Money is certain only at the point of exchange.

> It appears that a similar approach is implemented in some modern Fortran libraries.

I'd be curious about that. Do you have a link?

munchler•5mo ago
Is this essentially a programmatic version of fuzzy logic?

https://en.wikipedia.org/wiki/Fuzzy_logic

esafak•5mo ago
https://en.wikipedia.org/wiki/Probabilistic_programming more like. It is already a thing; see, for example, https://pyro.ai/
krukah•5mo ago
Monads are really undefeated. This particular application feels to me akin to wavefunction evolution? Density matrices as probability monads over Hilbert space, with unitary evolution as bind, measurement/collapse as pure/return. I guess everything just seems to rhyme under a category theory lens.
valcron1000•5mo ago
Relevant (2006): https://web.engr.oregonstate.edu/~erwig/pfp/
8note•5mo ago
For mechanical engineering drawings that communicate with machinists and the like, we use tolerances,

e.g. 10cm +8mm/-3mm,

for what the acceptable range is, both bigger and smaller.

I'd expect something like "are we there yet" referencing GPS to understand the direction of the error and which directions of uncertainty are better or worse.

mabster•5mo ago
Something that's bugged me about this notation, though, is that sometimes it means "cannot exceed the bounds" and sometimes it means "only exceeds the bounds 10% of the time".
taneq•5mo ago
I don’t think I’ve ever seen mechanical drawings have “90% confidence” dimensions like this. If a part’s too big then it won’t fit, and it’s probably useless.
kevin_thibedeau•5mo ago
If a test procedure is verifying all dimensional accuracy, it can be assumed to be bounding tolerance. If it's a mass production line with less than 100% testing of parts, you'd have to expect that some outliers get by and the tolerance is something like 3-sigma on a Gaussian.
mabster•5mo ago
Yeah, it's probably field-specific, and I guess Gaussian-based uncertainty would be more about statistical sampling than about tolerances. I've noticed that if arithmetic is being done on it, it's almost certainly Gaussian. I just mean that whenever I see uncertainty written like this, I don't know which is meant!
brabel•5mo ago
In Mechanical Engineering, tolerances ensure that when you put parts together, they will fit as long as the tolerances were respected.

It's not statistical. If the machinist makes a part that's not within the +/- bounds, they throw it away and start again. If you tried to fit multiple parts, all with only statistical respect for tolerances, you would run into trouble almost 100% of the time with just a few pieces.

mabster•5mo ago
Yeah understood. In electronics: Resistor values are Gaussian but they test and bucket the resistors so that they can be treated as tolerances for similar reasons.
_kb•5mo ago
Or for something likely relevant to many here - 3 point time estimates for project planning.

Probability distributions (even very simple ones) provide a much clearer view across any domain where there’s inherent uncertainty.

cb321•5mo ago
If you are in an even more "approximate" mindset (as opposed to propagating by simulation to get real world re-sampled skewed distributions, as often happens in experimental physics labs, or at least their undergraduate courses), there is an error propagation (https://en.wikipedia.org/wiki/Propagation_of_uncertainty) simplification for "small" errors thing you can do. Then translating "root" errors to "downstream errors" is just simple chain rule calculus stuff. (There is a Nim library for that at https://github.com/SciNim/Measuremancer that I use at least every week or two - whenever I'm timing anything.)

It usually takes some "finesse" to get your data / measurements into territory where the errors are even small in the first place. So, I think it is probably better to do things like this Uncertain<T> for the kinds of long/fat/heavy tailed and oddly shaped distributions that occur in real world data { IF the expense doesn't get in your way some other way, that is, as per Senior Engineer in the article }.
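The small-error propagation formula mentioned above is short enough to sketch directly. This is hypothetical Python (a library like the one linked would use the chain rule analytically; finite differences are used here only for brevity):

```python
import math

def propagate(f, values, sigmas, h=1e-6):
    """First-order (small-error) uncertainty propagation:
    sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2,
    with partial derivatives estimated by central finite differences.
    Assumes independent errors."""
    grad = []
    for i in range(len(values)):
        up = list(values); up[i] += h
        dn = list(values); dn[i] -= h
        grad.append((f(*up) - f(*dn)) / (2 * h))
    value = f(*values)
    sigma = math.sqrt(sum((g * s) ** 2 for g, s in zip(grad, sigmas)))
    return value, sigma

# Area of a rectangle measured as 2.0±0.1 by 3.0±0.2:
area, sigma = propagate(lambda w, l: w * l, [2.0, 3.0], [0.1, 0.2])
assert abs(area - 6.0) < 1e-9
# analytic check: sqrt((3*0.1)^2 + (2*0.2)^2) = sqrt(0.25) = 0.5
assert abs(sigma - 0.5) < 1e-4
```

As the comment says, this linearization only holds when the errors are small relative to the curvature of `f`; for fat-tailed or skewed distributions, sampling as in Uncertain&lt;T&gt; is the safer route.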

black_knight•5mo ago
This seems closely related to this classic Functional Pearl: https://web.engr.oregonstate.edu/~erwig/papers/PFP_JFP06.pdf

It’s so cool!

I always start my introductory course on Haskell with a demo of the Monty Hall problem with the probability monad and using rationals to get the exact probability of winning using the two strategies as a fraction.
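The exact-probability demo translates outside Haskell too. Here is a sketch of a probability monad over rationals in Python (the names `uniform`, `bind`, and `monty` are mine, not from the Pearl):

```python
from collections import defaultdict
from fractions import Fraction

def uniform(xs):
    """A distribution represented as a dict {outcome: probability}."""
    p = Fraction(1, len(xs))
    return {x: p for x in xs}

def bind(dist, f):
    """Monadic bind: push each outcome through f (which returns a
    distribution) and collect the weighted results exactly."""
    out = defaultdict(Fraction)
    for x, px in dist.items():
        for y, py in f(x).items():
            out[y] += px * py
    return dict(out)

def monty(switch):
    def after_car(car):
        def after_pick(pick):
            # The host always opens a goat door, so winning depends
            # only on whether the first pick was the car.
            if switch:
                return {car != pick: Fraction(1)}  # switch wins iff pick was a goat
            return {car == pick: Fraction(1)}
        return bind(uniform([1, 2, 3]), after_pick)
    return bind(uniform([1, 2, 3]), after_car)

assert monty(switch=False)[True] == Fraction(1, 3)
assert monty(switch=True)[True] == Fraction(2, 3)
```

Because everything is a `Fraction`, the answers come out as exact rationals (1/3 vs 2/3) rather than Monte Carlo estimates.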

internet_points•5mo ago
See also the Haskell library monad-bayes https://monad-bayes.netlify.app/tutorials/ https://www.tweag.io/blog/2019-09-20-monad-bayes-1/
droideqa•5mo ago
Could this be implemented in Rust or Clojure?

Does Anglican kind of do this?

j2kun•5mo ago
This concept has been done many times in the past, under the name "interval arithmetic." Boost has it [1] as does flint [2]

What is really curious is why, after being reinvented so many times, it is not more mainstream. I would love to talk to people who have tried using it in production and then decided it was a bad idea (if they exist).

[1]: https://www.boost.org/doc/libs/1_89_0/libs/numeric/interval/... [2]: https://arblib.org/

Tarean•5mo ago
Interval arithmetic is only a constant factor slower but may simplify at every step. For every operation over numbers there is a unique most precise equivalent op over intervals, because there's a Galois connection. But just because there is a most precise way to represent a set of numbers as an interval doesn't mean the representation is precise.

A computation graph which gets sampled like here is much slower but can be accurate. You don't need an abstract domain which loses precision at every step.

bee_rider•5mo ago
It would have been sort of interesting if we’d gone down the road of often using interval arithmetic. Constant factor slower, but also the operations are independent. So if it was the conventional way of handling non-integer numbers, I guess we’d have hardware acceleration by now to do it in parallel “for free.”
eru•5mo ago
You can probably get the parallelism for interval arithmetic today? Though it would probably require a bit of effort and not be completely free.

On the CPU you probably get implicit parallel execution with pipelines and re-ordering etc, and on the GPU you can set up something similar.

pklausler•5mo ago
Interval arithmetic makes good intuitive sense when the endpoints of the intervals can be represented exactly. Figuring out how to do that, however, is not obvious.
eru•5mo ago
Also not all uncertainties are modeled well by uniform distributions over an interval.
kccqzy•5mo ago
The article says,

> Under the hood, Uncertain<T> models GPS uncertainty using a Rayleigh distribution.

And the Rayleigh distribution is clearly not just an interval with a uniformly random distribution in between. Normal interval arithmetic isn't useful because that uniform random distribution isn't at all a good model for the real world.

Take for example that Boost library you linked. Ask it to compute (-2,2)*(-2,2). It will give (-4,4). A more sensible result might be something like (-2.35, 2.35). The -4 lower bound is only attainable when you have -2 and 2 as the multiplicands which are at the extremes of the interval; probabilistically if we assume these are independent random variables then two of them achieving this extreme value simultaneously should have an even lower probability.
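The point can be sketched numerically: the standard interval product rule really does give (-4, 4), while a Monte Carlo draw from two *independent* uniforms on (-2, 2) shows the product almost never approaches those bounds (the ±2.35 figure above presumably corresponds to some chosen confidence level rather than a hard bound):

```python
import random

def imul(a, b):
    """Interval product: the tightest interval containing x*y for
    x in a, y in b (the standard interval-arithmetic rule)."""
    products = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(products), max(products))

assert imul((-2, 2), (-2, 2)) == (-4, 4)

# For independent uniforms on (-2, 2), the product concentrates near 0;
# the bound ±4 is attained only when both factors hit an endpoint.
random.seed(1)
samples = sorted(abs(random.uniform(-2, 2) * random.uniform(-2, 2))
                 for _ in range(100_000))
q99 = samples[int(0.99 * len(samples))]
assert 3.0 < q99 < 3.9   # 99th percentile of |x*y| sits well inside (-4, 4)
```

The interval answer is correct as a worst-case bound; the probabilistic view just says the worst case carries essentially zero probability mass.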

rendaw•5mo ago
While it does sound like GP missed a distinction, I don't see how (-2.35, 2.35) would be sensible. The extremes can happen (or else they wouldn't be part of the input intervals) and the code has to sensibly deal with that event in order to be correct.
esrauch•5mo ago
The reason is that the uniform distribution is very rare. In nearly no real-world scenario is something equally likely to be 2, 0, and -2, yet literally impossible to be -2.01. Such cases exist, but they're not the norm.

With noisy sensors there's some arbitrarily low probability of the reading being actually super wrong; if you go by true 10^-10 outlier bounds, they will be useless for any practical purpose, while the 99% confidence range is relatively small.

More often you want some other distribution, where (-2, 2) is the 90th-percentile interval rather than the absolute bounds: 0 is more likely than -2, and -3 is possible but rare. These aren't hard bounds; you can ask your model for the 99th or 99.9th percentile (or whatever tolerance you want) and get something outside (-2, 2).

kccqzy•5mo ago
Interval arithmetic isn't useful because it only tells you the extreme values, but not how likely these values are. So you have to interpret them as uniform random. Operations like multiplications change the shape of these distributions, so then uniform random isn't applicable any more. Therefore interval arithmetic basically has an undefined underlying distribution that can change easily without being tracked.
mcphage•5mo ago
> Operations like multiplications change the shape of these distributions, so then uniform random isn't applicable any more.

Doesn't addition as well? Like if you roll d6+d6, the output range is 2-12, but it's not nearly the same as if you rolled d11+1.
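The dice distinction can be checked exactly with a small enumeration (illustrative Python):

```python
from collections import Counter
from fractions import Fraction

# 2d6: convolve two uniform dice by enumerating all 36 outcomes.
two_d6 = Counter(a + b for a in range(1, 7) for b in range(1, 7))
p_2d6 = {s: Fraction(n, 36) for s, n in two_d6.items()}

# d11+1: a single uniform die shifted, covering the same range 2..12.
p_d11 = {s: Fraction(1, 11) for s in range(2, 13)}

assert set(p_2d6) == set(p_d11) == set(range(2, 13))  # same support...
assert p_2d6[7] == Fraction(6, 36)   # ...but 7 is six times as likely
assert p_2d6[2] == Fraction(1, 36)   # as 2 on 2d6,
assert p_d11[7] == p_d11[2] == Fraction(1, 11)  # and flat on d11+1
```

Same endpoints, completely different shapes, which is exactly the information a bare interval throws away.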

kccqzy•5mo ago
Yes that's true! I used multiplication because that was my original example.
mcphage•5mo ago
Okay, thanks :-). I was just trying to make sure I was understanding what I was reading.
Dylan16807•5mo ago
-2 and 2 were not the extremes to begin with.
j2kun•5mo ago
https://arblib.org/#special-functions has all manner of distributions available. I think you misunderstand what I meant.
anal_reactor•5mo ago
Because reasoning about uncertain values / random variables / intervals / fuzzy logic / whatever is difficult, while the model where things are certain is much easier to process and models reality well enough.
PaulDavisThe1st•5mo ago
Several years ago when I discovered some of the historical work on interval arithmetic, I was astounded to find that there was a notable contingent in the 60s that was urging hardware developers to make interval arithmetic be the basic design of new CPUs, and saying quite forcefully that if we simply went with "normal" integers and floating point, we'd be unable to correctly model the world.
skissane•5mo ago
I think, as another commenter pointed out, interval arithmetic's problem is that while it acknowledges the reality of uncertainty, its model of uncertainty is so simplistic that in many applications it is unusable. Making it the standard primitive could mean that apps which don't need to model uncertainty at all are forced to pay the price of doing so, while apps that need a more realistic model are hamstrung by its interactions with another overly simple one. It is one of those ideas which sounds great in theory, but there are good reasons it never succeeded in practice: the space of use cases where explicitly modelling uncertainty is desirable, yet where the simplistic model of interval arithmetic is entirely adequate, is rather small. A standard primitive that only addresses the needs of a narrow subset of use cases is not a good architecture.
woah•5mo ago
Using simple types (booleans etc.) is very simple and easy to reason about, and any shortcomings are obvious. Trying to model physical uncertainty is difficult and requires different models for different domains. Once you have committed to doing that, it would be much better to use a purpose-built model instead of a library which puts some bell curves behind a pretty API.
eru•5mo ago
I agree that different application strictly speaking need different models of uncertainty.

But I'm not so sure in your conclusion: a good enough model could be universally useful. See how everyone uses IEEE 754 floats, despite them giving effectively one very specific model of uncertainty. Most of the time this just works, and sometimes people have to explicitly work around floats' weirdnesses (whether that's because they carefully planned ahead because they know what they are doing, or whether they got a nasty surprise first). But overall they are still useful enough to be used almost universally.

orlp•5mo ago
Not sure why this is being upvoted as the article is not describing interval arithmetic. It supports all kinds of uncertainty distributions.
jjcob•5mo ago
In physics, you typically learn about error propagation quite early in your studies.

If you make some assumptions about your error (a popular one is to assume Gaussian distribution) then you can calculate the error of the result quite elegantly.

It's a nice exercise to write some custom C++ types that hold (value, error) and automatically propagate them as you perform mathematical operations on them.

Unfortunately, in the real world very few measurements have a Gaussian error distribution, and the real problem is systematic (non-random) errors, which are much harder to reason about.

So automatically handling error propagation is in most cases pointless, since you need to manually analyze the situation anyway.
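The exercise described above might look like this (a Python sketch rather than C++, assuming independent Gaussian errors; `Measured` is a made-up name):

```python
import math

class Measured:
    """A (value, error) pair with first-order Gaussian error
    propagation, assuming independent errors."""

    def __init__(self, value, error):
        self.value, self.error = value, error

    def __add__(self, other):
        # absolute errors add in quadrature for a sum
        return Measured(self.value + other.value,
                        math.hypot(self.error, other.error))

    def __mul__(self, other):
        v = self.value * other.value
        # relative errors add in quadrature for a product
        rel = math.hypot(self.error / self.value,
                         other.error / other.value)
        return Measured(v, abs(v) * rel)

    def __repr__(self):
        return f"{self.value} ± {self.error:.3g}"

x = Measured(10.0, 0.3)
y = Measured(5.0, 0.4)
s = x + y
assert abs(s.error - 0.5) < 1e-12   # hypot(0.3, 0.4) = 0.5
p = x * y
assert abs(p.error - 50.0 * math.hypot(0.03, 0.08)) < 1e-9
```

It is precisely the correlation caveat above that breaks this: `x * x` here would be treated as two independent measurements, which is one motivation for the graph-plus-sampling approach in the article.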

nicois•5mo ago
Is there a risk that this will underemphasise some values when the source of error is not independent? For example, the ROI on financial instruments may be inversely correlated to the risk of losing your job. If you associate errors with each, then combine them in a way which loses this relationship, there will be problems.
lloydatkinson•5mo ago
Is there complete C# code available for this? I looked over the original paper and it's just snippets.
kittoes•5mo ago
https://github.com/klipto/uncertainty
Pxtl•5mo ago
10 years since the last commit and no attached documentation besides a tiny readme. Pass.
miffy900•5mo ago
This is still some code, as opposed to no code. It does seem to model everything in the research paper.

Aside from the original research paper needing to be included in the repo, it doesn't need much more than what's already there. It all builds and compiles without errors: only 2 warnings for the library proper and 6 for the test project. It even comes with a unit-test project: 59 tests covering about 73% of the library code, with only 2 failures.

Just having a unit test suite means it beats out something like 50% of all repos you see on GitHub.

kittoes•5mo ago
Blame Microsoft Research, as the link came directly from them: https://www.microsoft.com/en-us/research/project/uncertainty.... I don't think they ever really took the project past the initial paper/presentation.
naasking•5mo ago
Sometimes things can just be "done", and the paper is pretty good documentation if the implementation is faithful to what is described there.
contravariant•5mo ago
I feel like if you're worried about picking the right abstraction then this is almost certainly the wrong one.
lxe•5mo ago
I really like that this leans on computing probabilities instead of forcing everything into closed-form math or classical probability exercises. I’ve always found it way more intuitive to simulate, sample, and work directly with distributions. With a computer, it feels much more natural to uh... compute: you just run the process, look at the results, and reason from there.
keeganpoppen•5mo ago
Oh man, I had forgotten about this blog from when I orbited the Swift ecosystem a bit... it's clearly as great as always! Fun post!
dcsommer•5mo ago
Seems more proper to call it a `ProbabilityDistribution` type. It's a more general and intuitive way to handle the concept.
ngruhn•5mo ago
Yeah but the shorter name wins
bee_rider•5mo ago
But the pun, uncertainty.
btown•5mo ago
Once one understands that a variable (in a programming context) can hold a specification for a variable (in a mathematical context), one opens up incredible doors that are at the foundation of modern AI.

When you see y = m * x + b, your recollections of math class may note that you can easily solve for "m" or find a regression for "m" and "b" given various data points. But from a programming perspective, if these are all literal values, all this is is a "render" function. How can you reverse an arbitrary render function?

There are various approaches, depending on how Bayesian you want to be, but they boil down to: if your language supports redefining operators based on the types of the variables, and you have your variables contain a full specification of the subgraphs of computations that lead to them... you can create systems that can simultaneously do "forward passes" by rendering the relationships, and "backward passes" where the system can automatically calculate a gradient/derivative and thus allow a training system to "nudge" the likeliest values of variables in the right direction. By sampling these outputs, in a mathematically sound way, you get the weights that form a model.

Every layer in a deep neural network is specified in this way. Because of the composability of these operations, systems like PyTorch can compile incredibly optimal instructions for any combination of layers you can think of, just by specifying the forward-pass relationships.

So Uncertain<T> is just the tip of the iceberg. I'd recommend that everyone experiment with the idea that a numeric variable might be defined by metadata about its potential values at any given time, and that you can manipulate that metadata as easily as adding `a + b` in your favorite programming language.
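A toy sketch of the forward/backward idea (illustrative Python, not PyTorch's API): a scalar that records its computation graph so a backward pass can accumulate gradients through overloaded operators.

```python
class Var:
    """A scalar that records the computation graph (operands and
    local derivatives) so gradients can flow backward through it."""

    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # chain rule: accumulate this node's gradient into its parents
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

# Forward pass: y = m*x + b at x = 3
m, x, b = Var(2.0), Var(3.0), Var(1.0)
y = m * x + b
y.backward()          # backward pass
assert y.value == 7.0
assert m.grad == 3.0 and b.grad == 1.0  # dy/dm = x, dy/db = 1
```

Real frameworks add topological ordering, tensors, and GPU kernels, but the structural trick is the same: the variable carries its own computation subgraph.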

jonahx•5mo ago
Very interesting.

Are there PLs that support this kind of thing at the language level as you are describing?

btown•5mo ago
https://colcarroll.github.io/ppl-api/ is likely a good starting point to get a taste of examples in Python; some use custom languages, but the success of Python-native frameworks in the LLM world I think has shown that embracing that makes interop and composability more possible at scale.

https://news.ycombinator.com/item?id=28941145 has some discussion here as well, though it’s a few years old.

Pyro and NumPyro seem to be popular at the moment!

astrange•5mo ago
If you're willing to be discrete about it, logic languages like Prolog and Mercury use "unification" instead of "evaluation" which means they can evaluate backwards.
danhau•5mo ago
This sounds super interesting, but as someone who knows little about ML or math in general, could you give an ELI5?
btown•5mo ago
I have a bunch of points. I want to fit a curve to them. I could write a function that takes a bunch of parameters as floats that specify the curve, and an x coordinate as a float, and have it output the most likely y value as a float.

If I have a library, though, that lets me add and multiply not just floats but entire computation subgraphs with the same exact + and * operators, though, I can have the library reverse that function automatically, and say: “optimize the parameters to minimize the difference between the curve and the data points.”

LLMs and other ML systems, to paint with a very broad stroke, solve that problem with billions of parameters in a million-dimensional space. Developing intuition for those high dimensions is hard! But the code is simple because once you’ve done the math for the forward pass, you can go straight from chalkboard to Python code, and the libraries largely assist with reversing and building a GPU-accelerated training process automatically!
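The fitting step can be sketched with a hand-derived gradient (illustrative Python; the frameworks described above derive this gradient automatically from the forward pass, and the data points here are invented):

```python
# Fit y = m*x + b to points by gradient descent on squared error.
points = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8)]
m, b = 0.0, 0.0
lr = 0.02
for _ in range(5000):
    dm = db = 0.0
    for x, y in points:
        err = (m * x + b) - y
        dm += 2 * err * x   # d/dm of (m*x + b - y)^2
        db += 2 * err       # d/db of (m*x + b - y)^2
    m -= lr * dm
    b -= lr * db

# Converges to the exact least-squares solution (m = 1.94, b = 1.09)
assert abs(m - 1.94) < 0.02
assert abs(b - 1.09) < 0.02
```

Swap the two hand-written derivative lines for an autodiff library and the same loop scales from 2 parameters to billions.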

MangoToupe•5mo ago
This comment seems to conflate variables, functions, and linear systems. I don't think these are worth conflating.
btown•5mo ago
If you store a specification for a probability distribution and give that specification a name, it can behave like a function in that it can be sampled for a scalar output. It can behave like a variable in that you can assign it to a new variable name, play with it as you would anything else in a programming language. And a linear system, perhaps overdetermined, is but one of many ways that the specification can be defined under the hood.

The fact that one can play with complicated nested probability distributions that unify these concepts, as one would play with dolls in a dollhouse, is the point!

Davidbrcz•5mo ago
Congrats, you have just reinvented the monad
webcoon•5mo ago
Awesome! This speaks to something I've been thinking about (and wishing for) for a long time. I've done probabilistic programming in a scientific context (Python) and classical software engineering for web development (TypeScript, Python, Rust), and I've always wondered why I couldn't have the real-world modelling capacity of the former with the static type assurances of the latter. Love that you (and Microsoft) are thinking along the same lines! Do you know of any Python implementations of this? There are plenty of dynamic stats programming libraries, but none offer typing solutions AFAIK.
akst•5mo ago
Something I've wanted to build is a data type representing a value that may or may not be known, with a level of certainty over a given distribution (or probability density function). You could then apply transforms that may or may not carry their own uncertainty, and end up with a refined set of probability distributions for each observation (or a new set of classifications based on whatever conditionals).

The eventual goal would be running simulations over different randomly generated outcomes based on those probability distributions.

thekoma•5mo ago
We designed a processor microarchitecture [1] at the University of Cambridge, inspired by Uncertain<T> (James Bornholt) and related work. In addition to assuming parametric distributions (e.g., Gaussian, Rayleigh), it lets you load arbitrary sets of samples into registers/memory so program values are carried and propagated as nonparametric distributions through ordinary arithmetic.

A spin-off, Signaloid, is taking this technology to market. I'm also researching using this in state estimation (e.g., particle filters).

[1]: https://dl.acm.org/doi/10.1145/3466752.3480131

naasking•5mo ago
Really interesting to see how long ideas take to go mainstream. From my recollection, Oleg and Chung-chieh Shan did this first back in 2009 as a library in OCaml [1,2].

[1] https://groups.google.com/g/fa.caml/c/CbXeoR_Rzrk?pli=1

[2] https://okmij.org/ftp/kakuritu/

captainmuon•5mo ago
Back when I was studying physics, we frequently had to do calculations with error propagation. I tried to implement something very similar in C++ and in Python, but never finished it. I also thought it would be neat if a spreadsheet program could understand uncertainties, and also units, so you could enter 1m +- 10cm and it would propagate the errors correctly. If you laid out the data with one column for the values and one for the errors, I had a couple of OpenOffice macros that would perform the calculations.

Another place where I think this would be neat would be in CAD. Imagine if you are trying to create a model of an existing workpiece or of a room, and your measurements don't exactly add up. It's really frustrating and you have to go back and measure again, and you usually end up idealizing the model and putting in rounder numbers to make it fit, but it is less true to reality. It would be cool if you could put in uncertainties for all lengths and angles, and it would run a solver to minimize the total error.