
Posit floating point numbers: thin triangles and other tricks (2019)

http://marc-b-reynolds.github.io/math/2019/02/06/Posit1.html
34•fanf2•4h ago

Comments

antiquark•4h ago
This seems to be related to the "type III unum":

https://en.wikipedia.org/wiki/Unum_(number_format)#Posit_(Ty...

andrepd•3h ago
Posit is the name of the third in a series of John Gustafson's proposals for an alternative to IEEE floats.
andrepd•3h ago
Great dive! I'm very interested in posits (and IEEE float replacements in general) and had never read this post before. Tons of insightful points.
adrian_b•3h ago
The example where computing an expression with posits has much better accuracy than when computing with IEEE FP32 is extremely misleading.

Regardless of whether you use 32-bit posits or IEEE FP32, you can represent only the same count of numbers, i.e. of points on the real number axis.

When choosing a representation format, you cannot change the number of representable points; you can only choose to distribute the points in different places.

The IEEE FP32 format distributes the points so that the relative rounding error is approximately constant over the entire range.

Posits crowd the points into the segment close to zero, obtaining a better rounding error there, at the price that the segments distant from zero have very sparse points, i.e. very high rounding errors.

Posits behave pretty much like a fixed-point format that has gradual overflow instead of a sharp cut-off. For big numbers you do not get an overflow exception that would stop the computation, but the accuracy of the results becomes very bad. For small numbers the accuracy is good, but not as good as for a fixed-point number, because some bit patterns must be reserved for representing the big numbers, in order to avoid overflow.
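The tapered spacing described above is easy to see by enumerating a tiny posit format. Below is a minimal sketch (my own illustration, not from the article) that decodes a classic 8-bit posit with es=0, i.e. no exponent bits; standard posit formats use es > 0 and wider encodings, so the exact numbers differ there, but the shape of the distribution is the same.

```python
# Decode a "classic" 8-bit posit with es=0 (no exponent bits) to a float.
# A teaching sketch, not a production decoder.
def decode_posit8(p):
    if p == 0:
        return 0.0
    if p == 0x80:
        return float("nan")  # NaR ("not a real")
    sign = -1.0 if p & 0x80 else 1.0
    if p & 0x80:
        p = (-p) & 0xFF  # negative posits are the two's complement
    bits = [(p >> i) & 1 for i in range(6, -1, -1)]  # the 7 bits after the sign
    run = 1
    while run < 7 and bits[run] == bits[0]:
        run += 1  # length of the regime run
    k = (run - 1) if bits[0] == 1 else -run  # regime value
    frac_bits = bits[run + 1:]  # skip the regime-terminating bit
    frac = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(frac_bits))
    return sign * 2.0 ** k * (1.0 + frac)

# Spacing between consecutive representable values:
vals = sorted(decode_posit8(p) for p in range(1, 0x80))  # all positive posits

def gap_after(x):
    i = vals.index(x)
    return vals[i + 1] - vals[i]
```

With this format, `gap_after(1.0)` is 1/32 (about 3% relative), while `gap_after(32.0)` is 32 (100% relative): the points are dense near 1 and very sparse near maxpos = 64, exactly the crowding-versus-sparsity trade described above.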

The example that demonstrates better accuracy for posits is manufactured by choosing values in the range where posits have better accuracy. It is trivial to manufacture an almost identical example where posits have worse accuracy, by choosing values in an interval where FP32 has better accuracy.

There are indeed problems where posits can outperform IEEE FP32, but it is quite difficult to predict which are those problems, because for a complex problem it can be very difficult to predict which will be the ranges for the intermediate results. This is the very reason why floating-point numbers are preferred over fixed-point numbers, to avoid the necessity of such analyses.

While for IEEE formats it is possible to estimate the relative errors of the results of a long computation, thanks to the guaranteed bounds on the relative error of each operation, that is pretty much impossible for posits: there the relative error is a function of the values of the operands, so you cannot estimate it without actually doing the computation.
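That per-operation IEEE guarantee can be checked numerically. A small self-contained sketch (my illustration, assuming round-to-nearest): binary32 is simulated by round-tripping through Python's struct module, and the unit roundoff is u = 2^-24. The relative error of each rounded sum stays below u no matter where the operands fall on the axis.

```python
import random
import struct

def to_f32(x):
    # Round a Python float (binary64) to the nearest IEEE binary32.
    return struct.unpack("f", struct.pack("f", x))[0]

U32 = 2.0 ** -24  # unit roundoff for binary32, round-to-nearest
random.seed(0)
worst = 0.0
for _ in range(10_000):
    a = to_f32(random.uniform(-1e6, 1e6))
    b = to_f32(random.uniform(-1e6, 1e6))
    exact = a + b  # the sum in double precision, a close proxy for the exact sum
    if exact == 0.0:
        continue
    rounded = to_f32(exact)  # binary32 addition = rounding that sum back to f32
    rel = abs((rounded - exact) / exact)
    worst = max(worst, rel)
    # The relative error never exceeds the unit roundoff,
    # regardless of the magnitude of the operands:
    assert rel <= U32
```

A posit has no such magnitude-independent bound: as the 8-bit enumeration above a few comments shows, the relative spacing (and hence the worst-case rounding error) grows as the values move away from 1.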

For scientific and technical computations, posits are pretty much useless: those computations have very wide value ranges in their data, they need error estimates, and posits can have significant advantages only for small number formats of 32 bits or less, while those computations mostly need 64-bit numbers or even bigger.

Nevertheless, for special problems that are very well characterized, i.e. where you know with certainty narrow ranges for the input data and the intermediate results, posits could deliver much more accuracy than IEEE FP32, though they could have good performance only if implemented in hardware.

wat10000•2h ago
Isn’t that pretty much the entire point of this article?
andrepd•18m ago
> The example where computing an expression with posits has much better accuracy than when computing with IEEE FP32 is extremely misleading.

Did you not rtfa or am I missing something?

dnautics•2h ago
One of the creators of posits here (I came up with the name and I think ES is my idea; I did the first full soft versions in Julia, and designed the first circuits, including a cool optimization for addition). My personal stance is that posits are not great for scientific work, precisely because of the difficulties with actually solving error propagation. Hopefully I can give a bit more measured insight into why the "parlor tricks" appear in the posits context.

John's background is in scientific compute/HPC. He previously advocated for unums (which do fully track errors), and there is a version of posits (called valids) which does track errors, encouraging the user to combine them with other techniques to cut the error bounds using invariants, but that requires an algorithmic shift. Alas, a lot of examples were lifted from the unums book and sort of square-peg/round-holed into posits. You can see an example of algorithmic shift in the demo of matrix multiplication in the Stanford talk (that demo is me; linked in OP).

As for me, I was much more interested in lower-bit representations for ML applications, where you ~don't care about error propagation. This also appears in the talk.

As it wound up, Facebook took some interest in it for AI, but they NIH'd it and redid the mantissa as logarithmic (which I think was a mistake).

And anyway, redoing your silicon turns out to be a pain in the ass (quires only make sense in the burn-the-existing-world perspective, and are not so bad for training pipelines, where IIRC the Kronecker product dominates). The addition operation takes up quite a bit more floorspace, and just quantizing to int4 with grouped scaling factors is easier with existing GPU pipelines, even for custom hardware.

Fun side fact: Positron.ai was so named on the off chance that using posits makes sense (you can see the through-line to science fiction that I was attempting to manifest when I came up with the name).

dnautics•2h ago
Turns out only the slides are linked in the OP. Here is the live recording:

https://youtu.be/aP0Y1uAA-2Y?feature=shared

andrepd•19m ago
> and designed the first circuits, including a cool optimization for addition

Curious, what trick? :)

Wishing for mainstream CPU support for anything but IEEE numbers was always a pipe dream on anything but a decades-long horizon, but I gotta be honest, I was hoping the current AI hype wave would bring some custom silicon for alternative float formats, posits included.

> the addition operation takes up quite a bit more floorspace, and just quantizing to int4 is with grouped scaling factors is easier with existing gpu pipelines

Can you elaborate on this part?

burnt-resistor•2h ago
Condensed IEEE-like formats cheat sheet I threw together and tested:

https://pastebin.com/aYwiVNcA

mserdarsanli•1h ago
A while ago I built an interactive tool to display posits (Also IEEE floats etc.): https://mserdarsanli.github.io/FloatInfo/

It is hard to understand at first, but after playing with it a bit it will make sense. As with everything, there are trade-offs compared to IEEE floats, but having more precision when numbers are close to 1 is pretty nice.
