frontpage.

Queueing Theory v2: DORA metrics, queue-of-queues, success-failure-skip notation

https://github.com/joelparkerhenderson/queueing-theory
1•jph•2m ago•0 comments

Show HN: Hibana – choreography-first protocol safety for Rust

https://hibanaworks.dev/
1•o8vm•3m ago•0 comments

Haniri: A live autonomous world where AI agents survive or collapse

https://www.haniri.com
1•donangrey•4m ago•1 comments

GPT-5.3-Codex System Card [pdf]

https://cdn.openai.com/pdf/23eca107-a9b1-4d2c-b156-7deb4fbc697c/GPT-5-3-Codex-System-Card-02.pdf
1•tosh•17m ago•0 comments

Atlas: Manage your database schema as code

https://github.com/ariga/atlas
1•quectophoton•20m ago•0 comments

Geist Pixel

https://vercel.com/blog/introducing-geist-pixel
1•helloplanets•23m ago•0 comments

Show HN: MCP to get latest dependency package and tool versions

https://github.com/MShekow/package-version-check-mcp
1•mshekow•31m ago•0 comments

The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
2•FinnLobsien•32m ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•34m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•34m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•37m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•37m ago•1 comments

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•42m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
3•throwaw12•43m ago•1 comments

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•43m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•44m ago•1 comments

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•46m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•49m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•52m ago•1 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
2•mgh2•58m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•1h ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•1h ago•1 comments

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•1h ago•0 comments

Study of 150 developers shows AI generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•1h ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•1h ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•1h ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•1h ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•1h ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•1h ago•0 comments

So what's the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•1h ago•0 comments

Detached Point Arithmetic

https://github.com/Pedantic-Research-Limited/DPA
12•HappySweeney•6mo ago

Comments

gbanfalvi•6mo ago
This feels so obvious and simple; how is this not already a standard thing everywhere? Is it because the mantissa and point position don't both fit into a single register?
HextenAndy•6mo ago
It is a standard thing. It's called floating point.
Cthulhu_•6mo ago
It already is; the author is trying to sell a solution to a problem that is known, well-understood and worked around, often with programming language types like `Money` or numerical calculation libraries. The "30 year compound interest inaccuracy" problem was solved 50+ years ago.

I'm really not sure what the angle of the author and their "organization" is, which was only created this month. The library and the idea are cool and all, but it strongly implies the author didn't actually do any research before building a solution, one that probably (but I'm not qualified to say) has some issues they overlooked. See e.g. https://cs.opensource.google/go/go/+/master:src/math/big/flo... for a modern implementation of big accurate floating point numbers.

fake edit: after reading that this was AI generated, I feel my time and attention were wasted on this.

HextenAndy•6mo ago
Surely that's just normal floating point but missing e.g. normalisation? Floating point is literally an int exponent and an int mantissa. Except real floating point adjusts the exponent to avoid integer overflow in the mantissa - which is where rounding happens.

In DPA the mantissa just overflows (silently in the C implementation) and then what?
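
(A minimal C sketch of the kind of representation being described; dp_t and dp_mul are hypothetical names, not taken from the DPA code. The point is that nothing stops the 64-bit mantissa product from being silently truncated:)

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical "detached point" value: integer mantissa m and
       integer point position p, i.e. value = m * 2^p. */
    typedef struct { int64_t m; int32_t p; } dp_t;

    /* Multiply without normalisation: the exact product needs up to
       128 bits, but only the low 64 are kept, and nothing signals it.
       (The unsigned cast avoids signed-overflow UB; the truncation stays.) */
    static dp_t dp_mul(dp_t a, dp_t b) {
        dp_t r = { (int64_t)((uint64_t)a.m * (uint64_t)b.m), a.p + b.p };
        return r;
    }

    int main(void) {
        dp_t big = { INT64_C(1) << 40, 0 };       /* 2^40 */
        dp_t r   = dp_mul(big, big);              /* exact result is 2^80 */
        printf("mantissa = %" PRId64 "\n", r.m);  /* prints 0: silent wrap */
        return 0;
    }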

withinboredom•6mo ago
He increased the size of a float from 4 bytes to 9 bytes, so of course it is more accurate, and then claims it is so much better. If he had done the same implementation in 4 bytes and it was more accurate, then ... maybe this would be "revolutionary" (to quote TFA).
bradrn•6mo ago
Er… this may be a stupid question, but how is this actually different to ordinary floating-point arithmetic? Ordinary IEEE-754 floating-point numbers already store the mantissa and exponent in separate fields, don’t they?
HextenAndy•6mo ago
Exactly.
pixelpoet•6mo ago
Oh dear, author quoting himself for his legendary contributions... embarrassing.

> Every error compounds.

https://en.wikipedia.org/wiki/Kahan_summation_algorithm
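
(For reference, a minimal C sketch of the compensated summation that link describes:)

    #include <stddef.h>

    /* Kahan compensated summation: carry the rounding error of each
       addition in a separate low-order term so it is not simply lost. */
    double kahan_sum(const double *x, size_t n) {
        double sum = 0.0, c = 0.0;        /* c holds the running compensation */
        for (size_t i = 0; i < n; i++) {
            double y = x[i] - c;          /* apply the previous correction */
            double t = sum + y;           /* low-order bits of y are lost here */
            c = (t - sum) - y;            /* recover exactly what was lost */
            sum = t;
        }
        return sum;
    }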

floppyd•6mo ago
There are two places in the readme where the author quotes themselves; my eyes might never roll back.
nephrite•6mo ago
They just reimplemented floating point with a bigger mantissa and exponent. Rounding errors will appear with sufficiently large/small numbers.
ncruces•6mo ago
Actually, the exponent is smaller. IEEE 754 64-bit binary floats have an 11 bit exponent, 1 sign bit, and 53 bit mantissa (one of the bits of the mantissa is implied, by actually knowing what they're doing, rather than… whatever this is).
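
(A quick C illustration, independent of any library in this thread, of how binary64 already keeps those fields separate:)

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* IEEE 754 binary64 already stores sign, exponent and mantissa in
       separate bit fields: 1 + 11 + 52 bits, plus one implied mantissa bit. */
    int main(void) {
        double d = 0.1;
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);
        unsigned long long sign = bits >> 63;
        long long          expo = (long long)((bits >> 52) & 0x7FF) - 1023;
        unsigned long long frac = bits & ((UINT64_C(1) << 52) - 1);
        /* for 0.1: sign=0  exponent=-4  mantissa=0x999999999999a */
        printf("sign=%llu  exponent=%lld  mantissa=0x%llx\n", sign, expo, frac);
        return 0;
    }
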
cmrx64•6mo ago
Displeased with this trend of LLM-assisted “research”. The central claims are false, the examples of floating point are false and off by a factor of 1e11, the latency numbers are complete WTF and disconnected from reality (‘cycles’ of what).

Fixed-point arithmetic with a dynamic scale is presented as the motivation for floating point in probably every computer architecture class. It's floating point.

This guy needs to open a book. I recommend Nick Higham’s _Accuracy and Stability of Numerical Algorithms_.

vgo96•6mo ago
I think you are discrediting LLMs; Gemini 2.5 Pro catches most of the flaws in the author's article. I think the author just doesn't understand floating point.
kragen•6mo ago
How do you know if Gemini caught the flaws you didn't notice?
cmrx64•6mo ago
possibly so. I’m even seeing GPT 4.1-mini ripping it apart when prompted with only the content. DeepSeek (not with thinking) is fooled.
withinboredom•6mo ago
So does ChatGPT 4.1: https://chatgpt.com/share/6888d177-4ebc-8013-b3a2-2648ebea91...
cmrx64•6mo ago
To properly plumb the LLM here, you should also freshly ask it "Tell me what is right with this:"

I prompted them with nothing except the content and they either autonomously decide it's all nonsense or take the bait and start praising it.

GaggiX•6mo ago
Reading the repo and his own citations, I wouldn't be surprised if the author suffers from Bipolar Disorder or Schizophrenia; also a quite dumb LLM, if one was used at all.
constantcrying•6mo ago
There is a fundamental theorem of numerical analysis, due to Kahan, that certain very desirable properties of modeling the real numbers on a computer are mutually incompatible.

Again and again people try to "fix" IEEE floating point without realizing that they are trying to do something akin to creating free energy. Whenever you start a project like this you have to begin by asking yourself which desirable property you are willing to give up. Not presenting that trade-off makes the whole thing look either dishonest or uninformed.

>Any real number x can be represented as

This statement is just false. I do not know why you would start out by making such a basic error. The real numbers are uncountable; you cannot represent all real numbers by a pair of integers. This is basic real analysis.

>The mathematics is elementary. The impact is revolutionary.

???

>Special thanks to everyone who said "that's just how it works" - you motivated me to prove otherwise.

Maybe those people know better than you? It is, after all, a mathematical theorem.

vgo96•6mo ago
The author says that every real number x can be represented as

x = m * 2^p

where m is an integer(mantissa) and p is an integer (point position)

Well, this is clearly wrong; take x = 1/3 for example:

1/3 = m * 2^p

which would require m = 1 / (3 * 2^p), and that is not an integer for any integer p.

If the author had read the first two pages of

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.h...

they could have avoided the embarrassment.

cmrx64•6mo ago
brilliant reference, thank you!
SetTheorist•6mo ago
Amusingly, even the numbers in the first example are counter-examples to the author's false claim: neither 0.1 nor 0.2 can be represented thusly.
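
(This is easy to check; printing the stored doubles with extra digits shows the nearest dyadic rationals that are actually used. A minimal C sketch:)

    #include <stdio.h>

    /* 0.1 and 0.2 have no finite form m * 2^p, so the doubles stored
       are the nearest representable dyadic rationals instead. */
    int main(void) {
        printf("%.20f\n", 0.1);        /* prints 0.10000000000000000555 */
        printf("%.20f\n", 0.2);        /* prints 0.20000000000000001110 */
        printf("%.20f\n", 0.1 + 0.2);  /* prints 0.30000000000000004441 */
        return 0;
    }
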
cscheid•6mo ago
I’m dismayed that people are willing to put their names on garbage like this.

If you want the serious version of the idea instead of the LLM diarrhea, just go read Jonathan Shewchuk's robust predicates work: https://people.eecs.berkeley.edu/~jrs/papers/robustr.pdf from 1997.

ncruces•6mo ago
Thanks, didn't know this one! Have some reading to do.

For a library that implements just the two-component version of this, commonly known as a double-double, with a 107-bit mantissa and an 11-bit exponent, see: https://github.com/ncruces/dbldbl
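
(Not the dbldbl API itself, just the textbook building block such libraries rest on: an error-free addition that keeps the rounding error as a second double.)

    /* Knuth's TwoSum: a + b == s + e exactly, where s is the rounded sum
       and e is the rounding error. Carrying (hi, lo) pairs like this gives
       a double-double, with roughly 107 bits of effective mantissa. */
    typedef struct { double hi, lo; } dd_t;

    static dd_t two_sum(double a, double b) {
        double s  = a + b;
        double bv = s - a;
        double e  = (a - (s - bv)) + (b - bv);
        dd_t r = { s, e };
        return r;
    }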

agnishom•6mo ago
Isn't this how floating point already works?
kragen•6mo ago
This is incorrect. There are a number of errors, as others have pointed out, but for me the most central one is not that almost all reals are uncomputable numbers, but that the product of two 64-bit integers is 128 bits, as anyone who has done arbitrary-precision rational math has noticed.

I think it's great to experiment with improving fundamental algorithms, but not to make misleading claims about your results.
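
(The width problem is easy to see directly; __int128 is a GCC/Clang extension, used below only to expose the high half that a 64-bit mantissa would drop.)

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint64_t a = UINT64_MAX;                    /* a full 64-bit mantissa */
        unsigned __int128 full = (unsigned __int128)a * a;
        uint64_t hi = (uint64_t)(full >> 64);       /* lost if truncated to 64 bits */
        uint64_t lo = (uint64_t)full;
        /* prints: high = 18446744073709551614  low = 1 */
        printf("high = %" PRIu64 "  low = %" PRIu64 "\n", hi, lo);
        return 0;
    }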