Detached Point Arithmetic

https://github.com/Pedantic-Research-Limited/DPA
12•HappySweeney•6mo ago

Comments

gbanfalvi•6mo ago
This feels so obvious and simple; how is this not already a standard thing everywhere? Is it because the mantissa and point position don't both fit into a single register?
HextenAndy•6mo ago
It is a standard thing. It's called floating point.
Cthulhu_•6mo ago
It already is; the author is trying to sell a solution to a problem that is known, well understood, and worked around, often with programming-language types like `Money` or numerical-calculation libraries. The "30-year compound interest inaccuracy" problem was solved 50+ years ago.

I'm really not sure what the angle is of the author and their “organization”, which was only created this month. The library and idea are cool and all, but it strongly implies the author didn't actually do any research before building a solution, one that probably (but I'm not qualified to say) has some issues they overlooked. See e.g. https://cs.opensource.google/go/go/+/master:src/math/big/flo... for a modern implementation of big, accurate floating-point numbers.

fake edit: reading that this was AI-generated, my time and attention were wasted on this.

HextenAndy•6mo ago
Surely that's just normal floating point but missing, e.g., normalisation? Floating point is literally an int exponent and an int mantissa, except real floating point adjusts the exponent to avoid integer overflow in the mantissa - which is where rounding happens.

In DPA the mantissa just overflows (silently in the C implementation), and then what?
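
A minimal sketch of what I mean (my own hypothetical struct, assuming a 64-bit integer mantissa plus an integer point position; not DPA's actual code): without a renormalisation step, the product of two perfectly in-range mantissas just wraps.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical DPA-style value: x = m * 2^p, never normalised. */
    typedef struct { int64_t m; int32_t p; } dpa;

    static dpa dpa_mul(dpa a, dpa b) {
        /* Real floating point would renormalise and round here; without
           that step the 128-bit product wraps to 64 bits, silently.
           (Unsigned arithmetic, to avoid signed-overflow UB.) */
        dpa r = { (int64_t)((uint64_t)a.m * (uint64_t)b.m), a.p + b.p };
        return r;
    }

    int main(void) {
        dpa x = { (int64_t)1 << 32, 0 };  /* the value 2^32 */
        dpa y = dpa_mul(x, x);            /* true mantissa is 2^64: wraps to 0 */
        printf("m = %lld, p = %d\n", (long long)y.m, y.p);  /* m = 0, p = 0 */
        return 0;
    }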

withinboredom•6mo ago
He increased the size of a float from 4 bytes to 9 bytes, so of course it is more accurate, and then claims that it is so much better. If he had done the same implementation in 4 bytes and it were more accurate, then maybe this would be "revolutionary" (to quote TFA).
bradrn•6mo ago
Er… this may be a stupid question, but how is this actually different to ordinary floating-point arithmetic? Ordinary IEEE-754 floating-point numbers already store the mantissa and exponent in separate fields, don’t they?
HextenAndy•6mo ago
Exactly.
pixelpoet•6mo ago
Oh dear, the author quoting himself for his legendary contributions... embarrassing.

> Every error compounds.

https://en.wikipedia.org/wiki/Kahan_summation_algorithm
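
The linked algorithm fits in a few lines of C; the compensation term recovers the low-order bits that a naive sum discards, so errors do not simply compound:

    #include <stdio.h>

    /* Kahan (compensated) summation, per the Wikipedia article above. */
    double kahan_sum(const double *x, int n) {
        double sum = 0.0, c = 0.0;     /* c carries the lost low-order bits */
        for (int i = 0; i < n; i++) {
            double y = x[i] - c;       /* apply the running correction */
            double t = sum + y;        /* low bits of y are lost here... */
            c = (t - sum) - y;         /* ...and recovered here, algebraically */
            sum = t;
        }
        return sum;
    }

    int main(void) {
        double xs[10000];
        for (int i = 0; i < 10000; i++) xs[i] = 0.1;
        double naive = 0.0;
        for (int i = 0; i < 10000; i++) naive += xs[i];
        printf("naive: %.15f\n", naive);              /* drifts away from 1000.0 */
        printf("kahan: %.15f\n", kahan_sum(xs, 10000));
        return 0;
    }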

floppyd•6mo ago
There are two places in the readme where the author quotes themselves; my eyes might never roll back.
nephrite•6mo ago
They just reimplemented floating point with a bigger mantissa and exponent. Rounding errors will appear with sufficiently large/small numbers.
ncruces•6mo ago
Actually, the exponent is smaller. IEEE 754 64-bit binary floats have an 11-bit exponent, 1 sign bit, and a 53-bit mantissa (one of the bits of the mantissa is implied, by actually knowing what they're doing, rather than… whatever this is).
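
That layout is easy to check by pulling a double apart (standalone C, standard library only):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        double d = 0.1;
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);             /* type-pun without UB */

        uint64_t sign = bits >> 63;                 /* 1 sign bit */
        uint64_t expo = (bits >> 52) & 0x7FF;       /* 11-bit biased exponent */
        uint64_t frac = bits & ((1ULL << 52) - 1);  /* 52 stored fraction bits */

        /* Normal numbers prepend an implicit 1, giving 53 mantissa bits. */
        printf("sign=%llu exponent=%lld fraction=0x%013llx\n",
               (unsigned long long)sign,
               (long long)expo - 1023,
               (unsigned long long)frac);
        return 0;
    }
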
cmrx64•6mo ago
I'm displeased with this trend of LLM-assisted “research”. The central claims are false, the floating-point examples are false and off by a factor of 1e11, and the latency numbers are complete WTF, disconnected from reality (‘cycles’ of what?).

Fixed-point arithmetic with a dynamic scale is presented along the way to motivate floating point in probably every computer architecture class. It's a floating point.

This guy needs to open a book. I recommend Nick Higham's _Accuracy and Stability of Numerical Algorithms_.

vgo96•6mo ago
I think you're unfairly discrediting LLMs; Gemini 2.5 Pro catches most of the flaws in the author's article. I think the author just doesn't understand floating point.
kragen•6mo ago
How do you know if Gemini caught the flaws you didn't notice?
cmrx64•6mo ago
Possibly so. I'm even seeing GPT-4.1-mini ripping it apart when prompted with only the content. DeepSeek (without thinking) is fooled.
withinboredom•6mo ago
So does ChatGPT 4.1: https://chatgpt.com/share/6888d177-4ebc-8013-b3a2-2648ebea91...
cmrx64•6mo ago
To properly plumb the LLM here you should also freshly ask it “Tell me what is right with this:”

I prompted them with nothing except the content, and they autonomously decide it's either all nonsense or they take the bait and start praising it.

GaggiX•6mo ago
Reading the repo and his own citations, I wouldn't be surprised if the author suffers from bipolar disorder or schizophrenia; also, quite a dumb LLM, if one was used.
constantcrying•6mo ago
There is a fundamental theorem of numerical analysis, due to Kahan, that certain very desirable properties of modeling the real numbers on a computer are incompatible.

Again and again people try to "fix" IEEE floating point without realizing that they are attempting something akin to creating free energy. Whenever you start a project like this, you have to begin by asking yourself which desirable property you are willing to give up. Not presenting that trade-off makes the whole thing look either dishonest or uninformed.

>Any real number x can be represented as

This statement is just false. I do not know why you would start out by making such a basic error. The real numbers are uncountable; you cannot represent all real numbers by a pair of integers. This is basic real analysis.

>The mathematics is elementary. The impact is revolutionary.

???

>Special thanks to everyone who said "that's just how it works" - you motivated me to prove otherwise.

Maybe those people know better than you? It is, after all, a mathematical theorem.

vgo96•6mo ago
The author says that every real number x can be represented as

x = m * 2^p

where m is an integer (the mantissa) and p is an integer (the point position).

Well, this is clearly wrong; take x = 1/3 for example:

1/3 = m * 2^p

implies m = 1 / (3 * 2^p), which is not an integer for any integer p, since no power of two is divisible by 3.
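
For what it's worth, frexp will hand you the (m, p) pair that 1/3 actually rounds to, which is a nearby dyadic rational rather than 1/3 itself:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        int p;
        double f = frexp(1.0 / 3.0, &p);  /* 1/3 == f * 2^p, 0.5 <= f < 1 */
        double m = ldexp(f, 53);          /* scale the 53-bit significand up */
        printf("1/3 rounds to %.0f * 2^%d\n", m, p - 53);
        printf("which is     %.20f\n", 1.0 / 3.0);
        return 0;
    }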

If the author had read the first two pages of

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.h...

they could have avoided the embarrassment.

cmrx64•6mo ago
brilliant reference, thank you!
SetTheorist•6mo ago
Amusingly, even the numbers in the first example are counter-examples to the author's false claim: neither 0.1 nor 0.2 can be represented thusly.
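
Printing the stored values at full precision shows what they round to instead:

    #include <stdio.h>

    int main(void) {
        printf("0.1       -> %.20f\n", 0.1);        /* 0.10000000000000000555... */
        printf("0.2       -> %.20f\n", 0.2);        /* 0.20000000000000001110... */
        printf("0.1 + 0.2 -> %.20f\n", 0.1 + 0.2);  /* 0.30000000000000004441... */
        return 0;
    }
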
cscheid•6mo ago
I’m dismayed that people are willing to put their names on garbage like this.

If you want the serious version of the idea instead of the LLM diarrhea, just go read Jonathan Shewchuk's robust predicates work: https://people.eecs.berkeley.edu/~jrs/papers/robustr.pdf from 1997.

ncruces•6mo ago
Thanks, didn't know this one! Have some reading to do.

For a library that implements just the two-component version of this, commonly known as a double-double, with a 107-bit mantissa and an 11-bit exponent, see: https://github.com/ncruces/dbldbl
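
The error-free building block underneath both Shewchuk's predicates and double-double arithmetic is tiny. A sketch of Knuth's TwoSum (the textbook transformation, not the linked library's API):

    #include <stdio.h>

    typedef struct { double hi, lo; } dd;  /* hi + lo is the exact sum */

    /* TwoSum: hi = fl(a + b), lo = (a + b) - hi, computed exactly. */
    static dd two_sum(double a, double b) {
        double s   = a + b;
        double bb  = s - a;
        double err = (a - (s - bb)) + (b - bb);
        dd r = { s, err };
        return r;
    }

    int main(void) {
        dd r = two_sum(1e16, 0.5);  /* 0.5 is below half an ulp of 1e16... */
        printf("hi = %.1f, lo = %.1f\n", r.hi, r.lo);  /* ...but survives in lo */
        return 0;
    }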

agnishom•6mo ago
Isn't this how floating point already works?
kragen•6mo ago
This is incorrect. There are a number of errors, as others have pointed out, but for me the most central one is not that almost all reals are uncomputable numbers, but that the product of two 64-bit integers is 128 bits, as anyone who has done arbitrary-precision rational math has noticed.
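
Concretely (this uses the __int128 extension available in GCC and Clang, so it's not strictly portable C):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint64_t a = UINT64_MAX;                  /* 2^64 - 1 */
        unsigned __int128 exact = (unsigned __int128)a * a;
        uint64_t wrapped = a * a;                 /* keeps only the low 64 bits */

        printf("wrapped product:     %llu\n", (unsigned long long)wrapped);
        printf("discarded high bits: %llu\n", (unsigned long long)(exact >> 64));
        return 0;
    }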

I think it's great to experiment with improving fundamental algorithms, but not to make misleading claims about your results.