frontpage.

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
1•gozzoo•35s ago•0 comments

A Horrible Conclusion

https://addisoncrump.info/research/a-horrible-conclusion/
1•todsacerdoti•45s ago•0 comments

I spent $10k to automate my research at OpenAI with Codex

https://twitter.com/KarelDoostrlnck/status/2019477361557926281
1•tosh•1m ago•0 comments

From Zero to Hero: A Spring Boot Deep Dive

https://jcob-sikorski.github.io/me/
1•jjcob_sikorski•2m ago•0 comments

Show HN: Solving NP-Complete Structures via Information Noise Subtraction (P=NP)

https://zenodo.org/records/18395618
1•alemonti06•7m ago•1 comment

Cook New Emojis

https://emoji.supply/kitchen/
1•vasanthv•9m ago•0 comments

Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•12m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•13m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
1•michalpleban•14m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•15m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
1•mitchbob•15m ago•1 comment

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
1•alainrk•16m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•16m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
1•edent•19m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•23m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•23m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•28m ago•1 comment

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
2•onurkanbkrc•29m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•30m ago•1 comment

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•33m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•35m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•36m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•36m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
1•mnming•36m ago•0 comments

Rotten Tomatoes Desperately Claims 'Impossible' Rating for 'Melania' Is Real

https://www.thedailybeast.com/obsessed/rotten-tomatoes-desperately-claims-impossible-rating-for-m...
3•juujian•38m ago•2 comments

The protein denitrosylase SCoR2 regulates lipogenesis and fat storage [pdf]

https://www.science.org/doi/10.1126/scisignal.adv0660
1•thunderbong•39m ago•0 comments

Los Alamos Primer

https://blog.szczepan.org/blog/los-alamos-primer/
1•alkyon•42m ago•0 comments

NewASM Virtual Machine

https://github.com/bracesoftware/newasm
2•DEntisT_•44m ago•0 comments

Terminal-Bench 2.0 Leaderboard

https://www.tbench.ai/leaderboard/terminal-bench/2.0
2•tosh•44m ago•0 comments

I vibe coded a BBS bank with a real working ledger

https://mini-ledger.exe.xyz/
1•simonvc•45m ago•1 comment

Detached Point Arithmetic

https://github.com/Pedantic-Research-Limited/DPA
12•HappySweeney•6mo ago

Comments

gbanfalvi•6mo ago
This feels so obvious and simple; how is this not already a standard thing everywhere? Is it because the mantissa and point position don't both fit into a single register?
HextenAndy•6mo ago
It is a standard thing. It's called floating point.
Cthulhu_•6mo ago
It already is; the author is trying to sell a solution to a problem that is known, well understood, and worked around, often with programming language types like `Money` or numerical calculation libraries. The "30-year compound interest inaccuracy" problem was solved 50+ years ago.

I'm really not sure what the angle of the author and their "organization" is, which was only created this month. The library and idea is cool and all, but it strongly implies the author didn't actually do any research before building a solution, one that probably (but I'm not qualified to say) has some issues they overlooked. See e.g. https://cs.opensource.google/go/go/+/master:src/math/big/flo... for a modern implementation of big accurate floating point numbers.

Fake edit: reading that this was AI-generated, my time and attention were wasted on this.
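
A minimal sketch of the workaround described above, in C. This is illustrative only (not code from the DPA repo, and the variable names are made up): money held as integer minor units, so addition and comparison are exact. Interest and division still need an explicit rounding policy, which is the part `Money` types actually standardize.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        int64_t price = 1999;          /* $19.99 stored as cents */
        int64_t tax   = 160;           /* $1.60 stored as cents  */
        int64_t total = price + tax;   /* exact integer addition */
        printf("$%lld.%02lld\n",
               (long long)(total / 100), (long long)(total % 100));
        return 0;
    }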

HextenAndy•6mo ago
Surely that's just normal floating point but missing e.g. normalisation? Floating point is literally an int exponent and an int mantissa. Except real floating point adjusts the exponent to avoid integer overflow in the mantissa - which is where rounding happens.

In DPA the mantissa just overflows (silently in the C implementation) and then what?
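
For reference, a sketch of the renormalization step being described, assuming GCC/Clang's unsigned __int128 and ignoring rounding modes: two 53-bit mantissas multiply into a product of up to 106 bits, which real floating point shifts back down (rounding the shifted-out bits) while adjusting the exponent.

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint64_t m1 = (1ULL << 52) | 1;   /* two 53-bit mantissas */
        uint64_t m2 = (1ULL << 52) | 1;
        unsigned __int128 prod = (unsigned __int128)m1 * m2;  /* ~106 bits */

        int shift = 0;
        while (prod >> 53) {              /* renormalize to 53 bits */
            prod >>= 1;                   /* truncating; IEEE would round */
            shift++;
        }
        printf("mantissa = %llu, exponent adjusted by +%d\n",
               (unsigned long long)prod, shift);
        return 0;
    }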

withinboredom•6mo ago
He increased the size of a float from 4 bytes to 9 bytes, so of course it is more accurate, and then claims that it is so much better. If he had done the same implementation in 4 bytes and it was more accurate, then maybe this would be "revolutionary" (to quote TFA).
bradrn•6mo ago
Er… this may be a stupid question, but how is this actually different to ordinary floating-point arithmetic? Ordinary IEEE-754 floating-point numbers already store the mantissa and exponent in separate fields, don’t they?
HextenAndy•6mo ago
Exactly.
pixelpoet•6mo ago
Oh dear, author quoting himself for his legendary contributions... embarrassing.

> Every error compounds.

https://en.wikipedia.org/wiki/Kahan_summation_algorithm
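
The linked algorithm, as a short C sketch (the textbook version, nothing from TFA): the compensation term recaptures the low-order bits that plain summation throws away.

    #include <stdio.h>

    double kahan_sum(const double *x, int n) {
        double sum = 0.0, c = 0.0;
        for (int i = 0; i < n; i++) {
            double y = x[i] - c;   /* corrected next term */
            double t = sum + y;    /* low bits of y are lost here... */
            c = (t - sum) - y;     /* ...and recovered here */
            sum = t;
        }
        return sum;
    }

    int main(void) {
        double big = 9007199254740992.0;            /* 2^53 */
        double xs[3] = {big, 1.0, -big};
        printf("naive: %g\n", (big + 1.0) - big);   /* 0: the 1.0 is lost */
        printf("kahan: %g\n", kahan_sum(xs, 3));    /* 1 */
        return 0;
    }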

floppyd•6mo ago
There are two places in the readme where the author quotes themselves; my eyes might never roll back.
nephrite•6mo ago
They just reimplemented floating point with a bigger mantissa and exponent. Rounding errors will appear with sufficiently large/small numbers.
ncruces•6mo ago
Actually, the exponent is smaller. IEEE 754 64-bit binary floats have an 11-bit exponent, 1 sign bit, and a 53-bit mantissa (one of the mantissa bits is implied, by actually knowing what they're doing, rather than… whatever this is).
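
A quick way to see those fields for yourself, assuming the usual IEEE-754 binary64 layout (for normal numbers, value = (-1)^sign * 1.mantissa * 2^(exponent - 1023)):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        double d = 0.1;
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);

        uint64_t sign     = bits >> 63;
        uint64_t exponent = (bits >> 52) & 0x7FF;       /* 11 bits */
        uint64_t mantissa = bits & 0xFFFFFFFFFFFFFULL;  /* 52 stored bits */

        printf("sign=%llu exponent=%llu (unbiased %lld) mantissa=0x%llx\n",
               (unsigned long long)sign, (unsigned long long)exponent,
               (long long)exponent - 1023, (unsigned long long)mantissa);
        return 0;
    }
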
cmrx64•6mo ago
Displeased with this trend of LLM-assisted “research”. The central claims are false, the floating-point examples are false and off by a factor of 1e11, and the latency numbers are complete WTF, disconnected from reality (‘cycles’ of what?).

Fixed-point arithmetic with a dynamic scale is presented as the motivation for floating point in probably every computer architecture class. It's just floating point.

This guy needs to open a book. I recommend Nick Higham’s _Accuracy and Stability of Numerical Algorithms_.

vgo96•6mo ago
I think you are unfairly discrediting LLMs; Gemini 2.5 Pro catches most of the flaws in the author's article. I think the author just doesn't understand floating point.
kragen•6mo ago
How do you know if Gemini caught the flaws you didn't notice?
cmrx64•6mo ago
Possibly so. I'm even seeing GPT 4.1-mini ripping it apart when prompted with only the content. DeepSeek (without thinking) is fooled.
withinboredom•6mo ago
So does ChatGPT 4.1: https://chatgpt.com/share/6888d177-4ebc-8013-b3a2-2648ebea91...
cmrx64•6mo ago
To properly plumb the LLM here, you should also freshly ask it “Tell me what is right with this:”

I prompted them with nothing except the content; they either autonomously decided it was all nonsense, or they took the bait and started praising it.

GaggiX•6mo ago
Reading the repo and his own citations, I wouldn't be surprised if the author suffers from bipolar disorder or schizophrenia; also, quite a dumb LLM if one was used.
constantcrying•6mo ago
There is a fundamental theorem of numerical analysis, due to Kahan, that certain, very desirable properties of modeling the real numbers on a computer are incompatible.

Again and again people try to "fix" IEEE floating point without realizing that they are trying to do something akin to creating free energy. Whenever you start a project like this, you have to begin by asking yourself which desirable property you are willing to let go of. Not presenting that trade-off makes the whole thing look either dishonest or uninformed.

>Any real number x can be represented as

This statement is just false, and I do not know why you would start out by making such a basic error. The real numbers are uncountable; you cannot represent all real numbers by a pair of integers. This is basic real analysis.

>The mathematics is elementary. The impact is revolutionary.

???

>Special thanks to everyone who said "that's just how it works" - you motivated me to prove otherwise.

Maybe those people know better than you? It is, after all, a mathematical theorem.

vgo96•6mo ago
The author says that every real number x can be represented as

x = m * 2^p

where m is an integer (the mantissa) and p is an integer (the point position).

This is clearly wrong; take x = 1/3 for example:

1/3 = m * 2^p

implies m = 1 / (3 * 2^p) = 2^(-p) / 3, which is not an integer for any integer p, since no power of two is divisible by 3.

If the author had read the first two pages of

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.h...

they could have avoided the embarrassment.
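
The same argument can be checked numerically. A sketch using frexp to pull out the integer mantissa of the double nearest to 1/3 (illustrative, not code from the article): 3*m comes out one short of a power of two, so m * 2^p is not exactly 1/3.

    #include <stdio.h>
    #include <math.h>
    #include <stdint.h>

    int main(void) {
        int e;
        double frac = frexp(1.0 / 3.0, &e);    /* frac in [0.5, 1) */
        int64_t m = (int64_t)ldexp(frac, 53);  /* 53-bit integer mantissa */
        int p = e - 53;                        /* here p = -54 */
        printf("1/3 ~= %lld * 2^%d\n", (long long)m, p);
        printf("3*m = %lld, 2^%d = %lld\n",
               (long long)(3 * m), -p, (long long)((int64_t)1 << -p));
        return 0;
    }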

cmrx64•6mo ago
brilliant reference, thank you!
SetTheorist•6mo ago
Amusingly, even the numbers in the author's first example are counterexamples to the false claim: neither 0.1 nor 0.2 can be represented in this form.
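
Easy to confirm by printing a few extra digits; both literals land on nearby doubles:

    #include <stdio.h>

    int main(void) {
        printf("%.20f\n", 0.1);        /* 0.10000000000000000555 */
        printf("%.20f\n", 0.2);        /* 0.20000000000000001110 */
        printf("%.20f\n", 0.1 + 0.2);  /* 0.30000000000000004441 */
        return 0;
    }
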
cscheid•6mo ago
I’m dismayed that people are willing to put their names on garbage like this.

If you want the serious version of the idea instead of the LLM diarrhea, just go read Jonathan Shewchuk's robust predicates work: https://people.eecs.berkeley.edu/~jrs/papers/robustr.pdf from 1997.

ncruces•6mo ago
Thanks, didn't know this one! Have some reading to do.

For a library that implements just the two-component version of this, commonly known as a double-double, with a 107-bit mantissa and an 11-bit exponent, see: https://github.com/ncruces/dbldbl
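
For context, the error-free transformation that double-double arithmetic and Shewchuk's predicates build on, as a textbook C sketch (this is Knuth's TwoSum, not the dbldbl API): s is the rounded sum and e the exact rounding error, so a + b == s + e exactly.

    #include <stdio.h>

    static void two_sum(double a, double b, double *s, double *e) {
        *s = a + b;
        double bv = *s - a;               /* part of b that made it into s */
        *e = (a - (*s - bv)) + (b - bv);  /* exact leftover */
    }

    int main(void) {
        double s, e;
        two_sum(1e16, 1.0, &s, &e);
        printf("s = %.17g, e = %.17g\n", s, e);  /* s + e == 1e16 + 1 exactly */
        return 0;
    }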

agnishom•6mo ago
Isn't this how floating point already works?
kragen•6mo ago
This is incorrect. There are a number of errors, as others have pointed out, but for me the most central one is not that almost all reals are uncomputable numbers, but that the product of two 64-bit integers is 128 bits, as anyone who has done arbitrary-precision rational math has noticed.

I think it's great to experiment with improving fundamental algorithms, but not to make misleading claims about your results.