frontpage.

OpenCiv3: Open-source, cross-platform reimagining of Civilization III

https://openciv3.org/
553•klaussilveira•10h ago•157 comments

The Waymo World Model

https://waymo.com/blog/2026/02/the-waymo-world-model-a-new-frontier-for-autonomous-driving-simula...
876•xnx•15h ago•532 comments

How we made geo joins 400× faster with H3 indexes

https://floedb.ai/blog/how-we-made-geo-joins-400-faster-with-h3-indexes
79•matheusalmeida•1d ago•18 comments

What Is Ruliology?

https://writings.stephenwolfram.com/2026/01/what-is-ruliology/
8•helloplanets•4d ago•3 comments

Unseen Footage of Atari Battlezone Arcade Cabinet Production

https://arcadeblogger.com/2026/02/02/unseen-footage-of-atari-battlezone-cabinet-production/
13•videotopia•3d ago•0 comments

Show HN: Look Ma, No Linux: Shell, App Installer, Vi, Cc on ESP32-S3 / BreezyBox

https://github.com/valdanylchuk/breezydemo
191•isitcontent•10h ago•24 comments

Monty: A minimal, secure Python interpreter written in Rust for use by AI

https://github.com/pydantic/monty
190•dmpetrov•10h ago•84 comments

Show HN: I spent 4 years building a UI design tool with only the features I use

https://vecti.com
303•vecti•12h ago•133 comments

Microsoft open-sources LiteBox, a security-focused library OS

https://github.com/microsoft/litebox
347•aktau•16h ago•169 comments

Sheldon Brown's Bicycle Technical Info

https://www.sheldonbrown.com/
347•ostacke•16h ago•90 comments

Dark Alley Mathematics

https://blog.szczepan.org/blog/three-points/
75•quibono•4d ago•16 comments

Hackers (1995) Animated Experience

https://hackers-1995.vercel.app/
444•todsacerdoti•18h ago•226 comments

Show HN: If you lose your memory, how to regain access to your computer?

https://eljojo.github.io/rememory/
242•eljojo•13h ago•148 comments

PC Floppy Copy Protection: Vault Prolok

https://martypc.blogspot.com/2024/09/pc-floppy-copy-protection-vault-prolok.html
46•kmm•4d ago•3 comments

Delimited Continuations vs. Lwt for Threads

https://mirageos.org/blog/delimcc-vs-lwt
17•romes•4d ago•2 comments

An Update on Heroku

https://www.heroku.com/blog/an-update-on-heroku/
379•lstoll•16h ago•258 comments

How to effectively write quality code with AI

https://heidenstedt.org/posts/2026/how-to-effectively-write-quality-code-with-ai/
225•i5heu•13h ago•171 comments

Why I Joined OpenAI

https://www.brendangregg.com/blog/2026-02-07/why-i-joined-openai.html
103•SerCe•6h ago•84 comments

Learning from context is harder than we thought

https://hy.tencent.com/research/100025?langVersion=en
162•limoce•3d ago•85 comments

I spent 5 years in DevOps – Solutions engineering gave me what I was missing

https://infisical.com/blog/devops-to-solutions-engineering
131•vmatsiiako•15h ago•56 comments

Introducing the Developer Knowledge API and MCP Server

https://developers.googleblog.com/introducing-the-developer-knowledge-api-and-mcp-server/
41•gfortaine•8h ago•11 comments

Show HN: R3forth, a ColorForth-inspired language with a tiny VM

https://github.com/phreda4/r3
63•phreda4•9h ago•11 comments

Female Asian Elephant Calf Born at the Smithsonian National Zoo

https://www.si.edu/newsdesk/releases/female-asian-elephant-calf-born-smithsonians-national-zoo-an...
20•gmays•5h ago•3 comments

Show HN: ARM64 Android Dev Kit

https://github.com/denuoweb/ARM64-ADK
14•denuoweb•1d ago•2 comments

Understanding Neural Networks, Visually

https://visualrambling.space/neural-network/
262•surprisetalk•3d ago•35 comments

I now assume that all ads on Apple News are scams

https://kirkville.com/i-now-assume-that-all-ads-on-apple-news-are-scams/
1035•cdrnsf•19h ago•428 comments

Zlob.h: 100% POSIX and glibc compatible globbing lib that is faster and better

https://github.com/dmtrKovalenko/zlob
6•neogoose•2h ago•3 comments

FORTH? Really!?

https://rescrv.net/w/2026/02/06/associative
56•rescrv•18h ago•19 comments

Show HN: Smooth CLI – Token-efficient browser for AI agents

https://docs.smooth.sh/cli/overview
85•antves•1d ago•63 comments

WebView performance significantly slower than PWA

https://issues.chromium.org/issues/40817676
20•denysonique•6h ago•3 comments

TaxCalcBench: Evaluating Frontier Models on the Tax Calculation Task

https://arxiv.org/abs/2507.16126
70•handfuloflight•3mo ago

Comments

ofrzeta•3mo ago
"Calculating US personal income taxes is a task that requires building an understanding of vast amounts of English text and using that knowledge to carefully compute results. ... Our experiment shows that state-of-the-art models succeed in calculating less than a third of federal income tax returns even on this simplified sample set."

Unsurprisingly. Sometimes I feel like I am in a madhouse. Or in an alchemist's laboratory.

anticensor•3mo ago
Whereas almost every other country tries to make it easier to file taxes, even when the underlying tax schedule is complex.
Rudybega•3mo ago
I wonder if you could dramatically improve these results with some relatively simple scaffolding and tool access.

If a ton of these mistakes are genuinely simple calculation errors, it seems like giving the models access to a calculator tool would help a fair bit.
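(A minimal sketch, not from the paper, of the kind of calculator tool a test harness could expose to the model; the tool name and schema below are purely illustrative.)

```python
# The model emits an arithmetic expression; the harness evaluates it
# deterministically and returns the result, so rounding and carry errors
# no longer depend on token-by-token math inside the LLM.
import ast
import operator

_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.USub: operator.neg,
}

def calculator(expression: str) -> float:
    """Safely evaluate a plain arithmetic expression like '14600 + 29200'."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")
    return _eval(ast.parse(expression, mode="eval"))

# The tool description the model would see (names here are made up).
CALCULATOR_TOOL = {
    "name": "calculator",
    "description": "Evaluate an arithmetic expression and return the exact result.",
    "parameters": {
        "type": "object",
        "properties": {"expression": {"type": "string"}},
        "required": ["expression"],
    },
}

print(calculator("14600 + 29200"))  # 43800
```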

Lionga•3mo ago
The problem is that they do not understand what or how to calculate, not the actual act of adding or multiplying. I tried asking ChatGPT to calculate taxes for three countries, two of which I have already been filing taxes in. For the two I know, ChatGPT gave wildly wrong numbers (not even the right ballpark), so I knew I could not trust the numbers for the third, which was the one I was mostly interested in.
sails•3mo ago
I feel like we are already there. I would imagine if you set Claude Code or Codex this task, running in the CLI, you would see a huge improvement, and that is before you start creating task specific guardrails.

I’m surprised they haven’t tried this, I’m running my own in parallel against my accountant in this way.

michaelrbock•3mo ago
We agree, that's the thesis behind our tax development coding agent: https://www.columntax.com/blog/introducing-iris-our-ai-tax-d...
hodgehog11•3mo ago
Am I missing something, or did they only assess Google and Anthropic models? If so, all I can ascertain from this is that the latest Gemini models outperformed Claude on this particular task, which should surprise no one. What about GPT-5? Open-weight models?
topaz0•3mo ago
Somebody posted the up-to-date leaderboard upthread: https://news.ycombinator.com/item?id=45603113
stared•3mo ago
A bare model may lack a lot.

Yet a week ago I used Claude Code for my personal finances (not taxes) - I downloaded over a year’s worth of my bank account data. Since I pay for most things by card, if I buy lunch, it’s there.

With a single prompt (and about 10 minutes), it produced an analysis. It solved all the technical issues by itself (e.g., realizing it wasn’t CSV but TSV) and ran quite a few different explorations with Pandas. It was able to write an overview, find items that were likely misclassified, etc.

Everything I checked by hand was correct.

So, instead of pursuing a project to write an AI tool for personal finance, I ended up concluding: “just use Claude Code.” As a side note, I used 14 months of data by mistake - I wanted to analyze 2 months, since I didn’t believe it would handle a larger set, but I misclicked the year. The file was 350 KB.
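(For flavor, a rough sketch of the kind of one-shot analysis described above; the file name, column names, and categories are assumptions about a hypothetical bank export, not the commenter's actual data.)

```python
# Load a tab-separated bank export, bucket transactions by keyword,
# and summarise spending per month per category.
import pandas as pd

df = pd.read_csv("transactions.tsv", sep="\t")          # the file was TSV, not CSV
df["date"] = pd.to_datetime(df["date"])
df["month"] = df["date"].dt.to_period("M")

# Crude keyword-based categorisation - exactly the kind of output worth spot-checking by hand.
CATEGORIES = {"grocery": "groceries", "restaurant": "eating out", "pharmacy": "health"}

def categorize(description: str) -> str:
    text = description.lower()
    for keyword, category in CATEGORIES.items():
        if keyword in text:
            return category
    return "other"

df["category"] = df["description"].map(categorize)
monthly = df.pivot_table(index="month", columns="category",
                         values="amount", aggfunc="sum").round(2)
print(monthly)
```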

jasonjmcghee•3mo ago
I hear you, but I'd also rather someone else assume the liability if possible. (Assuming there's a company backing the model)

So until there's umbrella AI insurance...

stared•3mo ago
Exploratory data analysis is one thing - here the risk is low. If something does not work, it doesn't. Small omissions are not important.

As of now, I would not use automated AI to make any financial decisions with direct consequences, unless the system is tested and benchmarked against accountants.

cjbarber•3mo ago
Leaderboard: https://github.com/column-tax/tax-calc-bench
throwaway13337•3mo ago
Useful.

I wonder what an average accountant would score.

I know LLMs have helped me identify many mistakes accountants have made on my behalf - some of which could have cost me a lot of money if they had not been caught.

topaz0•3mo ago
Given that they're restricting to very simple situations, I'd expect accountants to score 100%.
jgalt212•3mo ago
I'm surprised that no LLM has yet found any unresolved cycles in the US tax code.
anticensor•3mo ago
Oh you mean infinite/zero tax glitches.
jgalt212•3mo ago
Yes
i_dont_know_•3mo ago
I'm actually quite surprised.

From another article today, I discovered the IRS has a GitHub repo with (what seem to be) XML versions of tax questions... surely some combination of an LLM and structured data querying could solve this? https://github.com/IRS-Public/direct-file/tree/main
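(A hand-wavy sketch of what "structured data querying" over such XML could look like; the <fact path="..."><Description> layout and file name below are made up and will not match the actual direct-file schema.)

```python
# Parse a hypothetical fact-graph XML file and map each fact's path to its
# human-readable description; an LLM would only be asked to pick the relevant
# fact paths and fill in values, while arithmetic and flow stay in ordinary code.
import xml.etree.ElementTree as ET

def load_facts(xml_path: str) -> dict[str, str]:
    """Map each fact's 'path' attribute to its description text."""
    root = ET.parse(xml_path).getroot()
    return {
        fact.get("path"): (fact.findtext("Description") or "").strip()
        for fact in root.iter("fact")
        if fact.get("path")
    }

facts = load_facts("facts.xml")
for path, description in list(facts.items())[:5]:
    print(path, "->", description)
```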

daft_pink•3mo ago
I think AI has problems with law-related tasks like taxes because there are so many terms of art. Taxes are essentially just law, and because laws, regulators, and courts define words in very specific, narrow ways - sometimes differently from one code section to another - AI has a lot of trouble applying those narrow definitions.

Honestly, I think humans have trouble with this as well.

michaelrbock•3mo ago
Hi, author of this paper + repo here. This dataset is particularly hard to come by, so we’re really proud to be open-sourcing it.

Let me know if you have any questions, happy to discuss!

antiloper•3mo ago
> For example, in the prompt for this experiment, the model is bootstrapped with the correct Form 1040 lines and short instructions as part of its context.

Given that only short instructions are in context, I would not have expected even a frontier model to score well on this benchmark. For better results, I'd think that giving the model access to the entire tax code is required (which likely requires RAG due to its sheer size).
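(A toy sketch of the retrieval step being suggested - TF-IDF rather than embeddings, and placeholder chunks rather than real statute - just to make the RAG idea concrete; it is not the paper's setup.)

```python
# Chunk the relevant IRC sections / form instructions, index them, and stuff
# only the top matches into the model's context instead of relying on whatever
# it memorised during pretraining.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# In practice these would be thousands of chunks of statute and form instructions.
chunks = [
    "Standard deduction amounts for Tax Year 2024 by filing status...",
    "Schedule B: interest and ordinary dividends reporting thresholds...",
    "Child tax credit phase-out rules based on modified AGI...",
]

vectorizer = TfidfVectorizer()
chunk_vectors = vectorizer.fit_transform(chunks)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question."""
    scores = cosine_similarity(vectorizer.transform([question]), chunk_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [chunks[i] for i in top]

# The retrieved context would be prepended to the Form 1040 prompt.
context = "\n\n".join(retrieve("How much is the standard deduction for a single filer?"))
print(context)
```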

michaelrbock•3mo ago
We tested models with knowledge cutoffs in 2025, so we expect them to have knowledge of Tax Year 2024 forms in their weights. We also tested models with the ability to do web search to fetch any other forms they think necessary: https://github.com/column-tax/tax-calc-bench

That all being said, we agree, which is what we've built with our internal tax coding agent, Iris: https://www.columntax.com/blog/introducing-iris-our-ai-tax-d... (it has the ability to get just the right tax form context on a per-line basis to turn the tax law into code).

anticensor•3mo ago
This topic is so American. In any other country, you wouldn't need to consult a tax expert to prepare a personal tax statement.
mrfelipppe•3mo ago
This is awesome!