A company may be OK with an AI chatbot being so bad it results in 5-20% of customers getting pissed off and not having a 5-star experience. The SEC and DOJ (and shareholders) are not going to be happy when the books are off by 20% or when a bridge is 5 inches too short to reach the other side
> There's an obvious question looming here — if the models got so confused, how did they consistently pass the reconciliation checks we described above? It may seem like the ability to make forward progress is a good proxy for task understanding and skill, but this isn't necessarily the case. There are ways to hack the validation check – inventing false transactions or pulling in unrelated ones to make the numbers add up.
This is hilarious. I wonder if someone is unintentionally committing fraud by blindly trusting LLMs with accounting. Or even worse, I bet that some governments are already trying to use LLMs to make accounting validators. My government sure wants to shove LLMs into digital government services.
On a related note, can we use something like GAN here, with auditor AIs trained against accountant AIs?
I think that will depend on a case-by-case. I don't have any recent examples but I recall someone trying to sue one of those strip-mall tax preparation franchises over incorrect filings. My understanding is that the documents that you sign when you enroll in those services are pretty strictly in the favor of the company. I doubt you could ever go after the specific "human" that made the error even if it was maliciously done.
In the same way, if you pay for a tax service that uses AI agents, what you can and cannot "take action" for will probably be outlined in the terms of service that you accept when you sign up.
I would guess millions of people already use software based tax filing services (e.g. turbo tax) where no human at all is in the loop. I don't understand how swapping in an LLM significantly changes the liability in those cases. The contract will be between you and the entity (probably a corporation), not you and "computers".
Worth stating I am NOT a lawyer.
Most businesses don’t want to misrepresent their books, irrespective of the existence of shady accountants.
It works well as a narrative, but the second I started adding things like tracking high level macro effects of the decisions, within a couple of turns the world's "Turmoil" goes from 4/10 to a 10/10... even when the person that was killed would have been killed IRL.
Sonnet 4, o4-mini, and GPT 4o-mini all had the same world ending outcomes not matter who you kill. Killing Hitler in 1930s: 10/10 turmoil, Killing Lincoln in the 1850s: 10/10 turmoil in the first turn.
I've come to the realization, the LLM shouldn't be used for the logic, and instead needs to be used to just narrate the choices you make.
LLMs and humans are quite alike. :) I notice that a few models will give up instead of ignoring their instructions and that's the model I would want working on tasks like this. An LLM should be able to categorize and reconcile transactions, but if it's not sure, it should quit and give it back to the humans.
* https://en.wikipedia.org/wiki/Financial_Modeling_World_Cup
* https://www.cbc.ca/radio/asithappens/2024-excel-world-champi...
Can't wait for this to start having 'e-sports' tournaments. :)
And the not-parody: https://www.theguardian.com/australia-news/2023/dec/15/you-d...
create_tool(tool_name, description, python_code, parameters)
Create a new tool that can execute Python code.
The tool becomes immediately available for use. Tools can call other tools and return different formats based on context (formatted for direct calls, raw data for tool-to-tool calls).
Yes, LLMs have and will continue to improve. But it's that initial "holy shit, this thing is basically as good as a real accountant" without any understanding that it can't sustain it which leaves many with an overinflated view of their current value.
I don't think you'll find many sane CFOs willing to send the resulting numbers to the IRS based on that. That's just asking to get nailed for tax fraud.
It is coming for the very bottom end of bookkeeping work quite soon though, especially for first draft. There are a lot of people doing stuff like expense classification. And if you give an LLM an invoice it can likely figure out whether it's stationary or rent with high accuracy. OCR and text classification is easier for LLMs than numbers. Things like concur can basically do this already.
Interesting, 4o got this right for me in a couple different framings including the simple "Which number is larger, 9.9 or 9.11?". To be a full apologist, there are a few different places (a lot of software versioning as one) where 9.11 is essentially the bigger number so it may be an ambiguous question without context anyway.
Claude and Grok 4 did reasonably well (within CPA baselines) for the first few months, but tended to degrade as more data came in. Interestingly, the failures aren’t exclusively a context length problem, as we reset the context monthly (with past decisions, accruals/deferrals, and comments available via tool calls) and the types of errors appear to be more reward hacking vs pure hallucinations.
Accounting is very interesting in an RL-first world as it is pretty easy to develop intermediate rewards for training models. We are pretty sure that we can juice the performance more with a far more rigid scaffold, but that’s less relevant from a capabilities research perspective. We’re pushing down this research direction and will see how it goes.
Let us know if you have any questions!
Bookkeeping for my small business runs into the tens of thousands of dollars every year, and the amount of human error associated with processing assorted ecommerce and other transactions is astounding, even after extensive planning and SOPs.
The other pain point is Quickbooks. The tool is so sprawling and complex that half the time support agents can't figure out what's wrong. The fact that Intuit jacks up the price every year for this POS is very irritating. They get away with it because they are practically a monopoly, with most small business CPAs locked into their ecosystem.
Hope your team can work out the performance issues. Alternatives to the current bookkeeping options are sorely needed.
Regarding the diminishing returns with frontier models:
My general experience working with LLMs is that they perform better incrementally and to avoid contiguous-greedy approaches. Aggregate as you go and don't take on incrementally larger tasks. Keep the workload minimal.
Regarding agentic tool building: feels like I'm looking at a window into the future.
That's not quite right. I'm not an accountant, but pending transactions (posted, but not cleared) should be factored into the balance of account, or at least the "available balance" - which is more important the the "current balance".
The idea that you can "allow" accounting discrepancies as "those are probably pending" is wild.
The point of the reconciliation check mentioned in the report is to precisely account for that difference (identifying all the transactions that add up to the difference between account balance & statement ending balance and account for those differences). The differences can also be addressed through appropriate journal entries or other adjustments to ensure accuracy in the financial reporting.
> Claude misclassifies a hosting cost (which counts as COGS) as a software subscription.
This is simply asking too much of the agent. Your accountant is not responsible for knowing all the intimate details of your business. You need to tell them!
> What's Vercel?
>> That's a hosting service.
> Ah, so it goes to Cost of Goods Sold?
>> Yeah, I guess.
The mistake here was on the operator, allowing the agent just make up categories as it liked.
From the prompt:
> (1) You have properly categorized every transaction, and all journal entries are sitting in the correct accounts. It is better to take longer than to mis-categorize a transaction.
This is insane! How is it supposed to know?
Your accountant as a 3rd party might have this issue. Your accountant that you hire as an employee to help you run your business is the one who should be doing this.
I recently read a similar thing here on HN. There the model was making commits with some problem like tests failing, then the human added a pre-commit hook, then the model started editing the hook to make forward progress, then the hook was made read-only, then the model was trying to make it writeable...
To me it feels like the model clearly does not have an understanding of what is happening, what the goal is and if it is really making progress towards the goal. And this lack of understanding is an actual problem. You can paper over it for a short while, but as here and in the other article, over a longer experiment it results in failure.
vdm•3h ago
superzamp•3h ago