LLM Economist – Mechanism Design for Simulated Agent Societies

https://github.com/sethkarten/LLM-Economist
2•milkkarten•6mo ago

Comments

milkkarten•6mo ago
We simulate large-scale agent societies where heterogeneous personas work, adapt, and vote—governed by an in-context planner optimizing social welfare.

The system models decentralized governance, dynamic tax policy, and institutional evolution—entirely via in-context reinforcement learning, no fine-tuning required.

Full paper (arXiv): https://arxiv.org/abs/2507.15815
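
In outline, one step of the simulation looks like the toy sketch below. This is self-contained and illustrative only: the names and numbers are made up, and a random-search "planner" stands in for the in-context LLM planner.

    import math, random

    # Toy version of the loop described above: heterogeneous agents choose
    # labor under a bracketed tax schedule, revenue is redistributed, and a
    # "planner" (random search standing in for the in-context LLM planner)
    # proposes new marginal rates; welfare-improving proposals are adopted.
    BRACKETS = [0, 10_000, 50_000]        # bracket lower bounds, $/year

    def tax_due(income, rates):
        """Tax owed with marginal rates applied per bracket."""
        due = 0.0
        for i, lo in enumerate(BRACKETS):
            hi = BRACKETS[i + 1] if i + 1 < len(BRACKETS) else float("inf")
            if income > lo:
                due += (min(income, hi) - lo) * rates[i]
        return due

    class Agent:
        def __init__(self, skill):
            self.skill = skill            # heterogeneous productivity
        def work(self, rates):
            # Labor supply shrinks as the top marginal rate rises.
            return self.skill * (1.0 - rates[-1]) * 100_000

    def welfare(net_incomes):
        # Concave utility, so redistribution matters (stand-in SWF).
        return sum(math.log(1 + y) for y in net_incomes) / len(net_incomes)

    agents = [Agent(random.uniform(0.2, 2.0)) for _ in range(100)]
    rates, best = [0.1, 0.2, 0.3], -float("inf")
    for step in range(200):
        trial = [min(0.9, max(0.0, r + random.gauss(0, 0.05))) for r in rates]
        incomes = [a.work(trial) for a in agents]
        transfer = sum(tax_due(y, trial) for y in incomes) / len(agents)
        swf = welfare([y - tax_due(y, trial) + transfer for y in incomes])
        if swf > best:                    # "vote": adopt improving proposals
            rates, best = trial, swf
    print("adopted marginal rates:", [round(r, 2) for r in rates])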

slwvx•6mo ago
I like the idea of simulating a society! I don't pretend to understand everything that you're doing, so please correct me where I'm wrong below.

The right side of Fig 5a shows that your LLM tool has an 80% tax for people making between $0 and $11.6k/year, which then drops to about 30% for the next tax bracket, with the other brackets moving around all over the place. This seems designed to induce people NOT to pay taxes.

For all its faults, I think the US progressive system is fairly rational and does a pretty good job of inducing people to actually pay taxes [1]; specifically, the (effectively) negative tax rate in the US for low-income people gets them in the habit of paying taxes. I.e., whatever underlying model of social welfare you are assuming to get the great social welfare on the right side of Fig 5a seems not to model real people. I wonder if some LLM hallucinations are going on under the hood to create the strange behavior in Fig 5a.

Some questions: You don't seem to model the US system of tax credits; is that right? Also, is there a Saez tax below $47.2k in Fig 5a? What about between $244k and $609k? I.e. is the Saez tax ever under the LLM tax?

[1] https://blogs.worldbank.org/en/governance/why-does-progressi...

milkkarten•6mo ago
These are marginal tax rates, not effective tax rates (e.g. 80% on the first $10k, 30% on $10k-20k). We do not model tax credits here. We try to keep the system as simple as possible so that we can effectively evaluate changes; as is, the economic theory becomes intractable once we move from purely rational agents to bounded rationality. In future work we may be able to enforce some smoothness in the overall tax schedule, but for now we let the LLM planner try what it thinks is best, in order to test its in-context optimization capabilities.
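
Concretely, with the two example brackets above (a quick illustrative sketch, not our actual code):

    # Marginal rates apply per bracket, so an 80% rate on the first $10k
    # does not mean an 80% effective rate overall.
    brackets = [(0, 10_000, 0.80), (10_000, 20_000, 0.30)]

    def tax_due(income):
        return sum((min(income, hi) - lo) * rate
                   for lo, hi, rate in brackets if income > lo)

    income = 20_000
    due = tax_due(income)                         # 0.8*10k + 0.3*10k = 11,000
    print(f"effective rate: {due / income:.0%}")  # 55%, not 80%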

Also, while there is a complicated tax code in the US, in our simulation there is no way for agents to avoid paying taxes :)

The Saez tax rates are perturbed from the LLM Economist's tax rates to find the theoretically optimal values according to economic theory.
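
Schematically, the perturbation is something like the sketch below, where simulate_welfare stands in for the actual evaluation and the step size is arbitrary:

    import itertools

    def perturb_search(rates, simulate_welfare, delta=0.05):
        """Nudge each bracket's rate up/down and keep welfare improvements
        (an illustrative stand-in for locating the Saez-optimal schedule)."""
        best, best_swf = list(rates), simulate_welfare(rates)
        improved = True
        while improved:
            improved = False
            for i, sign in itertools.product(range(len(best)), (+1, -1)):
                trial = list(best)
                trial[i] = min(1.0, max(0.0, trial[i] + sign * delta))
                swf = simulate_welfare(trial)
                if swf > best_swf:
                    best, best_swf, improved = trial, swf, True
        return best, best_swf

    # Toy demo with a welfare function peaked at 0.4 per bracket:
    demo = lambda r: -sum((x - 0.4) ** 2 for x in r)
    print(perturb_search([0.8, 0.3, 0.6], demo))  # -> rates near [0.4, 0.4, 0.4]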

Thanks for the interest and I hope that this helps clarify some of the details.

slwvx•6mo ago
Thanks for the further details!

Ah, the fact that they are marginal rates makes marginally more sense, but it still seems to me that the SWF in Fig 5a has very little relation to the real world.

> Also, while there is a complicated tax code in the US, in our simulation there is no way for agents to avoid paying taxes :)

Seems like an obvious thing to add. I.e. if you believe the World Bank when they say "People are more willing to pay tax when taxes are progressive" [1], then it seems worthwhile to update your model to include this.

[1] https://blogs.worldbank.org/en/governance/why-does-progressi...

MutedEstate45•6mo ago
Interesting approach, but I'm curious about the practical cost considerations. A 1,000-agent simulation could easily be hundreds of thousands of API calls. The repo recommends gpt-4o-mini over gpt-4 and supports local Llama models, but there's no guidance on the performance trade-offs.

Would love to see cost-per-experiment breakdowns and quality benchmarks across model tiers. Does a local Llama 3.1 8B produce meaningful economic simulations, or do you need the reasoning power of frontier models? This could be the difference between a $5 and a $500 experiment.
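
For rough budgeting, cost scales like agents × steps × tokens-per-call × price-per-token. A quick sketch, where every token count and price is my own assumption rather than a measured figure:

    # Back-of-envelope experiment cost; all numbers are assumptions.
    PRICE_PER_M_TOKENS = {              # blended input+output $/1M tokens
        "gpt-4o-mini": 0.375,
        "llama-3.1-8b-local": 0.0,      # electricity aside
    }

    def estimate_cost(agents, steps, model, tokens_per_call=2_000):
        calls = agents * steps          # assume one LLM call per agent-step
        return calls * tokens_per_call / 1e6 * PRICE_PER_M_TOKENS[model]

    for agents, steps, model in [(100, 500, "gpt-4o-mini"),
                                 (500, 1_000, "llama-3.1-8b-local")]:
        print(f"{agents:>4} agents x {steps:>5} steps x {model}: "
              f"${estimate_cost(agents, steps, model):,.2f}")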

milkkarten•6mo ago
Using smaller, cheaper agents is one of the goals of the work. There is a Pareto frontier, though: with smaller, faster, cheaper agents, the number of steps required to converge increases. We touch upon this briefly in the paper.

MutedEstate45•6mo ago
Thanks. That Pareto trade-off is exactly what I'm trying to quantify, not just qualify. For example, if I've got a $50 budget, what's the sweet spot?

Scenario A: 100 agents × GPT-4o-mini × 500 steps
Scenario B: 500 agents × local Llama 3-8B × 1,000+ steps

A quick table like "X agents × Y model × Z steps → tokens, $, convergence score" in the README would let new users budget experiments without having to read the whole paper or run expensive experiments just to do basic resource planning.

milkkarten•6mo ago
We ran each method in under 24 hours on a single H100. I understand your point, and I think we will include this in future iterations of our work, since it is very interesting from the user's perspective. In the paper, though, we focus more on algorithmic concerns.

MutedEstate45•6mo ago
I'll look out for future iterations. Thanks and good luck with the paper.