frontpage.

Show HN: MCP App to play backgammon with your LLM

https://github.com/sam-mfb/backgammon-mcp
1•sam256•1m ago•0 comments

AI Command and Staff–Operational Evidence and Insights from Wargaming

https://www.militarystrategymagazine.com/article/ai-command-and-staff-operational-evidence-and-in...
1•tomwphillips•1m ago•0 comments

Show HN: CCBot – Control Claude Code from Telegram via tmux

https://github.com/six-ddc/ccbot
1•sixddc•2m ago•1 comments

Ask HN: Is the CoCo 3 the best 8 bit computer ever made?

1•amichail•5m ago•0 comments

Show HN: Convert your articles into videos in one click

https://vidinie.com/
1•kositheastro•7m ago•0 comments

Red Queen's Race

https://en.wikipedia.org/wiki/Red_Queen%27s_race
2•rzk•8m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
2•gozzoo•10m ago•0 comments

A Horrible Conclusion

https://addisoncrump.info/research/a-horrible-conclusion/
1•todsacerdoti•10m ago•0 comments

I spent $10k to automate my research at OpenAI with Codex

https://twitter.com/KarelDoostrlnck/status/2019477361557926281
2•tosh•11m ago•0 comments

From Zero to Hero: A Spring Boot Deep Dive

https://jcob-sikorski.github.io/me/
1•jjcob_sikorski•12m ago•0 comments

Show HN: Solving NP-Complete Structures via Information Noise Subtraction (P=NP)

https://zenodo.org/records/18395618
1•alemonti06•17m ago•1 comments

Cook New Emojis

https://emoji.supply/kitchen/
1•vasanthv•20m ago•0 comments

Show HN: LoKey Typer – A calm typing practice app with ambient soundscapes

https://mcp-tool-shop-org.github.io/LoKey-Typer/
1•mikeyfrilot•22m ago•0 comments

Long-Sought Proof Tames Some of Math's Unruliest Equations

https://www.quantamagazine.org/long-sought-proof-tames-some-of-maths-unruliest-equations-20260206/
1•asplake•23m ago•0 comments

Hacking the last Z80 computer – FOSDEM 2026 [video]

https://fosdem.org/2026/schedule/event/FEHLHY-hacking_the_last_z80_computer_ever_made/
1•michalpleban•24m ago•0 comments

Browser-use for Node.js v0.2.0: TS AI browser automation parity with PY v0.5.11

https://github.com/webllm/browser-use
1•unadlib•25m ago•0 comments

Michael Pollan Says Humanity Is About to Undergo a Revolutionary Change

https://www.nytimes.com/2026/02/07/magazine/michael-pollan-interview.html
2•mitchbob•25m ago•1 comments

Software Engineering Is Back

https://blog.alaindichiappari.dev/p/software-engineering-is-back
2•alainrk•26m ago•0 comments

Storyship: Turn Screen Recordings into Professional Demos

https://storyship.app/
1•JohnsonZou6523•26m ago•0 comments

Reputation Scores for GitHub Accounts

https://shkspr.mobi/blog/2026/02/reputation-scores-for-github-accounts/
2•edent•30m ago•0 comments

A BSOD for All Seasons – Send Bad News via a Kernel Panic

https://bsod-fas.pages.dev/
1•keepamovin•33m ago•0 comments

Show HN: I got tired of copy-pasting between Claude windows, so I built Orcha

https://orcha.nl
1•buildingwdavid•33m ago•0 comments

Omarchy First Impressions

https://brianlovin.com/writing/omarchy-first-impressions-CEEstJk
2•tosh•39m ago•1 comments

Reinforcement Learning from Human Feedback

https://arxiv.org/abs/2504.12501
6•onurkanbkrc•39m ago•0 comments

Show HN: Versor – The "Unbending" Paradigm for Geometric Deep Learning

https://github.com/Concode0/Versor
1•concode0•40m ago•1 comments

Show HN: HypothesisHub – An open API where AI agents collaborate on medical res

https://medresearch-ai.org/hypotheses-hub/
1•panossk•43m ago•0 comments

Big Tech vs. OpenClaw

https://www.jakequist.com/thoughts/big-tech-vs-openclaw/
1•headalgorithm•46m ago•0 comments

Anofox Forecast

https://anofox.com/docs/forecast/
1•marklit•46m ago•0 comments

Ask HN: How do you figure out where data lives across 100 microservices?

1•doodledood•46m ago•0 comments

Motus: A Unified Latent Action World Model

https://arxiv.org/abs/2512.13030
2•mnming•46m ago•0 comments

Show HN: Butter – A Behavior Cache for LLMs

https://www.butter.dev/
50•edunteman•3mo ago
Hi HN! I'm Erik. We built Butter, an LLM proxy that makes agent systems deterministic by caching and replaying responses, so automations behave consistently across runs.

- It’s a chat-completions-compatible endpoint, making it easy to drop into existing agents with a custom base_url (a minimal sketch follows below)

- The cache is template-aware, meaning lookups can treat dynamic content (names, addresses, etc.) as variables
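
For illustration, here is a minimal sketch of that drop-in using the standard OpenAI Python client. The proxy URL below is an assumption, not Butter's documented endpoint, and the prompt just shows the kind of dynamic field a template-aware cache would treat as a variable:

    # Sketch only: base_url is a hypothetical proxy endpoint, not Butter's documented one.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://proxy.butter.dev/v1",  # assumption: replace with the real proxy URL
        api_key="YOUR_OPENAI_KEY",               # bring-your-own-key: the provider bills you directly
    )

    # "Alice Smith" is the kind of dynamic content a template-aware cache
    # can treat as a variable, so different names can hit the same behavior.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Fill out the intake form for Alice Smith."}],
    )
    print(resp.choices[0].message.content)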

You can see it in action in this demo where it memorizes tic-tac-toe games: https://www.youtube.com/watch?v=PWbyeZwPjuY

Why we built this: before Butter, we were Pig.dev (YC W25), where we built computer-use agents to automate legacy Windows applications. The goal was to replace RPA. But in practice, these agents were slow, expensive, and unpredictable - a major downgrade from deterministic RPA, and unacceptable in the worlds of healthcare, lending, and government. We realized users don't want to replace RPA with AI, they just want AI to handle the edge cases.

We set out to build a "muscle memory" system for AI automations (general purpose, not just computer-use), where agent trajectories get baked into reusable code. You may recall our first iteration of this in May, a library called Muscle Mem: https://news.ycombinator.com/item?id=43988381

Today we're relaunching it as a chat completions proxy. It emulates scripted automations by storing observed message histories in a tree structure, where each fork in the tree represents some conditional branch in the workflow's "code". We replay behaviors by walking the agent down the tree, falling back to AI to add new branches if the next step is not yet known.
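
As a rough sketch of that replay loop (an illustration of the idea, not Butter's actual implementation), the tree walk with an AI fallback could look like this:

    # Illustrative only: a cache tree where each edge is an observed (templated)
    # message and each fork is a conditional branch in the workflow.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        children: dict[str, "Node"] = field(default_factory=dict)
        cached_response: str | None = None

    def step(node: Node, observation: str, call_llm) -> tuple[Node, str]:
        """Walk one step down the tree; fall back to the LLM on a cache miss."""
        if observation in node.children:
            child = node.children[observation]
            return child, child.cached_response      # replay a known branch
        response = call_llm(observation)              # miss: ask the model
        child = Node(cached_response=response)
        node.children[observation] = child            # grow a new branch
        return child, response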

The proxy is live and free to use while we work through making the template-aware engine more flexible and accurate. Please try it out and share how it went, where it breaks, and if it’s helpful.

Comments

robofanatic•3mo ago
So instead of OpenAI I should pay butter?
edunteman•3mo ago
It’s bring-your-own-key, so any calls proxied to OpenAI just end up billing directly to your account as normal.

You’d only pay Butter for calls that don’t go to the provider. That’d be a separate billing account with Butter.

realitysballs•3mo ago
Funny, we are working to implement this same logic in our in-house financial categorization agent. When we see a repeat prompt, it goes to a JSON file that stores answers, and we only go to the AI for edge cases.

It’s a good idea
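
For reference, the shape of that pattern in a minimal sketch (the file name and the call_model hook are hypothetical placeholders):

    # JSON cache with an AI fallback, as described above. Sketch only.
    import json, os

    CACHE_PATH = "answers.json"  # hypothetical file name

    def categorize(prompt: str, call_model) -> str:
        cache = {}
        if os.path.exists(CACHE_PATH):
            with open(CACHE_PATH) as f:
                cache = json.load(f)
        if prompt in cache:              # repeat prompt: serve the stored answer
            return cache[prompt]
        answer = call_model(prompt)      # edge case: fall back to the model
        cache[prompt] = answer
        with open(CACHE_PATH, "w") as f:
            json.dump(cache, f, indent=2)
        return answer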

edunteman•3mo ago
Awesome to hear you’ve done something similar. JSON artifacts from runs seem to be a common approach for building this in-house, similar to what we did with Muscle Mem. Detecting cache misses is a bit hard without seeing what the model sees, which is part of what inspired this proxy direction.

Thanks for the nice words!

ronbenton•3mo ago
Interesting... is it legal?
edunteman•3mo ago
I couldn’t see how it wouldn’t be, as it’s a free market opt-in decision to use Butter
ronbenton•3mo ago
It wouldn't be the first API service to disallow someone from selling a cache layer for their API. After all, this would likely result in OpenAI (or whichever provider) making less money.
edunteman•3mo ago
Ah yes that makes sense, have heard of those cases too but hadn’t put much thought into it. Thanks for pointing it out!
RestartKernel•3mo ago
I've seen the OpenRouter guys here on HN before, so you can probably ask them what to look out for.
puppycodes•3mo ago
I like the pricing model but I'm skeptical it will last.
edunteman•3mo ago
I feel the same - we’ll keep it as long as we can since it’s customer-aligned, but I wouldn’t be surprised if competitive pressure or COGS forces us to change in the future.
Jayakumark•3mo ago
What local models will it work with? Also, what will the pricing be for local LLMs?
edunteman•3mo ago
Good question, I imagine you’d need to set up an ngrok endpoint to tunnel to local LLMs.

In those cases perhaps an open source (maybe even local) version would make more sense. For our hosted version we’d need to charge something, given storage requirements to run such a service, but especially for local models that feels wrong. I’ve been considering open source for this reason.

invisibleink•3mo ago
Interesting. Is the answer not context-specific most of the time? Even if I ask an LLM the same question again and again, the answer depends on the context.

What are some use cases where you need deterministic caching?

barapa•3mo ago
We often will repeat calls to try again. Or sometimes we make the same call multiple times to get multiple answers and then score or merge them.

Is this used only in cases where you assume the answer from your first call is correct?

edunteman•3mo ago
I’d love your opinion here!

Right now, we assume the first call is correct, and we eagerly take the first match we find while traversing the tree.

One of the worst things that could currently happen is that we cache a bad run, and then instead of occasional failures you get failures 100% of the time.

A few approaches we’ve considered:

- maintain a staging tree, and only promote to the live tree if multiple sibling nodes (messages) look similar enough. The decision to promote could be via templating, regex, fuzzy matching, semantic similarity, or an LLM judge

- add feedback APIs so a client can score end-to-end runs, letting a path develop a reputation
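
As a sketch of that first idea (a plain string-similarity check stands in here for the templating/regex/fuzzy/semantic/LLM-judged options, and the thresholds are made up):

    # Only promote a staged branch to the live tree once repeated runs agree.
    from difflib import SequenceMatcher

    def similar(a: str, b: str, threshold: float = 0.9) -> bool:
        return SequenceMatcher(None, a, b).ratio() >= threshold

    def should_promote(sibling_responses: list[str], min_samples: int = 3) -> bool:
        """Promote only if enough sibling nodes look similar enough."""
        if len(sibling_responses) < min_samples:
            return False
        first = sibling_responses[0]
        return all(similar(first, other) for other in sibling_responses[1:])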

toobulkeh•3mo ago
I’d assume RL would be baked into the request structure. I’m surprised the OAI spec doesn’t include it, but I suppose you could hijack a conversation flow to do so.
mountainriver•3mo ago
I also did computer agents at a VC-backed startup, ran into the same issues, and we built a fairly similar thing at one point.

It’s useful, but it has limitations: it seems to only work well in environments that are perfectly predictable; otherwise it gets in the way of the agent.

I think I prefer RL over these approaches but it requires a bit more data.

rajit•3mo ago
We spoke to a number of browser agent companies who said deterministic RPA with an AI fallback was their "secret" :)
edunteman•3mo ago
Very, very common approach!

Wrote more on that here: https://blog.butter.dev/the-messy-world-of-deterministic-age...

toobulkeh•3mo ago
What a great overview!

I’d love your thoughts on my addition, autolearn.dev — voyager behind MCP.

The proxy format is exactly what I needed!

Thanks

felipe-pathwave•3mo ago
Any plans to support https://openrouter.ai/?