
Ask HN: Thinking about memory for AI coding agents

5•hoangnnguyen•22h ago
I’ve been experimenting with AI coding agents in real day-to-day work and ran into a recurring problem: I keep repeating the same engineering principles over and over.

Things like validating input, being careful with new dependencies, or respecting certain product constraints. The usual solutions are prompts or rules.

After using both for a while, neither felt right.

- Prompts disappear after each task.
- Rules only trigger in narrow contexts, often tied to specific files or patterns.
- Some principles are personal preferences, not something I want enforced at the project level.
- Others aren’t really “rules” at all, but knowledge about product constraints and past tradeoffs.

That led me to experiment with a separate “memory” layer for AI agents. Not chat history, but small, atomic pieces of knowledge: decisions, constraints, and recurring principles that can be retrieved when relevant.
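To make “small, atomic pieces of knowledge” concrete, here is a minimal sketch of what such a memory layer could look like. All names and fields here are illustrative assumptions, not from any real tool:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryEntry:
    """One atomic memory: a decision, constraint, or principle,
    small enough to inject into a prompt without polluting context."""
    kind: str                 # "decision" | "constraint" | "principle"
    text: str                 # one or two sentences, stated concretely
    tags: set = field(default_factory=set)

def retrieve(memories, task_keywords):
    """Return only entries whose tags overlap the current task."""
    return [m for m in memories if m.tags & task_keywords]

memories = [
    MemoryEntry("constraint",
                "Validate all external input at the API boundary.",
                {"api", "validation"}),
    MemoryEntry("decision",
                "Avoid adding new runtime dependencies without review.",
                {"dependencies"}),
]

# Only the API-related constraint is pulled in for an API task.
relevant = retrieve(memories, {"api", "auth"})
```

The point of the sketch is the shape: entries are retrieved selectively per task rather than dumped wholesale into every prompt.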

A few things became obvious once I started using it seriously:

- Vague memory leads to vague behavior.
- Long memory pollutes context.
- Duplicate entries make retrieval worse.
- Many issues only show up when you actually depend on the agent daily.
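On the duplicate-entries point: two near-identical memories compete for the same retrieval slot and crowd out other context. A crude illustrative dedup pass (assuming entries are short strings; the threshold is an arbitrary assumption) could look like:

```python
import difflib

def dedup(entries, threshold=0.9):
    """Drop entries that are near-duplicates of one already kept."""
    kept = []
    for entry in entries:
        normalized = " ".join(entry.lower().split())
        is_dup = any(
            difflib.SequenceMatcher(
                None, normalized, " ".join(k.lower().split())
            ).ratio() >= threshold
            for k in kept
        )
        if not is_dup:
            kept.append(entry)
    return kept

entries = [
    "Validate input at the API boundary.",
    "Validate input at the  API boundary.",   # near-duplicate
    "Avoid new dependencies without review.",
]
unique = dedup(entries)  # two entries survive
```

In practice an embedding-similarity check would catch paraphrases that string matching misses, but the principle is the same: dedup at write time, not at retrieval time.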

AI was great at executing once the context was right. But deciding what should be remembered, what should be rejected, and when predictability matters more than cleverness still required human judgment.

Curious how others are handling this. Are you relying mostly on prompts, rules, or some form of persistent knowledge when working with AI coding agents?

Comments

hoangnnguyen•21h ago
I tried building an experiment; details are in this dev log: https://codeaholicguy.com/2026/01/24/i-use-ai-devkit-to-deve...
gauravsc•17h ago
We’ve built something very close to this: a memory and learning layer called versanovatech.com. We posted about it here: https://news.ycombinator.com/from?site=versanovatech.com
7777777phil•12h ago
Earlier this month I argued why LLMs need episodic memory (https://philippdubach.com/posts/beyond-vector-search-why-llm...), and this lines up closely with what you’re describing.

But I’m not sure it’s a prompts-vs-rules problem. It’s more about remembering past decisions as decisions. Things like “we avoided this dependency because it caused trouble before” or “this validation exists due to a past incident” have sequence and context. Flattening them into embeddings or rules loses that. I see even the best models making those errors over a longer context right now.
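One way to picture “remembering decisions as decisions”: keep the reason and the ordering alongside the rule, so a later decision can supersede an earlier one instead of both surviving as flat, contradictory rules. A minimal sketch, with all field names and the supersede mechanism being illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class Decision:
    when: date
    what: str                          # the rule the agent should follow
    why: str                           # the incident or tradeoff behind it
    supersedes: Optional[str] = None   # earlier rule this replaces

log = [
    Decision(date(2024, 3, 1),
             "Avoid dependency X",
             "it broke the build twice"),
    Decision(date(2024, 9, 12),
             "Dependency X is fine again since v2",
             "upstream fixed the issue",
             supersedes="Avoid dependency X"),
]

# Because entries are ordered and linked, the newer decision wins;
# a flat rule store would keep both and hand the model a conflict.
superseded = {d.supersedes for d in log if d.supersedes}
active = [d for d in log if d.what not in superseded]
```

An embedding over `what` alone would happily retrieve the stale rule; keeping `why`, `when`, and the supersede link is what preserves the sequence the parent comment is talking about.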

My current view is that humans still need to control what gets remembered. Models are good at executing once context is right, but bad at deciding what deserves to persist.

qazxcvbnmlp•10h ago
Humans decide what to remember based on their emotions. LLMs don’t have emotions. Our definitions of good and bad come from our morals and emotions.

In the context of software development, requirements are based on what we want to do (which is based on emotion), and the methods we choose to implement them are also based mostly on our predictions about what will and won’t work well.

Most of our affinity for good software development hygiene comes from the negative feelings we’ve experienced doing the extra work that bad development hygiene creates.

I think this explains a lot of varied success with coding agents. You don’t talk to them like you talk to an engineer because with an engineer, you know that they have a sense of what is good and bad. Coding agents won’t tell you what is good and bad. They have some limited heuristics, but they don’t understand nuance at all unless you prompt them on it.

Even if they could have unlimited context windows and memory, they would still need to be able to decide which parts of that memory are important. I.e., if the human gave them conflicting instructions, how do they resolve that?

I think we’ll eventually get to a state where a lot of the mechanics of coding and development can be incorporated into coding agents, but the what and why of what we build will still come from a human. I.e., an agent will be able to go from 0 to 100% by itself on a full-stack web application, including deployment with all the security, compliance, logins, and whatever else, but it still won’t know what is important to emphasize on that website. Should the images be bigger here, or the text? Questions like that.

nasnasnasnasnas•11h ago
CLAUDE.md / AGENTS.md can help with this. You can update it for a project with the base information that you want to give the agent. I think you can also put these in different places in your home dir too (Claude Code / opencode).
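For example, a project-level CLAUDE.md carrying the kinds of principles from the original post might look something like this (contents purely illustrative, not from any real project):

```markdown
# Project conventions

## Constraints
- Validate all external input at the API boundary.
- Do not add new runtime dependencies without flagging them for review.

## Past decisions
- We avoid dependency X: it broke the build twice (2024-03).
  Superseded 2024-09: X is fine again since v2.

## Preferences (not enforced in CI)
- Prefer small, focused commits with imperative-mood messages.
```

This covers the project-level case; per the original post, personal preferences and cross-project knowledge still need somewhere else to live.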
