frontpage.

Sid Meier's System for Real-Time Music Composition and Synthesis

https://patents.google.com/patent/US5496962A/en
1•GaryBluto•3m ago•1 comment

Show HN: Slop News – HN front page now, but it's all slop

https://dosaygo-studio.github.io/hn-front-page-2035/slop-news
2•keepamovin•4m ago•1 comment

Show HN: Empusa – Visual debugger to catch and resume AI agent retry loops

https://github.com/justin55afdfdsf5ds45f4ds5f45ds4/EmpusaAI
1•justinlord•7m ago•0 comments

Show HN: Bitcoin wallet on NXP SE050 secure element, Tor-only open source

https://github.com/0xdeadbeefnetwork/sigil-web
2•sickthecat•9m ago•0 comments

White House Explores Opening Antitrust Probe on Homebuilders

https://www.bloomberg.com/news/articles/2026-02-06/white-house-explores-opening-antitrust-probe-i...
1•petethomas•9m ago•0 comments

Show HN: MindDraft – AI task app with smart actions and auto expense tracking

https://minddraft.ai
2•imthepk•14m ago•0 comments

How do you estimate AI app development costs accurately?

1•insights123•15m ago•0 comments

Going Through Snowden Documents, Part 5

https://libroot.org/posts/going-through-snowden-documents-part-5/
1•goto1•16m ago•0 comments

Show HN: MCP Server for TradeStation

https://github.com/theelderwand/tradestation-mcp
1•theelderwand•19m ago•0 comments

Canada unveils auto industry plan in latest pivot away from US

https://www.bbc.com/news/articles/cvgd2j80klmo
2•breve•20m ago•1 comment

The essential Reinhold Niebuhr: selected essays and addresses

https://archive.org/details/essentialreinhol0000nieb
1•baxtr•22m ago•0 comments

Rentahuman.ai Turns Humans into On-Demand Labor for AI Agents

https://www.forbes.com/sites/ronschmelzer/2026/02/05/when-ai-agents-start-hiring-humans-rentahuma...
1•tempodox•24m ago•0 comments

StovexGlobal – Compliance Gaps to Note

1•ReviewShield•27m ago•1 comment

Show HN: Afelyon – Turns Jira tickets into production-ready PRs (multi-repo)

https://afelyon.com/
1•AbduNebu•28m ago•0 comments

Trump says America should move on from Epstein – it may not be that easy

https://www.bbc.com/news/articles/cy4gj71z0m0o
5•tempodox•29m ago•2 comments

Tiny Clippy – A native Office Assistant built in Rust and egui

https://github.com/salva-imm/tiny-clippy
1•salvadorda656•33m ago•0 comments

LegalArgumentException: From Courtrooms to Clojure – Sen [video]

https://www.youtube.com/watch?v=cmMQbsOTX-o
1•adityaathalye•36m ago•0 comments

US moves to deport 5-year-old detained in Minnesota

https://www.reuters.com/legal/government/us-moves-deport-5-year-old-detained-minnesota-2026-02-06/
6•petethomas•39m ago•2 comments

If you lose your passport in Austria, head for McDonald's Golden Arches

https://www.cbsnews.com/news/us-embassy-mcdonalds-restaurants-austria-hotline-americans-consular-...
1•thunderbong•44m ago•0 comments

Show HN: Mermaid Formatter – CLI and library to auto-format Mermaid diagrams

https://github.com/chenyanchen/mermaid-formatter
1•astm•59m ago•0 comments

RFCs vs. READMEs: The Evolution of Protocols

https://h3manth.com/scribe/rfcs-vs-readmes/
3•init0•1h ago•1 comment

Kanchipuram Saris and Thinking Machines

https://altermag.com/articles/kanchipuram-saris-and-thinking-machines
1•trojanalert•1h ago•0 comments

Chinese chemical supplier causes global baby formula recall

https://www.reuters.com/business/healthcare-pharmaceuticals/nestle-widens-french-infant-formula-r...
2•fkdk•1h ago•0 comments

I've used AI to write 100% of my code for a year as an engineer

https://old.reddit.com/r/ClaudeCode/comments/1qxvobt/ive_used_ai_to_write_100_of_my_code_for_1_ye...
2•ukuina•1h ago•1 comment

Looking for 4 Autistic Co-Founders for AI Startup (Equity-Based)

1•au-ai-aisl•1h ago•1 comment

AI-native capabilities, a new API Catalog, and updated plans and pricing

https://blog.postman.com/new-capabilities-march-2026/
1•thunderbong•1h ago•0 comments

What changed in tech from 2010 to 2020?

https://www.tedsanders.com/what-changed-in-tech-from-2010-to-2020/
3•endorphine•1h ago•0 comments

From Human Ergonomics to Agent Ergonomics

https://wesmckinney.com/blog/agent-ergonomics/
1•Anon84•1h ago•0 comments

Advanced Inertial Reference Sphere

https://en.wikipedia.org/wiki/Advanced_Inertial_Reference_Sphere
1•cyanf•1h ago•0 comments

Toyota Developing a Console-Grade, Open-Source Game Engine with Flutter and Dart

https://www.phoronix.com/news/Fluorite-Toyota-Game-Engine
2•computer23•1h ago•0 comments

Ask HN: Thinking about memory for AI coding agents

7•hoangnnguyen•1w ago
I’ve been experimenting with AI coding agents in real day-to-day work and ran into a recurring problem: I keep repeating the same engineering principles over and over.

Things like validating input, being careful with new dependencies, or respecting certain product constraints. The usual solutions are prompts or rules.

After using both for a while, neither felt right:

- Prompts disappear after each task.
- Rules only trigger in narrow contexts, often tied to specific files or patterns.
- Some principles are personal preferences, not something I want enforced at the project level.
- Others aren’t really “rules” at all, but knowledge about product constraints and past tradeoffs.

That led me to experiment with a separate “memory” layer for AI agents. Not chat history, but small, atomic pieces of knowledge: decisions, constraints, and recurring principles that can be retrieved when relevant.
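
A minimal sketch of that shape in Python (the entry fields, the tag-overlap scoring, and all names here are illustrative assumptions, not the actual experiment):

    # Toy memory layer: small, atomic entries retrieved by relevance.
    from dataclasses import dataclass, field

    @dataclass
    class MemoryEntry:
        kind: str    # "decision" | "constraint" | "principle"
        text: str    # one atomic piece of knowledge, kept deliberately short
        tags: set = field(default_factory=set)

    class MemoryStore:
        def __init__(self):
            self.entries = []

        def remember(self, entry):
            self.entries.append(entry)

        def recall(self, task_words, limit=5):
            # Crude tag-overlap relevance; a real layer would use embeddings.
            hits = [e for e in self.entries if e.tags & task_words]
            hits.sort(key=lambda e: len(e.tags & task_words), reverse=True)
            return hits[:limit]

    store = MemoryStore()
    store.remember(MemoryEntry("principle", "Validate all external input.", {"input", "api"}))
    store.remember(MemoryEntry("constraint", "No new dependencies without review.", {"deps"}))
    for e in store.recall({"api", "input"}):
        print(e.kind, "-", e.text)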

A few things became obvious once I started using it seriously:

- vague memory leads to vague behavior
- long memory pollutes context
- duplicate entries make retrieval worse
- many issues only show up when you actually depend on the agent daily
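
Two of those, context pollution and duplicates, are mechanical enough to guard against directly. A rough sketch, where the normalization and the character budget are arbitrary choices:

    import hashlib

    def dedup_key(text):
        # Normalize before hashing so trivially reworded duplicates collide.
        return hashlib.sha256(" ".join(text.lower().split()).encode()).hexdigest()

    def inject(entries, char_budget=2000):
        # entries: memory texts, assumed pre-sorted by relevance.
        # Hard cap on injected memory, since long memory pollutes context.
        seen, picked, used = set(), [], 0
        for text in entries:
            key = dedup_key(text)
            if key in seen or used + len(text) > char_budget:
                continue
            seen.add(key)
            picked.append(text)
            used += len(text)
        return "\n".join("- " + t for t in picked)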

AI was great at executing once the context was right. But deciding what should be remembered, what should be rejected, and when predictability matters more than cleverness still required human judgment.

Curious how others are handling this. Are you relying mostly on prompts, rules, or some form of persistent knowledge when working with AI coding agents?

Comments

hoangnnguyen•1w ago
I tried to build an experiment; details are in this dev log: https://codeaholicguy.com/2026/01/24/i-use-ai-devkit-to-deve...
gauravsc•1w ago
We’ve built something very close to this, a memory and learning layer at versanovatech.com, and we posted about it here: https://news.ycombinator.com/from?site=versanovatech.com
7777777phil•1w ago
Earlier this month I argued why LLMs need episodic memory (https://philippdubach.com/posts/beyond-vector-search-why-llm...), and this lines up closely with what you’re describing.

But I’m not sure it’s a prompts-vs-rules problem. It’s more about remembering past decisions as decisions. Things like 'we avoided this dependency because it caused trouble before' or 'this validation exists due to a past incident' have sequence and context. Flattening them into embeddings or rules loses that. I see even the best models making those errors over a longer context right now.

My current view is that humans still need to control what gets remembered. Models are good at executing once context is right, but bad at deciding what deserves to persist.
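
One way to read “decisions as decisions”: each entry keeps its own why, ordering, and links instead of being flattened into a vector. A hypothetical record shape (field names invented for illustration):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Decision:
        what: str                 # "avoid dependency X"
        why: str                  # "it caused trouble before"
        decided_at: datetime      # sequence matters: later decisions win conflicts
        supersedes: str | None = None   # id of the earlier decision this replaces

    # Embedding only `what` keeps the topic but drops `why`, the ordering,
    # and the supersedes link: exactly the parts that make it a decision.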

qazxcvbnmlp•1w ago
Humans decide what to remember based on their emotions. LLMs don’t have emotions. Our definition of good and bad comes from our morals and emotions.

In the context of software development, requirements are based on what we want to do (which is based on emotion), and the methods we choose to implement them are based mostly on our predictions about what will and won’t work well.

Most of our affinity for good software development hygiene comes from the negative feelings caused by the extra work of bad development hygiene.

I think this explains a lot of the varied success with coding agents. You don’t talk to them the way you talk to an engineer, because with an engineer you know that they have a sense of what is good and bad. Coding agents won’t tell you what is good and bad. They have some limited heuristics, but they don’t understand nuance at all unless you prompt them on it.

Even if they could have unlimited context window and memory, they would still need to be able to tell which parts of that memory are important. For example, if the human gave them conflicting instructions, how do they resolve that?

I think we’ll eventually get to a state where a lot of the mechanics of coding and development can be incorporated into coding agents, but the what and why we build will still come from a human. An agent will be able to go from 0 to 100% by itself on a full-stack web application, including deployment with all the security compliance and logins and whatever else, but it still won’t know what is important to emphasize in that website. Should the images be bigger here, or the text? Questions like that.

dabaja•1w ago
Interesting framing, but I think emotions are a proxy for something more tractable: loss functions over time. Engineers remember bad hygiene because they've felt the cost. You can approximate this for agents by logging friction: how many iterations did a task take, how many reverts, how much human correction. Then weight memory retrieval by past-friction-on-similar-tasks. It's crude, but it lets the agent "learn" that certain shortcuts are expensive without needing emotions. The hard part is defining similarity well enough that the signal transfers. Still early, but directionally this has reduced repeat mistakes in our pipeline more than static rules did.
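
A rough sketch of that weighting (the signals and coefficients are illustrative, not the actual pipeline):

    from dataclasses import dataclass

    @dataclass
    class TaskLog:
        task: str
        iterations: int    # attempts the task took
        reverts: int       # commits rolled back
        corrections: int   # times a human had to step in

    def friction(log):
        # Crude linear score; the weights are arbitrary for illustration.
        return 3.0 * log.corrections + 1.0 * log.reverts + 0.25 * log.iterations

    def similarity(a, b):
        # Placeholder word overlap; defining this well is the hard part.
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / max(len(wa | wb), 1)

    def retrieval_weight(memory_task, current_task, log):
        # Memories from high-friction, similar past tasks surface first.
        return similarity(memory_task, current_task) * friction(log)
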
qazxcvbnmlp•1w ago
How do you choose which loss function over time to pursue?
dabaja•1w ago
Honestly, it's empirical. We started with what was easiest to measure: human correction rate. If I had to step in and fix something, that's a clear signal the agent took a bad path. Iterations and reverts turned out to be noisier -- sometimes high iteration count means the task was genuinely hard, not that the agent made a mistake. So we downweighted those. The meta-answer is: pick the metric that most directly captures "I wish the agent hadn't done that." For us that's human intervention. For a team with better test coverage, it might be test failures after commit. For infra work, maybe rollback frequency. There's no universal loss function — it depends on where your pain actually is. We just made it explicit and started logging it. The logging alone forced clarity.
nasnasnasnasnas•1w ago
CLAUDE.md / AGENTS.md can help with this... You can update it for a project with the base information that you want to give it... I think you can also put these in different places in your home dir too (Claude Code / opencode).
dabaja•1w ago
We hit something similar building document processing agents. What helped was treating memory as typed knowledge rather than flat text. Three buckets: (1) constraints -- hard rules that should always apply, (2) decisions -- past choices with context on why, (3) heuristics -- soft preferences that can be overridden. Retrieval then becomes: constraints always injected, decisions pulled by similarity to the current task, heuristics only when ambiguity is high. Still experimenting with how to detect "ambiguity" reliably -- right now it's a cheap classifier on the task description. The deduplication problem is real. We ended up hashing on (topic, decision_type) and forcing manual review when a collision is detected. Brutal but necessary. TBH we’re not home yet, but that’s the path we’re walking.
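
Roughly, that retrieval policy could look like this (the ambiguity classifier is stubbed out, and everything beyond the described (topic, decision_type) hash is assumed):

    import hashlib

    MEMORY = {"constraints": [], "decisions": [], "heuristics": []}

    def dedup_key(topic, decision_type):
        # A collision on (topic, decision_type) forces manual review.
        return hashlib.sha256(f"{topic}:{decision_type}".encode()).hexdigest()

    def is_ambiguous(task):
        return False  # stand-in for the cheap classifier on the task description

    def similar(entry, task):
        return any(w in task.lower() for w in entry["topic"].lower().split())

    def build_context(task):
        ctx = [e["text"] for e in MEMORY["constraints"]]                     # always injected
        ctx += [e["text"] for e in MEMORY["decisions"] if similar(e, task)]  # by similarity
        if is_ambiguous(task):                                               # only when ambiguous
            ctx += [e["text"] for e in MEMORY["heuristics"]]
        return ctx
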
GOGOGOD•1w ago
IMO CLIs already have strong constraint channels (CLAUDE.md / agents.md / rules); the real pain is manual upkeep + agents forgetting to read/update them. If docs can be auto-maintained, that’s actually useful “memory”. We’re shipping an OSS experiment in ~1 week: https://github.com/hakoniwaa/Squirrel