The better you get at something, the harder it becomes to do

https://seekingtrust.substack.com/p/improving-at-writing-made-me-almost
1•FinnLobsien•7s ago•0 comments

Show HN: WP Float – Archive WordPress blogs to free static hosting

https://wpfloat.netlify.app/
1•zizoulegrande•1m ago•0 comments

Show HN: I Hacked My Family's Meal Planning with an App

https://mealjar.app
1•melvinzammit•1m ago•0 comments

Sony BMG copy protection rootkit scandal

https://en.wikipedia.org/wiki/Sony_BMG_copy_protection_rootkit_scandal
1•basilikum•4m ago•0 comments

The Future of Systems

https://novlabs.ai/mission/
2•tekbog•5m ago•1 comment

NASA now allowing astronauts to bring their smartphones on space missions

https://twitter.com/NASAAdmin/status/2019259382962307393
2•gbugniot•9m ago•0 comments

Claude Code Is the Inflection Point

https://newsletter.semianalysis.com/p/claude-code-is-the-inflection-point
2•throwaw12•11m ago•1 comment

Show HN: MicroClaw – Agentic AI Assistant for Telegram, Built in Rust

https://github.com/microclaw/microclaw
1•everettjf•11m ago•2 comments

Show HN: Omni-BLAS – 4x faster matrix multiplication via Monte Carlo sampling

https://github.com/AleatorAI/OMNI-BLAS
1•LowSpecEng•12m ago•1 comment

The AI-Ready Software Developer: Conclusion – Same Game, Different Dice

https://codemanship.wordpress.com/2026/01/05/the-ai-ready-software-developer-conclusion-same-game...
1•lifeisstillgood•14m ago•0 comments

AI Agent Automates Google Stock Analysis from Financial Reports

https://pardusai.org/view/54c6646b9e273bbe103b76256a91a7f30da624062a8a6eeb16febfe403efd078
1•JasonHEIN•17m ago•0 comments

Voxtral Realtime 4B Pure C Implementation

https://github.com/antirez/voxtral.c
2•andreabat•19m ago•0 comments

I Was Trapped in Chinese Mafia Crypto Slavery [video]

https://www.youtube.com/watch?v=zOcNaWmmn0A
1•mgh2•26m ago•0 comments

U.S. CBP Reported Employee Arrests (FY2020 – FYTD)

https://www.cbp.gov/newsroom/stats/reported-employee-arrests
1•ludicrousdispla•27m ago•0 comments

Show HN: I built a free UCP checker – see if AI agents can find your store

https://ucphub.ai/ucp-store-check/
2•vladeta•33m ago•1 comment

Show HN: SVGV – A Real-Time Vector Video Format for Budget Hardware

https://github.com/thealidev/VectorVision-SVGV
1•thealidev•34m ago•0 comments

Study of 150 developers shows AI-generated code no harder to maintain long term

https://www.youtube.com/watch?v=b9EbCb5A408
1•lifeisstillgood•34m ago•0 comments

Spotify now requires premium accounts for developer mode API access

https://www.neowin.net/news/spotify-now-requires-premium-accounts-for-developer-mode-api-access/
1•bundie•37m ago•0 comments

When Albert Einstein Moved to Princeton

https://twitter.com/Math_files/status/2020017485815456224
1•keepamovin•39m ago•0 comments

Agents.md as a Dark Signal

https://joshmock.com/post/2026-agents-md-as-a-dark-signal/
2•birdculture•40m ago•0 comments

System time, clocks, and their syncing in macOS

https://eclecticlight.co/2025/05/21/system-time-clocks-and-their-syncing-in-macos/
1•fanf2•42m ago•0 comments

McCLIM and 7GUIs – Part 1: The Counter

https://turtleware.eu/posts/McCLIM-and-7GUIs---Part-1-The-Counter.html
2•ramenbytes•44m ago•0 comments

So what's the next word, then? Almost-no-math intro to transformer models

https://matthias-kainer.de/blog/posts/so-whats-the-next-word-then-/
1•oesimania•46m ago•0 comments

Ed Zitron: The Hater's Guide to Microsoft

https://bsky.app/profile/edzitron.com/post/3me7ibeym2c2n
2•vintagedave•49m ago•1 comment

UK infants ill after drinking contaminated Nestle and Danone baby formula

https://www.bbc.com/news/articles/c931rxnwn3lo
1•__natty__•49m ago•0 comments

Show HN: Android-based audio player for seniors – Homer Audio Player

https://homeraudioplayer.app
3•cinusek•50m ago•2 comments

Starter Template for Ory Kratos

https://github.com/Samuelk0nrad/docker-ory
1•samuel_0xK•51m ago•0 comments

LLMs are powerful, but enterprises are deterministic by nature

2•prateekdalal•55m ago•0 comments

Make your iPad 3 a touchscreen for your computer

https://github.com/lemonjesus/ipad-touch-screen
2•0y•1h ago•1 comment

Internationalization and Localization in the Age of Agents

https://myblog.ru/internationalization-and-localization-in-the-age-of-agents
1•xenator•1h ago•0 comments

Why do so many "agentic AI" systems collapse without persistent state?

3•JohannesGlaser•1mo ago
I’ve been thinking a lot about what’s currently called “agentic AI”. Many systems try to achieve agent-like behavior through planning, tool use, orchestration layers, or increasingly careful prompting. In practice, what I keep running into is that these systems don’t fail because models can’t reason or plan, but because they lack stable state. Without persistent state, coherence has to be re-established every turn. The result is longer prompts, retrieval pipelines, guardrails, and corrective instructions — all of which help access information, but don’t really solve continuity over time.

I’ve been experimenting with a different approach: making state explicit and persistent outside the model, but directly attached to the assistant’s working environment. Append-only logs, rules, inventories, histories — readable files that the model initializes from every run. Not queried opportunistically like a vector DB, just present as working context. Once state is stable, a lot of “agentic” behavior seems to emerge naturally. The system stops reacting moment by moment and starts behaving coherently across longer timescales.
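
Roughly, the shape of the loop I mean, as a minimal sketch (the file names and the call_llm() stub are placeholders, not a real client):

```python
from pathlib import Path

STATE_DIR = Path("agent_state")                          # hypothetical location
ARTIFACTS = ["rules.md", "inventory.md", "history.log"]  # fixed, inspectable set

def load_state() -> str:
    """Concatenate the persistent artifacts into one context block."""
    sections = []
    for name in ARTIFACTS:
        path = STATE_DIR / name
        if path.exists():
            sections.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(sections)

def append_event(entry: str) -> None:
    """Append-only: state is extended, never rewritten in place."""
    STATE_DIR.mkdir(exist_ok=True)
    with open(STATE_DIR / "history.log", "a") as log:
        log.write(entry.rstrip() + "\n")

def call_llm(system: str, user: str) -> str:
    """Stand-in for any chat-completion API client."""
    raise NotImplementedError

def run_turn(user_message: str) -> str:
    # Every run boots from the same durable files, not from prompt history.
    context = load_state()
    reply = call_llm(system=context, user=user_message)
    append_event(f"user: {user_message}")
    append_event(f"assistant: {reply}")
    return reply
```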

I’m curious how others here see this: Is persistent state under-discussed compared to planning and tooling? For those building agents with RAG / LangChain / similar stacks: how do you handle continuity across days or weeks? Am I underestimating what current agent frameworks already solve here?

Would love technical perspectives or counterexamples.

Comments

verdverm•1mo ago
> readable files that the model initializes from every run

This is how AGENTS.md et al. are supposed to work; you can include many more things, like ...

> Append-only logs, rules, inventories, histories

I include open terminals and files, for example; these may make it into the system prompt. The same problem arises here: how much, and when. Same story for tools, MCP, skills.

> Without persistent state

There are a lot of different ways people are approaching this. In the end, you are just prewarming a cache (system prompt). The next step is to give the agent control over that system prompt towards self-controlled / dynamic context engineering.

You, increasingly in collaboration with an agent, are doing context engineering. One can extend the analogy toward a memory or knowledge hierarchy. You're also going to want a table of contents or a librarian (a context-collecting subagent or phase, search, lots of open design space here).
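
As a rough sketch of what I mean by prewarming the cache (the source names and budget are made up, and a real setup would count tokens rather than characters):

```python
from dataclasses import dataclass

@dataclass
class ContextSource:
    name: str
    content: str
    priority: int  # lower value = more important, included first

def build_system_prompt(sources: list[ContextSource], budget_chars: int = 8000) -> str:
    """Greedy inclusion by priority until the budget runs out:
    the 'how much and when' problem, made explicit."""
    parts, used = [], 0
    for src in sorted(sources, key=lambda s: s.priority):
        block = f"# {src.name}\n{src.content}\n"
        if used + len(block) > budget_chars:
            continue  # skip lower-priority material rather than truncate mid-block
        parts.append(block)
        used += len(block)
    return "\n".join(parts)

# Example: the kinds of sources mentioned above, in priority order.
sources = [
    ContextSource("AGENTS.md", "...project conventions...", priority=0),
    ContextSource("open_files", "...editor buffers...", priority=1),
    ContextSource("terminal", "...recent shell output...", priority=2),
]
print(build_system_prompt(sources))
```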

JohannesGlaser•1mo ago
I think there’s an important distinction here.

What you describe (AGENTS.md, open files, terminals, system prompts) is still context shaping inside the prompt space. It’s about what to load and how much, and yes, that quickly turns into dynamic context engineering.

What I’m experimenting with is one step earlier: treating state as an external artifact, not as an emergent property of the prompt. The files aren’t hints or instructions that compete for relevance, but the assistant’s working state itself. On initialization, the model doesn’t decide what to pull in; it reconstructs orientation from a fixed set of artifacts.

In that sense it’s not prewarming a cache so much as rebuilding a process from disk. Forgetting, correction, and continuity are handled by explicitly changing those artifacts, not by prompt evolution.

I agree there’s a lot of open design space here. My main point was that persistent state tends to be discussed as a prompt or retrieval problem, whereas treating it as first-class state changes the failure modes quite a bit.

Curious how far current agent frameworks really go in that direction in practice.

verdverm•1mo ago
What you are describing is context construction. When working with agents and LLMs, the only way you get anything beyond their training is through the system prompt and message history. There is nothing else.

You can dress this up in whatever fancy notions and anthropomorphic concepts you want, but in the end it is just context engineering, regardless of how and when you create, store, retrieve, and inject artifacts. A good framework will give you building blocks and flexibility in how you use them and how that happens. That's why I use ADK, anyway.

Maybe you are talking about giving the agent tools for working with this state or cache? I have that in my ADK-based setup.

If I have a root AGENTS.md, or a user level file of similar nature, and these are always loaded for every conversation, how is what you are talking about different?

JohannesGlaser•1mo ago
At the lowest level, you’re right: everything the model ever sees is context. I’m not claiming a channel beyond tokens. The distinction I’m trying to draw isn’t where state ends up, but how it is governed.

AGENTS.md (and similar conventions) are a good step toward making agent context explicit. But they are still instructional artifacts: static guidance that gets loaded into the prompt. They don’t define a state lifecycle. They don’t encode history, correction, or invalidation over time. And they don’t change unless a human edits them. In most agent setups I’ve worked with, “state” is assembled per turn. An agent or orchestration layer decides what to include, summarize, drop, or rewrite. That makes continuity an emergent property of context engineering. It works locally, but over time you see drift, silent overwrites, and loss of accountability.

What I’m experimenting with is treating state as a process artifact, not an input artifact. The assistant doesn’t curate its own context. On startup, it reconstructs orientation from a fixed, inspectable set of external files — logs, rules, inventories — with explicit lifecycle rules. State changes happen deliberately (append, correct, invalidate), not implicitly via prompt evolution.
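
A minimal sketch of those lifecycle semantics (the record shape and path are illustrative, not a real implementation): corrections and invalidations are new records over an append-only log, so nothing is silently overwritten and the current state is a deterministic replay.

```python
import json
import time
from pathlib import Path

LOG = Path("agent_state/state.log")  # hypothetical path

def _write(record: dict) -> None:
    LOG.parent.mkdir(parents=True, exist_ok=True)
    record["ts"] = time.time()
    with open(LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

def append(fact_id: str, content: str) -> None:
    _write({"op": "append", "id": fact_id, "content": content})

def correct(fact_id: str, content: str) -> None:
    # A correction supersedes an earlier record without destroying it.
    _write({"op": "correct", "id": fact_id, "content": content})

def invalidate(fact_id: str) -> None:
    _write({"op": "invalidate", "id": fact_id})

def current_state() -> dict:
    """Replay the log: the last correction wins, invalidated facts drop out."""
    state: dict = {}
    if not LOG.exists():
        return state
    for line in LOG.read_text().splitlines():
        rec = json.loads(line)
        if rec["op"] in ("append", "correct"):
            state[rec["id"]] = rec["content"]
        else:  # "invalidate"
            state.pop(rec["id"], None)
    return state
```

The point of the replay is auditability: you can always ask why the assistant believes something, because every belief traces back to a dated record.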

So yes, the model ultimately reads tokens. But forgetting, correction, and continuity are handled outside the prompt logic. The prompt becomes closer to a bootloader than a workspace.

If you always load a root AGENTS.md plus a stable artifact set, the surface can look similar. In practice, the difference shows up in failure modes: how systems degrade over weeks instead of minutes.

I’m not arguing current frameworks can’t approximate this — just that persistent state is usually framed as a context problem, rather than as first-class state with explicit lifecycle semantics. That shift changes what “agentic” failure even looks like.