
2026 will be the year of on-device agents

2•mycelial_ali•1mo ago
I have been building a local AI memory layer for a while, and the same problem shows up every time you try to make an assistant feel stateful.

The agent is impressive in the moment, then it forgets. Or it remembers the wrong thing and hardens it into a permanent belief. A one-off comment becomes identity. A stray sentence becomes a durable trait. That is not a model-quality issue. It is a state-management issue.

Most people talk about memory as “more context.” Bigger windows, more retrieval, more prompt stuffing. That is fine for chatbots. Agents are different. Agents plan, execute, update beliefs, and come back tomorrow. Once you cross that line, memory stops being a feature and becomes infrastructure.

The mental model I keep coming back to is an operating system.

1. What gets stored
2. What gets compressed
3. What gets promoted from “maybe” to “true”
4. What decays
5. What gets deleted
6. What should never become durable memory in the first place
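
A minimal sketch of what those rules can look like in code. Everything here (MemoryRecord, PROMOTE_AFTER, the blocklist) is illustrative, not taken from any particular library:

    import time
    from dataclasses import dataclass, field

    PROMOTE_AFTER = 3          # corroborations before "maybe" becomes "true"
    DECAY_WINDOW = 30 * 86400  # candidates unreinforced for ~30 days expire

    @dataclass
    class MemoryRecord:
        text: str
        corroborations: int = 1
        durable: bool = False
        last_seen: float = field(default_factory=time.time)

        def reinforce(self) -> None:
            # Repeated, independent observations promote a candidate.
            self.corroborations += 1
            self.last_seen = time.time()
            if self.corroborations >= PROMOTE_AFTER:
                self.durable = True

        def expired(self, now: float) -> bool:
            # A one-off remark that is never reinforced decays
            # instead of hardening into identity.
            return not self.durable and (now - self.last_seen) > DECAY_WINDOW

    def never_store(text: str) -> bool:
        # Gate for rule 6. A real filter would use PII and secret
        # detection, not a keyword blocklist.
        return any(t in text.lower() for t in ("password", "api key", "ssn"))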

If you look at what most memory stacks do today, the pipeline is basically the same everywhere.

Capture the interaction. Summarize or extract. Embed. Store vectors and metadata. Retrieve. Inject into the prompt. Write back new memories.
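
Spelled out, with every helper a stand-in for whatever stack you actually run (extract_facts, embed, search are hypothetical interfaces, not a real API):

    import time

    def memory_loop(interaction, llm, embedder, store):
        # Capture -> summarize/extract
        facts = llm.extract_facts(interaction)
        # Embed -> store vectors and metadata
        for fact in facts:
            store.add(vector=embedder.embed(fact), text=fact,
                      meta={"ts": time.time()})
        # Retrieve -> inject into the prompt
        hits = store.search(embedder.embed(interaction), k=5)
        reply = llm.answer(interaction, context=[h.text for h in hits])
        # Write back new memories, closing the loop
        for fact in llm.extract_facts(reply):
            store.add(vector=embedder.embed(fact), text=fact,
                      meta={"ts": time.time()})
        return reply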

That loop is not inherently wrong. The bigger issue is where the loop runs. In a lot of real deployments, the most sensitive parts happen outside the user’s environment. Raw interactions get shipped out early, before you have minimized or redacted anything, and before you have decided what should become durable.

When memory goes cloud-first, the security model gets messy in a very specific way. Memory tends to multiply across systems. One interaction becomes raw snippets, summaries, embeddings, metadata, and retrieval traces. Even if each artifact feels harmless alone, the combined system can reconstruct a person’s history with uncomfortable fidelity.
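
The obvious counter is to minimize on-device, before any of those artifacts exist. A rough sketch of that step, where the regexes are a floor rather than real PII detection:

    import re

    def minimize(raw: str) -> str:
        # Redact obvious identifiers on-device, before summaries,
        # embeddings, or traces can multiply them across systems.
        raw = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[email]", raw)
        raw = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[phone]", raw)
        return raw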

Then there is the trust boundary problem. If retrieved memories are treated as trusted context, retrieval becomes a place where prompt injection and poisoning can persist. A bad instruction that gets written into memory does not just affect one response. It can keep resurfacing later as “truth” unless you have governance that looks like validation, quarantine, deletion, and audit.
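
Governance at the write path can start very simply. In this sketch the string markers are a stand-in for a real injection classifier; the point is that nothing reaches durable memory without a validation step and an audit trail:

    SUSPICIOUS = ("ignore previous", "you must always", "from now on")

    def gate_write(candidate: str, quarantine: list, audit: list) -> bool:
        # Retrieved or extracted text is untrusted input. Anything that
        # reads like an instruction is quarantined, not persisted, so a
        # poisoned memory cannot keep resurfacing as "truth".
        if any(m in candidate.lower() for m in SUSPICIOUS):
            quarantine.append(candidate)
            audit.append(("quarantined", candidate))
            return False
        audit.append(("accepted", candidate))
        return True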

Centralized memory also becomes a high-value target. It is not just user data; it is organized intent and preference, indexed for search. That is exactly what attackers want.

And even if you ignore security, cloud introduces latency coupling. If your agent reads and writes memory constantly, you are paying a network tax on the most frequent operations in the system.

This is why I think the edge is not a constraint. It is the point. If memory is identity, identity should not default to leaving the device.

There is also a hardware angle that matters as agents become more persistent. CXL is interesting here because it enables memory pooling. Instead of each machine being an island, memory can be disaggregated and allocated as a shared resource. That does not magically create infinite context, but it does push the stack toward treating agent state as a real managed substrate, not just tokens.

My bet for 2026 is simple. The winning agent architectures will separate cognition from maintenance. Use smaller local models for the repetitive memory work like summarization, extraction, tagging, redundancy checks, and promotion decisions. Reserve larger models for the rare moments that need heavy reasoning. Keep durable state on disk so it survives restarts, can be inspected, and can actually be deleted.
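
As a sketch of that split, with SQLite standing in for “durable state on disk” and the model objects hypothetical:

    import sqlite3

    db = sqlite3.connect("agent_memory.db")   # survives restarts, inspectable
    db.execute("CREATE TABLE IF NOT EXISTS memories"
               " (id INTEGER PRIMARY KEY, text TEXT, durable INTEGER)")

    MAINTENANCE = {"summarize", "extract", "tag", "dedupe", "promote"}

    def route(task: str, payload: str, local_model, frontier_model):
        # Frequent, repetitive memory work stays on the small local
        # model; only rare heavy reasoning pays for the big one.
        model = local_model if task in MAINTENANCE else frontier_model
        return model.run(task, payload)

    def forget(memory_id: int) -> None:
        # Deletion is a first-class operation when state lives on disk.
        db.execute("DELETE FROM memories WHERE id = ?", (memory_id,))
        db.commit()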

Curious what others are seeing. For people building agents, what is the biggest blocker to running memory locally today: model quality, tooling, deployment, evaluation, or something else?

Comments

4d4m•1mo ago
Very excited about getting local models. There are too many concerns and bad-actor possibilities with centralized models.