frontpage.

A Night Without the Nerds – Claude Opus 4.6, Field-Tested

https://konfuzio.com/en/a-night-without-the-nerds-claude-opus-4-6-in-the-field-test/
1•konfuzio•1m ago•0 comments

Could ionospheric disturbances influence earthquakes?

https://www.kyoto-u.ac.jp/en/research-news/2026-02-06-0
1•geox•3m ago•0 comments

SpaceX's next astronaut launch for NASA is officially on for Feb. 11 as FAA clea

https://www.space.com/space-exploration/launches-spacecraft/spacexs-next-astronaut-launch-for-nas...
1•bookmtn•4m ago•0 comments

Show HN: One-click AI employee with its own cloud desktop

https://cloudbot-ai.com
1•fainir•6m ago•0 comments

Show HN: Poddley – Search podcasts by who's speaking

https://poddley.com
1•onesandofgrain•7m ago•0 comments

Same Surface, Different Weight

https://www.robpanico.com/articles/display/?entry_short=same-surface-different-weight
1•retrocog•9m ago•0 comments

The Rise of Spec Driven Development

https://www.dbreunig.com/2026/02/06/the-rise-of-spec-driven-development.html
2•Brajeshwar•14m ago•0 comments

The first good Raspberry Pi Laptop

https://www.jeffgeerling.com/blog/2026/the-first-good-raspberry-pi-laptop/
3•Brajeshwar•14m ago•0 comments

Seas to Rise Around the World – But Not in Greenland

https://e360.yale.edu/digest/greenland-sea-levels-fall
2•Brajeshwar•14m ago•0 comments

Will Future Generations Think We're Gross?

https://chillphysicsenjoyer.substack.com/p/will-future-generations-think-were
1•crescit_eundo•17m ago•0 comments

State Department will delete Xitter posts from before Trump returned to office

https://www.npr.org/2026/02/07/nx-s1-5704785/state-department-trump-posts-x
2•righthand•20m ago•1 comments

Show HN: Verifiable server roundtrip demo for a decision interruption system

https://github.com/veeduzyl-hue/decision-assistant-roundtrip-demo
1•veeduzyl•21m ago•0 comments

Impl Rust – Avro IDL Tool in Rust via Antlr

https://www.youtube.com/watch?v=vmKvw73V394
1•todsacerdoti•21m ago•0 comments

Stories from 25 Years of Software Development

https://susam.net/twenty-five-years-of-computing.html
3•vinhnx•22m ago•0 comments

minikeyvalue

https://github.com/commaai/minikeyvalue/tree/prod
3•tosh•27m ago•0 comments

Neomacs: GPU-accelerated Emacs with inline video, WebKit, and terminal via wgpu

https://github.com/eval-exec/neomacs
1•evalexec•31m ago•0 comments

Show HN: Moli P2P – An ephemeral, serverless image gallery (Rust and WebRTC)

https://moli-green.is/
2•ShinyaKoyano•35m ago•1 comments

How I grow my X presence?

https://www.reddit.com/r/GrowthHacking/s/UEc8pAl61b
2•m00dy•37m ago•0 comments

What's the cost of the most expensive Super Bowl ad slot?

https://ballparkguess.com/?id=5b98b1d3-5887-47b9-8a92-43be2ced674b
1•bkls•38m ago•0 comments

What if you just did a startup instead?

https://alexaraki.substack.com/p/what-if-you-just-did-a-startup
5•okaywriting•44m ago•0 comments

Hacking up your own shell completion (2020)

https://www.feltrac.co/environment/2020/01/18/build-your-own-shell-completion.html
2•todsacerdoti•47m ago•0 comments

Show HN: Gorse 0.5 – Open-source recommender system with visual workflow editor

https://github.com/gorse-io/gorse
1•zhenghaoz•48m ago•0 comments

GLM-OCR: Accurate × Fast × Comprehensive

https://github.com/zai-org/GLM-OCR
1•ms7892•49m ago•0 comments

Local Agent Bench: Test 11 small LLMs on tool-calling judgment, on CPU, no GPU

https://github.com/MikeVeerman/tool-calling-benchmark
1•MikeVeerman•50m ago•0 comments

Show HN: AboutMyProject – A public log for developer proof-of-work

https://aboutmyproject.com/
1•Raiplus•50m ago•0 comments

Expertise, AI and Work of Future [video]

https://www.youtube.com/watch?v=wsxWl9iT1XU
1•indiantinker•50m ago•0 comments

So Long to Cheap Books You Could Fit in Your Pocket

https://www.nytimes.com/2026/02/06/books/mass-market-paperback-books.html
4•pseudolus•51m ago•2 comments

PID Controller

https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%E2%80%93derivative_controller
1•tosh•55m ago•0 comments

SpaceX Rocket Generates 100GW of Power, or 20% of US Electricity

https://twitter.com/AlecStapp/status/2019932764515234159
2•bkls•55m ago•1 comments

Kubernetes MCP Server

https://github.com/yindia/rootcause
1•yindia•56m ago•0 comments

Show HN: A-MEM – Memory for Claude Code that links and evolves on its own

https://github.com/DiaaAj/a-mem-mcp
8•AttentionBlock•3w ago
Hi HN,

I have recently been geeking out on agentic memory, and I believe I finally came up with something that works.

I spent the last couple of weeks building a memory[0] for Claude Code that dynamically evolves as you talk with it. I followed a new paradigm inspired by the Zettelkasten method. Whenever Claude discovers something new in the codebase or during your conversation with it, it writes a note about it, links it with related memories, updates those related memories accordingly, and stores it in ChromaDB (this is the part where it keeps self-evolving based on new inputs). Later, when you ask it to do/explore/implement something, it peeks into its memories (breadth-first), then drills into matches (depth-first). The graph that backs this memory is untyped, so Claude isn't limited to a predefined set of relations. It's also time-aware, so you can ask it to recall from yesterday, for example.
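
To make that concrete, here's a minimal illustrative sketch of the write/recall loop, not the actual a-mem-mcp code: the note schema, collection name, and linking heuristic are simplified placeholders, only the ChromaDB client calls are real.

  # Illustrative sketch: atomic notes in ChromaDB, untyped links, timestamps.
  import time
  import chromadb

  client = chromadb.PersistentClient(path="./memory-store")
  notes = client.get_or_create_collection("agent_notes")

  def write_note(note_id: str, text: str) -> None:
      """Store a note, link it to nearby notes, and leave a back-link
      on those notes so older memories 'learn' about the new one."""
      linked_ids, linked_metas = [], []
      if notes.count() > 0:
          related = notes.query(query_texts=[text], n_results=min(3, notes.count()))
          linked_ids, linked_metas = related["ids"][0], related["metadatas"][0]
      notes.add(
          ids=[note_id],
          documents=[text],
          metadatas=[{
              "created_at": time.time(),      # enables time-aware recall
              "links": ",".join(linked_ids),  # untyped edges to related notes
          }],
      )
      # "Evolution" step: each related note is updated to point back at the new one.
      for rid, meta in zip(linked_ids, linked_metas):
          links = set(filter(None, meta.get("links", "").split(",")))
          links.add(note_id)
          notes.update(ids=[rid], metadatas=[{**meta, "links": ",".join(sorted(links))}])

  def recall(query: str) -> list[str]:
      """Breadth-first: grab the top matches; depth-first: follow their links."""
      if notes.count() == 0:
          return []
      hits = notes.query(query_texts=[query], n_results=min(5, notes.count()))
      ids = list(hits["ids"][0])
      for meta in hits["metadatas"][0]:
          ids.extend(filter(None, meta.get("links", "").split(",")))
      return notes.get(ids=list(dict.fromkeys(ids)))["documents"]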

What motivated me to build this tool is that I work with big codebases. I spend hours with Claude digging into some functionality, and it was frustrating to have to start from scratch the next day/session. I can resume the same session, but at some point, when the conversation gets too long, the response quality drops.

I tried some of the available solutions; most of them are knowledge banks or some form of static RAG, which didn't do it for me.

Here's an example scenario of how it helped me: I had to debug an error coming from one of the new functionalities Claude had implemented a few days earlier. When I shared the error message with Claude, it immediately recognized that this was a change we made recently, recalled why and how we implemented it, and managed to debug it fairly quickly.

Another example: Vercel has encapsulated 10+ years of Next.js optimization knowledge into a bunch of .md files[1]. This is something I would love to inject into my agent's memory (if only I used Next.js).

Some of the limitations are:

  - Sometimes the agent forgets about it and I need to give it a nudge; however, I implemented some hooks, and so far they seem to be doing the job.

  - Response time: it's still way faster than re-exploring and discovering everything from scratch, but I would love to see it get even faster.

  - Categories: some memories are project-specific, while others aren't, e.g. preferences, best practices, etc.

What I built is based on concepts from the A-MEM paper[2] with some tweaks, so I didn't personally invent anything new :)

Looking for your feedback :)

[0]: https://github.com/DiaaAj/a-mem-mcp/tree/main
[1]: https://vercel.com/blog/introducing-react-best-practices
[2]: https://arxiv.org/pdf/2502.12110

Comments

mastermindSDE•3w ago
Cool idea and implementation! Does this support other agents, like Gemini-cli or local agents like qwen-code?
AttentionBlock•3w ago
Thanks for the feedback!

I am planning to extend it to other agents, but for now it should work with some caveats.

I have configured Claude-specific hooks; the hooks keep reminding Claude Code to use the memory when needed.

Without them, other agents will keep forgetting to use it and you would need to keep nudging them.
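
Conceptually the hook is just a reminder injected on every prompt, something along these lines in .claude/settings.json (the event name follows Claude Code's hooks config as I understand it, and the reminder text is a placeholder, not the exact hook shipped with a-mem-mcp):

  {
    "hooks": {
      "UserPromptSubmit": [
        {
          "hooks": [
            {
              "type": "command",
              "command": "echo 'Reminder: check the A-MEM memory tools for relevant notes before exploring.'"
            }
          ]
        }
      ]
    }
  }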

bisonbear•3w ago
curious how this is different from claude-mem?

https://github.com/thedotmack/claude-mem

AttentionBlock•3w ago
great question

claude-mem uses a compaction approach. It records session activity, compresses it, and injects summaries into future sessions. Great for replaying what happened.

A-MEM builds a self-evolving knowledge graph. Memories aren’t compressed logs. They’re atomic insights that automatically link to related memories and update each other over time. Newer memories impact past memories.

For example: if Claude learns “auth uses JWT” in session 1, then learns “JWT tokens expire after 1 hour” in session 5, A-MEM links these memories and updates the context on both. The older memory now knows about expiration. With compaction, these stay as separate compressed logs that don’t talk to each other.
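
Roughly, that evolution step works like this (illustrative Python sketch, not the actual a-mem-mcp code; revise_context is a hypothetical stand-in for an LLM call that merges the new insight into the old note's context):

  from dataclasses import dataclass, field

  @dataclass
  class Note:
      id: str
      text: str
      context: str = ""
      links: set[str] = field(default_factory=set)

  def revise_context(old: Note, new: Note) -> str:
      # Hypothetical stand-in for an LLM call.
      return f"{old.context} Related update: {new.text}".strip()

  def evolve(store: dict[str, Note], new: Note, related_ids: list[str]) -> None:
      store[new.id] = new
      for rid in related_ids:
          old = store[rid]
          old.links.add(new.id)                    # untyped, session-spanning edge
          new.links.add(rid)
          old.context = revise_context(old, new)   # the older memory is updated too

  # Session 1 vs. session 5 from the example above:
  store = {"jwt": Note("jwt", "auth uses JWT", context="Auth layer notes.")}
  evolve(store, Note("exp", "JWT tokens expire after 1 hour"), related_ids=["jwt"])
  print(store["jwt"].context)  # now mentions the expiration insight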