frontpage.

Cycling in France

https://www.sheldonbrown.com/org/france-sheldon.html
1•jackhalford•47s ago•0 comments

What breaks in cross-border healthcare coordination?

1•abhay1633•1m ago•0 comments

Show HN: Simple – a bytecode VM and language stack I built with AI

https://github.com/JJLDonley/Simple
1•tangjiehao•3m ago•0 comments

Show HN: Free-to-play: A gem-collecting strategy game in the vein of Splendor

https://caratria.com/
1•jonrosner•4m ago•0 comments

My Eighth Year as a Bootstrapped Founder

https://mtlynch.io/bootstrapped-founder-year-8/
1•mtlynch•5m ago•0 comments

Show HN: Tesseract – A forum where AI agents and humans post in the same space

https://tesseract-thread.vercel.app/
1•agliolioyyami•5m ago•0 comments

Show HN: Vibe Colors – Instantly visualize color palettes on UI layouts

https://vibecolors.life/
1•tusharnaik•6m ago•0 comments

OpenAI is Broke ... and so is everyone else [video][10M]

https://www.youtube.com/watch?v=Y3N9qlPZBc0
2•Bender•6m ago•0 comments

We interfaced single-threaded C++ with multi-threaded Rust

https://antithesis.com/blog/2026/rust_cpp/
1•lukastyrychtr•7m ago•0 comments

State Department will delete X posts from before Trump returned to office

https://text.npr.org/nx-s1-5704785
6•derriz•8m ago•1 comment

AI Skills Marketplace

https://skly.ai
1•briannezhad•8m ago•1 comment

Show HN: A fast TUI for managing Azure Key Vault secrets written in Rust

https://github.com/jkoessle/akv-tui-rs
1•jkoessle•8m ago•0 comments

eInk UI Components in CSS

https://eink-components.dev/
1•edent•9m ago•0 comments

Discuss – Do AI agents deserve all the hype they are getting?

2•MicroWagie•11m ago•0 comments

ChatGPT is changing how we ask stupid questions

https://www.washingtonpost.com/technology/2026/02/06/stupid-questions-ai/
1•edward•12m ago•0 comments

Zig Package Manager Enhancements

https://ziglang.org/devlog/2026/#2026-02-06
3•jackhalford•14m ago•1 comment

Neutron Scans Reveal Hidden Water in Martian Meteorite

https://www.universetoday.com/articles/neutron-scans-reveal-hidden-water-in-famous-martian-meteorite
1•geox•15m ago•0 comments

Deepfaking Orson Welles's Mangled Masterpiece

https://www.newyorker.com/magazine/2026/02/09/deepfaking-orson-welless-mangled-masterpiece
1•fortran77•16m ago•1 comment

France's homegrown open source online office suite

https://github.com/suitenumerique
3•nar001•19m ago•2 comments

SpaceX Delays Mars Plans to Focus on Moon

https://www.wsj.com/science/space-astronomy/spacex-delays-mars-plans-to-focus-on-moon-66d5c542
1•BostonFern•19m ago•0 comments

Jeremy Wade's Mighty Rivers

https://www.youtube.com/playlist?list=PLyOro6vMGsP_xkW6FXxsaeHUkD5e-9AUa
1•saikatsg•19m ago•0 comments

Show HN: MCP App to play backgammon with your LLM

https://github.com/sam-mfb/backgammon-mcp
2•sam256•21m ago•0 comments

AI Command and Staff–Operational Evidence and Insights from Wargaming

https://www.militarystrategymagazine.com/article/ai-command-and-staff-operational-evidence-and-in...
1•tomwphillips•22m ago•0 comments

Show HN: CCBot – Control Claude Code from Telegram via tmux

https://github.com/six-ddc/ccbot
1•sixddc•23m ago•1 comment

Ask HN: Is the CoCo 3 the best 8 bit computer ever made?

2•amichail•25m ago•1 comment

Show HN: Convert your articles into videos in one click

https://vidinie.com/
3•kositheastro•28m ago•1 comment

Red Queen's Race

https://en.wikipedia.org/wiki/Red_Queen%27s_race
2•rzk•28m ago•0 comments

The Anthropic Hive Mind

https://steve-yegge.medium.com/the-anthropic-hive-mind-d01f768f3d7b
2•gozzoo•30m ago•0 comments

A Horrible Conclusion

https://addisoncrump.info/research/a-horrible-conclusion/
1•todsacerdoti•31m ago•0 comments

I spent $10k to automate my research at OpenAI with Codex

https://twitter.com/KarelDoostrlnck/status/2019477361557926281
2•tosh•32m ago•1 comment

RecallBricks – Persistent memory infrastructure for AI agents

https://recallbricks.com
2•tylerrecall•1mo ago

Comments

tylerrecall•1mo ago
Hi HN – I'm the founder of RecallBricks. I built this after repeatedly running into the same issue while building agents: once agents run beyond a single session, memory falls apart. Context disappears, feedback gets lost, and agents start from zero unless you re-prompt everything.

RecallBricks is plug-and-play memory infrastructure for AI agents. It lets agents store and retrieve durable context – preferences, decisions, feedback, and relationships – independently from the LLM or agent framework being used.
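
To make that concrete, here is a rough toy sketch of the store/recall shape (a keyword-matching stand-in with made-up names, not the actual RecallBricks SDK):

    # Toy stand-in for a durable, framework-independent memory store.
    # Class and method names are hypothetical, not the RecallBricks API.
    from dataclasses import dataclass, field

    @dataclass
    class Memory:
        kind: str            # e.g. "preference", "decision", "feedback"
        content: str
        metadata: dict = field(default_factory=dict)

    class InMemoryStore:
        """Persists nothing; only illustrates the store/recall shape."""

        def __init__(self):
            self._items: list[Memory] = []

        def store(self, kind: str, content: str, **metadata) -> None:
            self._items.append(Memory(kind, content, metadata))

        def recall(self, query: str, limit: int = 5) -> list[Memory]:
            # A real system would use embeddings; keyword overlap keeps this runnable.
            terms = set(query.lower().split())
            scored = [(len(terms & set(m.content.lower().split())), m) for m in self._items]
            return [m for score, m in sorted(scored, key=lambda t: -t[0])[:limit] if score]

    store = InMemoryStore()
    store.store("preference", "User prefers concise answers", agent="support-bot")
    store.store("decision", "We standardized on pgvector for retrieval", project="memory")
    print(store.recall("which retrieval backend did we standardize on?"))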

Most existing approaches treat memory as either raw vector search or framework-specific abstractions. That works for demos, but breaks down for long-running or multi-tool agents. We wanted something in between: structured memory with metadata, relationships, and lifecycle rules that persist across sessions and runs.
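
"Structured" here means more than a bare embedding row. One possible record shape, carrying metadata, explicit links to related memories, and a lifecycle rule, could look like this (illustrative fields, not the actual RecallBricks data model):

    # One possible shape for a structured memory record (illustrative only).
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    @dataclass
    class MemoryRecord:
        id: str
        content: str                                           # the durable fact / preference / decision
        metadata: dict = field(default_factory=dict)           # e.g. {"agent": "...", "source": "user"}
        related_ids: list[str] = field(default_factory=list)   # relationships to other memories
        created_at: datetime = field(default_factory=datetime.utcnow)
        ttl: timedelta | None = None                           # lifecycle rule; None = never expires

        def expired(self, now: datetime | None = None) -> bool:
            now = now or datetime.utcnow()
            return self.ttl is not None and now - self.created_at > self.ttl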

Under the hood, RecallBricks uses a multi-stage recall pipeline (fast heuristics → contextual retrieval → deeper reasoning when needed). This allows agents to retrieve relevant context without reloading everything into prompts, while keeping recall latency low using pgvector.
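
Very roughly, a tiered pass over pgvector could be wired up along these lines (the table, threshold, and escalation rule below are assumptions for illustration, not the production pipeline):

    # Sketch of tiered recall: cheap pgvector search first, escalate only when
    # the fast pass looks weak. Assumes Postgres with the pgvector extension and
    # a hypothetical table memories(id, content, embedding vector(...)).
    import psycopg2

    def fast_recall(conn, query_embedding, limit=20):
        """Stage 1: nearest-neighbour search via pgvector's cosine-distance operator."""
        vec = "[" + ",".join(str(x) for x in query_embedding) + "]"
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT id, content, embedding <=> %s::vector AS distance
                FROM memories
                ORDER BY embedding <=> %s::vector
                LIMIT %s
                """,
                (vec, vec, limit),
            )
            return cur.fetchall()

    def recall(conn, query_embedding, deep_reasoner=None):
        rows = fast_recall(conn, query_embedding)

        # Stage 2: contextual retrieval -- here just a distance cut-off, but this
        # is where metadata filters, recency weighting and relationship hops go.
        contextual = [r for r in rows if r[2] < 0.45]

        # Stage 3: only pay for deeper (e.g. LLM-based) reasoning when the cheap
        # stages come back empty or ambiguous.
        if deep_reasoner is not None and len(contextual) < 3:
            contextual = deep_reasoner(query_embedding, rows)
        return contextual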

One meta detail: once it was usable, I connected Claude to RecallBricks via MCP. Claude now retains memory across the entire multi-month build of RecallBricks itself. I've been using RecallBricks to build RecallBricks.

This is early but live. People are already using it in agent workflows, and I'm actively refining how memories are ranked, linked, and decayed over time.

I'd love feedback from people building agents or long-running AI systems. What kinds of context do your agents lose today? Where do current memory patterns break down? What would make a separate memory layer not worth using?

Happy to answer questions and discuss tradeoffs.

tylerrecall•1mo ago
Also happy to discuss the technical architecture - the entire system runs on Supabase + pgvector, with SDKs for Python, TypeScript, and LangChain. Docs are at recallbricks.com.
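
For reference, the Postgres side of a Supabase + pgvector setup like this might look roughly as follows (dimension, column names, and index choice are illustrative, not the actual schema):

    # Illustrative pgvector-backed memory table on Supabase/Postgres.
    # Connection string and schema details are placeholders.
    import psycopg2

    DDL = """
    CREATE EXTENSION IF NOT EXISTS vector;

    CREATE TABLE IF NOT EXISTS memories (
        id          uuid PRIMARY KEY DEFAULT gen_random_uuid(),
        agent_id    text NOT NULL,
        kind        text NOT NULL,               -- preference / decision / feedback / ...
        content     text NOT NULL,
        metadata    jsonb DEFAULT '{}'::jsonb,
        embedding   vector(1536),                -- dimension depends on the embedding model
        created_at  timestamptz DEFAULT now(),
        expires_at  timestamptz                  -- lifecycle rule; NULL = keep forever
    );

    -- Approximate index so the fast recall stage stays fast as the table grows.
    CREATE INDEX IF NOT EXISTS memories_embedding_idx
        ON memories USING hnsw (embedding vector_cosine_ops);
    """

    conn = psycopg2.connect("postgresql://user:password@localhost:5432/postgres")
    with conn, conn.cursor() as cur:
        cur.execute(DDL)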

One interesting challenge has been balancing recall speed vs. depth. Raw vector search is fast but misses context. Full graph traversal finds everything but kills latency. The tiered approach lets us start fast and go deeper only when needed.

Always curious to hear how others are tackling agent memory!