frontpage.

Study confirms experience beats youthful enthusiasm

https://www.theregister.com/2026/02/07/boomers_vs_zoomers_workplace/
1•Willingham•5m ago•0 comments

The Big Hunger by Walter J Miller, Jr. (1952)

https://lauriepenny.substack.com/p/the-big-hunger
1•shervinafshar•6m ago•0 comments

The Genus Amanita

https://www.mushroomexpert.com/amanita.html
1•rolph•11m ago•0 comments

We have broken SHA-1 in practice

https://shattered.io/
1•mooreds•12m ago•1 comment

Ask HN: Was my first management job bad, or is this what management is like?

1•Buttons840•13m ago•0 comments

Ask HN: How to Reduce Time Spent Crimping?

1•pinkmuffinere•14m ago•0 comments

KV Cache Transform Coding for Compact Storage in LLM Inference

https://arxiv.org/abs/2511.01815
1•walterbell•19m ago•0 comments

A quantitative, multimodal wearable bioelectronic device for stress assessment

https://www.nature.com/articles/s41467-025-67747-9
1•PaulHoule•21m ago•0 comments

Why Big Tech Is Throwing Cash into India in Quest for AI Supremacy

https://www.wsj.com/world/india/why-big-tech-is-throwing-cash-into-india-in-quest-for-ai-supremac...
1•saikatsg•21m ago•0 comments

How to shoot yourself in the foot – 2026 edition

https://github.com/aweussom/HowToShootYourselfInTheFoot
1•aweussom•21m ago•0 comments

Eight More Months of Agents

https://crawshaw.io/blog/eight-more-months-of-agents
3•archb•23m ago•0 comments

From Human Thought to Machine Coordination

https://www.psychologytoday.com/us/blog/the-digital-self/202602/from-human-thought-to-machine-coo...
1•walterbell•23m ago•0 comments

The new X API pricing must be a joke

https://developer.x.com/
1•danver0•24m ago•0 comments

Show HN: RMA Dashboard – fast SAST results for monorepos (SARIF and triage)

https://rma-dashboard.bukhari-kibuka7.workers.dev/
1•bumahkib7•25m ago•0 comments

Show HN: Source code graphRAG for Java/Kotlin development based on jQAssistant

https://github.com/2015xli/jqassistant-graph-rag
1•artigent•30m ago•0 comments

Python Only Has One Real Competitor

https://mccue.dev/pages/2-6-26-python-competitor
3•dragandj•31m ago•0 comments

Tmux to Zellij (and Back)

https://www.mauriciopoppe.com/notes/tmux-to-zellij/
1•maurizzzio•32m ago•1 comment

Ask HN: How are you using specialized agents to accelerate your work?

1•otterley•33m ago•0 comments

Passing user_id through 6 services? OTel Baggage fixes this

https://signoz.io/blog/otel-baggage/
1•pranay01•34m ago•0 comments

DavMail Pop/IMAP/SMTP/Caldav/Carddav/LDAP Exchange Gateway

https://davmail.sourceforge.net/
1•todsacerdoti•35m ago•0 comments

Visual data modelling in the browser (open source)

https://github.com/sqlmodel/sqlmodel
1•Sean766•37m ago•0 comments

Show HN: Tharos – CLI to find and autofix security bugs using local LLMs

https://github.com/chinonsochikelue/tharos
1•fluantix•37m ago•0 comments

Oddly Simple GUI Programs

https://simonsafar.com/2024/win32_lights/
1•MaximilianEmel•37m ago•0 comments

The New Playbook for Leaders [pdf]

https://www.ibli.com/IBLI%20OnePagers%20The%20Plays%20Summarized.pdf
1•mooreds•38m ago•1 comment

Interactive Unboxing of J Dilla's Donuts

https://donuts20.vercel.app
1•sngahane•39m ago•0 comments

OneCourt helps blind and low-vision fans to track Super Bowl live

https://www.dezeen.com/2026/02/06/onecourt-tactile-device-super-bowl-blind-low-vision-fans/
1•gaws•41m ago•0 comments

Rudolf Vrba

https://en.wikipedia.org/wiki/Rudolf_Vrba
1•mooreds•41m ago•0 comments

Autism Incidence in Girls and Boys May Be Nearly Equal, Study Suggests

https://www.medpagetoday.com/neurology/autism/119747
1•paulpauper•42m ago•0 comments

Wellness Hotels Discovery Application

https://aurio.place/
1•cherrylinedev•43m ago•1 comment

NASA delays moon rocket launch by a month after fuel leaks during test

https://www.theguardian.com/science/2026/feb/03/nasa-delays-moon-rocket-launch-month-fuel-leaks-a...
2•mooreds•44m ago•0 comments

RecallBricks – Persistent memory infrastructure for AI agents

https://recallbricks.com
2•tylerrecall•1mo ago

Comments

tylerrecall•1mo ago
Hi HN – I'm the founder of RecallBricks. I built this after repeatedly running into the same issue while building agents: once agents run beyond a single session, memory falls apart. Context disappears, feedback gets lost, and agents start from zero unless you re-prompt everything.

RecallBricks is plug-and-play memory infrastructure for AI agents. It lets agents store and retrieve durable context – preferences, decisions, feedback, and relationships – independently of the LLM or agent framework being used.
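
To make "store and retrieve" concrete, the usage shape is roughly along these lines (a simplified illustration with made-up names, not the exact SDK calls):

    # Illustrative only: the package, methods, and fields here are assumed, not the real SDK.
    from recallbricks import Client  # hypothetical import

    rb = Client(api_key="rb_...")  # placeholder credentials

    # Store durable context outside any single session or framework.
    rb.remember(
        agent_id="support-bot",
        kind="preference",
        content="User prefers terse answers with code samples.",
        metadata={"source": "feedback", "user_id": "u_42"},
    )

    # In a later run, possibly from a different framework, pull back only what is relevant.
    for memory in rb.recall(agent_id="support-bot", query="how should I format replies?"):
        print(memory.kind, memory.content)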

Most existing approaches treat memory as either raw vector search or framework-specific abstractions. That works for demos, but breaks down for long-running or multi-tool agents. We wanted something in between: structured memory with metadata, relationships, and lifecycle rules that persist across sessions and runs.
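
Concretely, "structured memory" means each memory is a record carrying metadata, explicit links to other memories, and lifecycle fields, rather than a bare embedding row. A minimal sketch of that shape (illustrative field names, not the actual schema):

    # Minimal sketch of a structured memory record; field names are assumptions.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class MemoryRecord:
        id: str
        agent_id: str
        kind: str                      # "preference" | "decision" | "feedback" | ...
        content: str
        embedding: list[float]         # used for vector recall
        metadata: dict = field(default_factory=dict)
        related_ids: list[str] = field(default_factory=list)  # explicit relationships
        created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
        last_recalled_at: datetime | None = None
        decay: float = 1.0             # lifecycle: ranking weight that fades unless reinforced

    def reinforce(m: MemoryRecord) -> None:
        """Lifecycle rule: recalling a memory refreshes it instead of letting it decay."""
        m.last_recalled_at = datetime.now(timezone.utc)
        m.decay = min(1.0, m.decay + 0.1)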

Under the hood, RecallBricks uses a multi-stage recall pipeline (fast heuristics → contextual retrieval → deeper reasoning when needed). This allows agents to retrieve relevant context without reloading everything into prompts, while keeping recall latency low using pgvector.
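
In rough pseudocode, the control flow of that pipeline looks something like the sketch below (thresholds and stage implementations are simplified placeholders; each stage is assumed to return (text, score) pairs sorted best-first, with higher scores meaning better matches):

    from typing import Callable

    # Sketch of a tiered recall pipeline; the stage functions are injected so the
    # control flow is visible without committing to any particular internals.
    def tiered_recall(
        query: str,
        fast_heuristics: Callable[[str], list[tuple[str, float]]],
        vector_search: Callable[[str], list[tuple[str, float]]],
        deep_rerank: Callable[[str, list[tuple[str, float]]], list[tuple[str, float]]],
        good_enough: float = 0.8,   # assumed similarity threshold
        k: int = 5,
    ) -> list[str]:
        # Stage 1: cheap heuristics (keyword match, recency, pinned memories).
        hits = fast_heuristics(query)
        if hits and hits[0][1] >= good_enough:
            return [text for text, _ in hits[:k]]

        # Stage 2: contextual retrieval (approximate nearest neighbours, e.g. pgvector).
        hits = vector_search(query)
        if hits and hits[0][1] >= good_enough:
            return [text for text, _ in hits[:k]]

        # Stage 3: deeper reasoning, only when the cheap stages are inconclusive
        # (follow relationships between memories, or have an LLM re-rank).
        return [text for text, _ in deep_rerank(query, hits)[:k]]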

One meta detail: once it was usable, I connected Claude to RecallBricks via MCP. Claude now retains memory across the entire multi-month build of RecallBricks itself. I've been using RecallBricks to build RecallBricks.

This is early but live. People are already using it in agent workflows, and I'm actively refining how memories are ranked, linked, and decayed over time.

I'd love feedback from people building agents or long-running AI systems. What kinds of context do your agents lose today? Where do current memory patterns break down? What would make a separate memory layer not worth using?

Happy to answer questions and discuss tradeoffs.

tylerrecall•1mo ago
Also happy to discuss the technical architecture - the entire system runs on Supabase + pgvector, with SDKs for Python, TypeScript, and LangChain. Docs are at recallbricks.com.
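
For anyone curious what the fast path looks like at the database level, it is essentially a standard pgvector nearest-neighbour query. A generic sketch (assumed table and columns, not the actual schema):

    # Generic pgvector lookup; the 'memories' table and its columns are assumptions.
    import psycopg  # psycopg 3

    def nearest_memories(conn: psycopg.Connection, agent_id: str,
                         query_embedding: list[float], k: int = 5):
        """Fast path: cosine-distance search scoped to one agent."""
        vec = "[" + ",".join(str(x) for x in query_embedding) + "]"
        return conn.execute(
            """
            SELECT content, embedding <=> %s::vector AS distance
            FROM memories
            WHERE agent_id = %s
            ORDER BY distance
            LIMIT %s
            """,
            (vec, agent_id, k),
        ).fetchall()

An HNSW or IVFFlat index on the embedding column is what keeps a query like this fast enough to sit in the hot path.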

One interesting challenge has been balancing recall speed vs. depth. Raw vector search is fast but misses context. Full graph traversal finds everything but kills latency. The tiered approach lets us start fast and go deeper only when needed.
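
"Go deeper" in practice means expanding a small set of seed hits along stored relationships with a hop limit, rather than walking the whole graph. A simplified sketch (assumed in-memory structures, not the production code):

    # Sketch: bounded relationship expansion instead of a full graph traversal.
    def expand_relationships(seeds: list[str],
                             related: dict[str, list[str]],
                             max_hops: int = 1) -> set[str]:
        """Collect memory ids reachable from the seed hits within max_hops links."""
        frontier, seen = set(seeds), set(seeds)
        for _ in range(max_hops):
            frontier = {r for m in frontier for r in related.get(m, []) if r not in seen}
            seen |= frontier
        return seen

Capping the hop count bounds the worst case, and this tier only runs when the cheaper stages come back inconclusive.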

Always curious to hear how others are tackling agent memory!