frontpage.

Why does this river slice straight through a mountain range?

https://theconversation.com/why-does-this-river-slice-straight-through-a-mountain-range-after-150...
1•PaulHoule•1m ago•0 comments

Show HN: Limelight – Let your AI see what your app does at runtime

https://github.com/getlimelight/limelight-sdk
1•cyrusburns•2m ago•0 comments

Rubio to World: Stop Doing the Exact Same Thing the US Just Did

https://www.techdirt.com/2026/03/03/rubio-to-world-stop-doing-the-exact-same-thing-the-us-just-did/
3•hn_acker•4m ago•0 comments

Why Apple's move to video could endanger podcasting's greatest power

https://www.anildash.com/2026/02/28/apple-video-podcast-power/
1•latexr•5m ago•0 comments

Revamping Our Membership Program

https://www.rtings.com/company/revamping-our-membership-program
1•akyuu•7m ago•0 comments

Amidst the AI frenzy nobody is taking the human side

https://cognitivefriction.substack.com/p/missing-tribe
1•cyclopeanutopia•8m ago•0 comments

Show HN: AI making payments with your regular Visa card, securely. Prava

https://playground.prava.space/
1•davinciind•9m ago•0 comments

Show HN: Grok Brain – Turn your Grok data into a private 3D brain visualization

https://grok-brain.vercel.app/
1•zimtzimt•9m ago•1 comment

Google Rewrites Applications Every Few Years. Can You?

https://orischwartz.com/posts/google-rewrites-applications-every-few-years.html
1•fleaflicker•10m ago•0 comments

OpenAI CEO Sam Altman Defends Pentagon Work to Staff

https://www.wsj.com/tech/ai/openai-ceo-altman-defends-pentagon-work-to-staff-calls-backlash-reall...
4•cdrnsf•11m ago•0 comments

Growing Postal

https://rescx.substack.com/p/growing-postal
1•cyclopeanutopia•12m ago•0 comments

D3D12 Shader Execution Reordering

https://devblogs.microsoft.com/directx/shader-execution-reordering/
1•ksec•13m ago•0 comments

Ubuntu Planning Mandatory Age Verification

https://twitter.com/lundukejournal/status/2028914903587631613
3•egorfine•13m ago•1 comment

Awesome-Selfhosted

https://github.com/awesome-selfhosted/awesome-selfhosted
2•nobody9999•14m ago•0 comments

Facebook Appears to Be Down

7•Molitor5901•18m ago•4 comments

Sen. Wyden Warns of Mass Surveillance Amid Pentagon's Fight with Anthropic

https://gizmodo.com/sen-wyden-warns-of-mass-surveillance-amid-pentagons-fight-with-anthropic-2000...
7•WarOnPrivacy•18m ago•0 comments

Bluesky adds (broken) age verification

https://bsky.app
2•neogodless•18m ago•1 comment

Show HN: Webact – token-efficient browser control for AI agents (GitHub)

https://github.com/kilospark/webact
1•kxbnb•20m ago•0 comments

In startups, "I assumed" is the most expensive sentence you can say [video]

https://www.tiktok.com/@taxhero_ai/video/7613137316227353887?is_from_webapp=1&sender_device=pc
1•salleisha•20m ago•1 comment

Ask HN: Why don't MacBooks have Cellular Modems yet?

1•avonmach•21m ago•1 comment

Show HN: Proofd – Free AI career risk score based on your tasks, not job title

https://www.proofd.ai
1•dixalex•22m ago•0 comments

Is Shopify Good for SEO in 2026?

https://www.techwrath.com/is-shopify-good-for-seo-2026/
1•techwrath11•24m ago•0 comments

EURO-3C Project to build a federated Telco-Edge-Cloud infrastructure

https://digital-strategy.ec.europa.eu/en/news/commission-announces-eu75-million-euro-3c-project-b...
1•_____k•24m ago•0 comments

Show HN: TypeShim – .NET WebAssembly Meets TypeScript

https://github.com/ArcadeMode/TypeShim
1•ArcadeMode•25m ago•0 comments

How to Choose the Right Shopify Development Agency in 2026

https://www.techwrath.com/how-to-choose-right-shopify-development-agency/
1•techwrath11•25m ago•0 comments

Neovim cookies for the pluginless – random nvim native tips

https://eduardofuncao.com/blog/neovim-cookies/
1•xGoivo•25m ago•1 comment

The Social Media Discoverability Problem

https://samranda.com/blog/social-media-discoverability/
1•performative•27m ago•0 comments

Millennium Challenge 2002: Persian Gulf War Game Exercise

https://en.wikipedia.org/wiki/Millennium_Challenge_2002
2•Jimmc414•27m ago•1 comment

Open-source community gets a Claude-sized gift

https://www.thedeepview.com/articles/open-source-community-gets-a-claude-sized-gift
2•CrankyBear•29m ago•0 comments

Turning 4,668 comments into AGENTS.md rules to automate Pydantic AI reviews

https://pydantic.dev/articles/scaling-open-source-with-ai
2•yoredana•30m ago•0 comments

Show HN: Memobase – Universal memory that works across all your AI tools

https://memobase.ai/
2•chsitter•2h ago
Hey HN — I'm the builder behind Memobase.

Timing: Anthropic just launched memory import for Claude yesterday. You can export your ChatGPT memories and bring them over. It's a step in the right direction, but it's still moving your data from one silo to another. You don't really own that memory.

The problem as I see it: there's no standard protocol for AI memory. You can't say "here's my MCP server, use it for memory in every session." Each platform builds its own walled garden. Number portability took regulation. Email interoperability took SMTP. AI memory needs something similar.

What Memobase is: a universal, AI-agnostic memory layer. It builds a structured profile — your preferences, context, project history — that any connected AI tool reads from. Not locked inside ChatGPT, Claude, or any single platform.

Technical approach:

- Profile-based memory, not raw conversation logs. Compact and fast (sub-100ms lookups).
- You own your data: full visibility, editing, deletion, export. Self-hosted option coming.
- Working toward an open protocol so any tool can plug in, not just our integrations.
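To make "profile-based memory, not raw conversation logs" concrete, here is a minimal sketch of what such a store might look like. This is an illustration only, assuming a simple key-value profile; the class, field, and method names are hypothetical and not Memobase's actual schema or API.

```python
from dataclasses import dataclass, field

@dataclass
class ProfileMemory:
    """Toy profile-based memory: one compact fact per key,
    instead of an ever-growing conversation transcript."""
    profile: dict = field(default_factory=dict)

    def update(self, key: str, value: str) -> None:
        # Overwrite-on-write keeps the profile compact: updating a
        # preference replaces the old value rather than appending to a log.
        self.profile[key] = value

    def export(self) -> dict:
        # Full visibility/export: the user can inspect or take the whole profile.
        return dict(self.profile)

    def delete(self, key: str) -> None:
        # User-controlled deletion of individual facts.
        self.profile.pop(key, None)

mem = ProfileMemory()
mem.update("preferred_language", "Python")
mem.update("current_project", "memobase integration")
print(mem.export())
```

Because lookups are plain dict reads over a small structured profile rather than a search over raw logs, this shape is what makes sub-100ms retrieval plausible.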

What's live: open beta with the core memory and integrations for the major tools. What's still patchy: agents don't reliably use it without being prodded, the protocol spec is still being formalized, and we need more tools to adopt it for this to really work.

I'd love to hear:

- Would you want your AI memory to live outside any single platform, or do you prefer each tool handling it?
- What would the protocol need to look like for you to build against it?
- Technical feedback on the approach: we chose profile-based RAG over knowledge graphs, etc.; happy to go deep on that.

Comments

xing_horizon•2h ago
Interesting positioning. Cross-tool memory portability is the right direction. A practical trust layer to add: every recalled item should carry provenance + freshness metadata so agents can choose whether to trust, refresh, or ignore memory instead of treating all recalls as equally valid.
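The provenance-plus-freshness idea could be sketched roughly as below. This is a hypothetical illustration of the commenter's suggestion, not anything Memobase ships; the `Recall` fields and the triage policy are assumptions.

```python
import time
from dataclasses import dataclass

@dataclass
class Recall:
    """A recalled memory item carrying provenance (who wrote it)
    and freshness (when), so an agent can decide what to do with it."""
    content: str
    source: str        # provenance: which tool/session wrote the memory
    written_at: float  # unix timestamp of the write

    def age_days(self, now: float) -> float:
        return (now - self.written_at) / 86400

def triage(item: Recall, now: float, stale_after_days: float = 30) -> str:
    # Simple policy: trust fresh items, flag stale ones for re-confirmation
    # instead of treating every recall as equally valid.
    return "trust" if item.age_days(now) <= stale_after_days else "refresh"

now = time.time()
fresh = Recall("prefers dark mode", source="claude-session-42", written_at=now - 86400)
stale = Recall("works at Acme", source="chatgpt-export", written_at=now - 90 * 86400)
print(triage(fresh, now), triage(stale, now))  # trust refresh
```

A real policy would likely also weight the source (a self-reported fact vs. an agent's inference), but the metadata has to be stored at write time either way.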
chsitter•1h ago
Thanks, glad it resonates. Great idea on freshness; that makes total sense. I'll add that for sure.

//Edit: I've implemented this and it's live now

jlongo78•2h ago
persistent memory across tools is the right problem to solve. the real friction isn't context length, it's context continuity -- picking up a claude session tomorrow and feeling like you never left. memobase looks solid for the memory layer. the missing piece most people ignore is session state itself: terminal output, working directory, what commands actually ran. memory without replay is just notes.
chsitter•1h ago
Yeah, 100% agree. That's something I was just thinking about yesterday too: every session should be summarised and written to the data store as well, so sessions become portable contexts. There's potentially a case for full replay, i.e. storing the plaintext sessions, but I'm not entirely sure how much more valuable that is than a summary.

Do you have thoughts, or a take on that?

jlongo78•1h ago
summaries are probably 80% of the value at 10% of the storage cost. full replay is nice for debugging weird agent behavior, but day-to-day you rarely need the raw transcript.

the interesting edge case is when the summary itself becomes the lossy artifact - like, who decides what's important enough to keep? if the model summarises, it might quietly drop the context that would have mattered most next week.

hybrid might be the move: rolling summary plus last N raw turns.
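The hybrid "rolling summary plus last N raw turns" could be sketched like this. It is a minimal illustration of the idea: the string-truncation "summarizer" is a placeholder for a model call, and all names are made up.

```python
from collections import deque

class HybridSession:
    """Rolling summary plus the last N raw turns: recent context stays
    verbatim, older turns get folded (lossily) into a running summary."""
    def __init__(self, keep_raw: int = 5):
        self.summary = ""
        self.raw = deque(maxlen=keep_raw)  # old turns fall off automatically

    def add_turn(self, turn: str) -> None:
        if len(self.raw) == self.raw.maxlen:
            # The oldest turn is about to be evicted: fold it into the
            # summary first, so it is compressed rather than silently lost.
            # (A real system would summarise with a model, not truncate.)
            self.summary += self.raw[0][:40] + "; "
        self.raw.append(turn)

    def context(self) -> str:
        # What gets injected into the next session: summary + raw tail.
        return f"SUMMARY: {self.summary}\nRECENT: " + " | ".join(self.raw)

s = HybridSession(keep_raw=2)
for t in ["set up repo", "ran tests", "tests failed on py3.12"]:
    s.add_turn(t)
print(s.context())
```

The raw tail is exactly the fallback discussed here: if the summarizer made a bad bet on what mattered, the last N turns are still available verbatim.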

chsitter•1h ago
Right - that makes sense. I see it similarly. My assumption, though, is that as models get better, the likelihood of the model missing the context that matters most will get lower and lower.

The hybrid approach is nice, though; I'll have a think about whether that's something I can incorporate. Thanks for the feedback, very much appreciated.

jlongo78•1h ago
yeah, models are definitely getting better at prioritizing signal over noise. but there's still an interesting edge case -- the context that matters most is often the stuff you didn't know mattered when you said it. like a throwaway comment three sessions ago that turns out to be load-bearing. hybrid at least gives you a fallback when the model makes a bad bet on what's important.
chsitter•57m ago
100% - I'll implement and add that now :)
jlongo78•39m ago
nice, ship it fast and break things. curious what your persistence layer looks like -- are you serializing full context or just surfacing semantic anchors? the anchor approach is way more interesting imo; it lets you reconstruct intent without dragging 50k tokens of cruft into every new session. that's the real trick nobody talks about.
chsitter•17m ago
I'm not gonna lie - at the moment it's pretty basic: a chunked semantic store where relevant chunks are retrieved in conjunction with some criteria the agent can pass to the MCP server.

Context usage is definitely a problem on my mind too, and semantic anchors are one area I'm exploring, but I don't have a clear architecture for them jotted down yet. The real problem I'm facing right now is how to inject this into, say, Claude or ChatGPT and have those agents use it as the default memory layer.
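A toy version of the chunked-store-plus-criteria retrieval described above, assuming tag-based filters as the criteria the agent passes; the lexical-overlap scorer stands in for real embedding similarity, and all names are illustrative.

```python
def score(query: str, chunk: str) -> float:
    # Toy lexical overlap standing in for embedding cosine similarity.
    q, c = set(query.lower().split()), set(chunk.lower().split())
    return len(q & c) / max(len(q), 1)

def retrieve(chunks, query, criteria=None, top_k=2):
    # `criteria` mimics filters an agent might pass through the MCP server,
    # e.g. restricting recall to chunks tagged with a given project.
    pool = [c for c in chunks if not criteria or c["tags"] & criteria]
    ranked = sorted(pool, key=lambda c: score(query, c["text"]), reverse=True)
    return [c["text"] for c in ranked[:top_k]]

chunks = [
    {"text": "user prefers pytest over unittest", "tags": {"testing"}},
    {"text": "project targets python 3.12", "tags": {"project"}},
    {"text": "user dislikes verbose logging", "tags": {"style"}},
]
print(retrieve(chunks, "which python version does the project target",
               criteria={"project"}))
```

Pre-filtering by criteria before ranking is what keeps recall cheap and scoped: the agent narrows the pool, and only then does semantic ranking pick the top chunks.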

Nicky_Montana•1h ago
this is super cool. can i get it to work as effortlessly as a chrome plugin? I use a lot of different models and a lot of specific/vertical AI on different products, and I'd love to not have to constantly give them context to be useful.

love where this is headed!

chsitter•1h ago
Short answer is yes - just add the MCP server and you're golden. Longer answer is that most chat clients don't allow the MCP server to automatically inject its own system prompt, which means you have to specifically prompt your AI to write to Memobase.