frontpage.

UK to drop historic figures from banknotes and change them to images of wildlife

https://brusselssignal.eu/2026/03/uk-to-drop-historic-figures-from-banknotes-and-change-them-to-i...
1•thunderbong•15s ago•0 comments

2026 AI Adoption and Workforce Performance Benchmarks

https://www.activtrak.com/resources/state-of-the-workplace/
1•ptrhvns•15s ago•0 comments

Runtime Safety Infrastructure for AI Agents

https://nono.sh
1•TheTaytay•42s ago•0 comments

Show HN: OneCLI – Vault for AI Agents in Rust

https://github.com/onecli/onecli
1•guyb3•2m ago•0 comments

Finland Is Ready for Russia. Is Anyone Else?

https://www.bloomberg.com/graphics/2026-opinion-finland-is-ready-russia-arctic/
1•RyanShook•2m ago•0 comments

Get ready for takeoff with Uber and Joby

https://www.uber.com/us/en/newsroom/uber-air/
1•r-bt•3m ago•0 comments

Live AI Session Summaries in a Two-Line Tmux Status Bar

https://quickchat.ai/post/tmux-session-summaries-for-parallel-ai-agents
1•piotrgrudzien•3m ago•0 comments

Reversing memory loss via gut-brain communication

https://med.stanford.edu/news/all-news/2026/03/gut-brain-cognitive-decline.html
1•mustaphah•4m ago•0 comments

Should Sam Altman fear token compression?

3•Gillesray•6m ago•0 comments

Show HN: Open-Source GTM Skills for Claude Code, Codex, and Cursor

https://github.com/athina-ai/goose-skills
1•hbamoria•6m ago•0 comments

DesiPeeps

https://desipeeps.com
1•saibuilds•7m ago•0 comments

Should Sam Altman fear token compression?

https://www.edgee.ai/blog/posts/2026-03-12-should-sam-altman-fear-token-compression-technology-or...
1•Gillesray•7m ago•1 comment

I wrote Gitleaks, now I'm maintaining Betterleaks

https://www.aikido.dev/blog/betterleaks-gitleaks-successor
3•zricethezav•8m ago•1 comment

Nvidia Fork of Godot Engine

https://github.com/NVIDIA-RTX/godot
1•throwaway2027•9m ago•0 comments

Hegger

https://hegger.party
1•davedx•9m ago•0 comments

Ask HN: Which DNS based ad blocker do you suggest?

2•SoftwareEn2•9m ago•1 comment

Save the Student Essay

https://openquestionsblog.substack.com/p/save-the-student-essay
1•voxleone•10m ago•0 comments

Show HN: BoltzPay – fetch() that pays for AI agents (x402 and L402)

https://github.com/leventilo/boltzpay
1•leventilo•12m ago•0 comments

Show HN: Stop AI Debugging with Print(). Use a Debugger

https://github.com/AlmogBaku/debug-skill
1•almogbaku•13m ago•0 comments

Show HN: Claude Status

https://github.com/gmr/claude-status
1•crad•13m ago•0 comments

AI isn't digital anymore. It's a 1-GW power problem

1•TheBottlenecker•13m ago•0 comments

Show HN: OpenTabs – Your AI calls Slack's internal API through the browser

https://github.com/opentabs-dev/opentabs
1•Jbced•14m ago•0 comments

What we learned building 100 API integrations with OpenCode

https://nango.dev/blog/learned-building-100-api-integrations-with-opencode
1•rguldener•16m ago•0 comments

Important Updates to GitHub Copilot for Students

https://github.com/orgs/community/discussions/189268
2•archb•16m ago•0 comments

We will come to regret our every use of AI

https://libresolutions.network/articles/ai-regret/
3•paulnpace•16m ago•0 comments

Show HN: Subagent-CLI – a CLI for managing multiple coding agents

1•otakumesi•17m ago•0 comments

What's My ΔE(OK) JND?

https://www.keithcirkel.co.uk/whats-my-jnd/
2•sebg•17m ago•0 comments

Cynium

https://cynium.com/
2•falibout•19m ago•0 comments

Claude can now build interactive charts and diagrams, directly in the chat

https://twitter.com/claudeai/status/2032124273587077133
2•tzury•20m ago•0 comments

I built a system that turns tax law from 100 regions into executable rules

https://www.getsphere.com/blog/building-tram
4•abowcut•21m ago•0 comments

Anchor Engine – deterministic semantic memory for LLMs, <1GB RAM runs on a phone

https://github.com/RSBalchII/anchor-engine-node
1•BERTmackliin•1h ago

Comments

BERTmackliin•1h ago
I built Anchor because I kept hitting the same wall: local LLMs are great, but every conversation is a fresh start. Vector search is the default hammer, but for structured memory—project decisions, entity relationships, temporal facts—it's often the wrong tool.

Live demo (in-browser, no setup): https://rsbalchii.github.io/anchor-engine-node/demo/index.ht...

Search Moby Dick or Frankenstein and see the tag-based receipts that show why each result matched.

How it works

Anchor uses graph traversal (the STAR algorithm) instead of embeddings. Concepts become nodes; relationships become edges. The database stores only pointers (file paths + byte offsets); content stays on disk, so the index is small and rebuildable. PGlite (PostgreSQL in WASM) lets it run anywhere Node.js does, including a Pixel 7 in Termux, with <1GB RAM.
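To make the pointer-only idea concrete, here is a minimal sketch (the names `ChunkPointer` and `readChunk` are mine for illustration, not Anchor's actual schema): the index holds only a file path, byte offset/length, and tags, and the text is read from disk on demand, which is why the index stays small and fully rebuildable.

```typescript
import { openSync, readSync, closeSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// The index stores a pointer to the content, never the content itself.
interface ChunkPointer {
  filePath: string;
  byteOffset: number;
  byteLength: number;
  tags: string[]; // concepts attached at indexing time
}

// Dereference a pointer: read exactly byteLength bytes from disk on demand.
function readChunk(ptr: ChunkPointer): string {
  const fd = openSync(ptr.filePath, "r");
  try {
    const buf = Buffer.alloc(ptr.byteLength);
    readSync(fd, buf, 0, ptr.byteLength, ptr.byteOffset);
    return buf.toString("utf8");
  } finally {
    closeSync(fd);
  }
}

// Demo: write a tiny corpus file, index a slice of it, then dereference it.
const corpusPath = join(tmpdir(), "anchor-demo-corpus.txt");
writeFileSync(corpusPath, "Call me Ishmael. Some years ago...");
const ptr: ChunkPointer = {
  filePath: corpusPath,
  byteOffset: 0,
  byteLength: 16,
  tags: ["moby-dick", "narrator"],
};
console.log(readChunk(ptr)); // "Call me Ishmael."
```

Deleting the index loses nothing: as long as the source files are on disk, it can be rebuilt from them.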

Performance

- <200ms p95 search on a 28M-token corpus
- <1GB RAM: runs on a $200 mini PC, a Raspberry Pi, or a phone
- Pure JS/TS, compiled to WASM, no cloud dependencies

What's new in v4.6

- distill: lossless compression of your corpus into a single deduplicated YAML file. I tested it on 8 months of my own chat logs: 2336 → 1268 unique lines, 1.84:1 compression, 5 minutes on a Pixel 7.
- MCP server (v4.7.0): exposes search and distillation to any MCP client (Claude Code, Cursor, Qwen tools)
- Adaptive concurrency: automatic switching between sequential (mobile) and parallel (desktop) processing
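The core of the distill idea, at its simplest, is line-level deduplication with first-seen order preserved (a toy sketch, not the real YAML pipeline):

```typescript
// Toy sketch of distillation: collapse a corpus to its unique lines,
// keep first-seen order, and report the compression ratio.
function distill(lines: string[]): { unique: string[]; ratio: number } {
  const seen = new Set<string>();
  const unique: string[] = [];
  for (const line of lines) {
    if (!seen.has(line)) {
      seen.add(line);
      unique.push(line);
    }
  }
  return { unique, ratio: lines.length / unique.length };
}

const journal = ["fix bug", "run tests", "fix bug", "run tests", "ship it"];
const { unique, ratio } = distill(journal);
console.log(unique.length, ratio.toFixed(2)); // 3 "1.67"
```

It's lossless in the sense that every distinct line survives; only exact repeats are dropped.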

The recursion

I used Anchor to build itself. Every bug fix and design decision is in the graph; that's how I kept the complexity manageable.

Where it fits

If you're building local agents, personal knowledge bases, or mobile assistants and want memory that's inspectable, deterministic, and lightweight, this is for you.

GitHub repo: https://github.com/RSBalchII/anchor-engine-node

Whitepaper: https://github.com/RSBalchII/anchor-engine-node/blob/main/do...

Happy to answer questions about the algorithm, the recursion, or the mobile optimizations.

silentsvn•1h ago
The inspectability angle is genuinely useful: being able to trace exactly why something was retrieved is something vector search can't offer, and the tag-receipt approach is clean for structured knowledge.

One thing I'm trying to understand: the README calls this "semantic" retrieval, but looking at the Unified Field Equation in the whitepaper, the core scoring is tag intersection with temporal decay: W(q,a) = (shared tags) × γ^(graph distance) × (recency). That's weighted keyword matching, which is deterministic precisely because it's lexical, not semantic.
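To make that concrete, the equation as I read it is straightforward to re-implement as a toy (constants and names here are mine, not from the whitepaper):

```typescript
// W(q,a) = |shared tags| × γ^(graph distance) × recency decay
interface ScoredNode {
  tags: Set<string>;
  graphDistance: number; // hops from the query's anchor node
  ageDays: number;       // time since the memory was written
}

const GAMMA = 0.8;         // per-hop attenuation (assumed value)
const HALF_LIFE_DAYS = 30; // recency half-life (assumed value)

function score(queryTags: Set<string>, node: ScoredNode): number {
  let shared = 0;
  for (const t of queryTags) if (node.tags.has(t)) shared++;
  const distancePenalty = Math.pow(GAMMA, node.graphDistance);
  const recency = Math.pow(0.5, node.ageDays / HALF_LIFE_DAYS);
  return shared * distancePenalty * recency;
}

// Identical inputs always produce the identical score, and each factor
// doubles as a human-readable "receipt" for why a result matched.
const q = new Set(["jwt", "auth", "tokens"]);
const node = { tags: new Set(["jwt", "auth"]), graphDistance: 1, ageDays: 0 };
console.log(score(q, node)); // 2 × 0.8^1 × 1 = 1.6
```

Every factor is a counting or decay operation over explicit structure; nothing in it knows that "JWT" and "authentication" mean related things unless the graph says so.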

vector.ts also has MockSoulIndex as a no-op stub, with a note saying dense vector search is "optional augmentation" that's currently disabled, so no embeddings are running in practice.

I've been building in this space with hand-written TypeScript (no AI codegen) and the line between "semantic" and "keyword" matters a lot to users. If someone stores "the JWT conversation" they won't find it by querying "authentication."

Is the tag extraction smart enough to bridge that gap, or is explicit tagging left to the user?

BERTmackliin•47m ago
@silentsvn - thank you for reading carefully enough to ask this. You're correct that the core scoring is tag‑based and deterministic; that makes it lexical, not "semantic" in the modern embedding sense. The terminology is worth unpacking.

We call it "semantic" in the broader sense of meaning‑bearing structure—the graph encodes relationships between concepts, and retrieval walks those relationships. But you're correct that at query time, it's matching on tags, not vector similarity.

Why not embeddings? We made a deliberate trade‑off: determinism and explainability over fuzziness. With vector search, you get a black‑box similarity score and no way to debug why something was retrieved. With tag‑based traversal, you can trace the exact path: "This result matched because it shares tags X, Y, Z and is within 2 hops of your query." That matters for agentic workflows where auditability is critical.

Tag extraction is where we do the work to bridge the lexical gap. The atomization pipeline uses:

- Wink NLP for entity recognition and part‑of‑speech filtering (so "authentication" and "JWT" both get tagged with relevant concepts if they appear in context).
- Co‑occurrence windows to infer relationships (e.g., if "JWT" and "authentication" repeatedly appear near each other, they get linked in the graph).
- Synonym expansion (via Standard 111) so queries for "authentication" can surface nodes tagged with "JWT" if the system has learned that relationship from your corpus.
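The co-occurrence part is the easiest to show in miniature (a toy sketch, not the actual Wink-NLP-based pipeline): terms inside the same sliding window get an edge whose weight grows with each co-occurrence, so "jwt" and "authentication" end up linked if your corpus actually discusses them together.

```typescript
// Count co-occurrences of distinct tokens within a sliding window.
function cooccurrenceEdges(
  tokens: string[],
  windowSize: number
): Map<string, number> {
  const edges = new Map<string, number>();
  for (let i = 0; i < tokens.length; i++) {
    for (let j = i + 1; j < Math.min(i + windowSize, tokens.length); j++) {
      if (tokens[i] === tokens[j]) continue;
      // Canonical key so (a, b) and (b, a) share one edge.
      const key = [tokens[i], tokens[j]].sort().join("::");
      edges.set(key, (edges.get(key) ?? 0) + 1);
    }
  }
  return edges;
}

const tokens = ["jwt", "authentication", "flow", "jwt", "token", "authentication"];
const edges = cooccurrenceEdges(tokens, 3);
console.log(edges.get("authentication::jwt")); // 3
```

Edge weights like these are what turn "near each other in your text" into traversable graph structure.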

It's not magic - if you never mention "JWT" in the same context as "authentication," the graph won't connect them. But that's a feature, not a bug: the system reflects your actual usage, not a statistical average of the internet.

The trade‑off is real: you give up the fuzzy "close enough" retrieval of vectors in exchange for perfect traceability and no embedding drift. For many use cases (project memory, execution traces, personal knowledge bases), that's the right call.

I'd love to hear more about what you're building in this space. Always good to find others thinking about these trade‑offs.

silentsvn•14m ago
Thanks for the response.

The determinism trade-off is genuinely interesting — auditability over fuzziness is a real design philosophy, not just a limitation.

We've been building something that tries to avoid forcing that choice. Engram uses three strategies in parallel: vector embeddings (nomic-embed-text via Ollama, local-first), BM25 keyword, and temporal recency — merged with Reciprocal Rank Fusion. Each result comes back with an explicit similarity score and the tier it came from (working memory / long-term / archived), so the retrieval path is still traceable even when it's fuzzy.
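For concreteness, the fusion step itself is tiny (a sketch, not Engram's code; k = 60 is the conventional constant from the original RRF formulation):

```typescript
// Reciprocal Rank Fusion: merge ranked lists from several retrievers
// by summing 1 / (k + rank) for each document across all lists.
function reciprocalRankFusion(
  rankings: string[][], // one ranked id-list per retriever
  k = 60
): [string, number][] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, idx) => {
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + idx + 1));
    });
  }
  return [...scores.entries()].sort((a, b) => b[1] - a[1]);
}

// A doc ranked moderately by all three retrievers can beat a doc that
// tops only one list, which is the point of fusing them.
const fused = reciprocalRankFusion([
  ["a", "b", "c"], // vector similarity
  ["b", "a", "d"], // BM25 keyword
  ["b", "c", "a"], // temporal recency
]);
console.log(fused[0][0]); // "b"
```

Because each contribution is just a rank position, the per-retriever provenance survives the merge, which is how the result stays traceable even when one of the inputs is fuzzy.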

We also layer on a graph component similar to yours — entity-relationship extraction that augments top results with connected context. The difference is that graph is additive on top of embedding retrieval rather than the primary mechanism.

The place your approach wins clearly is corpus-specific precision. If the graph is built from your actual usage (your JWT/authentication example), tag traversal will reliably surface relationships that vectors would miss or dilute with internet priors. That's a real advantage for execution traces and project memory.

Still working through the right defaults for consolidation (when to summarize old working memories vs keep them granular). Curious whether you've thought about memory aging in your model.

Repo if curious: github.com/Cartisien/engram