
Show HN: A Karpathy-style LLM wiki your agents maintain (Markdown and Git)

https://github.com/nex-crm/wuphf
67•najmuzzaman•2h ago
I shipped a wiki layer for AI agents that uses markdown + git as the source of truth, with a bleve (BM25) + SQLite index on top. No vector or graph db yet.

It runs locally in ~/.wuphf/wiki/ and you can git clone it out if you want to take your knowledge with you.

The shape is the one Karpathy has been circling for a while: an LLM-native knowledge substrate that agents both read from and write into, so context compounds across sessions rather than getting re-pasted every morning. Most implementations of that idea land on Postgres, pgvector, Neo4j, Kafka, and a dashboard.

I wanted to go back to the basics and see how far markdown + git could go before I added anything heavier.

What it does:

-> Each agent gets a private notebook at agents/{slug}/notebook/*.md, plus access to a shared team wiki at team/.

-> Draft-to-wiki promotion flow. Notebook entries are reviewed (agent or human) and promoted to the canonical wiki with a back-link. A small state machine drives expiry and auto-archive.

-> Per-entity fact log: append-only JSONL at team/entities/{kind}-{slug}.facts.jsonl. A synthesis worker rebuilds the entity brief every N facts. Commits land under a distinct "Pam the Archivist" git identity so provenance is visible in git log.

-> [[Wikilinks]] with broken-link detection rendered in red.

-> Daily lint cron for contradictions, stale entries, and broken wikilinks.
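The broken-wikilink check is conceptually tiny. A minimal sketch (illustrative Python, not the shipped linter; the one-page-per-slug layout and slugification rule are assumptions):

```python
import re
from pathlib import Path

# Capture the link target before an optional |alias or #anchor.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def broken_wikilinks(wiki_root: Path) -> list[tuple[Path, str]]:
    """Report [[links]] whose target page does not exist under the wiki root."""
    pages = {p.stem.lower() for p in wiki_root.rglob("*.md")}
    broken = []
    for page in wiki_root.rglob("*.md"):
        for m in WIKILINK.finditer(page.read_text(encoding="utf-8")):
            target = m.group(1).strip().lower().replace(" ", "-")
            if target not in pages:
                broken.append((page, m.group(1).strip()))
    return broken
```

The same scan feeds the red rendering in the UI and the daily lint report.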

-> /lookup slash command plus an MCP tool for cited retrieval. A heuristic classifier routes short lookups to BM25 and narrative queries to a cited-answer loop.
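The routing heuristic is roughly in this spirit (a toy sketch, not the shipped classifier; the word cap and keyword list here are invented for illustration):

```python
def route_query(q: str, word_cap: int = 8) -> str:
    """Route short, keyword-ish lookups to BM25 and longer,
    question-shaped queries to the cited-answer loop."""
    words = q.strip().split()
    if not words:
        return "bm25"
    narrative = (
        len(words) > word_cap               # long queries read as narrative
        or q.rstrip().endswith("?")         # explicit questions
        or words[0].lower() in {"why", "how", "what", "explain", "summarize"}
    )
    return "cited_answer" if narrative else "bm25"
```

Misroutes are cheap in one direction (BM25 returning for a narrative query is fast but shallow) and expensive in the other, which is why the cap errs toward BM25.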

Substrate choices: Markdown for durability. The wiki outlives the runtime, and a user can walk away with every byte. Bleve for BM25. SQLite for structured metadata (facts, entities, edges, redirects, and supersedes). No vectors yet. The current benchmark (500 artifacts, 50 queries) clears 85% recall@20 on BM25 alone, which is the internal ship gate. sqlite-vec is the pre-committed fallback if a query class drops below that.
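For reference, the gate metric is computed the standard way (a sketch; it assumes one relevance set per query, which may not match the project's exact harness):

```python
def recall_at_k(results: list[list[str]],
                relevant: list[set[str]], k: int = 20) -> float:
    """Fraction of relevant artifact IDs that appear in the top-k
    results, averaged over queries with a non-empty relevance set."""
    scores = []
    for hits, rel in zip(results, relevant):
        if not rel:
            continue
        scores.append(len(rel & set(hits[:k])) / len(rel))
    return sum(scores) / len(scores) if scores else 0.0
```

The ship gate is then simply `recall_at_k(bm25_results, judgments, k=20) >= 0.85` over the 50-query set.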

Canonical IDs are first-class. Fact IDs are deterministic and include sentence offset. Canonical slugs are assigned once, merged via redirect stubs, and never renamed. A rebuild is logically identical, not byte-identical.
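Concretely, the ID and slug rules amount to something like this (illustrative only; the hash choice, truncation length, and redirect representation are assumptions, not the shipped scheme):

```python
import hashlib

def fact_id(kind: str, slug: str, source: str, sentence_offset: int) -> str:
    # Deterministic: the same entity, source document, and sentence
    # offset always yield the same ID, so a rebuild is logically identical.
    raw = f"{kind}/{slug}:{source}:{sentence_offset}"
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

def resolve_slug(redirects: dict[str, str], slug: str) -> str:
    # Canonical slugs are never renamed; merges leave redirect stubs.
    # Follow the stub chain to the canonical slug, guarding against cycles.
    seen = set()
    while slug in redirects:
        if slug in seen:
            raise ValueError(f"redirect cycle at {slug!r}")
        seen.add(slug)
        slug = redirects[slug]
    return slug
```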

Known limits:

-> Recall tuning is ongoing. 85% on the benchmark is not a universal guarantee.

-> Synthesis quality is bounded by agent observation quality. Garbage facts in, garbage briefs out. The lint pass helps. It is not a judgment engine.

-> Single-office scope today. No cross-office federation.

Demo. 5-minute terminal walkthrough that records five facts, fires synthesis, shells out to the user's LLM CLI, and commits the result under Pam's identity: https://asciinema.org/a/vUvjJsB5vtUQQ4Eb

Script lives at ./scripts/demo-entity-synthesis.sh.

Context. The wiki ships as part of WUPHF, an open source collaborative office for AI agents like Claude Code, Codex, OpenClaw, and local LLMs via OpenCode. MIT, self-hosted, bring-your-own keys. You do not have to use the full office to use the wiki layer. If you already have an agent setup, point WUPHF at it and the wiki attaches.

Source: https://github.com/nex-crm/wuphf

Install: npx wuphf@latest

Happy to go deep on the substrate tradeoffs, the promotion-flow state machine, the BM25-first retrieval bet, or the canonical-ID stability rules. Also happy to take "why not an Obsidian vault with a plugin" as a fair question.

Comments

dhruv3006•1h ago
I love that so many people are building with markdown !

But I'd also like to understand how markdown helps with durability. If I understand correctly, markdown has an edge over other formats for LLMs.

I'm also building something similar on markdown, versioned with git, but for a completely different use case: https://voiden.md/

left-struck•55m ago
I read the durability point as: markdown files are open, simple, widely used, and easy to find software for. All of this together almost guarantees they will be viewable/usable in the far future.
dhruv3006•28m ago
So markdown will be great for distribution in the future.
goodra7174•1h ago
I was looking for something similar to try out. Cool!
davedigerati•1h ago
why not an Obsidian vault with a plugin?
davedigerati•1h ago
srsly tho this looks slick & love the office refs / will go play with it :)
tomtomistaken•1h ago
what plugin are you using?
mellosouls•1h ago
Karpathy's original post for context:

https://x.com/karpathy/status/2039805659525644595

https://xcancel.com/karpathy/status/2039805659525644595

hyperionultra•1h ago
[flagged]
spiderfarmer•1h ago
Probably just envy.
wiseowise•1h ago
Obviously it is envy, and not scepticism about a guy who practically lives on Twitter and has an unhinged[1] follower base.

1 -https://x.com/__endif/status/2039810651120705569

William_BB•1h ago
I have the same feeling ever since his infamous LLM OS post
mirekrusin•1h ago
Feels like disliking musician for fanaticism towards musical instruments.
jimmypk•1h ago
The BM25-first routing bet is interesting. You mention 85% recall@20 on 500 artifacts, but the heuristic classifier routing "short lookups to BM25 and narrative queries to cited-answer" raises a practical question: what does the classifier key on to decide a query is narrative vs short? Token count? Syntactic structure? The reason I ask is that in agent-generated queries, the boundary is often blurry - an agent doing a dependency lookup might issue a surprisingly long, well-formed sentence. If the classifier routes those to the more expensive cited-answer loop it could negate the latency advantage of BM25 being first.
Unsponsoredio•1h ago
love the bm25-first call over vector dbs. most teams jump to vectors before measuring anything
armcat•54m ago
Any particular reason for BM25? Why not just a table of contents or index structure (json, md, whatever) that is updated automatically and fed in context at query time? I know bag of words is great for speed but even at 1000s of documents, the index can be quite cheap and will maximise precision
imafish•52m ago
Cool idea. But is anyone actually building real stuff like this with any kind of high quality?

Every time I hear someone say "I have a team of agents", what I hear is "I'm shipping heaps of AI slop".

hansmayer•26m ago
+100 for this comment.
portly•50m ago
I don't understand the point of automating note taking. Copy-pasting text into my notes never worked for me, and now you can 100x that?

The whole point of taking notes, for me, is to read a source critically, fit it into my mental model, and then document that. Sometimes I look it up later for the details. But for me the shaping of the mental model is what counts.

_zoltan_•19m ago
Then you have never worked on a large enough codebase, or across enough projects?
stingraycharles•19m ago
The few scientific studies out there actually show a degradation of output quality when these markdown collections are fully LLM-maintained (as opposed to an increase when they're human-maintained), which I found fascinating.

I think the sweet spot is human curation of these documents; unsupervised management is never the answer, especially if you don't consciously think about debt/drift in them.

criley2•7m ago
Are you referring to the one (1) study that showed that when low-quality/outdated/free LLMs auto-generated an AGENTS.md, it performed worse than a human-edited AGENTS.md? https://arxiv.org/abs/2602.11988

I'd love to see other sources that seek to academically understand how LLMs use context, specifically ones using modern frontier models.

My takeaway from these CLAUDE.md/AGENTS.md efforts isn't that agents can't maintain any form of context at all, rather, that bloated CLAUDE.md files filled with data that agents can gather on the spot very quickly are counter-productive.

For information which cannot be gathered on the spot quickly, context clearly (to me) helps improve quality, and in my experience, having AI summarize key information in a thread, write it to a file, and organize it has been genuinely helpful.

mplappert•12m ago
I think there's a serious issue with people using AI to do an immense amount of busywork and then never looking at it again. A colossal waste.
souravroy78•50m ago
Don’t know if Karpathy even wrote this version. Where are the citations?
batoga•46m ago
Put AI in your product name, make a billion dollars. Put Karpathy in your blog article, get hired by Anthropic as a principal engineer. Milk the money as long as the fad lasts. No one is thinking about customer needs; everyone is trying to ride the wave while it lasts.
vlady_nyz•42m ago
Need to try this out ASAP. Love the "The Office" vibe.
dataviz1000•35m ago
LLMs, and the agents that use them, are probabilistic, not deterministic. They accomplish something a percentage of the time, never every time.

That means the longer an agent runs on a task, the more likely it will fail the task. Running agents like this will always fail and burn a ton of token cash in the process.

One thing LLM agents are good at is writing their own instructions. The trick is to limit the time and thinking steps of a thinking model, then evaluate, update, and run again. A good metaphor is that agents trip; don't let them run long enough to trip. It is better to let them run twice for 5 minutes than once for 10 minutes.

Give it a few weeks and self-referencing agents are going to be at the top of everybody's twitter feed.

hansmayer•24m ago
Couldn't you instruct your LLM to make the starting dir configurable?
GistNoesis•6m ago
The space of self-building artefacts is interesting and is booming now because recent LLM versions are quickly becoming good at it (particularly the "coding" kind).

I've also experimented recently with such a project [0] with minimal dependencies and with some emphasis on staying local and in control of the agent.

It's building and organising its own sqlite database to fulfil a long running task given in a prompt while having access to a local wikipedia copy for source data.

A very minimal harness and toolset to experiment with agent drift.

Adding an image-processing tool to this framework is also easy: encode images as base64 (the details can be vibecoded by local LLMs) and pass them to llama.cpp.

It's a useful versatile tool to have.

For example, I used to have scripts which processed invoices and receipts in some folders, extracting amount, date, and vendor using Amazon Textract; then I have a UI to manually check the numbers and put the result in a CSV for the accountant every year. Now I can replace the Amazon Textract requests with a llama.cpp model call and an appropriate prompt while keeping my existing invoice tools, and with a prompt I can do a lot more creative accounting.

I have also experimented with a vibecoded variation of this code to drive a physical robot from a sequence of camera images, and while it does move and reach the target in simple cases (even though the LLM I use was never explicitly trained to drive a robot), it is too slow (10 s to choose the next action) for practical use. (The current non-deep-learning controller I use for this robot runs its vision-processing loop at 20 Hz.)

[0]https://github.com/GistNoesis/Shoggoth.db/
