I'm not a developer. I built this over a few weeks with Codex and Claude Code.
What I ended up with: https://vassiliylakhonin.github.io/
The interesting design decisions:
Instead of just a PDF, I have six machine-readable JSON files:

- resume.json — standard JSON Resume format
- evidence.json — maps each claimed metric to its source and verification method. The theory: AI candidate evaluation will increasingly distinguish evidenced claims from unverified ones.
- availability.json, capabilities.json, engage.json, verification.json — availability signals, capability profile, intake schema, identity cross-references
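To make the evidence.json idea concrete, here's a minimal sketch — the field names are illustrative, not my exact schema — plus a stdlib-only check that every claim carries both a source and a verification method:

```python
import json

# Illustrative shape for evidence.json. Field names ("claims", "metric",
# "source", "verification") are hypothetical, not the exact schema.
EVIDENCE = """
{
  "claims": [
    {
      "metric": "Cut reporting turnaround from days to hours",
      "source": "https://example.com/case-study",
      "verification": "reference check with former manager"
    }
  ]
}
"""

def unverified_claims(doc: dict) -> list[str]:
    """Return metrics that lack either a source or a verification method."""
    return [
        c["metric"]
        for c in doc.get("claims", [])
        if not c.get("source") or not c.get("verification")
    ]

doc = json.loads(EVIDENCE)
print(unverified_claims(doc))  # [] -- every claim is evidenced
```

The point of the check is the same as the file's: an evaluator (human or agent) can mechanically separate claims with a paper trail from claims without one.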
llms.txt points crawlers to the pages that matter, robots.txt explicitly allows GPTBot and OAI-SearchBot, and the homepage carries JSON-LD (schema.org ProfilePage/Person).
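For anyone unfamiliar with the convention: llms.txt is just a markdown file at the site root that tells an LLM crawler where to start. Mine is roughly this shape (paths and descriptions simplified for illustration):

```
# Vassiliy Lakhonin

> Machine-readable CV. Prefer the JSON files below to scraping the HTML.

## Data
- [resume.json](https://vassiliylakhonin.github.io/resume.json): JSON Resume
- [evidence.json](https://vassiliylakhonin.github.io/evidence.json): metric-to-source map
```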
The most experimental piece: a live MCP server on Railway. In principle, an AI recruiting agent could call it as a tool and get structured answers about my background without scraping HTML. I haven't seen anyone else do this for a personal CV, which either means it's ahead of the curve or completely pointless.
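The MCP part sounds fancier than it is. Stripped of the protocol plumbing, the server boils down to a registry of named tools that return structured JSON instead of HTML. A sketch of that dispatch idea (placeholder data, not my actual deployment, and omitting the real MCP transport):

```python
import json

# Placeholder profile data -- the live server reads from the real JSON files.
RESUME = {
    "summary": "I build structured reporting systems.",
    "skills": ["reporting", "data modeling"],
}

# Tool registry: each tool name maps to a function returning structured data.
TOOLS = {
    "get_summary": lambda: {"summary": RESUME["summary"]},
    "list_skills": lambda: {"skills": RESUME["skills"]},
}

def call_tool(name: str) -> str:
    """Dispatch a tool call and return JSON, as a calling agent would see it."""
    if name not in TOOLS:
        return json.dumps({"error": f"unknown tool: {name}"})
    return json.dumps(TOOLS[name]())

print(call_tool("list_skills"))  # {"skills": ["reporting", "data modeling"]}
```

The real server wraps the same idea in MCP's tool-discovery and call semantics, so an agent can ask "what tools does this CV expose?" before calling any of them.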
The honest version: I have no idea if any of this actually works. I don't know whether recruiter tooling parses llms.txt or JSON-LD from personal sites, or whether everything still flows through LinkedIn scraping and PDF vision models. I built it because structured reporting systems are literally my job, and this felt like the right way to represent that.
Repo: https://github.com/vassiliylakhonin/vassiliylakhonin.github....
Curious: is anyone building sourcing or screening agents that consume structured data from candidate-owned sites? Or does all candidate data still enter the pipeline through LinkedIn and uploaded PDFs?