I built Xpaper (https://github.com/laiso/xpaper), an open-source Chrome extension that curates and summarizes your X (Twitter) timeline into a clean, readable newsletter format.
Like many of you, I wanted to distance myself from the endless scrolling of Twitter, but completely quitting wasn't an option—I still needed to extract the signal from the noise. I built this to solve that exact dilemma.
I made a few technical decisions that I thought HN might find interesting:
1. *No Backend, Pure DOM Scraping:* I didn't want to fight the restrictive official API or run a fragile scraping backend. Instead, the extension reads the timeline directly from the DOM of your active tab (there's a minimal sketch after this list). Since it only processes what's already rendered on your own screen, for personal use, everything stays inside the browser; no server ever sees your timeline.
2. *Cloud LLMs for Best UX, Local LLMs for Privacy:* Xpaper works best with cloud APIs (OpenAI, Anthropic, Gemini, OpenRouter, and more) for speed and quality, but I also built full support for *Local LLMs as an option* for users who prioritize privacy. Your timeline data never has to leave your machine if you connect to Chrome's experimental Built-in AI (Gemini Nano via `window.ai`) or a local-network LLM like Ollama or LM Studio (see the Ollama sketch below).
3. *Bypassing Manifest V3 Local IP Restrictions:* Connecting an extension to local LLM servers (on `192.168.x.x`, `::1`, etc.) is notoriously difficult in Manifest V3 because you can't easily wildcard IPs in `host_permissions`. I had to implement a dynamic permission request flow (`chrome.permissions.request`) specifically for RFC 1918 and loopback addresses to make "Bring Your Own Local Server" actually work smoothly; a sketch of that flow also follows the list.
4. *Combating "AI Slop" with Multi-Agent Auditing & Human Review:* There's a lot of valid criticism lately of "vibe coding" leading to mass-produced, insecure "AI slop". Extensions that inject LLM output into the DOM are an XSS nightmare waiting to happen. To prevent this, I set up a rigorous review process: three different AI agents cross-reviewed the codebase, focusing specifically on vulnerabilities (XSS, DNS rebinding, CSP), and I then conducted a thorough human review as the last line of defense. The full audit methodology is documented in the repo, and one example of the XSS class I screened for is sketched below.
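To make point 1 concrete, here's roughly what "reading the timeline from the DOM" looks like in a content script. This is a simplified sketch, not Xpaper's actual code, and the `data-testid` selectors are just what X's markup happens to use today; they can change at any time.

```typescript
// Content script sketch: read visible tweets straight from the rendered timeline.
// The data-testid selectors are assumptions about X's current markup, not a stable API.

interface ScrapedTweet {
  author: string;
  text: string;
}

function scrapeVisibleTweets(): ScrapedTweet[] {
  const articles = document.querySelectorAll<HTMLElement>(
    'article[data-testid="tweet"]'
  );
  return Array.from(articles).map((article) => ({
    author:
      article.querySelector('[data-testid="User-Name"]')?.textContent ?? "",
    text:
      article.querySelector('[data-testid="tweetText"]')?.textContent ?? "",
  }));
}
```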
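For point 2, the local path is just an HTTP call that never leaves your machine. Here's a minimal sketch against Ollama's standard `/api/generate` endpoint on its default port; the model name and prompt are placeholders, not Xpaper's actual configuration:

```typescript
// Summarize scraped timeline text against an Ollama server on the user's machine.
// Nothing leaves the machine: the request targets localhost.
async function summarizeLocally(timelineText: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // whatever model the user has pulled locally
      prompt: `Summarize this timeline as a short newsletter:\n\n${timelineText}`,
      stream: false, // one JSON response instead of a token stream
    }),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = await res.json();
  return data.response; // Ollama puts the completion in `response`
}
```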
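For point 3, the dynamic flow looks roughly like this, assuming the manifest declares a broad pattern such as `http://*/*` under `optional_host_permissions` so that specific local origins can be granted at runtime. The address check here is deliberately simplified:

```typescript
// Rough loopback / RFC 1918 check (illustrative; real validation should be stricter).
function isLocalAddress(hostname: string): boolean {
  return (
    hostname === "localhost" ||
    hostname === "[::1]" || // URL.hostname keeps the brackets around IPv6
    /^127\./.test(hostname) ||
    /^10\./.test(hostname) ||
    /^192\.168\./.test(hostname) ||
    /^172\.(1[6-9]|2\d|3[01])\./.test(hostname)
  );
}

// Must run inside a user gesture (e.g. a "Connect" button's click handler),
// or Chrome rejects the request outright.
async function requestLocalServerAccess(serverUrl: string): Promise<boolean> {
  const url = new URL(serverUrl);
  if (!isLocalAddress(url.hostname)) return false; // only local servers qualify
  // Prompts the user to grant access to this one origin at runtime.
  return chrome.permissions.request({
    origins: [`${url.protocol}//${url.host}/*`],
  });
}
```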
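And on point 4, the simplest concrete example of the XSS class in question: model output (which can echo attacker-controlled tweet text) must never go through `innerHTML`. A sketch of the safe pattern, illustrative only and no substitute for the audit documented in the repo:

```typescript
// Render LLM output as data, not markup, so injected HTML stays inert.
function renderSummary(container: HTMLElement, summary: string): void {
  const p = document.createElement("p");
  // textContent treats the string literally, so e.g.
  // "<img src=x onerror=alert(1)>" is displayed as text, never executed.
  p.textContent = summary;
  container.replaceChildren(p);
}
```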
It’s completely open source. I'd love your thoughts on this "local browser scraping" approach, the security auditing process, or the UX!