I'm releasing LLM Newsletter Kit, a production-tested TypeScript toolkit for building AI-driven newsletter pipelines. It handles the full lifecycle: crawling → analysis → generation → delivery.
The Context: I'm an archaeologist-turned-engineer. I built this engine for "Research Radar" (a cultural heritage newsletter) to automate the research aggregation I had been doing by hand. It currently holds a 15% CTR with near-zero maintenance and costs roughly $0.20 to $1 per issue.
Core Features:
"Bring Your Own Scraper": Async injection lets you use Cheerio, Puppeteer, or LLM-based parsers without framework lock-in.
Provider-based DI: Swap out crawling, analysis, or storage components via clean interfaces.
Production-First: 100% test coverage, with retries, cost controls, and observability baked in.
Tech Stack: TypeScript ESM, LangChain runnables, Vercel AI SDK (structured outputs), and Zod.
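To make the "Bring Your Own Scraper" and provider-based DI ideas concrete, here is a minimal sketch of the pattern. The interface and function names (`Scraper`, `collect`, `fetchScraper`) are illustrative, not the kit's actual API: the pipeline depends only on a narrow async interface, so any implementation (Cheerio, Puppeteer, an LLM parser) can be injected.

```typescript
// Hypothetical sketch: the pipeline depends on this narrow async
// interface rather than on any concrete scraping library.
interface Scraper {
  scrape(url: string): Promise<{ title: string; body: string }>;
}

// One possible implementation, using plain fetch for illustration.
const fetchScraper: Scraper = {
  async scrape(url) {
    const res = await fetch(url);
    const html = await res.text();
    const title = /<title>(.*?)<\/title>/i.exec(html)?.[1] ?? url;
    return { title, body: html };
  },
};

// The pipeline only ever sees the interface, so swapping scrapers
// (or mocking them in tests) requires no framework changes.
async function collect(urls: string[], scraper: Scraper) {
  return Promise.all(urls.map((u) => scraper.scrape(u)));
}
```

Because `collect` takes the scraper as a parameter, tests can pass a stub and production can pass a Puppeteer-backed implementation, with no lock-in either way.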
Why Code instead of No-Code? Optimizing cost and quality requires granular control over context windows, token limits, and retry strategies. This toolkit enables advanced workflows (e.g., self-reflection loops) that are often prohibitively expensive or impossible in drag-and-drop tools.
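A self-reflection loop with a token budget can be sketched in a few lines. This is an illustrative example, not the kit's actual implementation; `generate` and `critique` stand in for LLM calls, and all names here are assumptions:

```typescript
// Hypothetical sketch: draft → critique → revise, capped by a token
// budget and a round limit. This is the kind of conditional, cost-aware
// control flow that is awkward to express in drag-and-drop tools.
interface Draft {
  text: string;
  tokensUsed: number;
}

async function refineWithBudget(
  topic: string,
  generate: (prompt: string) => Promise<Draft>, // stand-in for an LLM call
  critique: (draft: string) => Promise<string | null>, // null = good enough
  maxTokens = 4000,
  maxRounds = 3,
): Promise<string> {
  let spent = 0;
  let draft = await generate(topic);
  spent += draft.tokensUsed;
  for (let round = 0; round < maxRounds && spent < maxTokens; round++) {
    const feedback = await critique(draft.text);
    if (feedback === null) break; // the critic is satisfied; stop early
    draft = await generate(`${topic}\nRevise per feedback: ${feedback}`);
    spent += draft.tokensUsed;
  }
  return draft.text;
}
```

The budget check happens before each extra model call, so a misbehaving critic cannot run up an unbounded bill.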
Links:
Code (GitHub): https://github.com/kimhongyeon/heripo-research-radar
Live Example: https://heripo.com/research-radar-newsletter-example.html
npm: @llm-newsletter-kit/core
I'd love your feedback on the architecture and DX!