A few weeks ago I posted about llm-exe, a TypeScript library for structuring LLM calls with reusable components.
I just put together a Medium series that digs deeper into how it works. It breaks down the idea of an LLM function: an executor that wraps a prompt, a model, and a parser. Each part does one thing well, and together they give you a clean, well-typed, testable, and composable way to work with LLMs.
The posts walk through each layer: prompt, parser, LLM, executor. If you're building LLM features in production with TypeScript, I think you'll find the structure helpful, and I'd be interested in any feedback.
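To make the shape concrete, here's a minimal TypeScript sketch of the pattern. The names (Prompt, Parser, Model, createExecutor) are illustrative placeholders, not llm-exe's actual API, which the posts cover:

```ts
// Sketch of the "LLM function" idea: an executor that wraps a prompt,
// a model call, and a parser into one typed async function.
type Prompt<I> = (input: I) => string;
type Parser<O> = (raw: string) => O;
type Model = (prompt: string) => Promise<string>;

function createExecutor<I, O>(prompt: Prompt<I>, model: Model, parser: Parser<O>) {
  // Input in, typed output out; each piece stays independently testable.
  return async (input: I): Promise<O> => parser(await model(prompt(input)));
}

// Usage: compose the three pieces into one callable LLM function.
const classifySentiment = createExecutor(
  (input: { text: string }) =>
    `Reply with exactly "positive" or "negative". Text: ${input.text}`,
  async (_prompt) => "positive", // stub model; swap in a real provider call here
  (raw): "positive" | "negative" =>
    raw.trim() === "positive" ? "positive" : "negative"
);
```

Because each piece is a plain function, you can unit-test the prompt and parser without calling a model, and swap providers behind the model interface without touching the rest.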
Great work, folks at llm-exe! As a TypeScript developer working with LLMs, I've found llm-exe to be a game-changer. It abstracts away the boilerplate of prompt formatting, model integration, and response parsing, letting me focus on building features.
The modular design—separating prompts, parsers, and executors—makes my codebase cleaner and more maintainable. Plus, the ability to switch between different LLM providers with minimal code changes is incredibly convenient.
llm-exe•2h ago
Medium series: https://medium.com/llm-exe