prompt-run treats `.prompt` files as first-class runnable artifacts. A `.prompt` file is a YAML header (model, provider, temperature, variable declarations) followed by a plain text body with `{{variable}}` substitution. You run it from the terminal:
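A minimal `.prompt` file might look like this. The exact field names and the header/body separator are my assumptions based on the description above, not the tool's documented schema:

```yaml
# summarize.prompt — hypothetical example; schema details assumed
model: claude-sonnet-4
provider: anthropic
temperature: 0.3
variables:
  - text
---
Summarize the following article in three bullet points:

{{text}}
```

Anything declared under `variables` would then be supplied at the command line with `--var`.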
```
prompt run summarize.prompt --var text="$(cat article.txt)"
```
You can override model and provider at runtime without editing the file:
```
prompt run summarize.prompt --model gpt-4o --provider openai
```
The `prompt diff` command runs the same prompt for two different inputs (or two prompt versions against the same input) and shows outputs side by side. That's the feature I find most useful when iterating.
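For the two-versions-same-input case, a hypothetical invocation might look like this (I'm guessing at the argument shape; the actual syntax may differ):

```
prompt diff summarize-v1.prompt summarize-v2.prompt --var text="$(cat article.txt)"
```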
It supports Anthropic, OpenAI, and Ollama out of the box. MIT license. No telemetry, no accounts, no backend — just a local CLI tool that talks directly to whichever provider you configure.
The file lives in your repo, gets versioned by git, and can be reviewed in PRs like any other code.
Would be curious to hear whether others have hit this same friction and how you've handled it.