LLMs are non-deterministic; small changes in model versions, weights, or context can lead to subtle (or major) shifts in behavior over time. PromptDrifter helps you catch this drift by running prompts in CI and failing the build when responses deviate from what you expect.
It’s like snapshot testing, but for prompts.
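The idea can be sketched in a few lines of Python. This is not PromptDrifter's actual API, just a hypothetical illustration of the core check: compare a fresh model response against a stored snapshot and fail CI if they drift too far apart (a canned string stands in for a real LLM call).

```python
# Minimal sketch of prompt snapshot testing -- hypothetical, not
# PromptDrifter's real interface. Uses stdlib difflib for similarity.
import difflib

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two responses."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def check_drift(response: str, snapshot: str, threshold: float = 0.9) -> bool:
    """Return True if the response stays close enough to the snapshot."""
    return similarity(response, snapshot) >= threshold

# In CI, the stored snapshot is compared against a fresh model response;
# a canned string stands in for the live LLM call here.
snapshot = "Paris is the capital of France."
fresh = "Paris is the capital city of France."

if not check_drift(fresh, snapshot):
    raise SystemExit("Prompt drift detected: response deviates from snapshot")
```

A real tool would also need semantic comparison (exact-string or edit-distance checks are brittle for LLM output), but the shape of the check is the same: snapshot, compare, fail the build on deviation.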
It’s early days, and I’d love feedback on what would make this more useful for your workflow, especially if you’re building products powered by LLMs in production.