Prompts started as throwaway text. Once they entered real systems, they changed:
- they got reused across flows
- they carried conditional logic
- they returned structured data that downstream code depended on
At that point, prompt bugs started to look a lot like production bugs.
In practice, I’ve mostly seen two approaches:
- Branching in code and concatenating prompt strings. Explicit and predictable, but hard to maintain and test as prompts grow.
- Describing rules in the prompt and letting the model apply them. Less code, but the logic becomes implicit and behavior gets harder to reason about.
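To make the first approach concrete, here's a minimal sketch of code-side branching with string concatenation. All names here (`buildSupportPrompt`, `Ticket`) are hypothetical, not from any particular library:

```typescript
// Approach 1: prompt logic lives in code; the prompt is just the output.
type Ticket = { priority: "low" | "high"; body: string };

function buildSupportPrompt(ticket: Ticket): string {
  const parts = ["You are a support assistant."];
  // Conditional logic is explicit and unit-testable...
  if (ticket.priority === "high") {
    parts.push("Respond with an escalation plan first.");
  }
  parts.push(`Ticket: ${ticket.body}`);
  // ...but every new rule adds another branch and another string to keep in sync.
  return parts.join("\n");
}
```

The upside is that a plain unit test can pin down exactly which instructions appear for which inputs; the downside is that the prompt's overall shape is scattered across branches.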
This feels similar to how we used to treat SQL, HTML, or config as raw strings before structure caught up with usage.
I'm not claiming this is the right answer. I'm mostly curious how others here are handling:
- prompt reuse
- prompt logic placement
- testing and change safety
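On the testing point: once downstream code depends on structured model output, one thing that has helped me is validating that output at the boundary, the same way you'd validate any untrusted input. A hedged sketch, with a hypothetical `parseModelReply` and reply shape:

```typescript
// Treat the model's reply as untrusted input with a contract to enforce.
type Reply = { action: "refund" | "escalate"; reason: string };

function parseModelReply(raw: string): Reply {
  const data = JSON.parse(raw);
  // Fail loudly on contract drift instead of letting bad data flow downstream.
  if (data.action !== "refund" && data.action !== "escalate") {
    throw new Error(`unexpected action: ${data.action}`);
  }
  if (typeof data.reason !== "string") {
    throw new Error("reason must be a string");
  }
  return { action: data.action, reason: data.reason };
}
```

A parser like this gives prompt changes a safety net: if a reworded prompt silently changes the output shape, the failure shows up at the boundary rather than deep in application code.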
At what point did prompts stop feeling like “just text” in your systems?
I also built a small TypeScript lib for experimenting with the concept: https://github.com/codeaholicguy/promptfmt