Theoretically, any of them could be implemented using the ReAct architecture (I encourage you to go through the same thought exercise), but I'm still grappling with when this makes sense and when it doesn't. Some applications clearly benefit from LLM-backed reasoning—particularly those with heavy business logic that changes frequently. In these cases, updating prompts could replace code changes, potentially enabling product teams to directly influence system behavior without engineering involvement.
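To make that trade-off concrete, here's a minimal sketch (the rule, the function names, and the `llm_complete` callable are all hypothetical, not tied to any particular framework) of the same business rule expressed once in code and once in a prompt:

```python
# Hypothetical example: a refund-approval rule expressed two ways.

# 1. Traditional approach: the rule lives in code, so every change
#    (new thresholds, new exceptions) requires an engineering deploy.
def approve_refund_coded(order_total: float, days_since_purchase: int) -> bool:
    return order_total <= 100 and days_since_purchase <= 30

# 2. LLM-backed approach: the rule lives in a prompt that a product
#    team could edit directly; the code only orchestrates the call.
REFUND_POLICY_PROMPT = """\
You decide refund requests. Approve when the order total is at most $100
and the purchase happened within the last 30 days. Answer APPROVE or DENY.
Order total: ${order_total}
Days since purchase: {days_since_purchase}
"""

def approve_refund_llm(order_total: float, days_since_purchase: int, llm_complete) -> bool:
    # `llm_complete` is any callable that sends a prompt to a model and
    # returns its text response (e.g. a thin wrapper around an API client).
    prompt = REFUND_POLICY_PROMPT.format(
        order_total=order_total, days_since_purchase=days_since_purchase
    )
    return llm_complete(prompt).strip().upper().startswith("APPROVE")
```

The second version buys flexibility (the policy text can change without a deploy) at the cost of latency, inference spend, and non-deterministic edge cases that the first version never has.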
On the other hand, static data processing pipelines seem like poor candidates for this architecture. When the logic is stable and deterministic, the overhead and unpredictability of LLM inference don't add value. The sweet spot appears to be applications where business rules evolve rapidly and the cost of maintaining traditional code outweighs the complexity of prompt engineering.