It feels like we’re moving away from treating an LLM as “the application” and toward using it as one part of a larger system.
In practice, a lot of real-world complexity seems to live outside the model: data processing, retrieval strategies, graphs, tools, orchestration, observability, and reliability. The LLM mostly acts as a reasoning or interface layer on top of structured intelligence.
Instead of building monolithic LLM apps, teams appear to be assembling intelligence from composable parts.
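For concreteness, here is roughly the shape I mean, as a toy sketch in Python. Nothing here is a real library: retrieve, run_tool, and call_llm are hypothetical stand-ins for whatever vector store, tool layer, and model client you actually use. The point is that the orchestration code owns the control flow, and the model only gets a narrow job at the end.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Context:
        query: str
        documents: list[str]
        tool_results: dict[str, str]

    def retrieve(query: str) -> list[str]:
        # Placeholder for a retrieval step (BM25, vector search, graph lookup, ...).
        return [f"doc about: {query}"]

    def run_tool(name: str, arg: str) -> str:
        # Placeholder for a deterministic tool call (SQL, calculator, internal API).
        return f"{name}({arg}) -> ok"

    def call_llm(prompt: str) -> str:
        # Placeholder for the model client; the LLM only sees assembled context.
        return f"[answer grounded in]: {prompt[:80]}..."

    def answer(query: str, llm: Callable[[str], str] = call_llm) -> str:
        # Orchestration lives outside the model: gather context, call tools,
        # then hand the LLM a reasoning/interface job over structured inputs.
        ctx = Context(
            query=query,
            documents=retrieve(query),
            tool_results={"lookup": run_tool("lookup", query)},
        )
        prompt = (
            f"Question: {ctx.query}\n"
            f"Docs: {ctx.documents}\n"
            f"Tools: {ctx.tool_results}\n"
            "Answer using only the material above."
        )
        return llm(prompt)

    if __name__ == "__main__":
        print(answer("why did churn spike last week?"))

Each piece (retrieval, tools, model client) is swappable and testable on its own, which is most of what I mean by "composable parts."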
Curious how others here are thinking about this shift, especially in production systems.