Before LLMs, we did not fully understand the libraries we read, the kernels we touched, or the networks we only grasped conceptually. We have been outsourcing intelligence to other engineers, teams, and systems for decades.
One possible reason is that when we use a library, we can tell ourselves we could read it. With an LLM, that fiction of potential understanding collapses. The real shift I am seeing isn't from "understanding" to "not understanding."
It is toward "I understand the boundaries, guarantees, and failure modes of what I'm responsible for." If agentic coding is the future, mastery becomes the ability to steer, constrain, test, and catch failures - not the ability to manually type every line.
Full piece:
https://open.substack.com/pub/grandimam/p/ai-native-and-anti-ai-engineers?utm_campaign=post-expanded-share&utm_medium=post%20viewer