The article shows how this is happening. The examples given are translating code from one programming language to another, explaining a codebase, and generating small solutions to common problems (interview questions). At the end the author jumps to the conclusion that literally anything will be possible via prompting an LLM. This does not necessarily follow, and we could be hitting a wall, if we haven't already.
What LLMs lack is creativity and novelty-seeking functions. Without these you cannot have an intelligent system. LLMs are effectively smart (like 'smart' in smartphone) knowledge bases. They have a lot of encoded knowledge, and you can retrieve that knowledge with natural language. Very useful, with many great use cases like learning, or even some prototyping (emphasis on some).
If LLMs could actually write code as well as a human, even prompting would not be necessary. You could just give one an app and tell it to improve it, fix bugs, and add new features based on usage metrics. I'm sure the industry has tried this, and if it had been successful, we would have already replaced ALL programmers, not just senior programmers at large companies that already have hundreds or thousands of other engineers.
duxup•2h ago
Anyway, does a Principal Engineer at Microsoft typically code a lot?