Yet over the past year, I’ve shipped real, production-grade software using ChatGPT, GitHub Copilot, and Cursor. Not toy scripts. Not demos. Actual infrastructure and applications people rely on.
This isn’t an “AI replaces developers” post. AI doesn’t replace developers. It’s about leverage and interface design.
The backstory
I came up academically, not through the classic CS grind. Degrees from a few places. PhD in 2019. Full professor four years later. My work sits at the intersection of AI, digital health, and large systems.
Traditionally, that meant whiteboards, specs, meetings, then handing things to engineers. I understood systems conceptually, but translating intent into code was slow and fragile.
Then the tooling changed.
The real unlock: prompts as infrastructure
Most people use ChatGPT or Copilot like a smarter Stack Overflow. Fine, but shallow.
The breakthrough was treating prompts as first-class engineering artefacts.
In Cursor, I built a master prompt (sketched below) that encodes:
repo structure and branching
sandbox vs hardened environments
security and compliance constraints
coding standards and review behaviour
when to refactor, when to stop, when to ask
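To make that concrete, here is a stripped-down sketch of the kind of master prompt I mean, written as a Cursor project rules file (for example, a .cursorrules file at the repo root). Every rule here is illustrative, not my actual prompt:

```
# Operating instructions for the AI (illustrative sketch)

Repo and branching
- All work happens on sandbox/* branches; never commit to main directly.
- Follow the existing repo layout; do not invent new top-level directories.

Environments
- Treat production infrastructure as hardened: propose changes, never apply them.
- Sandbox code may be experimental, but it must still pass lint and tests.

Security and compliance
- No secrets, credentials, or personal data in code, logs, or fixtures.
- Flag any new external dependency for human review before adding it.

Coding standards and review behaviour
- Prefer small, reviewable diffs over sweeping rewrites.
- Refactor only code a change already touches; otherwise, ask first.
- If intent is ambiguous, stop and ask rather than guessing.
```

The specific rules matter less than the effect: the AI starts every session already knowing the guardrails, so I never have to restate them.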
I don’t “ask for code”. I define operating instructions for an AI that already knows how to code.
At that point, it stops being a chatbot and starts behaving like a disciplined junior engineer who never gets tired and never sulks.
How it works day to day
I always work in a sandbox branch. The AI knows this.
I specify outcomes, not keystrokes. It proposes changes. I review. We iterate. When I’m happy, I push via GitHub CLI, open a PR, and apply the same discipline as if a human wrote it.
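The mechanics of that last step are ordinary git plus GitHub CLI. A typical round trip looks roughly like this (branch name, title, and body are illustrative):

```bash
# Start on a sandbox branch the AI has been told about up front
git checkout -b sandbox/usage-dashboard

# ...iterate with Cursor and Copilot, review the proposed diffs, run the tests...

# When I'm happy, push and open a PR like any other change
git push -u origin sandbox/usage-dashboard
gh pr create --base main \
  --title "Scaffold usage dashboard" \
  --body "AI-assisted; reviewed and tested before opening this PR."
```

From the reviewer’s side, it’s just another pull request, which is exactly the point.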
Copilot handles the micro. Cursor handles the macro. ChatGPT helps refine the prompts that govern both.
I don’t memorise syntax. I stay in architect mode.
“But you’re not a real developer”
Correct. That’s the point.
I’m not trying to out-code people who’ve been writing C since childhood. I’m removing the translation tax between intent and execution.
Senior people are expensive because they hold context. AI is good at execution once context is explicit.
Prompt engineering, done properly, is context compression.
What this enabled
This setup let me:
stand up hardened cloud infrastructure
scaffold full-stack apps
integrate APIs and data pipelines
refactor aggressively without fear
keep velocity while dev teams were offline
All while running a lab, a company, and a large enterprise programme.
I didn’t become a great coder. The interface finally respected how senior people think.
The punchline
The bottleneck is no longer “can you write code”.
It’s:
can you specify intent
can you design constraints
can you spot nonsense
can you take responsibility
AI doesn’t remove judgment. It amplifies it. Bad architects move faster into walls. Good ones move at an unfair speed.
Cursor didn’t turn me into a 10x engineer.
It turned me into a 1x engineer with a 10x exoskeleton.
And for people whose job was never typing but deciding, that changes everything.