I feel like many of the AI insights featured on Hacker News are from the software engineer's point of view.
There's also the perspective of vibe-coded-to-prod disaster scenarios.
For me, as a software engineer with nearly 20 years of experience, AI is already more trustworthy than many contractors I've worked with over the years.
As a founder/product manager, a lot of trust is put in the engineers building the platforms. The Founder <-> Implementer interaction already involves huge amounts of trust. Things go wrong all the time, and C-suite executives don't have the ability to fix them themselves.
My interpretation is that a lot of engineers are encountering this trust relationship for the first time with AI. We're trusting AI (effectively a contractor) to build something to spec. Experienced engineers are able to audit the AI's output at a very deep level.
For founders / product managers / executives, this relationship is nothing new; we already trust other parties to write our code.
What does everyone think?
Going from a contractor to an LLM is actually a huge benefit to me: an LLM has a much faster feedback loop than a human contractor, costs a fraction as much, and has a lower base rate of error (in my experience). Nothing new in the trust model.
GianFabien•47m ago
I very much believe that comprehensive domain knowledge and technical proficiency are both essential. Actual code production can be mostly delegated. If AI produces better-quality code than the contractors available to you, then it is the preferable option.
IMHO a small team of experienced engineers using AI is the optimal choice.
Vibe-coded startups without competent technical oversight are tech debt on steroids.