I would never have thought that someone could actually write this.
“Oh, cursor wrote that.”
If it made it into your pull request, YOU wrote it, and it'll be part of your performance review. Cursor doesn't have a performance review. Simple as.
This seems like a curious choice. At my company we have both Gemini and Cursor review agents available (I'm not sure which model is under the hood on the latter). Both frequently raise legitimate points. I'm sure they're abusable, I just haven't seen it.
And in practice that means that I won’t take “The AI did it” as an excuse. You have to stand behind the work you did even if you used AI to help.
I neither tell people to use AI nor tell them not to, and in practice people have not been using it much, for whatever that's worth.
I'm dreading the day the hammer falls and AI-use metrics are implemented for all developers at my job.
Since when has that not been the bare minimum? Even before AI existed, and even if you did not work in programming at all, you had to do that as a bare minimum. Even if you use a toaster and your company guidelines say to toast every sandwich for 20 seconds, if following every step as trained results in a lump of charcoal instead of bread, you can't serve it to the customer. At the end of the day, if you make the sandwich, you're responsible for making it correctly.
One thing I didn't like was the copy/paste response for violations.
It makes sense to have one. It's just that the text they propose uses what I'd call insider terms, as well as terms that put down the contributor.
And while that might be appropriate at the next level of escalation, the first-level stock text should be easier for the outside contributor to understand and should better explain the next steps for the contributor to take.
I also recently wrote a similar policy[0] for my fork of a codebase[1]. I had to write it because the original developer took the AI pill, started committing totally broken code that was full of bugs, and doubled down when asked about it[2].
On an analysis level, in a recent post[3], I commented that "Non-coders using AI to program are effectively non-technical people, equipped with the over-confidence of technical people. Proper training would turn those people into coders that are technical people. Traditional training techniques and material cannot work, as they are targeted and created with technical people in mind."
But what's more, we're also seeing programmers use AI to create slop. They're effectively technical people equipped with their initial over-confidence, now highly inflated by a sense of effortless capability. Before AI, developers were (sometimes) forced to pause, investigate, and understand; now it's easier and more natural to simply assume they grasp far more than they actually do.
[0]: https://gixy.io/contributing/#ai-llm-tooling-usage-policy
[1]: https://github.com/MegaManSec/gixyng
[2]: https://joshua.hu/gixy-ng-new-version-gixy-updated-checks#qu...
[3]: https://joshua.hu/ai-slop-story-nginx-leaking-dns-chatgpt#fi...