I've been using Cursor lately to handle some (most) of the grunt work, and it's been surprisingly useful for two things: keeping documentation from going stale and spotting gaps in test coverage. For example, I'll ask it things like "what's the most likely way this function could break?" and it will suggest edge cases I hadn't thought of.
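To make that concrete, here's a made-up example of the kind of thing it surfaces (the function and tests below are hypothetical, not from my actual codebase): for a proration helper, it pointed at the zero-length period and the usage-longer-than-period cases, neither of which I had tests for.

    import pytest

    # hypothetical proration helper, just to illustrate
    def prorate(amount, days_used, days_in_period):
        if days_in_period <= 0:
            raise ValueError("days_in_period must be positive")
        return amount * min(days_used, days_in_period) / days_in_period

    # the sort of edge-case tests it suggests
    def test_zero_length_period_raises():
        with pytest.raises(ValueError):
            prorate(100, 0, 0)

    def test_usage_beyond_period_is_capped():
        assert prorate(100, 40, 30) == 100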
That said, it's not magic. It sometimes hallucinates test cases that are rubbish, so some critical thinking is still required.
I'm curious what y'all are using to keep docs/tests maintainable. Are you leaning on AI or doing it the old-fashioned way?
sunscream89•12h ago
I know, I know, "everyone should do it." Everyone is not going to do it; everyone left it undone the last times around, too.
It works best when someone is trying to impress, or is told to do it as an essential part of the job.
And there's also setting a good example.