I don't know what they're doing where you can do code reviews in 5-10 minutes, but in my decades doing this that only works for absolutely trivial changes.
Splitting the change does not prevent you from looking at diffs of any combination of those commits. (Including whole pr) You're not losing anything.
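For example (branch and commit names here are placeholders), a split change still lets you view any slice of the history you like:

```
git show <sha>                 # diff of a single commit
git diff <shaA>..<shaB>        # diff across a range of commits
git diff main...my-feature     # combined diff of the whole branch / PR
```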
> at least 10x longer because an average approval time is often more than a day.
Why do you think it would take longer to review? Got any evidence?
But I trust my colleagues to do good reviews when I ask them to, and to ignore my PRs when I don't. That's kind of the way we all want it.
I regularly ask for a review of specific changes by tagging them in a comment on the lines in question, with a description of the implications and a direct question that they can answer.
This, "throw the code at the wall for interpretation" style PR just seems like it's doomed to become lower priority for busy folks. You can add a fancy narrative to the comments if you like, but the root problem is that presenting a change block as-is and asking for a boolean blanket approval of the whole thing is an expensive request.
There's no substitute for shared understanding of the context and intent, and that should come out-of-band of the code in question.
PRs should be optional, IMHO. Not all changes require peer review, and if we trust our colleagues then we should allow them to merge their branch without wasting time with performative PRs.
Part of the difference is that the idea you can catch all problems with piecemeal code review is nonsense, so you should have at least some sweeping QA somewhere.
Ideally, yes. After a decade and something under ZIRP, it seems a lot of workers never had an incentive to remain conscious of their intents in context long enough to conduct productive discourse about them. Half of the people I've worked with would feel slighted not by the bitterness of the previous sentence, but by its length.
There's an impedance mismatch between what actually works and what we're expected to expect to work. That mismatch causes people to painfully syntax-error much more frequently than it causes them to actually clarify context and intent.
In that case the codebase remains the canonical representation of the context and intent of all contributors, even when they're not at their best, and honestly what's so bad about that? Maybe communicating them in-band instead of out-of-band might be a degraded experience. But when out-of-band can't be established, what else is there to do?
I'd be happy to see tools that facilitate this sort of communication through code. GitHub for example is in perfect position to do something like that and they don't. Git + PRs + Projects implement the exact opposite information flow, the one whose failure modes people these days do whole conference talks about.
You can split a big feature into N MRs, and that doesn't necessarily mean the N MRs are easier to understand (and so to review) than a single big MR. Taking into account the skills of the average software engineer, I prefer to review a single big MR over N different and not very well connected MRs (the classic example is that MR number N looks good and innocent, but then MR number N+1 looks awful… and since MR number N was already approved and merged, the incentives are not there to redo the work).
OK.
There is little you can review properly in 10 minutes unless you were already pairing on it. You might have time to look for really bad production-breaking red flags maybe.
Remember the underlying reasons for PRs: balancing getting shit done against operational, quality, and tech-debt concerns. Depending on what your team needs, you can choose anything from no review at all to spending 3x more time reviewing than coding. What's right depends on your situation. The PR is a tool.
Depends on the specific changes of course, but generally speaking.
From my experience most of the issues I find are actually from this type of observation rather than actually reading the code and imagining what it does in my head.
> A good rule of thumb is 300 lines of code changes - once you get above 500 lines, you're entering unreviewable territory.
I've found LoC doesn't matter when you split up commits like they suggest. What does matter is how controversial a change is. A PR should ideally have at most one part that generates a lot of discussion. That PR should ideally also have the minimal number of commits (just what doesn't make sense standalone). Granted, this takes experience in general and experience with your reviewers, which is where metrics like LoC counts can come in handy.
It’s not really something you can easily enforce with automation, so basically unachievable unless you are like Netflix and only hiring top performers. And you aren’t like Netflix.
- Keep PR messages short and to the point.
- Use as many commits as you need; it's all the same branch. Squash if you want, though I think it hides valuable metadata.
- Put the ticket in the branch name. Non-negotiable (see the sketch after this list).
- Update your ticket with progress. Put in as much detail as you can, as if you were writing to someone who's picking up the task after you.
- Add links to your ticket: docs, readme, o11y graphs, etc.
- Link the ticket in the PR for easy access to additional info.
- Admit it if you don't understand what you're looking at. Better to pair and keep moving forward.
- If you request changes, stay available for conversation and approval for the next few hours.
- Punt the review if you don't feel like you can legitimately engage with the content right now. Make sure you communicate it, though.
- Avoid nits. They cause a loss in momentum and are very rarely worth changing.
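As a sketch of the ticket-in-branch-name point (the ticket ID and names are made up):

```
# branch named after the ticket, so the PR and history trace back to it
git checkout -b ABC-123-add-upload-retries

# commit as often as you like; reference the ticket in the messages too
git commit -m "ABC-123: wrap the upload call in a retry helper"
git commit -m "ABC-123: add tests for the retry path"
```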
- Are you trying to make sure that more than one human has seen the code? Then simply reading through a PR in 10 minutes and replying with either a LGTM or a polite version of WTF can be fine. This works if you have a team with good taste and a lot of cleanly isolated modules implementing clear APIs. The worst damage is that one module might occasionally be a bit marginal, but that can be an acceptable tradeoff in large projects.
- Does every single change need to be thoroughly discussed? Then you may want up-front design discussions, pairing, illustrated design docs, and extremely close reviews (not just of the diffs, but also re-reviewing the entire module with the changes in context). You may even want the PR author to present their code and walk through it with one or more people. This can be appropriate for the key system "core" that shapes everything else in the system.
- Is half your code written by an AI that doesn't understand the big picture, that doesn't really understand large-scale maintainability, and that cuts corners and _knowingly_ violates your written policy and best practices? Then honestly you're probably headed for tech debt hell on the express train unless your team is willing to watch the AI like hawks. Even one clueless person allowing the AI to spew subtly broken code could create a mess that no number of reviewers could easily undo. In which case, uh, maybe keep everything under 5,000 lines and burn it all down regularly, or something?
I trust my colleagues to do the same (and they often do).
I can't imagine working in an environment where this is a theater.
No thank you. As far as future me is concerned, I don't need to know how I got to whatever I'm looking at.
A squashed ticket-by-ticket set of merges is enough for me.
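In git terms, something like this is all the history I want on main (branch and ticket names made up):

```
# main gets one squashed commit per ticket
git checkout main
git merge --squash ABC-123-add-upload-retries
git commit -m "ABC-123: add retries to the upload pipeline"
```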
But the thing is: this code is terrible, and huge chunks of it are an unholy mix-and-match of code written for a very specific purpose for this or that client, combined with this very weird "falsely generalized" code. I don't know what to call that: you have some very specific code, but you insert useless and probably buggy indirections everywhere so that it can be considered "general". The worst kind of code.
Anyways, I was asked by my boss to do such a review. I look at it and I realize that building a database setup to be able to properly run that code is going to take me weeks because I'm going to have to familiarize myself with tons and tons of modules I don't know about.
So I write down my estimate in our tracker: 1 month.
He comes back to me, alarmed. A whole month? Well yeah, otherwise I can't even run the code.
All you have to do is look at the code! What? No way, that ain't a review. Well, I'm asking you to review it as such. Then I'm not writing LGTM on it.
So I was never asked to do reviews there again (in fact, I stopped working on OpenERP at all), but I could see "LGTM" popping up from my colleagues. By the way, on OpenERP tracker, all you ever saw in review logs was "LGTM" and minor style suggestions. Nothing else. What a farce.
So yeah, as the article says, there are some "LGTM-inducing" types of PRs, but the core of the problem is developers agreeing to do this "LGTM-stamping" in the first place. Without them, there would only be reviewable PRs.
I still encourage doing a lot of small commits with good commit messages, but don't submit more than 2-3 or 4 commits in a single PR...
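If I end up with more than that, I squash the fixups together before opening the PR, usually with an interactive rebase against the base branch:

```
# reorder and squash so the PR lands with only a handful of meaningful commits
git rebase -i main
```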