An original sin of Free Software which carried through to Open Source and infects HN via its many Open Source believers is a reluctance to take project management seriously. OP shows that Jellyfin’s dictat... er, maintainer is not effectively managing the project. Open Source has no adequate answers (“fork” is not adequate).
1. Free Software / Open Source are Good and True by assertion. There is no God but source code, and Stallman is its prophet. 2. Questions whose answers tend to contradict point 1, such as "Gee, the world runs on Python; as wonderful a job as Guido and his inner circle have done, is it time to ask what an ideal management structure for a technology worth (tens? hundreds? of) billions of dollars might be?" are not welcome, and so are largely not asked.
Automated code review is supposed to help catch the most trivial and basic mistakes (which, as the author claims, are often repetitive), and also speed up feedback. Ultimately, this should help push issues forward and let maintainers focus on harder problems, like architectural issues, which need deep knowledge that AI can't yet supply.
On the other hand, there are comments opposing the policies of AI companies, complaining about pointless, nit-picky review comments that add little, and raising the concern that AI reviews get treated as a checklist for getting things merged, which can be frustrating given the sheer volume of bot comments. The suggested mitigation is to note explicitly that the AI code review is only a suggestion. [1]
In the end, I think accepting AI in a way similar to the rules introduced in Linux (i.e., you can make your life easier, but you still have to understand the code) makes sense, given how limited code review capacity is compared to the volume of incoming contributions, a point also raised in the mailing list thread [2].
[0] http://lists.openwrt.org/pipermail/openwrt-devel/2026-April/...
[1] http://lists.openwrt.org/pipermail/openwrt-devel/2026-April/...
[2] http://lists.openwrt.org/pipermail/openwrt-devel/2026-April/...
Leverage the model's innate in-context learning by supplying the code review AI with an annotated list of "do" and "don't" examples. Define the expected reviewer behavior explicitly, and dial it in over time.
In some commercial projects we use Copilot reviews on GitHub, and we've noticed this "low quality nit-picky" style of review comments as well, but there is no way to get rid of them, as the reviewer is managed externally by GitHub...
[0]: http://lists.openwrt.org/pipermail/openwrt-devel/2026-April/...
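The do/don't approach above can be sketched as a prompt-assembly step. This is a hypothetical illustration, not any project's actual configuration: the guideline list, function name, and prompt format are all assumptions.

```python
# Hypothetical sketch: steer an AI reviewer with annotated "do"/"don't"
# examples supplied in-context. All rules below are illustrative.

GUIDELINES = [
    ("do", "Flag missing error handling on system calls."),
    ("do", "Point out memory leaks and use-after-free patterns."),
    ("dont", "Comment on brace placement or other style the linter already enforces."),
    ("dont", "Suggest renaming variables unless the name is actively misleading."),
]

def build_review_prompt(diff: str) -> str:
    """Assemble a reviewer prompt that front-loads the calibrated rules."""
    lines = ["You are a code reviewer. Follow these calibrated rules:"]
    for kind, rule in GUIDELINES:
        prefix = "DO" if kind == "do" else "DON'T"
        lines.append(f"- {prefix}: {rule}")
    lines.append("Review the following diff. If nothing substantive, say so briefly:")
    lines.append(diff)
    return "\n".join(lines)

prompt = build_review_prompt("--- a/net.c\n+++ b/net.c\n+    fd = open(path, O_RDONLY);")
print(prompt.splitlines()[1])  # first calibrated rule
```

"Dialing it in over time" then amounts to editing `GUIDELINES` whenever the reviewer produces a comment class you want to keep or suppress.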
AI code reviews easily double the work of reviewing: you have to review both the original code and the AI's review. The AI review can be 80% correct, but you never know which 80% is correct and which 20% is garbage, so you have to vet all of the AI's comments anyway.
1. Modularize your code to allow plugins, so users can start using contributions immediately while you vet them at your own pace. 2. Make tests robust and easy to run (one command to run, at most one to set up) so you don't have to pore over contributed code to have some confidence that it works.
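Point 1 can be sketched with a minimal plugin registry. This is an illustrative sketch assuming a Python project; the class and hook names are made up for the example:

```python
# Minimal plugin-registry sketch: the core exposes a small, stable hook API,
# so third-party plugins can run immediately without touching core internals,
# and maintainers can audit registered hooks on their own schedule.
from typing import Callable, Dict

class PluginRegistry:
    def __init__(self) -> None:
        self._hooks: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str) -> Callable:
        """Decorator: record a plugin function under a stable hook name."""
        def decorator(fn: Callable[[str], str]) -> Callable[[str], str]:
            self._hooks[name] = fn
            return fn
        return decorator

    def run(self, name: str, payload: str) -> str:
        # Plugins are isolated behind this call boundary; a misbehaving
        # plugin can be unregistered without modifying core code.
        return self._hooks[name](payload)

registry = PluginRegistry()

@registry.register("shout")
def shout(text: str) -> str:
    return text.upper()

print(registry.run("shout", "hello"))  # prints HELLO
```

For point 2, the same discipline applies: if the whole suite runs with a single command (e.g. `pytest`), contributed plugins can be gated on a green run rather than a line-by-line read.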
dvh•1h ago
armanckeser•1h ago
onionisafruit•1h ago
esafak•21m ago
zokier•44m ago