frontpage.

Ask HN: Anyone Using a Mac Studio for Local AI/LLM?

43•UmYeahNo•1d ago•27 comments

Ask HN: Non AI-obsessed tech forums

18•nanocat•6h ago•12 comments

Ask HN: Ideas for small ways to make the world a better place

9•jlmcgraw•8h ago•16 comments

Ask HN: 10 months since the Llama-4 release: what happened to Meta AI?

42•Invictus0•1d ago•11 comments

AI Regex Scientist: A self-improving regex solver

6•PranoyP•10h ago•1 comment

Ask HN: Who wants to be hired? (February 2026)

139•whoishiring•4d ago•511 comments

Ask HN: Who is hiring? (February 2026)

312•whoishiring•4d ago•511 comments

Ask HN: Any International Job Boards for International Workers?

2•15charslong•5h ago•1 comment

Tell HN: Another round of Zendesk email spam

104•Philpax•2d ago•54 comments

Ask HN: Why LLM providers sell access instead of consulting services?

4•pera•16h ago•13 comments

Ask HN: Is Connecting via SSH Risky?

19•atrevbot•2d ago•37 comments

Ask HN: What is the most complicated Algorithm you came up with yourself?

3•meffmadd•17h ago•7 comments

Ask HN: Has your whole engineering team gone big into AI coding? How's it going?

17•jchung•1d ago•12 comments

Ask HN: How does ChatGPT decide which websites to recommend?

5•nworley•1d ago•11 comments

Ask HN: Is it just me or are most businesses insane?

7•justenough•1d ago•5 comments

Ask HN: Mem0 stores memories, but doesn't learn user patterns

9•fliellerjulian•2d ago•6 comments

Ask HN: Anyone Seeing YT ads related to chats on ChatGPT?

2•guhsnamih•1d ago•4 comments

Ask HN: Is there anyone here who still uses slide rules?

123•blenderob•3d ago•122 comments

Ask HN: Does global decoupling from the USA signal comeback of the desktop app?

5•wewewedxfgdf•1d ago•2 comments

Kernighan on Programming

170•chrisjj•4d ago•61 comments

We built a serverless GPU inference platform with predictable latency

5•QubridAI•1d ago•1 comment

Ask HN: How Did You Validate?

4•haute_cuisine•1d ago•4 comments

Ask HN: Does a good "read it later" app exist?

8•buchanae•3d ago•18 comments

Ask HN: Cheap laptop for Linux without GUI (for writing)

15•locusofself•3d ago•16 comments

Ask HN: Have you been fired because of AI?

17•s-stude•3d ago•15 comments

Ask HN: Anyone have a "sovereign" solution for phone calls?

12•kldg•3d ago•1 comment

Test management tools for automation heavy teams

2•Divyakurian•1d ago•2 comments

Ask HN: OpenClaw users, what is your token spend?

14•8cvor6j844qw_d6•4d ago•6 comments

Ask HN: Has anybody moved their local community off of Facebook groups?

23•madsohm•4d ago•17 comments

Ask HN: Are "provably fair" JavaScript games trustless?

2•rishi_blockrand•2d ago•0 comments

Ask HN: Etiquette giving feedback on mostly AI-generated PRs from co-workers

5•chfritz•1mo ago
I struggle to find the right way to provide feedback on pull requests (PRs) that consist mostly of AI-generated code. The co-workers submitting them have learned to disclose this -- I found it frustrating when they didn't -- and now say they have reviewed and iterated on the code. But often the result is still what I would describe as "a big contribution off the mark": a lot of code that simply follows the wrong approach.

Usually, when someone has put in a lot of work -- which we used to be able to measure in lines of code -- it seems unfair to criticize them after the fact. A good development process with ticket discussions would ensure that nobody does a lot of work before there is agreement on the general approach. But with AI this script no longer works, partly because it is "too easy" to generate all that code before the approach has even been decided.

So I'm asking myself, and now HN: is it OK to point out that an entire PR is garbage and should simply be discarded? How can I tell how much "brain juice" a co-worker has spent on it, and how attached they might be to it by now, when I don't even know whether they understand the code they submitted?

I have to admit that I hate reviewing huge PRs, and the problem with AI-generated code is that it often would have been much better to find and use an existing open-source library than to (re-)generate a lot of code for the task. But how will I know that until I've actually taken the time to review and understand the big new proposed contribution? And even if I do spend the time to understand the code and the approach it implies, how will I know which parts reflect my co-worker's genuine opinion and intellect (which I'd be hesitant to criticize) and which parts are AI fluff I can rip apart without stepping on their toes? If the answer is "let's have a meeting", then I'd say the process has failed.

Not sure there is a right answer here, but I would love to hear people's take on this.

Comments

Webstir•1mo ago
Here's a take: Consider a new job that doesn't involve (A)uto(I)nfantilization. You're destroying humanity.
wertnayi•1mo ago
Some options:

1. "Your PR is bad and you should feel bad"

2. Use AI to reject the PR

3. Does it fill an immediate business need? Then ship it. Most code was slop anyway, even before the AI era. Otherwise it's an unjustified, ongoing maintenance burden -- drop it.

Remember, all your competitors are also using AI slop. You're in good company...

dsernst•1mo ago
> which part of it reflects their genuine opinion and intellect (which I'd be hesitant to criticize) and what is AI-fluff I can rip apart without stepping on their toes?

Are you able to ask them this directly? They might appreciate it.

everybodyknows•1mo ago
Why not demand that code be accompanied by comments that describe the solution at a higher level of design abstraction? Perhaps also a justification of design choices, and a comparison with similar implementations in off-the-shelf libraries?
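A sketch of what such a required note might look like, written as a Python module docstring -- the module, the alternatives, and the trade-off below are invented purely for illustration and don't come from the thread:

# Hypothetical design-abstraction note a reviewer could require alongside
# a large PR; everything named here is made up for the example.
"""Rate limiting for the public API.

Approach: token bucket per API key, with bucket state kept in Redis so
that all web workers share one budget.

Alternatives considered:
- limits / slowapi (off-the-shelf libraries): rejected because we need
  per-customer overrides loaded from the billing service.
- In-process counters: rejected because we run multiple workers.

Trade-off: one extra Redis round trip per request.
"""

A note like this lets the reviewer argue about the approach before reading a single diff hunk.
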
gus_massa•1mo ago
10 Find one tiny error somewhere

20 Post something like "I found this error, but I have no time now. I'll review the rest later."

30 Wait until they fix the tiny error

40 GOTO 10

Edit: Similar idea from Joel Spolsky, from the good old days before AI. https://www.joelonsoftware.com/2001/12/25/getting-things-don...

zalah•1mo ago
If you're the author/core maintainer of the codebase, the norm is to ignore the PR entirely -- but thank them for the improvement idea/bug report.

If you're just one of several people with commit access, then it's probably time to invest in your regression testing setup.
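If the regression-testing route is the one to take, a minimal starting point is a set of characterization tests that pin down current behaviour before big PRs land. The sketch below assumes pytest and an invented billing.invoice_total function; neither appears in the thread:

# Characterization test: pin today's behaviour so that a large PR
# (AI-generated or not) has to justify any change it causes.
# billing.invoice_total and the expected values are hypothetical.
import pytest

from billing import invoice_total

@pytest.mark.parametrize(
    "items, expected",
    [
        ([], 0),
        ([{"price": 10, "qty": 2}], 20),
        ([{"price": 10, "qty": 2}, {"price": 5, "qty": 1}], 25),
    ],
)
def test_invoice_total_is_unchanged(items, expected):
    assert invoice_total(items) == expected

Run in CI on every PR, tests like these shift the argument from "I don't like this code" to "this change breaks behaviour nobody asked to change".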