AI Lazyslop and Personal Responsibility

https://danielsada.tech/blog/ai-lazyslop-and-personal-responsibility/
42•dshacker•1h ago

Comments

xyzsparetimexyz•1h ago
If you get a 1600 line PR you just close it and ask them to break it up into reviewable chunks. If your workplace has an issue with that, quit. This was true before AI and will be true after AI.
dshacker•1h ago
I mean, there are some exceptions where 1600-line PRs are acceptable (refactorings, etc.), but otherwise I agree.

What really bugs me is that today, it is easier than ever to do this (even the LLM can do this!) and people still don't do it.

tjr•1h ago
If you can have AI review the PR, does this still matter?
hxugufjfjf•1h ago
Or just have AI do it for you /s
emeraldd•1h ago
There are a number of cases where this is not really possible. For some classes of updates, the structure of the underlying application and the type of update being made require an "all or nothing" update in order to get a buildable result. I've run into this a lot with large Java applications where we had to jump several Spring versions just due to the scope of what was being updated. More incremental updates weren't an option for a number of time/architectural reasons, and refactoring the application structure (which really wouldn't have helped much either) would have been time- and cost-prohibitive... Really annoying, but sometimes you just don't have another option to actually accomplish your goals.
dog4hire•43m ago
Some people can write 1-3k lines of good code (incl. tests) in a day when everything is just right. We used to be called 10xers lol. The 1600-LOC PR is legit if trust is there, it's really a single unit of change, and it's not just being thrown over a wall (it should have a great PR description and a clear, concise commit history).

I automatically block PRs with LLM-generated summaries, commit messages, documentation, etc.

throwawaysleep•1h ago
> Then, I’d get a ping from his manager asking on why am I blocking the review.

If you are in a culture like this, you may as well just ship slop.

Management wants to break stuff, that is on them.

dshacker•1h ago
Right, I think there is always a balance between being strict on code reviews and just letting people ship stuff. I've also seen the other end of the stick, in which a senior employee blocks an important PR over "spacing".
AndrewDucker•13m ago
Linting should be automated, and if possible formatting should be too.

It really shouldn't be possible to have arguments in a PR over formatting.
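A minimal sketch of what that automation could look like, assuming GitHub Actions and `clang-format` (both the workflow and the tool choice are illustrative, not something from the thread):

```yaml
# Hypothetical CI job: fail the pull request when files aren't formatted,
# so "spacing" never becomes a review comment.
name: format-check
on: [pull_request]
jobs:
  format:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check formatting
        run: clang-format --dry-run --Werror $(git ls-files '*.cpp' '*.h')
```

With a check like this gating merges, reviewers never have to argue about formatting; the pipeline does it for them.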

dgxyz•1h ago
I paid my mortgage off by being the insurance policy when that happens.
shimman•1h ago
How does that work? I find it nearly impossible to be in these positions as an IC nowadays. Maybe it was easier in the 90s? I heard contracting was a way better gig back then too, until corpos got all high and mighty about it and put an end to the practice by favoring body shops instead.
apercu•1h ago
I mean, you'll still get blamed even if management pushes you to work in a manner that "breaks stuff".

There is very little accountability in the upper echelons these days (if there ever was) and less each day in our current "leadership" climate.

kibwen•1h ago
> Management wants to break stuff, that is on them.

This implies that managers will do both of the following in response to the aforementioned breakage:

1. Understand that their own managerial policies are the root cause.

2. Not use you as a scapegoat.

And yet, if you had managers that were mentally and emotionally capable enough to do both of the above, you wouldn't be in this position to begin with.

throwawaysleep•1h ago
That happens a lot, but rarely have I seen the reviewer get blamed. The guy who shipped it gets blamed.
dkarl•1h ago
I've received questions like this from very good, very reasonable, very technically careful managers. What happens is, Mike complains and tries to throw you under the bus, and the manager reaches out to hear your side of it. You tell them Mike is trying to ship code with a bunch of issues and no tests, and they go back to Mike and tell him that he's the problem and he needs to meet the technical standards enforced by the rest of the team.

Just because management asks doesn't mean they're siding with Mike.

sdoering•50m ago
I have been on both - actually on all three - sorry, make that four - sides.

1. I tried to ship crap and complained to my manager about being blocked. I was young, dumb, in a bad place, and generally an asshole.

2. I was the manager being told that some unreasonable idiot from X had blocked my report's progress. I was the unreasonable manager demanding that my people be unblocked. I had no context, a very bad prior relationship with the other party, and I was an asshole - because no prior bad-faith acts were actually behind the block; it was shitty code.

3. I was the manager being asked to help with unblocking. I asked to understand the issue and to actually try to find a way toward a solution based on the facts. My report had to refactor.

4. I was the one being asked. Luckily I had prior experience and this time managed not to become the asshole.

I am glad I had the environments to learn.

Edit: Format.

dragoman1993•1h ago
At the end there's a typo: "catched" should be "caught".

Otherwise, agree-ish. There should be business practices in place for responsible AI use to avoid coworkers having to suffer from bad usage.

dshacker•1h ago
Thanks! Fixed.
throwawaysleep•1h ago
> Then, I’d get a ping from his manager asking on why am I blocking the review.

The suffering is self-inflicted for this particular person. The organization doesn’t value code review.

NewsaHackO•1h ago
He had a typo in the one section where he didn't use AI to copy-edit! But really, copyediting with LLMs is a godsend. I used to struggle with grammar to the point that I had a Grammarly subscription. Now proofreading can even be done locally.
dkarl•1h ago
I have no idea what AI changes about this scenario. It's the same scenario as when Mike did this with 1600 lines of his own code ten years ago; it just happens more often, since Mike comes up with 1600 lines of code in a day instead of in a sprint.

> I don’t blame Mike, I blame the system that forced him to do this.

Bending over backwards not to be the meanie is pointless. You're trying to stop him because the system doesn't really reward this kind of behavior, and you'll do Mike a favor if you help him understand that.

dshacker•1h ago
I think people can be in hard conditions: needing a job, under pressure, burnt out, feeling like this is their only way to keep their job. At least that's how it felt with Mike.

In the end, I spent a lot of time sitting down with Mike to explain these kinds of things, but I wasn't effective.

Also, LLMs now empower Mike to make a 1600-line PR daily, leaving me to distinguish between "lazyslopped" PRs and actual PRs.

BugsJustFindMe•1h ago
> I have no idea what AI changes

> Mike comes up with 1600 lines of code in a day instead of in a sprint

It seems like you do have an idea of at least one thing that AI changes.

dkarl•21m ago
The more often it happens, the more practice you get at delivering the bad news, and the quicker Mike learns to live up to the team's technical standards?
miltonlost•1h ago
> I have no idea what AI changes about this scenario. It's the same scenario as when Mike did this with 1600 lines of his own code ten years ago; it just happens more often, since Mike comes up with 1600 lines of code in a day instead of in a sprint.

So now instead of reviewing 1600 lines of bad code every two weeks, you must review 1600 lines of bad code every day (while being told that 1600 lines of bad code every day is an improvement because of just how much more bad code he's "efficiently" producing!). Scale and volume are the change.

zahlman•1h ago
> it just happens more often

Which is extremely relevant, as it dramatically increases the probability that other people will have to care about it.

Aurornis•50m ago
> Bending over backwards not to be the meanie is pointless.

This thinking that we must avoid blaming individuals for their own actions and instead divert all blame to an abstract system is getting a little out of control. The blameless post-mortem culture was a welcome change from toxic companies who were scapegoating hapless engineers for every little event, but it's starting to seem like the pendulum has swung too far the other way. Now I keep running into situations where one person's personal, intentional choices are clearly at the root of a situation but everyone is doing logical backflips to try to blame a "system" instead of acknowledging the obvious.

This can get really toxic when teams start doing the whole blameless dance during every conversation, but managers are silently moving to PIP or lay off the person who everyone knows is to blame for repeated problems. In my opinion, it's better to come out and be honest about what's happening than to do the blameless performance in public for feel-good points.

dmmartins•1h ago
> What was your thought process using AI?
>
> Share your prompts! Share your process! It helps me understand your rationale.

Why? Does it matter? Do you ask the same questions of people who don't use AI? I don't like using AI for code, because I don't like the code it generates and having to go over it again and again until I like it, but I don't care how other people write code. I review the code that's in the PR, and if there's something I don't understand or agree with, I comment on the PR.

Other than the 1600-line PR being hard to review, it feels like the author just wants to be in the way and control everything other people are doing.

OptionOfT•1h ago
Because when your code is handwritten, it's supposed to be a translation of you parsing business requirements into code.

Using AI adds a non-deterministic layer in between, and a lot of code is now there that you probably didn't need.

The prompt is helpful to figure out what is needed and what isn't.

lo_zamoyski•1h ago
The correct thing to do is to annotate the code and the PR with comments. You shouldn't be submitting code you don't understand in the first place. These comments will contain the reasoning in the prompts. Giving me a list of prompts would just be annoying and messy, not informative.

Also, we should not be submitting huge PRs in general. It is difficult to be thorough in such cases. Changes will be less well understood and more bugs will sneak their way into the code base.

OptionOfT•18m ago
The AI velocity comes from large PRs that no one reviews.
hombre_fatal•1h ago
The prompt is the ground truth that reveals the assumptions and understandings of the person who generated the code.

It makes a lot more sense to review and workshop that into a better prompt than to refactor the derived code when there are foundational problems with the prompt.

Also, we do do this for human-generated code. It's just a far more tedious process of detective work, since you often have to go in the opposite direction and derive someone's understanding from the code. Especially for low-effort PRs.

Ideally every PR would come with an intro that sells the PR and explains the high level approach. That way you can review the code with someone's objectives in mind, and you know when deviations from the objective are incidental bugs rather than misunderstandings.

roxolotl•1h ago
Yes, of course you should ask the same thing of other, non-AI PRs. Figuring out the why and the thought process behind behavior is one of the most important parts of communication, especially when you don’t know people that well.
madeofpalk•1h ago
> why? does it matter? do you ask the same questions for people that don't use AI?

…yes? If someone dumps a PR on me without any rationale I definitely want to understand their thought process about how they landed on this solution!

colinmilhaupt•1h ago
Love to see the responsible use disclosure. I did the same several months back. https://colinmilhaupt.com/posts/responsible-llm-use/

Also love the points during review! Transparency is key to understanding critical thinking when integrating LLM-assisted coding tools.

epolanski•1h ago
Pointless blog post about made-up situations that never happened.

1. Companies that push and value slop velocity do not have all these bureaucratic merge policies. They change or relax them, and a manager would just accept it without needing to ping the author.

2. If the author were on the high paladin horse of valuing the craft, he would not be working in such a place. Or he would be half-assing slop too while concentrating on writing proper code for his own projects, like most of us do when we end up in BS jobs.

throwawaysleep•1h ago
Things like SOC 2 effectively require merge control. That doesn't mean the organization really values it, but for compliance purposes the approval process needs to be there, and it is imposed from on high.
noitpmeder•53m ago
I think you're being overly pessimistic; this likely exists in some form at nearly every mid-to-large software company.

It doesn't take a company policy for an AI-enabled engineer to start absolutely spewing slop. But it's instantly felt by whatever process exists downstream.

I think there's still a significant number of engineers who value the output of AI but at the same time put in the effort to avoid situations like the one the author describes: reviewing the code, writing/generating appropriate tests (and reviewing those too). The secret is that those are the good ones. These are the ones you SHOULD promote, laud, and set as examples. The rest should be made examples of and held accountable.

I'd hope my usage of AI is along these lines. I'm sure I'm failing at some of the points, and I'm always trying to improve.

fnoef•1h ago
While I agree with the sentiment of the post, I’ve also come to the conclusion that it’s not worth fighting the system. If you can’t quit your job, then just do what everyone else is doing: use AI to write and review code, and make sure everyone is happy (especially management).
krzysz00•1h ago
This does seem to align decently well with, for example, the policy the LLVM project recently adopted https://llvm.org/docs/AIToolPolicy.html , which allows AI but requires a human in the loop who understands the code, and allows for fast closure of "extractive" PRs that are mainly a timesink for reviewers because the author doesn't seem to be quite sure what's going on.
yesitcan•1h ago
> why do I need tests? It works already

> I don't blame Mike

You should blame Mike.

babblingfish•1h ago
This is consistent with my own observations of LLM-generated code increasing the burden on reviewers. You either review the code carefully, putting more effort into it than the original author did, or you approve it without careful review. I feel like the latter is becoming more common. This basically creates tech debt that will only be realized later by future maintainers.
mghackerlady•55m ago
or, if you know it was written by an LLM, reject it
bunderbunder•37m ago
It’s a prisoner’s dilemma, too. The person who commits to giving code review its due diligence is going to end up spending an inordinate amount of time reviewing others’ changes, leaving less time for completing their own assignments. And they’re likely to request a lot of changes, too. That’s socially untenable for most people, especially ones who clearly aren’t completing as many story points as their teammates. Next thing you know, your manager is giving you less-than-stellar performance reviews, and the AI slopcoders on your team are getting the promotions and being put into position to influence how team norms and culture evolve over time.

The worst part is, this isn’t me speculatively catastrophizing. I’m just observing how my own organization’s culture has changed over the past couple of years.

It’s hitting the less senior team members hardest, too. They are generally less skilled at reading code and therefore less able to keep up with the rapid growth in code volume. They are also more likely to get assigned the (ever growing volume of) defect tickets so the more senior members can keep on vibecoding their way to glory.

solomonb•1h ago
> After I “Requested changes” he’d get frustrated that I’d do that, and put all his changes in an already approved PR and sneak merge it in another PR.

This is outrageous regardless of AI. Clearly, process and technical barriers failed for this to even be possible. How does one commit a huge chunk of new code to an approved PR without triggering a re-review?

But more importantly, in what world does a human think it is okay to be sneaky like this? Being able to communicate and trust one another is essential to the large scale collaboration we participate in as professional engineers. Violating that trust erodes all ability to maintain an effective team.

wasmainiac•56m ago
Yeah, this never happened. This just sounds like an "and then everyone clapped" moment, made up for a blog post. Most people on this planet are reasonable if not pushed.
dshacker•51m ago
I'm not sure how to prove otherwise, but this actually happened to me. I don't understand these kinds of comments crying "FAKE" for views or blog-posting. This is something that happened to me, and I can say for sure people in this situation were really pushed to ship faster every time.
doesnt_know•36m ago
I envy the type of career you’ve had if you find this sort of behaviour unbelievable.
badsectoracula•36m ago
I actually had someone like "Mike" in my most recent job (though he wasn't uncooperative, he just didn't seem to care about writing proper code). He made some tool using AI and I took it over to clean it up and improve it, but he still worked on it too. He got occasionally annoyed when I suggested changes, and sometimes I felt I was talking to ChatGPT (or whatever AI he used, I don't know) via a middleman. He didn't submit any 1600-line PRs (that I remember, anyway), but he did add extra stuff to his PRs, often related to other tasks, and the code submitted was often much larger in "volume" than needed.

As an example, there was a case where some buttons needed a special highlight based on some flag, something that could be done in 4-5 lines of code or so (this was in Unreal Engine, where the UI is drawn each frame). Instead, the PR added a value at button creation indicating whether the button needed to be highlighted, and this value was passed around all over the place, up to the point where the data used to create the UI with the buttons was assembled. And since the UI could change that flag, the code also recreated the UI whenever the flag changed. And because the UI had state such as scrolling and selected items, whenever the UI was recreated it saved the current state and restored it afterwards (together with adding storage for that state). Ultimately it worked, but it was far more code than the task needed.

The kicker was that the modifications to pass around the flag's state weren't even necessary (even ignoring the fact that the flag could have been checked directly during drawing), because a struct with configuration settings was already passed through the same paths and the value could have been added to that struct. Not that it would have saved the need to save/restore the UI state, though.

liuliu•54m ago
Collaborative software development is a high-trust activity. It simply doesn't work in a low-trust environment. This is not an issue with code review; it is an issue with maintaining a trusting environment for collaboration.
dshacker•53m ago
It really demotivated me when this happened. I just kept seeing the PR open, but then I saw the changes applied before the PR was merged, which left me very confused. I then had an alert placed on every one of Mike's updates to make sure he didn't do it again. People were against "reset reviewers on commit" for "agility".
ljm•51m ago
TFA wouldn’t blame ‘Mike’, but I definitely would. And ‘Mike’s’ boss.

That’s not just a process error. At some point you just have to feed back to the right person that someone isn’t up to the task.

ghm2199•58m ago
If you work at a company where some kind of testing is optional to get your PR merged, run in the opposite direction. Tests show that your engineer _thought_ things through. They communicate the intended use and, when well written, are often as clarifying as documentation. I would even be willing to accept integration/manual tests when writing unit tests isn't possible.
serial_dev•57m ago
> put all his changes in an already approved PR and sneak merge it in another PR. I don’t blame Mike, I blame the system that forced him to do this.

Oh, you should definitely blame Mike for this. It’s like blaming the system when someone in the kitchen spits in a customer’s food. Working with people like this is horrible because you know they don’t mind lying, cheating, and deceiving.

Ozzie_osman•55m ago
I call it L-ai-ziness and I try to reduce it on my team.

If it has your name on it, you're accountable for it to your peers. If it has our name on it as a company, we're accountable for it to our users. AI doesn't change that.

dog4hire•49m ago
hiring?
firasd•54m ago
Unfortunately, the list of AI edits this person declares at the bottom of their post is self-refuting.

If you use AI as a Heads-up Display you can't make a giant scroll of every text change you accepted.

mrkeen•50m ago
> Mike sent me a 1600 line pull-request with no tests, entirely written by AI, and expected me to approve it immediately as to not to block him on his deployment schedule.

Both Mike and the manager are cargo-culting the PR process too. Code review is what you do when you believe it's worth losing velocity in order for code to pass through the bottleneck of two human brains instead of one.

LLMs are about gaining velocity by passing less code through human brains.

serial_dev•43m ago
Lazyslop PRs offload the work to code reviewers while keeping all the benefits for the PR creator.

Now creating a 1600-LOC PR takes about ten minutes; reviewing it takes at least an hour. Mike submits a bunch of PRs, and the rest of the team tries to review them to prevent the slop from causing an outage at night or blowing up the app. Mike is a hero: he really embraced AI and leveraged it for 100x productivity.

This works for a while, until everyone realizes that Mike gets the praise while they get reprimanded for not shipping their features fast enough. After a couple of these sour experiences, other developers follow suit and embrace the slop. Now there is nobody to stop the train wreck. The ones who really cared left; the ones who cared at least a little gave up and churn out slop.
