Last week he was telling me about a PR he'd received. It should have been a simple additional CRUD endpoint, but instead it was a 2,000+ LOC rat's nest of hooks that manually manipulated their cache system to make it appear to work without actually working.
He spent most of his day explaining why this shouldn't be merged.
More and more I think Brandolini's law applies directly to AI-generated code:
> The amount of [mental] energy needed to refute ~bullshit~ [AI slop] is an order of magnitude bigger than that needed to produce it.
"Explain to me in detail exactly how and why this works, or I'm not merging."
This should suffice as a response to any code the developer did not actively think about before submitting, AI generated or not.
> AI-generated slop his non-technical boss is generating
It’s his boss. The type of boss who happily generates AI slop is likely to be the type of person who wants things done their way. The employee doesn’t have the power to block the merge if the boss wants it, thus the conversation on why it shouldn’t be merged needs to be considerably longer (or they need to quit).
He wants to build a website that will turn him into a bazillionaire.
He asks AI how to solve problem X.
AI provides direction, but he doesn't quite know how to ask the right questions.
Still, the AI manages to give him a 70% solution.
He will go to his grave before he learns enough programming to do the remaining 30% himself, or to understand the first 70%.
Delegating to AI isn't the same as delegating to a human. If you mistrust the human, you can find another one. If you mistrust the AI, there aren't many others to turn to, and each comes with an uncomfortable learning curve.
Once GPS became ubiquitous, I started relying on it, and over about a decade, my navigational skills degraded to the point of embarrassment. I've lived in the same major city now for 5 years and I still need a GPS to go everywhere.
This is happening to many people now, where LLMs are replacing our thinking. My dad thinks he is writing his own memoirs. Yeah pop, weird how you and everyone else just started using the "X isn't Y, it's Z" trope liberally in your writing out of nowhere.
It's definitely scary. And it's definitely sinister. I maintain that this is intentional, and the system is working the way they want it to.
https://www.joelonsoftware.com/2000/04/06/things-you-should-... (read the bold text in the middle of the article)
These articles are 25 years old.
1. Create a branch and vibe code a solution until it works (I'm using codex cli)
2. Open a new PR and slowly write the real PR myself, using the vibe code as a reference but cross-referencing against the existing code.
This involved a fair few concepts that were new to me, but they had precedent in the existing code. Overall I think my solution was delivered faster and was of at least the same quality as if I'd written it all by hand.
I think it's disrespectful to submit a PR for a solution you don't understand yourself. But this process feels similar to my previous non-AI-assisted approach, where I would often code spaghetti until the feature worked, and then start again and do it 'properly' once I knew the rough shape of the solution.
I see this in code reviews, where AI tools like CodeRabbit and Greptile are producing workslop in enormous quantities. It sucks up an enormous amount of human energy just reading the nicely formatted BS put out by these tools, all for the occasional nugget that turns out to be useful.
I like the quote in the middle of the article: "creating a mentally lazy, slow-thinking society that will become wholly dependant [sic] upon outside forces". I believe that orgs that fall back on the AI lie, who insist on schlepping slop from one side to the other, will be devoured by orgs that see through the noise.
It's like code: the most bug-free lines are the ones that were never written. The most productive workplace is the one that never bothers with that BS in the first place. But promotions and titles and egos are on the line, so...
AI in its current form, like the swirling vortex of corporate bilge that people are forced to swim through day after day after day, can't die fast enough.
There's also the problem where someone has bullet points, fluffs them up with an LLM, sends the prose, and then the receiver uses an LLM to summarize it back down to bullet points.
I may be over-optimistic in predicting that eventually everyone involved will rage-flip the metaphorical table, and start demanding/sending the short version all the time, since there's no longer anything to be gained by prettying it up.
Our manager is so happy to report that he's using AI for everything, even in cases where I think completeness and correctness are important. I honestly find it scary how quickly that desire for correctness is gone, replaced with "haha, this is cool tech".
We devs are much more reluctant. We don't want to fall behind, but in the end, when it comes to correctness and accountability, we're the ones responsible. So I won't brainlessly dump my work into an LLM and take its output at face value.
I love programming, but I also love building things. When I imagine what an army of mid-level engineers that genuinely only need high-level instruction to reliably complete tasks, and that don't require raising hundreds of millions or becoming beholden to some third party, would let me build... I get very excited.
It's kind of a mirror image of the global AI marketing hype-factory: Always pump/promote the ways it works well, and ignore/downplay when it works poorly.
Everything sounded very mandatory, but a couple of months later nobody was asking about reports anymore.
Identify a real issue with the technology, then shift the blame to a made-up group of people who (supposedly) aren't trying hard enough to embrace the technology.
> Embody a pilot mindset, with high agency and optimism
Thanks for the career advice.
Reviewing poor content quickly is most often based on reviewing form: a spelling error or poor formatting gives clues to inferior work. It is very difficult to review perfectly formatted BS with well-placed subject terms and language.
Now combine this problem with the Peter principle, and you will see smart companies soon banning the use of LLMs for specific tasks such as internal questionnaires, internal communication, performance reviews, and policy documents.
AI is functionally equivalent to disinformation: it automates the dark matter of communication and language, transfers the status burden back to the recipient, teaches receivers that a unit's contents are no longer valid in general, and demands a tapeworm format to replace what it is being trained on.