
Font Memories of Old Macs

https://leancrew.com/all-this/2025/05/font-memories-of-old-macs/
1•ingve•31s ago•0 comments

Nabla: Differentiable Programming in Mojo

https://github.com/nabla-ml/nabla
1•melodyogonna•3m ago•0 comments

Microsoft Advertising is closing the Xandr DSP, layoffs pending

https://digiday.com/media-buying/microsoft-advertising-is-closing-the-xandr-dsp/
1•LunaSea•3m ago•0 comments

Waymo recalls more than 1,200 automated vehicles after minor crashes

https://www.latimes.com/business/story/2025-05-14/waymo-recalls-more-than-1-200-automated-vehicles-after-minor-crashes
2•jaredwiener•6m ago•1 comment

VW and Rivian team up on $22.5K EV with software stack–affordable, advanced

https://www.businessinsider.com/rivians-software-chief-affordable-evs-dont-need-be-low-tech-2025-5
1•bit_qntum•8m ago•1 comment

Gradients Are the New Intervals

https://www.mattkeeter.com/blog/2025-05-14-gradients/
1•mkeeter•13m ago•0 comments

UK ministers block AI transparency amendment demanding copyright disclosures

https://www.theguardian.com/technology/2025/may/14/uk-ministers-to-block-amendment-requiring-ai-firms-to-declare-use-of-copyrighted-content
1•byte-bolter•13m ago•0 comments

Minister accused of being too close to big tech after rise in meetings

https://www.theguardian.com/politics/2025/may/14/minister-accused-of-being-too-close-to-big-tech-after-analysis-of-meetings
1•pera•14m ago•0 comments

Show HN: Pomni.ai – Train your image set for a consistent style

https://pomni.ai/
1•kihihosting•16m ago•0 comments

Building a multi-source ingestion pipeline with Ray

https://thehyperplane.substack.com/p/multi-source-ingestion-pipelines
1•andreeamiclaus•17m ago•0 comments

Nuclear blasts, preserved on film [video]

https://www.youtube.com/watch?v=ftCcMjXPpII
1•simonebrunozzi•18m ago•0 comments

Rogue communication devices found in Chinese solar power inverters

https://www.msn.com/en-us/news/world/ar-AA1EMfHP
1•elsewhen•19m ago•1 comment

The first year of free-threaded Python – Labs

https://labs.quansight.org/blog/free-threaded-one-year-recap
2•rbanffy•19m ago•0 comments

Hasui Kawase

https://en.wikipedia.org/wiki/Hasui_Kawase
1•handfuloflight•20m ago•0 comments

Software engineer lost his $150K-a-year job to AI - forced to DoorDash

https://www.yahoo.com/news/software-engineer-lost-150k-job-090000839.html
3•thenaturalist•21m ago•2 comments

10k Drum Machines

https://10kdrummachines.com/
2•mrzool•25m ago•0 comments

Butler: All of the AI tools you need, in one place

https://www.butler.ai/
2•bereketsemagn•26m ago•0 comments

A History Lesson – The Story of Railroad Tracks [pdf]

https://www.aghost.net/images/e0186601/ahistorylessonofrailroadtracks.pdf
1•jruohonen•26m ago•0 comments

Computational Chemistry Unlocked: Large Dataset to Train AI Models Has Launched

https://newscenter.lbl.gov/2025/05/14/computational-chemistry-unlocked-a-record-breaking-dataset-to-train-ai-models-has-launched/
1•gnabgib•27m ago•0 comments

Environment: Making Rivers Run Backward (1982)

https://time.com/archive/6883794/environment-making-rivers-run-backward/
1•jruohonen•31m ago•1 comment

Democratizing AI: The Psyche Network Architecture

https://nousresearch.com/nous-psyche/
2•namenumber•33m ago•0 comments

Smallweb – a self-editable website with an embedded VSCode UI

https://www.demo.smallweb.live/
1•madacol•33m ago•1 comment

Spika: An energy-efficient time-domain hybrid CMOS-RRAM compute-in-memory macro

https://www.frontiersin.org/articles/10.3389/felec.2025.1567562
1•PaulHoule•34m ago•0 comments

Proximity to Golf Courses and Risk of Parkinson Disease

https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2833716
3•airstrike•36m ago•0 comments

Innovative Insurance Products to Introduce in 2025

https://openkoda.com/innovative-insurance-products/
1•mgl•37m ago•0 comments

An itch.io game became a million-dollar hit

https://howtomarketagame.com/2025/04/28/how-an-itch-io-game-became-a-million-dollar-hit-the-roottrees-are-dead/
1•mgl•38m ago•0 comments

Musk's Grok brings up South African white genocide claims to unrelated questions

https://www.nbcnews.com/tech/tech-news/elon-musks-ai-chatbot-grok-brings-south-african-white-genocide-claims-rcna206838
9•ceejayoz•38m ago•1 comment

NOAA scrambles to fill forecasting jobs as hurricane season looms

https://www.washingtonpost.com/weather/2025/05/14/national-weather-service-vacancies-hurricane-season/
3•howard941•38m ago•2 comments

Pyodide is a Python distribution for the browser and Node based on WebAssembly.

https://pyodide.org/en/stable/index.html
2•silverret•40m ago•0 comments

AI therapy is a surveillance machine in a police state

https://www.theverge.com/policy/665685/ai-therapy-meta-chatbot-surveillance-risks-trump
5•laurex•40m ago•0 comments

Perverse incentives of vibe coding

https://fredbenenson.medium.com/the-perverse-incentives-of-vibe-coding-23efbaf75aee
67•laurex•1h ago

Comments

comex•1h ago
> There was no standardization of parts in the probe. Two widgets intended to do almost the same job could be subtly different or wildly different. Braces and mountings seemed hand carved. The probe was as much a sculpture as a machine.

> Blaine read that, shook his head, and called Sally. Presently she joined him in his cabin.

> "Yes, I wrote that," she said. "It seems to be true. Every nut and bolt in that probe was designed separately. It's less surprising if you think of the probe as having a religious purpose. But that's not all. You know how redundancy works?"

> "In machines? Two gilkickies to do one job. In case one fails."

> "Well, it seems that the Moties work it both ways."

> "Moties?"

> She shrugged. "We had to call them something. The Mote engineers made two widgets do one job, all right, but the second widget does two other jobs, and some of the supports are also bimetallic thermostats and thermoelectric generators all in one. Rod, I barely understand the words. Modules: human engineers work in modules, don't they?"

> "For a complicated job, of course they do."

> "The Moties don't. It's all one piece, everything working on everything else. Rod, there's a fair chance the Moties are brighter than we are."

- The Mote in God's Eye, Larry Niven and Jerry Pournelle (1974)

[…too bad that today's LLMs are not brighter than we are, at least when it comes to writing correct code…]

mnky9800n•1h ago
That book is very much fun, though I never understood why Larry Niven is so obsessed with techno-feudalism and gender roles. I think this is my favourite of his books, but his best is maybe Ringworld.
Loughla•39m ago
Ringworld is a great book. The later books have great concepts, but could do without so much. . . rishing. Niven plainly inserted his furry porn fetish into those books, for reasons unclear to any human alive.
jerf•1h ago
Yeah, I've had that thought too.

I think a lot about Motie engineering versus human engineering. Could Motie engineering be practical? Is human engineering a fundamentally good idea, or is it just a reflection of our working memory of 7 +/- 2? Biology is Motie-esque, but it's pretty obvious we are nowhere near a technology level that could ever bring a biological system up from scratch.

If Motie engineering is a good idea, it's not a smooth gradient. The Motie-est code I've seen is also the worst. It is definitely not the case that getting a bit more Motie-esque, all else being equal, produces better results. Is there some crossover point where it gets better and maybe passes our modular designs? If AIs do get better than us at coding, and it turns out they do settle on Motie-esque coding, no human will ever be able to penetrate it ever again. We'd have to instruct our AI coders to deliberately cripple themselves to stay comprehensible, and that is... economically a tricky proposition.

After all, anyone can write anything into a novel they want to and make anything work. It's why I've generally stopped reading fiction that is explicitly meant to make ideological or political points to the exclusion of all else; anything can work on a page. Does Motie engineering correspond to anything that could be manifested practically in reality?

Will the AIs be better at modularization than any human? Will they actually manifest the Great OO Promise of vast piles of amazingly well-crafted, re-usable code once they mature? Or will the optimal solution turn out to be bespoke, locally-optimized versions of everything everywhere, and the solution to combining two systems is to do whatever locally-sensible customizations are called for?

(I speak of the final, mature version, however long that may be. Today LLMs are kind of the worst of both worlds. That turns out to be a big step up from "couldn't play in this space at all", so I'm not trying to fashionably slag on AIs here. I'm more saying that the one point we have is not yet enough to draw so much as a line through, let alone an entire multi-dimensional design methodology utility landscape.)

I didn't expect to live to see the answers, but maybe I will.

fwip•2m ago
For me, "Motie engineering" always brings to mind "The Story of Mel." http://www.catb.org/jargon/html/story-of-mel.html
bradly•1h ago
> it might be difficult for AI companies to prioritize code conciseness when their revenue depends on token count.

Would open-source, local models keep pressure on AI companies to prioritize usable code, since code quality and engineering time saved are critical to build-vs-buy discussions?

jsheard•1h ago
Depends on whether open-source models can remain relevant once the status quo of "company burns a bunch of VC money to train a model, open sources it, and generates little if any revenue" runs out of steam. That's obviously not sustainable long term.
Larrikin•1h ago
Maybe we will get some university-backed, SETI-like projects to replace all those personal mining rigs now that that hype is finally fading.
Workaccount2•1h ago
Is using the APIs worth the extra cost vs. using the web tools? I haven't used any API tools (I am not a programmer), but I have generated many millions of tokens in the web canvas, something that would cost way more than the $20 I spend for them.
jfim•50m ago
If you're using Claude Code or Cursor, for example, they can read files automatically instead of needing the user to copy and paste back and forth.

Both can generate code, though. I've generated code using the web interface and it works; it's just a bit tedious to copy back and forth.

tippytippytango•1h ago
This article captures a lot of the problem. It's often frustrating how the model tries to work around really simple issues with complex workarounds that don't work at all. I tell it the secret simple thing it's missing and it gets it. It always makes me think: god help the vibe coders who can't read code. I actually feel bad for them.
r053bud•1h ago
I fear that’s going to end up being a significant portion of engineers in the future.
babyent•1h ago
I think we are in the Flash era again lol.

You remember those days right? All those Flash sites.

iotku•1h ago
There's a pretty big gap between "make it work" and "make it good".

I've found with LLMs I can usually convince them to get me at least something that mostly works, but each step compounds with excessive amounts of extra code, extraneous comments ("This loop goes through each..."), and redundant functions.

In the short term it feels good to achieve something 'quickly', but there's a lot of debt associated with running a random number generator on your codebase.

didgetmaster•24m ago
In my opinion, the difference between good code and code that simply works (sometimes barely) is that good code will still work (or error out gracefully) when the state and the inputs are unexpected.

Good programs are written by people who anticipate what might go wrong. If the document says 'don't do X', they know a tester is likely to try X, because a user will eventually do it.

grufkork•42m ago
Working as an instructor for a project course for first-year university students, I have run into this a couple of times. The code required for the project is pretty simple, but there are a couple of subtle details that can go wrong. I had one group today with bit shifts and other "advanced" operators everywhere, but the code was not working as expected. I asked them to just `Serial.println()` the intermediate values so they could check what was going on, and they were stumped. LLMs are already great tools, but if you don't know basic troubleshooting/debugging you're in for a bad time when the brick wall arrives.
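
The kind of one-line sanity check I mean, transposed from Arduino's Serial.println into Python for illustration (the values are invented):

    # Print the intermediate values instead of guessing what the shifts do.
    status = 0b1011_0010
    mode = (status >> 4) & 0x0F   # expect the high nibble: 0b1011
    flags = status & 0x0F         # expect the low nibble:  0b0010
    print(f"status={status:08b} mode={mode:04b} flags={flags:04b}")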

On the other hand, it shows how much coding is just repetition. You don't need to be a good coder to perform serviceable work, but you won't create anything new and amazing either if you don't learn to think and reason; for some purposes that might be fine. (Worrying for the ability of the general population, however.)

You could ask whether these students would have gotten anything done without generated code. Probably; it's just a momentarily easier alternative to actual understanding. They did, however, realise the problem and decided by themselves to write their own code in a simpler, more repetitive and "stupid" style, but one that they could reason about. So hopefully a good lesson, and all well in the end!

martin-t•40m ago
> I tell it the secret simple thing it’s missing and it gets it.

Anthropomorphizing LLMs is not helpful. It doesn't get anything; you just gave it new tokens, ones which are more closely correlated with the correct answer. It also generates responses similar to what a human would say in the same situation.

Note: I first wrote "it also mimics what a human would say", then I realized I was anthropomorphizing a statistical algorithm and had to correct myself. It's hard sometimes, but language shapes how we think (which is ironically why LLMs are a thing at all), and using terms which better describe how it really works is important.

ben_w•27m ago
Given that LLMs are trained on humans, who don't respond well to being dehumanised, I expect anthropomorphising them to be better than the opposite of that.

https://www.microsoft.com/en-us/worklab/why-using-a-polite-t...

sigmaisaletter•1h ago
In section 4, the author writes "... cheaper than Claude 3.7 ($0.80 per token vs. $3)".

This is an obvious mistake: the price is per megatoken (per million tokens), not per token.

Source: https://www.anthropic.com/pricing
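
The arithmetic, for anyone skimming (prices as quoted above; a minimal sketch, with the $0.80 model left as a placeholder since the article doesn't make clear which model it is):

    # Published prices are USD per million input tokens, not per token.
    PRICE_PER_MTOK = {"claude-3.7-sonnet": 3.00, "cheaper-model": 0.80}

    def cost_usd(model: str, tokens: int) -> float:
        return PRICE_PER_MTOK[model] * tokens / 1_000_000

    # A 10,000-token prompt to Claude 3.7 costs $0.03, not $30,000:
    print(f"${cost_usd('claude-3.7-sonnet', 10_000):.2f}")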

vanschelven•1h ago
> Its “almost there” quality — the feeling we’re just one prompt away from the perfect solution — is what makes it so addicting. Vibe coding operates on the principle of variable-ratio reinforcement, a powerful form of operant conditioning where rewards come unpredictably. Unlike fixed rewards, this intermittent success pattern (“the code works! it’s brilliant! it just broke! wtf!”), triggers stronger dopamine responses in our brain’s reward pathways, similar to gambling behaviors.

Though I'm not a "vibe coder" myself I very much recognize this as part of the "appeal" of GenAI tools more generally. Trying to get Image Generators to do what I want has a very "gambling-like" quality to it.

dingnuts•1h ago
It's not like gambling, it is gambling. You exchange dollars for chips (tokens; some casinos even call the chips tokens) and insert them into the machine in exchange for the chance of a prize.

if it doesn't work the first time you pull the lever, it might the second time, and it might not. Either way, the house wins.

It should be regulated as gambling, because it is. There's no metaphor, the only difference from a slot machine is that AI will never output cash directly, only the possibility of an output that could make money. So if you're lucky with your first gamble, it'll give you a second one to try.

Gambling all the way down.

princealiiiii•57m ago
> It should be regulated as gambling, because it is.

That's wild. Anything with non-deterministic output will have this.

kagevf•50m ago
> "Anything with non-deterministic output will have this.

Anything with non-deterministic output that charges money ...

Edit: Added words to clarify what I meant.

GuinansEyebrows•46m ago
I think a lot of things (if not most things) that I pay for have an agreed-upon result in exchange for payment, and a mitigation system that'll help me get what I paid for in the event that something else prevents that from happening. If you pay for something and you don't know what you're going to get, and you have to keep paying in the hopes that you eventually get what you want... that sounds a lot like gambling. Not exactly, but close.
0cf8612b2e1e•10m ago
If I ask an artist to draw a picture, I still have to pay for the service, even if I am unhappy with the result.
GuinansEyebrows•48m ago
Maybe more accurately: anything with non-deterministic output that you pay per use for, instead of paying by outcome.
martin-t•47m ago
That's incorrect; gambling is about waiting.

Brain scans have revealed that waiting for a potential win stimulates the same areas as the win itself. That's the "appeal" of gambling. Your brain literally feels like it's winning while waiting because it _might_ win.

squeaky-clean•39m ago
So how exactly does that work for the $25/mo flat fee that I pay OpenAI for ChatGPT? They want me to keep getting the wrong output and burning money on their backend without any additional payment from me?
dwringer•36m ago
Something of an aside, but this is sort of equivalent to asking "how does that work for the $50 the casino gave me to gamble with for free?" I once made $50 exactly that way, by taking the casino's free tokens and putting them all on black in a single roulette spin. People like that are not the ones companies like that make money off of.
kimixa•11m ago
For the amount of money OpenAI burns that $25/mo is functionally the same as zero - they're still in the "first one is free" phase.

Though you could say the same thing about pretty much any VC funded sector in the "Growth" phase. And I probably will.

mystified5016•38m ago
I run genAI models on my own hardware for free. How does that fit into your argument?
codr7•34m ago
The fact that you can get your drugs for free doesn't exactly make you less of an addict.
squeaky-clean•27m ago
It does literally make it not gambling, though, which is what's being discussed.

It also kind of breaks the whole argument that they're designed to be addictive in order to make you spend more on tokens.

latentsea•27m ago
I used to run GenAI image generators on my own hardware, and I 200% agree with your stance. Literally wound up selling my RTX 4090 to get the dealer to move out of the house. I'm better off now, but can't ever really own a GPU again without opening myself back up to that. Sigh...
NathanKP•33m ago
This only makes sense if you have an all or nothing concept of the value of output from AI.

Every prompt and answer contributes value toward the final solution, even if that value is just narrowing the latent space of potential outputs by keeping track of failed paths in the context window, so that the model can avoid those paths in a future answer after you provide follow-up feedback.

The vast majority of slot machine pulls produce no value to the player. Every single prompt into an LLM tool produces some form of value. I have never once had an entirely wasted prompt unless you count the AI service literally crashing and returning a "Service Unavailable" type error.

One of the stupidest takes about AI is that a partial hallucination or a single bug destroys the value of the tool. If a response is 90% of the way there and I have to fix the 10% of it that doesn't meet my expectations, then I still got 90% value from that answer.

NegativeLatency•21m ago
> Every prompt and answer is contributing value toward your progress toward the final solution

This has not been my experience, maybe sometimes, but certainly not always.

As an example: asking chatgpt/gemini about how to accomplish some sql data transformation set me back in finding the right answer because the answer it did give me was so plausible but also super duper not correct in the end. Would've been better off not using it in that case.

Brings to mind "You can't build a ladder to the moon"

secabeen•13m ago
> One of the stupidest takes about AI is that a partial hallucination or a single bug destroys the value of the tool. If a response is 90% of the way there and I have to fix the 10% of it that doesn't meet my expectations, then I still got 90% value from that answer.

That assumes that the value of a solution is linear with the amount completed. If the Pareto Principle holds (80% of effects come from 20% of causes), then not getting that critical 10+% likely has an outsized effect on the value of the solution. If I have to do the 20% of the work that's hard and important after taking what the LLM did for the remainder, I haven't gained as much because I still have to build the state machine in my head to understand the problem-space well enough to do that coding.

rapind•59s ago
By this logic:

- I buy stock that doesn't perform how I expected.
- I hire someone to produce art.
- I pay a lawyer to represent me in court.
- I pay a registration fee to play a sport expecting to win.
- I buy a gift for someone expecting friendship.

Are all gambas.

yewW0tm8•43m ago
Same with anything though? Startups, marriages, kids.

All those laid off coders gambled on a career that didn’t pan out.

Want more certainty in life, gonna have to get political.

And even then there is no guarantee the future gives a crap. Society may well collapse in 30 years, or 100…

This is all just role play to satisfy the prior generations' story-driven illusions.

gitroom•1h ago
Man, pricing everywhere is getting nuts. Makes me wonder if most stuff just gets harder to use over time, or I'm just old now. You ever hit a point where you stop caring about new tools because it feels like too much work?
biker142541•58m ago
Can we please stop using 'vibe coding' to mean 'AI-assisted coding'? (Best breakdown, imo: https://simonwillison.net/2025/Mar/19/vibe-coding/)

Is it really vibe coding if you are building a detailed coding plan, conducting "git-based experimentation with ruthless pruning", and essentially reviewing the code incrementally for correctness and conciseness? Sure, it's a process dependent on AI, but it's very far from nearly "forget[ting] that the code even exists".

That all said, I do think the article captures some of the current cost/quality dilemmas. I wouldn't jump to conclusions that these incentives are actually driving most current training decisions, but it's an interesting area to highlight.

Animats•50m ago
"Vibe coding" is a trend.[1]

[1] https://trends.google.com/trends/explore?geo=US&q=%22vibe%20...

Ancapistani•46m ago
There should be a distinction, but I don't think it's really clear where it is yet.

In my own usage, I tend to alternate between tiny, well-defined tasks and larger-scale, planned architectural changes or new features. Things in between those levels are hit and miss.

It also depends on what I'm building and why. If it's a quick-and-dirty script for my own use, I'll often write up - or speak - a prompt and let it do its thing in the background while I work on other things. I care much less about code quality in those instances.

codr7•27m ago
It's still gambling; you're trading learning/reinforcing for efficiency, which in the long run means losing skills.
parliament32•8m ago
This reads like "is it really gambling when I have a many-step system for predicting roulette outcomes?"
samtp•57m ago
I've pretty clearly seen the critical thinking ability of coworkers who depend on AI too much sharply decline over the past year. Instead of taking 30 seconds to break down the problem and work through assumptions, they immediately copy/paste into an LLM and spit back what it tells them.

This has led to their abilities stalling while their output seemingly goes up. But when you look at the quality of their output, and their ability to get projects over the last 10% or make adjustments to an already completed project without breaking things, it's pretty horrendous.

Etheryte•48m ago
My observations align with this pretty closely. I have a number of colleagues who I wager are largely using LLMs, judging both by changes in coding style and by how much they suddenly add comments, and I can't help but feel a noticeable drop in the quality of the output. Issues that have no business making it to code review are now regularly left for others to catch; it often feels like they don't even look at their own diffs. What to make of it, I'm not entirely sure. I do think there are ways LLMs can help us work in better ways, but they can also lead to considerably worse outcomes.
jimbokun•45m ago
Just replace your colleagues with the LLMs they are using. You will reduce costs with no decrease in the quality of work.
andy99•44m ago
I think lack of critical thinking is the root cause, not a symptom. I think pretty much everyone uses LLMs these days, but you can tell who sees the output and considers it "done" vs who uses LLM output as an input to their own process.
mystified5016•20m ago
I mean, I can tell that I'm having this problem and my critical thinking skills are otherwise typically quite sharp.

At work I've inherited a Kotlin project and I've never touched Kotlin or android before, though I'm an experienced programmer in other domains. ChatGPT has been guiding me through what needs to be done. The problem I'm having is that it's just too damn easy to follow its advice without checking. I might save a few minutes over reading the docs myself, but I don't get the context the docs would have given me.

I'm a 'Real Programmer' and I can tell that the code is logically sound and self-consistent. The code works and it's usually rewritten so much as to be distinctly my code and style. But still it's largely magical. If I'm doing things the less-correct way, I wouldn't really know because this whole process has led me to some pretty lazy thinking.

On the other hand, I very much do not care about this project. I'm very sure that it will be used just a few times and never see the light of day again. I don't expect to ever do android development again after this, either. I think lazy thinking and farming the involved thinking out to ChatGPT is acceptable here, but it's clear how easily this could become a very bad habit.

I am making a modest effort to understand what I'm doing. I'm also completely rewriting or ignoring the code the AI gives me, it's more of an API reference and example. I can definitely see how a less-seasoned programmer might get suckered into blindly accepting AI code and iterating prompts until the code works. It's pretty scary to think about how the coming generations of programmers are going to experience and conceptualize programming.

charcircuit•51m ago
This article ignores the enormous demand for AI coding paired with competition between providers. Reducing the price of tokens means that people can afford to generate more tokens. A coding provider that is cheaper on average to operate than another has a competitive advantage.
chaboud•49m ago
1. Yes. I've spent several late nights nudging Cline and Claude (and other systems) to the right answers. And being able to use AWS Bedrock to do this has been great (note: I work at Amazon).

2. I've had good fortunes keeping the agents to constrained areas, working on functions, or objects, with clearly defined (by me) boundaries. If the measure of a junior engineer is that you correct them once a day, an engineer once a week, a senior once a month, a principal once a quarter... Treat these agents like hyper-energetic interns. Nudge frequently.

3. Standard org management coding practices apply. Force the agents to show work, plan, unit test, investigate.

And, basically, I've described that we're becoming Software Development Managers with teams of on-demand low-quality interns. That's an incredibly powerful tool, but don't expect hyper-elegant and compact code from them. Keep that for the senior engineering staff (humans) for now.

(Note: The AlphaEvolve announcement makes me wonder if I'm going to have hyper-energetic applied science interns next...)

xianshou•49m ago
Amusingly, about 90% of my rat's-nest problems with Sonnet 3.7 are solved by simply appending a few words to the end of the prompt:

"write minimum code required"

It's not even that sensitive to the wording - "be terse" or "make minimal changes" amount to the same thing - but the resulting code will often be at least 50% shorter than the un-guided version.
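
For anyone who wants to reproduce this systematically, a minimal sketch with the Anthropic Python SDK (the model ID, suffix wording, and example prompt are illustrative, not prescriptive):

    # Append a brevity instruction to every prompt.
    # Assumes the official `anthropic` package and ANTHROPIC_API_KEY set.
    import anthropic

    client = anthropic.Anthropic()

    def ask_terse(prompt: str) -> str:
        response = client.messages.create(
            model="claude-3-7-sonnet-latest",  # illustrative model ID
            max_tokens=2048,
            messages=[{"role": "user",
                       "content": prompt + "\n\nWrite minimum code required."}],
        )
        return response.content[0].text

    print(ask_terse("Refactor this function to remove the duplication: ..."))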

panstromek•35m ago
Well, the article mentions that this reduces accuracy. Do you hit that problem often then?
andy99•48m ago
I wish more had been written about the first assertion that using an LLM to code is like gambling and you're always hoping that just one more prompt will get you what you want.

It really captures how little control one has over the process, while simultaneously having the illusion of control.

I don't really believe that code is being made verbose to make more profits. There's probably some element of model providers not prioritizing concise code, but if conciseness while maintaining "quality" were possible, it would give one model a sufficient edge over others that I suspect providers would do it.

Pxtl•47m ago
I can feel how the extreme autocomplete of AI is a drug.

Half of my job is fighting the "copy/paste/change one thing" garbage that developers generate. Keeping code DRY. The autocompletes do an amazing job of automating the repeated boilerplate. "Oh you're doing this little snippet for the first and second property? Obviously you want to do that for every property! Let me just expand that out for you!"

And I'm like "oooh, that's nice and convenient".

...

But I also should be looking at that with the stink-eye... part of that code is now duplicated a dozen times. Is there any way to reduce that duplication to the bare minimum? At least so it's only one duplicated declaration or call and all of the rest is per-thingy?

Or any way to directly/automatically wrap the thing without going property-by-property?
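
Something like this, say (a toy Python sketch; the property names are made up):

    # What the autocomplete happily expands: one hand-written block per property.
    class Wrapped:
        def __init__(self, inner):
            self._inner = inner

        @property
        def name(self):
            return self._inner.name

        @property
        def size(self):
            return self._inner.size
        # ...and a dozen more identical blocks...

    # The duplication squeezed down to a single declaration per thingy:
    for _p in ("name", "size", "color", "weight"):
        setattr(Wrapped, _p, property(lambda self, p=_p: getattr(self._inner, p)))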

Normally I'd be asking myself these questions by the 3rd line. But this just made a dozen of those in an instant. And it's so tempting and addictive to just say "this is fine" and move on.

That kind of code is not fine.

Ancapistani•42m ago
> That kind of code is not fine.

I agree, but I'm also challenging that position within myself.

Why isn't it OK? If your primary concern is readability, then perhaps LLMs can better understand generated code relative to clean, human-readable code. Also, if you're not directly interacting with it, who cares?

As for duplication introducing inconsistencies, that's another issue entirely :)

andrewstuart•44m ago
Claude was last week.

The author should try Gemini; it's much better.

martin-t•38m ago
Honestly can't tell if satire or not.
jazoom•26m ago
It's not satire. Gemini is much better for coding, at least for me.

Just to illustrate, I asked both about a browser automation script this morning. Claude used Selenium. Gemini used Playwright.

I think the main reasons Gemini is much better are:

1. It gets my whole code base as context. Claude can't take that many tokens. I also include documentation for newer versions of libraries (e.g. Svelte 5) that the LLM is not so familiar with. (A rough sketch of this packing step follows the list.)

2. Gemini has a more recent knowledge cutoff.

3. Gemini 2.5 Pro is a thinking model.

4. It's free to use through the web UI.
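
A minimal sketch of the "whole code base as context" step from point 1 (the paths are hypothetical, and the 4-chars-per-token estimate is a crude stand-in for a real tokenizer):

    # Pack a repo into one prompt, roughly budgeted by tokens.
    from pathlib import Path

    TOKEN_BUDGET = 900_000  # headroom under a ~1M-token context window

    def pack_repo(root: str, exts=(".py", ".svelte", ".ts")) -> str:
        parts, used = [], 0
        for path in sorted(Path(root).rglob("*")):
            if not path.is_file() or path.suffix not in exts:
                continue
            text = path.read_text(errors="ignore")
            est_tokens = len(text) // 4  # crude heuristic
            if used + est_tokens > TOKEN_BUDGET:
                break
            parts.append(f"--- {path} ---\n{text}")
            used += est_tokens
        return "\n\n".join(parts)

    context = pack_repo("./my-project")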

neilv•35m ago
I would seriously consider banning "vibe coding" right now, because:

1. Poor solutions.

2. Solutions not understood by the person who prompted them.

3. Development team being made dumber.

4. Legal and ethical concerns about laundering open source copyrights.

5. I'm suspicious of the name "vibe coding", like someone is intentionally marketing it to people who don't care to be good at their jobs.

6. I only want to hire people who can do holistically better work than current "AI". (Not churn code for a growth startup's Potemkin Village, nor to only nominally satisfy a client's requirements while shipping them piles of counterproductive garbage.)

7. Publicizing that you are a no-AI-slop company might scare away the majority of the bad prospective employees, while disproportionately attracting the especially good ones. (Not that everyone who uses "AI" is bad, but they've put themselves in the bucket with all the people who are bad, and that's a vastly better filter for the art of hiring than whether someone has spent months memorizing LeetCode answers solely for interviews.)

YossarianFrPrez•28m ago
There are two sets of perverse incentives at play. The main one the author focuses on is that LLM companies are incentivized to produce verbose answers, so that when you task an LLM with extending an already verbose project, the tokens used, and therefore the cost, go up.
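
To make that concrete, a back-of-envelope with made-up numbers: every round re-sends the whole, growing file as input, so verbosity compounds.

    # Made-up numbers: a file that grows by `bloat` tokens per round,
    # re-sent in full as input each time at $3 per million input tokens.
    PRICE = 3.00 / 1_000_000  # USD per input token

    def total_cost(start=2_000, bloat=1_500, rounds=20):
        size, cost = start, 0.0
        for _ in range(rounds):
            cost += size * PRICE  # the whole file goes back in as input
            size += bloat         # verbose output makes the next round pricier
        return size, cost

    size, cost = total_cost()
    print(f"final size: {size} tokens, total input cost: ${cost:.2f}")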

The second one is more intra/interpersonal: under pressure to produce, it's very easy to rely on LLMs to get one 80% of the way there and polish the remaining 20%. I'm in a new domain that requires learning a new language. So something I've started doing is asking ChatGPT to come up with exercises / coding etudes / homework for me based on past interactions.

neonate•23m ago
https://archive.ph/EzbNK
Vox_Leone•19m ago
Noted — but honestly, that's somewhat expected. Vibe-style coding often lacks structure, patterns, and architectural discipline. That means the developer must do more heavy lifting: decide what they want, and be explicit — whether that’s 'avoid verbosity,' 'use classes,' 'encapsulate logic,' or 'handle errors properly.'