
Drug Discovery in the U.S. vs. China

https://www.statnews.com/2025/12/16/china-drug-discovery-us-leadership-falling/
1•bikenaga•36s ago•0 comments

From bigger models to better intelligence: what NeurIPS 25 tells us about progress

https://lambda.ai/blog/neurips-2025-from-bigger-models-to-better-intelligence
1•lambda-research•1m ago•0 comments

Rad Power Bikes files for bankruptcy protection

https://www.bicycleretailer.com/industry-news/2025/12/16/rad-power-bikes-files-bankruptcy-protection
1•jerlam•4m ago•0 comments

Show HN: Hero – Notion for Formal Docs

https://www.myhero.so
2•kevintouati•7m ago•0 comments

Offline Is Always Better

https://josebriones.substack.com/p/offline-is-always-better
2•toomuchtodo•10m ago•0 comments

Attempting Cross Translation Unit Taint Analysis for Firefox

https://attackanddefense.dev/2025/12/16/attempting-cross-translation-unit-static-analysis.html
1•jonchang•10m ago•0 comments

Top Startup Ideas Guaranteed to Get $1M Seed Funding

1•suhaspatil101•11m ago•0 comments

EU moves to ease 2035 ban on internal combustion cars

https://apnews.com/article/eu-ban-combustion-engines-emissions-environment-d1432af14eaa73d6536f60...
1•Svip•14m ago•0 comments

Show HN: AI-powered SEO automation tool distilled from production agency systems

https://www.quicklyseo.com/
1•ralphqkly•18m ago•0 comments

Show HN: I built the fastest RSS reader in Zig

https://github.com/superstarryeyes/hys
3•superstarryeyes•18m ago•0 comments

Why mistletoe is thriving, even as its traditional orchards are lost

https://theconversation.com/why-mistletoe-is-thriving-even-as-its-traditional-orchards-are-lost-2...
1•zeristor•19m ago•1 comment

YouTube Creators Find a New Consumer for AI Slop: Babies

https://www.bloomberg.com/news/articles/2025-12-03/ai-slop-youtube-videos-for-kids-pretend-to-be-...
2•paulpauper•19m ago•3 comments

What Does a Database for SSDs Look Like?

https://brooker.co.za/blog/2025/12/15/database-for-ssd.html
1•KraftyOne•21m ago•0 comments

Run Codex CLI in a firewalled Docker sandbox

https://github.com/paulux84/codex-lockbox
1•p2dev•21m ago•0 comments

The Worst Thing About the RAM Shortage That Nobody's Talking About

https://gizmodo.com/the-worst-thing-about-the-ram-shortage-that-nobodys-talking-about-2000700185
2•_____k•23m ago•0 comments

Netflix Is Buying Nostalgia

https://12gramsofcarbon.com/p/tech-things-netflix-is-buying-nostalgia
2•theahura•25m ago•0 comments

Ask HN: What's up with the "model overloaded" on Gemini API?

3•worldsavior•25m ago•0 comments

SK Hynix Internal Analysis

https://twitter.com/BullsLab/status/1998691507956756547
1•_____k•26m ago•0 comments

Hacking group says it's extorting Pornhub after stealing users' viewing data

https://techcrunch.com/2025/12/16/hacking-group-says-its-extorting-pornhub-after-stealing-users-v...
2•SilverElfin•31m ago•1 comment

Codex Is Down

https://status.openai.com/incidents/01KCM7PAMQMCM8KAB6ZCWPKNK1
2•bartkappenburg•31m ago•0 comments

Using GitLab CI/CD with a GitHub Repository

https://docs.gitlab.com/ci/ci_cd_for_external_repos/github_integration/
1•ahmgeek•31m ago•0 comments

Letta Code: a memory-first coding agent

https://github.com/letta-ai/letta-code
4•pacjam•32m ago•2 comments

Neural Networks XD in JavaScript

https://chuwon.github.io/nn/
1•bicepjai•33m ago•0 comments

Feeding the Machine

https://www.theverge.com/cs/features/831818/ai-mercor-handshake-scale-surge-staffing-companies
1•paulpauper•33m ago•0 comments

We built an internal project management system – it became Dyversal AI

1•nivafy•34m ago•1 comment

The Order Is Backwards

https://granot.io/the-order-is-backwards/
1•tomgs•35m ago•0 comments

iRobot files for bankruptcy; Bought by its Chinese Manufacturer

https://apnews.com/article/irobot-roomba-bankruptcy-picea-amazon-7ef311c0b3848af2b30ba3921496efe1
1•Zaheer•35m ago•1 comment

How to Reclaim Aesthetic Vision from the Lean Startup?

https://medium.com/@gp2030/the-lean-startup-zen-the-art-of-failing-fast-and-reclaiming-aesthetic-...
1•light_triad•36m ago•0 comments

Lessons from building a content scanner for multiple social platforms

https://keywordspal.com/blog/building-multi-platform-content-aggregator
2•binsquare•39m ago•0 comments

Audio Plugin UI Components

https://www.audio-ui.com/
1•gregsadetsky•40m ago•0 comments

Vibe coding creates fatigue?

https://www.tabulamag.com/p/too-fast-to-think-the-hidden-fatigue
81•rom16384•1h ago

Comments

windex•1h ago
This is why I document my ideas and then go for a walk. It also helps me stay within the quota limits.
esafak•1h ago
No joke. The quotas are good for humans! It's the new "my code's compiling". https://xkcd.com/303/
JohnMakin•1h ago
There probably needs to be some settled discussion on what constitutes "vibe coding." I interpret this term as "I input text into $AI_MODEL, I look at the app to see my change was implemented. I iterate via text prompts alone, rarely or never looking at the code generated."

vs. what this author is doing, which seems more like agent assisted coding than "vibe" coding.

With regard to the subject matter, it of course makes sense that managing more features than you used to be able to manage without $AI_MODEL would result in some mental fatigue. I also believe this gets worse the older you get. I've seen this within my own career, just from times of being understaffed and overworked, AI or not.

unshavedyak•1h ago
> There probably needs to be some settled discussion on what constitutes "vibe coding." I interpret this term as "I input text into $AI_MODEL, I look at the app to see my change was implemented. I iterate via text prompts alone, rarely or never looking at the code generated."

Agreed. I've seen some folks say that it requires absolute ignorance of the code being generated to be considered "vibe coded", though I don't agree with that.

For me it's more nuanced: how "vibed" something is relates to how little you reviewed it. Considering LLMs can do some crazy things, even a few ignored lines of code can leave a feature feeling pretty "vibe coded", despite being mostly reviewed outside those lines.

layer8•47m ago
Maybe read the original definition: https://x.com/karpathy/status/1886192184808149383

Or here: https://en.wikipedia.org/wiki/Vibe_coding

Not looking at the code at all by default is essential to the term.

unshavedyak•15m ago
I agree; I'm saying any code it produces. E.g. if you ignore 95% of the LLM's PR, are you vibe coding? Some would say no, because you read 5% of the PR. I would say yes, you are vibe coding.

I.e. you could say you vibe'd 95% of the PR, and I'd agree with that - but are you vibe coding then? You looked at 5% of the code, so you're not ignoring all of the code.

Yet in the spirit of the phrase, it seems silly to say someone is not vibe coding despite ignoring almost all of the code generated.

happytoexplain•1h ago
Yes, I'm getting increasingly confused as to why some people are broadening the use of "vibe" coding to just mean any AI coding, no matter how thorough/thoughtful.
loloquwowndueo•1h ago
Like people using “bricked” to signal recoverable situations. “Oh the latest update bricked my phone and I had to factory-reset it, but it’s ok now”. Bricked used to mean it turned into something as useful as a brick, permanently.
namrog84•58m ago
I've also seen the phrasing "You've been permanently banned for 12 hours."

Instead of "temporarily suspended."

Whatever happened to "suspended" for temporary and "banned" for permanent? Now places say "permanent" with an expiration date.

nancyminusone•34m ago
My favorite one of these is "electrocuted" meaning "to be killed by electricity".

Nobody alive has ever been electrocuted, but you will meet people who claim to have been.

viccis•44m ago
I'm not sure how common this is in other countries, but Americans would rather add another definition to the dictionary for the misuse than ever tolerate being corrected or (god forbid) learn the real meaning of a word. I got dogpiled for saying this about "factoid" the other day here, but sometimes when people misuse words like "bricked" or "electrocuted", the ambiguity does actually make a difference: you have to follow up with "actually bricked permanently?" or "did the shock kill him?", so semantic information has been lost.
habinero•34m ago
Languages evolve, my dude. It's common everywhere.
crazygringo•53m ago
It's because the term itself got overapplied by people critical of LLMs -- they dismissed all LLM-assisted coding as "vibe coding" because they were prejudiced against LLMs.

Then lots of people were introduced to the term "vibe coding" in these conversations, and so naturally took it as a synonym for using LLMs for coding assistance even when reading the code and writing tests and such.

Also because vibe coding just sounds cool.

habinero•35m ago
I mean, that's the joke. "vibe coding" only sounds cool if you don't know how to code but horrific if you do.
crazygringo•29m ago
Right but there are tons of examples of things that started out as insults or negative only to be claimed as the proper or positive name. Impressionism in painting, for a start. The Quakers. Queer. Punk. Even "hacker", which started out meaning only breaking into computer systems -- and now we have "Hacker News." So vibe coding fits in perfectly.
NitpickLawyer•20m ago
> "vibe coding" only sounds cool if you don't know how to code but horrific if you do.

Disagree. Vibe coding is even more powerful if you know what you're doing. Because if you know what you're doing, and you keep up with the trends, you also know when to use it, and when not to. When to look at the code or when to just "vibe" test it and move on.

iLoveOncall•38m ago
Words don't have meaning in 2025.

A negative but courteous remark is "slamming", a tweet is an "attack", etc.

So yeah I'm not surprised that people conflate any use of AI with vibe-coding.

lukan•32m ago
Words have changed meaning all the time throughout history; it just happens faster now.
pferde•24m ago
The two examples the grandparent post mentioned are not really evolution, but rather making everything sound bombastic and sensationalist. The end game for that trend is the cigarette-brand billboard in Idiocracy, where a half-naked muscular man glares at you angrily and goes "If you do not smoke our brand, f* you!"

Sounds more like de-volution to me.

celeryd•1h ago
I don't see a distinction. Vibe coding is either agent assisted coding or using chatbots as interpreters for your design goals. They are the same thing.
christophilus•55m ago
No. One involves human quality control, and one omits it.
johnsmith1840•49m ago
"Vibe" has connotations of easy and fun, neither of which is true when building something difficult.
zephyrthenoble•1h ago
I've felt this too as a person with ADHD, specifically difficulty processing information. Caveat: I don't vibe code much, partially because of the mental fatigue symptoms.

I've found that if an LLM writes too much code, even if I specified what it should be doing, I still have to do a lot of validation myself that would have been done while writing the code by hand. This turns the process from "generative" (haha) to "processing", which I struggle a lot more with.

Unfortunately, the reason I have to do so much processing on vibe code, or large generated chunks of code, is simply that it doesn't work. There is almost always an issue that is either immediately obvious, like the code not working, or becomes obvious later, like poorly structured code that the LLM then jams into future code generation, creating a house of cards that easily falls apart.

Many people will tell me that I'm not using the right model or tools or whatever, but it's clear to me that the problem is that AI doesn't have any vision of where your code will need to head organically. It's great for one-shots and rewrites, but it always, always, always chokes on larger/complicated projects eventually, ESPECIALLY ones not written in common languages (like JavaScript) or with common packages/patterns, and then I have to go spelunking to find why things aren't working or why it can't generate code to do something I know is possible. It's almost always because the input for new code is my ask AND the poorly structured code, so the LLM will rarely clean up its own crap as it goes. If anything, it keeps writing shoddy wrappers around shoddy wrappers.

Anyways, still helpful for writing boilerplate and segments of code, but I like to know what is happening and have control over how my code is structured. I can't trust the LLMs right now.

Jeff_Brown•1h ago
Agreed. Some strategies seem to help, though. Write extensive tests before writing the code; they serve as guidance. Commit tests separately from library code, so you can tell the AI didn't change the tests. Specify the task with copious examples. Explain why you do things, not just what to do.
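A minimal sketch of the test-first idea (the `slugify` function and its tests are hypothetical illustrations, not from the comment): the expectations are committed before the agent touches the implementation, so any later diff that rewrites them stands out.

```python
import re

def slugify(title: str) -> str:
    """The implementation the agent is asked to produce:
    lowercase, strip punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# These tests would live in their own, earlier commit, so an
# agent-authored diff that edits them is immediately visible.
def test_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_collapses_whitespace():
    assert slugify("  a   b ") == "a-b"

test_basic()
test_collapses_whitespace()
```

The point is the commit discipline, not the function: with the expectations pinned first, a regeneration that changes behavior fails loudly instead of silently rewriting the spec.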
zephyrthenoble•1h ago
Interesting, I haven't tried tests outside of the code base the LLM is working on.

I could see other elements of isolation being useful, but this kind of feels like a lot of extra work and complexity, which is part of the issue...

danielbln•38m ago
Also: a detailed planning phase, cross-LLM reviews via subagents, tests, functional QA, etc. There are more (and complementary) ways to ensure the code does what it should than combing through every line.
habinero•28m ago
Yeah, this is where I start side-eying people who love vibe coding. Writing lots of tests and documentation and fixing someone else's (read: the LLM's) bad code? That's literally the worst parts of the job.
LocalH•1h ago
Downtime for the conscious brain is paramount in life, as it allows the subconscious to absorb and act on new information. I have no science to point to, but I believe wholeheartedly that the conscious and subconscious minds cannot access the same neurons at the same time; it's like single-ported RAM. More than one thing in my life has been improved by taking a conscious break and letting the subconscious churn.
johnsmith1840•38m ago
I did a lot of AI research around this (memory/fine-tuning).

The coolest bit of research I came across was what the brain does during sleep: it basically reduces connections, but it also makes you hallucinate (dream). This was found both in researching fish and in training LLMs; there's great value in "forgetting" for generalization.

After studying it in LLMs for a while, I came to the same conclusion about my own brain: problems are often so complex that you must let your brain forget in order to handle the complexity. In the same sense, I also believe this is the path to AGI.

Zigurd•1h ago
I don't want to be that contrarian guy, but I find it energizing to go faster. For example, being able to blast through a list of niggling defects that need to be fixed is no longer a stultifying drag.

I recently used a coding agent on a project where I was using an unfamiliar language, framework, API, and protocol. It was a non-trivial project, and I had to be paying attention to what the agent was doing because it definitely would go off into the weeds fairly often. But not having to spend hours here and there getting up to speed on some mundane but unfamiliar aspect of the implementation really made everything about the experience better.

I even explored some aspects of LLM performance: I could tell that new and fast-changing APIs easily flummox a coding agent, confirming how strongly LLM performance depends on up-to-date and accurate training material. I've also seen this aspect of agent-assisted coding improve and vary across AIs.

Avicebron•57m ago
Can you share why it was non-trivial? I'm curious how folks are evaluating the quality of their solutions when the project space is non-trivial and unfamiliar.
ares623•54m ago
A little bit of Dunning-Kruger maybe?
joseda-hg•37m ago
Non-triviality is relative anyway; if anything, admitting complexity beyond your skills in your field of expertise reads like the inverse.
vidarh•55m ago
Same here. I've picked up projects that have languished for years because the boring tasks no longer make me put them aside.
skdhshdd•46m ago
> But not having to spend hours here and there getting up to speed on some mundane but unfamiliar aspect of the implementation

At some point you realize that if you want people to trust you, you have to do this. Otherwise you're just gambling, which isn't very trustworthy.

It’s also got the cumulative effect of making you a good developer if done consistently over the course of your career. But yes, it’s annoying and slow in the short term.

louthy•42m ago
> But not having to spend hours here and there getting up to speed on some mundane but unfamiliar aspect of the implementation

Red flag. In other words you don’t understand the implementation well enough to know if the AI has done a good job. So the work you have committed may work or it may have subtle artefacts/bugs that you’re not aware of, because doing the job properly isn’t of interest to you.

This is ‘phoning it in’, not professional software engineering.

jmalicki•35m ago
Learning an unfamiliar aspect and doing it by hand will have the same issues. If you're new to Terraform, you are new to Terraform, and are probably going to insert even more footguns than the AI.

At least when the AI does it you can review it.

louthy•32m ago
> Learning an unfamiliar aspect and doing it be hand will have the same issues. If you're new to Terraform, you are new to Terraform

Which is why you spend time upfront becoming familiar with whatever it is you need to implement. Otherwise it’s just programming by coincidence [1], which is how amateurs write code.

> and are probably going to even insert more footguns than the AI.

Very unlikely. If I spend time understanding a domain then I tend to make fewer errors when working within that domain.

> At least when the AI does it you can review it.

You can’t review something you don’t understand.

[1] https://dev.to/decoeur_/programming-by-coincidence-dont-do-i...

pferde•31m ago
No, you can not. Without understanding the technology, at best you can "vibe-review" it, and determine that it "kinda sorta looks like it's doing what it's supposed to do, maybe?".
visarga•31m ago
>Red flag. In other words you don’t understand the implementation well enough to know if the AI has done a good job.

Red flag again! If your only protection is to "understand the implementation", you get buggy code. What makes code worthy of trust is passing tests: well-designed tests that cover the angles. LGTM is vibe testing.

I go as far as saying it does not matter whether the code was written by a human who understands it; what matters is how well it is tested. Vibe testing is the problem, not vibe coding.

nosianu•12m ago
> What makes a code worthy of trust is passing tests

(Sorry, but you set yourself up for this one, my apologies.)

Oh, so this post describes "worthy code", okay then.

https://news.ycombinator.com/item?id=18442941

Tests are not a panacea. They don't care about anything other than what you test. If you don't test for maintainability and readability, only that the product "works", you end up like the product in that post.

Ultimate example: Biology (and everything related, like physiology, anatomy), where the test is similarly limited to "does it produce children that can survive". It is a huuuuuge mess, and trying to change any one thing always messes up things elsewhere in unexpected and hard or impossible to solve ways. It's genius, it works, it sells - and trying to deliberately change anything is a huge PITA because everything is interconnected and there is no clean design anywhere. You manage to change some single gene to change some very minor behavior, suddenly the ear shape changes and fur color and eye sight and digestion and disease resistance, stuff like that.

bongodongobob•31m ago
It sounds like you've never worked a job where you aren't just supporting one product that you built yourself. Fix the bug and move on. I do not have the time or resources to understand it fully. It's a 20-year-old app full of business logic, and MS changed something in their API. I do not need to understand the full stack. I need to understand the bug and how to fix it. My boss wants it fixed yesterday. So I fix it and move on to the next task. Some of us have to wear many hats.
louthy•16m ago
In my 40 years of writing code, I’ve worked on many different code bases and in many different organisations. And I never changed a line of code, deleted code, or added more code unless I could run it in my head and ‘know’ (to the extent that it’s possible) what it will do and how it will interact with the rest of the project. That’s the job.

I’m not against using AI. I use it myself, but if you don’t understand the domain fully, then you can’t possibly validate what the AI is spitting out, you can only hope that it has not fucked up.

Even using AI to write tests will fall short if you can’t tell if the tests are good enough.

For now we still need to be experts. The day we don't need experts, the LLMs should start writing machine code, not human-readable languages.

observationist•42m ago
There's something exhilarating about pushing through to some "everything works like I think it should" point, and you can often get there without doing the conscientious, diligent, methodical "right" way of doing things, and it's only getting easier. At the point where everything works, if it's not just a toy or experiment, you definitely have to go back and understand everything. There will be a ton to fix, and it might take longer to do it like that than just by doing it right the first time.

I'm not a professional SWE, I just know enough to understand what the right processes look like, and vibe coding is awesome but chaotic and messy.

lukan•35m ago
"It was a non-trivial project, and I had to be paying attention to what the agent was doing"

There is a big difference between vibe coding and llm assisted coding and the poster above seems to be aware of it.

Rperry2174•36m ago
I think both experiences are true.

AI removes boredom AND removes the natural pauses where understanding used to form.

Energy goes up, but so does the "compression" of cognitive work.

I think it's less a question of "faster" or "slower" than of who controls the tempo.

visarga•34m ago
After 4 hours of vibe coding I feel as tired as a full day of manual coding. The speed can be too much. If I only use it for a few minutes or an hour, it feels energising.
stuffn•26m ago
I think the counter-point to that is what I experience.

I agree it can be energizing because you can offload the bullshit work to a robot. For example, build me a CRUD app with a bootstrap frontend. Highly useful stuff especially if this isn't your professional forte.

The problems come afterwards:

1. The bigger the generated base codebase, the less likely you are to find the time or energy to refactor LLM slop into something maintainable. I've spent a lot of time tailoring prompts for this type of generation and still can't get the code to be as precise as something an engineer would write.

2. Using an unfamiliar language means you're relying entirely on the LLM to determine what is safe. Suppose you wish to generate a project in C++. An LLM will happily do it. But will it be up to a standard that is maintainable and safe? Probably not. The devil is in the mundane details you don't understand.

In the case of (2), it's likely more instructive to do the legwork yourself; then the LLM can suggest simple, verifiable changes. In the case of (1), I think it's just an extension of the complexity of any project, professional or not. It's often better to write it correctly the first time than to write it fast and loose and then find the time to fix it later.

Animats•23m ago
> I don't want to be that contrarian guy, but I find it energizing to go faster.

You, too, can be awarded the Order of Labor Glory, Third Class.[1]

[1] https://en.wikipedia.org/wiki/Order_of_Labour_Glory

xnorswap•1h ago
I feel this.

I take breaks.

But I also get drawn to overworking (as I'm doing right now), which I justify because "I'm just keeping an eye on the agent".

It's hard work.

It's hard to explain what's hard about it.

Watching as a machine does in an hour what would take me a week.

But also watching to stop the machine spin around doing nothing for ages because it's got itself in a mess.

Watching for when it gets lazy, and starts writing injectable SQL.

Watching for when it gets lazy, and tries to pull in packages it had no right to.

We've built a motor that can generate 1,000 horsepower.

But one man could steer a horse.

The motor right now doesn't have the appropriate steering apparatus.

I feel like I'm chasing it around trying to keep it pointed forward.

It's still astronomically productive.

To abandon it would be a waste.

But it's so tiring.
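The "injectable SQL" failure mode mentioned above is concrete enough to sketch (the table and inputs are illustrative, using Python's sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

hostile = "nobody' OR '1'='1"  # attacker-controlled input

# The lazy version interpolates the input into the SQL string,
# so the OR clause becomes part of the query and matches every row.
lazy = f"SELECT count(*) FROM users WHERE name = '{hostile}'"
assert conn.execute(lazy).fetchone()[0] == 1

# The parameterized version passes the value out of band,
# so it is compared as a literal string and matches nothing.
safe = "SELECT count(*) FROM users WHERE name = ?"
assert conn.execute(safe, (hostile,)).fetchone()[0] == 0
```

This is exactly the kind of regression that's easy to miss when skimming a large agent diff, and easy to catch with a lint rule that flags string-built SQL.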

colechristensen•1h ago
Build tools to keep it in check.
vidarh•51m ago
Really, this. You still need to check its work, but it is also pretty good at checking its work if told to look at specific things.

Make it stop. Tell it to review whether the code is cohesive. Tell it to review it for security issues. Tell it to review it for common problems you've seen in just your codebase.

Tell it to write a todo list for everything it finds, and tell it to fix it.

And only review the code once it's worked through a checklist of its own reviews.

We wouldn't waste time reviewing a first draft from another developer if they hadn't bothered to look it over and test it properly, so why would we do that for an AI agent that is far cheaper?
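The review loop described above can be sketched as a simple driver. Everything here is hypothetical: `run_agent` is a stub standing in for whatever agent CLI or API you use, and the review prompts are examples.

```python
# Review passes the agent must work through before a human looks at the diff.
REVIEW_PASSES = [
    "Review the change: is the code cohesive with the rest of the codebase?",
    "Review the change for security issues.",
    "Review the change for problems we've seen before in this codebase.",
]

def run_agent(prompt: str) -> list[str]:
    """Stub: a real version would invoke your coding agent and
    return the findings it reports."""
    return []

def self_review() -> list[str]:
    findings = []
    for prompt in REVIEW_PASSES:
        findings += run_agent(prompt)           # build the todo list
    for item in findings:
        run_agent(f"Fix this finding: {item}")  # then make it work the list
    return findings

# Only review the code yourself once self_review() comes back clean.
```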

colechristensen•44m ago
Tell it to grade its work in various categories and that you'll only accept B+ or greater work. Focusing on how good it's doing is an important distinction.
habinero•26m ago
It's very funny that I can't tell if this is sarcasm or not. "Just tell it to do better."
colechristensen•23m ago
Oh, I'm not at all joking. It's better at evaluating quality than producing it blindly. Tell it to grade its work and it can tell you most of the stuff it did wrong. Tell it to grade its work again. Keep going through the cycle and you'll get significantly better code.

The thinking should probably include this kind of introspection (give me a million dollars for training and I'll write a paper), but if it doesn't, you can just prompt it to.

CPLX•47m ago
My least favorite part is where it runs into some stupid problem and then tries to go around it.

Like when I'm asking it to run a bunch of tests against the UI using a browser tool, and something doesn't work. Then it goes and just writes code to update the database instead of using the UI element.

The other thing that makes me insane is when I tell it what to do, and it says, "But wait, let me do something else instead."

khimaros•59m ago
Doctorow's "Reverse Centaur"
cmrdporcupine•52m ago
I play stupid web games (generals.io I'm looking at you) while Claude does its thing. Takes the edge off the pace a bit.

This is fine for WFH/remote work. It didn't have great optics when I went back to in-office for a bit.

euph0ria•21m ago
Same here :) love generals
AutumnsGarden•51m ago
There are a lot of points I agree with, but I think what's important is fully conceptualizing the mental model of your project. Then context switching doesn't even induce much mental fatigue.
layer8•41m ago
That’s only possible for relatively small projects.
retuy5668•19m ago
Don't you have a fully conceptualized mental model of your body, to the point that you are quite functional with it? Isn't your body far, far, far more complicated than any software project could be?

How'd you reckon?

iceflinger•47m ago
Why is this written as a bullet pointed list?
tehjoker•40m ago
This is what loom workers experienced after the introduction of the power loom, what factory workers experienced under Taylorism, what Amazon workers experience today in many cases. Just working at a pace that is unsustainable. This is why unions exist.
waltbosz•36m ago
> One reason developers are developers is the dopamine loop
> You write code, it doesn't work, you fix it, it works, great! Dopamine rush. Several dozens or a hundred times a day.

This statement resonates with me. Vibe coding gets the job done quickly, but without the same joy. I used to think it was the finished product that I liked to create, but maybe it's the creative process of building. It's like LEGO kits: the fun is putting them together, not looking at the finished model.

On the flip side, coding sessions where I bang my head against the wall trying to figure out some black box were never enjoyable. Nor was writing POCOs, boilerplate, etc.

cxromos•29m ago
True in my case. When I get into the zone while coding, I can go on and on. While an LLM can help, there's a cognitive mismatch between dictating directions to the LLM and reading the generated code when it comes time to continue coding: the brain and the generated code aren't aligned. At the moment I prefer, after coding a feature, to see if it can be improved using an LLM. And it's a great help for writing tests.
wendgeabos•29m ago
anecdata
scuff3d•27m ago
A guy at work did a demo of an agent workflow for some higher-ups (we have chatbots but haven't adopted agents yet). He raved about how, after writing a several-hundred-line spec, being extremely specific about the technology to use, and figuring out where to put all the guardrails, he was able to get Claude to generate weeks' worth of code. When all was said and done, it was like 20k lines of code between implementation, tests, and helper tools. Along the way he acknowledged you have to keep a close eye on it, or it will generate functions that pass tests but don't actually do their jobs, tests that pass but don't test anything, and a bunch of other issues.

To people with little to no practical software experience, I can see why that seems incredible. Think of the savings! But anyone who's worked in a legacy code base, even a well-written one, knows the pain. This is worse. That legacy code base was at least written with intention, and is hopefully battle-tested to some degree by the time you look at it. This is 20k lines of code written by an intern that you are now responsible for going through line by line, which is going to take at least as long as it would have taken to write it yourself.

There are obvious wins from AI, and agents, but this type of development is a bad idea. Iteration loops need to be kept much smaller, and you should still be testing as you go like you would when writing everything yourself. Otherwise it's going to turn into an absolute nightmare fast.

inetknght•21m ago
Even asking it to do little tests, Claude 4.5 Sonnet Thinking still ends up writing tests that do nothing or don't do what it says they will. And it's always fucking cheery about it: "your code is now production-ready!" and "this is an excellent idea!" and "all errors are now fixed! your code is production-ready!" and "I fixed the compiler issue, we're now production ready!"

...almost as if it's too eager to make its first commit. Much like a junior engineer might be.

It's not eager enough to iterate. Moreover, when it does iterate, it often brings along the same wrong solutions it came up with before.

It's way easier to keep an eye on small changes while iterating with AI than it is with letting it run free in a green field.

scuff3d•1m ago
Yeah, that aggressive sycophancy is incredibly annoying. Someone telling me I'm being a fucking idiot is more useful than "what a fantastic observation! You're so right" for the millionth time.

Even using it to spitball ideas can be a problem. I was using Claude to bounce ideas off of for a problem I was working on, and it was dead set that a specific solution involving a stack and some complex control logic was correct, when in reality it would have made the entire solution far more complicated. All I really needed was a sliding window into an array.

simonw•25m ago
This morning I attended and paid attention to three separate meetings and at one point had three coding agents running in parallel solving some quite complex problems for me.

It's now 11:47am and I am mentally exhausted. I feel like my dog after she spends an hour at her sniff-training class (it wipes her out for the rest of the day).

I've felt like that on days without the meetings too. Keeping up with AI tools requires a great deal of mental effort.

OptionOfT•25m ago
I see people with no coding experience now generating PRs to a couple of repos I manage.

They ask a business question to the AI and it generates a bunch of code.

But honestly, coding isn't the part that slowed me down. Mapping the business requirements to code that doesn't fail is the hard part.

And the generated PRs are just answers to the narrow business questions. Now I need to spend time walking it all back, trying to figure out what the actual business question is and what the overall impact will be. From experience, I get very few answers to those questions.

And this is where Software Engineering experience becomes important. It's asking the right questions. Not just writing code.

Next to that, I'm seeing developers drinking the Kool-Aid and submitting PRs where a whole bunch of changes are made, but they don't know why. Well, those changes DO have impact. Keeping a change because the AI suggested it isn't the right answer. Keeping it because you agree with the AI's reasoning isn't the right answer either.

spike021•11m ago
I find vibe coding similar to visiting a country where I don't know the local language very well.

Usually that requires saying something, seeing if the other person understands what I'm saying, and occasionally repeating myself in a different way.

It can be real tiring when I'm with friends who only speak the other language so we're both using translator tools and basically repeating that loop up to 2-3 hours.

I've found the same situation with vibe coding, especially when the model misunderstands what I want or starts going off on a tangent. Sometimes it's easier to edit the original query, or an earlier step in the flow, and rewrite it for a better result.

blahbob•10m ago
In my experience, it depends on the task. As a hobby, I develop personal projects (mostly web apps) where I'm not an expert in the relevant technology. In this case, LLM-assisted coding is empowering - and I don't think it's Dunning-Kruger, as I can understand the generated code, and it "feels" good enough given my 20+ years' experience in software engineering.

However, when it comes to my professional work on a mature, advanced project, I find it much easier to write the code myself than to provide a very precise specification without which the LLM wouldn't generate code of a sufficiently high quality.

gaigalas•54s ago
If you're automating code generation but not automating verification, there will be a mismatch.

Maybe the fatigue comes from that mismatch?

The classical vibe coder style is to just ignore verification. That's not a good approach either.

I think this space has not matured yet. We have old tools (test, lint) and some unreliable tools (agent assisted reviews), but nothing to match the speed of generation yet.

I do it by creating ad-hoc deterministic verifiers. Sometimes they'll last just a couple of PRs. It's cheap to do them now. But also, there must be a better way.
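One way to read "ad-hoc deterministic verifiers" is as small throwaway scripts that mechanically reject patterns you keep seeing in generated code. A hypothetical sketch (the banned patterns are examples, not from the comment):

```python
import re

# Patterns this particular project keeps seeing in agent output.
BANNED = [
    (re.compile(r'f"SELECT .*\{'), "string-built SQL"),
    (re.compile(r"^\s*except\s*:\s*$", re.M), "bare except"),
]

def check(path: str, text: str) -> list[str]:
    """Return a human-readable problem line for each banned pattern found."""
    return [f"{path}: {why}" for pattern, why in BANNED if pattern.search(text)]

def main(paths: list[str]) -> int:
    failures = []
    for path in paths:
        with open(path) as f:
            failures += check(path, f.read())
    print("\n".join(failures))
    return 1 if failures else 0

# e.g. sys.exit(main(sys.argv[1:])) from a pre-commit hook or CI step,
# passing the files a PR touches.
```

Because it's deterministic and cheap, a script like this keeps up with the speed of generation in a way an agent-driven review can't, which matches the "cheap to do them now" observation above.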