frontpage.

Google is dead. Where do we go now?

https://www.circusscientist.com/2025/12/29/google-is-dead-where-do-we-go-now/
157•tomjuggler•1h ago•118 comments

All Delisted Steam Games

https://delistedgames.com/all-delisted-steam-games/
95•Bondi_Blue•2h ago•23 comments

Static Allocation with Zig

https://nickmonad.blog/2025/static-allocation-with-zig-kv/
132•todsacerdoti•5h ago•67 comments

Flame Graphs vs. Tree Maps vs. Sunburst (2017)

https://www.brendangregg.com/blog/2017-02-06/flamegraphs-vs-treemaps-vs-sunburst.html
55•gudzpoz•2d ago•8 comments

List of domains censored by German ISPs

https://cuiiliste.de/domains
182•elcapitan•3h ago•71 comments

The production bug that made me care about undefined behavior

https://gaultier.github.io/blog/the_production_bug_that_made_me_care_about_undefined_behavior.html
58•birdculture•3h ago•34 comments

Which Humans?

https://osf.io/preprints/psyarxiv/5b26t_v1
15•surprisetalk•1h ago•3 comments

Left Behind: Futurist Fetishists, Prepping and the Abandonment of Earth

https://www.boundary2.org/2019/08/sarah-t-roberts-and-mel-hogan-left-behind-futurist-fetishists-p...
9•naves•2h ago•3 comments

The Future of Software Development Is Software Developers

https://codemanship.wordpress.com/2025/11/25/the-future-of-software-development-is-software-devel...
36•cdrnsf•2h ago•10 comments

Show HN: Aroma: Every TCP Proxy Is Detectable with RTT Fingerprinting

https://github.com/Sakura-sx/Aroma
33•Sakura-sx•4d ago•17 comments

High-performance C++ hash table using grouped SIMD metadata scanning

https://github.com/Cranot/grouped-simd-hashtable
23•rurban•5d ago•8 comments

GOG is getting acquired by its original co-founder

https://www.gog.com/blog/gog-is-getting-acquired-by-its-original-co-founder-what-it-means-for-you/
433•haunter•4h ago•245 comments

Show HN: Superset – Terminal to run 10 parallel coding agents

https://superset.sh/
46•avipeltz•6d ago•34 comments

Libgodc: Write Go Programs for Sega Dreamcast

https://github.com/drpaneas/libgodc
170•drpaneas•7h ago•39 comments

Show HN: Evidex – AI Clinical Search (RAG over PubMed/OpenAlex and SOAP Notes)

https://www.getevidex.com
21•amber_raza•4h ago•4 comments

Static Allocation for Compilers

https://matklad.github.io/2025/12/23/static-allocation-compilers.html
11•enz•5d ago•3 comments

Nvidia takes $5B stake in Intel under September agreement

https://www.reuters.com/legal/transactional/nvidia-takes-5-billion-stake-intel-under-september-ag...
146•taubek•4h ago•50 comments

Kidnapped by Deutsche Bahn

https://www.theocharis.dev/blog/kidnapped-by-deutsche-bahn/
831•JeremyTheo•9h ago•776 comments

Linux DAW: Help Linux musicians to quickly and easily find the tools they need

https://linuxdaw.org/
141•prmoustache•9h ago•72 comments

Meta's ads tools started switching out top-performing ads with AI-generated ones

https://www.businessinsider.com/meta-ai-generating-bizarre-ads-advantage-plus-2025-10
84•zdw•1h ago•51 comments

You can't design software you don't work on

https://www.seangoedecke.com/you-cant-design-software-you-dont-work-on/
199•saikatsg•13h ago•68 comments

Show HN: Z80-μLM, a 'Conversational AI' That Fits in 40KB

https://github.com/HarryR/z80ai
450•quesomaster9000•15h ago•101 comments

Why is calling my asm function from Rust slower than calling it from C?

https://ohadravid.github.io/posts/2025-12-rav1d-faster-asm/
81•gavide•2d ago•26 comments

Binance's Trust Wallet extension hacked; users lose $7M

https://www.web3isgoinggreat.com/?id=trust-wallet-hack
31•ilamont•1h ago•2 comments

Karpathy on Programming: "I've never felt this much behind"

https://twitter.com/karpathy/status/2004607146781278521
172•rishabhaiover•3d ago•115 comments

What an unprocessed photo looks like

https://maurycyz.com/misc/raw_photo/
2255•zdw•23h ago•364 comments

Feynman's Hughes Lectures: 950 pages of notes

https://thehugheslectures.info/the-lectures/
151•gnubison•10h ago•34 comments

Show HN: See what readers who loved your favorite book/author also loved to read

https://shepherd.com/bboy/2025
102•bwb•9h ago•24 comments

Show HN: Per-instance TSP Solver with No Pre-training (1.66% gap on d1291)

12•jivaprime•7h ago•2 comments

Show HN: Vibe coding a bookshelf with Claude Code

https://balajmarius.com/writings/vibe-coding-a-bookshelf-with-claude-code/
240•balajmarius•8h ago•182 comments

LLMs Are Not Fun

https://orib.dev/nofun.html
181•todsacerdoti•2h ago

Comments

kylecazar•2h ago
Programming can be not-fun, too. Maybe if you only use models for the tedious bits, a balance will be struck.

But if you are in a work situation where LLMs are forced upon you in very high doses, then yes -- I understand the feeling.

jwaldrip•2h ago
Typing is not fun. It robs me of my craft of holding my pencil and feeling it press against the paper with my hand... LLMs are merely a tool to achieve a similar end result. The different aspects of software development are an art. But even with LLMs, I critique and care about the code just as much as if I were writing it line by line myself. I have had more FUN getting all of my ideas on paper with LLMs than I have had over years of banging my head against a keyboard going down the rabbit hole on production bugs.
marcofloriano•2h ago
It's not about typing, it's about writing. You don't type, you write. That's the paradigm. You can write with a pen or you can type on a keyboard. Different ways, same goal. You write.

LLMs code for you. They write for you.

daliusd•2h ago
So does autocomplete. Why not treat LLMs as the next autocomplete iteration?
b40d-48b2-979e•2h ago
LLMs are generative and do not have a fixed output the way past autocompletes have. I know that when I accept "intellisense" or whatever editor tools are provided to me, it's using a known set of completions that are valid. LLMs often hallucinate, and you have to double-check everything they output.
yunwal•1h ago
I don't know what autocomplete you're using, but mine often suggests outright invalid words given the context. I work around this by simply not accepting them.
b40d-48b2-979e•1h ago
The high failure rate of LLM-based autocompletes has had me avoid those kinds of features altogether, as they waste my time and break my focus to double-check someone else's work. I was efficient before they were forced into every facet of our lives three years ago, and I'll be just as efficient now.
catlifeonmars•15m ago
Personally, I configure autocomplete so that LSP completions rank higher than LLM completions. I like it because it starts with known/accurate completions and then gracefully degrades to hallucinations.
marcofloriano•2h ago
Because they are not. Autocomplete only completes the thing you already thought. You solve the problem, the machine writes. Mechanical.

LLMs define paths and ideas, choose routes, analyze, and so on. They don't just autocomplete. They create the entire poem.

daliusd•1h ago
Sometimes. Usually the LLM does exactly what I ask it. It's not as if there are a million ways to do it - usually 4-10.
NegativeLatency•2h ago
You could build one like that, but most implementations I've seen cross the line for me.

Hard to define but feels similar to the "I know it when I see it" or "if it walks like a duck and quacks like a duck" definitions.

autoexec•1h ago
Who'd want an autocomplete that randomly invents words and spellings while presenting them as real? It's annoying enough when autocomplete screws up every other ducking message I send by choosing actual words inappropriately. I don't need one that produces convincing looking word salad by shoving in lies too.
daliusd•59m ago
I wonder why people have such completely different experiences with LLMs.
JohnFen•1h ago
Autocomplete annoys me, derails my train of thought, and slows me down. I'm happy that nobody forces me to use it. Likewise, I would greatly resent being forced to use LLMs.
glial•2h ago
Yesterday I had a semi-coherent idea for an essay. I told it to an LLM and asked for a list of authors and writings where similar thoughts have been expressed - and it provided a fantastic bibliography. To me, this is extremely fun. And reading similar works to help articulate an idea is absolutely part of writing.

"LLMs" are like "screens" or "recording technology". They are not good or bad by themselves - they facilitate or inhibit certain behaviors and outcomes. They are good for some things, and they ruin some things. We, as their users, need to be deliberate and thoughtful about where we use them. Unfortunately, it's difficult to gain wisdom like this a priori.

encyclopedism•1h ago
As someone said "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".
xnx•1h ago
You don't have to use them.
encyclopedism•1h ago
You're wrong in saying so. Many companies are quite literally mandating their use, do a quick search on HN.
wahnfrieden•1h ago
Only if you are already wealthy or fine with finding a new job

If I were still employed, I would also not want my employer to tolerate peers of mine rejecting the use of agents in their work out of personal preference. If colleagues were allowed to produce less work for equal compensation, I would want to be allowed to take compensated time off work by getting my own work done in faster ways - but that never flies with salaried positions, and getting work done faster is greeted with more work to do sooner. So it would be demoralizing to work alongside and be required to collaborate with folks who are allowed to take the slow and scenic route if it pleases them.

In other words, expect your peers to lobby against your right to deny agent use, as much as your employer.

If what you really want is more autonomy and ownership over your work, rejecting tool modernity won't get you that. It requires organizing. We learned this lesson already from how the Luddite movement and Jacobin reaction played out.

encyclopedism•1h ago
Very well put
tjr•1h ago
Why limit this to AI? There have been lots of programming tools which have not been universally adopted, despite offering productivity gains.

For example, it seems reasonable that using a good programming editor like Emacs or Vi would offer a 2x (or more) productivity boost over using Notepad or Nano. Why hasn't Nano been banned, forbidden from professional use?

catlifeonmars•55m ago
You’re assuming implicitly that the tool use in question always results in greater productivity. That’s not true across the board for coding agents. Let me put this another way: 99% of the time, the bottleneck is not writing code.
bugglebeetle•1h ago
As a former artist, I can tell you that you will never have good or sufficient ideas for your art or writing if you don’t do your laundry and dishes.

A good proxy for understanding this reality is that wealthy people who pay others to do all of these things for them have almost uniformly terrible ideas. This is even true for artists themselves. Have you ever noticed that albums tend to get worse the more successful the musicians become?

It’s mundanity and tedium that forces your mind to reach out for more creative things and when you subtract that completely from your life, you’re generally left with self-indulgence instead of hunger.

dbtc•1h ago
When I do dishes by hand I think all kinds of interesting thoughts.

Anyway, we've had machines that do our dishes and laundry for a long while now.

autoexec•1h ago
Sadly all the AI is owned by companies that want to do all your art and writing so that they can keep you as a slave doing their laundry and dishes. Maybe we'll eventually see powerful LLMs running locally so that you don't have to beg some cloud service for permission to use it in the ways you want, but at this point most people will be priced out of the hardware they'd need to run it anyway.

However you feel about LLMs or AI right now, there are a lot of people with way more money and power than you have who are primarily interested in further enriching and empowering themselves and that means bad news for you. They're already looking into how to best leverage the technology against you, and the last thing they care about is what you want.

blks•1h ago
So finding out information was fun for you. Would it also be fun if said LLM wrote your essay for you, based on your semi-coherent idea?
flatline•2h ago
I write what I want the LLM to do. Generating a satisfactory prompt is sometimes as much work as writing the code myself - it just separates the ideation from the implementation. LLMs are the realization of the decades-long search for natural language programming, dating at least as far back as COBOL. I personally think they are great - not 100% of the time, just as a tool.
xnx•1h ago
> LLMs code for you. They write for you.

A director is the most important person to the creation of a film. The director delegates most work (cameras, sets, acting, costumes, makeup, lighting, etc.), but can dive in and take low-level/direct control of any part if they choose.

ffsm8•1h ago
Have you actually done some projects with e.g. Claude Code? Completely greenfield, entirely up to yourself?

Because IME, you're completely wrong.

I mean, I get where you're coming from if you imagine it like the literal vibe coding this started as, but that's just a party trick and falls off quickly as the project gets more complex.

To be clear, simple features in an existing project can often be done simply - with a single prompt making changes across multiple files - but that only works under _some circumstances_, and bigger features / more in-depth architecture work is still necessary to get the project to work according to your ideas.

And that part needs you to tell the LLM how it should do it - because otherwise you're rolling the dice on whether it's gonna be a clusterfuck after the next 5 changes.

jstummbillig•1h ago
To get the LLM to code for me, I need to write.
maxweisel•2h ago
100% this. I've had more fun using Claude Code because I get to spend more of my time doing the fun parts (design, architecture, problem solving, etc.) and less time typing, fixing small compilation errors, or looking up API docs to figure out that query parameters use camelCase instead of underscores.
monster_truck•1h ago
You don't have to do any of that if you simply don't make mistakes in the first place FYI
stackghost•1h ago
This is why I exclusively write C89 when handling untrusted user input. I simply never make mistakes and so I don't need to worry about off-by-ones or overflows or memory safety or use after frees.

Garbage collection and managed types are for idiots who don't know what the hell they're doing; I'm leet af. You don't need to worry about accidentally writing heartbleed if you simply don't make mistakes in the first place.

bugglebeetle•1h ago
Attitudes like this one are why people prefer working with AI to code lol.
autoexec•1h ago
I'd rather spend my time designing and writing code than spending it debugging and reformatting whatever an LLM cobbled together from stack overflow and github. 'Design, architecture, problem solving, etc' all takes a backseat when the LLM barfs out all the code and you have to either spend your time convincing it to output what you could have written yourself anyway or play QA fixing its slop all day long.
maxweisel•1h ago
Back when I would ask ChatGPT to write code, I would agree with you, but using Claude Code's planning mode is a night and day difference. You write out a list of specs, Claude writes up a plan (that for writing backend APIs has always been just about perfect for me if my spec is solid), and then Claude executes that plan to almost perfection, with small nudges along the way.

If you're doing anything UI-based, it hasn't performed well for me, but for certain areas of software development, it's been an absolute dream.

ori_b•2h ago
I never spent much of my coding time on typing. My most productive coding is done in my head, usually a mile or so into a walk.
linsomniac•1h ago
>usually a mile or so into a walk

My place for that is in the shower.

I had one of those shower epiphanies a couple mornings ago... And I fed it into a couple LLMs while I was playing a video game (taking some time over the holidays to do that), and by the afternoon I had that idea as working code: ~4500 LOC with that many more in tests.

People keep saying "I want LLMs to do the laundry so I can do art, not to do the laundry while LLMs do art." This is an example of LLMs doing the coding so I can rekindle a joy of gaming, which feels like it's leaning in the right direction.

1718627440•1h ago
Or on the toilet.
6r17•2h ago
I was about to write something really emotional and clearly lacking any kind of self-reflection; then I read you again, and I admit there is a lot in this that is true.

I feel like there may be something inherently wrong in the interface more than in the actual expression of the tool. I'm pretty sure we are in some painful era where LLMs, quite frankly, help a ton with an absurd amount of stuff - underlining "ton" and "stuff" because it really is about everything.

But it also generates a lot of frustration; I'm not convinced by the conversational status quo, for example, and I could easily see something inspired directly by what you said about drawing. There is something here about the experience - and it's really difficult to work on, because it's inherently personal and may require actually spending time and accumulating frustration to finally be able to express it through something else.

Ok, time to work lmao

observationist•1h ago
Radical change in the available technology is going to require radical shifts in perspective. People don't like change, especially if it involves degrading their craft. If they pivot and find the joy in the new process, they'll be happy, but people far more often prefer to be "right" and miserable.

I have some sympathy for them, but AI is here to stay, and it's getting better, faster, and there's no stopping it. Adapt and embrace change and find joy in the process where you can, or you're just going to be "right" and miserable.

The sad truth is that nobody is entitled to a perpetual advantage in the skills they've developed and sacrificed for. Expertise and craft and specialized knowledge can become irrelevant in a heartbeat, so your meaning and joy and purpose should be in higher principles.

AI is going to eat everything - there will be no domain in which it is better for humans to perform work than it will be to have AI do it. I'd even argue that for any given task, we're pretty much already there. Pick any single task that humans do and train a multibillion dollar state of the art AI on that task, and the AI is going to be better than any human for that specific task. Most tasks aren't worth the billions of dollars, but when the cost drops down to a few hundred dollars, or pennies? When the labs figure out the generalization of problem categories such that the entire frontier of model capabilities exceeds that of all humans, no matter how competent or intelligent?

AI will be better, cheaper, and faster in any and every metric of any task any human is capable of performing. We need to figure out a better measure of human worth than the work they perform, and it has to happen fast, or things will get really grim. For individuals, that means figuring out your principles and perspective, decoupling from "job" as meaning and purpose in life, and doing your best to surf the wave.

JodieBenitez•32m ago
I can't wait for the machine to do all my work so I can finally do my personal projects without being interrupted.
observationist•10m ago
I have so many personal projects that I've started over the years and then left to wither on the vine. I've been able to complete a dozen or so over the last 2 years, and work on a handful consistently over that same period, using AI heavily, and it's a lot of fun. I can work on the high-level ideas, create projects, spitball with various characters and simulations, and it's like having a team of digital minions and henchmen. There is fun to be had, and you can use AI well or poorly, so you can develop your own skills while playing with the systems.

There's still just something magical about speaking with a machine - "put the man's face from the first picture onto the cookie tin in the second picture, make sure he still looks like Santa!" You can have a vague idea or inkling about a thing, throw it at the AI, and you've got a sounding board to refine your thoughts and chase down intuitions. I totally understand the frustration people are having, but at some point you gotta put down the old tools and learn to use the new. You're only hurting yourself if you stay angry and frustrated with the new status quo.

tmcw•1h ago
Unironically this: isn't writing on paper more fun than typing? Isn't painting with real paint and canvas more satisfying than with a stylus and an iPad? Isn't it more fun to make a home-cooked meal for your family than ordering out? Who stomps into the holiday celebration and tells mom that it'd be a lot more efficient to just get catering?

Isn't there something good about being embodied and understanding a medium of expression rather than attempting to translate ideas directly into results as quickly as possible?

ianbutler•1h ago
To you maybe, to someone else maybe not. It's really hard to pin down a universal framing for existence.

My family eats out at a nice steak restaurant every Christmas; no one wants to cook. None of us like to cook.

tmcw•1h ago
Yes, exactly: I'm not saying everyone loves to paint or cook or whatever, but that a lot of people do, and it's weird and bad for the response to this kind of article, in which someone shares that they are losing something they enjoyed, to be some form of "well, not everyone enjoys that."
EA-3167•1h ago
Speaking as someone who despises writing freehand, and loves typing... what? I understand what you're trying to say, but you lost me very quickly I'm afraid. Whatever tool I use to write I'm still making every choice along the way, and that's true if I'm dictating, using a stylus to press into a clay tablet, or any other medium. An LLM is writing for me based on prompts, it's more analogous to hiring a very stupid person to write for you, and has very little to do with pens or keyboards.
gallerdude•2h ago
Today I showed Claude Code how to control my lights, and I'm having a blast.
jaggederest•2h ago
Claude Code is absurdly good at setting up and configuring Home Assistant.
manugo4•2h ago
That's bait. I've never had as much fun as a developer as I'm having now, being able to develop side projects in a matter of days.
flockonus•2h ago
I certainly don't feel like the author, but it's someone else's perspective, not "bait".
skybrian•1h ago
Maybe "scissor statement" would be more apt, at least for the headline.
nicce•2h ago
For a prototype, yes, but something production-ready requires almost the same amount of effort as it used to, if you care about good design and code quality.
lloydatkinson•2h ago
My overwhelming experience is that the sort of developers unironically using the phrase "vibe coding" are not interested in, and do not care about, good design and code quality.
encyclopedism•1h ago
Coding is merely a means to an end and not the end itself. Capitalism sees to it that a great many things are this way. Unfortunately only the results matter and not much else. I'm personally very sorry things are this way. What I can change I know not.
NitpickLawyer•1h ago
Not sure it's the gotcha you want it to be. What you said is true by definition. That is, vibe coding is defined as not caring about code. Not to be confused with LLM-assisted coding.
xnx•1h ago
I care about product quality. If "good design" and "code quality" can't be perceived in the product they don't matter.

I have no idea what the code quality is like in any of the software I use, but I can tell you all about how well they work, how easy to use they are, and how fast they run.

pritambarhate•1h ago
What is good design and code quality?

If I can keep adding new features without introducing big regressions, that is good design and good code quality. (Of course there will come a time when that is no longer possible and it will need a rewrite. Same as software created by top-paid developers from the best universities.)

As long as LLM-written code keeps new bugs to the same level as hand-written code, I think LLMs writing code is much superior, just because of the speed with which it allows us to implement features.

We write software to solve (mostly) business efficiency problems. The businesses which will solve those problems faster than their competitors will win.

bonesss•1h ago
In light of OpenAI confessing to shareholders that there's no there there (being shocked by and then adopting Anthropic's MCP, being shocked by and then adopting Anthropic's Skills, opening up a hosted dev platform to milk my awesome LLM business ideas, and now revealing that inline ads à la Google are their best idea so far to, you know, make money...), I was thinking about those LLM project statistics. Something like 5-10% of projects are seeing a nice productivity bump.

A normal distribution says some minority of IT projects are tragi-bad... I've worked with dudes who would copy and paste three different JavaScript frameworks onto the same page, as long as it worked...

Air fryers are great household tabletop appliances that help people cook extraordinary dishes, faster and easier than their ovens normally could. A true revolution. A proper chef can use one to craft amazing food. They're small and economical, awesome for students.

Chefs just call it "convection cooking" though. It's been around for a minute. Chefs also know to go hot (when and how), and can use an actual deep fryer if and when they want.

The frozen food bags here have air fryer instructions now. The Michelin star chefs are still focusing on stuff you could buy books about 50 years ago...

phito•2h ago
It really doesn't. I just ditched my WordPress/WooCommerce webshop for a custom one that I made in 3 days with Claude, in C# Blazor. It is better in every single way than my old webshop, and I have control over every aspect of it. It's totally production-ready.

The code is as good or even better than I would have written. I gave Claude the right guidelines and made sure it stayed in line. There are a bunch of playwright tests ensuring things don't break over time, and proving that things actually work.

I didn't have to mess with any of the HTML/css which is usually what makes me give up my personal projects. The result is really, really good, and I say that as someone who's been passionate about programming for about 15 years.

3 days for a complete webshop with Stripe integration, shipping labels and tracking automation, SMTP emails, admin dashboard, invoicing, CI/CD, and all the custom features that I used to dream of.

Sure, it's not a crazy innovative project, but it brings me a ton of value and liberates me from these overengineered, "generic", bulky CMSes. I don't have to pay $50 for a stupid plugin (that wouldn't really fit my needs anyway) anymore.

The future is both really exciting and scary.

gherkinnn•1h ago
I wish. I have all the rules and skill files and constraints in place and yet Claude 4.5 Sonnet continues to do strange things beyond a medium scale.

But it does save me time in many other aspects, so I can't complain.

phito•1h ago
I find that restricting it to very small modules that are clearly separated works well. It does sometimes do weird things, but I'm there to correct it with my experience.

I just wish I could have competent enough local LLMs and not rely on a company.

wahnfrieden•1h ago
The ones approaching competency cost tens of thousands in hardware to run. Even if competitive local models existed, would you spend that to run them? (And then have to upgrade every handful of years.)
phito•1h ago
Nope, I wouldn't. I wish for competent local LLMs that don't require a supercomputer at home to run. One can dream!
wahnfrieden•1h ago
Use Opus only, or use GPT 5.2 Codex High (with 5.2 Pro as oracle and for spec work)
qudat•1h ago
You can be as specific as you want with an LLM; you can literally tell it to do “clean code” or use a DI framework or whatever, and it’ll do it. Is it still work? Yes. But once you start using them, you’ll realize how much of the code you actually write is safely in the realm of boilerplate, and that the core aspect of software dev is architecture, which you don’t have to lose when instructing an agent. Most of the time I already know how I want the code to look; I just farm out the actual work to an agent and then spend a bunch of time reviewing and asking follow-up questions.

Here’s a bunch of examples: moving code around, abstracting common functionality into a function and then updating all call sites, moving files around, pattern matching off an already existing pattern in your code. Sometimes it can be fun and zen, or you’ll notice another optimization along the way... but most of the time it’s boring work an agent can do 10x faster than you.
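One of the refactors listed above - abstracting common functionality into a function and then updating all call sites - is the kind of mechanical change being described. A minimal, hypothetical Python sketch of the after-state (the names `normalize_username`, `register`, and `login` are illustrative, not from any real codebase):

```python
# Before the refactor, each call site repeated the same strip/lower/empty-check
# inline (with slight divergence). After extraction, one helper owns the logic
# and every call site delegates to it.

def normalize_username(raw: str) -> str:
    """Shared helper extracted from duplicated call-site logic."""
    name = raw.strip().lower()
    if not name:
        raise ValueError("username must not be empty")
    return name

def register(raw_username: str) -> dict:
    # Call site 1: previously had its own inlined copy of the validation.
    return {"user": normalize_username(raw_username), "status": "registered"}

def login(raw_username: str) -> dict:
    # Call site 2: previously a slightly divergent copy of the same logic.
    return {"user": normalize_username(raw_username), "status": "logged-in"}

print(register("  Alice "))  # {'user': 'alice', 'status': 'registered'}
print(login("BOB"))          # {'user': 'bob', 'status': 'logged-in'}
```

The tedium an agent saves here isn't writing the helper - it's finding and updating every call site consistently.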

encyclopedism•1h ago
> the core aspect of software dev is architecture which you don’t have to lose when instructing an agent. Most of the time I already know how I want the code to look, I just farm out the actual work to an agent and then spend a bunch of time reviewing and asking follow up questions.

This right here, in your very own comment, is the crux. Unless you're rich or run your own business, your employer (and many other employers) is right now counting down the days until they can think of YOU as boilerplate they want to farm out to an LLM. At the very least, where they currently employ 10 they are salivating about reducing it to 2.

This means painful change for a great many people. Appeals by analogy to historical changes like motorised vehicles etc. miss the QUALITATIVE change occurring this time.

Many HN users may point to Jevons paradox; I would like to point out that it may very well work up until the point that it doesn't. After all, a chicken has always seen the farmer as a benevolent provider of food, shelter and safety - until, of course, THAT day when he decides he isn't.

throw1235435•6m ago
Sadly for SWEs, I doubt Jevons paradox applies to software - or at least not in the way they hope it does. That paradox implies there are software projects sitting on the shelf with a decent return on investment (ROI) that aren't taken up because of a lack of resources (money, space, production capacity or otherwise). Unlike with physical goods, usually the only resources lacking are money and people, which means the only way for more software to be built is for lower-value projects to stack up.

AI may make low-ROI projects more viable now (e.g. internal tooling in a company, or a business website), but in general the high-ROI projects - the ones that can therefore justify high salaries - would have been done anyway.

pizzafeelsright•1h ago
Perhaps for the inexperienced or timid. Code quality is "it compiles" and design is "it performs to spec." Does properly formatted code matter when you no longer have to read it?
nicce•1h ago
> Does properly formatted code matter when you no longer have to read it?

That is exactly the moment when you cannot say anything about the code and cannot fix single line by yourself.

phito•1h ago
I don't agree. I looked at most of the code the AI wrote in my project, and I have a good idea of how it is architected because I actively planned it. If I have a bug in my orders, I know I have to go to the orders service. Then it's not much harder than reading the code my coworkers write at my day job.
nicce•1h ago
Parent comment implied that they don’t plan to read the code at all in the long term.
deergomoo•1h ago
Formatted? I guess not really, because it’s trivially easy to reformat it. But how it’s structured, the data structures and algorithms it uses, the way it models the problem space, the way it handles failures? That all matters, because ultimately the computer still has to run the code.

It may be more extreme than what you are suggesting here, but there are definitely people out there who think that code quality no longer matters. I find that viewpoint maddening. I was already of the opinion that the average quality of software is appalling, even before we start talking about generated code. Probably 99% of all CPU cycles today are wasted relative to how fast software could be.

Of course there are trade-offs: we can’t and shouldn’t all be shipping only hand-optimised machine code. But the degree to which we waste these incredible resources is slightly nauseating.

Just because something doesn’t have to be better, it doesn’t mean we shouldn’t strive to make it so.

causal•2h ago
I can sympathize with what the author is saying but I agree that "LLMs are not fun" is a pretty coarse statement that invites disagreement.
marcofloriano•2h ago
The point of the OP is not the fun. It's the craft. He's losing his craft!
encyclopedism•2h ago
The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do: things that give them meaning, many of which are tied to earning money and producing value for doing just that thing. Software/coding is one of these activities. One can do coding for fun, but doing the same coding where it provides value to others/society and financial upkeep for you and your family is far more meaningful.

For those who have swallowed the AI panacea hook, line and sinker, those who say it's made me more productive, or that I no longer have to do the boring bits and can focus on the interesting parts of coding: I say follow your own line of reasoning through. It demonstrates that AI is not yet powerful enough to NOT need to empower you, to NOT need to make you more productive. You're only ALLOWED to do the 'interesting' parts presently because the AI is deficient. Ultimately AI aims to remove the need for any human intermediary altogether. Everything in between is just a stop along the way, so for those it empowers: stop and think a little about the long-term implications. It may be that for you right now it is a comfortable position financially or socially, but your future you, just a few short months from now, may be dramatically impacted.

As someone said "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".

I can well imagine the blood draining from people's faces: the graduate coder who can no longer get on the job ladder, the law secretary whose dream job is being automated away, a dream dreamt from a young age, the journalist whose value has been substituted by a white text box connected to an AI model.

I don't have any ideas as to what should be done or, more importantly, what can be done. Pandora's box has been opened; Humpty Dumpty has fallen and can't be put back together again. AI feels like it has crossed the Rubicon. We must all collectively wait to see where the dust settles.

benlivengood•1h ago
In the long run I think it's pretty unhealthy to make one's career a large part of one's identity. What happens during burnout or retirement or being laid off if a huge portion of one's self depends on career work?

Economically it's been a mistake to let wealth get stratified so unequally; we should have and need to reintroduce high progressive tax rates on income and potentially implement wealth taxes to reduce the necessity of guessing a high-paying career over 5 years in advance. That simply won't be possible to do accurately with coming automation. But it is possible to grow social safety nets and decrease wealth disparity so that pursuing any marginally productive career is sufficient.

Practically, once automation begins producing more value than 25% or so of human workers we'll have to transition to a collective ownership model and either pay dividends directly out of widget production, grant futures on the same with subsidized transport, or UBI. I tend to prefer a distribution-of-production model because it eliminates a lot of the rent-seeking risk of UBI; your landlord is not going to want 2X the number of burgers and couches you get distributed as they'd happily double rent in dollars.

Once full automation hits (if it ever does; I can see augmented humans still producing up to 50% of GDP indefinitely [so far as anyone can predict anything past human-level intelligence] especially in healthcare/wellness) it's obvious that some kind of direct goods distribution is the only reasonable outcome; markets will still exist on top of this but they'll basically be optional participation for people who want to do that.

encyclopedism•1h ago
I agree with much of what you say.

Career being the core of one's identity is so ingrained in society. Think about how schooling is directed towards producing what 'industry' needs. Education for education's sake isn't a thing. Capitalism sees to this and ensures so many avenues are closed to people.

Perhaps this will change but I fear it will be a painful transition to other modes of thinking and forming society.

Another problem is hoarding. Wealth inequality is one thing, but the unadulterated hoarding by the very wealthy means that wealth is unable to circulate as freely as it ought to. This burdens a society.

b40d-48b2-979e•59m ago

    Education for educations sake isn't a thing.
It is, but only for select members of society. Off the top of my head: those with benefit programs that fund the opportunity, like 100% disabled veterans, or the wealthy and their families.
throw1235435•14m ago
If we had done what you say, more to the point, I don't know if AI would have progressed as it has: companies would have been more selective with their investment money, and previously AI was seen at best as a long-shot bet. Most companies in the "real economy" can't afford to make too many of these kinds of bets in general.

The main reason for the transformer architecture, and many other AI advancements, really was that "big tech" has lots of cash it doesn't know what to do with. The US system also seems to punish dividends tax-wise, so companies are incentivized to become like VCs: buy lots of opportunities hoping one makes it big, even if many end up losing.

ravenstine•2h ago
I'm not sure I'm having more fun, at least not yet, since for me the availability of LLMs takes away some of the pleasure of needing to use only my intellect to get something working. On the other hand, yes, it is nice to be able to have Copilot work away on a thing for my side project while I'm still focused on my day job. The tradeoff is definitely worth it, though I'm undecided on whether I am legitimately enjoying the entire process more than I used to.
verdverm•2h ago
You don't have to use LLMs the whole time. For example, I've gotten a lot done with AI and still had the time over the holidays to spend on a long-time side project... organically coding the big fun thing

Replacing Dockerfiles and Compose with CUE and Dagger

dboreham•1h ago
I don't do side projects, but the LLM has completely changed the calculus about whether some piece of programming is worthwhile doing at all. I've been enjoying myself automating all sorts of admin/ops stuff that hitherto got done manually because there was never a clear 1/2 day of time to sit down and write the script. Claude does it while I'm deleting email or making coffee.
ZpJuUuNaQ5•1h ago
>That's bait.

For you, maybe. In my experience, the constant need to babysit LLMs to avoid the generation of verbose, unmaintainable slop is exhausting, and I'd rather do everything myself. Even with all the meticulously detailed instructions, it feels like a slot machine: sometimes you get lucky and the generated code is somewhat usable. Of course, it also depends on the complexity and scope of the project and/or the tasks that you are automating.

cortesoft•57m ago
It is clearly an emotional question. My comment on here saying I enjoyed programming with an LLM has received a bunch of downvotes, even though I don't think the comment was derogatory towards anyone who feels differently.

People seem to have a visceral reaction towards AI, where it angers them enough that even the idea that people might like it upsets them.

Areibman•2h ago
I've found criticism like this comes from people who feel as if LLMs pose a threat to their intelligence.
etaioinshrdlu•2h ago
Like others here, I disagree completely. I find them very fun, almost too fun, like intellectual crack. The craziest ideas are now within reach.
theteapot•1h ago
Cool, I get to call my Youtube, TikTok addiction my "intellectual crack" now. Only fair.
sho_hn•2h ago
I find posts like this very brave and courageous, and it makes me feel a lot of respect for the author and their personal integrity.

There's currently an enormous pressure on developers to pay lip service to loving AI tools. Expressing a differing opinion easily gets someone piled on for being outdated or not understanding things, from people who sometimes mainly do it to virtue-signal and perform their own branding exercise.

Open self-expression takes guts, and is hard to substitute for with AI assistance.

lloydatkinson•2h ago
I agree, writing anything bad about LLMs is more or less antithetical to the current hype. At least last time, during the crypto/NFT slop trend, HN was not on board with it.
baq•1h ago
I’m not writing good things about LLMs because I’m busy thinking about what to build next, since things get built faster than I can come up with ideas.

It’s amazing and scary. I was wondering what takeoff would look like, and now I’m living it, for better or worse.

oconnor663•1h ago
Ehhhhhh hating on AI is also extremely popular on social media.
encyclopedism•1h ago
Hear, hear. I totally agree with both the author’s sentiments and your comments.
wayy•1h ago
It seemed to me the author was simply sharing his own lived experience, which happens to be a bit contrarian to the popular hype around LLMs. It may seem courageous to some, but I can see a world where the author didn’t think twice about writing down his thoughts in 15 minutes and publishing them on his own personal site. Perhaps it comes naturally to people who have been around this industry longer.
Alex2037•1h ago
bro, parroting "AI bad" on any social media full of the terminally online folx gets you free updoots.
bugglebeetle•2h ago
> For me, the joy of programming is understanding a problem in full depth, so that when considering a change, I can follow the ripples through the connected components of the system.

>The joy of management is seeing my colleagues learn and excel, carving their own paths as they grow. Watching them rise to new challenges. As they grow, I learn from their growth; mentoring benefits the mentor alongside the mentee.

I fail to grasp how using LLMs precludes either of these things. If anything, doing so allows me to more quickly navigate and understand codebases. I can immediately ask questions or check my assumptions against anything I encounter.

Likewise, I don’t find myself doing less mentorship, but focusing that on higher-level guidance. It’s great that, for example, I can tell a junior to use Claude to explore X,Y, or Z design pattern and they can get their own questions answered beyond the limited scope of my time. I remember seniors being dicks to me in my early career because they were overworked or thought my questions were beneath them. Now, no one really has to encounter stuff like that if they don’t want to.

I’m not even the most AI-pilled person I know or on my team, but it just seems so staggeringly obvious how much of a force multiplier this stuff has become over the last 3-6 months.

encyclopedism•2h ago
As I've commented already...

The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do: things that give them meaning, many of which are tied to earning money and producing value for doing just that thing. Software/coding is one of these activities. One can do coding for fun, but doing the same coding where it provides value to others/society and financial upkeep for you and your family is far more meaningful.

If that is what you've been doing, coding for the love of it, I can well empathise with how the world is changing underneath your feet.

LogicFailsMe•2h ago
If you understand their limitations, they are quite helpful and fun already. If you expect them to be what the tech bros who can't code anything(tm) say they are, not so much. But I do expect them to improve, because the market opportunity for getting anywhere close to the grandiose hype is huge. What isn't fun is the clueless C-suite force-feeding them down the chain in hopes of a Hail Mary pass to profits.

Edit: I know, I know, blink 3 times to signal SOS. I clearly only wrote the above under duress and threats from my managers. There's simply nothing fun about interacting with an entity that would be the stuff of science fiction just 5 years ago, no sir!

minimaxir•2h ago
How people derive utility from programming varies from person to person and I suspect is the root cause of most AI generation pipeline debates, creative and code-wise. There are two camps that are surprisingly mutually exclusive:

a) People who gain value from the process of creating content.

b) People who gain value from the end result itself.

I personally am more of a (b): I did my time learning how to create things with code, but when I create things such as open-source software that people depend on, my personal satisfaction from the process of developing is less relevant. Getting frustrated with code configuration and writing boilerplate code is not personally gratifying.

Recently, I have been experimenting more with Claude Code and 4.5 Opus and have had substantially more fun creating utterly bizarre projects that I suspect would have been more frustration than fun to implement the normal way. It still requires brainpower to QA, identify problems, and identify potential fixes: it's not all vibes. The code quality, despite intuition, has none of the issues or bad code smells expected of LLM-generated code, and with my approach it actually runs substantially more performantly. (I'll do a full writeup at some point)

mgaunard•2h ago
LLMs are great if you don't care about every little detail being correct nor having control of how everything works so that you can change it whenever the situation warrants it.

Turns out that a lot of code is fine with this. Some parts of the industry still have more stringent standards however.

catigula•2h ago
Nope. It's still faster to just prompt Claude and read all of the output, I'm sorry.
greggoB•1h ago
That's a very broad statement, explicitly covering all types of code and all kinds of coders. Are you really confident enough to make such an assertion?
catigula•59m ago
Yes. It's not just me: I'm a professional staff engineer, great.

Andrej Karpathy is one of the best engineers in the country, George Hotz is one of the best engineers in the country, etc.

greggoB•25m ago
My question was more rhetorical, the point being that it's very bold (read: foolish) to make a claim that extends well outside of any conceivable amount of experience you may have accrued or ability to know how others may operate.

> Andrej Karpathy is one of the best engineers in the country, George Hotz is one of the best engineers in the country, etc.

You have citations of them explicitly making this claim on behalf of all SWEs in all domains/langs? I'd find that surprising, if so.

verdverm•2h ago
> if you don't care about every little detail being correct nor having control of how everything works

This is the same situation we were in decades ago, just before AI, and still are.

AI changes nothing about this statement; humans do not write perfect code.

qoez•2h ago
Ultimately capitalism doesn't care if a job is fun or not; the vast majority aren't, I've realized. It's an odd coincidence that coding in flow is hugely enjoyable, but it seems that amazing job will, at this rate, be a momentary bit of history where profit-making and fun had a non-zero intersection for a strange reason.
toenail•2h ago
I mostly use them for stuff I would never get done otherwise, or for prototyping. I have lots of fun.
devhouse•2h ago
There's a version of this that's about control vs. collaboration. Compilers do what you say. Teammates grow with you. LLMs do neither. They're confident strangers who (sometimes) get it right. That's a new category of relationship, and maybe we don't have the emotional toolkit for it yet?
kalterdev•2h ago
LLMs enable me to extend the limited number of my individual thoughts. Sometimes they help me connect the dots faster. The only condition is: I do the main job. The final choice is always mine. I never let an LLM be a black-box independent agent.
cortesoft•2h ago
I have been programming for over 30 years now, and I have been re-energized by using LLMs for programming. I am having SO MUCH FUN building things with AI.

For me, the fun part of programming is having the freedom to get my computer to do whatever I want. If I can't find a tool to do something, I can write it myself! That has been a magical feeling since I first discovered it all those years ago.

LLMs give me the ability to do even more of the things I want, faster. I can conceptualize what I want to create, specify the details as much as I want, and then use an LLM to make it happen.

It is truly magical. I feel like I am programming in Star Trek, with the computer as an ally instead of as a receptacle for my code.

itsthecourier•2h ago
it's fun to become a necromancer

I have become a general and a master of a multitude of skeleton agents. My attention turns to the realm of effectively managing the unreproducible results of running the same incantations.

Like the sailor in the waters of a coastline he has roamed plenty of times: the currents are there, yet the waves are new every day.

Whatever limitation is removed, I should approach the market and test my creations swiftly and enrich myself, before the first legion of lich kings appears. They will be better masters than I would ever be.

macinjosh•1h ago
I see post after post like this on different parts of the web. What is always clear to me is that these authors feel threatened, having put too much of their personal identity and self-esteem into knowing the "secret tongue" of the machines. They are generally introverted and write software for themselves and people like them, not for normal folks.

People like this have a great deal to lose personally from LLMs. It makes them substantially less "special". Or so they think, but it is actually not true at all.

I think some of them resent having to level up again to stay relevant, like when a video game adds more levels to a game you thought you had already beaten. Fair enough, but such is life and natural competition.

When they come at LLMs with this attitude (gritting their teeth while prompting) it is no wonder they are grossly offended and disgusted by its outputs.

I've been tempted at times to hold these attitudes myself, but my approach for now is to see how much I can learn about this tool and use it for as much as I can while tokens are subsidized. Either it all pops with the bubble, or I have gained new, marketable skills. And no, your hand-coding skills don't just evaporate. In fact, I now have a newfound love of hand coding as a hobby, since that part of my brain is no longer used up by the end of the day with coding tasks for work.

EugeneOZ•1h ago
> Using LLMs undercuts both

Absolutely disagree. I use LLMs to speed up the process and ONLY accept code that I would write myself.

butler14•1h ago
exactly

end of the day, guys like the author, for better or worse, are going to be replaced by the next generation of developers who don't care for the 'aesthetics' in the same way

chrisfosterelli•1h ago
I was recently talking to a colleague I went to school with and they said the same thing, but for a different reason. We both did grad studies with a focus on ML, and at the time ML as a field seemed to be moving so fast. There was a lot of excitement around AI again, finally, after the 'AI winter'. It was easy to participate in bringing something new to the field, and there were so many unique and interesting models coming about every day. There was genuine discussion about a viable path to AGI.

Now, basically every new "AI" feature feels like a hack on top of yet another LLM. And sure the LLMs seem to keep getting marginally better, but the only people with the resources to actually work on new ones anymore are large corporate labs that hide their results behind corporate facades and give us mere mortals an API at best. The days of coding a unique ML algorithm for a domain specific problem are pretty much gone -- the only thing people pay attention to is shoving your domain specific problem into an LLM-shaped box. Even the original "AI godfathers" seem mostly disinterested in LLMs these days, and most people in ML seem dubious that simply scaling up LLMs more and more will be a likely path to AGI.

It seems like there's more excitement around AI for the average person, which is probably a good thing I suppose, but for a lot of people that were into the field they're not really that fun anymore.

In terms of programming, I think they can be pretty fun for side projects. The sort of thing you wouldn't have had time to do otherwise. For the sort of thing you know you need to do anyway and need to do well, I notice that senior engineers spend more time babysitting them than benefitting from them. LLMs are good at the mechanics of code and struggle with the architecture / design / big picture. Seniors don't really think much about the mechanics of code, it's almost second nature, so they don't seem to benefit as much there. Juniors seem to get a lot more benefit because the mechanics of the code can be a struggle for them.

porker•1h ago
> Now, basically every new "AI" feature feels like a hack on top of yet another LLM.

LLM user here with no experience of ML besides fine-tuning existing models for image classification.

What are the exciting AI fields outside of LLMs? Are there pending breakthroughs that could change the field? Does it look like LLMs are a local maximum, with other approaches winning through, even just for other areas?

Personally I'm looking forward to someone solving 3D model generation as I suck at CAD but would 3D print stuff if I didn't have to draw it. And better image segmentation/classification models. There's gotta be other stuff that LLMs aren't the answer to?

jiggawatts•55m ago
It’s now moving faster than ever. Huge strides have been made in interpretability, multi modality, and especially the theoretical understanding of how training interacts with high dimensional spaces. E.g.: https://transformer-circuits.pub/2022/toy_model/index.html
petesergeant•1h ago
> For me, the joy of programming is understanding a problem in full depth, so that when considering a change, I can follow the ripples through the connected components of the system … using LLMs undercuts [that]

If you’re letting the LLM do things you aren’t spending the time to understand in depth, you are shirking your professional responsibilities

niorad•1h ago
100% this! The main fun in development for me is typing and getting something to run, and seeing the feature finally work after figuring out how to get to it. The finished product is almost irrelevant to that. LLMs steal that feeling of achievement, like rushing through a book of sudoku with a solver.
Insanity•1h ago
I strongly agree with the author to be honest. I can see the various perspectives in the comments here. In my view, some people care about shipping products (i.e, seeing their idea come to life). Some people enjoy solving the problems more than the shipping.

I'm in the second camp, and I think the author is as well. For those of us, LLMs are kind of boring.

maybewhenthesun•1h ago
I wholeheartedly agree. I'm not saying LLMs are 'bad'. I'm not saying they are not useful. But to me personally they take out the fun parts from my profession.

My role changes from coming up with solutions to babysitting a robotic intern. Not 100% of course. And of course an agent can be useful like 'intellisense on steroids'. Or an assistant who 'ripgreps' for me. There are advantages for sure. But for me the advantages don't match the disadvantages. LLMs take the heart out of what made me like programming: building stuff yourself with your near infinite lego box of parts and coming up with ideas yourself.

I'm only half convinced that LLMs will become as important to coding as they seem, and I'm hoping a sane balance will emerge at the other end of the hype. But if it goes where OpenAI et al. want it to go, I think I'll have to re-school to become an electrician or something...

dnautics•1h ago
> LLMs take the heart out of what made me like programming: building stuff yourself with your near infinite lego box of parts and coming up with ideas yourself.

I feel like that's all I'm doing with LLMs. Just in the last hour I realized that I wanted an indexed string intern pool instead of passing string literals. The LLM refactored everything, and then I didn't have to worry about that lego piece anymore.
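For readers unfamiliar with the term, here is a minimal sketch of the idea (purely illustrative; the commenter's actual code isn't shown). An intern pool maps each distinct string to a small integer index, so the rest of the program can pass and compare cheap indices instead of string literals:

```javascript
// Minimal string intern pool: each distinct string gets one stable index.
class InternPool {
  constructor() {
    this.indexOf = new Map(); // string -> index
    this.strings = [];        // index -> string
  }
  intern(s) {
    // Return the existing index for s, or assign the next free one.
    if (!this.indexOf.has(s)) {
      this.indexOf.set(s, this.strings.length);
      this.strings.push(s);
    }
    return this.indexOf.get(s);
  }
  lookup(idx) {
    return this.strings[idx];
  }
}

const pool = new InternPool();
const a = pool.intern("orders");
const b = pool.intern("orders"); // same string -> same index
const c = pool.intern("users");
console.log(a === b, a !== c);   // true true
console.log(pool.lookup(c));     // "users"
```

Comparing two indices is a single integer comparison, which is the "lego piece" the refactor buys you: the rest of the code never touches the literal again.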

throw1235435•19m ago
As I mentioned in another comment, they smell blood in our profession, and as entities dependent on investor/VC/seed money rounds, they want it. There's a reason every new model that comes out has a blog post claiming "best at coding", often in the main headline. It's also a target that people outside of tech don't really care about, IMO, unlike, for example, art and writing.

Tbh, if it wasn't for the coding disruption, I don't think the AI boom would have really been that hyped up.

mythrwy•1h ago
Maybe not fun but effective is even better.
kwar13•1h ago
I don't quite agree. LLMs have been fun for me in the sense that they have enabled me to explore topics I wouldn't quite be familiar with and that would take too much effort for me to actually explore. For instance, putting a "restore last session" feature (restoring tabs) in my daily tabbed file explorer, Nemo. I wouldn't have cared enough to fork the repo and add this for myself if I didn't have an LLM to guide me through the parts that I would need to edit. It's too large of a code base for me to go digging through.
mrbonner•1h ago
I wonder if LLMs and Gen AI represent a shift similar to the invention of the tractor a century ago. Initially, both technologies struggled to find their most effective applications, despite grand promises of transformative productivity. It took tractors several decades to become truly mainstream. Perhaps the same pattern will unfold with LLMs and AI. The difference this time is that companies are investing enormous amounts of capital in preparation for that inevitable moment of widespread adoption.
vtemian•1h ago
> For me, the joy of programming is understanding a problem in full depth, so that when considering a change, I can follow the ripples through the connected components of the system.

100%. The fun is in understanding, creating, explaining. It's not in typing, boilerplating, fixing missing imports, API mismatches, etc.

wwarner•1h ago
I reluctantly agree. It’s like ebikes — yes it’s great that I don’t have to pedal up hill, but on the other hand the cyclists that did it the hard way deserved the praise and glory for their achievement while weak and distracted ebikers definitely do not.
bvan•1h ago
Agree. Just like writing with pen and paper facilitates the thought process, so does coding. Typing out code facilitates logical thought and forces you to mind the details. Not to mention the inherent learning process.

Hand-holding an LLM cheats me of all these things, along with the uneasy feeling there is unexplored ordnance in there somewhere which will eventually go boom.

To each his or her own.

thunkle•1h ago
Doesn't matter what we want or how we feel. Product, the C-suite, and customers just want software as fast and as cheap as possible. They don't care about the code or the craft. If that's the case, then we have to use AI if we want to stay marketable.

I wonder if customers even appreciate the organic artisanal labels that some sites are putting up e.g. https://play.date/games/diora/

shitter•1h ago
I share the author's perspective that LLMs are not fun for programming. I don't use them to generate code, save for small snippets to demonstrate some concept or do something rote that I wouldn't enjoy writing myself.

However - and maybe I'm just an easily entertained simpleton - I find them really fun for exploring those random, not trivially Google-able questions that pop into my head on a daily basis, technical and otherwise. Most of my chats with ChatGPT begin with questions of this form. I keep my critical thinking cap on during the dialogue and always verify the output if it's to be used for anything serious, but I'd be lying if I said I didn't enjoy the process.

notepad0x90•1h ago
Can't say I agree strongly, but I get OOP's frustration.

It's just a tool, misuse of the tool can very much not be fun. When it's forced on you, most things tend to not be fun.

But I am having lots of fun with LLMs, both their applications and their assistance with coding. What used to be a frustrating scour of the internet for solutions and examples is now a question to an LLM away. The not-so-fun things I used to dread, I'm letting the LLM tackle. Most of the code I write could probably be written by an LLM, but I am choosing to write the code specifically because it is fun, and because maintaining LLM-generated code is not so fun.

I think this is a case of people taking extremes. Extremes are usually not a good thing. Don't over use or over depend on LLMs, but use them with moderation, letting them do things they shine at. Don't create solutions that are looking for problems (with any tool, not just LLMs). Don't fall for the deceptive traps of nostalgia, or be stuck in "back in my day".

It goes both ways too, don't tell someone used to vim and nano to start using cursor.sh!

I like driving sports cars in the desert, very fun. I hate driving anything in traffic. It's all about context.

The internet can be very toxic at times, and very user-hostile at others. It can also be great. I like HN for example, as I'm sure many of you do. I don't like visiting gizmodo or some ad-trodden site, or toxic sub-reddits. LLMs are similar, there are and will be terrible LLM usages (truly, think of slaughterbots and LLMs being used by autonomous attack drones), but also fun and great usages.

johnfn•1h ago
I find this interesting for two reasons.

1. LLMs inspire and clearly strike a certain type of chord in people that other technology does not. For instance, can you imagine a post called "Rust Is Not Fun" at the top of HN? Or even replace "Rust" with technology that has some fans and some haters, like "PHP Is Not Fun". Can you ever imagine that finding equivalent traction? Why would you even write a post called "Skateboarding Is Not Fun"? I just did a search across HN and the only other thing I saw being called not fun that actually got traction was... Twitter.

2. The post makes two points about why LLMs are wrong, and I (as someone who gets a lot of mileage out of LLMs) pretty strongly disagree with both.

a) You can't get better at using LLMs ("Nurturing the personal growth of an LLM is an obvious waste of time")

This seems almost objectively false to me? I have gotten substantially better at prompting and using LLMs after about 6 months of daily use. I think at a higher level of abstraction with LLMs than I would when working directly with code, and this is a different type of thinking that requires effort and practice to develop. An LLM doesn't solve all problems you could ever have, it just allows you to think at a higher level of abstraction.

Previously I might convert one 300 LoC file from JS to TS (or whatever) and call it a day. Now, in the same amount of time, I might do 10. But obviously just asking the LLM to commit that doesn't cut it because I might have broken something, so I need to think of some way to get the LLM to verify that I haven't broken anything. Maybe I'd first get it to build some unit tests to my specifications, or a couple of linter rules, or something else tailored to the problem at hand. This is the "thinking at a higher level of abstraction", and I tend to find it an interesting puzzle. It's a very different type of thinking than the "how am I going to refactor this single file", but it ends up being pretty enjoyable.
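The "get the LLM to verify it hasn't broken anything" step usually amounts to characterization tests: pin the old behavior down with input/output pairs before the rewrite, then require the rewrite to reproduce them. A minimal sketch of the idea; `slugify` and its test cases are invented for illustration, not taken from anyone's actual codebase:

```typescript
// A hypothetical function being ported from JS to TS. Before letting an
// LLM rewrite it, capture its current behavior as a table of cases.
function slugify(input: string): string {
  return input
    .trim()
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// Input/output pairs recorded from the original implementation; any
// LLM-produced refactor must reproduce these exactly.
const cases: Array<[string, string]> = [
  ["Hello, World!", "hello-world"],
  ["  spaced  out  ", "spaced-out"],
  ["already-slugged", "already-slugged"],
];

for (const [input, expected] of cases) {
  const got = slugify(input);
  if (got !== expected) {
    throw new Error(
      `slugify(${JSON.stringify(input)}) = ${got}, want ${expected}`,
    );
  }
}
console.log("all characterization cases pass");
```

The point is less the harness itself than who writes it: asking the LLM to generate the case table first, reviewing it yourself, and only then letting it refactor turns "trust me" into something checkable.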

b) "For me, the joy of programming is understanding a problem in full depth"

I suspect that this is why a lot of people are frustrated with LLMs. I empathize here more than with part a). I mean, sure, I like digging into complex systems - it's fun and rewarding. But... and I feel this will be controversial, but I feel like this isn't really the most meaningful part of coding. Isn't the "complex problem solving" mostly a side-effect of the thing which is really the important thing (and the thing I like much more), which is delivering software that people enjoy and use? And while LLMs do a good bit of heavy lifting on the first, I don't find that they are capable at all of solving the second - which means that coding is just as fun for me as ever. In fact, probably more so, since I feel like I can iterate and build ideas faster for users.

sjsdaiuasgdia•1h ago
> But... and I feel this will be controversial, but I feel like this isn't really the most meaningful part of coding. Isn't the "complex problem solving" mostly a side-effect of the thing which is really the important thing (and the thing I like much more)

Different people can find enjoyment and meaning in different parts of work, even if they do the same kind of work.

You enjoy delivering products to people. Other people might feel more enjoyment from the problem solving and understanding the system end to end.

It's not controversial for people to have different preferences. What might be controversial is saying that your preference is more correct / somehow "better" than someone else's preference.

Bukhmanizer•1h ago
As a long-time complainer about AI, I disagree: LLMs are very fun. For me, they're maybe the coolest technology to come out in the last 20 years.

What’s not fun is the corporatization of AI. Being forced to use it even when it doesn’t make sense. Every project having to shove AI into it to get buy-in.

alexpadula•1h ago
Is this supposed to be rage bait? All jokes aside, I’ve been programming for 16+ years and I’ve been absolutely obsessed with it since I was a child. Programming today is amazing; I wouldn’t want to go back to VS6 in modern times, it would feel like going backwards. Work with the times. Your pride is strong!
alansaber•1h ago
Imo a lot of the "this isn't fun" comes from minmaxing
brightbed•1h ago
I feel the opposite. Building my first cli dev tool with Claude brought back the joy of software for me, joy that had been eroded by the grind of the software industry. Claude helped me solve real problems that I didn’t otherwise have time to solve (so much typing) and I really enjoyed having this tool I had been dreaming of come to life.
legitster•1h ago
As someone who loves analogue things, I think you could repeat this for basically every pursuit humans take up. The things people care about are cheapened by convenience, and then they mock those who still care about manual transmissions or mechanical watches or what have you.

I think LLMs are fun. They don't get rid of the problem solving or troubleshooting or decision making. If anything, they completely resparked the hacker ethos in me. I got my start as an idiot "script kiddie", so I am used to bodging together things with code I only minimally understand.

There are so many new things I am trying and getting done that I feel like I am only limited by my creativity and my tolerance for risk.

turzmo•1h ago
Strongly agree. I've given up a job handcrafting chairs to be a foreman at the chair factory. Yes, more chairs are produced. That's not what I care about.

Coding for me was always about the understanding and craftsmanship. The associated output and pay came as an adult, but that was never the point.

amortka•1h ago
There’s a third axis here besides “process vs result”: feedback loop latency. Hand-coding keeps the loop tight (think → type → run → learn), which is where a lot of the craft/joy lives. LLMs can either compress that loop (generate boilerplate/tests, unblock yak-shaves) or stretch it into “read 200 LOC of plausible code, then debug the one wrong assumption,” which feels like doing code review for an intern who doesn’t learn. The sweet spot for me has been using them to increase iteration speed while keeping myself on the hook for invariants (tests, types, small diffs), otherwise you’re just trading typing for auditing.
ThierryBuilds•1h ago
Balancing developer satisfaction with raw productivity is a critical trade-off. While the 'joy of coding' maintains long-term engagement, LLMs provide a necessary lift in throughput. I prefer a surgical approach: disabling LLMs for core logic to avoid 'auto-pilot' bias, while utilizing them for the high-friction work of documentation and unit testing.
porcoda•1h ago
I do wish when people say happy vs sad with LLMs for code they’d qualify it with what kind of code they’re talking about. I can totally see a web dev being super happy grinding out JS code and someone doing scientific computing being less happy even though they’re using the same tools. Without understanding what people are using it for, what their expectations are with respect to correctness, completeness, and performance, these discussions just turn into the same back and forth of people arguing that the other person is wrong and talking past each other. I think people on this site forget the diverse contexts where people use computers, the different backgrounds we all have, and our different expectations for what we work on.
vtemian•1h ago
why was this post removed? it was #1
Tarucho•1h ago
I don't understand. Most answers say they want to program, but that they don't want to type, compile, debug, add files to the project, refactor, etc. Well, that's programming.

Asking a prompt to do something is asking a prompt to do something.

In my case, I fear the day will come when I can no longer program and have to give orders to a prompt instead.

travisgriggs•1h ago
Largely agree. Thoreau said that for every thousand hacking at the branches of evil, there is one striking at the root.

Web programming is not fun. Years ago, a colleague who had pivoted in the early years said "Web rots your brain" (we had done some cool work together in real time optical food sorting).

I know it (web programming) gives a lot of people meaning, purpose, and a paycheck: becoming a specialist in an arcane art that is otherwise unplumbable by others. First it was just programming in general. But it has bifurcated into back end, front end, db, distributed, devops, meta, api, etc. The number of programmers I meet nowadays who are at startups that eventually "pivot" to making tools for the tool wielders is impressive (e.g. "we tried to make something for the general public, but that didn't stick, but on the way we learned how to make a certain kind of pickaxe, and we're really hoping we can get some institutional set of axe wielders at a big digging corporation to buy into what we're offering"). Instead of "Software is eating the world", the real story these days may be "Software is eating itself".

Mired in a mountain of complexity we've created over years of "throw it at the wall and ship what sticks", we're now doubling down on "stochastic programming". We're literally, mathematically, embracing "this probab[i]l[it]y works". The usefulness and appeal of LLMs is an indictment and a symptom. Not a cause.

tomwphillips•53m ago
I like this analysis.

I'm constantly surprised by developers who like LLMs because "it's great for boilerplate". Why on earth were you wasting your time writing boilerplate before? These people are supposed to be programmers. Write code to generate the boilerplate, or abstract it away.

I suppose the path of least resistance is to ignore the complexity and let the LLM deal with it, instead of stepping back and questioning why the complexity is even there.
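The "write code to generate the boilerplate" point can be made concrete with a tiny code generator; the entity names and the stub shape below are hypothetical, purely for illustration:

```typescript
// A minimal boilerplate generator: given entity names, emit typed
// repository stubs. In a real build step you'd write `generated` to
// files instead of printing it.
const entities = ["User", "Invoice", "Project"];

function repositoryStub(name: string): string {
  return [
    `export interface ${name} { id: string }`,
    `export function get${name}(id: string): ${name} {`,
    `  return { id };`,
    `}`,
    ``,
  ].join("\n");
}

const generated = entities.map(repositoryStub).join("\n");
console.log(generated);
```

Thirty lines like this replace an open-ended stream of LLM requests with a deterministic, reviewable artifact: add an entity to the list, rerun, done.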

lukaslalinsky•1h ago
The most fun aspect of programming for me is designing something unique, whether it's an algorithm to some niche problem, or a simple API over a complex system. LLMs help me express my ideas in terms of specific code, sometimes it's just a prototype, sometimes the final product. I don't enjoy the coding part, I enjoy the thinking and designing part.