FSF announces Librephone project

https://www.fsf.org/news/librephone-project
809•g-b-r•8h ago•305 comments

Pixnapping Attack

https://www.pixnapping.com/
72•kevcampb•2h ago•7 comments

Show HN: Firm, a text-based work management system

https://github.com/42futures/firm
16•danielrothmann•1h ago•6 comments

Beliefs that are true for regular software but false when applied to AI

https://boydkane.com/essays/boss
371•beyarkay•13h ago•277 comments

Nvidia DGX Spark: great hardware, early days for the ecosystem

https://simonwillison.net/2025/Oct/14/nvidia-dgx-spark/
106•GavinAnderegg•7h ago•39 comments

A modern approach to preventing CSRF in Go

https://www.alexedwards.net/blog/preventing-csrf-in-go
65•todsacerdoti•16h ago•17 comments

DOJ seizes $15B in Bitcoin from 'pig butchering' scam based in Cambodia

https://www.cnbc.com/2025/10/14/bitcoin-doj-chen-zhi-pig-butchering-scam.html
119•pseudolus•17h ago•96 comments

Interviewing Intel's Chief Architect of x86 Cores

https://chipsandcheese.com/p/interviewing-intels-chief-architect
79•ryandotsmith•5d ago•7 comments

How bad can a $2.97 ADC be?

https://excamera.substack.com/p/how-bad-can-a-297-adc-be
227•jamesbowman•15h ago•123 comments

Unpacking Cloudflare Workers CPU Performance Benchmarks

https://blog.cloudflare.com/unpacking-cloudflare-workers-cpu-performance-benchmarks/
219•makepanic•12h ago•34 comments

How AI hears accents: An audible visualization of accent clusters

https://accent-explorer.boldvoice.com/
211•ilyausorov•16h ago•89 comments

Hacking the Humane AI Pin

https://writings.agg.im/posts/hacking_ai_pin/
126•agg23•6d ago•31 comments

Can we know whether a profiler is accurate?

https://stefan-marr.de/2025/10/can-we-know-whether-a-profiler-is-accurate/
35•todsacerdoti•6h ago•8 comments

SmolBSD – build your own minimal BSD system

https://smolbsd.org
191•birdculture•14h ago•17 comments

A 12,000-year-old obelisk with a human face was found in Karahan Tepe

https://www.trthaber.com/foto-galeri/karahantepede-12-bin-yil-oncesine-ait-insan-yuzlu-dikili-tas...
322•fatihpense•1w ago•142 comments

Astronomers 'image' a mysterious dark object in the distant Universe

https://www.mpg.de/25518363/1007-asph-astronomers-image-a-mysterious-dark-object-in-the-distant-u...
220•b2ccb2•17h ago•123 comments

Show HN: Greenonion.ai – AI-Powered Design Assistant

https://exuberant-premise-723012.framer.app/
31•yanjiechg•1w ago•21 comments

How to turn liquid glass into a solid interface

https://tidbits.com/2025/10/09/how-to-turn-liquid-glass-into-a-solid-interface/
146•tambourine_man•12h ago•99 comments

Intel Announces Inference-Optimized Xe3P Graphics Card with 160GB VRAM

https://www.phoronix.com/review/intel-crescent-island
97•wrigby•13h ago•71 comments

CSS for Styling a Markdown Post

https://webdev.bryanhogan.com/miscellaneous/styling-markdown/
41•bryanhogan•1w ago•9 comments

Python's splitlines does more than just newlines

https://yossarian.net/til/post/python-s-splitlines-does-a-lot-more-than-just-newlines/
12•woodruffw•6d ago•2 comments

What Americans die from vs. what the news reports on

https://ourworldindata.org/does-the-news-reflect-what-we-die-from
537•alphabetatango•13h ago•309 comments

Surveillance data challenges what we thought we knew about location tracking

https://www.lighthousereports.com/investigation/surveillance-secrets/
386•_tk_•11h ago•93 comments

Printing Petscii Faster

https://retrogamecoders.com/printing-petscii-faster/
23•ibobev•4d ago•6 comments

I am a programmer, not a rubber-stamp that approves Copilot generated code

https://prahladyeri.github.io/blog/2025/10/i-am-a-programmer.html
157•pyeri•3h ago•176 comments

Beating the L1 cache with value speculation (2021)

https://mazzo.li/posts/value-speculation.html
34•shoo•4d ago•8 comments

Why Is SQLite Coded In C

https://www.sqlite.org/whyc.html
205•plainOldText•11h ago•217 comments

GrapheneOS is ready to break free from Pixels

https://www.androidauthority.com/graphene-os-major-android-oem-partnership-3606853/
296•MaximilianEmel•9h ago•143 comments

Updating Desktop Rust

https://tritium.legal/blog/update
7•piker•3d ago•3 comments

ADS-B Exposed

https://adsb.exposed/
303•keepamovin•21h ago•76 comments

I am a programmer, not a rubber-stamp that approves Copilot generated code

https://prahladyeri.github.io/blog/2025/10/i-am-a-programmer.html
157•pyeri•3h ago

Comments

Cheer2171•2h ago
I am a programmer, not a rubber stamp that copy pastes StackOverflow answers I don't understand

Or clones a template repo and only tweaks a few files

Or imports libraries with code I've never read

zepolen•2h ago
This conversation isn't for you, you're not a programmer, you're a developer, a modern day script kiddy.

Programmers wrote the StackOverflow answer and wrote that library.

danielbln•2h ago
This conversation is for true Scotsmen.
BiteCode_dev•2h ago
Yeah, I used to be in the top 100 SO users, so I wrote a lot of SO answers, and if you used Red Hat Linux, you probably used my library.

But according to your definition, I'm a script kiddy.

Copenjin•2h ago
I understand which category of people you are describing, but this is what a proper programmer actually does:

- Checks Stack Overflow only for very niche issues; rarely finds exactly what he needs, but reaches a solution by reading multiple answers, and sometimes used to post a better solution for his issue

- Has his own templates if he does repetitive and boring stuff (common); implements the complex logic, if any, first, and gets the rest done as fast as possible while being mildly disgusted

- Imports libraries and often takes a look at the code, noticing stuff that could be improved. Has private forks of some popular open-source libraries that fix issues or improve performance by fixing silly errors upstream. Sometimes he is allowed to, or has time to, send the fixes back upstream. When using those libraries he sometimes finds bugs, and the first thing he does is check the code and try to fix them directly; no tickets to the maintainers, often he opens a PR with the fix directly.

0xbadcafebee•2h ago
> Some exec somewhere in the company decided everyone needs to be talking to AI, and they track how often you're talking with it. I ended up on a naughty list for the first time in my career, despite never having performance issues. I explain to my manager and his response is to just ask it meaningless questions.

That's not a career-switching issue, that's a company-switching issue. Most people will work for at least one company in their career where the people in charge are dickheads. If you can't work around them, go find a different company to work for. You don't have to throw away an entire career because of one asshole boss.

Also fwiw, resistance is more effective than you think. You'd be surprised how often a dickhead in charge is either A) easy to call the bluff of, or B) needs someone to show them they are wrong. If you feel like you're going to quit anyway, put your foot down and take a stand.

Cheer2171•2h ago
> said usage is actually getting monitored and performance appraisals have now started depending on the AI usage instead of (or at least in addition to) traditional metrics like number of priority bugs raised, code reviews, Function Points Analysis, etc.

Really? This sounds absurd. "Instead of" means it doesn't matter how shit your work is as long as you're burning tokens? Or it doesn't matter how good your work is if you're not burning tokens? Name and shame

simonw•2h ago
There are a bunch of companies out there that are tracking what percentage of their developers are using LLMs now.

I heard a rumor recently that AWS are doing this, and managers are evaluated based on what percentage of their direct reports used an LLM (an Amazon-approved model) at least once over a given time period.

shakna•2h ago
Microsoft, Oracle, Amazon, to name a few
procaryote•1h ago
If you admin a google-workspace domain, you get metrics out of the box on agent usage.

I guess it's great for AI companies that they've managed to bait-and-switch from "this will improve your productivity" to "this is how much time you're sinking into this; let's not care about whether it was useful".

overgard•2h ago
The worst part of AI is the way it's aggressively pushed. Sometimes I have to turn off AI completions in the IDE just because it becomes extremely aggressive in showing me very wrong snippets of code in an incredibly distracting way. I hope when the hype dies down the way these tools are pushed on us in a UX sense is also dialed down a bit.
geekybiz•2h ago
The most annoying part is when I'm trying to think through a data structure. While I'm trying to deeply think through every member of a class, its type, its relationships, etc., this overzealous fellow acts like a toddler that doesn't know how to stay quiet unless snoozed.
cyberax•2h ago
JetBrains IDEs have an option to enable AI inline suggestions on demand via a keypress. I really like it. It saves some "boring" typing, while not being annoying.

I'm pretty sure Cursor also has something similar?

BikiniPrince•1h ago
Yeah, it’s just horribly wrong in my experience and a complete distraction. Code completion for functions in the project is another story and that has been around for ages.
rurban•38m ago
With emacs I love the github copilot auto suggestions. Light gray. Either accept it with Ctrl-Tab or ignore it.
matt3210•1h ago
I really get irritated when AI is opt out. Opt out is not consent.
LeoPanthera•1h ago
Does big tech understand consent?

[ ] Yes

[ ] Maybe later

klabb3•1h ago
[ ] Use recommended settings
triyambakam•1h ago
That's why I don't use it in my editor and only through CLI coding agents.
jstummbillig•1h ago
Agents are great (insofar as the models are able to complete the task). Autocomplete Copilot just feels like bad UX: it's both not super effective and disruptive to my thinking.
1dom•1h ago
I think it depends on the context. If I've been writing the same language and frameworks and code solidly for a few months, then autocomplete gets in the way. But that rarely happens, I like to keep trying and learning new things.

If I'm familiar with something (or have been) but haven't done it in a while, 1-2 line autocomplete saves so much time on little syntax and reference lookups. Same if I'm at that stage of learning a language or framework where I get the high-level concepts, principles, use cases and such, but I just haven't learned all the keywords and syntax structures fluently yet. In those situations, speedy 1-2 line AI autocomplete probably doubles the amount of code I output.

Agents are how you get the problems discussed in this thread: code that looks okay on the surface but falls apart on deeper review, whereas 1-2 line autocomplete forces every line or two to be intentional.

Gigachad•1h ago
I disabled the inline auto suggestions. It’s like the tech version of that annoying person who interrupts every sentence with the wrong ending.
leptons•1h ago
"AI" autocomplete has become rather like mosquitos buzzing around my head that I have to constantly swat away. I'm likely to shut it all off soon, it's just become irritating.
pjmlp•1h ago
On VS you can change that to only come up if you do a key shortcut.

For those on VS, this is how to hide it, if using 17.14 or later,

https://learn.microsoft.com/en-us/visualstudio/ide/copilot-n...

eloisant•1h ago
The worst is when writing comments. I'm writing a comment such as "Doing X because..." and it never gets it right.

I'm making a comment precisely because the reason isn't obvious from reading the code, and the AI will make up some generic and completely wrong one.

ptsneves•57m ago
I feel you. I totally disabled AI completions, as they were often sidelining me from my own reasoning.

It's like having an obnoxious co-worker shoving me aside every time I type a new line, completing a whole block of code, and asking me if it's good, regardless of how many times I rejected those changes.

I still use AI, but I favor a copy-paste flow where I at least need to look at what I'm copying and locate the code I'm pasting into. That way I'm at least aware of the method and function names and the general code organization.

I also ask for small copy-paste changes so that I keep it digestible. A bonus is that when the context gets too big, ChatGPT in Firefox basically slows down and locks up, which works as an extra sense that the context window is too big and the LLM is about to start saying nonsense.

That said, AI is an amazing tool for prototyping and for help outside my domain of expertise.

XenophileJKO•8m ago
So one really big thing that can make AI autocomplete super useful is to follow the old method from "Code Complete": the Pseudocode Programming Process (PPP).

Write a comment first on what you intend to do, and the AI generally does a good job auto-completing below it. You don't have to sketch everything out; the AI is just using the page as context, and the comment helps disambiguate what you want to do, so it can autocomplete significant portions when you give it a nudge.

I've almost fully converted to agentic coding, but when I was using earlier tools, this was an extremely simple method to get completions to speed you up instead of slowing you down.
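For what it's worth, the comment-first flow looks something like this. A minimal Python sketch: the function name, data, and logic are made up for illustration; the point is that each comment states intent before the code (hand-written or model-completed) fills it in.

```python
# Pseudocode Programming Process (PPP): sketch the intent as comments first,
# then fill in the code beneath each comment, or let an autocomplete model
# do it, using the comments as context.

def summarize_orders(orders):
    # Group order amounts by customer id
    totals = {}
    for customer_id, amount in orders:
        totals[customer_id] = totals.get(customer_id, 0) + amount
    # Return (customer, total) pairs sorted by descending spend
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```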

jasonkester•40m ago
Indeed. That’s my only interaction with AI coding.

Every time Visual Studio updates, it’ll turn back on the thing that shoves a ludicrously wrong, won’t even compile, not what I was in the middle of doing line of garbage code in front of my cursor, ready to autocomplete in and waste my time deleting if I touch the wrong key.

This is the thing that Microsoft thinks is important enough to be worth burning goodwill by re-enabling every few weeks, so I’m left to conclude that this is the state of the art.

Thus far I haven’t been impressed enough to make it five lines of typing before having to stop what I’m doing and google how to turn it off again.

Zardoz84•17m ago
My little experience with AI coding, using Copilot in Eclipse, was mixed... Context: I work with an old Java codebase that uses Servlets and implements its own web framework. There is a lot of code without tests or comments.

The autocomplete I find useful, especially for menial, very automatic stuff like moving code around when I refactor long methods. Even the suggested comments look useful. However, the frequency with which it jumps in is annoying. It needs to be dialed down somehow (I can only disable it). Plus, it eats the allowed autocomplete quota very quickly.

The "agent" chat is like tossing a coin. I find it very useful when I need to write tests for a class that doesn't have any; at least it lets me avoid writing the boilerplate, though I usually need to fix the mocking setup. Another case where it worked fine was helping me fix a warning I had on a few Vue 2 components. In other instances, however, I saw it fail miserably to write useful code, or mess up the code very badly. Our source code is in ISO 8859-1 (I've asked many times to migrate it to UTF-8), and for some reason the Copilot agent sometimes messes up the encoding and I have to manually fix the mess.

So... the agent/chat mode, I think, could be useful if you know in which cases it will do OK. The autocomplete is very useful, but needs to be dialed down.

ale•2h ago
The good and bad aspect of this approach to AI in tech is that it revealed just how many developers out there are merely happy to get something working and out the door before clocking out, without actually understanding the inner workings of their code.
csmantle•2h ago
This is almost inevitable when something industrializes; people maximize profit by quickly shipping things that barely work. We need people who try to excel at the technology, and AI just amplifies this need.
troupo•2h ago
I find it to be actually a boon for small throw away side projects that I don't care about, and just want to have [1]

Actual code/projects? Detrimental

[1] E.g. I spent an evening on this: https://github.com/dmitriid/mop

almostgotcaught•2h ago
whenever people complain about someone being "merely happy with getting something to work and get it out the door before clocking out" i wonder to myself if i'm dealing with someone that has The Protestant Ethic and the Spirit of Capitalism on their nightstand, or has never read Economic and Philosophic Manuscripts of 1844, or simply does not understand the significance of these two essays.

like ... you expect people to actually be committed to "the value of a hard day's work" for its own sake? when owners aren't committed to value of a hard day's worker? and you think that your position is the respectable/wise one? lol

hansmayer•1h ago
No, it's not about capitalism and exploitation, hard-work propaganda, etc. You can work to the contract (i.e. strictly what's in your work contract, not "above and beyond") while still retaining the quality of the work. So reduce the quantity, but not the quality. This is about the ton of bootcamp developers created in the last ten-ish years, for whom, unlike the rest of us, this is just a better-paid job.
zdragnar•45m ago
Given the remainder of the comment is "and not understanding the inner workings" it's safe to assume that "getting something to work" does not imply that it worked correctly.

Back in the days of SVN, I'd have to deal with people who committed syntax errors, broken unit tests, and other things that either worked but were obviously broken, or just flat out didn't work.

Taking a bit of pride in your work is as much for your coworkers as it is for yourself. Not everything needs to be some silly proles vs bourge screed.

Dylan16807•35m ago
Where did they say anything about a "hard day's work"? Are you making up arguments to attribute to them, lol

And are you assuming the alternative involves not clocking out? Because "clock out, finish when there's more time" is a very good option in many situations.

petesergeant•2h ago
> how many developers out there are merely happy with getting something to work and get it out the door

There's a very large number of cases where that's the right choice for the business.

worldsayshi•2h ago
Also for small cli tools and scripts that otherwise wouldn't get written.
lovecg•2h ago
Steelmanning the "we must force tool usage" position: it's possible that a tool does increase productivity, but there's either a steep learning curve (productivity only improves after sustained usage) or network effects (most people must use it for anyone to benefit).

No opinion on whether or not this applies to the current moment. But maybe someone should try forcing Dvorak layout on everyone or something like that for a competitive edge!

resonious•2h ago
I once had a boss who saw me use Vim and was really impressed with how quickly I could jump around files and make precision edits. He tried getting the other devs (not many, < 5) to use Vim too but it didn't quite pan out.

I would guess that interest, passion, and motivation all play a role here. It's kind of like programming itself: if you sit people down and make them program for a while, some will get good at it and some won't.

eCa•2h ago
> I would guess that interest, passion, and motivation all play a role here.

And, to use less pointed language, people’s brains are wired differently. What works for one doesn’t necessarily work for another, even with similar interest, passion, and motivation.

rkomorn•1h ago
I agree with this.

I was using emacs for a while, but when I switched to vim, something about the different modes just really meshed with how I thought about what I was doing, and I enjoyed it way more and stuck to it for a couple of decades.

I see people that I'd say are more proficient with their emacs, VS Code, etc setups than I am with my vim setup, so I don't think there's anything special about vim other than "it works for me".

mabster•1h ago
I worked with a developer that copied and pasted A LOT and would keep his fingers on the old copy and paste buttons (Ctrl-Ins, etc.). I've even seen him copy and paste single letters. He's one of the most productive developers I've ever worked with.
Xenoamorphous•1h ago
I've had plenty of interest, passion and motivation during my career. But never, ever, directed at learning something like vim, even if it's going to make me more productive.

I'd rather learn almost any other of the myriad topics related to software development than the quirks of an opinionated editor. I especially hate memorising shortcuts and commands.

lelandfe•2h ago
Your old boss probably would have been a bit chastened if he knew said devs would then be spending their hours learning how to exit Vim instead of programming
vidarh•1h ago
There was a time when I'd switch to a different terminal and sudo killall -9 to get out of vim.

And that time when I changed vim to a symlink to emacs on a shared login server and sat back and enjoyed the carnage. (I did change it back relatively quickly)

lawn•1h ago
If learning how to exit Vim takes hours then they aren't worth keeping as employees anyway.
raverbashing•2h ago
Vim's learning curve is much steeper to be honest
procaryote•2h ago
Coding agents seem to be in the fun paradox of "it's so easy to use, anyone can code!" and "using it productively is a deep skill, and we have to force people to use it so they learn"
ozgrakkurt•1h ago
Programming isn’t a government desk job. The interface between programmer and company should be the output only, they can’t force a programmer to use w/e bs they think is good at the time
sandspar•2h ago
User: list crafts that software has automated

GPT-5: Typesetting and paste-up, film prepress/stripping, CMYK color separations, halftone screening, darkroom compositing/masking, airbrush photo retouching, optical film compositing/titling, photochemical color timing, architectural hand drafting, cartographic scribing and map lettering, music engraving, comic book lettering, fashion pattern grading and marker making, embroidery digitizing and stitching, screen-print color separations

sreekanth850•2h ago
Adapt or perish.
coolThingsFirst•2h ago
AI is coming for you John Connor.
sreekanth850•2h ago
I see where you’re coming from, but there’s a small difference. Coding itself is mostly a routine tasks, turning ideas into working code. Humans really stand out in the important parts:creative thinking, planning and architecting the system, deciding what it should do, how it should do, finding problems, checking code quality, and making smart decisions that a tool can’t. AI can help with the routine work, but the creative and thinking parts are still human.And this is exactly where developers should focus and evolve themselves.
Copenjin•2h ago
> creative thinking, planning and architecting the system, deciding what it should do, how it should do, finding problems, checking code quality, and making smart decisions that a tool can’t.

Are you aware that there are people that think that even now AI can do everything you describe?

sreekanth850•1h ago
Then it's a problem.
aforwardslash•44m ago
It can.

The reason crappy software has existed since...ever is because people are notoriously bad at thinking, planning and architecting systems.

When someone makes a "smart decision", it often translates into a nightmare for someone else 5 or 10 years down the line. Most people shouldn't be making "smart decisions"; they should be making boring decisions, as most software is actually a glorified CRUD. There are exceptions, obviously, but don't think you're special: your code also sucks and your design is crap :) The goal is often to be less sucky and less crappy than one would expect; in the end, it's all ones and zeros, and the fancy abstractions exist to dumb the ones and zeros down to concepts humans can grasp.

A machine can and will, obviously, produce better results and better reasoning than the average solution designer; it can consider a multitude of options a single person seldom can; it can point out from the get-go shortcomings and domain-specific pitfalls a human wouldn't even think of in most cases.

So go ahead, try it. Feed it your design and ask about shortcomings; ask about risk management strategies; ask about refactoring and maintenance strategies; you'd probably be surprised.

sreekanth850•33m ago
People often blame LLMs for bad code, but the real issue is usually poor input or unclear context. An LLM can produce weak code if you give weak instructions, but it can also write production-ready code if you guide it well, explain the approach clearly, and mention what security measures are needed. The same rule applies to developers too. I'm really surprised to see so much resistance from the developer community; instead, they should use AI to boost their productivity and efficiency. Personally I'm dead against using CLI tools; IDE-based tools give you better visibility into the code produced and better control over the changes.
petesergeant•2h ago
> If they’re really so confident on the LLM’s effectiveness, why not just keep it voluntary, why force it on people?

For people who are so confident (which, I'm not), it's an obvious step; developers who don't want to use it must either be luddites or afraid it'll take their jobs. Moving sales people to digital CRMs from paper files, moving accountants to accounting software from paper ledgers and journals, moving weavers to power looms, etc etc -- there would have been enthusiasts and holdouts at every step.

The PE-bro who's currently boasting to his friends that all code at a portfolio company has to be written first with Claude Code, and that developers are just there to catch the very rare error, would have been boasting to his friends about replacing his whole development team with one that cost 1/10 the price in Noida.

Coding agents can't replace developers _right now_, and it's unclear whether scaling the current approach will ever let them, but at some point (maybe not until we get true AGI) they will be able to replace a substantial chunk of the developer workforce, and a significant chunk of developers will be highly resistant to it. The people you're complaining about are simply too early.

hooverd•2h ago
It tracks with the trend of computing being something you passively consume rather than something you do. Don't learn how anything works! Deskill yourself! Not that LLMs aren't a force multiplier.
monster_truck•2h ago
I feel bad for my friends who are married with kids and working at places like Microsoft, telling me how their Copilot usage is tracked and how they fear that if they don't hit some arbitrary weekly metric, they'll fall victim to the next wave of layoffs.
teiferer•2h ago
Even married people with kids can switch companies. Sometimes that implies a pay cut, but not always.

And if they really tied their livelihood to working at the same company for the next decade because they maxed out their lifestyle relative to the income generated by that company, then that falls entirely on them, and I don't actually feel that bad for them.

zwnow•2h ago
Absolutely, programmers are paid exceptionally well compared to a lot of other jobs. If they live paycheck to paycheck they are doing things wrong, especially when having family.
ViscountPenguin•1h ago
The hedonic treadmill really gets away from some people. I've had coworkers on 7 figures talk about how they couldn't possibly retire because the costs of living in (HCOL city) are far too high for that.

When you dig down into it, there's usually some insane luxury that they're completely unwilling to give up on.

If you're a software engineer in the United States, or in London, you can almost certainly FIRE.

zwnow•1h ago
Yup, it's insane to me. I am a software developer in Germany making 30k (after taxes) and manage to save 600-700€ a month while still living really well (rural area, no car).

Absolutely not enough to retire early, but easily enough to not live paycheck to paycheck. Making 6 figures in the USA and not being able to afford life is baffling to me.

lnsru•1h ago
Add a family, and 100k after taxes in Munich will not go far. I could live alone in the car, but the kids might want their own rooms and their own beds.
zwnow•1h ago
100k is an unreachable dream for me unless I found a business myself and actually succeed. Munich is expensive; I've seen some prices there. I live near Denmark, though, so Munich would not be an option in the first place. Considering my savings rate and current apartment rent, I could afford a house. Not a big one, but it would be enough. I have no reason to buy one for myself, though.
Root_Denied•2h ago
>And if they really tied their livelihood to working at the same company for next decade because they maxed out their lifestyle relative to the income generated by that company, then that falls all on them and I don't actually feel that bad for them.

I'd say that there's some room for nuance there. Tech hiring has slowed significantly, such that even people in senior roles who get laid off may be looking for a long time.

If you work for Microsoft you're not getting top-tier comp (at least compared with many other tech companies), and on top of that you're required to work out of a V/HCOL city. Add in the expenses of a family, which have risen dramatically in the last few years, and it's easy to find examples of people who are starting to get stretched paycheck to paycheck who weren't having that issue a couple of years ago.

Check the prices in Seattle, SF, LA, DC, and NYC metro areas for 2-4 bedroom rentals and how they've jumped the last few years. You're looking at 35%-45% of their take home pay just on rent even before utilities. I'm not sure the math works out all that well for people trying to support a family, even with both parents working.

teiferer•34m ago
> Add in the expenses of a family, which have risen dramatically the last few years, and it's easy to find examples of people who are starting to get stretched paycheck to paycheck

If you maxed out your lifestyle relative to your income then yes, that is the case. It will always be, no matter how much you make.

It's also the case for the guy stocking the shelves at your local Walmart if he maxes out his lifestyle. But if you compare both in absolute terms, there are huge differences.

Which lifestyle you have is your choice. How big of a house, what car, where to eat, hobbies, clothes, how many kids, etc. If you max that out, fine, enjoy it. But own that it was your choice and comes with consequences, i.e., if expenses rise more than income, then suddenly your personal economy is stretched. And that's on you.

tho23i4909234u•2h ago
For the H1Bs, I've heard that it's a nightmare.
pjmlp•1h ago
Depends on the job market on their area.
oezi•1h ago
And that's why performance tracking is prohibited in countries where unions still have a bit of power.
bloppe•1h ago
Yeesh. Prohibited? Then how do you decide who gets a promotion? At random?
OlivOnTech•1h ago
You have human managers talking it through with their teams (instead of human-decided metrics that cannot see the full picture)
ehnto•1h ago
Not that hard, but also why would you want to promote based on metrics? That will get you people gaming the system, and I can't imagine a single software dev metric that actually captures the full gamut of value a dev can provide. You will surely miss very valuable devs in your metrics.
lnsru•1h ago
There are no real promotions; it's about employment duration. In Bavaria you have something like 12 salary groups. For white-collar workers, 9 is entry level, 10 is for some experience, 11 for experienced, and 12 is the carrot to work harder for. Some companies downgrade to pay less; job ads offering group 8 for experienced folks started appearing recently. The bonus is up to 28% depending on performance, so basically you can slack all day and get a +5% bonus on base salary while someone doing overnighters gets +15%. The higher bonuses are reserved for old-timers. This system is absolutely cringe. Btw, most of these unionized companies offer 35-hour contracts; 40 hours must be negotiated as a bonus... Anyway, the union will take care of regular base salary increases, and that's really nice. +6% for doing nothing is amazing!
rapsey•1h ago
And why those countries tend to have barely any growth in their economies (i.e. europe).
pjmlp•1h ago
It is ok, I earn enough to pay my bills, the ones from my family, a bit of travelling around and healthcare.

Usually over here we don't dream of making it big with big villas and a Ferrari in the garage; we work to live, not live to work.

rapsey•1h ago
France and UK are in giant fiscal crises. The German economy is in the toilet with no hope in sight. All of them have seen large deterioration of the quality of health care in the last decade. The EU leaders care more about Ukraine and destroying all privacy than any economic reform.
lm28469•1h ago
The irony of promoting performance tracking at employee level and criticizing the EU for destroying privacy
pjmlp•1h ago
How are those tariffs working out?
rapsey•1h ago
I am european lol.
deaux•1h ago
Korea has strong worker protections and unions. Not on tracking, but in general.
rapsey•1h ago
Low union membership though.
DocTomoe•3m ago
Which may be related to unions having been actively persecuted (to the extent of actual state-sanctioned torture, disappearances, and bona-fide massacres against people involved with unions - and people living next door - with the active support of US civilian and military leadership).

Google Gwangju.

lm28469•1h ago
The economy is supposed to serve us, not the other way, there is no pride in being a slave, it's not the flex you think it is lol.

Let's work 90 hours a week and retire at 80, imagine the growth, big numbers get bigger makes bald monkey happy

xela79•38m ago
> Let's work 90 hours a week and retire at 80, imagine the growth, big numbers get bigger makes bald monkey happy

that is all you heard in the 80s-90s: people over the pond showing off how many hours per week they worked. Like... how is that something to be proud of? So wow, you spent 12+ hours per day working, had no free evenings, zero paid holidays. And that is supposed to impress whom?

please.

lm28469•27m ago
What also happened in the 80s is that politicians told us automation would bring a 3-day work week, which never materialized. But now we have to trust the same people, moved by the same greed, that this time it'll be different.
jstanley•8m ago
40 years on, how many of them truly are the same people?
ehnto•1h ago
Which seems to be a great thing for liveability and happiness metrics across the board.
rapsey•1h ago
Not when your economy is reduced to mass importing of third world labor to keep salaries down and your economy going (i.e. Italy, France, UK and Germany).
adammarples•1h ago
Easy way to game that would be to spam a couple of pages of unread documentation for every page of code you write. 2/3rds copilot usage, it's not critical, and documenting existing code is a much more likely to work use case for an LLM.
p_v_doom•1h ago
I mean, nobody reads documentation anyway
moomoo11•49m ago
Why feel bad? They signed up for that. There is no reason to feel bad for people who enter into voluntary contracts willingly.

Personally I want my MSFT position to increase, so I’m cool with whatever the company does to increase the share price.

piva00•9m ago
Feel bad because you have empathy?

Or perhaps that's the problem, lacking it.

charcircuit•2h ago
>why not just keep it voluntary, why force it on people?

People hate learning new tools, even if they are more efficient. People would rather avoid doing things than learning a tool to do it efficiently.

Even in this thread you can see someone who is / was a Vim holdout. But the improvement from Vim to an IDE will be a fraction of the difference compared to AI-integrated IDEs.

notrealyme123•2h ago
I tried Cursor but it felt impossible for me to create novelty there. It only works on things which have been, more or less, in the training data.

Saying that the people are the problem instead of the tool is a lazy argument IMO. "It's not the company's fault, it's the customer's."

yrds96•2h ago
Did people force React? Cloud infrastructure? Microservices? You get it.

I know there are people still using PHP 5 and deploying via FTP, but most people moved on to be better professionals and use better tools. Many people are doing this to AI, too, me included.

The problem is that some big companies and influential people treat AI as a silver bullet and convince investors and customers to think the same way. These people aren't thinking about how much AI can help people be productive. They are just thinking about how much revenue it can give until the bubble pops.

procaryote•1h ago
Forcing React, cloud infra, and microservices makes a lot more sense than forcing certain development tools. One is the common system you work on; the other is what you use to essentially edit text.
aforwardslash•55m ago
It's basically the same. It abstracts away a layer of complexity so you focus on different stuff. The inherent disadvantage of using these shortcuts/abstractions is only obvious if you actually understand their inner workings and their shortcomings - be it cloud services or LLM-generated code.

Today you have "frontend programmers" who couldn't implement a simple algorithm even if their life depended on it; that's not necessarily bad - it democratizes access to tech and lowers the entry bar. These devs up in arms against AI tools are just gatekeepers - they see how easy it is to produce slop and feel threatened by it. AI is a tool; in most cases it will improve the speed and quality of your work; in some cases, it won't. Just like everything else.

aforwardslash•1h ago
> Did people force React? Cloud infrastructure? Microservices? You get it.

Actually, yes. People forced React (instead of homegrown or different options) because it's easier to hire for than finding JS/TypeScript gurus to build your own stuff.

People forced cloud infrastructure; even today, if your 10-customer startup isn't using cloud in some capacity and/or Kubernetes, investors will frown on you; devops will look at you weird (what? Needing to understand the inner workings of software products to properly configure them?)

Microservices? Check. 5 years ago, you wouldn't even be hired if you skipped microservices; everyone thinks they're Google, and many startups need to burn those AWS credits; that's how you get a dozen-machine cluster to run a solution a proper dev could code in a week and run on a laptop.

procaryote•1h ago
Most companies I've worked with don't care if you use vim or an IDE.

I've worked with people using vim who wildly outproduce full teams using IDEs, and I have a strong suspicion that forcing the vim person to use an IDE would lower their productivity, and vice versa

charcircuit•1h ago
>I've worked with people using vim who wildly outproduce full teams using IDEs

This is not due to the editor. Vim is not a 20x productivity enhancer.

>forcing the vim person to use an IDE would lower their productivity

Temporarily, sure. But their productivity should actually go up after they get used to it. This idea of wanting to avoid such a setback and avoiding change is what keeps people on such an outdated workflow.

quantummagic•2h ago
They said the same thing about the loom. "I'm an artist, no machine can replace me!" Now it's all done by machine, and none of us worry about it. We're in the early stages of the same process with AI; history rhymes.
overgard•2h ago
That may be the case some day, but I don't think it's going to happen with LLMs. They get too many things wrong via hallucinations (likely unfixable) and often they can go deep into an (incorrect) rabbit hole burning a ton of tokens at the same time.

Useful tools, but I think the idea that they'll replace programmers is (wishful? eek) thinking.

quantummagic•2h ago
Yup.. took the loom 200 years, and it won't be overnight for AI either. But it will eat away at the edges, and do the simple things first. It already is, for those who embrace it.
sumuyuda•2h ago
High-quality handmade clothes still exist and people do want to pay for them. Mass-produced clothing made in sweatshops is what the majority of people buy, because that is where the capitalist companies drove production.
quantummagic•2h ago
They exist the same way the horse-and-buggy exist -- for a select few. They're the exception that proves the rule.
aforwardslash•39m ago
The loom dropped production costs immensely - even handmade clothes are made with premade fabrics; they don't start from scratch.

Mass-produced clothing exists in many industrialized countries - typically the premium stuff; the sweatshop stuff is quite a bit cheaper, and customers are happy paying less. It's not capitalism, it's consumer greed. But nice story.

rdtsc•2h ago
> If they’re really so confident on the LLM’s effectiveness, why not just keep it voluntary, why force it on people? The results will be there in the outcome of the shipped product for all to see.

It’s a bit like returning to the office. If it’s such an obvious no-brainer performance booster with improved communication and collaboration, they wouldn’t have to force people to do it. Teams would chomp at the bit to do it to boost their own performance.

BiteCode_dev•2h ago
That's assuming this is most people's objective when they are at work.

And even if it was, that's also assuming this benefit would be superior to the benefit of remote work for the individual.

vineyardmike•2h ago
I don't want to wade into the actual effectiveness of RTO nor LLMs at boosting productivity, but if you buy into the claims made by advocates, it seems pretty obvious that the "in office boosts communication" claim is only true if your coworker (the other side of the conversation) is in office. Not everyone has the same priorities, so you'd have to mandate compliance to see the benefits.

Similarly, many people don't like learning new tools, and don't like changing their behavior. Especially if it's something they enjoy vs something good for the business. It's 2025 and people will have adamantly used vim for 25 years; some people aren't likely to change what they're comfortable with. Regardless of what is good for productivity (which vim may or may not be), developers are picky about their tools, and its hard to convince people to try new things.

I think the assumption that people will choose to boost their own productivity is questionable, especially in the face of their own comfort or enjoyment, and if "the business" must wait for them to explore and discover it on their own time, they risk forgoing profits associated with that employee's work.

gherkinnn•1h ago
I don't see how using vim is in any way bad for business, what a terrible example. And I don't even use it myself.

Your argument also hinges on "business" knowing what is good for productivity, which they generally don't. Admittedly, neither do many programmers, else we'd have a lot less k8s.

vidarh•1h ago
Indeed, I detest vim but I think mentioning it detracted from the argument by showing why developers tend to not trust it when others try to dictate what is "good for the business" based on their own views rather than objective metrics.
danielrothmann•1h ago
You've got a point on RTO. Because it's a group behaviour, if you believe it will have positive effects, mandating it could be a way of jumpstarting the group dynamic.

With LLMs, I'm not so sure. Seems more like an individual activity to me. Are some people resistant to new tools, sure. But a good tool does tend to diffuse naturally. I think LLMs are diffusing naturally too, but maybe not as fast as the AI-boosters would like.

The mistake these managers are making is assuming it's a good tool for work that they're not qualified to assess.

forgotusername6•1h ago
There are psychological barriers to using a tool that diminishes the work you previously thought was complex.
Gigachad•1h ago
I’m lazy. I’d rather work from home even if the office is more productive because it’s easier for me to not have to go to the office.

If the AI tools actually worked how they are marketed I’d use them because that’s less work for me to have to do. But they don’t.

not_that_d•2h ago
I am living this but the CEOs of my company are also "active" programmers.

Even when I already hear from them that "it helps them in languages they do not know" (which is also my experience), I get frowned upon if in meetings I do not say that I am "actively using AI to GENERATE whole files of code".

I use AI as a rubber duck, to generate repetitive code, or to support me when going into a new language or technology. But as soon as I understand it, most of the code given for complete, non-hobby, enterprise-level projects contains either inefficient code or just plain mistakes, which take me ages to fix in new technologies.

germandiago•2h ago
You want some advice from a 16-years-in-industry person? Not so long, but long enough: software, like all industries, is driven by metrics.

Metrics we understand, but that managers sometimes fail to understand. You are a means of production. With the advent of AI, some very hyped people think and wish they could get rid of programmers.

You know what I am doing in the meantime? I built a business. I am just finishing the beta deployment test now. Can it go wrong? Yes.

But otherwise, you face being a number, a production-chain thing, in the future. Besides that, when they can get rid of you, you are going to be in a bad position to move at that time. Invest time now in an alternative strategy, if you can.

Of course, I know nothing about you so I might be totally wrong. If you already have financial safety for the rest of your life, this does not apply as hard.

I am trying to buy more freedom on my side. I already had some, but not enough. You will not be free with a manager to report to, even if you think you are doing a better job than he thinks. Or even if you are objectively doing it.

They will care about delivery in a rush, politics, self-interest (this is not different from any human, but you will depend on them), etc.

Just choose freedom :D

_ZeD_•2h ago
what baffles me is how much more rage is coming from any other creative workers (painters, filmmakers, musicians) than from programmers.

Why are programs - the result of the ingenuity of people working in the software field - not protected against AI slop?

Why is there no narrative out there describing how fake and soulless code written by an AI agent is?

Copenjin•2h ago
Because there already was slop written by humans, and I think that in many cases the AI slop is better looking.
protocolture•1h ago
>not protected against AI slop stuff.

Programmers are by and large not averse to sharing, which is why we have copyleft and Stack Overflow.

Coding is also a process, one that you may need to go through many times: the creation and maintenance of expert systems.

Artists tend to want to win it big once, never innovate, and use the government to force people to send them money.

eloisant•1h ago
Because for other workers the threat is much bigger. I'm not a painter, filmmaker or musician, but now I can make a picture, a short movie or a song. Yes it will be mediocre, but if I'm fine with mediocre I no longer need those professionals.

Programs on the other hand still need developers to make. Also, we've seen decades of tooling evolution that (1) made developers more productive (2) failed to replace developers.

visarga•2h ago
The forcing argument has merit: it should not be forced; in fact, they should say very little about how we do our work.

But the "rubber-stamp" framing is wrong; if it were true, you would not be needed at all. It's actually harder to use gen AI than to code manually. Gen AI produces code at a rapid pace and in overwhelming quantity, and you need to ensure it is not broken in non-obvious ways. You need to layer constraints, tests, and feedback systems for self-repair, and handle memories across contexts.

I recently vibe-coded 100K LOC across dozens of apps. I feel the rush of power in coding agents, but also the danger. At any moment they could hallucinate, misunderstand, or work from a different premise than you did. Going past 1000 LOC requires sustained focus; it will quickly unravel into a mess otherwise.

therein•2h ago
It is not harder if you don't care about or even understand what could go wrong. It is harder if you care and want to be as confident of this code as if it is your own hand-written code.

Feels like you are assuming everyone has your diligence and the diligence that exists in the industry isn't already rapidly decaying due to what's happening.

avhception•2h ago
Just yesterday I made some notes about a program I'd like to write (hobby project, to be open sourced). After that, the thought of using an LLM to turn the notes into an implementation squished the joy right out of me.

The better the code generated by LLMs get, the less there is of an incentive to say "no". Granted, we're not nearly there yet (even though media reports and zealous tech bros say otherwise). But - and this is especially true for organizations that already had a big code quality problem before the LLMs showed up - if the interpreter / compiler accepts the code and it superficially looks like it does what it should, there is pressure to simply accept it.

Why say no when we could be done now and move on!? Rubber-stamp it and let's go! Sigh. Maybe I'm overly pessimistic, reading the raves about LLMs every day grinds me down.

krackers•2h ago
I find LLM generated code ends up pushing review/maintenance burden onto others. It "looks" right at first glance, and passes superficial tests, so it's easy to get merged. But then as you build on top of it, you realize the foundations are hastily put together, so a lot of it needs to be rewritten. Fine for throwaway or exploratory work, but heaven help you if you're working in a project where people use LLMs to "fix" bugs generated by previous LLM generated code.

So yes it does increase "velocity" for the person A who can get away with using it. But then the decrease in velocity for person B trying to build on top of that code is never properly tracked. It's like a game of hot potato, if you want to game the metrics you better be the one working on greenfield code (although I suppose maintenance work has never been looked at favorably in performance review; but now the cycle of code rot is accelerated)

Gigachad•1h ago
This has been described a lot as “workslop”, work that superficially looks great but pushes the real burden on the receiver of the work rather than the producer.
loveparade•1h ago
That sounds more like an organizational problem. If you are an employee that doesn't care about maintainability of code, e.g. a freelancer working on a project you will never touch again after your contract is over, your incentive has always been to write crappy code as quickly as possible. Previously that took the form of copying cheap templates, copying and pasting code from StackOverflow as-is without adjustments, not caring about style, using tools to autogenerate bindings, and so on. I remember a long time ago I took over a web project that a freelancer had worked on, and when I opened it I saw one large file of mixed python and HTML. He literally just copied and pasted whole html pages into the render statements in the server code.

The same is true for many people submitting PRs to OSS. They don't care about making real contributions, they just want to put something on their resume.

AI is probably making it more common, but it really isn't a new issue, and is not directly related to LLMs.

tschumacher•1h ago
Yes, this is it. The idea that LLMs somehow write this deceptive code that magically looks right but isn't is just silly. Why would that be the case? If someone finds they are good at writing code (hard to define of course but take a "measure" like long term maintainability for example) but they fail to catch bad code in review it is just an issue with their skill. Reviewing code can be trained just as writing code can be. A good first step might be to ask oneself: "how would I have approached this".
dm270•1h ago
I'm working on a website and created a custom menu. Nothing fancy. AI got it done after some tries, and I was happy, as web development is not my area of expertise. After some time I realized the menu results in scrolling when it shouldn't, and I wanted to make the parent container expand. This was impossible, as the AI did a rather unusual implementation even for such a limited use case. Best part: my task is now impossible to solve with AI, as it doesn't really get its own code. I resorted to actually looking into CSS and the docs and realized there is a MUCH simpler way to solve all of my issues.

Turns out sometimes the next guy who has to do maintenance is oneself.

Terr_•50m ago
> Turns out sometimes the next guy who has to do maintenance is oneself.

Over the years I've been well-served by putting lots of comments into tickets like "here's the SQL query I used to check for X" or "an easy local repro of this bug is to disable Y", etc.

It may not always be useful to others... but Future Me tends to be glad of it when a similar issue pops up months later.

piva00•40m ago
On the same boat, I've learnt to leave breadcrumbs for the future quite a long time ago, and it's paid off many, many times.

After it becomes second nature, it's really relaxing to know I have left all the context I could muster around: comments in tickets, comments in the code referencing a decision, well-written commit messages for anything a little non-trivial. I learnt that peppering all the "whys" around is just being a good citizen in the codebase, even if only for Future Me.

thefz•35m ago
> it doesn’t really get its own code

It doesn’t really get its own anything, as it is unable to "get". It's just a probabilistic machine spitting out the next token

Kudos•26m ago
Hey, I think everyone understands how they work by now and the pedantry isn't helpful.
mellosouls•1h ago
This is pretty much how permanent staff often have to work with consultants/contractors or job-hoppers in some sectors.

Shiny new stuff quickly produced, manager smiles and pays, contractor disappears, heaven help the poor staffers who have to maintain it.

It's not new, just in a new form.

eloisant•1h ago
In my experience, AI generated code is much higher quality than code written by external service companies. For example it will look at your code base and follow the style and conventions.
cjfd•1h ago
Style and conventions are very superficial properties of code. The more relevant property is how many bugs are lurking below the surface.
mfru•1h ago
Style conventions have a real impact on how effectively bugs are found.
samrus•22m ago
The actual design of the solution has a way bigger impact on the amount of bugs to be found in the first place
sussmannbaka•32m ago
this just means the bugs it creates are better camouflaged
karmakurtisaani•56m ago
What's new though is that now you can do it to your future self!
izacus•46m ago
Don't ignore the difference in scale though. Something happening some of the time isn't the same as happening most of the time.
p_l•2m ago
Yeah, LLMs are easier to keep available ;)
samrus•23m ago
This misallignment of incentives is why we have shitty software in everyday life
Degorath•1h ago
I've decided to fight it the same way I fight tactical tornadoes - by leaving those people negative reviews at mid-year review.

(I also find the people who simply paste LLM output to you in chat are the much bigger evil)

m463•1h ago
I'm sort of reminded of the south park movie.

They kept repeatedly getting an NC-17 from the MPAA and kept on resubmitting it (6 times) until just before release when they just relented, gave it an R and released it as-is.

https://en.wikipedia.org/wiki/South_Park:_Bigger,_Longer_%26...

Fomite•56m ago
One of the things about AI generally is it doesn't "save" work - it pushes work from the one who generates the work to the person who has to evaluate it.
rewgs•39m ago
It's yet another example of "don't be the last one holding the bag."
trklausss•15m ago
I'd say it's a paradigm change, and it might even be faster if you have test-driven development... Imagine writing your tests manually, getting LLM code, making it pass the tests, done.

Of course, the golden rules are: 1. write the tests yourself, don't let the LLM write them for you, and 2. don't paste this code directly into the LLM prompt and let it generate code for you.

In the end it boils down to specification: the prompt captures the loosely defined specification of what you want, the LLM spouts something already very similar to what you want, you tweak it, test it, off you go.

With test-driven development this process can be made simpler, and changes in other parts of the code are also checked.
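A minimal sketch of that loop in Python, with `slugify` as a made-up example function (not from any real project): the test is written by hand first, pinning down the spec, and the implementation below it stands in for what you would accept from the LLM only once the hand-written test passes.

```python
import re

# Step 1: hand-written test. This is the human-owned specification;
# per the golden rule above, the LLM never writes this part.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaced  out  ") == "spaced-out"

# Step 2: candidate implementation. This is the part you might let an
# LLM draft, then review and tweak until the test above passes.
def slugify(text: str) -> str:
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse runs of non-alphanumerics
    return text.strip("-")                    # drop leading/trailing dashes

# Step 3: run the spec against the candidate; reject or regenerate on failure.
test_slugify()
```

The point is only the ordering: the test exists before the generated code, so "done" is defined by something the LLM did not write.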

Nio1024•2h ago
I don’t think AI is anywhere near the point of replacing humans yet. The main issue here is whether the use of these tools is forced or voluntary. I’ve seen quite a few companies where the boss tries to fully adopt AI productivity tools but faces strong resistance during implementation. From the employees’ perspective, the boss might be moving too aggressively without considering the practical realities. From the boss’s perspective, it’s frustrating to see the pushback. This tension seems to be a common challenge at the current stage of AI adoption.
Nio1024•1h ago
Just to add, many people tend to overestimate the power of AI. At least for now, vibe coding doesn’t play a significant role in building complex software. I recently read a Stack Overflow research report showing that “Most respondents are not vibe coding (72%), and an additional 5% are emphatic about it not being part of their development workflow.” It also noted that in a future with advanced AI, the #1 reason developers would still ask another human for help is “When I don’t trust AI’s answers” (75%). This clearly shows that human developers remain the ultimate decision-makers.
zwnow•1h ago
It won't replace humans in the foreseeable future, as they cannot reason or react to changes they weren't trained on. Bosses are jumping on a hype train, making decisions in fields they barely have expertise in, which is the frustrating part. They listen to the false promises of other "founders". Bosses not listening to their employees has always been a key factor in frustration at work; these businesses have no right to succeed.
bloppe•1h ago
If tech companies are this stupid, it ought to be very easy to disrupt and usurp them by simply shipping competing code that works. In that sense, the author is painting an incredibly bright picture of the future of the software industry: one where founders don't have to be particularly talented to hit the jackpot.
numpy-thagoras•1h ago
"...one where founders don't have to be particularly talented to hit the jackpot."

That's where we're at right now anyways.

"If tech companies are this stupid, it ought to be very easy to disrupt and usurp them by simply shipping--"

And that's how we got here.

The code rot issue will blow up a lot more over the next few years, so that we can finally complete the sentence and start "shipping competing code that works".

I worry that mopping up this catastrophe is going to be a task that people will again blindly set AI upon without the deep knowledge of what exactly to do, rather than "to do in general, over there, behind that hill".

ehnto•1h ago
Saving misguided AI codebases is going to be quite lucrative for contract work I suspect.

A lot of non-technical people are going to get surprisingly far into their product without realising they are on a bad path.

It already happens now when a non-technical founder doesn't get a good technical hire.

The surprising thing for developers though, is how often a shit codebase makes millions of dollars before becoming an issue. As much as I love producing rock solid software, I too would take millions of dollars and a shit codebase over a salary and good code.

fancyfredbot•38m ago
Yes this is only bad news if you are working for morons.

Unfortunately a lot of people are in that situation. You can basically forget about disruption. Meritocracy is dead, long live the Peter principle.

mock-possum•1h ago
This feels like it would have been better off as a topic level reply to that Reddit thread, than as a short whiny blog post, or as a post to HN
gngoo•1h ago
Okay, but now what? Clearly, the industry is trending towards an entirely new style of doing programming. What are the long-term options going to be for those who don't enjoy this? Especially when there is a good chunk of people embracing it and adopting these tools faster than any other tools for this profession have been adopted in the past. How will this end?
eloisant•1h ago
Ask people who thought compilers were stupid, generating wrong code and decided they preferred to keep writing assembly code...
ayaros•1h ago
"Needless to say, they’d still want you to take the responsibility. If bugs or tickets get raised on the shipped code, it’s you who gets fired, not the copilot or chatgpt - though the larger narrative or news headlines next day would still be, 'AI is eating jobs'!"

I'm also reminded of that legendary old IBM quote from 1979: "A computer can never be held accountable. Therefore a computer must never make a management decision."

p0w3n3d•1h ago
I totally agree. The employer requires me to take ownership of the code I push to the repository; I should not be forced to use some tool if I think that the tool does wrong.

In a larger scope, I tend to break many "rules" when I code, because my experience argues against them, and this is what makes me unique. Of course, nowadays I need to convince my team to approve it, but sometimes things that are written differently are free from certain flaws that I want to avoid in this very case.

-- EDIT --

I think this management trend comes from bad management principles. There's a joke that a bad manager is a person who, knowing that one woman delivers a baby in 9 months, will assume that nine women deliver a baby in one month. I'd say a similar principle applies here: they were sold by the marketing on how AI makes things faster, they put the numbers into their spreadsheet, and now they expect the output they pay for to match the numbers on the sheet. And if the numbers don't fit, they start pushing.

alganet•1h ago
What I don't like about this take is that it implies that it could be that way. It implies the LLM could do the job of writing, leaving the programmer to just approve it.

It sounds anti-LLM, but it actually helps support the illusion that LLMs can do more than they actually can.

I don't think an LLM can write serious software on its own. If it could, there would be some extraordinary evidence, but all there is are some people spreading rumours. If you ask them for simple evidence of comparable performance (like a video), they shy away or answer vaguely.

The thing is not there yet, and I understand the optimism of some, but I also must emphasize that it's not looking great for LLM coding enthusiasts right now. There's no amount of proselytism that can make up for the lack of substance in their claims. Maybe they can trick investors and some kids, but that's not going to cut it in the long run.

Therefore, this is not a problem. I don't need to worry about it. If (or when) some evidence appears, I can then worry about it. This hasn't happened yet.

xkbarkar•1h ago
As a response to the AI negativity in the thread. Remember that this thing is in its infancy.

Current models are the embryos of what is to come.

Code quality of the current models is not replacing skilled software engineers, network or ops engineers.

Tomorrow's models may well do that, though.

Venting the frustrations of this is all very well but I sincerely hope those who wish to stay in the industry, learn to get ahead of AI and utilize and control it.

Set industry standards (now) and fight technically incompetent lawmakers before they steer us into disaster.

We have no idea what effect tomorrow's LLMs are going to have; autonomous warfare, for example, is not that far away.

All while today's tech talent spends energy bickering on HN about the loss of being the code review king.

Everyone hated the code review royalty anyway. No one mourns them. Move on.

awesan•54m ago
If managers are pushing a clearly not-working tool, it makes perfect sense for workers to complain about this and share their experiences. This has nothing to do with the future. No one knows for sure if the models will improve or not. But they are not as advertised today and this is what people are reacting to.
sirwhinesalot•27m ago
Current LLMs are already trained on the entirety of the interwebs, including very likely stuff they really should not have had access to (private github repos and such).

GPT-5 and other SoTA models are only slightly better than their predecessors, and not for every problem (while being worse on other metrics).

Assuming there is no major architectural breakthrough[1], the trajectory only seems to be slowing down.

Not enough new data, new data that is LLM generated (causing a "recompressed JPEG" sort of problem), absurd compute requirements for training that are only getting more expensive. At some point you hit hard physical limits like electricity usage.

[1]: If this happens, one side effect is that local models will be more than good enough. Which in turn means all these AI companies will go under because the economics don't add up. Fun times ahead, whichever direction it goes.

throw-10-13•17m ago
In its infancy, but still forced on people like it's a mature product.

The marketing around AI as a feature complete tool ready for production is disingenuous at best, and outright fraud in many cases.

Zababa•46m ago
Interesting to see how programmers seem to be separating into people embracing these tools and people rejecting them. I wonder if it's linked to liking the act of coding itself vs. liking the results.
Ekaros•44m ago
I myself am among the people I would trust least to approve any code. In general, I am way too trusting that others either know better or have properly thought through their work.

In scenarios where especially the latter might not be true, it seems like an inevitable failure. And I'm not even sure any fixes will be thought through either... which makes me rather sceptical of the whole thing.

throw-10-13•20m ago
Working with AI is like working with an ADHD intern that says they understand the problem but then gets distracted by every possible thing and then tries to gaslight you into thinking their mistakes are your fault.

It's exhausting, infuriating, and a waste of time.

jjav•11m ago
Funny, just moments ago I was describing the AI coder to friends as a very drunk intern high on crack.

My experience is that using AI as a fancy code completion tool works very well and saves me a lot of time.

But, trying to let it define how to do things aka vibe coding, is a recipe for endless disaster.

An AI coder can do great things, but it needs someone to first define the architecture and forcefully guide it in the right direction at every step. If let loose, things go haywire.

throw-10-13•5m ago
I basically just use it every now and then to summarize API docs for me; every time I try to use it to solve a real problem, it just flounders in a context polluted by its previous failed attempts.

I generally find the whole process to be more frustrating and time consuming than just writing the code myself.

I am not interested in entire new architectural paradigms required to enable a mediocre code ad-lib bot.

serf•17m ago
interesting to see the phrase 'programmer' coming back en masse - especially as someone who never really stopped using it.

I thought we were all 'full stack engineers' now, otherwise the resume got thrown into the circular file?

Great. I wait with anticipation for the slide back to 'Calculator'.

DocTomoe•10m ago
That's a bit like a 1950s-era pilot raging against autopilots because 'I am a pilot, not a rubber stamp that does autopilot surveillance.'

Today, no commercial pilot would get the idea that they are there to fly straight for eight hours. They are there for when bad things happen.

I expect software development to go in a similar direction.

cHaOs667•3m ago
"If they’re really so confident on the LLM’s effectiveness, why not just keep it voluntary, why force it on people?" To answer this question: To justify the investment.

No, for real: LLM solutions cost a shitload of money, and every investment needs to be justified at the management level. That's why they are enforcing it.

My bigger problem is that there are a whole lot of "developers" who do not read the generated code properly, which is why you end up in review sessions where the developer does not know what is happening or why the code acts a particular way. And we have not yet even discussed clean code principles throughout the whole solution...