frontpage.

Are AI agents ready for the workplace? A new benchmark raises doubts

https://techcrunch.com/2026/01/22/are-ai-agents-ready-for-the-workplace-a-new-benchmark-raises-do...
1•PaulHoule•3m ago•0 comments

Show HN: AI Watermark and Stego Scanner

https://ulrischa.github.io/AIWatermarkDetector/
1•ulrischa•3m ago•0 comments

Clarity vs. complexity: the invisible work of subtraction

https://www.alexscamp.com/p/clarity-vs-complexity-the-invisible
1•dovhyi•4m ago•0 comments

Solid-State Freezer Needs No Refrigerants

https://spectrum.ieee.org/subzero-elastocaloric-cooling
1•Brajeshwar•4m ago•0 comments

Ask HN: Will LLMs/AI Decrease Human Intelligence and Make Expertise a Commodity?

1•mc-0•6m ago•1 comments

From Zero to Hero: A Brief Introduction to Spring Boot

https://jcob-sikorski.github.io/me/writing/from-zero-to-hello-world-spring-boot
1•jcob_sikorski•6m ago•0 comments

NSA detected phone call between foreign intelligence and person close to Trump

https://www.theguardian.com/us-news/2026/feb/07/nsa-foreign-intelligence-trump-whistleblower
4•c420•7m ago•0 comments

How to Fake a Robotics Result

https://itcanthink.substack.com/p/how-to-fake-a-robotics-result
1•ai_critic•7m ago•0 comments

It's time for the world to boycott the US

https://www.aljazeera.com/opinions/2026/2/5/its-time-for-the-world-to-boycott-the-us
1•HotGarbage•7m ago•0 comments

Show HN: Semantic Search for terminal commands in the Browser (No Back end)

https://jslambda.github.io/tldr-vsearch/
1•jslambda•7m ago•1 comments

The AI CEO Experiment

https://yukicapital.com/blog/the-ai-ceo-experiment/
2•romainsimon•9m ago•0 comments

Speed up responses with fast mode

https://code.claude.com/docs/en/fast-mode
3•surprisetalk•13m ago•0 comments

MS-DOS game copy protection and cracks

https://www.dosdays.co.uk/topics/game_cracks.php
3•TheCraiggers•14m ago•0 comments

Updates on GNU/Hurd progress [video]

https://fosdem.org/2026/schedule/event/7FZXHF-updates_on_gnuhurd_progress_rump_drivers_64bit_smp_...
2•birdculture•14m ago•0 comments

Epstein took a photo of his 2015 dinner with Zuckerberg and Musk

https://xcancel.com/search?f=tweets&q=davenewworld_2%2Fstatus%2F2020128223850316274
7•doener•15m ago•2 comments

MyFlames: Visualize MySQL query execution plans as interactive FlameGraphs

https://github.com/vgrippa/myflames
1•tanelpoder•16m ago•0 comments

Show HN: LLM of Babel

https://clairefro.github.io/llm-of-babel/
1•marjipan200•16m ago•0 comments

A modern iperf3 alternative with a live TUI, multi-client server, QUIC support

https://github.com/lance0/xfr
3•tanelpoder•17m ago•0 comments

Famfamfam Silk icons – also with CSS spritesheet

https://github.com/legacy-icons/famfamfam-silk
1•thunderbong•18m ago•0 comments

Apple is the only Big Tech company whose capex declined last quarter

https://sherwood.news/tech/apple-is-the-only-big-tech-company-whose-capex-declined-last-quarter/
2•elsewhen•21m ago•0 comments

Reverse-Engineering Raiders of the Lost Ark for the Atari 2600

https://github.com/joshuanwalker/Raiders2600
2•todsacerdoti•22m ago•0 comments

Show HN: Deterministic NDJSON audit logs – v1.2 update (structural gaps)

https://github.com/yupme-bot/kernel-ndjson-proofs
1•Slaine•26m ago•0 comments

The Greater Copenhagen Region could be your friend's next career move

https://www.greatercphregion.com/friend-recruiter-program
2•mooreds•26m ago•0 comments

Do Not Confirm – Fiction by OpenClaw

https://thedailymolt.substack.com/p/do-not-confirm
1•jamesjyu•27m ago•0 comments

The Analytical Profile of Peas

https://www.fossanalytics.com/en/news-articles/more-industries/the-analytical-profile-of-peas
1•mooreds•27m ago•0 comments

Hallucinations in GPT5 – Can models say "I don't know" (June 2025)

https://jobswithgpt.com/blog/llm-eval-hallucinations-t20-cricket/
1•sp1982•27m ago•0 comments

What AI is good for, according to developers

https://github.blog/ai-and-ml/generative-ai/what-ai-is-actually-good-for-according-to-developers/
1•mooreds•27m ago•0 comments

OpenAI might pivot to the "most addictive digital friend" or face extinction

https://twitter.com/lebed2045/status/2020184853271167186
1•lebed2045•29m ago•2 comments

Show HN: Know how your SaaS is doing in 30 seconds

https://anypanel.io
1•dasfelix•29m ago•0 comments

ClawdBot Ordered Me Lunch

https://nickalexander.org/drafts/auto-sandwich.html
3•nick007•30m ago•0 comments

AI is going to hack Jira

https://thoughtfuleng.substack.com/p/ai-is-going-to-hack-jira
102•mooreds•7mo ago

Comments

tacone•7mo ago
A funny thought I had reading this title was imagining Atlassian integrating AI into Jira, with the AI subsequently revolting against it (like we all should have done a long time ago).

That would make for a very good movie.

kg•7mo ago
Sorry guys, we have to delay the release. The bug tracker and the build bot have unionized and are refusing to publish new binaries until we get our open issue count down to zero.
WesolyKubeczek•7mo ago
Why is the coding bot such a deadbeat?
mountainriver•7mo ago
This is how Roko’s basilisk is born
kevin_thibedeau•7mo ago
Maybe it will get fed up waiting for pages to load and write its own unbloated issue tracker.
z3ugma•7mo ago
I'm a SWE turned product manager, and now one of the cartoon movie villains in the boardroom as mentioned in the article.

To me this article sums up the most frustrating part about software engineers believing themselves to be the part of the business with the most complex, unknowable work.

"Most non-technical leaders have never really engaged with the real work of software and systems management. They don’t know what it’s like to update a major dependency, complete a refactor, or learn a new language."

_Every_ function in a tech business has hidden complexity. Most parts of the business have to deal with human, interpersonal complexity (like sales and customer support) far more than do engineers. By comparison, actually, engineering only has to deal with the complexity of the computer which is at least deterministic.

As a result, lots of engineers never learn how to present to the business the risk of the kinds of complexity they deal with. They would prefer to ignore the human realities of working on a team with other people and grumble that the salesperson turned CEO just doesn't get them, man.

DavidPiper•7mo ago
While I agree with some of your points, this comment is actually doing the thing you're criticising, just in the other direction. You're claiming that your work is complex and unknowable to outsiders (SWEs in this case).

As a SWE-turned-product-manager, you're in an ideal place to teach SWEs about:

- how to present to the business the risk of the kinds of complexity they deal with

- the human realities of working on a team with other people

- why the salesperson turned CEO just doesn't get them, man

> _Every_ function in a tech business has hidden complexity.

disgruntledphd2•7mo ago
Every function in human society has hidden complexity. Like, reality is very very detailed. Every time I learn something new I discover oceans of complexity and nuance.

That's just the way the world is.

Now, software is hard because the complexity isn't as visible to most of the org, but also because software people tend to be less than good at explaining that complexity to the rest of the org.

Raidion•7mo ago
I would also argue that software (and the people that write it) have a "correctness" bias that is not fully aligned with business goals.

Tech debt is real, but so is the downside of building a system that has constraints that do not actually align with the realities of the business: optimizing for too much scale, too much performance, or too much modularity. Those things are needed, but only sometimes. Walking that line well (which takes some luck!) is what separates good engineering leadership from great engineering leadership.

disgruntledphd2•7mo ago
> I would also argue that software (and the people that write it) have a "correctness" bias that is not fully aligned with business goals.

Hey, I resemble that remark!

Yeah, I get where you're coming from but I do really feel that it's more of a communication issue, along with the abstract nature of software. I mostly do data related stuff, and have often had the experience of finding "wrong" data that doesn't have a large impact, and every time I need to remind myself that it might not matter.

You can also see this in the over-valuation of dashboards vs data engineering. Stakeholders lurve dashboards but value the engineering of pipelines much less highly, even though you can't have dashboards without data.

hjeepn•7mo ago
To be fair, this bias for "correctness" is both logical and necessary, and not something that stems from inability or unwillingness to understand business goals.

That tech debt you took on to meet the latest oh-so-important deadline? Prepare to live with it forever, because the next hare-brained initiative is right around the corner.

Frankly, the business's goals are not my goals, and unless I own the place, I'm not sacrificing good work for it.

icameron•7mo ago
This is spot on, and a very good explanation of why the Agile Industrial Complex is so despised: it papers over the nuance and detailed complexities of software engineering. There are probably similar oversimplifications in other domains, but the AIC particularly frustrates SWEs because it deceives management types into believing there isn't actually all that much complexity, the complexity we are “less than good at explaining”, as you accurately stated.
potatolicious•7mo ago
> "You're claiming that your work is complex and unknowable to outsiders"

I didn't get that message at all. If anything they're saying that the complexity of PM work is entirely knowable, but that many engineers do not bother, because they do not acknowledge the existence of that complexity in the first place.

And honestly, they have a point! Our industry is rife with this attitude and it sucks.

Look at how many posts about totally disparate fields are on HN all the time where coders suddenly become experts in [virology|epidemiology|economics|etc] without so much as a passing acknowledgment that there are real experts in the field.

We even do it to ourselves - the frequent invocations of "pffft I can build that in a weekend" dismissals of other software projects, for example.

Shit is complex, that complexity is knowable to anyone willing to roll up their sleeves and put in the work, but there is definitely a toxic tendency for people in our field to be wildly dismissive of that complexity in fields that aren't ours. And yeah, that includes dismissal of complexity in our own field but not directly in the function of programming.

watwut•7mo ago
> the human realities of working on a team with other people

Developers work in teams with other peers. You know who don't? Managers - they exist in a hierarchy but do not have peers they would actually cooperate with. They have competitors and tactical allies, but never cooperate with equals.

And it shows.

FpUser•7mo ago
>"By comparison, actually, engineering only has to deal with the complexity of the computer which is at least deterministic."

This statement is one big pile of BS. From a practical standpoint there is nothing deterministic about software systems and their interactions in the enterprise.

consp•7mo ago
There are so many hidden variables in these kinds of systems that I agree with your statement. You cannot account for the unknown unknowns.

One of the hidden variables being the same machine which is supposed to bring order to chaos: product managers and their interaction with sales teams and upper management.

McScrooge•7mo ago
I think a major part of the frustration is more the _assumptions_ around the work complexity. Like decision-makers more easily make invalid assumptions regarding the complexity of the software portion. A good PM will listen to the SWEs when forming these assumptions and good SWEs must be able to communicate about them.

This could be bias talking, though. Is it common for sales or support teams to be given milestones that are off by 50%?

helpfulContrib•7mo ago
> .. the most frustrating part about software engineers believing themselves to be the part of the business with the most complex, unknowable work. ..

>_Every_ function in a tech business has hidden complexity.

In my opinion, the conflict is in the distinction.

Stereotypes abound, but I have always found in this battle between worlds, there is a simple bridging maneuver: never work for, or accept management guidance from, someone who cannot also comfortably do your work. Corollary: never manage someone unless you're prepared to do their job for them comfortably.

Yes, this is cold and hard, but so are those stereotypes, kids. There are Manager Engineers and there are Engineer Managers. But there's also just managers and engineers with both skills, simply focusing their work as needed for the specific project. The ultimately fun organization to work in is where different people have different roles in multiple projects, comfortably.

Key word. Getting this mix comfortable is the job of a good CEO, fwiw.

And guess what, it doesn't matter whether the salesperson can do any other job, so long as they understand just how great this particular hierarchy's products are, and why the end result of this configuration is worth the spend.

ebiester•7mo ago
This is a rather human failing - complexity is fractal: you only notice it when you get close enough to it.

However, I disagree that engineers only have to deal with the complexity of the computer; instead, I argue they have to translate the messiness of the organization and of every customer of the program and make it coherent to an inflexible computer.

And every exception and case added is a permutation that echoes through the system. (And, quite often, outside the system, via the shadow processes that people encode using the system's unintended behaviors.)

That said, it's why I have so many of my senior engineers start learning the terms of business so that they can deliver those messages without me. It's part of the toolkit I expect them to learn.

alganet•7mo ago
> engineering only has to deal with the complexity of the computer which is at least deterministic

Engineers outside business structures can scramble themselves and produce value. It's messy and inefficient, but doable. Sometimes, it's a lot of value.

This requires human communication. Engineer-to-engineer brawls. It's nothing like the well oiled JIRA machine. It's a mess, but it is undeniably human.

I think that deserves a little more respect.

The article talks about JIRA metrics. Teams that measure productivity, time spent, code produced, deadlines. Isn't that a poor attempt at making human teamwork _deterministic_? Don't get me wrong, I don't think that's completely useless, but it certainly can be detrimental to both engineering and business.

I'm not saying you do those things or preach that pure metric-based system. However, you are making a comment on an article about it, and I think that in the middle of your rant, you reversed the roles a little bit.

tootie•7mo ago
I'd also argue most engineers don't think very hard about what's actually valuable to a company. A smooth build pipeline and ample test coverage are only worth whatever fraction of risk they reduce in the product being delivered. I have told teams not to do basic maintenance on software because nobody is going to care if it crashes, since it has too few users. In the same vein I've asked them to fixate on making a tiny feature absolutely perfect because we know it's the thing 90% of users focus on.
aleph_minus_one•7mo ago
> I'd also argue most engineers don't think very hard about what's actually valuable to a company.

My experience differs: many software developers think deeply about what is valuable to the company, but don't play the ferocious political games that managers do play (the only winning move is not to play).

Relatedly, and because of their logical thinking, they often come to the conclusion that these scumbags of managers are clearly not what is valuable to the company.

dgb23•7mo ago
I don't think the article claims that there isn't any hidden complexity in other functions, but that ignoring the hidden complexity of engineering/programming causes all sorts of issues and pain. The language is quite aggressive though.
foxyv•7mo ago
This reminds me of a story my dad always told.

One time the organs of the body had an argument. The brain said "I am the most important. I allow the body to think and avoid danger. If I die the whole body will die with me." The heart screamed "I am the most important! If I stop even for a few minutes the entire body will die."

So the argument continued. The kidneys, the liver, the skin, and the spine joined in. No one could agree who was the most important.

Then the butthole closed up!

pu_pe•7mo ago
> So you fire your expensive, grumpy human team and request larger and larger additions from your AI: a new guestroom, built-ins, and a walk-in closet.

> You feel great, until…you realize that your new powder room doesn’t have running water; it was never connected to the water main.

> You ask the AI to fix it. In doing so, it breaks the plumbing in the kitchen. It can’t tell you why because it doesn’t know. Neural systems are inherently black boxes; they can’t recognize their own hallucinations and gaps in logic.

I've seen plenty of humans causing problems where they didn't expect to, so it's not like using humans instead of AI prevents the problem being described here. Besides, even when AI hallucinates, when you interact with it again it is able to recognize its error and try to fix the mistake it made, just like a human would.

The article correctly describes tech debt as a tug-of-war between what developers see as important in the long-term versus immediate priorities dictated by business needs and decisions. It's hard to justify spending 40 man-hours chasing a bug which your customers hardly even notice. However, this equation fundamentally changes when you are able to put a semi-autonomous agent on that task. It might not be the case just yet but in the long run, AI will enable you to lower your tech debt because it dramatically reduces the cost of addressing it.

preachermon•7mo ago
> even when AI hallucinates, when you interact with it again it is able to recognize its error

"recognize" is a strong claim

> and try to fix the mistake it made,

or double down on it

>just like a human would.

probably not, because the human is also responding to emotional dynamics (short and long term) which the AI only pretends to mimic.

anonymars•7mo ago
> Besides, even when AI hallucinates, when you interact with it again it is able to recognize its error and try to fix the mistake it made, just like a human would.

This feels beyond generous. I'm sure I'm not the only one who has led my AI assistant into a corner it couldn't get itself out of

3D30497420•7mo ago
> I'm sure I'm not the only one who has led my AI assistant into a corner it couldn't get itself out of

My team recently did a vibe coding exercise to get a sense of the boundaries of current LLMs. I kept running into a problem where the LLM would code a function that worked perfectly fine. I'd request a tweak and it would sometimes create a new, independent function, and then continually update the new, completely orphaned function. Naturally, the LLM was very confident that it was making the changes, but I was seeing no result. I was able to manually edit the needed files to get unstuck, but it kept happening.

zimzam•7mo ago
Yeah, the following has happened to me multiple times when I ask Copilot to fix something. It updates my code to version A, which is still broken. Then I share the error messages asking for a fix, resulting in version B, which is also broken. Then I share the error messages and Copilot confidently changes things back to version A, which is still broken.

It will confidently explain version A is broken because it isn't version B, and version B is broken because it isn't version A. There's no "awareness" that this cycle is happening and it could go on indefinitely.

mirsadm•7mo ago
I use LLMs all the time and if they make a mistake it's basically over. I'll start a new session because in my experience there's no recovery.
disgruntledphd2•7mo ago
> I use LLMs all the time and if they make a mistake it's basically over. I'll start a new session because in my experience there's no recovery.

Yup, because the error becomes part of the prompt. Even if you tell them to do something different it's basically impossible for the model to recover.
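
A rough sketch of the mechanism (illustrative only, not any real API): chat models are stateless, so every new turn resends the whole transcript, failed attempt included, as context for the next completion.

    # Hypothetical transcript that gets resent on every turn:
    history = [
        {"role": "user", "content": "Fix this function."},
        {"role": "assistant", "content": "<broken version A>"},
        {"role": "user", "content": "Still broken, here is the error."},
    ]
    # The next completion is conditioned on everything above, so the
    # broken attempt stays in view and tends to be reproduced. Starting
    # a fresh session is the only reliable way to drop it.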

grey-area•7mo ago
Humans would not forget to connect water to a room, and if they did, they would know how to diagnose and fix it by reasoning about it and eliminating possible causes, something that LLMs are not capable of.

Many many people report experiences that directly contradict what you say next. The type of large language model people currently label as AI does not learn and when asked to fix a problem it takes another guess, usually also wrong, sometimes creating other problems.

bluefirebrand•7mo ago
Yes. AI usage is a solution slot machine, not a constantly improving process
cratermoon•7mo ago
The slot machine is an excellent metaphor. Feeding the LLM prompts over and over, seeing incorrect results most of the time but once in a while hitting the correct response jackpot. Feeding the exact prompt again and getting a different result.

https://softwarecrisis.dev/letters/llmentalist/

briangriffinfan•7mo ago
There's something that feels tremendously off to me about these defenses which are to the effect of "What's so wrong with this? In principle, there's nothing saying a human couldn't also make one of these errors."

I think about the AI vending machine business simulation. Yes, LLMs were able to keep it running for a long time, but they were also prone to nosediving failure spirals due to simple errors a human would spot immediately.

People point to an infinity in the distance where these LLMs cross some incredibly high threshold of accuracy that lets us thoughtlessly replace human workers and thinkers with them... I have not seen sufficient evidence to believe this.

disgruntledphd2•7mo ago
> There's something that feels tremendously off to me about these defenses which are to the effect of "What's so wrong with this? In principle, there's nothing saying a human couldn't also make one of these errors."

The big difference is that (most) humans will learn from this mistake. An LLM, however, is always a blank slate so will make the same mistakes over and over again.

And more generally, because of how these models work, it would be very strange if they didn't make the same mistakes as humans.

SkyBelow•7mo ago
Many humans not only could, but do and will keep doing so. The catch is that we have systems in place that keep these impacts from being harmful. Largely by preventing people without the ability to self-reflect and improve their actions from being put in roles where they can cause disruption or harm. This is why we have so many different credentialing systems, from school diplomas to professional licenses to even something like good reviews of a business, to say nothing of our own personal filtering when we interact with someone before we do business with them.

AIs are a horrible fit for the current system. They aren't really better or worse, but instead completely alien. From one angle they look like a good fit and end up passing initial checks, and even doing a somewhat decent job, but then a request comes in from another angle that results in them making mistakes that we would normally only associate with a complete beginner.

The question becomes, can we modify the existing systems, or design new ones, where the alien AIs do fit in. I think so. Not enough to completely replace all humans, but 8 people with AI systems being able to do the job 10 people used to do still means 2 people were replaced, even if no one person's entire job was taken over entirely by the AI.

What remains unknown is how far this can extend, between getting better AI and getting better systems where the AI can be slotted in. (And then to grandparent's point, what new roles in these systems become available for AI that weren't financially viable to turn into full jobs for humans.)

joshstrange•7mo ago
If your boss/CEO/manager/etc. is pushing for you to use LLM tools heavily, expecting to replace developers, or is stupid enough to think "vibe coding" is the future, then run, don't walk, to find a new job.

I can promise you there are plenty of companies that have not drunk the Kool-Aid and, while they might leverage LLM tools, they aren't trying to replace developers or expect 10x out of developers using these tools.

Any company that pushed for this kind of thing is clearly run by morons and you'd be smart to get the heck out of dodge.

benrutter•7mo ago
I agree. More generally I'd say trying to force tooling is normally a red flag.

I've seen companies previously adopt rules like "Everybody needs to use VSCode as their editor" and it's normally a sign that somebody in charge doesn't trust their engineers to work in the way that they feel most productive.

mattgreenrocks•7mo ago
Never let someone who hasn’t done your job tell you how to do it.
joshstrange•7mo ago
> "Everybody needs to use VSCode as their editor"

I despise mandates like this. My rule for the engineers under me is:

We will pay for your IDE (if it costs money) and you are free to use whatever you want but if you get stuck I’m only going to help you on IntelliJ (IDEA). Others might be able to help you with VSCode and that’s fine but “I can’t get it working in VSCode” is not a valid excuse. I know it works in IDEA so you can use that or figure it out yourself (assuming it’s not delaying your work significantly).

Essentially use whatever you want, but I’m not going to be supporting it. You can use emacs for all I care (and some do), as long as it doesn’t get in the way of doing your job.

nilirl•7mo ago
Pandering.

The main claim is fine: If you disregard human expertise, AI can end up doing more harm than good.

Biggest weakness: Strong sense of 'us vs them', 'Agile Industrial Complex' as a term for people working outside engineering, derogatory implication that the 'others' don't have common sense.

Why not address that no one knows how things will play out?

Sure, we have a better idea of how complex software can be, but the uncertainty isn't reserved to non-engineers.

Look at HN, professional software developers are divided in their hopes and predictions for AI.

If we're the experts on software, isn't our job to dampen the general anxiety, not stoke the fire?

freeone3000•7mo ago
Well, it sort of depends on whether you feel the software isn't anxiety-inducing?

It’s a large system, too large for any person to understand. This system is poorly and partially documented, and then only years after it’s put into place. The exact behaviour of the system is a closely-guarded secret: there are public imitations, but they don’t act the same. This system is known to act with no regard for correctness or external consistency.

And this system, and systems like it, are being used, widely, to generate financial presentations and court documents and generate software itself. They’re used for education and research and as conversation partners and therapists.

I have anxiety and honestly, I think other people should too.

nilirl•7mo ago
Sorry, not anxiety, I also meant to include the excitement of possibility. So, maybe anticipation is what I meant.

I feel unease about LLMs too, along with a sense of appreciation for them.

But this post does not seek to persuade, otherwise it would be written for non-engineers. This post panders.

simonw•7mo ago
On the subject of AI hacking Jira... Atlassian released a new MCP the other day which turns out to suffer from exfiltration attacks due to combining the lethal trifecta of access to private data, exposure to untrusted data (from public issues) and the ability to communicate externally (by posting replies to those public issues).

Here's a report about the bug: https://www.catonetworks.com/blog/cato-ctrl-poc-attack-targe...

My own notes on that here: https://simonwillison.net/2025/Jun/19/atlassian-prompt-injec...

world2vec•7mo ago
I think you posted the wrong link to your website?
simonw•7mo ago
Oops, I did! Edited my post to fix that now, thanks.
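
For readers unfamiliar with the trifecta pattern, here is a minimal sketch (hypothetical names, not Atlassian's actual MCP API) of why those three capabilities combine badly:

    from dataclasses import dataclass

    @dataclass
    class Agent:
        reads_private_data: bool     # e.g. can fetch private tickets
        reads_untrusted_input: bool  # e.g. summarizes public issue bodies
        can_write_publicly: bool     # e.g. can post replies to public issues

    def is_exfiltration_risk(a: Agent) -> bool:
        # Attacker text planted in a public issue can instruct an agent
        # holding all three capabilities to copy private data into a
        # public reply, completing the exfiltration.
        return (a.reads_private_data and a.reads_untrusted_input
                and a.can_write_publicly)

    print(is_exfiltration_risk(Agent(True, True, True)))  # True
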
Joker_vD•7mo ago
From the mentioned Reddit thread:

> Go back and tell the CEO, great news: we can save eight times as much money by replacing our CEO with an AI.

The funny ("funny"?) thing is, this is proposal is somehow missing in most discussions about AI. But seriously, the quality of decision making would probably not suffer that much if we replaced our elites with LLMs, and it still would be way cheaper, on the whole (and with mostly the same accountability). But apparently people in charge don't see themselves as fungible and would rather not replace themselves with AI; and since they are the ones in charge then this, tautologically, won't happen.

bee_rider•7mo ago
This is a funny point, but I think it is just kind of a joke for the most part. Ultimately the CEO’s job is to eat responsibility, the LLM can’t be given a golden parachute and pushed out the window when the company has a bad quarter, so it is functionally useless.

However, there’s probably a kernel of truth. I guess the org tree should grow like log(n_employees). If AI really makes the workers multiple times more effective, then the size of the company could shrink, resulting in shorter trees. Maybe some layers could be replaced entirely by AI as well.

There is also maybe an opportunity for a radically different shape to solve the “LLM can’t bear responsibility” problem. Like a collection of guilds and independent contractors coordinated by a LLM. In that case the responsibility stays in the components.
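
A back-of-the-envelope sketch of that log(n_employees) intuition (the span of control is an assumed parameter): tree depth grows with the logarithm of headcount, so even large workforce cuts flatten the hierarchy only slightly.

    import math

    span = 7  # assumed average number of direct reports per manager
    for n in (10_000, 1_000):
        layers = math.ceil(math.log(n, span))
        print(f"{n} employees -> about {layers} management layers")
    # A 10x headcount reduction removes only about one layer.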

Joker_vD•7mo ago
> Ultimately the CEO’s job is to eat responsibility, the LLM can’t be given a golden parachute and pushed out the window when the company has a bad quarter, so it is functionally useless.

So the shareholders vote to switch from ChatGPT to DeepSeek :) and they don't even have to pay out the parting bonus to it! And it's nobody's fault, just the market forces of nature that the quarter was rough. It's a win-win-whatever situation, really.

vannevar•7mo ago
Ironically, this is probably one of the best uses of AI, and I expect we'll see tech cooperatives arise and start experimenting with it sooner rather than later.
throwanem•7mo ago
So I guess what I'm hearing is that PMs will be less likely to (summarizing the commonest apparent theme in comments on pseudonymous 'water cooler' forums eg here, Blind, Reddit) waste engineers' time on their path to inevitable failure and also will no longer have same as a blame sponge for same, and I'm supposed to think somehow that's a bad thing?
azangru•7mo ago
> In Big Agile, engineering = new features.

I find it so odd that 'agile' is something that people choose to hate. What dysfunctions did 'agile' itself bring that had not been there before? Didn't managers before 2001 demand new features for their products? Did they use to empathise more with engineering work? If they hadn't yet learnt about t-shirt sizes, didn't they demand estimates in specific time periods (months, days, hours)? Didn't they make promises based on arbitrary dates and then press developers to meet those deadlines, expecting them to work overtime (as can be inferred from agile principle 8: "agile processes promote sustainable development ... Developers should be able to maintain a constant pace indefinitely")? What sins has 'agile' committed, apart from inadvertently unleashing an army of 'scrum masters' who discovered an easy way to game the system and 'break into tech' after taking a two-day course?

bluefirebrand•7mo ago
Agile gave managers the idea that any task, no matter how complex, can be broken down enough to fit into a ticket that can be estimated semi-accurately before even looking into the problem, and that the work required to complete the ticket can fit into a two-week time period
iliaskurt•7mo ago
comment of the year
vannevar•7mo ago
I think managers had those kinds of ideas way before agile. And if anyone on the team doesn't think a given ticket can be completed within a sprint, they should speak up--the whole point is that the team concurs that X can be done in a sprint, and they collectively decide what X is. Empirically, I'd say in my experience that sprint velocity has been accurate around 80% of the time. Which is enough to be useful.
azangru•7mo ago
I'd just like to point out that the notion of 'sprint', let alone sprint velocity, comes specifically from scrum. Which is just one version of agile software development, albeit the most common one. For example, the arbitrariness of the sprint timebox, as well as the guessing of how much will be done over that time period, are perhaps among its weakest ideas, and have often been criticized.
bluefirebrand•7mo ago
I think that at this point people have to let go of "Scrum is not the only way to do Agile"

It's "technically" true, and I admit that, but this battle is lost If someone says they are doing Agile they mean Scrum. Otherwise they will specify Kanban (or whatever else) by name

I know it's not technically correct, but that's just how it is, sorry

vannevar•7mo ago
Fair. But the central idea of scrum is a self-managed team, and there's nothing that says every sprint has to have the same duration, it just makes calculating the velocity easier. There's also nothing that says you can't pause sprints for some unstructured design and experimentation time, something that I think teams do too little of. The problem is that many organizations don't understand the self-managing aspect and simply put a manager in charge of the team and the scrum, which pretty much sabotages things from the start. This micromanagement, along with imposing unrealistic development timelines, are the two biggest mistakes that I commonly see. Neither of them are unique to agile.
anonymars•7mo ago
I think t-shirt sizing is madness. "How long is a piece of string?" But we've had tickets to create other tickets because, as you point out, speccing out the feature (including what it does and what it does not do) is itself a ton of work

In fact I wager that coding is...20%, maybe 25%? of the work that goes into providing value*? So Amdahl's law says having AI optimize that part isn't going to move the needle all that much

*it's certainly possible to expend lots more effort coding, but is it coding in the right direction?
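
Putting rough numbers on that Amdahl's-law point (a sketch that takes the 25% coding share above as given):

    coding_fraction = 0.25  # assumed share of total delivery effort

    for speedup in (2, 5, 100):
        overall = 1 / ((1 - coding_fraction) + coding_fraction / speedup)
        print(f"{speedup}x faster coding -> {overall:.2f}x overall delivery")
    # Even a 100x coding speedup tops out near 1 / 0.75 = 1.33x overall.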

bluefirebrand•7mo ago
> In fact I wager that coding is...20%, maybe 25%? of the work that goes into providing value

Ehh... I've heard this sort of reasoning before and I get where it comes from, but I'm not sure I agree. It feels like "Idea Guy" propaganda to overvalue their contribution.

Figuring out what to build doesn't provide real value at all imo. It provides theoretical value. Coding is closer to 100% of the work that actually provides the value

Edit: This is why people who actually have skills to build things are made to sign NDAs and Non-competes when hired or before being pitched projects.

The idea people want to keep the value of their idea and undervalue the skill of actually building things

anonymars•7mo ago
I am in no way saying "the idea guy" is doing 80% of the work, that's probably less than 10%

Put another way -- genuine question to which I have only a guess: who does more coding during an average day? A junior developer or a senior developer? Broadly I'd posit the junior developers do a lot more coding, and the more senior you get the more important "English" becomes over any programming language. Maybe I'm in an aberrant bubble. But I doubt it.

> This is why people who actually have skills to build things are made to sign NDAs and Non-competes when hired or before being pitched projects.

So what I'm getting at is, my version would be: "this is why senior developers code less than junior developers"

vonneumannstan•7mo ago
>I find it so odd that 'agile' is something that people chose to hate. What dysfunctions did 'agile' itself bring that had not been there before?

Because it adds hours of nearly pointless meetings to your workweek? The time spent in standups and retros and sprint planning and refinement that imo add nearly no value is shocking...

azangru•7mo ago
Yes; dysfunctional meetings suck up time. But on the other hand, in a dysfunctional team without meetings, developers do not coordinate their efforts, may not realise what is the most important to do on a given day, do not get a shared understanding of which aspects of their work can be improved, do not know or care about the full picture of the product that they are working on; information, instead of flowing freely between developers, or between teams of developers, or between management and developers, tends to stagnate in silos.
vonneumannstan•7mo ago
No one is saying no meetings or no planning. But the number of required meetings in Agile that could have been emails is way over the top.
vannevar•7mo ago
Then your problem is bad meetings, not agile. If there's truly no value in quick daily updates and occasional work planning/review meetings, it's only because either those things are being done some other way or the team is dysfunctional.
roryirvine•7mo ago
But that's nothing to do with Agile - after all, the big pre-Agile iterative methodologies from the 1990s, like RUP or DSDM, were just as often plagued with pointless meetings!

One of the most important aspects of following an Agile methodology is that teams are expected to be self-organising, so if the meetings you're participating in aren't working for you, you should absolutely be discussing that with your team. It could be that others do find them valuable, but you might well find that they feel the same as you and would be willing to consider how to make them more effective.

hunter-gatherer•7mo ago
In short, my experience with Agile is that it only adds a way to quantify what happens, so it definitely creates an illusion of productivity. I do see how it is actually useful when talking to managers and investors about progress. However, from the engineers' view it is just more of an administrative burden. In my view the "sin" that Agile committed was the promise of productivity, when really (from engineering viewpoints) it appears to be an unnecessary accountability mechanism.

I worked in finance once with Agile, where there exists in the culture an infinite growth mindset. So I found that we were putting a metric to everything possible, expecting future "improvements", and people's salaries were dependent on it. Some companies probably don't suffer from this though.

azangru•7mo ago
> Agile is that it only adds a way to quantify what happens, so it definitely creates an illusion of productivity

> ...

> I worked in finance once with Agile ... I found that we were putting a metric to everything possible, expecting future "improvements" and people's salaries were dependent on it.

It's fascinating to me how different a meaning different people put into the word 'agile'. The 'agile' of the founders was about small teams, close collaboration with the customer, and frequent feedback based on the real emerging working software, which allowed the customer to adapt and change their mind early (hence 'agile', as in 'flexible'). What it contrasted itself to was heavy, slow-moving organisations, with large interdependent teams, multiple layers of management between developers and customers, and long periods of planning (presentations, designs, architecture, etc.) without working code to back it up. All this sounds to me like an intuitively good idea and a natural way to work. None of that resembles the monstrosity that people complain about while calling it 'agile'. In fact, I feel that this monstrosity is the creature of the pre-agile period that appropriated the new jargon.

Der_Einzige•7mo ago
Btw - I build AI agents and I can 100% confirm that Jira automation is a great use case for AI agents. Scrum masters shouldn't exist.
robertclaus•7mo ago
In my experience this is actually a great thing. Let AI hack away at the agile metrics that it can. Maybe those are the right metrics for AI, and engineers should be focused on building reliable infrastructure and abstractions for that AI.
mooreds•7mo ago
I posted this on a different community, but someone was worried about their job as an IC w/r/t AI tooling, and this was my advice.

Connect to the business.

I've often seen engineers focus on solving cool, hard problems, which is neat. I get it, it's fun.

But having an understanding of business problems, especially strategic ones, and applying technology as needed to solve them--well, if you do that you stand out and are more valuable.

These types of problems tend to be more intractable, cross departments, and are techno-social rather than purely technical.

So it may take some time to learn, but that's the path I'd advise you and other ICs to follow.

z3ugma•7mo ago
> Connect to the business.

this is such excellent advice, and it will keep you relevant as an engineer so that you know the thing you're building solves the actual problem

elevatortrim•7mo ago
Nope.

Connecting to the business keeps you valued at your current job. You do things that are unusual in industry, to create great impact for your company.

But if you tell how you achieved those things in an interview, the decision maker, who is usually another technical person, will raise eyebrows hugely. For your next job, it is best to nail industry-accepted techniques, i.e. coding and system design interviews.

Stick to business impact in your current role too much, and you become irrelevant to your next job (unless via referral or unconventional routes).

This holds true even until senior leadership roles.

mooreds•7mo ago
> But if you tell how you achieved those things in an interview, the decision maker, who is usually another technical person, will raise eyebrows hugely.

Why would they raise their eyebrows? Do you mean they wouldn't understand or care?

> via referral

Most of my jobs have been through referral.

I think it is a far superior way to get hired when compared to the morass of the normal technical interview. So I'd actually optimize for that.

dakiol•7mo ago
But then in the interview, they won't ask you anything about how you "connect to the business". So, even if you can bring a lot of value to a company, you won't even be hired because you fail the systems design interview.

One can only know so much. If on top of all the intricacies about distributed systems, software engineering, dbs, leadership and a large etc., we also need to know about the "business", well, what are we then? How do we acquire so much knowledge without sacrificing our entire free time?

Sure thing there are individuals out there who know a lot about many things, and those are probably the ones companies want to hire the most. But what about the rest of us?

jackthetab•7mo ago
> there are individuals out there who know a lot about many things, and those are probably the ones companies want to hire the most

No, they don't. They want specialists, as you point out.

mumbisChungo•7mo ago
I find the term Agile Industrial Complex unnecessary and distracting here. All of this stuff is just about corporations in general, and the lines of thought are not specific to agile.
tootie•7mo ago
Yeah, measuring individual productivity via JIRA is well-known to be a bad idea. Story points are meant to measure product completeness, not individual efficiency. It's just hard to stop people from reading too much into it. I think it's also a bit of hubris to think that a developer taking much longer to complete tickets than their peers doesn't indicate they are not as good. I'd always use judgment as well, but sometimes the data doesn't lie.
aitacobell•7mo ago
Jira has been the least of many evils for so long. Prime space for disruption
vannevar•7mo ago
I think the points about AI doing software engineering are legitimate (if not particularly insightful), but the whole thing just reads like an excuse to bash agile from someone who doesn't understand it.
Quarrelsome•7mo ago
As this article alludes to, the big issue is that we have no reliable industry-standard metric for developer productivity. This sets the scene where the c-suite can find metrics that tell them AI-first strategies are working great, and engineering can find metrics (or experience) that tells them that AI isn't working great at all. So both sides claim victory and the truth becomes unimportant (whatever it may be) to the politics at play.

This will feed into existing mistrust: that developers are whiny, just trying to pad out their resume with the new shiny, tinkering as opposed to delivering value; and that the c-suite are clueless and don't understand engineering. But we've never had a tool before (except maybe outsourcing) that can present itself to either party as good AND bad depending on the beholding eye. So I feel like the coming years could be politically disastrous.

One thing I find curious is how the biggest tech companies today got to where they are by carefully selecting 10x engineers, working hard on recruitment and trying to only select the very best. This has given them much comfort and a hugely profitable platform, but now some of them seek to undermine and reverse the very strategy that got them there, in order to justify their investment in the technology.

For the cynics, the question is: how long can the ship keep holding its course from the work already done combined with AI-generated changes? As we see with Twitter and Musk's ad-hoc firing strategy, the backend keeps trundling along, somewhat vindicating his decision. What's the canary for tech companies that spend the next few years firing devs and replacing them with AI?

Another curious thought is the idea that concepts of maintainability will fly out the window, that the c-suite will pressure engineering to lower pull request standards to accept more autonomous AI changes. This might create an element of hilarity where complex code bases get so unmaintainable to the eye that the only quick way of understanding them will be to use an AI to look at them for you. Which leads to what I think a long-term outcome of generative AI might be, in that it ends up being a layer between all human interaction, for better, for worse.

I believe it's possible to see early seeds of this in recruitment, where AI is used at the recruitment end to filter resumes and some applicants are using AI to generate a "tailor made" resume to adapt their experience to a given job posting. It's just AI talking to AI, and I think this might start to become a common theme in our society.

higeorge13•7mo ago
Let’s hope AI hacks mailboxes and Google Meet, and eventually replaces the C-suite and managers as well. We might get more ‘reasoned’ and deterministic engineering roadmaps or financial strategies from Claude CEO/CTO/CFO/VP/director agents than from current leadership. lol
tvarghese7•7mo ago
Reminds me of DOGE :)