The Future of AI Software Development

https://martinfowler.com/fragments/2026-02-18.html
84•nthypes•1h ago

Comments

fuzzfactor•1h ago
Looks to me like the people who are filthy rich [0] can afford to move so fast that even the people who are very rich in the regular way can't keep up.

[0] Which is not even a strong enough term; these are the ones with truly excess money to burn.

bilekas•54m ago
I'm not sure you read the article; it's not referring to financial debt, but tech debt.
fuzzfactor•22m ago
I like Fowler, and I read the article carefully.

Are you assuming tech debt has no financial cost?

adregan•56m ago
In the section on security:

> One large enterprise employee commented that they were deliberately slow with AI tech, keeping about a quarter behind the leading edge. “We’re not in the business of avoiding all risks, but we do need to manage them”.

I’m unclear how this pattern helps with security vis-à-vis LLMs. It makes sense when talking about software versions, in hoping that any critical bugs are patched, but prompt injection springs eternal.

bilekas•52m ago
> but prompt injection springs eternal.

Yes, but some are mitigated when discovered, and the more critical areas need to be isolated from the LLM. So taking their time to provision LLMs into their lifecycle is important, and they're happy to spend the time doing it right rather than just throwing the latest edge tech into their system.

ethin•46m ago
How exactly can you "mitigate" prompt injections? Given that the language space is for all intents and purposes infinite, and given that you can even circumvent these by putting your injections in hex or base64 or whatever? Like I just don't see how one can truly mitigate these when there are infinite ways of writing something in natural language, and that's before we consider the non-natural languages one can use too.
bilekas•36m ago
Full mitigation seems impossible, to me at least, but the obvious and public sandbox escape prompts that have been discovered get "patched" out, making it more difficult I guess. But afaiu it's not possible to fully mitigate.
lambda•26m ago
The only ways that I can think of to deal with prompt injection are to severely limit what an agent can access:

* Never give an agent any input that is not trusted

* Never give an agent access to anything that would cause a security problem (no read access to sensitive data/credentials, no write access to anything dangerous to write to)

* Never give an agent access to the internet (which is full of untrusted input, as well as places that sensitive data could be exfiltrated)

An LLM is effectively an unfixable confused deputy, so the only way to deal with it is effectively to lock it down so it can't read untrusted input and then do anything dangerous.

But it is really hard to do any of the things that folks find agents useful for without relaxing those restrictions. For instance, most people let agents install packages or look at docs online, but any of those could be places for prompt injection. Many people allow it to run git and push and interact with their Git host, which allows for dangerous operations.

My current experimentation is running my coding agent in a container that only has access to the one source directory I'm working on, as well as the public internet. Still not great as the public internet access means that there's a huge surface area for prompt injection, though for the most part it's not doing anything other than installing packages from known registries where a malicious package would be just as harmful as a prompt injection.

Anyhow, various people have been talking about how we need more sandboxes for agents, and I'm sure there will be products around that, though it's a really hard problem to balance usability with security here.
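To make the container approach above concrete, here is a minimal sketch of that kind of lockdown, assuming Docker is available; the image name and agent command are hypothetical placeholders, not any particular product:

    # Hypothetical sketch: run a coding agent in a container that can only
    # see one bind-mounted source directory. Image and command names are
    # placeholders, not a real tool.
    import subprocess

    def run_sandboxed_agent(src_dir: str) -> None:
        subprocess.run([
            "docker", "run", "--rm", "-it",
            "--mount", f"type=bind,source={src_dir},target=/workspace",
            "--workdir", "/workspace",
            "--cap-drop", "ALL",   # drop all optional kernel capabilities
            "--memory", "4g",      # bound resource usage
            "coding-agent-image",  # placeholder image with the agent installed
            "agent", "--project", "/workspace",
        ], check=True)

    run_sandboxed_agent("/home/me/projects/only-this-one")

Note that this still leaves the network open, which is exactly the usability-versus-security trade-off described above.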

Quothling•34m ago
I work in a NIS2-regulated sector and I'm not sure we can ever let any AI agent run in anything we do. We have a centralized solution where people can build their own chatbots with various configurations and across models. That's in the isolation of the browser though, and while I'm sure employees are putting things into it they shouldn't, at least it's inside our setup and not in whatever chatbot they haven't yet run out of tokens on. Security-wise though, I'm not sure how you can meet any form of compliance if you grant AIs access unless you have four-eyes validation on every single action they take... which is just never going to happen.

We've experimented with rolling our own open source models on local hardware, but it's so easy to inject things into them that it's not really going anywhere. It's going to be a massive challenge, because if we don't provide the tools, employees are going to figure out how to do it on their own.

riffraff•50m ago
I think the title on HN doesn't reflect all that is in TFA; it rather reflects the linked article [0]. Fowler's article is interesting though.

I do like the idea that "all code is tech debt", and we shouldn't want to produce more of it than we need. But it's also worth remembering that debt is not bad per se; buying a house with a mortgage is also debt, and can be a good choice for many reasons.

[0]: https://thenewstack.io/ai-velocity-debt-accelerator/

senko•44m ago
I like the "cognitive debt" idea outlined here: https://margaretstorey.com/blog/2026/02/09/cognitive-debt/ (from a participant of the retreat) and especially the pithy "velocity without understanding is not sustainable" phrase.
simonw•42m ago
Yeah that editorialized title is entirely wrong for this post. Problem is the real title is "Fragments: February 18" which is no good here either.

I suggest something like "Tidbits from the Thoughtworks Future of Software Development Retreat" (from the first sentence, captures the content reasonably well.)

eru•39m ago
Tech debt is totally misnamed. 'Tech debt' behaves more like equity than debt: if your project goes nowhere, the 'tech debt' becomes a non-issue.
senko•47m ago
What's with the editorialized title?

The text is actually about the Thoughtworks Future of Software Development retreat.

nthypes•7m ago
IMHO, it doesn't, but I have changed the title to avoid any confusion.
simonw•45m ago
> LLMs are eating specialty skills. There will be less use of specialist front-end and back-end developers as the LLM-driving skills become more important than the details of platform usage. Will this lead to a greater recognition of the role of Expert Generalists? Or will the ability of LLMs to write lots of code mean they code around the silos rather than eliminating them?

This is one of the most interesting questions right now I think.

I've been taking on much more significant challenges in areas like frontend development and ops and automation and even UI design now that LLMs mean I can be much more of a generalist.

Assuming this works out for more people, what does this mean for the shape of our profession?

AutumnsGarden•36m ago
I’ve become the same way. Instead of specializing in the unique implementations, I’ve leaned more into planning everything out even more completely and writing skills backed by industry standards and other developers’ best practices (also including LOTS of anti-patterns). My workflow has improved dramatically since then, but I do worry that I am not developing the skills to properly _debug_ these implementations, as the skills did most of the work.
mjr00•28m ago
IMO debugging is a separate skill from development anyway. I've known plenty of developers in my career who were fully capable of writing and shipping code, especially the kind of boilerplate widgets/RPCs that LLMs excel at generating, yet if a bug happened their approach was largely just changing somewhat random stuff to see if it worked rather than anything methodical.

If you want to get/stay good at debugging--again IMO--it's more important to be involved in operations, where shit goes wrong in the real world because you're dealing with real invalid data that causes problems like poison pill messages stuck in a message queue, real hardware failures causing services to crash, real network problems like latency and timeouts that cause services which work in the happy path to crumble under pressure. Not only does this instil a more methodical mentality in you, it also makes you a better developer because you think about more classes of potential problems and how to handle them.

neebz•33m ago
I've faced the same but my conclusion is the opposite.

In the past 6 months, all my code has been written by Claude Code and Gemini CLI. I have written backend, frontend, infrastructure and iOS code. Considering my career trajectory, all of this was impossible a couple of years ago.

But the technical debt has been enormous. And I'll be honest, my understanding of these technologies hasn't been 'expert' level. I'm 100% sure any experienced dev could go through my code and may think it's a load of crap requiring serious re-architecture.

It works (that's great!) but the 'software engineering' side of things is still subpar.

mikkupikku•8m ago
Similar experience for me. I've been using it to make Qt GUIs, something I always avoided in the past because it seemed like a whole lot of stuff to learn when I could just make a TUI or use Tkinter if I really needed a GUI for some reason.

Claude Code is producing working useful GUIs for me using Qt via pyside6. They work well but I have no doubt that a dev with real experience with Qt would shudder. Nonetheless, because it does work, I am content to accept that this code isn't meant to be maintained by people so I don't really care if it's ugly.

crystal_revenge•6m ago
A lot of people aren’t realizing that it’s not about replacing software engineers, it’s about replacing software.

We’ve been trying to build well engineered, robust, scalable systems because software had to be written to serve other users.

But LLMs change that. I have a bunch of vibe coded command-line tools that exactly solve my problems, but would very likely make terrible software. The thing is, this program only needs to run on my machine the way I like to use it.

In a growing class of cases bespoke tools are superior to generalized software. This historically was not the case because it took too much time and energy to maintain these things. But today if my vibe coded solution breaks, I can rebuild it almost instantly (because I understand the architecture). It takes less time today to build a bespoke tool that solves your problem than it does to learn how to use existing software.

There’s still plenty of software that cannot be replaced with bespoke tools, but that list is shrinking.

petcat•32m ago
Code is, I think, rapidly becoming a commodity. It used to be that the code itself was what was valuable (Microsoft MS-DOS vs. the IBM PC hardware). And it has stayed that way for a long time.

FOSS meant that the cost of building on reusable components was nearly zero. Large public clouds meant the cost of running code was negligible. And now the model providers (Anthropic, Google, OpenAI) mean that the cost of producing the code is relatively small. When the marginal cost of producing code approaches zero, we start optimizing for all the things around it. Code is now like steel. It's somewhat valuable by itself, but we don't need the town blacksmith to make us things anymore.

What is still valuable is the intuition to know what to build, and when to build it. That's the je ne sais quoi still left in our profession.

HPsquared•28m ago
Like column inches in a newspaper. But some news is important and that's the editor's job to decide.
Rover222•23m ago
Yes, agreed that coding (implementation), which was once extremely expensive for businesses, is trending towards a negligible price. Planning, coordination, and strategy at a high level are as challenging as ever. I'm getting more done than ever, but NOT working fewer hours in a day (as an employee at a product company).
rawgabbit•12m ago
From https://annievella.com/posts/finding-comfort-in-the-uncertai...

“Ideas that surfaced: code as ‘just another projection’ of intended behaviour. Tests as an alternative projection. Domain models as the thing that endures. One group posed the provocative question: what would have to be true for us to ‘check English into the repository’ instead of code?

The implications are significant. If code is disposable and regenerable, then what we review, what we version-control, and what we protect all need rethinking.”

softwaredoug•7m ago
I’d say the jury might be out on whether code is worthless for giant pieces of infrastructure (Linux kernel). There, small problems create outsized issues for everybody, so the incentive is to be conservative and focused on quality.

Second there’s a world of difference still between a developer with taste using AI with care and the slop cannons out there churning out garbage for others to suffer through. I’m betting there is value in the former in the long run.

chadash•44m ago
> Will LLMs be cheaper than humans once the subsidies for tokens go away? At this point we have little visibility to what the true cost of tokens is now, let alone what it will be in a few years time. It could be so cheap that we don’t care how many tokens we send to LLMs, or it could be high enough that we have to be very careful.

We do have some idea. Kimi K2 is a relatively high-performing open source model. People have it running at 24 tokens/second on a pair of Mac Studios, which cost $20k. This setup requires less than a kW of power, so the $0.08-$0.15 an hour being spent there is negligible compared to a developer. This might be the cheapest setup to run locally, but it's almost certain that the cost per token is far cheaper with specialized hardware at scale.

In other words, a near-frontier model is running at a cost that a (somewhat wealthy) hobbyist can afford. And it's hard to imagine that the hardware costs don't come down quite a bit. I don't doubt that tokens are heavily subsidized but I think this might be overblown [1].

[1] training models is still extraordinarily expensive and that is certainly being subsidized, but you can amortize that cost over a lot of inference, especially once we reach a plateau for ideas and stop running training runs as frequently.
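As a rough check on that claim, here is the arithmetic for the setup described above, assuming a 3-year hardware amortization and $0.10/kWh (both assumptions, not figures from the comment):

    # Back-of-the-envelope cost per million tokens for the Mac Studio setup
    # described above. The 3-year amortization and $0.10/kWh are assumptions.
    hardware_usd = 20_000
    amortization_hours = 3 * 365 * 24          # ~26,280 hours
    power_kw, usd_per_kwh = 1.0, 0.10
    tokens_per_second = 24

    usd_per_hour = hardware_usd / amortization_hours + power_kw * usd_per_kwh
    tokens_per_hour = tokens_per_second * 3600
    print(f"${usd_per_hour:.2f}/hour, "
          f"${1e6 * usd_per_hour / tokens_per_hour:.2f}/million tokens")
    # -> $0.86/hour, $9.97/million tokens

Even with hardware amortized in, that lands around $10 per million tokens at full utilization, still small next to a developer's hourly cost.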

embedding-shape•36m ago
> a near-frontier model

Is Kimi K2 near-frontier though? At least when run in an agent harness, and for general coding questions, it seems pretty far from it. I know what the benchmarks say (they always say it's great and close to frontier models), but is this others' impression in practice? Maybe my prompting style works best with GPT-type models, but I'm just not seeing it for the type of engineering work I do, which is fairly typical stuff.

fullstackchris•28m ago
Regardless, it's been 3 years since the release of ChatGPT. Literally 3. Imagine in just 5 more years how much low-hanging fruit (or even big breakthroughs), things like quantization, etc., will get into the pricing. No doubt in my mind the question of "price per token" will head towards 0.
crystal_revenge•14m ago
I’ve been running K2.5 (through the API) as my daily driver for coding through Kimi Code CLI and it’s been pretty much flawless. It’s also notably cheaper and I like the option that if my vibe coded side projects became more than side projects I could run everything in house.

I’ve been pretty active in the open model space, and 2 years ago you would have had to pay $20k to run models that were nowhere near as powerful. It wouldn’t surprise me if in two more years we continue to see more powerful open models on even cheaper hardware.

newsoftheday•27m ago
> a cost that a (somewhat wealthy) hobbyist can afford

$20,000 is a lot to drop on a hobby. We're probably talking less than 10%, maybe less than 5% of all hobbyists could afford that.

charcircuit•12m ago
You can rent compute from someone else to majorly reduce the spend. If you just pay for tokens, it will be cheaper than buying the entire computer outright.
consp•27m ago
$20k for such a setup for a hobbyist? You can leave the "somewhat" away and go into the sub-1% region globally. A kW of power is still $2k/year at least for me; not that I expect it to run continuously, but it's still not negligible when you can get by with $100-200 a year on cheap subscriptions.
simonw•20m ago
"a (somewhat wealthy) hobbyist"
PlatoIsADisease•20m ago
>24 tokens/second

This is marketing, not reality.

Feed it a few lines of code and it becomes unusable.

lambda•18m ago
You don't even need to go this expensive. An AMD Ryzen Strix Halo (AI Max+ 395) machine with 128 GiB of unified RAM will set you back about $2500 these days. I can get about 20 tokens/s on Qwen3 Coder Next at an 8 bit quant, or 17 tokens per second on Minimax M2.5 at a 3 bit quant.

Now, these models are a bit weaker, but they're in the realm of Claude Sonnet to Claude Opus 4. 6-12 months behind SOTA on something that's well within a personal hobby budget.
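For readers wondering why those quantization levels matter on a 128 GiB machine, a rough rule of thumb is that weights take about params x bits/8 bytes, plus overhead for KV cache and runtime. A sketch with illustrative parameter counts (placeholders, not the actual sizes of the models named above):

    # Rough fit check for quantized model weights in unified RAM.
    # The 1.2x overhead factor and parameter counts are assumptions.
    def weights_gib(params_billions: float, bits: int, overhead: float = 1.2) -> float:
        return params_billions * 1e9 * bits / 8 * overhead / 2**30

    for params_b, bits in [(70, 8), (70, 4), (120, 3)]:
        print(f"{params_b}B at {bits}-bit ~= {weights_gib(params_b, bits):.0f} GiB")
    # 70B at 8-bit ~= 78 GiB; 70B at 4-bit ~= 39 GiB; 120B at 3-bit ~= 50 GiB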

cowmix•6m ago
If you don't mind saying, what distro and/or Docker container are you using to get Qwen3 Coder Next going?
manwe150•8m ago
Reminder to others that $20k is the one-time startup cost, which amortizes to perhaps $2-4k/year (plus power). That is in the realm of a mere gym membership for a family around me.
siliconc0w•40m ago
Even with the latest SOTA models, I still consistently find issues: performance problems, security holes, memory leaks, bad assumptions/instruction-following failures, and even levels of laziness/gaslighting/dishonesty. I spend less time authoring changes but a lot more time reviewing and validating them. And that's with the best models (Opus 4.6/Codex 5.3); the OSS/flash models are still quite unreliable at solving problems.

Token costs are also non-trivial. Claude can exhaust a $20/month session limit with one difficult problem (it didn't even write code, just planned). Each engineer needs at least the $200/mo plan; I have multiple plans from multiple providers.

christkv•37m ago
My bet is that the amount of work needed per token generated will decrease over time, and models will become smaller for the same performance as we learn to optimize, so cost and needed hardware will go down.
anthonypasq•36m ago
What is up with all this nonsense about token subsidies? Dario in his recent interview with Dwarkesh made it abundantly clear that they have substantial inference margins, and they use that to justify the financing for the next training run.

Chinese open source models are dirt cheap, you can buy $20 worth of kimi-k2.5 on opencode and spam it all week and barely make a dent.

Assuming we never get bigger models, but hardware keeps improving, we'll either be serving current models for pennies, or at insane speeds, or both.

The only actual situation where tokens are being subsidized is free tiers on chat apps, which are largely irrelevant for any sort of useful economic activity.

simonw•17m ago
There exist a large number of people who are absolutely convinced that LLM providers are all running inference at a loss in order to capture the market and will drive the prices up sky high as soon as everyone is hooked.

I think this is often a mental excuse for continuing to avoid engaging with this tech, in the hope that it will all go away.

louiereederson•5m ago
Referring to my earlier comment, you need to have a model for how to account for training costs. If Anthropic stops training models now, what happens to their revenues and margins in 12 months?

There's a difference between running inference and running a frontier model company.

louiereederson•8m ago
Anthropic reduced their gross margin forecast per external reporting (below) to 40%, and have exceeded internal forecasts on inference costs. This does not take into account amortized training costs, which are substantial (well over 50% of revenue) and accounted for below gross profit. If you view training as a cost of staying in the game, then it is justifiable to treat it as at least a partially variable cost that belongs in gross margin, particularly given that models stay on the leading edge for only a few months. If that's the case, then gross margins are probably minimal, maybe even negative.

https://www.theinformation.com/articles/anthropic-lowers-pro...
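To see why the accounting treatment matters, here is the margin arithmetic implied by the comment, with revenue normalized to 1.0 (illustrative only, not actual financials):

    # Illustrative margin arithmetic; not actual Anthropic financials.
    revenue = 1.0
    inference_cost = 0.60   # implied by a 40% gross margin forecast
    training_cost = 0.50    # "well over 50% of revenue", taken at the floor

    gm_reported = (revenue - inference_cost) / revenue
    gm_with_training = (revenue - inference_cost - training_cost) / revenue
    print(f"reported: {gm_reported:.0%}, with training as COGS: {gm_with_training:.0%}")
    # -> reported: 40%, with training as COGS: -10%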

deadbabe•34m ago
There have been some back-of-the-napkin estimates of what AI could cost from the major platforms once no longer subsidized. It does not look good: they suggest a minimum of a 12x increase in costs.

Local or self hosted LLMs will ultimately be the future. Start learning how to build up your own AI stack and use it day to day. Hopefully hardware catches up so eventually running LLMs on device is the norm.

taeric•20m ago
I really hate that we allowed "debt" to become a synonym for "liability."

This isn't a case where you have specific code/capital you have borrowed and need to pay for its use or give it back. This is flat-out putting liabilities into your assets that will have to be discovered and dealt with, someday.

greymalik•14m ago
The headline misrepresents the source. It’s not the title of the page, not the point of the content, and it biases the quote’s context: “if traditional software delivery best practices aren’t already in place, this velocity multiplier becomes a debt accelerator”.
nthypes•7m ago
IMHO, it doesn't, but I have changed the title to avoid any confusion.
acomjean•13m ago
So do we need new abstractions/languages? It seems clear that a lot of things can be pulled together by AI because they’re tedious for humans. But that seems to indicate that better tooling is needed.
