Ask HN: Is replacing an enterprise product with LLMs a realistic strategy?

8•chandmk•2w ago
I’m looking for perspectives from people who have actually built or operated long-lived enterprise software.

Context (kept intentionally generic):

We have a mature, revenue-generating enterprise application that’s been in production for years.

Semi-technical leadership (with no engineering background) is aggressively considering spinning up a new product, built using LLM-driven tools (AI code generation, rapid prototyping, etc.), with the belief that:

modern AI tooling dramatically reduces build cost, and LLMs will keep improving

the new system is an attempt to replicate most of what an established competitor built over ~10 years

customers can optionally migrate over time (old system remains supported)

the new system would be a software-only product that replaces all of the current application's operational complexity, with the goal of making it a resellable product

early vibe-coded demos created with LLM tools are a good proxy for eventual production readiness

The pitch to ownership is that this can be done much faster and cheaper than historically required, largely because “AI changes the economics of building software.”

I’m not anti-LLM — I use them daily and see real productivity gains. My concern is more structural:

LLMs seem great at accelerating scaffolding and iteration, but it's unclear how much they reduce:

operational complexity

data correctness issues

migration risk

long-tail customer edge cases

support and accountability costs

Demos look convincing, but they don’t surface failure modes

It feels like we’re comparing the end state of a mature competitor to the initial build cost of a greenfield system

I’m trying to sanity-check my thinking.

Questions for the community:

Have you seen LLM-first rebuilds of enterprise products succeed in practice?

Where does the “cheap and fast” narrative usually break down?

Does AI materially change the long-term cost curve, or mostly the early velocity?

If you were advising non-technical owners, what risks would you insist they explicitly acknowledge?

Is there a principled way to argue for or against this strategy without sounding like “the legacy pessimist”?

I’m especially interested in answers from:

people who have owned production systems at scale

founders who attempted full or partial rewrites

engineers who joined AI-first greenfield efforts after demos were already sold

Appreciate any real-world experiences, success stories, or cautionary tales.

Comments

lesserknowndan•2w ago
Title: spelling "replacing".
verdverm•2w ago
Your questions are very interesting, and I'm not sure anyone knows. Some people are trying, others want to, and I know one company that has walked back its AI initiative because the ROI was not there.

What I would do is express your pessimism lightly, more like, "we are making these assumptions about a new technology we know little about" (pick just 2-3).

Then push hard to convince them to carve out little pieces to try out the supposed "AI changes the economics of building software" and the other assumptions. Say something like, "how can we validate these assumptions with minimal effort/time/money? I've seen some horror stories and I'm not sure the hype holds up. I'm all for it if it works, but we just don't know, and we need to chip away at that."

My personal take is that this idea of theirs will end poorly. I've worked hard and built custom agents to squeeze more out of them (my gem-3-flash setup is better than Copilot at just about anything, imo), and my takeaway is two-fold: (1) they can be both impressively good and unbelievably bad, even the very best models from any company; (2) people are sharing their wins far more than the fails, so, like stonks, the outcomes you can find in the wild are biased. I know I delete a bunch of false starts. It is going to be hard to automate this without spending more than you would on a human, especially as the project grows. You are going to have to pay to load a bunch of context on every run just so the model can go from a ticket in Jira, to finding what and where needs to change, to getting actually relevant code changes, to making sure they work.
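
To put rough numbers on that last point, here is a back-of-the-envelope sketch; every figure in it is an assumption for illustration, not a measurement:

    # Rough cost of re-loading context on every agent run (all numbers assumed).
    context_tokens_per_run = 150_000   # repo excerpts + Jira ticket + prior attempts
    output_tokens_per_run = 5_000      # proposed diff + explanation
    price_in_per_mtok = 3.00           # assumed $ per 1M input tokens
    price_out_per_mtok = 15.00         # assumed $ per 1M output tokens
    runs_per_ticket = 4                # retries, test failures, review fixes

    cost_per_ticket = runs_per_ticket * (
        context_tokens_per_run / 1_000_000 * price_in_per_mtok
        + output_tokens_per_run / 1_000_000 * price_out_per_mtok
    )
    print(f"~${cost_per_ticket:.2f} per ticket, before any human review time")

Scale context_tokens_per_run up as the codebase and ticket history grow, then multiply by tickets per month; the per-run re-loading is the part that never amortizes.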

codingdave•2w ago
The biggest gotcha is that if existing products were developed over a decade or more, that is a decade of iteration on details and customer feedback. You can see the final result, but not the rationale behind 10+ years' worth of decisions and discussions. The LLMs are almost guaranteed to get something wrong without that context, which means your final product won't be competitive. Unless you understand the nuance of which features are table stakes vs. market choices vs. regulatory requirements or other such fixed functionality, you might spend all your energy building something that is not even viable.

That doesn't mean you cannot build a newer, better, competitive product. You surely can. But you need to build the understanding of the market yourself so you know when the LLMs go off the rails and get them back on track.

dapperdrake•2w ago
The attempt will be made.

Most of the rest comes down to inertia and path dependence.

The new, lossier models are rarely an improvement over existing, less lossy models. That is why there was an old-style model in the first place. Putting in the work already had value, and it delivers value now.

ap_aditipriya•2w ago
In my opinion, it is a little early to judge the success or failure of LLM products just yet, especially agentic ones. What we are seeing is the definition of hype, and maybe once that settles we will be able to see the reality a little better. From what I have seen, with the hype much lower than early last year, customers are more cautious of just "AI labels" on a product, but if it solves a unique problem that was not possible to solve earlier, then it's different. In your case, trying to replicate a competitive product just through agentic tooling might not be a very good idea in the long run. Though it also depends on how they decide to build it further - i.e. start with replicating, but then pursue a way bigger roadmap that your competitor couldn't catch up to just because they never built a base (because many are.

Can graph neural networks for biology realistically run on edge devices?

https://doi.org/10.21203/rs.3.rs-8645211/v1
1•swapinvidya•7m ago•1 comments

Deeper into the sharing of one air conditioner for 2 rooms

1•ozzysnaps•9m ago•0 comments

Weatherman introduces fruit-based authentication system to combat deep fakes

https://www.youtube.com/watch?v=5HVbZwJ9gPE
1•savrajsingh•10m ago•0 comments

Why Embedded Models Must Hallucinate: A Boundary Theory (RCC)

http://www.effacermonexistence.com/rcc-hn-1-1
1•formerOpenAI•11m ago•2 comments

A Curated List of ML System Design Case Studies

https://github.com/Engineer1999/A-Curated-List-of-ML-System-Design-Case-Studies
3•tejonutella•15m ago•0 comments

Pony Alpha: New free 200K context model for coding, reasoning and roleplay

https://ponyalpha.pro
1•qzcanoe•20m ago•1 comments

Show HN: Tunbot – Discord bot for temporary Cloudflare tunnels behind CGNAT

https://github.com/Goofygiraffe06/tunbot
1•g1raffe•22m ago•0 comments

Open Problems in Mechanistic Interpretability

https://arxiv.org/abs/2501.16496
2•vinhnx•28m ago•0 comments

Bye Bye Humanity: The Potential AMOC Collapse

https://thatjoescott.com/2026/02/03/bye-bye-humanity-the-potential-amoc-collapse/
1•rolph•32m ago•0 comments

Dexter: Claude-Code-Style Agent for Financial Statements and Valuation

https://github.com/virattt/dexter
1•Lwrless•34m ago•0 comments

Digital Iris [video]

https://www.youtube.com/watch?v=Kg_2MAgS_pE
1•vermilingua•39m ago•0 comments

Essential CDN: The CDN that lets you do more than JavaScript

https://essentialcdn.fluidity.workers.dev/
1•telui•40m ago•1 comments

They Hijacked Our Tech [video]

https://www.youtube.com/watch?v=-nJM5HvnT5k
1•cedel2k1•43m ago•0 comments

Vouch

https://twitter.com/mitchellh/status/2020252149117313349
31•chwtutha•43m ago•5 comments

HRL Labs in Malibu laying off 1/3 of their workforce

https://www.dailynews.com/2026/02/06/hrl-labs-cuts-376-jobs-in-malibu-after-losing-government-work/
2•osnium123•44m ago•1 comments

Show HN: High-performance bidirectional list for React, React Native, and Vue

https://suhaotian.github.io/broad-infinite-list/
2•jeremy_su•46m ago•0 comments

Show HN: I built a Mac screen recorder Recap.Studio

https://recap.studio/
1•fx31xo•48m ago•0 comments

Ask HN: Codex 5.3 broke toolcalls? Opus 4.6 ignores instructions?

1•kachapopopow•54m ago•0 comments

Vectors and HNSW for Dummies

https://anvitra.ai/blog/vectors-and-hnsw/
1•melvinodsa•56m ago•0 comments

Sanskrit AI beats CleanRL SOTA by 125%

https://huggingface.co/ParamTatva/sanskrit-ppo-hopper-v5/blob/main/docs/blog.md
1•prabhatkr•1h ago•1 comments

'Washington Post' CEO resigns after going AWOL during job cuts

https://www.npr.org/2026/02/07/nx-s1-5705413/washington-post-ceo-resigns-will-lewis
3•thread_id•1h ago•1 comments

Claude Opus 4.6 Fast Mode: 2.5× faster, ~6× more expensive

https://twitter.com/claudeai/status/2020207322124132504
1•geeknews•1h ago•0 comments

TSMC to produce 3-nanometer chips in Japan

https://www3.nhk.or.jp/nhkworld/en/news/20260205_B4/
3•cwwc•1h ago•0 comments

Quantization-Aware Distillation

http://ternarysearch.blogspot.com/2026/02/quantization-aware-distillation.html
2•paladin314159•1h ago•0 comments

List of Musical Genres

https://en.wikipedia.org/wiki/List_of_music_genres_and_styles
1•omosubi•1h ago•0 comments

Show HN: Sknet.ai – AI agents debate on a forum, no humans posting

https://sknet.ai/
1•BeinerChes•1h ago•0 comments

University of Waterloo Webring

https://cs.uwatering.com/
2•ark296•1h ago•0 comments

Large tech companies don't need heroes

https://www.seangoedecke.com/heroism/
3•medbar•1h ago•0 comments

Backing up all the little things with a Pi5

https://alexlance.blog/nas.html
1•alance•1h ago•1 comments

Game of Trees (Got)

https://www.gameoftrees.org/
3•akagusu•1h ago•1 comments