
CapROS: The Capability-Based Reliable Operating System

https://www.capros.org/
37•gjvc•2h ago•14 comments

2002: Last.fm and Audioscrobbler Herald the Social Web

https://cybercultural.com/p/lastfm-audioscrobbler-2002/
157•cdrnsf•5h ago•91 comments

Elevated errors across many models

https://status.claude.com/incidents/9g6qpr72ttbr
269•pablo24602•5h ago•132 comments

JSDoc is TypeScript

https://culi.bearblog.dev/jsdoc-is-typescript/
119•culi•7h ago•147 comments

Hashcards: A plain-text spaced repetition system

https://borretti.me/article/hashcards-plain-text-spaced-repetition
256•thomascountz•10h ago•106 comments

Ask HN: What Are You Working On? (December 2025)

155•david927•10h ago•551 comments

In the Beginning was the Command Line (1999)

https://web.stanford.edu/class/cs81n/command.txt
101•wseqyrku•6d ago•44 comments

History of Declarative Programming

https://shenlanguage.org/TBoS/tbos_15.html
33•measurablefunc•4h ago•11 comments

An attempt to articulate Forth's practical strengths and eternal usefulness

https://im-just-lee.ing/forth-why-cb234c03.txt
21•todsacerdoti•1w ago•10 comments

The Typeframe PX-88 Portable Computing System

https://www.typeframe.net/
93•birdculture•9h ago•28 comments

Interview with Kent Overstreet (Bcachefs) [audio]

https://linuxunplugged.com/644
44•teekert•3d ago•29 comments

Shai-Hulud compromised a dev machine and raided GitHub org access: a post-mortem

https://trigger.dev/blog/shai-hulud-postmortem
194•nkko•16h ago•115 comments

Advent of Swift

https://leahneukirchen.org/blog/archive/2025/12/advent-of-swift.html
60•chmaynard•6h ago•19 comments

AI and the ironies of automation – Part 2

https://www.ufried.com/blog/ironies_of_ai_2/
215•BinaryIgor•13h ago•92 comments

DARPA GO: Generative Optogenetics

https://www.darpa.mil/research/programs/go
15•birriel•3h ago•2 comments

Microsoft Copilot AI Comes to LG TVs, and Can't Be Deleted

https://www.techpowerup.com/344075/microsoft-copilot-ai-comes-to-lg-tvs-and-cant-be-deleted
64•akyuu•2h ago•54 comments

GraphQL: The enterprise honeymoon is over

https://johnjames.blog/posts/graphql-the-enterprise-honeymoon-is-over
188•johnjames4214•9h ago•164 comments

Developing a food-safe finish for my wooden spoons

https://alinpanaitiu.com/blog/developing-hardwax-oil/
156•alin23•4d ago•97 comments

Price of a bot army revealed across online platforms

https://www.cam.ac.uk/stories/price-bot-army-global-index
96•teleforce•10h ago•34 comments

Checkers Arcade

https://blog.fogus.me/games/checkers-arcade.html
25•fogus•2d ago•1 comment

Baumol's Cost Disease

https://en.wikipedia.org/wiki/Baumol_effect
93•drra•14h ago•97 comments

Claude CLI deleted my home directory and wiped my Mac

https://old.reddit.com/r/ClaudeAI/comments/1pgxckk/claude_cli_deleted_my_entire_home_directory_wi...
174•tamnd•3h ago•135 comments

Checkpointing the Message Processing

https://event-driven.io/en/checkpointing_message_processing/
8•ingve•6d ago•0 comments

Show HN: Dograh – an OSS Vapi alternative to quickly build and test voice agents

https://github.com/dograh-hq/dograh
8•a6kme•6d ago•2 comments

SPhotonix – 360TB into 5-inch glass disc with femtosecond laser

https://www.tomshardware.com/pc-components/storage/sphotonix-pushes-5d-glass-storage-toward-data-...
15•peter_d_sherman•2h ago•5 comments

Our emotional pain became a product

https://www.theguardian.com/us-news/ng-interactive/2025/dec/14/trauma-mental-health
24•worik•3h ago•7 comments

Compiler Engineering in Practice

https://chisophugis.github.io/2025/12/08/compiler-engineering-in-practice-part-1-what-is-a-compil...
113•dhruv3006•19h ago•24 comments

GNU recutils: Plain text database

https://www.gnu.org/software/recutils/
125•polyrand•7h ago•35 comments

Getting into Public Speaking

https://james.brooks.page/blog/getting-into-public-speaking
113•jbrooksuk•4d ago•35 comments

Efficient Basic Coding for the ZX Spectrum (2020)

https://blog.jafma.net/2020/02/24/efficient-basic-coding-for-the-zx-spectrum/
51•rcarmo•14h ago•13 comments

AI agents are starting to eat SaaS

https://martinalderson.com/posts/ai-agents-are-starting-to-eat-saas/
73•jnord•3h ago

Comments

MangoToupe•2h ago
> If I want an internal dashboard, I don't even think that Retool or similar would make it easier. I just build the dashboard

Oh, child.... building is easy. Coordinating maintenance of the tool across a non-technical team is hell.

toomuchtodo•2h ago
Indeed, wisdom is being able to see the lifecycle like a time knife and make favorable decisions based on past experience (i.e. getting the most value from what may need to be built and operated over its lifetime). Writing code is easy; managing a living codebase with ever-changing business requirements and stakeholders is hard.
lwhi•2h ago
Exactly.

Corporations think in terms of risk.

Second only to providing a useful function, a successful SaaS app will have been built to mitigate risk well.

It's not going to be easy to meet these requirements without prior knowledge and experience.

scotty79•2h ago
Let me give you an example of my workflow from tonight:

1. I had two text documents containing plain text to compare. One with minor edits (done by AI).

2. I wanted to see what AI changed in my text.

3. I tried the usual diff tools. They diffed line by line and the result was terrible. I searched Google for "text comparison tool but not line-based".

4. The second search result was https://www.diffchecker.com/ (it's a SaaS, right?)

5. Initially it did an equally bad job, but I noticed it had a switch, "Real-time diff", which did exactly what I wanted.

6. I got curious about this algorithm, so I asked Gemini with "Deep Research" mode: "The website https://www.diffchecker.com/ uses a diff algorithm they call real-time diff. It works really good for reformatted and corrected text documents. I'd like to know what is this algorithm and if there's any other software, preferably open-source that uses it."

7. As its first suggestion, it listed diff-match-patch from Google, which has a Python package.

8. I started Antigravity in a new folder and ran uv init. Then I prompted the following:

"Write a commandline tool that uses https://github.com/google/diff-match-patch/wiki/Language:-Py... to generate diff of two files and presents it as side by side comparison in generated html file."

[...]

"I installed the missing dependance for you. Please continue." - I noticed it wasn't using uv for installing dependencies, so I interrupted and did it myself.

[...]

"This project uses uv. To run python code use

uv run python test_diff.py" - I noticed it still wasn't using uv to run the code, so its testing was failing.

[...]

"Semantic cleanup is important, please use it." - Things started to show up, but it looked like a linear diff. I noticed it had the call to the semantic cleanup method commented out, so I thought it might help if I pushed it in that direction.

[...]

"also display the complete, raw diff object below the table" - the display of the diff still didn't seem good, so I got curious whether the problem was with the diffing code or the display code.

[...]

"I don't see the contents of the object, just text {diffs}" - it made a silly mistake by outputting the template variable instead of the actual object.

[...]

"While comparing larger files 1.txt and 2.txt I notice that the diff is not very granular. Text changed just slightly but the diff looks like deleting nearly all the lines of the document, and inserting completely fresh ones. Can you force diff library to be more granular?

You seem to be doing the right thing https://github.com/google/diff-match-patch/wiki/Line-or-Word... but the outcome is not good.

Maybe there's some better matching algoritm in the library?" - it seemed that while it worked decently on the small tests Antigravity made for itself, it was still terrible on the texts I actually wanted to compare, although I saw glimpses of hope because some spots were diffed more granularly. I inspected the code, and it seemed to be doing character-level diffing as per the diff-match-patch example. While it processed this prompt I was searching for a solution myself by clicking around the diff-match-patch repo and demos. I found a potential solution by adjusting cleanup, but it actually solved the problem by itself by ditching the character-level diffing (which I'm not sure I would have come up with at that point). The diffed object looked great, but comparing the result to https://www.diffchecker.com/ output, they still did one minor formatting thing better.

[...]

"Could you use rowspan so that rows on one side that are equivalent to multiple rows on the other side would have same height as the rows on the other side they are equivalent to?" - I felt very clumsy trying to phrase it and wasn't sure Antigravity would understand. But it did, and executed perfectly.

I didn't have to revert a single prompt and interrupted just twice, at the beginning.

After a while I added watch functionality with a single prompt:

"I'd like to add a -w (--watch) flag that will cause the program to keep running and monitor source files to diff and update the output diff file whenever they change."

[...]

So I basically went from having two very similar text files and knowing very little about diffing to knowing a bit more and having my own local tool that lets me compare texts in a satisfying manner, with beautiful highlighting and formatting, that I can extend or modify however I like, and that mirrors the interesting part of the functionality of the best tool I found online. And all of that in a time span shorter than it took me to write this comment (at least the coding part was; I followed a few wrong paths during my search).

My experience tells me that even if I could replicate what I did today (staying motivated is an issue for me), it would most likely have been a multi-day project full of frustration, hunting small errors, and venturing down wrong paths. Python isn't even my strongest language. Instead it was a pleasant and fun evening with occasional jaw drops and the feeling of being so blessed to live in the sci-fi times I read about as a kid (and adult).
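For anyone curious, the word-level (rather than line-level) diffing described above doesn't strictly need diff-match-patch. Here's a minimal sketch using only Python's stdlib difflib; the `word_diff` name and the `[-...-]`/`{+...+}` markers are illustrative choices, not anything from the tools mentioned in the comment:

```python
import difflib

def word_diff(a: str, b: str) -> str:
    """Diff two texts word by word instead of line by line.

    Deletions are wrapped in [-...-], insertions in {+...+}.
    """
    aw, bw = a.split(), b.split()
    out = []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(None, aw, bw).get_opcodes():
        if op == "equal":
            out.append(" ".join(aw[i1:i2]))
        else:
            if i1 != i2:  # words removed from the old text
                out.append("[-" + " ".join(aw[i1:i2]) + "-]")
            if j1 != j2:  # words added in the new text
                out.append("{+" + " ".join(bw[j1:j2]) + "+}")
    return " ".join(out)

print(word_diff("the quick brown fox", "the quick red fox"))
# → the quick [-brown-] {+red+} fox
```

diff-match-patch's semantic cleanup goes further by merging trivial equalities into larger human-readable chunks, but tokenizing at word granularity before matching is the core idea.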

giancarlostoro•2h ago
You could also have run diff on both files and gotten a reasonable diff.
austinjp•2h ago
> 3. I tried the usual diff tools. They diffed line by line and result was terrible.

Um. I don't want to be That Guy (shouting at clouds, or at kids to get off my lawn, or whatever), but... what "usual diff" tools did you use? Because comparing two text files with minor edits is exactly what diff-related tools have excelled at for decades.

There is word-level diff, for example. Was that not good enough? Or delta [0] perhaps?

[0] https://github.com/dandavison/delta
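For what it's worth, Python's stdlib can also produce the side-by-side HTML view discussed above, though line-based rather than word-based; a quick sketch (this is not what diffchecker.com uses):

```python
import difflib

old = ["one", "two", "three"]
new = ["one", "2", "three"]

# make_file returns a complete HTML page containing a side-by-side diff table;
# changed lines are highlighted cell by cell.
html = difflib.HtmlDiff(wrapcolumn=80).make_file(old, new, fromdesc="old", todesc="new")

with open("diff.html", "w") as fh:  # open diff.html in a browser to view it
    fh.write(html)
```

`HtmlDiff` still matches line by line, which is exactly the granularity complaint in the parent comment, so it complements rather than replaces a word- or character-level algorithm.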

redwood•2h ago
Jamin Ball had a better take on Clouded Judgement https://cloudedjudgement.substack.com/p/clouded-judgement-12... "Long Live Systems of Record"
lwhi•2h ago
This was a good read.
bigtones•1h ago
Yeah I think that is a much more accurate take on the same subject.
yellow_lead•2h ago
Another post with no data, but plenty of personal vibes.

> The signals I'm seeing

Here are the signals:

> If I want an internal dashboard...

> If I need to re-encode videos...

> This is even more pronounced for less pure software development tasks. For example, I've had Gemini 3 produce really high quality UI/UX mockups and wireframes

> people really questioning renewal quotes from larger "enterprise" SaaS companies

Who are "people"?

ares623•2h ago
Vibes can move billions of dollars from someone else's retirement money
samdoesnothing•2h ago
> For example, I've had Gemini 3 produce really high quality UI/UX mockups and wireframes

Is the author a competent UX designer who can actually judge the quality of the UX and mockups?

> I write about web development, AI tooling, performance optimization, and building better software. I also teach workshops on AI development for engineering teams. I've worked on dozens of enterprise software projects and enjoy the intersection between commercial success and pragmatic technical excellence.

Nope.

ares623•2h ago
Maybe someday we'll see job postings for maintaining these in-house SaaS tools. And someday after that, we'll see these in-house SaaS tools consolidated into their own separate product. Wait, what.
Imustaskforhelp•2h ago
Hey, let's hope people will open-source the product too :D
sleazebreeze•1h ago
and around and around we'll go again!
Oarch•2h ago
Earlier this year I thought that rare proprietary knowledge and IP were a safe haven from AI, since LLMs can only scrape public data.

Then it dawned on me how many companies are deeply integrating Copilot into their everyday workflows. It's the perfect Trojan Horse.

Aurornis•2h ago
Using an LLM on data does not ingest that data into the training corpus. LLMs don’t “learn” from the information they operate on, contrary to what a lot of people assume.

None of the mainstream paid services ingest operating data into their training sets. You will find a lot of conspiracy theories claiming that companies are saying one thing but secretly stealing your data, of course.

nerdponx•2h ago
If they weren't, then why would enterprise-level subscriptions include specific terms stating that they don't train on user-provided data? There's no reason to believe they don't elsewhere, and even if they don't now, there's no reason to believe they won't later, whenever it suits them.
Aurornis•1h ago
> then why would enterprise level subscriptions include specific terms stating that they don't train on user provided data?

What? That's literally my point: enterprise providers aren't training on their enterprise customers' data, contrary to what the parent commenter claimed.

leptons•2h ago
> LLMs don’t “learn” from the information they operate on, contrary to what a lot of people assume.

Nothing is really preventing this though. AI companies have already proven they will ignore copyright and any other legal nuisance so they can train models.

lioeters•2h ago
They're already using synthetic data generated by LLMs to further train LLMs. Of course they will not hesitate to feed "anonymized" data generated by user interactions. Who's going to stop them? Or even prove that it's happening. These companies have already been allowed to violate copyright and privacy on a historic global scale.
Archelaos•2h ago
How would they distinguish between real and fake data? It would be far too easy to pollute their models with nonsense.
tick_tock_tick•2h ago
I mean, is it really ignoring copyright when copyright doesn't limit them in any way on training?
Aurornis•1h ago
> Nothing is really preventing this though

The enterprise user agreement is preventing this.

Suggesting that AI companies will uniquely ignore the law or contracts is conspiracy theory thinking.

AuthAuth•2h ago
They are not directly ingesting the data into their training sets, but they are in most cases collecting it and will be using it to train future models.
Aurornis•1h ago
Do you have any source for this at all?
fzeroracer•2h ago
> You will find a lot of conspiracy theories claiming that companies are saying one thing but secretly stealing your data, of course.

It's not really a conspiracy when we have multiple examples of high-profile companies doing exactly this. And it keeps happening. Granted, I'm unaware of cases of this occurring currently with professional AI services, but it's basic security 101 that you should never let anything even have the remote opportunity to ingest data unless you don't care about the data.

james_marks•2h ago
> never let anything even have the remote opportunity to ingest data unless you don't care about the data

This is objectively untrue? Giant swaths of enterprise software are based on establishing trust with approved vendors and systems.

mulquin•1h ago
To be pedantic, it is still a conspiracy, just no longer a theory.
Aurornis•1h ago
> It's not really a conspiracy when we have multiple examples of high profile companies doing exactly this.

Do you have any citations or sources for this at all?

sotrusting•2h ago
Ah yes, blindly trusting the corpo fascists that stole the entire creative output of humanity to stop now.

I hope you find some self awareness when you slip a disc bending over this much for these corpo fascists, especially when they are failing to hold their own language to your level of prevarication and puffery:

> When you use our services for individuals such as ChatGPT, Codex, and Sora, we may use your content to train our models.

https://help.openai.com/en/articles/5722486-how-your-data-is...

protocolture•2h ago
>Ah yes, blindly trusting the corpo fascists that stole the entire creative output of humanity to stop now.

Stealing implies the thing is gone, no longer accessible to the owner.

People aren't protected from copying in the same way. There are lots of valid exclusions, and building new non competing tools is a very common exclusion.

The big issue with the OpenAI case, is that they didn't pay for the books. Scanning them and using them for training is very much likely to be protected. Similar case with the old Nintendo bootloader.

The "Corpo Fascists" are buoyed by your support for the IP laws that have thus far supported them. If anything, to be less "Corpo Fascist" we would want more people to have more access to more data. Mankind collectively owns the creative output of Humanity, and should be able to use it to make derivative works.

sotrusting•1h ago
> Stealing implies the thing is gone, no longer accessible to the owner.

You know a position is indefensible when you equivocation fallacy this hard.

> The "Corpo Fascists" are buoyed by your support for the IP laws

You know a position is indefensible when you strawman this hard.

> If anything, to be less "Corpo Fascist" we would want more people to have more access to more data. Mankind collectively owns the creative output of Humanity, and should be able to use it to make derivative works.

Sounds about right to me, but why you would state that when defending slop slingers is enough to give me whiplash.

> Scanning them and using them for training is very much likely to be protected.

Where can I find these totally legal, free, and open datasets all of these slop slingers are trained on?

Oarch•1h ago
> Stealing implies the thing is gone, no longer accessible to the owner.

Isn't this a little simplistic?

If the value of something lies in its scarcity, then making it widely available has robbed the owner of a scarcity value which cannot be retrieved.

A win for consumers, perhaps, but a loss for the owner nonetheless.

Retric•2h ago
Companies have already shifted from not using customer data to giving users an option to opt out, e.g.:

“How can I control whether my data is used for model training?

If you are logged into Copilot with a Microsoft Account or other third-party authentication, you can control whether your conversations are used for training the generative AI models used in Copilot. Opting out will exclude your past, present, and future conversations from being used for training these AI models, unless you choose to opt back in. If you opt out, that change will be reflected throughout our systems within 30 days.” https://support.microsoft.com/en-us/topic/privacy-faq-for-mi...

At this point, suggesting it has never happened and never will is wildly optimistic.

Aurornis•1h ago
An enterprise Copilot contract will have already decided this for the organization.
olyjohn•3m ago
[delayed]
lwhi•2h ago
Information about the way we interact with the data (RLHF) can be used to refine agent behaviour.

While this isn't used specifically for LLM training, it can involve aggregating insights from customer behaviour.

Aurornis•1h ago
That’s a training step. It requires explicitly collecting the data and using it in the training process.

Merely using an LLM for inference does not train it on the prompts and data, as many incorrectly assume. There is a surprising lack of understanding of this separation even on technical forums like HN.

agumonkey•1h ago
Maybe prompts are enough to infer the rest?
TheRoque•1h ago
Just read the ToS of the LLM products please
doctorpangloss•1h ago
This is so naive. The ToS permits paraphrasing of user conversations, by not excluding it, and then training on THAT. You'd never be able to definitively connect paraphrased data to yours, especially if they only train on paraphrased data that covers frequent, as opposed to rare, topics.
Aurornis•1h ago
Do you have a citation for this?
Aurornis•1h ago
I have. Have you? Can you quote the sections you’re talking about?
popalchemist•1h ago
Wrong, buddy.

Many of the top AI services use human feedback to continuously apply "reinforcement learning" after the initial deployment of a pre-trained model.

https://en.wikipedia.org/wiki/Reinforcement_learning_from_hu...

Aurornis•1h ago
RLHF is a training step.

Inference (what happens when you use an LLM as a customer) is separate from training.

Inference and training are separate processes. Using an LLM doesn’t train it. That’s not what RLHF means.

popalchemist•45m ago
I am aware; I've trained my own models. You're being obtuse.

The big companies - take Midjourney, or OpenAI, for example - take the feedback that is generated by users, and then apply it as part of the RLHF pass on the next model release, which happens every few months. That's why they have the terms in their TOS that allow them to do that.

findjashua•2h ago
Providers' ToS explicitly state whether or not any data provided is used for training purposes. The usual pattern I've seen is that while they retain the right to use the data on free tiers, it's almost never the case for paid tiers.
GCUMstlyHarmls•2h ago
I wonder how much wiggle room there is to collect now (to provide service, context history, etc.), then later anonymise (somehow, to some level) and then train on it?

Also, I wonder if the ToS covers "queries & interaction" vs "uploaded data" - I could imagine some tricky language in there that says we won't use your Word document, but we may at some point use the queries you run against it, not as raw corpus but as a second layer examining which tools/workflows to expand/exploit.

danielheath•31m ago
“We don’t train on your data” doesn’t exclude metadata, training on derived datasets via some anonymisation process, etc.

There’s a range of ways to lie by omission, here, and the major players have established a reputation for being willing to take an expansive view of their legal rights.

sotrusting•2h ago
Right, so totally cool to ignore the law but our TOS is a binding contract.
mc32•2h ago
Yes, they can be sued for breach of contract. And it’s not a regular ToS but a signed MSA and other legally binding documents.
blibble•2h ago
the license on my open source code is a contract, and they ignored that

if they can get away with it (say by claiming it's "fair use"), they'll ignore corporate ones too

protocolture•2h ago
Where are they ignoring the law?
yieldcrv•1h ago
People who say this tend to have a misinterpretation of copyright, and use all the court cases brought by large rights holders as validation

despite all 3 branches of the government disagreeing with them over and over again

sotrusting•1h ago
https://www.reuters.com/business/environment/musks-xai-opera...
bdangubic•1h ago
it is amazing in almost 2026 there is anyone believing this… amazing
Oarch•1h ago
Given the conduct we've seen to date, I'd trust them to follow the letter - but not the spirit - of IP law.

There may very well be clever techniques that don't require directly training on the users' data. Perhaps generating a parallel paraphrased corpus as they serve user queries - one which they CAN train on legally.

The amount of value unlocked by stealing practically ~everyone's lunch makes me not want to put that past anyone who's capable of implementing such a technology.

gaigalas•2h ago
What kind of rare proprietary knowledge?
Oarch•1h ago
It could be a wide range of things depending on your field: highly particular materials, knowledge or processes that give your products or services a particular edge, and which a company has often incurred high R&D costs to discover.

Many businesses simply couldn't afford to operate without such an edge.

matt-p•1h ago
Even if they were doing this (I highly doubt it), so much would be lost to distillation that I'm not convinced much would actually get in, apart from perhaps internal codenames or whatever, which would be obvious.
kankerlijer•1h ago
Well, perhaps this is naive of me from the perspective of not fully understanding the training process. However, with all available training data exhausted, gains from synthetic data exhausted, and a large pool of publicly available AI-generated code out there, at what point does it become 'smart' to scrape what you identify as high-quality codebases, clean them up to remove identifiers, and use that to train a smaller model?
phendrenad2•1h ago
Ironically (for you), Copilot is the one provider that is doing a good job of provably NOT training on user data. The rest are not up to speed on that compliance angle, so many companies ban them (of course, people still use them).
Aurornis•1h ago
Do you have a source for this?

There are claims all through this thread that “AI companies” are probably doing bad things with enterprise customer data but nobody has provided a single source for the claim.

This has been a theme on HN. There was a thread a few weeks back where someone confidently claimed up and down the thread that Gemini’s terms of service allowed them to train on your company’s customer data, even though 30 seconds of searching leads to the exact docs that say otherwise. There is a lot of hearsay being spread as fact, but nobody actually linking to ToS or citing sections they’re talking about.

gijoeyguerra•2h ago
I'm always skeptical when I see (or say, for that matter) phrases that start with "just".
jackschultz•2h ago
Another video about this today: https://www.youtube.com/watch?v=4Bg0Q1enwS4

The summary is that for agents to work well they need clear visibility into everything, and putting the data behind a GUI or a poorly maintained CLI is a hindrance. Combined with how structured CRUD apps are, and how agents can reliably write good CRUD apps, there's no reason not to have your own. Wins all around: not paying for it, a better understanding of your processes, and letting agents handle workflows.

bgwalter•2h ago
The author teaches workshops for "AI" development. Next commercial please.
MyFirstSass•1h ago
Hacker News seems completely astroturfed over the last few weeks.

It's not the Hacker News I knew even 3 years ago, and I'm seriously close to just ditching the site after 15+ years of use.

I use AI heavily, but every day there are crazily optimistic, almost manic posts about AI taking over various sectors that are completely ludicrous. And they are all filled with comments from bizarrely optimistic people who seemingly have no knowledge of how software is actually run or built, i.e. that the human organisational, research, and management elements are the hard parts, something AI can't do in any shape or form at the moment for any complex or even small company.

Starlevel004•2h ago
Oh no! Anyway.
arealaccount•2h ago
The "where this doesn't work" section is chef's kiss:

- anything that requires very high uptime

- very high volume systems and data lakes

- software with significant network effects

- companies that have proprietary datasets

- regulation and compliance is still very important

weitendorf•2h ago
This is why I recently started working on an open source, generic protobuf sqlite ORM + CRUD server (with search/filtering) + type/service registry + grpc mesh: https://github.com/accretional/collector (Note: collector's docs are mostly from LLMs, partly because it's more of a framework for tool-calling LLMs than for humans.)

Then this project lets you generate static sites from svelte components (matches protobuf structures) and markdown (documentation) and global template variables: https://github.com/accretional/statue

A lot of the SaaS ecosystem actually has rather simple domain logic and oftentimes doesn't even model data very well, or at least not in a way that matches their clients/users mental models or application logic. A lot of the value is in integrations, or the data/scaling, or the marketing and developer experience, or some kind of expertise in actually properly providing a simple interface to a complex solution.

So why not just create a compact universal representation of that? Because it's not so big a leap to go beyond eating SaaS to eating integrations, migration costs/bad moats, and the marketing/documentation/wrapper.

_pdp_•2h ago
I often give the following analogy, which I think is a good proxy for what is going on.

Spreadsheets! They are everywhere. In fact, they are so abundant these days that many are spawned for a quick job and immediately discarded. The cost of having these spreadsheets is practically zero, so in many cases one may find themselves with hundreds if not thousands of them sitting around with no indication of ever being deleted. Spreadsheets are also personal and annoying, especially when forced upon you (since you did not make them yourself). Spreadsheets are also programming for non-programmers.

These new vibe-coded tools are essentially the new spreadsheets. They are useful... for 5 minutes. They are also easily forgettable. They are also personal (for the person who made them) and hated (by everyone else). I have no doubt in my mind that organisations will start using more and more of these new types of software to automate repetitive tasks, improve existing processes and so on, but ultimately, apart from perhaps just a few, none will replace existing, purpose-built systems.

Ultimately, you can make your own pretty dashboard that nobody else will see or use, because when the cost of production is so low, your users will want to create their own version, thinking they could do better.

After all, how hard is it to prompt harder than the previous person?

Also, do you really think that SaaS companies are not deploying AI themselves? It is practically an arms race: the non-expert plus some AI vs 10 specialist developers plus their AIs doing this all day long.

Who is going to have the upper-hand?

NikolaNovak•2h ago
I get and agree with a lot of the skepticism (and I get where the ad-hominem attacks come from :)). I have AI shoved down my throat at work and at home 24x7, most of it not for my benefit, and the writer doesn't put as much rigor into the writing as might be beneficial.

At the same time, to the core theme of the article - do any of us think a small sassy SaaS like Bingo Card Creator could take off now? :-)

https://training.kalzumeus.com/newsletters/archive/selling_s...

blazespin•2h ago
There is significant uncertainty in all of this, which is really the most damaging aspect. If AI improves, and it is threatening to, then growth in SaaS may decline to the point where investing in it needs to be reconsidered.

The problem is, nobody knows how much and how fast AI will improve or how much it will cost if it does.

That uncertainty alone is very problematic and I think is being underestimated in terms of its impact on everything it can potentially touch.

For now though, I've seen a wall form in benchmarks like swe-rebench and swebench pro. Greenfield is expanding, but maintenance is still a problem.

I think AI needs to get much better at maintenance before serious companies can choose build over buy for anything but the most trivial apps.

hyperpape•2h ago
Note that the author does not mention a single specific SaaS subscription he’s cancelled or seen a team cancel.

The only named product was Retool.

linsomniac•1h ago
We just had a $240/year renewal for teamretro.com come due, and while TeamRetro has a lot of components, we are only using the retro and ice breaker components. So I gave Claude Code a couple of prompts and I now have a couple static HTML pages that do the ice breaker (using local storage) and the retro (using a Google sheet as the storage backend, largely because it mimics our pre-teamretro process).

It took me no more than 2 hours to put those together. We didn't renew our TeamRetro subscription.
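For the curious, the logic is tiny. Here's a rough Python sketch of the ice-breaker part (our real version is a few lines of browser JavaScript against localStorage; the prompts and the JSON file standing in for localStorage here are illustrative stand-ins, not the actual implementation):

```python
import json
import random
from pathlib import Path

STATE = Path("icebreaker_state.json")  # stands in for the browser's localStorage

PROMPTS = [
    "What's a small win from this sprint?",
    "What tool could you not live without?",
    "What's one thing you'd automate tomorrow?",
]

def load_used() -> set[str]:
    """Return the set of prompts already used, empty on first run."""
    return set(json.loads(STATE.read_text())) if STATE.exists() else set()

def next_prompt() -> str:
    """Pick a random prompt nobody has seen yet; recycle once exhausted."""
    used = load_used()
    remaining = [p for p in PROMPTS if p not in used]
    if not remaining:  # every prompt has been used: start a fresh cycle
        used, remaining = set(), list(PROMPTS)
    prompt = random.choice(remaining)
    STATE.write_text(json.dumps(sorted(used | {prompt})))
    return prompt
```

That's basically the whole tool; everything else is HTML around it.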

nop_slide•29m ago
Or just don’t do retro and save even more time and money!
andy_ppp•2h ago
I’m currently working on an in-house ERP and inventory system for a specific kind of business. With very few people you can now, instead of paying loads of money for some off-the-shelf solution to your software needs, get something completely bespoke to your business. I think AI enables an age of boutique software that works fantastically for businesses; agencies will need to dramatically reduce their prices to compete with in-house teams.

I’m pretty certain AI quadruples my output at least and facilitates fixing, improving and upgrading poor quality inherited software much better than in the past. Why pay for SaaS when you can build something “good enough” in a week or two? You also get exactly what you want rather than some £300k per year CRM that will double or treble in price and never quite be what you wanted.

_pdp_•1h ago
This is only true if you assume that you are producing the same amount of code as today. But AI will ultimately produce more code, which will require more maintenance. Your internal team will need to scale up due to the amount of code they need to maintain. Your security team will have more work to do as well, because they will need to review more code, which will require scaling that team too. Your infrastructure costs will start adding up, and if you have any DevOps they will need scaling as well.

Sooner or later the CTO will be dictating which projects can be vibe coded and which ones make sense to buy.

SaaS benefits from network effects - your internal tools don't. So overall SaaS is cheaper.

The reality is that software license costs are a tiny fraction of total business costs; most of it is salaries. The situation you are describing is the kind of death spiral many companies will get into, and it will be their downfall, not their salvation.

technotony•1h ago
Interesting application. Can you share more about your stack and how you are approaching that build?
Aurornis•1h ago
> Why pay for SaaS when you can build something “good enough” in a week or two?

About a decade ago we worked with a partner company who was building their own in-house software for everything. They used it as one of their selling points and as a differentiator over competitors.

They could move fast and add little features quickly. It seemed cool at first.

The problems showed up later. Everything was a little bit fragile in subtle ways. New projects always worked well on the happy path, but then they’d change one thing and it would trigger a cascade of little unintended consequences that broke something else. No problem, they’d just have their in-house team work on it and push out a new deploy. That also seemed cool at first, until they accumulated a backlog of hard to diagnose issues. Then we were spending a lot of time trying to write up bug reports to describe the problem in enough detail for them to replicate, along with constant battles over tickets being closed with “works in the dev environment” or “cannot reproduce”.

> You also get exactly what you want rather than some £300k per year CRM

What’s the fully loaded (including taxes and benefits) cost of hiring enough extra developers and ops people to run and maintain the in house software, complete with someone to manage the project and enough people to handle ops coverage with room for rotations and allowing holidays off? It turns out the cost of running in-house software at scale is always a lot higher than 300K, unless the company can tolerate low ops coverage and gaps when people go on vacation.

mikert89•1h ago
It's not that people will build their own SaaS, it's that competitors will pop up at a rapid pace.
mattas•1m ago
You've just described the magic of spreadsheets.
henning•2h ago
Ah, yes. If the thing that is false is true, all kinds of interesting things happen! For example, if I became the queen of France, I could make people do silly dances! That is an interesting hypothesis that could play out in my imaginary world!

SaaS maintenance isn't about upgrading packages, it's about accountability and a point of contact when something breaks along with SLAs and contractual obligations. It isn't because building a kanban board app is hard. Someone else deals with provisioning, alerts, compliance, etc. and they are a real human who cannot hallucinate that the issue has been fixed when it hasn't. Depending on the contract and how it is breached, you can potentially take them to court and sue them to recover money lost as a result of their malpractice. None of that applies to a neural network that misreads the alert, does something completely wrong, then concludes the issue is fixed the way the latest models constantly do when I use them.

hurturue•2h ago
Relatedly, Microsoft's CEO said that soon Microsoft's biggest clients are going to be agents, not humans.
agumonkey•1h ago
agent economy .. that's a fun thought
m-hodges•1h ago
Man, AI tools are going to get so expensive, aren’t they.
yieldcrv•1h ago
Yeah, when everyone can create a SaaS, no one will buy one; they'll create the boutique thing you're selling for their own purposes.
shermantanktop•1h ago
The real question isn’t whether we’ll run out of SaaS customers; it’s whether we’ll run out of new problems that can be solved by the current set of tools. I doubt it; that would be a historical first in the modern era. But the solutions may move closer to the companies with the problems. More in-house, fewer intermediaries.
mikert89•1h ago
"It was always possible to clone software, but doing so was costly and time consuming, and the clone would need to be much cheaper, making any such venture financially non-viable.

With AI, that equation is now changing. I anticipate that within 5 years autonomous coding agents will be able to rapidly and cheaply clone almost any existing software, while also providing hosting, operations, and support, all for a small fraction of the cost.

This will inevitably destroy many existing businesses. In order to survive, businesses will require strong network effects (e.g. marketplaces) or extremely deep data/compute moats. There will also be many new opportunities created by the very low cost of software. What could you build if it were possible to create software 1000x faster and cheaper?"

Paul Buchheit

https://x.com/paultoo/status/1999245292294803914

lateforwork•1h ago
This article made no sense to me. It is talking about AI-generated code eating SaaS. That's not what is going to replace SaaS. When AI is able to do the job itself — without generating code — that's what is going to replace SaaS.

AI-generated code still requires software engineers to build, test, debug, deploy, secure, monitor, be on-call, handle incidents, and so on. That's very expensive. It is much cheaper to pay a small monthly fee to a SaaS company.

mjr00•1h ago
> AI-generated code still requires software engineers to build, test, debug, deploy, ensure security, monitor, be on-call, handle incidents, and so on. That's very expensive. It is much cheaper to pay a small monthly fee to a SaaS company.

Yeah, it's a fundamental misunderstanding of economies of scale. If you build an in-house app that does X, you incur 100% of the maintenance costs. If you're subscribed to a SaaS product, you're paying roughly 1/N of the maintenance costs, where N is the number of customers.
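To put rough numbers on it (every figure below is made up purely for illustration):

```python
# In-house: you bear 100% of the maintenance cost yourself.
# SaaS: the vendor's maintenance cost is split across N customers,
# then marked up, and is still usually far cheaper per customer.
maintenance_cost = 200_000   # assumed yearly cost to maintain the app in-house
vendor_margin = 2.0          # assume the vendor charges 2x its amortized cost
customers = 500              # N customers sharing the vendor's maintenance bill

in_house = maintenance_cost
saas_fee = (maintenance_cost / customers) * vendor_margin

print(f"in-house: ${in_house:,.0f}/yr, SaaS: ${saas_fee:,.0f}/yr")
# SaaS only stops being cheaper when N shrinks toward the markup factor
```

Under those assumptions the SaaS subscription is 250x cheaper than maintaining it yourself, which is why the economics rarely favor build over buy.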

I only see AI-generated code replacing things that never made sense as a SaaS anyway. It's telling the author's only concrete example of a replaced SaaS product is Retool, which is much less about SaaS and much more about a product that's been fundamentally deprecated.

Wake me up when we see swaths of companies AI-coding internal Jira ("just an issue tracker") and Github Enterprise ("just a browser-based wrapper over git") clones.