I feel like with AI, self-hosting software reliably is becoming easier so the incentives to pay for a hosted service of an OSS project are going down.
Wanna sack a load of staff? - AI
Wanna cut your consumer products division? - AI
Wanna take away the source? - AI
I understand why this is a tempting thing to do in a "STOP THE PRESSES" manner where you take a breather and fix any existing issues that snuck through. I don't yet understand why when you reach steady-state, you wouldn't rely on the same tooling in a proactive manner to prevent issues from being shipped.
And if you say "yeah, that's obv the plan," well then I don't understand what going closed-source _now_ actually accomplishes with the horses already out of the barn.
Give him $100 to obtain that capability.
Give each open source project maintainer $100.
Or internalize the cost if they all decide the hassle of maintaining an open source project is not worth it any more.
I'm not aiming this reply at you specifically; it's the general dynamic of this crisis. The real answer is for the foundational model providers to give this money. But instead, at least one seems to care more about acquiring critical open source companies.
We should openly talk about this - the existing open source model is being killed by LLMs, and there is no clear replacement.
Is that true? Didn't the Mythos release say they spent $20k? I'm also skeptical of Anthropic here doing what essentially amounts to "vague posting" in an attempt to scare everyone and drive up their value before IPO.
If the tool correctly says you've got security issues, trying to hide them won't work. You still have the security issues and someone is going to find them.
You can keep the untested branch closed if you want to go with “cathedral” model, even.
To what end? You can just look at the code. It's right there. You don't need to "hack" anything.
If you want to "hack on it", you're welcome to do so.
Would you like to take a look at some of my open-source projects your neighbour's kid might like to hack on?
A few years ago, I invoked Linus's Law in a classroom, and I was roundly debunked. Isn't it a shame that it's basically been fulfilled now with LLMs?
It's not a symmetric game, either. On defense, you have to get lucky every time - the attacker only has to get lucky once.
This! I love OSS but this argument seems to get overlooked in most of the comments here.
But you might need thousands of sessions to uncover some vulnerabilities, and you don't want to stop shipping changes because the security checks are taking hours to run.
But you won't keep the doors open for others to use them against it.
So it is, unfortunately, understandable in a way...
Great move.
Open-source supporters don't have a sustainable answer to the fact that AI models can easily find N-day vulnerabilities extremely quickly and swamp maintainers with issues and bug-reports left hanging for days.
Unfortunately, this is where it is going, and open-source software supporters did not foresee the downsides of open source maintenance in the age of AI, especially for businesses with "open-core" products.
Might as well close-source them to slow the attackers (with LLMs) down. Even SQLite has closed-sourced their tests which is another good idea.
Then good, that overengineered, intentionally-crippled crap should go away.
It makes me think of how great chess engines have affected competitive chess over the last few years. Sure, the ceiling for Elo ratings at the top levels has gone up, but it's still a fair game because everyone has access to the new tools. High-level players aren't necessarily spending more time on prep than they were before; they're just getting more value out of the hours they do spend.
I think Cal are making the wrong call, and abandoning their principles. But it isn't fair to say the game is accelerating in a proportionate way.
See: https://www.youtube.com/watch?v=2CieKDg-JrA
Ultimately, he concludes that while in the short run the game defines the players' actions, an environment that makes cooperation too risky naturally forces participants to stop cooperating in order to protect themselves from being "exploited" (this bit is around 34:39 - 34:46).
Since security exploits can now be found by spending tokens, open source is MORE valuable because open source libraries can share that auditing budget while closed source software has to find all the exploits themselves in private.
> If Mythos continues to find exploits so long as you keep throwing money at it, security is reduced to a brutally simple equation: to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.
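The shared-audit argument above can be put in back-of-envelope terms. This is a toy sketch, not anything from the quoted post; every figure is invented (the $20k echoes a number floated elsewhere in the thread):

```python
# Toy model of the shared-audit argument. All figures are hypothetical.
def per_user_audit_cost(total_discovery_cost: float, n_users: int) -> float:
    """Cost each downstream user effectively bears for one round of exploit discovery."""
    return total_discovery_cost / n_users

# One $20k discovery budget amortized over 1,000 users of an open source
# library, versus a single closed-source vendor bearing it all alone.
open_source_cost = per_user_audit_cost(20_000, 1_000)   # shared across the ecosystem
closed_source_cost = per_user_audit_cost(20_000, 1)     # one vendor pays in full

print(open_source_cost, closed_source_cost)
```

Under this (very simplified) framing, the open ecosystem pays $20 per user where each closed vendor pays the full $20,000, which is the sense in which open source "shares the auditing budget."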
I'd give them more credit if they used the AI-slop unmaintainability argument.
I might like to live there.
(I just hope they can learn to verify the exploits are valid before sharing them!)
The media momentum of this threat really came with Mythos, which was like 2 or 3 weeks ago? That seems like a fairly short time to pivot your core principles like that. It sounds to me like they wanted to do this for other business related reasons, but now found an excuse they can sell to the public.
(I might be very wrong here)
Your average open source library isn’t going to get that scrutiny, though. It seems like it will result in consolidation around a few popular libraries in each category?
It also means that you need to extract enough value to cover the cost of said tokens, or reduce the economic benefit of finding exploits.
Reducing economic benefit largely comes down to reducing distribution (breadth) and reducing system privilege (depth).
One way to reduce distribution is to raise the price.
Another is to make a worse product.
Naturally, less valuable software is not a desirable outcome. So either you reduce the cost of keeping open (by making closed), or increase the price to cover the cost of keeping open (which, again, also decreases distribution).
The economics of software are going to massively reconfigure in the coming years, open source most of all.
I suspect we'll see more 'open spec' software, with actual source generated on-demand (or near to it) by models. Then all the security and governance will happen at the model layer.
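The breadth/depth framing a few comments up can be sketched as a toy payoff model. This is purely illustrative; every number is hypothetical:

```python
# Toy model (all numbers hypothetical): a rational attacker's expected payoff
# scales with distribution (breadth: installed base) and with privilege
# (depth: value extracted per compromised host).
def exploit_payoff(installs: int, value_per_host: float) -> float:
    return installs * value_per_host

baseline = exploit_payoff(100_000, 2.0)
halved_distribution = exploit_payoff(50_000, 2.0)   # e.g. a price hike shrinks the user base
halved_privilege = exploit_payoff(100_000, 1.0)     # e.g. sandboxing cuts per-host value

# Either lever halves the payoff, and so halves the token budget a
# rational attacker would be willing to spend hunting for the exploit.
assert halved_distribution == halved_privilege == baseline / 2
```

The point of the sketch is only that breadth and depth enter the attacker's calculus symmetrically, so a defender can reduce the economic benefit of an exploit through either lever.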
If the null hypothesis is that LLMs are good at finding bugs, full stop, then it's unclear to me that going closed actually does much to stop your adversary (particularly as a service operator).
At your cost.
Every time you push (or, if not that, at least every time there is a new version that you call a release).
Including every time a dependency updates, unless you pin specific versions.
I assume (caveat: I've not looked into the costs) many projects can't justify that.
Though I don't disagree with you that this looks like a commercial decision with “LLM based bug finders could find all our bad code” as an excuse. The lack of confidence in their own code while open does not instil confidence that it'll be secure enough to trust now closed.
I believe that N companies using an open source project and contributing back would make this burden smaller than one company using the same closed-source project.
IMHO, open source will continue to exist and be successful, but the existence of AI is a deterrent for most. Let's be honest: in recent times the only reason startups went open source first was to build a community and an organic growth engine powered by early adopters. Now this is no longer viable; in fact it simply helps competitors. So why do it?
The only open source that will remain will be the real open source projects that are true to the ethos.
How has this changed?
Attribution isn't required for many permissive open source licenses. Dependencies with those licenses will oftentimes end up inside closed source software. Even if there isn't FOSS in the closed-source software, basically everyone's threat model includes (or should include) "OpenSSL CVE". On that basis, I doubt Cal is accomplishing as much as they hope to by going closed source.
This post's argument seems circular to me.
That said, I think it's important to try to recognize where things are from multiple angles rather than bucket things from your filter bubble alone. Fear sells, and we need to stop buying into it.
We did consider arguments in both directions (e.g. easier to recreate the code, agents can understand better how it works), but I honestly think the security argument goes for open source: the OSS projects will get more scrutiny faster, which means bugs won't linger around.
Time will tell, I am in the open source camp, though.
It seems like an easy decision, not a difficult one.