Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates.
Closed source companies can (and should!) also run their own security audits rather than passively waiting for volunteers to spend their tokens on it.
That still exists in the OSS world too, having your code out there is no panacea. I think we'll see a real swarm of security issues across the board, but I would expect the OSS world to fare better (perhaps after a painful period).
I analyze crash dumps for a Windows application. I haven't had much luck using Claude, OpenAI, or Google models when working with WinDbg. None of the models are very good at assembly, and they don't seem able to remember the details of different calling conventions or even how some of the registers are typically used. They are all pretty good at helping me navigate WinDbg itself, though.
If the cost of a security audit becomes marginal, it would seem reasonable to expect projects to publish the results of such audits frequently.
There’s probably a quite hefty backlog of medium- and low-severity issues in existing projects for maintainers to suffer through first though.
This really just seems like Strix marketing. Which is totally fair, but let's be reasonable here, any open-source business stands to lose way more by continuing to be open-source vs. relying on the benevolence of people scanning their code for them.
Actually the opposite is obvious - the comment you replied to talked about an abundance of good Samaritan reports - it's strange to speculate on some nebulous "gain" when responding to facts about more than enough reports concerning open source code.
> In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits
That's one good Samaritan for a closed source app vs many for an open source one. Open source wins again.
> any open-source business stands to lose way more
That doesn't make any sense - why would it lose more when it has many more good Samaritans working for it for free?
You seem to forget that the number of vulnerabilities in a given app is finite; an open source app will reach a secure status much faster than a closed source one, in addition to gaining from a shorter time to market.
In fact, open source will soon be much better and more capable due to new and developing technological and organizational advancements which are next to impossible to happen under a closed source regime.
but with cal.com i dont think this is about security lol
open source will always be an advantage, you just need to decide whether it aligns with your business needs
How so? AI won't have access to the source code. In some cases AI may have access to deployed binaries (if your business deploys binaries), but I am not aware that it has the same capabilities against compiled code as against source code.
But in a SaaS world, all AI has access to is your API. It might still be up to no good, but surely you will be several orders of magnitude less exposed than with access to the source code.
Shame
Some things just can't be truly secure either: DDoS protection is mostly a guessing/preventive game, and exposing your firewall config/scripts will make you more vulnerable, not less.
If your codebase isn't exposed, attackers are constrained by the network and other external restrictions, which greatly reduce the number of possible trials. Even with a swarm of residential proxies, it's not at all the same as inspecting a codebase in depth with thousands of agents and every model.
Use battle-tested frameworks such as Rails or Django and you won't make rookie security mistakes.
At $WORK we had a system which, if you traced its logic, could not possibly experience the bug we were seeing in production. This was a userspace control module for an FPGA driver connected to some machinery you really don't want to fuck around with, and the bug had wasted something like three staff+ engineer-years by the time I got there.
Recognizing that the bug was impossible in the userspace code if the system worked as intended end-to-end, the engineers started diving into verilog and driver code, trying to find the issue. People were suspecting miscompilations and all kinds of fun things.
Eventually, for unrelated reasons, I decided to clean up the userspace code (deleting and refactoring things unlocks additional deletion and refactoring opportunities, and all said and done I deleted 80% of the project so that I had a better foundation for some features I had to add).
For one of those improvements, my observation was just that if I had to write the driver code to support the concurrency we were abusing I'd be swearing up a storm and trying to find any way I could to solve a simpler problem instead.
Long story short, I still don't know what the driver bug was, but the actual authors must've felt the same way, since when I opted for userspace code with simpler concurrency demands the bug disappeared.
Tying it back to AI and hacking, the white box approach here literally didn't work, and the black box approach easily illuminated that something was probably fucky. Given that AI can de-minify and otherwise spot patterns from fairly limited data, I wouldn't be shocked if black-box hacking were (at least sometimes) more token-efficient than white-box.
This seems to be extremely common. Been a very long time since I looked at Linux kernel stuff, but there were numerous drivers that disabled hardware acceleration or offloading features simply because they became unreliable if they were given heavy loads or deep queues.
There is zero incentive or reason for content creators to let AI slurp their content for free and distribute it and get all the money from it.
Everything new will be licensed and if AI companies want access to it, they will need to pay for it, just like we will.
The people that go behind paywalls don't realize how much they'll have to spend on marketing to catch up to those that are open.
And that only frames the current state, where models are very expensive to train. Once model training is close to the point where a group of individuals can afford it, it's pretty much game over for our current paradigm. The software police will be running around trying to play whack-a-mole on open weight models with people all over the world.
Search engines will cease to exist, so no one will search your content and then click on your link. AI will simply regurgitate your content and take the money for tokens or subscription and not acknowledge you at all.
is anyone else seeing this / fixed this problem ?
I mean an AI skill is perfectly capable of doing this exact same thing.
I wrote some very nice expressive text for our deployment guide. My project manager took the guide and had Gemini break it down into plain boring bullet points. AI and the pundits can gf themselves in their journey to kill human expression.
Here is what I wrote in the guide:
"Post Deploy Responsibility
If you made it this far, say “Wow I really did it and it was so easy!”
Did you say it? Good. Now you are entirely responsible for any issues or bugs that may arise from the newly deployed code. Don’t go anywhere until the deploy has finished (usually takes a few minutes). While an issue or bug may not leave you directly at fault, you are responsible for coordinating any rollbacks or remediations that may be needed until the next deploy."
Here is what the project manager slopped it into:
"- Post deploy responsibility
- You are responsible for performing QA upon deployment
- You are responsible for any issues or bugs that may arise from newly deployed code
- You are responsible for coordinating any rollbacks or remediations that may be needed until the next deploy"
My paragraph wasn't long, hard to understand, or poorly written. I wouldn't have objected to a rewording or some changes, but the project manager chose to just copy-paste it into Gemini and paste the result back. So my take is that they didn't understand what I wrote, which is a few sentences long; frankly it's sad if a paragraph is too intense for you to read. When my project manager did this during the meeting I said, "RIP human expression," and their response was a very hasty "no that's not what's happening". This is what all the pundits want to do to everyone and society. Don't believe them that "it's just a tool"; that is just a tactic to get you to roll over so they can shove more AI in your face.
Open Source was always open to "many eyes" in theory, exposing itself to zero-day vulnerabilities. But the "many eyes" include both the good and the bad actors.
As far as I am concerned... Way to go Cal.com, and a good reminder to never use your services.
In order to build trust, they open source their product. I forked it, removed the blocks from the freemium feature in 15 minutes using Claude Code. Never published the code to anyone else, just used it myself
Unfortunately, I think it isn’t going to be tenable for systems to be fully open sourced going forward.
going closed source does not mean we are not fighting fire with fire
we have been using a handful of internal AI vulnerability scanners for months now
being open source simply reduces risk by 5x to 10x according to several security researchers we are working with https://cal.com/blog/continuous-ai-pentesting-vulnerability-...
It’s OK if there’s another reason for this transition, just be transparent about it and don’t treat your users as children.
My concern is mostly financial. Most people would be in a better position to monetize my software than I am... Using AI to obfuscate the origin while appropriating all the key innovations. I wouldn't get any credit.
Also, I'm not really interested in humans anymore. I have human fatigue.
I mean, bold statement but statistically speaking it's almost certainly incorrect. I will say that, irrespective of whether source is open or closed, I would be deeply skeptical of a project that made this assertion.
Then AI will eat your lunch anyway if the financial part has anything at all to do with the code.
AI can decompile code very well.
With that said, it at least seems possible to be able to read the binary itself, but most of the magic there is in execution, so you'd have to have an LLM behave kind of like a processor, I think.
Cal.com is going closed source
I'm not sure how this works in the legal sense. A human could ostensibly study an existing project and then rewrite it from scratch. The original work's license shouldn't apply as long as code wasn't copy & pasted, right?
What happens when an automated tool does the same? It's basically just a complicated copy & paste job.
- it’s not open vs closed anymore, it’s more like bug finding going from a few devs poking around to basically infinite parallel scanners
- so now you don’t get a couple of thoughtful reports, you get many edge cases and half-real junk. fixing capacity didn’t change though
- closing the repo doesn’t really save you, it just switches from white-box to black-box… and that’s getting pretty damn good anyway
real problem is: vuln discovery scaled, patching didn’t. now everything is a backlog game
1) Pulls you in with a catchy title that at first glance seems like a dunk on Cal.com (whatever that is).
2) Takes the "we understand your pain" approach to empathize w/ Cal.com, so you feel like you're on the good vibes side.
3) Provides a genuine response to the actual problem Cal.com is dealing with. Something you can't dismiss out of hand.
4) But at the end of the day, the response aligns perfectly with the product they're promoting (a click away to the homepage!)
This mix of genuine ideas and marketing is quite potent. Not saying this is all bad or anything, just found it a bit funny. The mixed-up-ness is the point!
It's entirely possible this CEO sincerely believes this, but that means you as a potential customer should stay away: now you know that the CEO of this company has no idea how technology works even at an executive level and/or that he doesn't consult his experts before making decisions.
The pipeline goes like this:
Use open source license to gain traction and credibility > establish a customer base > pull the rug on open source to get everyone who depends on your product but isn't yet paying to pay.
Well ...
Open Source as such will never "die", but we only need to look at what happened in, say, the last 5 or 10 years. Private entities with a commercial interest have been flexing their muscles. Microsoft - also known as Microslop these days - with GitHub is probably still the most famous example, but you can see others. One that annoys me personally is Shopify's recent influence - rubygems.org is basically just shopifygems.org now. See: https://blog.rubygems.org/2026/04/15/rubygems-org-has-a-publ...
"Contributors from both the RubyGems client team and Shopify are already working with us on making native gems a better experience for the Ruby community. "
There is a lot more I could add to this (see my complaint about how rubygems.org hijacks gems past the 100,000-download barrier; this was why I retired from using rubygems.org, and then the year afterwards ruby core purged numerous developers. The handwriting is soooooo clear that Shopify flexed its muscles here).
I think we need to make open source development more accessible to everyone, not just corporations throwing their money around to gain influence and leverage. I don't have a great idea to make this model work; economic incentives kind of have to be there too, I get that part, and I am not sure which models could work. But right now we really have a big problem.

We can also see this with age sniffing (age verification - see the article pointing at Meta orchestrating influence and lobbyism) and many more changes. Something has to change. Hopefully some people cleverer than me can come up with models that are actually sustainable, even if it may not necessarily be a "fund an open source developer for a year". There could be a more wide-spread "achieve xyz" or some other lower-finance effort - but again, I don't have a good suggestion here.

Hopefully something improves here though, because I am getting really tired of private interests constantly sabotaging and ruining the whole ecosystem while claiming they "improve" it. We have the old "War is peace. Freedom is slavery. Ignorance is strength." going again. Opposite day, every day.
Corporations are about money.
Individuals need to eat.
Governments love to concentrate power.
That makes the assumption that the same amount of money needed to attack a critical vulnerability is also required to find and fix it.
Let's say we have a project with 100 modules, and it costs us $100,000 to check these modules for vulnerabilities. What is stopping an attacker from spending the same amount of money to scan, say, 10 modules, but this time with 10x the number of tokens per module than the defender had when hardening the software?
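Back-of-the-envelope, the asymmetry looks like this (all the numbers here are hypothetical, including the per-token price, which is just an assumed constant for illustration):

```python
# Hypothetical budget math for the scenario above: same total spend,
# very different depth per module. COST_PER_MTOK is an assumption.
COST_PER_MTOK = 10.0  # assumed cost in dollars per million tokens

def mtokens_per_module(budget_usd: float, modules: int) -> float:
    """Million tokens of analysis each module gets for a given budget."""
    return budget_usd / COST_PER_MTOK / modules

defender = mtokens_per_module(100_000, 100)  # hardens all 100 modules
attacker = mtokens_per_module(100_000, 10)   # concentrates on just 10

print(defender, attacker)  # 100.0 1000.0 -> 10x scrutiny per target
```

The defender has to spread the budget evenly; the attacker only needs one module to crack, so focusing the same spend buys an order of magnitude more scrutiny per target.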
This feels like the core of the article, but it doesn’t prove the need for open source.
One of which I am experiencing right now is somebody just copying my repo, not crediting me, didn't even try to change the README. It's pretty discouraging.
The other is security: the premise that volunteers will report vulnerabilities only really matters if you are big enough for a small portion of people to dedicate themselves to it. For the most part, people take an open source tool, use it, and then forget about it; they only want stuff fixed.
Lastly, open source development kinda sucks so far. I've been working on a few different tools, and the amount of trolling and just bad-faith actors I had to deal with is exhausting. On top of that there is a constant stream of people just demanding stuff be fixed quickly.
But... playing devil's advocate, if AI makes it very easy to find exploits without the source code, wouldn't it be doubly effective finding them with the source code as well? And why is the dichotomy posed by this blog post "open source with AI reviews by everyone" vs "closed source but only the bad guys use AI"? What if the scenario was: closed source and the authors/security team use every AI tool at their disposal to find bugs? What do the community's eyeballs add to this equation, assuming (big if) AI review of exploits is such a force multiplier?
Before any knee-jerk reactions: big fan of open source, I'm not arguing this will kill it, I don't have the faintest idea what Cal.com is and I think a world without FOSS would be a tragedy, I run linux and most of my software on my personal PC (other than games) is FOSS.
Which works if you assume that AI can find 100% of your bugs.
It can't. So this is a complete waste of your time and will hide actual bugs behind a layer of confidence _and_ obscurity.
You're going to actually have to sit down and figure out how to provide real security in your product while earning profits. This is called "work." I understand Silicon Valley would like to earn money and not work. I am eager for these people to get their comeuppance.
That sounds like an excuse. The real reason is probably that it's hard to make a viable business out of developing open source.
Now I can take an open source repo and just add the missing features, fix the bugs, deploy in a few hours. The value of integration and bug-fixing when the code is available is now a single capable dev for a few hours, instead of an internal team. The calculus is completely different.
blaming AI scanners is just really convenient PR cover for a normal license change.
“I need to do foo in my app. Libraries bar and baz do these bits well. Pick the best from each and let’s implement them here”
I’d not be surprised if npmjs.com and its ilk turn into more a reference site than a package manager backend soon.
It started as a what-if joke, but it's turned out to be amazing. So yeah, npmjs.com is just reference site for me now, and node_modules stays tiny.
And the output is honestly superior. I end up with smaller projects, clean code, and a huge suite of property-based tests from the refactor process. And it's fully automatic.
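For what it's worth, the invariant-style checks the parent describes are easy to sketch. Here is a minimal hand-rolled version; `slugify` is a made-up stand-in for whatever function survives the refactor, and the properties are purely illustrative:

```python
import random
import string

def slugify(s: str) -> str:
    """Made-up example function: lowercase and join words with dashes."""
    return "-".join(s.lower().split())

# Hand-rolled property check in the hypothesis style, minus the dependency:
# throw random inputs at the function and assert invariants, not examples.
random.seed(0)
for _ in range(1000):
    s = "".join(random.choices(string.printable, k=random.randint(0, 40)))
    once = slugify(s)
    assert slugify(once) == once   # idempotent: a refactor must keep this
    assert once == once.lower()    # never emits uppercase
    assert " " not in once         # whitespace is always normalized away
print("all properties hold")
```

The point of property-based tests in a refactor is exactly this: they pin down behavior that must not change without enumerating examples by hand, which is why a tool can generate a large suite of them automatically.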