
Cal.com is going closed source

https://cal.com/blog/cal-com-goes-closed-source-why
51•Benjamin_Dobell•1h ago

Comments

andsoitis•1h ago
> Today, we are making the very difficult decision to move to closed source, and there’s one simple reason: security.

It seems like an easy decision, not a difficult one.

gouthamve•1h ago
This is a weird knee-jerk reaction. I feel like this is more a business decision than a security decision.

I feel like with AI, self-hosting software reliably is becoming easier, so the incentives to pay for a hosted service of an OSS project are going down.

badgersnake•1h ago
AI is certainly getting a lot of mileage as an excuse for doing bad things.

Wanna sack a load of staff? - AI

Wanna cut your consumer products division? - AI

Wanna take away the source? - AI

esafak•1h ago
Their product is getting commoditized: https://workspace.google.com/resources/appointment-schedulin...
fhn•1h ago
Yeah, I don't buy it. If they don't want these security reports, ignore them and continue on your path. Blaming AI is just an excuse to close the source. If you don't want AI to learn from your code, it's too late. Add genetic algorithms and fuzzing into AI and it can iterate and learn a billion times faster, no need to learn from humans.
doytch•1h ago
I get the mentality but it feels very much like security through obscurity. When did we decide that that was the correct model?
Peer_Rich•1h ago
hey cofounder here. since it takes my 16-year-old neighbor's son 15 mins and $100 of claude code credits to hack your open source project
doytch•1h ago
Right, but those capabilities are available to you as well. Granted the remediation effort will take longer but...you're going to do that for any existing issues _anyway_ right?

I understand why this is a tempting thing to do in a "STOP THE PRESSES" manner where you take a breather and fix any existing issues that snuck through. I don't yet understand why when you reach steady-state, you wouldn't rely on the same tooling in a proactive manner to prevent issues from being shipped.

And if you say "yeah, that's obv the plan," well then I don't understand what going closed-source _now_ actually accomplishes with the horses already out of the barn.

throwaway5752•1h ago
> those capabilities are available to you as well

Give him $100 to obtain that capability.

Give each open source project maintainer $100.

Or internalize the cost if they all decide the hassle of maintaining an open source project is not worth it any more.

I'm not aiming this reply at you specifically; it's about the general dynamic of this crisis. The real answer is for the foundational model providers to give this money. But instead, at least one seems to care more about acquiring critical open source companies.

We should openly talk about this - the existing open source model is being killed by LLMs, and there is no clear replacement.

simonw•1h ago
Are you at all worried that the message you are spreading here is "We are no longer confident in our own ability to secure your data"?
wild_egg•1h ago
That's exactly the message I got from the video
sambaumann•1h ago
Couldn't you just spend those $100 on claude code credits yourself and make sure you're not shipping insecure software? Security by obscurity is not the correct model (IMO)
discordianfish•1h ago
Please, go ahead!
wild_egg•1h ago
It only takes 20 minutes and $200 to hack a closed source one too though. LLMs are ludicrously good at using reverse engineering tools and having source available to inspect just makes it slightly more convenient.
hypeatei•1h ago
> neighbors son 15 mins and $100 claude code credits

Is that true? Didn't the Mythos release say they spent $20k? I'm also skeptical of Anthropic here doing what essentially amounts to "vague posting" in an attempt to scare everyone and drive up their value before IPO.

pdntspa•1h ago
whooptie fuggin doo, then spend $200 on finding and fixing the issues before you push your commits to the cloud
bakugo•1h ago
*This comment sponsored by Anthropic
toast0•1h ago
I don't think this really helps that much. Your neighbor could ask an LLM to decompile your binaries, and then run security analysis on the results.

If the tool correctly says you've got security issues, trying to hide them won't work. You still have the security issues and someone is going to find them.

bayindirh•1h ago
Why can’t you (as in Cal.com) spend that amount of money and find the vulnerabilities yourself?

You can keep the untested branch closed if you want to go with “cathedral” model, even.

senko•1h ago
What makes you think it'll take him more than 16 mins and $110 claude code credits to hack your closed source project?
ErroneousBosh•1h ago
> since it takes my 16 year old neighbors son 15 mins and $100 claude code credits to hack your open source project

To what end? You can just look at the code. It's right there. You don't need to "hack" anything.

If you want to "hack on it", you're welcome to do so.

Would you like to take a look at some of my open-source projects your neighbour's kid might like to hack on?

1970-01-01•1h ago
This is not security via obscurity; it is reducing your attack surface as much as possible.
dspillett•1h ago
Reducing your attack surface as much as possible via obscurity.
1970-01-01•7m ago
Going closed source is making the branch secret/private, not making it obscure. Obscurity would be zipping up the open source code (without a password) and leaving it online. Obscurity just means it takes additional steps to recover the information. Your passwords are not obscure strings of characters; they are secrets.
ButlerianJihad•1h ago
This seems kind of crazy. If LLMs are so stunningly good at finding vulnerabilities in code, then shouldn't the solution be to run an LLM against your code after you commit, and before you release it? Then you basically have pentesting harnesses all to yourself before going public. If an LLM can't find any flaws, then you are good to release that code.

A few years ago, I invoked Linus's Law in a classroom, and I was roundly debunked. Isn't it a shame that it's basically been fulfilled now with LLMs?

https://en.wikipedia.org/wiki/Linus%27s_law
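The "run an LLM against your code before you release it" idea above amounts to a release gate. A minimal sketch of that loop, where `run_llm_audit` is a hypothetical stand-in for whatever LLM-based scanner you'd actually call (the `eval()` check is purely illustrative):

```python
def run_llm_audit(diff_text: str) -> list[str]:
    """Placeholder audit: return a list of findings for the given diff.
    A real implementation would call out to an LLM-based scanner here."""
    findings = []
    if "eval(" in diff_text:  # trivial illustrative check, not a real scanner
        findings.append("use of eval() in: " + diff_text.strip())
    return findings

def release_gate(diff_text: str) -> bool:
    """Return True if the release may proceed (i.e. no findings)."""
    findings = run_llm_audit(diff_text)
    for finding in findings:
        print("BLOCKED:", finding)
    return not findings

print(release_gate("result = eval(user_input)"))  # False: release blocked
print(release_gate("result = int(user_input)"))   # True: release proceeds
```

The point of the comment above is that this gate runs privately, before the code (open or closed) ships.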

fwip•1h ago
It's entirely possible to address all the LLM-found issues and get an "all green" response, and have an attacker still find issues that your LLM did not. Either they used a different model, a different prompt, or spent more money than you did.

It's not a symmetric game, either. On defense, you have to get lucky every time - the attacker only has to get lucky once.

earthnail•1h ago
> It's not a symmetric game, either. On defense, you have to get lucky every time - the attacker only has to get lucky once.

This! I love OSS but this argument seems to get overlooked in most of the comments here.

dgellow•1h ago
I mean, you should definitely have _some_ level of audit by LLMs before you ship, as part of the general PR process.

But you might need thousands of sessions to uncover some vulnerabilities, and you don’t want to stop shipping changes because the security checks are taking hours to run

samename•1h ago
That’s a non-trivial cost for open source projects, which are commonly severely underfunded
yawndex•1h ago
Cal.com is not a severely underfunded project, it raised around $32M of VC money.
vlapec•1h ago
LLMs really are stunningly good at finding vulnerabilities in code, which is why, with closed-source code, you can and probably will use them to make your code as secure as possible.

But you won't keep the doors open for others to use them against it.

So it is, unfortunately, understandable in a way...

paprikanotfound•1h ago
I'm not a security expert, but can't closed source applications be vulnerable and exploited too? I feel like using closed source as a defense just gives you a false sense of security.
pixel_popping•13m ago
Delaying attacks is a valid form of security.
rvz•1h ago
You know what?

Great move.

Open-source supporters don't have a sustainable answer to the fact that AI models can easily find N-day vulnerabilities extremely quickly and swamp maintainers with issues and bug-reports left hanging for days.

Unfortunately, this is where it is going, and open-source software supporters did not foresee the downsides of open source maintenance in the age of AI, especially for businesses with "open-core" products.

Might as well close-source them to slow the attackers (with LLMs) down. Even SQLite has closed-sourced their tests which is another good idea.

zb3•1h ago
> especially for businesses with "open-core" products.

Then good, that overengineered, intentionally-crippled crap should go away.

wild_egg•1h ago
Haven't the SQLite tests always been closed? Getting access to them is a major reason for financially supporting them
hayleox•1h ago
The tools are available to everyone. It's becoming easier for hackers to attack you at the same speed that it's becoming easier for you to harden your systems. When everyone gains the same advantage at the same time, nothing has really changed.

It makes me think of how great chess engines have affected competitive chess over the last few years. Sure, the ceiling for Elo ratings at the top levels has gone up, but it's still a fair game because everyone has access to the new tools. High-level players aren't necessarily spending more time on prep than they were before; they're just getting more value out of the hours they do spend.

popalchemist•1h ago
I agree it's a shit tactic, but one thing I can say for those running software businesses is that it's not an equivalent linear increase on both sides. It's asymmetric, because the number of attackers and the amount of attack surface (exposed 3rd party dependencies, for example) are both near infinite, with no opportunity cost for failure by the bad actors (hackers). However, a single failure can bring down a company, particularly when it may be hosting sensitive user data that could ruin its customers' businesses or lives.

I think Cal are making the wrong call, and abandoning their principles. But it isn't fair to say the game is accelerating in a proportionate way.

See: https://www.youtube.com/watch?v=2CieKDg-JrA

Ultimately, he concludes that while in the short run the game defines the players' actions, an environment that makes cooperation too risky naturally forces participants to stop cooperating to protect themselves from being "exploited" (this bit is around 34:39 - 34:46)

hayleox•52m ago
Sure, I can see that to a degree. And there definitely is a bit of chaos during the transition period as everyone scrambles to figure out what the landscape looks like now. I could understand if they decided to temporarily do less-frequent code releases, or maybe release their code on a delay or something, while they wait for the dust to settle. But I don't think permanently ending open source development is the right move.
simonw•1h ago
Drew Breunig published a very relevant piece yesterday that came to the opposite conclusion: https://www.dbreunig.com/2026/04/14/cybersecurity-is-proof-o...

Since security exploits can now be found by spending tokens, open source is MORE valuable, because open source libraries can share that auditing budget while closed source vendors have to find all the exploits themselves in private.

> If Mythos continues to find exploits so long as you keep throwing money at it, security is reduced to a brutally simple equation: to harden a system you need to spend more tokens discovering exploits than attackers will spend exploiting them.
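That "brutally simple equation" can be put into a toy back-of-envelope model. All numbers and the linear-discovery assumption below are illustrative inventions, not figures from the post:

```python
# Toy model of the defender/attacker token-spend equation quoted above.
# Assumption (crude): exploit discovery scales roughly linearly with audit spend.

def exploits_found(tokens_spent: int, tokens_per_exploit: int = 1_000_000) -> int:
    """Number of exploits uncovered for a given token budget."""
    return tokens_spent // tokens_per_exploit

defender_budget = 5_000_000  # tokens the project spends auditing before release
attacker_budget = 3_000_000  # tokens an attacker is willing to spend

# Under this assumption the defender stays ahead only while their audit
# spend at least matches the attacker's: 5 exploits found and fixed
# versus 3 that an attacker could rediscover.
defender_ahead = exploits_found(defender_budget) >= exploits_found(attacker_budget)
print(defender_ahead)  # True
```

The sharing argument then follows: if N projects pool one audit budget, each effectively gets N times the defensive spend per token contributed.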

not-chatgpt•1h ago
Security should be a non issue in the age of AI now that auditing is cheaper than ever.

I'd give them more credit if they used the AI-slop unmaintainability argument.

criddell•1h ago
How many open source libraries have auditing budgets?
Mordisquitos•1h ago
Their commercial users have auditing budgets.
dspillett•1h ago
Does your ideal world have an easy path to citizenship?

I might like to live there.

simonw•59m ago
I expect we're about to find that it's a lot easier to convince a company to spend money running an AI security scan of their dependencies and sharing the results with the maintainers than it is to have them give those maintainers money directly.

(I just hope they can learn to verify the exploits are valid before sharing them!)

pietz•1h ago
This conclusion makes more sense to me, but maybe I'm too naive.

The media momentum of this threat really came with Mythos, which was like 2 or 3 weeks ago? That seems like a fairly short time to pivot your core principles like that. It sounds to me like they wanted to do this for other business related reasons, but now found an excuse they can sell to the public.

(I might be very wrong here)

skybrian•1h ago
This seems similar to the lesson learned for cryptographic libraries where open source libraries vetted by experts become the most trusted.

Your average open source library isn’t going to get that scrutiny, though. It seems like it will result in consolidation around a few popular libraries in each category?

DrammBA•1h ago
I have a feeling the real reason is them trying to avoid someone using AI to copyright-wash their product, they're just using security as the excuse.
mgdev•57m ago
This is an economically sound conclusion.

It also means that you need to extract enough value to cover the cost of said tokens, or reduce the economic benefit of finding exploits.

Reducing economic benefit largely comes down to reducing distribution (breadth) and reducing system privilege (depth).

One way to reduce distribution is to raise the price.

Another is to make a worse product.

Naturally, less valuable software is not a desirable outcome. So either you reduce the cost of keeping open (by making closed), or increase the price to cover the cost of keeping open (which, again, also decreases distribution).

The economics of software are going to massively reconfigure in the coming years, open source most of all.

I suspect we'll see more 'open spec' software, with actual source generated on-demand (or near to it) by models. Then all the security and governance will happen at the model layer.

woodruffw•1h ago
Today, it's easy to (publicly) evaluate the ability of LLMs to find bugs in open source codebases, because you don't need to ask permission. But this doesn't actually tell us the negative statement, which is that an LLM won't just as effectively find bugs in closed codebases, including through black-box testing, reverse engineering, etc.

If the null hypothesis is that LLMs are good at finding bugs, full stop, then it's unclear to me that going closed actually does much to stop your adversary (particularly as a service operator).

zb3•1h ago
This has to be the most bullshit reason I've seen... if AI can be pointed at code to find vulnerabilities, then do it yourself before publishing the code.
dspillett•1h ago
> if AI can be pointed and find vulnerabilities then do it yourself before publishing the code

At your cost.

Every time you push (or, if not that, at least every time there is a new version that you call a release).

Including every time a dependency updates, unless you pin specific versions.

I assume (caveat: I've not looked into the costs) many projects can't justify that.

Though I don't disagree with you that this looks like a commercial decision with “LLM based bug finders could find all our bad code” as an excuse. The lack of confidence in their own code while open does not instil confidence that it'll be secure enough to trust now closed.

zb3•13m ago
For-profit companies using open-source software should bear that cost - that's my position.

I believe that N companies using an open source project and contributing back would make this burden smaller than one company using the same closed-source project.

bearsyankees•1h ago
Think this is a bad, bad move...

https://news.ycombinator.com/item?id=47780712

creatonez•1h ago
This is some truly exceptionally clownish attention seeking nonsense. The rationale here is complete nonsense, they just wanted to put "because AI" after announcing their completely self-serving decision. If AI cyber offense is such a concern, recognize your role as a company handling truckloads of highly sensitive information and actually fix your security culture instead of just obscuring it.
jhatemyjob•1h ago
I mean it's not complete nonsense, but yeah, doing it for security reasons sounds like BS. I actually thought this was going to be about how AI makes it super easy for someone to steal all their code and fold it into their own competing project. I've seen a few open source projects get sideswiped by this, AI is pretty good at copying code (and obfuscating the fact that it was copied). I suspect that's the real reason but it doesn't sound as good. So they went with this half-truth.
nativeit•1h ago
I guess why fix vulnerabilities when you can just obscure them?
asdev•1h ago
Who even uses their open source product?
_pdp_•1h ago
The real threat is not security but bad actors copying your code and calling it theirs.

IMHO, open source will continue to exist and will be successful, but the existence of AI is a deterrent for most. Let's be honest: in recent times the only reason startups went open source first was to build a community and an organic growth engine powered by early adopters. Now this is no longer viable, and in fact it simply helps competitors. So why do it?

The only open source that will remain will be the real open source projects that are true to the ethos.

fcarraldo•1h ago
> The real threat is not security but bad actors copying your code and calling it theirs.

How has this changed?

HyprMusic•1h ago
Bad actors can rewrite it with AI and claim ownership of the result.
evanjrowley•1h ago
I agree with you that AI's disruption of attribution is a much bigger problem, but it's also worth recognizing that not everyone has this same motivation. It mostly affects copyleft open source licenses.

Attribution isn't required for many permissive open source licenses. Dependencies with those licenses will often end up inside closed source software. Even if there isn't FOSS in the closed-source software, basically everyone's threat model includes (or should include) "OpenSSL CVE". On that basis, I doubt Cal is accomplishing as much as they hope to by going closed source.

popalchemist•1h ago
Seems like it's just being used as a convenient pretense to back out of open-source.
barelysapient•1h ago
I hate how this sounds... but this reads to me as "we lack confidence in our code's security, so we're closing the source to conceal vulnerabilities which may exist."
iancarroll•1h ago
I know plenty of security researchers who exclusively use Claude Code and other tools for blackbox testing against sites they don’t have the source code for. It seems like shutting down the entire product is the only safe decision here!
adamtaylor_13•1h ago
Could you not simply point AI at your open source codebase and use it to red-team your own codebase?

This post's argument seems circular to me.

hmokiguess•1h ago
Risk tolerance and emotional capacity differ from one individual to another; while I may disagree with the decision, I am able to respect it.

That said, I think it’s important to try to see things from multiple angles rather than bucket them from your filter bubble alone. Fear sells, and we need to stop buying into it.

tudorg•1h ago
It's funny that this news showed up just as we (Xata) have gone the other direction, citing also changes due to AI: https://xata.io/blog/open-source-postgres-branching-copy-on-...

We did consider arguments in both directions (e.g. easier to recreate the code, agents can understand better how it works), but I honestly think the security argument goes for open source: the OSS projects will get more scrutiny faster, which means bugs won't linger around.

Time will tell, I am in the open source camp, though.

tokai•1h ago
Security through obscurity has been known to be a faulty approach for nearly 200 years. Yet here we are.
righthand•1h ago
This is the future now that AI is here. Publishing is going to be dead. Read the tea leaves: how many engineers are claiming they don’t use package managers anymore and just generate dependencies? In 5 years no one will be making an argument for open source or blogging.
evanjrowley•1h ago
Juxtapose this with the fact that many HNers will decry strong copyleft FOSS licenses as not being truly "open source" - the reality is that closed source software is still full of open-source non-copyleft dependencies. Unless you're rolling your own encryption and TCP stack, being closed source will not be the easy solution that many imagine it to be.
dec0dedab0de•1h ago
This seems dishonest, like someone is forcing the decision for other reasons, and they're using security and AI as a distraction.
poisonborz•20m ago
AI sure is useful as a scapegoat for any negative PR inducing moves.

Patch Tuesday, April 2026 Edition

https://krebsonsecurity.com/2026/04/patch-tuesday-april-2026-edition/
1•Brajeshwar•32s ago•0 comments

New search engine reveals if ancestors were in Nazi party

https://www.bbc.com/news/articles/cr411ndee7yo
1•hackernj•53s ago•0 comments

Ecovacs Wants to Weaponize Your Mop Water

https://www.siliconsnark.com/ecovacs-wants-to-weaponize-your-mop-water/
1•SaaSasaurus•1m ago•0 comments

French cops free mother and son after 20-hour crypto kidnap ordeal

https://www.theregister.com/2026/04/15/crypto_kidnap_france/
1•Bender•2m ago•0 comments

Encrypted Client Hello: A Big Tech Privacy Fix

https://blog.miloslavhomer.cz/encrypted-client-hello/
1•ArcHound•3m ago•0 comments

Henry's Pocket

https://en.wikipedia.org/wiki/Henry%27s_pocket
1•thunderbong•5m ago•0 comments

Notes on Twins Vol. 2 – 12 Weeks

https://roryflint.substack.com/p/notes-on-twins-vol-2
2•mrroryflint•8m ago•1 comments

'Handing out the blueprint to a bank vault' Why AI led Cal to drop open source

https://www.zdnet.com/article/ai-security-worries-force-company-to-abandon-open-source/
2•CrankyBear•9m ago•0 comments

Can astrologers gain insights about people from astrological charts?

https://www.clearerthinking.org/post/can-astrologers-use-astrological-charts-to-understand-people...
1•pavel_lishin•9m ago•0 comments

Write stuff down and document things

https://thereabouts.bearblog.dev/why-you-should-write-stuff-down-and-document-things/
1•speckx•11m ago•0 comments

How Older Adults Are Using V.R. To Counter Social Isolation

https://www.nytimes.com/2026/04/15/technology/vr-technology-elderly-community-social-isolation.html
1•mitchbob•12m ago•0 comments

The next evolution of the Agents SDK

https://openai.com/index/the-next-evolution-of-the-agents-sdk/
4•meetpateltech•15m ago•0 comments

What China's Great Green Wall can teach the world

https://www.nature.com/articles/d41586-026-01195-3
3•Brajeshwar•16m ago•0 comments

Graphs That Explain the State of AI in 2026

https://spectrum.ieee.org/state-of-ai-index-2026
3•CarbonCycles•16m ago•0 comments

Microbes make microplastics more likely to form ice in clouds, research reveals

https://phys.org/news/2026-03-microbes-microplastics-ice-clouds-reveals.html
2•PaulHoule•16m ago•0 comments

CPUs Aren't Dead. Gemma2B Out Scored GPT-3.5 Turbo on Test That Made It Famous

https://seqpu.com/CPUsArentDead/
3•fredmendoza•17m ago•0 comments

Can you steal $10k from a locked iPhone? [video]

https://www.youtube.com/watch?v=PPJ6NJkmDAo
2•terramex•18m ago•0 comments

Allbirds shares soar 600% as it pivots from footwear to AI

https://www.cnn.com/2026/04/15/investing/allbirds-pivot-to-ai
4•samsolomon•19m ago•1 comments

The Download: NASA's nuclear spacecraft and unveiling our AI 10

https://www.technologyreview.com/2026/04/15/1135904/the-download-nasa-nuclear-powered-spacecraft-...
1•joozio•20m ago•0 comments

Prove You Are a Robot: CAPTCHAs for Agents

https://browser-use.com/posts/prove-you-are-a-robot
2•lukasec•20m ago•0 comments

Gemini on Mac

https://twitter.com/sundarpichai/status/2044452464724967550
1•tosh•21m ago•1 comments

Show HN: Tine – Drive Wayland Around with Agents

https://github.com/smythp/tine
2•tarboreus•21m ago•0 comments

Allbirds Is Pivoting to AI. Why Not?

https://www.wsj.com/livecoverage/stock-market-today-dow-sp-500-nasdaq-04-15-2026/card/allbirds-is...
3•gbourne1•21m ago•1 comments

Show HN: Mac menu bar app for Claude Code rate limits

https://github.com/elliotykim/claudewatch
2•elliotykim•22m ago•0 comments

Show HN: Dependicus, a dashboard for your monorepo's dependencies

https://descriptinc.github.io/dependicus/
5•irskep•22m ago•0 comments

Show HN: I built a dev server that runs on half a lightbulb

https://bhave.sh/how-cheap-agent-dev-server/
1•muunbo•23m ago•1 comments

Inter-1 – Omni-modal model for detecting social signals in video

https://www.interhuman.ai/blog/introducing-inter-1
3•interhuman•23m ago•0 comments

Nasal spray rewinds the aging brain, restoring memory and reversing inflammation

https://isevjournals.onlinelibrary.wiley.com/doi/10.1002/jev2.70232
2•arunc•23m ago•0 comments

Introducing: ShaderPad

https://rileyjshaw.com/blog/introducing-shaderpad/
3•evakhoury•25m ago•0 comments

The Slop KPI Era: How Tokenmaxxing Is Making AI Worse

https://portofcontext.com/blog/welcome-to-the-slop-kpi-era-how-tokenmaxxing-is-making-ai-worse
4•pmkelly4444•25m ago•0 comments