vs. spamming OSS maintainers with slop reports costs Google nothing
If it were slop, they could complain that it wasted their time with false or unimportant reports; instead they seem to be complaining that the program reported a legitimate security issue?
For a human, generating a bug report requires some labor with a human in the loop, which imposes a natural rate limit on how many reports get submitted, and a natural triage of whether it's personally worth reporting the bug at all. It can be worth it if you're prosocially interested in the project, or if your operations depend on it enough that you're willing to pay a little to help it along.
For a large company using LLMs to automatically generate bug reports, the cost is much lower (indeed, it may be profitable longer-term for marketing, finding product niches, refining models, etc.). This creates an asymmetry with the maintainer's side, where the quality and volume of reports directly affect maintainer throughput and quality of life.
Their choices become:
- maintain a complex fork, constantly integrating from upstream;
- pin to some old version and maybe go through a Herculean effort to rebase when something they truly must have merges upstream; or
- genuinely fork it and employ an expert in this highly specific domain to write what will often end up being parallel features and security patches to mainline FFmpeg.
Or, of course, pay someone already doing OSS work to fix it in mainline. Which is the beauty of open source: that's genuinely the least painful option, and it also happens to be the one that benefits the community the most.
If that's what I have to expect, I'd rather not even interact with them at all.
For instance, I reported to the xorg bug tracker that one app behaved oddly when I passed --version to it. I was batch-reporting on all xorg applications via a Ruby script.
Alan Coopersmith, the elderly hero that he is, fixed this not long after my report. (It was a real bug; granted, a super-small one, but still.)
I could give many more examples here. (I don't remember the exact date, but I think I reported this within the last 3 years or so. Unfortunately, reporting bugs in xorg apps is ... a bit archaic. I also stopped reporting bugs to KDE because I hate Bugzilla. GitHub issues spoiled me; they are so easy and convenient to use.)
But the way I see it, a bug report is a bug report, no matter how small or big the bug or the team, it should be addressed.
I don’t know, I’m not exactly a pillar of the FOSS community with weight behind my words.
As the article states, these are AI-generated bug reports. So it's a trillion-dollar company throwing AI slop over the wall and demanding a 90-day turnaround from unpaid volunteers.
and the ffmpeg maintainers say it's not wanted
so it's slop
I'm not a Google fan, but if the maintainers are unable to understand that, I welcome a fork.
There is a convergence of very annoying trends happening: more and more reports are garbage, found and written using AI, with an impact that is questionable at best; the way CVEs are published and classified is idiotic; and the platforms funding vulnerability research, like Google's, are more and more hostile to projects, leaving very little time to actually work on fixes before publishing.
This is leading more and more open source developers to throw in the towel.
Some of them are not even bugs in the traditional sense of the word but expected behaviours which can lead to insecure side effects.
This was a bug, which caused an exploitable security vulnerability. The bug was reported to ffmpeg, over their preferred method for being notified about vulnerabilities in the software they maintain. Once ffmpeg fixed the bug, a CVE number was issued for the purpose of tracking (e.g. which versions are vulnerable, which were never vulnerable, which have a fix).
Having a CVE identifier is important because we can't just talk about "the ffmpeg vulnerability" when there have been a dozen this year, each with different attack surfaces. But it really is just an arbitrary number, while the bug is the actual problem.
Things which are usually managed inside a project now have visibility outside of it. You can justify that however you want, e.g. the need to have an identifier; it doesn't fundamentally change how it impacts the dynamic.
Also, the discussion is not about a specific bug. It's a general discussion regarding how Google handles disclosure in the general case.
The 90 day period is the grace period for the dev, not a demand. If they don't want to fix it then it goes public.
If this keeps up, there won't be anyone willing to maintain the software due to burn out.
In today's situation, free software is keeping many companies honest. Losing that kind of leverage would be a loss to the society overall.
And the public disclosure is going to hurt the users which could include defense, banks and other critical institutions.
That’s how open source works.
What's the point of just showering these things with bug reports when the same tool (or a similar one) can also apparently fix the problem too?
Imagine you're a humble volunteer OSS developer. If a security researcher finds a bug in your code they're going to make up a cute name for it, start a website with a logo, Google is going to give them a million dollar bounty, they're going to go to Defcon and get a prize and I assume go to some kind of secret security people orgy where everyone is dressed like they're in The Matrix.
Nobody is going to do any of this for you when you fix it.
Doesn't really fit with your narrative of security researchers as shameless glory hounds, does it?
Note FFmpeg and cURL have already had maintainers quit from burnout from too much attention from security researchers.
I don't know if you'd be satisfied with that, but it would certainly allow them to easily write the blog posts you seem to be complaining about, while keeping the load on maintainers minimal. Blog posts appear to be quite infrequent compared to the total number of vulnerabilities they report: around 20 vulnerability reports per year seems like a manageable load for the entire FOSS community to bear. And almost none of those 20 yearly reports would go to ffmpeg, if not literally none, given the Project Zero blog has 0 search results for "ffmpeg" or "libav". A significant portion of their blog posts aren't even about FOSS at all, but about proprietary software like the operating systems Microsoft and Apple make.
I do think such a thing would be bad for everyone, though, including the ffmpeg developers themselves, to be honest. Project Zero is good for everyone's security, in my opinion. Even if all FOSS developers universally decided to reject any Project Zero report that doesn't come with a patch, and Google decided to still not write such patches, people being able to know that these vulnerabilities exist is still a good thing; certainly much better than more vulnerabilities being left in for malicious actors to discover and use in zero-day attacks.
I suppose you'd prefer they abandon their projects entirely? Because that's the real alternative at this point.
And maybe it's fine to have AI-generated articles that summarize Twitter threads for HN, but this is not a good summarization of the discussion that unfolded in the wake of this complaint. For one, it doesn't mention a reply from Google security, which you would think should be pretty relevant here.
https://issuetracker.google.com/issues/440183164?pli=1
It's of excellent quality. They've made things about as easy for the FFmpeg devs as possible without actually fixing the bug, which might entail legal complications they want to avoid.
AI was only used to find the bug.
Another bunch of people who make era-defining software where they extract everything they can. From customers, transactionally. From the first bunch, pure extraction (slavery, anyone?).
They could adopt a more flexible policy for FOSS though.
Project Zero hasn't reported any vulnerabilities in any software I maintain. Lots of other security groups have, some well respected as well, but to my knowledge none of these "outside" reports were actual vulnerabilities when analyzed in context.
Where did you get that idea?
chill, nobody knows what ffmpeg is
Why is there such weird toxic empathy around ffmpeg?
If this weren't Google but a lone developer, by now he'd be doxxed, fired, and receiving death threats. It's not the first time ffmpeg has struck out like this.
It’s hard to take any comment seriously that tries to use “slavery” for situations where nobody is forced to do anything for anyone.
Though it's more likely there's just some team tasked with shoving AI into vulnerability hunting.
To me it's okay to “demand” that a for-profit company (e.g. Google) fix an issue fast, because they have resources. But to “demand” that an OSS project fix something within a certain (possibly tight) timeframe... well, I'm sure you know better than I do that that's not how volunteering works.
Let's say that FFmpeg has a severity-10 CVE where a very easy stream can cause RCE. So what?
We are talking about software commonly deployed by end users to encode their own media, something that rarely comes in untrusted forms. For an exploit to happen, an attacker needs to get a crafted media file into circulation that people commonly transcode via FFmpeg. Not an easy task.
This sure does matter to the likes of Google, assuming they are using ffmpeg for their backend processing. It doesn't matter at all for just about anyone else.
You might as well tell me that `tar` has a CVE. That's great, but I don't generally go around tarring or untarring files I don't trust.
Looks like Firefox does the same.
I would be shocked if any company working with user-generated video, from the likes of Zoom or TikTok or YouTube down to small apps all over, did not have it in their pipeline somewhere.
One because they are a Rust shop and GStreamer is slightly better supported in that realm (due to an official binding); the other because they do complex transformations on the source streams at a low level, versus high-level batch transformations/transcoding.
My point was that it would be hard to imagine eschewing ffmpeg completely, not that other tools have no value or that ffmpeg is better at everything. It is so versatile and ubiquitous that it is hard not to use it somewhere.
In my experience there is almost always some scenario in the stack where throwing in ffmpeg for a step is simpler and easier, even if there's no proper language binding, for some non-core step or other.
From a security context that wouldn't matter: as long as it touches data, security vulnerabilities would be a concern.
It would be surprising, not impossible, to forgo ffmpeg completely. It would be just like this site being written in Lisp: not something you would typically expect, but not impossible.
It would be surprising to find memory corruption in tar in 2025, but not in ffmpeg.
In this world the user is left vulnerable because attackers can use published vulnerabilities that the maintainers are too overwhelmed to fix.
Google runs this security program even on libraries they do not use at all, where it's not a demand, it's just whitehat security auditing. I don't see the meaningful difference between Google doing it and some guy with a blog doing it here.
That's a pretty core difference.
I saw another poster say something about "buggy software". All software is buggy.
This is significant when they represent one of the few entities on the planet likely able to find bugs at that scale due to their wealth.
So funding a swarm of bug reports, for software they benefit from, using a scale of resources not commonly available, while not contributing fixes and instead demanding timelines for disclosure, seems a lot more like they'd just like to drive people out of open source.
An exploit is different. It can affect anyone and is quite pertinent.
It looks like they are now starting to flood OSS with issues because "our AI tools are great", but don't want to spend a dime helping to fix those issues.
xkcd 2347
I fail to see a single Google logo. I also didn't know that Google somehow had a contract with ffmpeg to be their customer.
> Note the names of those maintainers. Now go to fflabs.eu
> Now click on the “team” link and check out the names
Quite the investigative work you've done there: some maintainers may do some work that surely... means something?
Meanwhile, the actual maintainer has actually been patching thousands of vulnerabilities in ffmpeg, including the recent ones reported by Google:
--- start quote ---
so far i got 7560€ before taxes for my security work in the last 7 months. And thats why i would appreciate that google, facebook, amazon and others would pay me directly. Also that 7560 i only got after the twitter noise.
https://x.com/michael__ni/status/1989391413151518779
--- end quote ---
Hint: just because people do some consulting for a customer doesn't mean that they are continuously paid to work on something.
This bit of ffmpeg is not a Chrome dependency, and likely isn’t used in internal Google tools either.
> Just publishing bug reports by themselves does not make open source projects secure!
It does, especially when you first privately report them to the maintainers and give them plenty of time to fix the bug.
Nobody is against Google reporting bugs, but they use AI to generate reports automatically, at volume, and then expect a prompt fix. If you can't expect the maintainers to fix the bug before disclosure, then it is a balancing act: Is the bug serious enough that users must be warned and avoid using the software? Will disclosing the bug now allow attackers to exploit it because no fix has been made?
In this case, this bug (imo) is not serious enough to warrant a short disclosure time, especially if you consider *other* security notices that may have a bigger impact. The chances of an attacker finding this on their own and exploiting it are low, but now everybody is aware and you have to rush to update.
What do you believe would be an appropriate timeline?
>especially if you consider other security notices that may have a bigger impact.
This is a bug in the default config that is likely to result in RCE, it doesn’t get that much worse than this.
Likely to get RCE? No. Not every UAF results in an RCE. Also, someone would have to find this, and it's clearly not something you can easily spot from the code; Google did extensive fuzzing to discover it. The trade-off is that FFmpeg had to divert resources to fix this, when the chance it would have been discovered independently is tiny, and exploited even tinier.
Making the vulnerability public makes it easy to find to exploit, but it also makes it easy to find to fix.
If you want to fix up old codecs in ffmpeg for fun, would you rather have a list of known broken codecs and what they're doing wrong; or would you rather have to find a broken codec first.
While it's true that only Google has Google infrastructure, this presupposes that 100% of all published exploits would be findable.
What a strange sentence. Google can do a lot of things that nobody can do. The list of things that only Google, a handful of nation states, and a handful of Google-peers can do is probably even longer.
Google does have immense scale that makes some things easier. They can test and develop congestion control algorithms with world wide (ex-China) coverage. Only a handful of companies can do that; nation states probably can't. Google isn't all powerful either, they can't make Android updates really work even though it might be useful for them.
[1] https://security.googleblog.com/2014/01/ffmpeg-and-thousand-...
Not really. It requires time, ergo money.
Yes? It's in the license
>NO WARRANTY
>15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
If I really care, I can submit a patch or pay someone to. The ffmpeg devs don't owe me anything.
Google should provide a fix but it's been standard to disclose a bug after a fixed time because the lack of disclosure doesn't remove the existence of the bug. This might have to be rethought in the context of OSS bugs but an MIT license shouldn't mean other people can't disclose bugs in my project.
Holding public disclosure over the heads of maintainers if they don't act fast enough is damaging not only to the project, but to end users themselves also. There was no pressing need to publicly disclose this 25 year old bug.
Just because software makes no guarantees about being safe doesn’t mean I want it to be unsafe.
Every software I've ever used had a "NO WARRANTY" clause of some kind in the license. Whether an open-source license or a EULA. Every single one. Except, perhaps, for public-domain software that explicitly had no license, but even "licenses" like CC0 explicitly include "Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work ..."
But yes, things you get for free have no guarantees, and there should be no expectations put on the gift giver beyond not being actively, intentionally malicious.
Also, "depending on jurisdiction" is a good point as well. I'd forgotten how often I've seen things like "Offer not valid in the state of Delaware/California/wherever" or "If you live in Tennessee, this part of the contract is preempted by state law". (All states here are pulled out of a hat and used for examples only, I'm not thinking of any real laws).
And still, we live in a society. We have to use software, bugs or not.
Really, the burden is on those shipping products that depend on ffmpeg: they are the ones who have to fix the security issues for their customers. If Google is one of those companies, they should provide the fix in the given time.
It's about accountability! Who actually does the work, once those who ship it to customers care, is for them to figure out (though note that maintainers will still bear some burden to review, integrate, and maintain the change anyway).
So while this might be a high security risk because it possibly could allow RCE, the real-world risk is very low.
Yes, because publicly disclosing the vulnerability means someone will have enough information to exploit it. Without public disclosure, the chance of that is much lower.
Note that ffmpeg doesn't want to remove the codec because their goal is to play every format known to man, but that's their goal. No one forces them to keep all codecs working.
The X days is a concession to the developers that the public disclosure will be delayed to give them an opportunity to address the issue.
If the obscure codec is not included by default or cannot be triggered by any means other than being explicitly asked for, then it would be reasonable to tag it Won't Fix. If it can be triggered by other means, such as auto file-type detection on a renamed file, then it doesn't matter how obscure the feature is; the exploit would affect everyone.
What is the alternative to a time-limited embargo? I don't particularly like the idea of groups of people having exploits they have known about for ages that haven't been publicly disclosed. That is the kind of information that finds its way into the wrong hands.
Of course companies should financially support the developers of the software they depend upon. Many do this for OSS in the form of having a paid employee that works on the project.
Specifically, FFmpeg seems to have a problem in that much of its limitation of resources comes from alienating contributors. This isn't isolated to just this bug report.
As a user this is making me wary of running it tbh.
Silly nitpick, but you search for vulnerabilities not CVEs. CVE is something that may or may not be assigned to track a vulnerability after it has been discovered.
Most security issues probably get patched without a CVE ever being issued.
It's standard practice for commercially-sponsored software, and it doesn't necessarily fit volunteer maintained software. You can't have the same expectations.
Consumers of closed source software have a pretty reasonable expectation that the creator will fix it in a timely manner. They paid money, and (generally) the creator shouldn't put the customer in a nasty place because of errors.
Consumers of open source software should have zero expectation that someone else will fix security issues. Individuals should understand this; it's part of the deal for us using software for free. Organizations that are making money off of the work of others should have the opposite of an expectation that any vulns are fixed. If they have or should have any concern about vulnerabilities in open source software, then they need to contribute to fixing the issue somehow. Could be submitting patches, paying a contractor or vendor to submit patches, paying a maintainer to submit patches, or contributing in some other way that betters the project. The contribution they pick needs to work well with the volunteers, because some of the ones I listed would absolutely be rejected by some projects -- but not by others.
The issue is that an org like Google, with its absolute mass of technical and financial resources, went looking for security vulnerabilities in open source software with the pretense of helping. But if Google (or whoever) doesn't finish the job, then they're being a piece of shit to volunteers. The rest of the job is reviewing the vulns by hand and figuring out patches that can be accepted with absolutely minimal friction.
To your point, the beginning of the expectation should be that vulns are disclosed, since otherwise we have known insecure software. The rest of the expectation is that you don't get to pretend to do a nice thing while _knowing_ that you're dumping more work on volunteers that you profit from.
In general, wasting the time of volunteers that you're benefiting from is rude.
Specifically, organizations profiting off of volunteer work and wasting their time makes them an extractive piece of shit.
Stop being a piece of shit, Google.
The OSS maintainer has the responsibility to either fix, or declare they won't fix - both are appropriate actions, and they are free to make this choice. The consumer of OSS should have the right to know what vulns/issues exist in the package, so that they make as informed a decision as they can (such as adding defense in depth for vulns that the maintainers chooses not to fix).
Also in general Google does investigate software they don't make money off.
An organization of this size might actually have trouble making sure they really don't use code from that project. Or won't do so in the future.
But they make money off the reputational boost they earn from having their name attached to the investigation. Unless the investigation and report are anonymous and their name not attached (which could be true for some researchers), I can say that they're not doing charity.
As a user of ffmpeg I would definitely want to know this kind of information. The responsibility the issue filer has is not to the project, but to the public.
Why is Google deliberately running an AI process to find these bugs if they're just going to dump them all on the FFmpeg team to fix?
They have the option to pay someone to fix them.
They also have the option to not spend resources finding the bugs in the first place.
If they think these are so damn important to find that it's worth devoting those resources to, then they can damn well pay for fixing them too.
Or they can shut the hell up and let FFmpeg do its thing in the way that has kept it one of the https://xkcd.com/2347/ pieces of everyone's infrastructure for over 2 decades.
Are the bug reports accurate? If so, then they are contributing just as if I found them and sent a bug report, I'd be contributing. Of course a PR that fixes the bug is much better than just a report, but reports have value, too.
The alternative is to leave it unfound, which is not a better alternative in my opinion. It's still there and potentially exploitable even when unreported.
It's just not possible.
So Google is dedicating resources to finding these bugs
and feeding them to bad actors.
Bad actors who might, hypothetically have had the information before, but definitely do once Google publicizes them.
You are talking about an ideal situation; we are talking about a real situation that is happening in the real world right now, wherein the option of Google reports bug > FFmpeg fixes bug simply does not exist at the scale Google is doing it at.
I think the far more likely result of all the complaints is that Google simply disengages from ffmpeg completely and stops doing any security work on it. I think that would be quite bad for the security of the project. If Google can trivially find bugs at a speed that overwhelms the ffmpeg developers, I would imagine bad actors can also search for and find those same vulnerabilities. And if they know those vulnerabilities very much exist, but that Google has simply stopped searching for them on demand of the ffmpeg project, that would likely give them extremely high motivation to go looking in a place where they can be almost certain to find unreported/unknown vulnerabilities. The result would likely be a lot more 0-day attacks involving ffmpeg, which I do not think anyone regards as a good outcome. (I would consider "Google publishes a bunch of vulnerabilities ffmpeg hasn't fixed so that everyone knows about them" to be a much preferable outcome, personally.)
Now, you might consider that possibility fine - after all, the ffmpeg developers have no obligation to work on the project, and thus to e.g. fix any vulnerabilities in it. But if that's fine, then simply ignoring the reports Google currently makes is presumably also fine, no ?
In my opinion, if the problem is money and they cannot raise enough, then somebody should help them with that, no?
Either way, users need to know about the vulnerabilities. That way, they can make an informed tradeoff between, for example, disabling the LucasArts Smush codec in their copy of ffmpeg, and being vulnerable to this hole (and probably many others like it).
I mean, yes, the ffmpeg maintainers are very likely to decide this on their own, abandoning the project entirely. This is already happening for quite a few core open source projects that are used by multiple billion-dollar companies and deployed to billions of users.
A lot of the projects probably should be retired and rewritten in safer system languages. But rewriting all of the widely-used projects suffering from these issues would likely cost hundreds of millions of dollars.
The alternative is that maybe some of the billion-dollar companies start making lists of all the software they ship to billions of users, and hire some paid maintainers through the Linux or Apache Foundations.
that is a good outcome, because then the people dependent on such a project would find it plausible to pay a new set of maintainers.
Google submitting a patch does not address this issue. The main work for maintainers here is making the decision whether or not they want to disable this codec, whether or not Google submits a patch to do that is completely immaterial.
With Google finding these bugs, at least the user can be informed. In this instance, for example, the core problem is that the codec is in *active use*. FFmpeg makes the disingenuous argument that it's old and obscure, but omits the fact that it's still compiled in, meaning an attacker can craft a file, send it to you, and it still works.
A user (it could be a distro who packages ffmpeg) can use this information to turn off the codec that virtually no one uses today and make their distribution of ffmpeg more secure. Not having this information means they can't do that.
If ffmpeg doesn't have the resources to fix these bugs, at least let the public know so we can deal with it.
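As a sketch of what "turning off the codec" could look like for a distro, FFmpeg's build system lets packagers disable individual decoders at configure time. (The decoder name `sanm` for the LucasArts SMUSH format is my assumption here; verify it against the configure help before relying on it.)

```shell
# Hypothetical sketch: build ffmpeg without the SMUSH decoder.
# First confirm the decoder's actual name in your source tree:
#   ./configure --list-decoders | grep -i sanm
./configure --disable-decoder=sanm
make
```

A distro could carry a flag like this in its package build recipe until upstream decides whether to keep the codec enabled by default.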
Also, just maybe, they wouldn't have that many vulnerabilities filed against them if the project took security more seriously to begin with? It's not a good sign for the software when you get so many valid security reports and just ask them to withhold them.
A lot of these core pieces of infrastructure are maintained by one to three middle-aged engineers in their free time, for nothing. Meanwhile, billion dollar companies use the software everywhere, and often give nothing back except bug reports and occasional license violations.
I mean, I love "responsible disclosure." But the only result of billion dollar corporations drowning a couple of unpaid engineers in bug reports is that the engineers will walk away and leave the code 100% unmaintained.
And yeah, part of the problem here is that C-based data parsers and codecs are almost always horrendously insecure. We could rewrite it all in Rust (and I have in fact rewritten one obscure codec in Rust) or WUFFS. But again, who's going to pay for that?
Then point to the "PoC + Patch or GTFO" sign when reports come in. If you use a library with a "NO WARRANTY" license clause in an application where you're responsible for failures, it's on you to fix or mitigate the issues, not on the library authors.
As to why they bother finding these bugs... it's because that's how Google does things. You don't wait for something to break or be exploited; you load your compiler up with sanitizers and go hunting for bugs.
Yeah this one is kind of trivial, but if the bug-finding infrastructure is already set up it would be even more stupid if Google just sat on it.
That is, you'd rather a world where Google either does know about a vulnerability and refuses to tell anyone, or just doesn't look for them at all, over a world where google looks for them and lets people know they exist, but doesn't submit their own fix for it.
Why do you want that world? Why do you want corporations to reduce the already meager amounts of work and resources they put into open source software even further?
How many people are actively looking for bugs? Google, and then the other guys that don't share their findings, but perhaps sell them to the highest bidder. Seems like Google is doing some good work by just picking big, popular open source projects and seeing if they have bugs, even if they don't intend to fix them. And I doubt Google was actually using the Lucas Arts video format their latest findings were about.
However, in my mind the discussion whether Google should be developing FFmpeg (beyond the codec support mentioned elsewhere in the thread) or other OSS projects is completely separate from whether they should be finding bugs in them. I believe most everyone would agree they should. They are helping OSS in other ways though, e.g. https://itsfoss.gitlab.io/post/google-sponsors-1-million-to-... .
This is called fuzzing and it has been standard practice for over a decade. Nobody has had any problem with it until FFmpeg decided they didn’t like that AI filed a report against them and applied the (again, mostly standard at this point) disclosure deadline. FWIW, nobody would have likely cared except they went on their Twitter to complain, so now everyone has an opinion on it.
The Copenhagen interpretation of security bugs: if you don’t look for it, it doesn’t exist and is not a problem.
That is not an accurate description? Project Zero was using a 90 day disclosure policy from the start, so for over a decade.
What changed[0] in 2025 is that they disclose earlier than 90 days that there is an issue, but not what the issue is. And actually, from [1] it does not look like that trial policy was applied to ffmpeg.
> To me it's okay to “demand” from a for-profit company (e.g. Google) to fix an issue fast, because they have resources. But to “demand” that an OSS project fix something within a certain (possibly tight) timeframe... well, I'm sure you know better than me that that's not how volunteering works
You clearly know that no actual demands or even requests for a fix were made, hence the scare quotes. But given you know it, why call it a "demand"?
[0] https://googleprojectzero.blogspot.com/2025/07/reporting-tra..., discussed at https://news.ycombinator.com/item?id=44724287
[1] https://googleprojectzero.blogspot.com/p/reporting-transpare...
It's similar to someone cooking a meal for you, and you go on and complain about every little thing that could have been better instead of at least saying "thank you"!
Here, Google is doing the responsible work of reporting vulnerabilities. But any company productizing ffmpeg usage (Google included) should sponsor a security team to resolve issues in high profile projects like these too.
Sure, the problem is that Google is a behemoth and their internal org structure does not cater to this scenario, but this is what the complaint is about: make your internal teams do the right thing by both reporting, but also helping fix the issue with hands-on work. Who'd argue against halving their vulnerability finding budget and using the other half to fund a security team that fixes highest priority vulnerabilities instead?
My understanding is that the bug in question was fixed about 100 times faster than Project Zero's standard disclosure timeline. I don't know what vulnerability report your scenario is referring to, but it certainly is not this one.
> and name-calling the maintainers
Except Google did not "name-call the maintainers" or anything even remotely resembling that. You just made it up, just like GP made up the "demands". It's pretty telling that all these supposed misdeeds are just total fabrications.
So how long should all bug reporters wait before filing public bugs against open source projects? What about closed source projects? Anyone who works in software knows that to ship software is to always have way more things to do than time to do them in. By this logic, we should never make bug reports public until the software maintainers (whether OSS, Apple or Microsoft) have a fix ready. Instead of "with enough eyeballs, all bugs are shallow" the new policy going forward I guess will be "with enough blindfolds, all bugs are low priority".
It's a call not to stop reporting, but to equally invest in fixing these.
In the end, Google does submit patches and code to ffmpeg, they also buy consulting from the ffmpeg maintainers. And here they did some security testing and filed a detailed and useful bug report. But because they didn't file a patch with the bug report, we're dragging them through the mud. And for what? When another corporation looks at what Google does do, and what the response this bug report has gotten them, which do you think is the most likely lesson learned?
1) "We should invest equally in reporting and patching bugs in our open source dependencies"
2) "We should shut the hell up and shouldn't tell anyone else about bugs and vulnerabilities we discover, because even if you regularly contribute patches and money to the project, that won't be good enough. Our name and reputation will get dragged for having the audacity to file a detailed bug report without also filing a patch."
All I am saying is that you should be as mindful to open source maintainers as you are to the people at companies.
It’s almost like bitching about the “free labor” open source projects are getting from their users, especially when that labor is of good quality and comes from a user that is actively contributing both code and money to the project, is a losing strategy for open source fans and maintainers.
> All I am saying is that you should be as mindful to open source maintainers as you are to the people at companies.
And all I’m saying is there is nothing “un-mindful” about reporting real bugs to an open source project, whether that report is public or not. And especially when that report is well crafted and actionable. If this report were for something that wasn’t a bug, if it were a low-quality “foo is broke, plz to fix” report with no actionable information, or if the report actually came with demands for responses and commitment timelines, then it would be a different matter. But ffmpeg runs a public bug tracker. To say then that making public bug reports is somehow disrespectful of the maintainers is ridiculous.
No one is forcing anyone to do anything. Ffmpeg does not have to fix this bug, btw. If they don't have time, just let the disclosure happen.
Also, in this case, the simple fix is to turn off the codec. They just didn't want to do that because they want to have all codecs enabled. This is a conscious choice and no one is forcing them to do that. If the CVE had been disclosed without ffmpeg fixing the issue, at least downstream users could have turned off the codec themselves.
Just to be clear here: Google's responsibility here is to the public (aka the users of ffmpeg), not the project.
Also, let's go back to your "cooked a meal" analogy. If I cook a meal for you, for free, that's nice. But that doesn't entitle me to be careless with hygiene and give you salmonella poisoning because I didn't wash my hands. Doing things for free doesn't absolve me of responsibility.
Expecting a reporter to fix your security vulnerabilities for you is entitlement.
If your reputation is harmed by your vulnerable software, then fix the bugs. They didn’t create the hazard, they discovered it. You created it, and acting like you’re entitled to the free labor of those that gave you the heads up is insane, and trying to extort them for their labor is even worse.
Google did nothing like this.
If people infer that a hypothetical project doesn't care about security because they didn't fix anything, then they're right. It's not google's fault they're factually bad at security. Making someone look bad is not always a bad action.
Drawing attention to that decision by publicly reporting a bug is not a demand for what the decision will be. I could imagine malicious attention-getting but a bug report isn't it.
What you do with the notice as a dev is up to you, but responsible ones would fix it without throwing a tantrum.
Devs need to stop thinking of themselves as the main character and things get a lot more reasonable.
These two terms are not interchangeable.
Most vulnerabilities never have CVEs issued.
This opens up transparency of ffmpeg’s security posture, giving others the chance to fix it themselves, isolate where it’s run or build on entirely new foundations.
All this assuming the reports are in fact pointing to true security issues. Not talking about AI-slop reports.
codependency is when someone accepts too much responsibility, in particular responsibility for someone else or other things out of their control.
the answer is to have a "healthy neutrality".
"obscurity isn't security" is true enough, as far as it goes, but is just not that far.
And "put the bugs that won't be fixed soon on a billboard" is worse.
The super naive approach is ignoring that and thinking that "fix the bugs" is a thing that exists.
Sure, in maybe 1 special lucky case you might be empowered. And in 99 other cases you are subject to a bug without being in the remotest control over it, since it's buried away within something you use; you don't even have the option not to use the surface service or app, let alone control its subcomponents.
Google does contribute some patches for codecs they actually consume e.g. https://github.com/FFmpeg/FFmpeg/commit/b1febda061955c6f4bfb..., the bug in question was just an example of one the bug finding tool found that they didn't consume - which leads to this conversation.
(To put this in context: I assume that on average a published security vulnerability is known about to at least some malicious actors before it's published. If it's published, it's me finding out about it, not the bad actors suddenly getting a new tool)
remember we're not talking about keeping a bug secret, we're talking about using a power tool to generate a fire hose of bugs and only doing that, not fixing them
This is true. Congratulations. Man we are all so smart for getting that right. How could anyone get something so obvious and simple wrong?
What you leave out is "in a vacuum" and "all else being equal".
We are not in a vacuum and all else is not equal, and there are more than those 2 factors alone that interact.
As regards whether it's a bad idea to publicly document security concerns you find regardless of whether you plan on fixing them, it often depends on whether you ask the product manager what they want for their product, or the security-minded folks in general what they want for every product :).
That just means the script kiddies will have more trouble, while scarier actors like foreign intelligence agencies will have free rein.
The same question applies if they have time to fix it in six months, since that presumably still gives attackers a large window of time.
In this case the bug was so obscure it’s kind of silly.
And after all that, they just drop an issue, instead of spending a little extra time on producing a patch.
If not pumping out patches allows them to get more security issues fixed, that’s fine!
Making open source code more secure and at the same time less prevalent seems like a net loss for society. And if those researchers could spare some time to write patches for open source projects, that might benefit society more than dropping disclosure deadlines on volunteers.
High quality bug reports like this are very good for open source projects.
(The argument also seems backwards to me: Google appears to use a lot of not-inexpensive human talent to produce high quality reports to projects, instead of dumping an ASan log and calling it a day. If all they cared about was shoveling labor onto OSS maintainers, they could make things a lot easier for themselves than they currently do!)
(But also, while this is great, it doesn’t make an expectation of a patch with a security report reasonable! Most security reports don’t come with patches.)
There are many groups searching for security vulnerabilities in popular open source software who deliberately do not disclose them. They do this to save them for their own use or even to sell them to bad actors.
It’s starting to feel silly to demonize Google for doing security research at this point.
Aren't most people here demonizing Google for dedicating the resources to find bugs, but not to fix them?
It's not some lone report of an important bug; it's AI spam that surfaces security issues faster than they have resources to fix them.
whether or not AI found it, clearly a human refined it and produced a very high quality bug report. There's no AI slop here. No spam.
At least, if this information is public, someone can act on it and sandbox ffmpeg for their use case, if they think it's worth it.
I personally prefer to have this information be accessible to all users.
For one, it lets people understand where ffmpeg is at so they can treat it more carefully (e.g. run it in a sandbox).
Ffmpeg is also open source. After public disclosure, distros can choose to turn off said codec downstream to not expose this attack vector. There are a lot of things users can do to protect themselves but they need to be aware of the problem first.
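To make "run it in a sandbox" concrete: one cheap downstream mitigation while a fix is pending is to invoke ffmpeg as a subprocess under OS resource limits. This is a rough sketch of mine, not a real sandbox (no seccomp filters or namespaces), and the function name `run_confined` and the specific limit values are illustrative assumptions:

```python
import resource
import subprocess

def run_confined(argv, timeout_s=30, mem_bytes=512 * 1024 * 1024):
    """Run a command with address-space/CPU caps plus a wall-clock timeout.
    A crude confinement layer, not real isolation (POSIX only)."""
    def limit():
        # Applied in the child process just before exec.
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
        resource.setrlimit(resource.RLIMIT_CPU, (timeout_s, timeout_s))
    return subprocess.run(
        argv,
        preexec_fn=limit,      # set the rlimits in the child
        timeout=timeout_s,     # wall-clock cap enforced by the parent
        capture_output=True,
    )

# Hypothetical usage on untrusted input (paths are made up):
# result = run_confined(["ffmpeg", "-i", "untrusted.avi", "out.mp4"])
```

A real deployment would layer on stronger isolation (seccomp, bubblewrap, or a container), but even crude limits like these turn "memory corruption in a decoder" into a contained subprocess crash rather than a compromise of the calling application.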
I think this is the heart of the issue and it boils off all of the unimportant details.
If it's a real, serious issue, you want to know about it and you want to fix it. Regardless of who reports it.
If it's a real, but unimportant issue, you probably at least want to track it, but aren't worried about disclosure. Regardless of who reports it.
If it's invalid, or AI slop, you probably just want to close/ignore it. Regardless of who reports it.
It seems entirely irrelevant who is reporting these issues. As a software project, ultimately you make the judgment call about what bugs you fix and what ones you don't.
[0] More or less. It seems the actual language is shied away from. Is there a meaningful difference?
Most vulnerabilities never get CVEs even when they’re patched.
The way many (perhaps most) people think of CVEs is badly broken. The CVE system is deeply unreliable, resulting in CVEs being issued for things that are neither bugs nor vulnerabilities while at the same time most things that probably should have CVEs assigned do not have them. Not to even mention the ridiculous mess that is CVSS.
I’m just ranting though. You know all this, almost certainly much better than me.
In no reasonable reading of the situation can I see how anything Google has done here has made things worse:
1) Beforehand, the bug existed, but was either known by no one, or known only by people exploiting it. The maintainers weren't actively looking at or for this particular bug, and so it may have continued to go undiscovered for another 20 years.
2) Then Google was the only one that knew about it (modulo exploiters) and was the only party that could take any steps to protect itself. The maintainers still don't know, so everyone else would remain unprotected until they discover it independently.
3) Now everyone knows about the issue, and are now informed to take whatever actions they deem appropriate to protect themselves. The maintainers know and can choose (or not) to patch the issue, remove the codec or any number of other steps including deciding it's too low priority in their list of todos and advising concerned people to disable/compile it out if they are worried.
#3 is objectively the better situation for everyone except people who would exploit the issue. Would it be even better if Google made a patch and submitted that too? Sure it would. But that doesn't make what they have done worthless or harmful. And more than that, there's nothing that says they can't or won't do that. Submitting a bug report and submitting a fix don't need to happen at the same time.
It's hard enough convincing corporations to spend any resources at all on contributing to upstream. Dragging them through the mud for not submitting patches in addition to any bug reports they file is, in my estimation, less likely to get you more patches, and more likely to just get you fewer resources spent on looking for bugs in the first place.
I don't agree the following framing is accurate, but I can mention it because you've already said the important part (about how this issue exists, and merely knowing about it doesn't create required work). By announcing it and registering a CVE, Google is starting the clock. By some metrics it was already running, but the reputational clock clearly was not. This does change priorities and requires an urgent context switch. Neither is a free action, especially not within FOSS.
As someone who believes everyone, individuals and groups alike, has a responsibility to contribute fairly, I would frame it like this: Google's behavior gives the appearance of weaponizing their cost center externally, given this is something Google could easily fix; instead they shirked that responsibility onto unfunded volunteers.
So thank you for saying the important thing too! :)
The problem isn't Google reporting vulnerabilities. It's Google using AI to find obscure bugs that affect 2 people on the planet, then making a CVE out of it, without putting any effort into fixing it themselves or funding the project. What are the ffmpeg maintainers supposed to do about this? It's a complete waste of everybody's time.
> The latest episode was sparked after a Google AI agent found an especially obscure bug in FFmpeg. How obscure? This “medium impact issue in ffmpeg,” which the FFmpeg developers did patch, is “an issue with decoding LucasArts Smush codec, specifically the first 10-20 frames of Rebel Assault 2, a game from 1995.”
Wrong. The original files only affect 2 people. A malicious file could be anywhere.
Do you remember when certain sequences of letters could crash iphones? The solution was not "only two people are likely to ever type that, minimum priority". Because people started spreading it on purpose.
> The latest episode was sparked after a Google AI agent found an especially obscure bug in FFmpeg. How obscure? This “medium impact issue in ffmpeg,” which the FFmpeg developers did patch, is “an issue with decoding LucasArts Smush codec, specifically the first 10-20 frames of Rebel Assault 2, a game from 1995.”
This doesn't feel like a medium-severity bug, and I think "Perhaps reconsider the severity" is a polite reading. I get that it's a bug either way, but this leaves me with a vague feeling of the ffmpeg maintainer's time being abused.
There’s absolutely no reason to assume that it does not lead to RCE, and certainly no reason whatsoever to invest significant time to prove that one way or the other unless you make a living selling exploits.
"Given enough eyeballs, every bug is shallow" right? Well, Google just contributed some eyeballs, and now a bug has been made shallow. So what's the actual problem here? If some retro game enthusiast had filed the same bug report, would that be "abusing" the maintainer's time? I would think not, but then we're saying that a bug report can be "abusive" simply by virtue of who submits it. And I'm really not sure "don't assign employees to research bugs in your open source dependencies, and if you do, certainly don't submit bug reports on what you find because that's abusive" is the message we want to be sending to corporations that are using these projects.
Typically disclosures happen after a fix exists.
To my original comment, the underlying problem here IMO is wanting to have it both ways: you can adhere to common notions of security for reputational reasons, or you can exercise your right as a maintainer to say “I don’t care,” but you can’t do both.
Maintaining a reputation might be enough reward for you, but not everyone is happy to work for free with a billion-dollar corporation breathing down their necks. It's puzzling to me why people keep defending their free lunch.
P.S. I'm an open source maintainer myself, and I used to think, "oh, OSS developers should just stop whining and fix stuff." Fast forward a few years, and now I'm buried under false-positive "reports" and overwhelmed by non-coding work (deleting issue spam, triage, etc.)
P.P.S. What's worse, when your library is a security component the pressure’s even higher: one misplaced line of code could break thousands of apps (we literally have a million downloads at nuget [1] )
If you (Amazon, in this case) can put it that way, it seems like throwing them 10 or 20 thousand a year would simply be a good insurance policy! Any benefits you might get in goodwill and influence are a bonus.
But on a more serious note, it is crazy that between Google and Amazon they cannot fund them with 50k each per year, so that they can pay people to work on this.
Especially Google, with YouTube, they can very easily pay them more. 100k~200k easily.
And they get the tool + community good will, all for a rounding error on any part of their budgets...
That is why I said easily 100~200k. It will be a rounding error for them.
It is actually crazy that Google is not already hiring the main dev to work on ffmpeg, with all the use they give it on YouTube.
I also wonder if it is maybe used by Netflix.
They do and it is.
https://netflixtechblog.com/the-making-of-ves-the-cosmos-mic...
https://netflixtechblog.com/for-your-eyes-only-improving-net...
So yeah it'd be nice if they put some real money (to ffmpeg - wouldn't be their coffee allowance to any of them) into it.
Almost everything that touches video at some point uses it.
A rising tide lifts all yachts. If he had written the check, my instinct tells me, he would have enough for two yachts. Goodwill is an actual line item on 10-Qs and 10-Ks. I don't know why companies think it's worth ignoring.
Boltzmann brain-wise it clearly doesn't make sense to wait that long.
I, and I think most security researchers do too, believe that it would be incredibly negligent for someone who has discovered a security vulnerability to allow it to go unfixed indefinitely without even disclosing its existence. Certainly, ffmpeg developers do not owe security to their users, but security researchers consider that they have a duty to disclose them, even if they go unfixed (and I think most people would prefer to know an unfixed vulnerability exists than to get hit by a 0-day attack). There's gotta be a point where you disclose a vulnerability, the deadline can never be indefinite, otherwise you're just very likely allowing 0-day attacks to occur (in fact, I would think that if this whole thing never happened and we instead got headlines in a year saying "GOOGLE SAT ON CRITICAL VULNERABILITY INVOLVED IN MASSIVE HACK" people would consider what Google did to be far worse).
To be clear, I do in fact think it would be very much best if Google were to use a few millionths of a percent of their revenue to fund ffmpeg, or at least make patches for vulnerabilities. But regardless of how much you criticize the lack of patches accompanying vulnerability reports, I would find it much worse if Google were to instead not report or disclose the vulnerability at all, even if they did so at the request of developers saying they lacked resources to fix vulnerabilities.
Because security researchers want to move on from one thing to another. And nobody said indefinitely. It's about a path that works for OSS projects.
It's also not about security through obscurity. You are LITERALLY telling the world to check this vuln in this software. Oooh, too bad the devs didn't fix it. Anybody in the sec biz would be following Google's security research.
Putting you in a spotlight and then claiming it doesn't make any difference is silly.
That's capitalism, they need to quit their whining or move to North Korea. /s The whole point is to maximize value to the shareholders, and the more work they can shove onto unpaid volunteers, the more money they can shove into stock buybacks or dividends.
The system is broken. IMHO, there ought to be a law mandating reasonable payments from multi-billion-dollar companies to open source software maintainers.
It's not "whining" to refuse to do unpaid labor for the benefit of someone else - especially when the someone else is as well-resourced as Google.
I feel that this is mostly a kneejerk reaction to AI and Google in general, with people coming up with arguments to support their reaction after already forming an opinion.
Let's just say they're being asshole-ish, which is a problem for volunteer projects just as much as non-volunteer ones.
The ffmpeg twitter sucks.
But its developers do offer paid consulting as ffmpeg maintainers, which Google does pay for.
The only reasonable way is for Google and other big corps to either sponsor members of the existing team or donate money to the project. And to make it a long-term commitment, not a one-shot for publicity.
enterprise must pay.
Being able to view and read the code is what makes it open source.
The latter is about money.
These two requirements combined make it impossible to distribute open source software with the provision that it is only free for individuals.
Thus, as Mark Atwood, an open source policy expert, pointed out on Twitter, he had to keep telling Amazon not to do things that would mess up FFmpeg; he had to keep explaining to his bosses that “They are not a vendor, there is no NDA, we have no leverage, your VP has refused to help fund them, and they could kill three major product lines tomorrow with an email. So, stop, and listen to me … ”
I agree with the headline here. If Google can pay someone to find bugs, they can pay someone to fix them. How many times have managers said "Don't come to me with problems, come with solutions"?
It might not make sense morally, but it makes total sense from a business perspective… if they are going to pay for the development, they are going to want to maintain control.
As it stands, they're just abusing someone's gift.
Like jerks.
In fact, we are probably just really lucky that some early programmers were kooky believers in the free software philosophy. Thank God for them. So much of what I do owes to the resulting ecosystem that was built back then.
I think Stallman's ideological "allowing users to run, modify, and share the software without restrictions" stance is good, but I think for me at least that should apply to "users" as human persons, and doesn't necessarily apply to "corporate personhood" and other non-human "users". I don't see a good way to make that distinction work in practice, but I think it's something that is going to become more and more problematic as time goes on, and LLM slop contributions and bug reports somehow feed into this too.
I was watching MongoDB's and Redis Labs' experiments with non-OSI-approved licences clearly targeted at AWS "abusing" those projects, but sadly neither of those cases seemed to work out in the long term. Also sadly, I do not have any suggestions of how to help...
> We realized it was time to dump the confrontational attitude that has been associated with "free software" in the past and sell the idea strictly on the same pragmatic, business-case grounds that motivated Netscape.
https://web.archive.org/web/20021001164015/http://www.openso...
This is the same tortured logic as Citizens United and Santa Clara Co vs Southern Pacific Railroad, but applied to FS freedoms instead of corporate personhood and the 1st Amendment.
I like the FS' freedoms, but I favor economic justice more, and existing FS licenses don't support that well in the 21st c. This is why we get articles like this every month about deep-pocketed corporate free riders.
Open source software is critical infrastructure at this point. Maintainers should be helped out, at least by their largest users. If free riding continues, and maintainers' burden becomes too large, supply chain attacks are bound to happen.
It's an important conversation to have.
I remember a particular developer...I'll be honest, I remember his name, but I remember him being a pretty controversial figure here, so I'll pretend not to know them to avoid reflexive downvotes...but this developer made a particular argument that I always felt was compelling.
> If you do open source, you’re my hero and I support you. If you’re a corporation, let’s talk business.
The developer meant this in the context of preferring the GPL as a license, but the problem with the GPL is that it still treats all comers equally. It's very possible for a corporation to fork a GPL project and simply crush the original project by throwing warm bodies at their projects.
Such a project no longer represents the interests of the free software community as a whole, but its maintainers specifically. I also think that this can apply to projects that are alternatives to popular GPL projects, except for the license being permissive.
We need to revisit the four freedoms, because I no longer think they are fit for purpose.
The only reason for needing control would be if it was part of their secret sauce and at that point they can fork it and fuck off.
These companies should be heavily shamed for leeching off the goodwill of the OSS community.
The entire point here is to pay for the fixes/features you keep demanding, else the project is just going to do as it desires and ignore you.
More and more OSS projects are getting to this point as large enterprises (especially in the SaaS/PaaS spheres) continue to take advantage of those projects and treat them like unpaid workers.
It's a dumb reason, especially when there are CVE bugs like this one, but that's how executives think.
So the premise here is that AWS should waste their own money maintaining an internal fork in order to try to make their competitors do the same thing? But then Google or Intel or someone just fixes it a bit later and wisely upstreams it so they can pay less than you by not maintaining an internal fork. Meanwhile you're still paying the money even though the public version has the fix because now you either need to keep maintaining your incompatible fork or pay again to switch back off of it. So what you've done is buy yourself a competitive disadvantage.
> that's how executives think.
That's how cargo cult executives think.
Just because you've seen someone else doing something doesn't mean you should do it. They might not be smarter than you.
The first is that you have a shared finite resource, the classic example being a field for grazing which can only support so many cattle. Everyone then has the incentive to graze their cattle there and over-graze the field until it's a barren cloud of dust because you might as well get what you can before it's gone. But that doesn't apply to software because it's not a finite resource. "He who lights his taper at mine, receives light without darkening me."
The second is that you're trying to produce an infinite resource, and then everybody wants somebody else to do it. This is the one that nominally applies to software, but only if you weren't already doing it for yourself! If you can justify the effort based only on your own usage then you don't lose anything by letting everyone else use it, and moreover you have something to gain, both because it builds goodwill and encourages reciprocity, and because most software has a network effect so you're better off if other people are using the same version you are. It also makes it so the effort you have to justify is only making some incremental improvement(s) to existing code instead of having to start from scratch or perpetually pay the ongoing maintenance costs of a private fork.
This is especially true if your company's business involves interacting with anything that even vaguely resembles a consolidated market, e.g. if your business is selling or leasing any kind of hardware. Because then you're in "Commoditize Your Complement" territory where you want the software to be a zero-margin fungible commodity instead of a consolidated market and you'd otherwise have a proprietary software company like Microsoft or Oracle extracting fees from you or competing with your hardware offering for the customer's finite total spend.
But given its license, they’re going to have to reveal those changes anyways (since many of the most common codecs trigger the GPL over LGPL clause of the license) or rewrite a significant chunk of the library.
There are many reasons, often good ones, not to pay money for an open source project but instead fund your own projects, from a company's perspective.
I don’t see how that at all relates to the point, but sure, you got me.
It's enough if one or two main contributors assert their copyrights. Their contributions are so tangled with everything else after years of development that it can't meaningfully be separated away.
The "kill it with an email" probably means that whoever said this is afraid that some usecase there wouldn't stand up to an audit by the usual patent troll mothercluckers. The patents surrounding video are so complex, old and plentiful that I'd assume full compliance is outright impossible.
The API manual for it is nearly 4000 pages and it can do insane stuff[1].
I had to use it at last job(TM), it's not terrible API wise.
[1] https://docs.aws.amazon.com/pdfs/mediaconvert/latest/apirefe... CAUTION: big PDF.
That's a pretty good indicator it's likely just ffmpeg in an AWS Hoodie/Trenchcoat.
I interpret this as meaning there was an implied "if you screw this up" at the end of "they could kill three major product lines with an email."
In the now-deleted tweet Theo trashed VLC codecs, to which ffmpeg replied, basically, "send patches, but you wouldn't be able to". The reply to which was
--- start quote ---
https://x.com/theo/status/1952441894023389357
You clearly have no idea how much of my history was in ffmpeg. I built a ton of early twitch infra on top of yall.
--- end quote ---
This culminated in Theo offering a 20k bounty to ffmpeg if they removed the people running the ffmpeg twitter account, which prompted a lot of heated discussion.
So when Google Project Zero posted their bug... ffmpeg went understandably ballistic.
The company that I work at makes sure anything that uses a third-party library, whether in internal tools, shipped products, or hosted products, goes through legal review. And you'd better comply with whatever the legal team asks you to do. Unless you and everyone around you are as dumb as a potato, you are not going to do things that blatantly violate licenses, like shipping a binary with modified but undisclosed GPL source code. And you can be sure that (1) it's hard to use anything GPL or LGPL in the first place, and (2) even if you are allowed to, someone will tell you to be extra careful and do exactly what you are told to (or not to).
And as long as Amazon is complying with ffmpeg's LGPL license, ffmpeg can't just stop licensing existing code via an email. Of course, unless there is some secret deal, but again, in that case, someone in the giant corporation will make sure you follow what's in the contract.
Basically, at companies like Amazon where there are functional legal teams, the chance of someone "screwing up" is very small.
For instance, here's one for the Amazon Music apps, which includes an FFMpeg license: https://www.amazon.com/gp/help/customer/display.html?nodeId=...
There's no penalty clause, there's no recovery clause. If you don't comply with the licence conditions then you don't have a licence. If you don't have a licence then you can't use the program, any version of the program. And if your products depend on that program then you lose your products.
The theoretical email would be a notification that they had breached the licence and could no longer use the software. The obvious implication being that AWS was wanting to do something that went contrary to the restrictions in the GPL, and he was trying to convince them not to.
I'm not sure what you think you mean when you say "running AIs indiscriminately". It's quite expensive to run AI this way, so it needs to be done with very careful consideration.
The problem is, the issue in the article is explicitly named as "CVE slop", so if the patch is of the same quality, it might require quite some work anyway.
But it's also a bug report about the decoder for "SANM ANIM v0", a format so obscure that almost all the search results are the bug report itself. Possibly a format exclusive to mid-1990s LucasArts games [1].
Pretty crazy that ffmpeg supports the codec in the first place, IMHO.
I can understand volunteers not wanting to sink time into maintaining a codec to play a video format that hasn't been used since the Clinton administration. gstreamer divides their plugins into 'good', 'bad' and 'ugly' to give them somewhere to stash unmaintained codecs.
[1] https://web.archive.org/web/20250419105551/https://wiki.mult...
The format being obscure and having no real usage doesn't help when it's the attackers creating the files. The obscure formats are exposing just as much attack surface as the common ones.
> Pretty crazy that ffmpeg supports the codec in the first place, IMHO.
Yes.
Edit: And also, how is anyone supposed to know they should compile the codec out unless someone makes a bug report and makes it public in the first place?
Is that somehow _less_ of a terrible way to think than "someone who's contributed their time as a volunteer to an open source software project that we have come to rely on, now has some sort of an obligation to drop everything and do more unpaid work for a trillion dollar company"?
> it really needs to be fixed in the up stream
Lots of people love using "Free Software" that they didn't have to write as essential parts of their business.
Way too many of them seem to blink right when they get to this bit of the licence they got it with:
SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
(That's directly from Section 15, "NO WARRANTY", of https://code.ffmpeg.org/FFmpeg/FFmpeg/src/branch/release/4.0... )
If you could highlight the relevant part of the bug report that demanded the developers "drop everything" and do "unpaid work for a trillion dollar company", that would be great because I'm having trouble finding it. I see "hey, we found this bug, we found where we think the issue is in the code and here's a minimal reproduction. Also FYI we've sent you this bug report privately, but we will also be filing a public bug report after 90 days." And no, I don't think having a policy of doing a private bug report followed by a public report some time later qualifies as a demand. They could have just made a public report from the get go. They could also have made a private report and then surprised them with a public bug report some arbitrary amount of time later. Giving someone a private heads up before filing a public bug report is a courtesy, not a demand.
And it's really funny to complain about Google expecting "unpaid work for a trillion dollar company" when the maintainers proudly proclaim that the likes of Google ARE paying them for consulting work on ffmpeg[1][2][3].
[1]: https://fflabs.eu [2]: https://fflabs.eu/about/ [3]: https://ffmpeg.org/consulting.html
1) Fix it yourself
2) Sit on it silently until the maintainers finally get some time to fix it
That seems crazy to me. For one, not everyone who discovers a bug can fix it themselves. But also a patch doesn't fix it until it's merged. If filing a public bug report is expecting the maintainers to "drop everything and do free labor", then certainly dropping an unexpected PR with new code that makes heretofore unseen changes to a claimed security vulnerability must surely be a much stronger demand that the maintainers "drop everything" and do the "free labor" of validating the bug, validating the patch, merging the patch, etc etc etc. So if the maintainers don't have time to patch a bug from a highly detailed bug report, they probably don't have time to review an unexpected patch for the same. So then what? Do people sit on that bug silently until someone finally gets around to having the time to review the PR? Or are they allowed to go public with the PR, even though that's far more clearly a "demand to drop everything and come fix the issue NOW"?
I for one am quite happy the guy who found the XZ backdoor went public before a fix was in place. And if tomorrow someone discovers that all Debian 13 releases have a vulnerable SSH installation that allows root logins with the password `12345`, I frankly don't give a damn how overworked the SSH or Debian maintainers are, I want them to go public with that information too so the rest of us can shut off our Debian servers.
They can, but there's not an obvious reason why they should. If anything, public disclosure timelines for commercial closed source projects should be much, much longer than for contributor-driven projects, because once a bug is public ANYONE can fix it in the contributor-driven project, whereas for a commercial project, you're entirely at the mercy of the commercial entity's timelines.
> Also, if Google has the funds to pay for bug finding, they also have the funds for bug fixing the community projects they depend on.
They do. And they do. They literally hire the ffmpeg maintainers via the maintainers' consulting business (fflabs.eu) and they routinely contribute code to the ffmpeg project.
Of course there are obvious reasons: corporations have the resources and incentives to fix them promptly once threatened with disclosure. Corporations don't respond well otherwise. None of these apply to volunteer projects.
> They literally hire the ffmpeg maintainers via the maintainers' consulting business (fflabs.eu) and they routinely contribute code to the ffmpeg project.
Great, then they should loop in the people they're paying on any notification of a vulnerability.
Of course, if this has truly been the case then nobody would have heard of this debacle.
How so? Volunteer projects have maintainers assigned to the project writing code. The "resources" to fix a bug promptly are simply choosing to allocate your developer resources to fixing the bug. Of course, volunteers might not want to do that, but then again, a company might not want to allocate their developers to fixing a bug either. But in either case the solution is to prioritize spending developer hours on the bug instead of on some other aspect of your project. In fact, volunteer driven projects have one huge resource that corporations don't, a theoretically infinite supply of developers to work on the project. Anyone with an interest can pick up the task of fixing the bug. That's the promise of open source right? Many eyes making all bugs shallow.
As for incentives, apparently both corporations and volunteer projects are "incentivized" to preserve their reputation. If volunteer projects weren't, we wouldn't be having this insane discussion where some people are claiming filing a bug report is tantamount to blackmail.
The only difference between the volunteer project and the corporation is even the head of a volunteer project can't literally force someone to work on an issue under the threat of being fired. I guess technically they could threaten to expel them from the project and I'm sure some bigger projects could also deny funding from their donation pool to a developer that refuses to play ball, but obviously that's not quite the same as being fired from your day job.
> Great, then they should loop in the people they're paying on any notification of a vulnerability.
If only there was some generally agreed upon and standardized way of looping the right people in on notifications of a bug. Some sort of "bug report" that you could give a team. It could include things like what issue you think you've found, places in the code that you believe are the cause of the issue, possibly suggested remediations, maybe even a minimum test case so that you can easily reproduce and validate the bug. Even better if there were some sort email address[1] that you could send these sorts of reports to if you didn't necessarily want to make them public right away. Or maybe there could be a big public database you could submit the reports to where anyone could see things that need work and could pick up the work[2] even if the maintainers themselves didn't. That would be swell, I'm sure some smart person will figure out a system like that one day.
[1]: https://ffmpeg.org/security.html [2]: https://ffmpeg.org/bugreports.html
It may be the case that ffmpeg cannot reasonably support every format while maintaining the same level of security. In that case, it makes sense for distros to disable some formats by default. I still think it’s great that they’re supported by the ffmpeg project.
I agree there would probably need to be some unified guidance about which formats to enable.
I don't understand how anyone believes that behavior is acceptable.
If you're an unpaid volunteer? Yeah - nah. They can tell you "Sorry, I'm playing with my cat for the next 3 months, maybe I'll get to it after that?", or just "Fuck off, I don't care."
(I'm now playing out a revenge fantasy in my head where the ffmpeg team does nothing, and Facebook or Palantir or someone similar get _deeply_ hacked via the exploit Google published and theat starts the planets biggest ever pointless lawyers-chasing-the-deepest-pockets fight.)
In this particular case it’s hardly obvious which patch you should submit. You could fix this particular bug (and leave in place the horrible clunky codec that nobody ever uses) OR you could just submit a patch that puts it behind a compile flag. This is really a decision for the maintainers, and submitting the latter (much better!) patch would not save the maintainers any meaningful amount of time anyway.
You can say publicly that “there is an ABC class vulnerability in XYZ component” so that users are aware of the risk.
This also informs users that it’s not safe to use ffmpeg or software derived from it to open untrusted files, and perhaps most importantly releasing this tells the distro package maintainers to disable the particular codec when packaging.
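For what it's worth, that knob already exists in ffmpeg's build system; a sketch of what a distro build could do at configure time (the decoder name `sanm` is my guess for this codec; verify against the output of `./configure --list-decoders` for your tree):

```shell
# Compile out one obscure decoder (name assumed; check --list-decoders):
./configure --disable-decoder=sanm

# Or invert the default: disable everything, then allowlist only what
# the package actually needs.
./configure --disable-everything \
            --enable-decoder=h264 --enable-decoder=aac \
            --enable-demuxer=mov --enable-protocol=file
```

The allowlist approach is what security-sensitive downstreams tend to use, since new upstream codecs then stay off until someone consciously enables them.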
But no. Intentionally or not, there was a whole drama created around it [1], with folks being criticized [2] for saying exactly what you said above, because of their past (!) employers.
Instead of using the situation to highlight the need for more corporate funding for opensource projects in general, it became a public s**storm, with developers questioning their future contributions to projects. Shameful.
Judging from some online responses I think it's working too. I honestly don't see how ffmpeg's response is remotely acceptable.
Nobody who takes security even remotely seriously should decode untrusted media files outside of a sandboxed environment. Modern media formats are in themselves so complex one starts wondering if they're actually Turing complete, and in ffmpeg the attack surface is effectively infinitely large.
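On the sandboxing point, one cheap first layer (not a substitute for real seccomp/namespace isolation) is to push each decode into a resource-limited child process. A minimal sketch, assuming a POSIX system; the `run_sandboxed` helper and its limits are illustrative, not any particular project's API:

```python
import resource
import subprocess

def run_sandboxed(cmd, timeout_s=10, mem_bytes=512 * 1024 * 1024):
    """Run an untrusted decode step in a resource-limited child process.

    This is only coarse containment (address-space and CPU rlimits plus a
    wall-clock timeout); a serious sandbox would add seccomp filters,
    namespaces, and a read-only filesystem view on top.
    """
    def apply_limits():
        # Cap memory and CPU before the child touches attacker input.
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
        resource.setrlimit(resource.RLIMIT_CPU, (timeout_s, timeout_s))

    return subprocess.run(
        cmd,
        preexec_fn=apply_limits,  # runs in the child, after fork
        timeout=timeout_s,        # wall-clock backstop in the parent
        capture_output=True,
    )
```

A caller would invoke something like `run_sandboxed(["ffmpeg", "-i", untrusted_path, ...])` and treat any crash, non-zero exit, or timeout as a rejected file rather than an error to retry.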
The issue is CVE slop because it just doesn't matter if you consider the big picture.
Some example issues to illustrate my point:
https://issuetracker.google.com/issues/436511754 https://issuetracker.google.com/issues/445394503 https://issuetracker.google.com/issues/436510316 https://issuetracker.google.com/issues/433502298
This is software that is directly or indirectly run by millions of people on untrusted media files without sandboxing. It's not even that they don't care about security, it's that they're unaware that they should care. It should go without saying that they don't deserve to be hacked just because of that. Big companies doing tons of engineering work to add defense in depth for use cases on their own infrastructure (via sandboxing or disabling obsolete codecs) doesn't help those users. Finding and fixing the vulnerabilities does.
Again, Google has been doing this sort of thing for over a decade and has found untold thousands of vulnerabilities like this one. It is not at all clear to me that their doing so has been all that valuable.
Yeah. It's called YouTube... Why run fuzzers if you can get people to upload a few million random videos every day? ;-)
(I wonder if the BigSleep AI was trained on or prompted with YouTube error logs?)
https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...
> If you used the scan to pdf functionality of a [Xerox] like this a decade ago, your PDF likely had a JBIG2 stream in it.
That's not an obscure format, that's an old format. Meanwhile with ffmpeg we're talking about > decoding LucasArts Smush codec, specifically the first 10-20 frames of Rebel Assault 2, a game from 1995.
That's both old and obscure. Your point is still taken, but just to clarify that these are different situations: JBIG2 is included for legacy reasons. The LucasArts codec is included for... completion's sake(?)
If the format is old and obscure, and the implementation is broken, it shouldn't be on by default.
But something old AND obscure? It's nice that it is supported, but enabled by default? Fully with you there.
The ffmpeg project needs to get in touch and get them to assign copyright to the ffmpeg project, then delete that format/decoder from ffmpeg. Then go back to Google with an offer to license them a commercial version of ffmpeg with the fixed SANM ANIM v0 decoder, for the low, low price of only 0.0001% of YouTube's revenue every year. That'd likely make them the best-funded open source project ever, if they pulled it off.
Why?
- It makes continued downstream consumption easier; you don't have to rely on fragile secret patches.
- It gives back to projects that helped you to begin with; it's a simple form of paying it forward.
- It all around seems like the "ethical" and "correct" thing to do.
Unfortunately, in my experience, there's often a lot of barriers within companies to upstream. Reasons can be everything from compliance, processes, you name it... It's unfortunate.
I have a very distinct recollection of talks about hardware aspirations and upstreaming software fixes at a large company. The cultural response was jarring.
Certainly easier to give a good bug report and let upstream write the change, if they will.
A tech company I worked at once had a "sponsorship fund" to "sponsor causes" that employees wanted, it was actually good money but a drop in the bucket for a company. A lot of employees voted for sponsoring Vue.js, which is what we used. Eventually, after months of silence, legal/finance decided it was too much work.
But hey, it wasn't an exception. The local animal shelter was the second most voted, and legal/finance also couldn't figure out how to donate to it.
In the end the money went to nowhere.
The only "developer marketing" they were doing was sending me in my free time to do panels with other developers in local universities and conferences. Of course it was unpaid, but in return I used it to get another job.
Several software engineers left, several didn't sign it.
Yes, the company was very toxic apart from that. Yeah, I should name and shame, but I won't be doxxing myself.
Basically I got to do the work on company time&dime, but I couldn't give my employer credit, due to this kind of legal red tape.
I liked that team lead.
Sorry, that's ridiculous. Basically every major free software dependency of every major platform or application is maintained by people on the payroll of one or another tech giant (edit: or an entity like LF or Linaro funded by the giants, or in a smaller handful of cases a foundation like the FSF with reasonably deep industry funding). Some are better than others, sure. Most should probably be doing more. FFMpeg in particular is a project that hasn't had a lot of love from platform vendors (most of whom really don't care about software codecs or legacy formats anymore), and that's surely a sore point.
But to pretend that SteamOS is the only project working with upstreams is just laughable.
Valve does seem to be in the somewhat rare position of making a proper Linux distro work well with games. Google's Chromebooks don't contribute to the Linux ecosystem in the same holistic fashion, it seems.
Yes, I've also worked on OpenStack components at a university, and there I see Red Hat or IBM employees pushing up loads of changes. I don't know if I've ever seen a Walmart, UnitedHealth, Chase Bank, or Exxon Mobil (to pick some of the largest companies) email address push changes.
Ultimately there will always be some healthcare rationing. This happens in every country. For example, the UK NHS has death panels which decide that certain treatments won't be covered at all because they're not cost effective. Resources are limited and demand is effectively infinite. So the only real question is how we do the rationing.
UHG has been caught denying claims for things that employers already paid them to cover for their employees. You can't blame HR departments for that. You also can't blame HR for UHG upcoding/overbilling which eats into the limited resources of hospitals and the limited resource of taxpayer money ultimately resulting in fewer people able to get the healthcare they need just so that UHG can line their own pockets.
While HR departments do have their own issues, they're nowhere near the level of pure evil that UHG is.
The work to upstream our changes was included in the Statements of Work which Walmart signed off on, and our time spent on those efforts was billed to them.
The stats for those projects will have recorded my former employer as the direct source of those contributions - but they wouldn't have existed had it not been for Walmart.
Feel free to read lore.kernel.org, and sort out where the people contributing many patches actually work.
It's not entirely unlike if someone said "the only person I know writing books successfully is Brandon Sanderson." I do think "you ought to go check out your local book store" would be a valid response.
I don't even have to link the xkcd comic because everyone already knows which one goes here.
As a side note YC and tech startups themselves have become reality TV. Your goal should be Valve! You should be Gabe Newell! You don’t need to be famous! Just build something valuable and be patient
It used to be normal to build a business slowly over 20 years. Now everyone grabs for the venture capital, grows so fast they almost burst, and the venture capital inevitably ends in enshittification as companies are forced by shareholders to go against their business model and shit over their customers in order to generate exponential profit margins.
GabeN was also an MS developer back in the day and likely would have been well off regardless, but he didn't need to play the YC A-B-let's-shoehorn-AI bullshit games that are 100% de rigueur for all startups in 2025.
I was playing Fallout 3 on WINE well before Valve got involved with minimal tweaks or DIY effort.
Proton with Steam works flawlessly for most things including AAA games like RDR2 and it's great, but don't forget that WINE was out there making it work for a while
Yes, but Valve's involvement handled the "last 10% takes 90% of the time" part of WINE, and that's a great impact, IMHO.
Trivia: I remember WINE guys laughing at WMF cursor exploit, then finding the exploit works on WINE too and fix it in panic, and then bragging bug-for-bug compatibility with Windows. It was hilarious.
Also, WINE allowed Linux systems to be carriers for Windows USB flash drive viruses without being affected by them for many years.
Even if Wine was 90% of the way there technologically, the most important part is really that last 10%.
Windows support had gotten a boost from .NET going open source as well as other stuff MS began to relax about. It also helped that OpenGL was put to rest and there was a new graphics API that could reasonably emulate DirectX. I don't know much about the backstory of Mesa, but it's pretty cool tech that has been developing for a long time.
No, it didn't help giving them copies of licenses that have the usual liability clauses.
It seems a lot of corporate lawyers fundamentally misunderstand open source.
I believe in FOSS and can make an argument that lots of people on HN will accept, but many outside this context will not understand it or care.
I still had to upstream anonymously, though.
Like, here's the deal: The work is proper, legit opensource. You can use it for free, with no obligations.
But if your company makes a profit from it, you're expected to either donate money to the project or contribute code back in kind. (Eg security patches, bug fixes, or contribute your own opensource projects to the ecosystem, etc).
If you don't, all issues you raise and PRs get tagged with a special "moocher" status. They're automatically - by default - ignored or put in a low priority bin. If your employees attend any events, or join a community discord or anything like that, you get a "moocher" badge, so everyone can see that you're a parasite or you work for parasites. That's ok; opensource licenses explicitly allow parasites. I'm sure you're a nice person. But we don't really welcome parasites in our social spaces, or allow parasites to take up extra time from the developers.
I can usually stop short of providing code and file a bug that explains the replication case and how to fix it. I've taken patches and upstreamed them pseudonymously on my own time when the employer believed the GPL meant they couldn't own the modifications.
If after all that you still want to label me a moocher at cons, that's your choice.
That seems to imply that Apple employees are prohibited from being good internet citizens and e.g. helping people out with any kind of software issue. This presumably includes contributing to open source, although I'm sure they can get approval for that. But the fact they have to get approval for it is already a chilling effect.
I’ve been at several companies where upstreaming was encouraged for everything. The fewer internal forks we could maintain, the better.
What surprised me was how many obstacles we’d run into in some of the upstream projects. The amount of time we lost to trying to appease a few maintainers who were never happy with code unless they wrote it themselves was mind boggling.
For some projects you can basically forget about upstreaming anything other than an obvious and urgent bug fix because the barriers can be so high.
Any patch sent in also needs to be maintained into the future, and most of the time it's the maintainers that need to do that, not the people contributing the patch. Therefore any feature-patches (as opposed to simple bugfixes) are quite often refused, even if they add useful functionality, because the maintainers conclude they will not be able to maintain the functionality into the future (because no one on the maintaining team has experience in a certain field, for example).
The quality bar for a 'drive by patch' which is contributed without the promise of future support is ridiculously high and it has to be. Other peoples' code is always harder to maintain than your own so it has to make up for that in quality.
Will you still be making a fuss over it?
Maybe the developer intends to some day change the internal implementation, such that that particular boolean flag wouldn't make sense any more. Or they're considering taking out the option entirely, and thus simplifying the codebase by making it so it only works one way.
Maybe the developer just doesn't care about your use case. If I have a project that works fine for what I do with it, why should I also care about some other use case you have for my work? I'm not your employee. Your product doesn't put a roof over my head.
I don't want a job where I do free work for a bunch of companies who all make money off my work. That's a bad deal. It's a bad deal even if my code gets better as a result. I have 150 projects on github. I don't want to be punished if any of those projects become popular.
We can't go around punishing projects like ffmpeg or ruby on rails for the crime of being useful.
The pattern I have seen is that if you want to contribute a fix into a project, you are expected to "engage with the community", wear their badge, invest into the whole thing. I don't want to be in your community, I want to fix a bug in a thing I'm using and go on with my life.
Given the usual dynamics of online communities which are getting somehow increasingly more prone to dramas, toxicity, tribalism, and polarization, I just as increasingly want to have no part in them most of the time.
I think many projects would be better for having a lane for drive-by contributors who could work on fixing bugs that prevent their day-to-day from working without expectations of becoming full-time engaged. The project could set an expectation that "we will rewrite your patch as we see fit so we could integrate and maintain it, if we need/want to". I wouldn't care as long as the problem is taken care of in some way.
You could also just pay for it.
Most open source projects have way more patches contributed than the core developers can handle, so they tend to only accept those from the friendliest contributors or with the highest code/documentation quality.
Exposing a simple setting like you describe can sometimes go in without a fuss, or it can stall; that depends on a lot of factors. Like the other reply states: it could go against future plans. Or it could be difficult for the maintainer to see the ramifications of a simple-looking change. It sucks that it is that way (I have sent in a few patches for obscure CUPS bugs which have stayed in limbo, so I know the feeling ;-) ) but it is hardly surprising. From a project's point of view, drive-by patches very often cost more than they add, so to get something included you often need a very thorough writeup of why it is a good idea.
> I just as increasingly want to have no part in them most of the time.

If all the people you meet are assholes... ;-P Not to say you are an asshole, or at least not more than most people, but I have been in this situation myself more than once, and it really pays to stay (overly) polite and not let your annoyance about being brushed off slip through the mask. The text-only nature of these kinds of communications makes them very sensitive to misinterpretations and annoyances.
It would be nice if all you'd need for a patch to be included somewhere was for it to be useful. But alas, there's a certain amount of social engineering needed as well. And imho this has always been the case. If you feel it's getting increasingly hostile, that's probably your own developer burnout speaking (boy do I know that one :-P )
Then say you don't expect contributions at all. That's a fair game, I'm ok with it. I will then exercise my rights granted by your license in another way (forking and making my own fix most likely). My gripe is with projects that write prominently "PRs welcome", and then make enough red tape to signal that nah, not really.
That brings us full circle to the topic, because one important thing that motivates people to accept other people's changes to their code is being paid.
If you work in FOSS side projects as well as a proprietary day job, you know it: you accept changes at work that you wouldn't in those side projects.
In the first place, you write the code in ways you wouldn't, due to conventions you disagree with, in some crap language you wouldn't use voluntarily, and so it goes.
People working on their own FOSS project want everything their way, because that's one of the benefits of working on your own FOSS project.
True. In my case I literally had to fight for it. Our lawyers were worried about a weakened patent portfolio and whatnot. In my case at least I won and now we have a culture of upstreaming changes. So don't give up the fight, you might win.
git clone https://git.ffmpeg.org/ffmpeg.git
cd ffmpeg
git log --pretty=format:"%ae" | grep -E "chromium\.org|google\.com" | wc -l

prints 643.

I sympathize with and understand those issues for small companies, but after a certain size those excuses stop being convincing.
Especially for a software company like Google who runs dozens of open source projects, employs an army of lawyers to monitor compliance, and surely has to deal with those issues on a daily basis anyway.
At some point there needs to be pushback. Companies get a huge amount of value from hobbyist open source projects, and eventually they need to start helping out or be told to go away.
None of us want the economy to be hurt, right?
Sounds like they'll just throw their employees to work on it rather than monetarily fund it, that way they can aura farm.
Roles fixing FFmpeg bugs would be a hard sell in this environment, imho.
Most of them would just have pirated in the old days, and most FOSS licences give them a clear conscience to behave as always.
Yes, GPL 3 is a lot ideologically but it was trying to limit excessive leeching.
Now that I have opened the flood gates of a 20 year old debate, time to walk away.
So I'm not sure what GPLv3 really has to do with it in this case; if it was under a "no billion dollar company allowed" non-free-but-source-available license, this same thing would have happened, provided the project was popular enough for Project Zero to have looked at it for security issues.
But opening security issues here is not related to that in any way. It's an obscure file format Google definitely doesn't use, the security issue is irrelevant to Google's usages of it.
The critique would make sense if Google was asking for ffmpeg to implement something that Google wanted, instead of sending a patch. But they don't actually care about this one, they aren't actually asking for them to fix it for their benefit, they are sending a notice of a security issue that only affects people who are not Google to ffmpeg.
"and Google provided substantial donations to SPI's general fund".
The amounts don't appear to be public (and what is enough!?)
If Google wants to force a faster turnaround on the fixes, they can send the reports with patches or they can pay for prioritization.
And like so many posters in this thread, you seem to be under the impression that Google needed this fixed at some specific timeline. In reality the fix timeline, or even a total lack of a fix, makes no impact to them. They almost certainly already disable these kinds of codecs in their build. They reported this for the good of the ecosystem and the millions of users who were vulnerable.
If there really are all these machine-generated, meaningless security reports, then wasting time on those sounds like a very sensible complaint, but then they should point at that junk as the problem.
But for the specific CVE discussed it really looks to me like they are doing everything right: it's a real, default-configuration exploitable issue, they reported it and ffmpeg didn't fix or ask for any extension then it gets publicly disclosed after 90 days per a standard security disclosure policy.
There's a reason Google turned into year 2000 Microsoft "it's viral!" re. the AGPL. They're less able to ignore the intent of the license and lock away their changes.
1) dedicating compute resources to continuously fuzzing the entire project
2) dedicating engineering resources to validating the results and creating accurate and well-informed bug reports (in this case, a seriously underestimated security issue)
3) additionally for codecs that Google likely does not even internally use or compile, purely for the greater good of FFMPEG's user base
Needless to say, while I agree Google has a penny to spare to fund FFMPEG, and should (although they already contribute), I do not agree with funding this maintainer.
Providing a real CVE is a contribution, not a burden. The ffmpeg folks can ignore it, since by all indications it's pretty minor.
1) allow the vulnerabilities to remain undiscovered & unpatched zero-days (stop submitting "slop" CVEs.)
2) supply the patches (which i'm sure the goalpost will move to the maintainers being upset that they have to merge them.)
3) fund the project (including the maintainers who clearly misunderstand the severity of the vulnerabilities and describe them as "slop") (no thank you.)
This entire thread defies logic.
The only thing that defies logic is how poorly your strawman is constructed.
Or I just want the $3.5 trillion company to also provide the patches to OSS libraries/programs/etc that their projects with hundreds of millions or billions in funding happen to find.
Crazy, I know.
Isn't a real CVE (like any bug report) both a contribution and a burden?
If it's to fix vulnerabilities, it seems within reason to expect a patch. If the reason Google isn't sending a patch is because they truly think the maintainers can fix it better, then that seems fair. But if Google isn't sending a patch because fixing vulns "doesn't scale" then that's some pretty weak sauce.
Maybe part of the solution is creating a separate low priority queue for bug reports from groups that could fix it but chose not to.
If someone goes on to use that code for serious purposes, that's on them. They were explicitly warned that this is not production commercial code. It's weekend hobby work. There's no ethical obligation to make your hobby code suitable for production use before you share it. People are allowed to write and share programs for fun.
Deliberate malware would be something like an inbuilt trojan that exfiltrates data (e.g. many commercial applications). Completely different.
If they wanted to market ffmpeg as a toy project only, not to be trusted, they could do that, but they are not doing that.
But they don't say they warrant their work. They have a notice reminding you that you are receiving something for free, and that thing comes with no support, and is not meant to be fit for any particular use you might be thinking of, and that if you want support/help fulfilling some purpose, you can pay someone (maybe even them if you'd like) for that service. Because the way the world works is that as a general principle, other people don't owe you something for nothing. This is not just some legal mumbo jumbo. This is how life works for everyone. It's clear that they're not being malicious (they're not distributing a virus or something), and that's the most you can expect from them.
Computer security is always contextual, but as a general rule, if you're going to be accepting random input from unknown parties, you should have an expert that knows how to do that safely. And as mentioned elsewhere in these comments, such an expert would already be compiling out codecs they don't need and running the processing in a sandboxed environment to mitigate any issues. These days even software written in-house is run in sandboxed environments with minimal permissions when competent professionals are making things. That's just standard practice.
So they should be proud that they support obscure codecs, and by default the onus is on no one to ensure it's free from bugs. If an engineer needs to make a processing pipeline, the onus is always on them to do that correctly. If they want to use a free, unsupported hobby tool as part of their serious engineering project, it's on them to know how to manage any risks involved with that decision. Making good decisions here is literally their job.
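As one illustration of that sandboxing advice, here is a minimal sketch using bubblewrap (assuming `bwrap` and `ffmpeg` are installed; the bind mounts and filenames are examples for a typical Linux layout, not a hardened recipe):

```shell
# Minimal sketch: run ffmpeg on an untrusted file inside a bubblewrap sandbox.
# --unshare-all drops network and other namespaces; the filesystem is mounted
# read-only except for a scratch output directory.
bwrap \
  --ro-bind /usr /usr \
  --ro-bind /lib /lib \
  --ro-bind /lib64 /lib64 \
  --ro-bind untrusted.mkv /tmp/in.mkv \
  --bind ./out /tmp/out \
  --unshare-all \
  --die-with-parent \
  /usr/bin/ffmpeg -i /tmp/in.mkv /tmp/out/result.mp4
```

Even if a decoder bug is triggered, the process has no network and can only write to the scratch directory, which is the "mitigate any issues" part the comment above is describing.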
All I'm asking for right here is consistency about whether the library is mostly secure. The ethical requirement is to follow through on your claims and implications, while making claims and implications is completely optional.
> Computer security is always contextual, but as a general rule, if you're going to be accepting random input from unknown parties, you should have an expert that knows how to do that safely. And as mentioned elsewhere in these comments, such an expert would already be compiling out codecs they don't need and running the processing in a sandboxed environment to mitigate any issues.
Sandboxing is great defense in depth but most software should not require sandboxing. And expecting everyone to have an expert tweaking compilation is not realistic. Defaults matter, and security expectations need to be established between the site, the documentation, and the defaults, not left as a footgun for only experts to avoid.
People are allowed to make secure, robust software for fun. They can take pride in how good of a job they do at that. They can correctly point out that their software is the best. That still leaves them with no obligations at all for having shared their project for free.
If you are not an expert in hardening computers, don't run random untrusted inputs through it, or pay someone to deliver a turnkey hardened system to you. That someone might be Adobe selling their codecs/processing tools, or it might be an individual or a company like Redhat that just customizes ffmpeg for you. In any case, if you're not paying someone, you should be grateful for whatever goodwill you get, and if you don't like it, you can immediately get a full refund. You don't even have to ask.
The person doing serious things in a professional context is always the one with the obligation to do them correctly. When I was at IBM, we used exactly 1 external library (for very early processor initialization) and 1 firmware blob in the product I worked on, and they were paid deliverables from hardware vendors. We also paid for our compiler. Everything else (kernel, drivers, firmware, tools) was in-house. If companies want to use random free code they found on the Internet without any kind of contract in place, that's up to them.
It is if they fix bugs like this. Status quo everything is fine with their actions, they don't need to do anything they aren't already doing.
If they decide they don't want to fix bugs like this, I would say they have the ethical obligation to make it clear that the software is no longer mostly secure. This is quite easy to accomplish. It's not a significant burden in any way.
Basically, if they want to go the less-secure route, I want it to be true that they're "effectively saying" that all caps text you wrote earlier. That's all. A two minute edit to their front page would be enough. They could edit the text that currently says "A complete, cross-platform solution to record, convert and stream audio and video." I'll even personally commit $10 to pay for those two minutes of labor, if they decide to go that route.
I do think that contributing fuzzing and quality bug reports can be beneficial to a project, but it's just human nature that when someone says "you go ahead and do the work, I'll stand here and criticize", people get angry.
Rather than going off and digging up ten time bombs which all start counting down together, how about digging up one and defusing it? Or even just contributing a bit of funding towards the team of people working for free to defuse them?
If Google really wants to improve the software quality of the open source ecosystem, the best thing they could do is solve the funding problem. Not a lot of people set out to intentionally write insecure code. The only case that immediately comes to mind is the xz backdoor attempt, which again had a root cause of too few maintainers. I think figuring out a way to get constructive resources to these projects would be a much more impressive way to contribute.
This is a company that takes a lot of pride in being the absolute best of the best. Maybe what they're doing can be justified in some way, but I see why maintainers are feeling bullied. Is Google really being excellent here?
There's an appeals process: https://www.cve.org/Resources/General/Policies/CVE-Record-Di...
And of course CVE is not the only numbering system, there's OSV DB, GHSA, notcve.org etc.
The Linux kernel went in the opposite direction: Every bugfix that looks like it could be relevant to security gets a CVE[1]. The number of CVEs has increased significantly since it became a CNA.
Google is not a monolith. If you asked the board, or the shareholders, of Google what they thought of open source software quality, they would say they don't give a rat's ass about it. Someone within Google who does care has been given very limited resources to deal with the problem, and is approaching it in the most efficient way they can.
>it's just human nature that when someone says "you go ahead and do the work, I'll stand here and criticize", people get angry
Bug reports are not criticism, they are in fact contributions, and the human thing to do when someone contributes to your project is to thank them.
>This is a company that takes a lot of pride in being the absolute best of the best.
There was an era when people actually believed that Google was the best of the best, rather than saying it as a rhetorical trick, and during that era they never would have dreamed of making such self-centered demands of Google. This Project Zero business comes across as the last vestige of a dying culture within Google. Why do people feel the need to be so antagonistic towards it?
>I can tell you with 100% certainty that there are undiscovered vulnerabilities in the Linux kernel right now. Does that mean they should stop shipping?
Hence why I qualified "deliberately".
> Our mission is to make the discovery and exploitation of security vulnerabilities more difficult, and to significantly improve the safety and security of the Internet for everyone.
> We perform vulnerability research on popular software like mobile operating systems, web browsers, and open source libraries. We use the results from this research to patch serious security vulnerabilities, to improve our understanding of how exploit-based attacks work, and to drive long-term structural improvements to security.
> After finding a number of flaws in software used by many end-users while researching other problems, such as the critical "Heartbleed" vulnerability, Google decided to form a full-time team dedicated to finding such vulnerabilities, not only in Google software but any software used by its users.
The ideal outcome is that Project Zero sends its discoveries off to a team who triage and develop patches for the significant vulnerabilities, and then the communication with the project is a much more helpful one.
Google could staff a team that is responsible for remediating vulns in open source software that doesn't actually affect any of Google's products. Lord knows they've got enough money. I'd prefer it if they did that. But I don't really see the reasoning behind why they must do this or scrap all vuln research on open source software.
Re-read the article. There's CVEs and then there's CVEs. This is the former, and they're shoving tons of those down the throats of unpaid volunteers while contributing nothing back.
What Google's effectively doing is like a food safety inspection company going to the local food bank to get the food that they operate their corporate cafeteria on just to save a buck, then calling the health department on a monthly basis to report any and all health violations they think they might have seen, while contributing nothing of help back to the food bank.
This is an actual bug in submitted code. It doesn't matter that it's for some obscure codec, it's technically maintained by the ffmpeg project and is fair game for vulnerability reports.
Given that Google is also a major contributor to open-source video, this is more like a food manufacturer making sure that grocery stores are following health code when they stock their food.
Mind you, the grocery store has no obligation to listen to them in this metaphor and is free to just let the report/CVE sit for a while.
This is exploitable on a majority of systems as the codec is enabled by default. This is a CVE that is being severely underestimated.
Equally there is no requirement on ffmpeg to fix these CVEs nor any other.
And, of course, there is no requirement on end-users to run software from projects which do not consider untrusted-input-validation bugs to be high priority.
What's this even saying?
Then they're free to fork it and never use the upstream again.
I see this sort of sentiment daily. The sentiment that only what is strictly legal or required is what matters.
Sometimes, you know, you have to recognise that there are social norms and being a good person matters and has intrinsic value. A society only governed by what the written law of the land explicitly states is a dystopia worse than hell.
Google did more than what is "strictly legal or required", and what they did was submit a good and valid bug report. But for some reason we're mad because they didn't do even more. Why? The world is objectively a better place for having this bug report, at least now people know there's something to address.
The Copenhagen Interpretation of Ethics is annoyingly prevalent (https://forum.effectivealtruism.org/posts/QXpxioWSQcNuNnNTy/...)
That is plainly ridiculous. An orchard without heavy metals is obviously an ideal world in this case, but a world where people are at least informed of the places where the heavy metals are is orders of magnitude better than one where they're unknowingly getting heavy metal poisoning.
If you find yourself with potentially serious security bugs in your repo, then the social norm should be for you to take ownership of that because, well, it's your repo.
The socially unacceptable activity here should be treating security issues as an irritation, or a problem outside your control. If you're a maintainer, and you find yourself overwhelmed by genuine CVE reports, then it might be worth reflecting on the root cause of that. What ffmpeg did here was to shoot the messenger, which is non-normative.
- choosing to do this of their own volition
- are effectively just using their resources to throw bug reports over the wall unprompted.
- benefiting from the bugs getting fixed, but not contributing to them.
I would be very surprised if Google builds this codec when they build ffmpeg. If you run a/v codecs (like ffmpeg) in bulk, the first thing to do is sandbox the hell out of it. The second thing you do is strictly limit the containers and codecs you'll decode. Not very many people need to decode movies from old LucasArts games, for video codecs, you probably only want mpeg 1-4, h.26x, vp8, vp9, av1. And you'll want to have fuzzed those decoders as best you can too.
Nobody should be surprised that there's a security problem in this ancient decoder. Many of the eclectic codecs were written to mimic how the decoders that shipped with content were written, and most of those codecs were written assuming they were decoding a known good file, because why wouldn't they be. There's no shame, that's just how it is... there's too much to proactively investigate, so someone doing fuzzing and writing excellent reports that include diagnosis, specific location of the errors, and a way to reproduce are providing a valuable contribution.
Could they contribute more? Sure. But even if they don't, they've contributed something of value. If the maintainers can't or don't want to address it, that'd be reasonable too.
https://j00ru.vexillium.org/2014/01/ffmpeg-and-the-tale-of-a...
"While reading about the 4xm demuxer vulnerability, we thought that we could help FFmpeg eliminate many potential low-hanging problems from the code by making use of the Google fleet and fuzzing infrastructure we already had in place"
This is excellent, to be clear. But it's not compatible with the yarn currently being spun of a purely extractive relationship.
[1]: https://fflabs.eu
…. overheard at a meeting of CEO and CTO at generic evil mega tech corp recently.
Burning cash to generate spam bug reports to burden volunteer projects when you have the extra cash to burn to just fix the damn issue leaves a very sour taste in my mouth.
That's the difference between "it may or may not be that there's someone who cares" versus "no one should be running this software anywhere in the general vicinity of untrusted inputs".
+100000
My favorite 8.x or higher CVEs are the ones where you would have to take untrusted user input, bypass all the standard ways to ingest and handle that type of data, and pass it into some internal function of a library. And then the end result is that a regex call becomes more expensive.
Probably the right solution is to disable this codec. You should have to make a choice to compile with it; although if you're running ffmpeg in a context where security matters, you really should be hand picking the enabled codecs anyway.
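A sketch of that hand-picking, using FFmpeg's build-time options (`--disable-everything` plus per-component `--enable-*` flags are how the configure script expresses an allow-list; the specific components chosen here are just an example):

```shell
# Build an ffmpeg where everything is off except an explicit allow-list;
# any demuxer/decoder not enabled here is simply never compiled in,
# so its bugs cannot be reached.
./configure \
  --disable-everything \
  --enable-demuxer=mov --enable-demuxer=matroska \
  --enable-decoder=h264 --enable-decoder=vp9 --enable-decoder=av1 \
  --enable-parser=h264 \
  --enable-protocol=file
make -j"$(nproc)"
```

Compiling the obscure codec out entirely is a stronger guarantee than hoping it's never triggered at runtime.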
If this happens another 1000 times (easily possible with AI), Google just got free labour and free publicity for "discovering 1000 critical bugs (but not fixing them, even though they were easy to fix)"
Then if there's any changes or additional work to be done, you now have to spend time communicating with the patch submitter, either getting them to make the requested changes, or rejecting their patch outright and writing it on your own.
And after all that we'd be right back here, only instead of the complaint being "we don't have enough time to review all your bug reports" it would be "we don't have enough time to review all your PRs"
Maybe if it was an actual engineer from Google doing this they would have gotten a better response. Don’t expect people to treat AIs the same way we treat people.
But if you send me an automated report and then tell me to jump I’m telling you to f*ck off.
What happens if I send you an automated report that tells you of a meaningful problem you didn't know about, and don't tell you to jump?
I absolutely understand the issue that a filthy-rich company tries to leech off of real unpaid humans. I don't understand how that issue leads to "GTFO, we won't fix these bugs". That makes no sense to me.
If you announce that a vulnerability (unspecified) was found in a project before the patch is released, doesn't that just incentivize bad actors to direct their efforts at finding a vulnerability in that project?
I don't see why actors would suddenly reallocate large amounts of effort especially since a patch is now known to be coming for the issue that was found and thus the usefulness of the bug (even if found) is rather limited.
Changing the norm to "We don't announce unpatched vulnerabilities but there is a deadline" was a massive improvement.
There is precedent for this: https://sqlite.org/consortium.html
Are the people who got scammed into "working for exposure" required to work for those people?
No, of course not, no one held a gun to their head, but it's still kind of crappy. The influencers that are "paying in exposure" are taking advantage of power dynamics and giving vague false promises of success in order to avoid paying for shit that they really should be paying for.
How do you get people to use your app if it's not open source, and therefore not free?
For some projects, it feels better to have some people use it even if you did it for free than to just not do it at all (or do it and keep it in a drawer), right?
I am wondering, I haven't found a solution. Until now I've been open sourcing stuff, and overall I think it has maybe brought more frustration, but on the other hand maybe it has some value as my "portfolio" (though that's not clear).
Also I have never received requests from TooBigTech, but I've received a lot of requests from small companies/startups. Sometimes it went as far as asking for a permissive licence, because they did not want my copyleft licence. Never offered to pay for anything though.
I've said this on here before, but a few months ago I wrote a simple patch for LMAX Disruptor, which was merged in. I like Disruptor, it's a very neat library, and at first I thought it was super cool to have my code merged.
But after a few minutes, I started thinking: I just donated my time to help a for-profit company make more money. LMAX isn't a charity, they're a trading company, and I donated my time to improve their software. They wouldn't have merged my code in if they didn't think it had some amount of value, and if they think it has value then they should pay me.
I'm not very upset over this particular example since my change was extremely simple and didn't take much time at all to implement (just adding annotations to interfaces), so I didn't donate a lot of labor in the end, but it still made me think that maybe I shouldn't be contributing to every open source project I use.
But I think a frame shift that might help is that you're not actually donating your time to LMAX (or whoever). You're instead contributing to make software that you've already benefited from become better. Any open source library represents many multiple developer-years that you've benefited from and are using for free. When you contribute back, you're participating in an exchange that started when you first used their library, not making a one-way donation.
> They wouldn't have merged my code in if they didn't think it had some amount of value, and if they think it has value then they should pay me.
This can easily be flipped: you wouldn't have contributed if their software didn't add value to your life first and so you should pay them to use Disruptor.
Neither framing quite captures what's happening. You're not in an exchange with LMAX but maintaining a commons you're already part of. You wouldn't feel taken advantage of when you reshelve a book properly at a public library so why feel bad about this?
Like how Apple stopped shipping up-to-date GNU tools in 2008 because of GPLv3. That move showed me that Apple did not want you to use your computer as your computer.
And of course, they won't share with each other. So another driver would be fear of a slight competitive disadvantage vs other-big-tech-monstrosity having a better version.
Now, in this scenario, some tech CEO, somewhere has this brilliant bright spark.
"Hey, instead of dumping all these manhours & resources into DIYing it, with no guarantee that we still won't be left behind - why don't we just throw 100k at the original oss project. We'll milk the publicity, and ... we won't have to do the work, and ... my competitors won't be able to use it"
I quite like this scenario.
It'll still cause Google and many others to panic, but weird and custom licenses are even worse for attracting business than open source ones.
People need to think about what licence they want to use for a project.
So not only would ffmpeg have multiple uncovered vulnerabilities, they would have less contributions and patches and less money for funding the maintainers. And for what? To satisfy the unfocused and mistaken rage of the peanut gallery online?
That's only one possible benefit.
Another could be to gain leverage on big tech companies via dual licensing. If Google, Amazon, etc want to continue using FFmpeg without forking, they could do so by paying for the old LGPL license. It would likely be cheaper than maintaining a fork. They'd also have to release their changes anyways due to LGPL if they ever ship it client side.
So the incentive to contribute rather than fork would remain, and the only difference is that they have to pay a licensing fee.
Ofc this is probably just a fantasy. Relicensing FFmpeg like this probably isn't easy or possible.
Probably could pull in millions per year.
Submit the bug AND the patch and be done with it; don't make it someone else's problem when it's an OSS library/tool. A for-profit vendor? Absolutely. But this? Hell naw.
Additionally, a search of the git commits shows a regular stream of commits from `google.com` addresses. So as near as I can tell, Google does regularly contribute code to the project, they are a customer of the project maintainer's commercial support company (and that fact is on the support company's website) and they submit high quality bug reports for vulnerabilities. What are we mad about again?
I'm not being dismissive. I understand the imperative of identifying and fixing vulnerabilities. I also understand the detrimental impact that these problems can potentially have on Google.
What I don't understand is the choice to have a public facing project about this. Can anyone shine a light on this?
Their security team gaining experience on other projects can teach them some more diversity in terms of (malware) approaches and vulnerability classes, which can in turn be used to secure their own software better.
For other projects there's some vanity/reputation to be gained. Having some big names with impressive resumes publicly talk about their work can help attract talent.
Lastly, Google got real upset that the NSA spied on them (without their knowledge, they can't help against warrants of course).
Then again, there's probably also some Silicon Valley bullshit money being thrown around. Makes you wonder why they don't invest a little bit more to pay someone to submit a fix.
And pushing forward the idea that "responsible disclosure" doesn't mean the software creator can just sit on a bug for as long as they want and act superior and indignant when the researcher gives up and publishes anyway because the creator is dragging their ass.
There are many groups and companies that do security research on common software and then sell the resulting vulnerabilities to people who don’t have your best interests in mind. Having a major company get ahead of this and share the results is a win for all of us.
A lot of people in this comment section don’t understand the broader security ecosystem. There are many vendors who will even provide patched versions of software to work around security issues that aren’t yet solved upstream. Some times these patches disable features or break functionality, or simply aren’t at a level where upstream is interested yet. But patching known issues is valuable.
Getting patches accepted upstream in big open source projects isn’t always easy. They tend to want things done a certain way or have a high bar to clear for anyone submitting work.
I still fail to see why "ffmpeg is not allowed to fix bugs reported by corporations" is a good strategy. To me this sounds not logical.
It's not FFmpeg's problem if someone uses a vulnerability to pwn YouTube, it's Google's problem.
Also, in the article, they mention that Google's using AI to look for bugs and report them, and one of them that it found was a problem in the code that handles the rendering of a few frames of a game from 1995. That sort of slop isn't helping anyone. It's throwing the signal-to-noise ratio of the bug filings way the hell off.
What do we think the lesson corporations are going to take from this is?
1) "You should file patches with your bug reports"
2) "Even when you submit patches and you hire the developers of OSS projects as consultants, you will still get dragged through the mud if you don't contribute a patch with every bug report you make, so you might as well not contribute anything back at all"
From that perspective, the most likely problem is not that bugs are being reported, nor even that patches are not being included with bug reports. The problem is that a shift from human-initiated bug reports to large-scale LLM generation of bug reports by large corporate entities generates a lot more work and changes the value proposition of bug reports for maintainers.
Even if you use LLMs to generate bug reports, you should have a human vet and repro them as real and significant and ensure they are written up for humans accurately and concisely, including all information that would be pertinent to a human. A human can make fairly educated decisions on how to combine and prioritize bug reports, including some degree of triage based on the overall volume of submissions relative to their value. A human can be "trained" to conform to whatever the internal policies or requirements are for reports.
Go ahead and pay someone to do it. If you don't want to pay, then why are you dumping that work on others?
Even after this, managing the new backlog entries and indeed dealing with a significantly larger archive of old bug reports going forward is a significant drag on human labor - bug reports themselves entail labor. Again, the old value proposition was that this was outweighed by the value of the highest-value human-made reports and intangibles of human involvement.
Bug reports are, either implicitly or explicitly, requests to do work. Patches may be part of a solution, but are not necessary. A large corporate entity which is operationally dependent on an open source project and uses automation to file unusually large volumes of bug reports is not filing them to be ignored. It isn't unreasonable to ask them to pay for that work which they are, one way or another, asking to have done.
Look at the report that's the center of this controversy. It's detailed, has a clear explanation of the issue at hand, has references and links to the relevant code locations where the submitter believes the issue is, and has a minimal reproduction of the issue to both validate the issue and the fix. We can assume the issue is indeed valid and significant, as ffmpeg patched it before the 90-day disclosure window. There is certainly nothing about it that screams low-effort machine-generated report without human review, and at least one commenter in this discussion claims to have inside knowledge that all these reports are verified and written by humans before submission to the projects.
I won't pretend that it's a perfect bug report, but I will say if every bug report I got for the rest of my career was of this caliber, I'd be quite happy with that.
> It isn't unreasonable to ask them to pay for that work which they are, one way or another, asking to have done.
Google quite literally hires some of the ffmpeg maintainers as consultants as attested to by those same maintainer's own website (fflabs.eu). They are very plainly putting cold hard cash directly into the funds of the maintainers for the express purpose of them maintaining and developing ffmpeg. And that's on top of the code their own employees submit with some regularity. As near as I can tell, Google does everything people complaining about this are saying they don't do, and it's still not enough. One has to wonder then what would be enough?
The recent iOS zero-day (CVE-2025-43300) targeted the rarely used DNG image format. How long before this FFMPEG vulnerability is exploited to compromise legacy devices in the wild, I wonder?
I’m not a fan of this grandstanding for arguably questionable funding. (I surely would not fund those who believe these issues are slop.) I’d like to think most contributors already understand the severity and genuinely care about keeping FFMPEG secure.
It's not a codec made for one game back in 1994, it's still in very active use today if you're using such a rare and uncommon program as Photoshop.
I read this as nobody wants CVEs open on their product, so you might feel forced to fix them. I find it more understandable if we talk about web frameworks: WordPress doesn't want security CVEs open for months or years, or users would be upset that it introduces new features while neglecting security.
I am a nobody, and whenever I find a bug I put in extra work to attach a fix to the same issue. Google should do the same.
Now, should Google? Probably; it would be nice, but no one has to. The gift from Google is the discovery of the bug.
That does not impact their business or their operations in any way whatsoever.
> If it's a valid bug then it's a valid bug end of story.
This isn't a binary. It's why CVEs have a whole sordid scoring system to go along with them.
> Software owes it to its users to be secure
ffmpeg owes me nothing. I haven't paid them a dime.
I don't know what tools and backends they use exactly, but working purely by statistics, I'm sure some place in Google's massive cloud compute empire is relying on ffmpeg to process data from the internet.
Plus, this bug was reported by AI, so it was as much a proof of concept/experiment/demonstration of their AI security scanner as it was an attempt to help secure ffmpeg
That is true. At the same time Google also does not owe the ffmpeg devs anything either. It applies both ways. The whole "pay us or we won't fix this" makes no sense.
Then they can stop reporting bugs with their asinine one-size-fits-all "policy." It's unwelcome and unnecessary.
> It applies both ways.
The difference is I do not presume things upon the ffmpeg developers. I just use their software.
> The whole "pay us or we won't fix this" makes no sense.
Pay us or stop reporting obscure bugs in unused codecs found using "AI" scanning, or at least, if you do, then change your disclosure policy for those "bugs." That's the actual argument and is far more reasonable.
Right, they should just post the 0days on their blog.
Software should be correct and secure. Of course this can’t always be the case but it’s what we should strive for. I think that’s baseline
There is no such obligation.
There is no warranty and software is provided AS-IS explicitly by the license.
You bring up licensing. I’m not talking about legally I’m talking about a social contract.
The social contract is “here is something I’ve worked on for free, and it is a gift. Take it or leave it.”
You want me to work on something for it? FYPM
I don't consider a security issue to be a "standard bug." I need to look at it, and [maybe] fix it, regardless of who reported it.
But in my projects, I have gotten requests (sometimes, demands) that I change things like the published API (a general-purpose API), to optimize some niche functionality for one user.
I'll usually politely decline these, and respond with an explanation as to why, along with suggestions for them to add it, after the fact.
Sure, triage it. It shouldn’t be publicly disclosed within a week of the report though, because the fix is still a relatively low priority.
How are you getting ffmpeg to process a stream or file type different from the one you’re expecting? Most use cases of ffmpeg are against known input and known output types. If you’re just stuffing user-supplied files through your tools, then yes you have a different threat model.
... That is how ffmpeg works? With default settings it auto-detects the input codec from the bitstream, and the output codec from the extension. You have to go out of your way to force the input codec and disable the auto-detection, and I don't think most software using ffmpeg as a backend would force the user to manually do it, because users can't be trusted to know those details.
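For illustration, a minimal shell sketch of that auto-detection behavior. This assumes an ffmpeg binary on PATH with the lavfi test source and the native mpeg4 encoder (both present in default builds); file names are arbitrary:

```shell
# Skip gracefully if ffmpeg isn't installed.
command -v ffmpeg >/dev/null 2>&1 || { echo "ffmpeg not installed"; exit 0; }

# Make a tiny test clip from the built-in lavfi test source.
ffmpeg -y -v error -f lavfi -i testsrc=duration=1:size=64x64:rate=10 \
       -c:v mpeg4 clip.mp4

# Default behavior: the input format is probed from the bitstream, not the
# extension. Renaming the file changes nothing about which demuxer runs.
cp clip.mp4 mystery.bin
ffmpeg -y -v error -i mystery.bin -c:v mpeg4 out.avi

# Forcing the input: raw video has nothing to probe, so the demuxer and
# stream parameters must be pinned explicitly with -f/-pix_fmt/-video_size.
ffmpeg -y -v error -i clip.mp4 -f rawvideo -pix_fmt yuv420p frames.yuv
ffmpeg -y -v error -f rawvideo -pix_fmt yuv420p -video_size 64x64 \
       -framerate 10 -i frames.yuv -c:v mpeg4 forced.mp4
```

The `mystery.bin` step is the crux of the threat model discussed above: whatever extension a user-supplied file carries, ffmpeg routes it to whichever decoder the probed bitstream selects.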
If it’s a potential problem for home users, yeah, that’s an issue but it’s not every use of the tool.
The people who do use the stream type are at risk, and have been at risk all along. They need to stop using the stream type, or get the bug fixed, or triage the bug as not exploitable.
This position is likely to drive away maintainers. Generally the maintainers need these projects less than the big companies that use them. I'm not sure what Google's endgame is.
I mean, I follow that account and never got this impression from them at all.
It’s a trillion-dollar company. I’m sure they could rifle through their couch cushions and find more than enough money and under-utilised devs to contribute funding or patches.
Especially if they’ve just spent all this cash on some ai tool.
Wakey wakey, people: FOSS is here to F you. It can be free while those who rely on it and use it also pay for it.
https://security.googleblog.com/2014/01/ffmpeg-and-thousand-...
Here's an archive link:
Google is under no obligation to work on FFmpeg.
Google leveraging AI to spam ffmpeg devs with bugs that range from real to obscure to wrongly reported may be annoying. But even then I still don't think Google is to be held accountable for reporting bugs, nor is it required to fix bugs. Note: I do think Google should help pay for costs and whatnot. If they were a good company they would not only report bugs but also have their developers fix the bugs, but they are selfish and greedy, everyone knows that. Even then they are not responsible for bugs in ffmpeg. And IF the bug report is valid, then I also see no problem.
The article also confuses things. For instance:
"Many in the FFmpeg community argue, with reason, that it is unreasonable for a trillion-dollar corporation like Google, which heavily relies on FFmpeg in its products, to shift the workload of fixing vulnerabilities to unpaid volunteers"
How could Google do that? It is not Google's decision. That is up to the volunteers. If they refuse to fix bugs reported by Google then that is fine. But it is THEIR decision, not Google's.
"With this policy change, GPZ announces that it has reported an issue on a specific project within a week of discovery, and the security standard 90-day disclosure clock then starts, regardless of whether a patch is available or not."
Well, many opinions here. I think ALL bugs and exploits should be made fully transparent and public INSTANTLY AND WITHOUT ANY DELAY. I understand the other side of the coin too, bla bla we need time to fix it bla bla. I totally understand it. Even then I believe the only truthful, honest way to deal with this is 100% transparency at all times. This includes when there are negative side effects too, such as open holes. I believe in transparency, not in secrecy. There can not be any compromise here IMO.
"Many volunteer open source program maintainers and developers feel this is massively unfair to put them under such pressure when Google has billions to address the problem."
So what? Google reports issues. You can either fix that or not. Either way is a strategy. It is not Google's fault when software can be exploited, unless they wrote the code. Conversely, the bug or flaw would still exist in the code EVEN IF GOOGLE DID NOT REPORT IT. So I don't understand this part. I totally understand the issue of Google being greedy, but this here is not solely about Google's greed. This is also about how a project deals with (real) issues (if they are not real then you can ask Google why they send out so much spam).
That Google abuses AI to spam down real human beings is evil and shabby. I am all for ending Google on this planet - it does so much evil. But either it is a bug, or not. I don't understand the opinion of ffmpeg devs "because it is Google, we want zero bug reports". That just makes no sense.
"The fundamental problem remains that the FFmpeg team lacks the financial and developer resources to address a flood of AI-created CVEs."
Well, that is more an issue in how to handle Google spamming down people. Sue them in court so that they stop spamming. But if it is a legit bug report, why is that a problem? Are ffmpeg devs concerned about the code quality being bad? If it is about money then even though I think all of Google's assets should be seized and the CEOs that have done so much evil in the last 20 years be put to court, it really is not their responsibility to fix anything written by others. That's just not how software engineering works; it makes no sense. It seems people confuse ethics with responsibilities here. The GPL doesn't mandate code fixing to be done; it mandates that if you publish a derivative etc... of the code, that code has to be published under the same licence and made available to people. That's about it, give or take. It doesn't say corporations or anyone else HAS to fix something.
"On the other hand, security experts are certainly right in thinking that FFmpeg is a critical part of the Internet’s technology framework and that security issues do need to be made public responsibly and addressed."
I am all for that too, but even stricter - all security issues are to be made public instantly, without delay, fully and completely. I went to open source because I got tired of Microsoft. Why would I want to go back to evil? Not being transparent here is no valid excuse IMO.
"The reality is, however, that without more support from the trillion-dollar companies that profit from open source, many woefully underfunded, volunteer-driven critical open-source projects will no longer be maintained at all."
Wait - so it is Google's fault if projects die due to lack of funding? How does that explanation work?
You can choose another licence model. Many choose BSD/MIT. Others choose GPL. And so forth.
"For example, Wellnhofer has said he will no longer maintain libxml2 in December. Libxml2 is a critical library in all web browsers, web servers, LibreOffice and numerous Linux packages. We don’t need any more arguments; we need real support for critical open source programs before we have another major security breach."
Yes, that is a problem - the funding part. I completely agree. I still don't understand the "logic" of trying to force corporations to have to do so when they are not obliged. If you don't want corporations to use your code, specify that in the licence. The GPL does not do that. I am confused about this "debate" because it makes no real sense to me from an objective point of view. The only part that I can understand pisses off real humans is when Google uses AI as a pester-spam attack orgy. Hopefully a court agrees and splits up any company with more than 100 developers into smaller entities the moment they use AI to spam real human beings.
Google has a pretty regular stream of commits in the ffmpeg git history, and is proudly declared to be a customer of `fflabs.eu`, which appears to be the ffmpeg lead maintainer's private consulting company. Two of the maintainers on ffmpeg's "hire a dev" page[1] are also listed as employees of fflabs[2]. Honestly, Google seems like they're being a model for how corporations can give back to OSS better and benefit from that, but instead everyone is up in arms because they don't give patches in all of their bug reports.
[1]: https://ffmpeg.org/consulting.html [2]: https://fflabs.eu/about/
Edit: Notably, Google is a paying client of FFmpeg's consulting entity.
No it's not "unreasonable" to ask for patches along with bug fixes, but it is unreasonable to be mad if they don't. They could just not file the bug reports at all, and that is an objectively worse outcome.
The nice thing is that the open source contributions done by a Googler aren't necessarily tied to their Google identity.
Your stance seems to be that it is unreasonable to be annoyed by someone who is being unreasonable.
When I searched for synonyms for "unreasonable" in a major English-language thesaurus, the following synonyms were listed:
indefensible, mindless, reasonless, senseless, unjustified, untenable, unwarranted
So yes, it absolutely is valid for the FFMPEG crew to feel trolled by Project Zero.
No, that's not their stance.
You talked about whether ffmpeg was reasonable. They talked about whether ffmpeg was unreasonable.
You never accused google of being unreasonable, and they never mentioned it either.
So this idea of "responding to google being unreasonable" is a brand new premise. And I'm pretty sure they would disagree with that premise.
Google is absolutely being unreasonable here -- they should instruct their engineers to submit a patch when submitting CVEs, and FFMPEG is perfectly valid to engage in a little activism to nudge them along.
It's all connected but... Here, I'll phrase it more simply:
They didn't agree that google is being unreasonable. You are not interpreting them right.
I don't care how confident you are that google is being unreasonable. The "your stance seems to be" statement in your previous comment is wrong.
>it's not "unreasonable" to ask for patches along with bug fixes, but it is unreasonable to be mad if they don't
So the ask (make a patch for your CVEs) is reasonable. It follows that to fail to do so is unreasonable. Whether the poster agrees Google is unreasonable or not is up for debate, but if they choose to espouse that the request is reasonable and that Google is reasonable, they're putting forth an irrational belief not rooted in their own logic.
But hey, lots of folks on HN are biased towards Google for financial reasons, so I totally get it.
Either their stance is what I said, or, if their stance differs, they are a hypocrite; there really is no middle ground here.
Ah, that's where the confusion happens. It's your stance that it follows, but tpmoney was directly disagreeing with that logic.
tpmoney's stance, and my stance, is that it's reasonable to ask and it's also reasonable to say no.
It's not irrational to say that you can reasonably decline a reasonable request. Jeez.
(Also even if it was irrational, that wouldn't make tpmoney a hypocrite. That claim is just weird.)
The many large corporations should be funding these tools they depend on, to increase time allocations and thus the ability to be responsive, but this isn't an either/or. These types of thinking erode the communities of such projects and the minds of the contributors.
FWIW I've totally been that developer trapped in that perspective so I empathize, there are simply better mental stances available.
ffmpeg is complaining that security bugs are such a drag that it's driving people away from their hobby/passion projects. Well, if fixing security bugs isn't your passion, why not just say that? Say it's not your priority, and if someone else wants it to be a priority, they can write the patches. Problem solved?
Open source projects want people to use their work. Google wants bugs -- especially security ones -- found and fixed fast. Both goals make sense. The tension starts when open source developers expect payment for fixes, and corporations like Google expect fixes for free.
Paying for bug fixes sounds fair, but it risks turning incentives upside down. If security reports start paying the bills, some maintainers might quietly hope for more vulnerabilities to patch. That's a dangerous feedback loop.
On the other hand, Google funding open source directly isn't automatically better. Money always comes with strings. Funding lets Google nudge project priorities, intentionally or not -- and suddenly the "open" ecosystem starts bending toward corporate interests.
There's no neat solution. Software wants to be used. Bugs want to be found and fixed. But good faith and shared responsibility are supposed to be the glue that holds the open source world together.
Maybe the simplest fix right now is cultural, not technical: fewer arguments on Twitter, more collaboration, and more gratitude. If you rely on open source, donate to the maintainers who make your life easier. The ecosystem stays healthy when we feed it, not when we fight over it.
A license to define that nobody can expect a response? Or file bugs?
None of this has anything to do with the issue. They can just turn off Google’s access to the bug tracker. No license needed.
However Google is free to publish security research they find.
It would be most concerning if projects started including “Nobody is allowed to do security research on this project” licenses. Who would benefit from that?
If this isn't addressed, it makes this repo a target for actors that don't care about the welfare of Amazon, Google etc.
It seems quite predictable that someone will see this as a human weakness and try to exploit it, my question is whether we'll get them the support they need before the consequence hits, or whether we'll claim this was a surprise after the fact.
Doesn't google routinely contribute to ffmpeg? Certainly there's a lot of commits from `@google.com` email addresses here: https://git.ffmpeg.org/gitweb/ffmpeg.git?a=search&h=HEAD&st=...
It can be a hobby like model trains and it can be a social context like joining a club or going to church.
But it's safe to say that nobody is volunteering "to make billionaires even more profit."
/s
Fixing a private fork takes 1/5 to 1/10 the time of shepherding a PR to meet the maintainers' expectations. And why spend 5x dev time to contribute fixes to your competitor?
Filing bugs, etc., also has some value, but if a big company uses a piece of open source software and makes money with it (even indirectly), it can contribute engineering time (or money).
Stallman was right.
Why would Google act if they got smart guys working for them for free? Stop fixing Google-reported bugs.
Part of the issue is that FFmpeg is almost a meta-project. It contains so many possible optional dependencies. Which is great for features, not so great if you quickly want to know if you're exposed to the latest CVE.
Google might even prefer this deal, if it means more maintainer activity and fewer vulnerabilities.
So...your entire premise is patently false and wrong.
If anyone is struggling to triage bug reports in a Rust open source project, please contact me and I will see if this is something I can donate some recurring time to.
You really should check out how much code in e.g. the Linux kernel is written outside of "the West". It's not the 90s anymore.
The problem is the pressure to fix the bugs in x amount of time, otherwise they will be made public. Additionally, flooding the team with so many bugs that they can never keep up.
Perhaps the solution is not in whether to submit but in how the reports are submitted. I think a solution would be to have a separate channel for these big companies to submit "flood" bug reports generated by AI. If those reports won't be disclosed in x amount of time, that would take the pressure off the maintainers: they can prioritize the most pressing issues and keep a backlog of smaller issues that may require attention in the future (or be picked up by small/new contributors).
I agree with some of the arguments that patching up vulnerabilities is important, but it's crazy to put that expectation on unpaid volunteers when you flood them with CVEs, some of them completely irrelevant.
Also the solution is fairly simple: Either, you submit a PR instead of an issue. Or, you send a generous donation with the issue to reward and motivate the people who do the work.
The amount of revenue they generate using these packages will easily offset the cost of funding the projects, so I really think it's a fair expectation for companies to contribute either by delivering work or funds.
The compounding factor here is the automated reporting and disclosure process of Google's Project Zero. GPZ automatically discloses bugs after 90 days. Even if Google does not expect bugs to be fixed within this period, the FFmpeg devs clearly feel pressure.
But it is an open source project, basically a hobby for most devs. Why accept pressure at all? Continue to proceed in the time-honored method. If and when Youtube explodes because of a FFmpeg bug, Google has only itself to blame. They could have done something but decided to freeload.
I really don't see the issue.
It certainly does not seem ethically correct.
I'm sure Google could (and probably should) do even more to help, but FFMPEG directing social media rage at a company for contributing to their project is a bone-headed move. There are countless non-Google companies relying on FFMPEG that do much less for the project, and a shit show like this is certainly not going to encourage them to get involved.
I promise they are spending more on extra compute for resiliency and redundancy around FFMPEG issues than it would cost for a single SWE to just write a fix and then shepherd it through the FFmpeg approval process.
Maybe AmaGoogSoft deserves this, but then what's the threshold? If I'm in charge of Zoom or Discord and one of my engineers finds a bug, should I let them report it and risk a public blow-up? Or does my company's revenue need to be below $1B? $100M? This just poisons the well for everyone.
I’m only half joking by the way.
I think it’s more than reasonable to demand that if the AI finds bugs, then the AI should spend a couple cents and output patches.
Should Google stop devoting resources to identifying and reporting security vulnerabilities in ffmpeg?
I cannot bring myself to a mindset where my answer to this question is also "yes".
It would be one thing if Google were pressuring the ffmpeg maintainers in their prioritization decisions, but as far as I can tell, Google is essentially just disclosing that this vulnerability exists?
Maybe the CVE process carries prioritization implications I don't fully understand. Eager to be educated if that is the case.
God bless you guys.
This is so basic it shouldn't even have to be said.
The last part is just wrong. Google does directly support the project's maintenance.
I see no reason to follow the 90 day timeline if the maintainer is working on it. Especially considering it is very possible for Google to overwhelm a project with thousands of vulnerability reports.
Otherwise, I don't think Google should issue a patch. Just like in this case, only FFmpeg people know it is an obscure codec that nobody really uses, and maybe the reasonable approach would be to simply remove it. Google doesn't know that, unless they somehow take over the project.
And AFAIK Google is one of the biggest sponsors of SPI, which is the fiscal sponsor for ffmpeg. So I'm not sure where the not-paying thing came from.
The open source model is broken in this regard, licenses need to address revenue and impose fees on these companies, which can be used as bug bounties. Game engines do this and so should projects like FFMPEG, etc. The details are complex of course, but the current status quo is abusing people's good will.
Msurrow•2mo ago
Now, if Google or whoever really feels like fixing fast is so important, then they could very well contribute by submitting a patch along with their issue report.
Then everybody wins.
danlitt•2mo ago
This is very far from obvious. If google doesn't feel like prioritising a critical issue, it remains irresponsible not to warn other users of the same library.
xign•2mo ago
And occasionally you do see immediate disclosures (see below). This usually happens for vulnerabilities that are time-sensitive or actively being exploited where the user needs to know ASAP. It's very context dependent. In this case I don't think that's the case, so there's a standard delayed disclosure to give courtesy for the project to fix it first.
Note the word "courtesy". The public interest always overrides considerations for the project's fragile ego after some time.
(Some examples of shortened disclosures include Cloudbleed and the aCropalypse cropping bug, where in each case there were immediate reasons to notify the public / users)
derf_•2mo ago
I don't want to discourage anyone from submitting patches, but that does not necessarily remove all (or even the bulk of) the work from the maintainers. As someone who has received numerous patches to multimedia libraries from security researchers, they still need review, they often have to be rewritten, and most importantly, the issue must be understood by someone with the appropriate domain knowledge and context to know if the patch merely papers over the symptoms or resolves the underlying issue, whether the solution breaks anything else, and whether or not there might be more, similar issues lurking. It is hard for someone not deeply involved in the project to do all of those things.
tpmoney•2mo ago
So when the xz backdoor was discovered, you think it would have been better to sit on that quietly and try to both wrest control of upstream away from the upstream maintainers and wait until all the downstream projects had reverted the changes in their copies before making that public? Personally I'm glad that went public early. Yes there is a tradeoff between speed of public disclosure and publicity for a vulnerability, but ultimately a vulnerability is a vulnerability and people are better off knowing there's a problem than hoping that only the good guys know about it. If a Debian bug starts tee-ing all my network traffic to the CCP and the NSA, I'd rather know about it before a patch is available, at least that way I can decide to shut down my Debian boxes.
Orygin•2mo ago
This bug is almost certainly too obscure to be found and exploited in the time the fix can be produced by Ffmpeg. On the other hand, this vuln being public so soon means any attacker is now free to develop their exploit before a fix is available.
If Google's goal is security, this vulnerability should only be disclosed after it's fixed or after a reasonable time (which, according to the ffmpeg devs, is more than 90 days because they receive too many reports from Google).
tpmoney•2mo ago
But ultimately that's my point. You as an individual do not know who else has access or information about the bug/vulnerability you have found, nor do you have any insight into how quickly they intend to exploit that if they do know about it. So the right thing to do when you find a vulnerability is to make it public so that people can begin mitigating it. Private disclosure periods exist because they recognize there is an inherent tradeoff and asymmetry in making the information public and having effective remediations. So the disclosure period attempts to strike a balance, taking the risk that the bug is known and being actively exploited for the benefit of closing the gap between public knowledge and remediation. But inherently it is a risk that the bug reporter and the project maintainers are forcing on other people, which is why the end goal must ALWAYS be public disclosure sooner rather than later.
Orygin•2mo ago
Meanwhile the XZ backdoor was 100% meant to be used. I didn't say when and that doesn't matter, there is a malicious actor with the knowledge to exploit it. We can't say the same regarding the bug in a 1998 codec that was found by extensive fuzzing, and without obvious exploitation path.
Now, should it be patched? Absolutely, but should the patch be done asap at the cost of other maybe more important security patches? Maybe, maybe not. Not all bugs are security vulns, and not all security vulns are exploitable
tpmoney•2mo ago
I fully agree which is why I really don’t understand why everyone is all up in arms here. Google didn’t demand that this bug get fixed immediately. They didn’t demand that everything be dropped to fix a 25 year old bug. They filed a (very good and detailed) bug report to an open source product. They gave a private report out of courtesy and an acknowledgment of the tradeoffs inherent in public bug disclosure, but ultimately a bug is a bug, it’s already public because the source code is public. If the ffmpeg devs didn’t feel it was important to fix right away, nothing about filing a bug report, privately or publicly changes any of that.
Orygin•2mo ago
I can understand that stance for serious bugs and security vulnerabilities. I can understand such delays for a company with a big market cap to put pressure on them. But these delays are exactly like a demand put on the company: fix it asap or it gets public. We wouldn't have to do this if companies in general didn't need to get publicly pressured into fixing their stuff. Making it public has two objectives: Warn users they may be at risk, and force the publisher to produce a fix asap or else risk a reputation hit.
> If the ffmpeg devs didn’t feel it was important to fix right away, nothing about filing a bug report, privately or publicly changes any of that.
It does change how they report. Had they given more time or staggered their reports over time, Ffmpeg wouldn't have felt pressure to publish fixes asap. Even if the devs can say they won't fix, any public project will want to keep a certain quality level and not let security vulnerabilities get public.
In the end, had these reports been made by random security researchers, no drama would have happened. But if I see Google digging up 25 years old bugs, is it that much to expect them to provide a patch with it?
tpmoney•2mo ago
I think this is where the disconnect is. To my mind there is no "do this or else" message here, because there is no "or else". The report is a courtesy advance notice of a bug report that WILL be filed, no matter what the ffmpeg developers do. It's not like this is some awful secret that Google is promising not to disclose if ffmpeg jumps to their tune.
Further, the reality is most bug reports are never going to be given a 90 day window. Their site requests that if you find a security vulnerability you email their security team, but it doesn't tell you not to also file a bug report, and their bug report page doesn't tell you not to file anything you think might be a security or vulnerability bug to the tracker. And a search through the bug tracker shows more than a few open issues (sometimes years old) reporting segfault crashes, memory leaks, un-initialized variable access, heap corruption, divide by zero crashes, buffer overflows, null pointer dereferences and other such potential safety issues. It seems the ffmpeg team has no problems generally with having a backlog of these issues, so certainly one more in a (as we've been repeatedly reminded) 25 year old obscure codec parser is hardly going to tank their reputation right?
> In the end, had these reports been made by random security researchers, no drama would have happened.
And now we get to what is really the heart of the matter. If anyone else had reported this bug in this way, no one would have cared. It's not that Google did anything wrong, it's that Google has money, so everyone is mad that they didn't do even more than they already do. And frankly that attitude stinks. It's hard enough getting corporations to actually contribute back to open source projects, especially when the license doesn't obligate them to at all. I'm not advocating holding corporations to some lesser standard. If the complaint were that Google was shoving unvalidated and un-validatable low-effort reports into the bug tracker, or that they were actually harassing the ffmpeg developers with constant followups on their tickets and demands for status updates, then that would be poor behavior that we would be equally upset about if it came from anyone. But like you said, any other security researcher behaving the same way would be just fine. Shitting on Google this way for behaving according to the same standards outlined on ffmpeg's own website, because of who they are and not what they've done, just tells other corporations that it doesn't matter if you contribute code and money in addition to bug reports: if you don't meet someone's arbitrary standard based on WHO you are, rather than WHAT you do, you'll get shit on for it. And that's not going to encourage more cooperation and contributions from the corporations that benefit from these projects.
jsnell•2mo ago
And if you believe this is a "small-ish" bug just because of the ffmpeg Twitter account's gaslighting about "20 frames of a single video in Rebel Assault", then surely disclosing it would be irrelevant? The only way the disclosure timeline makes a difference is if ffmpeg, too, thinks the bug is serious.
Ygg2•2mo ago
Not publicly disclosing it also carries risk. Library users get the wrong impression that the library has no vulnerabilities, while numerous reported bugs simply never appear publicly because of the FOSS project's disclosure policy.
NobodyNada•2mo ago
You can never be sure that you're the only one in the world that has discovered or will discover a vulnerability, especially if the vulnerability can be found by an LLM. If you keep a vulnerability a secret, then you're leaving open a known opportunity for criminals and spying governments to find a zero day, maybe even a decade from now.
For this one in particular: AFAIK, since the codec is enabled by default, anyone who processes a maliciously crafted .mp4 file with ffmpeg is vulnerable. Being an open-source project, ffmpeg has no obligation to provide me secure software or to patch known vulnerabilities. But publicly disclosing those vulnerabilities means that I can take steps to protect myself (such as disabling this obscure niche codec that I'm literally never going to use), without any pressure on ffmpeg to do any work at all. The fact that ffmpeg commits themselves to fixing known vulnerabilities is commendable, and I appreciate them for that, but they're the ones volunteering to do that -- they don't owe it to anyone. Open-source maintainers always have the right to ignore a bug report; it's not an obligation to do work unless they make it one.
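The self-protection step described above (disabling an obscure codec you never use) can be done at ffmpeg build time; the configure script lets you compile out individual decoders. A minimal sketch, assuming the codec in question is the LucasArts Smush decoder registered as `sanm` (an assumption; verify the exact identifier against `./configure --list-decoders`):

```shell
# Confirm the decoder's registered name first ("sanm" is an assumption):
#   ./configure --list-decoders | grep -i sanm
# Then build ffmpeg with that decoder compiled out entirely:
./configure --disable-decoder=sanm
make -j"$(nproc)"
```

Because the decoder is excluded at compile time, a maliciously crafted file can no longer reach the vulnerable code path in that build, regardless of whether a patch ever lands upstream.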
Vulnerability research is itself a form of contribution to open source -- a highly specialized and much more expensive form of contribution than contributing code. FFmpeg has a point that companies should be better about funding and contributing to the open-source projects they rely on, but telling security researchers that their highly valuable contribution is not welcome because it's not enough is absurd, and is itself an example of making ridiculous demands for free work from a volunteer in the open-source community. It sends the message that white-hat security research is not welcome, which deters researchers from ethically finding and disclosing vulnerabilities in the future.
As an FFmpeg user, I am better off in a world where Google disclosed this vulnerability -- regardless of whether they, FFmpeg, or anyone else wrote a patch -- because a vulnerability I know about is less dangerous than one I don't know about.
arccy•2mo ago