There have been plenty of cases where something once deemed "unreachable" eventually became reachable, sometimes years later after a refactoring, and suddenly there was a real issue.
I'm only half-joking when I say that one of the premier selling points of GPL over MIT in this day and age is that it explicitly deters these freeloading multibillion-dollar companies from depending on your software and making demands of your time.
I don’t think many projects see acquiring non-paying corporate customers as a goal.
On some of our projects this has been a great success. We have some strong outside contributors doing work on our project without us needing to pay them. In some cases, those contributors are from companies that are in direct competition with us.
On other projects we've open sourced, we've had people (including competitors) use them without anyone contributing back.
Guess which projects stay open source.
I'm interested in people (not companies, or at least I don't care about companies) being able to read, reference, learn from, or improve the open source software that I write. It's there if folks want it. I basically never promote it, and as such, it has little uptake. It's still useful though, and I use it, and some friends use it. Hooray. But that's all.
Security issues like this are a prime example of why all FOSS software should be at least LGPLed. If a security bug is found in a FOSS library, who's more motivated to fix it? The dude who hacked the thing together and gave it away, or the actual users? Requesting that those users share their fixes is far from unreasonable, given that they have clearly found great utility in the software.
This isn't a popularity contest and I'm sick of gamification of literally everything.
I find that hard to believe.
Things like "panics on certain content" like [1] or [2] are "security bugs" now. By that standard anything that fixes a potential panic is a "security bug". I've probably fixed hundreds if not thousands of "security bugs" in my career by that standard.
Barely qualifies as a "security bug" yet it's rated as "6.2 Moderate" and "7.5 HIGH". To say nothing of gazillion "high severity" "regular expression DoS" nonsense and whatnot.
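To make it concrete, here is a minimal made-up sketch of the kind of bug in question (not the code behind the linked advisories): a parser that dereferences a NULL pointer when the input isn't shaped the way it expects. The entire "impact" is that the process dies.

```c
/* Hypothetical example, not code from [1] or [2]: a toy "link" parser
   that crashes on empty brackets because a NULL result goes unchecked. */
#include <stdio.h>
#include <string.h>

/* Returns the text after '[', or NULL if the brackets are empty. */
static const char *link_text(const char *input) {
    if (input[0] == '[' && input[1] == ']')
        return NULL;                /* empty link: nothing to return */
    return input + 1;
}

static void render(const char *input) {
    const char *text = link_text(input);
    /* BUG: no NULL check before using the result. */
    printf("link text length: %zu\n", strlen(text));
}

int main(void) {
    render("[hello]");   /* fine */
    render("[]");        /* crash: strlen(NULL) */
    return 0;
}
```

Annoying and worth a one-line fix, sure, but rating it "HIGH" puts it in the same bucket as bugs that actually leak or corrupt data.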
And the worst part is all of this makes it so much harder to find actual high-severity issues. It's not harmless spam.
[1]: https://github.com/gomarkdown/markdown/security/advisories/G...
That is not true at all. Availability is also critical. If nobody can use their bank accounts, the bank has no purpose.
You’re correct that inaccessible money is useless; however, one could make the case that it's secure.
And no one is talking about safety-critical systems. You are moving the goalposts. Does a gas pedal use a markdown or XML parser? No.
> Does a gas pedal use a markdown or XML parser? No.
Cars in general do use XML, extensively: https://en.wikipedia.org/wiki/AUTOSAR
Control integrity, nonrepudiation, confidentiality, privacy, ...
Also, define what you mean by "utility", because there's the inability to convert a Word document, the inability to stop a water treatment plant from poisoning people, and the ability to stop a fire, all requiring "utility".
If a hacker wants to DoS their own browser I’m fine with that.
And even if it somehow could, it's 1) just not the same thing as "I lost all my money" – that literally destroys lives, and the bank not being available for a day doesn't. And 2) almost every bug has the potential to do that in at least some circumstances – circumstances which are almost never true in real-world applications.
I wouldn't personally classify these as denial of service. They are just bugs. A 500 status code does not mean that the server uses more resources to process the request than it typically does. OOMing your browser has no impact on others. These should be labeled correctly instead of downplaying the significance of denial of service.
Like I said in my other comment, there are two entities: the end-user and the service provider. The service provider/business loses money too when customers cannot make transactions (maybe they had promised to keep a specific uptime and now they need to pay compensation). Or they simply go bankrupt because they lost their users.
Even customers may lose money or something else when they can't make transactions. Or maybe identification on some other service is based on bank credentials. The list goes on.
Not necessarily. A 500 might indicate the process died, which might take more resources to start up, have a cold cache, whatever. If you spam that repeatedly it could easily take down the site.
I broadly agree with your point that the risk of such things is grossly overstated, but I think we should be careful about going too far in the opposite direction.
That is true, but the status code 500 alone does not reveal that; it is speculation. Status codes are not always used correctly. It is typically just an indicator to dig deeper. There might be a security issue, but the code itself is not enough to tell.
Maybe this is just the same general problem of false positives. Proving something requires more effort and more time, and people tend to optimise things.
It's true that once upon a time, libxml was a critical path for a lot of applications. Those days are over. Protocols like SOAP are almost dead, and there aren't really a whole lot of new networking applications using XML in any sort of manner.
The context where these issues could be security bugs is an ever-vanishing use case.
Now, find a similar bug in zlib or zstd and we could talk about it being an actual security bug.
Quite the opposite. NETCONF is XML (https://en.wikipedia.org/wiki/NETCONF), and all modern ISP/datacenter routers and switches have it underneath, most of the time as the primary automation/orchestration protocol.
That being said, I don't think that libxml2 has support for the dark fever dream that is XMLDSig, which SAML depends on.
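For anyone who hasn't touched this stuff: a minimal sketch of what parsing a NETCONF-style reply with libxml2 might look like. The payload and element names are invented for illustration; real device responses and schemas vary by vendor.

```c
/* Hedged sketch: parse a made-up NETCONF-style <rpc-reply> with libxml2. */
#include <stdio.h>
#include <string.h>
#include <libxml/parser.h>
#include <libxml/tree.h>

int main(void) {
    const char *reply =
        "<rpc-reply message-id=\"101\""
        " xmlns=\"urn:ietf:params:xml:ns:netconf:base:1.0\">"
        "<data><interfaces><interface><name>ge-0/0/0</name>"
        "</interface></interfaces></data>"
        "</rpc-reply>";

    /* XML_PARSE_NONET keeps the parser from fetching external resources. */
    xmlDocPtr doc = xmlReadMemory(reply, (int)strlen(reply),
                                  "reply.xml", NULL, XML_PARSE_NONET);
    if (doc == NULL) {
        fprintf(stderr, "failed to parse rpc-reply\n");
        return 1;
    }

    xmlNodePtr root = xmlDocGetRootElement(doc);
    if (root != NULL)
        printf("root element: %s\n", (const char *)root->name);

    xmlFreeDoc(doc);
    xmlCleanupParser();
    return 0;
}
```

A crash or runaway memory use in that code path means your automation tooling can no longer talk to the device, which is exactly the availability argument made above.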
DoS results in whatever the system happens to do. It may well result in bad things happening, for example stopping AV from scanning new files, breaking rate limiting systems to allow faster scanning, hogging all resources on a shared system for yourself, etc. It's rarely a security issue in isolation, but libraries are never used in isolation.
A bug in a library that does rate limiting arguably is a security issue because the library itself promises to protect against abuse. But if I make a library for running Lua in redis that ends up getting used by a rate limiting package, and my tool crashes when the input contains emoji, that's not a security issue in my library if the rate limiting library allows emails with punycode emoji in them.
"Hogging all of the resources on a shared system" isn't a security bug, it's just a bug. Maybe an expensive one, but hogging the CPU or filling up a disk doesn't mean the system is insecure, just unavailable.
The argument that downtime or runaway resource use counts as a security issue, but only if the problem is in someone else's code, is some Big Brained CTO way of passing the buck onto open source software. If it were true, Postgres autovacuuming with unpleasant default configuration would be up there with Heartbleed.
Maybe we need a better way of alerting downstream users of packages when important bugs are fixed. But jamming these into CVEs and giving them severities above 5 is just alert noise and makes it confusing to understand what issues an organization should actually care about and fix. How do I know that the quadratic time regexp in a string formatting library used in my logging code is even going to matter? Is it more important than a bug in the URL parsing code of my linter? It's impossible to say because that responsibility was passed all the way downstream to the end user. Every single person needs to make decisions about what to upgrade and when, which is an outrageous status quo.
(And other examples.) That's the fallacy of looking for a single root cause. The library had an issue, the system had an issue, and together they resulted in a problem for you. Some issues are more likely to result in security problems than others, so we classify them as such. We'll always be dealing with probabilities here, not clear lines. Otherwise we just end up playing a blame game: "sure, this had a memory overflow, but it's the package's fault for not enabling protections that would downgrade it to a crash", "no, it's the deployment's fault for not limiting that exploit to just this user's data partition", "no, it's the OS's fault for not implementing detailed security policies for every process", ...
Does availability not matter to you? Great. For others, maybe it does: if you are, say, a medical device, segfaulting or OOMing in an unmanaged way on a config upload is not good. 'Availability' has been a pretty common security concern for maybe 40 years now, from an industry view.
> The viewpoint expressed by Wellnhofer is understandable, though one might argue about the assertion that libxml2 was not of sufficient quality for mainstream use. It was certainly promoted on the project web site as a capable and portable toolkit for the purpose of parsing XML. Open-source proponents spent much of the late 1990s and early 2000s trying to entice companies to trust the quality of projects like libxml2, so it is hard to blame those companies now for believing it was suitable for mainstream use at the time.
I think it's very obvious that the maintainer is sick of this project on every level, but the efforts to trash talk its quality and the contributions of all previous developers don't sit right with me.
This is yet another case where I fully endorse a maintainer's right to reject requests and even step away from their project, but in my opinion it would have been better to just make an announcement about stepping away than to go down the path of trash talking the project on the way out.
(Disclosure: I'm a past collaborator with Nick on other projects. He's a fantastic engineer and a responsible and kind person.)
I think that's seriously over-estimating the quality of software in mainstream browsers and operating systems. Certainly some parts of mainstream OS's and browsers are very well written. Other parts, though...
“Three.”
“Like, the number 3? As in, 1, 2, …?”
“Yes. If you’re expecting me to pick, this will be CVE-3.”
Either it's a) nonsense, in which case nobody should spend any time fixing it (I'm thinking of things like the frontend DDoS CVEs that are common), or b) an actual problem, in which case a compliance person at one of these mega tech companies will tell the engineers it needs to be fixed. If the maintainer refuses to be the person fixing it (a reasonable choice), the mega tech company will eventually just do it.
I suppose the risk is the mega tech company only fixes it for their internal fork.
You owe them nothing. That fact doesn’t mean maintainers or users should be a*holes to each other, it just means that as a user, you should be grateful and you get what you get, unless you want to contribute.
Or, to put it another way: you owe them exactly what they’ve paid for!
Many open source developers feel a sense of responsibility for what they create. They are emotionally invested in it. They may want to be liked or not be disliked.
You’re able to not care about these things. Other people care but haven’t learned how to set boundaries.
It’s important to remember that if you don’t understand what the majority of people are doing, you are the different one. The question should be “Why am I different?” not “Why isn’t everyone else like me?”
“Here’s the solution” comes off far better than, “I don’t understand why you don’t think like me.”
Some open source projects which are well funded and/or motivated to grow are giddy with excitement at the prospect you might file a bug report [1,2]. Other projects will offer $250,000 bounties for top tier security bugs [3].
Other areas of society, like retail and food service, take an exceptionally apologetic, subservient attitude when customers report problems. Oh, sir, I'm terribly sorry your burger had pickles when you asked for no pickles. That must have made you so frustrated! I'll have the kitchen fix it right away, and of course I'll get your table some free desserts.
Some people therefore think doing a good job, as an open source maintainer, means emulating these attitudes. That you ought to be thankful for every bug report, and so very, very sorry to everyone who encounters a crash.
Needless to say, this isn't a sustainable way to run a one-person project, unless you're a masochist.
[1] https://llvm.org/docs/Contributing.html#id5 [2] https://dev.java/contribute/test/ [3] https://bughunters.google.com/about/rules/chrome-friends/574...
I used to work on a kernel debugging tool and had a particularly annoying security researcher bug me about a signed/unsigned integer check that could result in a target kernel panic with a malformed debug packet. Like you couldn't do the same by just writing random stuff at random addresses, since you are literally debugging the kernel with full memory access. Sad.
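For what it's worth, the bug class itself is real, just pointless in a context where the "attacker" already has full memory access. A hypothetical sketch (invented names and structure, not the actual debugger code) of how that kind of signed/unsigned check goes wrong:

```c
/* Hypothetical sketch of the bug class, not the actual debugger code:
   a signed length field sails past a bounds check when negative. */
#include <stdint.h>
#include <string.h>

#define MAX_PAYLOAD 256

struct dbg_packet {
    int32_t len;                    /* attacker-controlled, may be negative */
    uint8_t payload[MAX_PAYLOAD];
};

static uint8_t scratch[MAX_PAYLOAD];

static void handle_packet(const struct dbg_packet *pkt)
{
    /* BUG: a negative len passes this signed comparison... */
    if (pkt->len > MAX_PAYLOAD)
        return;

    /* ...then converts to an enormous size_t, so memcpy reads far past
       the packet and panics the target. Fix: also reject pkt->len < 0. */
    memcpy(scratch, pkt->payload, (size_t)pkt->len);
}

int main(void)
{
    struct dbg_packet pkt = { .len = -1 };  /* malformed packet */
    handle_packet(&pkt);                    /* out-of-bounds read, crash */
    return 0;
}
```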
What I do is I add the following disclaimer to my GitHub issue template: "X is a passion project and issues are triaged based on my personal availability. If you need immediate or ongoing support, then please purchase a support contract through my software company: [link to company webpage]".
https://www.statista.com/chart/25795/active-github-contribut...
"Microsoft is now the leading company for open source contributions on GitHub" (2016)
That's reasonable, being a maintainer is a thankless job.
However, I think there is a duty to step aside when that happens. If nobody can take the maintainer's place, then so be it; it's still better than the alternative. Being burned out but continuing anyway just hurts everyone.
It's absolutely not the security researcher's fault for reporting real, albeit low-severity, bugs. (To be clear though, it's entirely reasonable for maintainers to treat low-severity security bugs as public. The security policy is the maintainer's decision; it's not right to blame researchers for following the policy maintainers set.)
Relevant XKCD: https://xkcd.com/2347/