:(
that or full public disclosure.
that's the point - to put pressure on them to CARE.
While intelligence agencies are an obvious beneficiary, this would also give government leverage over capital
I don't think a headphone jack, which you can get via a super cheap USB-C adaptor, justifies a 1000 Euro paperweight.
[1] https://www.gsmarena.com/size-compare-3d.php3?idPhone1=12380...
I have an older car with an old stereo where the only external input is via jack. It worked perfectly fine with my old phone. When I got a new Samsung, I went through the hassle of trying several "combined USB-C charger and audio jack" adaptors only to eventually find out they can only work in one mode or the other, not both at the same time. I ended up throwing away my old phone holder and spending even more money on one with built-in wireless charging so I could both listen to some damn music and charge my phone at the same time while driving.
I can't tell you how many times I've bought something small that should reasonably do two things at once, but can't. Literal e-waste garbage.
The solution is to use a USB hub with an integrated DAC. I use an older version of this: https://satechi.net/products/mobile-pro-hub-sd
0. They don't work on all models. Not product lines, e.g. not "all Pixel phones" or so, no, reviews mention "works with Pixel 3 but not Pixel 3a". You need to either waste a bunch of resources sending various ones back and forth, or scour listings until you find one where a review mentioned it works with the model you have. It turns out that all the ones I ordered work on the two USB-C phones I have by now (one from work, one privately) but...
1. The quality of the mic conversion is so bad that people cannot understand what I'm saying. It's described as though I'm speaking while holding the phone under water. Plugging the headphones into my work laptop makes it clear that the mic itself is not the problem, nor is it the meeting software, my WiFi, or anything else
2. Loose contacts in most of the converters, if not from the start then after a handful of uses. The headphone cable itself somehow doesn't have that problem, so I don't think that's a me problem (many reviews also mentioned it)
3. You can't charge at the same time. I've tried wireless charging but that makes the device overheat. There are adapter models that will let you also plug in a power cable, but I didn't buy one for some reason. Probably all of them had bad reviews about all of the aforementioned problems and I didn't find a single one that sounded like it was worth a try
4. You need to plug it in at the right time. One of the converters needed to be plugged in before joining the meeting. Another one after. The OS or meeting software (not sure) wouldn't route the audio correctly otherwise
And cheap phones manage to include headphone jacks somehow. It's just a status symbol when manufacturers exclude it from more expensive models; it doesn't seem to serve any purpose, as the Zenfone 10 shows by having it and also being great on all other fronts -- except one.
> a 1000 Euro paperweight
It's actually 700€.
It does everything I want. After searching a few days for what models are small, have a headphone jack, and are capable of running Android 14 or so, I was so happy to find that the Zenfone 10 checked all the boxes. Then I found out why it hadn't initially shown up: Asus was the manufacturer I had previously excluded because you can't root the device. It's not your device: the manufacturer maintains control over what you can and cannot do with it. You can't make full-system backups, for example, because access to your apps' data folders isn't something they allow you. The device was easily worth the 700€ because it sounded like I could finally stop wasting my time choosing which compromise I wanted to make (huge size, no jack, or old chipset were the main options). Finding out there was a dealbreaker after all felt like an ice bath. I just won't buy something where I can't access my own data and make a fricking backup
About the lie: a year ago they repeated multiple times that this would be an option...
See https://www.reddit.com/r/zenfone/comments/1ccy11g/asus_is_wo...
Cisco have gone even further by forgetting about their security announcements page, so any recognition is now long lost to the void.
https://sec.cloudapps.cisco.com/security/center/resources/ci...
I reported a vulnerability in some HR software they owned, but alas I can't even find where it used to live on the internet now.
I'm not sure where they got that from; Asus have been making motherboards and other PC parts since at least the 90s...
https://www.techspot.com/news/95425-years-gigabyte-asus-moth...
https://www.reddit.com/r/ASUS/comments/tg3u2n/removing_bloat...
https://www.reddit.com/r/ASUS/comments/ojsq80/nahimic_servic...
https://cve.mitre.org/data/board/archives/2016-06/msg00006.h...
(my old blog is long gone from tumblr, but I archived it:)
https://gist.github.com/indrora/2ae05811a2625a6c5e69c677db6e...
Good safety/security culture encourages players to not hide their problems. Corporations are greedy bastards. They'll do everything to hide their security mistakes.
You are also making legitimate issues, fixable within a month, available to everyone, which greatly increases their chances of being exploited.
I don't think you can fathom the number of people whose primary device is a phone that hasn't had an Android update in roughly 3 years, and who use it for every digital service they have: banking, texting, doomscrolling, porn, ...
Users, especially those most likely to be exploited, are already vulnerable to so much shit, and even when there's a literal finished fix available, these vendors do shit about it. Only when their bottom line is threatened because even my mom knows "Don't buy anything with ASUS on it, your bank account gets broken into if you do" will we see change.
In fact they would be just as vulnerable to any new responsibly disclosed issues as they would if they were immediately “irresponsibly” disclosed because again, they never update anyway.
I do. I'm an embedded software developer on a team that cares a lot about keeping our software up to date.
> Users, especially those most likely to be exploited, are already vulnerable to so much shit, and even when there's a literal finished fix available, these vendors do shit about it. Only when their bottom line is threatened because even my mom knows "Don't buy anything with ASUS on it, your bank account gets broken into if you do" will we see change.
Yes, individuals are quite exploitable. That's why I really like the EU's new regulations, the Cyber Resilience Act and the new Radio Equipment Directive. When governments enforce reasonable disclosure and fixing timelines, and threaten your company's ability to sell in a market altogether if you don't comply, it works wonders. Companies hate not being able to make money. So all the extra security policies, vulnerability tracking, and secure-by-default languages we have been experimenting with are now the highest priority for us.
EU regulation makes sure that you're not going to be sold a router that's instantly hackable in a year. It will also force chip manufacturers to offer meaningful maintenance windows, like 5-10 years, due to pressure from ODMs. That's why you're seeing all the smartphone manufacturers extend their support timelines; it is not pure market pressure. They didn't give a fuck about it for more than 10 years. When the EU came with a big stick, though...
Spreading word-of-mouth knowledge works up to a point. Having your entire product line banned from entering a market works almost every time.
This is why I despise the Linux CNA for working against the single system that tries to hold vendors accountable. Their behavior is infantile.
The actually responsible thing to do is to disclose immediately, fully, and publicly (and maybe anonymously to protect yourself). Only after the affected company has repeatedly demonstrated that they react properly might they earn the right to a very time-limited heads-up of, say, 5 work days or so.
That irresponsibly delayed limited disclosure is even called "responsible disclosure" is an instance of newspeak.
It's just that there are some companies EVERYONE knows are shitty. ASUS is one of them.
But corporations making big bucks from their software need to be able to fix things quickly. They took money for their software, so it is their responsibility. If they cannot react on a public holiday, tough luck. Just look at their payment terms. Do they want their money within 30 days or 25 work days? Usually it is the former, they don't care about your holidays, so why should anyone care about theirs? Also, the bad guys don't care about their victims' holidays. You are just giving them extra time to exploit. The only valid argument would be that the victims might not be reading the news about your disclosure on a holiday. But since you are again arguing about software used by a lot of companies (as opposed to private users), I don't see a problem there. They also have their guards on duty and their maintenance staff on call for a broken pipe or something.
What's most important is that I'm saying we should reverse the "benefit of the doubt". The vast majority of corporations have shitty security handling. Even the likes of Google talk big with their 90-day window from private, irresponsible disclosure to public disclosure. And even Google regularly fails to fix things within those 90 days. So the default must be immediate, public, and full disclosure. Only when companies have proven their worth by correctly reacting to a number of those can they be given the "benefit of the doubt" and a heads-up.
Because otherwise, when the default is irresponsible private disclosure, they will never have any incentive to get better. Their users will always be in danger unknowingly. The market will not have information to decide whether to continue buying from them. The situation will only get worse.
> The only valid argument would be that the victims might not be reading the news about your disclosure on a holiday. But since you are again arguing about software used by a lot of companies (as opposed to private users), I don't see a problem there.
Let's say MegacorpA is a big Software Vendor that makes some kind of Software other Companies use to manage some really sensitive user data. Even if MegacorpA fixes their stuff on the 25th, 2 hours after they got an e-mail from you, all their clients might not react that fast, and thus a public disclosure could cause massive harm to end users, even if MegacorpA did everything right.
Ultimately, I guess my argument is that there's not a one size fits all solution. But "responsible disclosure" should be reserved for companies acting responsibly.
Because it is not corporations who are reacting on public holidays, but developer human beings.
It is not corporations that are reacting to install patches on a Friday, but us sysadmins who are human beings.
I get that companies sit on vulnerabilities, but isn't fair warning... fair?
If the reason for responsible disclosure is to ensure that no members of the public are harmed as a result of said disclosure, should it not be a conversation between the security researcher and the company?
The security researcher should have an approximate idea of what a fix would involve, and give a reasonable amount of time for one. If the fix ought to be easy, a short time should suffice, and vice versa.
Maybe the mitigation is for the company to take its service down while it works on the problem. Again, a good incentive to avoid that in the first place. Also an incentive to not waste any time after a report comes in, to see and act on it immediately, etc.
At some point, we have to balance customer risk from disclosing immediately with companies sitting on vulnerabilities for months, vulnerabilities that may be actively exploited.
Let's take one of the most disastrous bugs in recent history: Meltdown.
Speculative execution attacks inside the CPU. Mitigating these required (in Paul Turner's words) putting a warehouse of trampolines around an overly energetic 7-year-old.
This, understandably, took a lot of time, both for microcode and OS vendors... and it took even longer to fix it in silicon.
Not everyone is running SaaS that can deploy silently, or runs a patch cadence that can be triggered in minutes.
I work in AAA games and I'm biased: we have to pass special certifications to release patches, and even if your publisher has good relations, waiting for cert by itself (after you have a validated fix) takes 2 weeks.
If the industry practice were a few days to disclosure, just maybe those practices might change, or maybe there would be an (extra paid) option to skip the line for urgent stuff.
What actually should have happened there is a full recall of all affected hardware, with microcode fixes and payments for lost performance in the meantime, until the new hardware arrives.
Meltdown was a disaster, not only because the bugs themselves were bad, but especially because we let Intel and AMD get away scot-free.
Dev time + test time + upload to CDN is often longer than a week.
Still, they did it, because we decided safety is important to us.
How is that in any way the responsibility of independent randos on the internet?
If you truly believe these issues should be fixed, the right answer would be to hold companies accountable for timely security patches, overseen and managed by a government department.
I'm not sure that's a good idea, but expecting random security researchers to somehow hold massive billion-dollar enterprises accountable is silly.
A week is an example and not a definitive value dictated by law, statute, or regulation.
When you report the vulnerability you give the developer a timeline of your plans, and if they can't make the deadline they can come back to you and request more time.
And this is exactly what the parent poster is against - because it is possible to continuously extend this date.
So a gunshy researcher stays anonymous to keep their risk lower. They craft a disclosure with a crypto signature. They wait for the developer to post a public announcement about the disclosure that doesn't expose a ton of detail but does include the signature hash and general guidance about what to do until a fix is released.
The researcher then posts their own anonymous public announcement with however much detail they choose. They might wait 24 hours or 7 days or 7 months. They might make multiple announcements with increasing levels of detail. Each announcement includes the original hash.
Anybody can now make an announcement at any time about the vulnerability. If an announcement is signed by the same key as the original and contains more detail than given by the developer, the public can argue back and forth about who is being more or less responsible.
Now the researcher can negotiate with the developer anonymously and publicly. The researcher can claim a bounty if they ever feel safe enough to publicly prove they are the author of the original report.
Developers who routinely demonstrate responsible disclosure can earn the trust of researchers. Individual researchers get to decide how much they trust and how patient they are willing to be. The public gets to critique after the fact whether they sympathize more with the developer or the researcher. Perhaps a jury can decide which was liable for the level of disclosure they each pursued.
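For the curious, here's what that commit-then-reveal flow could look like; this is only a minimal sketch in Python, assuming an Ed25519 throwaway key and the `cryptography` library, with the names and the exact hashing scheme being illustrative rather than any established standard:

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The researcher generates a throwaway identity key (illustrative flow only).
    key = Ed25519PrivateKey.generate()
    report = b"Full technical details of the vulnerability ..."
    signature = key.sign(report)

    # The developer's public advisory would include only this commitment;
    # it ties the researcher to the report without revealing any detail.
    commitment = hashlib.sha256(report + signature).hexdigest()

    # Later, anyone holding the report and signature can check the commitment,
    # and the same public key links follow-up posts to the original author.
    public_key = key.public_key()
    public_key.verify(signature, report)  # raises InvalidSignature on mismatch
    assert hashlib.sha256(report + signature).hexdigest() == commitment

The researcher keeps the private key; signing each increasingly detailed write-up with it lets the public tie those posts back to the original commitment without ever learning who holds the key.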
> The security researcher should have an approx. idea of how or what to do to fix
Any expectation put on the security researcher beyond "maybe don't cause unnecessary shit storms with zero days" needs to be met with an offer of a fat contract.
One hour is absurd for another reason: what timezone are you in? And what timezone are they in? What country, and is it a holiday there?
You may say "but vulnerability", and yes. 100% no heel dragging.
But not all companies are staffed with 100k devs, and a few days or a week is a balance between letting every script kiddie know and the potential that it may already be exploited in the wild.
If one is going to counter unreasonable stupidity, use reasonable sensibility. One hour is the same as no warning.
You've got it backwards.
The vuln exists, so the users are already at risk; you don't know who else knows about the vuln, besides the people who reported it.
Disclosing as soon as known means your customers can decide for themselves what action they want to take. Maybe they wait for you, maybe they kill the service temporarily, maybe they kill it permanently. That's their choice to make.
Denying your customers information until you've had time to fix the vuln, is really just about taking away their agency in order to protect your company's bottom line, by not letting them know they're at risk until you can say, "but we fixed it already, so you don't need to stop using us to secure yourself, just update!"
Which is again, a problem created by the companies themselves. The way this should work is that the researcher discloses to the company, and the company reaches out to and informs their customers immediately. Then they fix it.
But instead companies refuse to tell their customers when they're at risk, and make it out to be the researchers that are endangering people, when those researchers don't wait on an arbitrary, open-ended future date.
> Increasing the chance of a bad actor actually doing something with a vulnerability seems bad, actually.
Unless you know who already knows what, this is unprovable supposition (it could already be being exploited in the wild), and the argument about whether PoC code is good or bad is well-trodden and covers this question.
You are just making the argument that obscurity is security, and it's not.
Whew, thank god for public disclosure with no prior warning to the people who would've been best equipped to retrieve their knife.
---
This was clearly not the best way to handle the situation.
Sure, you didn't know that the thief was unaware of the knife before your announcement, but he sure as shit was aware afterwards. You not knowing what they know is not a good reason to indiscriminately yell to no one in particular.
I did not make the argument that obscurity is security. The knife being under a trashcan is a risk and should be addressed by management. But that doesn't mean non-obscurity automatically improves security.
> I did not make the argument that obscurity is security... But that doesn't mean non-obscurity automatically improves security.
... egad. Yes, having information doesn't mean people will do the right thing with it, but you're not everyone's mommy/god/guardian. People should have the choice themselves about what actions they want to take, and what's in their own best interests.
And obscuring the information that they need to make that choice, in the name of not making them less secure, is, ipso facto, asserting that the obscuring is keeping them more secure than they otherwise might be.
So yes, you absolutely are arguing for obscurity as security.
I'm arguing that unveiling the obscurity can lead to attacks that wouldn't have happened otherwise, and you are partially to blame for those if they happen (which is true). I am not saying it was "more secure" before the disclosure. Just that, in the world afterwards, you must take responsibility for everyone knowing, including people who did not know before and abuse that knowledge.
Except his old knife he already had with him isn't made for exploiting the flaw in the vest, so it doesn't work. He needs to go home and build a new one, and the people in the mall can go home before he comes back, now that they know their vests are flawed. Otherwise, someone who comes in and is aware of the flaw when the users are not, can stab everyone, and they'd have no clue they were vulnerable.
In real-world terms, the kind of mass-exploitation that people use to fear monger about disclosure already happens everyday, and most people don't notice. The script kid installing a monero miner on your server should not be driving the conversation, it should be the IC spook recording a journalist/ dissident/ etc.
> Just that, in the world afterwards, you must take responsibility for everyone knowing, including people who did not know before and abuse that knowledge.
This is just a generalized argument for censorship of knowledge. Yes, humans can use knowledge to do bad things. No, that does not justify hiding information. No, that does not make librarians/ researchers/ teachers responsible for the actions of those that learn from them.
This seems like an unnecessary constraint to bolster your point instead of actually addressing what the other person is saying.
In this analogy, why can’t the old knife exploit the flaw? If the problem with the vest allows a sharp implement through the material when inserted at the correct angle or in the correct place, any sharp object should do.
To bring this back to the real world, this is all unfolding in virtual/digital spaces. The attacker doesn’t need to physically go anywhere, nor can potential victims easily leave the store in many cases. And the attacker often needs very little time to start causing harm thanks to the landscape of tools available today.
You are shopping at a store along with some other customers. When entering the store, you notice a gun laying on the ground by the door. You keep coming back every week, pointing it out, asking if that's intended or not.
They continue to ignore you, or explain how it's intended; a good thing even!
Eventually someone with malicious intent also sees the gun, picks it up, shoots a fellow customer, puts it back where it was, and walks off.
By the next day, miraculously, management will have found the time and resources to remove the gun.
Why do the companies that make the software hate your mom so much they push out release after release of shit? We're all fine with these developers crapping on the floor as long as we give them 30 days to clean up their steaming pile.
If instead every release was capable of instantly ruining someone's life, maybe we'd be more capable of releasing secure software and judging what software is secure.
If that was common practice, bad actors would make sure to be a registered customer of all interesting targets, so that they get informed early about vulnerabilities before there is a fix. And it would create a black market for that information.
When someone gets the information “Asus BIOS has an RCE vulnerability related to driver installation”, they’ll be able to figure out the details quickly with high probability, like OP did.
The real threat comes from the vast number of opportunistic attackers who lack the skills to discover vulnerabilities themselves but are perfectly capable of weaponizing public disclosures and proof-of-concepts. These bottom-feeders represent a much larger attack surface that only materializes after public disclosure.
Responsible disclosure gives vendors time to patch before this larger wave of attackers gets access to the vulnerability information. It's not about protecting company reputation - it's about minimizing the window of mass exploitation.
Timing the disclosure to match the fix release is actually the most practical approach for everyone involved. It eliminates the difficult choice customers would otherwise face - either disrupt their service entirely or knowingly remain vulnerable.
Most organizations simply can't afford the downtime from abruptly cutting off a service, nor can they accept the risk of continuing with a known vulnerability. Providing the fix simultaneously with disclosure allows for orderly patch deployment without service interruption.
This coordinated approach minimizes disruption while still addressing the security issue - a balanced solution that protects both the security and continuity needs of end users.
If they do nothing after a reasonable amount of time, escalate to regulators or change bank. Then once they release information that some processes are changed: “thanks to XXX working at YYY for helping us during it”. You win, they win, clients win, everybody wins.
Unwanted public disclosure directly leads to public exploitation, there is nothing good at all about it.
For example, there is an RCE in Discord (statistically all but certain due to the rendering engine, just not public yet), and it is going to be exploited only if someone shares the technical details.
If you don’t disclose it, it’s not like someone else will discover it tomorrow. It’s possible, but not more likely than it was yesterday. If you disclose it, you make sure that everybody with malicious intent knows about it.
Then customers are aware, Discord is pressured to act/shamed, and then you proceed with your private disclosure with a window.
Yes, limited disclosure will make people start hunting for the vuln, but it's still more than enough time for me to revoke an API key, lock down an internet-facing service, turn off my Alexa (no, I don't/won't own one), uninstall the app, etc. And it's better than me not knowing, and someone is intruding into my system in the meantime.
I can't count how many people did incorrect or unnecessary fixes for log4shell, even months after it was disclosed.
Or you report immediately to the press, press reports, police secures bank building, investigates sloppy practices, customers win, you are a hero, inept banksters and robbers go to jail.
> It eliminates the difficult choice customers would otherwise face - either disrupt their service entirely or knowingly remain vulnerable.
You decided they are better off not having to make that choice, so you make it for them whether they like it or not.
In fact, you made the worst choice for them, because you chose that they'd remain unknowingly vulnerable, so they can't even put in temporary mitigations or extra monitoring, or know to be on the lookout for anything strange.
> Most organizations simply can't afford the downtime from abruptly cutting off a service, nor can they accept the risk of continuing with a known vulnerability.
Now this is an interesting part, because the first half is true depending on the service, but bad (that's a BCDR or internet outage issue waiting to happen), and the second half is just wrong (show me a company that doesn't know and accept that they have past-SLA vulns unpatched, criticals included, and I'll show you a company that's lying either to themselves or their customers).
> This coordinated approach minimizes disruption while still addressing the security issue - a balanced solution that protects both the security and continuity needs of end users.
This is not a balanced approach, this is a lowest-common-denominator approach that favors service providers over service users. You don't know if it protects someone's security needs, because people have different security needs: a journalist being targeted by a state actor can have the same iphone as someone's retired grandma, or infotainment system, or home assistant, etc.
I've managed bug bounty and unpaid disclosure programs, professionally, and I know firsthand that it's the company's interests that responsible disclosure serves, first and foremost.
Instead of just one bad actor using that vulnerability on a few select targets, your proposal will have a few tens of thousands of bots performing drive-by attacks on millions of victims.
With current practice, you can be as sloppy and reckless as you want, and when you create vulnerabilities because of that, you somehow almost push the "responsibility" onto the person who discovers it, and you aren't discouraged from recklessness.
Personally, I think we need to keep the good part of responsible disclosure, but also phase in real penalties for the parties responsible for creating vulnerabilities that are exploited.
(A separate matter is the responsibility of parties that exploit the vulnerabilities. Some of those may warrant stronger criminal-judicial or military responses than they appear to receive.)
Ideal is a societal culture of responsibility, but in the US in some ways we've been conditioning people to be antisocial for decades, including by elevating some of the most greedy and arrogant to role models.
I have a problem with this framing. Sure, some vulnerabilities are the result of recklessness, and there’s clearly a problem to be solved when it comes to companies shipping obviously shoddy code.
But many vulnerabilities happen despite great care being taken to ship quality code. It is unfortunately the nature of the beast. A sufficiently complex system will result in vulnerabilities even a careful person could not have predicted.
To me, the issue is that software now runs the world, despite these inherent limitations of human developers and the process of software development. It’s deployed in ever more critical situations, despite the industry not having well defined and enforceable standards like you’d find in some engineering disciplines.
What you’re describing is a scenario that would force developers to just stop making software, on top of putting significantly more people at risk.
I still believe the industry has a problem that needs to be solved, and it needs a broad culture shift in the dev community, but disagree that shining a bright light on every hole such that it causes massive harm to “make devs accountable” is a good or even reasonable solution.
At this point, the software development field is about operating within the system decided by those others, with the goal of personally getting money.
After you've made the CEO and board accountable, I think dev culture will adapt almost immediately.
Beware of attempts to push engineering licensing or certifications, etc. as a solution here. Based on everything we've seen in the field in recent decades, that will just be used at the corporate level as a compliance letter-but-not-spirit tool to evade responsibility (as well as a moat to upstart competitors), and a vendor market opportunity for incompetent leeches.
First you make the CEO and board accountable, then let the dev culture change, and then, once you have a culture of people taking responsibility, you'll have the foundation to add licensing (designed in good faith) as an extra check, if that looks worthwhile.
Good. I work in code security/SBOM, the amount of shit software from entities that should otherwise be creating secure software should worry you.
Businesses care very little about security and far more about pushing the new feature fast. And why not, there is no real penalty for it.
I’m more open to harsher limits on commercial software, especially in certain categories. But underneath all of this we’re discussing an ecosystem and a culture which can’t be cleanly separated.
Some of the binary thinking I see in this thread would be deeply damaging to parts of that ecosystem with potentially major unintended consequence. Open source software is critically important for human rights/freedom. Taken at face value, many of the comments here directly threaten that freedom.
I’m not assuming that’s your stance, but I’m curious how you see the open source aspect of this considering how significant its role is - especially in the security space.
OpenSSL, for example. Any security flaw in this package has worldwide effects, but we would be lesser without it.
Another example is the xz software that was attacked and then pulled into distributions. We were just lucky it was caught relatively early.
To be clear, I have far less sympathy for big software shops that pump out negligently bad code and then have to be prodded to fix it, but they’re not the only players involved.
I think as a field we're actually reasonably good at quantifying most of these risks and applying practices to reduce the risk. Once in a blue moon you do have "didn't see that coming" cases but those cause a very minor part of the damage that people suffer because of sw vulnerabilities. Most harm is caused by classes of vulnerabilities that are boringly pedestrian.
Maybe it's time we get professional standards if this is how we are going to behave?
Why is a cracked bridge dangerous? Because anyone traveling over it or under it is at risk of being hurt if the bridge collapses. Warning people that it is cracking does not increase the likelihood of a collapse.
Why is a software vulnerability dangerous? Because anyone who knows about it and has nefarious intent can now use it as a weapon against those who are using the vulnerable software, and the world is full of malicious actors actively seeking new avenues to carry out attacks.
And there are quite a few people who would exploit the knowledge of an unlocked door if given the chance.
There’s a very clear difference in the implications between these scenarios.
A vulnerable piece of software is always dangerous.
There are large numbers of state funded exploit groups and otherwise blackhat organizations that find and store these vulnerabilities waiting for the right opportunity, say economic warfare.
Much like building safe bridges from the start we need the same ideology in software. The 'we can always patch it later' is eventually going to screw us over hard.
But we also have to deal with the reality of the situation in front of us.
I will maintain that the differences between the implications of revealing a crack in a bridge vs. prematurely revealing a vulnerability to literally the entire world are stark. I find it pretty problematic to continue comparing them and a rather poor analogy.
> There are large numbers of state funded exploit groups and otherwise blackhat organizations that find and store these vulnerabilities
This underscores my point. What you’ve been describing is a scenario in which those organizations are handed new ammunition for free (assuming they don’t already have the vuln in their catalog).
And this is mostly BS too. People don't write bug-free software; they write features.
Other industries had to license professional engineers to keep this kind of crap from being a regular issue.
If all our software were as simple as a bridge, then we could have that. A bridge is 5 sheets of plans, 10 pages of foundation checks, 30 pages of calculations, 100 pages of material specs. You can read all of those in a day and check the calculations in a week, and the next bridge will be almost the same.
Now tell me about any software where the spec is that short and simple. /bin/cat? /bin/true? Certainly not the GNU versions of those.
Software is different because we don't build 1000 almost-identical bridges with low complexity. We always build something new and bespoke, with extremely high complexity compared to any kind of building or infrastructure. Reproduction is automatic, so there will never be routine. Totally different kind of job, where a licensed professional will not help at all.
With what I do I work with a lot of larger companies and get to see the crap they push out with no architectural design and no initial security posture. I see apps with thousands of packages, including things like typosquats. I see the quality of the security teams which are contractors following checklists with no idea what they mean.
Saying that actual professions would make no difference sounds insane to me. Again, to me, it sounds like every other industry saying "self-regulation is fine, we're special, we'll manage ourselves".
Licensed professionals checked a dam built by licensed professionals. Dam broke, killed people. Everyone claims to be innocent and the other party didn't read the right reports or didn't report the right problems: https://www.ecchr.eu/fileadmin/Fallbeschreibungen/Case_Repor... It is all just another method of shifting blame.
What really helps more than prescriptive regulation is liability. As soon as there is a strict liability for software companies, things will get better. What could also help is mandatory insurance for software producers. Then the insurance companies will either charge them big bucks or demand proof of safety and security.
Maybe this is part of the problem?
It's only paradoxical if you've never considered the inherent conflicts present in everything before.
The "responsible" in "responsible disclosure" relates to the researchers responsibility to the producer, not the companies responsibility to their customers. The philosophical implication is that the product does what it was designed to do, now you (the security researcher) is making it do something you don't think it should do, and so you should be responsible for how you get that out there. Otherwise you are damaging me, the corporation, and that's just irresponsible.
As software guys we probably consider security issues a design problem. The software has a defect, and it should be fixed. A breakdown in the responsibility of the corporation to their customer. "Responsible disclosure" considers it external to the software. My customers are perfectly happy, you have decided to tell them that they shouldn't be. You've made a product that destroys my product, you need to make sure you don't destroy my product before you release it.
The security researcher is not primarily responsible to the public, they are responsible to the corporation.
It's not a paradox, it's just a simple inversion of responsibility.
Unless the researcher works for the corporation on an in-house security team, what’s your reasoning for this?
Why are they more responsible to the corporation they don't work for than to the people they're protecting (depending on the personal motivations of the individual security researcher, I guess)?
(and in the world of FOSS you might have "maintainer-coordinated" too)
Because you need to take a look at the fuller picture. If every vuln was published immediately the entire industry would need to be designed differently. We wouldn't push features at a hundred miles per hour but instead have pipelines more optimized for security and correctness.
There is almost no downside currently for me to write insecure shit, someone else will debug it for me and I'll have months to fix it.
All manufacturers must pay an annual fee to an insurance scheme which covers the case of insolvency of manufacturers.
This is a prime example where a hyperbole completely obliterates the point one is trying to make.
This is a prime example of someone not getting the joke everyone else got. [0]
[0] https://www.washingtonpost.com/wp-srv/national/longterm/unab...
- protects the privacy of folks submitting
- vets security vulns. Everything they disclose is exploitable.
- publishes disclosures publicly at a fixed cadence.
- allows companies to pay to subscribe to an "early feed" of disclosures which impact them. This money is used to reward those submitting disclosures, pay the bills, and take some profit.
A bug bounty marketplace, if you will. That is slightly hostile to corporations. Would that be legal, or extortion?
The quality is seriously lacking. They have dismissed many valid findings.
From what I understood, the service is also (very) expensive. Wild.
I've had three such bad experiences with unskilled H1 triagers that the next vuln I find in a company that uses H1 will go instantly public. I'm never going to spend that much effort again only to get a triager who can't be bothered to actually triage.
I think there is serious potential for this.
Most folks don't put up with faulty products unless by choice, like at those 1 euro/dollar shops, so why should software get a pass?
Normal people don't care about vulnerabilities. They use phones that haven't received updates in three years to do their finances. If you spam the news with CVEs, people will just get tired of hearing about how every company sucks and become apathetic once there's a real threat.
The EU is working on a different solution. Under new cybersecurity regulations, stores are not permitted to sell products with known vulnerabilities. That means if ASUS keeps fucking up, their motherboards become dead stock and stores won't want to sell their hardware anymore. That's not just computer hardware, but also smart fridges and smart washing machines. Discover a vulnerability in your dishwasher and you may end up costing the dishwasher industry millions in unusable stock if the vendors haven't bothered to add a way to update the firmware.
What are the specifics on that? Does the vulnerability need to be public, or is it enough if just the vendor knows about it? Does everyone need to stop selling it right away when a new vulnerability is discovered, or do they get some time to patch it? I'm pretty sure software like Windows almost certainly has some unfixed vulnerabilities that Microsoft knows about and is in the process of fixing every single day of the year. Currently, even if they do have a fix, they end up postponing it until the next Patch Tuesday.
And what even is "vulnerability" in this context? Remote RCE? DRM bypass?
> instead of them saying it allows for arbitrary/remote code execution they say it “may allow untrusted sources to affect system behaviour”.
Sounds like Asus did in fact deny the bug.
Do stores have to patch known vulnerabilities before releasing the product to customers or can customers install the patch?
Invidious https://inv.nadeko.net/watch?v=cbGfc-JBxlY
YouTube https://youtube.com/watch?v=cbGfc-JBxlY
"ASUS emailed us last week (...) and asked if they could fly out to our office this week to meet with us about the issues and speak "openly." We told them we'd be down for it but that we'd have to record the conversation. They did say they wanted to speak openly, after all. They haven't replied to us for 5 days. So... ASUS had a chance to correct this. We were holding the video to afford that opportunity. But as soon as we said "sure, but we're filming it because we want a record of what's promised," we get silence."
Edit: formatting
Except my view is consistent with reality, though: they're chasing profits and getting away with it, so why go on the record and look bad when they can ignore it and spend that time on marketing?
If a person comes to talk business with a camera attached to his head, I know he does not come in good faith.
Seems fair to take a camera.
Asking for a friend who is thinking about building a new PC soon.
That was the day I learned you literally cannot develop a computer motherboard without Intel's permission. Turns out Intel is no different than the likes of Nintendo.
Chinese "tinker" has been making countless "x99" motherboard that reuse consumer chipset like h81 or b85.
I don't think Intel approve that
That said, their X670 / B650 boards have the same setting this article is about, and it could be just as broken on the software side as Asus's is, but I wouldn't know because I don't use Windows, so I disabled it.
This only remains true insofar as no one directly registered a driverhub subdomain. Couldn't anyone with a wildcard have exploited this, silent to certificate transparency?
I'm pointing out that a wildcard at the apex of your domain (which is what basically everyone means when saying 'a wildcard'), would not work for this attack. Instead if you were to perform the attack using a wildcard certificate, it would need to be issued for '*.asus.com.example.com.' - which would certainly be obvious in certificate transparency logs.
RFC6125 limits wildcards to a left-most label (6.4.3. paragraph 2): https://www.rfc-editor.org/rfc/rfc6125.html#section-6.4.3
I don't know of any CA that allows for wildcard characters within the label, other than when the whole label is a wildcard, but it is possible under that RFC.
The CA/Browser Forum's baseline requirements dictates how any publicly trusted CA should operate, and it defines a wildcard certificate in section 1.6.1 (page 26) here https://cabforum.org/working-groups/server/baseline-requirem...
> Wildcard Certificate: A Certificate containing at least one Wildcard Domain Name in the Subject Alternative Names in the Certificate.
> Wildcard Domain Name: A string starting with “*.” (U+002A ASTERISK, U+002E FULL STOP) immediately followed by a Fully‐Qualified Domain Name.
Now of course with your own internal CA, you have completely free rein to issue certificates, as long as they comply with the technical requirements of your software (i.e. webserver and client).
Also note that a cert issued as '*.*.example.com.' would only match 'hi.com.example.com.', not an additional three labels.
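To make the left-most-label rule concrete, here's a rough Python sketch of RFC 6125-style matching (real certificate validation involves much more, e.g. SAN handling and CA policy; this only illustrates why '*.asus.com' can never cover 'driverhub.asus.com.example.com'):

    def wildcard_matches(pattern: str, hostname: str) -> bool:
        # A wildcard may only be the entire left-most label and matches
        # exactly one label; it never spans multiple labels.
        p_labels = pattern.rstrip(".").lower().split(".")
        h_labels = hostname.rstrip(".").lower().split(".")
        if len(p_labels) != len(h_labels):
            return False
        if p_labels[0] != "*" and p_labels[0] != h_labels[0]:
            return False
        return p_labels[1:] == h_labels[1:]

    assert wildcard_matches("*.example.com", "hi.example.com")
    assert not wildcard_matches("*.example.com", "a.b.example.com")
    assert not wildcard_matches("*.asus.com", "driverhub.asus.com.example.com")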
- Would a self-signed cert work? Those aren’t in transparency logs.
- Does it have to be HTTPS?
All this, for literally nought
A small startup with a market cap of only 15 B. What is more than understandable is that you couldn't give a shit, not only about your crappy products but also about the researcher who did HUGE work for your customers.
I truly feel bad for researchers doing this kind of work only to get them dismissed/trashed like this. So unfair.
The only thing that ought to be done is to not purchase ASUS products.
On top of it all, the software they offer is slow and buggy on brand-new hardware.
But most of those issues also exist with AMD's or Gigabyte's drivers; most hardware vendors seem trashy like that. Like, if you install Samsung Magician (for their SSDs), it even asks you if you're in the EEA (because of the privacy laws, I suspect). It's absolutely crazy.
Microsoft should make it *significantly* harder to ship drivers outside of Windows Update and they should forbid any telemetry/analytics without consent.
I find Linux's hardware support model significantly nicer; although some rarer things do not work OOB, there's none of this bullshit.
My laptop has a fan and keyboard LED application that requires kernel access and takes over a minute to display a window on screen. Not to mention being Windows only.
Words can barely describe just how aggravating that thing was. One of the best things I've ever done is reverse engineer that piece of crap and create a Linux free software replacement. Mine works instantly, I just feed it a configuration file. I intend to do this for every piece of hardware I buy from now on.
In that sense fwupd has been an amazing development, as there's now a chance that you can update the firmware of your hardware on Linux and don't have to boot Windows.
USB stuff was really nice to work with. Wireshark made it really easy to intercept the control commands. For example, to configure my keyboard's RGB LEDs I need to send 0xCC01llrrggbb7f over the USB control channel; the ll identifies the LED and rrggbb sets the color. Given this sort of data it's a simple matter to make a program to send it.
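As a rough illustration of replaying such a command, here's a hedged pyusb sketch; the CC 01 <led> <r> <g> <b> 7F payload layout is from the comment above, but the vendor/product IDs, request type/number, and wValue are made-up placeholders that would differ per device:

    import usb.core

    def set_led(dev, led_id: int, r: int, g: int, b: int) -> None:
        # Payload layout from above: CC 01 <led> <r> <g> <b> 7F.
        payload = bytes([0xCC, 0x01, led_id, r, g, b, 0x7F])
        # 0x21 = host-to-device | class | interface; 0x09 = HID SET_REPORT.
        # These, the wValue (0x0300), and the IDs below are placeholder guesses.
        dev.ctrl_transfer(0x21, 0x09, 0x0300, 0, payload)

    dev = usb.core.find(idVendor=0x0B05, idProduct=0x1234)  # placeholder IDs
    if dev is not None:
        set_led(dev, led_id=0x01, r=0xFF, g=0x00, b=0x80)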
Reverse engineering ACPI stuff seems to be more involved. I wasn't able to intercept communications on the Windows side. On Linux I managed to dump DSDT tables and decompile WMI methods but that just gave me stub code. If there's anything in there it must be somehow hidden. I'm hoping someone more experienced will provide some pointers in this thread.
https://www.csoonline.com/article/573965/how-to-update-your-...
No. No no no no no no no NO! That just centralises even more control to MS.
What we really need is for more people to develop open-source Windows drivers for existing hardware, or encourage the use of Linux.
I feel sorry for this guy, having deviated from the original issue. Though it would only have taken a couple of seconds to note the WLAN chipset from the specs or OEM packaging and then head to station-drivers.
This is also the very reason I dislike Asus: I don't want a BIOS flag/switch that natively interacts with a component at the OS layer.
The practice of "injecting pre-installed software through the BIOS" is such a deal-breaker. Unfortunately this seems to be widely adopted by the major players in the motherboard market.
Reminder that WAFs are an anti-pattern: https://thedailywtf.com/articles/Injection_Rejection
When ASUS acquired the NUC business from Intel, they kept BIOS updates going but at some point a “MyASUS” setup app got added to the UEFI like with their other motherboards. Thankfully, it also had an option to disable and IIRC it defaults to disabled, at least if you updated the BIOS from an Intel NUC version.
Reminds me of the time I reported SQL disclosure vuln to Vivaldi and their WAF banned my account for - wait for it - 'SQL injection attempt' so hard their admin was unable to unlock it :)