> This kind of advice is well-intentioned but misleading. It consumes the limited time people have to protect themselves and diverts attention from actions that truly reduce the likelihood and impact of real compromises.
I dislike any methodology that boils down to talking down to people, whatever the declared reasoning. People are capable, and should be helped to make decisions based on all available information.
> Regularly change passwords: Frequent password changes were once common advice, but there is no evidence it reduces crime, and it often leads to weaker passwords and reuse across accounts.
When I worked as a security professional, the breaches were nearly always from someone's password getting leaked in a separate public breach. If those individuals had changed that password, the in-house breach would have been avoided.
> Use a password manager
Sage advice.
> When I worked as a security professional, the breaches were nearly always from someone's password getting leaked in a separate public breach. If those individuals had changed that password, the in-house breach would have been avoided.
You completely missed the point. The good advice is to not reuse passwords. That alone would have stopped the in-house breach.
To relay a quote, with the source not being very important: "I'm not going to waste a dime on cybersecurity when my officers need bullets and armor." People can be intelligent and capable and have minimal (if you're lucky) bandwidth or tolerance for cybersecurity advice. It's not the crisis they see every day. The advice given to unwilling listeners has to be focused and prioritized.
And... password leaks, and therefore rotations, aren't an issue if people are using a strong main password for their manager. Then a leak doesn't transfer to another account, and the manager will loudly tell them when a password is found in breach data -- which lines up with NIST's modern advice of dropping password complexity and rotation requirements, since they've found them to add minimal (at best) security.
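For the curious, the breach-data check those managers do can be sketched in a few lines against the Have I Been Pwned "range" API (k-anonymity: only the first five hex characters of the SHA-1 hash ever leave your machine). A rough illustration, not any particular manager's implementation:

```python
import hashlib
import urllib.request

def password_seen_in_breaches(password: str) -> int:
    """Return how many times a password appears in known breach data (0 = not found).

    Only the first 5 hex chars of the SHA-1 hash are sent to the API;
    the full hash and the password itself never leave this machine.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    with urllib.request.urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode()
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # Example weak password; expect a non-zero count.
    print(password_seen_in_breaches("ImSecure123!"))
```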
And I would definitely not agree with everything in this letter.
Personally, I think the worst part about it is treating a low probability as something that's not going to happen. That is, especially in IT security, one of the worst practices.
To take one point as an example: the "never scan public QR codes" advice.
Apart from the fact that there have been enough exploits in the past (the USSD "remote wipe", the iOS 11 camera notification spoofing (2018), the ZBar buffer overflow (CVE-2023-40889), etc.), even without a 0-day exploit QR codes can pose a relevant risk.
As a simple example: not too long ago I was in a restaurant that only had its menu in the form of a QR code to scan. Behind the QR code was a link to a PDF of the menu. This PDF was hosted on a free web service that lets anyone upload a file and get a QR code linking to it. There was no account-based control over which PDF the code pointed to; it could be replaced at any time, opening up a whole different world of possible exploitation via whatever file gets returned.
Sure, you could argue "this is not a QR code vulnerability, just bad practice by the restaurant owner" - but that's the point. For the user there is literally no difference between the QR code itself carrying a malicious payload and the URL behind it doing so.
While we in the tech world might understand the difference, for John and Jane Doe it is the same thing. And for them it's still a possible danger.
Apart from that, a coworker recently sent me a "hacker" video on YouTube showing a guy in an interview talking about the O.MG cable. Sure, you might say this is also an absolutely non-standard attack vector, yet it still exists, and people should be aware that it does.
My point is: by telling people that all those attack vectors are basically "urban myths", you just desensitize an already under-informed public to the dangers the "digital" world poses to them. From my personal view, we should rather educate more than tell them "don't worry, it will be fine".
When was the last time you saw an un-targeted mass 0-day exploit campaign? There haven't been any for modern browsers. If we're talking about 0-days, you likely know there have been zero-click iMessage/WhatsApp vulnerabilities in the past. There's no protecting against those, but you're not here warning users to disable iMessage and WhatsApp. What's more realistic is making sure users keep their software updated, and trusting that nobody is going to waste a 0-day worth a million dollars on you via a QR code or a link.
you really don't know what they did.
In the time of containerized OSs and virtualized-everything it's silly to guess.
I'll try to explain based on your example of "any link".
If you type amazon.com, you trust that amazon.com will come back and not malware. With a QR code the target URL isn't as obvious, so the user should be aware that even if the text below the code says "hackernews - the best news in the IT world", the code could still link to "https://news.xn--combinator-xwi.com" (edit: because ycombinator is a nice website it auto-resolves the Unicode character here - a bad example, but I don't have the time to recraft it, and I guess you know the Unicode link/URL tricks, so I'll just leave it the way I pasted it). Did you spot the difference? That's not a regular "y", and it could land you on a phishing page. So yes, even URLs you review on a QR code can still be dangerous if you didn't type them yourself. And for a lot of users it probably wouldn't even take that much to trick them. It's not like the average Jane/John Doe is very good at URL verification - otherwise a lot of scammers would go bankrupt.
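For illustration, this is the kind of check a cautious scanner app (or script) could run on a decoded QR URL before opening it: flag any punycode (xn--) labels and show the Unicode they decode to, so a lookalike hostname stands out. A minimal Python sketch using a made-up hostname, not what any real code pointed at:

```python
from urllib.parse import urlsplit

def inspect_host(url: str) -> None:
    """Print the hostname of a URL and, if it contains punycode (xn--) labels,
    the Unicode form it decodes to, so lookalike domains are easier to spot."""
    host = urlsplit(url).hostname or ""
    print("hostname:", host)
    if any(label.startswith("xn--") for label in host.split(".")):
        try:
            # The built-in 'idna' codec turns ACE labels back into Unicode.
            decoded = host.encode("ascii").decode("idna")
            print("decodes to:", decoded, "<- compare against what you expected")
        except UnicodeError:
            print("contains an undecodable punycode label - be suspicious")

# Hypothetical example: 'xn--bcher-kva' is the classic punycode for 'bücher'.
inspect_host("https://news.xn--bcher-kva.example/item")
```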
Therefore, I hope you understand you don't need a 0-day. I also stated that in my answer, but you seem so focused on the fact that I listed some 0-days (to disprove the initial article) that you kind of lost my point.
Also, sure, everyone should keep their device updated; no one said otherwise. Apart from that, no, I wouldn't recommend people use WhatsApp, but that wasn't the point, and I'm not actually sure why you mention it. But there, I said it: I wouldn't recommend it, if that helps ¯\_(ツ)_/¯
Edit: not to forget, I myself know that clicking on unknown links poses a certain risk, and I have several measures in place to reduce it.
If you are an online service provider, sure. Low probability means it's going to happen, especially as you scale with users.
For a small business IT team? You can't keep a clean sheet. The strategy is to reduce the probability of an incident and to reduce its damage, but it will never be zero, if only because you have non-technical users that need to do actual work.
p(incident) is just yet another variable you need to do tradeoff engineering on, and obsessing over reducing it to 0 will probably compromise other tradeoffs like ease of use of the system.
It's especially ironic when, in an attempt to get a specific variable to 0 (which is impossible for most variables anyway), you end up compromising that very variable. If you force users to use lots of passwords and password managers and MFA, and limit their capabilities, they end up circumventing your security systems and advice, so they introduce an issue themselves (but of course it will be the user's fault, not the CISO's; their job is secure).
You cannot reduce the risks to 0 - that's a matter of fact and I would never claim you could.
I tend to say it's a question of cost/gain. If the cost the attacker has to pay (work/investment/...) is higher than the possible gain (data/funds/...), you are on a good track for your company's security.
By the way, I'm not working for an ISP, rather at what you would see as a smaller-sized IT company. Therefore I also have certain points where, in theory, I could go a lot harder on security, but I don't because it's not feasible.
Another thing I find important in that regard is trying to educate your users; at least we work on that. We don't just enforce hard rules on them; we also try to make sure they understand why we have these rules and mechanisms in place - not to annoy them, but to protect them.
Finally, my favorite point of yours: "force users to use lots of passwords".
Well, our business has to undergo regular audits by partners which are, let's say, rather meticulous when it comes to the security of our systems. These enforce certain things on us that we then have to enforce on our users, even if we don't think they're good.
So yes, now you can blame me for enforcing something on our users, but keep in mind it was also enforced on me. I even discussed certain points with these partners, trying to explain why some measures sound cool on paper but are just impractical in reality - not that anyone cared. So we implement it.
Therefore, the next time you argue that some security measure is just a CISO who doesn't really care about their users, maybe keep in mind that some things are forced upon us even though we don't like or support them.
I can see why you would take "online service provider" to mean an ISP, but I meant it to include SaaS and apps like WhatsApp, Google, etc. as well.
> Therefore, the next time you argue that some security measure is just a CISO who doesn't really care about their users
Oh, I didn't mean to imply that. There's no doubt that IT admins who over-implement security policies care in general; the critique is not about motives but about efficiency. I don't argue that they don't care, or even that they're wildly inefficient, just that they're suboptimal on this specific point by going overboard.
It's somewhere between impractical and impossible to evaluate a URL and know anything about its "safety". So if you can't make your web browser impervious enough, to your satisfaction, to tolerate basically any crap a server may send back, then your only answer is a total walled garden.
While I pointed out that I think the claim that public QR codes are always safe and cannot pose any danger is wrong, I also didn't say you should wall yourself in and act like everything is f0rk3d.
As with everything in life, you should evaluate which risks are worth taking and which are not. Scanning a QR code in a museum that links to an audio track describing the exhibit, scanning a QR code in a restaurant for a menu, scanning a QR code from a sticker on a traffic light.
These are three completely different scenarios that can be weighted differently and therefore can't be answered with a single "yep, good/bad" for every situation. My initial point regarding the article was that I don't think it's right to state that scanning publicly placed QR codes is always safe. People should not just NEVER scan a public QR code; they should understand the possible risks, learn how to evaluate which risks are worth taking, and learn what things to look for. My point is to make the public more informed.
> The true risk is social engineering scams...
Exactly. My grandma is very susceptible to phishing and social engineering, I don't want her scanning random QR codes that would lead to almost identical service to the one she would think she is on and end up with identity theft or the likes.
> Regularly change passwords: Frequent password changes were once common advice, but there is no evidence it reduces crime, and it often leads to weaker passwords and reuse across accounts.
Database leaks happen all the time.
The point is to use unique passwords. If there is a leak, hopefully it is detected and then it is appropriate to change the password.
What you are judging then is a whole set of policies, which is a bit too controlling. You will most often not have absolute control over the users' policy set; all you can do is suggest policies, which may or may not be adopted, and you can't rely on their strict adoption.
A similar case is the empirical efficacy of birth control: in practice, the effectiveness of abstinence-based methods is lower than that of condoms. Theoretically abstinence-based birth control would be better, but who cares what the rates are in theory? The actual success rates are what matter.
Plus, if your password gets stolen, there's a good chance most of the damage has already been done by the time you change the password based on a schedule, so any security benefit is only for preventing long-term access by account hijackers.
> We call on software manufacturers to take responsibility for building software that is secure by design and secure by default—engineered to be safe before it ever reaches users—and to publish clear roadmaps showing how they will achieve that goal.
Also, any password manager that's "cloud based" is potentially a security hole. Yeah, they say the server is secure. Right.
You think of someone stealing your password vault and cracking AES? The vault is E2EE.
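In other words: the key is derived from your master password on your device, and only ciphertext ever reaches the server. A minimal sketch of that pattern (illustrative only, not any vendor's actual scheme; real products tune KDF parameters, use Argon2, add per-item keys, and so on):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt_vault(master_password: str, vault_json: bytes) -> dict:
    """Derive a key from the master password and encrypt the vault locally.

    Only the ciphertext, salt, and nonce would ever be uploaded; the key and
    the master password stay on the client.
    """
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    key = kdf.derive(master_password.encode())
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, vault_json, None)
    return {"salt": salt, "nonce": nonce, "ciphertext": ciphertext}

blob = encrypt_vault("correct horse battery staple", b'{"example.com": "s3cr3t"}')
# The server only ever stores blob["ciphertext"] plus salt/nonce - never the key.
```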
Realistically, most users benefit from using a reputable cloud-based password manager, and should focus on securing it with a strong password and MFA. You should also change your passwords if your password manager is breached.
The open letter tries to steer us towards reputable guides, linking to this one by EFF: https://ssd.eff.org/module/choosing-the-password-manager-tha...
It's not reasonable to assume their server is "secure" not just from evil-hakzors and script kiddies, but also from government agencies with things like Technical Capability Notices and secret FISA warrants and NSLs with gag orders (or whatever their jurisdictional equivalents are), and also from threats like offensive cybersecurity firms with clients like disgruntled royalty in nepotistic, monarchic nation states who send bonesaw murder teams after dissident journalists.
I (mostly) trust AES (assuming it's properly implemented, and I exclude the NSA from that, and the equivalent agencies in at least a handful of other major nation states).
I have a lot less trust in the owners and executives at my password vault vendor, their cloud hosting company, or their software supply chain. If I were them, I'm pretty sure I wouldn't be able to stick up for my users the way Ladar Levison and Lavabit did. There's no doubt that the right federal agency could apply enough pressure on me and my family/friends to make me give up all my users' unencrypted vaults. Sorry, but true.
There are absolutely valid use cases, but they are much fewer and further between than people claim.
I simply can't remember dozens of passwords, so a pw manager is the best I can do realistically. Yes, it's a single point of failure, but so is using the same pw everywhere.
Considering the number of times my email has ended up in a leaked dataset, and that the only accounts I've ever had visibly compromised were ones I did not use a password manager for, this seems to be the correct mindset.
I think if used correctly they can be a net benefit, but the question is how many users actually use them correctly. Isn't the security they offer based on a user only having to remember a single complex and unique password for the manager, and then letting it handle unique and complex passwords for everything else? The question is, however, how many users just set the password manager password to 'ImSecure123!' and use it to autofill the same old reused passwords they've always used?
Even if your password vault is stored on the cloud you're likely using a very secure passphrase for it that has 0 reuse anywhere else, so even if your password vault is stolen it's impossible to brute force.
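Back-of-the-envelope numbers for that brute-force claim (a sketch; the guess rate is an assumed figure for an attacker grinding against a slow key-derivation function):

```python
import math

# Assumptions (illustrative only): a 4-word passphrase drawn from a
# 7776-word Diceware-style list, and an attacker managing 1,000 guesses
# per second against the vault's slow/memory-hard KDF.
words, list_size = 4, 7776
guesses_per_second = 1_000

keyspace = list_size ** words
entropy_bits = words * math.log2(list_size)
avg_years = (keyspace / 2) / guesses_per_second / (3600 * 24 * 365)

print(f"entropy: {entropy_bits:.1f} bits")             # ~51.7 bits
print(f"average crack time: {avg_years:,.0f} years")   # ~58,000 years
```

Add a fifth or sixth word and the numbers become absurd even against a much faster attacker.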
For a hacker to compromise your password vault, it would likely involve hacking your computer, which, if you're keeping your software updated, is a very difficult task these days without the target user's active help.
It's not at all akin to that.
Firstly, every respectable password manager requires multi-factor authentication to log in to. Someone finding out the password to your manager is almost never sufficient. They would probably need to find it out as well as gain physical access to a device of yours which has the manager installed.
Secondly, the whole issue of "use one password for everything" is that if one site gets hacked and they store passwords insecurely (or, indeed, if the people who run the site are themselves malicious), then someone can use that same password to access all of your other accounts. So you have to trust the security of every single site you make an account with.
Using a password manager doesn't have that problem, since each site is being provided with a different password. So then you don't have to trust any website, you only have to trust the password manager itself. And you don't have to use a big cloud-hosted one if you distrust them - there are many password managers that you can just run locally on your computer (though without the cloud benefits of backup / disaster recovery). You can also just use a notebook with a padlock or something - frankly it doesn't really matter how you track your passwords, as long as nobody can get to it but you, and you use a different password for everything, and you have some plan for disaster recovery. That's the idea.
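The core mechanic is simple enough to sketch: generate an independent random password per site and store it, so a breach of one site tells an attacker nothing about the others. A toy illustration, not a substitute for a real manager's storage, sync, and autofill:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def new_password(length: int = 20) -> str:
    """Generate an independent random password.

    Reuse across sites is impossible by construction, so one breached
    site reveals nothing about any other account.
    """
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Hypothetical "vault": one fresh password per site.
vault = {site: new_password() for site in ("example.com", "mail.example.org")}
print(vault)
```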
At least that ups the threshold to "someone who can not only poison your dns or MITM your network, but can also generate trusted TLS certs for the website domain they're phishing for".
"The lobby group for Australian telcos has declared that SMS technology should no longer be considered a safe means of verifying the identity of an individual during a banking transaction."
If you say so...
Sadly there could potentially also be a supply chain attack that happens to make its way into the client you use to view your supposedly secure vault. Odds are they use npm, btw.
What do you expect setting your browser security levels to the max to do? Browsers are designed to be secure with default settings.
The most common attacks:
- Phishing
- Getting the user to run the malware themselves
- Credential reuse
- Literal physical theft
- Users uploading their own stuff completely willingly to some sketchy service
Vulnerabilities in the services you use are important, but you can't update those yourself :)
> Getting the user to run the malware themselves
Here are two good reasons for not trusting a password manager that stores your vault online.
On the other hand, most people have no backup strategy for their digital life.
> “As a result of this change, print clients running versions of Windows prior to Windows 10, version 2004 and Windows Server, version 2004 (Build number 19041) will intentionally fail to print to remote print servers running Windows 11, versions 24H2 or 25H2, and Windows Server 2025, that have installed this update, or later updates. Attempting to print from an unsupported print client to an updated print server will fail with one of the following errors: […]”
> Browsers are designed to be secure with default settings.
Not quite. They are usually designed to be both fast and safe, but neither goal is considered "done" yet in modern ones. If you want max security, you'll likely have to disable all performance boosts like JS JIT.
Maximizing privacy is a somewhat different goal, and recommendations for how to do so would differ from person to person. Some people really don't care about privacy. And for some other people, adblocker and tracking-blocker software is sufficient for their privacy needs. Whereas for certain people in certain parts of the world, literally the only way they can browse the Web safely is with Tor running on a temporary TailsOS drive.
The entire point of end-to-end encryption is that you don't need to trust the server. If your password manager has access to your secrets (i.e. you don't control the secret key/password itself), then you have bigger problems than a potentially untrustworthy host.
Then they changed to the web app and implemented teams, which is what we use today.
Work has decided the risk of 1Password going rogue is acceptable - but that's in the full knowledge that since they are serving the Javascript that's doing the client side encryption/decryption, there's no guarantee they can't serve (or be coerced into serving) malicious JavaScript that decrypts and exfiltrates all credentials and secrets any user has access to.
Pragmatically, I'm (mostly) OK with accepting that. If we have a threat model that realistically includes the sort of state-level actor who could coerce a company like 1Password into launching an exploit against us, then we've lost already. Like James Mickens said, "YOU'RE STILL GONNA BE MOSSAD'D UPON!!!"
One of my hobbies is recreational paranoia, though. So I use something else (KeePass) for my personal stuff now.
We spend so much time training people that if you hit update, it’s going to suck: you’re going to suddenly get ads in your favorite app, or some new feature is going to get paywalled, or the UI is going to completely change with no warning. It seems counterproductive to accept that our industry does this stuff and then publish an open letter finger-wagging people for not updating.
There are still legitimate reasons to clear cookies, to turn off Bluetooth/NFC beaconing, and to occasionally rotate passwords (vis a vis password managers) as it costs nothing to accomplish, and very little in the way of tradeoffs. So...why not?
The probability of a random individual being the target of a sophisticated state sponsored attack is low, but the probability of being caught up in a larger dragnet and for data to be classified, aggregated and profiled is very high. So why not make it just a bit harder for them all?
If anything, let's chip away at this problem bit by bit. Make their life a bit harder...their datacenters a bit hotter. Add random fud to the cookie values, constantly switch VPN endpoints, randomize your mac address on every WiFi association, constantly delete old comments, accounts, create throwaway accounts, create proxies and intermediaries, rotate your password and 2FA -- use any legal means to frustrate any adversarial entities -- commercial or otherwise. They want information? They want your data? Fine, overwhelm them with it. THAT should be the proper modern privacy-focused manifesto. This is utterly bewildering...
...but then I get to the signatories and this nonsense suddenly made all the sense in the world:
> Sincerely, Heather Adkins, VP, Cybersecurity Resilience Officer, Google
> Aimee Cardwell, former CISO UnitedHealthGroup
> Curt Dukes, former NSA IA Director, and Cybersecurity Executive
> Tony Sager, former NSA Executive
> Ben Adida, VotingWorks
> Geoff Belknap, Deputy CISO, Microsoft
The corporate CISO club is behind this.
My mom using those would be one “I don’t know where I put that” away from permanently losing access to her pictures or any other similar access. This is as potentially harmful as any attack.
I realize not everyone is using a physically stripped burner, a GrapheneOS install, etc., and not everyone works at a high-value financial, govt, or infra target, but for those of us who need to deal with opsec or are commonly targeted by spear phishing, this advice seems abysmal.
In the current political climate of the US, if you are living or traveling here and the current party isn't cheering for you personally, you really should be considering both state-sponsored attacks and no longer have the luxury of assuming good faith by the state. Telling people to enable cheap drive by attacks that are in active use by certain government agencies is irresponsible malpractice at best and actively evil at worst.
Source: I've worked at analytics companies that actively deanonymized users using cookies when available. We used wifi and Bluetooth details when available. We built "multi channel marketing" which was just taking any information we could scrape from the user to fingerprint them and cross reference and deanonymize them so we could sell interactions to businesses like geofenced price discrimination, value of users, and could offer cross website information on shopping habits/financial profile. The shit I did 15 years ago didn't go away and no matter how much I wish I didn't write that, it was the tip of the iceberg and relatively benign.
If you implement a password manager, you must mandate auto-fill only and actively discourage (via training) copy/paste of credentials to a web site. Train the users to view “auto-fill not working” as a red flag. (This doesn’t apply to non-website credentials). Mandate all passwords to be auto-generated. Mandate that the only manually-entered password is the one for the password manager. Of course, you must have MFA on the password manager entry.
This will allow your users to comply with frequent password rotations much more easily. Auto-fill requirement/culture is critical to reducing phishing success, especially for tired eyes.
> Secret questions
No, my mother's maiden name is not a secret. And some questions like "who was your best friend in elementary school?" might have different answers depending on when you ask me. Plus, unless my best friend's name was Jose Pawel Mustafa Mungabi de la Svenson-Kurosawaskiwitz (we used to call him Joe) it's pretty easy to guess with a dictionary attack. The only way to answer these questions securely is to make up an answer that's impossible to guess, which results in a second password.
> Your password must contain these particular characters
I understand that this rule is to prevent people from using passwords like "kittycat", but "k!ttyc4T" is still less secure than "horse battery staple correct".
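The arithmetic behind that, roughly (a sketch that assumes the attacker knows which scheme you used, which is the honest way to measure strength):

```python
import math

# "k!ttyc4T"-style: 8 characters drawn from ~72 printable symbols. Even that
# is an upper bound, because in practice it's a dictionary word with
# predictable substitutions, so its real strength is far lower.
leet_bits = 8 * math.log2(72)            # ~49.3 bits at most

# Four random words from a 7776-word Diceware-style list, and it's trivial
# to extend to 5-6 words for more margin.
passphrase_bits = 4 * math.log2(7776)    # ~51.7 bits

print(f"8-char substituted password: <= {leet_bits:.1f} bits")
print(f"4 random words:                 {passphrase_bits:.1f} bits")
```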
How in the #%*^ did you figure out my secret question?
I absolutely hate security theatre. And these kinds of things are just that. In fact, I’m sure that difficult to remember passwords make us less secure as we forget or write them down.
Well... something like that. Please don't use exactly "horse battery staple correct".
(This is actually a feature request to any password manager's product team: it's time to treat things like 2FA recovery codes and secret question answers as first-class citizens in your product.)
> the real world across industry, academia, and government.
Gotcha, so no one here gives a shit about privacy. They only care about avoiding the inconveniences of fraud and leaked secrets.
Use a password manager and a feature-complete adblocker (uBlock Origin on Firefox). Send messages over end-to-end encrypted channels. Use a VPN along with your adblocker and some kind of cookie/browser-ID isolation if you don't want your traffic stalked.
Yesyes, I do know that Big Ad can mostly stitch together some proxy profile of me anyway, but it would be more blurry.
PS: If the text is real and not trolling, the key phrase in it is "rarely happen", which we could then apply to car seatbelts too.
you mean partial web pages?
most browsers use DNS over HTTPS
How do you get from a malicious DNS response to a browser-validated TLS cert for the requested host?
Isn't the better advice to avoid clicking through certificate warnings? That applies both on and off open wifi networks.
There is a privacy concern, as DNS queries would leak. Enabling strict DoH helps (which is not the default browser setting).
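For example, a DoH lookup travels inside an ordinary HTTPS connection to the resolver, so someone on the same open wifi sees only encrypted traffic to that resolver rather than your plaintext queries. A minimal sketch against Cloudflare's DNS JSON endpoint (assuming that particular service; other DoH resolvers work similarly):

```python
import json
import urllib.request

def doh_lookup(name: str, rtype: str = "A") -> list[str]:
    """Resolve a name over HTTPS (Cloudflare's DNS JSON API) instead of plaintext UDP/53."""
    url = f"https://cloudflare-dns.com/dns-query?name={name}&type={rtype}"
    req = urllib.request.Request(url, headers={"Accept": "application/dns-json"})
    with urllib.request.urlopen(req) as resp:
        answers = json.load(resp).get("Answer", [])
    return [record["data"] for record in answers]

print(doh_lookup("example.com"))
```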
This one is known. Therefore I just cannot believe that those who wrote the open letter did not even think about such significant events from the past year - I stress, the past year - or about zero-days.
We are talking about people connecting to an unknown, unsupervised network, where we also do not know what new vulnerabilities will be published next, and the authors of the open letter know it, because they are hiding behind the excuse of "rarely".
This gets complicated because you're not safe on your home or corporate network either when CAs are breached. The incident everyone talks about, DigiNotar (2011), had stolen CA keys issuing certificates that intercepted traffic across several ISPs. If that's the threat you're looking to handle, "avoid public wifi" isn't the right answer. Perhaps you're doing certificate pinning, application level signing, closed networks, etc.
> Entrust (2024)
I recently wrote a blog post[1] about CA incidents, so I notice this one isn't like the others. Entrust's PKI business was not impacted by the hack and Entrust remains a trusted CA.
> Click here or use your login
Password manager autofill is the solution there, both on public wifi and on a corporate network. Perhaps an ad blocker as well.
> people connecting to an unknown unsupervised network
Aren't most people's home networks "unsupervised"?
Do you notice that your proposed solutions are trying to fix a problem? The open letter does not propose solutions; it merely dismisses them.
We need to be sincere with people: those "incidents" have happened for a long time and, given the history, will unfortunately keep happening, with bad actors hunting - yesterday the CAs, and tomorrow? So if you connect to an open wifi network you may fall victim to a trap, probably not at home but in an airport or other crowded places with long waits, and even if you don't browse, some app in the background will be trying to.
It took many years to make people just slightly aware, and now they - if the text is real - intend to undo it. But to be honest I really do not mind much; I just perceive that open letter as malicious.
Are there specific, modern examples of CA compromise being used to target low-risk individuals? Is that a common attack vector for low-risk individuals and small businesses?
- CISOs aren’t actually officers of the company and are typically 2-3 levels below the actual officers
- CISOs only exist to have someone to deflect blame onto after the inevitable breach
- If a company actually cared about security they wouldn’t put it in a silo
Tech and non-tech users have a budget to spend on IT security, so if you impose a lot of useless or marginally useful rituals along with the useful prophylaxis, users will be forced to drop some of the measures. It's better to drop some rules up front by policy than to let users decide which good practices to skip.
For 5 - session cookies are one of the main things stealers look for. Deleting cookies is absolutely good advice until browsers build in better mitigations against cookie theft.
For 6 - if there were a standard interface through which password managers could rotate my creds, I would sure as hell use it. Force-rotating passwords is only "bad" if people need to remember them. Any random credentials stored in a vault absolutely should be rotated periodically; there is no reason not to.
I don't see the point of this letter; none of the "bad" advice they call out is harmful to security in any way, and if people feel safer avoiding public wifi, so be it. Is it just a callout to other CISOs to update their security hygiene PowerPoints?
> This kind of advice is well-intentioned but misleading. It consumes the limited time people have to protect themselves and diverts attention from actions that truly reduce the likelihood and impact of real compromises.
When you've got 15 seconds to _maybe_ get someone to change their behavior for the better, you need to discard everything that's not essential and stay very very far away from "yes, but" in your explanations.
And some of the previous advice they’re stepping back from like avoiding QR codes you’re unfamiliar with is still good advice; you should be careful and not expose yourself too much.
2. People will not use complex solutions unless actively and rigidly enforced.
3. At best, we can hope that they can create one really good passphrase. That's combined with MFA.
There are people that are exceptions to those, but they're vanishingly small percentage of the population. And unfortunately, there are a way, way more people that think they have something better but are deluding themselves -- like bad card counters that casinos are happy to have at the blackjack table or non-experts rolling their own crypto.
Let me fix this for you:
> Never use SMS one-time codes as a last resort
Where I see a flaw in this is the initial login.
If you're not already on your computer to access the password manager, how do you retrieve the essentially non-memorisable password to unlock your computer in order to get to the password manager to retrieve the essentially non-memorisable password?
The password to unlock the computer, therefore, must be able to be remembered. This pretty much excludes 16-character auto-generated passwords for anyone but a savant.
Am I missing something obvious here? (MFA using an authenticator app on the phone? Is that something that Windows / Mac/ Linux supports?)
And any password length requirement beyond 8 always ends up being just a logical extension of the 8-character password (like putting 1234 at the end); if 16 characters are required, one would just type their standard password twice.
If any of the old passwords (potentially from unrelated applications) gets leaked, it's almost trivial to guess the current password.
It's a wetware limitation. Not that we don't have methods that could improve it, it's just that they're not yet implemented at this specific point of contact. Interestingly.
But the message, absolutely on board with it.
the nuanced advice is for unsolved problems, so ... if you ask a nuanced question, you'll be ... directed back to foundational security advice ?
I don't think they've thought about it; perhaps the signatories shouldn't play risk-advice cosplay at population level.
I'd be annoyed if this influenced policy because of who they are and not how dumb the logic is.