But in the end, if I want Alice to talk to Bob and to know that they, and only they, are talking, I'd like to guarantee that. Instead, companies are spending tons of money and work hours doing Eve's work for her, installing her tools and getting it all nicely configured for when she logs in.
How many times do we have to backdoor our crypto systems to realize we're not building doors for just us but for everyone else as well?
It need not be a single point of failure. You can set these things up with redundancy. There's certainly an element of adding risk, your interception box is a big target to do unauthorized interception or tampering; but there's also an element of reducing risk --- you'd be potentially able to see and respond to traffic that would be opaque otherwise.
Yes, so instead of one box with the keys to decrypt all the traffic flowing through the network I'll have multiple boxes that have the ability to decrypt all the traffic. Multiple machines to update and secure and guard against those getting attacked or else everything gets broken.
Finding a way to subvert that authentication or, better yet, bypass it entirely, could put U.S. military networks that can be reached over the public Internet at risk of remote exploitation. Those networks can often also reach other military networks not directly exposed to the public Internet.
But I agree experts should know better when any solid proof is lacking. Or any proof at all.
Not that "F5 announces attack by state sponsored hackers", "F5 announces attack by nation-state backed hackers", or "F5 announces attack from nationally backed hackers" have to be invalid, particularly since the last is often what is actually most specifically correct anyway.
If F5 can't do that, what is their actual value proposition?
North Korea really does spend a lot of money on this, and so does Russia and China. And US and Israel, for that matter.
Sure thing. It's so hard not to hate this PR stuff when they can't even be a tiny bit humble. "The hackers were so sophisticated and organized, we didn't even have a chance! They could've hacked anyone!"
> In response to this incident, we are taking proactive measures to protect our customers
Such as fixing the bugs, or the structural problems, that led to you being hacked and leaking information about even more bugs that you had left undisclosed and kept postponing fixes for? This wording makes it sound like they're now going the extra mile to protect their customers, when keeping your systems secure and fixing known bugs should've been the first mile they went.
Just be honest, you fucked up twice. It's shit, but it happens. I just hate PR.
F5, Inc. (“F5”) engaged NCC Group to perform (i) a security assessment of critical F5 software source code, including critical software components of the BIG-IP product, as provided by F5, and (ii) a review of portions of the software development build pipeline related to the same, and designated as critical by F5 (collectively, the “In-Scope Items”). NCC Group’s assessment included a source code security review by 76 consultants over a total of 551 person-days of effort.
Wonder what the bill was?
https://www.cisa.gov/news-events/directives/ed-26-01-mitigat...
Is it just me?
It seems more likely that we do not KNOW how the access was used.
They claim the vulnerabilities discovered through the exfiltration were not used though.
> We have confirmed that the threat actor exfiltrated files from our BIG-IP product development environment and engineering knowledge management platforms. These files contained some of our BIG-IP source code and information about undisclosed vulnerabilities we were working on in BIG-IP.
> We have no knowledge of undisclosed critical or remote code vulnerabilities, and we are not aware of active exploitation of any undisclosed F5 vulnerabilities.
That admits nearly every possible class of outcome, as long as they did not actively already know about it and chose to say they did not. The specific words that their lawyers intentionally drafted even allow them to deliberately destroy any evidence that would lead them to learn whether the vulnerabilities were used, and still successfully claim in a court of law that they were telling the truth. You should not assume their highly paid lawyers meant anything other than the most tortured possible technically correct statement.
PR statements drafted by legal are a monkey's paw. Treat them like it.
I downvoted you for complaining about downvotes, so at least you know the reason for one of them now.
Something about this statement screams that companies are setting themselves up for free money from big old gov'ment welfare titties. I keep seeing it pop up again and again and it only makes sense in that context.
It's the bogeyman, like terrorism. We need infinite money to fight the bad guys.
If I were running a country practically my highest priority would be cyberattacks and defense. The ability to arbitrarily penetrate even any corporate network, let alone military network, is basically infinite free IP.
I understand human nature.
There's a thousand things to point at that would make it plausible. I might even convince myself of it out of sheer embarrassment.
This is a fantasy.
Not saying that these companies would turn down corporate welfare given the chance, but I’ll offer an alternative explanation: it shifts accountability away from the company by positing a highly resourced attacker the company could not reasonably be expected to protect against.
If you have a physical security program that you’ve spent millions of dollars on, and a random drug addict breaks in and steals your deepest corporate secrets people are going to ask questions.
If a foreign spy does the same, you have a bit more room to claim there’s nothing you could have done to prevent the theft.
I’ve seen a bunch of incident response reports over the years. It is extremely common for IR vendors to claim that an attack has some hallmark or another of a nation-state actor. While these reports get used to fund the security program, I always read those statements as a “get out of jail free” card for the CISOs who got popped.
I agree. I think what we are split on is purpose/intent.
>could not reasonably be expected to protect against.
Why not? If I'm hiring a cybersec firm, that's probably in my top 3 reasons to hire them; if not them, then who? Number one is probably compliance/regulation.
> “get out of jail free”
This is one of my red flags I also keep seeing. Whoops we can't do the thing we say we do. The entire sec industry seems shady AF. Which is why I think they are a huge future rent seek lobby. Once the insurance industry catches on.
> these reports get used to fund the security program
So we agree?
I… don’t think so? Your original comment was that companies claim nation state attack as a way to get government funding. That has nothing to do with assessing blame for an attack.
> Why not? If I'm hiring a cybersec thats probably in my top 3 reasons to hire them, if not them then who?
If you think you as a private entity can defend against a tier 1 nation state group like the NSA or Unit 8200, you are gravely mistaken. For one thing, these groups have zero day procurement budgets bigger than most company market caps.
That’s why companies reflexively blame nation state actors. It isn’t to get government funding. It is to avoid blame for an attack by framing it as something they could not have prevented.
> So we agree?
No, I don’t believe we do.
do you mean they pay companies to put backdoors into products? or you mean they just go hunting for vulnerabilities. maybe both?
Of course, one might still be concerned that the hardware that the software is running on, could be compromised. (A mathematical proof that a program behaves in a particular way, only works under the assumption that the thing that executes the program works as specified.) Maybe one could have some sort of cryptographic verification of correct execution in a way where the verifier could be a lot less computationally powerful while still providing high assurance that the computations were done correctly. And then, if the verifier can be a lot less powerful while still checking with high assurance that the computation was done correctly, then perhaps the verifier machine could be a lot simpler and easier to inspect, to confirm that it is honest?
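One classical illustration of a verifier that is much weaker than the prover yet still gets high assurance is Freivalds' algorithm: checking a claimed matrix product C = A x B takes only O(n^2) work per round, versus O(n^3) to recompute the product. This is only a toy sketch of the "weak verifier" idea, not the cryptographic verification schemes the comment alludes to.

```python
import random

def freivalds(A, B, C, rounds=10):
    """Probabilistically check that A @ B == C.

    Each round costs O(n^2) (three matrix-vector products),
    versus O(n^3) to recompute A @ B outright. A wrong C slips
    through all rounds with probability at most 2**-rounds.
    """
    n = len(A)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]   # B @ r
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]  # A @ (B @ r)
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]   # C @ r
        if ABr != Cr:
            return False  # definitely wrong
    return True  # correct with probability >= 1 - 2**-rounds

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]
C_bad = [[19, 22], [43, 51]]
print(freivalds(A, B, C_good))             # True
print(freivalds(A, B, C_bad, rounds=64))   # almost certainly False
```

The asymmetry is the point: the machine running the check can be far simpler than the machine doing the work, which is exactly the property the comment hopes for in an easy-to-inspect verifier.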
Generally the government is not (as of now) paying private companies to secure things, with the possible exception of some critical-infrastructure companies. We are in the very early stages of figuring out how to hold companies accountable for security breaches, and part of that is figuring out whether they should have stopped it.
A lot of that comes down to a few principles:
* How resourced is the defender versus the attacker?
* Who was the attacker? (Attribution matters - shoutout @ImposeCost on Twitter/X.)
* Was the victim of the attack performing all reasonable steps, to show the cause wasn't some form of gross negligence?
Nation state attacker jobs aren't particularly different from many software shops.
* You have teams of engineers/analysts whose job it is to analyze nearly every piece of software under the sun and find vulnerabilities.
* You have teams whose job it is to build the infrastructure and tooling necessary to run operations
* You have teams whose job it is to turn vulnerabilities into exploits and payloads to be deployed along that infrastructure
* You have teams of people whose job it is to be hands on keyboard running the operation(s)
Depending on the victim organization, if a top-tier country wants what you have, they are going to get it and you'll probably never know.
F5 is, at least by Q2 revenue[0], a very profitable, well-resourced company that has seen some things and been the victim of some high-profile attacks and vulns over the years. It's likely they were still outmatched, because there was a team of people who found a weakness and exploited it.
When they use verbiage like nation-state, it's to signal that they were doing most/all the right things and got popped anyway. The relevant government officials already know what happened; this is a signal to the market that they did what they were supposed to and aren't negligent.
[0] -https://www.f5.com/company/news/press-releases/earnings-q2-f...
The attacker needs to find 1 fault in a system to start attacking it; the company needs to plug ALL of them to be successful, continually, for all updates, for all staff, for all time.
Having been on both sides of that fence, I don't envy the defenders. It is a losing battle.
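The asymmetry above can be made concrete with a toy model: if each of n independently attackable components is secure with probability p, the system as a whole holds only with probability p**n (independence is an assumption; real systems are messier, but the decay is the point).

```python
def p_system_secure(p: float, n: int) -> float:
    """Toy model: probability the whole system is secure when each of
    n independent components is secure with probability p."""
    return p ** n

# Even 99%-secure components fail the defender at scale:
print(p_system_secure(0.99, 1))    # 0.99
print(p_system_secure(0.99, 100))  # ~0.366
print(p_system_secure(0.99, 500))  # ~0.0066
```

One excellent component matters little when hundreds of merely good ones multiply against you; the attacker only needs one of the complements.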
Being on the defenders side, I would say it is not a losing battle.
It is a matter of convenience versus security: not using up-to-date libraries because it requires some code rewrites and "ain't nobody got time for that", adding too much logic and scope creep to functions instead of segregating services, not microsegmenting workloads, using service accounts with full privileges because figuring out what you actually need takes too much time; and the list could go on.
I am not blaming all developers and engineering managers for this, because they might not know about all the intricacies of building secure services. Part of the blame is on the ops and security people who don't understand them either and think they're secure when they are not. And those folks should know better.
And third, hubris: we have all the security solutions that are trendy now, we’re safe. Do they actually work? No one knows.
Many of these companies can keep up to date, assuming their vendors report correctly. The exploits that are not publicly documented are rarely fixed.
That it was a nation-state actor may have allowed them some grace, as it didn't result in individuals' details being wholesale sold on the dark web, and the fallout was most-likely a national security issue.
It would definitely have helped the CCP target individuals who were vulnerable to recruitment due to their financial status. Especially when combined with the Office of Personnel Management data hack.
That is to say, sometimes nation state hackers _were_ behind the compromise. F5 is a very believable and logical target for such groups.
From the published CISA mitigation[0]:
A nation-state affiliated cyber threat actor has
compromised F5’s systems and exfiltrated files, which
included a portion of its BIG-IP source code and
vulnerability information. The threat actor’s access to
F5’s proprietary source code could provide that threat
actor with a technical advantage to exploit F5 devices and
software.
> Its the boogyman [sic] like terrorism.

Or maybe it is a responsible vulnerability disclosure whose impact is described thusly[0]:
This cyber threat actor presents an imminent threat to
federal networks using F5 devices and software. Successful
exploitation of the impacted F5 products could enable a
threat actor to access embedded credentials and Application
Programming Interface (API) keys, move laterally within an
organization’s network, exfiltrate data, and establish
persistent system access. This could potentially lead to a
full compromise of target information systems.
[0] - https://www.cisa.gov/news-events/directives/ed-26-01-mitigat...

Until this happens, it's just CYA at its best to hide flaws in their systems and procedures.
I don’t know why, but this sounds a bit like backdoors.
Why would anyone have confidence in F5’s analysis?
Adding malicious code to the BIG-IP software would have required the attackers to persist in F5's systems undetected for a long time, until they understood the current code. Not a zero percent chance, but pretty unlikely.
https://my.f5.com/manage/s/article/K000157005
In October 2025, F5 rotated its signing certificates and keys used to cryptographically sign F5-produced digital objects.
As a result:
* BIG-IP and BIG-IQ TMOS product versions released in October 2025 and later are signed with new certificates and keys
* BIG-IP and BIG-IQ TMOS product versions released in October 2025 and later contain new public keys used to verify certain F5-produced objects released in October 2025 and later
* BIG-IP and BIG-IQ TMOS product versions released in October 2025 and later may not be able to verify certain F5-produced objects released prior to October 2025
* BIG-IP and BIG-IQ TMOS product versions released prior to October 2025 may not be able to verify certain F5-produced objects released in October 2025 and later
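The verification asymmetry described in that notice falls out of any keyed-signing scheme once the key rotates. A minimal sketch, using HMAC as a stand-in for F5's real asymmetric signatures (the key names and artifact strings are hypothetical, not F5's actual scheme):

```python
import hmac
import hashlib

# Hypothetical pre- and post-rotation signing keys.
OLD_KEY = b"pre-oct-2025-signing-key"
NEW_KEY = b"post-oct-2025-signing-key"

def sign(artifact: bytes, key: bytes) -> str:
    """Produce a keyed signature over the artifact."""
    return hmac.new(key, artifact, hashlib.sha256).hexdigest()

def verify(artifact: bytes, signature: str, key: bytes) -> bool:
    """Check the signature using whichever key this build trusts."""
    return hmac.compare_digest(sign(artifact, key), signature)

old_image = b"TMOS image released before October 2025"
old_sig = sign(old_image, OLD_KEY)

# A build that only trusts the new key can no longer verify
# artifacts signed before the rotation, and vice versa:
print(verify(old_image, old_sig, OLD_KEY))  # True
print(verify(old_image, old_sig, NEW_KEY))  # False
```

That is why the notice has to spell out all four old/new combinations: each release only carries the verification material for its own era.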
Translated =>
We don't know whether they have used or are going to use our NSA-mandated backdoors.
Sometimes when a company engages law enforcement, law enforcement can request that they not divulge that the company knows about the problem so that forensics can begin tracking the problem.
I won't speak how often or how competent law enforcement are though, but it can happen.
Sony & many others have proved pretty comprehensively that brand reputation isn't really impacted by breaches, even in high profile consumer facing businesses. That trickles down to B2B: if your clients don't care, why should you.
That leaves legal risk as the only other motivating factor. If that's been effectively neutered, it doesn't make economic sense for companies to do due diligence with breaches.
As far as I'm aware, Yahoo were the last company to suffer any significant impact from the US legal system due to a breach.
But you are right, at F5's size and moneys, incentives for public disclosure are not aligned in the public's favor. Damage control, in all its meanings, has taken priority lately over transparency.
completely missed your point
I really have no idea how security people think this is a good thing aside from checkbox compliance but man-o-man do they love it.
Is it that (through some mechanism) an actor gained access to F5's systems, and literally found undisclosed vulnerabilities documented within F5's source control / documentation that affect F5's products?
If so, lol.
"Here be dragons" is also a good search if you're responsible for security hardening legacy code.