It seems to me a parallel path that should be pursued is to make the impact less damaging. Don't assume that things like birth dates, names, addresses, phone numbers, emails, SSNs, etc. are private. Shut down the avenues that people use to "steal identities".
I hate the term "identity theft", because it implies the victim made some mistake to allow it to happen, when what really happened is that the company was too lazy to verify that the person they're doing business with is actually who they say they are. The onus and liability should be on the company involved. If a bank gives a loan to you under my name, it should be their problem, not mine. The problem would go away practically overnight if that were changed: companies would be strict about verifying people, because otherwise they'd lose money. Incentives align.
Identity theft is not the only issue with data leaks / breaches, but it seems one of the more tractable.
You may enjoy this sketch: https://www.youtube.com/watch?v=CS9ptA3Ya9E
It’s honestly unclear if the damage from data breaches exceeds the cost of eliminating it. The only case where I see that being clear is in respect of national security.
Definitely not. Damage is done to customers, but the costs to eliminate it are on the company. Why should a company invest more if there are no meaningful consequences for it?
What is the evidence for this?
The cost of identity fraud clocks in around $20bn a year [1]. A good fraction of that cost gets picked up (and thus managed) by financial institutions and merchants.
I’m sceptical we could harden our nation’s systems for a few billion a year.
[1] https://javelinstrategy.com/research/2024-identity-fraud-stu...
One factor you know (data) and the other you possess, or you are (biometrics).
The IdP issues both factors; identification is federated to them.
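To make that concrete, here's a minimal sketch of the relying-party side, assuming an OIDC-style IdP and Python's PyJWT library. The IdP URL, audience, and token contents are made up for illustration:

    # Hypothetical relying-party check: the IdP issues a signed token,
    # and the service verifies it against the IdP's published keys
    # instead of trusting knowledge factors like SSN or DOB.
    import jwt
    from jwt import PyJWKClient

    JWKS_URL = "https://idp.example.gov/.well-known/jwks.json"  # made-up IdP
    AUDIENCE = "example-bank"  # the relying party's client ID

    def verify_identity_token(token: str) -> dict:
        # Fetch the signing key matching the token's key ID from the IdP.
        signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
        # decode() checks the signature, expiry, and audience in one go.
        return jwt.decode(token, signing_key.key,
                          algorithms=["RS256"], audience=AUDIENCE)

The point being that the relying party never holds a secret worth leaking; it only verifies a signature.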
This kind of happens when you are required to supply a driver's license, which is technically something you possess and is a federated ID if checked against the government system, but which can be easily forged with knowledge factors alone.
Unfortunately, banks and governments here use facial recognition for the second factor, which has big privacy concerns, and the tendency I think will be toward the federal government as sole IdP. Non-biometric factors might have practical difficulties at scale, but fingerprints would be better than facial recognition. They're already taken in most countries and could be easily federated. Not perfect, but better than the alternatives imo.
It's a strong factor if required in person; the problems start when accepting it remotely. But having to go to the bank seems like a thing of the past.
Financial consequences for the company might be a deterrent; of course, then you're dealing with hundreds or thousands of people potentially unemployed because the company was bankrupted by something as simple as a mistake in a firewall somewhere or an employee falling victim to a social engineering trick.
I think the path is along the lines of admitting that cloud, SaaS and other internet-connected information systems cannot be made safe, and dramatically limiting their use.
Or, admitting that a lot of this information should be of no consequence if it is exposed. Imagine a world where knowing my name, SSN, DOB, address, mother's maiden name, and whatever else didn't mean anything.
Consider the intent of not hiring enough security staff and supporting them appropriately. It looks a lot like an accident. You could even say it causes accidents.
Clearly it's possible.
If we were serious about preventing these kinds of things from happening, we could.
Bottom line though, a good many folks here would loudly resist that kind of oversight of their work and their businesses, and for somewhat valid reasons. Data breaches hardly ever cause hundreds of deaths in a violent fireball.
If the consequences of an airline crash were just some embarrassment and some inconvenience for the passengers, crashes would happen a lot more often.
Also, people almost never go to jail for airline crashes, even when they cause hundreds of deaths. We investigate them, and maybe issue new regulations, not to punish mistakes but to try to eliminate the possibility of them happening again.
Insurance people will be happy to tell you the price of the average citizen's life. Estimate the total cost to the economy, divide by the average citizen's life-value and you have the statistical deaths caused by this type of incident. Draw a fireball next to it for dramatic effect.
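Back of the envelope, taking the roughly $20bn/year fraud figure cited upthread and assuming a value of statistical life around $10M (in the ballpark regulators actually use; pick your own number):

    # Statistical deaths implied by identity fraud; both inputs are assumptions.
    annual_fraud_cost = 20e9          # USD per year, figure cited upthread
    value_of_statistical_life = 10e6  # USD, assumed regulator-style figure

    print(annual_fraud_cost / value_of_statistical_life)  # -> 2000.0

That's on the order of 2,000 statistical deaths a year.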
Generally though, I don't think _every_ SaaS needs to be tightly regulated, but every one that handles customer data does. It would also very quickly make them stop hoovering up any data they can get their fingers on, and instead make them learn how to provide their services securely without even having access to the data, because having that data suddenly becomes a liability instead of an opportunity.
This is not quite accurate. In the US for example, the NTSB investigates the causes of an incident, and the FAA carries out any subsequent enforcement action. Whereas the NTSB may rule the cause as pilot error due to negligence for example, the FAA may revoke the pilot's license and/or prosecute them in a civil case to the tune of a hundred thousand dollars and/or refer them to the Department of Justice for criminal prosecution.
I should probably clarify: there were two types of people who claimed that back then. Those trying to gaslight us, and those naive enough to actually believe the gaslighting. Severe negligence has to be proven, and that is not easy, and there is a lot of wiggle room in court. Executives being liable for what they did during their term is just not coming, sorry kids.
Unless it's e2e encrypted (like in Proton Mail or Proton Drive), these incidents will occur. Manage your risk accordingly.
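To illustrate the principle (not how Proton actually implements it), here's a minimal sketch using Python's cryptography library: if the client encrypts before upload and keeps the key, a breach at the provider exposes only ciphertext.

    # Client-side encryption before data ever reaches a third party.
    # Illustrative only; real E2E systems manage keys far more carefully.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # stays on the client, never uploaded
    f = Fernet(key)

    ciphertext = f.encrypt(b"name=Jane Doe; dob=1970-01-01")
    # ...store only `ciphertext` with the cloud provider...
    plaintext = f.decrypt(ciphertext)  # possible only with the client-held key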
So who the hell was the "third-party, cloud-based CRM system"?
> This notice applies to individuals who participate in any of the following programs under the closed line of business:
> • Long term care
> • Medical
> • Medical supplemental
> • Hospital income
> • Cancer and disease specific coverage
> • Dental benefits
> The Covered Entity’s actions and obligations are undertaken by Allianz employees as well as the third parties who perform services for the Covered Entity. However, Allianz employees perform only limited Covered Entity functions – most Covered Entity administrative functions are performed by third party service providers.
It sold long term care insurance policies until 2010.
(Disclosure, I happen to have worked at Allianz Life a long time ago, though I have no nonpublic information about any of this.)
[0] https://www.allianzlife.com/-/media/Files/Allianz/PDFs/about...
So I think it matters: access systems should be designed with a wider set of human behaviors in mind, and there should be technical hurdles to leaking a majority of customers' personal information.
I’ve got another reply here with details but suffice it to say misconfigured Salesforce tenants are all over the internet.
The status quo, where nobody gives a crap and the regulators do literally nothing, cannot continue. In the UK, the ICO is as effective as Ofwat (the regulator that was just killed for being pointlessly and dangerously useless).
(Edit: fix autocorrect)
What happens to customers of the affected company in this case? Does this not now pass on a second problem to the people actually affected?
Security researchers, white-hat hackers, and even grey-hat hackers should have strong legal protections so long as they report any security vulnerabilities that they find.
The bad guys are allowed to constantly scan and probe for security vulnerabilities, and there is no system to stop them, but if some good guys try to do the same they are charged with serious felony crimes.
Experience has shown we cannot build secure systems. It may be an embarrassing fact, but many, if not all, of our largest companies and organizations are probably completely incapable of building secure systems. I think we try to avoid this fact by not allowing red-team security researchers to be on the lookout.
It's funny how everything has worked out for the benefit of companies and powerful organizations. They say "no, you can't test the security of our systems, we are responsible for our own security, you cannot test our security without our permission, and also, if we ever leak data, we aren't responsible".
So, in the end, these powerful organizations are both responsible for their own system security, and yet they also are not responsible, depending on whichever is more convenient at the time. Again, it's funny how it works out that way.
Are companies responsible for their own security, or is this all a big team effort that we're all involved in? Pick a lane. It does feel like we're all involved when half the nation's personal data is leaked every other week.
And this is literally a matter of national security. Is the nation's power grid secure? Maybe? I don't know, do independent organizations verify this? Can I verify this myself by trying to hack the power grid (in a responsible white-hat way)? No, of course not; I would be committing a felony to even try. Enabling powerful organizations to hide the security flaws in their systems is the default: they just have to do nothing, and then nobody is allowed to research the security of their systems, nobody is allowed to blow the whistle.
We are literally sacrificing national security for the convenience of companies and so they can avoid embarrassment.
Pinky promise?
We need something like the salvage law.
It's an unpopular idea because it's bullshit. Building secure systems is trivial and within the skill level of a junior engineer. Most of these "hacks" are not elaborate attacks utilizing esoteric knowledge to discover new vectors. They are the same exploit chains targeting bad programming practices, out-of-date libraries, etc.
Lousy code monkeys and mediocre programmers are the ones introducing vulnerabilities. We all know who they are. We all have to deal with them, thanks to some brilliant middle manager figuring out how to cut costs for the org.
I'd suggest you try and build a secure system for > 150k employees before you make sweeping statements like that.
I worked for an SME that dealt with some sensitive customer data. I mentioned to the CEO that we should invest some time in improving our security. I got back that "what's the big deal, if anyone wants to look they can just look..."
This story spans a lot of different concerns, only a few of which are related to coding skills. Building secure software means defending in breadth, always, never fucking up once, against an armada of bots and creative hackers who only need to get lucky once.
The issue is that there are too few repercussions for companies making software in shitty ways.
Each data breach should hurt the company in proportion to the size of the breach.
Equifax breach should have collapsed the company. Fines should be in tens of billions of dollars.
Then, under such a banhammer, software would be built correctly, security would be cared about, internal audits would be done (real ones), and people would care.
Currently, as things stand, there is ZERO reason to care about security.
Facebook was breached, what, last month?
Google is an ad company. They can’t sell data that’s breached. They basically do email, and with phishing at epidemic levels, they’ve failed the consumer even at that simple task.
All are too big to fail, so there is only Congress to blame. While people like Ro Khanna focus their congressional resources on the Epstein intrigue, citizens are having their savings stolen by Indian scammers, and there is clearly no interest and nothing on the horizon to change that.
The issue is not that it's illegal to put on a white hat, break into the user database, and steal 125 million accounts as proof of a security issue.
The problem is people getting sued for saying "Hey, I stumbled upon the fact that you can log into any account by appending the account number to the URL of your website."
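That bug is a classic insecure direct object reference, and the fix is a one-line ownership check. A sketch in a hypothetical Flask-style handler (none of this is any real site's code):

    # Insecure direct object reference (IDOR) and its fix, illustrative only.
    from flask import Flask, abort, session

    app = Flask(__name__)
    app.secret_key = "dev-only"  # placeholder so sessions work in this sketch

    def load_account(account_number: int):
        return {"account": account_number}  # stub data-access helper

    @app.route("/account/<int:account_number>")
    def show_account(account_number: int):
        # The vulnerable version trusts the number in the URL and returns
        # the account. The fix: verify the logged-in user actually owns it.
        if session.get("account_number") != account_number:
            abort(403)  # without this check, anyone can read any account
        return load_account(account_number)

The researcher's "crime" is noticing that the check above is missing.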
There certainly is a line separating ethical hacking (if you can even call it hacking in some cases) from prodding and probing at random targets in the name of mischief and chaos.
There are a lot of shades of grey that you are ignoring.
> I say this often, and it's quite an unpopular idea, and I'm not sure why.
>
> Security researchers, white-hat hackers, and even grey-hat hackers should have
> strong legal protections so long as they report any security vulnerabilities
> that they find.
>
> The bad guys are allowed to constantly scan and probe for security
> vulnerabilities, and there is no system to stop them, but if some good guys
> try to do the same they are charged with serious felony crimes.
So let me get this straight: you want to give unsuccessful bad actors an escape hatch by claiming white-hat intentions when they get caught probing systems?

Me, neither, if that helps.
I've worked with Allianz's cybersecurity personnel previously on EBRs/QBRs, and the issue is that they (like a lot of European companies) are basically a confederation of subsidiaries with various independent IT assets and teams, so shadow IT abounds.
They have subsidiaries numbering in the dozens, so there is no way to unify IT norms and standards.
There is an added skills issue as well (most DACH companies I've dealt with have only just started working on building hybrid security posture management - easily a decade behind their American peers), but it is a side effect of the organizational issues.
That is their choice though - they could set up a technology services subsidiary and then provide IT services to the other subsidiaries, transparently to the end users in those subsidiaries.
SoftTalker•4h ago
Our industry is pathetic.
SoftTalker•4h ago
But I think that fundamentally, secure cloud-based SaaS is impossible. This stuff needs to be on-prem and airgapped from the internet. That makes some functionality complicated or impossible, but we're seeing that what we have now is not working.
filleokus•2h ago
We don't have any details now, but I wouldn't be surprised if the cloud-based CRM provider didn't have a very technically interesting weakness, but rather that some kind of social-engineering-style method was used.
If global companies like this instead had stuff running on-prem all around the world, technical vulnerabilities seem MORE likely to me.
(Air gapping is of course possible, but in my experience, outside of the most security-sensitive areas the downsides are simply not acceptable. Or the "air gapping" is just the old "hard shell" / perimeter-based access model...)