It seems to me a parallel path that should be pursued is to make the impact less damaging. Don't assume that things like birth dates, names, addresses, phone numbers, emails, SSNs, etc are private. Shut down the avenues that people use to "steal identities".
I hate the term "identity theft," because it implies the victim made some mistake that allowed it to happen. What really happened is that the company was too lazy to verify that the person they're doing business with is actually who they say they are. The onus and liability should be on the company involved. If a bank gives a loan to you under my name, it should be their problem, not mine. The problem would go away practically overnight if that were changed: companies would be strict about verifying people, because otherwise they'd lose money. Incentives align.
Identity theft is not the only issue with data leaks / breaches, but it seems one of the more tractable.
You may enjoy this sketch: https://www.youtube.com/watch?v=CS9ptA3Ya9E
By definition wealth and power enables those who have it to modify the status quo, otherwise we would call that having delusions of wealth and power.
> By definition wealth and power enables those who have it to modify the status quo, otherwise we would call that having delusions of wealth and power.
Wealth and power do not obligate one to do the right thing, it’s true. That doesn’t mean that the onus doesn’t remain on the power structure whose decision making capabilities are enabled by technological means. There’s no one else whose hands last touched the apparatus from whence their technological power flows. We could ask the public collectively to respond, but their hands don’t rest upon the levers of power.
Should we raise taxes on the lower and middle classes to pay for it too? There’s no need, as the truly wealthy don’t need to touch their assets directly or even pay them any mind, as they have hired help for that. The devaluation of privacy hits the little people first, and hardest. Elites are not able to be bothered even performatively by these issues, as they are not subject to these particular failure modes of society.
https://en.wikiquote.org/wiki/Anatole_France
> Cela consiste pour les pauvres à soutenir et à conserver les riches dans leur puissance et leur oisiveté. Ils y doivent travailler devant la majestueuse égalité des lois, qui interdit au riche comme au pauvre de coucher sous les ponts, de mendier dans les rues et de voler du pain.
> It is the duty of the poor to support and sustain the rich in their power and idleness. They must labor in the face of the laws' majestic equality, which forbids rich and poor alike to sleep under bridges, beg in the streets, and steal bread.
It’s not the same for the little people. That was my point.
The only thing I said was it’s tautological… if they couldn’t affect the status quo to their desires, then they wouldn’t be considered to have wealth or power in the first place.
It seems to be completely unrelated to the tautology.
I argue even if the above is true, they truly are elites if the word actually describes anyone accurately and not just aspirationally, and that there are too many cooks in the kitchen for one to rise above the din.
If bank-mandated security controls are breached, or they don't provide adequate controls, I feel like that's on them. But if they've done their part and you've been irresponsible, then that's on you. But where's the dividing line? And saying the banks have more responsibility can also justify more biometrics and surveillance.
Is the differentiating factor here that the bank (or whatever) is allowing access with insecure credentials (name, date of birth, phone number) instead of the primary credentials?
If they gave a loan or a credit card to someone pretending to be me, that's now on my credit rating and historically very difficult to undo.
It’s honestly unclear if the damage from data breaches exceeds the cost of eliminating it. The only case where I see that being clear is in respect of national security.
Definitely not. Damage is done to customers, but the costs to eliminate it fall on the company. Why should the company invest more if there are no meaningful consequences for them?
What is the evidence for this?
The cost of identity fraud clocks in around $20bn a year [1]. A good fraction of that cost gets picked up (and thus managed) by financial institutions and merchants.
I’m sceptical we could harden our nation’s systems for a few billion a year.
[1] https://javelinstrategy.com/research/2024-identity-fraud-stu...
One factor is something you know (data), and the other is something you possess, or something you are (biometrics).
IdP issues both factors, identification is federated to them.
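To make the federation idea concrete, here is a minimal sketch under assumed names (not any particular IdP's API): the relying party accepts an identity only if the assertion carries a valid signature from the IdP, instead of trusting knowledge factors like name or date of birth. Real deployments would use asymmetric signatures (OIDC/SAML style); HMAC just keeps the sketch inside Python's standard library.

    import base64
    import hashlib
    import hmac
    import json

    # Hypothetical key provisioned out of band between the IdP and the relying party.
    IDP_SHARED_KEY = b"example-shared-key"

    def verify_idp_assertion(assertion_b64: str, signature_hex: str):
        """Return the identity claims only if the IdP's signature verifies."""
        raw = base64.b64decode(assertion_b64)
        expected = hmac.new(IDP_SHARED_KEY, raw, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, signature_hex):
            return None  # forged or tampered assertion: reject
        return json.loads(raw)  # e.g. {"sub": "user-123", "amr": ["fingerprint"]}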
That kind of happens when you are required to supply a driver's license, which is technically something you possess and a federated ID if it's checked against the government system, but which can easily be forged with knowledge factors alone.
Unfortunately, banks and governments here use facial recognition for the second factor, which has big privacy concerns, and the tendency I think will be toward the federal government as sole IdP. Non-biometric factors might have practical difficulties at scale, but a fingerprint would be better than facial. It's already taken in most countries and could be easily federated. Not perfect, but better than the alternatives imo.
It's a strong factor if required in person, the problems start when accepting it remotely. But having to go to the bank seems like the past.
The challenge in cyber security is that the person potentially stealing your identity lives on the other side of the world and that's why the focus is on the end user to be as secure as they can. But if you have something stolen from you, you are still the victim.
The EU fixed the incentives with GDPR and DORA; that was the easy part. In theory, a company that doesn't follow "secure by design" will end up bankrupted by (revenue-dependent) fines. In practice the enforcement is lackluster, courts are lenient, and international cases take ages, even when both countries are in the EU.
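For a sense of what "revenue dependent" means at the GDPR's top tier (the greater of EUR 20 million or 4% of worldwide annual turnover), a toy calculation with an illustrative turnover figure:

    # GDPR-style cap: the greater of a fixed floor or 4% of global annual turnover.
    def gdpr_style_max_fine(annual_turnover_eur: float,
                            floor_eur: float = 20_000_000,
                            pct: float = 0.04) -> float:
        return max(floor_eur, pct * annual_turnover_eur)

    # An illustrative company with EUR 10bn turnover could face up to EUR 400m.
    print(gdpr_style_max_fine(10_000_000_000))  # 400000000.0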
Financial consequences to the company might be a deterrent, of course then you're dealing with hundreds or thousands of people potentially unemployed because the company was bankrupted by something as simple as a mistake in a firewall somewhere or an employee falling victim to a social engineering trick.
I think the path is along the lines of admitting that cloud, SaaS and other internet-connected information systems cannot be made safe, and dramatically limiting their use.
Or, admitting that a lot of this information should be of no consequence if it is exposed. Imagine a world where knowing my name, SSN, DOB, address, mother's maiden name, and whatever else didn't mean anything.
Consider the intent of not hiring enough security staff and supporting them appropriately. It looks a lot like an accident. You could even say it causes accidents.
Clearly it's possible.
If we were serious about preventing these kinds of things from happening, we could.
Bottom line, though: a good many folks here would loudly resist that kind of oversight of their work and their businesses, and for somewhat valid reasons. Data breaches hardly ever cause hundreds of deaths in a violent fireball.
If the consequences of an airline crash were just some embarrassment and some inconvenience for the passengers, they would happen a lot more.
Also, people almost never go to jail for airline crashes, even when they cause hundreds of deaths. We investigate them, and maybe issue new regulations, not to punish mistakes, but to try to eliminate the possibility of them happening again.
Insurance people will be happy to tell you the price of the average citizen's life. Estimate the total cost to the economy, divide by the average citizen's life-value and you have the statistical deaths caused by this type of incident. Draw a fireball next to it for dramatic effect.
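Sketching that arithmetic with the ~$20bn/year fraud figure cited elsewhere in the thread and a value of a statistical life in the ballpark US agencies use (roughly $10M; both numbers are rough assumptions):

    # Back-of-the-envelope "statistical deaths" from breach-enabled fraud.
    annual_fraud_cost_usd = 20e9      # identity-fraud cost cited in the thread
    value_of_statistical_life = 10e6  # assumed ~$10M VSL

    statistical_deaths = annual_fraud_cost_usd / value_of_statistical_life
    print(f"{statistical_deaths:.0f} statistical deaths per year")  # 2000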
Generally, I don't think _every_ SaaS needs to be tightly regulated, but everyone that handles customer data does. It would also very quickly make them stop hoovering up any data they can get their fingers on; instead they would have to learn how to provide their services securely without even having access to the data, because having that data suddenly becomes a liability instead of an opportunity.
This is not quite accurate. In the US for example, the NTSB investigates the causes of an incident, and the FAA carries out any subsequent enforcement action. Whereas the NTSB may rule the cause as pilot error due to negligence for example, the FAA may revoke the pilot's license and/or prosecute them in a civil case to the tune of a hundred thousand dollars and/or refer them to the Department of Justice for criminal prosecution.
I should probably clarify: there were two types of people who claimed that back then. Those trying to gaslight us, and those naive enough to actually believe the gaslighting. Severe negligence has to be proven, that is not easy, and there is a lot of wiggle room in court. Executives being liable for what they did during their term is just not coming, sorry kids.
Shareholders may well be based overseas, so it'd be very difficult to actually enforce the fines. They might also use overseas limited-liability investment corporations, so fines would just bankrupt those companies while the actual shareholders never fall below zero.
There's also the political issues that'd come from potentially giving fines to millions of people because their pension funds invested in a company that had a data breach.
Unless it's e2e encrypted (like in Proton Mail or Proton Drive), these incidents will occur. Manage your risk accordingly.
So who the hell was the "third-party, cloud-based CRM system"?
> This notice applies to individuals who participate in any of the following programs under the closed line of business:
> • Long term care
> • Medical
> • Medical supplemental
> • Hospital income
> • Cancer and disease specific coverage
> • Dental benefits
> The Covered Entity’s actions and obligations are undertaken by Allianz employees as well as the third parties who perform services for the Covered Entity. However, Allianz employees perform only limited Covered Entity functions – most Covered Entity administrative functions are performed by third party service providers.
It sold long term care insurance policies until 2010.
(Disclosure, I happen to have worked at Allianz Life a long time ago, though I have no nonpublic information about any of this.)
[0] https://www.allianzlife.com/-/media/Files/Allianz/PDFs/about...
So I think it matters, I think access systems should be designed with a wider set of human behaviors in mind, and there should be technical hurdles to leaking a majority of customers' personal information.
I’ve got another reply here with details but suffice it to say misconfigured Salesforce tenants are all over the internet.
The status quo, where nobody gives a crap and the regulators literally do nothing, cannot continue. In the UK, the ICO is as effective as Ofwat (the regulator that was just killed for being pointlessly and dangerously useless).
(Edit: fix autocorrect)
What happens to customers of the affected company in this case? Does this not now pass on a second problem to the people actually affected?
Security researchers, white-hat hackers, and even grey-hat hackers should have strong legal protections so long as they report any security vulnerabilities that they find.
The bad guys are allowed to constantly scan and probe for security vulnerabilities, and there is no system to stop them, but if some good guys try to do the same they are charged with serious felony crimes.
Experience has shown we cannot build secure systems. It may be an embarrassing fact, but many, if not all, of our largest companies and organizations are probably completely incapable of building secure systems. I think we try to avoid this fact by not allowing red-team security researchers to be on the lookout.
It's funny how everything has worked out for the benefit of companies and powerful organizations. They say "no, you can't test the security of our systems, we are responsible for our own security, you cannot test our security without our permission, and also, if we ever leak data, we aren't responsible".
So, in the end, these powerful organizations are both responsible for their own system security, and yet they also are not responsible, depending on whichever is more convenient at the time. Again, it's funny how it works out that way.
Are companies responsible for their own security, or is this all a big team effort that we're all involved in? Pick a lane. It does feel like we're all involved when half the nation's personal data is leaked every other week.
And this is literally a matter of national security. Is the nation's power grid secure? Maybe? I don't know, do independent organizations verify this? Can I verify this myself by trying to hack the power grid (in a responsible white-hat way)? No, of course not; I would be committing a felony to even try. Enabling powerful organizations to hide their security flaws in their systems, that's the default, they just have to do nothing and then nobody is allowed to research the security of their systems, nobody is allowed to blow the whistle.
We are literally sacrificing national security for the convenience of companies and so they can avoid embarrassment.
Pinky promise?
I think there’s a better solution somewhere in between doing nothing, and letting bumbling idiots recklessly fool with things they shouldn’t be messing with.
One thing that is not a good option is the status quo we're discussing here, in which a "bumbling idiot" can take down a city power grid. If that's how things are, then we shouldn't cower and hope we remain safe from every idiot out there; we need to shake things up and find the problems now. Hopefully without actually taking out any power grid.
The problem here is that most security testing is not just the hollywood narrative of "some people running nmap and finding critical vulnerabilities that take down the power grid". Plenty of the real-world security vulnerabilities in large-scale systems that do exist are at the interface between technology and humans, and those are the vulnerabilities that computer science often can't reasonably fix: social engineering, trust systems, physical-layer exploits, etc.
In securing any large system, there are going to be many low-impact issues that do exist but aren't necessarily important (or even desirable) to fix, because the cost of fixing them is too high and the likelihood of exploit is low because they are impractical as attack vectors. But legalizing the exploitation of these edge cases would guarantee you'd see issues, because you're creating a financial opportunity where there previously was not one.
For example: we don't need to incentivize a wave of thousands of script kiddies fiddling with their power meters, trying to social engineer support staff, running DoS scripts against the public website, etc. Those things aren't helpful in improving critical infrastructure, they're just going to cause a nuisance and make things difficult for people.
Also, we need to clarify the scenario because you said:
> the likelihood of exploit is low
but you also mention the need to stop people "accidentally" exploiting the system, so which is it?
A system that can be accidentally broken by bumbling idiots does not deserve protection IMO.
I didn't say anything about DDoS in my comment. DoS is a term referring to a loss of availability. Availability is one of the three fundamental parts to the CIA triad, so yes, it is absolutely something security researchers evaluate.
> Also, we need to clarify the scenario because you said:
> the likelihood of exploit is low
> but you also mention the need to stop people "accidentally" exploiting the system, so which is it?
I said "accidentally doing harm". For a real world exploit to happen, you have to have a couple of different things align. First, you need a vulnerability. Second, you need some way that somebody could exploit that vulnerability. Third, you need a reason that somebody's going to do it. A vulnerability simply existing isn't enough to make it a problem.
Now, in an academic lab environment, most people don't really care about the likelihood of exploit or the motivations of an attacker. Because the point is academic computer science.
But the people who secure systems in the real world have to care about the likelihood of exploitation and the motivations of their attackers, because it's not possible to secure everything in a production environment, where you also have to ensure the availability and usability of the system for your stakeholders. You always have to make a compromise between the two.
So, in the real world: the locality of the attacker, the legal environment, and the impact of the exploit all play very significant roles in how someone might weigh the significance of an exploit.
To make up a contrived example:
Let's say that all I have to do to cancel electricity service is create an online account using the information from a power bill and press the cancel button. There's an obvious exploit here. I could dig through my neighbor's trash, get a copy of their bill, create an account, and shut off their power.
Do we wanna legalize this activity? No, I don't think so. Are we at risk of a nation state exploiting this? No, probably not, because they don't have access to everyone's trash everywhere. Also, you couldn't really do this at scale because it would be obviously not intended. Should we require more authentication just to say we've plugged the hole? Also probably not. Electricity service has to be accessible to people. We can't require onerous authentication when many of the customers may be elderly, disabled, etc. Instead, we as a society solve this problem by making this activity a crime. And this works just fine, because anyone who has physical access is already in that legal jurisdiction as well.
I'm sure you can imagine dozens of other similar scenarios. The point is that information security is a lot more complicated than just adding authentication to a webpage. Information security isn't a technology problem. It's a people using technology problem.
I don't think we want to legalize activity similar to what is in my above scenario. That's the kind of situation where people may be accidentally causing harm that they wouldn't be doing now, because they would go to jail. But if you legalize it, people are going to do it in an attempt to monetize it.
We need something like the salvage law.
It's an unpopular idea because it's bullshit. Building secure systems is trivial and within the skill level of a junior engineer. Most of these "hacks" are not elaborate attacks utilizing esoteric knowledge to discover new vectors. They are the same exploit chains targeting bad programming practices, out-of-date libraries, etc.
Lousy code monkeys or mediocre programmers are the ones introducing vulnerabilities. We all know who they are. We all have to deal with them thanks to some brilliant middle manager figuring out how to cut costs for the org.
I'd suggest you try and build a secure system for > 150k employees before you make sweeping statements like that.
I worked for an SME that dealt with some sensitive customer data. I mentioned to the CEO that we should invest some time in improving our security. I got back that "what's the big deal, if anyone wants to look they can just look..."
This story spans a lot of different concerns, only a few of which are related to coding skills. Building secure software means defending in breadth, always, not fucking up once, against an armada of bots and creative hackers that only need to get lucky once.
I'm not sure which is worse:
1) Creating secure systems is hard, and we often fail at it.
2) Creating secure systems is easy, and we often fail at it.
I don't know which is worse, but I know for sure we often fail at it.
The issue is that there are too few repercussions for companies making software in shitty ways.
Each data breach should hurt the company in proportion to its size.
Equifax breach should have collapsed the company. Fines should be in tens of billions of dollars.
Then, under such a banhammer, software would be built correctly, security would be cared about, internal audits (real ones) would be done, and people would care.
Currently as things stand. There is ZERO reason to care about security.
If anything it's yet another point AGAINST them - if they can't guarantee secure software without the caveat of running on a closed hardware black box then it's not secure software.
Facebook was breached what last month?
Google is an ad company. They can’t sell data that’s breached. They basically do email, and with phishing at epidemic levels, they’ve failed the consumer even at that simple task.
All are too big to fail, so there is only Congress to blame. While people like Ro Khanna focus their congressional resources on the Epstein intrigue, citizens are having their savings stolen by Indian scammers, and there is clearly no interest and nothing on the horizon to change that.
source? A quick search suggests the "breach" is a bunch of credentials that got harvested/phished got leaked, not that facebook themselves got breached.
>Google is an ad company. They can’t sell data that’s breached. They basically do email, and with phishing at epidemic levels, they’ve failed the consumer even at that simple task.
In other words, they haven't been breached, but you still think they're bad people.
The context was privacy and people being victimized by Indian scammers. We know those scammers use Facebook to gather info and target victims, all without any actual breach taking place. To me, not having a breach does not make Facebook “good”.
"seems like" is doing a lot of the heavy lifting here. I'm not aware of instances where facebook was "selling personal info to third parties". It does use personal info to sell ads to third parties, but characterizing that as "selling personal info" is a stretch.
>We know those scammers use Facebook to gather info and target victims, all without any actual breach taking place.
This just sounds like "scammers are viewing public facebook profiles and using facebook messenger to communicate with victims", in that case I'm not sure how facebook deserves flak here.
Are you being facetious? Yes, yes, yes, they have.
If you want to see more for the same company, try appending "-{YEAR_OF_KNOWN_DATA_BREACH}" to skip the ones you've already read, though this will tend to exclude companies who have multiple data breaches in one year.
https://en.m.wikipedia.org/wiki/2018_Google_data_breach
https://www.npr.org/2021/04/09/986005820/after-data-breach-e...
https://support.microsoft.com/en-us/topic/national-public-da...
However, open intermediary victims up to contributory lawsuits and everyone will have to take security more seriously. Think twice before you connect that new piece of shit IoT device.
The penalty should be massive enough to effect changes in the business model itself. If you do not store raw data it cannot be exfiltrated.
The issue is not that it's illegal to put on a white hat, break into the user database, and steal 125 million accounts as proof of a security issue.
The problem is people getting sued for saying "Hey, I stumbled upon the fact that you can log into any account by appending the account-number to the url of your website.".
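That kind of flaw (an insecure direct object reference) usually comes down to one missing ownership check; a minimal sketch with invented names:

    # The vulnerable pattern: the handler trusts the account number in the URL.
    ACCOUNTS = {"1001": {"owner": "alice", "balance": 250},
                "1002": {"owner": "bob", "balance": 90}}

    def get_account_vulnerable(account_id: str) -> dict:
        # /account/<account_id> with no ownership check: anyone can read any account
        return ACCOUNTS[account_id]

    def get_account_fixed(session_user: str, account_id: str) -> dict:
        account = ACCOUNTS.get(account_id)
        if account is None or account["owner"] != session_user:
            raise PermissionError("not your account")
        return account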
There certainly is a line separating ethical hacking (if you can even call it hacking in some cases) from prodding and probing at random targets in the name of mischief and chaos.
But for white-hat hacking you want prowling. And it's very difficult to create technical definitions that productively distinguish "good" prowlers from "bad" prowlers. So why even try to draw a distinction between types of prowlers? Maybe prowling information systems online shouldn't be a crime at all, given the nature of information systems.
There are a lot of shades of grey that you are ignoring.
I think allowing red-teams to run wild is a better solution, but I can agree with other solutions too.
If those who claim sole responsibility want to be responsible, I'm okay with that too. I really just want us to pick a lane.
So again, are you willing to accept sole legal and financial liability?
> I say this often, and it's quite an unpopular idea, and I'm not sure why.
>
> Security researchers, white-hat hackers, and even grey-hat hackers should have
> strong legal protections so long as they report any security vulnerabilities
> that they find.
>
> The bad guys are allowed to constantly scan and probe for security
> vulnerabilities, and there is no system to stop them, but if some good guys
> try to do the same they are charged with serious felony crimes.
So let me get this straight: you want to give unsuccessful bad actors an escape hatch by claiming white-hat intentions when they get caught probing systems? Then there would be some sort of evidence the guy was a "good guy". Like when a cop shoots your dog and suffers no consequences.
There is a reason they are popular for security roles.
That's not the same as a white-hat license, but it shows that you registered, that you made it clear where you're at, and that you've had some minimum ethical and professional training.
I do work in security. The average person would write this off as "oh, just shitty software" and do nothing about it; however, when you know what the error means and how the software works, errors are easy to turn into exploits.
I once had a bank account that fucked up data validation because I had '; in a 120-character transfer description. Immediately abusable SQL injection.
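For anyone wondering why a stray '; in free text matters: the difference is whether the description gets spliced into the SQL string or bound as a parameter. A minimal sketch with sqlite3 (table and column names made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE transfers (id INTEGER PRIMARY KEY, description TEXT)")

    description = "rent'; DROP TABLE transfers; --"  # hostile free-text input

    # Vulnerable: splicing the input into the statement lets it escape the quotes.
    # conn.executescript(f"INSERT INTO transfers (description) VALUES ('{description}')")

    # Safe: the driver binds the value, so the quote and semicolon stay plain data.
    conn.execute("INSERT INTO transfers (description) VALUES (?)", (description,))
    print(conn.execute("SELECT description FROM transfers").fetchone())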
After my first time reporting this OBVIOUS flaw to a bank, along with how it could be abused for both database modification and XSS injection, I had to visit local law enforcement with lawyers because they believed that 'hacking' had taken place.
I now report every vuln behind fake emails, on fake systems in non-extradition countries, accessed via proxy over VPN. Even then I have the legal system attempting to find my real name and location and threatening me with legal action.
Bad actors come from non-extradition countries which wouldn't even TALK to you about the problem. You'd just have to accept you got hacked and that would be the end of the situation.
It's people like yourself who can't see past the end of their nose to realise where the real threats are. You don't have "it straight".
I took it as a take on the face of the proposal: "hackers should have strong legal protections so long as they report any security vulnerabilities that they find."
As stated, it's ripe for abuse. Perhaps they could have been more charitable and assumed some additional implicit qualifiers. But defining those qualifiers is precisely the difficult part, perhaps intractably difficult.
In the US private investigators often require a license to work, but AFAIU that license doesn't actually exempt them from any substantive laws. Rather, it's more a mechanism to make it easier for authorities and citizens to excuse (outside the legal process) otherwise suspicious behavior.
Rather than give special protections to a certain class of people, why not define the crimes to not encompass normal investigative behaviors typical in the industry. In particular, return to stronger mens rea elements rather than creeping in the direction of strict liability. Adding technical carveouts could end up making for a harsher system; for example, failing to report in an acceptable manner (when, what, where, how?) might end up sealing the fate of an otherwise innocent tech-adept person poking around.
This would be an acceptable alternative, and may even be workable.
> failing to report in an acceptable manner (when, what, where, how?) might end up sealing the fate of an otherwise innocent tech-adept person poking around.
You've hit exactly the problem, I feel like you too might be working in this area. Not many people come to this kind of logical conclusion.
Until then, they'll continue to not care. The solution is not a legal framework presuming good samaritans will secure the networks and systems of the world.
It's literally part of their COGS.
Me, neither, if that helps.
Some people still live in places where you can leave your doors unlocked and not worry.
Leave it to the tech industry to bring Internet of Shit locks to your doorstep.
Would you be upset if in the course of their unsolicited work, these white/grey hats found your wife's nudes in the digital equivalent of kicking over a rock? Full legal protection of course.
Ignore whether they kept a copy for themselves for later use; they promised to delete them <wink>.
If you own a property where a million people live, that might not be a bad idea at all.
You're trying to say companies should have sole responsibility over their systems. I say, let them have sole legal and financial liability as well then.
The crime here is the tech. The companies aren’t to blame. Programmers and tech companies are. If there was no internet or “tech industry” we’d all be so much better off it’s painful to even contemplate.
This pattern comes up constantly, and it is extremely demoralizing.
I think there’s a lot of things that many people would agree should be protected. For instance, people who report vulnerabilities they just happen to stumble upon.
But on the other end of the spectrum, there are a lot of pen testing activities that are pretty likely to be disruptive. And some of them would be disruptive, even on otherwise secure systems, if we gave the entire world carte blanche to perform these activities.
There are certainly some realms of security where technology can solve anything, like cryptographic algorithms. But at the interface of technology and society, security still highly relies on the rule of law and living in a high trust society.
Also, if the things stored in those databases weren't plain strings but tokens (in the asymmetric-cryptography sense), such that only the service owns them and, in case of a leak, the user can use them to get a payout from the service, this problem would be solved.
But no business is interested in provably making their users secure, it would be a self-sabotage. It's always just a security theater.
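A simpler cousin of that idea, just to show what "not storing raw strings" could look like: keep a keyed pseudonym of the identifier instead of the identifier itself. This sketch uses Python's standard library and an HMAC rather than the asymmetric, payout-proving scheme suggested above, so it only covers the first half of the proposal:

    import hashlib
    import hmac

    # Hypothetical per-service secret; without it, leaked tokens alone
    # don't reveal the underlying SSNs.
    SERVICE_KEY = b"per-service-secret-kept-in-an-hsm"

    def tokenize(ssn: str) -> str:
        """Store this keyed pseudonym instead of the raw SSN."""
        return hmac.new(SERVICE_KEY, ssn.encode(), hashlib.sha256).hexdigest()

    def matches(ssn: str, stored_token: str) -> bool:
        """The service can still check a claimed SSN against what it stored."""
        return hmac.compare_digest(tokenize(ssn), stored_token)

    token = tokenize("123-45-6789")
    print(matches("123-45-6789", token))   # True
    print(matches("999-99-9999", token))   # False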
The risks associated with medical malpractice certainly slows the pace of innovation in healthcare, but maybe that’s ok.
I've worked with Allianz's cybersecurity personas previously on EBRs/QBRs, and the issue is they (like a lot of European companies) are basically a confederation of subsidiaries with various independent IT assets and teams, so shadow IT abounds.
They have subsidiaries numbering in the dozens, so there is no way to unify IT norms and standards.
There is an added skills issue as well (most DACH companies I've dealt with have only just started working on building hybrid security posture management - easily a decade behind their American peers), but it is a side effect of the organizational issues.
That is their choice though - they could set up a technology services subsidiary, and then provide IT services to the other subsidiaries, transparently to the end users in those subsidiaries.
LinkedIn, for example is a goldmine for social engineering, and there's no way to secure a profile from being viewed by logged-in users, even if they are unconnected.
I'm surprised more employers don't closely audit their employees' profiles.
Thank god that only customers' personal data was stolen. As long as the CEO's personal data or the board of directors' personal data is safe, nobody gives a fuck. /s
Our industry is pathetic.
But I think that fundamentally, secure cloud-based SaaS is impossible. This stuff needs to be on-prem and airgapped from the internet. That makes some functionality complicated or impossible, but we're seeing that what we have now is not working.
We don't have any details now, but I wouldn't be surprised if the cloud-based CRM provider didn't have a technically interesting weakness, but rather that some kind of social-engineering method was used.
If global companies like this instead had stuff running on-prem all around the world, technical vulnerabilities seem MORE likely to me.
(Air gapping is of course possible, but in my experience, outside of the most security-sensitive areas the downsides are simply not acceptable. Or the "air gapping" is just the old "hard shell" / perimeter-based access model...)