A) in EU; GDPR will trump whatever BS they want to try B) no confirmation affected users were notified C) aggro threats D) nonsensical threats, sourced to Data Privacy Officer w/seemingly 0 scruples and little experience
Due to B), there's a strong responsibility rationale.
Due to rest, there's a strong name and shame rationale. Sort of equivalent to a bad Yelp review for a restaurant, but for SaaS.
Why sign anything at all? The company was obviously not interested in cooperation, but in domination.
So you never report to the actual organization but to a security organization, like you did. They would be better equipped to deal with this, could validate how serious the issue is, and could assign a reward as well.
So you're the researcher: you report your finding and can't be sued or bullied by the organization that is at fault in the first place.
Right now the climate in the world is that whistleblowers get their careers and livelihoods ended. This has been going on for quite a while.
The only practical advice is to ignore that it exists, refuse to ever admit to having found a problem, and move on. Leave zero paper trail or evidence. It sucks, but it's career-ending to find these things and report them.
The idea is to make it easier to fix the vulnerability than to sue to shut people up.
For credit assignment, the person could direct people to the nonprofit's website, which would confirm the discovery via a CVE without exposing too many details that would allow the company to come after the individual.
Edit: I looked into it a bit and things seem to check out; this person has scuba diving certifications on their LinkedIn and the site seems real and high-effort. While I also don't have solid proof that it's not AI-generated, making accusations like this based on no evidence doesn't seem good at all.
You can also see the format and pacing differs greatly from posts on their blog made before LLMs were mainstream, e.g. https://dixken.de/blog/monitoring-dremel-digilab-3d45
While I wouldn't go so far as to say the post is entirely made up (it's possible the underlying story is true), I would say it's very likely that the OP used an LLM to edit or write the post.
I saw one or two sigils (e.g. a little eager to jump to lists)
It certainly has real substance and detail.
It's not, like, generic LinkedIn post quality.
You could tl;dr it to "autoincrementing user IDs and a default password = vulnerability, and the company responded poorly" and react with "Jeez, what a waste of time, I've heard 1000 of these stories."
I don't think that reaction is wrong, per se, and I understand the impulse. I feel this sort of thing more and more as I get older.
But, it fitting into a condensed structure you're familiar with isn't the same as "this is boring slop." Moby Dick is a book about some guy who wants revenge, Hamlet is about a king who dies.
Additionally, I don't think what people will take from what you wrote is necessarily what you meant. Note the other reply at this time: you're so confident and dismissive that they assume you're saying the article should be removed from HN.
I assure you, the incompetence in both securing systems and operating these vulnerability management systems and programs is everywhere. You don't need an LLM to make it up.
(my experience is roughly a decade in cybersecurity and risk management, ymmv)
Regarding your allergy, my best guess is that it is generated by Claude, not ChatGPT, and they have different tells, so you may be sensitive to one but not the other. Regarding plausibility, that's the thing that LLMs excel at. I do agree it is very plausible.
Nothing in the original message refers to it being clickbait; the core complaint is the LLM-like tone and the lack of substance, which, ironically, you also just threw out there without references.
> What, exactly, is the problem with disclosing the nature of the article for people who wish to avoid spending their time in that way?
It's alright as long as it's not based on faith or guesswork.
[1] Unlike LLM-generated articles, posting LLM-generated comments is actually against the rules.
You also have to take into account that the medium is the message[1]. In a nutshell, the more people read LLM-generated posts and interact with chatbots, the more LLM style influences their own writing -- the whole "delve" thing comes to mind, and double dashes. So even if you have a machine that correctly identifies LLM-generated posts, you can't be sure it'll keep working.
[1] https://web.mit.edu/allanmc/www/mcluhan.mediummessage.pdf
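To make that drift concrete, here's a toy sketch of the kind of style-marker heuristic people lean on (Python; the marker list and everything else is invented for illustration, not a real detector). The argument above is exactly why something like this decays: the markers bleed into ordinary human writing.

    # Toy style-marker scorer -- markers are made up for illustration.
    MARKERS = {"delve", "tapestry", "moreover", "furthermore", "--"}

    def style_score(text: str) -> float:
        """Return marker hits per 1000 words."""
        words = text.lower().split()
        hits = sum(1 for w in words if w.strip(".,!?") in MARKERS)
        return 1000.0 * hits / max(len(words), 1)

    # Any fixed cutoff rots as human writing absorbs LLM style.
    print(style_score("Let us delve into this rich tapestry -- moreover ..."))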
Let's say you are the LLM-detecting genius you paint yourself to be. Well, guess what? You're human, and you're going to make mistakes, if you haven't made a bunch of them already. So if you have nothing better to add to a post than this guess, you probably shouldn't say anything at all. Like you said, it's not even against the rules.
I also enjoy all the "vibes" people list out for why they can tell, as though there was any rhyme or reason to what they're saying. Models change and adapt daily so the "heading structure" or "numbered list" ideas become outdated as you're typing them.
The same could be said of the accusation being levied here.
Lmao, Apple will not do anything about actual malware even when it's reported with receipts, besides sending you a form letter assuring you "experts will look into it, now fuck off", and then never contacting you again. Ask me how I know. To their credit, I suspect they ran it through useless rudimentary automated checks, which passed, and they were back in business like a day later.
If your expectation is they will do something about shitty coding practices half the App Store would be banned.
Ask while you are in an EU country, request an appeal, and initiate out-of-court dispute resolution.
Or better yet: let the platform suck, and let this be the year of the linux desktop on iPhone :)
Generally speaking, I think case law has avoided shooting the messenger, but if you use your unauthorized access to find PII on minors, you may be setting yourself up for problems, regardless of whether the goal is merely dramatic effect. You can, instead, document everything and hypothesize about the potential risks of the vulnerability without exposing yourself to accusations of wrongdoing.
For example, the article talks about registering divers. The author could ask permission from the next diver to attempt to set their password without reading their email, and that would clearly show the vulnerability. No kids "in harm's way".
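A minimal sketch of what that consent-based check could look like, assuming a hypothetical portal (the host, endpoint, parameter names, and default password below are placeholders, not the real service):

    # Hypothetical consent-based PoC: with the next diver's written
    # permission, try the provisioning default against their account and
    # record only pass/fail -- never read or store their data.
    import requests

    BASE = "https://portal.example"   # placeholder host
    DEFAULT_PW = "diver1234"          # placeholder default password

    def default_password_accepted(user_id: int) -> bool:
        r = requests.post(f"{BASE}/login",
                          data={"user": user_id, "password": DEFAULT_PW},
                          timeout=10)
        return r.ok

    # One request, one consenting account, result goes in the report.
    print("vulnerable:", default_password_accepted(10001))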
What are the odds an insurer would reach for a lawyer? They probably have several on speed dial.
Based on this interaction, you have to wonder what it's like to file a claim with them.
https://www.youtube.com/watch?v=O7NsjpiPK7o
An insurance company would not cover a decompression chamber for someone who has severe decompression sickness? It is a life-threatening condition that requires immediate treatment.
The idea that you possibly have neurological DCS and must argue on the phone with an insurance rep about whether you need to be life-flighted to the nearest chamber is just... mind-blowing.
There are only a few globally relevant diving insurers. DAN America is US based. DiveAssure is not Maltese. AquaMed is German. The one large diving insurer that is actually headquartered and registered in Malta is DAN Europe. Given that the organization is described as being registered in Malta and subject to Maltese supervisory processes, DAN Europe becomes the most plausible candidate based on structure and jurisdiction alone.
Huh, apparently they're registered in Malta, what a coincidence...
> every account was provisioned with a static default password
Hehehe. I failed countless job interviews for mistakes much less serious than that. Yet someone got the job while making worse mistakes, and there are plenty of such systems in production handling real people's data.
So it sort of makes sense that companies would go on the attack because there's a risk that their insurance company will catch wind and they'll be on the hook.
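For anyone who hasn't internalized why "static default password plus auto-incrementing IDs" is a whole-database problem rather than a single-account bug, some back-of-envelope numbers (all invented for illustration):

    # Sequential IDs mean one request per account is enough to test
    # the entire user base for the known default password.
    users = 50_000            # assumed portal size (made up)
    rate = 10                 # requests per second for a lazy script
    hours = users / rate / 3600
    print(f"~{hours:.1f} hours to sweep every account")  # ~1.4 hours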
By the way, I have a story about accidentally hacking an online portal at our school. It didn't go far and I was "caught", but anyway. This is how we learn to be more careful.
I believe that in every single system like that, it's fairly possible to find a vulnerability. Nobody cares about them, and the people who make those systems don't have enough skill to do it right. Data is going to be leaked. That's the unfortunate truth. It gets worse with the rise of AI. Since it has zero understanding of what it is actually doing, it will make mistakes that cause more data leaks.
Even if you don't consider yourself an evil person, would you stay the same knowing a real security vulnerability? Who knows. Some might take advantage. Some won't, and will still be punished despite doing everything the "textbook" way.
Unless absolutely necessary, do not get involved in any legal battles or anything potentially involving lawyers. Not getting involved is always less expensive and less problematic. Unless your ass is covered by some big and influential organisation, never fight on your own. No one was dying here; people's lives were not in danger. Acting hostile and poking a company into suing you won't bring you anything good. His post won't make waves big enough, and he might lose clients instead of gaining them.
He did everything exactly by the book and in the end was even nice enough to not publish the company's name, despite the legal threat being bullshit and him being entirely in the right.
The only sensible approach here would have been to cease all correspondence after their very first email/threat. The nation of Malta would survive just fine without you looking out for it and its online security.
Cold-approach vulnerability reports quite frankly scare non-technical organizations. It might be like someone you've never met telling you the door on your back bedroom balcony can be opened with a dummy key, and they know because they tried it.
Such organizations don't know what to do. They're scared, thinking maybe someone also took financial information, etc. Internal strife and lots of discussion, with plenty of wild speculation (as the norm), usually occur before any communication comes back.
It just isn't the same as what security-forward organizations do, so it often comes as a surprise to engineers when a "good deed" is taken as malice.
1) If you make legal disclosure too hard, the only way you will find out is via criminals.
2) If other industries worked like this, you could sue an architect who discovered a flaw in a skyscraper. The difference is that knowledge of a bad foundation doesn’t inherently make a building more likely to collapse, while knowledge of a cyber vulnerability is an inherent risk.
3) Random audits by passers-by are way too haphazard. If a website can require my real PII, I should be able to require that the PII is secure. I'm not sure what the full list of industries would be, but insurance companies should be categorically required to have a cyber audit, and those same laws should protect white hats from lawyers and allow class actions from all users. That would change the incentives so that the most basic vulnerabilities are gone, and software engineers become more economical than lawyers.
Here, all databases with personal information must be registered there, and the data must be secure.
The problem is this is literally a matter of national security, and currently we sacrifice national security for the convenience of wealthy companies.
Also, we all have our private data leaked multiple times per month. We see millions of people having their private information leaked by these companies, and there are zero consequences. Currently, the companies say, "Well, it's our code, it's our responsibility; nobody is allowed to research or test the security of our code because it is our code and it is our responsibility." But then, when they leak the entire nation's private data, it's no longer their responsibility. They're not liable.
As security issues continue to become a bigger and bigger societal problem, remember that we are choosing to hamstring our security researchers. We can make a different choice and decide we want to utilize our security researchers instead, for the benefit of all and for better national security. It might cause some embarrassment for companies though, so I'm not holding my breath.
xvxvx•1h ago
I came across a pretty serious security concern at my company this week. The ramifications are alarming. My education, training, and experience tell me one thing: identify, notify, fix. But when I bring it to leadership, their agenda is to take these conversations offline, with no paper trail, and kill the conversation.
Anytime I see an article about a data breach, I wonder how long these vulnerabilities were known and ignored. Is that just how business is conducted? It appears so, for many companies. Then why such a focus on security in education, if it has very little real-world application?
By even flagging the issue and the potential fallout, I’ve put my career at risk. These are the sort of things that are supposed to lead to commendations and promotions. Maybe I live in fantasyland.
calvinmorrison•1h ago
Simple as. Not your company? Not your problem. Notify, move on.
refulgentis•1h ago
I had a bit of a feral journey into tech: poor upbringing => self-taught college dropout waiting tables => founded an iPad point-of-sale startup in 2011 => sold it => Google from 2016 to 2023
It was absolutely astounding to go to Google, and find out that all this work to ascend to an Ivy League-esque employment environment...I had been chasing a ghost. Because Google, at the end of the day, was an agglomeration of people, suffered from the same incentives and disincentives as any group, and thus also had the same boring, basic, social problems as any group.
Put more concretely, a couple of vignettes:
- Someone with ~5 years of experience saying, approximately: "You'd think we'd do a postmortem for this situation, but, you know how that goes. The people involved think it's an organization-wide announcement that you're coming for them, and someone higher-ranked will get involved and make sure A) it doesn't happen or B) you end up looking stupid for writing it."
- A horrible design flaw that made ~50% of users take 20 seconds to get a query answered was buried, because a manager involved was the one who wrote the code.
bubblewand•59m ago
Whatever the selection process is for gestures broadly at everything, it's not selecting for being both (hell, often not for either) able and willing to do a good job, so far as what the job is apparently supposed to be. This appears to hold for just about everything, reputation and power be damned. Exceptions of high-functioning small groups or individuals in positions of power or prestige exist, as they do at "lower" levels, but aren't the norm anywhere as far as I've been able to discern.