Update: obviously I just skimmed this, per responses below.
* Port 443 exposed to the internets. This can allow attackers to gain access to information you have. $10k fee for discovery.
* Your port 443 responds with "Server: AmazonS3" header. This can allow attackers to identify your hosting company. $10k fee for discovery.
Please remit payment and we will offer instructions for remediation.
> During our conversation, the Cerca team acknowledged the seriousness of these issues, expressed gratitude for the responsible disclosure, and assured me they would promptly address the vulnerabilities and inform affected users.
Well, that was the decent thing to do, and they did it. Beyond that it is their internal problem, especially since they did fix the issue, according to the article.
Engineers can be a little too open and naive. Perhaps his first contact was with the technical team, but then management and the legal team got hold of the issue and shut it down.
> Well, that was the decent thing to do, and they did it. Beyond that it is their internal problem, especially since they did fix the issue, according to the article.
They didn't inform anyone, as far as I can tell. Users, especially, need(ed) to be informed.
It's also at least good practice to let security researchers know the schedule of when it's safe to inform the public; otherwise, future disclosures will be chaotic.
Not clear why "the public" should be informed, either.
Ultimately they thanked the researcher and fixed the issue, job done.
https://portal.ct.gov/ag/sections/privacy/reporting-a-data-b...
I doubt this is an engineering team’s naivete meeting a rational legal team’s response. I’d guess it’s rather facing marketing or management naivete that sticking your head in the sand is the correct way to deal with a potential data leak story.
Open for discussion - What would make them pay attention?
When I contacted the company about this, they didn't thank me or really acknowledge the problem. They fixed it about a month later by requiring login to view order URLs. I feel like they should have let their customers know all their PII data was exposed - I know they didn't, I never got such a notification.
If they're scared of such things, then maybe they shouldn't be making and marketing a dating app. It's not 2003 anymore, and this isn't some innocent app - they're collecting information on passports and sexual preferences for thousands of people. They should be aware of the responsibility that ought to come with that.
If you just text out passwords to anybody who asks, are they really doing unauthorized access? Lol.
I’m sure it was illegal somehow, though.
> "The attorney for the government should decline prosecution if available evidence shows the defendant’s conduct consisted of, and the defendant intended, good-faith security research."
https://www.wired.com/2013/03/att-hacker-gets-3-years/
I am not endorsing this interpretation of the CFAA, but this kid needs a lawyer.
They may have "patched" the ability to exploit it in this way, but the plaintext data is still there in that same fragile architecture and still being handled by the same org that made all of these same fundamental mistakes in the first place. Yikes.
As you are probably well aware, we do not live in that world. Companies like Equifax can suffer breaches exposing the personal information of millions and stock still goes up.
Companies don’t like to talk about this, and they bury these costs deep down in their financial statements. But trust me, they’re quite substantial.
I wouldn't say there's no penalty (they might have to pay for a year of identity theft protection or a fine).
I agree that the consequences are not in line with the damage to the public or customer base.
Transmitting information via HTTPS is usually enough to say your app uses "encryption and other industry-standard measures to protect your data."
That's when it's time to inform them that you are dumping the vuln to the public in 90 days due to their silence.
That doesn't make it right, and the treatment of the researcher here was completely inappropriate, but telling young researchers to just go full disclosure without being careful about documentation, legal advice and staying within the various legal lines is itself irresponsible.
It's an especially superficial argument on this story, where the underlying vulnerability has essentially already been disclosed.
They are public and intended to be publicly accessed. A clever teenager [1] noticed -- hey, is that a sequential serial number? Well, yes it was. And so he downloaded all the FOIA documents. Well, it turns out they weren't all public. The government hosted all the FOIA documents that way, including self-disclosures (which include sensitive information and are only released to the person the information is about). They never intended to publicly release a small subset of those URLs. (Even though they were transparently guessable.)
Unauthorized access of a computer system carries up to 10 years in prison. The charges were eventually dropped [2] and I don't think a conviction was ever likely. Poor fellow still went through the whole process of being dragged out of bed by armed police.
[1] https://www.cbc.ca/news/canada/nova-scotia/freedom-of-inform...
[2] https://www.techdirt.com/2018/05/08/police-drop-charges-file...
Edit: should have read the linked article before commenting. It totally wasn't, and the charges were dropped...after thoroughly harassing the kid.
Following up on the threat is much less common, and the best way to prevent that (IMO) is to remove the motivation to do so: once the vuln is public, further threats can't prevent the publication and would only draw more negative attention to the company, so the company has far fewer incentives to threaten, or to follow up on threats already made.
It's not a guarantee; you can always hit a vindictive and stupid business owner. But usually publishing in response to threats isn't just the right thing to do (to discourage such attempts) but also the smart thing to do (to protect yourself).
I'm so tired of researchers bringing a serious vuln to a company only to be met with silence and/or resistance, on top of the company never alerting its users about it.
We are literally sacrificing national security for the convenience of wealthy companies.
That presumes perfect communication, which is never the case for security vulnerabilities in a consumer application.
Maybe to make it easier to build the form accepting the OTP? Oversight?
I can't think of any other reasons.
When Pinterest's new API was released, they were spewing out everything about a user to any app using their OAuth integration, including their 2FA secrets. We reported and got a bounty, but this sort of shit winds up in big companies' APIs, who really should know better.
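The usual defense is boring: serialize through an explicit allowlist instead of dumping the whole user record. A minimal sketch (all names hypothetical):

    // Hypothetical record: secrets live right next to harmless fields,
    // so "return the whole object" leaks by default.
    interface UserRecord {
      id: string;
      username: string;
      email: string;
      totpSecret: string;   // must never leave the server
      passwordHash: string; // likewise
    }

    // Only fields named here can ever reach an API response.
    function toPublicProfile(u: UserRecord): { id: string; username: string } {
      return { id: u.id, username: u.username };
    }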
It’s very sensible and an obvious solution if you don’t think about the security of it.
A dating app is one of the most dangerous kinds of app to make, due to all the necessary PII. This is horrible.
This is big brain energy. Why bother needing to make yet another round trip request when you can just defer that nonsense to the client!
^another article on this
https://georgetownvoice.com/2025/04/06/georgetown-students-c...
They should feel bad about not communicating with the "researcher" after the fact, too. If i had been blown off by a "company" after telling them everything was wide open to the world for the taking, the resulting "blog post" would not be so polite.
STOP. MAKING. APPS.
There's nothing wrong with making your POC/MVP with all of the cool logic that shows what the app will do. That's usually done to gain funding of some sort, but it happens before releasing. Part of the releasing stage should be a revamped, production-hardened version of the POC, and not the damn POC itself. The hardened version is where the security work gets added.
That's much better than telling people stop making apps.
If all of the developers were named and shamed, would you, as a hiring manager, ever hire them to develop an app for you? Or would you, in fact, tell them to stop making apps?
They enabled stalkers. There's no possible way to argue that they didn't, you don't know, and some random person just looked into it because their friends mentioned the app and found all of this. I guarantee that if anyone with a modicum of security knowledge looks the platform over, there are going to be a lot more issues.
It's one thing to be curious and develop something. It's another to seek VC/investments to "build out the service" by collecting PII and not treating it as such. Stop. Making. Apps.
Also, if we're talking about a company that had a hiring manager in the process of making an app and did not hire employees with security knowledge somewhere in the process, then the entire company is rotten.
Let me flip this on its head, though, with your same logic: if you're the type of person who would willingly provide an app your passport information... Stop. Using. Apps.
The disclosure didn't show every API endpoint, just a few dealing with auth and profiles. They also mentioned only a few pieces of PII; you can tell because there were multiple screenshots spread throughout the post. I'm harping on the passport for the reason you specify, too; but mostly because that information shouldn't be stored...
This can only be solved by regulation.
"Class Immobility" (95% of users unlock this without trying!)
How to unlock: Be denied access to an accredited education. Work twice as hard for half the recognition. Watch opportunities pass you by while gatekeepers congratulate themselves!
At the end of the day the masses will finally get tired of the fuckery of programmers doing whatever they want and start putting laws in place, and the laws will be passed by the stupidest people among us.
Programmers should start looking into standards of professional behavior now, before they are forced on them by law.
And sure, if your follow-up is "that won’t change," I get it, but that doesn’t mean the open nature of programming is the problem.
>At the end of the day the masses will finally get tired of the fuckery of programmers doing whatever they want and start putting laws in place, and the laws will be passed by the stupidest people among us.
I agree laws will pass eventually but it won't start from the people. They rarely even think or hear about it and there are no repercussions so those voices are easily forgotten. Eventually, it will be some big tech corp executive or politician moving into government convincing them to create a security auditing authority to extract money from these companies and/or shut them down.
I'm sure we can find some holier than thou types to fill chairs with security auditors for the new "SSC" once it's greenlit.
Nonetheless: a "two-month-old vulnerability" and a "two-month-old student-made app/service".
It's hard to tell these days what is real.
LinkedIn shows it was founded in 2024, with 2-10 employees. And that same LinkedIn page has a post which directly links to this blurb: https://www.readfeedme.com/p/three-college-seniors-solved-th...
The date of this article is May 2025, and it references an interview with the founders.
You know what else was an app built by university students? The Facebook. We're all familiar with the "dumb fucks" quote, with Meta's long history of abusing their users' PII, and their poor security practices that allowed other companies to abuse it.
So, no. This type of behavior must not be excused, and should ideally be strongly regulated and fined appropriately, regardless of the age or experience of the founders.
Apps on the app store are hardly much better than anywhere else.
iOS users still spend more per user on apps than Android users, even though Android has more users, I think?
Better to make secure operating systems that inform users of bad access patterns and let the developers be free to produce.
Nothing protects you from giving info to a broken backend though, so people should be more cautious and repercussions for insecure backends should be higher.
But if they are asking for your passport, then they have access to it. It's not a third party asking and providing them with some checkmark or other reduced risk data.
Or by someone "government-like" such as Apple or Google.
Governments should not be confirming shit.
But for something like a dating site, it's enough for the API to just return a boolean verified/not-verified for the ID status (or an enum of something like 'not-verified', 'passport', 'drivers-license', etc.). There's no real need to display any of the details to the client/UI.
(In contrast with, say, an airline app where you need to select an identity document for immigration purposes, where you'd want to give the user more details so they can make the choice. But even then, as they do in the United app, they only show the last few digits of the passport number... hopefully that's all that's sent over their internal API as well.)
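To make that concrete, a minimal sketch of such a response shape (TypeScript, all names hypothetical):

    // The client only ever learns the verification status,
    // never the underlying document details.
    type IdVerificationStatus = "not-verified" | "passport" | "drivers-license";

    interface ProfileResponse {
      id: string;
      displayName: string;
      idVerification: IdVerificationStatus; // no document number, DOB, or image URL
    }

The raw document fields can stay server-side, keyed to the account, without ever appearing in an API payload.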
Before I was allowed to hand out juice cups at my kids' preschool, I had to do a 2 hour food safety course and was subject to periodic inspections. That is infinity% more oversight than I received when storing highly sensitive information for ~10^5 users.
Though I'm sceptical it would help. API design is generally not taught in university courses, and perhaps shouldn't (too specific).
I instead feel that GDPR has already done a lot of heavy lifting. By raising the price of "find out", people got a bit more careful about the "fuck around" part. It seems to push companies to take it seriously.
Step two is forcing companies to take security breaches and security disclosures seriously, which the CRA (Cyber Resilience Act) may help with... at the cost of the swamps of bureaucratic overhead that are also included, of course.
>First things first, let’s log in. They only use OTP-based sign in (just text a code to your phone number), so I went to check the response from triggering the one-time password. BOOM – the OTP is directly in the response, meaning anyone’s account can be accessed with just their phone number.
They don't explain it, but I'm assuming that the API is something like api.cercadating.com/otp/<phone-number>, so you can guess phone numbers and get OTP codes even if you don't control the phone numbers.
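If that guess is right, the handler presumably looked something like the first route below, and the fix is the second (an Express-style sketch; the endpoint and helpers are assumptions, not Cerca's actual code):

    import express from "express";
    import { randomInt } from "crypto";

    const app = express();
    const otpStore = new Map<string, string>(); // server-side code store

    // Stand-in for a real SMS gateway call.
    async function smsSend(phone: string, code: string): Promise<void> {
      console.log(`(pretend SMS) to ${phone}: ${code}`);
    }

    // Flawed pattern: the endpoint that sends the OTP also echoes it back,
    // so anyone who can guess a phone number gets the code.
    app.post("/otp/:phone", async (req, res) => {
      const code = String(randomInt(100000, 1000000));
      otpStore.set(req.params.phone, code);
      await smsSend(req.params.phone, code);
      res.json({ success: true, otp: code }); // <-- the bug
    });

    // Fix: keep the code server-side; the response proves nothing.
    app.post("/v2/otp/:phone", async (req, res) => {
      const code = String(randomInt(100000, 1000000));
      otpStore.set(req.params.phone, code);
      await smsSend(req.params.phone, code);
      res.json({ success: true });
    });

    app.listen(3000);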
>The script basically just counted how many valid users it saw; if after 1,000 consecutive IDs it found none, then it stopped. So there could be more out there (Cerca themselves claimed 10k users in the first week), but I was able to find 6,117 users, 207 who had put their ID information in, and 19 who claimed to be Yale students.
I don't know if the author realizes how risky this is, but this is basically what weev did to breach AT&T, and he went to prison for it.[0] Granted, that was a much bigger company and a larger breach, but I still wouldn't boast publicly about exploiting a security hole and accessing the data of thousands of users without authorization.
I'm not judging the morality, as I think there should be room for security researchers to raise alarms, but I don't know if the author realizes that the law is very much biased against security researchers.
[0] https://en.wikipedia.org/wiki/Goatse_Security#AT&T/iPad_emai...
The bigger thing is just that there's no actual win in scraping here. It doesn't make the vulnerability report any more interesting; it just reads like they're trying to make the whole thing newsier. Some (very small) risk, zero reward.
They mention guessing phone numbers, and then the API call for sending the OTP... literally just returns the OTP.
From context, I assume it's closer to the latter, but it would have been helpful for the author to explain it a bit better.
A few people try to warn you but you choose not to listen and, in fact, you recruit the government to make it easier to enter such places with safeguards that don't actually protect you from the disease and encourage you to enter more frequently.
You're then surprised that you're ill to the brink of death, and you blame the location as the sole cause of your ills. Yes, the location is to blame, but so are you for continuing to enter even after getting sick.
Why do you do this? Because you want something. Convenience, pleasure, a distraction, etc. But you refuse to acknowledge that it's killing you.
This is how we should view optional services that require us to give our PII data in exchange for hours of attention-grabbing content. They're designed to sell your eyeballs and data to advertisers. You know this already but you can't say no. You're sick and refuse to acknowledge it.
This is a nice fantasy, but realistically it means you shouldn't use probably 90% of services out there, which isn't reasonable for most people. Plus, there are plenty of companies with treasure troves full of data on you that have equally questionable data security/privacy practices that you've never even directly interacted with.
We need regulation. There is no other alternative. And we need to stop blaming victims of data breaches for companies not putting basic security measures in place. I don't think it's unreasonable to expect every company you interact with to securely store your sensitive data. If a place was physically making people ill like in your thought experiment, they wouldn't be around for very long; I think we should demand the same for our data.
It's some self-promo or whatever scheme/scam bullshit.
And I posted this blog because I think people will find it interesting!
Happy to answer any other questions when I get back to my computer :)
A/B testing of post names seems to yield some useful information ;)
I don't see your reference to "Georgetown students..." in either the website link or the user's submissions? Was it modified?
To limit his legal exposure as a researcher, I think it would have been enough to create a second account (or ask a friend to create a profile and get their consent to access it).
You don't have to actually scrape the data to prove that there's an enumeration issue. Say your id is 12345, and your friend signs up and gets id 12357 - that should be enough to prove that you can find the id and access the profile of any user.
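A consensual version of that check can be tiny; something like this (hypothetical host, ids, and endpoint):

    // My id is 12345; a consenting friend signed up and got 12357.
    const FRIEND_ID = 12357;
    const BASE = "https://api.example-dating-app.com"; // hypothetical

    // Returns true if my token can read my friend's profile by guessed id:
    // a 200 with someone else's PII is the whole proof. No scraping needed.
    async function canEnumerate(myToken: string): Promise<boolean> {
      const res = await fetch(`${BASE}/users/${FRIEND_ID}`, {
        headers: { Authorization: `Bearer ${myToken}` },
      });
      return res.ok;
    }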
As others have said, accessing that much PII of other users is not necessary for verifying and disclosing the vulnerability.
While you can definitely want PII protected, scraping that data to prove a point is unnecessary and hypocritical.
They'll say things like...
"Well, how long will that take?"
or, "What's really the risk of that happening?"
or, "We can secure it later, let's just get the MVP out to the customer now"
So, as an employee, I do what my employer asks of me. But, if somebody sues my employer because of some hack or data breach, am I going to be personally liable because I'm the only one who "should have known better"?
Not sure where you are located, but I don't know of any case where an individual rank-and-file employee has been held legally responsible for a data breach. (Hell, usually no one suffers any consequences for data breaches. At most the company suffers a token fine and they move on without caring.)
A few years ago I was put in the situation where I needed to do this and it created a major shitstorm.
“I’m not putting that in writing” they said.
However it did have the desired effect and they backed down.
You do need to be super comfortable with your position in the company to pull that stunt though. This was for a UK firm and I was managing a team of DevOps engineers. So I had quite a bit of respect in the wider company as well as stronger employment rights. I doubt I’d have pulled this stunt if I was a much more replaceable software engineer in an American startup. And particularly not in the current job climate.
But yea, the lack of security standards across organizations of all sizes is pitiful. Releasing new features always seems to come before ensuring good security practices.
That’s the best way I can think of to align incentives correctly. Right now there’s very little downside to storing as much user information as possible. Data breach? Just tweet an apology and keep going.
This is a little extreme IMO. PII encompasses a lot of data, including benign things like email address stored only for authentication and contact purposes.
I forgot my password.
Type your username:
Your password is hunter2.
Vibes.
"Sorry, you can't use password qwerty123. This password is already used by user SweetLemon13115"
I requested my data, and all the image URLs are publicly accessible - and the URLs provided include both your own images and the images of anyone who'd ever viewed your profile.
Years later I saw their Instagram ad and tried to see if the issue still existed, and it did. Basically, anyone with knowledge of their API endpoints (which are easy to find using an app proxy server) has full-on admin capabilities and access to all messages, matching, etc.
I wonder if I should go back and try again... :-?
under the hood they're all the same, just with different theming and market segmentation
swyx•3h ago
There can always be another side to this story, but also: wtf. This kind of shit makes me want to Charles-proxy every new app I run, because who knows what security any random startup has.
genewitch•1h ago
Years ago there was a firmware for Mango travel routers that let you MITM anything connected to them, and I bought two, but then the information about how to set it up disappeared (I can't find it). The GL.iNet Mango travel routers, is what I mean. I have one wireguarded, with the switch set to shut off access or allow WireGuard only; the other one is for IoT devices and is connected via 10mbit, so even if someone managed to hack one of the two IoT things here they couldn't exfil very much, and I'd notice the blinking.
andrewmcwatters•1h ago
nerdsniper•1h ago
Certificate pinning frustrates Charles by hampering MITM attempts. It can be difficult to extract/replace pinned certificates from the latest versions of Android/iOS apps. Often you can extract them from older versions using specialized tools, if old-enough versions exist and those certificates are still valid for API endpoints of interest.
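For anyone unfamiliar, the idea sketched in Node terms (mobile frameworks have their own pinning APIs; the fingerprint here is a placeholder):

    import https from "https";
    import tls from "tls";

    // Placeholder for the SHA-256 fingerprint the client ships with.
    const PINNED_FP = "<expected sha256 fingerprint>";

    const agent = new https.Agent({
      // Run the normal hostname check, then require the certificate to
      // match the pin. A Charles-style MITM proxy presents its own cert,
      // fails the pin, and the connection is refused.
      checkServerIdentity(host, cert) {
        const err = tls.checkServerIdentity(host, cert);
        if (err) return err;
        if (cert.fingerprint256 !== PINNED_FP) {
          return new Error("certificate pin mismatch");
        }
        return undefined;
      },
    });

    https.get("https://api.example.com/", { agent }, (res) => {
      console.log(res.statusCode);
    });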
andrewmcwatters•1h ago
It's like saying IDA Pro is just an interesting piece of software for looking at binaries, but the grandparent comment is surely from someone who doesn't look at these utilities, so I guess that's why I didn't press it.