Update: obviously I just skimmed this, per responses below.
* Port 443 exposed to the internets. This can allow attackers to gain access to information you have. $10k fee for discovery
* Your port 443 responds with "Server: AmazonS3" header. This can allow attackers to identify your hosting company. $10k fee for discovery.
Please remit payment and we will offer instructions for remediation.
> During our conversation, the Cerca team acknowledged the seriousness of these issues, expressed gratitude for the responsible disclosure, and assured me they would promptly address the vulnerabilities and inform affected users.
Well, that was the decent thing to do and they did it. Beyond that it is their internal problem, especially since they did fix the issue according to the article.
Engineers can be a little too open and naive. Perhaps his first contact was with the technical team, but then management and the legal team got hold of the issue and shut it down.
> Well, that was the decent thing to do and they did it. Beyond that it is their internal problem, especially since they did fix the issue according to the article.
They didn't inform anyone, as far as I can tell. Especially users need(ed) to be informed.
It's also at least good practice to let security researchers know the schedule for when it's safe to inform the public; otherwise, future disclosures will be chaotic.
Not clear why "the public" should be informed, either.
Ultimately they thanked the researcher and fixed the issue, job done.
Because it's the law in some states now.
Furthermore, mandated reporting requirements are how you keep companies from making stupid security decisions in the first place. Mishandling data this way should be a business-ending event.
https://portal.ct.gov/ag/sections/privacy/reporting-a-data-b...
I doubt this is an engineering team’s naivete meeting a rational legal team’s response. I’d guess it’s rather marketing or management naivete, assuming that sticking your head in the sand is the correct way to deal with a potential data leak story.
Then you have no duty to report the vuln to the company and instead should feel free to disclose it to the world.
A little politeness goes a long ways on both sides.
Open for discussion - What would make them pay attention?
When I contacted the company about this, they didn't thank me or really acknowledge the problem. They fixed it about a month later by requiring login to view order URLs. I feel like they should have let their customers know all their PII data was exposed - I know they didn't, I never got such a notification.
If they're scared of such things, then maybe they shouldn't be making and marketing a dating app. It's not 2003 anymore, and this isn't some innocent app - they're collecting information on passports and sexual preferences for thousands of people. They should be aware of the responsibility that ought to come with that.
If you just text out passwords to anybody who asks, are they really doing unauthorized access? Lol.
I’m sure it was illegal somehow, though.
> "The attorney for the government should decline prosecution if available evidence shows the defendant’s conduct consisted of, and the defendant intended, good-faith security research."
https://www.wired.com/2013/03/att-hacker-gets-3-years/
I am not endorsing this interpretation of the CFAA, but this kid needs a lawyer.
They may have "patched" the ability to exploit it in this way, but the plaintext data is still there in that same fragile architecture and still being handled by the same org that made all of these same fundamental mistakes in the first place. Yikes.
As you are probably well aware, we do not live in that world. Companies like Equifax can suffer breaches exposing the personal information of millions and the stock still goes up.
Companies don’t like to talk about this, and they bury these costs deep down in their financial statements. But trust me, they’re quite substantial.
I wouldn't say there's no penalty (they might have to pay for a year of identity theft protection or a fine).
I agree that the consequences are not in line with the damage to the public or customer base.
Transmitting information via HTTPS is usually enough to say your app uses "encryption and other industry-standard measures to protect your data."
That's when it's time to inform them you are dumping the vuln to the public in 90 days due to their silence.
That doesn't make it right, and the treatment of the researcher here was completely inappropriate, but telling young researchers to just go full disclosure without being careful about documentation, legal advice and staying within the various legal lines is itself irresponsible.
It's an especially superficial argument on this story, where the underlying vulnerability has essentially already been disclosed.
They are public and intended to be publicly accessed. A clever teenager [1] noticed -- hey, is that a sequential serial number? Well, yes it was. And so he downloaded all the FOIA documents. Well, it turns out not all of them were public. The government hosted all the FOIA documents that way, including self-disclosures (which include sensitive information and are only released to the person whom the information is about). They never intended to publicly release a small subset of those URLs. (Even though they were transparently guessable.)
Unauthorized access of a computer system carries up to 10 years in prison. The charges were eventually dropped [2] and I don't think a conviction was ever likely. Poor fellow still went through the whole process of being dragged out of bed by armed police.
[1] https://www.cbc.ca/news/canada/nova-scotia/freedom-of-inform...
[2] https://www.techdirt.com/2018/05/08/police-drop-charges-file...
Edit: should have read the linked article before commenting. It totally wasn't, and the charges were dropped...after thoroughly harassing the kid.
Following up on the threat is much less common, and the best way to prevent that (IMO) is to remove the motivation to do so: once the vuln is public, further threats cannot prevent the publication and would just draw more negative attention to the company, so the company has far fewer incentives to threaten or to follow up on threats already made.
It's not a guarantee; you can always hit a vindictive and stupid business owner, but usually publishing in response to threats isn't just the right thing to do (to discourage such attempts) but also the smart thing to do (to protect yourself).
I'm so tired of researchers being ignored when they bring a serious vuln to a company, only to be met with silence and/or resistance, on top of the company never alerting its users about it.
We are literally sacrificing national security for the convenience of wealthy companies.
On second thought, maybe physical buildings are not a good analogy.
Presuming perfect communication, which is never the case for security vulnerabilities in a consumer application.
Maybe to make it easier to build the form accepting the OTP? Oversight?
I can't think of any other reasons.
When Pinterest's new API was released, they were spewing out everything about a user to any app using their OAuth integration, including their 2FA secrets. We reported and got a bounty, but this sort of shit winds up in big companies' APIs, who really should know better.
It’s very sensible and an obvious solution if you don’t think about the security of it.
A dating app is one of the most dangerous kinds of app to make due to all the necessary PII. This is horrible.
This is big brain energy. Why bother needing to make yet another round trip request when you can just defer that nonsense to the client!
But they also said it was a project by two students. And I could absolutely see students (or even normal developers) who aren’t used to thinking about security make that mistake. It is a very obvious way to implement it.
In retrospect I know that my senior project had some giant security issues. There were more things to look out for than I knew about at that time.
I suspect it's a framework thing; they're probably directly serializing an object that's put in the database (ORM or other storage system) to what's returned via HTTP.
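As a hedged illustration of that foot-gun (all names below are hypothetical, not Cerca's actual code): if the stored record is serialized wholesale, every column goes out with it.

```python
# Hypothetical sketch of the "serialize the stored object" foot-gun;
# names are invented for illustration, not taken from the actual app.
from dataclasses import dataclass, asdict

@dataclass
class OtpRecord:          # what gets written to the database
    phone: str
    otp_code: str         # server-side secret; should never reach the client
    expires_at: float

def otp_endpoint_response(record: OtpRecord) -> dict:
    # The convenient one-liner many frameworks/ORMs encourage,
    # which silently ships otp_code to the caller.
    return asdict(record)
```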
^another article on this
https://georgetownvoice.com/2025/04/06/georgetown-students-c...
They should feel bad about not communicating with the "researcher" after the fact, too. If i had been blown off by a "company" after telling them everything was wide open to the world for the taking, the resulting "blog post" would not be so polite.
STOP. MAKING. APPS.
There's nothing wrong with making your POC/MVP with all of the cool logic that shows what the app will do. That's usually done to gain funding of some sort, but before releasing. Part of the release stage should be a revamped/weaponized version of the POC, and not the damn POC itself. The weaponized version should have security stuff added.
That's much better than telling people stop making apps.
If all of the developers were named and shamed, would you, as a hiring manager, ever hire them to develop an app for you? Or would you, in fact, tell them to stop making apps?
They enabled stalkers. There's no possible way to argue that they didn't, you don't know, and some random person just looked into it because their friends mentioned the app and found all of this. I guarantee if anyone with a modicum of security knowledge looks the platform over there's going to be a lot more issues.
It's one thing to be curious and develop something. It's another to seek VC/investments to "build out the service" by collecting PII and not treating it as such. Stop. Making. Apps.
Also, if we're talking about a company that had a hiring manager in the process of making an app and did not hire employees with security knowledge somewhere in the process, then the entire company is rotten.
Let me flip this on its head though with your same logic. If you're the type of person that would be willing to provide an app your passport information. Stop. Using. Apps.
The disclosure didn't show every API endpoint, just a few dealing with auth and profiles. They also mentioned only a few pieces of PII; you can tell because there were multiple screenshots spread throughout the post. I'm harping on passports for the reason you specify, too; but mostly that information shouldn't be stored...
Way back when I last used a dating site, a significant percentage of profiles ended up being placeholders for scams of some sort.
In fact, several texted me a link to some bogus "identity verification" site under the guise of "I get too many fake bot profile hits"... Read the fine print, and you're actually signing up for hundreds of dollars worth of pron subscriptions.
If the dating app itself verified people were real, AND took reports of spam seriously, AND kept that information in a way that wasn't insecure, it'd be worth it.
This can only be solved by regulation.
"Class Immobility" (95% of users unlock this without trying!)
How to unlock: Be denied access to an accredited education. Work twice as hard for half the recognition. Watch opportunities pass you by while gatekeepers congratulate themselves!
At the end of the day the masses will finally get tired of the fuckery of programmers doing whatever they want and start putting laws in place, and the laws will be passed by the stupidest people among us.
Programmers should start looking into standards of professional behavior now, before such standards are forced on them by law.
And sure, if your follow-up is "that won’t change," I get it, but that doesn’t mean the open nature of programming is the problem.
>At the end of the day the masses will finally get tired of the fuckery of programmers doing whatever they want and start putting laws in place, and the laws will be passed by the stupidest people among us.
I agree laws will pass eventually, but it won't start with the people. They rarely even think or hear about software security as anything other than an amorphous bogeyman, and there are no repercussions, so any voices are easily forgotten. Eventually, it will be some big tech corp executive or politician moving into government and convincing them to create a security auditing authority to extract money from these companies and/or shut them down.
I'm sure we can find some holier than thou types to fill chairs with security auditors for the new "SSC" once it's greenlit.
Nonsense. I've met PhDs in computer science who were easily outperformed by kids fresh out of coding bootcamps. Do you think that spending 5 years doing a few written exams makes you competent at cyber security? Absurd.
Should've known when they said interpreters and compilers.
Incidentally I replied with sarcasm to theirs as well so it all works out.
Until the balance of incentives changes, I don't see any meaningful change in behavior unfortunately.
Instead, I think this is the fair approach: anyone is free to make a website/app/VR world whatever, but if it stores any kind of PII, you had better know what you are doing. The problem is not security. The problem is PII. If someone's AWS key got hacked, leaked and used by others, well it's bad, but that's different from my personal information getting leaked and someone applying for a credit card on my behalf.
Nonetheless: a "two-month-old vulnerability" and a "two-month-old, student-made app/service".
It's hard to tell these days what is real.
LinkedIn shows it was founded in 2024, with 2-10 employees. And that same LinkedIn page has a post which directly links to this blurb: https://www.readfeedme.com/p/three-college-seniors-solved-th...
The date of this article is May 2025, and it references an interview with the founders.
You know what else was an app built by university students? The Facebook. We're all familiar with the "dumb fucks" quote, with Meta's long history of abusing their users' PII, and their poor security practices that allowed other companies to abuse it.
So, no. This type of behavior must not be excused, and should ideally be strongly regulated and fined appropriately, regardless of the age or experience of the founders.
Perhaps, like GDPR, HIPAA, and similar, any (web|platform) apps that contain login details and/or PII must thoroughly distance themselves from haphazard, organic, unprofessional, and (bad) amateurish processes and technologies, and conform to trusted, proven patterns, processes, and technologies that are tested, audited, and preferably formally proven for correctness. Without formalization and professional standards, there are no standards, and these preventable, reinvent-the-wheel-badly hacks will continue doing the same thing and expecting a different result™. Massive hacks, circumvention, scary bugs, and other attacks will continue. And I think this means a proper amount of accreditation, routine auditing, and (the scary word, but applied smartly) regulation to drag the industry (kicking and screaming if need be, with appropriate leadership shown on the government/NGO-SGE side) from an under-structured wild west™ into professionalism.
It's astonishing to me the ease with which software developers can wreak _real_, measurable damage to billions of lives and have no real liability for it.
Software developers shouldn't call themselves engineers unless they're licensed, insured and able to be held liable for their work in the same way a building engineer is.
There are all sorts of failures in the structural space. How many pumped reinforced concrete buildings are being built in Miami right now? How many of them will be sound in 50-75 years? How likely is the architect/PE’s ghost to get sued?
PE’s are smart professionals and do a valuable service. But they aren’t magic, and they all have a boss.
I think there's definitely a line between a bug in your puzzle app, which doesn't need a license, and AI that drives your $50k+ Tesla.
Civil engineering requires licensing because there are specific activities that are reserved for licensed engineers, namely things that can result in many people dying.
If a major screwup doesn't even motivate victims to sue a company then a license is not justified.
Observing that each individual harm may not be worth the effort of suing over is evidence that the justice system is not effective at addressing harm in the aggregate, not evidence of lack of major harm.
https://en.wikipedia.org/wiki/2017_Equifax_data_breach
Or how about four suicides and 900+ wrongful convictions?
https://en.wikipedia.org/wiki/British_Post_Office_scandal
Not to mention the various dating app leaks that led to extortion, suicides and leaking of medical information like HIV status. And not to forget the famous Therac-25 that killed people as direct result of a race condition.
Where's the threshold for you?
I'm not saying I'm pro identity theft or data breaches or something, but the industry culture is vastly different.
People here are pro some version of "move fast and break things"; I think you just can't, tbh.
The distinction between creating virtual software and physical structures is fairly obvious.
Of course physical engineers that create buildings and roads need to be regulated for safety.
And there are restrictions already for certain software industries, such as healthcare.
Many other forms of software do not have the same hazards so no license should be needed, as it would be prone for abuse.
I don't think anyone is proposing that Flappy Bird or Python scripts on Github should be outlawed. Just like you can still build a robot at home but not a bridge in the town center.
No mention of PII or any specifics.
SWE already has regulations. I see no need for a license requirement...
Concerning PII, it's kind of hypocritical for the gov to regulate when the NSA was proven to be collecting data on everyone against their will or knowledge.
You can sign a liability waiver and do all sorts of dangerous things.
>most “real” engineering field have had licensing requirements for a century, without any real complaints against that process).
Most newer engineering fields are trending away from licensing, not towards it. For example, medical device and drug engineering doesn't use it at all.
Check their comments; there are screeds about compelling labor over, like, basic concepts.
Civil engineering works well because we mostly figured it out anyway. But looking at PCI, SOX and others, we'd probably just require people to produce a book's worth of documentation and audit trail that comes with their broken software.
We had two security teams: security and compliance. It was not possible to be secure and compliant, so the compliance team had to document every deviation from the IRS standard and document why, then self-report us and the customer to audit the areas where we were outside the lines. That took a dozen people almost a year to do.
All of that existed because a US state (S Carolina iirc) was egregiously incompetent and ended up getting breached. Congress “did something” about it.
They all update their recommendation and standards routinely, and do a reasonably good job at being professional organizations.
The current state of this as regards to the tech sector doesn't mean its impossible to implement.
That's why all the usual standards (PCI, SOC2 in particular) are performative in practice. There's nothing that holds the industry accountable to be better, and there is nothing, from a legal standpoint, that backs up members of the association if they flag an entity or individual for what would effectively be malpractice.
You can't stop someone from doing electrical repairs on their own home but if the house burns down as a result, the homeowners' insurance will probably just deny the claim, and then they risk losing their mortgage. Basically, if you make it bureaucratically difficult to do the wrong thing, you'll encourage more of the right thing.
If you're looking for a regulatory fix, I would prefer something like a EU-style requirement on handling PII. Even the US model--suing in cases of privacy breaches--seems like it could be pretty effective in theory, if only the current state of privacy law was a little less pro-corporate. Civil suits could make life miserable for the students who developed this app.
You don't know what you don't know; sometimes people think they know what they're doing and just haven't yet encountered the situations that prove otherwise. We were all new to programming once; no one would ever become a solid engineer if they refused to build anything out of fear of getting something wrong that their lack of experience hadn't prepared them for.
US tech is built on the "go fast, break things" mentality. Companies with huge backers routinely fail at security, and some of them actually spend money to suppress those who expose the companies' poor privacy/security practices.
If anything, college kids could at least reasonably claim ignorance, whereas a lot of HN folks here work for companies who do far worse and get away with it.
Some companies, some unicorns, knowingly and wilfully break laws to get ahead. But they're big, and people are getting rich working for them, so we don't crucify them.
It’s a trade-off between shipping fast and courting risk. I’m not judging one over the other; it comes down to what you’re willing to accept, not what you wish for.
Apps on the app store are hardly much better than anywhere else.
iOS users still spend more dollars on average in apps than Android users, even though Android has more users, I think?
Better to make secure operating systems that inform users of bad access patterns and let the developers be free to produce.
Nothing protects you from giving info to a broken backend though, so people should be more cautious and repercussions for insecure backends should be higher.
But if they are asking for your passport, then they have access to it. It's not a third party asking and providing them with some checkmark or other reduced risk data.
Or by someone "government-like" such as Apple or Google.
Governments should not be confirming shit.
They'll be confirming data that is publicly available.
(not an attack on you. I have to say that every time I see someone say anything along the lines of "the government should do it")
But for something like a dating site, It's enough for the API to just return a boolean verified/not-verified for the ID status (or an enum of something like 'not-verified', 'passport', 'drivers-license', etc.). There's no real need to display any of the details to the client/UI.
(In contrast with, say, and airline app where you need to select an identity document for immigration purposes, where you'd want to give the user more details so they can make the choice. But even then, as they do in the United app, they only show the last few digits of the passport number... hopefully that's all that's sent over their internal API as well.)
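A rough sketch of that response shape (the field names here are mine, not the app's): the API exposes only a verification status enum, and the underlying document details stay server-side.

```python
# Hypothetical response shape: only a verification status is exposed,
# never the document number or image.
from dataclasses import dataclass
from enum import Enum

class IdVerification(str, Enum):
    NOT_VERIFIED = "not-verified"
    PASSPORT = "passport"
    DRIVERS_LICENSE = "drivers-license"

@dataclass
class PublicProfile:
    display_name: str
    id_verification: IdVerification   # enum only; no PII details

profile = PublicProfile("Alex", IdVerification.PASSPORT)
print(profile.id_verification.value)  # "passport"
```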
Before I was allowed to hand out juice cups at my kids' preschool, I had to do a 2 hour food safety course and was subject to periodic inspections. That is infinity% more oversight than I received when storing highly sensitive information for ~10^5 users.
Though I'm sceptical it would help. API design is generally not taught in university courses, and perhaps shouldn't (too specific).
I instead feel that GDPR has already done a lot of heavy lifting. By raising the price of "find out", people got a bit more careful about the "fuck around" part. It seems to push companies to take it seriously.
Step two is forcing companies to take security breaches and security disclosures seriously, which the CRA (Cyber Resilience Act) may help with... at the cost of the swamps of bureaucratic overhead that are also included, of course.
I mean, do you trust that the chemical industry will self regulate and keep dangerous chemicals out of your drinking water?
Then why do we trust software companies to keep you and your data safe?
We will get more regulations over time no matter how much we complain about it because people are rather lazy at the end of the day and more money for less work is a powerful motivator.
I think we'll need to start pushing on lawmakers.
>First things first, let’s log in. They only use OTP-based sign in (just text a code to your phone number), so I went to check the response from triggering the one-time password. BOOM – the OTP is directly in the response, meaning anyone’s account can be accessed with just their phone number.
They don't explain it, but I'm assuming that the API is something like api.cercadating.com/otp/<phone-number>, so you can guess phone numbers and get OTP codes even if you don't control the phone numbers.
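Purely to illustrate that guess (the domain, endpoint, and field names below are hypothetical, not Cerca's real API), the described flaw would look something like this from the client side:

```python
# Hypothetical reproduction of the described flaw; the endpoint and
# fields are invented for illustration only.
import requests

resp = requests.post("https://api.example.com/otp/+15555550123")
print(resp.json())
# Something like: {"status": "sent", "otp": "482913"}
# i.e. the server hands back the code it just texted to someone else's phone.
```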
>The script basically just counted how many valid users it saw; if after 1,000 consecutive IDs it found none, then it stopped. So there could be more out there (Cerca themselves claimed 10k users in the first week), but I was able to find 6,117 users, 207 who had put their ID information in, and 19 who claimed to be Yale students.
I don't know if the author realizes how risky this is, but this is basically what weev did to breach AT&T, and he went to prison for it.[0] Granted, that was a much bigger company and a larger breach, but I still wouldn't boast publicly about exploiting a security hole and accessing the data of thousands of users without authorization.
I'm not judging the morality, as I think there should be room for security researchers to raise alarms, but I don't know if the author realizes that the law is very much biased against security researchers.
[0] https://en.wikipedia.org/wiki/Goatse_Security#AT&T/iPad_emai...
The bigger thing is just that there's no actual win in scraping here. It doesn't make the vulnerability report any more interesting; it just reads like they're trying to make the whole thing newsier. Some (very small) risk, zero reward.
They mention guessing phone numbers, and then the API call for sending the OTP... literally just returns the OTP.
From context, I assume it's closer to the latter, but it would have been helpful for the author to explain it a bit better.
Put very simply, they exposed an endpoint that took a phone number as input to send a OTP code. That's reasonable and many companies do this without issue. The problem is, instead of just sending the OTP code they _returned the code to the client_ as well.
There is never a good reason to do this; it defeats the entire purpose. The only reason you send a code to a phone is for the user to enter it to prove they "own" that phone number.
It's like having a secure vault but leaving a post-it note with the combination stuck to it.
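A minimal sketch of the non-leaky pattern (send_sms is a hypothetical stand-in for an SMS provider): generate the code server-side, store only a hash with an expiry, text the plaintext, and return nothing secret.

```python
# Sketch of a non-leaky OTP flow; send_sms stands in for a real SMS provider.
import hashlib
import secrets
import time

OTP_STORE: dict[str, tuple[str, float]] = {}   # phone -> (sha256(code), expiry)

def send_sms(phone: str, message: str) -> None:
    ...   # hypothetical: hand off to an SMS gateway

def request_otp(phone: str) -> dict:
    code = f"{secrets.randbelow(10**6):06d}"              # 6-digit code
    OTP_STORE[phone] = (hashlib.sha256(code.encode()).hexdigest(),
                        time.time() + 300)                # 5-minute expiry
    send_sms(phone, f"Your code is {code}")               # plaintext goes out via SMS only
    return {"status": "sent"}                             # nothing secret in the API response

def verify_otp(phone: str, submitted: str) -> bool:
    digest, expiry = OTP_STORE.get(phone, ("", 0.0))
    submitted_digest = hashlib.sha256(submitted.encode()).hexdigest()
    return time.time() < expiry and secrets.compare_digest(digest, submitted_digest)
```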
A few people try to warn you but you choose not to listen and, in fact, you recruit the government to make it easier to enter such places with safeguards that don't actually protect you from the disease and encourage you to enter more frequently.
You're then surprised that you're ill to the brink of death and blame the location as the sole cause of your ailments. Yes, the location is to blame, but so are you for continuing to enter even after getting sick.
Why do you do this? Because you want something. Convenience, pleasure, a distraction, etc. But you refuse to acknowledge that it's killing you.
This is how we should view optional services that require us to give our PII data in exchange for hours of attention-grabbing content. They're designed to sell your eyeballs and data to advertisers. You know this already but you can't say no. You're sick and refuse to acknowledge it.
This is a nice fantasy, but realistically it means you shouldn't use probably 90% of services out there, which isn't reasonable for most people. Plus, there are plenty of companies with treasure troves full of data on you that have equally questionable data security/privacy practices that you've never even directly interacted with.
We need regulation. There is no other alternative. And we need to stop blaming victims of data breaches for companies not putting basic security measures in place. I don't think it's unreasonable to expect every company you interact with to securely store your sensitive data. If a place was physically making people ill like in your thought experiment, they wouldn't be around for very long; I think we should demand the same for our data.
Right, and when you go to the grocery store you catch listeria every time? Oh wait, food handling is rather safe because of well enforced regulation.
The problem with libertarians is they don't think of the wide spread public effects of their behaviors. Trash piles up outside their house and suddenly bears are eating the neighbors.
It's some self-promo or whatever scheme/scam bullshit.
And I posted this blog because I think people will find it interesting!
Happy to answer any other questions when I get back to my computer :)
A/B testing of post names seems to yield some useful information ;)
I don't see your reference to "Georgetown students..." in either the website link or the user's submissions? Was it modified?
To limit his legal exposure as a researcher, I think it would have been enough to create a second account (or ask a friend to create a profile and get their consent to access it).
You don't have to actually scrape the data to prove that there's an enumeration issue. Say your id is 12345, and your friend signs up and gets id 12357 - that should be enough to prove that you can find the id and access the profile of any user.
As others have said, accessing that much PII of other users is not necessary for verifying and disclosing the vulnerability.
While you can definitely want PII protected and also scrape data to prove a point, it's unnecessary and hypocritical.
They'll say things like...
"Well, how long will that take?"
or, "What's really the risk of that happening?"
or, "We can secure it later, let's just get the MVP out to the customer now"
So, as an employee, I do what my employer asks of me. But, if somebody sues my employer because of some hack or data breach, am I going to be personally liable because I'm the only one who "should have known better"?
Not sure where you are located, but I don't know of any case where an individual rank-and-file employee has been held legally responsible for a data breach. (Hell, usually no one suffers any consequences for data breaches. At most the company suffers a token fine and they move on without caring.)
A few years ago I was put in the situation where I needed to do this and it created a major shitstorm.
“I’m not putting that in writing” they said.
However it did have the desired effect and they backed down.
You do need to be super comfortable with your position in the company to pull that stunt though. This was for a UK firm and I was managing a team of DevOps engineers. So I had quite a bit of respect in the wider company as well as stronger employment rights. I doubt I’d have pulled this stunt if I was a much more replaceable software engineer in an American startup. And particularly not in the current job climate.
But yea, the lack of security standards across organizations of all sizes is pitiful. Releasing new features always seems to come before ensuring good security practices.
That’s the best way I can think of to align incentives correctly. Right now there’s very little downside to storing as much user information as possible. Data breach? Just tweet an apology and keep going.
This is a little extreme IMO. PII encompasses a lot of data, including benign things like email address stored only for authentication and contact purposes.
Things like photos of IDs/passports should be considered yellowcake.
That might be the only way to give the issue the attention it deserves.
I forgot my password.
Type your username:
Your password is hunter2.
Vibes.
"Sorry, you can't use password qwerty123. This password is already used by user SweetLemon13115"
I requested my data, and all the image URLs are publicly accessible - and the URLs provided include both your own images and the images of anyone who'd ever viewed your profile.
A URL with a cryptic file name is theoretically just as secure as a random password.
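For instance, a key with 128+ bits of randomness is, in principle, no more guessable than a strong random password; the problem is sequential IDs or user-derived names. A quick sketch:

```python
# Sketch: an unguessable object key, roughly equivalent to a strong random password.
import secrets

object_key = f"uploads/{secrets.token_urlsafe(32)}.jpg"   # ~256 bits of randomness
print(object_key)
```

The practical catch, of course, is that URLs leak (referrers, data exports, shared screenshots) in ways passwords usually don't.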
Years later I saw their Instagram ad and tried to see if the issue still existed, and yes, it did. Basically, anyone with knowledge of their API endpoints (which are easy to find using an app proxy server) has full-on admin capabilities and access to all messages, matching, etc.
I wonder if I should go back and try again... :-?
I've reported 2 big bugs like this, one for Funimation and one for a dating app.
For Funimation, you could access anyone's PII and shop orders; they ignored me until I sent a LinkedIn message to their CTO with his PII (CC number) in it.
The "dating" app, well, they were literally spewing private data (admin/mod notes, reports, private images, bcrypted password, ASIN, IP, etc.) via a websocket on certain actions. I figured out the actions that triggered it, emailed them, and within 12 hours they had fixed it and set up a bug bounty program to pay me out as a thank you.
Importantly, I also didn't use anyone else's data/account; I simply made another account that I attacked to prove the point. Yes, it cost me a monthly sub of ~$10 to do so. But they also refunded that.
under the hood they're all the same, just with different theming and market segmentation
Perhaps a holdover from testing (where you don't always want to send the SMS). Maybe just the habit/pattern of returning the item you just created in the DB and not remembering to mark the field as private. There are a whole slew of easy foot-guns. I'm not defending it, but I doubt it's to do client-side validation; that would be insanity. It's easy enough to not notice a body on a response that you don't care about client-side: "200? Cool, keep moving." It's still crazy they were returning the OTP, and I sure hope it wasn't on purpose.
I briefly worked with a company where I had to painfully explain to the lead engineer that you can't trust anything that comes from the browser, because a hacker can curl whatever they want.
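One guard-rail for that foot-gun (sketched with made-up names) is to never echo the stored record at all, but to map it onto an explicit response type that simply has no OTP field:

```python
# Hypothetical guard-rail: a dedicated response type with no secret fields,
# so "return whatever I just created" can't leak the OTP.
from dataclasses import dataclass

@dataclass
class StoredOtp:            # internal/database shape
    phone: str
    otp_code: str
    expires_at: float

@dataclass
class OtpSentResponse:      # wire shape: no otp_code to forget to strip
    status: str

def to_response(_: StoredOtp) -> OtpSentResponse:
    return OtpSentResponse(status="sent")
```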
Our relationship deteriorated from there. Needless to say, I don't list the experience on my resume.
There can always be another side to this story, but also: wtf. This kind of shit makes me want to Charles-proxy every new app I run, because who knows what security any random startup has.
Years ago there was a firmware for Mango travel routers that let you MITM anything connected to it, and I bought two of them, and then the information about how to set it up disappeared (I can't find it). The GL.iNet Mango travel routers is what I mean. I have one WireGuarded with the switch set to shut off access or WireGuard only; the other one is for IoT devices and is connected via 10 Mbit, so even if someone managed to hack one of the two IoT things here they couldn't exfil very much, and I'd notice the blinking.
Certificate pinning frustrates Charles by hampering MITM attempts. It can be difficult to extract/replace pinned certificates from the latest versions of Android/iOS apps. Often you can extract them from older versions using specialized tools, if old-enough versions exist and those certificates are still valid for API endpoints of interest.
It's like saying IDA Pro is just an interesting piece of software for looking at binaries, but the grandparent comment is surely from someone who doesn't look at these utilities, so I guess that's why I didn't press it.