[0]: https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
[1]: https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
Which Chrome has transmuted into "we do whatever we want to do". Remember their attempt to remove confirm/prompt?
The W3C's plan was for HTML4 to be replaced by XHTML. What we commonly call HTML5 is the WHATWG "HTML Living Standard."
no wonder they were sidelined
That being said, I'm not against removing features, but neither this nor the original post provides any substantial rationale for why it should be removed. Uses for XSLT do exist, and the alternative is "just polyfill it," which is awkward, especially for legacy content.
But walled gardens like YouTube, Discord, ChatGPT and suchlike that are delivered via the browser are prospering. And as a cross-platform GUI system, HTML is astonishingly popular.
I’m sure you can come up with more examples of extremely high-value businesses which would not have happened without the web.
Like 90%+ of internet traffic goes to a handful of sites owned by tech giants. Most of what's left is SEO garbage serving those same tech giants' ad networks.
See the agenda here: https://github.com/whatwg/html/issues/11131#issuecomment-274...
Should we remove XSLT from the web platform? – 4 days ago (89 comments):
https://news.ycombinator.com/item?id=44909599
XSLT – Native, zero-config build system for the Web – 27th June 2025 (328 comments):
https://news.ycombinator.com/item?id=44949857
Google is killing the open web, today, 127 comments
Imagine that tomorrow, Google announces plans to stop supporting HTML and move everyone to its own version of "CompuServe", delivered only via Google Fiber and accessible only with Google Chrome. What headline would you suggest for that occasion? "Google is killing the open web" has already been used today on an article about upcoming deprecation of XSLT format.
All the other alternatives are meaningless, including Firefox.
I am one of the few folks on my team who still use Firefox; all our projects dropped support for it like 5 years ago.
> "- Google had a plan called "Project NERA" to turn the web into a walled garden they called "Not Owned But Operated". A core component of this was the forced logins to the chrome browser you've probably experienced (surprise!)"
To "not own but operate" seems to go into the direction of the parent comment.
Also this: https://news.ycombinator.com/item?id=28976574
[0]: https://web.archive.org/web/20211024063021/https://twitter.c...
I've taken the flags off that post now.
The flagged post is a perfect example. It contains just a fraction of factual information, but it was enough for bot farms to engage. Manipulators get mad at truth.
If there is a polyfill, I'm not sure making it in JavaScript makes sense, but WebAssembly could work.
Too heated? Looked pretty civil and reasonable to me. Would it be ridiculous to suggest that the tolerance for heat might depend on how commenters are aligned with respect to a particular vendor?
Google ignored everything, pushed on with the removal, and has now pre-emptively closed this discussion, too.
To be fair to Google, they've consistently steam-rolled the standards processes like that for as long as I can remember, so it really isn't new.
> We didn't forgot your decade of fuckeries, Google.
> You wanted some heated comment? You are served.
> the JavaScript brainworm that has destroyed the minds of the new generation
> the covert war being waged by the WHATWG
> This is nothing short of technical sabotage, and it’s a disgrace.
> breaking yet another piece of the open web you don't find convenient for serving people ads and LLM slop.
> Are Google, Apple, Mozilla going to pay for the additional hosting costs incurred by those affected by the removal of client-side XSLT support?
> Hint: if you don't want to be called out on your lies, don't lie.
> Evil big data companies who built their business around obsoleting privacy. Companies who have built their business around destroying freedom and democracy.
> Will you side with privacy and freedom or will you side with dictatorship?
Bullshit like this has no place in an issue tracker. If people didn’t act like such children in a place designed for productive conversation, then maybe the repo owners wouldn’t be so trigger happy.
Google definitely throws its weight around too much w.r.t. web standards, but this doesn't seem too bad. Web specifications are huge and complex, so trying to slim them down a little while maintaining support for existing sites is okay IMO.
That breaks old unmaintained but still valuable sites.
I completely understand the security and maintenance burdens that they're bringing up but breaking sites would be unacceptable.
(But of course, XML Stylesheets are most widely used with RSS feeds, and Google probably considers further harm to the RSS ecosystem as a bonus. sigh)
Fortunately, Thunderbird still has support for feeds and doesn't seem to have been afflicted by the same malaise as the rest of the org chart. Who knows how long that will last.
XSLT 1.0 is still useful though, and absolutely shouldn't be removed.
Them: "community feedback" Also them: <marks everything as off topic>
This came about after the maintainer of libxml2 found that giving free support to all these downstream projects (from billionaire and trillionaire companies) was too much.
Instead of just funding him, they have the gall to say they don't have the money.
While this may be true in the microcosm of that project, the devs should look at the broader context and who they are actually working for.
My first job in software was as a software test development intern at a ~500 employee non-profit, in about 2008 when I was about 19 or 20 years old. Writing software to test software. One of my tasks during the 2 years I worked there was to write documentation for their XML test data format. The test data was written in XML documents, then run through a test runner for validation. I somehow found out about XSLT and it seemed like the perfect solution. So I wrote up XML schemas for the XML test data, in XSD of course. The documentation lived in the schema, alongside the type definitions. Then I wrote an XSLT document, to take in those XML schemas and output HTML pages, which is also basically XML.
So in effect what I wrote was an XML program, which took XML as input, and outputted XML, all entirely in the browser at document-view time.
And it actually worked and I felt super proud of it. I definitely remember it worked in our official browser (Internet Explorer 7, natch). I recall testing it in my preferred browser, Firefox (version 3, check out that new AwesomeBar, baby), and I think I got it working there, too, with some effort.
I always wonder what happened with that XML nightmare I created. I wonder if anyone ever actually used it or maybe even maintained it for some time. I guess it most likely just got thrown away wholesale during an inevitable rewrite. But I still think fondly back on that XSLT "program" even today.
I wrote my personal website in XML with XSLT transforming into something viewable in the browser circa 2008. I was definitely inspired by CSS Zen Garden where the same HTML gave drastically different presentation with different CSS, but I thought that was too restrictive with too much overly tricky CSS. I thought the code would be more maintainable by writing XSLT transforms for different themes of my personal website. That personal webpage was my version of the static site generator craze: I spent 80% of the time on the XSLT and 20% on the content of the website. Fond memories, even though I found XSLT to be incredibly difficult to write.
My first rewrite of my site, as I moved it away from Yahoo, into my own domain was also in XSLT/XML.
Eventually I got tired of keeping it that way, and rewrote the parsing and HTML generation in PHP, but kept the site content in XML, to this day.
Every now and then I think about rewriting it, but I'd rather do native development outside work, and I don't suffer from either PHP or XML allergies.
Doing declarative programming in XSLT was cool though.
Imagine people put energy into writing that thick of a book about XML. To be filed into the Theology section of a library
At the end of the (very long) process, I just hard-coded the reference request XML given by the particularly problematic endpoints, put some regex replacements behind it, and called it a day.
I agree RSS parsing is nice to have built into browsers. (Just like FTP support, that I genuinely miss in Firefox nowadays, but allegedly usage was too low to warrant the maintenance.) I also don't really understand the complaint from the Chrome people that are proposing it: "it's too complex, high-profile bugs, here's a polyfill you can use". Okay, why not stuff that polyfill into the browser then? Then it's already inside the javascript sandbox that you need to stay secure anyway, and everything just stays working as it was. Replacing some C++ code sounds like a win for safety any day of the week
On the other hand, I don't normally view RSS feeds manually. They're something a feed reader (in my case: Blogtrottr and AntennaPod) works with. I can also read the raw XML if there's ever a reason to look at it, or the server can transform the RSS XML into XHTML with the same XSLT code, right? If it's somehow a big deal to maintain, and RSS is the only thing that uses it, I'm also not sure how big a deal it is to have people install an extension if they regularly view RSS feeds on sites where the server does no HTML render of that information. It's essentially the same solution as if Chrome put the polyfill inside the browser: the browser transforms the XML document inside the JS sandbox.
It may be that I don't notice when I use it, if the page just translates itself into XHTML and I would never know until opening the developer tools (which I do often, fwiw: so many web forms are broken that I have a habit of opening F12, so I always still have my form entries in the network request log). Maybe it's much more widespread than I knew of. I have never come across it and my job is testing third-party websites for security issues, so we see a different product nearly every week (maybe those sites need less testing because they're not as commonly interactive? I may have a biased view of course)
I think I've read some governments still use it, which would make sense since they usually don't have a super high budget for tons of developers, so they have to stick to the easy way to do things.
As much of a monopoly as Chrome is, if they actually try to remove it they're likely to get a bunch of government web pages outright stating "Chrome is unsupported, please upgrade to Firefox or something".
Which government or governmental organizations are you talking about?
[0] https://news.ycombinator.com/item?id=44909599
[1] https://www.congress.gov/117/bills/hr3617/BILLS-117hr3617ih....
I think the principle behind it is wonderful. https://www.example.com/latest-posts is just an XML file with the pure data. It references an XSLT file which transforms that XML into a web page. But I've tried using it in the past and it was such a pain to work with. Representing things like for loops in markup is a fundamentally inefficient thing to do, JavaScript based templating is always going to win out from the developer experience viewpoint, especially when you're more than likely going to need to use JS for other stuff anyway.
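As a minimal sketch of that principle (the filenames and data here are invented for illustration): the XML file carries nothing but data plus an `xml-stylesheet` processing instruction pointing at the stylesheet, and the XSLT emits the page, with `xsl:for-each` playing the role of the "for loop in markup":

```
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="latest-posts.xsl"?>
<!-- latest-posts.xml (hypothetical): pure data, no presentation -->
<posts>
  <post><title>Hello world</title><date>2025-06-27</date></post>
  <post><title>Second post</title><date>2025-07-01</date></post>
</posts>

<!-- latest-posts.xsl (hypothetical): transforms the data above into HTML -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/posts">
    <html>
      <body>
        <h1>Latest posts</h1>
        <ul>
          <!-- the "for loop in markup" -->
          <xsl:for-each select="post">
            <li><xsl:value-of select="title"/> (<xsl:value-of select="date"/>)</li>
          </xsl:for-each>
        </ul>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

Opening the XML file in a browser that supports client-side XSLT renders the generated page directly, and the amount of ceremony one loop takes is exactly the developer-experience cost described above.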
It's one of those purist things I yearn for but can never justify. Shipping XML with data and a separate template feels so much more efficient than pre-prepared HTML that's endlessly repetitive. But... gzip also exists and makes the bandwidth savings a non-issue.
Especially considering the number of complex standards they have no qualms about shipping, from WebUSB to 20+ web components standards
> On the other hand, I don't normally view RSS feeds manually.
Chrome metrics famously underrepresent corporate installations. There could be quite a few corporate applications using XSLT, as it was all the rage 15-20 years ago.
XSLT (and basically anything else that existed when HTML5 turned ten years old) is old code using old quality standards and old APIs that still need to be maintained. Browsers can rewrite them to be all new and modern, but it's a job very few people are interested in (and Google's internal structure heavily prioritizes developing new things over maintaining old stuff).
Nobody is getting a promotion for modernizing the XSLT parser. Very few people even use XSLT in their day-to-day, and the biggest product of the spec is a competitor to at least three of the four major browser manufacturers.
XSLT is an example of exciting tech that failed. WebSerial is exciting tech that can still prove itself somehow.
The corporate installations still doing XSLT will get stuck running an LTS browser like they did with IE11 and the many now failed strains of technology that still supports (anyone remember ActiveX?).
5G was another hype word. Can't say that's not useful! I don't really notice a difference with 4G (and barely with 3G) but apparently on the carrier side things got more efficient and it is very widely adopted
I guess there's a reason the Gartner hype cycle ends with widespread adoption and not with "dead and forgotten": most things are widely picked up for a reason. (Having said that, if someone can tell me what the unique selling point of an NFT was, I've not yet understood that one xD)
You'd use XSLT to translate your data into a webpage. Or a mobile device that supported WML/WAP. Or a desktop application.
That was the dream, anyhow.
XML was unfairly demonized for the baggage that IBM and other enterprise orgs tied to it, but the standard itself was frigging amazing and powerful.
Converting a simple manually edited XML database of things to HTML was awesome. What I mostly wanted was the ability to pass in a selected item to display differently. That would allow all sorts of interactivity with static documents.
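XSLT 1.0 does have `xsl:param` for exactly this; what browsers never offered was a standard way to feed the parameter in from, say, the URL. A hedged sketch (element names and the `selected` parameter are invented here), where the value would be supplied by the processor, e.g. `xsltproc --stringparam selected item42 ...` or `XSLTProcessor.setParameter()` in script:

```
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- id of the item to highlight; empty by default -->
  <xsl:param name="selected" select="''"/>

  <xsl:template match="item">
    <xsl:choose>
      <!-- render the selected item differently -->
      <xsl:when test="@id = $selected">
        <li class="selected"><xsl:value-of select="name"/></li>
      </xsl:when>
      <xsl:otherwise>
        <li><xsl:value-of select="name"/></li>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>
</xsl:stylesheet>
```

The gap the comment describes is real: for a purely static document opened straight from disk or a dumb server, there is no built-in hook to set that parameter, which is why the interactivity never materialized.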
I know this is for security reasons, but why not update the XSLT implementation instead? And if features that aren't used get dropped, they might as well do it all in one go. I am sure lots of the HTML spec isn't even used.
I am of the opinion that it is to remove one of the last ways to build web applications that don't have advertising and tracking injected into them.
Er, how so? What stops you from doing so in HTML/JS/CSS ?
So the libxml/libxslt unpaid volunteer maintainer wants to stop doing 'disclosure embargo' of reported security issues: https://gitlab.gnome.org/GNOME/libxml2/-/issues/913 Shortly after that, Google Chrome want to remove XSLT support.
Coincidence?
Source (yawaramin): https://news.ycombinator.com/item?id=44925104
PS: Seems libxslt which is used by Blink has an (unpaid) maintainer but nothing going on there really, seems pretty unmaintained https://gitlab.gnome.org/GNOME/libxslt/-/commits/master?ref_...
PS2: Reminds me all of this https://xkcd.com/2347/ A shame that libxml and libxslt could not get more support while used everywhere. Thanks for all the hard work to the unpaid volunteers!
It'd be much better that Google did support the maintainer, but given the apparent lack of use of XSLT 1.0 and the maintainer already having burned out, stopping supporting XSLT seems like the current best outcome:
> "I just stepped down as libxslt maintainer and it's unlikely that this project will ever be maintained again"
The suggestion of using a polyfill is a bit nonsensical, as I suspect there is little new web content being written in XSLT, so someone would have to go through all the old pages out there and add the polyfill. Anyone know if XSLT support could be provided by a Chrome extension? That would make more sense.
Cool example with the recipes page :)
There are also very valid comments in there about why removal would still hurt existing sites and applications, especially for embedded devices.
https://github.com/whatwg/html/pull/11563#issuecomment-31970...
With how bloated browsers are right now, good riddance IMO
Since it's a microcontroller, modifying that server and pushing the firmware update to users is probably also a pain.
Unusual use case, but a reasonable one.
The browsers today are too bloated and it is difficult to create a new browser engine. I wish there were simpler standards for "minimal browser", for example, supporting only basic HTML tags, basic layout rules, WASM and Java bytecode.
Many things, like WebAudio or Canvas, could be implemented using WASM modules, which as a side effect would prevent their use for fingerprinting.
And no, WebAudio and Canvas couldn't be implemented in client WASM without big security implications. If by module you mean inside the browser, then what is the point of WASM here?
Would make it possible to create spec-compliant browsers with a subset of the web platform, fulfilling different use cases without ripping out essentials or hacking them in.
Historic reasons, and it sounds like they want it to contain zero template engines. You could transpile a subset of Jinja or Mustache to XSLT, but no one seems to do it or care.
I think a dedicated unsupported media type -> supported media type WASM transformation interface would be good. You could use it for new image formats and the like as well. There are things like JXL.js that do this:
[0] ~0.001% usage according to one post there
This is still a massive number of people who are going to be affected by this.
But at the end of the day, you only really need one, and the type attribute was phased out of the script tag entirely, and JavaScript won.
It is actively used today.
For better or worse, http is no longer just for serving textual documents.
Because XSLT is part of the web standards.
XSLT is a specification for a "template engine" and not a specific engine. There are dozens of XSLT implementations.
Mozilla notably doesn't use libxslt but transformiix: https://web.mit.edu/ghudson/dev/nokrb/third/firefox/extensio...
> and not Jinja for example?
Jinja operates on text, so it's basically document.write(). XSLT works on the nodes themselves. That's better.
> Also it can be reimplemented using JS or WASM.
Sort of. JS is much slower than the native XSLT transform, and the XSLT result is cacheable. That's huge.
I think if you view XSLT as nothing more than ancient technology that nobody uses, then I can see how you could think this is ok, but I've been looking at it as a secret weapon: I've been using it for the last twenty years because it's faster than everything else.
I bet Google will try and solve this problem they're creating by pushing AMP again...
> The browsers today are too bloated
No, Google's browser today is too bloated: That's nobody's fault but Google.
> and it is difficult to create a new browser engine
I don't recommend confusing difficult to create with difficult to sell unless you're looking for a reason to not do something: There's usually very little overlap between the two in the solution.
Nobody is going to process millions of DOM nodes with XSLT because the browser won't be able to display them anyway. And one can write a WASM implementation.
Serving a server-generated HTML page could be even faster.
Except it isn't.
Lots of things could be faster than they are.
They actually thought about it, and decided not to do it :-/
XSLT is a templating language (like HTML is a content language), not a template engine like Blink or WebKit is a browser engine.
> Also it can be reimplemented using JS or WASM.
Changing the implementation wouldn't involve taking the language out of the web platform. There wouldn't need to be any standardization talk about changing the implementation used in one or more browsers.
Audio and canvas are fundamental I/O things. You can’t shift them to WASM.
You could theoretically shift a fair bit of Audio into a WASM blob, just expose something more like Mozilla’s original Audio Data API which the Web Audio API defeated for some reason, and implement the rest atop that single primitive.
2D canvas context includes some rendering stuff that needs to match DOM rendering. So you can’t even just expose pixel data and implement the rest of the 2D context in a WASM blob atop that.
And shifting as much of 2D context to WASM as you could would destroy its performance. As for WebGL and WebGPU contexts, their whole thing is GPU integration, you can’t do that via WASM.
So overall, these things you’re saying could be done in WASM are the primitives, so they definitely can’t.
Which means more unreadable code.
But if they decide to remove XSLT from spec, I would be more than happy if they remove JS too. The same logic applies.
- This isn't Chrome doing this unilaterally. https://github.com/whatwg/html/issues/11523 shows that representatives from every browser are supportive and there have been discussions about this in standards meetings: https://github.com/whatwg/html/issues/11146#issuecomment-275...
- You can see from the WHATNOT meeting agenda that it was a Mozilla engineer who brought it up last time.
- Opening a PR doesn't necessarily mean that it'll be merged. Notice the unchecked tasks - there's a lot still to do on this one. Even so, given the cross-vendor support for this, it seems likely to proceed at some point.
It's an issue open on the HTML spec for the HTML spec maintainers to consider. It was opened by a Chrome engineer after at least two meetings where a Mozilla engineer raised the topic, and where there was apparently vendor support for it.
This is happening after some serious exploits were found: https://www.offensivecon.org/speakers/2025/ivan-fratric.html
And the maintainer of libxslt has stepped down: https://gitlab.gnome.org/GNOME/libxml2/-/issues/913
To be completely honest, though, I'm not sure what people expect to get out of it. I dug into this a while ago for a rather silly reason and I found that it's very inside baseball, and unless you really wanted to get invested in it it seems like it'd be hard to meaningfully contribute.
To be honest if people are very upset about a feature that might be added or a feature that might be removed the right thing to do is probably to literally just raise it publicly, organize supporters and generally act in protest.
Google may have a lot of control over the web, but note that WEI still didn't ship.
Everyone likes to complain as a user of open source. Nobody likes to do the difficult work.
The main thing that seems unaddressed is the UX if a user opens a direct link to an XML file and will now just see tag soup instead of the intended rendering.
I think this could be addressed by introducing a <?human-readable ...some url...?> processing instruction that browsers would interpret like a meta tag redirect. Then sites that are interested could put that line at the top of their XML files and redirect to an alternative representation in HTML or even to a server-side or WASM-powered XSLT processor for the file.
Sort of like an inverse of the <link rel="alternate" ...> solution that the post mentioned.
The only thing this doesn't fix is sites that are abandoned and won't update, or are part of embedded devices and can't update.
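For concreteness, the hypothetical `<?human-readable ...?>` instruction proposed above might sit at the top of a feed like this (the syntax is invented in this thread, not any real standard, and example.com is a placeholder):

```
<?xml version="1.0"?>
<?human-readable https://example.com/feed.html?>
<rss version="2.0">
  <channel>
    <title>Example feed</title>
    <link>https://example.com/</link>
  </channel>
</rss>
```

A browser that recognized the instruction would treat it like a meta-refresh redirect to the HTML representation, while feed readers and older clients would simply ignore the unknown processing instruction, which is well-formed XML either way.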
HTTP has already had this since the 90s. Clients send the Accept HTTP header indicating which format they want and servers can respond with alternative representations. You can already respond with HTML for browsers and XML for other clients today. You don’t need the browser to know how to do the transformation.
This would work without special syntax in the XML file.
https://thedailywtf.com/articles/Sketchy-Skecherscom
Also world of warcraft used to.
Can’t think of recent examples though.
[1] https://blog.startifact.com/posts/xee/
... Largely because of lack of help from major users such as browsers.
Speaking from personal experience, working on libxslt... not easy for many reasons beyond the complexity of XSLT itself. For instance:
- libxslt is linked against by all sorts of random apps and changes to libxslt (and libxml) must not break ABI compatibility. This often constrains the shape of possible patches, and makes it that much harder to write system fixes.
- libxslt reaches into libxml and reuses fields in creative ways, e.g. libxml's `xmlDoc` has a `compression` field that is ostensibly for storing the zlib compression level [1], but libxslt has co-opted it for a completely different purpose [2].
- There's a lot of missing institutional knowledge and no clear place to go for answers, e.g. what does a compile-time flag that guards "refactored parts of libxslt" [3] do exactly?
[1] https://gitlab.gnome.org/GNOME/libxml2/-/blob/ca10c7d7b513f3...
[2] https://gitlab.gnome.org/GNOME/libxslt/-/blob/841a1805a9a9aa...
[3] https://gitlab.gnome.org/GNOME/libxslt/-/blob/841a1805a9a9aa...
Out of those three projects, two are notoriously under-resourced, and one is notorious for constantly ramming through new features at a pace the other two projects can't or won't keep up with.
Why wouldn't the overworked/underresourced Safari and Firefox people want an excuse to have less work to do?
This appeal to authority doesn't hold water for me because the important question is not 'do people with specific priorities think this is a good idea' but instead 'will this idea negatively impact the web platform and its billions of users'. Out of those billions of users it's quite possible a sizable number of them rely on XSLT, and in my reading around this issue I haven't seen concrete data supporting that nobody uses XSLT. If nobody really used it there wouldn't be a need for that polyfill.
Fundamentally the question that should be asked here is: Billions of people use the web every day, which means they're relying on technologies like HTML, CSS, XML, XSLT, etc. Are we okay with breaking something that 0.1% of users rely on? If we are, okay, but who's going to tell that 0.1% of a billion people that they don't matter?
The argument I've seen made is that Google doesn't have the resources (somehow) to maintain XSLT support. One of the googlers argued that new emerging web APIs are more popular, and thus more deserving of resources. So what we've created is a zero-sum game where any new feature added to the platform requires the removal of an existing feature. Where does that game end? Will we eventually remove ARIA and/or screen reader support because it's not used by enough people?
I think all three browser vendors have a duty to their users to support them to the best of their ability, and Google has the financial and human resources to support users of XSLT and is choosing not to.
This is part of why web standards processes need to be very conservative about what's added to the web, and part of why a small vocal contingent of web people are angry that Google keeps adding all sorts of weird stuff to the platform. Useful weird stuff, but regardless.
Says who? You keep mentioning this 0.1% threshold yet…
1. I can’t find any reference to that do you have examples / citations?
2. On the contrary here’s a paper that proposes a 3x higher heuristic: https://arianamirian.com/docs/icse2019_deprecation.pdf
3. It seems there are plenty of examples of features being removed above that threshold NPAPI/SPDY/WebSQL/etc.
4. Resources are finite. It’s not a simple matter of who would be impacted. It’s also opportunity cost and people who could be helped as resources are applied to other efforts.
1. not trillion dollar tech companies
or
2. not 99% funded from a trillion dollar tech company.
I have long suspected that Google gives so much money to Mozilla both for the default search option, but also for massive indirect control to deliberately cripple Mozilla in insidious ways to massively reduce Firefox's marketshare. And I have long predicted that Google is going to make the rate of change needed in web standards so high that orgs like Mozilla can't keep up and then implode/become unusable.
Arguably, we could lighten the load on all three teams (especially the under-resourced Firefox and Safari teams) by slowing the pace of new APIs and platform features. This would also ease development of browsers by new teams, like Servo or Ladybird. But this seems to be an unpopular stance because people really (for good reason) want the web platform to have every pet feature they're an advocate for. Most people don't have the perspective necessary to see why a slower pace may be necessary.
This has never ever made sense, because Mozilla is not at all afraid to piss in Google's cheerios at the standards meetings. How many different variations of FLoC and similar adtech-oriented features did they shoot down? It's gotta be at least 3. Not to mention the anti-fingerprinting tech that's available in Firefox (not by default, because it breaks several websites), opposition to several Google-proposed APIs on grounds of fingerprinting, and keeping Manifest V2 around indefinitely for the ad blockers.
People just want a conspiracy, even when no observed evidence actually supports it.
>And I have long predicted that Google is going to make the rate of change needed in web standards so high that orgs like Mozilla can't keep up and then implode/become unusable.
That's basically true whether incidentally or on purpose.
Billions of people use the web every day. Should the 99.99% of them be vulnerable to XSLT security bugs for the other 0.01%?
I don't think anyone is arguing that XSLT has to be fast.
You could probably compile libxslt to wasm, run it when loading xml with xslt, and be done.
Does XSLT affect the DOM after processing, or is it just a dumb preprocessing step, where the rendered XHTML is what becomes the DOM?
The first strategy is obviously correct, but Google wants strategy 2.
Applied to each individually, it seems to make sense. However, the aggregate effect is to kill off a substantial portion of the web.
In fact, it's an argument to never add a new web technology: Should 100% of web users be made vulnerable to bugs in a new technology that 0% of the people are currently using?
Plus it's a false dichotomy. They could instead address XSLT security... e.g., as various people have suggested, by building in the XSLT polyfill they are suggesting all the XSLT pages start using as an alternative.
This is also not a fair framing. There are lots of good reasons to deprecate a technology, and it doesn't mean the users don't matter. As always, technology requires tradeoffs (as does the "common good", usually.)
Seriously though, if I were forced to maintain every tiny legacy feature in a 20 year old app... I'd also become a "former" dev :)
Even in its heyday, XSLT seemed like an afterthought. Probably there are a handful of legacy corporate users hanging on to it for dear life. But if infinitely more popular techs (like Flash or FTP or non HTTPS sites) can be deprecated without much fuss... I don't think XSLT has much of a leg to stand on...
Like more or less everyone that hosts podcasts. But the current trend is for podcast feeds to go away, and be subsumed into Spotify and YouTube.
Flash was not part of the web platform. It was a plugin, a plugin that was, over time, abandoned by its maker.
FTP was not part of the web platform. It was a separate protocol that some browsers just happened to include a handler for. If you have an FTP client, you can still open FTP links just fine.
Non-HTTPS sites are being discouraged, but still work fine, and can reasonably be expected to continue to work indefinitely, though they are likely to be discouraged a bit harder over time.
XSLT is part of the web platform. And removing it breaks various things.
Did anybody bother checking with Microsoft? XML/XSLT is very enterprisey and this will likely break a lot of intranet (or $$$ commercial) applications.
Secondly, why is Firefox/Gecko given full weight for their vote when their marketshare is dwindling into irrelevancy? It's the equivalent of the crazy cat hoarder who wormed her way onto the HOA board speaking for everyone else. No.
[0] https://gs.statcounter.com/browser-market-share/all/germany
[1] https://gs.statcounter.com/browser-market-share/desktop/germ...
There was not really a vote in the first place, and FF is still dependent on Google. Otherwise, FF (users) represent a vocal and somewhat influential minority, capable of creating shitstorms if the pain level is high enough.
Personally, I always thought XSLT is somewhat weird, so I never used it. Good choice in hindsight.
>very, very few websites
Doesn't include all the corporate web sites that they are probably blocked from getting such telemetry for. These are the users that are pushing back.
Here we're talking about killing off XSLT used in the intended, documented, standard way.
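For reference, that standard usage is a single processing instruction at the top of an XML document; the browser fetches the stylesheet and renders the transformed result. A minimal sketch (the filenames and feed content are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- tells the browser to transform this document with feed.xsl before rendering -->
<?xml-stylesheet type="text/xsl" href="feed.xsl"?>
<rss version="2.0">
  <channel>
    <title>Example feed</title>
  </channel>
</rss>
```

This is exactly the pattern RSS/Atom feeds use to render as a readable page instead of a wall of raw XML.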
I wonder what the next step of removing less-popular features will be. Probably the SMIL attributes in favor of CSS for SVG animations, they've been grumbling about those for a while. Or maybe they'll ultimately decide that they don't like native MathML support after all. Really, any functionality that doesn't fit in the mold of "a CSS attribute" or "a JS method" is at risk, including most things XML-related.
Is that a good thing or a bad thing?
Technical people like us have our desires. But the billions of people doing banking on their browsers probably have different priorities.
In any case, there's no limit on how far one can disregard compatibility in the name of security. Just look at the situation on Apple OSes, where developers are kept on a constant treadmill to update their programs to the latest APIs. I'd rather not have everything trend in that direction, even if it means keeping shims and polyfills that aren't totally necessary for modern users.
What I'm trying to say is that it's a false dichotomy in most cases: implementations could almost eliminate the attack surface while maintaining the same functionality, and without devoting any more ongoing effort. Such as, for instance, JS polyfills, or WASM blobs, which could be subjected to the usual security boundaries no matter how bug-ridden and ill-maintained they are internally.
But removing the functionality is often seen as the more expedient option, and so that's what gets picked.
In the absence of anyone raring to do that, removal seems the more sensible option.
When did they do that? Can I not still ftp://example.com in the url bar?
Which is miles better than having to use calc()s for CSS animation timing, which requires a kludge of CSS variables etc. to keep track of when something begins and ends time-wise, if you want to avoid requiring JavaScript. And some years ago Firefox IIRC didn't even support time-based calcs.
When Chromium announced the intent to deprecate SMIL a decade back (before relenting) it was far too early to consider that given CSS at that time lacked much of what SMIL allowed for (including motion along a path and SVG attribute value animations, which saw CSS support later). It also set off a chain of articles and never-again updated notes warning about SMIL, which just added to confusion. I remember even an LLM mistakenly believing SMIL was still deprecated in Chromium.
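For those unfamiliar, SMIL animation is declarative markup inside the SVG itself, with no CSS variable bookkeeping or JS timing code. A minimal sketch (the shapes and values are illustrative):

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <circle cx="20" cy="50" r="10" fill="teal">
    <!-- SMIL: animate the cx attribute declaratively; the browser handles timing -->
    <animate attributeName="cx" from="20" to="80" dur="2s" repeatCount="indefinite"/>
  </circle>
</svg>
```

Attribute-value animation like the `cx` here, along with motion along a path (`<animateMotion>`), were exactly the SMIL capabilities CSS lacked when deprecation was first floated.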
The other suggestions seemed to be: "if this is about security, then fund the OSS project, or swap to a newer, safer library, or pull it into the JS sandbox and ensure support is maintained." All of which were mostly ignored.
And: "if this is about adoption, then listen to the constant community requests to update to the newer XSLT 3.0, which has been out for years and would have much higher adoption due to tons of QoL improvements, including handling JSON."
And the argument presented, which I can't verify (but which seems reasonable to me), is that XSLT supports the open web. Google tried to kill it a decade ago, and the community pushed back and stopped it. So Google's plan was to refuse to do anything to support it, ignore community requests for simple improvements, let it wither, then use that as justification for killing it at a later point.
Forcing this through when almost all feedback is against it seems to support that, to me. Especially with XSLT suddenly/recently gaining a lot of popularity; it seems like they are trying to kill it before they have an open competitor on the web.
Fixed that typo for you.
I'm very practically using Debian Linux on ChromeOS to develop, test, and debug enterprise software. I even compile and run some native code. It is very much more than just the web.
So is WSL on Windows. I wouldn't call Windows "just the web".
There's also nothing stopping me from building and running local desktop GUI software on the VM.
In fact, a VM is better in that I can back up and restore the image easily.
This is a perfectly reasonable course of action if the feedback is "please don't" but the people saying "please don't" aren't people who are actually using it or who can explain why it's necessary. It's a request for feedback, not just a poll.
I'd presume that most of those people are using it in some capacity, it's just that their numbers are seen as too minor to influence the decision.
> explain why it's necessary
No feature is strictly necessary, so that's a pretty high standard.
I think the idea of that is reasonable. If I used XSLT on my tiny, low-traffic blog, I think it's reasonable for browser devs to tell me to update my code. Even if 100 people like me said the same thing, that's still a vanishingly small portion of the web, a rounding error, protesting it.
I'd expect the protests to be disproportionate in number and loudness because the billion webmasters who couldn't care less aren't weighing in on it.
Now, I'm not saying this with a strong opinion on this specific proposal. It doesn't affect me either way. It's more about the general principle that a loud number of small webmasters opposing the move doesn't mean it's not a good idea. Like, people loudly argued about removing <marquee> back in the day, but that happened to be a great idea.
(And if you did want to tell the entire world to update their code, and have any chance of them following through with it, you'd better make sure there's an immediate replacement ready. Log4Shell would probably still be a huge issue today if it couldn't be fixed in place by swapping out jar files.)
Unlike your average Angular project. Building on top of minified TypeScript is rather unreasonable, and integrating with JSON means you have a less-than-reliable data transfer protocol without a schema, so validation is a crude trial-and-error process.
There's no elegance in raw XML et consortes, but the maturity of this family means there are also very mature tools so in practice you don't have to look at XML or XSD as text, you can just unmarshal it into your programming language of choice (that is, if you choose a suitable one) and look at it as you would some other data structure.
XSLT came across as a little esoteric.
> my main concern is for the “long tail” of the web—there's lots of vital information only available on random university/personal websites last updated before 2005
It's a strong argument for me because I run a lot of old webpages that continue to 'just work', as well as regularly getting value out of other people's old pages. HTML and JS have always been backwards compatible so far, or at least close enough that you get away with slapping a TLS certificate onto the webserver.
But I also see that we can't keep support for every old thing indefinitely. Look at Flash: people built emulators like Ruffle that work impressively well, letting you play a nostalgic game or use an Internet Archive copy of a website whose main menu (guilty as charged) was a Flash widget. Is that the way we should go with this, emulators? Or a dedicated browser that still gets security updates, but is intended only for viewing old documents, the way we treat slide film today? Or some other way?
[0]: https://chromewebstore.google.com/detail/xslt-polyfill/hlahh...
I think that's a tradeoff.
Simplest approach would be to just distribute programs, but the Web is more than that!
Another simple approach would be to have only HTML and CSS, or even only HTML, or something like Markdown, or HTML + a different simple styling language...
and yet nothing of that would offer the features that make web development so widespread as a universal document and application platform.
The discussions don't address that. That surprises me, because these seem to be the people in charge of the spec.
The promise is, "This is HTML. Count on it."
Now it would be just, "This is HTML for now. Don't count on it staying that way, though."
Not saying it should never be done, but it's a big deal.
They are removing XSLT just for being a long-tail technology. The same argument would apply to other long-tail web technologies.
So what they're really proposing is to cut off the web's long tail.
(Just want to note: The list of long-tail web technologies will continue to grow over time... we can expect it to grow roughly in proportion to the rate at which web technologies were added around 20 years in the past. Meaning we can expect an explosion of long-tail web technologies soon enough. We might want to think carefully about whether the people currently running the web value the web's long tail the way we would like.)
I get that people are more reacting to the prospect of browsers removing existing support, but I was pretty surprised by how short the PR was. I assumed it was more intertwined.
Almost no one ever uses it: metrics show only around 0.02% of phone calls use this feature. So we’re planning on deprecating and then removing it.
—⁂—
Just an idea that occurred to me earlier today. XSLT doesn’t get a lot of use, but there are still various systems, important systems, that depend upon it. Links to feeds definitely want it, but it’s not just those sorts of things.
Percentages only tell part of the story. Some are tiny features that are used everywhere, others are huge features that are used in fewer places. Some features can be removed or changed with little harm—frankly, quite a few CSS things that they have declined to address on the grounds of usage fall into this category, where a few things would be slightly damaged, but nothing would be broken by it. Other features completely destroy workflows if you change or remove them—and XSLT is definitely one of these.
I don't understand the point in having a JS polyfill and then expecting websites to include it if they want to use XSLT stuff. The beauty of the web is that shit mostly just works going back decades, and it's led to all kinds of cool and useful bits of information transfer. I would bet money that so much of the weird useful XSLT stuff isn't maintained as much today - and that doesn't mean it's not content worth keeping/preserving.
This entire issue feels like it would be a nothing-burger if browser vendors would just shove the polyfill into the browser and auto-run it on pages that previously triggered the fear-inducing C++ code paths.
What exactly is the opposition to this? Even reading the linked issue, I don't see an argument against this that makes much sense. It solves every problem the browser vendors are complaining about and nothing functionally changes for end users.
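For contrast, the page-side alternative being proposed instead looks roughly like this. A sketch only: the script name and function are hypothetical, standing in for whichever JS/WASM polyfill ends up published:

```html
<!-- hypothetical: every XSLT-using page must now ship its own transformer -->
<script src="xslt-polyfill.min.js"></script>
<script>
  // re-implement what the browser used to do natively: find the
  // xml-stylesheet processing instruction, fetch the stylesheet,
  // transform the document, and replace the rendered content
  applyXsltPolyfill(document);
</script>
```

The catch, as noted above, is that unmaintained legacy pages will never add these two lines, which is the whole reason people want the polyfill shipped inside the browser instead.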
Ex. https://source.chromium.org/chromium/chromium/src/+/main:thi...
https://source.chromium.org/chromium/chromium/src/+/main:thi...
https://github.com/WebKit/WebKit/blob/65b2fb1c3c4d0e85ca3902...
Mozilla has an in-house implementation at least:
https://github.com/mozilla-firefox/firefox/tree/5f99d536df02...
It seems like the answer to the compat issue might be the MathML approach. An outside vendor would need to contribute an implementation to every browser. Possibly taking the very inefficient route since that's easy to port.
At least that's how my cynical side feels these days.
Quite fun at the time
bayindirh•2h ago
Google is boneheaded and hostile to open web at this point, explicitly.
agwa•2h ago
Go changed their telemetry to opt-in based on community feedback, so I'm not sure what point you're trying to make with that example.
bayindirh•2h ago
I spent days in that thread. That uproar was "a noisy minority that isn't worth listening to" for them.
bayindirh•1h ago
The GitHub discussion is there: https://github.com/golang/go/discussions/58409
but the words I of Russ I cited is here: https://groups.google.com/g/golang-dev/c/73vJrjQTU1M/m/WKj7p...
Copying verbatim:
So, as a person who had just started programming Go and made some good technical comments, I didn't matter at all. Only people with clout mattered, and the voice had to come from the team itself. Otherwise us users' influence is "fuck all" (sorry, my blood boils every time I read this comment from Russ).
rafram•2h ago
therealmarv•2h ago
Probably a browser extension on the user side can do the same job if an XSLT relying page cannot be updated.
uyzstvqs•2h ago