There's potentially a real need for something like XSLT + XForms for low-to-no JS interactivity.
Even a basic set of JS-free, HTML-modifying operations for WebForms would go a long way towards that (e.g. insert a row, or delete an element matching this ID on click).
I’m curious if you could describe more about what you envision. I have a difficult time imagining:
1. How the stateful/interactive aspects of XForms (such as insert/delete, as you mention) would be implemented client side without JS.
2. How XSLT factors into that vision.
It might be lack of imagination on my part! Admittedly, I don’t know everything about XSLT and maybe it has more interactive capabilities than I’m aware of. But at least as far as I know, when a browser vendor did implement XForms functionality it still ultimately depended on JS to achieve certain behaviors. It’s been a while since I was looking into this, but IIRC it looked conceptually similar to XUL. And granted, that might mean the JS is provided by a browser, and encapsulated from user space similar to browsers’ use of shadow DOM for standard form controls now.
1: I also have good reason to believe my work on this prior project helped keep XSLT alive in Chrome the last time there was a major push to deprecate it! Albeit totally inadvertently: my work just happened to correlate very closely to a spike in usage metrics around the same time, which was then cited as a reason to hold off on deprecation.
1. Associate a form element with a non-JS action, e.g. add-element, remove-element, modify-element.
2. Allow those actions to make use of <template> elements when adding or modifying elements, and/or XPath selectors.
3. Add a new <input type=id> (or similar) that auto-generates unique UUIDs for form rows.
A mockup of what we'd get, though it's focused on pure HTML (it would be XML-compatible, however). This is 100% a straw-man, probably not even fully self-consistent, but it gives an idea of what I would want:
<h1>Declarative Form Example</h1>
<form action="/update" method="post">
  <table id="people">
    <thead>
      <tr>
        <th>First</th><th>Last</th><th>Email</th><th></th>
      </tr>
    </thead>
    <tbody>
      <!-- rows appear here -->
    </tbody>
  </table>
  <button type="submit">Save</button>
</form>

<template id="row-template">
  <tr data-row data-row-id data-bind:dataset(rowId)="row_id">
    <td data-bind:text="first"><input type="hidden" name="first"><input type="uuid" name="row_id" autogenerate></td>
    <td data-bind:text="last"><input type="hidden" name="last"></td>
    <td>
      <a data-bind:attr(href)="email_link" data-bind:text="email"></a>
      <input type="hidden" name="email"><input type="hidden" name="email_link">
    </td>
    <td>
      <button formmutate="remove"
              formtarget='tr[data-row-id="{{row_id}}"]'
              aria-label="Remove row"></button>
    </td>
  </tr>
</template>

<form id="add-person">
  First <input name="first" required>
  Last <input name="last" required>
  Email <input name="email" type="email" required>
  <input type="hidden" name="email_link" value="mailto:{{email}}">
  <button formmutate="add" formtarget="#people > tbody" formtemplate="#row-template">Add</button>
</form>
Some existing standards/specs/proposals I cribbed this from:

- https://html.spec.whatwg.org/multipage/scripting.html#the-te...
- (defunct) https://html.spec.whatwg.org/multipage/form-elements.html#th...
- https://github.com/whatwg/html/issues/11582#issuecomment-321...
Frankly I think this is a tempest in a teapot, and the primary reason people are complaining is because Google is sponsoring the idea, not because it's going to harm users in some tangible way.
Then some random tag-along guy, presumably after this hit HN, commented "I like that it doesn't have ads," which has literally nothing to do with the issue, lol.
> Thanks for raising these 6 examples of sites publishing XML files. We can add it to the existing list of 357 sites
This feels like an email I'd get from HR. The point of the topic was the ease of finding those 6, not to discuss those 6 specifically. Maybe being direct pisses people off more than this corporate-styled language does, though.
XSLT accomplishes what json-ld and semantic html never managed.
The question of abstract efficiency via reuse is academic. If the XML documents were the ones that users accessed most of the time, or were the only documents available, that might change the analysis. But that isn't the case.
Or we could deprecate support for .txt and .xml in the browser itself, for the same reasons.
We obviously don't want that. It's valuable to have support for multiple formats, and it's especially valuable to have a single file that can be used by both machines and humans.
Not everything that has a benefit is accepted by customers. Some stuff just doesn't sell.
You can't just use the metrics of "what do people who don't know how to disable telemetry use?" to make decisions for everyone.
As more than one VP has said at my company: "In God I trust. Everyone else needs data."
This isn't really leading anywhere.
But I think that was maybe just a mistake? Maybe he meant https://www.congress.gov/bill/117th-congress/house-bill/3617...
The counterpoint seems to be "well it has Javascript so we don't need actual features since you can theoretically write anything in JS" but one of the nicest things about it was having that toolkit available to Javascript. You can spin up a DOM for arbitrary XML and apply a bunch of natively-compiled, fast tools to it, coordinating and supplementing that with JS. Then present that to the UI as HTML. It's very nice.
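As a rough illustration of that workflow (the feed URL and element names below are made up), you can hand arbitrary XML to the browser's native parser and XPath engine, then present the results as ordinary HTML:

// Sketch only: parse arbitrary XML, query it with the browser's built-in
// XPath engine, and render the result to the UI as HTML.
async function showFeedTitles() {
  const xmlText = await (await fetch("/items.xml")).text(); // hypothetical URL
  const doc = new DOMParser().parseFromString(xmlText, "application/xml");

  // Natively-implemented XPath evaluation over the XML DOM.
  const titles = doc.evaluate("//item/title", doc, null,
    XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);

  const list = document.createElement("ul");
  for (let i = 0; i < titles.snapshotLength; i++) {
    const li = document.createElement("li");
    li.textContent = titles.snapshotItem(i).textContent;
    list.append(li);
  }
  document.body.append(list);
}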
I'm on team "the right move is to upgrade these, not remove them".
If they want to remove things to simplify the spec I've got a list I'd suggest, but it's mostly stuff added in the last decade.
As I said in another thread, not everything of value or benefit gains meaningful adoption. A rational actor doesn't preserve everything of value for its own sake. At best, we call people who do "museum curators," and at worst, "hoarders."
That it still has any use on notable sites at all seems like a decent cue to at least try not neglecting it before declaring nobody wants to use it.
... however, it's mostly useful for doing things that owners of walled Web platforms have no interest in, like working with interoperable protocols & formats. They've been killing those through active moves and neglect for 15 years, so why would they do something that makes that easier & nicer?
Worst comic ever!
I don't care one way or the other about XSLT, but fucking hell, I would like a boatload more intellectual honesty in the world. Being angry is not a good reason to cherry-pick your data. It is a reason to step away from the argument and cool down before you re-engage with honesty and clarity.
Consider, for example, RSS/Atom feeds. Certainly there are <link /> tags you can add, but since none of the major browsers do anything with those anymore, we're left dropping clickable links to the feeds where users can see them. If someone doesn't know about RSS/Atom, what's their reward for clicking on those links? A screenful of robot barf.
These resources in TFA are another example of that. The government or regulatory bodies in question want to provide structured data. They want people to be able to find the structured data. The only real way of doing that right now is a clickable link.
XSLT provides a stopgap solution, at least for XML-formatted data, because it allows you to provide that clickable, discoverable link, without risking dropping unsuspecting folks straight into the soup. In fact, it's even better than that, because the output of the XSLT can include an explainer that educates people on what they can do with the resource.
If browsers still respected the <link /> tag for RSS/Atom feeds, people probably wouldn't be pushing back on this as hard. But what's being overlooked in this conversation is that there is a real discoverability need here, and for a long time XSLT has been the best way to patch over it.
Part of the reason HTML 5/LS was created was to preserve the behaviour of existing sites and malformed markup such as omitting html/head/body tags or closing tags. I bet some of those had the same usage as XSLT on the web.
Really wish registerProtocolHandler were more popular. And I really wish registerContentType hadn't been dropped!
Web technology could be such a nexus of connectivity. We could have the web interacting with so much, offering tools for so much. Alas, support has largely gotten worse decade by decade. And few have taken up the chance.
Bluesky is largely using at:// URLs. Eventually we could probably argue for native support for our protocol. But web+at:// is permissionless. Tools like https://pdsls.com can just become web-based tools, with near no effort, if they want.
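As a sketch of what that permissionless path looks like today (the handler URL below is hypothetical, and in current browsers it must be same-origin with the registering page):

// A page volunteering to handle web+at:// links; "%s" is replaced with the
// full clicked URL. Only web+ schemes and a small safelist are allowed.
navigator.registerProtocolHandler("web+at", "https://example.com/resolve?uri=%s");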
The microformats folks have a standard to embed machine-readable feed data into HTML, which seems a lot more practical to me after seeing how browser just ignore RSS:
https://microformats.org/wiki/h-feed
(I haven't tried it but it seems fine)
"Remove mentions of XSLT from the html spec" - https://news.ycombinator.com/item?id=44952185 - Aug 2025 (522 comments)
Should we remove XSLT from the web platform? - https://news.ycombinator.com/item?id=44909599 - Aug 2025 (96 comments)
I’m sure ActiveX and Silverlight removal did too. And iframes not sharing cross domain cookies. And HTTP mixed content warnings. I get it, some of these are not web specs, but some were much more popular than XSLT is now.
The government will do what they do best, hire a contractor to update the site to something more modern. Where it will sit unchanged until that spec too is removed, some years from now.
It seems like a very easy fix for the handful of websites that still use it.
Btw, you can also apply an XSLT sheet to an XML document using standard JavaScript: https://developer.mozilla.org/en-US/docs/Web/API/XSLTProcess...
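For example, something along these lines, with a throwaway document and stylesheet inlined for brevity:

const xml = new DOMParser().parseFromString(
  "<people><person>Ada</person><person>Grace</person></people>",
  "application/xml");

const xsl = new DOMParser().parseFromString(
  `<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
     <xsl:template match="/people">
       <ul><xsl:for-each select="person"><li><xsl:value-of select="."/></li></xsl:for-each></ul>
     </xsl:template>
   </xsl:stylesheet>`,
  "application/xml");

const processor = new XSLTProcessor();
processor.importStylesheet(xsl);
// Produces a <ul> with one <li> per person, owned by the current document.
document.body.append(processor.transformToFragment(xml, document));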
Google says it's "too difficult" and "resource intensive" to maintain... but they've deliberately left that part of the browser to rot instead of incrementally upgrading it to a modern XSLT standard as new revisions were released, so it seems like a problem of their own making.
Given their penchant for user-hostile decisions it's hard to give the chrome team the benefit of the doubt here that this is being done purely for maintainability and for better security (especially given their proposal of just offloading it to a js polyfill).
It's commercially beneficial to make the web standard so complex that it's more or less impossible to implement, since it lets you monopolise the browser market. However complexity only protects incumbents if you can persuade enough people to use the overcomplicated bits. If hardly anyone uses it, like xslt, then it's a cost for the incumbent which new entrants might get away without paying. So there's no real upside for Google in supporting it. And you can't expect commercial enterprises to do something without any upside.
This makes absolutely no sense.
We could've had such a nice language. The efforts for a cleaner language and web platform API were there, but doctrine always said no because of legacy and people have moved on to alternatives now.
It’s that currently you can open an XML file (including feeds) with an associated stylesheet and the stylesheet gets applied, which can be used to render an HTML document on the client side from an xml source like a feed.
Apparently Firefox has its own implementation. Not that that makes things any better: the Firefox implementation appears to be in just as bad shape as the libxslt bindings in Chrome. See here for more details: https://news.ycombinator.com/item?id=44910050
And yet they have no qualms shoving huge attack surfaces in the form of WebUSB, WebSerial, WebMIDI, WebTransport, WebBluetooth, WebKitchenSink, most of which have as much usage as XSLT: https://chromestatus.com/metrics/feature/timeline/popularity... or https://chromestatus.com/metrics/feature/timeline/popularity...
font: supported in all browsers https://caniuse.com/?search=font
frameset: supported in all browsers https://caniuse.com/?search=frameset
applet: supported in all browsers https://caniuse.com/?search=applet
> All of them were rarely used and had better alternatives available.
"Rarely used" is not enough of a justification
1. Keep the standards simple. Avoid adding features if you can. Standards define implementations. Don't invert that pattern and make the standards morbidly obese.
2. Keep the features orthogonal. Don't create multiple ways of doing the same thing. Make sure that each feature plays well with the others.
3. Maintain backwards compatibility. Don't break anything that depends on your standard. Don't frustrate your implementers and their customers with an endless game of whack-a-mole.
4. All the above are on a best-effort basis. Exceptions are acceptable under exceptional circumstances.
For some reason, the WHATWG has the diametrically opposite belief on all of the above. Perhaps they should be called the Web upside-down-standards group. You have no problem adding features to the standards faster than anyone can read them. But maintaining and upgrading an old feature is somehow too far beyond your capability to justify keeping it around. I guess it's back to uni for me to figure out how I got this so wrong.
It's incorrect to say there are no removals, as we do not have <MARQUEE> anymore.
https://caniuse.com/mdn-html_elements_marquee
The marquee element is deprecated but is supported by all major Web browsers.
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="somefile.xsl"?>
...
Then the browser fetches somefile.xsl and uses that to transform the XML into HTML (which is then rendered as per the uzhe). Whether the browser does all that in C or C++ or Rust or JavaScript or by sending it to an LLM is an implementation detail.
If the browser drops XSLT support and just displays it as text (or syntax-highlighted XML), I, as the XML author, can't polyfill that by adding JavaScript to the XML because it won't be executed. You need to back up a level.
You could set up a web page that fetches the XML, fetches the XSLT, transforms it into HTML, and then displays it (so a browser in your browser, so you can browse while you browse), but I wouldn't call that a polyfill exactly. If you're going to do that, it would be less work and a better user experience to do the transform server side.
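For what it's worth, the browser-in-your-browser version is only a few lines; this sketch assumes a host HTML page that knows both URLs (the file names are placeholders):

// Fetch the XML and the XSLT, transform client-side, and swap the result in.
async function renderXml(xmlUrl, xslUrl) {
  const load = async (url) =>
    new DOMParser().parseFromString(await (await fetch(url)).text(), "application/xml");
  const [xml, xsl] = await Promise.all([load(xmlUrl), load(xslUrl)]);
  const processor = new XSLTProcessor();
  processor.importStylesheet(xsl);
  document.body.replaceChildren(processor.transformToFragment(xml, document));
}
renderXml("data.xml", "somefile.xsl");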
Removals sometimes take a decade or more, or sometimes don't happen at all. Just because the vendors would like to remove something, doesn't mean they can.
For example, MutationEvents were deprecated in 2011 and just removed last year.
So this is just the beginning of the process. Even the PR to remove XSLT from the spec: it doesn't mean it's being merged soon. Removal from the spec is different from removal from engines.
"Is it possible that software is not like anything else, that it is meant to be discarded: that the whole point is to always see it as a soap bubble?" - Alan Perlis
BUT since it's there, don't take it out; or at least take it out of the spec and then require all browser vendors to auto-install a plugin that restores the missing feature, and warn that the page relies on a plugin or something.
They are not peeling away browser complexity. They revel in browser complexity. They gleefully make the browser as complex as possible... as long as their promotions depend on it: https://news.ycombinator.com/item?id=44989576
See what you get if you parse an HTML page these days - heck, if you're lucky the Anubis girl will let you in and you can see the JavaScript trash soup, maybe even taste it.
The XML viewer is still there; there are colors, and you can collapse the nodes.
XML was an abomination in terms of format but there were some really good ideas in the early web. I remember you could point to a stylesheet to apply to an XML file.
I really wish we could apply CSS stylesheets to a JSON.
You still can. That's exactly what this article is about: XSLT is that stylesheet. You can publish XML-structured data that machines can ingest, and you can attach XSLT to that data so that humans can view the exact same file in a human-friendly way. The browser constructs, according to the XSL transform rules, a Document Object Model out of the XML, and then it usually applies CSS to that DOM. No duplication of files, no "click here to get to the actual data." I've got some OPML (lists of podcasts) and RSS (podcast) XML files that are also human-readable web pages thanks to this; the exact same web address works for your favorite podcast app and for human viewing.
> I really wish we could apply CSS stylesheets to a JSON.
I don't think what you want is CSS in the first instance, but in the second: first you'd want a template transformation language, like XSLT is for XML. You'd want to be able to say not just "show these values in this font" but "for each key matching this pattern, create a DOM element containing that key, then another element containing its value," and then style them with CSS once a whole document has been generated out of the JSON. XML with XSLT almost always also included CSS to handle fonts and whatnot.
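A hand-rolled sketch of that idea, with a made-up data shape: walk the JSON, emit one element per key and per value, and let ordinary CSS rules take it from there:

// Hypothetical data; a real "JSON template language" would do this declaratively.
const data = { title: "My Podcast", episodes: 42, updated: "2024-05-01" };

const dl = document.createElement("dl");
for (const [key, value] of Object.entries(data)) {
  const dt = document.createElement("dt");
  dt.textContent = key;
  const dd = document.createElement("dd");
  dd.textContent = String(value);
  dl.append(dt, dd);
}
// Once the DOM exists, plain CSS selectors (dl, dt, dd) can style it.
document.body.append(dl);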
did you mean:
it seems like a pretty cut and dried case for removal (of a feature that is not only not used but even in cases where it might be useful the sites have html versions and don't expect people to click on the xml links) because it will break many government sites?
or
it seems like a pretty cut and dried case for removal (of a feature that is not only not used but even in cases where it might be useful the sites have html versions and don't expect people to click on the xml links) because the feature is not used etc. etc. ?