That civilization ascended but we did not.
But well, Firefox didn't do it that way, so it never saw proper use.
FTR, there’s also XSLTProcessor (https://developer.mozilla.org/en-US/docs/Web/API/XSLTProcess...) available from JavaScript in the browser. I use it on my homepage to fetch said Atom feed, transform it to HTML, and embed it:
const atom = new XMLHttpRequest(), xslt = new XMLHttpRequest();
atom.open("GET", "feed.xml"); xslt.open("GET", "atom2html.xsl");
atom.onload = xslt.onload = function() {
  // Both requests share this handler; bail until both have finished.
  if (atom.readyState !== 4 || xslt.readyState !== 4) return;
  const proc = new XSLTProcessor();
  proc.importStylesheet(xslt.responseXML);
  const frag = proc.transformToFragment(atom.responseXML, document);
  document.getElementById("feed").appendChild(frag.querySelector("[role='feed']"));
};
atom.send(); xslt.send();
Server-side, I’ve leveraged XSLT (2.0) in the build process of another website, to slightly transform (X)HTML pages before publishing (canonicalize URLs, embed JS & CSS directly in the page, etc.): https://github.com/PaulCapron/pwa2uwp/blob/master/postprod.x...
I forget the _exact_ mechanics, but it was definitely all done in Internet Explorer, with XML as the actual per-page content and the transformation done using XSLT.
Interesting but I definitely hate XSLT as a result. Considering this was my first real programming job, it's surprising I continued.
Philosophically I like the idea, but that era needed a lot more attention to tool quality and interoperability. I’m not sure anything anyone on an XML-based standards committee did after the turn of the century mattered as much as it would have to spend that money on improving the tools, and on avoiding situations like so many tools relying on libxml2 / libxslt and thus never getting support for newer versions of XPath, XSLT, and so on.
There was a third way in which PHP was often better: continuing after errors has drawbacks from a security and correctness perspective, but as a user it often meant you could still read what you wanted on a partially functioning page. I think we’ve largely matured past the point where that’s a good trade-off, but at the time people often did benefit from it.
1. Remember, this was before things like Sentry existed for front-end error collection, and XML-related errors still can’t be handled from JavaScript even now.
https://svn.apache.org/repos/asf/commons/sandbox/gsoc/2010/s...
Written as a pipeline of XSLT transformations. Ran natively in the browser to execute SCXML documents to control UI logic. Good times.
> — when written well
LMAO
You really had to bend your mind to do things with it.
Like... grouping had to be invented by Steve(?) Muench before anyone could do it. This is why it was called Muenchian grouping.
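(For anyone who never ran into it: XSLT 1.0 has no grouping construct, so the trick is to declare an xsl:key and use a generate-id() comparison to pick one representative node per group. A rough sketch over a hypothetical <orders> document:)
<xsl:key name="by-customer" match="order" use="@customer"/>

<xsl:template match="orders">
  <!-- keep only the first order encountered for each @customer value -->
  <xsl:for-each select="order[generate-id() = generate-id(key('by-customer', @customer)[1])]">
    <customer name="{@customer}">
      <!-- then pull every order in the group back out of the key -->
      <xsl:copy-of select="key('by-customer', @customer)"/>
    </customer>
  </xsl:for-each>
</xsl:template>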
OTOH, later XSLT versions are just badly designed programming languages with a weird syntax. No wonder no one wants to use them.
Funnily enough, only one guy has more or less succeeded in implementing them: the editor of the spec and author of Saxon himself ))
I am sure he earned many millions since then on obscure contracts with the likes of SAP and Oracle.
It was quite difficult to learn, and when I did, I found that I could write stuff that almost worked, but not quite…
Not poetry that's any good though.
> OTOH, later XSLT versions are just badly designed programming languages with a weird syntax.
Not that XSLT 1.x was anything other than that; later versions were just piling garbage on a foundation that would have been better off not existing in the first place.
Client-side templating, custom elements, validation against schemas, native data binding for forms: we could have had it all, and we threw it away, preferring instead to rebuild it over and over again in React until the end of time.
The only way I was ever able to get anything done with XSLT was to use Microsoft's script extensions to drop down into JavaScript and just solve the problem with a few lines of code. Which raises the question: why am I not just solving this problem with a few lines of JavaScript in the first place, instead of inviting XSLT to the party?
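(For the curious, that escape hatch looked roughly like this. It's MSXML-only, and slugify is just a made-up helper for illustration:)
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:msxsl="urn:schemas-microsoft-com:xslt"
    xmlns:js="urn:my-scripts">

  <!-- JScript block whose functions become callable from XPath via the js: prefix -->
  <msxsl:script language="JScript" implements-prefix="js">
    function slugify(s) {
      return s.toLowerCase().replace(/[^a-z0-9]+/g, "-");
    }
  </msxsl:script>

  <xsl:template match="title">
    <xsl:value-of select="js:slugify(string(.))"/>
  </xsl:template>
</xsl:stylesheet>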
More on XML, XSLT, SGML, DSSSL, and the DDJ interview "A Triumph of Simplicity: James Clark on Markup Languages and XML":
Pretty cool to see XSLT mentioned in 2025!
If somebody were really interested in bringing XSLT into the 2020s, the best bet may be to drop the XML bit and ask questions like: how do I use production rules to transform a POJO (plain ordinary Java object) into another object, how do I use production rules to transform JSON documents, how do I transform XML/JSON/HTML bidirectionally to and from platform objects, etc. Based on XML, it is just too easy to get into the weeds of "should this be represented as an attribute or a child element", namespaces, entities, and so many other details that get in the way of seeing XSLT for what it is.
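(For what it's worth, XSLT 3.0 already gestures in that direction: json-to-xml() parses a JSON string into a standard XML vocabulary that you can write template rules against. A minimal sketch, assuming a 3.0 processor such as Saxon:)
<xsl:template name="main">
  <!-- json-to-xml() yields <map>/<string key="..."> elements in the
       http://www.w3.org/2005/xpath-functions namespace -->
  <xsl:variable name="doc" select="json-to-xml('{&quot;title&quot;: &quot;XSLT&quot;}')"/>
  <xsl:value-of select="$doc/*/*[@key = 'title']"/>  <!-- prints "XSLT" -->
</xsl:template>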
It's fun to work in such a declarative way, although all the XML gets tiring. I learned a ton there, though. XSLT is great for its intended purpose, but maybe the fact that you can also use it for other things is a risk.
XPath is probably the most useful part of it.
[1] Oddly, SPARQL-in-Turtle isn't half bad to my eye, see https://spinrdf.org/sp.html
Stopped reading here because the XSLT story with DITA-OT is so godawful that it's been the primary driver to move off DITA altogether at nearly every place I've worked at or with that used it. The one exception spent nearly $1M/year on third-party tooling to get away from having to deal with it, and that tooling under the hood was a Mechanical Turk support engineer writing the XSLTs for their weird-ass custom req.
--
XMLUI
https://news.ycombinator.com/item?id=44625292#44625559 (~30/317 comments; yesterday)
> No mention of XSLT?
(interesting mention of a better way, vintage "Kiselyov's SXML/SSAX" for Scheme - https://ssax.sourceforge.net/ ?)
--
XSLT – Native, zero-config build system for the Web
https://news.ycombinator.com/item?id=44393817 (328 comments; ~month ago)
The regret I feel about all this is palpable. I desperately want those hours and hours of coding back.
diulin•8h ago
sbennettmcleish•8h ago
jerf•8h ago
And a lot of that "badness" is precisely that XSLT is a very closed, scarcity-minded language where basic library and language features have to be routed through a standards committee (you think it's hard to get a new function into Python's or Go's standard library?), when all you really need is an XPath library and any abundance-mindset language you can pick up, where if you need something like "regular expression" support you can just go get a library. Or literally anything else you may need to process an XML document, which is possibly anything. Which is why a general-purpose language is a good fit.
That "What's New In XSLT 3.0" is lunatic nonsense if you view it through the lens of being a programming language. What programming language gets associative arrays after 18 years? And another 8 years after that you still can't really count on that being available?
Programming languages tend to have either success feeding success or failure feeding failure. Once one of those cascades starts, it's very difficult to escape from it. XSLT is pretty firmly in the latter camp, probably kept alive only by the fact that it's a standard and that still matters to some people. It's frozen because effectively nobody cares, because it's frozen, because nobody cares.
I definitely recommend putting XPath in your toolbelt if you have to deal with XML at all though.
froh42•5h ago
(It was a building information system: fire alarms, access control, and lots of business rules stored in XML.)
While the XML was easier to transform in XSLT than in the native C++, and yes, XSLT was probably the right tool at the time, I developed a deep hatred for it. It felt like a functional language with all the important parts removed.
Yes, pattern matching is a good thing, but hey - I can do pattern matching for rules in any decent language. It was just the amount of existing code that prevented me from porting it to another language.
(And I remember a few ugly hacks, where I exposed "programming language" stuff from C# - which we also used - to the XSLT processor)
However, with all the XSLT ugliness: XPath is amazing! I love that.
mcswell•4h ago
jerf•3h ago
It's just XPath. Grab your favorite programming language, grab XPath, start transforming and outputting. You still probably ought to learn XPath, but unlike XSLT as a whole, learning XPath rather quickly starts paying off.
mcswell•2h ago
jerf•1h ago
There are two problems. First, everyone is obsessed with trying to label it "declarative" when it is waaaaay better understood as a library for driving around a multicursor imperatively. Call it declarative if you like but I've had way more success driving it effectively imperatively.
Second, each XPath clause has three parts: the "axis", which is the "direction" you want to drive the cursor; the "node selection", which selects which nodes you are talking about (usually the tag name); and optional filters in [], which can themselves recurse into further XPath expressions, as well as use functions and predicates to filter. Fortunately for your convenience, there's a default axis, selecting nodes by tag name is easy, and the filters are indeed optional. Unfortunately for your understanding, there are other defaults and shortcuts, and all the "tutorials" and "cheat sheets" and all that jazz teach only the shortcuts; but if you learn only the shortcuts, the whole language feels random and very difficult to understand. You really need to learn the full version of the selector clause first, practice by writing it out fully a few times, and then start using the shortcuts.
(You can even see the "node selection" as just a type of filter that looks at the tag name most of the time, in which case there are two parts. But it's really confusing when tutorials don't distinguish very well between those two things and mangle it all up into one undifferentiated ball.)
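A few equivalences make the point; the spelled-out forms on top are what the shortcuts below them actually expand to (the element names are hypothetical):
full form:  /descendant-or-self::node()/child::entry[child::title = 'XSLT']
shortcut:   //entry[title = 'XSLT']

full form:  child::entry/attribute::id
shortcut:   entry/@id

full form:  parent::node()/child::updated
shortcut:   ../updated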
It's not that hard if you are taught it correctly, but I have yet to find something that teaches it correctly online.
gomodon•8h ago
ssdspoimdsjvv•7h ago
https://qt4cg.org/specifications/xslt-40/Overview.html
Last updated just under a week ago!