I have made a couple of simple websites using PHP to bolt on reusable elements (header, footer, navigation), just because it is the solution that probably will work for ~decades without much churn. But XSLT would be even better!
Works a treat and makes most frameworks for the same seem completely pointless.
i am thinking of something like

  index.html:

    <div><navigation/></div>

  index.js:

    function navigation() {
      document.querySelector('navigation').replaceWith('<a href="...">...')
    }

or maybe

  let customnodes = {
    navigation: '<a href="...">...',
    ...
  }

then add a function that iterates over customnodes and makes the replacements. even better if i could just iterate over all defined functions (there is a way, i just didn't take the time to look it up). then the functions could be:

  function navigation() {
    return '<a href="...">...'
  }

and the iterator would wrap each function with the appropriate document.querySelector('navigation').replaceWith call.
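something like this minimal sketch of that iterator, where the component names and the markup are made-up placeholders:

  // each function returns the HTML for one reusable element (links are placeholders)
  const customnodes = {
    navigation: () => '<nav><a href="/">Home</a> <a href="/about">About</a></nav>',
    footer: () => '<footer>(c) example.org</footer>',
  };

  // replace every <navigation></navigation>, <footer></footer>, etc.
  // with the HTML returned by the matching function
  function expandCustomNodes() {
    for (const [tag, render] of Object.entries(customnodes)) {
      document.querySelectorAll(tag).forEach((el) => {
        el.outerHTML = render();
      });
    }
  }

  document.addEventListener('DOMContentLoaded', expandCustomNodes);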
Google did not unilaterally decide to kill XSLT - https://news.ycombinator.com/item?id=44987239

Recent and also related:
XSLT removal will break multiple government and regulatory sites - https://news.ycombinator.com/item?id=44987346 - Aug 2025 (99 comments)
"Remove mentions of XSLT from the html spec" - https://news.ycombinator.com/item?id=44952185 - Aug 2025 (523 comments)
Should we remove XSLT from the web platform? - https://news.ycombinator.com/item?id=44909599 - Aug 2025 (96 comments)
The real barrier to adoption for any tool is the network effects of other existing tools, which create attention barriers and cultural barriers that may hinder adoption of superior alternatives.
A tool has to adhere to, and build on top of, existing conceptual baggage in order to appeal to the masses of developers.
This is partly because developers believe that the tools they're using now are cutting-edge and optimal... So a radical conceptual reinvention of their current favorite tools will look to them like a step backwards, regardless of how much further it can take them forward.
pyuser583•3h ago
But that universe did not happen.
Lots of "modern" tooling works around the need. For example, in a world of Docker and Kubernetes, are those standards really that important?
I would blame the adoption of containerization for the lack of interest in XML standards, but by the time containerization happened, XML had been all but abandoned.
Maybe it was the adoption of Python, whose JSON libraries are much nicer than its XML ones. Maybe it was the fact that so few XML specs ever became mainstream.
In terms of effort, there is a long tail with XML where you're trying to get things working but getting little in return for that effort. XSLT is supposed to be the glue that keeps it all together, but there is no "it" to keep together.
XML also does not play very nice with streaming technologies.
I suspect that eventually XML will make a comeback. Or maybe another SGML dialect. But that time is not now.
warkdarrior•3h ago
Not sure how that is true. XML is a specification for a data format, but you still need to define the schema (i.e., elements, attributes, their meaning). It's not like XML for web pages (XHTML?) could also serve as XML for Linux container descriptions or as XML for Android app manifests.
the_mitsuhiko•3h ago
XHTML, being based on XML, tried to be a strict standard in a world where a non-strict standard already existed, and everybody was reminded on a daily basis that a non-strict standard is much easier to work with.
I think it's very hard to compete with that.
kstrauser•1h ago
Know what? Life's too short to lose time to remembering to close a self-closing tag.
About the time XHTML 1.1 came along, we collectively bailed and went to HTML5, and it was a breath of fresh air.
ndriscoll•1h ago
Then React introduced faux-XML as JSX, except with the huge machinery of a runtime JavaScript virtual DOM instead of basic template expansion, and everyone loves it? And if this React playground I've opened up reflects reality, JSX literally requires you to balance your opening/closing tags. The punch line of the whole joke.
What was the point of this exercise? Why do people use JSX for e.g. blogs when HTML templating is built into the browser and they do nothing dynamic? For many years it's been hard to shake the feeling that it's some trick to justify six-figure salaries for people making web pages that are simple enough that an 8 year old should be up to the task.
That same nagging feeling reassures me about our AI future though. Easy ways to do things have been here the whole time, yet here we are. I don't think companies are as focused on efficiency as they pretend. Clearly social aspects like empire building dominate.
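(On the "HTML templating is built into the browser" point: a minimal sketch using the standard <template> element; the markup, the element id, and the posts array are invented for illustration.)

  // a reusable fragment defined once
  const tpl = document.createElement('template');
  tpl.innerHTML = '<article class="card"><h2></h2><p></p></article>';

  // stamp out one card per record, filling in the slots, no framework involved
  const posts = [
    { title: 'Hello', body: 'First post' },
    { title: 'XSLT', body: 'Still here' },
  ];
  const list = document.getElementById('posts');
  for (const post of posts) {
    const card = tpl.content.cloneNode(true);
    card.querySelector('h2').textContent = post.title;
    card.querySelector('p').textContent = post.body;
    list.append(card);
  }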
somat•2h ago
Asterisk: except namespaces. I loathe those. You are skipping happily along, chewing through your XML, xpathing left and right, and then you find out some psychopath has decided to use namespaces, and now everything has become super awkward and formal.
layer8•2h ago
Well, I guess we could do it like libraries in C-land and have every schema add its own informal identifier prefix to avoid name collisions. But there’s a reason why programming languages moved to namespaces as an explicit notion.
echelon•2h ago
XHTML would have made the Semantic Web (capital letters) possible. Someone else could have done search better. We might have had a proper P2P web.
They wanted sloppy, because only Google scale could deal with that.
Hopefully the AI era might erode that.
bawolff•29m ago
XHTML's failure had nothing to do with it, and is basically unrelated. Even if XHTML had won, I fail to see how that would have helped the semantic web in any way, shape, or form.
SigmundA•2h ago
Not sure why it wouldn't be just as good as JSON: if you are going to stream and parse, you need a low-level push or pull parser, not a DOM, just like with JSON. See SAX for Java or XmlReader/XmlWriter in .NET.
XSLT 3 even has a streaming mode, I believe, which was badly needed but comes with constraints from not having the whole document in memory at once.
I liked XSLT, but there is no need for it; JavaScript is good enough if not better. Many times you needed a script tag in your XSLT to get something done it couldn't do on its own anyway, so you might as well use a full language with good libraries for handling XML instead. See LINQ to XML, etc.
SigmundA•1h ago
Again, I don't really agree; it's just that most developers don't seem to understand the difference between a DOM (or parsing JSON into a full object) vs using a streaming reader or writer, so they need to be hand-fed a format that forces it on them, such as line-based CSV.
Maybe if JSON and XML allowed multiple top-level documents/objects it would have helped, like JSON Lines does.
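(For what it's worth, a minimal sketch of that streaming style in browser JS, assuming a hypothetical endpoint that serves JSON Lines, one object per line:)

  // consume a JSON Lines response incrementally instead of buffering the whole body
  async function* readJsonLines(url) {
    const res = await fetch(url);
    const reader = res.body.getReader();
    const decoder = new TextDecoder();
    let buffered = '';
    while (true) {
      const { value, done } = await reader.read();
      if (done) break;
      buffered += decoder.decode(value, { stream: true });
      let cut;
      while ((cut = buffered.indexOf('\n')) >= 0) {
        const line = buffered.slice(0, cut).trim();
        buffered = buffered.slice(cut + 1);
        if (line) yield JSON.parse(line); // one top-level object per line
      }
    }
  }

  // usage, inside an async context; the endpoint name is made up
  for await (const record of readJsonLines('/events.jsonl')) {
    console.log(record);
  }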
Aurornis•2h ago
The pro-XML narrative always sounded like what you wrote, as far back as I can remember: The XML people would tell you it was beautiful and perfect and better than everything as long as everyone would just do everything perfectly right at every step. Then you got into the real world and it was frustrating to deal with on every level. The realities of real-world development meant that the picture-perfect XML universe we were promised wasn't practical.
I don't understand your comparison to containerization. That feels like apples and oranges.
mattmanser•2h ago
And those bizarre designs went straight into the XML: properties often in attributes, nodes that should have been attributes, over-nesting, etc.
And we blamed XML for the mess when often it was just our inexperience in software design as an industry that was the real cause. But XML had too much flexibility compared to the simplicity of the later JSON, which meant it helped cause the problem. JSON "solved" the problem by being simpler.
But then the flip side was that it was too strict, and creating one in code was a tedious PITA where you had to specify a schema even though it didn't exist or even matter most of the time.
Aurornis•2h ago
The few staunch XML supporters I worked with always wanted to divert blame to something else, refusing to acknowledge that maybe XML was the wrong tool for the job or even contributing to the problems.
toyg•2h ago
The hard truth is that XML lost to the JavaScript-native format (JSON). Any JavaScript-native format would have won, because "the web" effectively became the world of JavaScript. XML was not JS-friendly enough: the parsing infrastructure was largely based on C/C++/Java, and then you'd get back objects with verbose interfaces (again, a C++/Java thing) rather than the simple, nested dictionaries that less-skilled "JS-first" developers felt at ease with.
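(A contrived illustration of that difference, with a made-up record:)

  // the same record, parsed two ways
  const xmlDoc = new DOMParser().parseFromString(
    '<user><name>Ada</name><langs><lang>js</lang><lang>xslt</lang></langs></user>',
    'application/xml'
  );
  const xmlName = xmlDoc.querySelector('user > name').textContent;           // DOM traversal
  const xmlLangs = [...xmlDoc.querySelectorAll('lang')].map((n) => n.textContent);

  const user = JSON.parse('{"name":"Ada","langs":["js","xslt"]}');
  const jsonName = user.name;                                                 // plain property access
  const jsonLangs = user.langs;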
mpyne•2h ago
It's a dumber format but that makes it a better lingua franca between all sorts of programming languages, not just Javascript, especially if you haven't locked in on a schema.
Once you have locked in on a schema and IDL-style tooling to autogenerate adapter classes/objects, then non-JSON interchange formats become viable (if not superior). But even in that world, I'd rather have something like gRPC over XML.
smarx007•2h ago
This makes sense.
However, there are two ways to address it:
1) Work towards a more advanced system that addresses the issues (for example, RDF/Turtle, which extends XML namespaces to define classes and properties and represents graphs rather than being limited to trees the way XML and JSON are)
2) Throw it away and start from scratch. First, JSON. Then, JSON Schema. jq introduces a kind of "JSONPath". JSONL says hi to XML stream readers. JSONC, because comments in config files are useful. And many more primitives that existed around XML were eventually reimplemented.
Note how the discussion around removing XSLT 1 support similarly has two ways forward: yank it out or support XSLT 3.
I lean towards Turtle, rather than JSON, as the replacement for XML, and towards XSLT 3 replacing XSLT 1 support in the browsers.
mpyne•2h ago
Don't miss that they were reimplemented properly.
Even XML schemas, the one thing you'd think they were great at, ended up seeing several different implementations beyond the original DTD-based schema definitions and beyond XSD.
Some XML things were absolute tire fires that should have been reimplemented even earlier, like XML-DSIG, SAML, SOAP, WS-everything.
It's not surprising devs ended up not liking it, there are actual issues trying to apply XML outside of its strengths. As with networking and the eventual conceit of "smart endpoints, dumb pipes" over ESBs, not all data formats are better off being "smart". Oftentimes the complexity of the business logic is better off in the application layer where you can use a real programming language.
smarx007•1h ago
Of course not! W3C SHACL shapes, on the other hand...
schema.org is also a move in the right direction
themafia•2h ago
XML without attributes probably would have seen wide and ready adoption.
bawolff•36m ago
They tended to be design-by-committee messes that included every possible use case as an option.
Anyone who has ever had the misfortune of having to deal with SAML knows what I'm talking about. It's a billion-line-long specification, everyone implements only 10% of it, and it's full of hidden gotchas that will screw up your security if you get them wrong. (Even worse, the underlying XML Signature spec is literally the worst way to do digital signatures possible. It's so bad you'd think someone was intentionally sabotaging it.)
In theory this isn't XML's fault, but somehow XML seems to attract really bad spec designers.
SoftTalker•2h ago
JSON is too simplistic.
Something built from s-expressions would probably have been ideal but we've known that for 70 years.
hnlmorg•2h ago
Now I do think there is a need for the complexity supported by XML to exist, but 99% of the time JSON or similar is good enough while being easy to work with.
That all said, XHTML was amazing. I’d have loved to see XHTML become the standard for web markup. But alas that wasn’t to be.
spankalee•2h ago
So XHTML lost to the much more forgiving HTML.
There was an idea to make a forgiving XML for web use cases: https://annevankesteren.nl/2007/10/xml5 but it never got traction.
hnlmorg•48m ago
But I do agree that I’m likely in the minority of people (outside of web developers at least) that thought that way.
Devasta•2h ago
What a pity.
WJW•2h ago
It got abandoned because it sucks. New technology gets adopted because it's good. The XML standards were just super meh and difficult to work with. There's really not much more to it than that.
johannes1234321•2h ago
With JSON you can dump data structures from about any language straight out and it's okay to start toying around and experimenting. Over time you might add logic for filtering out some fields, rename others, move stuff a little around without too much trouble.
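(A trivial sketch of that "dump it straight out" workflow; the config object is invented:)

  const config = { theme: 'dark', retries: 3, tabs: ['editor', 'preview'] };
  const text = JSON.stringify(config, null, 2); // serialize, human-readable
  const roundTripped = JSON.parse(text);        // and straight back into a plain object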
Also, quickly writing the structure up by hand works a lot faster in any editor, without having to repeat closing tags (though at some point the closing brackets and braces will take their toll).
However I agree: once you got the XML machinery, there is a lot of power in it.
jackero•2h ago
The idea behind XSLT is nice: creating a stylesheet to transform raw data into presentation. The practice of using it was terrible. It was ugly, it was verbose, it was painful, it had gotchas, it made it easier for scrapers, it bound your data to your presentation more, and so on.
Most of the time I needed to generate XML to later apply an XSLT stylesheet, the resulting XML document was mostly a one-off with no associated spec and not a serious transport document. It raised the question of why I was doing this extra work.
ndriscoll•1h ago
The entire point of XSLT is to separate your data from its presentation. That's why it made it easy to scrape. You could return your data in a more natural domain model and transform it via a stylesheet to its presentation.
And in doing so it is incredibly concise (mostly because XPath is so powerful).
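(For reference, a minimal sketch of wiring that up client-side with the standard XSLTProcessor API; items.xml, items.xsl, and the target element id are hypothetical:)

  // fetch the data and the stylesheet, apply the transform, inject the result
  async function renderItems() {
    const parse = (text) => new DOMParser().parseFromString(text, 'application/xml');
    const [data, sheet] = await Promise.all([
      fetch('/items.xml').then((r) => r.text()).then(parse), // the domain model
      fetch('/items.xsl').then((r) => r.text()).then(parse), // the presentation
    ]);
    const proc = new XSLTProcessor();
    proc.importStylesheet(sheet);
    document.getElementById('items').replaceChildren(proc.transformToFragment(data, document));
  }

  renderItems();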