That's been tried multiple times over the last two decades and it just ends up with a patchwork of conventions and rules defining how to jam a square peg into a round hole.
"Developers keep making this bad choice over and over" is a statement worthy of deeper examination. Why? There's usually a valid reason for it. In this instance JSON + JS framework of the month is simply much easier to work with.
Whereas oftentimes what happens is: "oh, this thing seems to be working. And it looks easy. Great! Moving on.."
1) Google doing whatever they want with matters that affect every single human on the planet.
2) Google running a farce of a "public feedback" program where they don't actually listen, or in this case ask for feedback after the fact.
3) Google not being truthful or introspective about the reasons for such a change, especially when standardized alternatives have existed for years.
4) Honestly, so much of standard, interoperable "web tech" has been lost to Chrome's "web atrocities", and IE's before that... you'd think we'd have learned the lesson to "never again" put a dominant browser engine in the hands of a "for profit" corp.
NaCl? Mozilla won that one. Wasm is a continuation of asm.js.
Dart? It now compiles to Wasm but has mostly failed to replace JS, while TypeScript filled that niche.
Sure, Google didn’t care much for XML. They had a proper replacement for communication and simple serialisation internally in protobuf, which they never actually tried to push for web use. Somehow JSON ended up becoming the standard.
I personally don’t give much credit to the theory of Google as a mastermind patiently undermining the open web for years via the standards.
Now if we talk about how they have been pushing Chrome through their other dominant products and how they have manipulated their own products to favour it, I will gladly agree that there is plenty to be said.
I think, from my experience at least, that we keep getting these "component reuse" things coming around: "oh, you can use Company X's schema to validate your XML!", "oh, you can use Company X's custom web components in your web site!", etc. Yet it rarely if ever seems to get used. It very rarely feels like components/schemas/etc. can be reused outside of their intended original use cases, and if they can, they are either so trivially simple it's hardly worth the effort, or so verbose, cumbersome, and abstracted from trying to be all things to all people that they are a real pain to work with. (And for the avoidance of doubt, I don't mean things like Tailwind et al. here.)
I'm not sure who keeps dreaming these things up with this "component reuse" mentality but I assume they are in "enterprise" realms where looking busy and selling consulting is more important than delivering working software that just uses JSON :)
I use it to store complex 3D objects. It works surprisingly well.
Use the correct tool for the job. If that tool is XML, then I use it instead of $ShinyThing.
They had like 50 different configurators built at different times using different tech, etc. (my memory is a bit fuzzy here as to how many they had, but it was a lot). So of course they wanted to make a solution for putting their codebase together and also make it easy to make new configurators.
So they built a React application to take a configurator input format that would tell you how to build this application and what components to render and blah blah blah etc.
Cool. But the configurator format was in JSON so they needed to make an editor for their configurator format.
They didn't have a schema or anything like that; they made up the format as they went along, and they designed the application as they went along by themselves. So: an application designed by programmers, with all the wonder that description entails.
That application at the end was just a glorified tree editor that looked like crap and of course had all sorts of functionality, behavior, and design mixed in with its need to check constraints for outputting a particular JSON structure at a particular point. Also programmed in React.
There were about 10 programmers, including several consultants, who had worked on this for over a year when I came along, and they were also shitting bricks because they had only managed to port over 3 configurators. Every time they ported a new one they needed to add new functionality to the editor and the configurator compiler, and there was talk of redesigning the whole configurator editor because it sucked to use.
Obviously the editor part should have been done in XML. Then people could have edited the XML by learning to use XML spy, they could have described their language in XML schema real easy, and so forth.
But no they built everything in React.
The crowning hilarity: this application would at most ever be used by about 20 people in the world, and probably not more than 10.
I felt obligated by professional pride (and also by the fact that I could see no way could this project keep being funded indefinitely so it was to my benefit to make things work) to explain how XML would be a great improvement over this state of affairs but they wouldn't hear of it.
About 3 months in, it was announced the project would be shut down within the next year. All that work wasted on an editor that could probably have been built by one expert in a month.
In hindsight, it is hard to imagine a JSON-based RSS-style standard struggling to catch on. The first project every aspiring JS developer would do is adding a feed to their website.
Funny how we went from "use it for everything" (no matter how suitable) to "don't use it for anything new" in just under two decades.
To me XML as a configuration file format never made sense. As a data exchange format it has always been contrived.
For documents, together with XSLT (using the excellent XPath) and the well-thought-out schema language RelaxNG, it is still hard to beat in my opinion.
Example formats that should never be JSON: TEI (https://tei-c.org/), EAD (https://www.loc.gov/ead/), and DocBook (https://docbook.org/) are three obvious ones.
Basically, anything that needs to combine structured and unstructured data and switch between the two at different parts of the tree is probably better represented as XML.
Earlier I had only seen the mix of values in the body and values in tags, with one even being a tag called "value".
Thanks for showing more examples of XML being used to write unreadable messes.
You can get XML and convert it to everything. I use it to model 3D objects for example, and the model allows for some neat programming tricks while being efficient and more importantly, human readable.
Except for being small, JSON is the worst of both worlds. A hacky K/V store, at best.
<MyRoot>
  <AnElement>
    <Item></Item>
  </AnElement>
</MyRoot>
Serialize that to a JavaScript object, then tell me, is "AnElement" a list or not?
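To make it concrete, here's a minimal TypeScript sketch assuming a naive DOM-to-object converter (the `toObject` helper is hypothetical, not any particular library's API):

```
// Naive XML-to-object conversion: without a schema there is no way to
// know whether <AnElement> is meant to hold a list of <Item>s or a
// single child, so one child and many children produce different shapes.
function toObject(el: Element): unknown {
  const children = Array.from(el.children);
  if (children.length === 0) return el.textContent;
  const out: Record<string, unknown> = {};
  for (const child of children) {
    const value = toObject(child);
    if (child.tagName in out) {
      // Second occurrence of the same tag: silently promote to an array.
      const existing = out[child.tagName];
      out[child.tagName] = Array.isArray(existing)
        ? [...existing, value]
        : [existing, value];
    } else {
      out[child.tagName] = value;
    }
  }
  return out;
}

const doc = new DOMParser().parseFromString(
  "<MyRoot><AnElement><Item></Item></AnElement></MyRoot>",
  "application/xml",
);
// { AnElement: { Item: "" } } -- Item looks like a scalar here, but with
// two <Item>s it would become an array. Consumers have to guess.
console.log(toObject(doc.documentElement));
```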
That's one of the reasons why XML is completely useless on the web. The web is full of XML that doesn't have a schema because writing one is a miserable experience.
Consider the following example:
<MyRoot>
  <AnElement type="list" items="1">
    <Item>Hello, World!</Item>
  </AnElement>
</MyRoot>
Most parsers have type-aware parsing, so that if somebody tucks a string into a place where you expect an integer, you can get an error, nil, or "0" depending on your choice.

Efficient is also... questionable. It requires full Turing-machine power to even validate, IIRC (it surely does to fully parse). By which metric is XML efficient?
For hands-on experience, I used rapidxml for parsing said 3D object files. A 116K XML file is parsed instantly (the rapidxml library's aim is to have speed parity with strlen() on the same file, and they deliver).
Converting the same XML to my own memory model took less than 1ms including creation of classes and interlinking them.
This was on 2010s-era hardware (a 3rd-generation i7-3770K, to be precise).
Verifying the same file against an XSD would add some milliseconds, not more. Considering the core of the problem might take hours on end, torturing memory and CPU, a single 20ms overhead is basically free.
I believe JSON's and XML's readability is directly correlated with how the file is designed and written (incl. terminology and how it's formatted), but to be frank, I have seen both good and bad examples of both.
If you can mentally parse HTML, you can mentally parse XML. I tend to learn to parse any markup and programming language mentally so I can simulate them in my mind, but I might be an outlier.
If you're designing a file format based on either for computers only, approaching Perl-regex levels of unreadability is not hard.
Oops, forgot the link:
That’s always been the main flaw of XML.
There are very few use cases where you wouldn’t be better served by an equivalent, more efficient binary format.
You will need a tool to debug xml anyway as soon as it gets a bit complex.
Most of the time you will actually be debugging what’s inside the file to understand why it caused an issue and find if that comes from the writing or receiving side.
It’s pretty much like with a binary format honestly. XML basically has all the downside of one with none of the upside.
It's also pretty easy to emit "I didn't find what I'm looking for under $ELEMENT" while parsing the file, or "I expected a string but I got $SOMETHING at element $ELEMENT".
Maybe my view is distorted because I worked with XML files for more than a decade, but I never spent more than 30 seconds debugging an XML parsing process.
Also, this was one of the first parts I "sealed" in the said codebase and never touched again, because it worked even when the incoming file was badly formed (by erroring out correctly and cleanly).
How is it a sailed ship?
Lost? The format is literally everywhere and a few more places. Hard to say something lost when it's so deeply embedded all over the place. Sure, most developers today reach for JSON by default, but I don't think that means every other format "lost".
Not sure why there is always such a focus on who is the "winner" and who is the "loser"; things can co-exist just fine.
And I don't care at all about the feelings of AI agents. That a tool that's barely existed for 15 minutes doesn't need a feature is irrelevant when talking about whether or not to continue supporting features that have been around for decades.
Conceptually it was beautiful: We had a set of XSL transforms that could generate RSS, Atom, HTML, and a "cleaned up" XML from the same XML generated by our frontend, or you could turn off the 2-3 lines or so of code used to apply the XSL on the server side and get the raw XML, with the XSLT linked so the browser would apply it.
Every URL became an API.
I still like the idea, but hate the thought of using XSLT to do it. Because of how limited it is, we ended up having e.g. multiple representations of dates in the XML because trying to format dates nicely in XSLT for several different uses was an utter nightmare. This was pervasive - there was no realistic prospect of making the XML independent of formatting considerations.
??? Why do you think this?
No, which means you’ll never see them get the level of polish or investment that closed stuff gets. Because when it’s closed you can make people pay or monetize it with advertising.
I’m not cheering for this. Don’t shoot the messenger. I’m pointing out why things are this way.
A major problem is that while free software efforts can build working software, it often takes orders of magnitude more work to make software mere mortals can use. That kind of UI/UX polish is also the work programmers hate doing, so you have to pay them to do it. Therefore closed stuff always wins on UI/UX. That means it always takes the network effect. UX polish is the moat that free has never been able to cross.
Yes. (Slack. Orion. Since when were servers free?)
The web basically fractures into people who watch ads and complain about paywalls and those who don’t.
One, corporate cash is just as good as people cash. Two, people absolutely paid for WhatsApp before it was acquired. And three, I am a people and I personally pay for Microsoft 365 and on occasion have used Teams.
They definitely weren’t bought by corporations because they care about open standards or great UX.
Slack proves my point. It's closed and vertically integrated and people pay for it. Nobody paid for the open precursors to Slack so they stagnated.
I think you got it clearly reversed in your mind...
Part of the community really hated XHTML and its strictness. I remember Mozilla being at the vanguard then rather than Google.
I think the situation was and is a lot more messy and complicated than what the article presents but presenting it fully would make for a less compelling narrative.
As is I don’t really buy it personally.
One way is to tell everyone to use Firefox (uBlock Origin works there)
It is an issue that the Mozilla Foundation is still ~80% funded by Google though, so this needs to be solved first.
Somehow Firefox needs to be moved away from Mozilla if they cannot find an alternative funding source other than Google.
Developing software is hard - and OSS hasn't found a way to do hard things yet.
If that is the case, we need to come together and donate thousands to Ladybird en masse.
It might take ~30 years for adoption, but it is a start.
Don't you think Google and the other big tech companies already have massive influence in the W3C and web standards?
If a company breaks something so only their path works, it's short-term practicality to use it and long-term practicality to fight for an alternative that keeps control in developers' hands.
Monopolies are terrible for software developers. Quality and customisation tend to go down, which means less value for devs.
Every Chrome installation or related fork, plus Electron shipments, counts.
Please don’t do this.
The good thing is that it makes you strong and resilient to pain over time. It's painfully unreadable. It's verbose (ask ChatGPT to write a simple if statement). Loops? Here's your for-each, and that's all we have. Debugging is for the weak; stdout is your debugger.
It's just shit tech, period. I hope the devs who write soul-harvesting surveillance software at Google go to a hell where they are forced to write endless XSLTs. Maybe that's the reason they want to remove it from Chrome.
I can't imagine wanting to use anything more complex than a for-each loop in XSLT. You can hack your way into doing different loops but that's like trying to implement do/while in Haskell.
Is it that I've grown too comfortable with thinking in terms of functional programming? Because the worst part of XSLT I can think of is the visual noise of closing brackets.
E.g. showing the last element of the list with different styling:

```
<xsl:for-each select="items/item">
  <xsl:choose>
    <xsl:when test="position() = last()">
      <span style="color:red;"><xsl:value-of select="."/></span>
    </xsl:when>
    <xsl:otherwise>
      <span style="color:blue;"><xsl:value-of select="."/></span>
    </xsl:otherwise>
  </xsl:choose>
</xsl:for-each>
```
Or ask chatgpt to count total weight of a shipping based on xml with items that have weights. I did and it's too long to paste here.
> It's not the most beautiful language, I'll give you that, but it's really not as bad as people make it out to be.
TBH I can say that about any language or platform that I ever touched. The ZX Spectrum is not that bad, although it has its limits. That 1960s 29-bit machine is not that bad, it just takes time to get used to. C++ is not that bad for web development, it's totally doable too.
The thing is that some technologies are more suitable for modern tasks than others; you'll just do much, much more (and better) with a JSON model and JS code than with XSLT.
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="../Site.xsl"?>
<?xml-stylesheet type="text/css" href="../RenderFail.css"?>
<RDWAPage
helpurl="http://go.microsoft.com/fwlink/?LinkId=141038"
(…)
So I doubt XSLT is going away any time soon.

So the theory is that Google chose the name of its AI -- easily one of the hardest and most revenue-impacting naming decisions it's made in years -- in order to create a name collision with a protocol nobody's heard of that's trying to revive Gopher?
This is so obviously false that you have to re-read the rest of the article with the knowledge that the author is misunderstanding what they're seeing.
Much of what the author describes is increasing security and not wanting to work with XML.
Yeah, you're right that Google probably didn't look at a list of open web technologies that they disagree with and choose one for their new tool. I guess I'll call that "malicious intention".
I'm sure that, however the name was picked, Google's lawyers looked for prior uses of the name. I'm sure it came up, and Google shrugged its shoulders in indifference. Maybe someone brought up the fact this would hurt some open standard or whatever, but nobody in power cared. Is this the same kind of malicious? Probably not, but it still shows that Google doesn't care about the open web and the collateral damage they cause.
https://books.google.com/ngrams/graph?content=gemini%2Cbard&...
Imo this misses the point a bit. If it is neglected and is going to keep producing bugs, and not many people are developing on it, then maybe it makes sense to kill it.
This also means new browsers won’t have to implement it maybe?
In any case, I do not use Google at all unless forced. My old Gmail address is a "dump": if a site asks for an email, they get that one. I only log into Gmail to delete the "spam" I get.
- Stable Array.sort (2008–2018): of course it doesn't have to be stable, the spec does not dictate it right now, it is good for performance, and some other browser even started to do this like we do: http://crbug.com/v8/90
- Users don't userstyle (2015–): of course we absolutely can and will remove this feature from the core, despite it being mandated by several specifications: https://bugs.chromium.org/p/chromium/issues/detail?id=347016
- The SMIL murder attempt was addressed in the OP article (I think they hold a similar sentiment towards MathML) but luckily it was eventually retracted.

I guess/hope this XSLT affair will have a similar "storm in a teacup" trajectory.
The same can be said about their search engine. This most likely has already altered the outcomes of elections and should have been investigated years ago.
Devasta•3h ago
You can see it clear as day in the GitHub thread: they weren't asking permission, they were doing it no matter what, all their concerns about security just being the pretext.
It would have been more honest of them to just tell everyone to go fuck themselves.
mschuster91•1h ago
That is not a combination of words that should be mentioned in the same sentence as the word XML or, even worse, XSLT.
XML has its value in enterprise and reliable application development because the tooling is very old, very mature and very reliable. But it's not something taught in university any more, it's certainly not taught in "coding bootcamps", simply because it's orders of magnitude more complex than JSON to wrap your head around.
Of course, JSON has JSON Schema, but in practice most real-world usages of JSON just don't give a flying fuck.
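For what it's worth, wiring it up is only a few lines. A minimal TypeScript sketch using the Ajv library (the schema and data here are made up for illustration):

```
import Ajv from "ajv";

const ajv = new Ajv();

// A made-up schema for illustration: an order with a required integer id.
const validate = ajv.compile({
  type: "object",
  properties: {
    id: { type: "integer" },
    note: { type: "string" },
  },
  required: ["id"],
  additionalProperties: false,
});

const order = JSON.parse('{"id": "not-a-number"}');
if (!validate(order)) {
  // e.g. [{ instancePath: "/id", message: "must be integer", ... }]
  console.error(validate.errors);
}
```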
JimDabell•1h ago
XSLT has been around for decades so why are you speaking in hypotheticals, as if it’s an up-and-coming technology that hasn’t been given a fair chance yet? If it hasn’t achieved that by now, it never will.
jeroenhd•1h ago
XSLT is designed to work on XML, while HTML documents are almost always SGML-based. The semantics don't work the same, and applying XML engines to HTML often breaks things in weird and unexpected ways. Basic HTML parsing rules like "a <head> tag doesn't need to be closed and can simply be auto-closed by a <body>" will seriously confuse XML engines. To effectively use XSLT to extract information from the web, you'd first need to turn HTML into XML.
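You can see the mismatch in the browser itself. A quick TypeScript sketch using the standard DOMParser API (the markup snippet is a made-up example):

```
const snippet = "<head><title>t</title><body><p>unclosed paragraph";

// The HTML parser applies HTML's recovery rules: <head> is auto-closed
// by <body>, and the dangling <p> is closed at end of input.
const asHtml = new DOMParser().parseFromString(snippet, "text/html");
console.log(asHtml.body.innerHTML); // "<p>unclosed paragraph</p>"

// The XML parser has no such rules: the same bytes are simply not
// well-formed XML, so you get a <parsererror> document instead.
const asXml = new DOMParser().parseFromString(snippet, "application/xml");
console.log(asXml.querySelector("parsererror") !== null); // true
```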
JumpCrisscross•1h ago
Reflexively siding with the tech majors is about as dogmatic as reflexively siding against them.
JimDabell•1h ago
It seems entirely reasonable to be concerned about XSLT’s effects on security:
> Although XSLT in web browsers has been a known attack surface for some time, there are still plenty of bugs to be found in it, when viewing it through the lens of modern vulnerability discovery techniques. In this presentation, we will talk about how we found multiple vulnerabilities in XSLT implementations across all major web browsers. We will showcase vulnerabilities that remained undiscovered for 20+ years, difficult to fix bug classes with many variants as well as instances of less well-known bug classes that break memory safety in unexpected ways. We will show a working exploit against at least one web browser using these bugs.
— https://www.offensivecon.org/speakers/2025/ivan-fratric.html
— https://www.youtube.com/watch?v=U1kc7fcF5Ao
youngtaff•1h ago
They also seem to be putting pressure on the library maintainer resulting in them saying they’re not going to embargo security bugs