This would make creating competition easier and reduce attack surface. As a nice side effect, it would become impossible to use canvas or web audio for fingerprinting.
Firstly, it puts a huge burden of non-value-adding work onto developers and the organisations they work for.
Secondly, it would lead to even more frequent and widespread reinvention of half-arsed ways of doing things that used to be in the box.
Thirdly, it would simply move the attack surface into an emergent library ecosystem without really solving anything.
Fourthly, it would increase website payloads even further. Developers have historically been awful at using bandwidth efficiently (still a concern in many scenarios due to connectivity limitations and costs), and we don’t need to offer more opportunities for them to demonstrate how terrible and undisciplined they are at it.
Fifthly, not everyone wants or needs (or should!) to learn WebAssembly, in the same way that not everyone wants or needs to learn x86/64 assembly, ARM assembly, C or Rust.
Sixthly, it would lead to a huge amount of retooling and rewriting which, yes, to some extent would happen anyway because, apparently, we all love endless churn masquerading as progress, but it would be considerably worse.
The web would become significantly buggier and more unusable as a result of all of the above.
I suppose we can expect support for XML to be dropped soon as well, since libxml2 maintenance is ending this year.
I don't buy the excuse of low number of users. Google's AMP has abysmal usage numbers, yet they're still maintaining that garbage.
Google has been a net negative for the web, and is directly responsible for the shit show it is today. An entirely expected outcome considering it is steered by corporate interests.
I believe they didn’t just because most politicians don’t know anything about software.
Even being aware of the problems that “governmentalization” of open source can bring, it is still something I expect to be picked up by countries.
Part of the reason google chrome won the browser wars is because they are willing to make decisions like this. Kitchen sink software is bad software.
Some people are doing that[1]. It's not a matter of desire, but of the amount of effort and resources required to build and maintain the insanity of the modern web stack.
> Part of the reason google chrome won the browser wars is because they are willing to make decisions like this.
Eh, no. Google Chrome won because it is backed by one of the largest adtech corporations with enough resources and influence to make it happen. They're better at this than Microsoft was with IE, but that's not saying much. When it launched it introduced some interesting and novel features, but it's now nothing but a marketing funnel for Google's services.
Ah yes. That's why Chrome bravely refuses to be a kitchen sink. It only has a small set of available APIs like USB, MIDI, Serial, Sensors (Ambient Light, Gyroscopes etc.), HID, Bluetooth, Barcode detection, Battery Status, Device Memory, Credential Management, three different file APIs, Gamepads, three different background sync APIs, NFC...
There have been other removals, but few of them were of even specified features, and I don’t think any of them have been universally available. One of the closest might be showModalDialog <https://web.archive.org/web/20140401014356/http://dev.opera....>, but I gather mobile browsers never supported it anyway, and it was a really problematic feature from an implementation perspective too. You could argue Mutation Events from ~2011 qualifies¹; it was supplanted by Mutation Observers within two years, yet hung around for over a decade before being removed. As for things like Flash or FTP, those were never part of the web platform. Nor were they ever anything like universal anyway.
And so here they are now planning to remove a well-entrenched (if not especially commonly used) feature against the clearly-expressed will of the actual developers, in a one year time frame.
—⁂—
¹ I choose to disqualify Mutation Events because no one ever finished their implementation: WebKit heritage never did DOMAttrModified, Gecko/Trident heritage never did DOMNodeInsertedIntoDocument or DOMNodeRemovedFromDocument. Flimsy excuse, probably. If you want to count it, perhaps you’ll agree to consider XSLT the first time a major, standard, baseline-available feature will be removed?
<blink> was never universal, contrary to popular impression: <https://en.wikipedia.org/wiki/Blink_element#:~:text=The%20bl...>, it was only ever supported by Netscape/Gecko/Presto, never Trident/WebKit. Part of the joke of Blink is that it never supported <blink>.
> Netscape only agreed to remove the blink tag from their browser if Microsoft agreed to get rid of the marquee tag in theirs during an HTML ERB meeting in February 1996.
Fun times. Both essentially accusing the other of having a dumb tag.
[1] For example: https://www.nagpuruniversity.ac.in/
Indian Rail <https://www.indianrail.gov.in/> has one containing the chart from a mid-2024 train accident, an invitation to contribute a recording of the national anthem from 2021, and a link to parcel booking. Oh, and “NEW!” animated GIFs between the three items.
I think this sets a concerning precedent for future deprecations, where parts of the web platform are rugpulled from developers because it's convenient for the browser vendors.
These aren't horrible formats or standards. XSLT is actually somewhat elegant.
Why? Answer this question: how can you use XML in a way that does not create horrible security vulnerabilities?
I know the answer, but it is extremely nontrivial, and highly dependent on which programming language, library, and sometimes even which library function you use. The fact that there's no easy way to use XML without creating a security footgun is reason enough to avoid it.
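To make that concrete, here's a minimal Python sketch of both the footgun and one common workaround. The `parse_untrusted` helper and its DTD check are illustrative only, a crude mitigation rather than a complete defense:

```python
import xml.etree.ElementTree as ET

# A DTD-declared entity: the construct behind billion-laughs and XXE
# attacks. Python's stdlib parser expands internal entities by default.
MALICIOUS = """<?xml version="1.0"?>
<!DOCTYPE root [<!ENTITY e "boom">]>
<root>&e;</root>"""

def parse_untrusted(text):
    # Crude mitigation (a sketch, not a full defense): refuse any
    # document that declares a DTD at all, since that's where
    # entity-expansion tricks live.
    if "<!DOCTYPE" in text:
        raise ValueError("DTDs are not allowed in untrusted input")
    return ET.fromstring(text)

print(ET.fromstring(MALICIOUS).text)  # -> boom (the entity is expanded)
```

This is exactly the "which library, which function" problem: the safe path differs per language, and libraries like defusedxml exist precisely because the defaults expand entities.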
There's plenty of reasons to criticize XML, and plenty more to criticize XSLT. But security being the one you call out feels at least moderately disingenuous. It's a criticism of the library, not the standard or the format.
XML is so complex that even a 100% bug-free, fully compliant library is inherently insecure, and the vulnerability becomes a "user is holding it wrong" situation: they should have disabled specific XML features, etc. That makes XML an inherently much more insecure format.
If you removed support for anything that has/could have security vulnerabilities you would remove everything.
I feel like there is a bit of a no true scotsman to this.
XSLT was always kind of on the side. If FTP or Flash weren't part of the web platform, then I don't know that XSLT is either. Flash might not be "standard", but it certainly had more users in its heyday than XSLT ever did.
Does removal of TLS 1.1 count here? It's all kind of a matter of definitions.
Personally i always thought the <keygen> tag was really cool.
I’m not a Chrome dev but I think they have decent reasons for going this way.
On the other hand… I’m still a bit uncomfortable with the proposed change because it reads as another example of Google unilaterally dictating the future of the web, which I’ve never liked or supported.
Feeling quite conflicted.
APIs should provide content in the format asked of them. CSS should be used to style that content.
This is largely addressed by RFC 6838, which specifies media types and registration procedures so that different representations of content stay interoperable. https://datatracker.ietf.org/doc/rfc6838/
Already supported by .NET Web APIs, Django, Spring, Node, Laravel, RoR, etc.
Less mature ecosystems like Golang have solutions, they’re just very much patchwork/roll-your-own.
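The core of what those frameworks do is simple content negotiation on the Accept header. A minimal sketch (the `negotiate` helper is hypothetical, and it ignores q-weights, which a real implementation must honor):

```python
def negotiate(accept_header, available):
    """Return the first available media type the client accepts.

    Simplified: walks Accept in order, strips parameters like ';q=0.9',
    and treats '*/*' as 'give me whatever you have first'.
    """
    accepted = [part.split(";")[0].strip()
                for part in accept_header.split(",")]
    for media_type in accepted:
        if media_type in available:
            return media_type
        if media_type in ("*/*", ""):
            return available[0]
    return None

print(negotiate("application/xml, text/html;q=0.9",
                ["text/html", "application/xml"]))  # -> application/xml
```

The same endpoint can then serve XML, JSON, or HTML depending on what the client asked for, which is the point being made above.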
Or even use OpenResty or njs in Nginx, which puts the transformation in the web service layer and not the web application layer. So your data might be JSON blob, it’ll convert to HTML in real-time. Something similar can be achieved elsewhere like Apache using mod_lua etc.
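As a sketch of that Nginx/njs route (the module path, `render.js`, `fromJson`, and `app_backend` are all illustrative names, not real project files):

```nginx
# Sketch only: render.js would export fromJson(r), which subrequests
# the JSON API and writes back HTML.
js_path   /etc/nginx/njs;
js_import render from render.js;

upstream app_backend { server 127.0.0.1:8000; }

server {
    listen 80;
    location /api/  { proxy_pass http://app_backend; }   # raw JSON
    location /page  { js_content render.fromJson; }      # JSON -> HTML
}
```

The transformation lives in the web service layer, so the application only ever emits JSON.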
I think bastardising one format (HTML) to support another format (JSON) is probably not the right move. We’ve already done that with stuff like media queries, which have been abused for fingerprinting, or the :has() CSS selector for shitty layout hacks by devs who refuse to fix the underlying structure.
Typically, these use XSLT on the backend to transform the content to HTML to be sent to the web browser.
And there's RSS which was mentioned in the previous discussions. Podcasts will typically have HTML renderings of that data, but if you opened the RSS in a web browser you could use XSLT to provide a user-friendly view of the content.
XSLT can also be used to provide fallback rendering for unsupported content, such as converting MathML to HTML for browsers without support. Chrome as of 109 supports MathML Core, but doesn't support the content markup (used for more semantic markup of common constructs like N-ary sums, integrals, etc.), so it would still need something like XSLT to convert that markup to the presentation markup it does support.
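The RSS case works by adding an `xml-stylesheet` processing instruction to the feed, which tells the browser to render it through a stylesheet. A minimal sketch (file and element contents are illustrative):

```xml
<!-- feed.xml: the PI tells the browser which XSLT to apply -->
<?xml-stylesheet type="text/xsl" href="feed.xsl"?>
<rss version="2.0">
  <channel>
    <title>Example Podcast</title>
    <item><title>Episode 1</title><link>https://example.com/ep1</link></item>
  </channel>
</rss>

<!-- feed.xsl: renders the channel as a simple HTML page -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/rss/channel">
    <html><body>
      <h1><xsl:value-of select="title"/></h1>
      <ul>
        <xsl:for-each select="item">
          <li><a href="{link}"><xsl:value-of select="title"/></a></li>
        </xsl:for-each>
      </ul>
    </body></html>
  </xsl:template>
</xsl:stylesheet>
```

Subscribers who click the feed URL get a readable page instead of a wall of raw XML, with no server-side rendering involved.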
In 19th century Russia there was a thinker, N. F. Fedorov, who wanted to revive all dead people. He saw it as the ultimate goal of humanity. (He worked in a library, a very telling occupation. He spent most of what he earned to support others.) We do not know how to revive dead people or if we can do that at all; but we certainly can revive old tech or just not let it die.
Of course, this job is not for everyone. We cannot count on the richest, apparently, they're too busy getting richer. This is a job for monks.
The browser vendors are arguing XSLT is neither good (its adoption has always been lacking because of complexity, and it has now become a niche technology because better alternatives exist) nor working (see the mentioned security and maintenance issues). I think they have a good point there.
The question isn't whether or not you use XSLT yourself, it's whether you use a different feature that could be deemed unprofitable and slammed on the chopping block. And therefore a question of whether it wouldn't be better for everyone for this work to be publicly funded instead.
Why would the public sector feel bound to support it as opposed to pivoting in the same direction the winds are blowing?
Outside the idiocy of this particular administration in the US, government is pivoting toward more commercial norms (with compliance regimes for gov cloud and the like).
XSLT lets you build completely static websites without having to use copy paste or a static website generator to handle the common stuff like menus.
http://www.blogabond.com/xsl/vistacular.xml
The upside is that the entire html page is content. I defy google to not figure out what to index here:
view-source:http://www.blogabond.com/xsl/vistacular.xml
The downside is everything else about the experience. Hence my 15 years of not bothering to implement it in a usable way.
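The shared-menu trick described above can be sketched like this (all names are made up): every page is a small XML file pointing at one stylesheet, and the menu lives once in the XSL:

```xml
<!-- page.xml: hand-written content only, no boilerplate -->
<?xml-stylesheet type="text/xsl" href="site.xsl"?>
<page title="About">
  <p>Page content goes here.</p>
</page>

<!-- site.xsl: the nav appears once; every page gets it for free -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/page">
    <html>
      <head><title><xsl:value-of select="@title"/></title></head>
      <body>
        <nav><a href="index.xml">Home</a> | <a href="about.xml">About</a></nav>
        <xsl:copy-of select="node()"/>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```

Change the nav in `site.xsl` and every static page picks it up, which is exactly the copy-paste problem a static site generator otherwise solves.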
XSLT is no more "baggage" than HTML itself. Removing it in no way "moves the web forward". And integrating technologies part of the current hype cycle, which very well may disappear in a year, is a terrible idea.