One could now use that exact sentence to describe the most popular open document format of all: HTML and CSS.
It is complex but not complicated. You can start with just a few small parts and get to a usable and clean document within hours of first contact with the languages. The tags and rules are usually quite self-describing yet concise, and there are tons and tons of good docs and tools. The development of the standards is also open, and you can peek there if you want to understand decisions and rationales.
It's about making software that would display a document in that format correctly.
I.e., a browser.
Wordsmithing your way around this doesn't make them any easier.
Evidence for this is in the very words used: unnecessary, complex, bloated, convoluted. These are very human terms that are thus subject to personal interpretation and opinions.
It shouldn't be surprising then that their "claim" fails scrutiny. All they actually meant to say is that HTML and CSS are both verbose standards with a lot of particularities - still something subjective, but I think page / word / character counts are pretty agreeable attributes for estimating this in an objective way, which is exactly why I brought those up.
Though I agree that the web standards are extremely large. Not sure if they are too large, given their cross-platform, near-OS-layer functionality.
<p>I don't think that's true.<br>
Perhaps you're thinking of xhtml?
Observe the lack of a closing p tag, to say nothing of the multiple self-closing tags in HTML: hr, img, link, meta, ... (https://html.spec.whatwg.org/multipage/grouping-content.html)
A big reason for that is that they were not designed for modern requirements like being used as a general-purpose application UI toolkit.
Especially CSS was designed for printable documents, not modern websites.
And HTML was designed to represent the core semantic structure of a "classical" document (and not a too fancy one either), with minimal formatting (e.g. bold, italic, underline). But even on old websites it was very common for it not to be used like that at all (e.g. think of the old trick of using a table for the whole site to create a header and sidebar, now doable more nicely with HTML5/modern CSS).
So it's kind of a markup and a style language chosen in the very early internet days, only for everyone to realize shortly afterwards that websites were developing in a direction very mismatched with the designs of both languages (but both happen to be squeezable into their new roles, barely).
Kinda funny. But not really the situation behind OOXML.
What isn’t acknowledged is that a lot of that complexity isn’t purely malicious. OOXML had to capture decades of WordPerfect/Office binary formats, include every oddball feature ever shipped, and satisfy both backwards‑compatibility and ISO standardisation. A comprehensive schema will inevitably have “dozens or even hundreds of optional or overloaded elements” and long type hierarchies. That’s one reason why the spec is huge. Likewise, there’s a difference between a complicated but documented standard and a closed format—OOXML is published (you can go and download those 8 000 pages), and the parts of it that matter for basic interoperability are quite small compared with the full kitchen‑sink spec.
That doesn’t mean the criticism is wrong. The sheer size and complexity of OOXML mean that few free‑software developers can afford to implement more than a tiny subset. When the bar is that high, the practical effect is the same as lock‑in. For simple document exchange, OpenDocument is significantly leaner and easier to work with, and interoperability bodies like the EU have been encouraging governments to use it for years. The takeaway for anyone designing document formats today should be the same as the article’s closing line: complexity imprisons people; simplicity and clarity set them free.
Considering how little money most free software makes, they can't afford to do a lot of things. It's not a hard bar to hit.
What surprises me is how well LibreOffice handles various file formats, not just OOXML. In some cases LibreOffice has the absolute best support for abandoned file formats. I'm not the one maintaining them, so it's easy enough for me to say "See, you managed just fine". It must be especially frustrating when you have the OpenDocument format, which does effectively the same thing, only simpler.
The complexity is not artificial, it is completely organic and natural.
It is incidental complexity born of decades of history, backwards compatibility, lip-service to openness, and regulatory compliance checkbox ticking. It wasn't purposefully added, it just happened.
Every large document-based application's file format is like this, no exceptions.
As a random example, Adobe Photoshop PSD files are famously horrific to parse, let alone interpret in any useful way. There are many, many other examples, I don't aim to single out any particular vendor.
All of this boils down to the simple fact that these file formats have no independent existence apart from their editor programs.
They're simply serialised application state, little better than memory-dumps. They encode every single feature the application has, directly. They must! Otherwise the feature states couldn't be saved. It's tautological. If it's in Word, Excel, PowerPoint, or any other Office app somewhere, it has to go into the files too.
There are layers and layers of this history and complex internal state that has to be represented in the file. Everything from compatibility flags, OLE embedding, macros, external data sources, incremental saves, support for quirks of legacy printers that no longer exist, CMYK, document signing, document review notes, and on and on.
No extra complexity had to be added to the OOXML file formats, that's just a reflection of the complexity of Microsoft Office applications.
Simplicity was never engineered into these file formats. If it had been, it would have been a tremendous extra effort for zero gain to Microsoft.
Don't blame Microsoft for this either, because other vendors did the exact same thing, for the exact same pragmatic reasons.
You might not add features, but that is most likely a losing proposition against the competitors that do have features, since normal users generally each want some tiny subset of them: images, tables, internal links, comments, versions.
It's also not sufficient to find that "perfect" lean and mean application that happens to cover precisely the 10% that you need for yourself, because now you can't interchange content with other people that need different features!
I regularly open and edit Office documents created by others that utilise features I had never even heard of. I didn't know until very recently that PowerPoint has extensive animation support, or that Excel embeds Python, or that both it and Power BI can reach out to OData API endpoints to refresh data tables or even ingest Parquet directly.
You might not need that, but the guy that prepared the report for you needed it.
What do they expect people to do, remove features in order to support other formats? Users won't like that.
If you're working with an XML schema that is served up in XSD format, using code gen is the best (only) path. I understand it's old and confusing to the new generation, but if you just do it the boomer way you can have the whole job done in like 15 minutes. Hand-coding to an XML interface would be like cutting a board with an unplugged circular saw.
One example I work with sometimes is almost 1MB of XSDs, and that's a rather small internal data tool. They even have a RESTful JSON variant, but it's not used much, and the complexity is roughly the same (you escape namespace hell, escaping XML chars, etc., but then the tooling around JSON is a bit less evolved). An XML-to-object mapping tool is a must.
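For illustration, here is a minimal sketch of that schema-driven approach in Python, using the third-party xmlschema package (the file names and element names here are made up); dedicated codegen tools like JAXB's xjc or xsdata work the same way, except they emit classes you never write by hand:

    # Minimal sketch: validate an XML instance against its XSD and decode it
    # into plain Python dicts instead of walking the tree by hand.
    # Assumes the third-party `xmlschema` package; file and field names are invented.
    import xmlschema

    schema = xmlschema.XMLSchema("invoice.xsd")        # parse the schema once

    if schema.is_valid("invoice.xml"):                 # structural validation for free
        data = schema.to_dict("invoice.xml")           # schema-aware XML -> dict mapping
        print(data["Header"]["InvoiceNumber"])         # navigate like any Python object
    else:
        for error in schema.iter_errors("invoice.xml"):
            print(error)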
https://news.ycombinator.com/item?id=44613270
If you have this much complexity and there is nothing you can do to reduce it, then the next best thing is to have an incredibly convenient way to stand up a perfect client on the other side of the fence within a single business day.
I do also think that Office should have created separate formats for project files and export files; if an RTF can hold all the formatting details of a typical Word document, enough to render it pixel-accurately, for example, then they should have conveyed that better and promoted it as the default export format (along with the idea of an export format), rather than immediately hitting people with a popup that claims their data will be partially lost. If this does exist (just not as an RTF), this point still stands: I don't use it, nobody I know uses it, so it may as well not exist.
Current state of affairs is people passing around docx, xlsx, etc. files, which are project files, hence why they (have to) contain (fancifully) serialized application state. Imagine if people passed around PSDs rather than PNGs. Or if people passed around FLPs rather than WAVs, FLACs or MP3s. It's this separation between the features of a document / spreadsheet / presentation and the features of the authoring software that appears to be completely absent from Microsoft Office, and this is something that just based on the information I have available, MS can legitimately be faulted for. Transitioning from a bespoke binary format to an XML based format with schemas available did basically nothing to help this.
And while it might seem like I'm suggesting that export formats are these cleanly definable, self-evident things, I don't actually mean to suggest that either. It would have had to be a business decision. Where to draw the line is a decision that apparently never came up for internal debate, at least as far as anyone can tell in retrospect, from the outside.
and most of the time they do not use their open standard, but the other document type.
The artificial vendor lockin is real.
We do this about once a quarter in the banking industry. It takes about an hour on average.
Having a debate about the quality of OOXML feels like a waste of time, though. This was all debated in public when Microsoft was making its proprietary products into national standards, and nobody on Microsoft's side debated the formats on the merits because there obviously weren't any, except a dubious backwards-compatibility promise that was already being broken because MS Office couldn't even render OOXML properly. People trying to open old MS Office documents were advised to try OpenOffice.
They instead did the wise thing and just named themselves after their enemy ("Open Office? Well we have Office Open!"), offered massive discounts and giveaways to budget-strapped European countries for support, and directly suborned individual politicians.
Which means to me that it's potentially a winnable battle at some point in the future, but I don't know why now would be a better outcome than then. Maybe if you could trick MS into fighting with Google about it. Or just maybe, this latest media push is some submarine attempt by Google to start a new fight about file formats?
What you want is a compiler (e.g., into a different document format) or an interpreter (e.g., for running a search or a spell checker).
That's a task that's massively complicated because you cannot give an LLM the semantic definition of the XML and your target (both are typically underdocumented and underspecified). Without that information, the LLM would almost certainly generate an incomplete or broken implementation.
I feel qualified to opine on this as both a former power user of Word and someone building a word processor for lawyers from scratch[1]. I've spent hours poring over both the .doc and OOXML specs and implementing them. There's a pretty obvious journey visible in those specs from 1984, when computers were underpowered with RAM rounding to zero, through the 00's when XML was the hot idea, to today when MSFT wants everyone on the cloud for life. Unlike say an IDE or generic text editor where developers are excited to work on and dogfood the product via self-hosting, word processors are kind of boring and require separate testing/QA.
It's not "artificial", it's just complex.
MSFT has the deep pockets to fund that development and testing/QA. LibreOffice doesn't.
The business model is just screaming that GPL'd LibreOffice is toast.
[1] Plug: https://tritium.legal
As for complexity, an illustration-- while using M365 I recently was confounded by a stretch of text that had background highlighting that was neither highlight markup, nor paragraph or style formatting. An AI turned me onto an obscure dialog for background shading at the text level, which explained the mystery. I've been a sophisticated user of M365 for decades and never encountered such a thing, nor have a clear idea of why anyone would use text-level background formatting in preference to the more obvious choices. Yet, there it is. With that kind of complexity and obscurity in the actual product, it's inevitable the file format would be convoluted and complex.
or MS might find itself accidentally toasting themselves
a lot of places (including very important MS Office customers) insist on an open document format for various reasons
if MS convinces people that LibreOffice and similar are toast because they can't afford to keep up with the format in question because it's too expensive, they might also end up convincing these customers that it's too expensive for _them_ too, and that they should find a way to switch away from MS Office
I think this needs to end and it is up to ordinary people to seek alternatives.
Apart from LibreOffice, we still have many other alternatives.
Instead of perfect looks, we should focus on the content. Formats like Markdown are nice, because they force you to do this. The old way made sense 30 years ago when information was consumed on paper.
"Interoperability" is something technical enthusiasts talk about and not something that users creating documents care about outside of the people they share with seeing exactly what was created.
In other words, fidelity to the printed page isn't really as important or as magical today as it was in 1984.
Death to user friendliness! Advanced users only! /s
I was 99% bound to an editor and a terminal, and comfortable there. In no world would Excel or Word fit into my workflow.
I still do, but today nobody is expecting me to use specific software anymore so it feels less rebellious
For most documents nowadays it makes no sense to see them as a representation of physical paper. And the Word paradigm of representing a document as if it were a piece of paper is obsolete in many areas where it is still being used.
Ironically Atlassian, with Confluence, is a large force pushing companies away from documents as a representation of paper.
For most content this is not the case. It is also obviously easier to go from a page agnostic format to a format which relies intrinsically on page size.
Of course, if we stop really caring what things look like we could save a lot of energy and time. Just go back to pure HTML without any JavaScript or CSS...
Having written many papers, reports and my entire Ph.D. thesis in LaTeX, and also moved between LaTeX classes/templates when changing journals... I'm inclined to agree to an extent. I think every layout system has a final hand-tweaking component (like inline HTML in markdown, for example), but LaTeX has a very steep learning curve once you go beyond the basic series of plots and paragraphs. There are so many tricks and hacks for padding and shifting and adjusting your layout, and some of them are "right" and others are "wrong" for really quite esoteric reasons (like which abstraction layer they work at, or some priority logic).
Of course in the end it's extremely powerful and still my favourite markup language when I need something more powerful than markdown (although reStructuredText is not so bad either). But it's really for professionals with the time to learn a layout system.
Then again there are other advantages to writing out the layout, when it comes to archiving and accessibility, due to the structured information contained in the markup beyond what is rendered. arXiv makes a point about this and forces you to submit LaTeX without rendering the PDF, so that they can really preserve it.
Yet simple Markdown documents automatically converted into pdf by pandoc look ten times better than most MS Office documents I've had to deal with over the past couple of decades. Most MS Office users have very little knowledge of its capabilities and do things like adjusting text blocks with spaces, manually number figures (which results in broken references that lead to the wrong figure — or nowhere), manually apply styles to text instead of using style presets (resulting in similar things being differently styled), etc.
In my experience you do that more in Word than in LaTeX (the "I added some paragraphs here and wtf is that picture two pages later doing now" problem).
The issue is to some degree quite fundamental to the underlying challenge of laying out formatted text with embedded things, affecting both Word and LaTeX.
Though that is assuming you know how to properly use Word / LaTeX; if you don't, you can cause yourself a huge amount of work ;)
At work I use our ChatGPT page to generate an HTML+CSS skeleton of what I want and tweak that. It's quicker for me than doing the equivalent in word, and easier to manipulate later. Most of the time I don't need anyone else editing my docs, so it works out.
You want to be able to do everything just right for the looks. Because there always will be someone negotiating down because your PDF report does not look right and they know a competitor who „does this heading exactly right”.
In theory if you have garbled content that is not acceptable of course, but small deviations should be tolerated.
Unfortunately we have all kinds of power games where you want exact looks. You don’t always have option to walk away from asshole customers nitpicking on BS issues.
Neither of the two conditions is reality, of course.
So yeah, that’s not happening.
exactly, and that's great.
do you really think surgeons/nurses waste time getting intimate knowledge of all the machines they use?
Do you really think bus drivers have intimate knowledge of the engines of their buses, transmission lines etc?
come on, be reasonable.
Similarly, if some random non-technical person just needs to write, I don't know, 5 paragraphs of text with a headline, no high requirements for formatting, no templates, nothing fancy, why would you force them to give up WYSIWYG if that is the perfect fit for their use case in every single aspect?
Similarly, markdown doesn't scale to a lot of writing requirements, like _AT ALL_. I know because I wrote pretty much every thesis and larger report during my studies in markdown. And I had to step out of markdown all the time. Whether that was by using inline LaTeX, or by tweaking the markdown-to-PDF conversion templates (by interleaving non-markdown sources with markdown, splitting the markdown into many different files and folders which each could be markdown or anything else, and/or using inline LaTeX to include non-markdown sources imported into LaTeX), etc. It was a nice pipeline, but it also wasn't really markdown anymore, instead some amalgamation of markdown, LaTeX and other things. As a programmer that was fine, but that doesn't scale at all to the "standard" user of office applications.
Many documents are created for looks rather than content.
Even TypeScript encourages artificial complexity of interfaces and creates lock-in; that's why Microsoft loves it. That's why they made it Turing complete and why they don't want TypeScript to be made backwards compatible with JavaScript via the type annotations ECMAScript proposal. They want complex interfaces and they want all these complex interfaces to be locked into their tsc compiler, which they control.
They love it when junior devs use obscure 'cutting edge' or 'enterprise grade' features of their APIs and disregard the benefits of simplicity and backwards compatibility.
And I bet they didn't switch to XML because it was superior to their old file formats, but simply because of the unbelievable XML hype that existed for a short time in the late 1990s and early 2000s.
OOXML was, if anything, an attempt to get ahead of requirements to have a documented interoperable format. I believe it was a consequence of legal settlements with the US or EU but am too tired at the moment to look up sources proving that.
depends
you can have well-designed, clean and fully documented binary formats which are relatively easy to parse (e.g. msgpack, cbor, bson)
you might still not know what the parsed things mean, but that also applies to text formats (including random documented binary blob fields; thanks to base64 they fit into any text format)
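As a concrete illustration (a sketch assuming the third-party msgpack package for Python; the record fields are made up), round-tripping data through such a documented binary format is only a couple of lines; the hard part, as noted, is knowing what the fields mean:

    # Sketch: a clean, documented binary format (MessagePack) is trivial to
    # serialize and parse; understanding the decoded fields is the real problem.
    # Requires the third-party `msgpack` package; field names are invented.
    import msgpack

    record = {"doc_id": 42, "flags": 0b1010, "payload": b"\x00\x01\x02"}

    packed = msgpack.packb(record)                  # bytes on disk / on the wire
    decoded = msgpack.unpackb(packed, raw=False)    # back to a Python dict

    assert decoded == record
    print(decoded)  # parsing is easy -- but what does "flags" actually mean?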
Being able to layer markup with text before, inside elements, and after is especially important --- as anyone with HTML knowledge should know. Being able to namespace things so, you know, that OLE widget you pulled into your documents continues to work? Even more important. And that third-party compiled plugin your company uses for some obscure thing? Guess what. Its metadata gets correctly embedded and saved also, and in a way that is forward and backward compatible with tooling that does not have said plugin installed.
So no, it wasn't 'hype'.
There was also huge hype. XML databases, anyone? XML is now an also-ran next to json, yaml, markdown. At the time, it was XML all the things!
The OOXML format is likely a not very deeply thought out XML serialization of the in memory structure or of the old binary format, done under time pressure (there was legal pressure on Microsoft at the time).
It somewhat looks like that, but that old binary format changed with every nth yearly major new version, and IMHO it doesn't look far off from a lightly serialized dump of their internal app data structures ;)
but
putting aside that they initially managed to implement their own OOXML standard incorrectly, and the mess that "accident" caused,
they also supported import and even export (with limited features) of the OpenDocument format before even fully supporting OOXML, and even used it as the standard save option... (when editing such a document).
Like, there really was no technical reason why they couldn't just have adopted the OpenDocument format, maybe at worst with some "custom" (but open, and "standardized" by MS itself) extensions to it.
MS at the time had every incentive to comply in as bad faith as they could get away with,
and what we saw at that time looked like exactly that,
sure hidden behind "accidents" and incompetence
but let's be honest: if a company has every interest and incentive to do something in bad faith and make it fail absurdly, and then exactly that happens, it's very naive to assume that it was actually accidental. Most likely it wasn't.
That doesn't mean any programmer sat down and intentionally thought about how to make it extra complicated; there is no need for that, and it would just be a liability. Instead you make bad management decisions: resource-starve the team responsible (especially keep your best seniors away), give them messed-up deadlines, give them messed-up requirements you know can't work out, mess up communication channels, only give them bad tooling for the task, etc. The funniest thing is that, due to how messy software production often is, the engineers involved might not even notice ;) which means no liability on that side.
This is just not right.
They were not required to (AFAIK), and in some edge cases also didn't provide a perfect conversion of all old documents to the open format. Actually, even just converting between different versions of their proprietary formats had a tendency to break things sometimes! (back then)
> unbelievable XML hype that existed for a short time in the late 1990s and early 2000s.
(EDIT: actually 2006, so, uh, maybe XML hype.) We're speaking about ~2010; the hype was pretty dead again by that time, and the main reason they chose it was to position it as "competition" to the emerging standardized open office document formats, which all used XML as their markup language (except OOXML doesn't really use XML as a markup language, but more like a serialization - similar in spirit to JSON, only far more complex). But that doesn't matter: they mostly needed to convince not-especially-tech-affine people that they were "no longer trying to hamper competition", to preclude legislative action and governments switching to other office suites out of worry about the closed format.
so they were more than able to
- do a clean design; if a lot of old "proprietary" documents break subtly when converting anyway, it doesn't matter (and they did break)
- just adopt the OpenDocument format
A lot of people are locked in because those import/export features are typically imperfect (or perhaps the documents themselves are) and will badly and often "invisibly" (to the non-Office user) break something.
But honestly these days, the only time I use Word is to keep my resume up to date once per quarter. That’s a really simple document.
It's called antitrust.
What exactly have they done about it?
According to who? With what proof? And how/why do they get to be the arbiters of that?
They normally get asked to investigate by other interested parties, and then ask other independent experts in the field.
> And how/why do they get to be the arbiters of that?
By being the government?
Microsoft doesn't have to sell their software in Europe if they don't like the rules.
I don't remember every element enough to render from memory, but ChatGPT's example feels about right:
OpenDocument
    <text:p text:style-name="Para">
      This is some
      <text:span text:style-name="Bold">bold text</text:span>
      in a paragraph.
    </text:p>
OOXML
    <w:p>
      <w:pPr>
        <w:pStyle w:val="Para"/>
      </w:pPr>
      <w:r>
        <w:t>This is some </w:t>
      </w:r>
      <w:r>
        <w:rPr>
          <w:b/>
        </w:rPr>
        <w:t>bold text</w:t>
      </w:r>
      <w:r>
        <w:t> in a paragraph.</w:t>
      </w:r>
    </w:p>
OpenDocument is not always 100% "simple," but it's logical and direct. Comprehensible on sight. OOXML is...something else entirely. Keep in mind the above are the simplest possible examples, not including named styles, footnotes, comments, change markup, and 247 other features commonly seen in commercial documents. The OpenDocument advantage increases at scale. In every way except breadth of adoption.
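To make the difference tangible, here's a small Python sketch (standard library only; the fragment just mirrors the OOXML example above) that pulls the visible text and bold flags out of a single paragraph - even this toy case already needs explicit namespace handling and run-by-run iteration:

    # Sketch: extract the text runs and their bold flag from one OOXML <w:p>.
    # Standard library only; the fragment mirrors the example above.
    import xml.etree.ElementTree as ET

    W = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"
    NS = {"w": W}

    fragment = f"""
    <w:p xmlns:w="{W}">
      <w:pPr><w:pStyle w:val="Para"/></w:pPr>
      <w:r><w:t>This is some </w:t></w:r>
      <w:r><w:rPr><w:b/></w:rPr><w:t>bold text</w:t></w:r>
      <w:r><w:t> in a paragraph.</w:t></w:r>
    </w:p>
    """

    para = ET.fromstring(fragment)
    for run in para.findall("w:r", NS):               # each run carries its own formatting
        bold = run.find("w:rPr/w:b", NS) is not None  # bold lives in the run properties
        text = "".join(t.text or "" for t in run.findall("w:t", NS))
        print("bold " if bold else "plain", repr(text))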
Respect to MS for keeping the lights on.
People need to understand that there is no MS format per se, but different standards from which you can choose. Years ago, when OpenDocument was fairly popular, MS was kind of hesitant to use an XML format. XML is a strict format, no matter the syntax.
And I bet that MS intended such a complicated format to prevent open source projects from developing parsers and to keep MS from losing market share that way. I bet there are discussions of such a strategy from the time, buried in Archive.org.
On the other hand, MS neither wanted nor foresaw the XML chaos which would follow later on. XML is a format, and all it demands is being formally correct. It is like assembler: fixed instruction sets with lots of freedom, and only the computer needs to "understand" the code - if it runs, ship it.
ZEN of whatever cannot be enforced. JavaScript was once the Web's assembly language. Everything was possible, but you had to do the gruntwork and encapsulate every higher-level function in a module that consisted of hundreds of LoC - doing in hundreds of lines what a simple instruction in Python could achieve in one.
Babel came, TypeScript, and today I lost track of all the changes and features of the language and its dialects. The same goes for PHP, Java, C++, and even Python. So many features that were hyped, and you must learn this crap nevertheless, because it is valid code.
Humans cannot stand a steady state. The more you add to something, the more active and valuable it seems. I hate feature creep — kudos to all the compiler devs, who deserve credit for keeping the lights on.
It wouldn’t surprise me at all if it simply was “the XML schema mostly follows how our implementation represents this kind of stuff”.
The source code of MS Word almost certainly has lots of now weird-looking design choices based on having to run in constrained memory. It also has dark corners for “we released a version that did this slightly different, so we have to keep supporting it”
That’s exactly what it was. They originally had a binary representation (.doc) which was pretty much just a straight-up dump of their internal data structures to disk. When they felt forced to make an “open” “xml-based” format, they basically converted their binary serialization to XML without changing what it represented at all. It was basically malicious compliance.
(i.e. they worked around the "XML is a strict format" part ;) )
Or at least it was that way back then, when OOXML was new and the whole scandal about MS "happening" to not correctly implement their own standard was still news (so like 10+ years ago).
Which they then carried over into OOXML.
Just to be clear, MS has, both back then and recently again, repeatedly shown very clearly that the whole embrace-extend-extinguish thing is at the core of their actions for most things open or standardized (1). And what better way to "extinguish" open text standards than by making one themselves which is built in a way guaranteed not to work well, i.e. to fail, for anyone (or most anyone) but first-party MS products, and then using that to push the propaganda/FUD that open text standards just can't be good.
So I'm very sure that having an obscure, hyper-complex OOXML "open standard" where actually implementing it in a standard-compliant way is far from sufficient for correctly displayed/interpreted documents is a very intentional thing.
But if you already have a mess internally, it is a very good move to just use/expand on that, because it does give you an excuse for why things ended up how they are and saves implementation time.
----
(1): disclaimer: In between there were a few years where they acted quite friendly; specific devs at MS still love open source in an honest way; in some areas open source has also won; and in some places it's just a very bad time for "extend and extinguish", so it's not (yet) done; and sometimes it's done very slowly and creepingly. So yes, you will find good MS open source projects and contributions. But it's still pretty much everywhere, no matter in which direction you look, as long as you look closely enough.
Like, XML is a markup language, so it _should_ interleave quite "naturally" and well with text formatting tasks (i.e. see the OpenDocument example, or super simple "ancient style" HTML),
but OOXML looks more like someone force-serialized a live OOP object hierarchy with (potentially cyclic) references and a ton of subclasses, etc.
tl;dr: it looks a lot like a simplified form of how text editors internally represent formatted text.
Like, w:r looks like a text section, you could say a r_un of characters or words; w:p looks like a subclass of an implicit type which is basically a `Vec<w:r>`; w:pPr looks like a ".presentation" property of w:p, same for w:rPr, probably both being subtypes of some generic Presentation base class; w:t looks like a generic `.text: String` property; w:pStyle looks like a property of Presentation or its ParagraphPresentation subclass, and its `w:val` property makes it look like a shared reference which can be looked up by the key `"Para"`; w:b is just another subclass of Presentation you can use in any context, etc.
Which opens the question:
"do they mostly just dump their internal app state?"
And did they make their format that over-complicated and "over"-flexible so that they can just change their internal structure and still dump it?
Which would also explain how they might have ended up "accidentally" incorrectly implementing their own standard around 10 years ago, during the early OOXML times.
And if so, isn't that basically "proof" that OOXML isn't really an open format but just a make-believe one?
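Purely as a speculative sketch of the kind of in-memory model that naming hints at (none of these class names come from any actual Microsoft source; they just mirror the guess above), a force-serialized editor state could look roughly like this:

    # Speculative sketch only: what an editor-internal object model might look
    # like if w:p / w:r / w:pPr / w:rPr were near-direct dumps of such classes.
    # None of these names come from Microsoft code; they mirror the guess above.
    from dataclasses import dataclass, field

    @dataclass
    class RunProperties:               # <w:rPr> -- per-run formatting state
        bold: bool = False             # <w:b/>

    @dataclass
    class ParagraphProperties:         # <w:pPr> -- per-paragraph formatting state
        style_ref: str | None = None   # <w:pStyle w:val="Para"/> -> looked up by key

    @dataclass
    class Run:                         # <w:r> -- a run of identically formatted text
        text: str                      # <w:t>
        props: RunProperties = field(default_factory=RunProperties)

    @dataclass
    class Paragraph:                   # <w:p> -- basically a Vec<Run> plus properties
        runs: list[Run] = field(default_factory=list)
        props: ParagraphProperties = field(default_factory=ParagraphProperties)

    para = Paragraph(
        runs=[
            Run("This is some "),
            Run("bold text", RunProperties(bold=True)),
            Run(" in a paragraph."),
        ],
        props=ParagraphProperties(style_ref="Para"),
    )
    print(para)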
So I guess they're going back to that old strategy...
Edit: Source might have been this: https://news.ycombinator.com/item?id=39402595 , so part of it might have been an urban myth.
It only exists because Microsoft was desperate to avoid antitrust consequences for the dominance of Office 25 years ago.
Back in the early 2000's I wrote readers and writers for it and made pretty heavy use of the format at my job at the time.
The biggest problem with SpreadsheetML was that it expected the extension to be .XML - Microsoft had some sort of magic that would still associate the files with Excel on Windows but it wasn't super reliable. We started using .xls but after an update Excel started barking about files with the wrong extension.
There are at least two ways to get from such an XML document to a PDF; we used pdfLaTeX, modified to handle our extra constructs, and then XeLaTeX.
I won't say it was a simple toolpath, but it allowed us to do at least two things that would have been difficult with Word or OpenOffice:
(1) It gave us an archival XML format, which will probably be readable and understandable for centuries. For grammars of endangered languages, that's important, because the languages won't be around more than a couple decades.
(2) It gave us the ability to cleanly typeset documents that had multiple scripts (including both Roman and various right-to-left scripts, like Arabic and Thaana).
ranger_danger•6mo ago
https://news.ycombinator.com/item?id=44606646
But if you dig hard enough, there are actually links to more evidence of why it is that complicated... so I don't think it was necessarily intentionally done as a method of lock-in, but where's the outrage in that? /s
"Complicated file format has legitimate reasons for being complicated" just doesn't have the same ring to it as a sensationalized accusation with no proof.
mjevans•6mo ago
'special case everything we ever used to do in office so everything renders exactly the same'
Instead of offering some suitable placebo for properly rendering into a new format ONCE with those specific quirks fixed in place?
lozenge•6mo ago
"You have opened your Word 97 document in Office 2003. The quirks have been removed, so it might look different now. Check every page before saving as docx."
"You have pasted from a Word 97 document into an Office 2003 OOXML document. Some things will not work."
ranger_danger•6mo ago
And for the pedantic, yes it warns you when saving as a .docx that "not all features are supported", but it does that every time, for every document, so nobody pays attention to it or has any idea what it even means. To me the way it handles this is just completely unacceptable.
mjevans•6mo ago
In an ideal world a converter would generate, e.g., a 1200 PPI render of each page, then compare it to a similar render of the nearest rendition achievable in the allowed, simpler new format. Those could be diffed to produce a highlight of the areas that changed.
The software could then ask if the transcription from one format to the other was close enough, or if there were some corner case that wasn't good enough.
Bonus points, collect feedback if the end user is willing to submit examples.
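As a rough sketch of that idea in Python (assuming the two renders already exist as same-sized images, and using the third-party Pillow package; the file names are made up), highlighting the changed region is only a few lines:

    # Sketch: diff two page renders and report the bounding box of any visual
    # change, so a converter could ask "is this close enough?".
    # Assumes Pillow and same-sized renders; file names are invented.
    from PIL import Image, ImageChops

    old_page = Image.open("page1_word97.png").convert("RGB")
    new_page = Image.open("page1_docx.png").convert("RGB")

    diff = ImageChops.difference(old_page, new_page)  # per-pixel absolute difference
    bbox = diff.getbbox()                             # None if the renders are identical

    if bbox is None:
        print("Pages render identically.")
    else:
        print("Rendering differs inside region", bbox, "- flag for user review.")
        diff.crop(bbox).save("page1_diff.png")        # save the changed area as a highlight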
dathinab•6mo ago
having a lot of intent to keep it complicated, cause vendor lock-in, and comply in bad faith
and this being very easy to achieve just by not trying to improve on the status quo and creating a standard where you are the only one who decides what goes in where. Or by other simple things, like intentionally assigning a senior engineer you know tends to painfully overengineer things while keeping them in a working state, etc. Just by management decisions made at a level above the project, you can pretty reliably mess things up in various ways as needed, as long as you have enough people to choose from.
pessimizer•6mo ago
What is it about serializing XML that would optimize the expression of a data model?
constantcrying•6mo ago
Obviously parsing the XML is trivial. What is not trivial is what you do with parsed XML and what the parsed structure represents.
dathinab•6mo ago
everything on top of the XML AST is the issue