<table>
<tr> <td> A1 <td> B1 <td> C1
<tr> <td> A2 <td> B2 <td> C2
<tr> <td> A3 <td> B3 <td> C3
</table>
is valid and reads better than if the row and data elements were closed (and put on separate lines, because it would be too much noise otherwise). Of course the whitespace differs, if that matters for some reason. For a 3x3 table that's 5 lines vs ~15.
The problem is when you have long cells that you'd normally word wrap inside the cell: everything else ends up misaligned in your markup. Or when you need to add styling to text in a cell, and suddenly it's unreadable again. Or when there are more than a handful of columns, so each row word wraps inside your IDE, etc.
I think it makes far more sense to just acknowledge that tables are going to be ugly, compose them elsewhere, and then export them to your markup language, following that language's specification strictly.
<p>
text1
<p>
text2
</p>
</p>
edit: Indeed, it creates three: the final </p> seems to create an empty paragraph element. Not the first time I've been surprised by tag soup rules.
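For reference, pasting that snippet into a DOM inspector gives roughly this tree, with the stray </p> becoming the third, empty paragraph:
<p>text1</p>
<p>text2</p>
<p></p>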
It doesn't make the code valid according to the specifications.
So I think your argument here is tough to take at face value. It feels a lot more like you’re arguing personal preference as fact.
Though if a linter is formatting the whole codebase on its own in a homogeneous way, and someone else will deal with the added parsing complexity, that might feel okay-ish to me as well.
Generally speaking, the less clutter the better. A bit like with a js codebase which is semicolon free where possible.
For a pleasant reading and writing experience, HTML in a plain text editor is very low quality. Pug, for example, brings far less clutter, though its mandatory space indentation could have been avoided with some alternative syntactic choices.
Why?
> You may think that's invalid HTML, but browser will parse it and won't indicate any kind of error.
It isn’t an opinion, it literally is invalid HTML.
What you’re responding to is an assumption that I was suggesting browsers couldn’t render that. Which isn’t what I claimed at all. I know full well that browsers will gracefully handle incorrect HTML, but that doesn’t mean that the source is magically compliant with the HTML specification.
I don't know why. Try it out. That's the way browsers are coded.
> It isn’t an opinion, it literally is invalid HTML.
It matters not. You're writing HTML for the browser to consume, not for a validator to accept. And most webpages are invalid HTML. This very HN page contains 412 errors and warnings according to the W3C validator, so the whole point of HTML validity is moot.
I’m not saying you’re wrong, but I’d need more than that to be convinced. Sorry.
> It matters not. You're writing HTML for browser to consume, not for validator to accept.
It matters because you're arguing against a strawman.
We weren’t discussing what a browser can render. We were discussing the source code.
So your comment wasn’t a rebuttal of mine. It was a related tangent or addition.
So basically my point is:
1. You can avoid closing some tags, letting the browser close them for you. It won't do any harm.
2. You can choose to explicitly close all tags. It won't do anything for valid HTML, but a misplaced closing tag will introduce subtle and hard-to-find DOM bugs by adding empty elements.
So you're trying to improve HTML source readability while risking subtle bugs.
If you want to do that, I'd recommend at least adding HTML validation to your build or test pipeline.
Another alternative is to use HTML comments to close tags, since the closing tag is supposed to be documentation-only and, in proper code, won't be used by the browser.
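For instance, something along these lines (just a sketch of the idea):
<p>First paragraph
<!-- </p> -->
<p>Second paragraph
<!-- </p> -->
The reader still gets an explicit marker for where each element ends, but the browser never sees a closing tag that it could turn into a stray empty element.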
You posted a terse comment with some HTML. I responded specifically about that comment and HTML. And you’re now elaborating on things as a rebuttal to my comment despite the fact that wasn’t the original scope of my comment.
Another example of that is how you’ve quoted my reply to the 2 vs 3 elements, and then answered a completely different question (one I didn’t even ask).
I don’t think you’re being intentionally obtuse but it’s still a very disingenuous way to handle a discussion.
The syntax is invalid, but that's because the final </p> has no opening <p> that it can close.
This may have been relevant 9 years ago, but today, just pick an auto-formatter like Prettier and have it close these tags for you.
> XHTML, being based on XML as opposed to SGML, is notorious for being author-unfriendly due to its strictness
This strictness is a moot point. Most editors will autocomplete the closing tag for you, so it's hardly "unfriendly". Besides, if anything, closing tags are reader-friendly (which includes the author), since they make it clear when an element ends. In languages that don't have this, authors often add a comment like `// end of ...` to clarify this. The article author even acknowledges this in some of their examples ("explicit end tags added for clarity").
But there were other potential benefits of XHTML that never came to pass. A strict markup language would make documents easier to parse, and we wouldn't have ended up with the insanity of parsing modern HTML, which became standardized. This, in turn, would have made it easier to expand the language, and integrate different processors into the pipeline. Technologies like XSLT would have been adopted and improved, and perhaps we would have already had proper HTML modules, instead of the half-baked Web Components we have today. All because browser authors were reluctant to force website authors to fix their broken markup. It was a terrible tradeoff, if you ask me.
So, sure, feel free to not close HTML tags if you prefer not to, and to "educate" everyone that they shouldn't either. Just keep it away from any codebases I maintain, thank you very much.
To be fair, I don't mind not closing empty elements, such as `<img>` or `<br>`. But not closing `<p>` or `<div>` is hostile behavior, for no actual gain.
> if the element is one of the void elements, or if the element is a foreign element, then there may be a single U+002F SOLIDUS character (/)
If you're going to be pedantic, at least be correct about it.
[1]: https://html.spec.whatwg.org/multipage/syntax.html#start-tag...
> On void elements, it does not mark the start tag as self-closing but instead is unnecessary and has no effect of any kind. For such void elements, it should be used only with caution — especially since, if directly preceded by an unquoted attribute value, it becomes part of the attribute value rather than being discarded by the parser.
(The void elements are listed here: https://developer.mozilla.org/en-US/docs/Glossary/Void_eleme... )
<br/>
This non-closing talisman means that <div/> or <script/> are not closed, and will mess up the nesting of elements.
I hand write my HTML sometimes, and in those cases it’s often very basic documents consisting of maybe an outer container div, a header and a nav with a ul of li for the navigation items and then an inner container div and maybe an article element, and then the contents are mostly p and figure elements and various level headings.
In this case, there is no mental overhead of omitting closing li and closing p at the end of the line, and I omit them because I am allowed to and it’s still readable and fine.
(But it might be better if you make a habit of doing so.)
Or am I pointing out that closing tags is a human social issue, with aspects ranging from practical & reasonable, to ridiculous & widely exploited?
<nav id=main-nav>
<ul>
<li><a href="/">Home</a>
<li><a href="/hamburgers/">Hamburgers</a>
<li><a href="/sausages/">Sausages</a>
</ul>
</nav>
If you don't close your <p> and <li> tags, you risk accidentally having content in the wrong place.
It's something to avoid because it can have bad consequences, not because it (somehow?) makes you a bad person.
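For example (a contrived sketch, with the canvas standing in for any element that isn't on the short list of tags that auto-close an open <p>):
<p>Read the intro first.
<canvas id="chart"></canvas>
<p>Second paragraph.
The canvas ends up parsed as a child of the first paragraph, which bites you as soon as you style or select paragraphs expecting them to contain only text.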
Laziness doesn't play a role. This isn't XML, where you need to repeat yourself over and over again, nor is it abusing a bug in the rendering logic; it's following the definition of the markup language you're writing content in.
If you're not too familiar with the HTML language then it's always a safe bet to close your tags, of course.
Literally saving four bytes.
Correction: there was also the issue of Ä and Ö. Those were &Auml; and &Ouml; I think.
If that's a comment you get, write better code. It does not matter to me whether closing p-tags is mandatory or optional. If you don't do it, I don't want you working on the same code base as me.
This kind of knowledge makes for fun blog posts, but if you direct these kinds of comments at me, you're obviously just using your knowledge to patronize and lecture people.
In fact, most web browsers now automatically insert these closing tags for the user.
This feature has been around for many years now.
However, I have found that many organizations still require that the closing tags be included explicitly.
I am curious how other organizations determine when to use the "the spec allows it" as a reason to not include the closing tags.
What point do developers cross from merely allowing this to being considered a technical debt?
Have you ever utilized the feature of the specification that caused you a problem later?
Just because it worked on the one browser you tested it on, doesn't mean it's always worked that way, or that it will always work that way in the future...
Every browser treats html/etc differently... I've run into css issues before on Chrome for android, because I was writing using Chrome for desktop as a reference.
You'd think they should be the same because they come from the same heritage, but no...
All browsers have worked this way for decades. It’s standard HTML that has been in widespread use since the beginning of the web. The further back you go, the more normal it was to write HTML in this style. You can see in this specification from 1992 that <p> and <li> don’t have closing tags at all:
https://info.cern.ch/hypertext/WWW/MarkUp/Tags.html
Maybe there were obscure browsers that had bugs relating to this back in the mid 90s, but I don’t recall any from the late 90s onwards. Can you name a browser released this millennium that doesn’t understand optional closing tags?
I learned HTML quite late, when HTML 5 was already all the rage, and I never understood why the more strict rules of XML for HTML never took off. They seem so much saner than whatever soup of special rules and exceptions we currently have. HTML 5 was an opportunity to make a clear cut between legacy HTML and the future of HTML. Even though I don't have to, I strive to adhere to the stricter rules of closing all tags, closing self-closing tags and only using lower-case tag names.
Because browsers close some tags automatically. And if your closing tag ends up in the wrong place, it'll generate an empty element instead of being ignored, without even emitting a warning in the developer console. So by closing tags you're risking introducing very subtle DOM bugs.
If you want to close tags, make sure that your build or test pipeline enforces strict validation of the produced HTML.
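Concretely, the failure mode looks something like this sketch:
<div>
  <p>First
  <div>Box</div>
  </p> <!-- the <p> was already auto-closed by the inner <div>, so this stray </p> creates an empty <p></p> right here -->
</div>
Now querySelectorAll('p') or p:last-child quietly matches an element nobody wrote.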
Internet Explorer failing to support XHTML at all (which also forced everyone to serve XHTML with the HTML media type and avoid incompatible syntaxes like self-closing <script />), Firefox at first failing to support progressive rendering of XHTML, a dearth of tooling to emit well-formed XHTML (remember, those were the days of PHP emitting markup by string concatenation) and the resulting fear of pages entirely failing to render (the so-called Yellow Screen of Death), and a side helping of the WHATWG cartel^W organization declaring XHTML "obsolete". It probably didn't help that XHTML did not offer any new features over tag-soup HTML syntax.
I think most of those are actually no longer relevant, so I still kind of hope that XHTML could have a resurgence, and that the tag-soup syntax could be finally discarded. It's long overdue.
Meanwhile, in any other formal language (including JS and CSS!), the standard assumption is that syntax errors are fatal, the responsibility for fixing lies with the page author, but also that fixing those errors is not a difficult problem.
Why is this a problem for HTML - and only HTML?
We could be more strict for new content, but why bother if you have to include the legacy parser anyway. And the HTML5 algorithm brings us most of the benefits (deterministic parsing) of a stricter syntax while still allowing the looseness.
Try going to any 1998 web page in a modern browser... It's generally so broken so as to be unusable.
As well as every page telling me to install flash, most links are dead, most scripts don't run properly (vbscript!?), tls versions now incompatible, etc.
We shouldn't put much effort into backwards compatibility if it doesn't work in practice. The best bet to open a 1998 web page is to install IE6 in a VM, and everything works wonderfully.
The web owes its success to having low barriers to entry. It very quickly became a mixture of pages hand-coded by people who weren't programmers, content produced by CMS systems which included stuff the content author didn't directly control and weren't necessarily reliable at putting tags in the right place, and third-party widgets activated by pasting in whatever code the third party had given you. And browsers became really good at attempting to render erroneous and ambiguous markup (and, for that matter, were usually out of date or plain bad at rigidly implementing standards).
There was a movement to serve XHTML as XML via the application/xhtml+xml MIME type, but it never took off because browsers didn't do anything with it except load a user-hostile error page if a closing tag was missed (or refuse to load it at all, in the case of IE6 and older browsers). And if you wanted to do clever transformation of your source data, there were ways to achieve that other than formatting the markup sent to the browser as a subset of XML.
Netscape started this. NCSA was in favor of XML style rules over SGML, but Netscape embraced SGML leniency fully and several tools of that era generated web pages that only rendered properly in Netscape. So people voted with their feet and went to the panderers. If I had a dollar for every time someone told me, “well it works in Netscape” I’d be retired by now.
Well, this is not entirely true: XML namespaces enabled attaching arbitrary data to XHTML elements in a much more elegant, orthogonal way than the half-assed solution HTML5 ended up with (the data-* attribute set), and embedding other XML applications like XForms, SVG and MathML (though I am not sure how widely supported this was at the time; some of this was backported into HTML5 anyway, in a way that later led to CVEs). But this is rather niche.
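Roughly the contrast I mean (the app namespace here is made up for illustration; a validator may still grumble, but it's well-formed XML):
<!-- HTML5 -->
<div data-user-id="42">...</div>
<!-- XHTML with a custom namespace -->
<div xmlns:app="http://example.com/ns/app" app:user-id="42">...</div>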
Original SGML was actually closer to markdown. It had various options to shorten and simplify the syntax, making it easy to write and edit by hand, while still having an unambiguous structure.
The verbose and explicit structure of xhtml makes it easier to process by tools, but more tedious for humans.
Especially for casual users of HTML.
And markdown tables are harder to write than HTML tables. However, they are generally easier to read, unless you have multi-line cells.
It’s kind of a huge deal that I can give a Markdown file of plain text content to somebody non-technical and they aren’t overwhelmed by it in raw form.
HTML fails that same test.
The third option, a bare tag, is where the confusion comes from.
Now, we can discuss whether we should optimize for the unfamiliar reader, and whether the illusion of meaning that the trailing slash carries in HTML5 can be harmful.
I would note that exactly like trailing slashes, indentation doesn't mean anything for the parser in C-like languages and can be written misleadingly, yet we do systematically use it, even when no unfamiliar reader is expected.
Now, maybe someone writing almost-XHTML (closing all tags, putting trailing slashes, quoting all the attributes) should go all the way and write actual XHTML with the actual XHTML content type and benefit from the strict parser catching potential errors that can backfire and that nobody would have noticed with the HTML 5 parser.
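For anyone curious, the bar is low: a minimal document along these lines, served with Content-Type: application/xhtml+xml, is enough to opt into the strict parser (the title and body text are placeholders):
<?xml version="1.0" encoding="UTF-8"?>
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Strictly parsed page</title></head>
  <body>
    <p>One unclosed tag here and the browser shows a parse error instead of guessing.</p>
  </body>
</html>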
> On void elements, [the trailing slash] does not mark the start tag as self-closing but instead is unnecessary and has no effect of any kind. For such void elements, it should be used only with caution — especially since, if directly preceded by an unquoted attribute value, it becomes part of the attribute value rather than being discarded by the parser.
It was mainly added to HTML5 to make it easier to convert XHTML pages to HTML5. IMO using the trailing slash in new pages is a mistake. It makes it appear as though the slash is what closes the element when in reality it does nothing and the element is self-closing because it's part of a hardcoded set of void elements. See here for more information: https://github.com/validator/validator/wiki/Markup-%C2%BB-Vo...
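The unquoted-attribute gotcha from that quote looks like this (foo.png is just a placeholder):
<img src=foo.png/>
Here the slash is not discarded; src becomes "foo.png/" and the image request 404s. Both <img src="foo.png"/> and plain <img src=foo.png> behave as expected.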
Besides, at this point technologies like tree-sitter make editor integration a moot point: once tree-sitter knows how to parse it, the editor does too.
A p or li tag, at least when used and nested properly, logically ends where either the next one begins or the enclosing block ends. Closing li also creates the opportunity for nonsensical content inside of a list but not in any list item. Of course all of these corner cases are now well specified because people did close their tags sometimes.
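The corner case I mean, sketched:
<ul>
  <li>First</li>
  this stray text now sits directly inside the list, in no list item
  <li>Second</li>
</ul>
Leave the </li> off and the stray text would simply be swallowed by the first item instead.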
While this is true I’ve never liked it.
<p>blah<p>blah2</p>
Implies a closing </p> in the middle. But
<p>blah<span>blah2</p>
Does not. Obviously, with the knowledge of the difference between what span and p represent, I understand why, but in terms of pure markup it's always left a bad taste in my mouth. I'll always close tags whenever relevant, even if it's not necessary.
In practice, modern HTML splits the difference with rigorous and well-defined, but not necessarily intuitive, semantics.
So we'll add another syntax for browsers to handle.
But.
The future of HTML will forever contain content that was first hand-typed in Notepad++ in 2001 or created in WordPress in 2008. It's the right move for the browser to stay forgiving, even if you have rules in your personal styleguide.
1. The autoclose syntax does not exist in HTML5, and a trailing slash at the end of a start tag is always ignored. It's therefore recommended to avoid this syntax, i.e. write <br> instead of <br />. For details and a list of void elements, see https://developer.mozilla.org/en-US/docs/Glossary/Void_eleme...
2. It's not mandatory to close tags when the parser can tell where they end. E.g. a paragraph cannot contain a block-level element like a div, so <p>a<div>b</div> is the same as <p>a</p><div>b</div>. It depends on the context, but putting an explicit end tag is usually less error-prone.
Some rules of thumb, perhaps:
— Do not omit if it is a template and another piece of HTML is included in or after this tag. (The key fact, as always, is that we all make errors sometimes—and omitting a closing tag can make an otherwise small markup error turn your tree into an unrecognisable mess.)
— Remember, the goal in the first place is readability and improved SNR. Use it only if you already respect legibility in other ways, especially the lower-hanging fruit like consistent use of indentation.
— Do not omit if it takes more than a split second to get it. (Going off the HTML spec, as an example, you could have <a> and <p> as siblings in one container, and in that case if you don't close some <p> it may be non-obvious whether an <a> is phrasing or flow content.)
The last thing you want is to require the reader of your code to be more of an HTML parser than they already have to be.
For me personally this makes omitting closing tags OK only in simpler hand-coded cases with a lot of repetition, like tables, lists, definition lists (often forgotten), and obviously void elements.
<p><p></p>
Should the second <p> be nested or not?
aaaaa<b>aaaa<i>aaaaa</b>aaaa</i>aaaaa
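(If I remember the adoption agency algorithm correctly, the parser rewrites that second snippet into roughly
aaaaa<b>aaaa<i>aaaaa</i></b><i>aaaa</i>aaaaa
i.e. the <i> element is split in two so that the tree nests cleanly.)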
I just tried this!
<table>
<tr><td>aaaaa<td>bbb
<tr><td>ccccccccccc<td>dddddddddd
<tr><td>eeeeeeeeeeeeeeeee
</table>
Which works and is much cleaner than the usual table tag soup. It makes sense as on their own <td> and <tr> tags have no meaning.(This is especially relevant with "void" tags. E.g. if someone wrote "<img> hello </img>" then the "hello" is not contained in the tag. You could use the self closing syntax to make this more obvious -- Edit: That's bad advice, see below.)
e.g. how do you think a browser will interpret this markup?
<div />
<img />
A lot of people think it ends up like this (especially because JSX works this way):
<div></div>
<img>
but it's actually equal to this:
<div>
<img>
</div>
That said, your linter is going to drive you crazy if you don't close tags, no?
Payload size is a moot point given gzip.
https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
But honestly no answer to "what does the browser do with this sort of thing" fits into an HN comment anymore. I'm glad there's a standard, but there's a better branch of the multiverse where the specification of what to do with bad HTML was written from the beginning and is much, much simpler.
I also wish people would stop calling every element-specific behavior HTML parsers do "liberal and tag-soup"-like. Yes, WHATWG HTML does define error recovery rules, and HTML introduced historic blunders to accommodate inline CSS and inline JS, but almost always what's being complained about are just SGML empty elements (aka HTML void elements) or tag omission (as described above) by folks not doing their homework.
[1]: https://sgmljs.sgml.net/docs/html5.html#tag-omission (see also XML Prague 2017 proceedings pp. 101ff)
delaminator•1d ago
Netscape Navigator did, in fact, reject invalid HTML. Then along came Internet Explorer, which chose "render invalid HTML, do what I mean" as a strategy. People, my young naive self included, moaned about NN being too strict. NN eventually switched to the tag-soup approach. XHTML 1.0 arrived in 2000, attempting to reform HTML by recasting it as an XML application. The idea was to impose XML's strict parsing rules: well-formed documents only, close all your tags, lowercase element names, quote all attributes, and if the document is malformed, the parser must stop and display an error rather than guess. XHTML was abandoned in 2009. When HTML5 was being drafted from 2004 onwards, the WHATWG actually had to formally specify how browsers should handle malformed markup, essentially codifying IE's error-recovery heuristics as the standard.
Sniffnoy•7h ago
The oldest public HTML documentation there is, from 1991, demonstrates that <li>, <dt>, and <dd> tags don't need to be closed! And the oldest HTML DTD, from 1992, explicitly specifies that these, as well as <p>, don't need closing. Remember, HTML is derived from SGML, not XML; and SGML, unlike XML, allows for the possibility of tags with optional close. The attempt to make HTML more XML-like didn't come until later.
bazoom42•6h ago
Leaving out closing tags is possible when the parsing is unambiguous. E.g. <p>foo<p>bar is unambiguous because p elements do not nest, so they are closed automatically by the next p.
The question of invalid HTML is a separate issue. E.g. you can't nest a p inside an i according to the spec, so how does a browser render that? Or a lexical error like illegal characters in an unquoted attribute value.
This is where it gets tricky. Render anyway, skip the invalid HTML, or stop rendering with an error message? HTML did not specify what to do with invalid input, so either is legal. Browsers chose the "render anyway" approach, but this led to different outputs in different browsers, since it wasn't agreed upon how to render invalid HTML.
The difference between Netscape and IE was that Netscape would in more cases skip rendering invalid HTML, whereas IE would always render the content.
ezequiel-garzon•6h ago
This is clear in Tim Berners-Lee's seminal, pre-Netscape "HTML Tags" document [0], through HTML 4 [4] and (as you point out) through the current living standard [5].
[0] https://www.w3.org/History/19921103-hypertext/hypertext/WWW/...
[4] https://www.w3.org/TR/html401/intro/sgmltut.html#h-3.2.1
[5] https://html.spec.whatwg.org/multipage/syntax.html#optional-...
pornel•6h ago
Because table layout was common, a missing </table> was a common error that resulted in a blank page in NN. That was a completely unintentional bug.
Optional closing tags were inherited from SGML, and were always part of HTML. They're not even an error.
neilv•5h ago
Around 2000, I was meeting with Tim Berners-Lee, and I mentioned I'd been writing a bunch of Web utility code. He wanted to see, so I handed him some printed API docs I had with me. (He talked and read fast.)
Then I realized he was reading the editorializing in my permissive parser docs, about how browser vendors should've put a big error/warning message on the window for invalid HTML.
Which suddenly felt presumptuous of me, to be having opinions about Web standards, right in front of Tim Berners-Lee at the time.
(My thinking with the prominent warning message that every visitor would see, in mid/late-'90s, was that it would've been compelling social pressure at the time. It would imply that this gold rush dotcom or aspiring developer wasn't good at Web. Everyone was getting money in the belief that they knew anything at all about Web, with little way to evaluate how much they knew.)