frontpage.

Building a Custom Clawdbot Workflow to Automate Website Creation

https://seedance2api.org/
1•pekingzcc•1m ago•1 comments

Why the "Taiwan Dome" won't survive a Chinese attack

https://www.lowyinstitute.org/the-interpreter/why-taiwan-dome-won-t-survive-chinese-attack
1•ryan_j_naughton•1m ago•0 comments

Xkcd: Game AIs

https://xkcd.com/1002/
1•ravenical•3m ago•0 comments

Windows 11 is finally killing off legacy printer drivers in 2026

https://www.windowscentral.com/microsoft/windows-11/windows-11-finally-pulls-the-plug-on-legacy-p...
1•ValdikSS•3m ago•0 comments

From Offloading to Engagement (Study on Generative AI)

https://www.mdpi.com/2306-5729/10/11/172
1•boshomi•5m ago•1 comments

AI for People

https://justsitandgrin.im/posts/ai-for-people/
1•dive•6m ago•0 comments

Rome is studded with cannon balls (2022)

https://essenceofrome.com/rome-is-studded-with-cannon-balls
1•thomassmith65•12m ago•0 comments

8-piece tablebase development on Lichess (op1 partial)

https://lichess.org/@/Lichess/blog/op1-partial-8-piece-tablebase-available/1ptPBDpC
2•somethingp•13m ago•0 comments

US to bankroll far-right think tanks in Europe against digital laws

https://www.brusselstimes.com/1957195/us-to-fund-far-right-forces-in-europe-tbtb
3•saubeidl•14m ago•0 comments

Ask HN: Have AI companies replaced their own SaaS usage with agents?

1•tuxpenguine•17m ago•0 comments

pi-nes

https://twitter.com/thomasmustier/status/2018362041506132205
1•tosh•19m ago•0 comments

Show HN: Crew – Multi-agent orchestration tool for AI-assisted development

https://github.com/garnetliu/crew
1•gl2334•19m ago•0 comments

New hire fixed a problem so fast, their boss left to become a yoga instructor

https://www.theregister.com/2026/02/06/on_call/
1•Brajeshwar•21m ago•0 comments

Four horsemen of the AI-pocalypse line up capex bigger than Israel's GDP

https://www.theregister.com/2026/02/06/ai_capex_plans/
1•Brajeshwar•21m ago•0 comments

A free Dynamic QR Code generator (no expiring links)

https://free-dynamic-qr-generator.com/
1•nookeshkarri7•22m ago•1 comments

nextTick but for React.js

https://suhaotian.github.io/use-next-tick/
1•jeremy_su•23m ago•0 comments

Show HN: I Built an AI-Powered Pull Request Review Tool

https://github.com/HighGarden-Studio/HighReview
1•highgarden•24m ago•0 comments

Git-am applies commit message diffs

https://lore.kernel.org/git/bcqvh7ahjjgzpgxwnr4kh3hfkksfruf54refyry3ha7qk7dldf@fij5calmscvm/
1•rkta•26m ago•0 comments

ClawEmail: 1min setup for OpenClaw agents with Gmail, Docs

https://clawemail.com
1•aleks5678•33m ago•1 comments

UnAutomating the Economy: More Labor but at What Cost?

https://www.greshm.org/blog/unautomating-the-economy/
1•Suncho•40m ago•1 comments

Show HN: Gettorr – Stream magnet links in the browser via WebRTC (no install)

https://gettorr.com/
1•BenaouidateMed•41m ago•0 comments

Statin drugs safer than previously thought

https://www.semafor.com/article/02/06/2026/statin-drugs-safer-than-previously-thought
1•stareatgoats•43m ago•0 comments

Handy when you just want to distract yourself for a moment

https://d6.h5go.life/
1•TrendSpotterPro•44m ago•0 comments

More States Are Taking Aim at a Controversial Early Reading Method

https://www.edweek.org/teaching-learning/more-states-are-taking-aim-at-a-controversial-early-read...
2•lelanthran•46m ago•0 comments

AI will not save developer productivity

https://www.infoworld.com/article/4125409/ai-will-not-save-developer-productivity.html
1•indentit•51m ago•0 comments

How I do and don't use agents

https://twitter.com/jessfraz/status/2019975917863661760
1•tosh•57m ago•0 comments

BTDUex Safe? The Back End Withdrawal Anomalies

1•aoijfoqfw•1h ago•0 comments

Show HN: Compile-Time Vibe Coding

https://github.com/Michael-JB/vibecode
7•michaelchicory•1h ago•1 comments

Show HN: Ensemble – macOS App to Manage Claude Code Skills, MCPs, and Claude.md

https://github.com/O0000-code/Ensemble
1•IO0oI•1h ago•1 comments

PR to support XMPP channels in OpenClaw

https://github.com/openclaw/openclaw/pull/9741
1•mickael•1h ago•0 comments

HTML as an Accessible Format for Papers (2023)

https://info.arxiv.org/about/accessible_HTML.html
262•el3ctron•2mo ago

Comments

el3ctron•2mo ago
Accessibility barriers in research are not new, but they are urgent. The message we have heard from our community is that arXiv can have the most impact in the shortest time by offering HTML papers alongside the existing PDF.
lalithaar•2mo ago
Hello, I was going through the HTML versions of my preprints on arXiv. Thank you for all that you guys do. Please do let me know if the community could contribute through any means.
dginev•2mo ago
You can help make LaTeXML better, or you can simply report issues when you spot them during reading. Some we have collected automatically (errors and missing packages), but others we can't - wrong colors, broken aspect ratios of figures, weirdly laid-out author lists, etc.
lalithaar•2mo ago
I was reading through this article too, glad to have found it on here
ForceBru•2mo ago
Is this new or somehow updated? HTML versions of papers have been available for several years now.

EDIT: indeed, it was introduced in 2023: https://blog.arxiv.org/2023/12/21/accessibility-update-arxiv...

Tagbert•2mo ago
From the paper...

Why "experimental" HTML?

Did you know that 90% of submissions to arXiv are in TeX format, mostly LaTeX? That poses a unique accessibility challenge: to accurately convert from TeX—a very extensible language used in myriad unique ways by authors—to HTML, a language that is much more accessible to screen readers and text-to-speech software, screen magnifiers, and mobile devices. In addition to the technical challenges, the conversion must be both rapid and automated in order to maintain arXiv’s core service of free and fast dissemination.

ForceBru•2mo ago
No I mean _arXiv_ has had experimental support for generating HTML versions of papers for years now. If you visit arXiv, you'll see a lot of papers have generated HTML alongside the usual PDF, so I'm trying to understand whether the article discussed any new developments. It seems like it's not new at all
daemonologist•2mo ago
There are pretty often problems with figure size and with sections being too narrow or wide (for comfortable reading). The PDF versions are more consistently well-laid-out.
fooofw•2mo ago
It's kind of fun to compare this formulation with the seemingly contradictory official arXiv argument for submitting the TeX source [1]:

> 1. TeX has many advantages that make it ideal as a format for the archives: It is plain text, it is compact, it is freely available for all platforms, it produces extremely high-quality output, and it retains contextual information.

> 2. It is thus more likely to be a good source from which to generate newer formats, e.g., HTML, MathML, various ePub formats, etc. [...]

Not that I disagree with the effort and it surely is a unique challenge to, at scale, convert the Turing complete macro language TeX to something other than PDF. And, at the same time, the task would be monumentally more difficult if only the generated PDFs were available. So both are right at the same time.

[1] https://info.arxiv.org/help/faq/whytex.html#contextual

tosti•2mo ago
Working with both at the same time makes their strengths and pitfalls shine. It's like that dual-boot computer where you're constantly in the wrong OS.

HTML has better separation of concerns than latex. Latex does typesetting a lot better than html. HTML layout can differ wildly in the same document. Latex documents are easier to lay out in the first place.

...etc...

inglor•2mo ago
You're right https://github.com/arXiv/arxiv-docs/blob/develop/source/abou... this needs a 2023 tag @dang
ashleyn•2mo ago
Can't help but wonder if this was motivated in part by people feeding papers into LLMs for summary, search, or review. PDF is awful for LLMs. You're effectively pigeonholed into using (PAYING for) Adobe's proprietary app and models which barely hold a candle to Gemini or Claude. There are PDF-to-text converters, but they often munge up the formatting.
jrk•2mo ago
Not sure when you last tried, but Gemini, Claude, and ChatGPT have all supported pretty effective PDF input for quite a while.
Barbing•2mo ago
>Did you know that 90% of submissions to arXiv are in TeX format, mostly LaTeX? That poses a unique accessibility challenge: to accurately convert from TeX—a very extensible language used in myriad unique ways by authors—to HTML, a language that is much more accessible to screen readers and text-to-speech software, screen magnifiers, and mobile devices.

Challenging. Good work!

sega_sai•2mo ago
Unfortunately I didn't see any recommendation there on what can be done for old papers. I checked, and only my papers after 2022 have an HTML version. I wish they'd make some kind of 'try html' button for those.
sundarurfriend•2mo ago
Do the older papers work via [Ar5iv](https://ar5iv.labs.arxiv.org/) ?

> View any arXiv article URL [in HTML] by changing the X to a 5

The line

> Sources upto the end of November 2025.

sounds to me like this is indeed intended for older articles.

dginev•2mo ago
ar5iv tracks the arXiv collection with a one-month lag, exactly so as to signal that this is not the "official" arXiv rendering. It is also a showcase predating the arXiv /html/ route, but largely using the same technology. Nowadays it is maintained by the same people (hi!)

There used to be another showcase, called arxiv-vanity. They captured what happened pretty well with their farewell post on their homepage:

https://www.arxiv-vanity.com/

rootnod3•2mo ago
Maybe unpopular, but papers should be in a markdown flavor to be determined. Just to have them more machine readable.
doc_ick•2mo ago
Not unpopular, but a lot of the publishing companies would have to agree to that and make their own formatting/structure rules.

I also haven’t had good luck with images/graphs/custom tables in anything but Typst/LaTeX.

xigoi•2mo ago
Compared to HTML, Markdown is very bad at being machine-readable.
nateroling•2mo ago
Seeing the Gemini 3 capabilities, I can imagine a near future where file formats are effectively irrelevant.
doc_ick•2mo ago
Tell that to publishing companies.
DANmode•2mo ago
Files.

Truth in general, if we aren't careful.

sansseriff•2mo ago
Seriously. More people need to wake up to this. Older generations can keep arguing over display formats if they want. Meanwhile younger undergrad and grad students are getting more and more accustomed to LLMs forming the front end for any knowledge they consume. Why would research papers be any different?
JadeNB•2mo ago
> Meanwhile younger undergrad and grad students are getting more and more accustomed to LLMs forming the front end for any knowledge they consume.

Well, that's terrifying. I mean, I knew it about undergrads, but I sure hoped people going into grad school would be aware of the dangers of making your main contact with research, where subtle details are important, through a known-distorting filter.

(I mean, I'd still be kinda terrified if you said that grad students first encounter papers through LLMs. But if it is the front end for all knowledge they consume? Absolutely dystopian.)

sansseriff•2mo ago
I admit it has dystopian elements. It’s worth deciding what specifically is scary though. The potential fallibility or mistakes of the models? Check back in a few months. The fact they’re run by giant corps which will steal and train on your data? Then run local models. Their potential to incorporate bias or persuade via misalignment with the reader’s goals? Trickier to resolve, but various labs and nonprofits are working on it.

In some ways I’m scared too. But that’s the way things are going because younger people far prefer the interface of chat and question answering to flipping through a textbook.

Even if AI makes more mistakes or is more misaligned with the reader’s intentions than a random human reviewer (which is debatable in certain fields since the latest models came out), the behavior of young people requires us to improve the reputability of these systems. (Make sure they use citations, make sure they don’t hallucinate, etc). I think the technology is so much more user friendly that fixing the engineering bugs will be easier than forcing new generations to use the older systems.

s0rce•2mo ago
Can you elaborate? Are you never reading papers directly but only using Gemini to reformat or combine/summarize?
nateroling•2mo ago
I mean that when a computer can visually understand a document and reformat and reinterpret it in any imaginable way, who cares how it’s stored? When a png or a pdf or a markdown doc can all be read and reinterpreted into an infographic or a database or an audiobook or an interactive infographic, the original format won’t matter.
qart•2mo ago
I have family members with health conditions that require periodic monitoring. For some tests, a phlebotomist comes home. For some tests, we go to a hospital. For some other tests, we go to a specialized testing center. They all give us PDFs in their own formats. I manually enter the data into my spreadsheet, for easy tracking. I use LLMs for some extraction, but they still miss a lot. At least for the foreseeable future, no LLM will ever guarantee that all the data has been extracted correctly. By "guarantee", I mean someone's life may depend on it. For now, doctors take up the responsibility of ensuring the data is correct and complete. But not having to deal with PDFs would make at least a part of their job (and our shared responsibilities) easier.
jas39•2mo ago
Pandoc can convert to svg. It can then be inlined in html. Looks just like latex, though copy/paste isn't very useful
stephenlf•2mo ago
That doesn’t solve the accessibility issue, though. You need semantic tags.
sundarurfriend•2mo ago
[Sept 2023] as per the wayback machine.
billconan•2mo ago
I don't think HTML is the right approach. HTML is better than PDF, but it is still a format for displaying/rendering.

the actual paper content format should be separated from its rendering.

i.e. it should contain abstract, sections, equations, figures, citations etc. but it shouldn't have font sizes, layout etc.

the viewer platforms then should be able to style the content differently.

afavour•2mo ago
Wouldn’t that be CSS?
billconan•2mo ago
no

    <div class="abstract-container">
      <div class="abstract">
        <pre><code> abstract text ... </code></pre>
      </div>
      <div class="author-list">
        <ol>
          <li>author one</li>
          <li>author two</li>
        </ol>
      </div>
    </div>

should be just:

    [abstract]
    abstract text

    [authors]
    author one | email | affiliation
    author two | email | affiliation

afavour•2mo ago
Sounds like XML and XSL would be a great fit here. Shame it’s being deprecated.

But you could still use HTML. Elements with a dash in their name are reserved for custom elements (that is, a new standardised element will never take that name), so you could do:

    <paper-author-list>
      <paper-author />
    </paper-author-list>
And it would be valid HTML. Then you’d style it with CSS, with

    paper-author {
      display: list-item;
    }
And so on.
bawolff•2mo ago
Nothing is stopping you from using server side XSL. I personally don't think it's a great fit, but people need to stop acting like XSL has been wiped from the face of the earth.
afavour•2mo ago
Yes but we’re specifically talking about a display format here. Something requiring a server side transform before being viewable by a user is a clear step backwards.
bawolff•2mo ago
How so? I can't think of any advantage to having client side xsl over outputting two files, in this context.
afavour•2mo ago
The discussion is about the form in which you share papers. With HTML you just share the HTML file, it opens instantly on basically any device.

If you distribute the paper as XML with an XSLT transform you need to run something that’ll perform that transform before you can read the paper. No matter whether that transform happens on the server or on the client it’s still an extra complication in the flow of sharing information.

xworld21•2mo ago
Indeed, LaTeXML (the software used by arXiv) converts LaTeX to a semantic XML document which is turned to HTML using primarily XSLT!
panzi•2mo ago
There is <article> <section> <figure> <legend>, but yes, <abstract> and <authors> are missing as such. But there are meta tags for such things. Then there is RDF and Thing. Not quite the same, I know, but it's not completely useless.
kevindamm•2mo ago
and you could shim these gaps with custom components, hypothetically
dimal•2mo ago
Perfect is the enemy of good. HTML is good enough. Let’s get this done.

And as another commenter has pointed out, HTML does exactly what you ask for. If it’s done correctly, it doesn’t contain font sizes or layout. Users can style HTML differently with custom CSS.

billconan•2mo ago
mixing rendering definitions with content (PDF) is something from the printer era, that is unsuitable for the digital era.

HTML was a digital format, but it wanted to be a generic format for all document types, not just papers, so it contains a lot of extras that a paper format doesn't need.

for research papers, since they share the same structure, we can further separate content from rendering.

for example, if you want to later connect a paper with an AI, do you want to send <div class="abstract"> ... ?

or do some nasty heuristic to extract the abstract? like document.getElementsByClassName("abstract")[0] ?

simonw•2mo ago
All of the interesting LLMs can handle a full paper these days without any trouble at all. I don't think it's worth spending much time optimizing for that use-case any more - that was much more important two years ago when most models topped out at 4,000 or 8,000 tokens.
bob1029•2mo ago
> HTML is better than PDF

I disagree. PDF is the most desirable format for printed media and its analogues. Any time I plan to seriously entertain a paper from Arxiv, I print it out first. I prefer to have the author's original intent in hand. Arbitrary page breaks and layout shifts that are a result of my specific hardware/software configuration are not desirable to me in this context of use.

ACCount37•2mo ago
I agree that PDF is best for things that are meant to be printed, no questions. But I wonder how common actually printing those papers is?

In research and in embedded hardware both, I've met some people who had entire stacks of papers printed out - research papers or datasheets or application notes - but also people who had 3 monitors and 64GB of RAM and all the papers open as browser tabs.

I'm far closer to the latter myself. Is this a "generational split" thing?

pfortuny•2mo ago
Possibly, but then again, when I need to study a paper, I print it; when I need just to skim it and use a result from it, it is more likely that I just read it on a screen (tablet/monitor). That is the difference for me.
s0rce•2mo ago
I used to print papers, probably stopped about 10 years ago. I now read everything in Zotero where I can highlight and save my annotations and sync my library between devices. You can also seamlessly archive html and pdfs. I don't see people printing papers in my workplace that often unless you need to read them in a wet lab where the computer is not convenient.
cluckindan•2mo ago
HTML alone is in fact not a format for displaying/rendering. Done properly, it is a structural representation of the content. (This is often called ”semantic HTML”.)

They are converting to HTML to make the content more accessible. Accessibility in this context means a11y, in effect ”more accessible” equates to ”more compatible with screen readers”.

While PDF documents can be made accessible, it is way easier to do it in HTML, where browsers build an actual AOM (accessibility object model) tree and expose it to screen readers.

>it should contain abstract, sections, equations, figures, citations etc.

So <article>, <section>, <math>, <figure>, <cite>, etc.
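
A minimal sketch of a paper skeleton using just those built-in elements (illustrative only, not arXiv's actual output):

    <article>
      <h1>Paper Title</h1>
      <section aria-label="Abstract">
        <h2>Abstract</h2>
        <p>Abstract text…</p>
      </section>
      <section>
        <h2>1. Introduction</h2>
        <p>As shown in <a href="#fig1">Figure 1</a>, the squared term
          <math><msup><mi>x</mi><mn>2</mn></msup></math> dominates.</p>
        <figure id="fig1">
          <img src="growth.png" alt="Growth of the squared term">
          <figcaption>Growth of the squared term.</figcaption>
        </figure>
        <p>Earlier work (<cite>Smith 2020</cite>) covers the linear case.</p>
      </section>
    </article>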

benatkin•2mo ago
Much of it is a structural representation of how to display the content.
cluckindan•2mo ago
In practice, sometimes. But in principle, hard disagree.

HTML was explicitly designed to semantically represent scientific documents. [1]

”HTML documents represent a media-independent description of interactive content. HTML documents might be rendered to a screen, or through a speech synthesizer, or on a braille display. To influence exactly how such rendering takes place, authors can use a styling language such as CSS.” [2]

1: https://html.spec.whatwg.org/multipage/introduction.html#bac...

2: https://html.spec.whatwg.org/multipage/introduction.html#:~:...

Theodores•2mo ago
I like Arxiv and what they are doing, however, do the auto-generated HTML files contain nothing more than a sea of divs dressed with a billion classes?

I would be delighted if they could do better than that, with figcaptions as well as figures, and sections 'scoped' with just one <h2-6> heading per section. They could specify how it really should be done, the HTML way, with a well defined way of doing the abstract and getting the cited sources to be in semantic markup yet not in some massive footer at the back.

There should also be a print stylesheet so that the paper prints out elegantly on A4 paper. Yes, I know you can 'print to PDF' but you can get all the typesetting needed in modern CSS stylesheets.
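
A minimal sketch of such a print stylesheet (margins and selectors are placeholders):

    @page {
      size: A4;
      margin: 2cm 2.5cm;
    }
    @media print {
      nav, .site-header { display: none; }
      h2, h3 { break-after: avoid; }
      figure, table { break-inside: avoid; }
    }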

Furthermore, they need to write a whole new HTML editor that discards WYSIWYG in favour of semantic markup. WYSIWYG has held us back by decades as it is useless for creating a semantic document. We haven't moved on from typewriters and the conventions needed to get those antiques to work, with word processors just emulating what people were used to at the time. What we really need is a means to evolve the written word, so that our thinking is 'semantic' when we come to put together documents, with a 'document structure first' approach.

LaTeX is great, however, last time I used it was many decades ago, when the tools were 'vi' (so not even vim) and GhostScript, running on a Sun workstation with mono screen. Since then I have done a few different jobs and never have I had the need to do anything in LaTeX or even open a LaTeX file. In the wild, LaTeX is rarer than hen's teeth. Yet we all read scientific papers from time to time, and Arxiv was founded on the availability of Tex files.

The lack of widespread adoption of semantic markup has been a huge bonus to Google and other gatekeepers that have the money to develop their own heuristics to make sense of 'seas of divs'. As it happens, Google have also been somewhat helpful with Chrome and advancing the web, even if it is for their gatekeeping purposes.

The whole world of gatekeeping is also atrocious in academia. Knowledge wants to be free, but it is also big business to the likes of Springer, who are already losing badly to open publishing.

As you say, in this instance, accessibility means screen readers, however, I hope that we can do better than that, to get back to the OG Tim Berners Lee vision of what the web should be like, as far as structuring information is concerned.

dginev•2mo ago
You will be delighted. Feel free to inspect some sources.
o11c•2mo ago
The hope for semantic HTML died the day they said "stop using <i>, use <em>", regardless of what the actual purpose of the italics was (it's usually not emphasis).
cluckindan•2mo ago
Who said that? The semantics are different.

The <i> HTML element represents a range of text that is set off from the normal text for some reason, such as idiomatic text, technical terms, taxonomical designations, among others. Historically, these have been presented using italicized type, which is the original source of the <i> naming of this element.

The <em> element is for words that have a stressed emphasis compared to surrounding text, which is often limited to a word or words of a sentence and affects the meaning of the sentence itself.

Typically this element is displayed in italic type. However, it should not be used to apply italic styling; use the CSS font-style property for that purpose. Use the <cite> element to mark the title of a work (book, play, song, etc.). Use the <i> element to mark text that is in an alternate tone or mood, which covers many common situations for italics such as scientific names or words in other languages.
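
A one-line example of the distinction:

    <p>The <i>E. coli</i> cultures <em>must</em> be kept at 37 °C.</p>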

pwdisswordfishy•1mo ago
> Who said that?

Unfortunately, a lot of people who missed the point entirely.

(We can, however, still disagree with the commenter that this "killed" semantic HTML. Fond of overstating things a bit?)

m-schuetz•2mo ago
That's a purist stance that's never going to work out in practice. Authors will always want to adjust the presentation of content, and HTML might be even better suited for that than LaTeX, which is bad at both.
vatsachak•2mo ago
Why do we like HTML more than pdfs?

HTML rendering requires you to be connected to the internet, or to set up the images and MathJax locally. A PDF just works.

HTML obviously supports dynamic embedding, such as programs, much better but people just usually post a github.io page with the paper.

devnull3•2mo ago
> HTML rendering requires you to be connected to the internet

Not really. One can always generate a self-contained html. Both CSS and JS (if needed) can be inline.
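
A minimal sketch of a fully offline page (no CDN needed; MathML renders natively in current browsers):

    <!DOCTYPE html>
    <html lang="en">
    <head>
      <meta charset="utf-8">
      <title>Self-contained paper</title>
      <style>body { max-width: 40em; margin: auto; font: 1rem/1.5 serif; }</style>
    </head>
    <body>
      <p>Equations work without any network access:
        <math><mfrac><mn>1</mn><mi>n</mi></mfrac></math></p>
    </body>
    </html>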

vatsachak•2mo ago
True but the webdev idiom is injecting things such as mathjax from a cdn. I guess one can pre-render the page and save that, but that's kind of like a PDF already
recursive•2mo ago
Why would html rendering require a network connection? It doesn't seem to on my machine.
vatsachak•2mo ago
Things like LaTeX equation rendering are hosted on a cdn
krapp•2mo ago
They can be but don't need to be. Any javascript can be localized like HTML and CSS.
vatsachak•2mo ago
That's fair, but imagine trying to get the average reader up to speed with something like npm.
krapp•2mo ago
You don't actually need npm either. You can literally just distribute everything - html, css, images and js in a zipped folder and open it locally.
nine_k•2mo ago
Try opening a PDF on a phone screen.
vatsachak•2mo ago
I do it all the time to read papers. It's easy
mmooss•2mo ago
epub 'just works' locally, and it's html under the hood.
teddy-smith•2mo ago
It's extremely easy to convert HTML/CSS to a PDF with the print to PDF feature of the browser.

All papers should be in HTML/CSS or Tex then just simply converted to PDF.

Why are we even talking about this?

tefkah•2mo ago
What are you talking about? No one’s writing their paper in HTML.

The problem is having the submissions be in TeX and converting that to HTML, when the only output has been PDF for so long.

The problem isn’t converting HTML to PDF, it’s making available a giant portion of TeX/pdf only papers in HTML.

If you’re arguing that maybe TeX then shouldn’t be the source format for papers then I agree, but other than Typst (which also isn’t perfect about HTML output yet) there aren’t that many widely accepted/used authoring formats for physics/math papers, which is what ArXiV primarily hosts.

teddy-smith•2mo ago
This is what I'm talking about. HTML/CSS is more powerful than PDF or TEX.

https://csszengarden.com/

nkrisc•2mo ago
So, uh, where do the HTML versions of the papers come from?
teddy-smith•2mo ago
Ground truth.
nkrisc•2mo ago
What do you mean by that? That researchers should be authoring their papers in HTML?
benatkin•2mo ago
It's easy to convert PDF to HTML/CSS, with similar results.

Either way it gets shoehorned.

ekjhgkejhgk•2mo ago
LOL what. You're either trolling, or you've never written a paper in your life.
teddy-smith•2mo ago
It sounds like you might not understand the power of modern HTML/CSS.
carlosjobim•2mo ago
Except you can't have page breaks, three links in a row, anchor links.
teddy-smith•2mo ago
@media print { .page, .page-break { break-after: page; } }
carlosjobim•2mo ago
It doesn't function in real use, it's just theoretical.
teddy-smith•2mo ago
https://developer.mozilla.org/en-US/docs/Web/CSS/Reference/P...

Literally part of Mozilla's docs.

carlosjobim•2mo ago
That's theory. Can you send me a link to any html file where this actually works? It's a problem I'd love to have solved.

Edit to clarify: The break-after property works with the worthless print dialogues, but doesn't function with "Export to PDF", which is what most people will want to use.

crazygringo•2mo ago
Have you ever written a paper for publication?

HTML doesn't support the necessary features. Citations in various formats, footnotes, references to automatically numbered figures and tables, I could go on and on.

HTML could certainly be extended to support those, but it hasn't been. That's why we're talking about this.

teddy-smith•2mo ago
Come on are you serious? HTML/CSS is more powerful than TEX or PDF.

https://csszengarden.com/

crazygringo•2mo ago
Did you fully read my comment? Please point me to where HTML/CSS provide the features I listed.

It doesn't really matter if HTML/CSS is more powerful at a hundred other layout things, if it doesn't provide the absolute necessary features for papers.

teddy-smith•2mo ago
Citations in various formats,

> https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...

> https://codepen.io/tag/citation

footnotes

>https://codepen.io/SitePoint/pen/QbMgvY

references to automatically numbered figures and tables

> https://stackoverflow.com/questions/25869906/table-auto-numb...

> https://codepen.io/MikeKelley/pen/GpXmEd
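
For what it's worth, CSS counters do give you automatically numbered figure captions (a minimal sketch, file names made up), though this alone doesn't resolve in-text references the way LaTeX's \ref does:

    <style>
      body { counter-reset: fig; }
      figure { counter-increment: fig; }
      figcaption::before { content: "Figure " counter(fig) ". "; }
    </style>
    <figure><img src="a.png" alt=""><figcaption>First result.</figcaption></figure>
    <figure><img src="b.png" alt=""><figcaption>Second result.</figcaption></figure>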

crazygringo•1mo ago
I don't think you understand.

Citations need to generate reference lists. Footnotes require automatic placement at the bottom of each page. Your examples of numbered tables are numbering the rows, not the tables. And figure numbers need to be referenced in the text.

None of what you're pointing to does what academic papers need. Why are you trying to push this agenda?

cubefox•2mo ago
This is not new, the title should say (2023). They have shipped the HTML feature with an "experimental" flag for two years now, but I don't know whether there is even any plan to move out of the experimental phase.

It's not much of an "experiment" if you don't plan to use some experimental data to improve things somehow.

ekjhgkejhgk•2mo ago
I wish epub was more common for papers. I have no idea if there's any real difficulties with that, or just not enough demand.
pspeter3•2mo ago
Why epub? Isn’t it just HTML under the hood?
ekjhgkejhgk•2mo ago
Because I can open it on my ereader.
silon42•2mo ago
I think it should also have JS disabled (I hope!)
mmooss•2mo ago
epub is html, under the hood

Is there an epub reader that can format text approximately as usably and beautifully as pdf? What I've seen makes it noticeably harder to read longer texts, though I haven't looked around much.

epub also lacks annotation, or at least annotation that will be readable across platforms and time.

hombre_fatal•2mo ago
Because what makes epub a format on top of html is just that someone QA'ed it and wrote the html/css with it in mind. Especially considering things like diagrams and tables.

Not really what you want researchers to waste their time doing.

But you can use any of the numerous html->epub packagers yourself.

leobg•2mo ago
It must have been around 1998. I was editor of our school’s newspaper. We were using Corel Draw. At some point, I proposed that we start using HTML instead. In the end, we decided against it, and the reasons were the same that you can read here in the comments now.
DominikPeters•2mo ago
As an arXiv author who likes using complicated TeX constructions, the introduction of HTML conversion has increased my workload a lot trying to write fallback macros that render okay after conversion. The conversion is super slow and there is no way to faithfully simulate it locally. Still I think it's a great thing to do.
xworld21•2mo ago
I believe dginev's Docker image https://github.com/dginev/ar5ivist is very close to what runs on arXiv and can be run locally. It uses a recent LaTeXML snapshot from September.
_dain_•2mo ago
Wasn't the World Wide Web invented at CERN specifically for sharing scientific papers? Why are we still using PDFs at all?
fsh•2mo ago
No, it wasn't. Scientists at CERN used DVI and later PDF like everyone else. HTML has no provisions for typesetting equations and is therefore not suitable for physics papers (without much newer hacks such as MathML).
teddy-smith•2mo ago
Why not typeset in something else and import the image into html/css?
cxr•1mo ago
MathML isn't new. It predates Windows 98 and the birth of a substantial part of HN's userbase.
ComputerGuru•2mo ago
If the Unicode consortium would spend less time and effort on emoji and more on making the most common/important mathematical symbols and notations available/renderable in plain text, maybe we could move past the (LA)TeX/PDF marriage. OpenType and TrueType now (edit: for well over a decade, actually) support the conditional rendering required to get sequences of Unicode code points to display the way they need to (theoretically, anyway). And with missing-glyph-only font family substitution available pretty much everywhere, allowing you to seamlessly display symbols not in your primary font from a fallback asset (something like Noto, with every Unicode symbol supported by design, or math-specific fonts like Cambria Math or TeX Gyre, etc.), there are no technical restrictions.

I’ve actually dug into this in the past and it was never lack of technical ability that prevented them from even adding just proper superscript/subscript support before, but rather their opinion that this didn’t belong in the symbolic layer. But since emoji abuse/rely on ZWJ and modifiers left and right to display in one of a myriad of variations, there’s really no good reason not to allow the same, because 2 and the squared symbol (²) are not semantically the same (so it’s not a design choice).

An interesting (complete) tangent is that Gemini 3 Pro is the only model I’ve tested (I do a lot of math-related stuff with LLMs) that absolutely will not under any circumstances respect (system/user) prompt requests to avoid inline math mode (aka LaTeX) in the output, regardless of whether I asked for a blanket ban on TeX/MathJax/etc or when I insisted that it use extended Unicode code points to substitute all math formula rendering (I primarily use LLMs via the TUI where I don’t have MathJax support, and as familiar as I once was with raw TeX mathematical notations and symbols, it’s still quite easy to confuse unrendered raw output by missing something if you’re not careful). I shared my experiment and results here – Gemini 3 Pro would insist on even rendering single letter constants or variables as $k$ instead of just k (or k in markdown italics, etc) no matter how hard I asked it not to (which makes me think it may have been overfit against raw LaTeX papers, and is also an interesting argument in favor of the “VL LLMs are the more natural construct” view): https://x.com/NeoSmart/status/1995582721327071367?s=20

hannahnowxyz•2mo ago
Have you tried a two-pass approach? For example, where prompt #1 is "Which elliptic curves have rational parameterizations?", and then prompt #2 (perhaps to a smaller/faster model like Gemma) is "In the following text, replace all LaTeX-escaped notation with Markdown code blocks and unicode characters. For example, $F_n = F_{n - 1} + F_{n - 2}$ should be replaced with `Fₙ = Fₙ₋₁ + Fₙ₋₂`. <Response from prompt #1>". Although it's not clear how you would want more complex things to be converted.
baby•2mo ago
I've done latex -> mathml -> markdown and it works quite well
yannis•2mo ago
It is actually quicker to ask using LaTeX markup!
toastal•2mo ago
reStructuredText supports :math: roles. AsciiDoc has stem blocks. Why do folks keep trying to shoehorn Markdown into everything, creating yet another fork, when there are other lightweight markup languages that support actual features for technical blogs/documentation?
moelf•2mo ago
https://github.com/stevengj/subsuper-proposal
crazygringo•2mo ago
I don't understand. No matter what fancy things you do with superscripts and subscripts, you're not going to be able to do even basic things you need for equations like use a fraction bar, or parentheses that grow in height to match the content inside them.

At a fundamental level, Unicode is for characters, not layout. Unicode may abuse the ZWJ for emoji, but it still ultimately results in a single emoji character, not a layout of characters. So I don't really understand what you're asking for.

lukan•2mo ago
Agreed. I think MathML is intended for layout of formulas and is integrated into browsers nowadays, but I've never used it, so I don't know if essentials are missing?
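
A minimal MathML sketch showing a fraction bar inside parentheses that stretch to fit (supported natively in current Chrome, Firefox, and Safari):

    <math display="block">
      <mrow>
        <mo>(</mo>
        <mfrac>
          <mrow><mi>a</mi><mo>+</mo><mi>b</mi></mrow>
          <mn>2</mn>
        </mfrac>
        <mo>)</mo>
      </mrow>
    </math>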
bsder•2mo ago
> No matter what fancy things you do with superscripts and subscripts, you're not going to be able to do even basic things you need for equations like use a fraction bar, or parentheses that grow in height to match the content inside them.

Why not? Things like Arabic ligatures already do that, no?

austinjp•2mo ago
This is interesting to me, but I am very naive about this. Can you explain, or point to where I could learn more?
bsder•2mo ago
I'd start with HarfBuzz: https://github.com/harfbuzz/harfbuzz

That's the open source font shaping engine. It does a lot of work to handle font shaping and rendering for languages that can't really be reduced to characters.

bruce343434•2mo ago
Arabic ligatures? Do you mean the unicode point for the basmala for instance? That's pretty "hardcoded", I think math requires more composability
SOTGO•2mo ago
I'm almost surprised that Gemini 3 uniquely has this problem. I would have expected that responses from any LLM that require complex math notation would almost certainly be LaTeX heavy, given the abundance of LaTeX source material in the training data. I suppose it is a flaw if a model can't avoid LaTeX, but given that it is the standard (and for the foreseeable future too) I don't know what appropriate output would look like. For "pure" mathematics or similar topics I think LaTeX (or system that represents a superset of LaTeX) is the only acceptable option.
raincole•2mo ago
Math formulas are far far far more complex than unicode emojis. I don't even know how to start comparing them.
franga2000•2mo ago
The whole "we need latex because of math" thing has been nothing more than a bad excuse for a very long time. Math notation is too varied to include in Unicode (some papers have to invent new notation!), but even if we had it, authors would still insist on latex. You can already make responsive and largely accessible papers that render to HTML, with familiar LaTeX syntax for equations, bibtex for references and all the footnotes/figures/tables/captions you might want.

But authors still refuse. It's not real science if the layout isn't two-column, written in an old serif font, tables and figures float randomly disconnected from their reference points, code isn't syntax highlighted and has completely nonsensical line breaks... If the reader wants to read it on a phone, or needs to change the font to be larger or more legible, they're not a real scientist and don't deserve to read real papers.

Seriously, what the fuck?? Even the economists are laughing at us with their MS Word and third-party cloud-based bibliography plugin subscription.

gus_massa•2mo ago
Authors just follow any format mandated by the journals.

In unofficial notes for the classes, most authors use single column, and try to remember the magic spell to keep the figures in place. Something like [H!] ???

Also most books are single column.

percentcer•2mo ago
Dumb question but what stops browsers from rendering TeX directly (aside from the work to implement it)? I assume it's more than just the rendering
pwdisswordfishy•2mo ago
For starters, TeX is Turing-complete, and the tokenizer is arbitrarily reprogrammable at runtime.
ErroneousBosh•2mo ago
Okay then, what would stop you rendering TeX to SVG and embedding that?

Edit: Genuine question, not rhetorical - I don't know how well it would work but it sounds like it should.

fooofw•2mo ago
That would (mostly if not always) work in the sense of reproducing the layout of the pages, but would defeat the purpose of preserving the semantic information present in the TeX file (what is a heading, a reference and to what, a specific math environment, etc.) which is AFAIK already mostly dropped on conversion to PDF by the latex compiler.
ErroneousBosh•2mo ago
Couldn't you write a TeX renderer that emitted HTML (or RST, or Markdown, or whatever) with SVG for the equations?
fooofw•1mo ago
I think this project is based on LaTeXML (https://math.nist.gov/~BMiller/LaTeXML/) which is exactly that (except for the SVG part)
gbear605•2mo ago
Browsers already support JavaScript anyway, so why not add another Turing-complete language into the mix? (Not even accounting for CSS technically being Turing-complete, or WASM, or …)
fph•2mo ago
As far as I know the TeX team has been working hard lately on supporting accessible "tagged PDFs". Hopefully one day TeX/LaTeX output will be accessible by default and conversion to HTML will not be needed.
bo1024•2mo ago
You mean a display engine that works like an HTML renderer, except starting from TeX source instead of HTML source? I think you could get something that mostly works, but it would be a pain and at the end you wouldn't have CSS or javascript, so I don't think browser makers are interested.
dginev•2mo ago
Hi, an arXiv HTML Papers developer here.

As a very brief update: a larger update is pending.

You will spot many (many) issues with our current coverage and fidelity of the paper rendering. When they jump out at you, please report them to us. All reports from the last 2 years have landed on GitHub. We have made a bit of progress since, but there is (a lot of) low-hanging fruit still to pick.

Project issues:

https://github.com/arXiv/html_feedback/issues/

The main bottleneck at the moment is developer time. And the main vehicle for improvements on the LaTeX side of things continues to be LaTeXML. Happy to field any questions.

istillwritecode•2mo ago
I would like to write code for latexml to translate a package but I found the documentation to be hard to understand. That might be what is holding developers back. I looked at this a year ago and gave up.
dginev•2mo ago
Tell us what you would need described in a tutorial to be productive, as well as your background with the technologies involved (TeX/LaTeX, perl, XML, XSLT, HTML). Probably best as a new issue:

https://github.com/brucemiller/LaTeXML/issues

It's a pretty deep rabbit hole, but I wholeheartedly agree that the incantations for most standard package support should be few and easy to use.

chr15m•2mo ago
Wish I could upvote this harder. Thank you arXiv!
RandyOrion•2mo ago
For arXiv papers, I prefer HTML format much more than PDF format.

Compared to PDF format, HTML format is much more accessible because of browsers. Basically I can reuse my browser extensions to do anything I like without hassle, like translation, note taking, sending texts to LLMs, and so on.

For now, arXiv offers two HTML services: the default one at https://arxiv.org/html/xxxx.xxxxx , and the alternative one at https://ar5iv.labs.arxiv.org/html/xxxx.xxxxx , where each 'x' is a placeholder for a digit.

The most glaring problem of the default HTML service is the coverage of papers. Sometimes it just doesn't work, e.g., https://arxiv.org/html/2505.06708 . The solution may be to switch to the alternative HTML service, e.g., https://ar5iv.labs.arxiv.org/html/2505.06708 .

Note that the alternative HTML service also has coverage problems. Sometimes both HTML services fail, e.g. https://arxiv.org/abs/2511.22625 .

rhubarbtree•2mo ago
Serious question: do websites from the 90s work well in modern browsers? Because PDFs from that time view fine.
cxr•1mo ago
Aside from sites that used non-standard stuff like ActiveX or Java applets, the general answer is "yes".

And to respond to your implied criticism: the stability/reliability/fidelity of PDFs is a myth. It would be hard to say how many dozens of PDFs I've come across in the last two years that don't look the same across devices/viewers (or sometimes just fail to render in their entirety). This played a significant part in a cascade of errors in one incident I know of that resulted in the payout of a claim more than $1,000 but less than $10,000—not to mention a lot of strife and anger for the persons involved over the course of multiple months before resolution.

(As I write this now, I realize I'd almost forgotten about the fact that almost every time I've taken something to FedEx or UPS to be printed at a self-service kiosk, the result has been unusable, so I've had to take it to the clerk to have them print it instead.)

HTML at least has the property that it's still trivial to access and extract the data if you run into either malformed inputs or ones that are valid but incompatible/unsupported by whatever viewer (browser) you happen to be using, which is a lot more than you can say for more opaque formats like Java, PDF, and Flash.

notorandit•2mo ago
The problem is the viewer, not the format. We are talking about accessibility and scientific papers, where fancy animations and transitions are not core features.

LaTeX and TeX are the de facto standard for this context and converting all existing documents is a lot of work and energy to be spent for basically little gain, if any.

constantcrying•2mo ago
Reading this thread, many people do not seem to understand what the problem even is. What researchers writing papers want is a low effort/high flexibility way to write documents (nobody wants to write their paper in HTML). For a paper to be printed it needs to be in some printable format, like PDF. To provide accessibility and accommodate the changing ways papers are read, which is increasingly online, HTML is also a desirable output.

What really is needed is a markup language which can natively target both PDF and HTML. This is something Typst is working on, but I am not aware of any other project which either comes close to the features of LaTeX or supports both target formats.

To me this is the only reasonable way to address the accessibility and usability issues around papers. Have one markup, with sufficient accessibility features, which simultaneously targets HTML and PDF.

zipy124•2mo ago
The biggest issue with papers for me today is that they don't allow videos as anything other than supplemental materials to be downloaded, or as links to a web page that hosts them. I want to embed GIFs or videos in my papers directly!
cxr•1mo ago
Here in the Muggle world, there's no material I know of that can be used to produce a type of paper that supports moving images.