Not so PDFs.
I'm far from an expert on the format, so maybe there is some semantic support in there, but I've seen plenty of PDFs where tables are simply a loose assemblage of graphical and text elements that are only discernible as a table once rendered, because they happen to be positioned so that they look like one.
I've actually had decent luck extracting tabular data from PDFs by converting them to HTML using the Poppler PDF utils, finding the expected table header, and then using the x-coordinates of the HTML elements to work out which column each value belongs to and extracting values row by row.
It's kind of grotty, but it seems reliable for what I need. Certainly much more so than going via formatted plaintext, which has issues with inconsistent spacing and newlines inserted into the middle of rows.
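For anyone curious, here's a minimal sketch of the idea, assuming Poppler's pdftohtml is installed; the file name, column names, and x-coordinate ranges are placeholders you'd really derive from locating the table header first:

import subprocess
import xml.etree.ElementTree as ET
from collections import defaultdict

# Dump text runs with coordinates using Poppler's pdftohtml in XML mode.
xml_out = subprocess.run(
    ["pdftohtml", "-xml", "-stdout", "report.pdf"],
    capture_output=True, text=True, check=True,
).stdout
root = ET.fromstring(xml_out)

# Hypothetical column boundaries (left/right x-coordinates), which in practice
# you'd take from the positions of the table header cells.
columns = {"date": (50, 150), "description": (150, 400), "amount": (400, 560)}

rows = defaultdict(dict)
for el in root.iter("text"):
    left, top = int(el.attrib["left"]), int(el.attrib["top"])
    value = "".join(el.itertext()).strip()
    for name, (lo, hi) in columns.items():
        if lo <= left < hi:
            # Bucket by vertical position so values on the same line share a row.
            rows[round(top / 5)][name] = value
            break

for _, row in sorted(rows.items()):
    print(row)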
It's possible to create the same PDF in many, many, many ways.
Some might lean towards exporting a layout containing text and graphics from a graphics suite.
Others might lean towards exporting text and graphics from a word processor, which is words first.
The way the creating app deals with information often shapes how the PDF is output.
If you're looking for an off-the-shelf utility that is surprisingly decent at pulling structured data from PDFs, tools like Cisdem have already solved enough of it for local users. There are lots of tools like this out there; many promise structured data support, but it needs to match what you're up to.
This is false. PDFs are an object graph containing imperative-style drawing instructions (among many other things). There’s a way to add structural information on top (akin to an HTML document structure), but that’s completely optional and only serves as auxiliary metadata; it’s not at the core of the PDF format.
Indeed. Therein lies the rub.
Why?
Because despite the fact that I've spent several years of my career crawling, parsing, and outputting PDF data, I see now that pointing my LLM stack at a directory of *.pdf just makes the invisible encoding of the object graph visible. It's a skeptical science.
The key transition may be to move from imperative to declarative tools, or from conditional to probabilistic tools, as many areas have in the last couple of decades.
I've been following Jon Sterling's OCaml work on related topics for a while, and the ideas floating around forests and his Forester tool have been a good influence on me; I found them resonant given my own experience:
https://www.jonmsterling.com/index/index.xml
https://github.com/jonsterling/forest
I was gonna email Jon and ask whether it's still being worked on, as I hope it is, but I brought it up this morning as a way out of the noise that imperative PDF programming has been for a decade or more: it's turtles all the way down to the low-level root-cause libraries, so the high-level imperative languages often display the exact same bugs, despite significant differences in what's intended in the small at the top of the stack versus the large at the bottom of the stack. It would help if "fitness for a particular purpose" decisions were thoughtful about publishing and distribution, but as the CFO likes to say, "Dave, that ship has already sailed." Sigh.
¯\_(ツ)_/¯
We get to learn a lot when something is new to us. At the same time, the untouchable parts of PDF-to-text extraction are largely being solved with the help of LLMs.
I built a tool to extract information from PDFs a long time ago, and the breakthrough was having no ego or attachment to any one way of doing it.
Different solutions and approaches offered different depth or quality of results, and organizing them to work together, in addition to anything I built myself, provided what was needed: one place where more things work than not.
For longer PDFs I've found that breaking them up into images per page and treating each page separately works well - feeding a thousand-page PDF to even a long-context model like Gemini 2.5 Pro or Flash still isn't reliable enough that I trust it.
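Roughly, the per-page loop looks something like this; pdf2image (Poppler-backed) for rendering plus the google-generativeai client is just one possible combination, and the model name, prompt, and dpi are illustrative rather than a recommendation:

from pdf2image import convert_from_path
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes an API key is available
model = genai.GenerativeModel("gemini-2.5-flash")

# Render each page to a PIL image, then transcribe pages independently so a
# bad page only affects itself rather than the whole document.
pages = convert_from_path("long_document.pdf", dpi=200)
chunks = []
for i, page in enumerate(pages, start=1):
    resp = model.generate_content(
        ["Transcribe this page to Markdown. Output only the transcription.", page]
    )
    chunks.append(f"<!-- page {i} -->\n{resp.text}")

markdown = "\n\n".join(chunks)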
As always though, the big challenge of using vision LLMs for OCR (or audio transcription) tasks is the risk of accidental instruction following - even more so if there's a risk of deliberately malicious instructions in the documents you are processing.
The article is in the context of an internet search engine, where the corpus to be converted is on the order of 1 TB. Running that amount of data through an LLM would be extremely expensive, given the relatively marginal improvement in outcome.
I've found Google's Flash cuts my OCR costs by 95% or more compared to traditional commercial offerings that support structured data extraction, and I still get tables, headers, etc. from each page. Still not perfect, but costs were less than a tenth of a cent per page, and 100 GB collections of PDFs ran to a few hundred dollars.
For the first I can run a segmentation model + traditional OCR in a day or two for the cost of warming my office in winter. For the second you'd need a few hundred dollars and a cloud server.
Feel free to reach out. I'd be happy to have a chat and do some pro-bono work for someone building an open-source tool chain and index for the rest of us.
There are a few tools that allow inspecting a PDF's contents (https://news.ycombinator.com/item?id=41379101) but they stop at the level of the PDF's objects, so entire content streams are single objects. For example, to use one of the PDFs mentioned in this post, the file https://bfi.uchicago.edu/wp-content/uploads/2022/06/BFI_WP_2... has, corresponding to page number 6 (PDF page 8), a content stream that starts like (some newlines added by me):
0 g 0 G
0 g 0 G
BT
/F19 10.9091 Tf 88.936 709.041 Td
[(Subsequen)28(t)-374(to)-373(the)-373(p)-28(erio)-28(d)-373(analyzed)-373(in)-374(our)-373(study)83(,)-383(Bridge's)-373(paren)27(t)-373(compan)28(y)-373(Ne)-1(wGlob)-27(e)-374(reduced)]TJ
-16.936 -21.922 Td
[(the)-438(n)28(um)28(b)-28(er)-437(of)-438(priv)56(ate)-438(sc)28(ho)-28(ols)-438(op)-27(erated)-438(b)28(y)-438(Bridge)-437(from)-438(405)-437(to)-438(112,)-464(and)-437(launc)28(hed)-438(a)-437(new)-438(mo)-28(del)]TJ
0 -21.923 Td
and it would be really cool to be able to see the above “source” and the rendered PDF side-by-side, hover over one to see the corresponding region of the other, etc., the way we can do for an HTML page.

cpdf -output-json -output-json-parse-content-streams in.pdf -o out.json
Then you can play around with the JSON, and turn it back to PDF with:

cpdf -j out.json -o out.pdf
No live back-and-forth though.

[
[ { "F": 0.0 }, "g" ],
[ { "F": 0.0 }, "G" ],
[ { "F": 0.0 }, "g" ],
[ { "F": 0.0 }, "G" ],
[ "BT" ],
[ "/F19", { "F": 10.9091 }, "Tf" ],
[ { "F": 88.93600000000001 }, { "F": 709.0410000000001 }, "Td" ],
[
[
"Subsequen",
{ "F": 28.0 },
"t",
{ "F": -374.0 },
"to",
{ "F": -373.0 },
"the",
{ "F": -373.0 },
"p",
{ "F": -28.0 },
"erio",
{ "F": -28.0 },
"d",
{ "F": -373.0 },
"analyzed",
{ "F": -373.0 },
"in",
{ "F": -374.0 },
"our",
{ "F": -373.0 },
"study",
{ "F": 83.0 },
",",
{ "F": -383.0 },
"Bridge's",
{ "F": -373.0 },
"paren",
{ "F": 27.0 },
"t",
{ "F": -373.0 },
"compan",
{ "F": 28.0 },
"y",
{ "F": -373.0 },
"Ne",
{ "F": -1.0 },
"wGlob",
{ "F": -27.0 },
"e",
{ "F": -374.0 },
"reduced"
],
"TJ"
],
[ { "F": -16.936 }, { "F": -21.922 }, "Td" ],
This is just a more verbose restatement of what's in the PDF file; the real questions I'm asking are:
- How can a user get to this part, from viewing the PDF file? (Note that the PDF page objects are not necessarily a flat list; they are often nested at different levels of “kids”.)
- How can a user understand these instructions, and “see” how they correspond to what is visually displayed on the PDF file?
I have a bunch of documents right now that are annual statutory and financial disclosures of a large institute, and they are just barely differently organized from each year to the next to make it too tedious to cross compare them manually. I've been looking around for a tool that could break out the content and let me reorder it so that the same section is on the same page for every report.
This might be it.
The 'liveness' here is that you can derive multiple downstream cells (e.g. filters, groupings, drawing instructions) from the initial parsed PDF, which will update as you swap out the PDF file.
The problem of getting a representative body is (surprisingly) much harder than the annotation. I know. I spent quite some time years ago doing this.
Shameless plug: I use this under the hood when you prefix any PDF URL with https://pure.md/ to convert to raw text.
- The header is parsed in a way that I suspect would mislead an LLM: "BRETT GUTHRIE, KENTUCKY FRANK PALLONE, JR., NEW JERSEY CHAIRMAN RANKING MEMBER ONE HUNDRED NINETEENTH CONGRESS". Guthrie is the chairman and Pallone is the ranking member, but that isn't implied in the text. In this particular case an LLM might already know that from other sources, but in more obscure contexts it will just have to rely on the parsed text.
- It isn't converted into Markdown at all, the structure is completely lost. If you only care about text then I guess that's fine, and in this case an LLM might do an ok job at identifying some of the headers, but in the context of this discussion I think ai.toMarkdown() did a bad job of converting to Markdown and a just ok job of converting to text.
I would have considered this a fairly easy test case, so it would make me hesitant to trust that function for general use if I were trying to solve the challenges described in the submitted article (Identifying headings, Joining consecutive headings, Identifying Paragraphs).
I see that you are trying to minimize tokens for LLM input, so I realize your goals are probably not the same as what I'm talking about.
Edit: Another test case, it seems to crash on any Arxiv PDF. Example: https://pure.md/https://arxiv.org/pdf/2411.12104.
Fixed, thanks for reporting :-)
First, it's all the same font size everywhere; it also has bolded "headings" containing spaces that are not bolded. Had to fix my own handling to get it to process well.
This is the search engine's view of the document as of those fixes: https://www.marginalia.nu/junk/congress.html
Still far from perfect...
Heh, in my experience with PDFs that's a tautology
I would wager that they’re using OCR/LLM in their pipeline.
The only PDF parsing scenario I would consider putting my name on is scraping AcroForm field values from standardized documents.
In the e-Discovery field, it's commonplace for those providing evidence to dump it into a PDF purely so that it's harder for the opposing side's lawyers to consume. If both sides have lots of money this isn't a barrier, but for example public defenders don't have funds to hire someone (me!) to process the PDFs into a readable format, so realistically they end up taking much longer to process the data, which takes a psychological toll on the defendant. And that's if they process the data at all.
The solution is to make it illegal to do this: wiretap data, for example, should be provided in a standardized machine-readable format. There's no ethical reason for simple technical friction to be affecting the outcomes of criminal proceedings.
Fundamentally, the solution to this problem is to not create it in the first place. There's no reason for there to be a structured data -> PDF -> AI -> structured data pipeline when we can just force people providing evidence to provide the structured data.
It can be very tempting to look at the problem as a simple question of computation and data. There's a standard so it must be able to be implemented 1:1. But (and this is very similar to the web) baked into any rendering engine anyone actually uses are a million different little hacks and special conditionals built up over years of dealing with PDFs in the wild, that adds up to something which performs how users expect, not what is actually 100% formally correct.
What's the timeline for this solution to pay off?
But generally speaking, you don't have that control.
https://academic.oup.com/auk/article/126/4/717/5148354
The first page is classic with two columns of text, centered headings, a text inclusion that sits between the columns and changes the line lengths and indentations for the columns. Then we get the fun of page headers that change between odd and even pages and section header conventions that vary drastically.
Oh... to make things even better, paragraphs don't get extra spacing and don't always have an indented first line.
Some of everything.
If you step back, you can imagine that the app that originally captured/produced the PDF — perhaps a word processor — was likely rendering the text into the PDF context in some reasonable order from its own text buffer(s). So even for two columns, you rather expect, and often find, that the text flowed correctly from the left column to the right. The text was therefore already in the correct order within the PDF document.
Now, footers, headers on the page — that would be anyone's guess as to what order the PDF-producing app dumped those into the PDF context.
[1] https://graphmetrix.com/trinpod-server https://trinapp.com
What I really want to do is take all these docs and just reorder all the content such that I can look at page n (or section whatever) scrolling down and compare it between different years by scrolling horizontally. Ideally with changes from one year to the next highlighted.
Can your product do this?
1. reliable OCR from documents (to index for search, feed into a vector DB, etc)
2. structured data extraction (pull out targeted values)
3. end-to-end document pipelines (e.g. automate mortgage applications)
Marginalia needs to solve problem #1 (OCR), which is luckily getting commoditized by the day thanks to models like Gemini Flash. I've now seen multiple companies replace their OCR pipelines with Flash for a fraction of the cost of previous solutions, it's really quite remarkable.
Problems #2 and #3 are much trickier. There's still a large gap for businesses in going from raw OCR outputs to document pipelines deployed in prod for mission-critical use cases. LLMs and VLMs aren't magic, and anyone who goes in expecting 100% automation is in for a surprise.
You still need to build and label datasets, orchestrate pipelines (classify -> split -> extract), detect uncertainty and correct with human-in-the-loop, fine-tune, and a lot more. You can certainly get close to full automation over time, but it's going to take time and effort. The future is definitely moving in this direction though.
Disclaimer: I started an LLM doc processing company to help companies solve problems in this space (https://extend.ai)
This is hard because:
1. Unlike a business workflow which often only deals with a few specific kinds of documents, you never know what the user is going to get. You're making an abstract PDF reader, not an app that can process court documents in bankruptcy cases in Delaware.
2. You don't just need the text (like in traditional OCR), you need to recognize tables, page headers and footers, footnotes, headings, mathematics etc.
3. Because this is for human consumption, you want to minimize errors as much as possible, which means not using OCR when it isn't needed, and relying on the underlying text embedded within the PDF while still extracting semantics. This means you essentially need two different paths: one for PDFs that only consist of images, and one for PDFs with content streams you can get some information from (a rough sketch of this split follows after the list).
3.1. But the content streams may contain different text from what's actually on the page, e.g. white-on-white text to hide information the user isn't supposed to see, or diacritics emulation with commands that manually draw acute accents instead of using proper unicode diacritics (LaTeX works that way).
4. You're likely running as a local app on the user's (possibly very underpowered) device, and likely don't have an associated server and subscription, so you can't use any cloud AI models.
5. You need to support forms. Since the user is using accessibility software, presumably they can't print and use a pen, so you need to handle the ones meant for printing too, not just the nice, spec-compatible ones.
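To make the split in point 3 concrete, here's a minimal sketch; pypdf, pdf2image, and pytesseract are just one possible (illustrative) combination, and the file name is made up:

from pypdf import PdfReader
from pdf2image import convert_from_path
import pytesseract

reader = PdfReader("input.pdf")
for i, page in enumerate(reader.pages):
    # Path 1: the PDF has content streams with usable text, so trust those.
    text = (page.extract_text() or "").strip()
    if not text:
        # Path 2: image-only page, so render just this page and OCR the bitmap.
        image = convert_from_path(
            "input.pdf", first_page=i + 1, last_page=i + 1, dpi=300
        )[0]
        text = pytesseract.image_to_string(image)
    print(f"--- page {i + 1} ---")
    print(text)

This only decides between the two paths; it does nothing about point 3.1, where the embedded text deliberately differs from what's actually rendered.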
This is very much an open problem and is not even remotely close to being solved. People have been taking stabs at it for years, but all current solutions suck in some way, and there's no single one that solves all 5 points correctly.
As someone who had to build custom tools because VLMs are so unreliable: anyone who uses VLMs on unprocessed images is in for more pain than all the providers who let LLMs without guardrails interact directly with consumers.
They are very good at image labeling. They are ok at very simple documents, e.g. single column text, centered single level of headings, one image or table per page, etc. (which is what all the MVP demos show). They need another trillion parameters to become bad at complex documents with tables and images.
Right now they hallucinate so badly that you simply _can't_ use them for something as simple as a table with a heading at the top, data in the middle and a summary at the bottom.
I definitely vaguely remember doing some incredibly cool things with PDFs and OCR about 6 or 7 years ago. Some project comes to mind... google tells me it was "tesseract" and that sounds familiar.
It was a bit of a heuristic hack; it was 20 years ago but as I recall poppler's ancient API didn't really represent text runs in a way you'd want for an accessibility API. A version of the multicolumn select made it in but it was a pain to try to persuade poppler's maintainer that subsequent suggestions to improve performance were ok - because they used slightly different heuristics so had different text selections in some circumstances. There was no 'right' answer, so wanting the results to match didn't make sense.
And that's how kpdf got multicolumn select, of a sort.
Using Tesseract directly for this has probably made more sense for some years now.
Didn't work well/was a very naive way to search for answers (which is prob good/idk what kind of trouble I'd have gotten in if it let me or anyone else who used it win all the time), but it was fun to build.
Looks like you’ve found my PDF. You might want this version instead:
PDFs are often subpar. Just see the first example: standard LaTeX serif section title. I mean, PDFs often aren’t even well-typeset for what they are (dead-tree simulations).
[1] No sarcasm or truism. Some may just want to submit a paper to whatever publisher and go through their whole laundry list of what a paper ought to be. Wide dissemination is not the point.
%PDF-1.4
1 0 obj
<<
/CreationDate (D:2025)
/Producer
>>
endobj
2 0 obj
<<
/Type /Catalog
/Pages 3 0 R
>>
endobj
4 0 obj
<<
/Type /Font
/Subtype /Type1
/Name /F1
/BaseFont /Times-Roman
>>
endobj
5 0 obj
<<
/Font << /F1 4 0 R >>
/ProcSet [ /PDF /Text ]
>>
endobj
6 0 obj
<<
/Type /Page
/Parent 3 0 R
/Resources 5 0 R
/Contents 7 0 R
>>
endobj
7 0 obj
<<
/Length 8 0 R
>>
stream
BT
/F1 50 Tf
1 0 0 1 50 752 Tm
54 TL
(PDF is)'
((a) a text format)'
((b) a graphics format)'
((c) (a) and (b).)'
()'
ET
endstream
endobj
8 0 obj
53
endobj
3 0 obj
<<
/Type /Pages
/Count 1
/MediaBox [ 0 0 612 792 ]
/Kids [ 6 0 R ]
>>
endobj
xref
0 9
0000000000 65535 f
0000000009 00000 n
0000000113 00000 n
0000000514 00000 n
0000000162 00000 n
0000000240 00000 n
0000000311 00000 n
0000000391 00000 n
0000000496 00000 n
trailer
<<
/Size 9
/Root 2 0 R
/Info 1 0 R
>>
startxref
599
%%EOF

"2.3.2 Portability
A PDF file is a 7-bit ASCII file, which means PDF files use only the printable subset of the ASCII character set to describe documents even those with images and special characters. As a result, PDF files are extremely portable across diverse hardware and operating system environments."
https://opensource.adobe.com/dc-acrobat-sdk-docs/pdfstandard...
Disclaimer: I'm the founder.
Second, I converted the PDF into pages of JPGs. Gemini performed exceptionally: near-perfect text extraction with the formatting intact in Markdown.
Maybe there's an internal difference when processing PDF vs. JPG inside the model.
The only way to solve that is with a segmentation model followed by a regular OCR model and whatever other specialized models you need to extract other types of data. VLMs aren't ready for prime time and won't be for a decade or more.
What worked was using DocLayNet-trained YOLO models (https://github.com/DS4SD/DocLayNet) to get the areas of the document that were text, images, tables, or formulas. If you don't care about anything but text, you can feed the results into Tesseract directly (but for the love of god, read the manual). Congratulations, you're done.
Here are some pre-trained models that work OK out of the box (https://github.com/ppaanngggg/yolo-doclaynet); I found that we needed to increase the resolution from ~700px to ~2100px horizontally for financial data segmentation.
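For anyone who wants to try this, a hedged sketch of the segment-then-OCR pipeline, using the ultralytics YOLO API with a DocLayNet checkpoint and Tesseract on the text regions; the weight file name, class labels, and page image path are assumptions for illustration:

from ultralytics import YOLO
from PIL import Image
import pytesseract

model = YOLO("yolov8-doclaynet.pt")  # hypothetical path to a DocLayNet-trained checkpoint
page = Image.open("page_001.png")    # one page rendered at high resolution (~2100px wide)

# DocLayNet layout classes treated as text-bearing; everything else is skipped.
TEXT_CLASSES = {"Text", "Title", "Section-header", "List-item"}

result = model(page)[0]
snippets = []
for box in result.boxes:
    label = result.names[int(box.cls)]
    if label not in TEXT_CLASSES:
        continue
    x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
    region = page.crop((x1, y1, x2, y2))
    # --psm 6: assume a single uniform block of text (see the Tesseract manual).
    snippets.append((y1, pytesseract.image_to_string(region, config="--psm 6")))

# Naive top-to-bottom ordering; multi-column layouts need smarter sorting.
for _, text in sorted(snippets):
    print(text)

Tables, images, and formulas found by the segmentation step would go to their own specialized handling rather than to Tesseract.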
VLMs on the other hand still choke on long text and hallucinate unpredictably. Worse they can't understand nested data. If you give _any_ current model nothing harder than three nested rectangles with text under each they will not extract the text correctly. Given that nested rectangles describes every table no VLM can currently extract data from anything but the most straightforward of tables. But it will happily lie to you that it did - after all a mining company should own a dozen bulldozers right? And if they each cost $35.000 it must be an amazing deal they got, right?
favorited•1h ago
My use-case was specifically testing their performance as command-line tools, so that will skew the results to an extent. For example, PDFBox was very slow because you're paying the JVM startup cost with each invocation.
Poppler's pdftotext utility and pdfminer.six were generally the fastest. Both produced serviceable plain-text versions of the PDFs, with minor differences in where they placed paragraph breaks.
I also wrote a small program which extracted text using Chrome's PDFium, which also performed well, but building that project can be a nightmare unless you're Google. IBM's Docling project, which uses ML models, produced by far the best formatting, preserving much of the document's original structure – but it was, of course, enormously slower and more energy-hungry.
Disclaimer: I was testing specific PDF files that are representative of the kind of documents my software produces.
egnehots•5h ago
Which is different from extracting "text". Text in a PDF can be encoded in many ways: in an actual image, in shapes (think segments, quadratic Bézier curves...), or in an XML format (really easy to process).
PDF viewers are able to render text much like a printer works, processing commands to put pixels on the screen at the end.
But often paragraphs, text layout, columns, and tables are lost in the process, even though you can see them: so close, yet so far. That is why AI is quite strong at this task.
rad_gruchalski•2h ago
The purpose of my original comment was simply to say: there’s an existing implementation, so if you’re building a PDF file viewer/editor and need inspiration, have a look. One of the reasons Mozilla is doing this is to be a reference implementation. I’m not sure why people are upset with this. Though, I could have explained it better.
rad_gruchalski•4h ago
Regarding tables, this here https://www.npmjs.com/package/pdf-table-extractor does a very good job at table interpretation and works on top of pdf.js.
I also didn’t say what works better or worse, neither do I go into PDF being good or bad.
I simply said that a ton of problems have already been covered by pdf.js.
iAMkenough•4h ago
The PDF itself is still flawed, even if pdf.js interprets it perfectly, which is still a problem for non-pdf.js viewers and tasks where "viewing" isn't the primary goal.