Ranking:
1. LlamaCloud/LlamaParse
2. GroundX
3. Unstructured.io
4. Google RAG Engine
5. Docling
(capability gap)
6. Azure Document Intelligence
7. AWS Textract
8. LlamaIndex (DIY)
Instead of just chunking text and throwing it into an embedding model, WFGY builds a persistent semantic resonance layer: it tracks context through formatting breaks, footnotes, diagram captions, even corrupted OCR sections.
The engine applies multiple self-correcting pathways (we call them BBMC and BBPF) so even when parsing is incomplete or wrong, reasoning still holds. That’s crucial if your source materials are academic papers, messy reports, or 1000+ page archives.
It’s open source. No tuning. Works with any LLM. No tricks.
Backed by the creator of tesseract.js (36k GitHub stars), who gets why document mess is the real challenge.
Check it out: https://github.com/onestardao/WFGY
At Retab.com, we focus on the “hard pre-RAG” layer: turning raw documents (scanned reports, OCR messes, financial statements, regulatory filings) into clean, structured, model-ready data.
Instead of relying on embeddings over noisy text chunks, we use schema-driven generation, multi-LLM consensus, and an evaluation UI to ensure output is accurate, complete, and explainable. No manual parsing, no hallucinations, just structured JSON (or any format you want), ready for retrieval, agents, or analytics.
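To make the multi-LLM consensus idea concrete, here's a minimal sketch of field-level majority voting over several extractor outputs. This is not Retab's actual implementation or API; the `consensus_extract` helper, the stub "models", and the invoice fields are all hypothetical, just illustrating how disagreeing extractions can be reconciled per schema field and ties routed to review.

```python
import json
from collections import Counter

def consensus_extract(doc_text, extractors, schema_keys):
    """Hypothetical consensus step: each extractor returns a dict for the
    same schema; the majority value per field wins, and fields with no
    majority are flagged for human review instead of guessed."""
    outputs = [extract(doc_text) for extract in extractors]
    result, needs_review = {}, []
    for key in schema_keys:
        # serialize values so dicts/lists are hashable for vote counting
        votes = Counter(json.dumps(o.get(key), sort_keys=True) for o in outputs)
        value, count = votes.most_common(1)[0]
        if count > len(outputs) // 2:
            result[key] = json.loads(value)
        else:
            needs_review.append(key)  # no majority: escalate to evaluation UI
    return result, needs_review

# Stub "models" standing in for real LLM calls; one misreads the OCR.
m1 = lambda text: {"total": 420, "currency": "USD"}
m2 = lambda text: {"total": 420, "currency": "USD"}
m3 = lambda text: {"total": 402, "currency": "USD"}

data, flagged = consensus_extract("…scanned invoice…", [m1, m2, m3],
                                  ["total", "currency"])
print(data, flagged)  # → {'total': 420, 'currency': 'USD'} []
```

The design point is that disagreement becomes a signal (a flagged field) rather than a silent hallucination, which is what lets the output stay explainable.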
We work with teams doing RAG on contracts, audits, earnings reports, etc., anywhere that “close enough” isn’t good enough. Happy to run your hardest docs through Retab if you want to benchmark against WFGY or LlamaParse.