I’m Adithya, a 22-year-old researcher from India. I work with a lot of document processing models while building AI pipelines, and one pain point kept recurring: every model has its own inference code, preprocessing steps, and output format. Swapping models or testing new ones meant rewriting a lot of boilerplate each time.
So I built Omnidocs, an open source library for running document processing models through a simple, unified API, with a vision-first approach to understanding documents.
Key features:
> Pick a task and a model, run inference with one interface
> Supports common document tasks: text extraction, OCR, table extraction, layout analysis, structured extraction, and more
> 16+ models supported out of the box (many more integrations to come)
> Runs locally on Mac or GPUs (MLX and vLLM backends supported)
> Works with VLM APIs like GPT, Claude, Gemini, and many more that support the Open Responses API spec
> Designed to quickly build and test document processing pipelines
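To make the "one task, one model, one interface" idea concrete, here is a minimal sketch of the kind of unified call pattern described above. This is an illustrative toy, not Omnidocs' actual API: the class names, registry, and `extract` function are all invented for this example.

```python
from dataclasses import dataclass


@dataclass
class Result:
    """Uniform output shape, regardless of which model ran."""
    task: str
    model: str
    content: str


class ModelAdapter:
    """Each backend implements the same run() contract."""
    name = "base"

    def run(self, document: str) -> str:
        raise NotImplementedError


class DummyOCR(ModelAdapter):
    """Stand-in backend; a real one would wrap an OCR model."""
    name = "dummy-ocr"

    def run(self, document: str) -> str:
        return f"text extracted from {document}"


# (task, model name) -> adapter instance
REGISTRY: dict[tuple[str, str], ModelAdapter] = {}


def register(task: str, adapter: ModelAdapter) -> None:
    REGISTRY[(task, adapter.name)] = adapter


def extract(task: str, model: str, document: str) -> Result:
    # Same call pattern no matter which model backs the task,
    # so swapping models means changing one string, not the pipeline.
    adapter = REGISTRY[(task, model)]
    return Result(task=task, model=model, content=adapter.run(document))


register("ocr", DummyOCR())
result = extract("ocr", "dummy-ocr", "invoice.pdf")
print(result.content)  # -> text extracted from invoice.pdf
```

Swapping in a different model is then just registering another adapter under the same task, which is the property that makes side-by-side model comparison cheap.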
This has helped me prototype document workflows much faster and compare models easily.
Would love feedback on the API design, developer experience, and what integrations would make this more useful.