I'm on Windows, if it matters. Is there anything like that out there, already (mostly) built?
Don't mean to be snarky, apologies if it comes across like that. I'm genuinely curious.
I'd basically want Everything (https://www.voidtools.com/support/everything/) but as an LLM, with file content indexing on top.
https://github.com/icereed/paperless-gpt
https://docs.paperless-ngx.com/#features
These options seem far from... user friendly. Another concern is resource usage; I wonder how low LLMs can go (especially as far as RAM and GPU requirements are concerned).
In this case LLMs, with their ability to find semantic equivalence, might be a great help. And given the current state of affairs, I even think an LLM with a sufficiently large context window could absorb some kind of file system dump, with directory paths and file names, and answer questions about some obscure file from the past.
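Something like this minimal sketch is what I have in mind; the OpenAI client and model name are just stand-ins for whatever large-context (ideally local) model you'd actually point it at:

    import os
    from openai import OpenAI  # stand-in for any chat-completion-style client

    def dump_tree(root, max_chars=200_000):
        """Flatten directory paths and file names into one text blob."""
        lines = []
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                lines.append(os.path.join(dirpath, name))
        return "\n".join(lines)[:max_chars]  # naive truncation to fit the context window

    client = OpenAI()  # assumes OPENAI_API_KEY is set; swap in a local server if you have one
    listing = dump_tree(r"D:\old-hdd")  # placeholder path
    answer = client.chat.completions.create(
        model="gpt-4o-mini",  # example choice; any large-context model would do
        messages=[
            {"role": "system", "content": "You answer questions about this file listing."},
            {"role": "user", "content": listing + "\n\nCan you list all the invoices from 2023 and their paths?"},
        ],
    )
    print(answer.choices[0].message.content)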
Let's say there's an HDD somewhere with thousands of files: text files, PDFs, XLS, PPT, DOC, etc. That doesn't sound like a huge amount of data to me.
However, there doesn't seem to be an out-of-the-box solution to ingest all of this into an LLM and ask it simple stuff like "can you list all the invoices from 2023 and their paths?" without requiring something like 16 GB of RAM and 8 GB of VRAM, which basically puts this "search" solution out of reach for the average laptop (especially the average Windows laptop) from the last 5 years, and probably for the next 5-10 years, too.
It's a shame, but, oh well...
For pure search you're almost certainly better off building an index of CLIP embeddings and then doing cosine similarity with a query embedding to find things. I have gigabytes of reaction images and memes I've been thinking about doing this with.
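A minimal sketch of that approach, using the CLIP checkpoint that sentence-transformers ships; the paths and the brute-force numpy search are just the simplest possible choices, not recommendations:

    import numpy as np
    from PIL import Image
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("clip-ViT-B-32")  # CLIP via sentence-transformers

    paths = ["memes/distracted.jpg", "memes/fine.png"]  # your image collection
    img_emb = model.encode([Image.open(p) for p in paths],
                           normalize_embeddings=True)  # unit vectors

    query_emb = model.encode(["dog sitting in a burning room"],
                             normalize_embeddings=True)

    # cosine similarity == dot product once everything is normalized
    scores = img_emb @ query_emb.T
    for p, s in sorted(zip(paths, scores.ravel()), key=lambda t: -t[1]):
        print(f"{s:.3f}  {p}")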
what helped me was
- ran ocr on images with tesseract (slow but it works)
- used unstructured and langchain to parse and chunk stuff, even spreadsheets and emails
- embedded chunks with sentence-transformers and indexed it with faiss
- then built a local llm agent (used a quantized mistral model) to rerank results smartly
it's rough but works like a semantic grep for your whole disk
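condensed sketch of the embed/index/search part (file names and model are just examples, real chunking via unstructured/langchain is smarter than this fixed split, and the mistral reranking step is left out):

    import faiss
    import numpy as np
    import pytesseract
    from PIL import Image
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    def ocr(path):
        # slow but it works
        return pytesseract.image_to_string(Image.open(path))

    def chunk(text, size=500):
        # naive fixed-size split, stand-in for proper chunking
        return [text[i:i + size] for i in range(0, len(text), size)]

    docs = []  # (chunk_text, source_path)
    for path in ["scan1.png", "notes.txt"]:  # example inputs
        text = ocr(path) if path.endswith(".png") else open(path, encoding="utf-8").read()
        docs += [(c, path) for c in chunk(text)]

    emb = model.encode([c for c, _ in docs], normalize_embeddings=True)
    index = faiss.IndexFlatIP(emb.shape[1])  # inner product == cosine on unit vectors
    index.add(np.asarray(emb, dtype="float32"))

    q = model.encode(["invoice from 2023"], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), 5)
    for i in ids[0]:
        print(docs[i][1], "->", docs[i][0][:80])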
if you want less diy, paperless-ng plus anythingllm plus a lightweight embed model could work, or wait a few months and someone will wrap it all in an electron app with stripe on the homepage lol
funny how much time we spend trying to find stuff we already wrote
We need better ways to properly classify all that data with accurate metadata and quick ways to pick out a small subset of data for A.I. to analyze.
Databases were designed to do this with relational tables (e.g. find all customers from New York who bought our product in 2024), but file systems were not designed to do this with files (e.g. find all pictures I took with my phone in 2020).
A.I. can be a great tool to find patterns in files to answer important questions, but it will be incredibly slow if it has to analyze too many files for every query.
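For the photo example, a lot of that metadata already sits in the files themselves; a quick sketch with Pillow (the EXIF tag numbers are the standard ones, the "Pixel" model string is just a placeholder for whatever phone you have):

    import os
    from PIL import Image

    # Standard EXIF tags: 0x0110 = camera Model, 0x0132 = DateTime
    for dirpath, _, filenames in os.walk("Pictures"):
        for name in filenames:
            if not name.lower().endswith((".jpg", ".jpeg")):
                continue
            path = os.path.join(dirpath, name)
            try:
                exif = Image.open(path).getexif()
            except OSError:
                continue  # not a readable image
            camera = str(exif.get(0x0110, ""))
            taken = str(exif.get(0x0132, ""))  # "YYYY:MM:DD HH:MM:SS"
            if "Pixel" in camera and taken.startswith("2020"):  # placeholder phone model
                print(path, taken)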
DHRicoF•7mo ago
Is your data organized, or is it just a dump of unrelated content?
- If you have a bag of files without any metadata, the best option is to build something like a RAG pipeline, with an OCR preprocessing step for image files (or even a multimodal model call).
- If the content is well organized with a logical structure, an agent could extract the information with a little looking around.
Is it static, or does it vary day by day?
- If it's static you could index everything at once; if not, an agent that picks what to reindex would be a better call.
I'm not aware of a solution like this, but it seems doable as an MCP server. The cost will scale quickly, though.
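A minimal sketch of what that could look like with the official MCP Python SDK; the tool body is a stub, and a real version would query the embedding index described in the other comments:

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("file-search")  # server name is arbitrary

    @mcp.tool()
    def search_files(query: str, top_k: int = 5) -> list[str]:
        """Return paths of indexed files most relevant to the query."""
        # stub: a real implementation would embed the query and hit the index
        return [f"placeholder result for {query!r} #{i}" for i in range(top_k)]

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default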
oblio•7mo ago
I have tons of old invoices, spreadsheets created to quickly figure something out, etc.
I'd also want the tool to run in the background to update the index.
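The background part looks doable with the watchdog package; a sketch, assuming some reindex() hook that re-embeds a single file (that part is a placeholder here):

    import time
    from watchdog.observers import Observer
    from watchdog.events import FileSystemEventHandler

    def reindex(path):
        print("would reindex:", path)  # stub: re-embed and update the index

    class Reindexer(FileSystemEventHandler):
        def on_modified(self, event):
            if not event.is_directory:
                reindex(event.src_path)

        on_created = on_modified  # treat new files the same way

    observer = Observer()
    observer.schedule(Reindexer(), r"C:\Users\me\Documents", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    finally:
        observer.stop()
        observer.join()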
I've found something potentially interesting:
https://anythingllm.com/