The workflow is simple: you connect your Dropbox, Google Drive, and OneDrive accounts, then scan documents individually or in bulk. The AI analyzes each document and adds inline comments on lines that might contain sensitive or non-compliant data, along with suggested corrections. There’s also a reporting page that summarizes the types of issues found across all scanned documents. We’ve been testing entirely with synthetic/fake data.
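To make the annotation output concrete, here’s a rough sketch of the shape of what the scanner emits per document. The real analysis is LLM-based; the regex rules and type names below are simplified stand-ins for illustration, not our actual API:

    // Illustrative only: production uses an LLM, but the per-document
    // output looks roughly like this list of line-level annotations.
    interface Annotation {
      line: number;
      issue: string;      // e.g. "possible US SSN"
      suggestion: string; // proposed correction
    }

    // Toy regex rules standing in for the model's detections.
    const PATTERNS: Array<[RegExp, string, string]> = [
      [/\b\d{3}-\d{2}-\d{4}\b/, "possible US SSN", "mask all but the last four digits"],
      [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/, "email address", "replace with a role alias or redact"],
    ];

    function scanDocument(text: string): Annotation[] {
      const annotations: Annotation[] = [];
      text.split("\n").forEach((line, i) => {
        for (const [pattern, issue, suggestion] of PATTERNS) {
          if (pattern.test(line)) {
            annotations.push({ line: i + 1, issue, suggestion });
          }
        }
      });
      return annotations;
    }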
If you want to see it in action, here’s a short demo video showing the tool’s workflow (all fake data): https://www.safedocs-ai.com/video/demo.mp4
I’m mostly looking for feedback from this community:
- Would a tool like this actually help teams in their workflow?
- Any obvious privacy/security pitfalls I might be missing when scanning across multiple platforms?
- Ideas for making the AI’s annotations helpful without overwhelming users?
Any thoughts, feature ideas, or general feedback would be hugely appreciated. I’m trying to figure out whether this would be genuinely useful for compliance teams before building more.
For those curious to try it yourself: https://www.safedocs-ai.app/login
hobofan•2h ago
Everything I see reads like you have a strange understanding of "local" and shouldn't be trusted with building such software.
pavel_lishin•1h ago
That... doesn't sound local, dude. "Locally" would mean that the LLM is actively running in my browser, and in my browser only, which is not what you're describing.
I understand that you're claiming that the documents aren't being stored permanently, but they're still being transferred to your servers, and their full contents are being read there by something.