I built a browser extension called PromptShield to tackle this. It scans inputs in real time and blocks 150+ sensitive data types (e.g., credit card numbers, SSNs, credentials) before they reach AI platforms. It runs locally on Chrome/Edge with regex-based detection and minimal latency, and no data leaves the client, which addresses the privacy concerns. It competes with other DLP solutions but is lightweight, easy to install, and purpose-built for GenAI.
The backend is a Python Flask API that uses regex and DLP APIs for analysis. The browser extension hooks into the DOM, intercepts inputs, and queries the API. Depending on the sensitivity settings, it blocks the input, warns the user, or allows it to proceed.
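To make the flow concrete, here's a rough sketch of what a detection endpoint along those lines could look like. The route name, patterns, and threshold logic are illustrative assumptions on my part, not the actual PromptShield code, and the real product covers far more than the three data types shown.

    # Illustrative sketch only: route, patterns, and sensitivity handling are hypothetical.
    import re
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # A small sample of regex detectors; the real product covers 150+ data types.
    PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    }

    @app.route("/scan", methods=["POST"])
    def scan():
        payload = request.get_json(force=True)
        text = payload.get("text", "")
        sensitivity = payload.get("sensitivity", "warn")  # "block" | "warn" | "allow"

        # Collect which detectors matched the intercepted input.
        findings = [name for name, rx in PATTERNS.items() if rx.search(text)]

        if not findings:
            action = "allow"
        elif sensitivity == "block":
            action = "block"
        elif sensitivity == "warn":
            action = "warn"
        else:
            action = "allow"

        return jsonify({"action": action, "findings": findings})

    if __name__ == "__main__":
        app.run(port=5000)

In this sketch the extension would POST the intercepted input to /scan and then block, warn, or let the text through based on the returned action.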
After months of grinding, we just landed our first enterprise customer, which validated the problem for me. But I’m curious about the broader landscape:
Have others seen GenAI-related data leaks in their orgs? What’s the scale of this issue? How are you approaching shadow AI, at least at the browser level?
For those building security tools, how do you balance usability vs. strict enforcement in enterprise settings (prevent vs. detect)?
Any lessons on going from 1 to 10 customers? Our first took ages, but trials are picking up.