I am sharing a project I built called PromptPrivacy.
With employees constantly adopting new AI chatbots, it is impossible to manually read and track the wildly different legal privacy policies of every tool. This unsanctioned usage, often called Shadow AI, creates a serious risk of corporate data leakage.
To solve this, I built PromptPrivacy as an automated AI transparency platform. It is a searchable directory that scrapes, analyses, and scores the privacy policies of over 70 AI models every week. The platform translates the complex legal jargon into a simple 0 to 100 privacy score so security teams and users know exactly what happens to their data.
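To make the 0 to 100 scale concrete, here is a minimal sketch of how a weighted rubric could turn extracted policy attributes into a single score. The criteria names, weights, and flags below are illustrative assumptions, not PromptPrivacy's actual scoring model.

```python
# Hypothetical scoring rubric: each criterion a policy satisfies
# contributes its weight to the final 0-100 score. The keys and
# weights are made up for illustration.
WEIGHTS = {
    "no_training_on_prompts": 40,   # prompts never used to train models
    "retention_under_30_days": 25,  # short data-retention window
    "no_third_party_sharing": 20,   # data not sold or shared
    "user_deletion_available": 15,  # users can delete their data
}

def privacy_score(policy_flags: dict[str, bool]) -> int:
    """Sum the weights of every rubric criterion the policy satisfies."""
    earned = sum(w for key, w in WEIGHTS.items() if policy_flags.get(key))
    return min(100, earned)

score = privacy_score({
    "no_training_on_prompts": True,
    "retention_under_30_days": True,
    "no_third_party_sharing": False,
    "user_deletion_available": True,
})
print(score)  # 80
```

A flat weighted sum like this keeps the score explainable: a reader can see exactly which policy clause cost a tool points.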
Under the hood, I engineered a hardened Python microservice for the backend and a React web portal for the frontend. The scraping and analysis are orchestrated weekly via GitHub Actions, and the data is stored securely in a Supabase PostgreSQL database.
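For the weekly orchestration, a GitHub Actions scheduled workflow along these lines would do the job. This is a hedged sketch only: the file path, script name, and secret names are assumptions, not the project's actual configuration.

```yaml
# .github/workflows/weekly-scan.yml (hypothetical)
name: weekly-policy-scan
on:
  schedule:
    - cron: "0 6 * * 1"   # every Monday at 06:00 UTC
  workflow_dispatch: {}    # allow manual runs for debugging
jobs:
  scrape-and-score:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: python scraper.py          # assumed entry point
        env:
          SUPABASE_URL: ${{ secrets.SUPABASE_URL }}
          SUPABASE_KEY: ${{ secrets.SUPABASE_SERVICE_KEY }}
```

Keeping the Supabase credentials in repository secrets rather than in code fits the hardened posture described below.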
With my background in security and certifications like Security+, Google Cybersecurity, and AWS AI Practitioner, I focused heavily on a Defence in Depth posture. The interesting part is that I do not actually know how to code. I built the entire platform using AI-powered IDEs, primarily AntiGravity. I let the AI handle the syntax while I focused on the security controls like Pydantic validation and strict Row Level Security (RLS) policies.
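As an example of the Pydantic validation layer, a scraped record could be forced through a schema like the one below before it ever reaches the database. The model and field names are illustrative assumptions, not the project's real schema.

```python
# Hypothetical ingestion schema: reject malformed scraper output
# before it is written to the database.
from pydantic import BaseModel, Field, HttpUrl, ValidationError

class PolicyRecord(BaseModel):
    model_name: str = Field(min_length=1, max_length=200)
    policy_url: HttpUrl                       # must parse as a URL
    privacy_score: int = Field(ge=0, le=100)  # stay inside the rubric range

try:
    PolicyRecord(
        model_name="ExampleGPT",              # made-up model name
        policy_url="https://example.com/privacy",
        privacy_score=250,                    # out of range -> rejected
    )
except ValidationError as e:
    print("rejected with", len(e.errors()), "error(s)")
```

Validating at the boundary like this means a compromised or buggy scraper cannot write garbage scores, which complements the RLS policies enforcing who may write at all.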
Since I am not an artistic person, the design is deliberately plain and focused strictly on the threat intelligence.
I would love to hear your thoughts on the automation pipeline, the scoring system, or the general approach.
Sonofg0tham•2h ago