We’ve built a SOTA deepfake detection system (95%+ accuracy on in-the-wild content) using an adversarial architecture, deployed since before the 2024 elections.
We’re working to augment X’s media workflow with real-time classification of uploaded and loaded images/videos, especially given the massive uptick in dangerous AI-generated content about world conflicts and wars.
How it works:
• Media is captured from the page (DB/DOM) and sent via API call
• Returns: class (real/fake), confidence, C2PA metadata, similarity to known images, VLM reasoning
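As a rough sketch, a client handling the response above might look like this in Python (the endpoint, field names, and response shape here are hypothetical; see docs.bitmind.ai for the actual API):

```python
import json
from dataclasses import dataclass, field

@dataclass
class DetectionResult:
    # Fields mirror the response described above; exact key names are assumed.
    label: str                      # "real" or "fake"
    confidence: float               # model confidence, 0.0-1.0
    c2pa: dict = field(default_factory=dict)        # C2PA provenance metadata, if present
    known_similar: list = field(default_factory=list)  # matches against known images
    vlm_reasoning: str = ""         # VLM explanation of the verdict

def parse_response(raw: str) -> DetectionResult:
    """Parse a (hypothetical) JSON response from the detection API."""
    d = json.loads(raw)
    return DetectionResult(
        label=d["class"],
        confidence=float(d["confidence"]),
        c2pa=d.get("c2pa", {}),
        known_similar=d.get("known_similar", []),
        vlm_reasoning=d.get("vlm_reasoning", ""),
    )

# Example payload in the assumed shape:
sample = '{"class": "fake", "confidence": 0.97, "vlm_reasoning": "inconsistent lighting"}'
result = parse_response(sample)
```

This is only an illustration of consuming the classification result client-side, not the published SDK.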
Video demo (real-time classification overlay while scrolling X): https://x.com/kenjon/status/2029278211817742738?s=46
API docs / extension: docs.bitmind.ai