Hey HN, I'm Kim. Background: Israeli Intelligence, fraud detection at the Israeli SEC, Berkeley ML, and Harvard's TinyML program (ML on edge devices).
I built DeFake because I tested the major deepfake detection APIs and they were wrong more than 50% of the time. They return a single confidence score with zero explanation—which is useless for insurance claims or legal evidence.
What it is: Scroll through videos, vote real or AI-generated, and test your intuition against our engine.
How it works: Instead of one model giving a score, we check multiple independent signal categories—metadata and provenance, digital watermarks, visual forensics (physics violations, anatomy, temporal consistency), and blind human consensus. No single signal is reliable. But when metadata says "shot on iPhone" and the sensor noise pattern doesn't match that sensor, that divergence is meaningful. Real footage is boringly consistent across layers. Fakes have mismatches.
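The cross-layer consistency idea above can be sketched in a few lines. This is an illustrative toy, not DeFake's pipeline; the category names, scores, and the max-deviation divergence measure are all my assumptions:

```python
from dataclasses import dataclass

# Hypothetical sketch: combine independent signal categories and flag
# cross-layer divergence. Names and scores below are illustrative only.

@dataclass
class Signal:
    category: str           # e.g. "metadata", "forensics", "human"
    claim: str              # what this layer asserts about provenance
    authentic_score: float  # 0.0 = looks fake, 1.0 = looks authentic

def divergence(signals):
    """Max deviation of any layer from the mean. Real footage is
    boringly consistent, so high divergence is itself a red flag."""
    scores = [s.authentic_score for s in signals]
    mean = sum(scores) / len(scores)
    return max(abs(s - mean) for s in scores)

signals = [
    Signal("metadata",  "shot on iPhone",            0.9),
    Signal("forensics", "sensor noise != iPhone",    0.2),  # mismatch
    Signal("human",     "blind voters lean real",    0.7),
]

print(f"divergence = {divergence(signals):.2f}")  # divergence = 0.40
```

The point is that a single layer's score is weak evidence, but disagreement between layers that should corroborate each other is strong evidence.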
Hardware & Account Verification: Drawing on my TinyML background, we cross-reference C2PA metadata, hardware-backed TEE capture certificates, and account signals.
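To make the attestation idea concrete, here is a minimal sketch of a capture-attestation check. A real TEE certificate uses asymmetric signatures and a vendor certificate chain; the HMAC with a shared key below is a stand-in purely for illustration, and the key and device names are invented:

```python
import hashlib
import hmac

# Stand-in for a key provisioned inside the device's secure enclave.
DEVICE_KEY = b"hypothetical-device-provisioned-key"

def attest(video_bytes, claimed_device):
    """What the capture device would emit at record time: a tag binding
    the file hash to the device identity."""
    msg = hashlib.sha256(video_bytes).digest() + claimed_device.encode()
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).hexdigest()

def verify(video_bytes, claimed_device, tag):
    """Server-side check: does the attestation match the file we received?"""
    return hmac.compare_digest(attest(video_bytes, claimed_device), tag)

video = b"...raw mp4 bytes..."
tag = attest(video, "iPhone 15")
print(verify(video, "iPhone 15", tag))         # True: file and claim consistent
print(verify(video + b"x", "iPhone 15", tag))  # False: file was modified
```

The useful property is that any post-capture edit invalidates the tag, so a passing check pins the metadata claim to the actual bytes.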
Visual Forensics: Swarms of ML models catch physics violations (e.g., mattress grids shifting between frames) and temporal inconsistencies.
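As a toy example of what a temporal-consistency signal can look like (not the production models): track mean absolute frame-to-frame difference and look for spikes, since frame-level regeneration tends to produce abrupt jumps where real camera noise drifts smoothly. The synthetic frames below are invented for the demo:

```python
import numpy as np

def temporal_jitter(frames):
    """frames: list of 2-D grayscale arrays. Returns the mean absolute
    difference between each pair of consecutive frames; a spike
    suggests a temporal break."""
    return [np.abs(b - a).mean() for a, b in zip(frames, frames[1:])]

rng = np.random.default_rng(0)
# Smooth clip: low-amplitude sensor noise plus gentle brightness drift.
smooth = [rng.normal(size=(32, 32)) * 0.1 + i * 0.01 for i in range(5)]
# Splice in one high-variance frame to mimic a regenerated segment.
glitched = smooth[:3] + [rng.normal(size=(32, 32)) * 2.0] + smooth[3:]

print(max(temporal_jitter(glitched)) > 3 * max(temporal_jitter(smooth)))
```

Real detectors use far richer features (optical flow, noise-residual correlation), but the spike-vs-baseline structure of the check is the same.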
Descriptive AI: We translate raw signals into a 16-section Truth Bundle report that explains why a video is flagged, automating analysis that would otherwise cost $5,000 from a human expert.
The Vote Mechanic: Everyone votes blind before seeing the AI analysis or other votes. This gives us an independent human signal. When 80%+ of voters agree, they match ground truth ~90% of the time. When it's 50/50, those are genuinely hard cases.
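The vote aggregation can be sketched as below. The 80% threshold and the ~50/50 "hard case" band come from the post; the function, bucket names, and weak-consensus cutoff are illustrative assumptions:

```python
def consensus(votes):
    """votes: list of booleans, True = voter said 'AI-generated'.
    Returns (majority_label, agreement_fraction, bucket)."""
    if not votes:
        return None, 0.0, "no-data"
    frac_ai = sum(votes) / len(votes)
    agreement = max(frac_ai, 1 - frac_ai)
    label = "ai-generated" if frac_ai >= 0.5 else "real"
    if agreement >= 0.8:
        bucket = "strong-consensus"  # matches ground truth ~90% of the time
    elif agreement <= 0.6:
        bucket = "genuinely-hard"    # near 50/50: ambiguous case
    else:
        bucket = "weak-consensus"    # illustrative middle band
    return label, agreement, bucket

label, agreement, bucket = consensus([True] * 9 + [False])
print(label, agreement, bucket)  # ai-generated 0.9 strong-consensus
```

Because every vote is cast before the AI analysis is shown, the human signal stays statistically independent of the model and can be treated as one more layer in the divergence check.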
The Business: Free game (data flywheel). We offer $199 forensic reports vs. $5,000 human experts, reducing analysis from 15+ hours to seconds.
Market insight: Sensity AI (Amsterdam, profitable on $3.2M) sells technical reports needing expert interpreters. They're going government-only. We're building self-serve for US insurance/legal.
What breaks it: Heavy compression (WhatsApp/Telegram) strips metadata and creates AI-like artifacts. Working on it.
Try to break it. I’m here to answer technical questions about the forensic reporting, the waterfall architecture, or the audit trail.
Try it: https://game.defakes.com/feed?utm_source=hackernews