The problem: C2PA/Content Credentials embed metadata in the file. Screenshot the image and the provenance is gone. AI detectors are probabilistic and unreliable.
OPP takes a different approach: an external fingerprint registry. When a generator creates an image, a 3-layer signature (SHA-256 + PDQ perceptual hash + CLIP ViT-L/14 embedding) is registered in a central index. Anyone can verify an image by querying the index. The signature can't be stripped because nothing is embedded in the file, and the perceptual layers are what let matching survive screenshots, crops, and re-encodes. Think Shazam for images, but for provenance.
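To make the signature concrete, here's a minimal sketch of how the three layers could be computed. Assumptions up front: this is not OPP's actual code; it uses the `pdqhash` package and OpenAI's `clip` package, and `compute_signature` plus the returned schema are hypothetical names for illustration.

```python
import hashlib

import numpy as np
import torch
import clip      # https://github.com/openai/CLIP
import pdqhash   # bindings for Meta's PDQ perceptual hash
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

def compute_signature(path: str) -> dict:
    raw = open(path, "rb").read()
    img = Image.open(path).convert("RGB")

    # Layer 1: SHA-256 of the exact bytes -- O(1) lookup, zero robustness
    sha256 = hashlib.sha256(raw).hexdigest()

    # Layer 2: 256-bit PDQ perceptual hash -- survives re-encodes/resizes
    bits, quality = pdqhash.compute(np.asarray(img))
    pdq_hex = np.packbits(bits).tobytes().hex()

    # Layer 3: CLIP ViT-L/14 embedding -- survives heavier edits and crops
    with torch.no_grad():
        emb = model.encode_image(preprocess(img).unsqueeze(0).to(device))
    emb = emb / emb.norm(dim=-1, keepdim=True)  # unit norm for cosine search

    return {"sha256": sha256, "pdq": pdq_hex, "clip": emb.squeeze(0).tolist()}
```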
Matching pipeline: exact hash lookup → CLIP cosine similarity via Qdrant HNSW (sub-10 ms at billion scale) → PDQ Hamming-distance enrichment. Only verified AI generators can mint signatures; verification is open to anyone.
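A rough sketch of that cascade using the Qdrant Python client, under stated assumptions: a collection named `signatures` holding the unit-normed CLIP vectors with each point's PDQ hex in its payload, and a plain dict standing in for whatever KV store backs the exact-hash layer. `verify` and `hamming` are my names, not the protocol's.

```python
from qdrant_client import QdrantClient

client = QdrantClient("localhost", port=6333)

def hamming(a: bytes, b: bytes) -> int:
    # PDQ distance = popcount of the XOR of two 256-bit hashes
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def verify(sig: dict, exact_index: dict):
    # Stage 1: O(1) exact lookup (sha256 -> registered signature id)
    if (hit := exact_index.get(sig["sha256"])) is not None:
        return {"match_id": hit, "method": "exact"}

    # Stage 2: ANN search over CLIP embeddings (cosine via Qdrant's HNSW)
    hits = client.search(collection_name="signatures",
                         query_vector=sig["clip"], limit=1)
    if not hits:
        return None

    # Stage 3: enrich the candidate with its PDQ Hamming distance so the
    # caller can apply confidence thresholds
    best = hits[0]
    dist = hamming(bytes.fromhex(sig["pdq"]),
                   bytes.fromhex(best.payload["pdq"]))
    return {"match_id": best.id, "method": "fuzzy",
            "clip_score": best.score, "pdq_distance": dist}
```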
The interesting part (new feature): I designed and implemented "Adaptive Variant Tracking." When a verification query finds a high-confidence fuzzy match (CLIP cosine similarity > 0.92, PDQ Hamming distance < 20), the system automatically mints a variant signature linked to the original. This means the registry "learns" the screenshot/crop/edit: the next verification of that same screenshot becomes an O(1) exact hash match instead of a costly vector search. The most viral/circulated images (highest misuse risk) become the fastest to verify.
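In code, the variant mint could be a thin wrapper on the read path (same assumptions as the sketches above; `verify_and_learn` and the payload fields are illustrative, and 0.92/20 are the thresholds quoted in the post):

```python
import uuid

from qdrant_client.models import PointStruct

def verify_and_learn(sig: dict, exact_index: dict):
    result = verify(sig, exact_index)
    if (result and result["method"] == "fuzzy"
            and result["clip_score"] > 0.92
            and result["pdq_distance"] < 20):
        # High-confidence fuzzy hit: mint a variant signature linked to the
        # original. The exact-hash entry makes the next check of these same
        # bytes O(1); the new vector point means edits-of-the-edit also match.
        variant_id = str(uuid.uuid4())
        exact_index[sig["sha256"]] = variant_id
        client.upsert(
            collection_name="signatures",
            points=[PointStruct(
                id=variant_id,
                vector=sig["clip"],
                payload={"pdq": sig["pdq"], "kind": "variant",
                         "original_id": result["match_id"]})])
    return result
```

The write happens on the read path, which is what makes heavily circulated variants converge toward cheap exact lookups.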
Looking for feedback on the protocol design; please poke holes in the approach.