I had hours of B-roll and long-form content, but finding a specific 5-second clip (e.g., "a red car passing by" or "a specific quote about React state") meant manually watching the footage at 2x speed.
What I built: Matriq indexes video content visually and aurally. It uses multimodal embeddings to understand scene context, action, and dialogue.
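For the curious, the core retrieval step in systems like this usually boils down to a nearest-neighbor search over precomputed clip embeddings: embed every clip once at index time, embed the text query at search time, and rank by cosine similarity. A minimal sketch in Python (numpy only; the embeddings below are toy placeholders, not Matriq's actual model or API):

```python
import numpy as np

def cosine_top_k(query_emb, clip_embs, k=3):
    """Return indices of the k clips most similar to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    c = clip_embs / np.linalg.norm(clip_embs, axis=1, keepdims=True)
    scores = c @ q                      # cosine similarity per clip
    return np.argsort(scores)[::-1][:k]

# Toy example: 5 clips in a 4-dim space (real models use 512+ dims).
rng = np.random.default_rng(0)
clip_embs = rng.normal(size=(5, 4))
query = clip_embs[2] + 0.01 * rng.normal(size=4)  # query close to clip 2
print(cosine_top_k(query, clip_embs, k=1))        # clip 2 should rank first
```

In a real pipeline the query and clips are embedded by a shared multimodal model so that text like "a red car passing by" lands near the matching frames in the same vector space.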
The Use Case:
While it works for general editing, I’ve found the best use case is Content Repurposing. Creators with 100+ hours of archives can now instantly find "viral hooks" or specific topics to turn into Shorts/Reels without re-watching old footage.
It’s in beta. I’d love for you to try to break it, or to give feedback on the retrieval accuracy.
Link: https://matriq.video