The problem: interviews, assessments, and even live video are becoming increasingly synthetic. Deepfakes, copilots, teleprompters — it’s getting hard to tell what’s real in real time.
We’re building a lightweight detection system focused on:
real-time video + audio analysis
behavioral signals (latency, eye patterns, response dynamics; one is sketched after this list)
practical deployment inside hiring workflows
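To give a flavor of the behavioral angle, here's a rough sketch of one possible signal: the gap between the end of a question and the start of the answer. Names, thresholds, and weights are illustrative, not what we've actually built:

    # Illustrative only: one candidate behavioral signal (response latency).
    # Near-constant gaps between question end and answer start can hint that
    # answers are being read off a copilot rather than composed on the spot.
    from dataclasses import dataclass, field
    from statistics import mean, pstdev

    @dataclass
    class LatencyTracker:
        gaps: list[float] = field(default_factory=list)  # seconds, one per Q/A turn

        def record_turn(self, question_end_ts: float, answer_start_ts: float) -> None:
            self.gaps.append(answer_start_ts - question_end_ts)

        def score(self) -> float:
            """Crude anomaly score in [0, 1]; higher means more suspicious."""
            if len(self.gaps) < 3:
                return 0.0                      # not enough turns to say anything
            spread = pstdev(self.gaps)
            avg = mean(self.gaps)
            uniformity = 1.0 / (1.0 + spread)   # ~1.0 when gaps barely vary
            slowness = min(avg / 5.0, 1.0)      # long average pauses (5 s cap is arbitrary)
            return round(0.6 * uniformity + 0.4 * slowness, 3)

Anything like this would need to be fused with the video-side signals (eye patterns etc.) and calibrated on real interviews, so treat it as a shape, not a detector.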
This is not a research paper project — we’re building a working MVP now.
Stack direction:
WebRTC ingestion
Python backend (FastAPI)
real-time inference + streaming signals (rough shape sketched after this list)
lightweight front-end surface
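To make the streaming part concrete, here's a minimal sketch of the shape, not our actual service: a plain WebSocket stands in for the WebRTC ingestion path, the client pushes timestamped turn events, and a running score streams back.

    # Illustration of the streaming loop only, not our actual backend.
    # A real deployment ingests WebRTC tracks; here the client just pushes
    # timestamped turn events over a WebSocket and gets a running score back.
    from fastapi import FastAPI, WebSocket, WebSocketDisconnect

    app = FastAPI()

    @app.websocket("/signals")
    async def signals(ws: WebSocket):
        await ws.accept()
        tracker = LatencyTracker()              # from the sketch above
        try:
            while True:
                # e.g. {"question_end": 12.4, "answer_start": 14.1} (seconds)
                event = await ws.receive_json()
                tracker.record_turn(event["question_end"], event["answer_start"])
                await ws.send_json({"latency_score": tracker.score()})
        except WebSocketDisconnect:
            pass

In the actual stack direction the timestamps would come off the WebRTC ingestion path rather than client-reported JSON; this just shows the streaming loop.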
We’re looking for a founding-level builder who:
likes hard real-time problems
is comfortable shipping rough MVPs
can work async with a small team
This is part-time to start (~40–60 hrs total), paid + equity.
If you’ve built real-time systems, WebRTC tooling, or weird low-latency AI stuff — would love to talk.
Happy to share more context with anyone curious.
turbiakjohn•1h ago
I like that you’re focused on real-time signals instead of just building another research-heavy detection model. The behavioral angle (latency, eye movement, response dynamics) sounds especially compelling if done right.
Would love to hear more about what you’ve built so far.
Happy to chat