>I kept seeing candidates produce near-perfect solutions while struggling to explain even basic trade-offs.
Why do you need a tool to detect cheating? Your 'analogue' approach here is the correct one.
To answer your question: we've been finding it extremely difficult to detect by eye. We also lack any video evidence to support a claim of cheating, which can be problematic when we review candidates afterward.
As cheating tools get more advanced, I think it will become impossible to detect them without some sort of application running on the candidate's computer.
danlah•1h ago
After a while, something started to feel off.
I kept seeing candidates produce near-perfect solutions while struggling to explain even basic trade-offs. Some would go quiet for long stretches, then suddenly type out optimal code extremely quickly. When I probed deeper, it became obvious that what they were writing didn’t match their level of understanding.
At first I chalked it up to nerves or luck. But the patterns repeated too often to ignore. Eye movements toward a second screen. Answers that looked exactly like what tools like ChatGPT, Copilot, or interview-specific “AI copilots” would generate.
Then we started seeing candidates who had looked very strong in the interview struggle significantly once they were doing real work.
I started digging and realized there’s now an entire ecosystem of tools designed specifically to help people cheat in live technical interviews. There are desktop apps that watch your screen and feed you real-time solutions. There are “invisible” interview copilots marketed as undetectable. People openly discuss using second laptops, virtual machines, hidden browser windows, or even having someone else assist remotely during the interview.
I spoke to other interviewers and hiring managers. Almost everyone had the same experience. Some admitted they’d quietly stopped trusting remote interviews altogether, even though they didn’t want to go back to fully on-site hiring.
That didn’t sit right with me. Remote interviews aren’t the core problem. The problem is that interviewers have lost basic visibility into what’s actually happening during the session.
So I built Interview Watchdog.
The idea isn’t to automatically judge candidates or let AI decide who passes. The goal is simply to restore the kind of signal you’d have if you were sitting in the same room. Interview Watchdog is a desktop application that records screen and webcam activity during interviews and detects suspicious behavior patterns, including those used by tools that market themselves as "undetectable". When something looks off, it timestamps the moment and lets interviewers replay that part of the interview to make their own judgment, rather than relying on a black-box decision. It can also embed existing tools like CoderPad so teams don’t have to change how they already interview.
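To make the "flag it, don't auto-judge it" part concrete, here is a rough sketch in Python of how timestamped flags could drive a replay UI. To be clear, this is illustrative only and not Interview Watchdog's actual code; every name in it (SuspicionEvent, SessionLog, flag, replay_markers) is made up for the example.

    # Hypothetical sketch of the flag-and-replay pattern: detectors emit
    # timestamped events, and a human reviews the flagged moments. This is
    # NOT the real product's code; all names here are invented.
    import time
    from dataclasses import dataclass, field

    @dataclass
    class SuspicionEvent:
        timestamp: float   # seconds since session start
        kind: str          # e.g. "window_focus_lost", "paste_burst"
        detail: str        # human-readable context for the reviewer

    @dataclass
    class SessionLog:
        started_at: float = field(default_factory=time.time)
        events: list = field(default_factory=list)

        def flag(self, kind: str, detail: str) -> None:
            # Record the moment; make no pass/fail decision here.
            self.events.append(
                SuspicionEvent(time.time() - self.started_at, kind, detail)
            )

        def replay_markers(self, padding: float = 10.0) -> list:
            # Turn each flagged moment into a (start, end) window so the
            # interviewer can jump straight to that part of the recording.
            return [(max(0.0, e.timestamp - padding), e.timestamp + padding)
                    for e in self.events]

    # Usage: detectors call log.flag(...); the review UI reads the markers.
    log = SessionLog()
    log.flag("paste_burst", "42 lines inserted in a single keystroke event")
    for start, end in log.replay_markers():
        print(f"review recording from {start:.1f}s to {end:.1f}s")

The point of the design is that the detectors only record moments worth reviewing; the actual judgment stays with the human watching the replay.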
I’m posting here because I’m curious if others are seeing the same erosion of trust in technical interviews. I’d also love feedback from people who interview frequently, or who’ve been on the candidate side and feel the system is broken in other ways.
This project came directly out of the frustration of interviewing hundreds of candidates and realizing the system was being gamed, often at the expense of genuinely strong engineers.
Happy to answer questions or hear why this is a bad idea.
— Dan (interviewwatchdog.com)