I built Fair Screen, a lightweight tool that detects hidden AI-assisted cheating during remote interviews — without recording the user’s screen or invading privacy.
Over the last year, “undetectable” interview-assistant tools have exploded. They overlay real-time AI prompts, code, or answers in transparent or non-shareable windows, run on virtual desktops, or hide inside remote sessions. Zoom, Meet, Teams, and the rest capture only what the OS exposes to screen sharing, so these windows never appear in the shared feed and interviewers have no idea that answers are coming from an AI tool sitting just outside the captured screen.
Fair Screen takes a different approach:
Instead of scanning processes or capturing screen data, it watches the behavior of the window system itself: invisible overlays, transparent windows, remote-desktop footprints, crosshair-style cursor changes, VM artifacts, and other innocuous metadata signals these tools leave behind without meaning to.
These signals are surfaced in real time to the interviewer in a simple dashboard. No recording, no screenshots, no process killing, no monitoring software. Just “this looks like an invisible window is present” or “this looks like RDP/VM behavior.”
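To make “metadata only” concrete, here is a minimal sketch of the kind of check involved, assuming a Windows machine and standard Win32 calls through Python’s ctypes; the signal names are illustrative and this is not Fair Screen’s actual implementation:

    # Minimal sketch (Windows-only): enumerate top-level windows and flag
    # metadata that screen sharing can't show -- no pixels or titles are read.
    import ctypes
    from ctypes import wintypes

    user32 = ctypes.windll.user32

    GWL_EXSTYLE = -20
    WS_EX_TRANSPARENT = 0x00000020        # click-through window
    WS_EX_LAYERED = 0x00080000            # layered (often transparent) window
    WDA_EXCLUDEFROMCAPTURE = 0x00000011   # hidden from capture (Win10 2004+)

    def window_signals():
        hits = []

        @ctypes.WINFUNCTYPE(wintypes.BOOL, wintypes.HWND, wintypes.LPARAM)
        def on_window(hwnd, _lparam):
            if not user32.IsWindowVisible(hwnd):
                return True  # keep enumerating

            ex_style = user32.GetWindowLongW(hwnd, GWL_EXSTYLE)
            affinity = wintypes.DWORD(0)
            user32.GetWindowDisplayAffinity(hwnd, ctypes.byref(affinity))

            signals = []
            if affinity.value == WDA_EXCLUDEFROMCAPTURE:
                signals.append("excluded-from-capture")  # invisible to screen share
            if ex_style & WS_EX_TRANSPARENT:
                signals.append("click-through")
            if ex_style & WS_EX_LAYERED:
                signals.append("layered")  # weak alone; common for legitimate UI
            if signals:
                hits.append(signals)
            return True

        user32.EnumWindows(on_window, 0)
        return hits

A layered window on its own is weak evidence (plenty of legitimate UI uses it), which is why signals are combined into risk indicators rather than treated as verdicts.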
Why I built it:
I kept hearing the same story from interviewers:
Answers that were too perfect
Strange pauses
Eyes scanning an invisible script
Cursor turning into a crosshair
Candidates reading off screen in a way that video can’t show
There were no tools aimed at detecting this without spying or collecting user data; the only alternative was invasive proctoring, which nobody likes.
How it works (technical summary):
Uses OS-level window enumeration (non-invasive, metadata only)
Identifies windows that are non-shareable, click-through, or overlaying the main screen
Detects artifacts of remote sessions and VMs through display, compositor, and input characteristics
Streams only these signals (not content) to the interviewer dashboard
Interviewer sees a live feed of “risk indicators,” not the actual screen
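For the remote-session and VM side, a rough sketch of the same idea, again Windows-only, with a crude BIOS-string heuristic and an entirely hypothetical endpoint and payload schema:

    # Sketch of remote-session / VM checks (Windows-only). The heuristics are
    # illustrative; the endpoint and payload are hypothetical, not the real API.
    import ctypes
    import json
    import urllib.request
    import winreg

    SM_REMOTESESSION = 0x1000  # GetSystemMetrics index for RDP sessions

    def remote_session_signals():
        signals = []

        # 1. Are we running inside an RDP / remote desktop session?
        if ctypes.windll.user32.GetSystemMetrics(SM_REMOTESESSION):
            signals.append("remote-desktop-session")

        # 2. Crude VM heuristic: well-known hypervisor strings in the BIOS info.
        try:
            key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                                 r"HARDWARE\DESCRIPTION\System\BIOS")
            vendor, _ = winreg.QueryValueEx(key, "SystemManufacturer")
            product, _ = winreg.QueryValueEx(key, "SystemProductName")
            if any(s in f"{vendor} {product}".lower()
                   for s in ("vmware", "virtualbox", "qemu", "kvm",
                             "virtual machine")):
                signals.append("vm-artifacts")
        except OSError:
            pass  # key or value missing; simply no signal

        return signals

    def publish(signals, session_id):
        # Only signal names and a session id leave the machine --
        # no window titles, pixels, or keystrokes.
        payload = json.dumps({"session": session_id,
                              "signals": signals}).encode()
        req = urllib.request.Request(
            "https://example.invalid/api/signals",  # hypothetical endpoint
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

The publish step is the whole privacy story: the dashboard only ever receives signal names, never content.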
What it does NOT do:
No screen recording
No screenshots
No keylogging
No process scanning
No network monitoring
No content analysis
It is intentionally privacy-first.
Live demo: https://fairscreen.co
(You can generate a session and see how the dashboard reacts.)
I would really appreciate feedback from the HN community on:
The technical approach
Privacy tradeoffs
Edge cases I may have missed
Ideas for making this more transparent and trustworthy
Whether there’s a better way to handle false positives
This is currently free to use while I gather feedback and refine the detection heuristics.
Happy to answer any technical questions!