It sounded simple: build a system to detect cheating and improve transparency. In reality, we uncovered an entire underground industry.
There were micro-earpieces hidden in collars, cameras inside keyboards, remote helpers watching live feeds, even analog VGA splitters used to bypass screen capture. The more we built defenses, the more creative the cheating networks became. And it wasn’t just local — we found similar patterns reported around the world in GMAT, GRE, IELTS, TOEFL, and professional certifications. Cheating has become industrialized.
We built an AI-based proctoring system that combines computer vision, behavioral analytics, and biometric verification linked to Kazakhstan's national ID database. We also developed a custom hardware scanner that detects micro-earpieces conventional detectors miss.
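For those curious how those signals fit together, here is a minimal, purely illustrative Python sketch of the kind of decision fusion such a system performs. Every name and threshold in it (ExamSession, assess_session, the 0.85 face-match cutoff) is a hypothetical simplification for this post, not our production code, and the national ID lookup and actual ML models are deliberately left out.

```python
from dataclasses import dataclass

# Illustrative fusion of three proctoring signals:
# (1) face match against the ID photo, (2) a behavioral anomaly score,
# (3) the hardware scanner result. Names and thresholds are hypothetical.

@dataclass
class ExamSession:
    face_match_score: float   # 0..1 similarity between live face and ID photo
    behavior_anomaly: float   # 0..1, higher = more suspicious gaze/typing patterns
    earpiece_detected: bool   # result of the hardware scanner sweep

def assess_session(session: ExamSession,
                   face_threshold: float = 0.85,
                   anomaly_threshold: float = 0.7) -> str:
    """Return a coarse verdict for a single exam session."""
    if session.earpiece_detected:
        return "blocked: hidden earpiece detected"
    if session.face_match_score < face_threshold:
        return "blocked: identity could not be verified against ID photo"
    if session.behavior_anomaly > anomaly_threshold:
        return "flagged: sent to a human proctor for review"
    return "clear"

if __name__ == "__main__":
    demo = ExamSession(face_match_score=0.92, behavior_anomaly=0.31, earpiece_detected=False)
    print(assess_session(demo))  # -> "clear"
```

The real value is less in any single check than in combining them, since each one alone is easy to game.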
Since the national rollout in late 2023:
• Over 1.2 million exams have been conducted.
• The pass rate fell to roughly a third of its previous level once fake attempts disappeared.
• The number of driver's licenses issued dropped from about 1.1 million to roughly 620,000, not because demand fell, but because the process finally became honest.
The biggest lesson: this isn’t a technical project, it’s an ecosystem problem. Every time we close one loophole, someone invests in finding another. It’s a constant cat-and-mouse dynamic that feels closer to cybersecurity than education.
I’d love to hear thoughts from people who’ve worked on:
– Scaling GovTech / AI systems internationally while keeping public trust.
– Adapting products for different data-sovereignty and privacy frameworks.
– Using “ethical hacking” or adversarial testing as a go-to-market model for regulated industries.
Not trying to sell anything — genuinely curious how others have navigated these problems at scale.
(If you’re interested in the full background and stats, we wrote a longer breakdown here: https://medium.com/@yyermanov/how-kazakhstan-reinvented-driver-testing-with-ai-and-cut-fraud-in-half-b10755ed32ac)