In March 2025, CNBC reported on how Google was responding to AI-assisted cheating in its coding interviews. Around the same time, a recurring thread on r/cscareerquestions estimated that roughly 80% of candidates use LLMs on top-of-funnel coding tests. A controlled experiment by interviewing.io confirmed what many hiring teams already suspected: candidates were using ChatGPT during LeetCode-style interviews and getting away with it.
This is not the cheating problem you dealt with in 2020. Back then, the biggest risk was a candidate switching tabs to Google an answer. Today, the threat is a candidate running ChatGPT in a second window, using a stealth browser extension that feeds AI-generated answers through an invisible overlay, or paying a proxy service to complete the entire assessment. Online proctoring software is no longer a nice-to-have feature for technical assessments. It is the difference between hiring a capable developer and hiring someone skilled at prompting an LLM.
The numbers confirm the urgency. HackerEarth's 2025 Technical Hiring Landscape Report found that proctoring usage across its platform jumped from 64% in January to 77% by July, with proctoring enabled on approximately 64.5% of all assessment events for the year as a whole. Employers are catching on, but the tactics keep evolving.
This article was originally published in 2020 covering six classic cheating methods. This 2026 update expands the list to ten tactics, adds four AI-era cheating methods that did not exist when this piece was first written, and maps each tactic to the specific proctoring feature that stops it. By the end, you will know how every modern cheating tactic works, which proctoring control catches it, and what HackerEarth's Smart Browser does that standard lockdown browsers cannot.
Why Online Assessment Cheating Got Harder to Detect in 2026
Cheating tools have professionalized. Products like Final Round AI, InterviewCoder, LockedIn AI, and Sensei AI are paid SaaS applications built specifically to help candidates beat technical interviews. They are not crude hacks. They are polished subscription services with onboarding flows, support documentation, and "undetectable" marketing claims.
The scale of the problem shows up across multiple data points. Recurring discussions on r/cscareerquestions and r/ExperiencedDevs suggest the majority of candidates now use LLMs during top-of-funnel code screens. A February 2026 Built In article asked "Is Using AI in a Job Interview Cheating?" and concluded the answer depends on context, reflecting how quickly candidate-side norms have shifted. Meanwhile, research from 2023 found that traditional interviews disproportionately reward impression management over actual competence, which gives AI-assisted candidates an even larger advantage.
The ten tactics below are ordered from the oldest and most basic to the newest and most sophisticated. The first six are updated versions of the classic methods from the original 2020 article. The last four are AI-era tactics that were not part of the conversation when this piece was first published.
10 Ways Candidates Cheat Online Assessments (and How to Stop Them)
1. Switching tabs to look up answers ("El switch-a-tab-aroo")
The classic. A candidate opens a new browser tab, searches for the answer, and switches back. It is the oldest trick in online assessments and, ironically, the easiest one to catch in 2026. Most candidates still try it because they assume the assessment platform only records their answers, not their browser behavior.
Why it still happens: Candidates underestimate how much metadata the platform captures. A quick tab switch feels invisible.
How to stop it: Full-screen mode enforcement prevents the candidate from navigating away without triggering an alert. Automatic tab-switch detection logs every instance and can trigger automatic logout after a set number of violations. Custom timers on MCQs add time pressure that makes switching impractical. HackerEarth's proctoring system flags every tab switch and surfaces it in the recruiter's review dashboard.
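At a technical level, browser-side tab-switch detection typically builds on the Page Visibility API. The sketch below illustrates the general approach only; it is not HackerEarth's implementation, and the three-strike threshold and `onForceLogout` callback are hypothetical names chosen for the example.

```javascript
// Minimal sketch of tab-switch detection with a violation threshold.
// The counter is a plain class so the logic also runs outside a browser.
class TabSwitchTracker {
  constructor(maxViolations, onForceLogout) {
    this.maxViolations = maxViolations; // e.g. 3 strikes before logout
    this.onForceLogout = onForceLogout; // hypothetical platform callback
    this.violations = [];
  }

  recordViolation(timestamp) {
    this.violations.push(timestamp);
    if (this.violations.length >= this.maxViolations) {
      this.onForceLogout(this.violations);
    }
    return this.violations.length;
  }
}

// Browser wiring: `visibilitychange` fires whenever the test tab is
// hidden (tab switch, window minimize, etc.).
if (typeof document !== "undefined") {
  const tracker = new TabSwitchTracker(3, (log) => {
    console.log(`Auto-logout after ${log.length} tab switches`);
  });
  document.addEventListener("visibilitychange", () => {
    if (document.visibilityState === "hidden") {
      tracker.recordViolation(Date.now());
    }
  });
}
```

Because every violation is timestamped, a reviewer can later distinguish a single accidental switch from a pattern of switching after each question.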
2. Copy-pasting code from another source ("El copy-paste-o")
Still the most common technical assessment cheating method. The 2026 variation: candidates copy from ChatGPT directly into the code editor rather than from Stack Overflow. The source has changed, but the mechanic is the same.
Why it is harder to catch than it looks: Modern clipboard managers on Windows and macOS let candidates store multiple copied snippets and insert them with a single keystroke. The paste itself takes less than a second.
How to stop it: Copy-paste lock in the code editor blocks the action entirely. A plagiarism checker compares every submission against the full corpus of candidate answers for the same test. Code playback records every keystroke as a video. A copy-paste event appears as a single large insertion rather than iterative typing, making it immediately visible during review.
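The "single large insertion" signal is easy to see in a keystroke log: human typing arrives one character at a time, while a paste lands as one big event. The toy detector below assumes a hypothetical log format where each event carries the inserted `text` and a millisecond timestamp `ts`; real playback data will differ.

```javascript
// Toy paste detector over a keystroke-replay log.
// Event shape { text, ts } is hypothetical, for illustration only.
function findPasteEvents(events, minPasteLength = 30) {
  // Any single insertion at or above the length threshold is suspect.
  return events.filter((e) => e.text.length >= minPasteLength);
}

const log = [
  { text: "f", ts: 0 },
  { text: "o", ts: 120 },
  { text: "r", ts: 250 },
  // One event inserting a full line at once: a classic paste signature.
  { text: "for (let i = 0; i < n; i++) { total += nums[i]; }", ts: 400 },
];

console.log(findPasteEvents(log).length); // 1 flagged insertion
```

A real system would combine this with timing analysis, but even this crude length check separates pasted blocks from typed code in most replays.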
3. Getting someone else to take the test ("El imperson-anaa-tor")
The friend-takes-the-test scenario has evolved into a cottage industry. Proxy test-taking services now advertise "managed assessment completion" as a paid offering, complete with professional developers who specialize in clearing top-of-funnel code screens.
Why it is harder to catch than it looks: Unlike tab-switching or copy-pasting, impersonation leaves no in-test behavioral fingerprint unless you verify identity throughout the session.
How to stop it: Randomized webcam snapshots (two per minute on HackerEarth by default) capture the candidate's face at unpredictable intervals. IP address lock restricts the session to a single network. Automatic impersonation detection flags mismatches between the registered candidate and the person on camera. For high-stakes assessments, government-ID verification at session start adds another layer. Behavioral biometrics that compare typing rhythm across sessions can also surface inconsistencies.
4. Looking at a second screen, phone, or notes ("El dos screen-o")
A secondary device hidden just out of the webcam's view. The 2026 angle: candidates now prop a tablet beneath the desk running ChatGPT, glancing down periodically to read AI-generated answers.
Why it is harder to catch than it looks: If the device is positioned below the webcam frame, it is physically invisible to a standard webcam capture.
How to stop it: AI-powered gaze tracking detects when a candidate's eyes consistently move off-screen. Automatic mobile phone detection flags devices visible in the webcam frame. Randomized webcam snapshots catch candidates mid-glance. For high-stakes assessments, dual-camera proctoring (laptop camera plus phone camera showing the workspace) eliminates blind spots. Audio proctoring catches candidates who read answers aloud from a second device.
5. Having someone in the room help ("El dos candidate-o")
A friend whispering answers, a partner reading prompts off-camera, or a study group collaborating in the same room. The helpful accomplice has always been a risk with remote proctored assessments, and it remains difficult to catch without audio monitoring.
Why it is harder to catch than it looks: If the second person stays out of the webcam frame and speaks quietly, visual proctoring alone will not catch them.
How to stop it: Audio monitoring detects background voices and conversation patterns. Randomized webcam snapshots occasionally capture a second person in the frame. AI background-voice detection flags audio anomalies for reviewer follow-up. A plagiarism checker catches identical or near-identical submissions from candidates who received the same whispered answers.
6. Restroom breaks and other unmonitored exits ("El missing suspect-o")
The classic disappearing act. The candidate leaves the frame, consults notes or a phone, and returns. Especially common on longer assessments where a mid-test break feels natural.
Why it is harder to catch than it looks: A candidate who pauses for two minutes looks identical to someone who genuinely needed a break.
How to stop it: Custom timers per question keep the clock running and make extended absences costly. Automatic logout triggers when the candidate leaves the webcam frame for longer than the configured threshold. Full-screen lockdown ensures that leaving the test screen flags the session.
7. Using ChatGPT, Claude, or other LLMs in a separate window (NEW for 2026)
This is the dominant cheating tactic in 2026. A candidate runs an LLM in a second browser window or a dedicated desktop app, types the question in, receives a solution, then retypes or paraphrases the AI output into the assessment editor. Tab-switch detection catches the obvious version. The harder version: the candidate uses an entirely separate device, making browser-level detection useless.
Why it is harder to catch than it looks: On r/jobs, threads ask "Is using ChatGPT during an online assessment cheating?" with many candidates arguing it is not. The normalization of AI tools means candidates are less likely to feel they are doing anything wrong. And second-device usage bypasses all browser-level monitoring entirely.
How to stop it: HackerEarth's Smart Browser is a desktop application that locks down the entire operating system for the duration of the assessment. It blocks all other applications, not just other browser tabs. A candidate cannot switch to a ChatGPT window, a desktop AI app, or any other program while the test is active. Code playback analysis detects AI-generated code patterns: long blocks inserted at once versus iterative, exploratory coding with corrections. The plagiarism detection engine is tuned for AI-generated code patterns. Behavioral analysis compares the current submission against the candidate's other work to flag inconsistencies.
See how Smart Browser locks down the entire desktop →
8. Real-time interview copilots: Final Round AI, LockedIn AI, Sensei AI (NEW for 2026)
Stealth browser extensions and overlay applications that listen to interview audio, transcribe questions in real time, send them to GPT-4, Claude, or Gemini, and feed answers back to the candidate through an on-screen overlay invisible on screen share. These tools are explicitly marketed as "undetectable."
Why they are dangerous: The candidate appears fully engaged. They look at the screen, type, and respond at a natural pace, but they are reading AI-generated answers from an overlay that standard screen-sharing software does not capture. On r/recruiting, employers share frustration about candidates who perform brilliantly in live interviews but struggle with basic tasks on day one. These copilot tools are one likely explanation.
How to stop them: Smart Browser blocks the installation and execution of overlay applications during the assessment session. Behavioral pattern analysis catches the telltale latency between a question being asked and an unusually polished answer appearing. Code playback exposes the giveaway: no exploration, no errors, no iteration, just clean code appearing in complete blocks. Live FaceCode interviews with system-design diagram questions force on-the-fly thinking that these copilot tools cannot replicate.
9. AI-based code generation tools embedded in IDEs (NEW for 2026)
GitHub Copilot, Cursor, and Codeium running locally on a candidate's machine. If the assessment allows candidates to use their own IDE (common with take-home tests), AI auto-completion operates invisibly. The candidate types a comment describing what they need, and the IDE generates the implementation.
Why it is harder to catch than it looks: The code appears to be typed normally. There is no copy-paste event, no tab switch, no external application. The AI is embedded inside the development tool itself.
How to stop it: HackerEarth Assessments uses its own browser-based IDE, which means there is no Copilot integration available. Smart Browser locks the entire desktop environment, preventing candidates from opening a local IDE alongside the test. Code playback analysis detects inhuman typing speed and patterns, such as perfectly structured code produced faster than normal human output.
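"Inhuman typing speed" can be reduced to a simple rate check: sustained character output faster than a person plausibly types. The sketch below uses the same hypothetical `{ text, ts }` event shape as a keystroke log; the 15-characters-per-second threshold is illustrative, not HackerEarth's actual value.

```javascript
// Toy typing-speed check over a keystroke log.
// Threshold and event shape are illustrative assumptions.
function isInhumanSpeed(events, maxCharsPerSecond = 15) {
  if (events.length < 2) return false;
  const chars = events.reduce((sum, e) => sum + e.text.length, 0);
  const seconds = (events[events.length - 1].ts - events[0].ts) / 1000;
  return seconds > 0 && chars / seconds > maxCharsPerSecond;
}

// Human-paced typing: a few characters per second.
const human = [
  { text: "d", ts: 0 }, { text: "e", ts: 300 }, { text: "f", ts: 650 },
];
// Machine-paced output: 400 characters appearing within two seconds.
const machine = [
  { text: "x".repeat(200), ts: 0 },
  { text: "y".repeat(200), ts: 2000 },
];

console.log(isInhumanSpeed(human));   // false
console.log(isInhumanSpeed(machine)); // true
```

Production systems would apply this over sliding windows rather than the whole session, so a single fast burst cannot hide inside hours of normal typing.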
10. Virtual machines and screen-sharing tools (NEW for 2026)
A candidate runs the assessment inside a virtual machine, then has an accomplice remotely access the host machine and complete the test. Alternatively, they use screen-sharing applications like TeamViewer or AnyDesk to let a friend control the keyboard in real time. Discussions on r/AskHR highlight this as a growing concern for remote take-home tests.
Why it is harder to catch than it looks: From the assessment platform's perspective, everything looks normal. The correct candidate appears to be taking the test in a standard browser. The remote access happens at the operating system level, below what a browser-based tool can see.
How to stop it: Smart Browser detects virtual machine environments and flags them before the assessment begins. Screen-sharing and screen recording application detection blocks programs like TeamViewer and AnyDesk from running during the session. IP-based session monitoring flags unexpected network changes. Behavioral typing biometrics compare keystroke patterns against the candidate's established baseline.
How HackerEarth's Online Proctoring Software Catches All 10
The tactics above range from basic (tab switching) to sophisticated (stealth interview copilots). No single proctoring feature catches all of them. That is why HackerEarth's proctoring stack works in four layers, each designed to address a different category of cheating.
Layer 1: Identity and environment verification
Before the assessment begins, this layer confirms who is taking the test and where they are taking it.
- Webcam-based candidate verification at session start
- Randomized webcam snapshots throughout the session (two per minute by default)
- IP address lock restricting the test to a single network
- Automatic impersonation detection flagging face mismatches
Layer 2: Browser and device lockdown (Smart Browser)
This is the layer that separates modern AI proctoring software from legacy tools. HackerEarth's Smart Browser is a desktop-level lockdown, not a browser-tab restriction.
- Blocks all other applications during the assessment
- Prevents virtual machine usage
- Detects and blocks screen-sharing tools
- Disables AI overlay extensions and copilot tools
- Locks copy-paste functionality
- Detects multiple monitors
A traditional lockdown browser only controls the browser tab, leaving the rest of the desktop unprotected. Smart Browser locks the entire computer. That distinction matters because the most dangerous cheating tools in 2026 operate at the application layer, not the browser layer.
Try HackerEarth Assessments free →
Layer 3: In-test behavior monitoring
During the assessment, continuous AI invigilation catches suspicious behavior in real time.
- Tab-switch alerts with configurable thresholds
- Full-screen mode enforcement
- Automatic mobile phone detection via AI gaze tracking
- Audio proctoring detecting background voices
- Custom MCQ timers adding time pressure
- Automatic logout when the candidate leaves the webcam frame
Layer 4: Post-submission analysis
After the test, automated analysis catches cheating that was not flagged during the live session.
- AI plagiarism checker comparing every submission against the full corpus
- Code playback (full keystroke replay showing exactly how the code was written)
- Behavioral pattern analysis comparing the submission against the candidate's other work
- Negative marking on MCQs to discourage random guessing
These four layers work together. A candidate who bypasses one layer (for example, by using a second device to dodge browser-level controls) gets caught by another (webcam snapshots detecting off-screen gaze, code playback revealing AI-generated patterns).
HackerEarth's assessment platform is trusted by over 4,000 companies, supports a community of 4.5 million developers, and has facilitated more than 150 million assessments. The question library contains over 23,000 questions across 41+ programming languages. The platform is ISO 27001 certified and GDPR compliant.
Online Proctoring Software: Tradeoffs and Best Practices
No remote proctoring software is 100% foolproof, and anyone who claims otherwise is selling something. The goal is not perfect prevention. It is raising the cost and difficulty of cheating high enough that the return does not justify the effort. Academic research on collusion prevention in assessments frames this as an optimization problem: each additional proctoring layer increases the resources a cheater must invest, and at some point the investment exceeds the payoff.
The most effective approach is tiered proctoring:
- Low-stakes screening assessments: Full-screen enforcement, plagiarism detection, and tab-switch monitoring provide adequate coverage without adding friction for honest candidates.
- High-stakes final-round assessments: Full Smart Browser lockdown combined with webcam snapshots, audio proctoring, and code playback.
- Live interviews: FaceCode pair-programming sessions where the candidate writes code, explains their reasoning, and responds to follow-up questions in real time. No async cheating tool works here.
A few best practices apply regardless of the tier:
- Communicate proctoring rules to candidates before the assessment begins. Transparency reduces the intent to cheat and improves the overall candidate experience.
- Use behavioral flags rather than punitive automated decisions. Flag suspicious activity for human review instead of auto-rejecting candidates.
- Layer multiple controls instead of relying on any single feature. The strongest proctoring is the combination, not any individual tool.
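The flag-then-verify principle in the list above can be sketched in a few lines: automated detectors only enqueue flags, and the worst outcome they can produce is "needs human review," never an automatic rejection. All names and thresholds here are illustrative.

```javascript
// Sketch of "flag, don't auto-reject": detectors enqueue flags;
// a human reviewer makes the final call. Names are illustrative.
function triage(session, detectors) {
  const flags = detectors.map((d) => d(session)).filter((f) => f !== null);
  return flags.length > 0
    ? { status: "needs_review", flags } // routed to a human, not rejected
    : { status: "clear", flags: [] };
}

const detectors = [
  (s) => (s.tabSwitches > 2 ? "excessive tab switching" : null),
  (s) => (s.largestInsertion > 50 ? "possible paste" : null),
];

const result = triage({ tabSwitches: 5, largestInsertion: 10 }, detectors);
console.log(result.status, result.flags); // needs_review with one flag
```

Keeping the automated layer advisory also makes it easy to layer in new detectors without changing what candidates experience.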
Even with comprehensive async proctoring, live interviews remain the strongest verification of real skill. The AI Interview Agent and live FaceCode coding interviews serve as the final-round ground truth that validates what async assessments surfaced. For deeper guidance on building a robust remote proctoring strategy for online assessments, HackerEarth's proctoring best-practices guide covers configuration recommendations in detail.
Conclusion
The cheating playbook has changed. Tab-switching and copy-pasting were the threats in 2020. In 2026, the threats are ChatGPT, stealth interview copilots, IDE-embedded AI, and professional proxy services. Your online proctoring software needs to account for all of them.
The operating principle is straightforward: assume every candidate has AI assistance available and design your proctoring controls around that assumption. Four layers of defense (identity verification, desktop lockdown, in-test monitoring, and post-submission analysis) create the coverage that no single feature can deliver alone.
Book a demo of HackerEarth's full proctoring stack →
If you are also exploring how AI interview assistants are reshaping the hiring process, the evolution in proctoring and the evolution in AI-assisted interviewing are two sides of the same challenge: ensuring that the person you hire is the person who demonstrated the skill.
Frequently Asked Questions
How do candidates cheat on online assessments?
Candidates cheat on online assessments by switching tabs to look up answers, copy-pasting code from external sources, asking another person to take the test for them, looking off-screen at notes or a second device, using AI tools like ChatGPT and stealth browser extensions, and exploiting unmonitored breaks. Modern online proctoring software stops these tactics with AI-powered webcam snapshots, secure browser lockdowns, plagiarism detection, code playback, and impersonation checks.
Can online proctoring software detect ChatGPT?
Yes, through multiple methods. ChatGPT used in another browser tab is detected via tab-switch monitoring. ChatGPT used on a second device is caught through webcam-based gaze tracking, randomized snapshots, and Smart Browser's desktop lockdown (which prevents local AI applications from running). AI-generated code is flagged through plagiarism checking and code playback pattern analysis, which reveals code that appears in large complete blocks rather than through iterative development.
How does AI proctoring work?
AI proctoring uses computer vision to analyze webcam feeds, behavioral biometrics to track typing rhythm and mouse movement, audio analysis to detect background voices, and plagiarism detection to compare submissions. It operates as a flag-then-verify model: AI flags suspicious behavior, and a human reviewer makes the final determination. It is not an autonomous decision system.
What is a Smart Browser, and how is it different from a lockdown browser?
A Smart Browser is HackerEarth's desktop application that locks down the entire operating system for the duration of the assessment. It blocks other applications, virtual machines, screen-sharing tools, AI overlay extensions, and copy-paste. A traditional lockdown browser only controls the browser tab, leaving the rest of the desktop unprotected. The difference is between a locked browser tab and a locked computer.
Can a remote assessment tool detect a second monitor or device?
Yes. Second monitors are detected by Smart Browser at the operating system level. Second devices (phones, tablets) are detected by AI webcam analysis that tracks gaze direction and by randomized snapshots that capture the candidate looking away from the primary screen.
Is online proctoring legal and GDPR compliant?
Yes, when implemented transparently. HackerEarth's proctoring is GDPR compliant and ISO 27001 certified. Best practice is to communicate proctoring rules to candidates before the assessment begins and provide an alternative process for candidates who decline, which is typically relevant in high-stakes regulatory contexts.