Insights & Stories by Keerthi Kumar

With years spent in HR trenches, Keerthi is passionate about what makes organizations tick—people. Their writing dives deep into behavioral interviews, talent strategy, and employee experience.

Whether you're building your first team or scaling culture across regions, Keerthi Kumar's articles offer human-first insights rooted in real practice.

How Candidates Cheat Online Assessments in 2026 | 10 Fixes

In March 2025, CNBC reported on how Google was responding to AI-assisted cheating in its coding interviews. Around the same time, a recurring thread on r/cscareerquestions estimated that roughly 80% of candidates use LLMs on top-of-funnel coding tests. A controlled experiment by interviewing.io confirmed what many hiring teams already suspected: candidates were using ChatGPT during LeetCode-style interviews and getting away with it.

This is not the cheating problem you dealt with in 2020. Back then, the biggest risk was a candidate switching tabs to Google an answer. Today, the threat is a candidate running ChatGPT in a second window, using a stealth browser extension that feeds AI-generated answers through an invisible overlay, or paying a proxy service to complete the entire assessment. Online proctoring software is no longer a nice-to-have feature for technical assessments. It is the difference between hiring a capable developer and hiring someone skilled at prompting an LLM.

The numbers confirm the urgency. HackerEarth's 2025 Technical Hiring Landscape Report found that proctoring usage across its platform jumped from 64% in January to 77% by July; across the full year, approximately 64.5% of all assessment events were proctored. Employers are catching on, but the tactics keep evolving.

This article was originally published in 2020 covering six classic cheating methods. This 2026 update expands the list to ten tactics, adds four AI-era cheating methods that did not exist when this piece was first written, and maps each tactic to the specific proctoring feature that stops it. By the end, you will know how every modern cheating tactic works, which proctoring control catches it, and what HackerEarth's Smart Browser does that standard lockdown browsers cannot.

Why Online Assessment Cheating Got Harder to Detect in 2026

Cheating tools have professionalized. Products like Final Round AI, InterviewCoder, LockedIn AI, and Sensei AI are paid SaaS applications built specifically to help candidates beat technical interviews. They are not crude hacks. They are polished subscription services with onboarding flows, support documentation, and "undetectable" marketing claims.

The scale of the problem shows up across multiple data points. Recurring discussions on r/cscareerquestions and r/ExperiencedDevs suggest the majority of candidates now use LLMs during top-of-funnel code screens. A February 2026 Built In article asked "Is Using AI in a Job Interview Cheating?" and concluded the answer depends on context, reflecting how quickly candidate-side norms have shifted. Meanwhile, research from 2023 found that traditional interviews disproportionately reward impression management over actual competence, which gives AI-assisted candidates an even larger advantage.

The ten tactics below are ordered from the oldest and most basic to the newest and most sophisticated. The first six are updated versions of the classic methods from the original 2020 article. The last four are AI-era tactics that were not part of the conversation when this piece was first published.

10 Ways Candidates Cheat Online Assessments (and How to Stop Them)

1. Switching tabs to look up answers ("El switch-a-tab-aroo")

The classic. A candidate opens a new browser tab, searches for the answer, and switches back. It is the oldest trick in online assessments and, ironically, the easiest one to catch in 2026. Most candidates still try it because they assume the assessment platform only records their answers, not their browser behavior.

Why it still happens: Candidates underestimate how much metadata the platform captures. A quick tab switch feels invisible.

How to stop it: Full-screen mode enforcement prevents the candidate from navigating away without triggering an alert. Automatic tab-switch detection logs every instance and can trigger automatic logout after a set number of violations. Custom timers on MCQs add time pressure that makes switching impractical. HackerEarth's proctoring system flags every tab switch and surfaces it in the recruiter's review dashboard.
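To make the mechanic concrete, here is a minimal browser-side sketch of tab-switch and full-screen detection built on the standard Page Visibility and Fullscreen APIs. The violation threshold and the reportViolation and endAssessment helpers are hypothetical stand-ins, not HackerEarth's actual API:

```typescript
// Minimal sketch of browser-level tab-switch detection.
// MAX_TAB_SWITCHES and both helpers are illustrative assumptions.
const MAX_TAB_SWITCHES = 3;

let tabSwitches = 0;

document.addEventListener("visibilitychange", () => {
  if (document.visibilityState === "hidden") {
    tabSwitches += 1;
    reportViolation("tab_switch", { count: tabSwitches, at: Date.now() });
    if (tabSwitches >= MAX_TAB_SWITCHES) {
      endAssessment("too_many_tab_switches"); // automatic logout
    }
  }
});

// Full-screen enforcement: flag the session if the candidate exits full screen.
document.addEventListener("fullscreenchange", () => {
  if (!document.fullscreenElement) {
    reportViolation("fullscreen_exit", { at: Date.now() });
  }
});

// Hypothetical platform hooks.
declare function reportViolation(kind: string, detail: object): void;
declare function endAssessment(reason: string): void;
```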

2. Copy-pasting code from another source ("El copy-paste-o")

Still the most common technical assessment cheating method. The 2026 variation: candidates copy from ChatGPT directly into the code editor rather than from Stack Overflow. The source has changed, but the mechanic is the same.

Why it is harder to catch than it looks: Modern clipboard managers on Windows and macOS let candidates store multiple copied snippets and insert them with a single keystroke. The paste itself takes less than a second.

How to stop it: Copy-paste lock in the code editor blocks the action entirely. A plagiarism checker compares every submission against the full corpus of candidate answers for the same test. Code playback records every keystroke as a video. A copy-paste event appears as a single large insertion rather than iterative typing, making it immediately visible during review.
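As an illustration of both controls (not HackerEarth's implementation), a paste lock plus bulk-insertion flagging could look like the sketch below; the editor element id, the 120-character threshold, and the reportViolation helper are assumptions:

```typescript
// Sketch: block paste in the code editor, and flag large single
// insertions when reviewing the keystroke log afterward.
const editor = document.getElementById("code-editor")!; // hypothetical id

editor.addEventListener("paste", (e) => {
  e.preventDefault(); // copy-paste lock: block the action entirely
  reportViolation("paste_blocked", { at: Date.now() });
});

// Playback review: humans type incrementally, so any single event that
// inserts a large chunk of text stands out.
interface EditEvent { insertedChars: number; timestamp: number }

function flagBulkInsertions(log: EditEvent[], threshold = 120): EditEvent[] {
  return log.filter((ev) => ev.insertedChars >= threshold);
}

declare function reportViolation(kind: string, detail: object): void;
```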

3. Getting someone else to take the test ("El imperson-anaa-tor")

The friend-takes-the-test scenario has evolved into a cottage industry. Proxy test-taking services now advertise "managed assessment completion" as a paid offering, complete with professional developers who specialize in clearing top-of-funnel code screens.

Why it is harder to catch than it looks: Unlike tab-switching or copy-pasting, impersonation leaves no in-test behavioral fingerprint unless you verify identity throughout the session.

How to stop it: Randomized webcam snapshots (two per minute on HackerEarth by default) capture the candidate's face at unpredictable intervals. IP address lock restricts the session to a single network. Automatic impersonation detection flags mismatches between the registered candidate and the person on camera. For high-stakes assessments, government-ID verification at session start adds another layer. Behavioral biometrics that compare typing rhythm across sessions can also surface inconsistencies.
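For a feel of how randomized snapshots stay unpredictable while still averaging two per minute, here is a small scheduling sketch; captureWebcamFrame is a hypothetical stand-in for the capture-and-face-match step:

```typescript
// Sketch: draw the next snapshot delay from an exponential distribution
// so individual capture times are unpredictable but the long-run rate
// stays at one snapshot per meanIntervalMs (30s => two per minute).
function scheduleSnapshots(meanIntervalMs = 30_000): void {
  const delay = -Math.log(1 - Math.random()) * meanIntervalMs;
  setTimeout(async () => {
    await captureWebcamFrame(); // face-match against the registered candidate
    scheduleSnapshots(meanIntervalMs);
  }, delay);
}

declare function captureWebcamFrame(): Promise<void>;
```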

4. Looking at a second screen, phone, or notes ("El dos screen-o")

A secondary device hidden just out of the webcam's view. The 2026 angle: candidates now prop a tablet beneath the desk running ChatGPT, glancing down periodically to read AI-generated answers.

Why it is harder to catch than it looks: If the device is positioned below the webcam frame, it is physically invisible to a standard webcam capture.

How to stop it: AI-powered gaze tracking detects when a candidate's eyes consistently move off-screen. Automatic mobile phone detection flags devices visible in the webcam frame. Randomized webcam snapshots catch candidates mid-glance. For high-stakes assessments, dual-camera proctoring (laptop camera plus phone camera showing the workspace) eliminates blind spots. Audio proctoring catches candidates who read answers aloud from a second device.

5. Having someone in the room help ("El dos candidate-o")

A friend whispering answers, a partner reading prompts off-camera, or a study group collaborating in the same room. The helpful accomplice has always been a risk with remote proctored assessments, and it remains difficult to catch without audio monitoring.

Why it is harder to catch than it looks: If the second person stays out of the webcam frame and speaks quietly, visual proctoring alone will not catch them.

How to stop it: Audio monitoring detects background voices and conversation patterns. Randomized webcam snapshots occasionally capture a second person in the frame. AI background-voice detection flags audio anomalies for reviewer follow-up. A plagiarism checker catches identical or near-identical submissions from candidates who received the same whispered answers.

6. Restroom breaks and other unmonitored exits ("El missing suspect-o")

The classic disappearing act. The candidate leaves the frame, consults notes or a phone, and returns. Especially common on longer assessments where a mid-test break feels natural.

Why it is harder to catch than it looks: A candidate who pauses for two minutes looks identical to someone who genuinely needed a break.

How to stop it: Custom timers per question keep the clock running and make extended absences costly. Automatic logout triggers when the candidate leaves the webcam frame for longer than the configured threshold. Full-screen lockdown ensures that leaving the test screen flags the session.
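A simplified version of the absence check might look like the following; the polling rate, the one-minute threshold, and the faceDetectedInFrame and endAssessment helpers are illustrative assumptions:

```typescript
// Sketch: automatic logout when the candidate stays out of the webcam
// frame longer than a configured threshold.
const ABSENCE_LIMIT_MS = 60_000; // assumed configurable threshold

let lastSeen = Date.now();

setInterval(async () => {
  if (await faceDetectedInFrame()) {
    lastSeen = Date.now();
  } else if (Date.now() - lastSeen > ABSENCE_LIMIT_MS) {
    endAssessment("candidate_absent");
  }
}, 5_000); // poll every 5 seconds

declare function faceDetectedInFrame(): Promise<boolean>;
declare function endAssessment(reason: string): void;
```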

7. Using ChatGPT, Claude, or other LLMs in a separate window (NEW for 2026)

This is the dominant cheating tactic in 2026. A candidate runs an LLM in a second browser window or a dedicated desktop app, types the question in, receives a solution, then retypes or paraphrases the AI output into the assessment editor. Tab-switch detection catches the obvious version. The harder version: the candidate uses an entirely separate device, making browser-level detection useless.

Why it is harder to catch than it looks: On r/jobs, threads ask "Is using ChatGPT during an online assessment cheating?" with many candidates arguing it is not. The normalization of AI tools means candidates are less likely to feel they are doing anything wrong. And second-device usage bypasses all browser-level monitoring entirely.

How to stop it: HackerEarth's Smart Browser is a desktop application that locks down the entire operating system for the duration of the assessment. It blocks all other applications, not just other browser tabs. A candidate cannot switch to a ChatGPT window, a desktop AI app, or any other program while the test is active. Code playback analysis detects AI-generated code patterns: long blocks inserted at once versus iterative, exploratory coding with corrections. The plagiarism detection engine is tuned for AI-generated code patterns. Behavioral analysis compares the current submission against the candidate's other work to flag inconsistencies.
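To show what playback-based detection keys on, here is a toy heuristic over a keystroke log. The thresholds are invented for illustration and are far cruder than a production classifier; a real system would feed many more signals into human review:

```typescript
// Sketch: human coding playback shows deletions and corrections;
// retyped LLM output tends to be linear with few corrections and
// large average insertions. All thresholds are assumptions.
interface Keystroke { kind: "insert" | "delete"; chars: number }

function aiLikenessScore(log: Keystroke[]): number {
  const inserts = log.filter((k) => k.kind === "insert");
  const insertedChars = inserts.reduce((n, k) => n + k.chars, 0);
  const deletedChars = log
    .filter((k) => k.kind === "delete")
    .reduce((n, k) => n + k.chars, 0);

  // Near-zero correction ratio is unusual for exploratory human coding.
  const correctionRatio = insertedChars ? deletedChars / insertedChars : 0;
  // Large average chunk per insert event suggests pasted or dictated code.
  const avgChunk = inserts.length ? insertedChars / inserts.length : 0;

  return (correctionRatio < 0.02 ? 0.5 : 0) + (avgChunk > 40 ? 0.5 : 0);
}
// A score near 1.0 routes the session to human review; it never auto-rejects.
```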

See how Smart Browser locks down the entire desktop →

8. Real-time interview copilots: Final Round AI, LockedIn AI, Sensei AI (NEW for 2026)

Stealth browser extensions and overlay applications that listen to interview audio, transcribe questions in real time, send them to GPT-4, Claude, or Gemini, and feed answers back to the candidate through an on-screen overlay that screen-sharing software does not capture. These tools are explicitly marketed as "undetectable."

Why they are dangerous: The candidate appears fully engaged. They look at the screen, type, and respond at a natural pace, but they are reading AI-generated answers from an overlay that standard screen-sharing software does not capture. On r/recruiting, employers share frustration about candidates who perform brilliantly in live interviews but struggle with basic tasks on day one. Copilot tools like these are a likely explanation.

How to stop them: Smart Browser blocks the installation and execution of overlay applications during the assessment session. Behavioral pattern analysis catches the telltale latency between a question being asked and an unusually polished answer appearing. Code playback exposes the giveaway: no exploration, no errors, no iteration, just clean code appearing in complete blocks. Live FaceCode interviews with system-design diagram questions force on-the-fly thinking that these copilot tools cannot replicate.

9. AI-based code generation tools embedded in IDEs (NEW for 2026)

GitHub Copilot, Cursor, and Codeium running locally on a candidate's machine. If the assessment allows candidates to use their own IDE (common with take-home tests), AI auto-completion operates invisibly. The candidate types a comment describing what they need, and the IDE generates the implementation.

Why it is harder to catch than it looks: The code appears to be typed normally. There is no copy-paste event, no tab switch, no external application. The AI is embedded inside the development tool itself.

How to stop it: HackerEarth Assessments uses its own browser-based IDE, which means there is no Copilot integration available. Smart Browser locks the entire desktop environment, preventing candidates from opening a local IDE alongside the test. Code playback analysis detects inhuman typing speed and patterns, such as perfectly structured code produced faster than normal human output.
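A minimal cadence check over keystroke timestamps might look like this sketch; the 40 ms and 0.15 cutoffs are illustrative guesses, not tuned production values:

```typescript
// Sketch: flag inhumanly fast or unnaturally uniform typing.
// ~40ms per key sustained is roughly 300 wpm, beyond human range;
// very low variance in inter-key gaps suggests machine-driven input.
function flagTypingAnomalies(timestamps: number[]): string[] {
  const flags: string[] = [];
  const gaps = timestamps.slice(1).map((t, i) => t - timestamps[i]);
  if (gaps.length === 0) return flags;

  const mean = gaps.reduce((a, b) => a + b, 0) / gaps.length;
  const variance = gaps.reduce((a, g) => a + (g - mean) ** 2, 0) / gaps.length;

  if (mean < 40) flags.push("sustained_speed_above_human_range");
  if (Math.sqrt(variance) / mean < 0.15) flags.push("unnaturally_uniform_cadence");
  return flags;
}
```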

10. Virtual machines and screen-sharing tools (NEW for 2026)

A candidate runs the assessment inside a virtual machine, then has an accomplice remotely access the host machine and complete the test. Alternatively, they use screen-sharing applications like TeamViewer or AnyDesk to let a friend control the keyboard in real time. Discussions on r/AskHR highlight this as a growing concern for remote take-home tests.

Why it is harder to catch than it looks: From the assessment platform's perspective, everything looks normal. The correct candidate appears to be taking the test in a standard browser. The remote access happens at the operating system level, below what a browser-based tool can see.

How to stop it: Smart Browser detects virtual machine environments and flags them before the assessment begins. Screen-sharing and screen recording application detection blocks programs like TeamViewer and AnyDesk from running during the session. IP-based session monitoring flags unexpected network changes. Behavioral typing biometrics compare keystroke patterns against the candidate's established baseline.
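Desktop agents have strong OS-level signals for this, but even in a plain browser one widely known heuristic is to inspect the WebGL renderer string for hypervisor signatures. Treat the sketch below as an illustration of the idea, not HackerEarth's detection logic:

```typescript
// Sketch: WebGL renderer strings inside VMs often name the hypervisor's
// virtual GPU or a software rasterizer.
function looksLikeVirtualMachine(): boolean {
  const gl = document.createElement("canvas").getContext("webgl");
  if (!gl) return false;
  const ext = gl.getExtension("WEBGL_debug_renderer_info");
  if (!ext) return false;
  const renderer = String(gl.getParameter(ext.UNMASKED_RENDERER_WEBGL)).toLowerCase();
  return ["vmware", "virtualbox", "llvmpipe", "swiftshader", "parallels"]
    .some((sig) => renderer.includes(sig));
}
```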

How HackerEarth's Online Proctoring Software Catches All 10

The tactics above range from basic (tab switching) to sophisticated (stealth interview copilots). No single proctoring feature catches all of them. That is why HackerEarth's proctoring stack works in four layers, each designed to address a different category of cheating.

Layer 1: Identity and environment verification

Before the assessment begins, this layer confirms who is taking the test and where they are taking it.

  • Webcam-based candidate verification at session start
  • Randomized webcam snapshots throughout the session (two per minute by default)
  • IP address lock restricting the test to a single network
  • Automatic impersonation detection flagging face mismatches

Layer 2: Browser and device lockdown (Smart Browser)

This is the layer that separates modern AI proctoring software from legacy tools. HackerEarth's Smart Browser is a desktop-level lockdown, not a browser-tab restriction.

  • Blocks all other applications during the assessment
  • Prevents virtual machine usage
  • Detects and blocks screen-sharing tools
  • Disables AI overlay extensions and copilot tools
  • Locks copy-paste functionality
  • Detects multiple monitors

A traditional lockdown browser only controls the browser tab, leaving the rest of the desktop unprotected. Smart Browser locks the entire computer. That distinction matters because the most dangerous cheating tools in 2026 operate at the application layer, not the browser layer.

Try HackerEarth Assessments free →

Layer 3: In-test behavior monitoring

During the assessment, continuous AI invigilation catches suspicious behavior in real time.

  • Tab-switch alerts with configurable thresholds
  • Full-screen mode enforcement
  • Automatic mobile phone detection via AI gaze tracking
  • Audio proctoring detecting background voices
  • Custom MCQ timers adding time pressure
  • Automatic logout when the candidate leaves the webcam frame

Layer 4: Post-submission analysis

After the test, automated analysis catches cheating that was not flagged during the live session.

  • AI plagiarism checker comparing every submission against the full corpus
  • Code playback (full keystroke replay showing exactly how the code was written)
  • Behavioral pattern analysis comparing the submission against the candidate's other work
  • Negative marking on MCQs to discourage random guessing

These four layers work together. A candidate who bypasses one layer (for example, by using a second device to dodge browser-level controls) gets caught by another (webcam snapshots detecting off-screen gaze, code playback revealing AI-generated patterns).
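As a sketch of how a flag-then-verify model might tie the layers together, flags could roll up into a single reviewer-facing report like the one below; every field name here is an assumption for illustration:

```typescript
// Sketch: aggregate flags from all four layers for human review.
interface ProctoringFlag {
  layer: "identity" | "lockdown" | "in_test" | "post_submission";
  kind: string;         // e.g. "tab_switch", "face_mismatch", "bulk_insert"
  timestamp: number;
  evidenceUrl?: string; // snapshot image or playback segment
}

function buildReviewReport(flags: ProctoringFlag[]) {
  const byLayer = new Map<string, ProctoringFlag[]>();
  for (const f of flags) {
    const list = byLayer.get(f.layer) ?? [];
    list.push(f);
    byLayer.set(f.layer, list);
  }
  // The report ranks sessions for a human reviewer; it never auto-rejects.
  return { totalFlags: flags.length, byLayer, needsHumanReview: flags.length > 0 };
}
```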

HackerEarth's assessment platform is trusted by over 4,000 companies, supports a community of 4.5 million developers, and has facilitated more than 150 million assessments. The question library contains over 23,000 questions across 41+ programming languages. The platform is ISO 27001 certified and GDPR compliant.

Online Proctoring Software: Tradeoffs and Best Practices

No remote proctoring software is 100% foolproof, and anyone who claims otherwise is selling something. The goal is not perfect prevention. It is raising the cost and difficulty of cheating high enough that the return does not justify the effort. Academic research on collusion prevention in assessments frames this as an optimization problem: each additional proctoring layer increases the resources a cheater must invest, and at some point the investment exceeds the payoff.

The most effective approach is tiered proctoring:

  • Low-stakes screening assessments: Full-screen enforcement, plagiarism detection, and tab-switch monitoring provide adequate coverage without adding friction for honest candidates.
  • High-stakes final-round assessments: Full Smart Browser lockdown combined with webcam snapshots, audio proctoring, and code playback.
  • Live interviews: FaceCode pair-programming sessions where the candidate writes code, explains their reasoning, and responds to follow-up questions in real time. No async cheating tool works here.

A few best practices apply regardless of the tier:

  • Communicate proctoring rules to candidates before the assessment begins. Transparency reduces the intent to cheat and improves the overall candidate experience.
  • Use behavioral flags rather than punitive automated decisions. Flag suspicious activity for human review instead of auto-rejecting candidates.
  • Layer multiple controls instead of relying on any single feature. The strongest proctoring is the combination, not any individual tool.

Even with comprehensive async proctoring, live interviews remain the strongest verification of real skill. The AI Interview Agent and live FaceCode coding interviews serve as the final-round ground truth that validates what async assessments surfaced. For deeper guidance on building a robust remote proctoring strategy for online assessments, HackerEarth's proctoring best-practices guide covers configuration recommendations in detail.

Conclusion

The cheating playbook has changed. Tab-switching and copy-pasting were the threats in 2020. In 2026, the threats are ChatGPT, stealth interview copilots, IDE-embedded AI, and professional proxy services. Your online proctoring software needs to account for all of them.

The operating principle is straightforward: assume every candidate has AI assistance available and design your proctoring controls around that assumption. Four layers of defense (identity verification, desktop lockdown, in-test monitoring, and post-submission analysis) create the coverage that no single feature can deliver alone.

Book a demo of HackerEarth's full proctoring stack →

If you are also exploring how AI interview assistants are reshaping the hiring process, the evolution in proctoring and the evolution in AI-assisted interviewing are two sides of the same challenge: ensuring that the person you hire is the person who demonstrated the skill.

Frequently Asked Questions

How do candidates cheat on online assessments?

Candidates cheat on online assessments by switching tabs to look up answers, copy-pasting code from external sources, asking another person to take the test for them, looking off-screen at notes or a second device, using AI tools like ChatGPT and stealth browser extensions, and exploiting unmonitored breaks. Modern online proctoring software stops these tactics with AI-powered webcam snapshots, secure browser lockdowns, plagiarism detection, code playback, and impersonation checks.

Can online proctoring software detect ChatGPT?

Yes, through multiple methods. ChatGPT used in another browser tab is detected via tab-switch monitoring. ChatGPT used on a second device is caught through webcam-based gaze tracking, randomized snapshots, and Smart Browser's desktop lockdown (which prevents local AI applications from running). AI-generated code is flagged through plagiarism checking and code playback pattern analysis, which reveals code that appears in large complete blocks rather than through iterative development.

How does AI proctoring work?

AI proctoring uses computer vision to analyze webcam feeds, behavioral biometrics to track typing rhythm and mouse movement, audio analysis to detect background voices, and plagiarism detection to compare submissions. It operates as a flag-then-verify model: AI flags suspicious behavior, and a human reviewer makes the final determination. It is not an autonomous decision system.

What is a Smart Browser, and how is it different from a lockdown browser?

A Smart Browser is HackerEarth's desktop application that locks down the entire operating system for the duration of the assessment. It blocks other applications, virtual machines, screen-sharing tools, AI overlay extensions, and copy-paste. A traditional lockdown browser only controls the browser tab, leaving the rest of the desktop unprotected. The difference is between a locked browser tab and a locked computer.

Can a remote assessment tool detect a second monitor or device?

Yes. Second monitors are detected by Smart Browser at the operating system level. Second devices (phones, tablets) are detected by AI webcam analysis that tracks gaze direction and by randomized snapshots that capture the candidate looking away from the primary screen.

Is online proctoring legal and GDPR compliant?

Yes, when implemented transparently. HackerEarth's proctoring is GDPR compliant and ISO 27001 certified. Best practice is to communicate proctoring rules to candidates before the assessment begins and provide an alternative process for candidates who decline, which is typically relevant in high-stakes regulatory contexts.

HackerEarth Introduces Full-Stack Assessments At Hire10(1) Conference

The new full-stack assessments in HackerEarth Assessments will let recruiters evaluate both backend and frontend developer skills.



HackerEarth has just announced the addition of full-stack assessments to help recruiters efficiently evaluate the coding skills of full-stack developers. HackerEarth’s CEO, Sachin Gupta, made the announcement today at Hire10(1), HackerEarth’s flagship virtual conference to help recruiters and engineering leaders hire top developers and build great tech teams.



According to the 2020 HackerEarth Developer Survey, more developers — over 35% — have expertise in full-stack development than in any other category. Yet evaluating these skills is notoriously difficult because full-stack development spans multiple skills and requires a high level of customization based on the specific technology stack an organization uses. HackerEarth's full-stack assessment solution is highly flexible and customizable: it supports a large number of out-of-the-box tech stacks while allowing any custom stack to be installed in the development environment. These assessments also include a powerful browser-based IDE built on top of the Theia editor, giving developers the same code-writing experience in the browser as they would get on their own systems.



“Organizations are increasingly looking to recruit full-stack developers. In fact, recent data from a survey done by Indeed shows that the demand for full-stack developers in the U.S. increased by 206% between 2015 and 2018. However, finding a good full-stack developer is hard since the assessment process is notoriously difficult and can take hours, days, and even weeks,” said HackerEarth CEO Sachin Gupta. “We added this feature to simplify the process and are working to make it easy for recruiters to proctor over longer time periods and assess using various programming languages. Our goal is to enable recruiters to evaluate developer skills from early candidate screening to more in-depth assignments in the later stages of recruitment, as well as to provide the best performance and experience possible for full-stack candidates.”

Key Benefits of the HackerEarth Full-stack Assessments Include:

  • Flexibility: Can be used for full-stack, or for assessment of either frontend or backend skills independently
  • Breadth of Languages Supported: Includes Python, Django, Java (Spring), and Node.js for backend, and React and AngularJS for frontend
  • Customizability: Recruiters can customize the development environment and the task based on their specific technology stack. Candidates can then build and run entire applications within the HackerEarth portal
  • Ease of Implementation: Gives full access to the HackerEarth library of 13,000+ questions which allows recruiters to quickly build out full-stack assessments or create custom questions as well
  • Automation: Fully automated backend assessment, with frontend coming soon

For a better review process, the product has a detailed report section with additional functions including:

  • Real-time recording of actions taken while building the application in the form of log files which recruiters can download and review
  • A preview function to help recruiters and candidates check the build easily
  • Automated screenshots of the application built by the candidate so recruiters can quickly evaluate their progress

Features Coming Soon:

  • Automated code quality score
  • Fool-proof proctoring for longer-term assignments
  • Support for more languages and frameworks, including PHP and Ruby on Rails
  • Auto-evaluation for frontend
  • The full-stack infrastructure will also be used to create specialized assessments for roles like Cybersecurity engineer, Game developer, etc.
Learn more about HackerEarth’s Full-stack Assessments.

This is Recruiting - Demystifying bias in recruiting and how to tackle it.

Welcome to another interesting episode of "This is Recruiting", a series that equips HR professionals and tech recruiters across the globe to gain actionable insights from fellow recruiters to take their hiring to the next level.

In this episode, we caught up with somebody special: David Windley, CEO of IQTalent Partners and Board Chair of the Society for Human Resource Management (SHRM), who shares a generation's worth of recruiting wisdom and valuable insights picked up over the decades. Having spent around 30 years in corporate HR, David is one of the leading industry experts in the world of recruitment. From all his years of observing, dealing with, and building processes around bias in hiring, he has much to say and offers timeless advice on some of the best ways to tackle it.

The first step is always to call it out, he says. It begins with acknowledging that bias exists, then rooting the biases that aren't performance-driven out of the process, and lastly building workable systems around that.

He maintains that the only way to overcome bias is for recruiters to return to the original principle of assessing the individual on merit alone, keeping the best interests of the broader organization in mind rather than giving in to their personal inclinations and prejudices.

This is Recruiting - Reducing bias in the hiring cycle.

Sachin:

In your opinion, how important is it for an organization to focus on reducing bias while hiring?

David:

So, let's set aside the social issues. There are reasons to do it because of the broader social good. But let's just talk as a business.

Our goal when we're trying to hire people is to find the right people who will be the best performers in our organization, as individuals and collectively within our culture and company. So when we're trying to identify the characteristics that lead to good performance and bias creeps in, it's only going to hinder our process of finding the ideal candidate for the position.

Bias that's unrelated directly to performance will only cause you to sub-optimize in your decisions. From a pure business perspective - all of us should want to address this issue.

Sachin:

I'm sure you've seen the length and breadth of different organizations and the functions within them. In your experience, do you see certain functions that tend to be more diverse? Or the converse of that?

David:

Yeah, it depends on how you define diversity. There is ethnic diversity, there is gender diversity, and then the broadest of all - diversity in thought and perspectives. But yes, if you just look at demographics and statistics, there are certain functions that lean more toward certain gender demographics and ethnic demographics. So that's true.

Again, that doesn't necessarily mean we should assume that, just because those statistics are what they are at a macro level, they mean anything for any individual.

So going back to the first question: a very good example of how bias creeps in is when someone looks at macro statistics and makes an assumption based on ‘association by group’. And some of those macro statistics may have bias built into them for reasons like bias in society itself. So yes, at a macro level there are historical differences in certain functions. The point is that for any individual you are assessing, you are trying to discern that person's capabilities, skill sets, and competencies: whether they're going to be a good performer and a fit for your organization.
Sachin:

Humans are hardwired to align with people similar to themselves, and affinity bias is so deeply wired into us that it isn't easy to overcome. In such a situation, what are the guiding principles that help you make the right decisions in the recruiting process? And what have you done with your team over the years?

David:

Yes, I think you make a really good point. That's where we start with this issue of bias - understanding that it is natural for humans to categorize. That's just how our brains work.

There is just so much information out there that we have to categorize things and it's how we work. We need to just realize that bias is a natural thing and that we all have biases.

We all hear messages, we grow up in our societies, and whatever messages or things we learn or observe in those societies, they enter our unconscious and conscious mind.

So, let's first just demystify it. Bias exists. And the first thing to do is admit that this is the case. The issue now is to deal with the unrelated biases and get them out of the process, so they don't get in the way.

Why do I say that? Because there are obviously some good biases too. For example, I have a bias for people who are self-starters. I think that's an okay bias because it's performance-related. But a bias about someone's gender, ethnicity, or race is not directly related to those sorts of performance behaviors. So, from a process point of view,
  • It's good to have a structured interview assessment process that identifies the characteristics and competencies that you're looking for.
  • Having structured questions around that and having a nice feedback loop as a team to make sure that when you're assessing, you are, in fact, talking about those characteristics.
  • Not relying on the shorthand - "Joe is a good guy. I like Joe." That is not a good assessment. That doesn't work.
Want to keep going? Sachin and David go on to talk about centralized recruiting teams, the role of AI in reducing bias, hiring patterns and outlier statistics, diversity training, and more.

Listen to our entire conversation with David here.