Author: Shivam Gupta

From dorm rooms to boardrooms, Shivam has built a career connecting young talent to opportunity. Their writing brings fresh, student-centric views on tech hiring and early careers.

Insights & Stories by Shivam Gupta

Shivam Gupta explores what today’s grads want from work—and how recruiters can meet them halfway. Expect a mix of optimism, strategy, and sharp tips.

Guide to Conducting Successful System Design Interviews in 2025

What is Systems Design?

Systems Design is an all-encompassing term covering both frontend and backend components, harmonized to define the overall architecture of a product.

Designing robust and scalable systems requires a deep understanding of the application, its architecture, and underlying components such as networks, data, interfaces, and modules.

Systems Design, in its essence, is a blueprint of how software and applications should work to meet specific goals. The multi-dimensional nature of this discipline makes it open-ended – as there is no single one-size-fits-all solution to a system design problem.

What is a System Design Interview?

Conducting a System Design interview requires recruiters to take an unconventional approach and look beyond right or wrong answers. Recruiters should aim to evaluate a candidate’s ‘systemic thinking’ skills across three key aspects:

How they navigate technical complexity and uncertainty
How they meet expectations of scale, security and speed
How they focus on the bigger picture without losing sight of details

This assessment of the end-to-end thought process and a holistic approach to problem-solving is what the interview should focus on.

What are some common topics for a System Design Interview?

System design interview questions are free-form and exploratory in nature, with no single right or best answer to a given problem statement. Here are some common questions:

How would you approach the design of a social media app or video app?

What are some ways to design a search engine or a ticketing system?

How would you design an API for a payment gateway?

What are some trade-offs and constraints you will consider while designing systems?

What is your rationale for taking a particular approach to problem solving?

Usually, interviewers base the questions on the organization, its goals, key competitors, and the candidate’s experience level.

For senior roles, the questions tend to focus on assessing the computational thinking, decision-making, and reasoning ability of a candidate. For entry-level job interviews, the questions are designed to test the hard skills required for building a system architecture.

The Difference between a System Design Interview and a Coding Interview

If a coding interview is like a map that takes you from point A to Z, a systems design interview is like a compass that gives you a sense of the right direction.

Here are three key differences between the two:

Coding challenges follow a linear interviewing experience, i.e., candidates are given a problem and interaction with recruiters is limited. System design interviews are more lateral and conversational, requiring active participation from interviewers.

Coding interviews or challenges focus on evaluating the technical acumen of a candidate, whereas systems design interviews assess problem-solving and interpersonal skills.

Coding interviews are based on a right/wrong approach with ideal answers to problem statements while a systems design interview focuses on assessing the thought process and the ability to reason from first principles.

How to Conduct an Effective System Design Interview

One common mistake recruiters make is that they approach a system design interview with the expectations and preparation of a typical coding interview.
Here is a four step framework technical recruiters can follow to ensure a seamless and productive interview experience:

Step 1: Understand the subject at hand

  • Develop an understanding of basics of system design and architecture
  • Familiarize yourself with commonly asked systems design interview questions
  • Read about system design case studies for popular applications
  • Structure the questions and problems by increasing magnitude of difficulty

Step 2: Prepare for the interview

  • Plan the extent of the topics and scope of discussion in advance
  • Clearly define the evaluation criteria and communicate expectations
  • Quantify constraints, inputs, boundaries and assumptions
  • Establish the broader context and a detailed scope of the exercise

Step 3: Stay actively involved

  • Ask follow-up questions to challenge a solution
  • Probe candidates to gauge real-time logical reasoning skills
  • Make it a conversation and take notes of important pointers and outcomes
  • Guide candidates with hints and suggestions to steer them in the right direction

Step 4: Be a collaborator

  • Encourage candidates to explore and consider alternative solutions
  • Work with the candidate to break the problem down into smaller tasks
  • Provide context and supporting details to help candidates stay on track
  • Ask follow-up questions to learn about the candidate’s experience

Technical recruiters and hiring managers should aim to provide an environment of positive reinforcement, actionable feedback, and encouragement for candidates.

Evaluation Rubric for Candidates

Facilitate Successful System Design Interview Experiences with FaceCode

FaceCode, HackerEarth’s intuitive and secure platform, empowers recruiters to conduct system design interviews in a live coding environment with HD video chat.

FaceCode comes with an interactive diagram board that makes it easier for interviewers to assess design thinking skills and conduct communication assessments using a built-in library of diagram-based questions.

With FaceCode, you can combine your feedback points with AI-powered insights to generate accurate, data-driven assessment reports with ease. Plus, you can access interview recordings and transcripts anytime to recall and trace back the interview experience.

Learn how FaceCode can help you conduct system design interviews and boost your hiring efficiency.

How HackerEarth's Smart Browser Protects Assessment Integrity in the Age of AI

AI-assisted cheating is the single biggest threat to technical hiring assessments right now. With tools like ChatGPT capable of solving basic to intermediate coding problems in seconds, recruiters face a difficult question: how do you know a candidate actually solved the test themselves?

HackerEarth's Smart Browser addresses this directly. It is a purpose-built desktop application that locks down the testing environment, preventing candidates from accessing AI tools, external resources, or any form of assistance during an assessment.

The results are measurable. Internal data shows that assessments conducted through the Smart Browser see significantly lower solvability rates, meaning the candidates who pass are genuinely skilled rather than AI-assisted.

This article breaks down exactly what the Smart Browser does, the data behind its impact on assessment integrity, how it compares to standard browser-based proctoring, when to use it versus allowing AI, and the technical requirements for getting started. Whether you are running high-volume campus hiring or screening senior developers, this guide will help you decide if the Smart Browser fits your assessment strategy.

What Is the Smart Browser and Why Does It Matter?

The Smart Browser is a dedicated desktop application that candidates download and install before taking a HackerEarth assessment. Unlike standard browser-based tests (where candidates take assessments in Chrome, Firefox, or Safari), the Smart Browser creates a controlled environment that restricts access to everything outside the test window.

Think of it as the difference between an open-book exam and a supervised, closed-room test. Browser-based proctoring can detect tab switches and flag suspicious behaviour, but determined candidates can still work around it. The Smart Browser removes those workarounds entirely by operating as a standalone application with system-level restrictions.

This distinction matters because the rise of large language models has fundamentally changed the cheating landscape. A 2024 study published in the British Journal of Educational Technology found that AI-assisted cheating in online assessments increased by over 60% between 2022 and 2024. Standard browser-based proctoring was not designed to counter this level of sophistication.

For recruiters and hiring managers evaluating remote proctoring for online assessments, the Smart Browser represents the most rigorous option available within the HackerEarth platform.

Core Features and Restrictions

The Smart Browser prevents the following candidate actions during an assessment:

  • Screen sharing the test window with any application or service
  • Keeping other applications open during the test (all non-essential apps are blocked)
  • Resizing the test window to view content behind it
  • Using multiple monitors (only the primary display is active)
  • Taking screenshots or recording the test window
  • Running the test inside a virtual machine (VM detection is built in)
  • Accessing browser developer tools
  • Viewing OS notifications that might contain copied content

The application also restricts specific keystrokes and key combinations:

  • All function keys and combos (F1, F5 + Alt, etc.)
  • Alt + Tab (application switching)
  • Ctrl + Alt + Delete (task manager access)
  • Ctrl + C and Ctrl + V (copy-paste)
  • OS superkeys (Windows Key, Mac Command Key) and their combinations

These restrictions collectively ensure that candidates cannot access ChatGPT, code repositories, documentation, or any external resource during the test.
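The keystroke restrictions above amount to a blocklist of key combinations checked against whatever is currently pressed. The sketch below is purely illustrative: the key names and policy structure are assumptions, not HackerEarth's implementation, which operates as a desktop application hooking key events at the OS level.

```python
# Illustrative model of a keystroke-restriction policy like the one
# described above. Not HackerEarth's actual code: a real Smart Browser
# intercepts key events at the operating-system level.
BLOCKED_COMBOS = {
    frozenset({"Alt", "Tab"}),             # application switching
    frozenset({"Ctrl", "Alt", "Delete"}),  # task manager access
    frozenset({"Ctrl", "C"}),              # copy
    frozenset({"Ctrl", "V"}),              # paste
    frozenset({"Meta"}),                   # OS superkey (Windows / Cmd)
}

def is_blocked(keys):
    """Return True if the pressed key set contains any blocked combo."""
    pressed = frozenset(keys)
    return any(combo <= pressed for combo in BLOCKED_COMBOS)

print(is_blocked({"Ctrl", "V"}))   # True: paste attempt is suppressed
print(is_blocked({"Shift", "A"}))  # False: normal typing passes through
```

Because the check uses subset containment, a blocked combo is caught even when extra modifier keys are held down alongside it.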

What the Smart Browser Data Reveals About Assessment Integrity

One year after launching the Smart Browser, HackerEarth analysed the impact of this feature on assessment outcomes. The central metric was solvability, which measures how many candidates successfully solve each question type.

A well-designed assessment should have a solvability rate between 10% and 20%, depending on difficulty level and candidate pool size. Too high, and the assessment is not differentiating effectively. Too low, and the test may be unreasonably difficult.
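As a rough illustration, solvability can be computed per question from attempt records and checked against the 10–20% band described above. The data shape and function names below are assumptions made for the sketch, not HackerEarth's actual metrics pipeline.

```python
# Illustrative only: compute a solvability rate per question from
# hypothetical attempt records, then flag questions outside the
# 10-20% target band. The data shape is an assumption.
from collections import defaultdict

def solvability(attempts):
    """attempts: list of (question_id, solved: bool) tuples."""
    totals, solved = defaultdict(int), defaultdict(int)
    for qid, ok in attempts:
        totals[qid] += 1
        if ok:
            solved[qid] += 1
    return {qid: solved[qid] / totals[qid] for qid in totals}

def flag_outliers(rates, low=0.10, high=0.20):
    """Return questions whose solvability falls outside the band."""
    return {qid: r for qid, r in rates.items() if not (low <= r <= high)}

attempts = ([("q1", True)] * 3 + [("q1", False)] * 17 +
            [("q2", True)] * 9 + [("q2", False)] * 11)
rates = solvability(attempts)   # q1: 0.15 (in band), q2: 0.45
print(flag_outliers(rates))     # q2 is flagged as too easy
```

A question flagged on the high side is failing to differentiate candidates; one flagged on the low side may be unreasonably difficult.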

Here is what the data showed.

Scenario A: Assessments Without Smart Browser

When candidates took assessments in a standard browser environment (with basic proctoring but no Smart Browser restrictions), solvability was higher across all question types. The standard proctoring still presented a challenge due to HackerEarth's rich question library, but candidates had opportunities to use external tools, including AI assistants.

Even with these solvability rates, the assessments were not trivially easy. However, the risk remained: genuine candidates who solved problems independently competed on an uneven playing field with those using ChatGPT or similar tools.

Scenario B: Assessments With Smart Browser

After implementing the Smart Browser on the same assessments, solvability decreased significantly across every question type. The controlled environment ensured that only candidates who could solve problems using their own knowledge and reasoning passed the test.

The Solvability Impact

The overall decrease in solvability when Smart Browser was enabled confirms a critical insight: a meaningful percentage of candidates in unproctored environments were relying on external assistance.

This does not mean every candidate in Scenario A was cheating. But the data demonstrates that stricter proctoring separates candidates who can solve problems independently from those who cannot. For recruiting teams, this translates to a higher-quality shortlist where every candidate on the list has demonstrated verified skills.

The bottom line: the Smart Browser does not make tests harder. It makes the results more trustworthy.

Why Proctored Assessments Matter More in the Age of AI

The assessment integrity challenge is not theoretical. Large language models are improving at an accelerating pace, and their ability to solve coding problems grows with each model release.

Consider the progression:

  • GPT-3.5 (2022): Could solve basic coding challenges and common algorithm problems
  • GPT-4 (2023): Handled intermediate and some advanced coding tasks, including data structures and system design questions
  • GPT-4o and Claude 3.5 (2024-2025): Consistently solve complex, multi-step coding problems with high accuracy

For take-home coding assessments sent without proctoring, AI can now handle a significant portion of the question types recruiters rely on to evaluate candidates. This puts hiring teams in a position where unproctored test results may not reflect actual candidate ability.

The problem extends beyond individual cheating. When AI-assisted candidates advance to interviews and cannot replicate their assessment performance, your engineering team wastes hours on interviews that should never have happened. That is time pulled directly from product work.

Robust proctoring through tools like the Smart Browser directly addresses this by ensuring that the assessment stage of your hiring funnel produces reliable signals. When combined with technical interview platforms that evaluate candidates in real-time, you create a multi-layered process where AI-assisted cheating has no viable entry point.

Smart Browser vs. Standard Browser-Based Proctoring

Not all proctoring is equal. Understanding the differences helps you choose the right level of security for each assessment.

Feature | Standard Browser Proctoring | Smart Browser
Tab-switch detection | Yes | Not applicable (other apps blocked)
Copy-paste restriction | Partial (can be bypassed) | Full (keystroke-level blocking)
External application access | Detected but not prevented | Completely prevented
Multiple monitor usage | Flagged after the fact | Blocked at the system level
Screenshot and screen recording | Detected in some cases | Blocked at the OS level
Virtual machine detection | Limited | Built-in VM detection
AI tool access (ChatGPT, etc.) | Detectable via tab switching | Blocked entirely
Candidate experience impact | Minimal (browser-based) | Requires app download

Standard browser-based proctoring works well for lower-stakes assessments or situations where you want a lighter candidate experience. It detects suspicious behaviour and flags it for review.

The Smart Browser is designed for higher-stakes assessments where you need certainty, not just flags. When the cost of a bad hire is significant (senior engineering roles, for example), the trade-off of requiring an app download is well worth the integrity it provides.

For teams looking to improve the candidate experience while still maintaining assessment integrity, a tiered approach works well: use browser-based proctoring for initial screening rounds and reserve the Smart Browser for final technical assessments.

When to Use the Smart Browser (and When to Allow AI)

This is where the decision becomes strategic rather than technical. The Smart Browser gives you the ability to create a fully locked-down assessment environment. But that does not mean you should use it for every test.

Option 1: Block AI Access Entirely

Use the Smart Browser when the primary goal is evaluating a candidate's core programming skills, specifically:

  • Syntax familiarity and language proficiency
  • Problem-solving ability without external assistance
  • Algorithm design and optimisation under constraints
  • Code efficiency and clean coding practices

This approach is ideal for high-volume hiring where you need to filter large candidate pools efficiently. Campus recruitment drives, associate-level engineering roles, and standardised skill assessments are strong use cases.

The Smart Browser ensures that every candidate who passes the assessment did so on their own ability. Your shortlist becomes a reliable signal for the next stage.

Option 2: Allow AI to Expand the Assessment Scope

For senior or specialised roles, consider allowing AI tool access during assessments. Many experienced developers already use AI assistants as part of their daily workflow. Evaluating how a candidate uses AI (prompt engineering, code review, solution refinement) can reveal higher-order skills that a locked-down test cannot measure.

Think of it the way writing professionals use spell checkers. The tool does not replace skill; it augments it. For roles where AI collaboration is part of the job, testing candidates without AI access may actually give you a less accurate picture of their real-world capabilities.

In these scenarios, focus the assessment on:

  • System design and architectural thinking
  • Code review, debugging, and optimisation of AI-generated code
  • Problem decomposition and communication
  • Creativity and novel approaches to ambiguous problems

The key is matching your proctoring level to what you are trying to measure. The Smart Browser is a tool, not a mandate.

Technical Requirements and Getting Started

System Requirements

The Smart Browser is a lightweight desktop application. Candidates need to download and install it before the assessment begins. The supported environments include:

  • Windows: Windows 10 and Windows 11
  • macOS: Version 13.5 (Ventura) and above
  • Linux: Ubuntu 20.04, 22.04, and 24.04

The application requires a stable internet connection throughout the assessment. Specific hardware requirements (RAM, disk space) are minimal, as the application primarily functions as a controlled browser environment rather than a resource-intensive program.

Rolling Out Smart Browser to Candidates

Communication matters when introducing a proctored environment. Candidates who are surprised by a desktop application download are more likely to drop off or have a negative experience. Best practices include:

  • Notify candidates in advance. Include Smart Browser requirements in the assessment invitation email with clear download instructions.
  • Provide a test run. Allow candidates to install and verify the application before the scheduled assessment window.
  • Offer technical support. Link to troubleshooting guides and provide a support contact for installation issues.
  • Explain the purpose. Frame the Smart Browser as a fairness measure. Candidates who are confident in their skills generally appreciate a level playing field.

Setting up the Smart Browser on the recruiter side is straightforward. Within the HackerEarth assessment configuration, toggle the Smart Browser proctoring option when creating or editing a test. The platform handles the rest, including generating candidate-facing instructions.

For teams exploring AI-powered interview tools alongside proctored assessments, the Smart Browser integrates within the broader HackerEarth ecosystem. You can use it for the assessment stage and pair it with AI or human-led interviews for subsequent evaluation rounds.

Security, Privacy, and Candidate Data

Assessment proctoring involves monitoring candidate behaviour, which raises legitimate privacy questions. Transparency about what the Smart Browser does (and does not do) builds trust with candidates and ensures compliance with data protection standards.

What the Smart Browser monitors:

  • Application and window activity on the candidate's device during the test
  • Keystroke restrictions (blocking specific key combinations, not logging keystrokes)
  • Attempts to access restricted functionality (screenshots, screen sharing, virtual machines)

What the Smart Browser does not do:

  • It does not access the candidate's webcam or microphone (unless webcam proctoring is separately enabled)
  • It does not log personal files, browsing history, or data outside the test session
  • It does not remain active or collect data after the assessment is completed

HackerEarth follows industry-standard security practices including data encryption in transit and at rest. For enterprise customers with specific compliance requirements (SOC 2, GDPR), the platform offers detailed documentation on data handling practices.

If your organisation operates under strict data governance policies, review HackerEarth's security documentation or contact the support team to confirm alignment with your requirements.

Make Your Assessments Trustworthy

Assessment integrity is not a nice-to-have. It is the foundation that every subsequent hiring decision rests on. If your assessments can be gamed with AI tools, your shortlists are unreliable, your engineering team wastes time on mismatched interviews, and your cost-per-hire rises.

The Smart Browser gives you a proven, data-backed way to restore trust in your assessment results. The solvability data speaks clearly: when candidates cannot rely on external help, only the genuinely skilled ones advance.

If you are ready to strengthen your assessment process, explore HackerEarth's technical assessment platform to see the Smart Browser in action. Or book a demo to discuss how proctored assessments fit your hiring workflow.

Frequently Asked Questions

What is the Smart Browser in HackerEarth?

The Smart Browser is a desktop application that candidates download to take HackerEarth assessments in a fully controlled, proctored environment. It prevents access to external applications, AI tools, copy-paste functions, and other potential cheating vectors during a test.

Does the Smart Browser prevent candidates from using ChatGPT?

Yes. Because the Smart Browser blocks all external applications and restricts keystroke combinations like Ctrl+C and Ctrl+V, candidates cannot access ChatGPT or any other AI assistant during the assessment.

What operating systems does the Smart Browser support?

The Smart Browser supports Windows 10 and 11, macOS 13.5 (Ventura) and above, and Ubuntu 20.04, 22.04, and 24.04 on Linux.

Does the Smart Browser affect test difficulty?

No. The Smart Browser does not change the questions or their difficulty level. It changes the testing environment. Solvability decreases because candidates can no longer use external assistance, meaning results more accurately reflect individual ability.

Should every assessment use the Smart Browser?

Not necessarily. Use the Smart Browser for assessments where verifying independent problem-solving ability is the primary goal. For senior roles where AI tool usage is part of the expected workflow, consider allowing AI access and evaluating how candidates leverage it.

How do candidates install the Smart Browser?

Candidates receive a download link as part of their assessment invitation. The installation process takes a few minutes. Providing advance notice and a test-run opportunity reduces drop-off rates and technical issues.

How Does HackerEarth Combat The Use Of ChatGPT And Other LLMs In Tech Hiring Assessments?

Ever since ChatGPT made its public debut in November 2022, it has been fodder for headlines. Its popularity suggests that there isn’t a single industry or vertical that will not be fundamentally reshaped by generative AI platforms in the near future. Recruiting in general, and technical assessments in particular, are no different.

While ChatGPT can be used in technical recruiting to make manual work more manageable, it also has a proven drawback – candidates have been using it to answer take-home coding tests during the hiring process.

Due to the growing concern around the use of generative AI in coding tests, we decided to address the topic head-on and help our users understand the measures we have put in place to detect, prevent, and manage such practices.

But first, a note about LLMs and their use cases

LLM stands for Large Language Model, a machine-learning model designed to process and generate human-like natural language. LLMs are typically built using neural networks and deep learning algorithms and trained on vast amounts of text data to learn patterns and relationships between words and phrases.

LLMs aim to generate coherent and relevant responses to natural language inputs, such as questions, statements, or commands. This makes them useful for a wide range of applications, including language translation, chatbots, content generation, sentiment analysis, and answering questions.

LLMs have become increasingly popular in recent years due to advances in deep learning algorithms and the availability of large datasets. Some of the most well-known LLMs include GPT (Generative Pre-trained Transformer), BERT (Bidirectional Encoder Representations from Transformers), and T5 (Text-to-Text Transfer Transformer).

The growing demand for LLMs has led to some burning questions. Businesses are wondering about a future where LLMs are integral to day-to-day work and can generate more profits. In the tech industry, many have welcomed LLMs like ChatGPT as an extension of the existing coding tools, and are looking at ways of integrating the platform into their coding process.

Here’s how LLMs can transform the way we interact with computers and other digital devices:

  1. Language translation: You can use LLMs to automatically translate text from one language to another. This is particularly useful for businesses operating in multiple countries and trying to reach a global audience.
  2. Chatbots: LLMs can help chatbots respond to customer inquiries in natural language, saving significant time and money by automating customer service tasks.
  3. Content generation: Use LLMs to generate content for websites or social media. For example, an LLM could be trained to write news articles or social media posts based on a given topic.
  4. Sentiment analysis: Analyze text data and determine the sentiment behind it with LLMs. This is useful for businesses looking to monitor customer feedback or social media activity.
  5. Answering questions: You can leverage LLMs to answer questions in natural language. For example, an LLM could be trained to answer questions about a company’s products or services.
  6. Summarization: Automatically summarize long documents or articles with LLMs. This is useful for businesses looking to quickly extract key information from large volumes of text.

So now, what is ChatGPT?

ChatGPT is a Large Language Model (LLM) based on the GPT (Generative Pre-trained Transformer) architecture. It is one of the most advanced LLMs available and is capable of generating human-like responses to natural language inputs.

ChatGPT is trained on vast amounts of text data and uses a deep learning algorithm to generate responses to user inputs. It can engage in conversations on a wide range of topics and is capable of providing contextually relevant and coherent responses.

One of the key advantages of ChatGPT is its ability to generate natural language responses in real time. This makes it a useful tool for a variety of applications, including chatbots, virtual assistants, and customer service platforms.

OpenAI, a leading AI research organization, developed ChatGPT. It is based on the GPT-3 architecture, which was trained on a massive dataset of over 45 terabytes of text data. Overall, ChatGPT represents a significant advancement in the field of Natural Language Processing and has the potential to transform the way we interact with computers and other digital devices.

It is a powerful tool that is being used in a variety of applications and has the potential to drive innovation and growth across a spectrum of industries.

How to use ChatGPT for answering coding tests?

Many developers use this tool to generate code snippets to solve specific problems in coding tests. If they can define their parameters and conditions, ChatGPT can produce a working code that can be used in the functions.

ChatGPT can answer complex technical questions, both theoretical and practical. However, one of its shortcomings is that it is not yet fully capable of answering questions based on logical reasoning. It interprets questions literally rather than contextually, which means ChatGPT also cannot answer context-based questions accurately.

ChatGPT works well when answering theoretical technical questions, having been trained rigorously on that kind of material. Even with easy coding questions, ChatGPT provides excellent results, but with complex scenario-based questions it sometimes fails to provide the right solution. It is not yet able to create complete modules for a full-stack question.

Also read: 8 Unconsciously Sexist Interview Questions You’re Asking Your Female Candidates

ChatGPT + Coding tests — Plagiarism or Progress?

The bottom line is: ChatGPT will make it infinitely easier for candidates to generate code and ace their take-home assignments. Currently, this capability is limited to simple, theory-based questions. However, the platform will inevitably learn and get better at generating complex code. Consequently, it could be used to answer all coding tests.

At HackerEarth, we have always maintained that skills are the only criteria for evaluation. However, a developer using an AI tool to answer a question muddles the selection and evaluation process.

The AI-shaped elephant in the room then begs us to pick a side. Either we conclude that the use of any generative AI by a candidate in a coding test amounts to plagiarism and is unacceptable. Or, we chalk it up to changing times and get on board with the progress.

The first approach

This is best suited for mass hiring drives, where recruiters are hard-pressed to curate a pool of candidates through a process of elimination. Plagiarism via ChatGPT in hiring assessments can be one of the criteria for elimination. It allows you to narrow your candidate list down to the developers who answered the coding test without the support of an external tool.

The second approach

This works well when hiring fewer candidates, perhaps for a highly technical role. ChatGPT is here to stay; senior developers use it to generate or evaluate complex code. Allowing candidates for such roles to use ChatGPT in coding tests would mean expanding the understanding of skill-based evaluation in these scenarios.

We could draw a parallel between these candidates and writers who use a spellchecker to proofread their assignments. AI-based writing assistants have become an industry-wide best practice, so the writer in this example would not lose any points for using one.

Instead, they would be evaluated on their research and analytical skills or creativity – which an AI-based writing assistant cannot substitute for – and not necessarily on their use of an external tool. In theory, one could use the same rationale to justify and accept candidates’ use of ChatGPT in hiring assessments.

Given both these approaches, we at HackerEarth have decided to support both schools of thought in our Assessments platform. Those who want to ensure their candidates cannot use ChatGPT for answering tests can do so with our advanced proctoring features. And the hiring managers who do not mind the use of ChatGPT can write to support@hackerearth.com to understand how the LLM can be integrated into HackerEarth Assessments.

How does HackerEarth detect the use of ChatGPT in hiring assessments?

With the increasing use of ChatGPT, many of our customers have written to us to ask how we plan to combat the use of ChatGPT in hiring assessments. HackerEarth Assessments is known for its robust proctoring settings. We have added new features to detect the recent spate of plagiarism via ChatGPT in hiring assessments.

Let me walk you through these new additions:

1. Smart Browser

HackerEarth has introduced new advanced proctoring features including a Smart Browser. This is available with the HackerEarth Assessments desktop application. This builds on our existing proctoring features and establishes a highly rigorous proctoring method to prevent the use of ChatGPT and other LLMs.

Smart Browser includes the following settings that detect the use of ChatGPT:

  • Candidates are not allowed to keep other applications open during the test
  • They are also not allowed to:
    • Resize the test window
    • Use multiple monitors during the test
    • Share the test window
    • Take screenshots of the test window
    • Record the test window
    • Use restricted keystrokes
    • View OS notifications
    • Run the test window within a Virtual Machine
    • Use browser developer tools

To learn more about the Smart Browser, read this article.

At the time of writing this article, Smart Browser is only available upon request. To request access, please get in touch with your Customer Success Manager or contact support@hackerearth.com.

Also read: 3 Things To Know About Remote Proctoring

2. Tab switch proctoring setting

Use HackerEarth’s tab switch proctoring setting during tests. This setting allows you to set the number of times a candidate can move out of the test environment. The default is 5 instances, meaning candidates are allowed to switch tabs 5 times during the test; on the 6th attempt, they are automatically logged out of the system. The default number can be changed if required.

When this proctoring setting is enabled, the system warns the candidate each time they move out of the test environment. The following actions are considered ‘moving out of the test environment’. However, please note that this is not an exhaustive list:

  • Switching tabs
  • Switching windows
  • Opening new applications on the computer, including system popups like anti-virus notifications, Lync notifications, Skype notifications, etc.
  • Any action taken to close notifications is also counted as leaving the test environment.

The assumption is that candidates would need to switch tabs to access ChatGPT. By not allowing candidates to move out of the test environment beyond a set number of times, we can detect and prevent the use of ChatGPT.
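The counting rule described above – warn on each exit, log the candidate out once the allowed number of exits is exceeded – can be sketched as follows. This is a minimal illustration only, not HackerEarth’s actual implementation; the class and method names are invented for this example.

```python
class TabSwitchMonitor:
    """Illustrative sketch of the tab-switch counting rule.

    Each time the candidate leaves the test environment, a warning is
    issued; once the allowed count (default 5) is exceeded, the
    candidate is logged out.
    """

    def __init__(self, max_switches: int = 5):
        self.max_switches = max_switches
        self.switch_count = 0
        self.logged_out = False

    def record_exit(self) -> str:
        """Called whenever the candidate moves out of the test window."""
        if self.logged_out:
            return "logged_out"
        self.switch_count += 1
        if self.switch_count > self.max_switches:
            # With the default limit, this triggers on the 6th exit.
            self.logged_out = True
            return "logged_out"
        return "warning"


# With the default limit, the first five exits produce warnings
# and the sixth logs the candidate out.
monitor = TabSwitchMonitor()
events = [monitor.record_exit() for _ in range(6)]
print(events)  # ['warning', 'warning', 'warning', 'warning', 'warning', 'logged_out']
```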

3. Full-screen proctoring setting

Enable this feature to enhance the proctoring of a hiring assessment by allowing your candidates to take it only in full-screen mode. As soon as the candidate opens the assessment, the screen goes into full-screen mode; if they try to exit it, they are logged out of the assessment.

Reduce ChatGPT usage in your assessments by preventing candidates from opening any new tabs while taking the assessment. To learn more about HackerEarth’s proctoring settings, read this article.

4. Diverse question types

HackerEarth has a rich library of logical reasoning questions that cannot be answered easily via ChatGPT. We tested our questions on ChatGPT and found that it consistently fails to answer logical reasoning questions correctly because it struggles to understand contextual questions.

Here’s one of the many examples of logical reasoning questions that we asked ChatGPT to test its capabilities:

Example of a complex question type that ChatGPT can't answer

ChatGPT cannot produce code for full-stack questions. HackerEarth has a vast library of full-stack questions that can be used in the assessments and are well protected from the impact of ChatGPT.

While ChatGPT can help write the code for some modules, it cannot fully answer a full-stack question with all the functions. Compiling these separate functions to create a single module requires skill and ingenuity.

Similarly, recruiters can use file upload questions to make their assessments more robust. These questions involve complex scenarios and functions that ChatGPT cannot answer completely.

Essential insights about using ChatGPT in hiring

  • LLM stands for Large Language Model. It is a type of machine learning model designed to process and generate human-like natural language.
  • ChatGPT is a Large Language Model (LLM) based on the GPT (Generative Pre-trained Transformer) architecture. It is one of the most advanced LLMs available and is capable of generating human-like responses to natural language inputs.
  • You can use ChatGPT to answer easy coding questions and MCQs. It can help write accurate code snippets for function modules.
  • Recruiters, avoid using MCQs with direct answers as candidates can easily answer them through ChatGPT.
  • HackerEarth provides various solutions that help us detect and prevent the usage of ChatGPT. These include:
    • Smart browser
    • Tab switch proctoring setting
    • Full-screen mode
    • Diverse and complex question types

Doing away with using ChatGPT in hiring assessments

In many ways, we are all just waking up to the power of AI. With new advancements every day, no one is sure what the future holds, but we should all be ready to embrace the moment when AI becomes an integral part of daily functions.

Technical assessments can still be curated without the interference of AI platforms like ChatGPT to ensure skill-first evaluation. HackerEarth Assessments has introduced advanced proctoring settings like Smart Browser, tab-switch detection, full-screen mode, and a vast library of complex engineering questions that are not easily answerable by ChatGPT.

The product mavens at HackerEarth work relentlessly to ensure our product is firewalled against the latest challenges and developments. Tech recruiters and hiring managers can rest assured that the validity and sanctity of our assessments haven’t been affected by the use of ChatGPT.

We will keep a keen eye on upcoming changes in this area and improve the product over time to combat future challenges and ensure a plagiarism-free hiring experience for our clients.

The Ultimate Playbook For Better Hiring FREE EBOOK

Frequently Asked Questions (FAQs)

#1 Where does HackerEarth see pre-interview tests and interviewing to be moving to in a world where ChatGPT exists?

The world of interviewing and pre-interview tests will see significant changes in the foreseeable future. We also need to understand that as new features and platforms emerge, the solutions to detect and prevent their use will go through multiple iterations.

In the near future, advanced proctoring settings and new question types that are not easily answerable using ChatGPT can help protect pre-interview tests from its impact. We are also working on more robust plagiarism detection methods that can keep pace with ChatGPT’s upgrades.

#2 With ChatGPT being able to solve MCQs, programming, etc. in a few minutes, does HackerEarth have a different set of problems that can be used?

ChatGPT can quickly solve MCQs and simple programming problems, which is a big concern. However, HackerEarth has a wide variety of questions that recruiters can use to combat the usage of ChatGPT, including a library of full-stack question types. As previously discussed, it is difficult for a candidate to source the different modules and compile them to complete such a question. It is a time-consuming and complex process, so candidates will prefer to attempt these questions on their own.

Moreover, ChatGPT cannot understand logical real-life scenarios, and the accuracy of its answers to such questions is poor. Use a mix of logical reasoning MCQs, DevOps, and Selenium questions to assess a candidate’s versatility.