Author: Shivam Gupta

From dorm rooms to boardrooms, Shivam has built a career connecting young talent to opportunity. Their writing brings fresh, student-centric views on tech hiring and early careers.

Insights & Stories by Shivam Gupta

Shivam Gupta explores what today’s grads want from work—and how recruiters can meet them halfway. Expect a mix of optimism, strategy, and sharp tips.

Guide to Conducting Successful System Design Interviews in 2025

What is Systems Design?

Systems Design is an all-encompassing term that covers both frontend and backend components, harmonized to define the overall architecture of a product.

Designing robust and scalable systems requires a deep understanding of applications, architecture, and their underlying components, such as networks, data, interfaces, and modules.

Systems Design, in its essence, is a blueprint of how software and applications should work to meet specific goals. The multi-dimensional nature of this discipline makes it open-ended – as there is no single one-size-fits-all solution to a system design problem.

What is a System Design Interview?

Conducting a System Design interview requires recruiters to take an unconventional approach and look beyond right or wrong answers. Recruiters should aim to evaluate a candidate’s ‘systemic thinking’ skills across three key aspects:

How they navigate technical complexity and uncertainty
How they meet expectations of scale, security and speed
How they focus on the bigger picture without losing sight of details

This assessment of the end-to-end thought process and a holistic approach to problem-solving is what the interview should focus on.

What are some common topics for a System Design Interview?

System design interview questions are free-form and exploratory in nature where there is no right or best answer to a specific problem statement. Here are some common questions:

How would you approach the design of a social media app or video app?

What are some ways to design a search engine or a ticketing system?

How would you design an API for a payment gateway?

What are some trade-offs and constraints you will consider while designing systems?

What is your rationale for taking a particular approach to problem solving?

Usually, interviewers base the questions on the organization, its goals, key competitors, and the candidate’s experience level.

For senior roles, the questions tend to focus on assessing a candidate’s computational thinking, decision-making, and reasoning ability. For entry-level interviews, the questions are designed to test the hard skills required for building a system architecture.

The Difference between a System Design Interview and a Coding Interview

If a coding interview is like a map that takes you from point A to Z – a systems design interview is like a compass which gives you a sense of the right direction.

Here are three key differences between the two:

Coding challenges follow a linear interviewing experience, i.e., candidates are given a problem and interaction with recruiters is limited. System design interviews are more lateral and conversational, requiring active participation from interviewers.

Coding interviews or challenges focus on evaluating the technical acumen of a candidate, whereas systems design interviews assess problem-solving and interpersonal skills.

Coding interviews are based on a right/wrong approach with ideal answers to problem statements while a systems design interview focuses on assessing the thought process and the ability to reason from first principles.

How to Conduct an Effective System Design Interview

One common mistake recruiters make is approaching a system design interview with the expectations and preparation of a typical coding interview.

Here is a four-step framework technical recruiters can follow to ensure a seamless and productive interview experience:

Step 1: Understand the subject at hand

  • Develop an understanding of basics of system design and architecture
  • Familiarize yourself with commonly asked systems design interview questions
  • Read about system design case studies for popular applications
  • Structure the questions and problems by increasing magnitude of difficulty

Step 2: Prepare for the interview

  • Plan the extent of the topics and scope of discussion in advance
  • Clearly define the evaluation criteria and communicate expectations
  • Quantify constraints, inputs, boundaries and assumptions
  • Establish the broader context and a detailed scope of the exercise

Step 3: Stay actively involved

  • Ask follow-up questions to challenge a solution
  • Probe candidates to gauge real-time logical reasoning skills
  • Make it a conversation and take notes of important pointers and outcomes
  • Guide candidates with hints and suggestions to steer them in the right direction

Step 4: Be a collaborator

  • Encourage candidates to explore and consider alternative solutions
  • Work with the candidate to break the problem down into smaller tasks
  • Provide context and supporting details to help candidates stay on track
  • Ask follow-up questions to learn about the candidate’s experience

Technical recruiters and hiring managers should aim to provide an environment of positive reinforcement, actionable feedback, and encouragement for candidates.

Evaluation Rubric for Candidates

Facilitate Successful System Design Interview Experiences with FaceCode

FaceCode, HackerEarth’s intuitive and secure platform, empowers recruiters to conduct system design interviews in a live coding environment with HD video chat.

FaceCode comes with an interactive diagram board that makes it easier for interviewers to assess design-thinking skills and conduct communication assessments using a built-in library of diagram-based questions.

With FaceCode, you can combine your feedback points with AI-powered insights to generate accurate, data-driven assessment reports with ease. Plus, you can access interview recordings and transcripts anytime to recall and trace back the interview experience.

Learn how FaceCode can help you conduct system design interviews and boost your hiring efficiency.

How HackerEarth's Smart Browser Has Increased Integrity of Assessments In the Age of AI

At HackerEarth, we take pride in building robust proctoring features for our tech assessments.

The tech teams we work with want to hire candidates with the right skills for the job, and it helps no one if candidates can easily ace tests by plagiarizing answers. HackerEarth Assessments has always offered robust proctoring settings to ensure that our assessments help users find the right skill match every single time. To build on this, we launched our anti-ChatGPT feature, Smart Browser, last year.

In case you missed the launch announcement, Smart Browser is a unique feature that allows candidates to attempt a test only within the HackerEarth desktop application, which enforces stricter proctoring than our browser test environment. Smart Browser prevents the following candidate actions:

  • Screen sharing the test window
  • Keeping other applications open during the test
  • Resizing the test window
  • Using multiple monitors during the test
  • Taking screenshots of the test window
  • Recording the test window
  • Using restricted keystrokes, including:
    • All function keys and combos involving keys such as:
      • F1, F5 + Alt, etc.
      • Alt + Tab
      • Ctrl + Alt + Delete
      • Ctrl + V
      • Ctrl + C
    • OS super keys and combos involving these keys (e.g., Windows key, Mac Command key, Windows key + C)
  • Viewing OS notifications
  • Running the test window within a Virtual Machine
  • Usage of browser developer tools

A year after launching this feature, we wanted to understand its impact on the take-home assessments sent to candidates. We decided to look at the difference in solvability between assessments that used Smart Browser for proctoring and those that did not.

What the data from Smart Browser shows us

One way to check a test's integrity is to look at its solvability. If a coding test scores high on solvability, candidates find it easy to crack, and almost anyone can pass the assessment. Creating the perfect coding assessment means finding the right solvability, which should be neither too high nor too low. According to expert estimates, a solvability of 10-20% is considered ideal, though this can change with the difficulty level chosen by the recruiting team and the number of candidates taking the test.
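To make the metric concrete, solvability here can be read as the share of candidates who fully solved a question. Below is a minimal sketch; the function name and signature are illustrative, not HackerEarth's API:

```python
def solvability(solved: int, attempted: int) -> float:
    """Percentage of candidates who fully solved a question."""
    if attempted == 0:
        return 0.0  # avoid division by zero when nobody attempted it
    return 100 * solved / attempted

# A question solved by 30 of 200 candidates has 15% solvability,
# which falls inside the 10-20% band described above as ideal.
print(solvability(30, 200))
```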

Smart Browser lets users set up a highly proctored environment, which makes it difficult for candidates to use unfair practices while taking assessments, so that only genuine candidates can solve the questions.

This brings us to the following observations:

Scenario A:

Some of our users chose not to implement the Smart Browser feature in their assessments, leaving candidates the option of using an LLM to answer questions. We found that solvability varies by question type in this scenario. The table below shows the solvability of each question type in assessments without Smart Browser.

Solvability of tech assessments without the use of Smart Browser


This is still a difficult assessment for candidates to solve, thanks to HackerEarth’s rich question library. But without Smart Browser, there is still a chance of candidates using unfair practices or ChatGPT to plagiarize, which makes the process unfair to candidates who are genuine in their attempts.

Scenario B:

After implementing the Smart Browser feature on these same assessments, we found that the solvability of each question type, and the average solvability, decreased significantly. The table below shows the solvability of different question types after implementing Smart Browser.
Solvability of tech assessments with the use of Smart Browser

This clearly demonstrates that implementing the Smart Browser feature decreases solvability and gives you, as a recruiter, a far more genuine and serious pool of candidates: those who solved the assessment without any external help.

The table below shows the decrease in solvability when Smart Browser is used, compared to assessments where it is not implemented.

Overall decrease in the solvability of tech assessments when Smart Browser is used

Should you implement the Smart Browser for your next assessment?

LLMs like ChatGPT are making it easier for candidates to write code for take-home tech assignments. While most LLMs can currently handle basic coding tasks, they are getting better at building complex code. This raises the question: could AI eventually solve any coding challenge?


Tech recruiting teams have two options here:

Forbid the use of AI in coding tests completely: This is ideal for large-scale hiring, where efficiency is key. HackerEarth can detect ChatGPT use and eliminate candidates who rely on it, leaving only those who completed the test independently.

Embrace AI in coding tests: This is better when hiring for a small number of highly technical roles. Many experienced developers use ChatGPT to write or analyze complex code, and allowing such candidates to use AI during tests broadens the scope of skill assessment. Think of it like writers using spell checkers: we don't penalize them for using AI tools. We judge them on research, analytical skills, and creativity, qualities AI can't replicate. Similarly, there are instances where we can accept AI use in coding tests for specific roles.

The data above clearly shows that the difficulty of coding questions increases, and their solvability decreases, significantly when HackerEarth’s Smart Browser is used for proctoring. Tech recruiters may want to employ this feature in assessments where the primary objective is to evaluate a candidate's core programming skills, such as syntax familiarity, problem-solving ability without external assistance, and code efficiency.

Similarly, they may want to allow the use of LLMs in scenarios where the primary focus is on assessing problem-solving skills, creativity, and the ability to communicate effectively about code.

We leave the final decision of using the Smart Browser up to you, but we recommend that you consider using it to attract a pool of genuine candidates who can clear assessments without external help, and make your company’s assessment process more transparent and reliable.

Head over here to check out the Smart Browser for yourself! Or write to us at support@hackerearth.com to learn more.

How Does HackerEarth Combat The Use Of ChatGPT And Other LLMs In Tech Hiring Assessments?

Ever since ChatGPT made a public debut in November 2022, it has been the fodder for headlines. Its popularity proves that there isn’t a single industry or vertical that will not be fundamentally reshaped by generative AI platforms in the near future. Recruiting, in general, and technical assessments, in particular, are no different.

While ChatGPT can be used in technical recruiting to make manual work more manageable, it has a proven drawback: candidates have been using it to answer take-home coding tests during the hiring process.

Due to the growing concern around the use of generative AI in coding tests, we decided to address the topic head-on and help our users understand the measures we have put in place to detect, prevent, and manage such practices.

But first, a note about LLMs and their use cases

LLM stands for Large Language Model, a machine-learning model designed to process and generate human-like natural language. LLMs are typically built using neural networks and deep learning algorithms and trained on vast amounts of text data to learn patterns and relationships between words and phrases.

LLMs aim to generate coherent and relevant responses to natural language inputs, such as questions, statements, or commands. This makes them useful for a wide range of applications, including language translation, chatbots, content generation, sentiment analysis, and answering questions.

LLMs have become increasingly popular in recent years due to advances in deep learning algorithms and the availability of large datasets. Some of the most well-known LLMs include GPT (Generative Pre-trained Transformer), BERT (Bidirectional Encoder Representations from Transformers), and T5 (Text-to-Text Transfer Transformer).

The growing demand for LLMs has led to some burning questions. Businesses are wondering about a future where LLMs are integral to day-to-day work and can generate more profits. In the tech industry, many have welcomed LLMs like ChatGPT as an extension of the existing coding tools, and are looking at ways of integrating the platform into their coding process.

Here’s how LLMs can transform the way we interact with computers and other digital devices:

  1. Language translation: You can use LLMs to automatically translate text from one language to another. This is particularly useful for businesses operating in multiple countries and trying to reach a global audience.
  2. Chatbots: LLMs can help chatbots respond to customer inquiries in natural language, saving significant time and money by automating customer service tasks.
  3. Content generation: Use LLMs to generate content for websites or social media. For example, an LLM could be trained to write news articles or social media posts based on a given topic.
  4. Sentiment analysis: Analyze text data and determine the sentiment behind it with LLMs. This is useful for businesses looking to monitor customer feedback or social media activity.
  5. Answering questions: You can leverage LLMs to answer questions in natural language. For example, an LLM could be trained to answer questions about a company’s products or services.
  6. Summarization: Automatically summarize long documents or articles with LLMs. This is useful for businesses looking to quickly extract key information from large volumes of text.

So now, what is ChatGPT?

ChatGPT is a Large Language Model (LLM) based on the GPT (Generative Pre-trained Transformer) architecture. It is one of the most advanced LLMs available and is capable of generating human-like responses to natural language inputs.

ChatGPT is trained on vast amounts of text data and uses a deep learning algorithm to generate responses to user inputs. It can engage in conversations on a wide range of topics and is capable of providing contextually relevant and coherent responses.

One of the key advantages of ChatGPT is its ability to generate natural language responses in real time. This makes it a useful tool for a variety of applications, including chatbots, virtual assistants, and customer service platforms.

OpenAI, a leading AI research organization, developed ChatGPT. It is based on the GPT-3 architecture, which was trained on a massive dataset of over 45 terabytes of text data. Overall, ChatGPT represents a significant advancement in the field of Natural Language Processing and has the potential to transform the way we interact with computers and other digital devices.

It is a powerful tool that is being used in a variety of applications and has the potential to drive innovation and growth across a spectrum of industries.

How to use ChatGPT for answering coding tests?

Many developers use this tool to generate code snippets that solve specific problems in coding tests. If they can define their parameters and conditions, ChatGPT can produce working code that can be used in the functions.

ChatGPT can answer complex technical questions, both theoretical and practical. However, one of its shortcomings is that it is not yet fully capable of answering questions based on logical reasoning; it interprets a question literally rather than contextually. This means ChatGPT also cannot answer context-based questions accurately.

ChatGPT works well when answering theoretical technical questions, having been trained rigorously on that kind of material. Even with easy coding questions, ChatGPT provides excellent results, but with complex scenario-based questions, it sometimes fails to provide the right solution. It is not yet able to create complete modules for a full-stack question.

Also read: 8 Unconsciously Sexist Interview Questions You’re Asking Your Female Candidates

ChatGPT + Coding tests — Plagiarism or Progress?

The bottom line is: ChatGPT will make it infinitely easier for candidates to generate code and ace their take-home assignments. Currently, this capability is limited to simple, theory-based questions. However, the platform will inevitably learn and get better at generating complex code. Consequently, it could be used to answer all coding tests.

At HackerEarth, we have always maintained that skills are the only criteria for evaluation. However, a developer using an AI tool to answer a question muddles the selection and evaluation process.

The AI-shaped elephant in the room then begs us to pick a side. Either we conclude that the use of any generative AI by a candidate in a coding test amounts to plagiarism and is unacceptable. Or, we chalk it up to changing times and get on board with the progress.

The first approach

This is best suited for mass hiring drives, where recruiters are hard-pressed to curate a pool of candidates through a process of elimination. Plagiarism via ChatGPT in hiring assessments can be one of the criteria for elimination. It allows you to narrow your candidate list down to the developers who answered the coding test without the support of an external tool.

The second approach

This works well when hiring fewer candidates, perhaps for a highly technical role. ChatGPT is here to stay; senior developers use it to generate or evaluate complex code. Allowing candidates for such roles to use ChatGPT in coding tests would mean expanding the understanding of skill-based evaluation in these scenarios.

We could draw a parallel between these candidates and writers who use a spellchecker to proofread their assignments. AI-based writing assistants have become an industry-wide best practice, so the writer in this example would not lose any points for using one.

Instead, they would be evaluated on their research and analytical skills or creativity – which an AI–based writing assistant cannot substitute – and not necessarily on their use of an external tool. In theory, one could use the same rationale to justify and accept the use of ChatGPT in hiring assessments by candidates.

Given both these approaches, we at HackerEarth have decided to support both schools of thought in our Assessments platform. Those who want to ensure their candidates cannot use ChatGPT for answering tests can do so with our advanced proctoring features. And the hiring managers who do not mind the use of ChatGPT can write to support@hackerearth.com to understand how the LLM can be integrated into HackerEarth Assessments.

How does HackerEarth detect the use of ChatGPT in hiring assessments?

With the increasing use of ChatGPT, many of our customers have written to us to ask how we plan to combat the use of ChatGPT in hiring assessments. HackerEarth Assessments is known for its robust proctoring settings. We have added new features to detect the recent spate of plagiarism via ChatGPT in hiring assessments.

Let me walk you through these new additions:

1. Smart Browser

HackerEarth has introduced new advanced proctoring features, including Smart Browser, available with the HackerEarth Assessments desktop application. It builds on our existing proctoring features to establish a highly rigorous proctoring method that prevents the use of ChatGPT and other LLMs.

Smart Browser includes the following settings that detect the use of ChatGPT:

  • Candidates are not allowed to keep other applications open during the test
  • They are also not allowed to:
    • Resize the test window
    • Use multiple monitors during the test
    • Share the test window
    • Take screenshots of the test window
    • Record the test window
    • Use restricted keystrokes
    • View OS notifications
    • Run the test window within a Virtual Machine
    • Use browser developer tools

To learn more about the Smart Browser, read this article.

At the time of writing this article, Smart Browser is only available upon request. To request access, please get in touch with your Customer Success Manager or contact support@hackerearth.com.

Also read: 3 Things To Know About Remote Proctoring

2. Tab switch proctoring setting

Use HackerEarth’s tab switch proctoring setting during tests. This setting lets you cap the number of times a candidate can move out of the test environment. The default is 5 instances, meaning candidates may switch tabs 5 times during the test; on the 6th attempt, they are automatically logged out of the system. The default number can be changed if required.

When this proctoring setting is enabled, the system warns the candidate each time they move out of the test environment. The following actions are considered ‘moving out of the test environment’ (please note that this is not an exhaustive list):

  • Switching tabs
  • Switching windows
  • Opening new applications on the computer, including system popups like anti-virus notifications, Lync notifications, Skype notifications, etc.
  • Any action taken to close notifications is also counted as leaving the test environment.

The assumption is that candidates would need to switch tabs to access ChatGPT. By not allowing candidates to move out of the test environment beyond a set number of times, we can detect and prevent the use of ChatGPT.
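The counting rule described above can be sketched as simple logic. This is an illustrative model of the behavior, not HackerEarth's actual implementation; in a real browser, the switches would be detected via visibility-change or window-focus events.

```python
class TabSwitchMonitor:
    """Models the tab-switch rule: warn on each exit from the test
    environment, and log the candidate out once the allowed number
    of switches (default 5) is exceeded."""

    def __init__(self, allowed_switches: int = 5):
        self.allowed = allowed_switches
        self.count = 0

    def record_switch(self) -> str:
        """Call whenever the candidate leaves the test environment."""
        self.count += 1
        if self.count > self.allowed:
            return "logout"  # the 6th exit with the default limit
        return "warn"
```

With the default limit, the first five calls to `record_switch()` return `"warn"` and the sixth returns `"logout"`, matching the behavior described above.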

3. Full-screen proctoring setting

Enable this feature to enhance the proctoring of a hiring assessment by allowing candidates to take the assessment only in full-screen mode. As soon as the candidate opens the assessment, the screen goes full-screen and cannot be exited; if a candidate tries to exit this mode, they are logged out of the assessment.

Reduce ChatGPT usage in your assessments by preventing candidates from opening any new tabs while taking the assessment. To learn more about HackerEarth’s proctoring settings, read this article.

4. Diverse question types

HackerEarth has a rich library of logical reasoning questions that cannot be answered easily via ChatGPT. We tested our questions on ChatGPT, and we can say with confidence that it cannot answer logical reasoning questions correctly because it does not understand contextual questions.

Here’s one of the many examples of logical reasoning questions that we asked ChatGPT to test its capabilities:

Example of a complex question type that ChatGPT can't answer

ChatGPT cannot produce code for full-stack questions. HackerEarth has a vast library of full-stack questions that can be used in the assessments and are well protected from the impact of ChatGPT.

While ChatGPT can help write the code for some modules, it cannot fully answer a full-stack question with all the functions. Compiling these separate functions to create a single module requires skill and ingenuity.

Similarly, recruiters can use file upload questions to make their assessments more robust. These questions involve complex scenarios and functions that ChatGPT cannot answer completely.

Essential insights about using ChatGPT in hiring

  • LLM stands for Large Language Model. It is a type of machine learning model designed to process and generate human-like natural language.
  • ChatGPT is a Large Language Model (LLM) based on the GPT (Generative Pre-trained Transformer) architecture. It is one of the most advanced LLMs available and is capable of generating human-like responses to natural language inputs.
  • You can use ChatGPT to answer easy coding questions and MCQs. It can help write accurate code snippets for function modules.
  • Recruiters, avoid using MCQs with direct answers as candidates can easily answer them through ChatGPT.
  • HackerEarth provides various solutions that help us detect and prevent the usage of ChatGPT. These include:
    • Smart browser
    • Tab switch proctoring setting
    • Full-screen mode
    • Diverse and complex question types

Doing away with ChatGPT in hiring assessments

In many ways, we are all just waking up to the power of AI. With new advancements every day, no one is sure what the future holds, but we should all be ready for the moment when AI becomes an integral part of daily work.

Technical assessments can still be curated without the interference of AI platforms like ChatGPT to ensure skill-first evaluation. HackerEarth Assessments has introduced advanced proctoring settings like Smart Browser, tab-switch detection, full-screen mode, and a vast library of complex engineering questions that are not easily answerable by ChatGPT.

The product mavens at HackerEarth work relentlessly to ensure our product is firewalled against the latest challenges and developments. Tech recruiters and hiring managers can rest assured that the validity and sanctity of our assessments haven’t been affected by the use of ChatGPT.

We will keep a keen eye on upcoming changes in this area and improve the product over time to combat future challenges and ensure a plagiarism-free hiring experience for our clients.


Frequently Asked Questions (FAQs)

#1 Where does HackerEarth see pre-interview tests and interviewing to be moving to in a world where ChatGPT exists?

The world of interviewing and pre-interview tests will see significant changes in the foreseeable future. We also need to understand that as new features and platforms emerge, the solutions to detect and prevent their use will go through multiple iterations.

In the near future, advanced proctoring settings and new question types that are not easily answerable using ChatGPT can help protect pre-interview tests from its impact. We are also working on foolproof methods of plagiarism detection that can keep up with ChatGPT’s upgrades.

#2 With ChatGPT being able to solve MCQs, programming, etc. in a few minutes, does HackerEarth have a different set of problems that can be used?

ChatGPT can quickly solve MCQs and simple programming problems, which is a big concern. However, HackerEarth has a wide variety of questions that recruiters can use to combat the usage of ChatGPT, including a library of full-stack question types. As previously discussed, it is difficult for a candidate to search for different modules and compile them to complete such a question. It is a time-consuming and complex process, so candidates will prefer to do these questions on their own.

Moreover, ChatGPT cannot understand logical real-life scenarios, and the accuracy of such answers is poor. Use a mix of logical reasoning MCQs, DevOps, and Selenium questions to check the versatility of a candidate.