Tech Tutorials


Algorithm to find the day of the week

In 1970, John Horton Conway devised an algorithm, known as the “Doomsday Algorithm,” to calculate the day of the week for any given date using mental math. It's easy to remember and doesn’t require a calculator.

This algorithm uses the formula: (d + m + y + [y/4] + c) mod 7

Here, d is the day, m is the month, y is the year, and c is the century number.

Each weekday is assigned a number using modulo 7. For instance:

  • Sunday = 1
  • Monday = 2
  • ... and so on.

What Do We Know?

Common years have 365 days, and leap years have 366. Each week has 7 days. The modulo properties help identify patterns in calendar dates.

Since 365 mod 7 = 1, each new year starts a day ahead of the previous year (unless it's a leap year, which adds one more day shift).

Some months start on the same day of the week. For instance, in a common year, April and July start on the same day.


Corresponding Months

In common years:

  • January and October
  • February, March, and November
  • April and July
  • No month matches with August
Common year alignment

In leap years:

  • January, April, and July
  • February and August
  • March and November
  • No month matches with October
Leap year alignment
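These alignments are easy to verify with Python's standard calendar module; the years 2023 (a common year) and 2024 (a leap year) below are just examples:

import calendar

# calendar.weekday() returns 0 = Monday ... 6 = Sunday
# Common year (2023): April and July start on the same weekday.
print(calendar.weekday(2023, 4, 1) == calendar.weekday(2023, 7, 1))  # True

# Leap year (2024): January, April, and July all start on the same weekday.
print(calendar.weekday(2024, 1, 1) == calendar.weekday(2024, 4, 1) == calendar.weekday(2024, 7, 1))  # True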

Tomohiko Sakamoto's Optimization

Tomohiko Sakamoto devised a compact variant of this approach that folds the month offsets and leap-year handling into a single expression. The month offsets come from cumulative day counts. For example:

  • January (31 days) + February (28 days) = 59 days → 59 mod 7 = 3
  • So, in a common year, March 1 falls 3 weekdays later than January 1

To accommodate leap years accurately, use the formula:

y/4 - y/100 + y/400

This adjusts the extra day added every four years, removed every 100 years, and re-added every 400 years.

For January and February (months less than March), we subtract one from the year so that a possible leap day is counted with the previous year:

y -= m < 3

We also use a predefined table of month offsets:

{0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4}

C++ Code

// Returns the day of the week for y-m-d: 0 = Sunday, 1 = Monday, ..., 6 = Saturday.
int dow(int y, int m, int d)
{
  static int t[] = {0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4};
  y -= m < 3;  // treat January and February as part of the previous year
  return (y + y/4 - y/100 + y/400 + t[m-1] + d) % 7;
}

Python Code

def day_of_week(year, month, day):
    # Returns 0 = Sunday, 1 = Monday, ..., 6 = Saturday.
    t = [0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4]
    year -= month < 3  # treat January and February as part of the previous year
    return (year + year // 4 - year // 100 + year // 400 + t[month - 1] + day) % 7
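As a quick sanity check of the Python version (using the 0 = Sunday convention):

print(day_of_week(2000, 1, 1))   # 6 -> Saturday
print(day_of_week(2024, 2, 29))  # 4 -> Thursday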

This algorithm provides a quick and efficient way to calculate the day of the week for any date using simple arithmetic and a small lookup table.

Python Diaries Chapter 4: Functions (Part 1)

If you’re new to Python, it’s recommended to read the Python Diaries - Chapter 1, Chapter 2, and Chapter 3 as we build on some concepts covered there.

What is a Function?

A function is a block of organized, reusable code that performs a single, related task. It enables modularity and code reuse. Think of a function as a black box—it takes input, processes it, and produces output.

Function Black Box

def name_of_function(arguments):
    # your code
    return result  # returning is optional

Example:

def sum_of_numbers(a, b):
    c = a + b
    return c

print(sum_of_numbers(2, 3))

Output: 5

Three main parts of a function:

  1. Input
  2. Logic/Code
  3. Output

Input

Data provided to the function for processing. It can be passed via:

  1. Parameters/arguments
  2. Global variables
  3. Memory location

Logic/Code

Statements that define what the function does with the input.

Logic can range from a single line to hundreds of lines depending on the task.

Output

The result of processing the input. Output can be provided via:

  1. Return statement
  2. Modification of a memory location
  3. Print statement

Variable Scope

  • Local variables: Defined inside a function and inaccessible outside it.
  • Global variables: Declared outside all functions. They can be read inside a function directly; to assign to one, declare it with the global keyword.

Note: Global scope is module-wide. Built-in scope includes variables/literals defined by Python and usable anywhere in a program.
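A minimal sketch of the difference (the variable names are only for illustration):

counter = 10  # global variable

def read_only():
    # A global can be read inside a function without any declaration.
    print(counter)

def rebind():
    global counter  # needed only because we assign to the global name
    counter = counter + 1

def shadow():
    counter = 0  # a new local variable; the global is untouched
    return counter

read_only()      # 10
rebind()
print(counter)   # 11
print(shadow())  # 0
print(counter)   # still 11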

Returning Multiple Values

You can return multiple values from a function using a comma-separated list.

Example:

Given an array, find number K and return its index and the previous element (assume K is not at index 0 and elements are distinct).

array = [5, 6, 7, 12, 2, 4, 3, 9, 25, 29]
k = 29

def searching():
    global array
    global k
    array.sort()
    index = 0
    while index < len(array):
        if array[index] == k:
            return index, array[index - 1]
        index += 1

index, previous_value = searching()
print(index)
print(previous_value)

Output:

9
25

sort() vs sorted()

  • sort() modifies the original list in-place.
  • sorted() returns a new sorted list.

Example:

array = [3, 4, 1, 2, 5]
new_arr = sorted(array)

print("array ->", array)
print("new_arr ->", new_arr)

array_2 = [2, 3, 5, 4, 1]
array_2.sort()
print("array_2 ->", array_2)

Output:

array -> [3, 4, 1, 2, 5]
new_arr -> [1, 2, 3, 4, 5]
array_2 -> [1, 2, 3, 4, 5]

Returning multiple values as a tuple: when a function returns several comma-separated values, Python packs them into a tuple. If you assign the result to a single variable on the left-hand side, that variable holds the whole tuple.
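For example, reusing searching() from above, the two return values can also be captured in a single variable:

result = searching()
print(result)         # (9, 25) -- a tuple
print(type(result))   # <class 'tuple'>
index, previous_value = result  # unpacking still works later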

That wraps up the basics of functions in Python. More advanced topics will follow in the upcoming articles. Stay tuned!

Designing a Logistic Regression Model

Data is key for making important business decisions. Depending upon the domain and complexity of the business, there can be many different purposes that a regression model can solve. It could be to optimize a business goal or find patterns that cannot be observed easily in raw data.

Even with a fair understanding of math and statistics, simple analytics tools only let you make intuitive observations. As more data is involved, it becomes difficult to visualize correlations between multiple variables. You then have to rely on regression models to find patterns in the data that you can't find manually.

In this article, we will explore different components of a data model and learn how to design a logistic regression model.

1. Logistic equation, DV, and IDVs

Before we start designing a regression model, it is important to decide the business goal that you want to achieve with the analysis. It mostly revolves around minimizing or maximizing a specific (output) variable, which will be our dependent variable (DV).

You must also understand the different metrics that are available or (in some cases) metrics that can be controlled to optimize the output. These metrics are called predictors or independent variables (IDVs).

A generalized linear regression equation can be represented as follows:

Y = β₀ + β₁X₁ + β₂X₂ + ... + βₚXₚ

where Xᵢ are the IDVs, βᵢ are the coefficients of the IDVs and β₀ is intercept.

This equation can be represented as follows:

Yᵢⱼ = Σₐ₌₁ᵖ Xᵢₐ βₐⱼ + eᵢⱼ

Logistic regression can be considered a special case of the generalized linear model in which the DV is categorical (typically binary) rather than continuous. The output is a probability and lies between 0 and 1. Therefore, the equation of logistic regression can be represented in exponential form as follows:

Y = 1 / (1 + e⁻ᶠ⁽ˣ⁾)

which is equivalent to:

Y = 1 / (1 + e⁻(β₀ + β₁X₁ + β₂X₂ + ... + βₚXₚ))

As you can see, each coefficient represents the contribution of its IDV to the DV: for a positive Xᵢ, a large positive βᵢ increases the output probability, whereas a large negative βᵢ decreases it.
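As a concrete illustration, here is a minimal Python sketch of the logistic equation above; the coefficient values are made up purely for demonstration:

import math

def logistic(x, beta0, betas):
    # P(Y = 1) for predictors x, given intercept beta0 and coefficients betas
    f = beta0 + sum(b * xi for b, xi in zip(betas, x))
    return 1 / (1 + math.exp(-f))

# A positive coefficient pushes the probability up, a negative one pushes it down.
print(logistic([1.0, 2.0], beta0=-0.5, betas=[0.8, -0.3]))  # ~0.43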

2. Identification of data

Once we fix our IDVs and DV, it is important to identify the data that is available at the point of decisioning. A relevant subset of this data can be the input of our equation, which will help us calculate the DV.

Two important aspects of the data are:

  • Timeline of the data
  • Mining the data

For example, if the data captures visits on a website, which has undergone suitable changes after a specific date, you might want to skip the past data for better decisioning. It is also important to rule out any null values or outliers, as required.

This can be achieved with a simple piece of R code.

First, we load the data from the .csv file and store it as training.data. We shall use a sample IMDb dataset, which is available on Kaggle.

> training.data <- read.csv('movie_data.csv',header=T,na.strings=c(""))
> sapply(training.data,function(x) sum(is.na(x)))
color      director_name      num_critic_for_reviews      duration       director_facebook_likes  
19         104                50                          15             104
> training.data$imdb_score[is.na(training.data$imdb_score)] <- mean(training.data$imdb_score, na.rm=T)

You can also think of additional variables that can have a significant contribution to the DV of the model. Some variables may have a lesser contribution towards the final output. If the variables that you are thinking of are not readily available, then you can create them from the existing database.

When dealing with non-real-time data, we should also be clear about how quickly it is captured, so that it gives a better understanding of the IDVs.

3. Analyzing the model

Building the model requires the following:

  • Identifying the training data on which you can train your model
  • Programming the model in any programming language, such as R, Python etc.

Once the model is created, you must validate it and measure its efficiency on existing data, which is of course different from the training data. To put it simply, this step estimates how your model will perform.

One efficient way of splitting the data into training and test sets is by timeline. Assume that you have data from January to December 2015. You can train the model on data from January to October 2015 and then use it to predict the output for November and December. Though you already know the output for those two months, you still run the model on them to validate it.

You can arrange the data in chronological order and split it into training and test sets as follows:

> train <- data[1:X,] 
> test <- data[(X+1):last_value,]

Fitting the model gives us the coefficient of each predictor, along with its z-value, p-value, and so on. This tells us how well the equation's estimated values match the observed values.

We use the glm() function in R to fit the model. Here's how you can use it. The 'family' parameter that is used here is 'binomial'. You can also use 'poisson' depending upon the nature of the DV.

> reg_model <- glm(Y ~., family=binomial(link='logit'), data=train)
> summary(reg_model)
Call:  glm(formula = Y ~ ., family = binomial(link = "logit"), data = train)

Let's use the following dataset which has 3 variables:

> trees
   Girth Height Volume
1    8.3     70   10.3
2    8.6     65   10.3
3    8.8     63   10.2
4   10.5     72   16.4
...

Fitting the model (here using glm2(), a drop-in replacement for glm() with more robust convergence):

> library(glm2)
> data(trees)
> plot(Volume ~ Girth, data=trees)
> model <- glm2(Volume ~ Girth, family=poisson(link="identity"), data=trees)
> abline(model)
> model
Call:  glm2(formula = Volume ~ Girth, family = poisson(link = "identity"), data = trees)

Coefficients:
(Intercept)        Girth  
    -30.874        4.608  

Degrees of Freedom: 30 Total (i.e. Null);  29 Residual
Null Deviance:     247.3 
Residual Deviance: 16.54 	AIC: Inf

Fitting the regression model

In this article, we have covered the importance of identifying the business objectives that should be optimized and the IDVs that can help us achieve this optimization. You also learned the following:

  • Some of the basic functions in R that can help us analyze the model
  • The glm() function that is used to fit the model
  • Finding the weights (coefficients) of the predictors along with their standard errors

In our next article, we will use larger data-sets and validate the model that we will build by using different parameters like the KS test.

Scaling database with Django and HAProxy

MySQL – Primary data store

At HackerEarth, we use MySQL as the primary data store. We have experimented with a few NoSQL databases along the way, but the results have been largely unsatisfactory; distributed databases like MongoDB or CouchDB didn't prove scalable or stable enough for us. Right now, our status monitoring services use RethinkDB for storing data in JSON format, and that is the extent of our NoSQL usage for now.

With the growing amount of data and the number of requests every second, the database becomes a major bottleneck to scaling the application dynamically. If at this point you are thinking that there are mythical (cloud) providers who can handle the growing needs of your application, you couldn't be more wrong. To make matters worse, you can't spin up a new database whenever you want, the way you do with your frontend servers. Achieving horizontal scalability at all levels requires massive re-architecture of the system while remaining completely transparent to the end user. This is what part of our team has focused on in the last few months, resulting in high uptime and availability.

It was becoming difficult for the master (and only) MySQL database to handle the heavy load. We thought we would delay any scaling at this level for as long as the single database could handle the load, and work on other high-priority tasks instead. But that wasn't to be, and we experienced some downtime. After that we re-architected our application, sharded the database, wrote database routers and wrappers on top of the Django ORM, put an HAProxy load balancer in front of the MySQL databases, and refactored our codebase to optimize it significantly.

The image below shows a part of the architecture we have at HackerEarth. Many other components have been omitted for simplicity.

Database slaves and router

The idea was to create read replicas and route the write queries to the master database and read queries to the slave (read replica) databases. But that was not simple either. We couldn't, and wouldn't want to, route all the read queries to slaves. Some read queries couldn't afford the stale data that comes as a part of database replication. Though the staleness might be on the order of just a few seconds, this small set of read queries couldn't afford even that. The first database router was simple:
class MasterSlaveRouter(object):
    """
    Represents the router for database lookup.
    """
    def __init__(self):
        if settings.LOCAL:
            self._SLAVES = []
        else:
            self._SLAVES = SLAVES

    def db_for_read(self, model, **hints):
        """
        Reads go to default for now.
        """
        return 'default'

    def db_for_write(self, model, **hints):
        """
        Writes always go to default.
        """
        return 'default'

    def allow_relation(self, obj1, obj2, **hints):
        """
        Relations between objects are allowed if both objects are
        in the default/slave pool.
        """
        db_list = ('default',)
        for slave in zip(self._SLAVES):
            db_list += slave

        if obj1._state.db in db_list and obj2._state.db in db_list:
            return True
        return None

    def allow_migrate(self, db, model):
        return True

All the write and read queries go to the master database by default, which you might think is odd. Instead, we wrote get_from_slave(), filter_from_slave(), get_object_or_404_from_slave(), get_list_or_404_from_slave(), etc. as part of the Django ORM in our custom managers to read from a slave. So whenever we know we can read from slaves, we call one of these functions. This was the sacrifice made for that small number of read queries which couldn't afford stale data. Custom database manager to fetch data from a slave:
# proxy_slave_X is the HAProxy endpoint, which does load balancing
# over all the databases.
SLAVES = ['proxy_slave_1', 'proxy_slave_2']

def get_slave():
    """
    Returns a slave randomly from the list.
    """
    if settings.LOCAL:
        db_list = []
    else:
        db_list = SLAVES

    return random.choice(db_list)

class BaseManager(models.Manager):
    # Wrappers to read from slave databases.
    def get_from_slave(self, *args, **kwargs):
        self._db = get_slave()
        return super(BaseManager, self).get_query_set().get(*args, **kwargs)

    def filter_from_slave(self, *args, **kwargs):
        self._db = get_slave()
        return super(BaseManager, self).get_query_set().filter(
            *args, **kwargs).exclude(Q(hidden=True) | Q(trashed=True))
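A minimal usage sketch, assuming a hypothetical Question model (its name and fields are illustrative, not taken from the actual codebase) that uses BaseManager as its default manager:

from django.db import models

class Question(models.Model):
    title = models.CharField(max_length=255)
    hidden = models.BooleanField(default=False)
    trashed = models.BooleanField(default=False)

    objects = BaseManager()

# Reads that can tolerate slightly stale data go through the slave wrappers;
# everything else uses the normal manager methods and hits the master.
question = Question.objects.get_from_slave(id=42)
questions = Question.objects.filter_from_slave(title__icontains='django')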

HAProxy for load balancing

Now there could be many slaves at a time. One option was to update the database configuration in settings whenever we added or removed a slave. But that was cumbersome and inefficient. A better way was to put an HAProxy load balancer in front of all the databases and let it detect which ones are up or down and route the read queries accordingly. This would mean never editing the database configuration in our codebase — just what we wanted. A snippet of /etc/haproxy/haproxy.cfg:
listen mysql *:3305
    mode tcp
    balance roundrobin
    option mysql-check user haproxyuser
    option log-health-checks
    server db00 db00.xxxxx.yyyyyyyyyy:3306 check port 3306 inter 1000
    server db01 db01.xxxxx.yyyyyyyyyy:3306 check port 3306 inter 1000
    server db02 db02.xxxxx.yyyyyyyyyy:3306 check port 3306 inter 1000

The configuration for the slave in settings now looked like this:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'db_name',
        'USER': 'username',
        'PASSWORD': 'password',
        'HOST': 'db00.xxxxx.yyyyyyyyyy',
        'PORT': '3306',
    },
    'proxy_slave_1': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'db_name',
        'USER': 'username',
        'PASSWORD': 'password',
        'HOST': '127.0.0.1',
        'PORT': '3305',
    },
    'analytics': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'db_name',
        'USER': 'username',
        'PASSWORD': 'password',
        'HOST': 'db-analytics.xxxxx.yyyyyyyyyy',
        'PORT': '3306',
    },
}

But there is a caveat here too. If you spin up a new server whose HAProxy configuration contains endpoints that don't exist, HAProxy will throw an error and won't start, making the slave useless. It turns out there is no easy solution to this; haproxy.cfg should contain only existing server endpoints at startup. The solution, then, was to let the webserver update its HAProxy configuration from a central location whenever it starts. We wrote a simple script in Fabric to do this. Besides, the webserver already updates its binary when it is spun up from an old image.
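The original script isn't shown in this post, but a rough sketch of such a Fabric (1.x) task might look like the following; the S3 path and service command below are placeholders rather than the actual configuration:

from fabric.api import task, sudo

@task
def refresh_haproxy():
    """
    Pull the latest haproxy.cfg from a central location and reload HAProxy.
    """
    # Placeholder source; in practice this could be S3, a config server, etc.
    sudo('aws s3 cp s3://example-config-bucket/haproxy.cfg /etc/haproxy/haproxy.cfg')
    # Reload so that only endpoints that actually exist stay configured.
    sudo('service haproxy reload')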

Database sharding

Next, we sharded the database. We created another database — analytics. It stores all the computed data, and it forms a major part of read queries. All the queries to the analytics database are routed using the following router:
class AnalyticsRouter(object):
    """
    Represents the router for analytics database lookup.
    """
    def __init__(self):
        if settings.LOCAL:
            self._SLAVES = []
            self._db = 'default'
        else:
            self._SLAVES = []
            self._db = 'analytics'

    def db_for_read(self, model, **hints):
        """
        All reads go to analytics for now.
        """
        if model._meta.app_label == 'analytics':
            return self._db
        else:
            return None

    def db_for_write(self, model, **hints):
        """
        Writes always go to analytics.
        """
        if model._meta.app_label == 'analytics':
            return self._db
        else:
            return None

    def allow_relation(self, obj1, obj2, **hints):
        """
        Relations between objects are allowed if both objects are
        in the default/slave pool.
        """
        if obj1._meta.app_label == 'analytics' or \
                obj2._meta.app_label == 'analytics':
            return True
        else:
            return None

    def allow_migrate(self, db, model):
        if db == self._db:
            return model._meta.app_label == 'analytics'
        elif model._meta.app_label == 'analytics':
            return False
        else:
            return None

To enable the two routers, we need to add them in our global settings:
DATABASE_ROUTERS = ['core.routers.AnalyticsRouter', 'core.routers.MasterSlaveRouter']


Here the order of routers is important. All queries for analytics models are routed to the analytics database, and all other queries go to the master database or its slaves according to the nature of the query. For now, we have not put slaves behind the analytics database, but as usage grows that will be fairly straightforward to do.

At the end, we had an architecture where we could spin up new read replicas and route queries fairly simply, with a high-performance load balancer in front of the databases. All this has resulted in much higher uptime and stability for our application, and we can focus more on what we love to do: building products for programmers. We already had an automated deployment system in place, which made experimentation easier and enabled us to test everything thoroughly. The refactoring and optimization of the codebase and architecture also cut our server count by more than half. This has been a huge win for us, and we are now focusing on rolling out exciting products in the next few weeks. Stay tuned!

I would love to know how others have solved similar problems. Do give suggestions and point out potential quirks.

P.S. You might be interested in The HackerEarth Data Challenge that we are running.

Follow me @vivekprakash. Write to me at vivek@hackerearth.com.

This post was originally written for the HackerEarth Engineering blog by Vivek Prakash.

Starting with Python

Thinking about starting with Python but unsure where to begin? Maybe you’re wondering if your current system setup is enough to get started.

Before choosing where to code, ask yourself two essential questions:

  • What environment are you most comfortable with—Mac, Windows, or Linux?
  • What is your end goal—building desktop apps, web apps, or something else?

Let’s look at the available options from these perspectives:

Linux

Python was developed on Unix-like systems and runs seamlessly on Linux. Working with multiple Python versions is easier thanks to PEP 394.

  • Ubuntu: Python 2.7 and 3.5 are available by default on Ubuntu 16.04. For older versions, refer to this guide to install newer versions.
  • Arch: Comes with Python 3.5 by default, accessible via the python command.
  • RedHat/CentOS: Ships with Python 2.7. For minimal Unix distributions, Python can be compiled from source.

Common Editors: Vim and Emacs are popular in Unix/Linux environments.

Windows

Download Python from the official website. After installation, set the system path to access Python via the command line.

Thanks to Steve Dower’s efforts, Python on Windows now supports pip and virtualenv very well. Multiple Python versions can be handled using batch scripts. Refer to this article for more info.

Popular Editor: PyCharm

Mac

Mac OS X 10.8+ comes with Python 2.7 pre-installed. GUI execution may have some quirks but command-line execution and editing with Vim or Emacs works well.

Anaconda

If you're into data science, Anaconda provides a streamlined Python setup with “conda” for managing packages and environments.

PyPy

PyPy is a fast alternative to CPython, using a Just-in-Time compiler. It supports various systems and is great for performance-heavy applications.

Starting the Python Interpreter

To start coding, install Python and open a terminal. Type python and press Enter. You’ll see a prompt like this:

Python prompt

At the prompt, type:

print("Hello World.")
Print Hello World

Congratulations! You’ve written your first Python program.

Python on Raspberry Pi and Microcontrollers

If you're into hardware, explore MicroPython or PyMite.

Python is the primary language on the Raspberry Pi. A common first example is checking whether a button is pressed from Python.
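A minimal sketch of such a button check with the RPi.GPIO library might look like this (the pin number is arbitrary and used only for illustration):

import time
import RPi.GPIO as GPIO

BUTTON_PIN = 17  # arbitrary BCM pin, used only for illustration

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    while True:
        if GPIO.input(BUTTON_PIN) == GPIO.LOW:  # pulled up, so LOW means pressed
            print("Button pressed")
        time.sleep(0.1)
finally:
    GPIO.cleanup()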

Whichever system you use—Windows, Linux, or Mac—Python has you covered.


Continuous Deployment System

This is one of the coolest and most important things we recently built at HackerEarth.

What's so cool about it? Just have a little patience, you will soon find out. But make sure you read till the end :)

I hope to provide valuable insights into the implementation of a Continuous Deployment System (CDS).

At HackerEarth, we iterate over our product quickly and roll out new features as soon as they are production-ready. In the last two weeks, we deployed 100+ commits in production, and a major release comprising 150+ commits is scheduled for launch within a few days. Those commits consist of changes to the backend app, website, static files, database, and so on.

We have over a dozen different types of servers running, for example, webserver, code-checker server, log server, wiki server, realtime server, NoSQL server, etc. All of them are running on multiple EC2 instances at any point in time. Our codebase is still tightly integrated as one single project with many different components required for each server. When there are changes to the codebase, you need to update all the related dedicated servers and components when deploying in production. Doing that manually would have just driven us crazy and would have been a total waste of time!

Look at the table of commits deployed on a single day.


And with such speed, we needed an automated deployment system along with automated testing. Our implementation of CDS helped the team roll out features in production with just a single command: git push origin master. Another reason to use CDS is that we are trying to automate everything, and I see us going in the right direction.

CDS Model

The process begins with the developer pushing a bunch of commits from their master branch to a remote repository, which in our case is set up on Bitbucket. We have set up a POST hook on Bitbucket, so as soon as Bitbucket receives commits from the developer, it generates a payload (containing information about the commits) and sends it to the toolchain server.

The toolchain server backend receives the payload and filters commits by branch, discarding any commit that is not from the master branch as well as any merge commit.


def filter_commits(branch=settings.MASTER_BRANCH, all_commits=[]):
    """
    Filter commits by branch
    """
    commits = []

    # Reverse commits list so that we have branch info in first commit.
    all_commits.reverse()

    for commit in all_commits:
        if commit['branch'] is None:
            parents = commit['parents']
            # Ignore merge commits for now
            if len(parents) > 1:
                # It's a merge commit and
                # we don't know what to do yet!
                continue

            # Check if we just stored the child commit.
            for lcommit in commits:
                if commit['node'] in lcommit['parents']:
                    commit['branch'] = branch
                    commits.append(commit)
                    break
        elif commit['branch'] == branch:
            commits.append(commit)

    # Restore commits order
    commits.reverse()
    return commits


Filtered commits are then grouped intelligently using a file dependency algorithm.
def group_commits(commits):
    """
    Creates groups of commits based on file dependency algorithm
    """
    # List of groups
    # Each group is a list of commits
    # In list, commits will be in the order they arrived
    groups_of_commits = []

    # Visited commits
    visited = {}

    # Store order of commits in which they arrived
    # Will be used later to sort commits inside each group
    for i, commit in enumerate(commits):
        commit['index'] = i

    # Loop over commits
    for commit in commits:
        queue = deque()

        # This may be one of the groups in groups_of_commits,
        # if not empty in the end
        commits_group = []

        commit_visited = visited.get(commit['raw_node'], None)
        if not commit_visited:
            queue.append(commit)

        while len(queue):
            c = queue.popleft()
            visited[c['raw_node']] = True
            commits_group.append(c)
            dependent_commits = get_dependent_commits_of(c, commits)

            for dep_commit in dependent_commits:
                commit_visited = visited.get(dep_commit['raw_node'], None)
                if not commit_visited:
                    queue.append(dep_commit)

        if len(commits_group) > 0:
            # Remove duplicates
            nodes = []
            commits_group_new = []
            for commit in commits_group:
                if commit['node'] not in nodes:
                    nodes.append(commit['node'])
                    commits_group_new.append(commit)
            commits_group = commits_group_new

            # Sort list using index key set earlier
            commits_group_sorted = sorted(commits_group, key=lambda k: k['index'])
            groups_of_commits.append(commits_group_sorted)

    return groups_of_commits


The top commit of each group is sent for testing to the integration test server via RabbitMQ. At first, I wrote code that sent every commit for testing, but it was too slow. So Vivek suggested that I group the commits from the payload and run tests only on the top commit of each group, which drastically reduced the number of times tests are run.

Integration tests are run on the integration test server, on a separate branch called test. Commits are cherry-picked from master onto the test branch. The integration test server is a simulated setup that replicates production behavior. If the tests pass, the commits are put in a release queue, from where they are released to production. Otherwise, the test branch is rolled back to the previous stable commit and clean-up actions are performed, including notifying the developer whose commits failed the tests.

Git Branch Model

We have been using three branches — master, test, and release. Developers push their code to master, and this branch can be unstable. The test branch is for the integration test server, and the release branch is for the production servers. The release and test branches move in parallel and are always stable. As we write more tests, the uncertainty of a bad commit being deployed to production will reduce exponentially.

Django Models

Each commit (or revision) is stored in the database. This data is helpful in many circumstances, like finding previously failed commits, relating commits to each other using the file dependency algorithm, monitoring deployment, etc. Following are the Django models used:

  • Revision - commit_hash, commit_author, etc.
  • Revision Status - revision_id, test_passed, deployed_on_production, etc.
  • Revision Files - revision_id, filepath
  • Revision Dependencies

When the top commit of each group is passed to the integration test server, we first find its dependencies, that is, previously failed commits identified using the file dependency algorithm, and save them in the Revision Dependencies model so that we can query the database directly the next time.
def get_dependencies(revision_obj):
    dependencies = set()
    visited = {}

    queue = deque()
    filter_id = revision_obj.id
    queue.append(revision_obj)

    while len(queue):
        rev = queue.popleft()
        visited[rev.id] = True
        dependencies.add(rev)
        dependent_revs = get_all_dependent_revs(rev, filter_id)

        for rev in dependent_revs:
            r_visited = visited.get(rev.id, None)
            if not r_visited:
                queue.append(rev)

    # Remove the revision from its own dependencies set.
    # Makes sense, right?
    dependencies.remove(revision_obj)
    dependencies = list(dependencies)
    dependencies = sorted(dependencies, key=attrgetter('id'))
    return dependencies

def get_all_dependent_revs(rev, filter_id):
    deps = rev.health_dependency.all()
    if len(deps) > 0:
        return deps

    files_in_rev = rev.files.all()
    files_in_rev = [f.filepath for f in files_in_rev]

    reqd_revisions = Revision.objects.filter(files__filepath__in=files_in_rev,
                                             id__lt=filter_id,
                                             status__health_status=False)
    return reqd_revisions
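For reference, a minimal sketch of what the models described above could look like; the exact field names and types are not shown in this post, so the ones below are illustrative and only mirror the attributes the snippets rely on:

from django.db import models

class Revision(models.Model):
    raw_node = models.CharField(max_length=40)   # full commit hash
    node = models.CharField(max_length=12)       # short commit hash
    commit_author = models.CharField(max_length=255)
    health_dependency = models.ManyToManyField('self', symmetrical=False)

class RevisionStatus(models.Model):
    revision = models.OneToOneField(Revision, related_name='status', on_delete=models.CASCADE)
    test_passed = models.BooleanField(default=False)
    health_status = models.BooleanField(default=True)
    deployed_on_production = models.BooleanField(default=False)

class RevisionFile(models.Model):
    revision = models.ForeignKey(Revision, related_name='files', on_delete=models.CASCADE)
    filepath = models.CharField(max_length=500)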

As we saw earlier in the Overview section, these commits are then cherry-picked onto the test branch from the master branch, and the process continues.

Deploying to Production

Commits that passed integration tests are now ready to be deployed. There are a few things to consider when deploying code to production, such as restarting webserver, deploying static files, running database migrations, etc. The toolchain code intelligently decides which servers to restart, whether to collect static files or run database migrations, and which servers to deploy on based on what changes were done in the commits. You might have noticed we do all this on the basis of types and categories of files changed/modified/deleted in the commits to be released. You might also have noted that we control deployment to production and test servers from the toolchain server (that's the one which receives payload from bitbucket). We use fabric to achieve this. A great tool indeed for executing remote administrative tasks!
from fabric.api import run, env, task, execute, parallel, sudo

@task
def deploy_prod(config, **kwargs):
    """
    Deploy code on production servers.
    """
    revision = kwargs['revision']
    commits_to_release = kwargs['commits_to_release']

    revisions = []
    for commit in commits_to_release:
        revisions.append(Revision.objects.get(raw_node=commit))

    result = init_deploy_static(revision, revisions=revisions, config=config,
                                commits_to_release=commits_to_release)
    is_restart_required = toolchain.deploy_utils.is_restart_required(revisions)
    if result is True:
        init_deploy_default(config=config, restart=is_restart_required)
All these processes take about 2 minutes to deploy a group of commits, or a single push, on all machines. Our life is a lot easier; we don't worry anymore about pushing our code, and we can see our feature, bug fix, or anything else live in production in just a few minutes. Undoubtedly, this will also help us release new features without wasting much time. Deploying is now as simple as writing code and testing on a local machine. We also deployed the hundredth commit to production a few days ago using automated deployment, which stands testimony to the robustness of this system.

P.S. I am an undergraduate student at IIT-Roorkee. You can find me @LalitKhattar.

This post was originally written for the HackerEarth Engineering blog by Lalit Khattar, Summer Intern 2013 @HackerEarth

Forecasting Tech Hiring Trends For 2023 With 6 Experts

2023 is here, and it is time to look ahead. Start planning your tech hiring needs as per your business requirements, revamp your recruiting processes, and come up with creative ways to land that perfect “unicorn candidate”!

Right? Well, jumping in blindly without heeding what this year holds for you can be a mistake. So before you put together your plans, ask yourselves this—What are the most important 2023 recruiting trends in tech hiring that you should be prepared for? What are the predictions that will shape this year?

We went around and posed three important questions to industry experts that were on our minds. And what they had to say certainly gave us some food for thought!

Before we dive in, allow me to introduce you to our expert panel of six, who had so much to say from personal experience!

Meet the Expert Panel

Radoslav Stankov

Radoslav Stankov has more than 20 years of experience working in tech. He is currently Head of Engineering at Product Hunt. Enjoys blogging, conference speaking, and solving problems.

Mike Cohen

Mike “Batman” Cohen is the Founder of Wayne Technologies, a Sourcing-as-a-Service company providing recruitment data and candidate outreach services to enhance the talent acquisition journey.

Pamela Ilieva

Pamela Ilieva is the Director of International Recruitment at Shortlister, a platform that connects employers to wellness, benefits, and HR tech vendors.

Brian H. Hough

Brian H. Hough is a Web2 and Web3 software engineer, AWS Community Builder, host of the Tech Stack Playbook™ YouTube channel/podcast, 5-time global hackathon winner, and tech content creator with 10k+ followers.

Steve O'Brien

Steve O'Brien is Senior Vice President, Talent Acquisition at Syneos Health, leading a global team of top recruiters across 30+ countries in 24+ languages, with nearly 20 years of diverse recruitment experience.

Patricia (Sonja Sky) Gatlin

Patricia (Sonja Sky) Gatlin is a New York Times featured activist, DEI Specialist, EdTechie, and Founder of Newbies in Tech. With 10+ years in Higher Education and 3+ in Tech, she now works part-time as a Diversity Lead recruiting STEM professionals to teach gifted students.

Overview of the upcoming tech industry landscape in 2024

Continued emphasis on remote work and flexibility: As we move into 2024, the tech industry is expected to continue embracing remote work and flexible schedules. This trend, accelerated by the COVID-19 pandemic, has proven to be more than a temporary shift. Companies are finding that remote work can lead to increased productivity, a broader talent pool, and better work-life balance for employees. As a result, recruiting strategies will likely focus on leveraging remote work capabilities to attract top talent globally.

Rising demand for AI and Machine Learning Skills: Artificial Intelligence (AI) and Machine Learning (ML) continue to be at the forefront of technological advancement. In 2024, these technologies are expected to become even more integrated into various business processes, driving demand for professionals skilled in AI and ML. Companies will likely prioritize candidates with expertise in these areas, and there may be an increased emphasis on upskilling existing employees to meet this demand.

Increased focus on cybersecurity: With the digital transformation of businesses, cybersecurity remains a critical concern. The tech industry in 2024 is anticipated to see a surge in the need for cybersecurity professionals. Companies will be on the lookout for talent capable of protecting against evolving cyber threats and ensuring data privacy.

Growth in cloud computing and edge computing: Cloud computing continues to grow, but there is also an increasing shift towards edge computing – processing data closer to where it is generated. This shift will likely create new job opportunities and skill requirements, influencing recruiting trends in the tech industry.

Sustainable technology and green computing: The global emphasis on sustainability is pushing the tech industry towards green computing and environmentally friendly technologies. In 2024, companies may seek professionals who can contribute to sustainable technology initiatives, adding a new dimension to tech recruiting.

Emphasis on soft skills: While technical skills remain paramount, soft skills like adaptability, communication, and problem-solving are becoming increasingly important. Companies are recognizing the value of these skills in fostering innovation and teamwork, especially in a remote or hybrid work environment.

Diversity, Equity, and Inclusion (DEI): There is an ongoing push towards more diverse and inclusive workplaces. In 2024, tech companies will likely continue to strengthen their DEI initiatives, affecting how they recruit and retain talent.

6 industry experts predict the 2023 recruiting trends

#1 We've seen many important moments in the tech industry this year...

Rado: In my opinion, a lot of those will carry over. I felt this was a preparation year for what was to come...

Mike: I wish I had the crystal ball for this, but I hope that when the market starts picking up again...

Pamela: Quiet quitting has been here way before 2022, and it is here to stay if organizations and companies...

Pamela Ilieva, Director of International Recruitment, Shortlister

Also, read: What Tech Companies Need To Know About Quiet Quitting


Brian: Yes, absolutely. In the 2022 Edelman Trust Barometer report...

Steve: Quiet quitting in the tech space will naturally face pressure as there is a redistribution of tech talent...

Patricia: Quiet quitting has been around for generations—people doing the bare minimum because they are no longer incentivized...

Patricia Gatlin, DEI Specialist and Curator, #blacklinkedin

#2 What is your pro tip for HR professionals/engineering managers...

Rado: Engineering managers should be able to do "more-with-less" in the coming year.

Radoslav Stankov, Head of Engineering, Product Hunt

Mike: Well first, (shameless plug), be in touch with me/Wayne Technologies as a stop-gap for when the time comes.

Mike “Batman” Cohen, Founder of Wayne Technologies

It's in the decrease and increase where companies find the hardest challenges...

Pamela: Remain calm – no need to “add fuel to the fire”!...

Brian: We have to build during the bear markets to thrive in the bull markets.

Companies can create internal hackathons to exercise creativity...


Also, read: Internal Hackathons - Drive Innovation And Increase Engagement In Tech Teams


Steve: HR professionals facing a hiring freeze will do well to “upgrade” processes, talent, and technology aggressively during downtime...

Steve O'Brien, Senior Vice President, Talent Acquisition at Syneos Health

Patricia: Talk to hiring managers in all your departments. Ask, what are the top 3-5 roles they are hiring for in the new year?...


Also, watch: 5 Recruiting Tips To Navigate The Hiring Freeze With Shalini Chandra, Senior TA, HackerEarth


#3 What top 3 skills would you like HR professionals/engineering managers to add to their repertoire in 2023 to deal with upcoming challenges?


Rado: Prioritization, team time, and environment management.

I think "prioritization" and "team time" management are obvious. But what do I mean by "environment management"?

A productive environment is one of the key ingredients for a productive team. Look at where your team wastes most time, which can be automated. For example, end-to-end writing tests take time because our tools are cumbersome and undocumented. So let's improve this.

Mike: Setting better metrics/KPIs, moving away from LinkedIn, and sharing more knowledge.

  1. Metrics/KPIs: Become better at setting measurable KPIs and accountable metrics. They are not the same thing—it's like the Square and Rectangle. One fits into the other but they're not the same. Hold people accountable to metrics, not KPIs. Make sure your metrics are aligned with company goals and values, and that they push employees toward excellence, not mediocrity.
  2. Freedom from LinkedIn: This is every year, and will probably continue to be. LinkedIn is a great database, but it is NOT the only way to find candidates, and oftentimes, not even the most effective/efficient. Explore other tools and methodologies!
  3. Join the conversation: I'd love to see new names presenting at conferences and webinars, and new authors on the popular TA content websites. Everyone has things they can share—be a part of the community, not just a user of it. Join FB groups, write and post articles, and comment on other people's posts with more than 'Great article'. It's a great community, but it's only great because of the people who contribute to it—be one of those people.

Pamela: Resilience, leveraging data, and self-awareness.

  1. Resilience: A “must-have” skill for the 21st century due to constant changes in the tech industry. Face and adapt to challenges. Overcome them and handle disappointments. Never give up. This will keep HR people alive in 2023.
  2. Data skills: Get some data analyst skills. The capacity to transfer numbers into data can help you be a better HR professional, prepared to improve the employee experience and show your leadership team how HR is leveraging data to drive business results.
  3. Self-awareness: Allows you to react better to upsetting situations and workplace challenges. It is a healthy skill to cultivate – especially as an HR professional.

Also, read: Diving Deep Into The World Of Data Science With Ashutosh Kumar


Brian: Agility, resourcefulness, and empathy.

  1. Agility: Allows professionals to move with market conditions. Always be as prepared as possible for any situation to come. Be flexible based on what does or does not happen.
  2. Resourcefulness: Allows professionals to do more with less. It also helps them focus on how to amplify, lift, and empower the current teams to be the best they can be.
  3. Empathy: Allows professionals to take a more proactive approach to listening and understanding where all workers are coming from. Amid stressful situations, companies need empathetic team members and leaders alike who can meet each other wherever they are and be a support.

Steve: Negotiation, data management, and talent development.

  1. Negotiation: Wage transparency laws will fundamentally change the compensation conversation. We must ensure we are still discussing compensation early in the process. And not just “assume” everyone’s on the same page because “the range is published”.
  2. Data management and predictive analytics: Looking at your organization's talent needs as a casserole of indistinguishable components and demands will not be good enough. We must upgrade the accuracy and consistency of our data and the predictions we can make from it.

Also, read: The Role of Talent Intelligence in Optimizing Recruitment


  3. Talent development: We've been exploring the interplay between TA and TM for years. Now is the time to integrate your internal and external talent marketplaces and provide career experiences to people within your organization, not just those joining your organization.

Patricia: Technology, research, and relationship building.

  1. Technology: Get better at understanding the technology that’s out there. To help you speed up the process, track candidate experience, but also eliminate bias. Metrics are becoming big in HR.
  2. Research: Honestly, read more books. Many great thought leaders put out content about the “future of work”, understanding “Gen Z”, or “quiet quitting.” Dedicate work hours to understanding your ever-changing field.
  3. Relationship Building: Especially in your immediate communities. Most people don't know who you are or what exactly it is that you do. Build your personal brand and show what you are doing at your company to impact those closest to you. Create a referral funnel to get a pipeline going. When people want a job, you and your company ought to be top of mind. Also, tell the stories of the people who work there.

7 Tech Recruiting Trends To Watch Out For In 2024

The last couple of years transformed how the world works and the tech industry is no exception. Remote work, a candidate-driven market, and automation are some of the tech recruiting trends born out of the pandemic.

While accepting the new reality and adapting to it is the first step, keeping up with continuously changing hiring trends in technology is the bigger challenge right now.

What does 2024 hold for recruiters across the globe? What hiring practices would work best in this post-pandemic world? How do you stay on top of the changes in this industry?

The answers to these questions will paint a clearer picture of how to set up for success while recruiting tech talent this year.

7 tech recruiting trends for 2024


Recruiters, we’ve got you covered. Here are the tech recruiting trends that will change the way you build tech teams in 2024.

Trend #1—Leverage data-driven recruiting

Data-driven recruiting strategies are the answer to effective talent sourcing and a streamlined hiring process.

Talent acquisition leaders need to use real-time analytics like pipeline growth metrics, offer acceptance rates, quality and cost of new hires, and candidate feedback scores to reduce manual work, improve processes, and hire the best talent.

The key to capitalizing on talent market trends in 2024 is data. It enables you to analyze what’s working and what needs refinement, leaving room for experimentation.

Trend #2—Have impactful employer branding

98% of recruiters believe promoting company culture helps sourcing efforts as seen in our 2021 State Of Developer Recruitment report.

Having a strong employer brand that supports a clear Employer Value Proposition (EVP) is crucial to influencing a candidate’s decision to work with your company. Perks like upskilling opportunities, remote work, and flexible hours are top EVPs that attract qualified candidates.

A clear EVP builds a culture of balance, mental health awareness, and flexibility—strengthening your employer brand with candidate-first policies.

Trend #3—Focus on candidate-driven market

The pandemic drastically increased the skills gap, making tech recruitment more challenging. With the severe shortage of tech talent, candidates now hold more power and can afford to be selective.

Competitive pay is no longer enough. Use data to understand what candidates want—work-life balance, remote options, learning opportunities—and adapt accordingly.

Recruiters need to think creatively to attract and retain top talent.


Recommended read: What NOT To Do When Recruiting Fresh Talent


Trend #4—Have a diversity and inclusion oriented company culture

Diversity and inclusion have become central to modern recruitment. While urgent hiring can delay D&I efforts, long-term success depends on inclusive teams. Our survey shows that 25.6% of HR professionals believe a diverse leadership team helps build stronger pipelines and reduces bias.

McKinsey’s Diversity Wins report confirms this: top-quartile gender-diverse companies see 25% higher profitability, and ethnically diverse teams show 36% higher returns.

It's refreshing to see the importance of an inclusive culture increasing across all job-seeking communities, especially in tech. This reiterates that D&I is a must-have, not just a good-to-have.

—Swetha Harikrishnan, Sr. HR Director, HackerEarth

Recommended read: Diversity And Inclusion in 2022 - 5 Essential Rules To Follow


Trend #5—Embed automation and AI into your recruitment systems

With the rise of AI tools like ChatGPT, automation is being adopted across every business function—including recruiting.

Manual communication with large candidate pools is inefficient. In 2024, recruitment automation and AI-powered platforms will automate candidate nurturing and communication, providing a more personalized experience while saving time.

Trend #6—Conduct remote interviews

With 32.5% of companies planning to stay remote, remote interviewing is here to stay.

Remote interviews expand access to global talent, reduce overhead costs, and increase flexibility—making the hiring process more efficient for both recruiters and candidates.

Trend #7—Be proactive in candidate engagement

Delayed responses or lack of updates can frustrate candidates and impact your brand. Proactive communication and engagement with both active and passive candidates are key to successful recruiting.

As recruitment evolves, proactive candidate engagement will become central to attracting and retaining talent. In 2023 and beyond, companies must engage both active and passive candidates through innovative strategies and technologies like chatbots and AI-powered systems. Building pipelines and nurturing relationships will enhance employer branding and ensure long-term hiring success.

—Narayani Gurunathan, CEO, PlaceNet Consultants

Recruiting Tech Talent Just Got Easier With HackerEarth

Recruiting qualified tech talent is tough—but we’re here to help. HackerEarth for Enterprises offers an all-in-one suite that simplifies sourcing, assessing, and interviewing developers.

Our tech recruiting platform enables you to:

  • Tap into a 6 million-strong developer community
  • Host custom hackathons to engage talent and boost your employer brand
  • Create online assessments to evaluate 80+ tech skills
  • Use dev-friendly IDEs and proctoring for reliable evaluations
  • Benchmark candidates against a global community
  • Conduct live coding interviews with FaceCode, our collaborative coding interview tool
  • Guide upskilling journeys via our Learning and Development platform
  • Integrate seamlessly with all leading ATS systems
  • Access 24/7 support with a 95% satisfaction score

Recommended read: The A-Zs Of Tech Recruiting - A Guide


Staying ahead of tech recruiting trends, improving hiring processes, and adapting to change is the way forward in 2024. Take note of the tips in this article and use them to build a future-ready hiring strategy.

Ready to streamline your tech recruiting? Try HackerEarth for Enterprises today.

Code In Progress - The Life And Times Of Developers In 2021

Developers. Are they as mysterious as everyone makes them out to be? Is coding the only thing they do all day? Good coders work around the clock, right?

While developers are some of the most coveted talent out there, they also have the most myths being circulated. Most of us forget that developers too are just like us. And no, they do not code all day long.

We wanted to bust a lot of these myths and shed light on how the programming world looks through a developer’s lens in 2021—especially in the wake of a global pandemic. This year’s edition of the annual HackerEarth Developer Survey is packed with developers’ wants and needs when choosing jobs, major gripes with the WFH scenario, and the latest market trends to watch out for, among others.

Our 2021 report is bigger and better, with responses from 25,431 developers across 171 countries. Let’s find out what makes a developer tick, shall we?

Developer Survey

“Good coders work around the clock.” No, they don’t.

Busting the myth that developers spend the better part of their day coding, 52% of student developers said that they prefer to code for a maximum of 3 hours per day.

When not coding, devs swear by their walks as a way to unwind. When we asked devs the same question last year, they said they liked to indulge in indoor games like foosball. In 2021, going for walks has become the most popular method of de-stressing. We’re chalking it up to working from home and not having a chance to stretch their legs.

Staying ahead of the skills game

Following the same trend as last year, students (39%) and working professionals (44%) voted for Go as one of the most popular programming languages that they want to learn. The other programming languages that devs are interested in learning are Rust, Kotlin, and Erlang.

Programming languages that students are most skilled at are HTML/CSS, C++, and Python. Senior developers are more comfortable working with HTML/CSS, SQL, and Java.

How happy are developers

Employees from middle market organizations had the highest 'happiness index' of 7.2. Experienced developers who work at enterprises are marginally less happy in comparison to people who work at smaller companies.

However, happiness is not a binding factor for where developers work. Despite scoring the least on the happiness scale, working professionals would still like to work at enterprise companies and growth-stage startups.

What works when looking for work

Student devs (63%), who are just starting in the tech world, said a good career growth curve is a must-have. Working professionals can be wooed by offers of a good career path (69%) and compensation (68%).

One trend that has changed since last year is that at least 50% of students and working professionals alike care a lot more about ESOPs and positive Glassdoor reviews now than they did in 2020.


To know more about what developers want, download your copy of the report now!


We went a step further and organized an event with our CEO, Sachin Gupta, Radoslav Stankov, Head of Engineering at Product Hunt, and Steve O’Brien, President of Talent Solutions at Job.com to further dissect the findings of our survey.

Tips straight from the horse’s mouth

Steve highlighted how the information collated from the developer survey affects the recruiting community and how they can leverage this data to hire better and faster.

  • The insight where developer happiness is correlated to work hours didn’t find a significant difference between the cohorts. Devs working for less than 40 hours seemed marginally happier than those that clocked in more than 60 hours a week.
“This is an interesting data point, which shows that devs are passionate about what they do. You can increase their workload by 50% and still not affect their happiness. From a work perspective, as a recruiter, you have to get your hiring manager to understand that while devs never say no to more work, HMs shouldn’t overload the devs. Devs are difficult to source and burnout only leads to killing your talent pool, which is something that you do not want,” says Steve.
  • Roughly 45% of both student and professional developers learned how to code in college was another insight that was open to interpretation.
“Let’s look at it differently. Less than half of the surveyed developers learned how to code in college. There’s a major segment of the market today that is not necessarily following the ‘college degree to getting a job’ path. Developers are beginning to look at their skillsets differently and using various platforms to upskill themselves. Development is not about pedigree, it’s more about the potential to demonstrate skills. This is an interesting shift in the way we approach testing and evaluating devs in 2021.”

Rado contextualized the data from the survey to see what it means for the developer community and what trends to watch out for in 2021.

  • Node.js and AngularJS are the most popular frameworks among students and professionals.
“I was surprised by how many young students wanted to learn AngularJS, given that it’s more of an enterprise framework. Another thing that stood out to me was that the younger generation wants to learn technologies that are not necessarily cool like ExtJS (35%). This is good because people are picking technologies that they enjoy working with instead of just going along with what everyone else is doing. This also builds a more diverse technology pool.” — Rado
  • 22% of devs say ‘Zoom Fatigue’ is real and directly affects productivity.
“Especially for younger people who still haven’t figured out a routine to develop their skills, there is something I’d like you to try out. Start using noise-canceling headphones. They help keep distractions to a minimum. I find clutter-free working spaces to be an interesting concept as well.”

The last year and a half have been a doozy for developers everywhere, with a lot of things changing, and some things staying the same. With our developer survey, we wanted to shine the spotlight on skill-based hiring and market trends in 2021—plus highlight the fact that developers too have their gripes and happy hours.

Uncover many more developer trends for 2021 with Steve and Rado below:


Best Pre-Employment Assessments: Optimizing Your Hiring Process for 2024

In today's competitive talent market, attracting and retaining top performers is crucial for any organization's success. However, traditional hiring methods like relying solely on resumes and interviews may not always provide a comprehensive picture of a candidate's skills and potential. This is where pre-employment assessments come into play.

What is a pre-employment assessment?

Pre-employment assessments are standardized tests and evaluations administered to candidates before they are hired. These assessments can help you objectively measure a candidate's knowledge, skills, abilities, and personality traits, allowing you to make data-driven hiring decisions.

By exploring and evaluating the best pre-employment assessment tools and tests available, you can:

  • Improve the accuracy and efficiency of your hiring process.
  • Identify top talent with the right skills and cultural fit.
  • Reduce the risk of bad hires.
  • Enhance the candidate experience by providing a clear and objective evaluation process.

This guide will provide you with valuable insights into the different types of pre-employment assessments available and highlight some of the best tools to help you optimize your hiring process for 2024.

Why pre-employment assessments are key in hiring

While resumes and interviews offer valuable insights, they can be subjective and susceptible to bias. Pre-employment assessments provide a standardized and objective way to evaluate candidates, offering several key benefits:

  • Improved decision-making:

    By measuring specific skills and knowledge, assessments help you identify candidates who possess the qualifications necessary for the job.

  • Reduced bias:

    Standardized assessments mitigate the risks of unconscious bias that can creep into traditional interview processes.

  • Increased efficiency:

    Assessments can streamline the initial screening process, allowing you to focus on the most promising candidates.

  • Enhanced candidate experience:

    When used effectively, assessments can provide candidates with a clear understanding of the required skills and a fair chance to showcase their abilities.

Types of pre-employment assessments

There are various types of pre-employment assessments available, each catering to different needs and objectives. Here's an overview of some common types:

1. Skill Assessments:

  • Technical Skills: These assessments evaluate specific technical skills and knowledge relevant to the job role, such as programming languages, software proficiency, or industry-specific expertise. HackerEarth offers a wide range of validated technical skill assessments covering various programming languages, frameworks, and technologies.
  • Soft Skills: These employment assessments measure non-technical skills like communication, problem-solving, teamwork, and critical thinking, crucial for success in any role.

2. Personality Assessments:

These employment assessments can provide insights into a candidate's personality traits, work style, and cultural fit within your organization.

3. Cognitive Ability Tests:

These tests measure a candidate's general mental abilities, such as reasoning, problem-solving, and learning potential.

4. Integrity Assessments:

These employment assessments aim to identify potential risks associated with a candidate's honesty, work ethic, and compliance with company policies.

By understanding the different types of assessments and their applications, you can choose the ones that best align with your specific hiring needs and ensure you hire the most qualified and suitable candidates for your organization.

Leading employment assessment tools and tests in 2024

Choosing the right pre-employment assessment tool depends on your specific needs and budget. Here's a curated list of some of the top pre-employment assessment tools and tests available in 2024, with brief overviews:

  • HackerEarth:

    A comprehensive platform offering a wide range of validated skill assessments in various programming languages, frameworks, and technologies. It also allows for the creation of custom assessments and integrates seamlessly with various recruitment platforms.

  • SHL:

    Provides a broad selection of assessments, including skill tests, personality assessments, and cognitive ability tests. They offer customizable solutions and cater to various industries.

  • Pymetrics:

    Utilizes gamified assessments to evaluate cognitive skills, personality traits, and cultural fit. They offer a data-driven approach and emphasize candidate experience.

  • Wonderlic:

    Offers a variety of assessments, including the Wonderlic Personnel Test, which measures general cognitive ability. They also provide aptitude and personality assessments.

  • Harver:

    An assessment platform focusing on candidate experience with video interviews, gamified assessments, and skills tests. They offer pre-built assessments and customization options.

Remember: This list is not exhaustive, and further research is crucial to identify the tool that aligns best with your specific needs and budget. Consider factors like the types of assessments offered, pricing models, integrations with your existing HR systems, and user experience when making your decision.

Choosing the right pre-employment assessment tool

Rather than reviewing every tool in depth, focus on 2–3 key platforms. For each platform, explore:

  • Target audience: Who are their assessments best suited for (e.g., technical roles, specific industries)?
  • Types of assessments offered: Briefly list the available assessment categories (e.g., technical skills, soft skills, personality).
  • Key features: Highlight unique functionalities like gamification, custom assessment creation, or seamless integrations.
  • Effectiveness: Briefly mention the platform's approach to assessment validation and reliability.
  • User experience: Consider including user reviews or ratings where available.

Comparative analysis of assessment options

Rather than making an exhaustive feature-by-feature comparison, evaluate your shortlisted platforms against specific use cases:

  • Technical skills assessment:

    Compare HackerEarth and Wonderlic based on their technical skill assessment options, focusing on the variety of languages/technologies covered and assessment formats.

  • Soft skills and personality assessment:

    Compare SHL and Pymetrics based on their approaches to evaluating soft skills and personality traits, highlighting any unique features like gamification or data-driven insights.

  • Candidate experience:

    Compare Harver and Wonderlic based on their focus on candidate experience, mentioning features like video interviews or gamified assessments.

Additional tips:

  • Visit the platforms' official websites for detailed feature and pricing information.
  • Check reputable third-party review sites where users share their experiences with various tools.

Best practices for using pre-employment assessment tools

Integrating pre-employment assessments effectively requires careful planning and execution. Here are some best practices to follow:

  • Define your assessment goals:

    Clearly identify what you aim to achieve with assessments. Are you targeting specific skills, personality traits, or cultural fit?

  • Choose the right assessments:

    Select tools that align with your defined goals and the specific requirements of the open position.

  • Set clear expectations:

    Communicate the purpose and format of the assessments to candidates in advance, ensuring transparency and building trust.

  • Integrate seamlessly:

    Ensure your chosen assessment tool integrates smoothly with your existing HR systems and recruitment workflow.

  • Train your team:

    Equip your hiring managers and HR team with the knowledge and skills to interpret assessment results effectively.

Interpreting assessment results accurately

Assessment results offer valuable data points, but interpreting them accurately is crucial for making informed hiring decisions. Here are some key considerations:

  • Use results as one data point:

    Consider assessment results alongside other information, such as resumes, interviews, and references, for a holistic view of the candidate.

  • Understand score limitations:

    Don't solely rely on raw scores. Understand the assessment's validity and reliability and the potential for cultural bias or individual test anxiety.

  • Look for patterns and trends:

    Analyze results across different assessments and identify consistent patterns that align with your desired candidate profile.

  • Focus on potential, not guarantees:

    Assessments indicate potential, not guarantees of success. Use them alongside other evaluation methods to make well-rounded hiring decisions.

Choosing the right pre-employment assessment tools

Selecting the most suitable pre-employment assessment tool requires careful consideration of your organization's specific needs. Here are some key factors to guide your decision:

  • Industry and role requirements:

    Different industries and roles demand varying skill sets and qualities. Choose assessments that target the specific skills and knowledge relevant to your open positions.

  • Company culture and values:

    Align your assessments with your company culture and values. For example, if collaboration is crucial, look for assessments that evaluate teamwork and communication skills.

  • Candidate experience:

    Prioritize tools that provide a positive and smooth experience for candidates. This can enhance your employer brand and attract top talent.

Budget and accessibility considerations

Budget and accessibility are essential factors when choosing pre-employment assessments:

  • Budget:

    Assessment tools come with varying pricing models (subscriptions, pay-per-use, etc.). Choose a tool that aligns with your budget and offers the functionalities you need.

  • Accessibility:

    Ensure the chosen assessment is accessible to all candidates, considering factors like language options, disability accommodations, and internet access requirements.

Additional Tips:

  • Free trials and demos: Utilize free trials or demos offered by assessment platforms to experience their functionalities firsthand.
  • Consult with HR professionals: Seek guidance from HR professionals or recruitment specialists with expertise in pre-employment assessments.
  • Read user reviews and comparisons: Gain insights from other employers who use various assessment tools.

By carefully considering these factors, you can select the pre-employment assessment tool that best aligns with your organizational needs, budget, and commitment to an inclusive hiring process.

Remember, pre-employment assessments are valuable tools, but they should not be the sole factor in your hiring decisions. Use them alongside other evaluation methods and prioritize building a fair and inclusive hiring process that attracts and retains top talent.

Future trends in pre-employment assessments

The pre-employment assessment landscape is constantly evolving, with innovative technologies and practices emerging. Here are some potential future trends to watch:

  • Artificial intelligence (AI):

    AI-powered assessments can analyze candidate responses, written work, and even resumes, using natural language processing to extract relevant insights and identify potential candidates.

  • Adaptive testing:

    These assessments adjust the difficulty level of questions based on the candidate's performance, providing a more efficient and personalized evaluation (see the sketch after this list).

  • Micro-assessments:

    Short, focused assessments delivered through mobile devices can assess specific skills or knowledge on-the-go, streamlining the screening process.

  • Gamification:

    Engaging, interactive game-based elements can make the assessment experience more enjoyable and evaluate skills in a realistic and dynamic way.
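
To make the adaptive-testing idea concrete, here is a minimal, hypothetical Python sketch of adaptive question selection: the difficulty moves up one level after a correct answer and down one level after a miss. The question pool, difficulty levels, and one-step adjustment below are illustrative assumptions, not how any particular assessment platform works.

import random

# A minimal sketch of adaptive question selection.
# The question pool, difficulty levels, and step size are hypothetical,
# not tied to any specific assessment platform.
QUESTION_POOL = {
    1: ["Q-easy-1", "Q-easy-2"],
    2: ["Q-medium-1", "Q-medium-2"],
    3: ["Q-hard-1", "Q-hard-2"],
}

def next_difficulty(current, answered_correctly):
    # Move one level up after a correct answer, one level down after a miss,
    # clamped to the available difficulty range.
    step = 1 if answered_correctly else -1
    return min(max(current + step, 1), max(QUESTION_POOL))

def run_assessment(answers, start_difficulty=2):
    # Walk through a list of simulated correct/incorrect answers and
    # record which difficulty level each question was drawn from.
    difficulty = start_difficulty
    asked = []
    for correct in answers:
        asked.append((difficulty, random.choice(QUESTION_POOL[difficulty])))
        difficulty = next_difficulty(difficulty, correct)
    return asked

print(run_assessment([True, True, False, True]))

In this toy run, a candidate who answers the first two questions correctly is served harder questions and drops back a level after the miss, which is the efficiency and personalization this trend describes.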

Conclusion

Pre-employment assessments, when used thoughtfully and ethically, can be a powerful tool to optimize your hiring process, identify top talent, and build a successful workforce for your organization. By understanding the different types of assessments available, exploring top-rated tools like HackerEarth, and staying informed about emerging trends, you can make informed decisions that enhance your ability to attract, evaluate, and hire the best candidates for the future.

Tech Layoffs: What To Expect In 2024

Layoffs in the IT industry are becoming more widespread as companies fight to remain competitive in a fast-changing market; many turn to layoffs as a cost-cutting measure. Last year, over 1,000 companies, including big tech giants and startups, laid off more than 200,000 employees. But first, what are layoffs in the tech business, and how do they impact the industry?

Tech layoffs are the termination of employment for some employees by a technology company. It might happen for various reasons, including financial challenges, market conditions, firm reorganization, or the after-effects of a pandemic. While layoffs are not unique to the IT industry, they are becoming more common as companies look for methods to cut costs while remaining competitive.

The consequences of layoffs in technology may be catastrophic for employees who lose their jobs and the firms forced to make these difficult decisions. Layoffs can result in the loss of skill and expertise and a drop in employee morale and productivity. However, they may be required for businesses to stay afloat in a fast-changing market.

This article will examine the reasons for layoffs in the technology industry, their influence on the industry, and what may be done to reduce their negative impacts. We will also look at the various methods for tracking tech layoffs.

What are tech layoffs?

The term "tech layoff" describes the termination of employees by an organization in the technology industry. A company might do this as part of a restructuring during hard economic times.

In recent times, the tech industry has witnessed a wave of significant layoffs, affecting some of the world’s leading technology companies, including Amazon, Microsoft, Meta (formerly Facebook), Apple, Cisco, SAP, and Sony. These layoffs are a reflection of the broader economic challenges and market adjustments facing the sector, including factors like slowing revenue growth, global economic uncertainties, and the need to streamline operations for efficiency.

Each of these tech giants has announced job cuts for various reasons, though common themes include restructuring efforts to stay competitive and agile, responding to over-hiring during the pandemic when demand for tech services surged, and preparing for a potentially tough economic climate ahead. Despite their dominant positions in the market, these companies are not immune to the economic cycles and technological shifts that influence operational and strategic decisions, including workforce adjustments.

This trend of layoffs in the tech industry underscores the volatile nature of the tech sector, which is often at the mercy of rapid changes in technology, consumer preferences, and the global economy. It also highlights the importance of adaptability and resilience for companies and employees alike in navigating the uncertainties of the tech landscape.

Causes for layoffs in the tech industry

Why are tech employees suffering so much?

Yes, the market is always uncertain, but why resort to tech layoffs?

Various factors cause tech layoffs, including company strategy changes, market shifts, or financial difficulties. Companies may lay off employees if they struggle to generate revenue, shift their focus to new products or services, or automate certain jobs.

In addition, some common reasons could be:

Financial struggles

Currently, the global market is uncertain due to economic recession, ongoing war, and other related factors. If a company is experiencing financial difficulties, pay cuts alone may not be enough; it may need to reduce its workforce to cut costs.


Also, read: 6 Steps To Create A Detailed Recruiting Budget (Template Included)


Changes in demand

The tech industry is constantly evolving, and companies have to adjust their workforce to meet changing market conditions. For instance, the widespread shift to remote work reduces on-premises activity, which can make some back-end and on-site roles redundant.

Restructuring

Companies may also lay off employees as part of a greater restructuring effort, such as spinning off a division or consolidating operations.

Automation

With the advancement in technology and automation, some jobs previously done by human labor may be replaced by machines, resulting in layoffs.

Mergers and acquisitions

When two companies merge, there is often overlap in their operations, leading to layoffs as the new company looks to streamline its workforce.

But it's worth noting that layoffs are not exclusive to the tech industry and can happen in any industry due to uncertainty in the market.

Will layoffs increase in 2024?

It is challenging to estimate the rise or fall of layoffs. The overall state of the economy, the health of certain industries, and the performance of individual companies will play a role in deciding the degree of layoffs in any given year.

That said, in the first 15 days of this year alone, 91 organizations laid off over 24,000 tech workers, and more than 1,000 corporations cut over 150,000 workers in 2022, according to an Economic Times article.

The COVID-19 pandemic caused a huge economic slowdown and forced several businesses to downsize their employees. However, some businesses rehired or expanded their personnel when the world began to recover.

So, given the current level of economic uncertainty, predicting how the situation will unfold is difficult.


Also, read: 4 Images That Show What Developers Think Of Layoffs In Tech


What types of companies are prone to tech layoffs?

2023 Round Up Of Layoffs In Big Tech

Tech layoffs can occur in organizations of all sizes and various areas.

Following are some examples of companies that have experienced tech layoffs in the past:

Large tech firms

Companies such as IBM, Microsoft, Twitter, Better.com, Alibaba, and HP have all experienced layoffs in recent years as part of restructuring initiatives or cost-cutting measures.

The market is still finding its footing after Elon Musk's decision to lay off a large part of Twitter's workforce. Along with tech giants, some smaller companies and startups have also been affected by layoffs.

Startups

Because they frequently operate with limited resources, startups may be forced to lay off staff if they cannot secure further funding or need to pivot due to a market downturn.

Small and medium-sized businesses

Small and medium-sized businesses face layoffs due to high competition or if the products/services they offer are no longer in demand.

Companies in certain industries

Some sectors of the technological industry, such as the semiconductor industry or automotive industry, may be more prone to layoffs than others.

Companies that lean on government funding

Companies that rely significantly on government contracts may face layoffs if the government cuts technology spending or contracts are not renewed.

How to track tech layoffs?

You can’t stop tech company layoffs, but you should be keeping track of them. We, HR professionals and recruiters, can also lend a helping hand in these tough times by circulating “layoff lists” across social media sites like LinkedIn and Twitter to help people land jobs quicker. Firefish Software put together a master list of sources to find fresh talent during the layoff period.

Because not all layoffs are publicly disclosed, tracking tech industry layoffs can be challenging, and some may go undetected. There are several ways to keep track of tech industry layoffs:

Use tech layoffs tracker

Layoff trackers like thelayoff.com and layoffs.fyi provide up-to-date information on layoffs.

In addition, they aid in identifying trends in layoffs within the tech industry. It can reveal which industries are seeing the most layoffs and which companies are the most affected.

Companies can use layoff trackers as an early warning system and compare their performance to that of other companies in their field.
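
To illustrate the kind of trend analysis a tracker makes possible, here is a small Python sketch that tallies layoffs by industry from a CSV export. The file name "layoffs.csv" and its columns (company, industry, employees_laid_off) are assumptions for the example, not the actual export format of thelayoff.com or layoffs.fyi.

import csv
from collections import Counter

def layoffs_by_industry(path="layoffs.csv"):
    # Sum reported layoffs per industry from a hypothetical CSV export
    # with columns: company, industry, employees_laid_off.
    totals = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["industry"]] += int(row["employees_laid_off"])
    return totals

if __name__ == "__main__":
    # Print the five hardest-hit industries in the data set.
    for industry, count in layoffs_by_industry().most_common(5):
        print(f"{industry}: {count} employees laid off")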

News articles

Because many news sites cover tech layoffs as they happen, keeping a watch on technology sector stories can provide insight into which organizations are laying off employees and how many individuals have been affected.

Social media

Organizations and employees frequently publish information about layoffs in tech on social media platforms; thus, monitoring companies' social media accounts or following key hashtags can provide real-time updates regarding layoffs.

Online forums and communities

There are online forums and communities dedicated to discussing tech industry news, and they can be an excellent source of layoff information.

Government reports

Government agencies such as the Bureau of Labor Statistics (BLS) publish data on layoffs and unemployment, which can provide a more comprehensive picture of the technology industry's status.

How do companies reduce tech layoffs?

Layoffs in tech are hard – for the employee who is losing their job, the recruiter or HR professional who is tasked with informing them, and the company itself. So, how can we aim to avoid layoffs? Here are some ways to minimize resorting to letting people go:

Salary reductions

Instead of laying off employees, businesses can lower the salaries or wages of all employees. It can be accomplished by instituting compensation cuts or salary freezes.

Implementing a hiring freeze

Businesses can halt employing new personnel to cut costs. It can be a short-term solution until the company's financial situation improves.


Also, read: What Recruiters Can Focus On During A Tech Hiring Freeze


Non-essential expense reduction

Businesses might search for ways to cut or remove non-essential expenses such as travel, training, and office expenses.

Reducing working hours

Companies can reduce employee working hours to save money, such as implementing a four-day workweek or a shorter workday.

These options may not always be viable and come with problems of their own, but before laying people off, a company owes it to its people to consider every other alternative and formulate the best solution.

Tech layoffs to bleed into this year

While we do not know whether this trend will continue or subside during 2023, we do know one thing: we have to be prepared for a wave of layoffs that is yet to hit. As of last month, Layoffs.fyi had already tracked 170+ companies conducting 55,970 layoffs in 2023.

So recruiters, let’s join arms, distribute those layoff lists like there’s no tomorrow, and help all those in need of a job! :)

What is Headhunting in Recruitment? Types & How Does It Work?

In today’s fast-paced world, recruiting talent has become increasingly complicated. Technological advancements, high workforce expectations and a highly competitive market have pushed recruitment agencies to adopt innovative strategies for recruiting various types of talent. This article aims to explore one such recruitment strategy – headhunting.

What is Headhunting in recruitment?

In headhunting, companies or recruitment agencies identify, engage and hire highly skilled professionals to fill top positions in the respective companies. It is different from the traditional process in which candidates looking for job opportunities approach companies or recruitment agencies. In headhunting, executive headhunters, as recruiters are referred to, approach prospective candidates with the hiring company’s requirements and wait for them to respond. Executive headhunters generally look for passive candidates, those who work at crucial positions and are not on the lookout for new work opportunities. Besides, executive headhunters focus on filling critical, senior-level positions indispensable to companies. Depending on the nature of the operation, headhunting has three types. They are described later in this article. Before we move on to understand the types of headhunting, here is how the traditional recruitment process and headhunting are different.

How do headhunting and traditional recruitment differ from each other?

Headhunting is a type of recruitment process in which top-level managers and executives in similar positions are hired. Since these professionals are not on the lookout for jobs, headhunters have to thoroughly understand the hiring companies’ requirements and study the work profiles of potential candidates before creating a list.

In the traditional approach, there is a long list of candidates applying for jobs online and offline. Candidates approach recruiters for jobs. Apart from this primary difference, there are other factors that define the difference between these two schools of recruitment.

Aspect | Headhunting | Traditional Recruitment
Candidate type | Primarily passive candidates | Active job seekers
Approach | Proactive outreach | Reactive: candidates apply
Scope | Focused on specific high-level roles | Broader; includes various levels
Cost | Generally more expensive due to the expertise required | Typically lower costs
Control | Managed by headhunters | Managed internally by HR teams

All the above parameters will help you better understand how headhunting differs from traditional recruitment methods.

Types of headhunting in recruitment

Direct headhunting: In direct headhunting, hiring teams reach out to potential candidates through personal communication. Companies conduct direct headhunting in-house, without outsourcing the process to recruitment agencies. Very few businesses use this approach for top jobs, as it involves extensive screening across networks outside the company's own reach.

Indirect headhunting: This method involves recruiters getting in touch with prospective candidates through indirect modes of communication such as email and phone calls. Indirect headhunting is less intrusive and allows candidates to respond at their convenience.

Third-party recruitment: Companies approach external recruitment agencies or executive headhunters to recruit highly skilled professionals for top positions. This method often leverages the agency's extensive contact network and expertise in niche industries.

How does headhunting work?

Finding highly skilled professionals to fill critical positions can be tricky without a system for it. Expert executive headhunters use recruitment software to run the process efficiently. Most such software is AI-powered and expedites tasks like candidate sourcing, interactions with prospective professionals, and upkeep of communication history, making executive search a little easier. Apart from using software, here are the various stages of finding high-calibre executives through headhunting.

Identifying the role

Once there is a vacancy for a top job, one of the top executives, such as the CEO, a director, or the head of the company, reaches out to the concerned personnel with their requirements. Depending on how large the company is, it may choose to headhunt with the help of an external recruiting agency or conduct the search in-house. Generally, the task is assigned to external recruitment agencies specializing in headhunting. Executive headhunters maintain a database of highly qualified professionals who work in crucial positions at some of the best companies, which makes them the top choice for organizations looking to hire the best talent in the industry.

Defining the job

Once an executive headhunter or a recruiting agency is finalized, companies conduct meetings to discuss the nature of the role, how the company works, the management hierarchy among other important aspects of the job. Headhunters are expected to understand these points thoroughly and establish a clear understanding of their expectations and goals.

Candidate identification and sourcing

Headhunters analyse their clients' requirements and begin creating a pool of suitable candidates from their database. Professionals are shortlisted after extensive research into job profiles, years of industry experience, professional networks, and online platforms.

Approaching candidates

Once the potential candidates have been identified and shortlisted, headhunters move on to get in touch with them discreetly through various communication channels. As such candidates are already working at top level positions at other companies, executive headhunters have to be low-key while doing so.

Assessment and Evaluation

In this next step, extensive screening and evaluation of candidates is conducted to determine their suitability for the advertised position.

Interviews and negotiations

Compensation is a major topic of discussion among recruiters and prospective candidates. A lot of deliberation and negotiation goes on between the hiring organization and the selected executives which is facilitated by the headhunters.

Finalizing the hire

Things come to a close once a suitable candidate accepts the job offer. Once the offer letter is accepted, headhunters help finalize the hiring process to ensure a smooth transition.

The steps listed above form the blueprint for a typical headhunting process. Headhunting has been crucial in helping companies hire the right people for crucial positions that come with great responsibility. However, all systems have a set of challenges no matter how perfect their working algorithm is. Here are a few challenges that talent acquisition agencies face while headhunting.

Common challenges in headhunting

Despite its advantages, headhunting also presents certain challenges:

Cost Implications: Engaging headhunters can be more expensive than traditional recruitment methods due to their specialized skills and services.

Time-Consuming Process: While headhunting can be efficient, finding the right candidate for senior positions may still take time due to thorough evaluation processes.

Market Competition: The competition for top talent is fierce; organizations must present compelling offers to attract passive candidates away from their current roles.

Although the above mentioned factors can pose challenges in the headhunting process, there are more upsides than there are downsides to it. Here is how headhunting has helped revolutionize the recruitment of high-profile candidates.

Advantages of Headhunting

Headhunting offers several advantages over traditional recruitment methods:

Access to Passive Candidates: By targeting individuals who are not actively seeking new employment, organisations can access a broader pool of highly skilled professionals.

Confidentiality: The discreet nature of headhunting protects both candidates’ current employment situations and the hiring organisation’s strategic interests.

Customized Search: Headhunters tailor their search based on the specific needs of the organization, ensuring a better fit between candidates and company culture.

Industry Expertise: Many headhunters specialise in particular sectors, providing valuable insights into market dynamics and candidate qualifications.

Conclusion

Although headhunting can be costly and time-consuming, it is one of the most effective ways of finding good candidates for top jobs. Executive headhunters face several challenges in maintaining discretion while getting in touch with prospective candidates. As organizations navigate increasingly competitive markets, understanding the nuances of headhunting becomes vital for effective recruitment strategies. To keep up with technological advancements, it is better to optimise your hiring process with online recruitment software like HackerEarth, which enables companies to conduct multiple interviews and evaluation tests online, thus improving candidate experience. By collaborating with skilled headhunters who possess industry expertise and insights into market trends, companies can enhance their chances of securing high-caliber professionals who drive success in their respective fields.
