In the ever-evolving world of IoT, makers and developers have to keep their knowledge up to date to stay relevant in the field. That said, this is not easy, given the lack of time and of a one-stop site for all the information.
Understanding this need, we have come up with a weekly post, “IoT NEWS packets,” which gives you snippets of what’s new in the field of IoT.
Here are the latest developments in the field of IoT this week:
Samsung Family Hub 2.0 at Samsung Forum 2017
The Samsung Family Hub 2.0 comes in the form of a smart refrigerator. It has a 21.5-inch LED touchscreen and is driven by Tizen OS, managing up to five family members’ food, cooking, art, music, video, and memo needs. It has S Voice recognition technology and a wide range of commands to help you with many things, from shopping lists to the weather. It can even find recipes using what is in the fridge.
Global IoT security market expected to grow at a CAGR of 47.91% through 2021
According to a study by Market Research Hub, the global IoT security market is all set to grow at a remarkable CAGR of 47.91% during the period 2017–2021. In terms of geography, the APAC region is projected to lead the global market in the coming years; one of the major reasons for this growth is the high adoption of IoT security solutions among enterprises.
Target’s Internet of Things Store Gets a Renovation
https://www.youtube.com/watch?v=mRHtgzkFjUM
Target Open House is back after seven long weeks of renovation. The connected-device concept store gives guests hands-on interaction with new products and services. It also gives the entrepreneur community a place to meet and learn from one another and from consumers.
Target Open House makes it easier for startups to get their products in front of thousands of guests well before they’re available for sale.
Along with Open House, Target is launching a new tool called Mission Control. This software will help startups submit products to be showcased at the Open House, and it also offers a real-time dashboard for exhibitors to look at guest interactions, collect feedback, and understand traffic and sales for their products.
Micro Mote: An energy-efficient computer with deep learning capabilities
David Blaauw and Dennis Sylvester, computer scientists at the University of Michigan, have developed the “Micro Mote” computer to make smarter, energy-efficient, and smaller sensors for medical devices and the Internet of Things. They have also used deep learning technologies to enhance face and voice recognition capabilities. The device incorporates a deep-learning processor that can operate a neural network while using just 288 microwatts.
IBM and Visa: Turn any connected device into a point of sale with Watson IoT
A new partnership between Visa and IBM Watson gives Visa access to as many as 6,000 IoT client companies. Visa allows them to provision Visa tokenization into their devices, effectively turning the devices into point-of-sale (POS) terminals that let users pay on the go.
For example, a pair of smart shoes might monitor a user’s running distance and, after a certain number of miles, remind the user to buy a new pair, which they could do on the spot through an activity tracker or an app.
Azure IoT comes with new Azure Stream Analytics features
IoT solutions must monitor real-time data coming from various devices and take action when troubling patterns are found. This capability is referred to as “stream processing.” At the scale of IoT, customers need a robust and scalable solution.
Microsoft Azure Stream Analytics will meet these needs with the following features:
Native support for geospatial functions
Custom code with JavaScript
Low-latency dashboards
Job diagnostics logs
Visual Studio integration
French national railway company accelerates innovation with Watson IoT
IBM announced that French Railways operator SNCF is using Watson IoT on IBM Cloud to deliver superior customer experiences, greater operational efficiency, and enhanced rail safety to its 13.5 million daily commuters.
Now the Parisian mass transit lines and new-generation trains are equipped with 2,000 sensors, which capture 70,000 data points per month. Rather than having to manually examine each train, SNCF engineers can remotely monitor up to 200 trains at a time for potential issues, including door or air-conditioning failures, while the trains are in transit.
Qualcomm announced 802.11ax WiFi technology for IoT gadgets
802.11ax is the next evolutionary step in WiFi technology, and Qualcomm is the first company to announce 802.11ax chips.
According to Qualcomm, the new technology delivers four times the capacity of today’s top WiFi routers, along with higher speeds and wider coverage.
Qualcomm’s 802.11ax chip employs techniques from cellular communications to enhance WiFi efficiency without demanding more spectrum.
FluoWiFi – A Wireless Development Board for IoT
FluoWiFi has been designed as a powerful yet versatile IoT prototyping board that anyone can use and easily program with the Arduino IDE. It is a microcontroller board based on Atmel’s ATmega644p and the ESP32 module, a 2.4 GHz Wi-Fi and Bluetooth low-power combo chip. It supports IPv4 and IPv6, with Secure HTTP, CoAP, REST, and MQTT protocols ready to go.
50% of organizations in the US and Europe lag in IoT adoption
HCL released the findings of a first-of-its-kind survey of senior business and technology decision-makers working on IoT at major global enterprises.
The survey covered 263 organizations in Europe and the U.S. Here are a few key findings:
50% of respondents said their organizations are already “behind the curve on IoT.”
49% of organizations are still struggling to get off the ground with IoT, “due to an uncoordinated and siloed approach.”
38% of respondents agree that the biggest barrier to IoT adoption is security concerns.
On average, only 48 percent of the data collected from IoT is analyzed, and IoT adopters take five days to turn data into insight.
“Artificial Intelligence (AI) is the science of how to get machines to
do the things they do in the movies.” – Astro Teller
Do you remember HAL 9000, the know-it-all machine; Baymax, the personal healthcare robot; Ava, the human-looking robot; and WALL-E, the cleaning robot? I am sure you do. After all, they are famous fictional AI characters that made every sci-fi aficionado go nuts growing up.
Apperceptive, self-aware robots are closer to becoming a reality than you think.
Now, what exactly is AI?
Artificial Intelligence (AI) is defined as the ability of a machine or a computer program to think, learn, and act like a human being. The bottom line of AI is to develop systems that exceed, or at least equal, human intelligence.
Sci-fi movies and TV shows have shown us multiple visions of how the future is going to be. The Jetsons, Ex Machina, or Star Wars…they all had a unique take on what life would be like years later.
So, how real are these fictional characters? (Ignore the oxymoron.) Where are we with the technology?
This article is a brief history of AI, pairing some fictional AI characters with their real counterparts to show how far we have come on this amazing journey.
History of AI
We really can’t have history without some Greek bits thrown in. And unsurprisingly, the roots of AI can be traced
back to Greek mythology. As the American author Pamela McCorduck writes, AI began with “an ancient wish to forge
the gods.”
Greek myths about Hephaestus, the blacksmith who manufactured mechanical servants, and the bronze man Talos, as well as the mechanical toys and models built by the likes of Archytas of Tarentum, Daedalus, and Hero, are proof.
Alan Turing is widely credited as one of the first people to come up with the idea of machines that think. He was a British mathematician and WWII code-breaker who created the Turing test to determine a machine’s ability to “think” like a human. The Turing test is still used today.
His ideas were mocked at the time, but they triggered an interest in the concept, and the term “artificial intelligence” entered public consciousness in the mid-1950s, after Alan Turing died.
The field of AI research was formally founded at a workshop held at Dartmouth College in 1956. AI has flourished a lot since then.
Some fictional characters that are reality
The following is a list of some fictional AI characters and their real counterparts, along with their features.
HAL 9000 versus IBM Watson
Remember the iconic scene in the movie “2001: A Space Odyssey” when HAL refuses to open the pod bay doors, saying, “I’m sorry, Dave. I’m afraid I can’t do that”? If you don’t remember, then take a look at the clip below:
The movie “2001: A Space Odyssey” gave one of the world’s best representations of AI in the form of HAL 9000.
HAL stands for Heuristically Programmed Algorithmic Computer. According to Wikipedia, it is a sentient computer (an artificial general intelligence), and it was the on-board computer of the spaceship Discovery One.
It was designed to control the systems of the Discovery One spaceship and to interact with its astronaut crew. Along with maintaining all the systems on Discovery, it is capable of many functions, such as speech recognition, lip reading, emotional interpretation, facial recognition, expressing emotions, and playing chess.
HAL is a projection of what a future AI computer would be like from a mid-1960s perspective.
The closest real counterpart to HAL 9000 that we can think of today is IBM Watson. It is a supercomputer that combines AI and analytical software. Watson was named after IBM’s first CEO, Thomas J. Watson. Watson took first place on Jeopardy! in 2011, beating former winners Brad Rutter and Ken Jennings.
It is a “question answering” machine that is built on technologies such as advanced natural language processing,
machine learning, automated reasoning, information retrieval, and much more.
According to IBM, “The goal is to have computers start to interact in natural human terms across a range of
applications and processes, understanding the questions that humans ask and providing answers that humans can
understand and justify.”
Its applications in cognitive computing technology are almost endless. It can perform text mining and complex
analytics on large volumes of unstructured data.
Unlike HAL, it is working peacefully with humans in various fields, such as the R&D departments of companies like Coca-Cola and Procter & Gamble, to come up with new product ideas. Apart from this, it is being used in the healthcare industry, where it is helping oncologists find new treatment methods for cancer. Watson is also used as a chatbot to provide conversation in children’s toys.
Terminator versus Atlas robots
One of the most recognizable movie entrances of all time is Arnold Schwarzenegger’s appearance in the movie Terminator as the killer robot, T-800.
T-800, the Terminator robot, has living tissue over a metal endoskeleton. It was programmed to kill on behalf of
Skynet.
Skynet, the creator of T-800, is another interesting character in the movie. It is a neural-network-based artificially intelligent system that took over all the world’s computers to destroy the human race.
Skynet gained self-awareness and its creators tried to deactivate it after realizing the extent of its abilities.
Skynet, for self-preservation, concluded that all of humanity would attempt to destroy it.
No AI with self-awareness has been developed yet, and those that exist are programmed to help mankind. A possible exception is the military robot.
Atlas is a bipedal robot developed by Boston Dynamics with funding from the US defense agency DARPA, and it is designed for various search and rescue activities.
A video of a new version of Atlas was released in February 2016. The new version can operate both outdoors and indoors. It is capable of walking over a wide range of terrains, including snow.
Currently, there are no killer robots, but there is a campaign to stop them from ever being produced, and the United Nations has said that no weapon should ever be operated without human control.
C-3PO versus Pepper
Luke: “Do you understand anything they’re saying?” C-3PO: “Oh, yes, Master Luke! Remember that I am fluent in over six million forms of
communication.”
C-3PO, or See-Threepio, is a humanoid robot from the Star Wars series who appears in the original Star Wars films and the prequel and sequel trilogies. He is played by Anthony Daniels in all seven Star Wars movies. He was designed to assist in etiquette, translation, and customs so that meetings between different cultures could run smoothly, and he keeps boasting about his fluency.
In real life too, companion robots are starting to take off.
Pepper is a humanoid robot designed by Aldebaran Robotics and SoftBank. It was introduced at a
conference on June 5, 2014, and was first showcased in Softbank mobile phone stores in Japan.
Pepper is not designed as a functional robot for domestic use. Instead, Pepper is made with the intent of “making
people happy,” to enhance their lives, facilitate relationships, and have fun with people. The creators of
Pepper are optimistic that independent developers will develop new uses and content for Pepper.
Pepper is claimed to be the first humanoid robot which is “capable of recognizing the principal human emotions
and adapting his behavior to the mood of his interlocutor.”
WALL-E versus Roomba
WALL-E is the title character of the animated science fiction movie of the same name. He is left to clean up after humanity leaves Planet Earth in a mess.
In the movie, WALL-E is the only robot of his kind still functioning on Earth. WALL-E stands for Waste Allocation Load Lifter: Earth-Class. He is a small mobile compactor box with all-terrain treads, three-fingered shovel hands, binocular eyes, and retractable solar cells for power.
A robot closely related to WALL-E is Roomba, the autonomous robotic vacuum cleaner, though it is not half as cute as WALL-E.
Roomba is a series of robotic vacuum cleaners sold by iRobot. It was first introduced in September 2002, and over 10 million units had been sold worldwide as of February 2014. Roomba has a set of basic sensors that enable it to perform its tasks.
Some of its features include changing direction upon encountering obstacles, detecting dirty spots on the floor, and sensing steep drops to keep it from falling down stairs. Its two wheels allow it to turn in place through 360°. It takes itself back to its docking station to charge once the cleaning is done.
Ava versus Geminoid
Ava is a humanoid robot with artificial intelligence shown in the movie Ex Machina. Ava has a human-looking face but a robotic body; she is an android.
Ava has the power to repair herself with parts from other androids. At the end of the movie, she uses their artificial skin to take on the full appearance of a human woman.
Ava becomes so intelligent that she leaves her friend Caleb trapped inside, ignoring his screams, and escapes to the outside world. This is the kind of AI that people fear the most, but we are far from creating the intelligence and cleverness that Ava had.
People are experimenting with making robots that look like humans.
A geminoid is an android based on a real person; it behaves and appears just like its human source. Hiroshi Ishiguro, a robotics engineer, made a robotic clone of himself.
Hiroshi Ishiguro used silicone rubber to represent the skin. Recently, the cosmetics company L’Oreal teamed up with a bio-engineering start-up called Organovo to 3D-print human skin. This could make even more lifelike androids possible.
Prof. Chetan Dube, the chief executive of the software firm IPsoft, has also developed a virtual assistant called Amelia. He believes “Amelia will be given human form indistinguishable from the real thing at some point this decade.”
Johnny Cab versus Google self-driving car
The movie Total Recall begins in the year 2084, where a construction worker, Douglas Quaid (Arnold Schwarzenegger), is having troubling dreams about the planet Mars and a mysterious woman there. In a series of events, Quaid goes to Mars, where he jumps into a taxi called “Johnny Cab.”
The taxi is driverless, and to make it feel like it has a driver, it has a showy robot figure named Johnny that interacts with the commuters. Johnny ends up being reduced to a pile of wires.
Google announced in August 2012 that its self-driving car had completed over 300,000 accident-free autonomous-driving miles. In May 2014, a new prototype of its driverless car was revealed; it was fully autonomous and had no steering wheel, gas pedal, or brake pedal.
According to Google’s own accident reports, its test cars have been involved in 14 collisions, of which 13 were the fault of other drivers. In 2016, however, the car’s software caused a crash for the first time. Alphabet announced in December 2016 that the self-driving car technology would be placed under a new company called Waymo.
Baymax versus RIBA II
Remember the Oscar-winning movie Big Hero 6? I’m sure you do.
The story begins in the futuristic city of San Fransokyo, where Hiro Hamada, a 14-year-old robotic genius, lives
with his elder brother Tadashi. Tadashi builds an inflatable robot medical assistant named Baymax.
Don Hall, the co-director of the movie said, “Baymax views the world from one perspective — he just wants to help
people; he sees Hiro as his patient.”
In a series of events, Baymax sacrifices himself to save Hiro’s and Abigail’s (another character in the movie)
lives. Later, Hiro finds his healthcare chip and creates a new Baymax.
In Japan, the elderly population in need of nursing care reached an astounding 5.69 million in 2015, so Japan needs new approaches to assist care-giving personnel. One of the most arduous tasks for such personnel is lifting a patient from the floor onto a wheelchair.
In 2009, the RIKEN-TRI Collaboration Center for Human-Interactive Robot Research (RTC), a joint project
established in 2007 and located at the Nagoya Science Park in central Japan, displayed a robot called RIBA
designed to assist carers in the above-mentioned task.
RIBA stands for Robot for Interactive Body Assistance. RIBA was capable of lifting a patient from a bed onto a wheelchair and back. Although it marked a new course in the development of care-giving robots, some functional limitations prevented its direct commercialization.
RTC’s new robot, RIBA-II, has overcome these limitations with added functionality and power.
Summary
Soon a time will come when we won’t need to read a novel or watch a movie to be teleported to a world of robots. Even then, let’s keep these fictional stories in mind as we stride into the future.
AI is here already, and it will only get smarter with time. The greatest myth about AI is that it will be the same as our own intelligence, with the same desires, such as greed, hunger for power, and jealousy.
We certainly have some interesting times to look forward to.

All ed tech and career forecasts for this decade talk about artificial intelligence (AI) technologies, including machine learning, deep learning, and natural language processing, enabling digital transformation in ways that are quite “out there.”
To stay relevant in this economy, the brightest minds, naturally, want to stay ahead of the pack by specialising in these exciting fields.
For people already equipped with degrees in computer science, engineering, math, or statistics, going back to school may not be a feasible or attractive route to new career options. So, they typically get certified through edX, Coursera, or Udacity. Read more about top free courses from these ed platforms here.
In the U.S., many premier universities offer offline and online graduate programs in data science, and only a few in machine learning. Some universities, such as Johns Hopkins, Princeton, Rutgers, and the University of Wisconsin–Madison, offer machine learning/AI courses designed for data science, computer science, math, or stats graduate students.
But for students who can’t wait to learn on the job, we’ve put together a list of universities that offer on-campus graduate and/or PhD programs in the US and India.
Table of Contents
Universities / Colleges in the US
Carnegie Mellon University, Pennsylvania
University of Washington, Washington
Columbia University, New York
Stanford University, California
Texas A & M University, Texas
New York University, New York
Georgia Tech, Georgia
North Carolina State University, North Carolina
Northwestern University, Illinois
UC Berkeley, California
Universities / Colleges in India
Great Lakes Institute of Management, Gurgaon / Chennai / Bengaluru
SP Jain School of Global Management, Pune
Narsee Monjee Institute of Management Studies, Mumbai
MISB Bocconi, Mumbai
Indian School of Business (ISB), Hyderabad
IIM Bangalore
Institute of Finance and International Management (IFIM), Bengaluru
Universities / Colleges in the US
1. Carnegie Mellon University, Pennsylvania
Situated in Pittsburgh, CMU has seven colleges and independent schools and is among the top 25 universities in the U.S. The Machine Learning Department offers three programs that introduce students to data-driven decision making:
Master of Science in Machine Learning, which focuses on data mining. For information about the application procedure and deadlines, go here.
Secondary Master’s in Machine Learning, which is open only to its PhD students, faculty, and staff. For information about admission requirements and the application, go here.
Fifth Year Master’s in Machine Learning, which lets its undergraduate students earn an MS through credits in ML courses. For information about program requirements and the application, go here.
2. University of Washington, Washington
UW’s Master of Science in Data Science degree teaches students to manage, model, and visualize big data. The expert faculty from six of the university’s departments who teach this fee-based course expect students to have “a solid background in mathematics, computer programming, and communication.” The course, with evening classes on campus, is designed for working professionals, who can enroll as part-time or full-time students.
For information about the application procedure and deadlines, go here.
For information about financial aid and cost of study, go here.
UW’s Certificate in Data Science teaches basic math, computer science, and analytics to aspiring data scientists. Professionals are expected to know some SQL, programming, and statistics. Data storage and manipulation tools (e.g. Hadoop, MapReduce), core machine learning concepts, types of databases, and real-life data science applications are part of the curriculum.
3. Columbia University, New York
Its Master of Science in Data Science is a great option for careerists who want to switch to data science. Students need to earn 30 credits: 21 by taking the core courses, including machine learning, and 9 by working on an elective track (Foundations of Data Science; Cybersecurity; Financial and Business Analytics; Health Analytics; New Media; Sense, Collect and Move Data; Smart Cities) from the Data Science Institute. The university offers both part-time and full-time options.
The department also has an online Certification of Professional Achievement in Data Sciences course. The Computer Science Department has a Machine Learning Track as a part of the MS degree in CS.
4. Stanford University, California
The Department of Statistics and the Institute for Computational and Mathematical Engineering (ICME) offer an M.S. in Data Science; it is a terminal degree in the former and a specialized track at ICME. Electives range from machine learning to human neuroimaging methods, but strong math (linear algebra, numerical methods, probability, PDEs, statistics, etc.) and programming skills (C++, R) form the core of the course. Go to the homepage for more information about prerequisites and requirements.
For information about admissions and financial aid, go here.
5. Texas A&M University, Texas
The university’s Houston-based Master of Science in Analytics degree is offered by the Department of Statistics. The course is tailored for “working professionals with strong quantitative skills.” What’s more, students can access Mays Business School courses as well. The part-time course, with evening classes, takes two years to complete. The program, which focuses on statistical modeling and predictive analysis, also has an online option.
6. New York University, New York
NYU’s Master of Science in Data Science is for students with a strong programming and mathematical background. The Center for Urban Science and Progress and the Center for the Promotion of Research Involving Innovative Statistical Methodology work closely with the Center for Data Science. The university offers full-time and part-time options; students have to earn 36 credits and have six electives to choose from. Tuition scholarships are available, although they do not cover university fees.
7. Georgia Tech, Georgia
Georgia Tech’s on-campus Master of Science in Analytics program offers opportunities to strengthen your skills in statistics, computing, operations research, and business. The instructors include experts from the College of Engineering, the College of Computing, and the Scheller College of Business. Applicants to this premium-tuition program are expected to be proficient in basic mathematical concepts, such as calculus and statistics, and in high-level computing languages, such as C++ and Python. Depending on their career goals, students can choose one of these tracks: Analytical Tools, Business Analytics, or Computational Data Analytics.
What’s great for the students is that the college has dedicated job placement assistance and chances to network with influencers in the data science industry.
The College of Computing has courses in artificial intelligence (AI) and machine learning (ML) at the undergraduate and graduate levels, but it does not award degrees in these fields.
8. North Carolina State University
The Institute for Advanced Analytics offers a 10-month Master of Science in Analytics degree. The program is “innovative, practical, and relevant.” The summer session includes a statistics primer and a foundation in analytics tools. The practicum, which lasts eight months across the fall and spring, teaches a range of topics, including data mining, machine learning, optimization, simulation and risk, web analytics, financial analytics, data visualization, and business concepts such as project management.
For information about application requirements and procedures, go here.
For information about the tuition and fees, go here.
9. Northwestern University, Illinois
McCormick School of Engineering and Applied Science offers a 15-month full-time MS in Analytics degree. The faculty “combines mathematical and statistical studies with instruction in advanced information technology and data management.” The course includes an 8-month practicum project, a 3-month summer internship, and a 10-week capstone project. Merit-based scholarships covering up to 50% of tuition are available.
For information about admission requirements and procedures, go here.
For information about the tuition and funding, go here.
10. UC Berkeley, California
Although the Master of Information and Data Science is an online course, students have to attend one week on campus. The curriculum covers areas in social science, policy research, statistics, computer science, and engineering. The full-time option takes 12 to 20 months; the university also lets you complete the course part time.
Universities / Colleges in India
1. Great Lakes Institute of Management, Gurgaon / Chennai / Bengaluru
Great Lakes’ Post Graduate Program in Business Analytics and Business Intelligence has been ranked the best analytics course in the country by Analytics India Magazine. The course is designed for working professionals and is offered on its Chennai, Gurgaon, and Bengaluru campuses. The curriculum combines business management skills and analytics, including case studies and hands-on training in relevant tools such as Tableau, R, and SAS. Students have to attend 230 hours of classroom sessions and 110 hours of online sessions.
2. SP Jain School of Global Management, Pune
Students can opt for the full-time or part-time options of the institute’s Big Data & Analytics program. People with prior work experience are given preference. The program has 10 core courses covering cutting-edge topics such as machine learning, data mining, predictive modeling, natural language processing, visualization techniques, and statistics. Industry experts and academicians focus on application-based learning, teaching students how to apply current tools and technologies to extract valuable insights from big data.
3. Narsee Monjee Institute of Management Studies, Mumbai
NMIMS offers a 1-year Postgraduate Certificate Program in Business Analytics in partnership with the University of South Florida. The course, conducted on its Mumbai campus, combines classroom training with online sessions. NMIMS faculty teach for 12 hours and USF Muma College of Business faculty for 20 hours, instructing students on current business analytics tools, methodologies, and technologies. The course covers topics such as introductory statistics, database management, business intelligence and visualization, machine learning, big data analytics, data mining, financial analytics, and optimization. Students learn how to tackle real-world business issues through the capstone project.
4. MISB Bocconi, Mumbai
The 12-month Executive Program in Business Analytics is taught by renowned faculty from SDA Bocconi (Milan) and Jigsaw Academy at the Mumbai International School of Business Bocconi (MISB) campus in Mumbai. The course content comprises web analytics, statistics, visualization, R, time series, text mining, SAS, machine learning, big data (Sqoop, Flume, Pig, HBase, Hive, Oozie, and Spark), and digital marketing. Students learn the core concepts of business analytics and their application across various domains.
For more information about the course curriculum, go here.
5. Indian School of Business (ISB)
ISB offers a Certificate Program in Business Analytics on its Hyderabad campus. The course is designed for working professionals (with at least 3 years of work experience), who spend 18 days at the institute during the 12-month program; a technology-aided learning platform takes over for the rest of the time. The rigorous course is chock-full of lectures, projects, and assignments. The comprehensive curriculum also includes preparatory pre-term courses and a capstone project.
For more information about the course curriculum, go here.
6. IIM Bangalore
The year-long Certificate Program on Business Analytics and Intelligence comprises six modules and a project. The course content includes Data Visualization and Interpretation, Data Preprocessing and Imputation, Predictive Analytics (Supervised Learning Algorithms), Optimization Analytics, Stochastic Models, Data Reduction, Advanced Forecasting and Operations Analytics, Machine Learning Algorithms, Big Data Analytics, and Analytics in Finance and Marketing. The institute would like applicants to have a minimum of 3 years of work experience. Online classes are open to a limited number of participants, who must attend on-campus sessions as well.
For information about eligibility criteria, go here.
7. Institute of Finance and International Management (IFIM)
The Institute of Finance and International Management, Bangalore, offers a 15-month Business Analytics program for working executives. Program features include live-streamed and classroom sessions, the opportunity to work with relevant IBM, open-source, and Microsoft software, and convenient weekend classes.
With huge amounts of data pouring in and the need to apply analytical solutions to business challenges, the future looks brighter than ever for data scientists and machine learning experts. Salaries are naturally high for these much-sought-after skills.
For programmers and statisticians, getting certified is the next step. For students looking to distinguish themselves, these are great career opportunities.
In this post, we have put together a list of graduate programs offered by highly ranked institutes and universities in the US and India. On-campus courses are interactive; nothing beats face-to-face contact with faculty and peers, the friends you make, and easy access to relevant resources.
“What is imagination?…It is a God-like, a noble faculty. It renders earth tolerable; it teaches us to live, in the tone of the eternal.” – Ada Lovelace to Charles Babbage
When Charles Babbage, in 1837, proposed a “fully programmable machine,” which would later be called the Analytical Engine, not even the government that had seed-funded his Difference Engine believed him.
Undoubtedly, the most influential machine in existence today is the modern computer.
But back in the 19th century, when the world was drooling over the Industrial Revolution, railway tracks, and steam engines, a machine that could think and calculate looked like a distant dream.
While most see advanced machines such as computers and smartphones as examples of electronic innovation, what people take for granted was the product of a long evolution: the hard work of transforming a mechanical device into a self-thinking smart device that would become an integral part of our lives.
Charles Babbage – The father of the computer
In the 19th century, the concept of specialization had not yet breached the revered halls of universities and laboratories.
Most of the geniuses of the age were polymaths, and so was the Englishman Charles Babbage, a renowned mathematician, philosopher, and mechanical engineer of his time.
In those days, mathematical tables (such as logarithm tables) were made manually and were used in navigation, science, and engineering.
Since most of these tables were manually calculated and updated, their values often contained errors, giving inconsistent results.
While at Cambridge, Charles Babbage noticed this flaw and thought of converting this table-based calculation into a mechanical process to avoid such discrepancies.
Difference Engine
In 1822, Charles Babbage decided to make a machine to evaluate polynomial functions, a machine that would calculate the values automatically.
In 1823, the British government gave Charles Babbage £1700 (probably the first ever seed funding).
He named it the Difference Engine, after the method of finite differences it used to calculate.
Charles Babbage engaged Joseph Clement to build his ambitious, massive difference engine, which had about 25,000 parts, weighed around 15 tons, and stood 8 feet tall.
Despite the ample government funding, the engine was never completed. In the late 1840s, he planned an improved engine.
But that was not completed either, due to lack of funds.
Between 1989 and 1991, scientists and engineers working from Charles Babbage’s original designs built the first complete difference engine, which is now displayed in the Science Museum, London.
How Does Charles Babbage’s Difference Engine work?
Wikipedia says: “A difference engine is an automatic mechanical calculator designed to tabulate polynomial functions.
The name derives from the method of divided differences, a way to interpolate or tabulate functions by using a small set of polynomial coefficients.”
Let’s take an example with the polynomial function R = x² + 1:
Step   | X | R  | Difference 1 | Difference 2
Step 1 | 0 | 1  | 1 (D11)      | 2 (D21)
Step 2 | 1 | 2  | 3 (D12)      | 2 (D22)
Step 3 | 2 | 5  | 5 (D13)      | 2 (D23)
Step 4 | 3 | 10 | 7 (D14)      | 2 (D24)
Step 5 | 4 | 17 | 9 (D15)      | 2 (D25)
To solve this manually, you need to solve the equation n+1 times, where n is the degree of the polynomial. So, for the given equation, we need three steps.
When X = 0, R = 1; when X = 1, R = 2; when X = 2, R = 5; and so on.
Difference 1: D11 = R2 (Step 2) − R1 (Step 1), D12 = R3 (Step 3) − R2 (Step 2), and so on.
So for the Difference 1 column in the table above:
D11 = 2 (R2) − 1 (R1) = 1
D12 = 5 (R3) − 2 (R2) = 3
D13 = 10 (R4) − 5 (R3) = 5
Difference 2: D21 = D12 (Difference 1, Step 2) − D11 (Difference 1, Step 1), and so on.
By subtracting two consecutive values in the Difference 1 column:
D21 = 3 (D12) − 1 (D11) = 2
D22 = 5 (D13) − 3 (D12) = 2
Similarly, for a third-order equation, we can add a new column called Difference 3 and calculate it by subtracting two consecutive numbers in the previous column.
Note that the values in the last difference column (the one for the highest power) always remain constant.
Since the engine could only add and subtract, the starting values of each column are supplied to the difference engine, giving it the information necessary for further calculations.
Working of a difference engine
Let’s take another example, where you have to calculate the result for x = 2 from the above equation (R = x² + 1), and the engine has already been given the values in the Step 1 and Step 2 rows (refer to the table above). The engine would follow these steps:
Step 1: To calculate the value of D12, Step 1’s Difference 2 is added to Step 1’s Difference 1: 2 (D21) + 1 (D11) = 3.
Step 2: Adding this D12 to R2 gives the result for Step 3: 3 (D12) + 2 (R2) = 5.
Similarly, to calculate the result for x = 3:
Step 1: Step 2’s Difference 2 is added to Step 2’s Difference 1: 2 (D22) + 3 (D12) = 5.
Step 2: Add the value from Step 1 to the Step 3 result R3: 5 + 5, giving the final value 10.
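To make the process concrete, here is a minimal Python sketch of the same additions-only procedure for R = x² + 1 (the function and its name are my illustration, not part of Babbage’s design):

def difference_engine(r, d1, d2, steps):
    # r: current value of R; d1, d2: current first and second differences.
    # The engine tabulates R using nothing but additions.
    results = [r]
    for _ in range(steps):
        r += d1    # new R = previous R + Difference 1
        d1 += d2   # new Difference 1 = previous Difference 1 + Difference 2
        results.append(r)
    return results

# Initial values from Step 1 of the table: R = 1, D11 = 1, D21 = 2
print(difference_engine(1, 1, 2, 4))  # [1, 2, 5, 10, 17], matching the R column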
A difference engine consisted of N+1 columns, where column N could store only a constant and column 1 showed the value of the current iteration.
The machine was only capable of adding the value in column n+1 into column n.
The engine is programmed by setting initial values in the columns. Column 1 is set to the value of the polynomial at the start of the computation.
Column 2 is set to a value derived from the first and higher derivatives of the polynomial at the same value of X.
Each column from 3 to N is set to a value derived from the higher-order differences. To simplify what the difference engine did, here is a simple C++ program for evaluating a polynomial:
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    int d; // degree of the polynomial
    cout << "Enter the degree of the polynomial: " << endl;
    cin >> d;
    cout << "The degree of the polynomial you entered was " << d << endl;

    // Read the d+1 coefficients, highest power first
    vector<int> c(d + 1);
    for (int i = 0; i <= d; i++)
    {
        cout << "Enter the coefficient of x^" << (d - i) << ": " << endl;
        cin >> c[i];
    }

    cout << "There are " << d + 1 << " coefficients." << endl;
    cout << "The coefficients are:";
    for (int i = 0; i <= d; i++)
        cout << " " << c[i];
    cout << endl;

    int value; // the point x at which to evaluate the polynomial
    cout << "Enter the value for evaluating the polynomial" << endl;
    cin >> value;
    cout << "The value is " << value << endl;

    // Evaluate c[0]*x^d + c[1]*x^(d-1) + ... + c[d] with Horner's method:
    // sum = (((c[0]*x + c[1])*x + c[2])*x + ...) + c[d]
    long long sum = 0;
    for (int i = 0; i <= d; i++)
        sum = sum * value + c[i];

    cout << "The sum is " << sum << endl;
    return 0;
}
The difference engine was never finished, and during its construction, Charles Babbage had the brilliant idea of using punch cards for calculation.
Punch cards, which until then had been used only for the mundane job of weaving, would form the basis of future computer programming.
Punch Cards
Before Joseph Jacquard came up with the idea of punch cards, weaving was done using drawlooms. A drawloom generally used a “figure harness” to control the weaving pattern.
The drawloom required two operators to control the machine.
Until 1801, punch cards were used only for individual weaving jobs. Jacquard decided to use perforated cards to drive the mechanism because he found that weaving, though intricate, was mechanical and repetitive.
Working
In its most basic form, a woven design is made by passing one thread over another.
In a patterned weave, the threads crossing each other are not synchronized by equal blocks but are changed according to the required pattern.
A weaver controls the threads by pulling and releasing them.
In Jacquard’s loom, the fabric design was first copied onto squared paper.
The design on the squares was then translated into punch cards. These cards were stitched together into a continuous belt and fed into the loom.
The holes in the cards controlled which threads were raised into the weaving pattern.
This automation allowed Jacquard to create designs and reproduce them at lower cost. Keeping a bunch of cards made it possible to reproduce the same design repeatedly, with perfection, on the same or another machine.
Having “visualized” the concept of using these punch cards to calculate, Charles Babbage described using them in the Analytical Engine.
In 1833, Charles Babbage was introduced by her tutor to a brilliant young mathematician, Ada, who later became Countess of Lovelace.
He was impressed with Ada’s analytical skills and invited her to look at the difference engine, which fascinated her.
This formed the basis of a lasting friendship that continued until her death.
Ada Lovelace – The first programmer
Born to the British poet Lord Byron and Annabella Milbanke, Augusta Ada Byron married William King-Noel, the first Earl of Lovelace.
Ada was a natural poet who found mathematics poetic.
Growing up, Ada’s education and her family’s influential presence put her in touch with some of the most prestigious innovators and literary figures of her time.
While she was studying mathematics, her tutor Mary Somerville introduced her to Charles Babbage, who, after his work on the unsuccessful Difference Engine, was working on an ambitious project: a machine that could solve any complex mathematical function (the Analytical Engine).
What you see below is a caricature of the Analytical Engine as proposed by Charles Babbage.
The important parts of this engine still have counterparts in our modern computers.
Part 1 – The Store: what we now call memory or the hard disk
Part 2 – The Mill: what we now call the Central Processing Unit (the mill is where the churning, or processing, is done)
Part 3 – The steam engine: the source of power
Ada, impressed by the theory and concept of the Analytical Engine, decided to work with Charles Babbage on the construction of the engine.
During her study of the Analytical Engine, she wrote a series of notes explaining the difference between the Difference Engine and the Analytical Engine.
She took up the theory of Bernoulli numbers and built a detailed algorithm for calculating Bernoulli numbers on the Analytical Engine, demonstrated in Note G of her article, shown below.
This made her the first programmer in the world. (This is disputed.)
Though her notes received little recognition at the time, and as there was no funding or investment to back Charles Babbage’s fantastic idea, the Analytical Engine was never completed.
Ada Lovelace’s Note G laid out an algorithm for computing Bernoulli numbers. Here is a simple C++ program that, as a nod to her work, samples from a Bernoulli distribution using the standard library:
// bernoulli_distribution
#include <iostream>
#include <random>

int main()
{
  const int nrolls = 10000;                       // number of trials
  std::default_random_engine generator;           // pseudo-random source
  std::bernoulli_distribution distribution(0.5);  // fair coin: P(true) = 0.5

  int count = 0; // count number of trues
  for (int i = 0; i < nrolls; ++i)
    if (distribution(generator)) ++count;

  std::cout << "bernoulli_distribution (0.5) x 10000:" << std::endl;
  std::cout << "true: " << count << std::endl;
  std::cout << "false: " << nrolls - count << std::endl;
  return 0;
}
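For the Bernoulli numbers themselves, which are what Note G actually tabulated, here is a minimal Python sketch using the Akiyama–Tanigawa algorithm, a modern method rather than Lovelace’s exact procedure:

from fractions import Fraction

def bernoulli_numbers(n):
    # Return B_0 .. B_n as exact fractions (convention B_1 = +1/2),
    # computed with the Akiyama-Tanigawa algorithm.
    a = [Fraction(0)] * (n + 1)
    result = []
    for m in range(n + 1):
        a[m] = Fraction(1, m + 1)
        for j in range(m, 0, -1):
            a[j - 1] = j * (a[j - 1] - a[j])
        result.append(a[0])  # a[0] now holds B_m
    return result

for m, b in enumerate(bernoulli_numbers(8)):
    print(f"B_{m} = {b}")  # B_0 = 1, B_1 = 1/2, B_2 = 1/6, B_3 = 0, B_4 = -1/30, ...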
Charles Babbage declined both a knighthood and a baronetcy, instead asking for a life peerage, but that wish wasn’t granted in his lifetime.
He died in 1871 at the age of 79. Ada Lovelace died at the young age of 36, in 1852.
Whether she deserves credit for the “first” algorithm remains one of the great controversies in the history of technology.
Irrespective of this, their contributions to the fields of computing and programming cannot be ignored.
A super calculator that could solve any mathematical problem, and a device that could think of ways to approach a problem: that is what Charles Babbage and Ada Lovelace envisioned, and it became the founding stone of the first programmable computer.
In the next article, we will discuss the use of punch cards further and how, amid all the technological developments in Europe, the USA got the first computer!
Data classification is a very important task in machine learning. Support Vector Machines (SVMs) are widely applied in the fields of pattern classification and nonlinear regression. The original form of the SVM algorithm was introduced by Vladimir N. Vapnik and Alexey Ya. Chervonenkis in 1963. Since then, SVMs have been transformed tremendously and used successfully in many real-world problems, such as text (and hypertext) categorization, image classification, bioinformatics (protein classification, cancer classification), and handwritten character recognition.
Table of Contents
What is a Support Vector Machine?
How does it work?
Derivation of SVM Equations
Pros and Cons of SVMs
Python and R implementation
What is a Support Vector Machine (SVM)?
A Support Vector Machine is a supervised machine learning algorithm that can be used for both classification and regression problems. It uses a technique called the kernel trick to transform the data and, based on these transformations, finds an optimal boundary between the possible outputs.
In simple words, it does some extremely complex data transformations to figure out how to separate the data based on the defined labels or outputs. We will look only at the SVM classification algorithm in this article.
How does it work?
The main idea is to identify the optimal separating hyperplane which maximizes the margin of the training data. Let us understand this objective term by term.
What is a separating hyperplane?
We can see that it is possible to separate the data given in the plot above. For instance, we can draw a line in which all the points above the line are green and the ones below the line are red. Such a line is said to be a separating hyperplane.
Now, the obvious confusion: why is it called a hyperplane if it is a line?
In the diagram above, we considered the simplest of examples: a dataset lying in the 2-dimensional plane (ℝ²). But a support vector machine works for a general n-dimensional dataset too. In higher dimensions, the hyperplane is the generalization of a plane.
More formally, it is an (n−1)-dimensional subspace of an n-dimensional Euclidean space. So for a
1D dataset, a single point represents the hyperplane.
2D dataset, a line is a hyperplane.
3D dataset, a plane is a hyperplane.
And in the higher dimension, it is called a hyperplane.
We have said that the objective of an SVM is to find the optimal separating hyperplane. When is a separating hyperplane said to be optimal?
The fact that there exists a hyperplane separating the dataset doesn’t mean that it is the best one.
Let us understand the optimal hyperplane through a set of diagrams.
Multiple hyperplanes. There are multiple hyperplanes here, but which one of them is a separating hyperplane? It can easily be seen that line B is the one that best separates the two classes.
Multiple separating hyperplanes. There can be multiple separating hyperplanes as well. How do we find the optimal one? Intuitively, if we select a hyperplane that is close to the data points of one class, it might not generalize well. So the aim is to choose the hyperplane that is as far as possible from the data points of each category.
In the diagram above, the hyperplane that meets the specified criteria for the optimal hyperplane is B.
Therefore, maximizing the distance between the nearest points of each class and the hyperplane would result in an optimal separating hyperplane. This distance is called the margin.
The goal of SVMs is to find the optimal hyperplane because it not only classifies the existing dataset but also helps predict the class of the unseen data. And the optimal hyperplane is the one which has the biggest margin.
Mathematical Setup
Now that we have understood the basic setup of this algorithm, let us dive straight into the mathematical technicalities of SVMs.
I will assume you are familiar with basic mathematical concepts such as vectors, vector arithmetic (addition, subtraction, dot product), and orthogonal projection. Some of these concepts can also be found in the article Prerequisites of linear algebra for machine learning.
Equation of Hyperplane
You must have come across the equation of a straight line as y = mx + c, where m is the slope and c is the y-intercept of the line.
The generalized equation of a hyperplane is as follows:
wᵀx = 0
Here w and x are vectors, and wᵀx represents the dot product of the two. The vector w is often called the weight vector.
Consider the equation of the line as y − mx − c = 0. In this case,
w = (−c, −m, 1)ᵀ and x = (1, x, y)ᵀ
wᵀx = −c·1 − m·x + 1·y = y − mx − c = 0
These are just two different ways of representing the same thing. So why do we use wᵀx = 0? Simply because it is easier to deal with this representation for higher-dimensional datasets, and because w represents the vector normal to the hyperplane. This property will be useful once we start computing the distance from a point to the hyperplane.
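To see this concretely, here is a tiny numeric check (the line and sample points are arbitrary choices for illustration):

import numpy as np

m, c = 2.0, 1.0              # the line y = 2x + 1
w = np.array([-c, -m, 1.0])  # weight vector (-c, -m, 1), as above
for x in [0.0, 1.5, -3.0]:
    y = m * x + c                  # a point (x, y) on the line
    point = np.array([1.0, x, y])  # augmented vector (1, x, y)
    print(np.dot(w, point))        # prints 0.0 for every point on the line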
Understanding the constraints
The training data in our classification problem is of the form {(x₁, y₁), (x₂, y₂), …, (xₙ, yₙ)} ∈ ℝⁿ × {−1, 1}. That is, the training dataset consists of pairs of xᵢ, an n-dimensional feature vector, and yᵢ, the label of xᵢ. yᵢ = 1 means that the sample with feature vector xᵢ belongs to class 1, and yᵢ = −1 means that the sample belongs to class −1.
In a classification problem, we thus try to find a function y = f(x): ℝⁿ ⟶ {−1, 1}. f(x) learns from the training dataset and then applies its knowledge to classify unseen data.
There are infinitely many possible functions f(x), so we have to restrict the class of functions we deal with. In the case of SVMs, this class of functions is that of hyperplanes, represented as wᵀx = 0.
This can also be represented as w⃗·x⃗ + b = 0, with w⃗ ∈ ℝⁿ and b ∈ ℝ.
This divides the input space into two parts, one containing vectors of class −1 and the other containing vectors of class +1.
For the rest of this article, we will consider 2-dimensional vectors. Let H₀ be a hyperplane separating the dataset and satisfying:
w⃗·x⃗ + b = 0
Along with H₀, we can select two other hyperplanes H₁ and H₂ such that they also separate the data and have the following equations:
w⃗·x⃗ + b = δ and w⃗·x⃗ + b = −δ
This makes H₀ equidistant from H₁ and H₂.
The variable δ is not necessary, so we can set δ = 1 to simplify the problem: w⃗·x⃗ + b = 1 and w⃗·x⃗ + b = −1.
Next, we want to ensure that there are no points between these two hyperplanes. For this, we select only those hyperplanes that satisfy the following constraints:
For every vector xᵢ, either
w⃗·xᵢ + b ≤ −1 for xᵢ having the class −1, or
w⃗·xᵢ + b ≥ 1 for xᵢ having the class 1.
Combining the constraints
Both the constraints stated above can be combined into a single constraint.
Constraint 1:
For xᵢ having the class −1, w⃗·xᵢ + b ≤ −1. Multiplying both sides by yᵢ = −1 (which flips the inequality, since yᵢ is negative) gives yᵢ(w⃗·xᵢ + b) ≥ 1 for xᵢ having the class −1.
Constraint 2: For xᵢ having the class 1, yᵢ = 1, so directly
yᵢ(w⃗·xᵢ + b) ≥ 1 for xᵢ having the class 1.
Combining both of the above equations, we get yᵢ(w⃗·xᵢ + b) ≥ 1 for all 1 ≤ i ≤ n.
This leads to a single constraint that is mathematically equivalent to the two above and has the same effect, i.e., no points lie between the two hyperplanes.
Maximize the margin
For the sake of simplicity, we will skip the derivation of the formula for the margin m, which is
m = 2 / ‖w⃗‖
The only variable in this formula is w⃗, and m is inversely proportional to ‖w⃗‖; hence, to maximize the margin, we have to minimize ‖w⃗‖. This leads to the following optimization problem:
Minimize over (w⃗, b): ‖w⃗‖²/2, subject to yᵢ(w⃗·xᵢ + b) ≥ 1 for all i = 1, …, n
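As a quick sanity check on m = 2/‖w⃗‖, here is a small sketch that recovers the margin from a linear SVM trained with scikit-learn (the four data points are made up for illustration):

import numpy as np
from sklearn.svm import SVC

# Two well-separated classes along the diagonal y = x
X = np.array([[0, 0], [1, 1], [3, 3], [4, 4]])
y = np.array([-1, -1, 1, 1])

clf = SVC(kernel='linear', C=1e6).fit(X, y)  # very large C approximates a hard margin
w = clf.coef_[0]                             # the learned weight vector
print("margin =", 2 / np.linalg.norm(w))     # ~2.83, the distance between (1,1) and (3,3)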
The above is the case when our data is linearly separable. There are many cases where the data can not be perfectly classified through linear separation. In such cases, Support Vector Machine looks for the hyperplane that maximizes the margin and minimizes the misclassifications.
For this, we introduce slack variables ζᵢ, which allow some points to fall off the margin but penalize them.
In this scenario, the algorithm tries to keep the slack variables at zero while maximizing the margin. Note that it minimizes the sum of distances of the misclassified points from their margin hyperplanes, not the number of misclassifications.
The constraints now change to yᵢ(w⃗·xᵢ + b) ≥ 1 − ζᵢ for all 1 ≤ i ≤ n, ζᵢ ≥ 0,
and the optimization problem changes to
Minimize over (w⃗, b): ‖w⃗‖²/2 + C Σᵢ ζᵢ, subject to yᵢ(w⃗·xᵢ + b) ≥ 1 − ζᵢ for all i = 1, …, n
Here, the parameter C is the regularization parameter that controls the trade-off between the slack variable penalty (misclassifications) and width of the margin.
A small C makes the constraints easy to ignore, which leads to a large margin.
A large C makes the constraints hard to ignore, which leads to a small margin.
For C = ∞, all the constraints are enforced.
The easiest way to separate two classes of data is a line in the case of 2D data and a plane in the case of 3D data. But it is not always possible to use lines or planes, and sometimes a nonlinear region is needed to separate the classes. Support Vector Machines handle such situations by using a kernel function, which maps the data to a different space where a linear hyperplane can be used to separate the classes. This is known as the kernel trick: the kernel function transforms the data into a higher-dimensional feature space so that a linear separation becomes possible.
If ϕ is the mapping that takes xᵢ to ϕ(xᵢ), the constraints change to yᵢ(w⃗·ϕ(xᵢ) + b) ≥ 1 − ζᵢ for all 1 ≤ i ≤ n, ζᵢ ≥ 0
And the optimization problem is
Minimize over (w⃗, b): ‖w⃗‖²/2 + C Σᵢ ζᵢ, subject to yᵢ(w⃗·ϕ(xᵢ) + b) ≥ 1 − ζᵢ for all 1 ≤ i ≤ n, ζᵢ ≥ 0
We will not get into the solution of these optimization problems. The most common method used to solve these optimization problems is Convex Optimization.
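As a quick illustration of the kernel trick in practice, here is a small scikit-learn sketch; the concentric-circles dataset and the parameters are illustrative:

from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric circles: no straight line can separate these classes
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

linear_clf = SVC(kernel='linear').fit(X, y)
rbf_clf = SVC(kernel='rbf').fit(X, y)

print("Linear kernel accuracy:", linear_clf.score(X, y))  # poor: the data is not linearly separable
print("RBF kernel accuracy:", rbf_clf.score(X, y))        # near perfect after the implicit mapping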
Pros and Cons of Support Vector Machines
Every classification algorithm has its own advantages and disadvantages that come into play depending on the dataset being analyzed. Some of the advantages of SVMs are as follows:
The very nature of the convex optimization method ensures guaranteed optimality: the solution is guaranteed to be a global minimum, not a local minimum.
SVM is an algorithm suitable for both linearly and nonlinearly separable data (using the kernel trick). The only thing left to do is to come up with the regularization term, C.
SVMs work well on both small and high-dimensional data spaces. They work effectively for high-dimensional datasets because the complexity of the trained classifier is characterized by the number of support vectors rather than by the dimensionality of the data. Even if all the other training examples are removed and the training is repeated, we will get the same optimal separating hyperplane.
SVMs can work effectively on smaller training datasets because they don’t rely on the entire dataset.
Disadvantages of SVMs are as follows:
They are not suitable for larger datasets, because the training time with SVMs can be high and much more computationally intensive.
They are less effective on noisier datasets with overlapping classes.
SVM with Python and R
Let us look at the libraries and functions used to implement SVM in Python and R.
Python Implementation
The most widely used library for implementing machine learning algorithms in Python is scikit-learn. The class used for SVM classification in scikit-learn is svm.SVC(). Its main parameters are:
C: the regularization parameter for the error term, as discussed above.
kernel: the kernel type to be used in the algorithm. It can be ‘linear’, ‘poly’, ‘rbf’, ‘sigmoid’, ‘precomputed’, or a callable. The default value is ‘rbf’.
degree: the degree of the polynomial kernel function (‘poly’); it is ignored by all other kernels. The default value is 3.
gamma: the kernel coefficient for ‘rbf’, ‘poly’, and ‘sigmoid’. If gamma is ‘auto’, then 1/n_features is used instead.
There are many advanced parameters too, which I have not discussed here; you can check them out here.
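As a first illustration, here is a minimal sketch of training an SVM classifier with svm.SVC(); the dataset and parameter values are illustrative choices, not recommendations:

from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

# Illustrative data: the iris dataset bundled with scikit-learn
X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# C, kernel, and gamma are the parameters described above
clf = svm.SVC(C=1.0, kernel='rbf', gamma='auto')
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))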
One can tune the SVM by changing the parameters C, γ, and the kernel function. The tool available in scikit-learn for tuning parameters is called GridSearchCV().
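Here is a minimal tuning sketch; the candidate values in the grid are illustrative, not recommendations:

from sklearn import datasets, svm
from sklearn.model_selection import GridSearchCV

# Illustrative data: the iris dataset bundled with scikit-learn
X, y = datasets.load_iris(return_X_y=True)

# Candidate values for kernel, C, and gamma (example values only)
param_grid = {
    'kernel': ['linear', 'rbf'],
    'C': [0.1, 1, 10],
    'gamma': [0.01, 0.1, 1],
}

search = GridSearchCV(svm.SVC(), param_grid, cv=5)  # 5-fold cross-validation
search.fit(X, y)
print("Best parameters:", search.best_params_)
print("Best cross-validation score:", search.best_score_)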
In the above code, the parameters we have considered for tuning are the kernel, C, and gamma. The values from which the best one is to be chosen are the ones listed in the grid. Here, we have given only a few values to consider; a whole range of values can be supplied for tuning, but execution will take longer.
R Implementation
The package that we will use for implementing the SVM algorithm in R is e1071. The function used is svm().
In this article, I have gone through a very basic explanation of the SVM classification algorithm. I have left out a few mathematical complications, such as calculating distances and solving the optimization problem. But I hope this gives you enough know-how about how a machine learning algorithm like SVM can be modified based on the type of dataset provided.
“There are two kinds of people in this world: those who think VR will change the world, and those who haven’t tried VR.”
I read this quote somewhere not so long ago. And even while looking conservatively at the scope and scale of the technology, one can easily say that the Virtual Reality experience is exciting, to say the least, whether you use a $10 Google Cardboard or a more expensive Oculus Rift or HTC Vive.
Every advancement in computing allows new form factors to emerge, letting us use the power of computation to create applications and experiences never seen or felt before. And Virtual Reality is one new form factor that stretches the boundaries of creativity for developers and storytellers.
Any existing game developer can quickly start building applications for VR, and even those who haven’t tried their hand at game development or who come from a non-technical background can learn to develop for VR in no time.
What does one need to know to start building for VR?
The Stack: Developing for VR is not much different from game development. Apart from the fact that some additional hardware and software dependencies are required, the basic tools for development do not vary much. Most VR, and even AR, applications require a game engine for development, such as Unity3D or Unreal Engine.
While these are new tools to learn for developers who do not have much experience in game development, the learning curve is gentle enough that you can get comfortable and build your first VR application fairly quickly, and many free resources are now available online. Quick tip: start with Unity3D, as it is less complex and a lot of open-source knowledge and resources are easily available. You will also require an appropriate SDK (Software Development Kit), depending on which device you want to build for:
Hardware: Although a VR application has a necessary hardware dependency, you don’t need to purchase an expensive device straightaway. When you’re just getting started, a cheaper Google Cardboard can do the job just fine, but it restricts you to 3 DOF (degrees of freedom), as opposed to the 6 DOF of an HTC Vive or Oculus Rift. 3 DOF means that although you can look around in the X, Y, and Z directions by moving your head-mounted display (HMD), you cannot move through the virtual environment or touch anything; 6 DOF allows an immersive, room-scale experience. While 6 DOF looks good on the face of it, there are downsides as well. Room-scale VR requires high computational performance, with a high-end graphics card and RAM that you probably won’t get from a standard laptop; it needs a desktop computer with optimal performance and at least 6 ft × 6 ft of free space. 3 DOF, by contrast, requires just a standard smartphone with a gyroscope (built into most modern smartphones that cost about ₹15,000 or more). Some common devices available in the market today are:
It can sound a little intimidating in the beginning, but learning to develop for VR is not all that hard. Once you get the hang of the game engine, you can quickly catch on, and there are multiple avenues to explore if you want to learn how to create your own VR application:
YouTube Tutorials
Here are a few channels that provide good “Getting Started” tutorials for VR:
Online Courses
A lot of MOOCs (massive open online courses) have come out in the last few months, making it easier to learn, and some of them are even free! Most of these courses are short (typically 2–4 weeks). Here are a few of them:
And these should be enough to get you started. But if you get stuck, or if you are looking for some inspiration or just want to explore the diversity of the content being generated for VR, here are some cool applications to follow:
SUPERHOT VR
The last couple of years transformed how the world works and the tech industry is no exception. Remote work, a candidate-driven market, and automation are some of the tech recruiting trends born out of the pandemic.
While accepting the new reality and adapting to it is the first step, keeping up with continuously changing hiring trends in technology is the bigger challenge right now.
What does 2024 hold for recruiters across the globe? What hiring practices would work best in this post-pandemic world? How do you stay on top of the changes in this industry?
The answers to these questions will paint a clearer picture of how to set up for success while recruiting tech talent this year.
7 tech recruiting trends for 2024
Recruiters, we’ve got you covered. Here are the tech recruiting trends that will change the way you build tech teams in 2024.
Trend #1—Leverage data-driven recruiting
Data-driven recruiting strategies are the answer to effective talent sourcing and a streamlined hiring process.
Talent acquisition leaders need to use real-time analytics like pipeline growth metrics, offer acceptance rates, quality and cost of new hires, and candidate feedback scores to reduce manual work, improve processes, and hire the best talent.
The key to capitalizing on talent market trends in 2024 is data. It enables you to analyze what’s working and what needs refinement, leaving room for experimentation.
Trend #2—Strengthen your employer brand with a clear EVP
Having a strong employer brand that supports a clear Employer Value Proposition (EVP) is crucial to influencing a candidate’s decision to work with your company. Perks like upskilling opportunities, remote work, and flexible hours are top EVPs that attract qualified candidates.
A clear EVP builds a culture of balance, mental health awareness, and flexibility—strengthening your employer brand with candidate-first policies.
Trend #3—Focus on the candidate-driven market
The pandemic drastically increased the skills gap, making tech recruitment more challenging. With the severe shortage of tech talent, candidates now hold more power and can afford to be selective.
Competitive pay is no longer enough. Use data to understand what candidates want—work-life balance, remote options, learning opportunities—and adapt accordingly.
Recruiters need to think creatively to attract and retain top talent.
Trend #4—Build a diversity- and inclusion-oriented company culture
Diversity and inclusion have become central to modern recruitment. While urgent hiring can delay D&I efforts, long-term success depends on inclusive teams. Our survey shows that 25.6% of HR professionals believe a diverse leadership team helps build stronger pipelines and reduces bias.
McKinsey’s Diversity Wins report confirms this: top-quartile gender-diverse companies see 25% higher profitability, and ethnically diverse teams show 36% higher returns.
It's refreshing to see the importance of an inclusive culture increasing across all job-seeking communities, especially in tech. This reiterates that D&I is a must-have, not just a good-to-have.
Trend #5—Embed automation and AI into your recruitment systems
With the rise of AI tools like ChatGPT, automation is being adopted across every business function—including recruiting.
Manual communication with large candidate pools is inefficient. In 2024, recruitment automation and AI-powered platforms will automate candidate nurturing and communication, providing a more personalized experience while saving time.
Trend #6—Embrace remote interviews
Remote interviews expand access to global talent, reduce overhead costs, and increase flexibility—making the hiring process more efficient for both recruiters and candidates.
Trend #7—Be proactive in candidate engagement
Delayed responses or lack of updates can frustrate candidates and impact your brand. Proactive communication and engagement with both active and passive candidates are key to successful recruiting.
As recruitment evolves, proactive candidate engagement will become central to attracting and retaining talent. In 2024 and beyond, companies must engage both active and passive candidates through innovative strategies and technologies like chatbots and AI-powered systems. Building pipelines and nurturing relationships will enhance employer branding and ensure long-term hiring success.
Recruiting Tech Talent Just Got Easier With HackerEarth
Recruiting qualified tech talent is tough—but we’re here to help. HackerEarth for Enterprises offers an all-in-one suite that simplifies sourcing, assessing, and interviewing developers.
Staying ahead of tech recruiting trends, improving hiring processes, and adapting to change is the way forward in 2024. Take note of the tips in this article and use them to build a future-ready hiring strategy.
The first part of this blog stresses the importance of asking the right technical interview questions to assess a candidate’s coding skills. But that alone is not enough. If you want to hire the crème de la crème of the developer talent out there, you have to look for a well-rounded candidate.
Honest communication, empathy, and passion for their work are equally important as a candidate’s technical knowledge. Soft skills are like the cherry on top. They set the best of the candidates apart from the rest.
Re-examine how you are vetting your candidates. Identify the gaps in your interviews. Once you start addressing these gaps, you find developers who have the potential to be great. And those are exactly the kind of people that you want to work with!
Let’s get to it, shall we?
What constitutes a good interview question?
An ideal interview should reveal a candidate’s personality along with their technical knowledge. To formulate a comprehensive list of questions, keep in mind three important characteristics.
Questions are open-ended – asking “What are some of the programming languages you’re comfortable with?” instead of “Do you know this particular programming language?” makes the candidate feel like they’re in control. It also gives them a chance to reply to your question in their own words.
They address the behavioral aspects of a candidate – ensure you have a few questions on your list that allow a candidate to describe a situation, such as a time when a client was unhappy or when the developer learned a new technology. Such questions help you assess if the candidate is a good fit for the team.
There is no right or wrong answer – it is important to have a structured interview process in place. But this does not mean you have a list of standard answers in mind that you’re looking for. How candidates approach your questions shows you whether they have the makings of a successful candidate. Focus on that rather than on the actual answer itself.
Designing a conversation around these buckets of interview questions brings you to my next question, “What should you look for in each candidate to spot the best ones?”
Hire GREAT developers by asking the right questions
Before we dive deep into the interview questions, we have to think about a few things that have changed. COVID-19 has rendered working from home the new normal for the foreseeable future. As a recruiter, the onus falls upon you to understand whether the developer is comfortable working remotely and has the relevant resources to achieve maximum productivity.
#1 How do you plan your day?
Remote work gives employees the option to be flexible. You don’t have to clock in 9 hours a day as long as you get everything done on time. A developer who hasn’t always been working remotely, but has a routine in place, understands the pitfalls of working from home. It is easy to get distracted and having a schedule to fall back on ensures good productivity.
#2 Do you have experience using tools for collaboration and remote work?
Working from home reduces human interaction heavily. There is no way to just go up to your teammate’s desk and clarify issues. Virtual communication is key to getting work done. Look for what kind of remote working tools your candidate is familiar with and if they know what collaborative tools to use for different tasks.
Value-based interview questions to ask
We went around and spoke to our engineering team and our recruiting team to see what questions they abide by and what they think makes any candidate tick.
The result? A motley group of questions that aim to reveal the candidate’s soft skills, in addition to typical technical interview questions and test tasks.
#3 Please describe three recent projects that you worked on. What were the most interesting and challenging parts?
This is an all-encompassing question in that it lets the candidate explain at length about their work ethic—thought process, handling QA, working with a team, and managing user feedback. This also lets you dig enough to assess whether the candidate is taking credit for someone else's work or not.
#4 You’ve worked long and hard to deliver a complex feature for a client and they say it’s not what they asked for. How would you take it?
A good developer will take it in their stride, work closely with the client to find the point of disconnect, and sort out the issue. There are so many things that could go wrong or not be to the client’s liking, and it falls on the developer to remain calm and create solutions.
#5 What new programming languages or technologies have you learned recently?
While being certified in many programming languages doesn't guarantee a great developer, it still is an important technical interview question to ask. It helps highlight a thirst for knowledge and shows that the developer is eager to learn new things.
#6 What does the perfect release look like? Who is involved and what is your role?
Have the developer take you through each phase of a recent software development lifecycle. Ask them to explain their specific role in each phase in this release. This will give you an excellent perspective into a developer’s mind. Do they talk about the before and after of the release? A skilled developer would. The chances of something going wrong in a release are very high. How would the developer react? Will they be able to handle the pressure?
#7 Tell me about a time when you had to convince your lead to try a different approach?
As an example of a behavioral interview question, this is a good one. The way a developer approaches this question speaks volumes about how confident they are expressing their views, and how succinct they are in articulating those views.
#8 What have you done with all the extra hours during the pandemic?
Did you binge-watch your way through the pandemic? I’m sure every one of us has done this. Indulge in a lighthearted conversation with your candidate. This lets them talk about something they are comfortable with. Maybe they learned a new skill or took up a hobby. Get to know a candidate’s interests and little pleasures for a more rounded evaluation.
Over to you! Now that you know what aspects of a candidate to focus on, you are well-equipped to bring out the best in each candidate in their interviews. A mix of strong technical skills and interpersonal qualities is how you spot good developers for your team.
If you have more pressing interview questions to add to this list of ours, please write to us at contact@hackerearth.com.
The minute a developer position opens up, recruiters feel a familiar twinge of fear run down their spines. They recall their previous interview experiences, and how there seems to be a blog post a month that goes viral about bad developer interviews.
While hiring managers, especially the picky ones, would attribute this to a shortage of talented developers, what if the time has come to rethink your interview process? What if recruiters and hiring managers put too much stock into bringing out the technical aspects of each candidate and don’t put enough emphasis on their soft skills?
A report by Robert Half shows that 86% of technology leaders say it’s challenging to find IT talent. Interviewing developers should be a rewarding experience, not a challenging one. If you don’t get caught up in asking specific questions and instead design a simple conversation to gauge a candidate’s way of thinking, it throws up a lot of good insight and makes it fun too.
Asking the right technical interview questions when recruiting developers is important but so is clear communication, good work ethic, and alignment with your organization’s goals.
Let us first see what kind of technical interview questions are well-suited to revealing the coding skills and knowledge of any developer, and then tackle the behavioral aspects of the candidate that sets them apart from the rest.
Recruit GREAT developers by asking the right questions
Here are some technical interview questions that you should ask potential software engineers when interviewing.
#1 Write an algorithm for the following
Minimum Stack - Design a stack that provides four functions - push(item), pop, peek, and minimum - all in constant time complexity. Then move on to coding the actual solution (a minimal sketch of one possible solution appears after this list).
Kth Largest Element in an Array - This is a standard problem with multiple solutions of good time complexity, of which O(N log K) is a common one and O(N + K log N) is lesser known. Both solutions are acceptable, not directly comparable to each other, and better than O(N log N), which amounts to sorting the array and fetching the Kth element.
Top View of a Binary Tree - Given a root node of the binary tree, return the set of all elements that will get wet if it rains on the tree. Nodes having any nodes directly above them will not get wet.
Internal implementation of a hashtable like a map/dictionary - A candidate needs to specify how key-value pairs are stored, how hashing is used, and how collisions are handled. A good developer not only knows how to use this concept but also how it works. If the developer also knows how the data structure scales as the number of records in the hashtable increases, that is a bonus.
Algorithms demonstrate a candidate’s ability to break down a complex problem into steps. Reasoning and pattern-recognition capabilities are further factors to look for when assessing a candidate. A good candidate can code the thought process behind the algorithm finalized during the discussion.
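As an illustration of the first prompt, here is one minimal way the minimum-stack question could be solved; this is a sketch of one acceptable approach, not the only one:

```python
class MinStack:
    """Stack with push, pop, peek, and minimum, all in O(1) time.

    An auxiliary stack tracks the minimum seen so far at each depth.
    """

    def __init__(self):
        self._items = []
        self._mins = []  # _mins[i] is the minimum of _items[:i+1]

    def push(self, item):
        self._items.append(item)
        self._mins.append(item if not self._mins else min(item, self._mins[-1]))

    def pop(self):
        self._mins.pop()
        return self._items.pop()

    def peek(self):
        return self._items[-1]

    def minimum(self):
        return self._mins[-1]


s = MinStack()
for v in (5, 2, 7):
    s.push(v)
print(s.minimum())  # 2
s.pop()             # removes 7
s.pop()             # removes 2
print(s.minimum())  # 5
```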
#2 Formulate solutions for the below low-level design (LLD) questions
What is LLD? In your own words, specify the different aspects covered in LLD.
Design a movie ticket booking application like BookMyShow. Ensure that your database schema is tailored for a theatre with multiple screens and takes care of booking, seat availability, seat arrangement, and seat locking. Your solution does not have to extend to the payment option. (A minimal schema sketch appears after this list.)
Design a basic social media application. Design database schema and APIs for a platform like Twitter with features for following a user, tweeting a post, seeing your tweet, and seeing a user's tweet.
Such questions do not have a right or wrong answer. They primarily serve to reveal a developer’s thought process and the way they approach a problem.
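To make the expectations concrete, here is a deliberately minimal sketch of a first-pass schema for the ticket-booking prompt. All table and column names are illustrative, and a real answer would go much further on locking, concurrency, and payments:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE screens  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE shows    (id INTEGER PRIMARY KEY,
                           screen_id INTEGER REFERENCES screens(id),
                           movie TEXT, starts_at TEXT);
    CREATE TABLE seats    (id INTEGER PRIMARY KEY,
                           screen_id INTEGER REFERENCES screens(id),
                           seat_row TEXT, seat_number INTEGER);
    -- One row per seat per show; 'locked_until' supports temporary seat locking
    CREATE TABLE bookings (show_id INTEGER REFERENCES shows(id),
                           seat_id INTEGER REFERENCES seats(id),
                           status TEXT CHECK (status IN ('locked', 'booked')),
                           locked_until TEXT,
                           PRIMARY KEY (show_id, seat_id));
""")
print("Schema created")
```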
#3 Formulate solutions for the below high-level design (HLD) questions
What do you understand by HLD? Can you specify the difference between LLD and HLD?
Design a social media application. In addition to designing a platform like Twitter with features for following a user, tweeting a post, seeing your tweet, and seeing a user's tweet, design a timeline. After designing a timeline where you can see tweets from the people you follow, scale it for a larger audience. If you still have time, try to scale it for a celebrity use case.
Design a train ticket booking application like IRCTC. Incorporate authentication, features to choose start and end stations, view available trains and seats between two stations, reserve seats from the start to the end station, and lock them till payment confirmation.
How will you design a basic relational database? The database should support tables, columns, basic field types like integer and text, foreign keys, and indexes. The way a developer approaches this question is important. A good developer designs a solution around storage and memory management.
Here’s a pro tip for you. LLD questions can be answered by both beginners and experienced developers, while HLD questions can mostly be expected only from senior developers. Choose your interview question set wisely, and ask questions relevant to your candidate’s experience.
#4 Have you ever worked with SQL? Write queries for a specific use case that requires multiple joins.
Example: Create a table with separate columns for student name, subject, and marks scored. Return student names and ranks of each student. The rank of a student depends on the total of marks in all subjects.
Not all developers would have experience working with SQL but some knowledge about how data is stored/structured is useful. Developers should be familiar with simple concepts like joins, retrieval queries, and the basics of DBMS.
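One possible answer to the example above, sketched with SQLite so it is runnable as-is; the table and data are illustrative, and window functions require SQLite 3.25+ (a self-join on the totals achieves the same on engines without them):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE scores (student_name TEXT, subject TEXT, marks INTEGER);
    INSERT INTO scores VALUES
        ('Asha', 'Maths', 90), ('Asha', 'Physics', 80),
        ('Ravi', 'Maths', 85), ('Ravi', 'Physics', 88);
""")

# Rank students by their total marks across all subjects
query = """
    SELECT student_name,
           RANK() OVER (ORDER BY total DESC) AS student_rank
    FROM (SELECT student_name, SUM(marks) AS total
          FROM scores
          GROUP BY student_name)
"""
for name, rank in conn.execute(query):
    print(name, rank)  # Ravi 1 (173 marks), Asha 2 (170 marks)
```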
#5 What do you think is wrong with this code?
Instead of asking developer candidates to write code on a piece of paper (which is outdated anyway), ask them to debug existing code. This is another way to assess their technical skills: place surreptitious errors in the code and evaluate their attention to detail.
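For instance, you might hand the candidate a short, hypothetical snippet like this one and ask what is wrong with it:

```python
def moving_average(values, window):
    """Return the average of each sliding window over values."""
    averages = []
    # Planted bug: the range stops one window too early, silently
    # dropping the last window (it should be len(values) - window + 1)
    for i in range(len(values) - window):
        chunk = values[i:i + window]
        averages.append(sum(chunk) / window)
    return averages

print(moving_average([1, 2, 3, 4], 2))  # prints [1.5, 2.5]; expected [1.5, 2.5, 3.5]
```

A candidate with good attention to detail should spot the off-by-one error in the loop bound without even running the code.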
Now that you know exactly what technical skills to look for and what questions to ask when interviewing developers, the time has come to assess the soft skills of these candidates. Part 2 of this blog throws light on the how and why of evaluating candidates based on their communication skills, work ethic, and alignment with the company’s goals.
In today's competitive talent market, attracting and retaining top performers is crucial for any organization's success. However, traditional hiring methods like relying solely on resumes and interviews may not always provide a comprehensive picture of a candidate's skills and potential. This is where pre-employment assessments come into play.
What is a Pre-Employment Assessment?
Pre-employment assessments are standardized tests and evaluations administered to candidates before they are hired. These assessments can help you objectively measure a candidate's knowledge, skills, abilities, and personality traits, allowing you to make data-driven hiring decisions.
By exploring and evaluating the best pre-employment assessment tools and tests available, you can:
Improve the accuracy and efficiency of your hiring process.
Identify top talent with the right skills and cultural fit.
Reduce the risk of bad hires.
Enhance the candidate experience by providing a clear and objective evaluation process.
This guide will provide you with valuable insights into the different types of pre-employment assessments available and highlight some of the best tools to help you optimize your hiring process for 2024.
Why pre-employment assessments are key in hiring
While resumes and interviews offer valuable insights, they can be subjective and susceptible to bias. Pre-employment assessments provide a standardized and objective way to evaluate candidates, offering several key benefits:
Improved decision-making:
By measuring specific skills and knowledge, assessments help you identify candidates who possess the qualifications necessary for the job.
Reduced bias:
Standardized assessments mitigate the risks of unconscious bias that can creep into traditional interview processes.
Increased efficiency:
Assessments can streamline the initial screening process, allowing you to focus on the most promising candidates.
Enhanced candidate experience:
When used effectively, assessments can provide candidates with a clear understanding of the required skills and a fair chance to showcase their abilities.
Types of pre-employment assessments
There are various types of pre-employment assessments available, each catering to different needs and objectives. Here's an overview of some common types:
1. Skill Assessments:
Technical Skills: These assessments evaluate specific technical skills and knowledge relevant to the job role, such as programming languages, software proficiency, or industry-specific expertise. HackerEarth offers a wide range of validated technical skill assessments covering various programming languages, frameworks, and technologies.
Soft Skills: These employment assessments measure non-technical skills like communication, problem-solving, teamwork, and critical thinking, crucial for success in any role.
2. Personality Assessments:
These employment assessments can provide insights into a candidate's personality traits, work style, and cultural fit within your organization.
3. Cognitive Ability Tests:
These tests measure a candidate's general mental abilities, such as reasoning, problem-solving, and learning potential.
4. Integrity Assessments:
These employment assessments aim to identify potential risks associated with a candidate's honesty, work ethic, and compliance with company policies.
By understanding the different types of assessments and their applications, you can choose the ones that best align with your specific hiring needs and ensure you hire the most qualified and suitable candidates for your organization.
Leading employment assessment tools and tests in 2024
Choosing the right pre-employment assessment tool depends on your specific needs and budget. Here's a curated list of some of the top pre-employment assessment tools and tests available in 2024, with brief overviews:
HackerEarth Assessments: A comprehensive platform offering a wide range of validated skill assessments in various programming languages, frameworks, and technologies. It also allows for the creation of custom assessments and integrates seamlessly with various recruitment platforms.
SHL: Provides a broad selection of assessments, including skill tests, personality assessments, and cognitive ability tests. They offer customizable solutions and cater to various industries.
Pymetrics: Utilizes gamified assessments to evaluate cognitive skills, personality traits, and cultural fit. They offer a data-driven approach and emphasize candidate experience.
Wonderlic:
Offers a variety of assessments, including the Wonderlic Personnel Test, which measures general cognitive ability. They also provide aptitude and personality assessments.
Harver:
An assessment platform focusing on candidate experience with video interviews, gamified assessments, and skills tests. They offer pre-built assessments and customization options.
Remember: This list is not exhaustive, and further research is crucial to identify the tool that aligns best with your specific needs and budget. Consider factors like the types of assessments offered, pricing models, integrations with your existing HR systems, and user experience when making your decision.
Choosing the right pre-employment assessment tool
Rather than relying on full individual tool reviews, focus on 2–3 key platforms. For each platform, explore:
Target audience: Who are their assessments best suited for (e.g., technical roles, specific industries)?
Types of assessments offered: Briefly list the available assessment categories (e.g., technical skills, soft skills, personality).
Key features: Highlight unique functionalities like gamification, custom assessment creation, or seamless integrations.
Effectiveness: Briefly mention the platform's approach to assessment validation and reliability.
User experience: Consider including user reviews or ratings where available.
Comparative analysis of assessment options
Rather than making a blanket comparison, focus on specific use cases:
Technical skills assessment:
Compare HackerEarth and Wonderlic based on their technical skill assessment options, focusing on the variety of languages/technologies covered and assessment formats.
Soft skills and personality assessment:
Compare SHL and Pymetrics based on their approaches to evaluating soft skills and personality traits, highlighting any unique features like gamification or data-driven insights.
Candidate experience:
Compare Harver and Wonderlic based on their focus on candidate experience, mentioning features like video interviews or gamified assessments.
Additional tips:
Visit the platforms' official websites for detailed features and pricing information.
Check reputable third-party review sites where users share their experiences with various tools.
Best practices for using pre-employment assessment tools
Integrating pre-employment assessments effectively requires careful planning and execution. Here are some best practices to follow:
Define your assessment goals:
Clearly identify what you aim to achieve with assessments. Are you targeting specific skills, personality traits, or cultural fit?
Choose the right assessments:
Select tools that align with your defined goals and the specific requirements of the open position.
Set clear expectations:
Communicate the purpose and format of the assessments to candidates in advance, ensuring transparency and building trust.
Integrate seamlessly:
Ensure your chosen assessment tool integrates smoothly with your existing HR systems and recruitment workflow.
Train your team:
Equip your hiring managers and HR team with the knowledge and skills to interpret assessment results effectively.
Interpreting assessment results accurately
Assessment results offer valuable data points, but interpreting them accurately is crucial for making informed hiring decisions. Here are some key considerations:
Use results as one data point:
Consider assessment results alongside other information, such as resumes, interviews, and references, for a holistic view of the candidate.
Understand score limitations:
Don't solely rely on raw scores. Understand the assessment's validity and reliability and the potential for cultural bias or individual test anxiety.
Look for patterns and trends:
Analyze results across different assessments and identify consistent patterns that align with your desired candidate profile.
Focus on potential, not guarantees:
Assessments indicate potential, not guarantees of success. Use them alongside other evaluation methods to make well-rounded hiring decisions.
Choosing the right pre-employment assessment tools
Selecting the most suitable pre-employment assessment tool requires careful consideration of your organization's specific needs. Here are some key factors to guide your decision:
Industry and role requirements:
Different industries and roles demand varying skill sets and qualities. Choose assessments that target the specific skills and knowledge relevant to your open positions.
Company culture and values:
Align your assessments with your company culture and values. For example, if collaboration is crucial, look for assessments that evaluate teamwork and communication skills.
Candidate experience:
Prioritize tools that provide a positive and smooth experience for candidates. This can enhance your employer brand and attract top talent.
Budget and accessibility considerations
Budget and accessibility are essential factors when choosing pre-employment assessments:
Budget:
Assessment tools come with varying pricing models (subscriptions, pay-per-use, etc.). Choose a tool that aligns with your budget and offers the functionalities you need.
Accessibility:
Ensure the chosen assessment is accessible to all candidates, considering factors like language options, disability accommodations, and internet access requirements.
Additional Tips:
Free trials and demos: Utilize free trials or demos offered by assessment platforms to experience their functionalities firsthand.
Consult with HR professionals: Seek guidance from HR professionals or recruitment specialists with expertise in pre-employment assessments.
Read user reviews and comparisons: Gain insights from other employers who use various assessment tools.
By carefully considering these factors, you can select the pre-employment assessment tool that best aligns with your organizational needs, budget, and commitment to an inclusive hiring process.
Remember, pre-employment assessments are valuable tools, but they should not be the sole factor in your hiring decisions. Use them alongside other evaluation methods and prioritize building a fair and inclusive hiring process that attracts and retains top talent.
Future trends in pre-employment assessments
The pre-employment assessment landscape is constantly evolving, with innovative technologies and practices emerging. Here are some potential future trends to watch:
Artificial intelligence (AI):
AI-powered assessments can analyze candidate responses, written work, and even resumes, using natural language processing to extract relevant insights and identify potential candidates.
Adaptive testing:
These assessments adjust the difficulty level of questions based on the candidate's performance, providing a more efficient and personalized evaluation.
Micro-assessments:
Short, focused assessments delivered through mobile devices can assess specific skills or knowledge on-the-go, streamlining the screening process.
Gamification:
Engaging and interactive game-based elements can make the assessment experience more engaging and assess skills in a realistic and dynamic way.
Conclusion
Pre-employment assessments, when used thoughtfully and ethically, can be a powerful tool to optimize your hiring process, identify top talent, and build a successful workforce for your organization. By understanding the different types of assessments available, exploring top-rated tools like HackerEarth, and staying informed about emerging trends, you can make informed decisions that enhance your ability to attract, evaluate, and hire the best candidates for the future.
Layoffs in the IT industry are becoming more widespread as companies fight to remain competitive in a fast-changing market, and many turn to layoffs as a cost-cutting measure. Last year, over 1,000 companies, including big tech giants and startups, laid off more than two lakh employees. But first, what are layoffs in the tech business, and how do they impact the industry?
Tech layoffs are the termination of employment for some employees by a technology company. It might happen for various reasons, including financial challenges, market conditions, firm reorganization, or the after-effects of a pandemic. While layoffs are not unique to the IT industry, they are becoming more common as companies look for methods to cut costs while remaining competitive.
The consequences of layoffs in technology may be catastrophic for employees who lose their jobs and the firms forced to make these difficult decisions. Layoffs can result in the loss of skill and expertise and a drop in employee morale and productivity. However, they may be required for businesses to stay afloat in a fast-changing market.
This article will examine the reasons for layoffs in the technology industry, their influence on the industry, and what may be done to reduce their negative impacts. We will also look at the various methods for tracking tech layoffs.
What are tech layoffs?
The term "tech layoff" describes the termination of employees by an organization in the technology industry. A company might do this as part of a restructuring during hard economic times.
In recent times, the tech industry has witnessed a wave of significant layoffs, affecting some of the world’s leading technology companies, including Amazon, Microsoft, Meta (formerly Facebook), Apple, Cisco, SAP, and Sony. These layoffs are a reflection of the broader economic challenges and market adjustments facing the sector, including factors like slowing revenue growth, global economic uncertainties, and the need to streamline operations for efficiency.
Each of these tech giants has announced job cuts for various reasons, though common themes include restructuring efforts to stay competitive and agile, responding to over-hiring during the pandemic when demand for tech services surged, and preparing for a potentially tough economic climate ahead. Despite their dominant positions in the market, these companies are not immune to the economic cycles and technological shifts that influence operational and strategic decisions, including workforce adjustments.
This trend of layoffs in the tech industry underscores the volatile nature of the tech sector, which is often at the mercy of rapid changes in technology, consumer preferences, and the global economy. It also highlights the importance of adaptability and resilience for companies and employees alike in navigating the uncertainties of the tech landscape.
Yes, the market is always uncertain, but why resort to tech layoffs?
Various factors cause tech layoffs, including company strategy changes, market shifts, or financial difficulties. Companies may lay off employees if they need help to generate revenue, shift their focus to new products or services, or automate certain jobs.
In addition, some common reasons could be:
Financial struggles
Currently, the state of the global market is uncertain due to economic recession, ongoing war, and other related phenomena. If a company is experiencing financial difficulties, pay cuts alone may not be enough; it may need to reduce its workforce to cut costs.
Changing market conditions
The tech industry is constantly evolving, and companies have to adjust their workforce to meet changing market conditions. For instance, companies are adopting a remote-work culture, which affects on-premises activity, and they may do away with some tech employees at the back end.
Restructuring
Companies may also lay off employees as part of a greater restructuring effort, such as spinning off a division or consolidating operations.
Automation
With the advancement in technology and automation, some jobs previously done by human labor may be replaced by machines, resulting in layoffs.
Mergers and acquisitions
When two companies merge, there is often overlap in their operations, leading to layoffs as the new company looks to streamline its workforce.
But it's worth noting that layoffs are not exclusive to the tech industry and can happen in any industry due to uncertainty in the market.
Will layoffs increase in 2024?
It is challenging to estimate the rise or fall of layoffs. The overall state of the economy, the health of certain industries, and the performance of individual companies will play a role in deciding the degree of layoffs in any given year.
That said, in the first 15 days of this year, 91 organizations laid off over 24,000 tech workers, and over 1,000 corporations cut more than 150,000 workers in 2022, according to an Economic Times article.
The COVID-19 pandemic caused a huge economic slowdown and forced several businesses to downsize their employees. However, some businesses rehired or expanded their personnel when the world began to recover.
So, given the current level of economic uncertainty, predicting how the situation will unfold is difficult.
What types of companies are prone to tech layoffs?
Tech layoffs can occur in organizations of all sizes and across various areas.
Following are some examples of companies that have experienced tech layoffs in the past:
Large tech firms
Companies such as IBM, Microsoft, Twitter, Better.com, Alibaba, and HP have all experienced layoffs in recent years as part of restructuring initiatives or cost-cutting measures.
Market scenarios are still settling after Elon Musk’s decision to lay off Twitter employees. Along with tech giants, some smaller companies and startups have also been affected by layoffs.
Startups
Because they frequently work with limited resources, startups may be forced to lay off staff if they cannot secure further funding or need to pivot due to a market downturn.
Small and medium-sized businesses
Small and medium-sized businesses face layoffs due to high competition or if the products/services they offer are no longer in demand.
Companies in certain industries
Some sectors of the technological industry, such as the semiconductor industry or automotive industry, may be more prone to layoffs than others.
Companies that lean on government funding
Companies that rely significantly on government contracts may face layoffs if the government cuts technology spending or contracts are not renewed.
How to track tech layoffs?
You can’t stop tech company layoffs, but you should keep track of them. HR professionals and recruiters can also lend a helping hand in these tough times by circulating “layoff lists” across social media sites like LinkedIn and Twitter to help people land jobs quicker. Firefish Software put together a master list of sources to find fresh talent during the layoff period.
Because not all layoffs are publicly disclosed, tracking tech industry layoffs can be challenging, and some may go undetected. There are several ways to keep track of tech industry layoffs:
Layoff trackers
Dedicated trackers such as Layoffs.fyi compile publicly reported layoffs in one place. In addition, they aid in identifying trends in layoffs within the tech industry: they can reveal which sectors are seeing the most layoffs and which companies are the most affected.
Companies can use layoff trackers as an early warning system and compare their performance to that of other companies in their field.
News articles
Because many news sites cover tech layoffs as they happen, keeping a watch on technology sector stories can provide insight into which organizations are laying off employees and how many individuals have been affected.
Social media
Organizations and employees frequently publish information about layoffs in tech on social media platforms; thus, monitoring companies' social media accounts or following key hashtags can provide real-time updates regarding layoffs.
Online forums and communities
There are online forums and communities dedicated to discussing tech industry news, and they can be an excellent source of layoff information.
Government reports
Government agencies such as the Bureau of Labor Statistics (BLS) publish data on layoffs and unemployment, which can provide a more comprehensive picture of the technology industry's status.
How do companies reduce tech layoffs?
Layoffs in tech are hard – for the employee who is losing their job, the recruiter or HR professional who is tasked with informing them, and the company itself. So, how can we aim to avoid layoffs? Here are some ways to minimize resorting to letting people go:
Salary reductions
Instead of laying off employees, businesses can lower the salaries or wages of all employees. It can be accomplished by instituting compensation cuts or salary freezes.
Implementing a hiring freeze
Businesses can halt employing new personnel to cut costs. It can be a short-term solution until the company's financial situation improves.
Cutting non-essential expenses
Businesses might search for ways to cut or remove non-essential expenses such as travel, training, and office costs.
Reducing working hours
Companies can reduce employee working hours to save money, such as implementing a four-day workweek or a shorter workday.
These options may not always be viable and may have their own problems, but before laying people off, a company owes it to its employees to consider every other alternative and formulate the best solution.
Tech layoffs to bleed into this year
While we do not know whether this trend will continue or subside during 2023, we do know one thing: we have to be prepared for a wave of layoffs that is yet to hit. As of last month, Layoffs.fyi had already tracked 170+ companies conducting 55,970 layoffs in 2023.
So recruiters, let’s join arms, distribute those layoff lists like there’s no tomorrow, and help all those in need of a job! :)
In today’s fast-paced world, recruiting talent has become increasingly complicated. Technological advancements, high workforce expectations and a highly competitive market have pushed recruitment agencies to adopt innovative strategies for recruiting various types of talent. This article aims to explore one such recruitment strategy – headhunting.
What is Headhunting in recruitment?
In headhunting, companies or recruitment agencies identify, engage, and hire highly skilled professionals to fill top positions in the respective companies. It differs from the traditional process, in which candidates looking for job opportunities approach companies or recruitment agencies. In headhunting, executive headhunters, as these recruiters are called, approach prospective candidates with the hiring company’s requirements and wait for them to respond. Executive headhunters generally look for passive candidates: those who work in crucial positions and are not on the lookout for new work opportunities. Moreover, executive headhunters focus on filling critical, senior-level positions indispensable to companies. Depending on the nature of the operation, headhunting is of three types, described later in this article. Before we move on to the types of headhunting, here is how the traditional recruitment process and headhunting differ.
How do headhunting and traditional recruitment differ from each other?
Headhunting is a type of recruitment process in which top-level managers and executives in similar positions are hired. Since these professionals are not on the lookout for jobs, headhunters have to thoroughly understand the hiring companies’ requirements and study the work profiles of potential candidates before creating a list.
In the traditional approach, there is a long list of candidates applying for jobs online and offline. Candidates approach recruiters for jobs. Apart from this primary difference, there are other factors that define the difference between these two schools of recruitment.
Candidate type - Headhunting: primarily passive candidates. Traditional recruitment: active job seekers.
Approach - Headhunting: focused on specific high-level roles. Traditional recruitment: broader, covering various levels.
Scope - Headhunting: proactive outreach. Traditional recruitment: reactive; candidates apply.
Cost - Headhunting: generally more expensive due to the expertise required. Traditional recruitment: typically lower costs.
Control - Headhunting: managed by headhunters. Traditional recruitment: managed internally by HR teams.
These parameters should help you better understand how headhunting differs from traditional recruitment methods.
Types of headhunting in recruitment
Direct headhunting: In direct recruitment, hiring teams reach out to potential candidates through personal communication. Companies conduct direct headhunting in-house, without outsourcing the process to hiring recruitment agencies. Very few businesses conduct this type of recruitment for top jobs as it involves extensive screening across networks outside the company’s expanse.
Indirect headhunting: This method involves recruiters getting in touch with prospective candidates through indirect modes of communication, such as email and phone calls. Indirect headhunting is less intrusive and allows candidates to respond at their convenience.
Third-party recruitment: Companies approach external recruitment agencies or executive headhunters to recruit highly skilled professionals for top positions. This method often leverages the agency’s extensive contact network and expertise in niche industries.
How does headhunting work?
Finding highly skilled professionals to fill critical positions can be tricky without a system for it. Expert executive headhunters employ recruitment software to conduct headhunting efficiently, as it facilitates a seamless recruitment process. Most such software is AI-powered and expedites tasks like candidate sourcing, interactions with prospective professionals, and upkeep of communication history, which makes executive search a little easier. Apart from using software to recruit executives, here are the various stages of finding high-calibre executives through headhunting.
Identifying the role
Once there is a vacancy for a top job, one of the top executives, such as the CEO, a director, or the head of the company, reaches out to the concerned personnel with their requirements. Depending on how large a company is, it may choose to headhunt with the help of an external recruiting agency or conduct the search in-house. Generally, the task is assigned to external recruitment agencies specializing in headhunting. Executive headhunters possess a database of highly qualified professionals who work in crucial positions at some of the best companies, which makes them the top choice of conglomerates looking to hire some of the best talent in the industry.
Defining the job
Once an executive headhunter or a recruiting agency is finalized, companies conduct meetings to discuss the nature of the role, how the company works, and the management hierarchy, among other important aspects of the job. Headhunters are expected to understand these points thoroughly and establish a clear understanding of the client’s expectations and goals.
Candidate identification and sourcing
Headhunters analyse and understand the requirements of their clients and begin creating a pool of suitable candidates from their database. The professionals are shortlisted after extensive research into job profiles, years of industry experience, professional networks, and online platforms.
Approaching candidates
Once the potential candidates have been identified and shortlisted, headhunters get in touch with them discreetly through various communication channels. As such candidates are already working in top-level positions at other companies, executive headhunters have to be low-key while doing so.
Assessment and Evaluation
In this next step, extensive screening and evaluation of candidates are conducted to determine their suitability for the position.
Interviews and negotiations
Compensation is a major topic of discussion among recruiters and prospective candidates. A lot of deliberation and negotiation goes on between the hiring organization and the selected executives which is facilitated by the headhunters.
Finalizing the hire
Things come to a close once the suitable candidates accept the job offer. On accepting the offer letter, headhunters help finalize the hiring process to ensure a smooth transition.
The steps listed above form the blueprint for a typical headhunting process. Headhunting has been crucial in helping companies hire the right people for crucial positions that come with great responsibility. However, all systems have a set of challenges no matter how perfect their working algorithm is. Here are a few challenges that talent acquisition agencies face while headhunting.
Common challenges in headhunting
Despite its advantages, headhunting also presents certain challenges:
Cost Implications: Engaging headhunters can be more expensive than traditional recruitment methods due to their specialized skills and services.
Time-Consuming Process: While headhunting can be efficient, finding the right candidate for senior positions may still take time due to thorough evaluation processes.
Market Competition: The competition for top talent is fierce; organizations must present compelling offers to attract passive candidates away from their current roles.
Although the above mentioned factors can pose challenges in the headhunting process, there are more upsides than there are downsides to it. Here is how headhunting has helped revolutionize the recruitment of high-profile candidates.
Advantages of Headhunting
Headhunting offers several advantages over traditional recruitment methods:
Access to Passive Candidates: By targeting individuals who are not actively seeking new employment, organisations can access a broader pool of highly skilled professionals.
Confidentiality: The discreet nature of headhunting protects both candidates’ current employment situations and the hiring organisation’s strategic interests.
Customized Search: Headhunters tailor their search based on the specific needs of the organization, ensuring a better fit between candidates and company culture.
Industry Expertise: Many headhunters specialise in particular sectors, providing valuable insights into market dynamics and candidate qualifications.
Conclusion
Although headhunting can be costly and time-consuming, it is one of the most effective ways of finding good candidates for top jobs. Executive headhunters face several challenges in maintaining discreetness while getting in touch with prospective candidates. As organizations navigate increasingly competitive markets, understanding the nuances of headhunting becomes vital for effective recruitment strategies. To keep up with technological advancements, it is better to optimise your hiring process by employing online recruitment software like HackerEarth, which enables companies to conduct multiple interviews and evaluation tests online, thus improving candidate experience. By collaborating with skilled headhunters who possess industry expertise and insights into market trends, companies can enhance their chances of securing high-calibre professionals who drive success in their respective fields.