
Author
Gargi

Gargi is a former product marketer with a love for growth loops and developer communities. Now, they decode hiring challenges with the same curiosity they brought to GTM plans.

Insights & Stories by Gargi

Gargi's content is built for talent leaders who want results—fast. Actionable, relevant, and packed with real-world learnings from the frontlines of growth.

How these hackathon winners apply Machine Learning to minimize rash driving

Hackathons have become the go-to method for developers and companies to innovate in tech and build great products in a short span of time. With tens of thousands of developers participating in hackathons across the globe every month, it’s a great way to collaborate and work on solving real-life issues using tech.

“Along with being stimulating and productive, hackathons are fun,” says Team Vikings, who won the first prize (a brand-new Harley-Davidson bike!) in the recently concluded GO-HACK hackathon. The team built Rashalytics, a comprehensive platform for analysing and minimising rash driving. Now, they have big plans to take this hack live for the public.

Read on to know more about their amazing idea and how they built the platform.

What is Rashalytics?

Rashalytics is a system that promises to mitigate the problem of rash driving by intelligently incentivising or penalising drivers based on their driving style. It is designed to reduce accidents, which have increased as hyperlocal on-demand delivery pushes riders to deliver products at breakneck speed.

The system extracts rich metrics, such as sharp acceleration, hard braking, and sharp turns, from the driver's Android phone; this data is used to train the machine learning models.
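As a rough illustration (not the team's code), here is a minimal Python sketch of how a metric like hard braking might be flagged from raw accelerometer samples; the threshold is an assumption:

```python
# Illustrative sketch (not the team's code): flagging hard-braking events from
# accelerometer samples. The threshold is an assumption, not a measured value.

HARD_BRAKE_THRESHOLD = -3.0  # m/s^2 along the direction of travel

def detect_hard_braking(samples, threshold=HARD_BRAKE_THRESHOLD):
    """samples: iterable of (timestamp, forward_acceleration) pairs."""
    events = []
    for timestamp, accel in samples:
        # Strong deceleration along the travel axis suggests hard braking.
        if accel <= threshold:
            events.append({"time": timestamp, "acceleration": accel})
    return events

readings = [(0, -0.5), (1, -1.2), (2, -4.8), (3, -0.9)]
print(detect_hard_braking(readings))  # -> one event at time 2
```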

Technologies/platforms/languages
  • Node.js: To create the API server and the mock sensor data generator
  • Kafka: To build the data pipeline
  • Apache Spark: To process the real-time data stream and generate metrics to measure driving quality
  • ReactJS: To create the dashboard web app
  • Google Roads & Maps APIs: To get the traffic and ETA data
Functionality


The system primarily consists of four parts:
  1. The Android app: Simulated by the team for the demo, this app aggregates sensor data locally and sends it in chunks to the API server via an HTTP endpoint.
  2. API server: This validates incoming data against the schema and, if valid, puts it in a Kafka queue.
  3. Engine: Built with Apache Spark, this aggregates the sensor data into metrics such as sharp acceleration, hard braking, and sharp turns. These metrics, in turn, are used to generate a dynamic driving quality score for the driver; this score forms the basis of much of the analytics and functionality the system provides (see the sketch after this list).
  4. Dashboard: Written in ReactJS, the dashboard provides a clean, intuitive interface for making proactive decisions and running analytics using the provided APIs.
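To make the engine stage concrete, here is a minimal PySpark sketch under stated assumptions: the topic name ("sensor-events"), the event schema, and the -3.0 m/s² hard-braking threshold are all illustrative, and the team's exact setup may differ:

```python
# Minimal sketch of the engine stage. Assumptions: topic name "sensor-events",
# a simplified event schema, and a -3.0 m/s^2 hard-braking threshold.
# Running it requires the spark-sql-kafka connector package.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("rashalytics-engine").getOrCreate()

schema = StructType([
    StructField("driver_id", StringType()),
    StructField("event_time", TimestampType()),
    StructField("forward_accel", DoubleType()),
])

# Read raw sensor events that the API server pushed into Kafka.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "sensor-events")
       .load())

events = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
             .select("e.*"))

# Count hard-braking events per driver in five-minute windows; counts like
# these would feed the dynamic driving quality score.
metrics = (events
           .withWatermark("event_time", "10 minutes")
           .filter(F.col("forward_accel") <= -3.0)
           .groupBy(F.window("event_time", "5 minutes"), F.col("driver_id"))
           .count())

query = metrics.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```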
Here’s the flow diagram showing how the whole system works:





This system allowed the team to create:
  • A dynamic profile and dashboard for each rider describing their driving style, which affects their rating.
  • An actionable, real-time rash-driving reporting system that allows the authorities and hub in-charges to react before it's too late.
  • A dashboard usable by both fleet managers and the traffic police control board to visualise data such as incident distribution by time, which shows at what time of day a driver is most likely to drive unsafely (see the sketch after this list).
  • A modular system to which new data sources, metrics, and models can be added, so that third-party vendors can be easily on-boarded onto the platform.
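The "incident distribution by time" view, for instance, boils down to a simple aggregation. Here is a minimal sketch using pandas with an invented input format; the real dashboard consumes the platform's APIs:

```python
# Minimal sketch: bucketing rash-driving incidents by hour of day.
# The input format is invented; the real dashboard reads from the platform APIs.
import pandas as pd

incidents = pd.DataFrame({
    "driver_id": ["d1", "d1", "d2", "d2"],
    "timestamp": pd.to_datetime([
        "2017-06-01 08:15", "2017-06-01 18:40",
        "2017-06-01 18:05", "2017-06-02 18:50",
    ]),
})

# Count incidents per hour of day to show when unsafe driving peaks.
by_hour = incidents.groupby(incidents["timestamp"].dt.hour).size()
print(by_hour)  # here, hour 18 dominates
```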




Challenges

Here are some of the challenges that the team faced while building this application:
  1. Setting up the entire system architecture by developing the different components in isolation and then combining them to work seamlessly
  2. Deciding the thresholds for different metrics after which the driving will be considered rash
  3. Creating a linear predictor for the driving quality score vs time with only one data point
  4. Creating a synthetic feature as generating the score itself is challenging enough
What’s Next?

Project creators Shivendra Soni, Rishabh Bhardwaj and Ankit Silaich have great plans in store for their project. Here are some of their ideas:
  1. Create an SDK for easy data collection and integration with different apps, and make it possible for third-party vendors to utilise this data
  2. Improve the driving score model to include even more parameters and make it more real-world oriented
  3. Create a social profile which lets the users share their driving score
  4. Enable enterprise-grade plug-and-play integration support

Hackmotion, an app that helps you relieve stress

In every hackathon, we witness people working around the clock to develop an idea that's unprecedented in all respects.

Hackmotion, an app developed during the hackathon conducted by WACHacks in association with HackerEarth, is a striking example of this.

Read on to know more about this app.

What is Hackmotion?

Hackmotion is an app that helps users deal with stress. It allows them to have a friendly conversation with their phone. Along with this, it tracks users’ emotions and conversations in a journal format.

This app identifies the emotions that students commonly experience. It is designed to help them improve their social well-being, helping them be expressive through journaling.

Technologies/platforms/languages
  • Android Studio: To develop and test the app
  • Microsoft Face API: To analyze and detect faces in the picture taken
  • Microsoft Emotion API: To determine what emotion the person in the picture is feeling
  • Clarifai API: To process the image if there is no face
  • Java: For internal logic
  • XML: For layouts
Functionality

When the app is opened, users are greeted with a friendly UI where they get an option to take a picture. Users can tap the camera icon to take the picture. The phone processes this image using the Microsoft Face API. Depending on the result, one of two APIs is then called:
  • If there is a face in the image, the Microsoft Emotion API is called. This API analyzes the face and determines what emotion the person is feeling. Once the emotion is recognised, the phone starts talking to the user according to his or her mood.
  • If there isn't any face in the image, the Clarifai API is called. This API then determines what significant object is in the picture. For example, if the user takes a picture of leftover food, Clarifai will first recognise the food as an object and then determine its type. To improve accuracy, the user is asked questions related to the picture, and the conversation starts once the correct emotion is identified (see the sketch below).
The app talks to the user in a way that makes them feel like they are talking to a real person. All the conversations and the objects detected by Clarifai are recorded in the journaling section of the app, which the user can refer to later. There is also a statistics section where the user can see the percentage of each emotion they have felt since downloading the app.
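The app itself is written in Java, but the branching logic above can be illustrated with a short Python sketch. The endpoints, region, and model alias below are assumptions; consult each service's documentation for the exact API surface:

```python
# Hypothetical sketch of Hackmotion's branching logic: detect a face first,
# then call the Emotion API if one is found, otherwise fall back to Clarifai.
# Endpoints, region, and model alias are illustrative assumptions.
import base64
import requests

FACE_URL = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"           # assumed region
EMOTION_URL = "https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize"  # assumed region
CLARIFAI_URL = "https://api.clarifai.com/v2/models/general-v1.3/outputs"           # assumed model alias

def analyze_picture(image_bytes, face_key, emotion_key, clarifai_key):
    """Return an emotion reading if a face is found, otherwise object tags."""
    headers = {"Ocp-Apim-Subscription-Key": face_key,
               "Content-Type": "application/octet-stream"}
    faces = requests.post(FACE_URL, headers=headers, data=image_bytes).json()

    if faces:  # a face was detected, so ask the Emotion API how the person feels
        headers["Ocp-Apim-Subscription-Key"] = emotion_key
        emotions = requests.post(EMOTION_URL, headers=headers, data=image_bytes).json()
        return {"kind": "emotion", "result": emotions}

    # No face in the picture: ask Clarifai what significant object it contains.
    payload = {"inputs": [{"data": {"image": {
        "base64": base64.b64encode(image_bytes).decode()}}}]}
    tags = requests.post(CLARIFAI_URL,
                         headers={"Authorization": "Key " + clarifai_key},
                         json=payload).json()
    return {"kind": "object", "result": tags}
```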



Challenges

Here are some of the challenges that the team faced while building this application:
  • Making the Microsoft Face and Emotion APIs work in harmony
  • Creating the algorithm that analyzes the face
  • Making sure that Clarifai API is called when no face is detected
What’s Next?

Project creators Brian Cherin and Kaushik Prakash plan to:
  • Improve the efficiency and accuracy of the app
  • Display the frequency of each emotion as a bar or line graph
  • Improve the conversational flow of the chat/journal portion by displaying the specific time and any notes the user has added
If you love this app and are inspired by it, check out our list of hackathons for you to participate in. Register, code, and create awesome solutions to real-life problems, and stand a chance to win awesome prizes while you're at it!

Project H: How two hackers are using Virtual Reality to transform healthcare

At HackerEarth, we regularly host hackathons and often discover innovative hacks that can revolutionize industries.

Presenting Project H—one of the best hacks from the Digital India Hackathon, organized by ACM India in association with HackerEarth.

Read on to explore this real-life VR application.

What is Project H?

Designed to address healthcare challenges, Project H delivers a virtual reality experience of handling real-world objects. It's built to offer medical students realistic surgery simulation. With appropriate hardware, the app offers immersive interaction.

This simulation aids medical students in practicing surgeries and makes large-scale medical education more affordable and accessible.

Using virtual reality to replicate real-world experiences, Project H introduces a new way to approach practical problems.

Technologies / Platforms / Hardware / Languages

  • Haptic technology
  • Unity
  • Arduino Pro Micro
  • Blender
  • Vibrators
  • Servo motor
  • Dual-axis joystick
  • C#
  • C

Components and Functionalities

The two primary components of Project H are the haptic glove and the simulation software.

Haptic Glove

  • Features actuators and vibrators (controlled by Arduino Pro Micro) for kinesthetic and tactile feedback
  • Offers 180-unit finger-positioning resolution and a five-motor actuator mechanism
  • Includes a dual-axis joystick controller for menu navigation

Simulation Software

The software provides a 360° view of 3D-modeled organs. Many models were built using Blender, while others were sourced from open-source collections. It simulates real interactions, such as holding a beating heart with realistic feedback.

Application

Built with Unity and programmed in C#, the software communicates with Arduino using serial communication.

Users can perform virtual surgeries like making incisions or injecting fluids. Thanks to the glove's actuator mechanism, users feel like they’re physically holding surgical instruments.
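Project H's software is written in C# inside Unity, but the serial link to the glove can be illustrated with a Python sketch using pyserial. The port, baud rate, and one-byte opcodes below are invented for illustration; the 0-179 position range simply mirrors the glove's 180-unit finger-positioning resolution:

```python
# Illustrative sketch of the host-to-glove serial link. Project H's software is
# C# inside Unity; this Python/pyserial version only shows the idea.
# The port, baud rate, and one-byte opcodes are invented for illustration.
import serial  # pyserial

PORT = "/dev/ttyACM0"  # a typical port for an Arduino Pro Micro on Linux
BAUD = 115200

def send_finger_position(link, finger, position):
    """Command one of the five servo actuators: finger 0-4, position 0-179."""
    link.write(bytes([0x01, finger, position]))  # 0x01 = hypothetical "set servo" opcode

def send_vibration(link, intensity):
    """Drive the vibration motors for tactile feedback, intensity 0-255."""
    link.write(bytes([0x02, intensity]))         # 0x02 = hypothetical "vibrate" opcode

with serial.Serial(PORT, BAUD, timeout=1) as link:
    send_finger_position(link, finger=1, position=90)  # curl the index finger halfway
    send_vibration(link, intensity=180)                # e.g., pulse for a heartbeat
```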

Challenges

  • Creating a fast, low-latency communication protocol between glove and software
  • Accurately designing human organ 3D models
  • Improving comfort and structure of the initial glove prototypes

What’s Next?

Creators Gagan G and Pratik R have ambitious goals:

  • Implement piezoelectric actuators for enhanced tactile feedback
  • Expand the library of organs and tools for virtual surgery
  • Improve actuator precision with poly and programmable magnets

The team aims to broaden this technology into other industries like e-commerce and gaming.

Excited about VR? Register for the UnitedByHCL hackathon.

Happy Mixing Reality!