By: Donald Lee, Vaibhav Sharma, Michelle Kim, and Maureen Luo
Date: September 2023
It's job search season (again)! One of the most nerve-wracking parts of applying for a job is the behavioural interview, yet there are few tools that help interviewees prepare for it effectively. Most existing services cannot emulate real-life interview scenarios well, since the practice questions they can ask are limited without knowledge of the interviewee's experience. In addition, interviewers rarely provide constructive feedback, leaving interviewees confused and without a plan for improvement.

HonkHonkHire is an AI-powered interview practice application that analyzes your unique experiences (via LinkedIn/resume/portfolio) and generates interview questions and feedback tailored to you. By evaluating facial expressions and transcribing your responses, HonkHonkHire produces useful metrics that give actionable feedback and help users gain confidence for their job interviews.
This project was ambitious for our team, as it involved learning and applying many unfamiliar technologies in a short period of time. We had also never worked together as a team before, so it took some time to familiarize ourselves with each other's strengths, weaknesses, and communication styles.
Our team is proud that we were able to deliver the majority of the MVP features, and to have created something we would all love to use in our future job searches. Learning new technologies such as Cohere's API and OCR Space, and employing them in our application, was also very rewarding. For the curious, a rough sketch of how the two services fit together follows below.
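This is only a minimal sketch, not our production code: it assumes Node 18+ (for the global fetch and FormData), a resume hosted at a public URL, and API keys supplied through environment variables.

```js
// Sketch: extract resume text with OCR Space, then ask Cohere's
// generate endpoint for tailored behavioural interview questions.
// RESUME_URL and both API keys are placeholders.
const OCR_SPACE_KEY = process.env.OCR_SPACE_KEY;
const COHERE_KEY = process.env.COHERE_KEY;

async function resumeToQuestions(resumeUrl) {
  // 1) OCR Space: POST the hosted resume image/PDF URL, get plain text back.
  const form = new FormData();
  form.append('apikey', OCR_SPACE_KEY);
  form.append('url', resumeUrl);
  const ocrRes = await fetch('https://api.ocr.space/parse/image', {
    method: 'POST',
    body: form,
  });
  const ocr = await ocrRes.json();
  const resumeText = ocr.ParsedResults?.[0]?.ParsedText ?? '';

  // 2) Cohere: prompt the generate endpoint with the extracted text.
  const cohereRes = await fetch('https://api.cohere.ai/v1/generate', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${COHERE_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'command',
      prompt: `Write five behavioural interview questions tailored to this resume:\n${resumeText}`,
      max_tokens: 300,
    }),
  });
  const cohere = await cohereRes.json();
  return cohere.generations?.[0]?.text;
}
```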
Each one of our team members challenged themselves while creating this application. Our designer ventured into the world of programming and learned some HTML/CSS while coding a webpage for the very first time (yay)! Our front-end developer challenged herself by focusing on fundamentals such as Vanilla JS instead of reaching for more familiar frameworks. Another one of our developers learned new APIs by reading their documentation, and transferred data between the front end and back end (and vice versa) using Node.js. Our primary back-end developer challenged himself to explore facial/emotional expression detection and behavioural motion detection for the first time; a sketch of what that can look like follows below.
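To illustrate the expression-detection piece, here is a browser-side sketch. We've used the open-source face-api.js library purely as an example (the library choice and the /models weight path are our assumptions here, not a description of our exact stack):

```js
// Browser-side sketch: score facial expressions from a webcam <video>
// element every 500 ms using face-api.js.
import * as faceapi from 'face-api.js';

async function watchExpressions(videoEl, onScores) {
  // Load detector and expression model weights ('/models' is a hypothetical path).
  await faceapi.nets.tinyFaceDetector.loadFromUri('/models');
  await faceapi.nets.faceExpressionNet.loadFromUri('/models');

  setInterval(async () => {
    const detection = await faceapi
      .detectSingleFace(videoEl, new faceapi.TinyFaceDetectorOptions())
      .withFaceExpressions();
    if (detection) {
      // detection.expressions maps labels (happy, neutral, surprised, ...)
      // to probabilities; scores like these can feed the feedback metrics.
      onScores(detection.expressions);
    }
  }, 500);
}
```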
We would love to further enhance the user experience by offering more detailed metrics such as talking pace, response length, and tone of voice. With these metrics, we hope to let users track their improvement over their professional journey and encourage them to keep sharpening their behavioural interview skills. As a small taste, a hypothetical talking-pace helper is sketched below.
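This is not an implemented feature, just an illustration of how simple such a metric could be: words per minute computed from the transcript we already produce and the recorded answer duration.

```js
// Hypothetical helper for the planned "talking pace" metric.
function talkingPace(transcript, durationSeconds) {
  const words = transcript.trim().split(/\s+/).filter(Boolean).length;
  return Math.round((words / durationSeconds) * 60); // words per minute
}

// A 90-second answer containing 210 words comes out to 140 wpm.
const sample = 'word '.repeat(210);
console.log(talkingPace(sample, 90)); // -> 140
```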