Build an AI-Powered Interview Practice Tool
For when you have specific questions or points of feedback for an interview.
I’ve seen several AI-powered interview tools shared over the last few weeks. Interview practice seems like another good use for AI, but I wonder if it’s another instance where AI’s default bland, cookie-cutter language won’t be specific enough to be helpful. In a world where every interviewer is looking for different things, a generic app might get you comfortable with basic questions like “Tell me about your weaknesses” or “How well do you work in teams?”, but how can it prepare you for the specifics?
So I decided to create my own, with just a few questions, to test out the idea. I built it around the role of a product manager as a sample, and tried to include specific feedback that someone interviewing for that role would find useful.
At its core, the interaction provides feedback and a score for each question. For example, when the AI “interviewer” asks “Describe a product you have managed. What was your role and the outcome?”, the application is guided to check for:
The clarity of the response.
A description of the responder’s roles and responsibilities.
The communication of specific, measurable outcomes.
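To make criteria like these concrete, here is a minimal sketch of how the per-question feedback prompt could be assembled. The function name and structure are my own illustration, not part of the AI-MicroApps framework; the question and criteria come from the demo above.

```python
# Hypothetical sketch: folding one question's feedback criteria into a
# prompt string. The function name and shape are illustrative only.

QUESTION = "Describe a product you have managed. What was your role and the outcome?"

CRITERIA = [
    "Clarity of the response",
    "A description of the responder's roles and responsibilities",
    "Communication of specific, measurable outcomes",
]

def build_feedback_prompt(question: str, answer: str, criteria: list[str]) -> str:
    """Assemble a user_prompt that tells the AI which question was asked
    and which criteria to grade the answer against."""
    criteria_lines = "\n".join(f"- {c}" for c in criteria)
    return (
        f'The user was asked: "{question}"\n'
        f"Their answer was:\n{answer}\n\n"
        "Provide feedback on the answer against these criteria:\n"
        f"{criteria_lines}"
    )

print(build_feedback_prompt(QUESTION, "I led the launch of ...", CRITERIA))
```

Keeping the criteria in a plain list like this makes it easy to reuse them in a scoring rubric later, so the feedback and the score stay aligned.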
And then at the end I ask the AI to prioritize three areas to work on based on all the questions and feedback in the conversation.
Steps to Build Your Interview Practice Tool:
Check out the quickstart to build your first “hello world” app: https://jswope00.github.io/AI-MicroApps-Docs/quickstart/
Edit your version of the “app_interview_pm.py” file.
Modify the SYSTEM_PROMPT for your own interview. My prompt for this demo PM interview is “You play the role of a mock interviewer. The user will answer interview questions and you'll provide feedback based on specific criteria. Please align your feedback to the criteria.”
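In the file, that would look something like the snippet below (the exact surrounding code in `app_interview_pm.py` may differ; only the prompt text itself is from the demo):

```python
# The demo's system prompt, as it might appear in app_interview_pm.py.
SYSTEM_PROMPT = (
    "You play the role of a mock interviewer. The user will answer "
    "interview questions and you'll provide feedback based on specific "
    "criteria. Please align your feedback to the criteria."
)
```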
For each question you want to ask:
Create a phase
Add a text_area field for the question response.
In the user_prompt, tell the AI what question the user is answering and your guidance around what feedback to provide.
If you are scoring, your rubric should correspond to your feedback.
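Putting those per-question steps together, a single phase might be sketched like this. Note that the key names (`fields`, `text_area`, `user_prompt`, `scored_phase`, `rubric`) are assumptions modeled loosely on the framework's vocabulary, so check the quickstart docs for the actual schema:

```python
# Hypothetical sketch of one question phase. Key and field names are
# assumptions, not guaranteed to match the AI-MicroApps schema exactly.
PHASE_QUESTION_1 = {
    "name": "question_1",
    # A text_area field collects the user's answer to the question.
    "fields": {
        "answer": {
            "type": "text_area",
            "label": "Describe a product you have managed. "
                     "What was your role and the outcome?",
        }
    },
    # The user_prompt tells the AI what was asked and what feedback to give.
    "user_prompt": (
        "The user was asked to describe a product they managed, their role, "
        "and the outcome. Their answer: {answer}\n"
        "Give feedback on the clarity of the response, the description of "
        "their roles and responsibilities, and whether they communicated "
        "specific, measurable outcomes."
    ),
    # The rubric mirrors the feedback criteria so the score stays aligned.
    "scored_phase": True,
    "rubric": (
        "1 point for a clear response, 1 point for describing roles and "
        "responsibilities, 1 point for specific, measurable outcomes."
    ),
}
```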
Since the final phase requires no input (the AI is simply summarizing), you can add a markdown field that just asks the user to continue.
Here is the user_prompt I used for the final summary: “Based on the conversation and the feedback you've provided, please now prioritize three key areas that I should work on to improve my interview skill.”
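That final, input-free phase might look something like this sketch: a markdown field simply asks the user to continue, and the user_prompt (quoted verbatim from the demo) asks the AI to prioritize. As above, the key names are illustrative assumptions, not the framework's documented schema:

```python
# Hypothetical sketch of the final summary phase. The AI summarizes the
# whole conversation, so no input field is needed; a markdown field just
# prompts the user to continue. Key names are illustrative assumptions.
PHASE_SUMMARY = {
    "name": "summary",
    "fields": {
        "continue_note": {
            "type": "markdown",
            "body": "Click continue to get your prioritized feedback.",
        }
    },
    "user_prompt": (
        "Based on the conversation and the feedback you've provided, "
        "please now prioritize three key areas that I should work on "
        "to improve my interview skill."
    ),
    "scored_phase": False,  # nothing to grade in the summary
}
```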
Test your app. Once you’re happy with it, deploy it!