It sort of makes sense; it's the calculator debate all over again... we want to gauge how well a candidate actually understands what's happening, but it's also unrealistic not to let them use the tools they'd be using on the job.
After talking to a large number of companies about their recent hiring experiences, we found their options were pretty limited: either rely solely on in-person interviews, or change how interviews are done.
We decided to build a platform that lets companies design coding interviews with AI in the mix. We provide two types of interviews:
1. A web-based assessment with an LLM chat on the left and a code editor on the right; the candidate can interact with the LLM, explain their approach, and get guidance while coding if needed.
2. A "work-trial"-based interview where the candidate has a set amount of time to complete the tasks that the interviewer has created. The candidate is allowed to use any resources at their disposal, and at the end of the interview has five minutes to upload the final code and their LLM chat export for review.
The company decides what tasks and questions to add to each format, matching what they're looking for. The interviewer can then use their discretion to judge whether the candidate compromised things like security, code style, and maintainability in order to ship, as well as how well they vetted the AI's responses and asked for clarifications and modifications.
Basically, the idea is to mimic how the candidate would actually perform on real-world tasks with the real-world tools they'd be using on the job. We'd also closely monitor each company's tasks and workflow to ensure they're not taking advantage of candidates to get free work done, and that the assessments are based on tasks their team has already completed.
Super interested to hear your thoughts on this kind of interviewing approach.
Link: https://fallom.dev
opless•1h ago
Honestly it's disrespectful.