1. Get students to work on a project more complex than usual (relative to previous cohorts). Let them use whatever tools they want and let them know that AI is fine.
2. Make them come in for an in-person exam where they answer questions about the why of the decisions they made during the project.
And that's it? I believe that if you can a) produce a fully working project meeting all functional requirements, and b) argue about its design with expertise, you pass. Do it with AI or not.
Are we interested in supporting people who can design something and build it, or just in having students follow the whims of professors who are unhappy that studies look different now than theirs did?
If you let people use AI they are still accountable for the code written under their name. If they can’t look at the code and explain what it’s doing, that’s not demonstrating understanding.
Exactly the same as in professional environments: you can use LLMs for your code, but you've got to stand behind whatever you submit. You can of course use something like Cursor and let it run free, not understanding a thing about the result, or you can make changes step by step with AI and try to understand the why.
I believe that if teachers relaxed a bit and adapted their grading systems (while also raising the expected learning outcomes), we would see students who are trained to understand the pitfalls of LLMs and how to get the most out of them.
But yes, we currently allow students to use AI provided their solution works and they can explain it. We just discourage using AI to generate the full solution to each problem.
Surely just asking the candidate to lean back a bit during the video interview and then having a regular talk, without them reaching for the keyboard, is enough? I guess they could have some in-between layer listening to the conversation and posting tips, but even then it would be obvious that someone is reading from a sheet.
So you'd only be going off how they speak, which could be filtering out people who are just a bit awkward.
Before Google, AFAIK, it was ad hoc, among good programmers. I only ever saw people talking with people about what they'd worked on, and about the company.
(And I heard that Microsoft sometimes did massive-ego interviews early on, but fortunately most smart people didn't mimic that.)
Keep in mind, though, that was before programming was a big-money career. So you had people who were really enthusiastic, and people for whom it was just a decent office job. People who wanted to make lots of money went into medicine, law, or finance.
As soon as software became a big-money career, and word got out about how Google (founded by people with no prior industry experience) interviewed... we got undergrads prepping for interviews. Which was a new thing, and my impression was that the only people who would need to prep for interviews either weren't good or were some kind of scammer. But then eventually those students, who had no awareness of anything else, came to think that this was normal, and now so many companies just blindly do it.
If we could just make some other profession the easier path to big money, maybe only people who are genuinely enthusiastic would be interviewing. And we could interview like adults, instead of like teenagers pledging a frat.
I think tech is and was an exception here.
I'd advise anyone to read the available financial reports on any company they intend to join, except if it's an internship. You'll spend hours interviewing and years dealing with these people; you may as well take an hour or two to figure out whether the company is sinking, or a scam, in the first place.
Really, company reviews are all that matters, and even those have limited value, since your life is determined by your manager.
The best you can do is suss out how your interviewers are faring: are they happy? Are they stressed? Everything else has so much noise it's worse than worthless.
For developers who work on products, it's also very important to get a sense of whether the product of the team you'd be joining is a core part of the business or speculative (i.e. stable vs. likely to see layoffs), and how successful the product is in the marketplace (teams whose products are failing are also likely to be victims of layoffs).
And if your team is far from the money, what often matters much much more is how much political capital your skip level manager has and to what extent it can be deployed when the company needs to re-org or cut. Shoot, this can matter even if you're close to the money (if you're joining a team that's in the critical path of the profit center vs a capex moonshot project funded by said profit center).
This is one thing I really like about sales engineering. Sales orgs carry (relatively) very low-BS politically.
But no "prep" like months of LeetCode grinding, memorizing the "design interview" recital, practicing all the tips for disingenuous "behavioral" passing, etc.
I was in a small conference room with the two co-founders, and one of them hadn't seen my resume, and was trying to read it on his phone while we were talking.
Bam. I whipped out printed copies for both of them, from my interview folio.
Yes. A while ago a company contacted me to interview, and after the first "casual" round they told me their standard process was going full leetcode in the second round, and I was advised to prepare for that if I was interested in going further.
While that's the only company that was so upfront about it, most accept that leetcode questions are dumb (they need prep even from a working engineer) and still base the core of their technical interviews on them.
I think you're viewing the "good old days" of interviewing through the lens of nostalgia. Old school interviewing from decades ago or even more recently was significantly more similar to pledging to a frat than modern interviews.
> people who are genuinely enthusiastic
This seems absurdly difficult to measure well and gameable in its own way.
The flip side of "ad hoc" interviewing, as you put it, was an enormous amount of capriciousness. Being personable could count for a lot (being personable in front of programmers is definitely a different flavor than being personable in front of frat bros, but it's just a different flavor is all). Pressure interviews were fairly common, where you would intentionally put the candidate in a stressful situation. Interview rubrics could be nonexistent. For all the cognitive biases present in today's interview process, older interviews were rife with many more.
If you try to systematize the interview process and make it more rigorous you inevitably make a system that is amenable to pre-interview preparation. If you forgo that you end up with a wildly capricious interview system.
Of course, you rarely have absolutes. Even the most rigorous modern interview systems often still have capriciousness in them, and there was still some measure of rigor to old interview styles.
But let's not forget all the pain and problems of the old style of interviews.
Yeah, no, not at all. Interviewing in the 90s was just a cool chat between hackers. What interesting stuff have you built, let's talk about it. None of the confrontational leetcode nonsense of later years.
I still refuse to participate in that nonsense, so I'll never make people go through such interviews. I've only hired two awesome people this year, so less than a drop in the bucket, but I'll continue to do what I can to keep some sanity in the interviewing in this industry.
The number of times I've seen a "do you want to have a beer with them?" test in lieu of a simple programming exam is horrifying. (And it showed in the level of talent they hired.)
Fortunately, most of those have been left by the wayside, roadkill of history.
Because that is really the alternative if we don’t have rigorous, systematic technical interviews: cognitive bias and gut-feel decisions. Both of which are antithetical to high performing environments.
> This seems absurdly difficult to measure well and gameable in its own way.
True, and it is gamed currently (some prep books tell you to feign enthusiasm).
But let's whimsically suppose that, in the hypothetical where software development is no longer the go-to easy-money career, the people gaming the system would go to some other field instead, leaving you with only the people who really want to do this job.
They only gave it up years later when it became clear even to them it wasn't benefiting them.
Which sounds like a classic misconception of people with no experience outside of a fancy university echo chamber (many students and professors).
Much like Google's "how much do you remember from first-year CS 101 classes" interviews, which, coincidentally (among my theories), looked like an attempt to build a metric that matches... (surprise!) a student with a high GPA at a fancy university.
Which is not very objective, nor very relevant. Even before the entire field shifted its basic education to help job-seekers game this company's metric.
Consulting positions also don't have much leetcode BS. These have always focused much more on practical experience. They also pay less than Staff+ roles at FAANGs.
I have worked at a company that had a casual, experience-based, conversational interview. The engineering there was atrocious, and I left as soon as I could.
If you can talk your way into a position, that says a lot about the level of talent there. Top talent wants to work at a place that has a transparent, merit-based performance bar, where you can't smooth-talk your way into a job.
At first I was quite concerned; then I realized that in nearly all cases where I'd spotted usage, a pattern stood out.
Of the folks I spotted, all spoke far too clearly and linearly when it came to problem solving. No self-doubt, no suggestion of different approaches, no appearance of thought, just a clear A->B solution. Then, because they often didn't ask any requirements questions beyond what I had initially stated, the solution would be inadequate.
The opinion I came to is that even in the best pre-AI-era interviews I conducted, most engineers contemplated ideas, changed their minds, and asked clarifying questions. Folks mindlessly using AI don't do this; they just treat me as the prompt input and repeat it back. Ultimately I won't know whether they were using AI or not, but either way they fail to meet my bar.
Sure, some more clever folks will mix or limit their LLM usage and get past me, but oh well.
Maybe he just memorized the solution, I don’t know.
Would you fail that guy?
In those cases where I’ve seen that level of performance, there have been (one or more of):
- Audio/video glitches.
- The candidate pauses frequently after each question, says nothing, then shows sudden clarity and fluency on the problem.
- The candidate often suggests multiple specific ideas/points in response to each question I ask.
- I can often see their eyes reading back and forth (note: if you use AI in an interview, maybe don't use a 4K webcam).
- Way too much specificity when I didn't ask for it. For example, the topic of profiling a Go application came up, and the candidate suggested we use go tool pprof along with a few specific arguments that weren't relevant; later I found the exact same example commands verbatim in the documentation.
In all, the impression I come away with in those types of interviews is that they performed “too well” in an uncanny way.
I worked for AWS for a long time and did a couple hundred interviews there. The best candidates I interviewed were distinctly different in how they solved problems and how they communicated, in ways that reading from an LLM response can't resemble.
I don't disagree at all. I find it slightly funny that, in my experience interviewing for FAANG and YC startups, the signs you mentioned would be seen as "red flags". And that's not just my assumption; when I asked for feedback on the interview, I have multiple times received feedback along the lines of "candidate showed hesitation and indecision with their choice of solution".
Look around you. 15 years ago we didn't have smartphones, and now kids are so addicted to them that they're giving themselves anxiety and depression. Not just kids, but kids have it the worst. You know it's gonna be even worse with AI.
Most people in my engineering program didn’t deserve their engineering degrees. Where do you think all these people go? Most of them get engineering jobs.
But in case you're serious, there's an old saying: if everywhere you go smells like shit, maybe it's time to check your shoes.
1. Embellish your resume with AI (or have it outright lie and create fictional work history) to get past the AI screening bots.
2. Have a voice-to-text AI running to cheat your way past the HR screen and first round interview.
3. Show up for in-person interview with all the other liars and unscrupulous cheats.
No matter who gets hired, chances are the company loses and honest people lose. Lame system.
Do a first phone screening to agree on the details of the job and the salary, but the actual knowledge testing should be in person.
The reality is that AI just blew up something that was a pile of garbage, and the result is exactly what you'd expect.
We all treat interviewing in this industry as a human resources problem, when in reality it's an engineering problem.
The people with the skills to assess technical competency are even scarcer than actual engineers (because they'd have to be engineers who also have the people skills for interviewing), and those people are usually far too busy to be bothered with what is (again) perceived as a human resources problem.
Then the rest is just random HR personnel pretending that they know what they're talking about. AI just exposed (even more) how incompetent they are.
I recently interviewed someone who had been a senior engineer on the Space Shuttle but had managed a call center after that. Whether this person could still write code was a question we couldn't figure out, so we had to pass. (We can't prove it, but we think we ended up with someone who outsourced the work elsewhere; at least that person could code if needed, as proven by the interview.)
A "senior engineer" could be a project manager who never wrote code.
I remember this because it's one of the few 'no's I've given where it wasn't evident the person would be bad at the job. Normally the no-hire signal is that the person would obviously be bad.
I really hope most interviewers have at least the barebones skills to be able to discern AI-using interviewees, like what the author claims to have. I'm trying to get hired at the junior level, and the thought of competing with people who have no qualms with effectively cheating in real time is pretty scary. I'm human, I will inevitably not know something or make minor missteps - someone with an AI or a quick-witted friend by their side can spit out perfect, fully-rounded, flawless, HR-optimized stories and replies with a satisfying conclusion for the behavioral questions, and basically always-correct, optimal solutions for the technical questions.
What I’m looking for is strong thinking and problem solving. Sometimes someone uses AI to sort of parallelize their brain, and I’m impressed. Others show me their aptitude without any advanced tools at all.
What I can't stand is the lazy AI candidates. People who I know can code, asking Claude to write a function that does something completely trivial and then saying literally nothing in the 30 seconds that it "thinks". They're just not trying. They're not leveraging anything; they're outsourcing. It's just so sad to see how quick people are to be lazy; to me it's like ordering food delivery from the place under your building.
The better option is to just ask the questions in person to prevent cheating.
This isn't a new problem, either. There's a reason certifications and universities don't allow cheating on tests: being able to copy-paste an answer doesn't demonstrate that you learned anything.
A format I was fond of when I was interviewing more was asking candidates to pick a topic — any topic, from their favourite data structure to their favourite game or recipe — and explain it to me. I gave the absolute best programmer I ever interviewed a “don’t hire” recommendation, because I could barely keep up with her explanation of something I was actually familiar with, even though I repeatedly asked her to approach it as if explaining it to a layperson.
Besides, it's too vague of a question. If I were asked it, I would ask so many clarifying questions that I would not ever be considered for the position. Does "fill" mean just the human/passenger spaces, or all voids in the plane? (Cargo holds, equipment bays, fuel and other tanks, etc). Do I have access to any external documentation about the plane, or can I only derive the answer using experimentation? Can my proposed method give a number that's close to the real answer (if someone were to go and physically fill the plane), or does it have to be exactly spot on with no compromises?
I personally think the best interview format is having the candidate do a take-home project and give a presentation on it. It feels like the most comprehensive yet minimal way to assess a candidate on a variety of metrics: the project tests coding ability and real system design rather than hypothetical design, the presentation tests communication skills, and the interviewer's follow-up questions test depth of understanding. It would be difficult to cheat this with AI, since you would need a solid understanding of the whole project for the presentation.
1. Strict honor code that is actually enforced with zero tolerance.
2. Exams done in person with screening for electronic devices.
3. Recognize that generative AI is going to be ambient and ubiquitous, and rework course content from scratch to focus on the aspects that only humans still do well.
Honestly, the only ways around it for me are:
1. Have in-person interviews on a whiteboard. Pseudocode is okay.
2. Find questions that trip up LLMs. I'm lucky because my specific domain is one that LLMs are really bad at, since we deal with hierarchical and temporal data. The problems are easy for a human, but the multi-dimensional complexity trips up every LLM I've tried. (A toy sketch of the data shape follows this list.)
3. Prepare edge cases that require the candidate to reconsider their initial approach. LLMs are pretty obvious when they throw things out wholesale.
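To give a flavor of what "hierarchical and temporal" means here, a made-up toy sketch in Go (the scenario and names are invented for illustration, not an actual interview question):

    // Toy example: a reporting tree where each edge is only valid
    // for a time range. "Who was ana's manager at time t?" is easy
    // for a human to trace by hand.
    package main

    import "fmt"

    // Edge: Employee reported to Manager from Start (inclusive)
    // to End (exclusive).
    type Edge struct {
        Employee, Manager string
        Start, End        int64
    }

    // ManagerAt returns the manager of employee at time t, or ""
    // if there was none.
    func ManagerAt(edges []Edge, employee string, t int64) string {
        for _, e := range edges {
            if e.Employee == employee && e.Start <= t && t < e.End {
                return e.Manager
            }
        }
        return ""
    }

    func main() {
        edges := []Edge{
            {"ana", "bob", 0, 100},
            {"ana", "carol", 100, 200}, // re-org at t=100
        }
        fmt.Println(ManagerAt(edges, "ana", 50))  // bob
        fmt.Println(ManagerAt(edges, "ana", 150)) // carol
    }

The toy itself wouldn't trip up a model, of course; the trouble starts once several time-scoped, nested constraints have to be satisfied at once.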
The reality is that no correlation has been found between interview success and success at work, especially for software engineers; AI tools didn't change that, and neither did remote interviews.
On the other hand, encouraging employees to adopt "AI" in their workflows, while at the same time banning "AI" on interviews, seems a bit hypocritical - at least from my perspective. One might argue that this is about dishonesty, and yes, I agree. However, AI-centric companies apparently include AI usage in employee KPIs, so I'm not sure how much they value the raw/non-augmented skill-set of their individual workers.
Of course, in all other cases, not disclosing AI usage is quite a dick move.
It is a horrific drag on the team to have the wrong engineer in a seat.
If we can't suss out who is cheating and who is legitimate, then the only answer is that we as a field have to move toward "hire fast, fire fast."
Right now, we generally fire slow. But we can't have the wrong engineer in a seat for six months while they go through a PIP and a performance cycle, waiting for a yearly layoff. Management and HR need to get comfortable with firing people in 3 weeks as opposed to 6 months. You need more frequent one-off decisions based on an individual's contributions and potential.
If you can’t fix the interview process, you need more PIP culture.
There's a lot of shitty code made by LLMs, even today. So maybe we should lean in, and get people to critique generated code with the interviewer. Besides, being able to talk through, review, and discuss code is more important than the initial creation.
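For instance, here's the sort of deliberately flawed snippet you could hand a candidate (a made-up Go sketch; the function names, flaws, and file name are all invented for illustration, not taken from anyone's real screening question):

    // Toy "LLM-flavored" snippet for a candidate to critique.
    // It compiles and mostly works, but hides a few classic flaws.
    package main

    import (
        "fmt"
        "os"
    )

    // Average returns the mean of xs.
    // Flaw 1: panics with a divide-by-zero when xs is empty.
    // Flaw 2: integer division truncates, so Average([]int{1, 2}) is 1.
    func Average(xs []int) int {
        sum := 0
        for _, x := range xs {
            sum += x
        }
        return sum / len(xs)
    }

    // ReadConfig loads a config file.
    // Flaw 3: the error from os.ReadFile is discarded, so a missing
    // file silently yields an empty config.
    func ReadConfig(path string) string {
        data, _ := os.ReadFile(path)
        return string(data)
    }

    func main() {
        fmt.Println(Average([]int{1, 2}))       // prints 1, not 1.5
        fmt.Println(ReadConfig("missing.conf")) // prints "" with no error
    }

A candidate who spots the truncating division, the empty-slice panic, and the swallowed error, and can talk through the fixes and trade-offs, has shown you far more than one who can type a prompt.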
Typical interview questions are:
1. Very commonly repeated across the internet
2. Studied to the point of having perfect solutions written for almost any permutation of them
3. Very short and self-contained, not having to interact with greater systems and usually being solvable in a few dozen lines of code
4. Of limited difficulty (since the candidate is put on the spot and can't really think about it much, you can only make it so hard)
All of that lends them to being practically the perfect LLM use case. I would expect a modern LLM to vastly outperform me in almost any interview question. Maybe that changes for non-juniors who advance far enough to have niche specialist knowledge, but if we're talking about the generic Leetcode-style stuff, I have no doubts that an LLM would do perfectly fine compared to me.