I'm not sure when an employer should really care about whether their cashier can add/subtract correctly vs just use the output from the machine. But in the education setting where you're making a claim that students will learn X you should be testing X.
And yet here we are . . .
Similar to graphing calculators vs basic scientific calculators... if you're trying to demonstrate you understand an equation, having the calculator do that for you doesn't show you understand what it is doing.
Whereas in a job, you're probably never going to need to rebalance a B-Tree in any practical sense. For that matter, in my own experience, most actual development rarely involves people optimizing their RDBMS usage in any meaningful way, for better or worse.
It's the difference between learning and comprehension and delivering value. Someone has to understand how the machine works to fix it when it breaks. And there are jobs at every level. Most of them aren't that constrained in practice though.
That isn't entirely true and brings up an important nuance. I understood algebra and calculus extremely well, but I routinely got half marks for fudging a calculation or some such, because multiplication has a foundation of rote knowledge: you need to memorize your times tables. To this day (at 37) I still use algebra to do basic multiplication in my head (7*6 = 7*5+7 = 7/2*10+7 = 35+7 = 42).
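(If that reads oddly, here's the same decomposition spelled out in code; the function name is just illustrative.)

```python
# The mental-math trick above, written out: rebuild n*6 from the easier
# "times 5" fact, and get n*5 by halving n*10.
def times_six(n):
    return n * 10 // 2 + n  # n*5 + n

assert times_six(7) == 42  # 35 + 7
```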
Sure, using a graphing calculator to solve the problem isn't demonstrating understanding, but 90s kids were simply asking for a basic calculator with two registers (display and M) and basic math and "sci" (sin, cos, sqrt, etc.) operators.
Preventing the use of basic calculators does nothing to demonstrate knowledge of math; if anything, it hinders it.
I'm not sure why you're still talking about this. A basic non-graphing calculator (a 90s "scientific calculator") is not capable of doing all the work for you. All it did was the basic things: add, mul, sub, sin, cos, etc. I am referring to one of these[1].
> It's not far from how common core, aka new math actually tries to teach it. It actually shows you understand the mechanics
I understand the mechanics of mathematics but still fared more poorly than I should have because I messed up a simple op like multiplication more often than I should have. My point is that had I had access to a basic calculator I would have scored significantly better in school. I went from Bs to As (80%, not sure what that is in GPA) in university by mere virtue of being able to use a simple calculator.
Again, not being able to use a [basic] calculator tests a signal that isn't actually important. Not being able to use Google/AI/whatever in your interview similarly tests an unimportant signal. The most important hire signal for a SDE is actually how they operate in a team and how they communicate with stakeholders, which these coding tests don't do.
[1]: https://upload.wikimedia.org/wikipedia/commons/4/4f/FX-77.JP...
I prefer a paired interview scenario with actual questions about your approach to working through problems. As an interviewee I do horribly with leetcode questions: there's no opportunity for clarifying questions, and some systems even penalize you for looking away from the screen. It's unnerving and I refuse to do them any more.
I also don't mind the "here's a task/challenge that should take a half hour or so" format, where you simply create a solution, commit it to GitHub, etc.
As an interviewer I prefer the walkthrough interviews as well... I usually don't get into pedantic exercises or language/platform nuances unless the applicant claims "expert" in more than one or two subjects.
* Objective and clear grading criteria.
* Unlimited supply of questions.
* Easily time bound to 30 min or 1 hr.
* Legal - i.e., no worries about disparate impact.
* Programming language/framework agnostic.
Big tech software is successful and runs at scale.
I've got anecdotal experience in both worlds and no. Big tech software isn't faster (what you have is way more compute resources usually), and the claim about "less buggy" gives me goosebumps.
All the software I use. Netflix works perfectly every time. HBO Max is garbage. Amazon's website and app are pretty good, although the actual goods sold are trash. Costco is exactly the other way around.
Facebook's open source software does not have great code quality. In some projects that have always been a huge mess, they are now adding claude.md files to guide their beloved "AI". They never added files like that for humans before.
I think Facebook software is a lost cause, where it doesn't matter whether you do the weekly rewrite with an LLM or with kLOC-driven humans.
Meta’s profit per employee is over $250,000, higher than Qualcomm. There is no Meta competitor in any of their verticals that has a larger customer base.
It seems to me that your definition of “software quality” and “lost cause” is factually wrong based on the metrics of success that matter.
And in any event, it is an engineer’s tunnel vision fallacy to believe that software quality is the most important factor to stakeholders. People will prefer buggy/slow software that solves their problem over fast/stable software that fails to solve their problem.
That’s a pretty low bar for a software company.
So, the interview can now be 2 leetcode hards in 20 min. Earlier, it was typing solution code from rote memory. Now it is furious vibe-coding copy-pasta from one browser window to another.
More seriously, what will the new questions look like? In the age of LLMs, how does one measure ability objectively? Is it with extreme underspecification of the problem?
The value of an employee who says “I don’t know how to do that” or “I’ll need to ask my coworkers for help” versus one that says “I am sure I can figure out just about anything by googling” is night and day, and I think the same is true with AI.
Half of the battle is going to be knowing what to ask for.
Lastly I’d like to point out that it makes general sense to test people on the real tools they’ll be using to get their work done. E.g., you wouldn’t get much value testing a bus driver on using a manual transmission if all your buses are automatic. Most corporate leaders are expecting and even demanding that their employees use AI tools to get their work done.
> Most corporate leaders are expecting and even demanding that their employees use AI tools to get their work done.
Imagine for a second that I am an aspiring monopolist. Wouldn't a great way to achieve this be to make people believe that I am their agent, when I am really their adversary? Would I really offer them, in good faith, a tool that they could use to compete against me? Or would I try to get them to accept a trojan horse that destroys their profits from within? Once I have siphoned off enough profit to significantly damage the business, I can come in, borrow money to buy the company I basically destroyed, and sell off the parts to an ignorant buyer who doesn't realize how badly I have wounded this once-good business, or I just write off the loss for tax purposes.
I do peer coding with people at work and we have copilot on. We'll discuss what we're trying to accomplish and make little comments out loud along the lines of "sure, almost" or "not quite, copilot" when it generates something. It's obvious whether someone is working in a way where they know what they want the tool to do.
As to interviews, I'd be happier if whatever tool they're using actually had working code sense for the APIs involved... so many of the tools I've had to use in interviews just had broken LSP results. I've also had issues when a given question requires specific domain knowledge of a third-party library as part of the question.
The only way to fight this from the employer side is to embrace these tools and change your evaluation criteria.
We have candidates build a _very_ simple fullstack app. Think something like a TODO app, but only like 2 core functions. This is basic CRUD 101 stuff.
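Roughly this scale of exercise, as a hypothetical sketch (the framework and endpoints here are illustrative, not our actual prompt):

```python
# Hypothetical sketch of the scale of task: an in-memory TODO API with just
# two operations (create and list). Flask is used purely for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)
todos = []  # in-memory store; a real exercise might swap in a database

@app.post("/todos")
def create_todo():
    item = {"id": len(todos) + 1, "text": request.get_json()["text"], "done": False}
    todos.append(item)
    return jsonify(item), 201

@app.get("/todos")
def list_todos():
    return jsonify(todos)
```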
I’ve seen a boatload of candidates use AI only to flame out after the first round of prompting. They literally get stuck and can’t move forward.
The good candidates clearly know their fundamentals. They’re making intentional decisions, telling the AI scoped actions to take, and reviewing pretty much everything the AI generates. They reject code they don’t like. They explain why code is good or bad. They debug easily and quickly.
LLMs are very, very capable, but context still matters. They can't build a proper understanding of the non-technical components. They can't solve things in the simplest way possible. They can't decide to tell the interviewer, "if performance were a concern I'd do X, but it's a time-bound interview so I'm going to do Y. Happy to do X if you need it."
But honestly, I'd rather spend that time figuring out how to use LLMs to interview people better (for example, I already had an LLM write a collaborative web editor with a built-in code runner, so I don't need to license coderpad). I could see updating my prompt to have the coding agent generate a text box for entering prompts during the interview. Either way, I still expect candidates to be able to explain what a hash table is in their own words.
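For concreteness, this is roughly the level of mechanics I'd want a candidate to be able to walk through, shown here as a toy sketch (not a production structure): hash the key, mod into a bucket, handle collisions.

```python
# Toy hash table: hash the key, pick a bucket, resolve collisions by chaining.
class ToyHashTable:
    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def set(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key (possibly a collision): chain it

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)
```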
joshstrange•9h ago
I want to simulate, as closely as possible, the real environment you'll be coding in if you come and work here.
My "rules" on AI/LLMs at work is that you can use whatever tools you want (much like SO before) but any code you commit, you are responsible for (also unchanged from before LLMs). I never want to hear "the LLM wrote that" when asked about how something works.
If you can explain, modify, build on top of the code that an LLM spits out then fine by me. We don't require LLM usage (no token quotas or silly things like that) nor do we disallow LLMs. At the end of the day they are all tools.
I've always found leet coding, whiteboarding, memorizing algorithms, etc. to be a silly form of hazing and a bad indicator of how well the employee will perform if you hire them. In the same way, I still think my college was stupid for making us write, on paper, a C program with all the correct headers/syntax/etc. for an exam, or for how I got a 90/100 on my final SQL exam because I didn't end my otherwise perfect queries with a semicolon.
joshstrange•9h ago
We've had almost every candidate (even some we _didn't_ hire) thank us for the interview process and how we went out of our way to make it stress-free and representative of the job. I'm not interested in tricking people or trying to show off "look how smart we are and how hard our test was"... puke. I want to work with solid performers who don't play games, why would I start that relationship off by playing a bunch of stupid games (silly interview tests/process)?
And here's the thing, even with our somewhat easier/less-stress testing process, we still got high quality hires and have had very few cases where we have later had to let someone go due to not being able to do the job. This isn't a "hire fast, fire fast" situation.
jghn•9h ago
I was thinking about this recently in a conversation about whether or not the candidate should have a full screen share so that interviewers can see what they're looking up, etc. I realized I'd reached the point where I now trust my fellow interviewers less than I do the candidate on these bits. Personally I think it's nice to see how people are leveraging tools, but too often I see interviewers ding candidates anyway on the specifics.
What I've found across multiple companies is that when coding exercises are "open book" like this, other interviewers *are* actively judging people based on what they're googling, and these days LLMing. If you're going to allow Google but then fall back on "I can't believe they had to look *that* up!", that's not how it works.